linux-api.vger.kernel.org archive mirror
* [PATCH 0/6] mm: introduce secretmemfd system call to create "secret" memory areas
From: Mike Rapoport @ 2020-07-20  9:24 UTC
  To: linux-kernel
  Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Mike Rapoport, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

This is the third version of the "secret" mappings implementation backed by
a file descriptor.

The file descriptor is created using a dedicated secretmemfd system call.
The desired protection mode for the memory is configured using the flags
parameter of the system call. The mmap() of a file descriptor created with
secretmemfd() will create a "secret" memory mapping. The pages in that
mapping will be marked as not present in the direct map and will have the
desired protection bits set in the user page table. For instance, the
current implementation allows uncached mappings.
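
For example, creating and mapping an uncached area from userspace could
look roughly like this (there is no libc wrapper yet, so the raw syscall
is used; 440 is the x86-64 syscall number assigned in patch 4, and error
handling is omitted):

	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#define __NR_secretmemfd	440	/* x86-64, from this series */
	#define SECRETMEM_UNCACHED	0x2

	int fd = syscall(__NR_secretmemfd, SECRETMEM_UNCACHED);
	/* the mapping size is set with ftruncate(), like for memfd */
	ftruncate(fd, MAP_SIZE);
	void *ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);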

Although Linux userspace mappings are normally protected from other users,
such secret mappings are useful for environments where a hostile tenant
tries to trick the kernel into giving them access to other tenants'
mappings.

Additionally, the secret mappings may be used as a means to protect guest
memory in a virtual machine host.

To demonstrate the usage of secret memory we've created a userspace library
[1] that does two things: first, it acts as a preloader for openssl and
redirects all the OPENSSL_malloc calls to secret memory, so any secret keys
get automatically protected this way; second, it exposes the API to users
who need it. We anticipate that a lot of the use cases will look like the
openssl one: many toolkits that deal with secret keys already have special
handling for that memory to try to give it greater protection, so this
would simply be pluggable into the toolkits without any need to modify user
applications, as in the sketch below.
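
As a rough illustration, such a preloader could hook OpenSSL's memory
functions and back them with a secretmem arena. This is only a sketch
under stated assumptions (the CRYPTO_set_mem_functions() hook from
OpenSSL >= 1.1.0 and the x86-64 syscall number from patch 4); the bump
allocator is deliberately naive, error handling and bounds checks are
omitted, and all names are illustrative:

	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <openssl/crypto.h>

	#define __NR_secretmemfd	440	/* x86-64, from this series */
	#define SECRETMEM_EXCLUSIVE	0x1
	#define ARENA_SIZE		(1UL << 20)

	static char *arena;
	static size_t arena_off;

	static void *sec_malloc(size_t num, const char *file, int line)
	{
		void *p = arena + arena_off;

		arena_off += (num + 15) & ~15UL;	/* 16-byte align */
		return p;
	}

	static void *sec_realloc(void *p, size_t num, const char *file,
				 int line)
	{
		/* sketch only: does not copy old data, leaks the old block */
		return sec_malloc(num, file, line);
	}

	static void sec_free(void *p, const char *file, int line)
	{
		/* a bump allocator never frees; sketch only */
	}

	__attribute__((constructor))
	static void sec_init(void)
	{
		int fd = syscall(__NR_secretmemfd, SECRETMEM_EXCLUSIVE);

		ftruncate(fd, ARENA_SIZE);
		arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
			     MAP_SHARED, fd, 0);
		CRYPTO_set_mem_functions(sec_malloc, sec_realloc, sec_free);
	}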

I hesitated between adding new flags to memfd_create() and adding a new
system call, and decided on a new system call after I started looking into
updating the man pages. There would have been two completely independent
descriptions and I think it would have been very confusing.

Hiding secret memory mappings behind an anonymous file allows (ab)use of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.

The anonymous file may also be used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
ABIs in the future.

As fragmentation of the direct map was one of the major concerns raised
during the previous postings, I've added an amortizing cache of PMD-size
pages to each file descriptor, as well as the ability to reserve large
chunks of physical memory at boot time and then use that memory as an
allocation pool for the secret memory areas.

In addition, I've tried to find numbers that show the benefit of using
larger pages in the direct map, but I couldn't find anything, so I've run a
couple of benchmarks from the phoronix-test-suite on my laptop (i7-8650U
with 32G of RAM).

I've tested three variants: the default, with 28G of the physical memory
covered with 1G pages; 1G pages disabled using "nogbpages" on the kernel
command line; and, finally, the entire direct map forced to 4K pages using
a simple patch to arch/x86/mm/init.c.
I've made runs of the benchmarks with both SSD and tmpfs.

Surprisingly, the results do not show a huge advantage for large pages.
For instance, here are the results for a kernel build with 'make -j8', in
seconds:

                        |  1G    |  2M    |  4K
------------------------+--------+--------+---------
ssd, mitigations=on	| 308.75 | 317.37 | 314.9 
ssd, mitigations=off	| 305.25 | 295.32 | 304.92 
ram, mitigations=on	| 301.58 | 322.49 | 306.54 
ram, mitigations=off	| 299.32 | 288.44 | 310.65

All the results I have are available at [2].
If anybody is interested in plain text, please let me know.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/rppt/secret-memory-preloader.git/
[2] https://docs.google.com/spreadsheets/d/1tdD-cu8e93vnfGsTFxZ5YdaEfs2E1GELlvWNOGkJV2U/edit?usp=sharing

Mike Rapoport (6):
  mm: add definition of PMD_PAGE_ORDER
  mmap: make mlock_future_check() global
  mm: introduce secretmemfd system call to create "secret" memory areas
  arch, mm: wire up secretmemfd system call where relevant
  mm: secretmem: use PMD-size pages to amortize direct map fragmentation
  mm: secretmem: add ability to reserve memory at boot

 arch/arm64/include/asm/unistd32.h      |   2 +
 arch/arm64/include/uapi/asm/unistd.h   |   1 +
 arch/riscv/include/asm/unistd.h        |   1 +
 arch/x86/entry/syscalls/syscall_32.tbl |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl |   1 +
 fs/dax.c                               |  10 +-
 include/linux/pgtable.h                |   3 +
 include/linux/syscalls.h               |   1 +
 include/uapi/asm-generic/unistd.h      |   7 +-
 include/uapi/linux/magic.h             |   1 +
 include/uapi/linux/secretmem.h         |   9 +
 mm/Kconfig                             |   4 +
 mm/Makefile                            |   1 +
 mm/internal.h                          |   3 +
 mm/mmap.c                              |   5 +-
 mm/secretmem.c                         | 450 +++++++++++++++++++++++++
 16 files changed, 491 insertions(+), 9 deletions(-)
 create mode 100644 include/uapi/linux/secretmem.h
 create mode 100644 mm/secretmem.c


base-commit: f932d58abc38c898d7d3fe635ecb2b821a256f54
-- 
2.26.2



* [PATCH 1/6] mm: add definition of PMD_PAGE_ORDER
From: Mike Rapoport @ 2020-07-20  9:24 UTC
  To: linux-kernel
  Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Mike Rapoport, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

The definition of PMD_PAGE_ORDER, denoting the number of base pages in a
second-level leaf page, is already used by DAX and may be handy in other
cases as well.

Several architectures already define PMD_ORDER as the size of a second-level
page table, so to avoid conflicts with those definitions use the name
PMD_PAGE_ORDER and update DAX accordingly.
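
For example, on x86-64 with 4K base pages PMD_SHIFT is 21 and PAGE_SHIFT
is 12, so

	PMD_PAGE_ORDER = PMD_SHIFT - PAGE_SHIFT = 21 - 12 = 9

i.e. a PMD-size leaf page spans 2^9 = 512 base pages (2M).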

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 fs/dax.c                | 10 +++++-----
 include/linux/pgtable.h |  3 +++
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 11b16729b86f..b91d8c8dda45 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -50,7 +50,7 @@ static inline unsigned int pe_order(enum page_entry_size pe_size)
 #define PG_PMD_NR	(PMD_SIZE >> PAGE_SHIFT)
 
 /* The order of a PMD entry */
-#define PMD_ORDER	(PMD_SHIFT - PAGE_SHIFT)
+#define PMD_PAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)
 
 static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];
 
@@ -98,7 +98,7 @@ static bool dax_is_locked(void *entry)
 static unsigned int dax_entry_order(void *entry)
 {
 	if (xa_to_value(entry) & DAX_PMD)
-		return PMD_ORDER;
+		return PMD_PAGE_ORDER;
 	return 0;
 }
 
@@ -1456,7 +1456,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct address_space *mapping = vma->vm_file->f_mapping;
-	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_ORDER);
+	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_PAGE_ORDER);
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool sync;
@@ -1515,7 +1515,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * entry is already in the array, for instance), it will return
 	 * VM_FAULT_FALLBACK.
 	 */
-	entry = grab_mapping_entry(&xas, mapping, PMD_ORDER);
+	entry = grab_mapping_entry(&xas, mapping, PMD_PAGE_ORDER);
 	if (xa_is_internal(entry)) {
 		result = xa_to_internal(entry);
 		goto fallback;
@@ -1681,7 +1681,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
 	if (order == 0)
 		ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
 #ifdef CONFIG_FS_DAX_PMD
-	else if (order == PMD_ORDER)
+	else if (order == PMD_PAGE_ORDER)
 		ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
 #endif
 	else
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 56c1e8eb7bb0..79f8443609e7 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -28,6 +28,9 @@
 #define USER_PGTABLES_CEILING	0UL
 #endif
 
+/* Number of base pages in a second level leaf page */
+#define PMD_PAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)
+
 /*
  * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
  *
-- 
2.26.2



* [PATCH 2/6] mmap: make mlock_future_check() global
From: Mike Rapoport @ 2020-07-20  9:24 UTC
  To: linux-kernel
  Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Mike Rapoport, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

It will be used by the upcoming secret memory implementation.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/internal.h | 3 +++
 mm/mmap.c     | 5 ++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 9886db20d94f..af0a92f8f6bc 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -349,6 +349,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
 extern void mlock_vma_page(struct page *page);
 extern unsigned int munlock_vma_page(struct page *page);
 
+extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+			      unsigned long len);
+
 /*
  * Clear the page's PageMlocked().  This can be useful in a situation where
  * we want to unconditionally remove a page from the pagecache -- e.g.,
diff --git a/mm/mmap.c b/mm/mmap.c
index 59a4682ebf3f..4dd40a4fedfb 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1310,9 +1310,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
 	return hint;
 }
 
-static inline int mlock_future_check(struct mm_struct *mm,
-				     unsigned long flags,
-				     unsigned long len)
+int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+		       unsigned long len)
 {
 	unsigned long locked, lock_limit;
 
-- 
2.26.2



* [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
From: Mike Rapoport @ 2020-07-20  9:24 UTC
  To: linux-kernel
  Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Mike Rapoport, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Introduce "secretmemfd" system call with the ability to create memory areas
visible only in the context of the owning process and not mapped not only
to other processes but in the kernel page tables as well.

The user creates a file descriptor using the secretmemfd system call; the
flags supplied as a parameter to this system call define the desired
protection mode for the memory associated with that file descriptor.
Currently there are two protection modes:

* exclusive - the memory area is unmapped from the kernel direct map and it
              is present only in the page tables of the owning mm.
* uncached  - the memory area is present only in the page tables of the
              owning mm and it is mapped there as uncached.

For instance, the following example will create an uncached mapping (error
handling is omitted):

	fd = secretmemfd(SECRETMEM_UNCACHED);
	ftruncate(fd, MAP_SIZE);
	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
		   fd, 0);

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 include/uapi/linux/magic.h     |   1 +
 include/uapi/linux/secretmem.h |   9 ++
 mm/Kconfig                     |   4 +
 mm/Makefile                    |   1 +
 mm/secretmem.c                 | 263 +++++++++++++++++++++++++++++++++
 5 files changed, 278 insertions(+)
 create mode 100644 include/uapi/linux/secretmem.h
 create mode 100644 mm/secretmem.c

diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index f3956fc11de6..35687dcb1a42 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -97,5 +97,6 @@
 #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
 #define Z3FOLD_MAGIC		0x33
 #define PPC_CMM_MAGIC		0xc7571590
+#define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/include/uapi/linux/secretmem.h b/include/uapi/linux/secretmem.h
new file mode 100644
index 000000000000..cef7a59f7492
--- /dev/null
+++ b/include/uapi/linux/secretmem.h
@@ -0,0 +1,9 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_LINUX_SECRETMEM_H
+#define _UAPI_LINUX_SECRETMEM_H
+
+/* secretmem operation modes */
+#define SECRETMEM_EXCLUSIVE	0x1
+#define SECRETMEM_UNCACHED	0x2
+
+#endif /* _UAPI_LINUX_SECRETMEM_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index f2104cc0d35c..c5aa948214f9 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -872,4 +872,8 @@ config ARCH_HAS_HUGEPD
 config MAPPING_DIRTY_HELPERS
         bool
 
+config SECRETMEM
+        def_bool ARCH_HAS_SET_DIRECT_MAP
+	select GENERIC_ALLOCATOR
+
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 6e9d46b2efc9..c2aa7a393b73 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -121,3 +121,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
 obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
 obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
 obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_SECRETMEM) += secretmem.o
diff --git a/mm/secretmem.c b/mm/secretmem.c
new file mode 100644
index 000000000000..2f65219baf80
--- /dev/null
+++ b/mm/secretmem.c
@@ -0,0 +1,263 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/mount.h>
+#include <linux/memfd.h>
+#include <linux/bitops.h>
+#include <linux/printk.h>
+#include <linux/pagemap.h>
+#include <linux/syscalls.h>
+#include <linux/pseudo_fs.h>
+#include <linux/set_memory.h>
+#include <linux/sched/signal.h>
+
+#include <uapi/linux/secretmem.h>
+#include <uapi/linux/magic.h>
+
+#include <asm/tlbflush.h>
+
+#include "internal.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "secretmem: " fmt
+
+#define SECRETMEM_MODE_MASK	(SECRETMEM_EXCLUSIVE | SECRETMEM_UNCACHED)
+#define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
+
+struct secretmem_ctx {
+	unsigned int mode;
+};
+
+static struct page *secretmem_alloc_page(gfp_t gfp)
+{
+	/*
+	 * FIXME: use a cache of large pages to reduce the direct map
+	 * fragmentation
+	 */
+	return alloc_page(gfp);
+}
+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t offset = vmf->pgoff;
+	unsigned long addr;
+	struct page *page;
+	int ret = 0;
+
+	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+		return vmf_error(-EINVAL);
+
+	page = find_get_entry(mapping, offset);
+	if (!page) {
+		page = secretmem_alloc_page(vmf->gfp_mask);
+		if (!page)
+			return vmf_error(-ENOMEM);
+
+		ret = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
+		if (unlikely(ret))
+			goto err_put_page;
+
+		ret = set_direct_map_invalid_noflush(page);
+		if (ret)
+			goto err_del_page_cache;
+
+		addr = (unsigned long)page_address(page);
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+		__SetPageUptodate(page);
+
+		ret = VM_FAULT_LOCKED;
+	}
+
+	vmf->page = page;
+	return ret;
+
+err_del_page_cache:
+	delete_from_page_cache(page);
+err_put_page:
+	put_page(page);
+	return vmf_error(ret);
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+	.fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct secretmem_ctx *ctx = file->private_data;
+	unsigned long mode = ctx->mode;
+	unsigned long len = vma->vm_end - vma->vm_start;
+
+	if (!mode)
+		return -EINVAL;
+
+	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+		return -EAGAIN;
+
+	switch (mode) {
+	case SECRETMEM_UNCACHED:
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+		fallthrough;
+	case SECRETMEM_EXCLUSIVE:
+		vma->vm_ops = &secretmem_vm_ops;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	vma->vm_flags |= VM_LOCKED;
+
+	return 0;
+}
+
+const struct file_operations secretmem_fops = {
+	.mmap		= secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+	return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+				 struct page *newpage, struct page *page,
+				 enum migrate_mode mode)
+{
+	return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+	set_direct_map_default_noflush(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+	.freepage	= secretmem_freepage,
+	.migratepage	= secretmem_migratepage,
+	.isolate_page	= secretmem_isolate_page,
+};
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+	struct file *file = ERR_PTR(-ENOMEM);
+	struct secretmem_ctx *ctx;
+	struct inode *inode;
+
+	inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+	if (IS_ERR(inode))
+		return ERR_CAST(inode);
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		goto err_free_inode;
+
+	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+				 O_RDWR, &secretmem_fops);
+	if (IS_ERR(file))
+		goto err_free_ctx;
+
+	mapping_set_unevictable(inode->i_mapping);
+
+	inode->i_mapping->private_data = ctx;
+	inode->i_mapping->a_ops = &secretmem_aops;
+
+	/* pretend we are a normal file with zero size */
+	inode->i_mode |= S_IFREG;
+	inode->i_size = 0;
+
+	file->private_data = ctx;
+
+	ctx->mode = flags & SECRETMEM_MODE_MASK;
+
+	return file;
+
+err_free_ctx:
+	kfree(ctx);
+err_free_inode:
+	iput(inode);
+	return file;
+}
+
+SYSCALL_DEFINE1(secretmemfd, unsigned long, flags)
+{
+	struct file *file;
+	unsigned int mode;
+	int fd, err;
+
+	/* make sure local flags do not conflict with global fcntl.h */
+	BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+		return -EINVAL;
+
+	/* modes are mutually exclusive, only one mode bit should be set */
+	mode = flags & SECRETMEM_FLAGS_MASK;
+	if (ffs(mode) != fls(mode))
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(flags & O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	file = secretmem_file_create(flags);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_put_fd;
+	}
+
+	file->f_flags |= O_LARGEFILE;
+
+	fd_install(fd, file);
+	return fd;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return err;
+}
+
+static void secretmem_evict_inode(struct inode *inode)
+{
+	struct secretmem_ctx *ctx = inode->i_private;
+
+	truncate_inode_pages_final(&inode->i_data);
+	clear_inode(inode);
+	kfree(ctx);
+}
+
+static const struct super_operations secretmem_super_ops = {
+	.evict_inode = secretmem_evict_inode,
+};
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+	struct pseudo_fs_context *ctx = init_pseudo(fc, SECRETMEM_MAGIC);
+
+	if (!ctx)
+		return -ENOMEM;
+	ctx->ops = &secretmem_super_ops;
+
+	return 0;
+}
+
+static struct file_system_type secretmem_fs = {
+	.name		= "secretmem",
+	.init_fs_context = secretmem_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+	int ret = 0;
+
+	secretmem_mnt = kern_mount(&secretmem_fs);
+	if (IS_ERR(secretmem_mnt))
+		ret = PTR_ERR(secretmem_mnt);
+
+	return ret;
+}
+fs_initcall(secretmem_init);
-- 
2.26.2



* [PATCH 4/6] arch, mm: wire up secretmemfd system call where relevant
From: Mike Rapoport @ 2020-07-20  9:24 UTC
  To: linux-kernel
  Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Mike Rapoport, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Wire up the secretmemfd system call on the architectures that define
ARCH_HAS_SET_DIRECT_MAP, namely arm64, riscv and x86.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 arch/arm64/include/asm/unistd32.h      | 2 ++
 arch/arm64/include/uapi/asm/unistd.h   | 1 +
 arch/riscv/include/asm/unistd.h        | 1 +
 arch/x86/entry/syscalls/syscall_32.tbl | 1 +
 arch/x86/entry/syscalls/syscall_64.tbl | 1 +
 include/linux/syscalls.h               | 1 +
 include/uapi/asm-generic/unistd.h      | 7 ++++++-
 7 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index 6d95d0c8bf2f..f9e00baa67f5 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -885,6 +885,8 @@ __SYSCALL(__NR_openat2, sys_openat2)
 __SYSCALL(__NR_pidfd_getfd, sys_pidfd_getfd)
 #define __NR_faccessat2 439
 __SYSCALL(__NR_faccessat2, sys_faccessat2)
+#define __NR_secretmemfd 440
+__SYSCALL(__NR_secretmemfd, sys_secretmemfd)
 
 /*
  * Please add new compat syscalls above this comment and update
diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h
index f83a70e07df8..f2693f05fc80 100644
--- a/arch/arm64/include/uapi/asm/unistd.h
+++ b/arch/arm64/include/uapi/asm/unistd.h
@@ -20,5 +20,6 @@
 #define __ARCH_WANT_SET_GET_RLIMIT
 #define __ARCH_WANT_TIME32_SYSCALLS
 #define __ARCH_WANT_SYS_CLONE3
+#define __ARCH_WANT_SECRETMEMFD
 
 #include <asm-generic/unistd.h>
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
index 977ee6181dab..9e47d9aed5eb 100644
--- a/arch/riscv/include/asm/unistd.h
+++ b/arch/riscv/include/asm/unistd.h
@@ -9,6 +9,7 @@
  */
 
 #define __ARCH_WANT_SYS_CLONE
+#define __ARCH_WANT_SECRETMEMFD
 
 #include <uapi/asm/unistd.h>
 
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index d8f8a1a69ed1..7b91a932ed13 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -443,3 +443,4 @@
 437	i386	openat2			sys_openat2
 438	i386	pidfd_getfd		sys_pidfd_getfd
 439	i386	faccessat2		sys_faccessat2
+440	i386	secretmemfd		sys_secretmemfd
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 78847b32e137..9cddea4ec1ce 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -360,6 +360,7 @@
 437	common	openat2			sys_openat2
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
+440	common	secretmemfd		sys_secretmemfd
 
 #
 # x32-specific system call numbers start at 512 to avoid cache impact
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index b951a87da987..8fe242fc70ea 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1005,6 +1005,7 @@ asmlinkage long sys_pidfd_send_signal(int pidfd, int sig,
 				       siginfo_t __user *info,
 				       unsigned int flags);
 asmlinkage long sys_pidfd_getfd(int pidfd, int fd, unsigned int flags);
+asmlinkage long sys_secretmemfd(unsigned long flags);
 
 /*
  * Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index f4a01305d9a6..aadaf25cbf5a 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -858,8 +858,13 @@ __SYSCALL(__NR_pidfd_getfd, sys_pidfd_getfd)
 #define __NR_faccessat2 439
 __SYSCALL(__NR_faccessat2, sys_faccessat2)
 
+#ifdef __ARCH_WANT_SECRETMEMFD
+#define __NR_secretmemfd 440
+__SYSCALL(__NR_secretmemfd, sys_secretmemfd)
+#endif
+
 #undef __NR_syscalls
-#define __NR_syscalls 440
+#define __NR_syscalls 441
 
 /*
  * 32 bit systems traditionally used different
-- 
2.26.2



* [PATCH 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation
From: Mike Rapoport @ 2020-07-20  9:24 UTC
  To: linux-kernel
  Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Mike Rapoport, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Removing a PAGE_SIZE page from the direct map every time such a page is
allocated for a secret memory mapping will cause severe fragmentation of
the direct map. This fragmentation can be reduced by using PMD-size pages
as a pool of small pages for secret memory mappings.

Add a gen_pool per secretmem inode and lazily populate this pool with
PMD-size pages.
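
With 4K base pages this means every pool refill makes a single order-9
allocation (2^PMD_PAGE_ORDER = 512 pages, i.e. 2M) and takes it out of
the direct map in one go, so the direct map is modified once per 2M of
secret memory rather than once per 4K page.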

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/secretmem.c | 107 ++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 88 insertions(+), 19 deletions(-)

diff --git a/mm/secretmem.c b/mm/secretmem.c
index 2f65219baf80..dce56f84968f 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -6,6 +6,7 @@
 #include <linux/bitops.h>
 #include <linux/printk.h>
 #include <linux/pagemap.h>
+#include <linux/genalloc.h>
 #include <linux/syscalls.h>
 #include <linux/pseudo_fs.h>
 #include <linux/set_memory.h>
@@ -25,24 +26,66 @@
 #define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
 
 struct secretmem_ctx {
+	struct gen_pool *pool;
 	unsigned int mode;
 };
 
-static struct page *secretmem_alloc_page(gfp_t gfp)
+static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 {
-	/*
-	 * FIXME: use a cache of large pages to reduce the direct map
-	 * fragmentation
-	 */
-	return alloc_page(gfp);
+	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
+	struct gen_pool *pool = ctx->pool;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	page = alloc_pages(gfp, PMD_PAGE_ORDER);
+	if (!page)
+		return -ENOMEM;
+
+	addr = (unsigned long)page_address(page);
+	split_page(page, PMD_PAGE_ORDER);
+
+	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
+	if (err) {
+		__free_pages(page, PMD_PAGE_ORDER);
+		return err;
+	}
+
+	__kernel_map_pages(page, nr_pages, 0);
+
+	return 0;
+}
+
+static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
+					 gfp_t gfp)
+{
+	struct gen_pool *pool = ctx->pool;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	if (gen_pool_avail(pool) < PAGE_SIZE) {
+		err = secretmem_pool_increase(ctx, gfp);
+		if (err)
+			return NULL;
+	}
+
+	addr = gen_pool_alloc(pool, PAGE_SIZE);
+	if (!addr)
+		return NULL;
+
+	page = virt_to_page(addr);
+	get_page(page);
+
+	return page;
 }
 
 static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 {
+	struct secretmem_ctx *ctx = vmf->vma->vm_file->private_data;
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	struct inode *inode = file_inode(vmf->vma->vm_file);
 	pgoff_t offset = vmf->pgoff;
-	unsigned long addr;
 	struct page *page;
 	int ret = 0;
 
@@ -51,7 +94,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 
 	page = find_get_entry(mapping, offset);
 	if (!page) {
-		page = secretmem_alloc_page(vmf->gfp_mask);
+		page = secretmem_alloc_page(ctx, vmf->gfp_mask);
 		if (!page)
 			return vmf_error(-ENOMEM);
 
@@ -59,14 +102,8 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 		if (unlikely(ret))
 			goto err_put_page;
 
-		ret = set_direct_map_invalid_noflush(page);
-		if (ret)
-			goto err_del_page_cache;
-
-		addr = (unsigned long)page_address(page);
-		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
-
 		__SetPageUptodate(page);
+		set_page_private(page, (unsigned long)ctx);
 
 		ret = VM_FAULT_LOCKED;
 	}
@@ -74,8 +111,6 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 	vmf->page = page;
 	return ret;
 
-err_del_page_cache:
-	delete_from_page_cache(page);
 err_put_page:
 	put_page(page);
 	return vmf_error(ret);
@@ -131,7 +166,11 @@ static int secretmem_migratepage(struct address_space *mapping,
 
 static void secretmem_freepage(struct page *page)
 {
-	set_direct_map_default_noflush(page);
+	unsigned long addr = (unsigned long)page_address(page);
+	struct secretmem_ctx *ctx = (struct secretmem_ctx *)page_private(page);
+	struct gen_pool *pool = ctx->pool;
+
+	gen_pool_free(pool, addr, PAGE_SIZE);
 }
 
 static const struct address_space_operations secretmem_aops = {
@@ -156,13 +195,18 @@ static struct file *secretmem_file_create(unsigned long flags)
 	if (!ctx)
 		goto err_free_inode;
 
+	ctx->pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+	if (!ctx->pool)
+		goto err_free_ctx;
+
 	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
 				 O_RDWR, &secretmem_fops);
 	if (IS_ERR(file))
-		goto err_free_ctx;
+		goto err_free_pool;
 
 	mapping_set_unevictable(inode->i_mapping);
 
+	inode->i_private = ctx;
 	inode->i_mapping->private_data = ctx;
 	inode->i_mapping->a_ops = &secretmem_aops;
 
@@ -176,6 +220,8 @@ static struct file *secretmem_file_create(unsigned long flags)
 
 	return file;
 
+err_free_pool:
+	gen_pool_destroy(ctx->pool);
 err_free_ctx:
 	kfree(ctx);
 err_free_inode:
@@ -220,11 +266,34 @@ SYSCALL_DEFINE1(secretmemfd, unsigned long, flags)
 	return err;
 }
 
+static void secretmem_cleanup_chunk(struct gen_pool *pool,
+				    struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long start = chunk->start_addr;
+	unsigned long end = chunk->end_addr;
+	unsigned long nr_pages, addr;
+
+	nr_pages = (end - start + 1) / PAGE_SIZE;
+	__kernel_map_pages(virt_to_page(start), nr_pages, 1);
+
+	for (addr = start; addr < end; addr += PAGE_SIZE)
+		put_page(virt_to_page(addr));
+}
+
+static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
+{
+	struct gen_pool *pool = ctx->pool;
+
+	gen_pool_for_each_chunk(pool, secretmem_cleanup_chunk, ctx);
+	gen_pool_destroy(pool);
+}
+
 static void secretmem_evict_inode(struct inode *inode)
 {
 	struct secretmem_ctx *ctx = inode->i_private;
 
 	truncate_inode_pages_final(&inode->i_data);
+	secretmem_cleanup_pool(ctx);
 	clear_inode(inode);
 	kfree(ctx);
 }
-- 
2.26.2



* [PATCH 6/6] mm: secretmem: add ability to reserve memory at boot
From: Mike Rapoport @ 2020-07-20  9:24 UTC
  To: linux-kernel
  Cc: Alexander Viro, Andrew Morton, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Mike Rapoport, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Taking pages out of the direct map and bringing them back may create
undesired fragmentation and usage of smaller pages in the direct mapping
of the physical memory.

This can be avoided if a significantly large area of the physical memory
is reserved for secretmem purposes at boot time.

Add the ability to reserve physical memory for secretmem at boot time
using the "secretmem" kernel parameter and then use that reserved memory
as a global pool for secret memory needs.
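
The size is parsed with memparse(), so the usual K/M/G suffixes apply;
for example, booting with

	secretmem=1G

reserves a 1G pool at boot (PUD-aligned, since twice the requested size
exceeds PUD_SIZE) that secret memory allocations are then served from
instead of the page allocator.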

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/secretmem.c | 134 ++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 126 insertions(+), 8 deletions(-)

diff --git a/mm/secretmem.c b/mm/secretmem.c
index dce56f84968f..322f425dbb22 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -8,6 +8,7 @@
 #include <linux/pagemap.h>
 #include <linux/genalloc.h>
 #include <linux/syscalls.h>
+#include <linux/memblock.h>
 #include <linux/pseudo_fs.h>
 #include <linux/set_memory.h>
 #include <linux/sched/signal.h>
@@ -30,6 +31,39 @@ struct secretmem_ctx {
 	unsigned int mode;
 };
 
+struct secretmem_pool {
+	struct gen_pool *pool;
+	unsigned long reserved_size;
+	void *reserved;
+};
+
+static struct secretmem_pool secretmem_pool;
+
+static struct page *secretmem_alloc_huge_page(gfp_t gfp)
+{
+	struct gen_pool *pool = secretmem_pool.pool;
+	unsigned long addr = 0;
+	struct page *page = NULL;
+
+	if (pool) {
+		if (gen_pool_avail(pool) < PMD_SIZE)
+			return NULL;
+
+		addr = gen_pool_alloc(pool, PMD_SIZE);
+		if (!addr)
+			return NULL;
+
+		page = virt_to_page(addr);
+	} else {
+		page = alloc_pages(gfp, PMD_PAGE_ORDER);
+
+		if (page)
+			split_page(page, PMD_PAGE_ORDER);
+	}
+
+	return page;
+}
+
 static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 {
 	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
@@ -38,12 +72,11 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	struct page *page;
 	int err;
 
-	page = alloc_pages(gfp, PMD_PAGE_ORDER);
+	page = secretmem_alloc_huge_page(gfp);
 	if (!page)
 		return -ENOMEM;
 
 	addr = (unsigned long)page_address(page);
-	split_page(page, PMD_PAGE_ORDER);
 
 	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
 	if (err) {
@@ -266,11 +299,13 @@ SYSCALL_DEFINE1(secretmemfd, unsigned long, flags)
 	return err;
 }
 
-static void secretmem_cleanup_chunk(struct gen_pool *pool,
-				    struct gen_pool_chunk *chunk, void *data)
+static void secretmem_recycle_range(unsigned long start, unsigned long end)
+{
+	gen_pool_free(secretmem_pool.pool, start, PMD_SIZE);
+}
+
+static void secretmem_release_range(unsigned long start, unsigned long end)
 {
-	unsigned long start = chunk->start_addr;
-	unsigned long end = chunk->end_addr;
 	unsigned long nr_pages, addr;
 
 	nr_pages = (end - start + 1) / PAGE_SIZE;
@@ -280,6 +315,18 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
 		put_page(virt_to_page(addr));
 }
 
+static void secretmem_cleanup_chunk(struct gen_pool *pool,
+				    struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long start = chunk->start_addr;
+	unsigned long end = chunk->end_addr;
+
+	if (secretmem_pool.pool)
+		secretmem_recycle_range(start, end);
+	else
+		secretmem_release_range(start, end);
+}
+
 static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
 {
 	struct gen_pool *pool = ctx->pool;
@@ -319,14 +366,85 @@ static struct file_system_type secretmem_fs = {
 	.kill_sb	= kill_anon_super,
 };
 
+static int secretmem_reserved_mem_init(void)
+{
+	struct gen_pool *pool;
+	struct page *page;
+	void *addr;
+	int err;
+
+	if (!secretmem_pool.reserved)
+		return 0;
+
+	pool = gen_pool_create(PMD_SHIFT, NUMA_NO_NODE);
+	if (!pool)
+		return -ENOMEM;
+
+	err = gen_pool_add(pool, (unsigned long)secretmem_pool.reserved,
+			   secretmem_pool.reserved_size, NUMA_NO_NODE);
+	if (err)
+		goto err_destroy_pool;
+
+	for (addr = secretmem_pool.reserved;
+	     addr < secretmem_pool.reserved + secretmem_pool.reserved_size;
+	     addr += PAGE_SIZE) {
+		page = virt_to_page(addr);
+		__ClearPageReserved(page);
+		set_page_count(page, 1);
+	}
+
+	secretmem_pool.pool = pool;
+	page = virt_to_page(secretmem_pool.reserved);
+	__kernel_map_pages(page, secretmem_pool.reserved_size / PAGE_SIZE, 0);
+	return 0;
+
+err_destroy_pool:
+	gen_pool_destroy(pool);
+	return err;
+}
+
 static int secretmem_init(void)
 {
-	int ret = 0;
+	int ret;
+
+	ret = secretmem_reserved_mem_init();
+	if (ret)
+		return ret;
 
 	secretmem_mnt = kern_mount(&secretmem_fs);
-	if (IS_ERR(secretmem_mnt))
+	if (IS_ERR(secretmem_mnt)) {
+		gen_pool_destroy(secretmem_pool.pool);
 		ret = PTR_ERR(secretmem_mnt);
+	}
 
 	return ret;
 }
 fs_initcall(secretmem_init);
+
+static int __init secretmem_setup(char *str)
+{
+	phys_addr_t align = PMD_SIZE;
+	unsigned long reserved_size;
+	void *reserved;
+
+	reserved_size = memparse(str, NULL);
+	if (!reserved_size)
+		return 0;
+
+	if (reserved_size * 2 > PUD_SIZE)
+		align = PUD_SIZE;
+
+	reserved = memblock_alloc(reserved_size, align);
+	if (!reserved) {
+		pr_err("failed to reserve %lu bytes\n", reserved_size);
+		return 0;
+	}
+
+	secretmem_pool.reserved_size = reserved_size;
+	secretmem_pool.reserved = reserved;
+
+	pr_info("reserved %luM\n", reserved_size >> 20);
+
+	return 1;
+}
+__setup("secretmem=", secretmem_setup);
-- 
2.26.2



* Re: [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
From: Arnd Bergmann @ 2020-07-20 11:30 UTC
  To: Mike Rapoport
  Cc: linux-kernel, Alexander Viro, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Linux API, linux-arch, Linux ARM, Linux FS-devel Mailing List,
	Linux-MM, linux-nvdimm, linux-riscv, the arch/x86 maintainers,
	linaro-mm-sig, Sumit Semwal

On Mon, Jul 20, 2020 at 11:25 AM Mike Rapoport <rppt@kernel.org> wrote:
>
> [...]

I wonder if this should be more closely related to dmabuf file
descriptors, which
are already used for a similar purpose: sharing access to secret memory areas
that are not visible to the OS but can be shared with hardware through device
drivers that can import a dmabuf file descriptor.

      Arnd


* Re: [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
From: Mike Rapoport @ 2020-07-20 14:20 UTC
  To: Arnd Bergmann
  Cc: linux-kernel, Alexander Viro, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Linux API, linux-arch, Linux ARM, Linux FS-devel Mailing List,
	Linux-MM, linux-nvdimm, linux-riscv, the arch/x86 maintainers,
	linaro-mm-sig, Sumit Semwal

On Mon, Jul 20, 2020 at 01:30:13PM +0200, Arnd Bergmann wrote:
> On Mon, Jul 20, 2020 at 11:25 AM Mike Rapoport <rppt@kernel.org> wrote:
> >
> > [...]
> 
> I wonder if this should be more closely related to dmabuf file
> descriptors, which
> are already used for a similar purpose: sharing access to secret memory areas
> that are not visible to the OS but can be shared with hardware through device
> drivers that can import a dmabuf file descriptor.

TBH, I didn't think about dmabuf, but my understanding is that in this
case memory areas are not visible to the OS because they are on device
memory rather than normal RAM, and when dmabuf is backed by normal RAM,
the memory is visible to the OS.

Did I miss anything?


>       Arnd

-- 
Sincerely yours,
Mike.


* Re: [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
From: Arnd Bergmann @ 2020-07-20 14:34 UTC
  To: Mike Rapoport
  Cc: linux-kernel, Alexander Viro, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Linux API, linux-arch, Linux ARM, Linux FS-devel Mailing List,
	Linux-MM, linux-nvdimm, linux-riscv, the arch/x86 maintainers,
	linaro-mm-sig, Sumit Semwal

On Mon, Jul 20, 2020 at 4:21 PM Mike Rapoport <rppt@kernel.org> wrote:
> On Mon, Jul 20, 2020 at 01:30:13PM +0200, Arnd Bergmann wrote:
> > On Mon, Jul 20, 2020 at 11:25 AM Mike Rapoport <rppt@kernel.org> wrote:
> > >
> > > [...]
> >
> > I wonder if this should be more closely related to dmabuf file
> > descriptors, which
> > are already used for a similar purpose: sharing access to secret memory areas
> > that are not visible to the OS but can be shared with hardware through device
> > drivers that can import a dmabuf file descriptor.
>
> > TBH, I didn't think about dmabuf, but my understanding is that in this
> > case memory areas are not visible to the OS because they are on device
> > memory rather than normal RAM, and when dmabuf is backed by normal RAM,
> > the memory is visible to the OS.

No, dmabuf is normally about normal RAM that is shared between multiple
devices, the idea is that you can have one driver allocate a buffer in RAM
and export it to user space through a file descriptor. The application can then
go and mmap() it or pass it into one or more other drivers.

This can be used e.g. for sharing a buffer between a video codec and the
gpu, or between a crypto engine and another device that accesses
unencrypted data while software can only observe the encrypted version.

       Arnd


* Re: [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
From: James Bottomley @ 2020-07-20 15:51 UTC
  To: Arnd Bergmann, Mike Rapoport
  Cc: linux-kernel, Alexander Viro, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, Kirill A. Shutemov, Matthew Wilcox,
	Mike Rapoport, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Thomas Gleixner, Tycho Andersen, Will Deacon, Linux API,
	linux-arch, Linux ARM, Linux FS-devel Mailing List, Linux-MM,
	linux-nvdimm, linux-riscv, the arch/x86 maintainers,
	linaro-mm-sig, Sumit Semwal

On Mon, 2020-07-20 at 13:30 +0200, Arnd Bergmann wrote:
> On Mon, Jul 20, 2020 at 11:25 AM Mike Rapoport <rppt@kernel.org>
> wrote:
> >
> > [...]
> 
> I wonder if this should be more closely related to dmabuf file
> descriptors, which are already used for a similar purpose: sharing
> access to secret memory areas that are not visible to the OS but can
> be shared with hardware through device drivers that can import a
> dmabuf file descriptor.

I'll assume you mean the dmabuf userspace API?  Because the kernel API
is completely device exchange specific and wholly inappropriate for
this use case.

The user space API of dmabuf uses a pseudo-filesystem.  So you mount
the dmabuf file type (and by "you" I mean root because an ordinary user
doesn't have sufficient privilege).  This is basically because every
dmabuf is usable by any user who has permissions.  This really isn't
the initial interface we want for secret memory because secret regions
are supposed to be per process and not shared (at least we don't want
other tenants to see who's using what).

Once you have the fd, you can seek to find the size, mmap, poll and
ioctl it.  The ioctls are all to do with memory synchronization (as
you'd expect from a device-backed region) and the mmap is handled by
the dma_buf_ops, which is device-specific.  Sizing is missing because
that's reported by the device, not settable by the user.

What we want is the ability to get an fd, set the properties and the
size and mmap it.  This is pretty much a 100% overlap with the memfd
API and not much overlap with the dmabuf one, which is why I don't
think the interface is very well suited.

James



* Re: [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
From: Mike Rapoport @ 2020-07-20 17:46 UTC
  To: Arnd Bergmann
  Cc: linux-kernel, Alexander Viro, Andrew Morton, Andy Lutomirski,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Idan Yaniv, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Linux API, linux-arch, Linux ARM, Linux FS-devel Mailing List,
	Linux-MM, linux-nvdimm, linux-riscv, the arch/x86 maintainers,
	linaro-mm-sig, Sumit Semwal

On Mon, Jul 20, 2020 at 04:34:12PM +0200, Arnd Bergmann wrote:
> On Mon, Jul 20, 2020 at 4:21 PM Mike Rapoport <rppt@kernel.org> wrote:
> > On Mon, Jul 20, 2020 at 01:30:13PM +0200, Arnd Bergmann wrote:
> > > On Mon, Jul 20, 2020 at 11:25 AM Mike Rapoport <rppt@kernel.org> wrote:
> > > >
> > > > [...]
> > >
> > > I wonder if this should be more closely related to dmabuf file
> > > descriptors, which
> > > are already used for a similar purpose: sharing access to secret memory areas
> > > that are not visible to the OS but can be shared with hardware through device
> > > drivers that can import a dmabuf file descriptor.
> >
> > TBH, I didn't think about dmabuf, but my understanding is that in this
> > case the memory areas are not visible to the OS because they are in
> > device memory rather than normal RAM, and when a dmabuf is backed by
> > normal RAM, the memory is visible to the OS.
> 
> No, dmabuf is normally about normal RAM that is shared between multiple
> devices, the idea is that you can have one driver allocate a buffer in RAM
> and export it to user space through a file descriptor. The application can then
> go and mmap() it or pass it into one or more other drivers.
> 
> This can be used e.g. for sharing a buffer between a video codec and the
> gpu, or between a crypto engine and another device that accesses
> unencrypted data while software can only observe the encrypted version.

For our use case, sharing is optional on one side, and there are no
devices involved on the other.

As James pointed out, there is no match for the userspace API, and if
a use case emerges that requires integrating secretmem with dma-buf,
we'll deal with it then.

>        Arnd

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
  2020-07-20 15:51     ` James Bottomley
@ 2020-07-20 18:08       ` Arnd Bergmann
  2020-07-20 19:16         ` James Bottomley
  0 siblings, 1 reply; 17+ messages in thread
From: Arnd Bergmann @ 2020-07-20 18:08 UTC (permalink / raw)
  To: James E.J. Bottomley
  Cc: Mike Rapoport, linux-kernel, Alexander Viro, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Idan Yaniv, Ingo Molnar, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Linux API, linux-arch, Linux ARM, Linux FS-devel Mailing List,
	Linux-MM, linux-nvdimm, linux-riscv, the arch/x86 maintainers,
	linaro-mm-sig, Sumit Semwal

On Mon, Jul 20, 2020 at 5:52 PM James Bottomley <jejb@linux.ibm.com> wrote:
> On Mon, 2020-07-20 at 13:30 +0200, Arnd Bergmann wrote:
>
> I'll assume you mean the dmabuf userspace API?  Because the kernel API
> is completely device exchange specific and wholly inappropriate for
> this use case.
>
> The user space API of dmabuf uses a pseudo-filesystem.  So you mount
> the dmabuf file type (and by "you" I mean root because an ordinary user
> doesn't have sufficient privilege).  This is basically because every
> dmabuf is usable by any user who has permissions.  This really isn't
> the initial interface we want for secret memory because secret regions
> are supposed to be per process and not shared (at least we don't want
> other tenants to see who's using what).
>
> Once you have the fd, you can seek to find the size, mmap, poll and
> ioctl it.  The ioctls are all to do with memory synchronization (as
> you'd expect from a device backed region) and the mmap is handled by
> the dma_buf_ops, which is device specific.  Sizing is missing because
> that's reported by the device, not settable by the user.

I was mainly talking about the in-kernel interface that is used for
sharing a buffer with hardware. Aside from the limited ioctls, anything
in the kernel can decide on how it wants to export a dma_buf by
calling dma_buf_export()/dma_buf_fd(), which is roughly what the
new syscall does as well. Using dma_buf vs the proposed
implementation for this is not a big difference in complexity.
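
Roughly the following, on the kernel side (a sketch; my_buf and
my_dma_buf_ops stand in for a real driver's structures):

        /* inside a driver that owns the backing storage */
        static int my_export_fd(struct my_buf *buf)
        {
                DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
                struct dma_buf *dmabuf;

                exp_info.ops = &my_dma_buf_ops; /* device-specific callbacks */
                exp_info.size = buf->size;      /* fixed at export time */
                exp_info.flags = O_RDWR;
                exp_info.priv = buf;

                dmabuf = dma_buf_export(&exp_info);
                if (IS_ERR(dmabuf))
                        return PTR_ERR(dmabuf);

                /* an fd userspace can mmap() or pass to other drivers */
                return dma_buf_fd(dmabuf, O_CLOEXEC);
        }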

The one thing that a dma_buf does is that it allows devices to
do DMA on it. This is either something that can turn out to be
useful later, or it is not. From the description, it sounded like
the sharing might be useful, since we already have known use
cases in which "secret" data is exchanged with a trusted execution
environment using the dma-buf interface.

If there is no way the data stored in this new secret memory area
would relate to secret data in a TEE or some other hardware
device, then I agree that dma-buf has no value.

> What we want is the ability to get an fd, set the properties and the
> size and mmap it.  This is pretty much a 100% overlap with the memfd
> API and not much overlap with the dmabuf one, which is why I don't
> think the interface is very well suited.

Does that mean you are suggesting to use additional flags on
memfd_create() instead of a new system call?

      Arnd

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
  2020-07-20 18:08       ` Arnd Bergmann
@ 2020-07-20 19:16         ` James Bottomley
  2020-07-20 20:05           ` Arnd Bergmann
  0 siblings, 1 reply; 17+ messages in thread
From: James Bottomley @ 2020-07-20 19:16 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Mike Rapoport, linux-kernel, Alexander Viro, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Idan Yaniv, Ingo Molnar, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Linux API, linux-arch, Linux ARM, Linux FS-devel Mailing List,
	Linux-MM, linux-nvdimm, linux-riscv, the arch/x86 maintainers,
	linaro-mm-sig, Sumit Semwal

On Mon, 2020-07-20 at 20:08 +0200, Arnd Bergmann wrote:
> On Mon, Jul 20, 2020 at 5:52 PM James Bottomley <jejb@linux.ibm.com>
> wrote:
> > On Mon, 2020-07-20 at 13:30 +0200, Arnd Bergmann wrote:
> > 
> > I'll assume you mean the dmabuf userspace API?  Because the kernel
> > API is completely device exchange specific and wholly inappropriate
> > for this use case.
> > 
> > The user space API of dmabuf uses a pseudo-filesystem.  So you
> > mount the dmabuf file type (and by "you" I mean root because an
> > ordinary user doesn't have sufficient privilege).  This is
> > basically because every dmabuf is usable by any user who has
> > permissions.  This really isn't the initial interface we want for
> > secret memory because secret regions are supposed to be per process
> > and not shared (at least we don't want other tenants to see who's
> > using what).
> > 
> > Once you have the fd, you can seek to find the size, mmap, poll and
> > ioctl it.  The ioctls are all to do with memory synchronization (as
> > you'd expect from a device backed region) and the mmap is handled
> > by the dma_buf_ops, which is device specific.  Sizing is missing
> > because that's reported by the device, not settable by the user.
> 
> I was mainly talking about the in-kernel interface that is used for
> sharing a buffer with hardware. Aside from the limited ioctls,
> anything in the kernel can decide on how it wants to export a dma_buf
> by calling dma_buf_export()/dma_buf_fd(), which is roughly what the
> new syscall does as well. Using dma_buf vs the proposed
> implementation for this is not a big difference in complexity.

I have thought about it, but haven't got much further:  We can't couple
to SGX without a huge break in the current simple userspace API (it
becomes complex because you'd have to enter the enclave each time you
want to use the memory, or put the whole process in the enclave, which
is a bit of a nightmare for simplicity), and we could only couple it to
SEV if the memory encryption engine would respond to PCID as well as
ASID, which it doesn't.

> The one thing that a dma_buf does is that it allows devices to
> do DMA on it. This is either something that can turn out to be
> useful later, or it is not. From the description, it sounded like
> the sharing might be useful, since we already have known use
> cases in which "secret" data is exchanged with a trusted execution
> environment using the dma-buf interface.

The current use case for private keys is that you take an encrypted
file (which would be the DMA-coupled part) and you decrypt the contents
into the secret memory.  There might possibly be a DMA component later
where an HSM-like device DMAs a key directly into your secret memory to
avoid exposure, but I wouldn't anticipate any need for anything beyond
the usual page cache API for that case (effectively this would behave
like an ordinary page cache page except that only the current process
would be able to touch the contents).
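
Sketched in code, with decrypt_key() standing in for whatever crypto
library does the work, KEY_SIZE illustrative, and secretmemfd() being
the raw-syscall wrapper sketched earlier in the thread (includes and
error handling omitted):

        unsigned char enc_buf[4096];            /* size illustrative */

        /* the encrypted file goes through the ordinary page cache */
        int enc_fd = open("key.enc", O_RDONLY);
        ssize_t n = read(enc_fd, enc_buf, sizeof(enc_buf));

        /* the plaintext only ever lands in the secret mapping */
        int fd = secretmemfd(SECRETMEM_EXCLUSIVE);
        ftruncate(fd, KEY_SIZE);
        unsigned char *key = mmap(NULL, KEY_SIZE, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
        decrypt_key(enc_buf, n, key);           /* stand-in helper */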

> If there is no way the data stored in this new secret memory area
> would relate to secret data in a TEE or some other hardware
> device, then I agree that dma-buf has no value.

Never say never, but current TEE designs tend to require full
confidentiality for the entire execution.  What we're probing is
whether we can improve security with an API that requires less than
full confidentiality for the process.  I think if the API proves useful
then we will get HW support for it, but it likely won't be in the
form of today's TEEs.

> > What we want is the ability to get an fd, set the properties and
> > the size and mmap it.  This is pretty much a 100% overlap with the
> > memfd API and not much overlap with the dmabuf one, which is why I
> > don't think the interface is very well suited.
> 
> Does that mean you are suggesting to use additional flags on
> memfd_create() instead of a new system call?

Well, that was what the previous patch did.  I'm agnostic on the
mechanism for obtaining the fd: new syscall as this patch does or
extension to memfd like the old one did.  All I was saying is that once
you have the fd, the API you use on it is the same as the memfd API.

James


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
  2020-07-20 19:16         ` James Bottomley
@ 2020-07-20 20:05           ` Arnd Bergmann
  0 siblings, 0 replies; 17+ messages in thread
From: Arnd Bergmann @ 2020-07-20 20:05 UTC (permalink / raw)
  To: James E.J. Bottomley
  Cc: Mike Rapoport, linux-kernel, Alexander Viro, Andrew Morton,
	Andy Lutomirski, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Idan Yaniv, Ingo Molnar, Kirill A. Shutemov,
	Matthew Wilcox, Mike Rapoport, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Thomas Gleixner, Tycho Andersen, Will Deacon,
	Linux API, linux-arch, Linux ARM, Linux FS-devel Mailing List,
	Linux-MM, linux-nvdimm, linux-riscv, the arch/x86 maintainers,
	linaro-mm-sig, Sumit Semwal

On Mon, Jul 20, 2020 at 9:16 PM James Bottomley <jejb@linux.ibm.com> wrote:
> On Mon, 2020-07-20 at 20:08 +0200, Arnd Bergmann wrote:
> > On Mon, Jul 20, 2020 at 5:52 PM James Bottomley <jejb@linux.ibm.com>
> >
> > If there is no way the data stored in this new secret memory area
> > would relate to secret data in a TEE or some other hardware
> > device, then I agree that dma-buf has no value.
>
> Never say never, but current TEE designs tend to require full
> confidentiality for the entire execution.  What we're probing is
> whether we can improve security with an API that requires less than
> full confidentiality for the process.  I think if the API proves useful
> then we will get HW support for it, but it likely won't be in the
> form of today's TEEs.

As I understand it, you normally have two kinds of buffers for the TEE:
one that may be allocated by Linux but is owned by the TEE itself
and not accessible by any process, and one that is used for
communication between the TEE and a user process.

The sharing would clearly work only for the second type: data that
a process wants to share with the TEE while exposing as little else as possible.

A hypothetical example might be a process that passes encrypted
data to the TEE (which holds the key) for decryption, receives
decrypted data and then consumes that data in its own address
space. An electronic voting system (I know, evil example) might
receive encrypted ballots and sum them up this way without itself
having the secret key or anything else being able to observe
intermediate results.

> > > What we want is the ability to get an fd, set the properties and
> > > the size and mmap it.  This is pretty much a 100% overlap with the
> > > memfd API and not much overlap with the dmabuf one, which is why I
> > > don't think the interface is very well suited.
> >
> > Does that mean you are suggesting to use additional flags on
> > memfd_create() instead of a new system call?
>
> Well, that was what the previous patch did.  I'm agnostic on the
> mechanism for obtaining the fd: new syscall as this patch does or
> extension to memfd like the old one did.  All I was saying is that once
> you have the fd, the API you use on it is the same as the memfd API.

Ok.

I suppose we could even retrofit dma-buf underneath the
secretmemfd implementation if it ends up being useful later on,

      Arnd

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas
  2020-07-20  9:24 ` [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas Mike Rapoport
  2020-07-20 11:30   ` Arnd Bergmann
@ 2020-07-21 10:59   ` Michael Kerrisk (man-pages)
  1 sibling, 0 replies; 17+ messages in thread
From: Michael Kerrisk (man-pages) @ 2020-07-21 10:59 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: lkml, Alexander Viro, Andrew Morton, Andy Lutomirski,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Idan Yaniv, Ingo Molnar, James Bottomley,
	Kirill A. Shutemov, Matthew Wilcox, Mike Rapoport,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Thomas Gleixner,
	Tycho Andersen, Will Deacon, Linux API, linux-arch,
	linux-arm-kernel, linux-fsdevel, Linux-MM, linux-nvdimm,
	linux-riscv, maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)

Hi Mike,

On Mon, 20 Jul 2020 at 11:26, Mike Rapoport <rppt@kernel.org> wrote:
>
> From: Mike Rapoport <rppt@linux.ibm.com>
>
> Introduce "secretmemfd" system call with the ability to create memory areas
> visible only in the context of the owning process and mapped neither into
> other processes nor into the kernel page tables.
>
> The user will create a file descriptor using the secretmemfd system call

Without wanting to start a bikeshed discussion, the more common
convention in recently added system calls is to use an underscore in
names that consist of multiple clearly distinct words. See many
examples in  https://man7.org/linux/man-pages/man2/syscalls.2.html.

Thus, I'd suggest at least secret_memfd().

Also, I wonder whether memfd_secret() might not be even better.
There's plenty of precedent for the naming style where related APIs
share a common prefix [1].

Thanks,

Michael

[1] Some examples:

       epoll_create(2)
       epoll_create1(2)
       epoll_ctl(2)
       epoll_pwait(2)
       epoll_wait(2)

       mq_getsetattr(2)
       mq_notify(2)
       mq_open(2)
       mq_timedreceive(2)
       mq_timedsend(2)
       mq_unlink(2)

       sched_get_affinity(2)
       sched_get_priority_max(2)
       sched_get_priority_min(2)
       sched_getaffinity(2)
       sched_getattr(2)
       sched_getparam(2)
       sched_getscheduler(2)
       sched_rr_get_interval(2)
       sched_set_affinity(2)
       sched_setaffinity(2)
       sched_setattr(2)
       sched_setparam(2)
       sched_setscheduler(2)
       sched_yield(2)

       timer_create(2)
       timer_delete(2)
       timer_getoverrun(2)
       timer_gettime(2)
       timer_settime(2)

       timerfd_create(2)
       timerfd_gettime(2)
       timerfd_settime(2)




-- 
Michael Kerrisk
Linux man-pages maintainer; http://www.kernel.org/doc/man-pages/
Linux/UNIX System Programming Training: http://man7.org/training/

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH 4/6] arch, mm: wire up secretmemfd system call where relevant
  2020-07-20  9:24 ` [PATCH 4/6] arch, mm: wire up secretmemfd system call where relevant Mike Rapoport
@ 2020-07-26 17:44   ` Palmer Dabbelt
  0 siblings, 0 replies; 17+ messages in thread
From: Palmer Dabbelt @ 2020-07-26 17:44 UTC (permalink / raw)
  To: rppt
  Cc: linux-kernel, viro, akpm, luto, Arnd Bergmann, bp,
	catalin.marinas, cl, dan.j.williams, dave.hansen,
	elena.reshetova, hpa, idan.yaniv, mingo, jejb, kirill, willy,
	rppt, rppt, Paul Walmsley, peterz, tglx, tycho, will, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-nvdimm, linux-riscv, x86

On Mon, 20 Jul 2020 02:24:33 PDT (-0700), rppt@kernel.org wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
>
> Wire up secretmemfd system call on architectures that define
> ARCH_HAS_SET_DIRECT_MAP, namely arm64, risc-v and x86.
>
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
>  arch/arm64/include/asm/unistd32.h      | 2 ++
>  arch/arm64/include/uapi/asm/unistd.h   | 1 +
>  arch/riscv/include/asm/unistd.h        | 1 +
>  arch/x86/entry/syscalls/syscall_32.tbl | 1 +
>  arch/x86/entry/syscalls/syscall_64.tbl | 1 +
>  include/linux/syscalls.h               | 1 +
>  include/uapi/asm-generic/unistd.h      | 7 ++++++-
>  7 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
> index 977ee6181dab..9e47d9aed5eb 100644
> --- a/arch/riscv/include/asm/unistd.h
> +++ b/arch/riscv/include/asm/unistd.h
> @@ -9,6 +9,7 @@
>   */
>
>  #define __ARCH_WANT_SYS_CLONE
> +#define __ARCH_WANT_SECRETMEMFD
>
>  #include <uapi/asm/unistd.h>
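
The asm-generic side presumably follows the usual pattern -- sketched
here with a placeholder syscall number, the real one being assigned by
the patch itself:

        /* include/uapi/asm-generic/unistd.h -- sketch only */
        #ifdef __ARCH_WANT_SECRETMEMFD
        #define __NR_secretmemfd 439        /* placeholder number */
        __SYSCALL(__NR_secretmemfd, sys_secretmemfd)
        #endif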

Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2020-07-26 17:44 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-20  9:24 [PATCH 0/6] mm: introduce secretmemfd system call to create "secret" memory areas Mike Rapoport
2020-07-20  9:24 ` [PATCH 1/6] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
2020-07-20  9:24 ` [PATCH 2/6] mmap: make mlock_future_check() global Mike Rapoport
2020-07-20  9:24 ` [PATCH 3/6] mm: introduce secretmemfd system call to create "secret" memory areas Mike Rapoport
2020-07-20 11:30   ` Arnd Bergmann
2020-07-20 14:20     ` Mike Rapoport
2020-07-20 14:34       ` Arnd Bergmann
2020-07-20 17:46         ` Mike Rapoport
2020-07-20 15:51     ` James Bottomley
2020-07-20 18:08       ` Arnd Bergmann
2020-07-20 19:16         ` James Bottomley
2020-07-20 20:05           ` Arnd Bergmann
2020-07-21 10:59   ` Michael Kerrisk (man-pages)
2020-07-20  9:24 ` [PATCH 4/6] arch, mm: wire up secretmemfd system call where relevant Mike Rapoport
2020-07-26 17:44   ` Palmer Dabbelt
2020-07-20  9:24 ` [PATCH 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation Mike Rapoport
2020-07-20  9:24 ` [PATCH 6/6] mm: secretmem: add ability to reserve memory at boot Mike Rapoport
