* [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2020-08-18 14:15 Mike Rapoport
  2020-08-18 14:15 ` [PATCH v4 1/6] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
                   ` (8 more replies)
  0 siblings, 9 replies; 20+ messages in thread
From: Mike Rapoport @ 2020-08-18 14:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-

From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

This is an implementation of "secret" mappings backed by a file descriptor. 

v4 changes:
* rebase on v5.9-rc1
* Do not redefine PMD_PAGE_ORDER in fs/dax.c, thanks Kirill
* Make secret mappings exclusive by default and only require flags to
  memfd_secret() system call for uncached mappings, thanks again Kirill :)

v3 changes:
* Squash kernel-parameters.txt update into the commit that added the
  command line option.
* Make uncached mode explicitly selectable by architectures. For now enable
  it only on x86.

v2 changes:
* Follow Michael's suggestion and name the new system call 'memfd_secret'
* Add kernel-parameters documentation about the boot option
* Fix i386-tinyconfig regression reported by the kbuild bot.
  CONFIG_SECRETMEM now depends on !EMBEDDED to disable it on small systems
  from one side and still make it available unconditionally on
  architectures that support SET_DIRECT_MAP.


The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call. The desired protection mode for the
memory is configured using the flags parameter of the system call. The
mmap() of the file descriptor created with memfd_secret() will create a
"secret" memory mapping. The pages in that mapping will be marked as not
present in the direct map and will have the desired protection bits set in
the user page table. For instance, the current implementation allows
uncached mappings.
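
A minimal sketch of the userspace flow (error handling omitted; the raw
syscall(2) invocation by number stands in for a libc wrapper that does not
exist yet, and 440 matches the tables wired up in patch 4/6):

	#define _GNU_SOURCE
	#include <sys/mman.h>
	#include <unistd.h>

	#ifndef __NR_memfd_secret
	#define __NR_memfd_secret 440	/* per the syscall tables in patch 4/6 */
	#endif

	int main(void)
	{
		size_t len = 4096;
		int fd = syscall(__NR_memfd_secret, 0); /* default "exclusive" mode */

		ftruncate(fd, len);	/* size the backing anonymous file */
		/* pages faulted in here are dropped from the kernel direct map */
		char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		return p == MAP_FAILED;
	}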

Although normally Linux userspace mappings are protected from other users,
such secret mappings are useful for environments where a hostile tenant is
trying to trick the kernel into giving them access to other tenants'
mappings.

Additionally, the secret mappings may be used as a means to protect guest
memory in a virtual machine host.

To demonstrate the usage of secret memory we've created a userspace library
[1] that does two things: it acts as a preloader for openssl, redirecting
all OPENSSL_malloc calls to secret memory so that any secret keys are
automatically protected this way, and it exposes the API to users who need
it directly. We anticipate that a lot of the use cases will be like the
openssl one: many toolkits that deal with secret keys already have special
handling for that memory to try to give it greater protection, so this
would simply be pluggable into the toolkits without any need to modify user
applications.
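
For illustration, the preloader part can be as small as routing OpenSSL's
allocator through CRYPTO_set_mem_functions(); a rough sketch, where the
secret_*() pool helpers are hypothetical stand-ins for what the real
library [1] implements on top of memfd_secret():

	#include <openssl/crypto.h>

	/* hypothetical allocator carving pieces out of an mmap()ed
	 * memfd_secret() region */
	extern void *secret_alloc(size_t n);
	extern void *secret_realloc(void *p, size_t n);
	extern void secret_free(void *p);

	static void *sec_malloc(size_t n, const char *file, int line)
	{
		return secret_alloc(n);
	}

	static void *sec_realloc(void *p, size_t n, const char *file, int line)
	{
		return secret_realloc(p, n);
	}

	static void sec_free(void *p, const char *file, int line)
	{
		secret_free(p);
	}

	/* runs at LD_PRELOAD time, before openssl allocates anything */
	__attribute__((constructor))
	static void secretmem_preload_init(void)
	{
		CRYPTO_set_mem_functions(sec_malloc, sec_realloc, sec_free);
	}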

I hesitated between continuing to use new flags to memfd_create() and
adding a new system call, and I decided on a new system call after I
started to look into the man pages update. There would have been two
completely independent descriptions and I think it would have been very
confusing.

Hiding secret memory mappings behind an anonymous file allows (ab)use of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.

The anonymous file may also be used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
ABIs in the future.

As the fragmentation of the direct map was one of the major concerns raised
during the previous postings, I've added an amortizing cache of PMD-size
pages to each file descriptor and an ability to reserve large chunks of the
physical memory at boot time and then use this memory as an allocation pool
for the secret memory areas.

v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org/
rfc-v2: https://lore.kernel.org/lkml/20200706172051.19465-1-rppt@kernel.org/
rfc-v1: https://lore.kernel.org/lkml/20200130162340.GA14232@rapoport-lnx/

Mike Rapoport (6):
  mm: add definition of PMD_PAGE_ORDER
  mmap: make mlock_future_check() global
  mm: introduce memfd_secret system call to create "secret" memory areas
  arch, mm: wire up memfd_secret system call where relevant
  mm: secretmem: use PMD-size pages to amortize direct map fragmentation
  mm: secretmem: add ability to reserve memory at boot

 arch/Kconfig                           |   7 +
 arch/arm64/include/asm/unistd.h        |   2 +-
 arch/arm64/include/asm/unistd32.h      |   2 +
 arch/arm64/include/uapi/asm/unistd.h   |   1 +
 arch/riscv/include/asm/unistd.h        |   1 +
 arch/x86/Kconfig                       |   1 +
 arch/x86/entry/syscalls/syscall_32.tbl |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl |   1 +
 fs/dax.c                               |  11 +-
 include/linux/pgtable.h                |   3 +
 include/linux/syscalls.h               |   1 +
 include/uapi/asm-generic/unistd.h      |   7 +-
 include/uapi/linux/magic.h             |   1 +
 include/uapi/linux/secretmem.h         |   8 +
 kernel/sys_ni.c                        |   2 +
 mm/Kconfig                             |   4 +
 mm/Makefile                            |   1 +
 mm/internal.h                          |   3 +
 mm/mmap.c                              |   5 +-
 mm/secretmem.c                         | 451 +++++++++++++++++++++++++
 20 files changed, 501 insertions(+), 12 deletions(-)
 create mode 100644 include/uapi/linux/secretmem.h
 create mode 100644 mm/secretmem.c

-- 
2.26.2

* [PATCH v4 1/6] mm: add definition of PMD_PAGE_ORDER
  2020-08-18 14:15 [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
@ 2020-08-18 14:15 ` Mike Rapoport
  2020-08-18 14:15 ` [PATCH v4 2/6] mmap: make mlock_future_check() global Mike Rapoport
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 20+ messages in thread
From: Mike Rapoport @ 2020-08-18 14:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-

From: Mike Rapoport <rppt@linux.ibm.com>

The definition of PMD_PAGE_ORDER, denoting the number of base pages in a
second-level leaf page, is already used by DAX and may be handy in other
cases as well.

Several architectures already have a definition of PMD_ORDER as the size of
a second-level page table, so to avoid conflicts with those definitions use
the name PMD_PAGE_ORDER and update DAX accordingly.
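
For example, on x86-64 with 4K base pages and 2M PMD leaf entries this
evaluates to

	PMD_PAGE_ORDER = PMD_SHIFT - PAGE_SHIFT = 21 - 12 = 9

i.e. 1 << 9 = 512 base pages per second-level leaf.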

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 fs/dax.c                | 11 ++++-------
 include/linux/pgtable.h |  3 +++
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 95341af1a966..09a7fdb879b6 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -49,9 +49,6 @@ static inline unsigned int pe_order(enum page_entry_size pe_size)
 #define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)
 #define PG_PMD_NR	(PMD_SIZE >> PAGE_SHIFT)
 
-/* The order of a PMD entry */
-#define PMD_ORDER	(PMD_SHIFT - PAGE_SHIFT)
-
 static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];
 
 static int __init init_dax_wait_table(void)
@@ -98,7 +95,7 @@ static bool dax_is_locked(void *entry)
 static unsigned int dax_entry_order(void *entry)
 {
 	if (xa_to_value(entry) & DAX_PMD)
-		return PMD_ORDER;
+		return PMD_PAGE_ORDER;
 	return 0;
 }
 
@@ -1455,7 +1452,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct address_space *mapping = vma->vm_file->f_mapping;
-	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_ORDER);
+	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_PAGE_ORDER);
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool sync;
@@ -1514,7 +1511,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * entry is already in the array, for instance), it will return
 	 * VM_FAULT_FALLBACK.
 	 */
-	entry = grab_mapping_entry(&xas, mapping, PMD_ORDER);
+	entry = grab_mapping_entry(&xas, mapping, PMD_PAGE_ORDER);
 	if (xa_is_internal(entry)) {
 		result = xa_to_internal(entry);
 		goto fallback;
@@ -1680,7 +1677,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
 	if (order == 0)
 		ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
 #ifdef CONFIG_FS_DAX_PMD
-	else if (order == PMD_ORDER)
+	else if (order == PMD_PAGE_ORDER)
 		ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
 #endif
 	else
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index a124c21e3204..fb9c386e4f54 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -28,6 +28,9 @@
 #define USER_PGTABLES_CEILING	0UL
 #endif
 
+/* Number of base pages in a second level leaf page */
+#define PMD_PAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)
+
 /*
  * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
  *
-- 
2.26.2

* [PATCH v4 2/6] mmap: make mlock_future_check() global
  2020-08-18 14:15 [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
  2020-08-18 14:15 ` [PATCH v4 1/6] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
@ 2020-08-18 14:15 ` Mike Rapoport
  2020-08-18 14:15 ` [PATCH v4 3/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 20+ messages in thread
From: Mike Rapoport @ 2020-08-18 14:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-

From: Mike Rapoport <rppt@linux.ibm.com>

It will be used by the upcoming secret memory implementation.
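
For context, a condensed sketch of the check this makes global (as it
stands in mm/mmap.c at this point; details elided):

	int mlock_future_check(struct mm_struct *mm, unsigned long flags,
			       unsigned long len)
	{
		unsigned long locked, lock_limit;

		/* mlock MCL_FUTURE? */
		if (flags & VM_LOCKED) {
			locked = len >> PAGE_SHIFT;
			locked += mm->locked_vm;
			lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
			if (locked > lock_limit && !capable(CAP_IPC_LOCK))
				return -EAGAIN;
		}
		return 0;
	}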

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/internal.h | 3 +++
 mm/mmap.c     | 5 ++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 10c677655912..40544fbf49c9 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -350,6 +350,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
 extern void mlock_vma_page(struct page *page);
 extern unsigned int munlock_vma_page(struct page *page);
 
+extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+			      unsigned long len);
+
 /*
  * Clear the page's PageMlocked().  This can be useful in a situation where
  * we want to unconditionally remove a page from the pagecache -- e.g.,
diff --git a/mm/mmap.c b/mm/mmap.c
index 40248d84ad5f..190761920142 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1310,9 +1310,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
 	return hint;
 }
 
-static inline int mlock_future_check(struct mm_struct *mm,
-				     unsigned long flags,
-				     unsigned long len)
+int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+		       unsigned long len)
 {
 	unsigned long locked, lock_limit;
 
-- 
2.26.2

* [PATCH v4 3/6] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-08-18 14:15 [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
  2020-08-18 14:15 ` [PATCH v4 1/6] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
  2020-08-18 14:15 ` [PATCH v4 2/6] mmap: make mlock_future_check() global Mike Rapoport
@ 2020-08-18 14:15 ` Mike Rapoport
  2020-08-18 14:15 ` [PATCH v4 4/6] arch, mm: wire up memfd_secret system call where relevant Mike Rapoport
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 20+ messages in thread
From: Mike Rapoport @ 2020-08-18 14:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-

From: Mike Rapoport <rppt@linux.ibm.com>

Introduce "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped not
only to other processes but in the kernel page tables as well.

The user creates a file descriptor using the memfd_secret() system call;
the flags supplied as a parameter to this system call define the desired
protection mode for the memory associated with that file descriptor.

Currently there are two protection modes:

* exclusive - the memory area is unmapped from the kernel direct map and it
              is present only in the page tables of the owning mm.
* uncached  - the memory area is present only in the page tables of the
              owning mm and it is mapped there as uncached.

The "exclusive" mode is enabled implicitly and it is the default mode for
memfd_secret().

The "uncached" mode requires architecture support and an architecture
should opt-in for this mode using HAVE_SECRETMEM_UNCACHED configuration
option.

For instance, the following example will create an uncached mapping (error
handling is omitted):

	fd = memfd_secret(SECRETMEM_UNCACHED);
	ftruncate(fd, MAP_SIZE);
	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 arch/Kconfig                   |   7 +
 arch/x86/Kconfig               |   1 +
 include/uapi/linux/magic.h     |   1 +
 include/uapi/linux/secretmem.h |   8 +
 kernel/sys_ni.c                |   2 +
 mm/Kconfig                     |   4 +
 mm/Makefile                    |   1 +
 mm/secretmem.c                 | 264 +++++++++++++++++++++++++++++++++
 8 files changed, 288 insertions(+)
 create mode 100644 include/uapi/linux/secretmem.h
 create mode 100644 mm/secretmem.c

diff --git a/arch/Kconfig b/arch/Kconfig
index af14a567b493..8d161bd4142d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -975,6 +975,13 @@ config HAVE_SPARSE_SYSCALL_NR
 config ARCH_HAS_VDSO_DATA
 	bool
 
+config HAVE_SECRETMEM_UNCACHED
+	bool
+	help
+	  An architecture can select this if its semantics of non-cached
+	  mappings can be used to prevent speculative loads and it is
+	  useful for secret protection.
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7101ac64bb20..38ead8bd9909 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -220,6 +220,7 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HAVE_SECRETMEM_UNCACHED
 	select HOTPLUG_SMT			if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_SG_DMA_LENGTH
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index f3956fc11de6..35687dcb1a42 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -97,5 +97,6 @@
 #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
 #define Z3FOLD_MAGIC		0x33
 #define PPC_CMM_MAGIC		0xc7571590
+#define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/include/uapi/linux/secretmem.h b/include/uapi/linux/secretmem.h
new file mode 100644
index 000000000000..2b9675f5dea9
--- /dev/null
+++ b/include/uapi/linux/secretmem.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_LINUX_SECRETMEM_H
+#define _UAPI_LINUX_SECRETMEM_H
+
+/* secretmem operation modes */
+#define SECRETMEM_UNCACHED	0x1
+
+#endif /* _UAPI_LINUX_SECRETMEM_H */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 4d59775ea79c..8ae8d0c2d381 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -349,6 +349,8 @@ COND_SYSCALL(pkey_mprotect);
 COND_SYSCALL(pkey_alloc);
 COND_SYSCALL(pkey_free);
 
+/* memfd_secret */
+COND_SYSCALL(memfd_secret);
 
 /*
  * Architecture specific weak syscall entries.
diff --git a/mm/Kconfig b/mm/Kconfig
index 6c974888f86f..70cfc20d7caa 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -868,4 +868,8 @@ config ARCH_HAS_HUGEPD
 config MAPPING_DIRTY_HELPERS
         bool
 
+config SECRETMEM
+	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+	select GENERIC_ALLOCATOR
+
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index d5649f1c12c0..cae063dc8298 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -121,3 +121,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
 obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
 obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
 obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_SECRETMEM) += secretmem.o
diff --git a/mm/secretmem.c b/mm/secretmem.c
new file mode 100644
index 000000000000..3293f761076e
--- /dev/null
+++ b/mm/secretmem.c
@@ -0,0 +1,264 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/mount.h>
+#include <linux/memfd.h>
+#include <linux/bitops.h>
+#include <linux/printk.h>
+#include <linux/pagemap.h>
+#include <linux/syscalls.h>
+#include <linux/pseudo_fs.h>
+#include <linux/set_memory.h>
+#include <linux/sched/signal.h>
+
+#include <uapi/linux/secretmem.h>
+#include <uapi/linux/magic.h>
+
+#include <asm/tlbflush.h>
+
+#include "internal.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "secretmem: " fmt
+
+/*
+ * Secret memory areas are always exclusive to owning mm and they are
+ * removed from the direct map.
+ */
+#ifdef CONFIG_HAVE_SECRETMEM_UNCACHED
+#define SECRETMEM_MODE_MASK	(SECRETMEM_UNCACHED)
+#else
+#define SECRETMEM_MODE_MASK	(0x0)
+#endif
+
+#define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
+
+struct secretmem_ctx {
+	unsigned int mode;
+};
+
+static struct page *secretmem_alloc_page(gfp_t gfp)
+{
+	/*
+	 * FIXME: use a cache of large pages to reduce the direct map
+	 * fragmentation
+	 */
+	return alloc_page(gfp);
+}
+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t offset = vmf->pgoff;
+	unsigned long addr;
+	struct page *page;
+	int ret = 0;
+
+	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+		return vmf_error(-EINVAL);
+
+	page = find_get_entry(mapping, offset);
+	if (!page) {
+		page = secretmem_alloc_page(vmf->gfp_mask);
+		if (!page)
+			return vmf_error(-ENOMEM);
+
+		ret = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
+		if (unlikely(ret))
+			goto err_put_page;
+
+		ret = set_direct_map_invalid_noflush(page);
+		if (ret)
+			goto err_del_page_cache;
+
+		addr = (unsigned long)page_address(page);
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+		__SetPageUptodate(page);
+
+		ret = VM_FAULT_LOCKED;
+	}
+
+	vmf->page = page;
+	return ret;
+
+err_del_page_cache:
+	delete_from_page_cache(page);
+err_put_page:
+	put_page(page);
+	return vmf_error(ret);
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+	.fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct secretmem_ctx *ctx = file->private_data;
+	unsigned long len = vma->vm_end - vma->vm_start;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+		return -EINVAL;
+
+	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+		return -EAGAIN;
+
+	if (ctx->mode & SECRETMEM_UNCACHED)
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	vma->vm_ops = &secretmem_vm_ops;
+	vma->vm_flags |= VM_LOCKED;
+
+	return 0;
+}
+
+const struct file_operations secretmem_fops = {
+	.mmap		= secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+	return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+				 struct page *newpage, struct page *page,
+				 enum migrate_mode mode)
+{
+	return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+	set_direct_map_default_noflush(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+	.freepage	= secretmem_freepage,
+	.migratepage	= secretmem_migratepage,
+	.isolate_page	= secretmem_isolate_page,
+};
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+	struct file *file = ERR_PTR(-ENOMEM);
+	struct secretmem_ctx *ctx;
+	struct inode *inode;
+
+	inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+	if (IS_ERR(inode))
+		return ERR_CAST(inode);
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		goto err_free_inode;
+
+	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+				 O_RDWR, &secretmem_fops);
+	if (IS_ERR(file))
+		goto err_free_ctx;
+
+	mapping_set_unevictable(inode->i_mapping);
+
+	inode->i_mapping->private_data = ctx;
+	inode->i_mapping->a_ops = &secretmem_aops;
+
+	/* pretend we are a normal file with zero size */
+	inode->i_mode |= S_IFREG;
+	inode->i_size = 0;
+
+	file->private_data = ctx;
+
+	ctx->mode = flags & SECRETMEM_MODE_MASK;
+
+	return file;
+
+err_free_ctx:
+	kfree(ctx);
+err_free_inode:
+	iput(inode);
+	return file;
+}
+
+SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
+{
+	struct file *file;
+	int fd, err;
+
+	/* make sure local flags do not conflict with global fcntl.h */
+	BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(flags & O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	file = secretmem_file_create(flags);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_put_fd;
+	}
+
+	file->f_flags |= O_LARGEFILE;
+
+	fd_install(fd, file);
+	return fd;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return err;
+}
+
+static void secretmem_evict_inode(struct inode *inode)
+{
+	struct secretmem_ctx *ctx = inode->i_private;
+
+	truncate_inode_pages_final(&inode->i_data);
+	clear_inode(inode);
+	kfree(ctx);
+}
+
+static const struct super_operations secretmem_super_ops = {
+	.evict_inode = secretmem_evict_inode,
+};
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+	struct pseudo_fs_context *ctx = init_pseudo(fc, SECRETMEM_MAGIC);
+
+	if (!ctx)
+		return -ENOMEM;
+	ctx->ops = &secretmem_super_ops;
+
+	return 0;
+}
+
+static struct file_system_type secretmem_fs = {
+	.name		= "secretmem",
+	.init_fs_context = secretmem_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+	int ret = 0;
+
+	secretmem_mnt = kern_mount(&secretmem_fs);
+	if (IS_ERR(secretmem_mnt))
+		ret = PTR_ERR(secretmem_mnt);
+
+	return ret;
+}
+fs_initcall(secretmem_init);
-- 
2.26.2

* [PATCH v4 4/6] arch, mm: wire up memfd_secret system call where relevant
  2020-08-18 14:15 [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (2 preceding siblings ...)
  2020-08-18 14:15 ` [PATCH v4 3/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
@ 2020-08-18 14:15 ` Mike Rapoport
  2020-08-18 14:15 ` [PATCH v4 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation Mike Rapoport
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 20+ messages in thread
From: Mike Rapoport @ 2020-08-18 14:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-

From: Mike Rapoport <rppt@linux.ibm.com>

Wire up the memfd_secret system call on architectures that define
ARCH_HAS_SET_DIRECT_MAP, namely arm64, RISC-V and x86.
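
An architecture that gains ARCH_HAS_SET_DIRECT_MAP support later and uses
the generic syscall table would follow the same pattern as riscv below (a
sketch with a placeholder architecture name):

	/* arch/<newarch>/include/asm/unistd.h */
	#define __ARCH_WANT_MEMFD_SECRET

	#include <uapi/asm/unistd.h>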

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm64/include/asm/unistd.h        | 2 +-
 arch/arm64/include/asm/unistd32.h      | 2 ++
 arch/arm64/include/uapi/asm/unistd.h   | 1 +
 arch/riscv/include/asm/unistd.h        | 1 +
 arch/x86/entry/syscalls/syscall_32.tbl | 1 +
 arch/x86/entry/syscalls/syscall_64.tbl | 1 +
 include/linux/syscalls.h               | 1 +
 include/uapi/asm-generic/unistd.h      | 7 ++++++-
 8 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h
index 3b859596840d..b3b2019f8d16 100644
--- a/arch/arm64/include/asm/unistd.h
+++ b/arch/arm64/include/asm/unistd.h
@@ -38,7 +38,7 @@
 #define __ARM_NR_compat_set_tls		(__ARM_NR_COMPAT_BASE + 5)
 #define __ARM_NR_COMPAT_END		(__ARM_NR_COMPAT_BASE + 0x800)
 
-#define __NR_compat_syscalls		440
+#define __NR_compat_syscalls		441
 #endif
 
 #define __ARCH_WANT_SYS_CLONE
diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index 734860ac7cf9..ce0838fc7a5c 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -887,6 +887,8 @@ __SYSCALL(__NR_openat2, sys_openat2)
 __SYSCALL(__NR_pidfd_getfd, sys_pidfd_getfd)
 #define __NR_faccessat2 439
 __SYSCALL(__NR_faccessat2, sys_faccessat2)
+#define __NR_memfd_secret 440
+__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
 
 /*
  * Please add new compat syscalls above this comment and update
diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h
index f83a70e07df8..ce2ee8f1e361 100644
--- a/arch/arm64/include/uapi/asm/unistd.h
+++ b/arch/arm64/include/uapi/asm/unistd.h
@@ -20,5 +20,6 @@
 #define __ARCH_WANT_SET_GET_RLIMIT
 #define __ARCH_WANT_TIME32_SYSCALLS
 #define __ARCH_WANT_SYS_CLONE3
+#define __ARCH_WANT_MEMFD_SECRET
 
 #include <asm-generic/unistd.h>
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
index 977ee6181dab..6c316093a1e5 100644
--- a/arch/riscv/include/asm/unistd.h
+++ b/arch/riscv/include/asm/unistd.h
@@ -9,6 +9,7 @@
  */
 
 #define __ARCH_WANT_SYS_CLONE
+#define __ARCH_WANT_MEMFD_SECRET
 
 #include <uapi/asm/unistd.h>
 
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 9d1102873666..e7a58a360732 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -444,3 +444,4 @@
 437	i386	openat2			sys_openat2
 438	i386	pidfd_getfd		sys_pidfd_getfd
 439	i386	faccessat2		sys_faccessat2
+440	i386	memfd_secret		sys_memfd_secret
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index f30d6ae9a688..635d7aa2bb9a 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -361,6 +361,7 @@
 437	common	openat2			sys_openat2
 438	common	pidfd_getfd		sys_pidfd_getfd
 439	common	faccessat2		sys_faccessat2
+440	common	memfd_secret		sys_memfd_secret
 
 #
 # x32-specific system call numbers start at 512 to avoid cache impact
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 75ac7f8ae93c..78afb99c6892 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1006,6 +1006,7 @@ asmlinkage long sys_pidfd_send_signal(int pidfd, int sig,
 				       siginfo_t __user *info,
 				       unsigned int flags);
 asmlinkage long sys_pidfd_getfd(int pidfd, int fd, unsigned int flags);
+asmlinkage long sys_memfd_secret(unsigned long flags);
 
 /*
  * Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 995b36c2ea7d..d063e37dbb4a 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -860,8 +860,13 @@ __SYSCALL(__NR_pidfd_getfd, sys_pidfd_getfd)
 #define __NR_faccessat2 439
 __SYSCALL(__NR_faccessat2, sys_faccessat2)
 
+#ifdef __ARCH_WANT_MEMFD_SECRET
+#define __NR_memfd_secret 440
+__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
+#endif
+
 #undef __NR_syscalls
-#define __NR_syscalls 440
+#define __NR_syscalls 441
 
 /*
  * 32 bit systems traditionally used different
-- 
2.26.2

* [PATCH v4 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation
  2020-08-18 14:15 [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (3 preceding siblings ...)
  2020-08-18 14:15 ` [PATCH v4 4/6] arch, mm: wire up memfd_secret system call where relevant Mike Rapoport
@ 2020-08-18 14:15 ` Mike Rapoport
  2020-08-18 14:15 ` [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot Mike Rapoport
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 20+ messages in thread
From: Mike Rapoport @ 2020-08-18 14:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-

From: Mike Rapoport <rppt@linux.ibm.com>

Removing a PAGE_SIZE page from the direct map every time such a page is
allocated for a secret memory mapping will cause severe fragmentation of
the direct map. This fragmentation can be reduced by using PMD-size pages
as a pool of small pages for secret memory mappings.

Add a gen_pool per secretmem inode and lazily populate this pool with
PMD-size pages.
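
In other words, the fault path follows the usual gen_pool pattern; a
condensed sketch of what the code below does (declarations, error handling
and the direct map manipulation trimmed):

	/* one pool per secretmem inode, created at PAGE_SHIFT granularity */
	ctx->pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);

	/* grow the pool by one PMD-size chunk, split so individual base
	 * pages keep their own refcounts */
	page = alloc_pages(gfp, PMD_PAGE_ORDER);
	split_page(page, PMD_PAGE_ORDER);
	gen_pool_add(ctx->pool, (unsigned long)page_address(page), PMD_SIZE,
		     NUMA_NO_NODE);

	/* each fault then takes a single base page from the pool */
	addr = gen_pool_alloc(ctx->pool, PAGE_SIZE);
	page = virt_to_page(addr);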

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/secretmem.c | 107 ++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 88 insertions(+), 19 deletions(-)

diff --git a/mm/secretmem.c b/mm/secretmem.c
index 3293f761076e..333eb18fb483 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -12,6 +12,7 @@
 #include <linux/bitops.h>
 #include <linux/printk.h>
 #include <linux/pagemap.h>
+#include <linux/genalloc.h>
 #include <linux/syscalls.h>
 #include <linux/pseudo_fs.h>
 #include <linux/set_memory.h>
@@ -40,24 +41,66 @@
 #define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
 
 struct secretmem_ctx {
+	struct gen_pool *pool;
 	unsigned int mode;
 };
 
-static struct page *secretmem_alloc_page(gfp_t gfp)
+static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 {
-	/*
-	 * FIXME: use a cache of large pages to reduce the direct map
-	 * fragmentation
-	 */
-	return alloc_page(gfp);
+	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
+	struct gen_pool *pool = ctx->pool;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	page = alloc_pages(gfp, PMD_PAGE_ORDER);
+	if (!page)
+		return -ENOMEM;
+
+	addr = (unsigned long)page_address(page);
+	split_page(page, PMD_PAGE_ORDER);
+
+	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
+	if (err) {
+		__free_pages(page, PMD_PAGE_ORDER);
+		return err;
+	}
+
+	__kernel_map_pages(page, nr_pages, 0);
+
+	return 0;
+}
+
+static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
+					 gfp_t gfp)
+{
+	struct gen_pool *pool = ctx->pool;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	if (gen_pool_avail(pool) < PAGE_SIZE) {
+		err = secretmem_pool_increase(ctx, gfp);
+		if (err)
+			return NULL;
+	}
+
+	addr = gen_pool_alloc(pool, PAGE_SIZE);
+	if (!addr)
+		return NULL;
+
+	page = virt_to_page(addr);
+	get_page(page);
+
+	return page;
 }
 
 static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 {
+	struct secretmem_ctx *ctx = vmf->vma->vm_file->private_data;
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	struct inode *inode = file_inode(vmf->vma->vm_file);
 	pgoff_t offset = vmf->pgoff;
-	unsigned long addr;
 	struct page *page;
 	int ret = 0;
 
@@ -66,7 +109,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 
 	page = find_get_entry(mapping, offset);
 	if (!page) {
-		page = secretmem_alloc_page(vmf->gfp_mask);
+		page = secretmem_alloc_page(ctx, vmf->gfp_mask);
 		if (!page)
 			return vmf_error(-ENOMEM);
 
@@ -74,14 +117,8 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 		if (unlikely(ret))
 			goto err_put_page;
 
-		ret = set_direct_map_invalid_noflush(page);
-		if (ret)
-			goto err_del_page_cache;
-
-		addr = (unsigned long)page_address(page);
-		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
-
 		__SetPageUptodate(page);
+		set_page_private(page, (unsigned long)ctx);
 
 		ret = VM_FAULT_LOCKED;
 	}
@@ -89,8 +126,6 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 	vmf->page = page;
 	return ret;
 
-err_del_page_cache:
-	delete_from_page_cache(page);
 err_put_page:
 	put_page(page);
 	return vmf_error(ret);
@@ -138,7 +173,11 @@ static int secretmem_migratepage(struct address_space *mapping,
 
 static void secretmem_freepage(struct page *page)
 {
-	set_direct_map_default_noflush(page);
+	unsigned long addr = (unsigned long)page_address(page);
+	struct secretmem_ctx *ctx = (struct secretmem_ctx *)page_private(page);
+	struct gen_pool *pool = ctx->pool;
+
+	gen_pool_free(pool, addr, PAGE_SIZE);
 }
 
 static const struct address_space_operations secretmem_aops = {
@@ -163,13 +202,18 @@ static struct file *secretmem_file_create(unsigned long flags)
 	if (!ctx)
 		goto err_free_inode;
 
+	ctx->pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+	if (!ctx->pool)
+		goto err_free_ctx;
+
 	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
 				 O_RDWR, &secretmem_fops);
 	if (IS_ERR(file))
-		goto err_free_ctx;
+		goto err_free_pool;
 
 	mapping_set_unevictable(inode->i_mapping);
 
+	inode->i_private = ctx;
 	inode->i_mapping->private_data = ctx;
 	inode->i_mapping->a_ops = &secretmem_aops;
 
@@ -183,6 +227,8 @@ static struct file *secretmem_file_create(unsigned long flags)
 
 	return file;
 
+err_free_pool:
+	gen_pool_destroy(ctx->pool);
 err_free_ctx:
 	kfree(ctx);
 err_free_inode:
@@ -221,11 +267,34 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	return err;
 }
 
+static void secretmem_cleanup_chunk(struct gen_pool *pool,
+				    struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long start = chunk->start_addr;
+	unsigned long end = chunk->end_addr;
+	unsigned long nr_pages, addr;
+
+	nr_pages = (end - start + 1) / PAGE_SIZE;
+	__kernel_map_pages(virt_to_page(start), nr_pages, 1);
+
+	for (addr = start; addr < end; addr += PAGE_SIZE)
+		put_page(virt_to_page(addr));
+}
+
+static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
+{
+	struct gen_pool *pool = ctx->pool;
+
+	gen_pool_for_each_chunk(pool, secretmem_cleanup_chunk, ctx);
+	gen_pool_destroy(pool);
+}
+
 static void secretmem_evict_inode(struct inode *inode)
 {
 	struct secretmem_ctx *ctx = inode->i_private;
 
 	truncate_inode_pages_final(&inode->i_data);
+	secretmem_cleanup_pool(ctx);
 	clear_inode(inode);
 	kfree(ctx);
 }
-- 
2.26.2

* [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot
  2020-08-18 14:15 [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (4 preceding siblings ...)
  2020-08-18 14:15 ` [PATCH v4 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation Mike Rapoport
@ 2020-08-18 14:15 ` Mike Rapoport
  2020-08-19 10:49   ` David Hildenbrand
  2020-08-19 10:47 ` [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas David Hildenbrand
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 20+ messages in thread
From: Mike Rapoport @ 2020-08-18 14:15 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-

From: Mike Rapoport <rppt@linux.ibm.com>

Taking pages out of the direct map and bringing them back may create
undesired fragmentation and usage of smaller pages in the direct mapping
of the physical memory.

This can be avoided if a significantly large area of the physical memory
is reserved for secretmem purposes at boot time.

Add the ability to reserve physical memory for secretmem at boot time
using the "secretmem" kernel parameter and then use that reserved memory
as a global pool for secret memory needs.
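
For example, booting with

	secretmem=1G

would set aside 1G for the global secretmem pool; the size string is
parsed with memparse(), so the usual K/M/G suffixes apply.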

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/secretmem.c | 134 ++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 126 insertions(+), 8 deletions(-)

diff --git a/mm/secretmem.c b/mm/secretmem.c
index 333eb18fb483..54067ea62b2d 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -14,6 +14,7 @@
 #include <linux/pagemap.h>
 #include <linux/genalloc.h>
 #include <linux/syscalls.h>
+#include <linux/memblock.h>
 #include <linux/pseudo_fs.h>
 #include <linux/set_memory.h>
 #include <linux/sched/signal.h>
@@ -45,6 +46,39 @@ struct secretmem_ctx {
 	unsigned int mode;
 };
 
+struct secretmem_pool {
+	struct gen_pool *pool;
+	unsigned long reserved_size;
+	void *reserved;
+};
+
+static struct secretmem_pool secretmem_pool;
+
+static struct page *secretmem_alloc_huge_page(gfp_t gfp)
+{
+	struct gen_pool *pool = secretmem_pool.pool;
+	unsigned long addr = 0;
+	struct page *page = NULL;
+
+	if (pool) {
+		if (gen_pool_avail(pool) < PMD_SIZE)
+			return NULL;
+
+		addr = gen_pool_alloc(pool, PMD_SIZE);
+		if (!addr)
+			return NULL;
+
+		page = virt_to_page(addr);
+	} else {
+		page = alloc_pages(gfp, PMD_PAGE_ORDER);
+
+		if (page)
+			split_page(page, PMD_PAGE_ORDER);
+	}
+
+	return page;
+}
+
 static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 {
 	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
@@ -53,12 +87,11 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	struct page *page;
 	int err;
 
-	page = alloc_pages(gfp, PMD_PAGE_ORDER);
+	page = secretmem_alloc_huge_page(gfp);
 	if (!page)
 		return -ENOMEM;
 
 	addr = (unsigned long)page_address(page);
-	split_page(page, PMD_PAGE_ORDER);
 
 	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
 	if (err) {
@@ -267,11 +300,13 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	return err;
 }
 
-static void secretmem_cleanup_chunk(struct gen_pool *pool,
-				    struct gen_pool_chunk *chunk, void *data)
+static void secretmem_recycle_range(unsigned long start, unsigned long end)
+{
+	gen_pool_free(secretmem_pool.pool, start, PMD_SIZE);
+}
+
+static void secretmem_release_range(unsigned long start, unsigned long end)
 {
-	unsigned long start = chunk->start_addr;
-	unsigned long end = chunk->end_addr;
 	unsigned long nr_pages, addr;
 
 	nr_pages = (end - start + 1) / PAGE_SIZE;
@@ -281,6 +316,18 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
 		put_page(virt_to_page(addr));
 }
 
+static void secretmem_cleanup_chunk(struct gen_pool *pool,
+				    struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long start = chunk->start_addr;
+	unsigned long end = chunk->end_addr;
+
+	if (secretmem_pool.pool)
+		secretmem_recycle_range(start, end);
+	else
+		secretmem_release_range(start, end);
+}
+
 static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
 {
 	struct gen_pool *pool = ctx->pool;
@@ -320,14 +367,85 @@ static struct file_system_type secretmem_fs = {
 	.kill_sb	= kill_anon_super,
 };
 
+static int secretmem_reserved_mem_init(void)
+{
+	struct gen_pool *pool;
+	struct page *page;
+	void *addr;
+	int err;
+
+	if (!secretmem_pool.reserved)
+		return 0;
+
+	pool = gen_pool_create(PMD_SHIFT, NUMA_NO_NODE);
+	if (!pool)
+		return -ENOMEM;
+
+	err = gen_pool_add(pool, (unsigned long)secretmem_pool.reserved,
+			   secretmem_pool.reserved_size, NUMA_NO_NODE);
+	if (err)
+		goto err_destroy_pool;
+
+	for (addr = secretmem_pool.reserved;
+	     addr < secretmem_pool.reserved + secretmem_pool.reserved_size;
+	     addr += PAGE_SIZE) {
+		page = virt_to_page(addr);
+		__ClearPageReserved(page);
+		set_page_count(page, 1);
+	}
+
+	secretmem_pool.pool = pool;
+	page = virt_to_page(secretmem_pool.reserved);
+	__kernel_map_pages(page, secretmem_pool.reserved_size / PAGE_SIZE, 0);
+	return 0;
+
+err_destroy_pool:
+	gen_pool_destroy(pool);
+	return err;
+}
+
 static int secretmem_init(void)
 {
-	int ret = 0;
+	int ret;
+
+	ret = secretmem_reserved_mem_init();
+	if (ret)
+		return ret;
 
 	secretmem_mnt = kern_mount(&secretmem_fs);
-	if (IS_ERR(secretmem_mnt))
+	if (IS_ERR(secretmem_mnt)) {
+		gen_pool_destroy(secretmem_pool.pool);
 		ret = PTR_ERR(secretmem_mnt);
+	}
 
 	return ret;
 }
 fs_initcall(secretmem_init);
+
+static int __init secretmem_setup(char *str)
+{
+	phys_addr_t align = PMD_SIZE;
+	unsigned long reserved_size;
+	void *reserved;
+
+	reserved_size = memparse(str, NULL);
+	if (!reserved_size)
+		return 0;
+
+	if (reserved_size * 2 > PUD_SIZE)
+		align = PUD_SIZE;
+
+	reserved = memblock_alloc(reserved_size, align);
+	if (!reserved) {
+		pr_err("failed to reserve %lu bytes\n", secretmem_pool.reserved_size);
+		return 0;
+	}
+
+	secretmem_pool.reserved_size = reserved_size;
+	secretmem_pool.reserved = reserved;
+
+	pr_info("reserved %luM\n", reserved_size >> 20);
+
+	return 1;
+}
+__setup("secretmem=", secretmem_setup);
-- 
2.26.2

* Re: [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-08-18 14:15 [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (5 preceding siblings ...)
  2020-08-18 14:15 ` [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot Mike Rapoport
@ 2020-08-19 10:47 ` David Hildenbrand
  2020-08-19 11:42   ` Mike Rapoport
  2020-08-26 11:01 ` Mike Rapoport
  2020-09-03  7:46 ` Mike Rapoport
  8 siblings, 1 reply; 20+ messages in thread
From: David Hildenbrand @ 2020-08-19 10:47 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-nvdimm, linux-riscv,
	x86

On 18.08.20 16:15, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Hi,
> 
> This is an implementation of "secret" mappings backed by a file descriptor. 
> 
> v4 changes:
> * rebase on v5.9-rc1
> * Do not redefine PMD_PAGE_ORDER in fs/dax.c, thanks Kirill
> * Make secret mappings exclusive by default and only require flags to
>   memfd_secret() system call for uncached mappings, thanks again Kirill :)
> 
> v3 changes:
> * Squash kernel-parameters.txt update into the commit that added the
>   command line option.
> * Make uncached mode explicitly selectable by architectures. For now enable
>   it only on x86.
> 
> v2 changes:
> * Follow Michael's suggestion and name the new system call 'memfd_secret'
> * Add kernel-parameters documentation about the boot option
> * Fix i386-tinyconfig regression reported by the kbuild bot.
>   CONFIG_SECRETMEM now depends on !EMBEDDED to disable it on small systems
>   from one side and still make it available unconditionally on
>   architectures that support SET_DIRECT_MAP.
> 
> 
> The file descriptor backing secret memory mappings is created using a
> dedicated memfd_secret system call. The desired protection mode for the
> memory is configured using the flags parameter of the system call. The
> mmap() of the file descriptor created with memfd_secret() will create a
> "secret" memory mapping. The pages in that mapping will be marked as not
> present in the direct map and will have the desired protection bits set in
> the user page table. For instance, the current implementation allows
> uncached mappings.
> 
> Although normally Linux userspace mappings are protected from other users,
> such secret mappings are useful for environments where a hostile tenant is
> trying to trick the kernel into giving them access to other tenants'
> mappings.
> 
> Additionally, the secret mappings may be used as a means to protect guest
> memory in a virtual machine host.
> 

Just a general question. I assume such pages (where the direct mapping
was changed) cannot get migrated - I can spot a simple alloc_page(). So
essentially a process can just allocate a whole bunch of memory that is
unmovable, correct? Is there any limit? Is it properly accounted towards
the process (memctl) ?

-- 
Thanks,

David / dhildenb

* Re: [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot
  2020-08-18 14:15 ` [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot Mike Rapoport
@ 2020-08-19 10:49   ` David Hildenbrand
  2020-08-19 11:53     ` Mike Rapoport
  0 siblings, 1 reply; 20+ messages in thread
From: David Hildenbrand @ 2020-08-19 10:49 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-nvdimm, linux-riscv,
	x86

On 18.08.20 16:15, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Taking pages out of the direct map and bringing them back may create
> undesired fragmentation and usage of smaller pages in the direct mapping
> of the physical memory.
>
> This can be avoided if a significantly large area of the physical memory
> is reserved for secretmem purposes at boot time.
>
> Add the ability to reserve physical memory for secretmem at boot time
> using the "secretmem" kernel parameter and then use that reserved memory
> as a global pool for secret memory needs.

Wouldn't something like CMA be the better fit? Just wondering. Then, the
memory can actually be reused for something else while not needed.

> 
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
>  mm/secretmem.c | 134 ++++++++++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 126 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index 333eb18fb483..54067ea62b2d 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -14,6 +14,7 @@
>  #include <linux/pagemap.h>
>  #include <linux/genalloc.h>
>  #include <linux/syscalls.h>
> +#include <linux/memblock.h>
>  #include <linux/pseudo_fs.h>
>  #include <linux/set_memory.h>
>  #include <linux/sched/signal.h>
> @@ -45,6 +46,39 @@ struct secretmem_ctx {
>  	unsigned int mode;
>  };
>  
> +struct secretmem_pool {
> +	struct gen_pool *pool;
> +	unsigned long reserved_size;
> +	void *reserved;
> +};
> +
> +static struct secretmem_pool secretmem_pool;
> +
> +static struct page *secretmem_alloc_huge_page(gfp_t gfp)
> +{
> +	struct gen_pool *pool = secretmem_pool.pool;
> +	unsigned long addr = 0;
> +	struct page *page = NULL;
> +
> +	if (pool) {
> +		if (gen_pool_avail(pool) < PMD_SIZE)
> +			return NULL;
> +
> +		addr = gen_pool_alloc(pool, PMD_SIZE);
> +		if (!addr)
> +			return NULL;
> +
> +		page = virt_to_page(addr);
> +	} else {
> +		page = alloc_pages(gfp, PMD_PAGE_ORDER);
> +
> +		if (page)
> +			split_page(page, PMD_PAGE_ORDER);
> +	}
> +
> +	return page;
> +}
> +
>  static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>  {
>  	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
> @@ -53,12 +87,11 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>  	struct page *page;
>  	int err;
>  
> -	page = alloc_pages(gfp, PMD_PAGE_ORDER);
> +	page = secretmem_alloc_huge_page(gfp);
>  	if (!page)
>  		return -ENOMEM;
>  
>  	addr = (unsigned long)page_address(page);
> -	split_page(page, PMD_PAGE_ORDER);
>  
>  	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
>  	if (err) {
> @@ -267,11 +300,13 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
>  	return err;
>  }
>  
> -static void secretmem_cleanup_chunk(struct gen_pool *pool,
> -				    struct gen_pool_chunk *chunk, void *data)
> +static void secretmem_recycle_range(unsigned long start, unsigned long end)
> +{
> +	gen_pool_free(secretmem_pool.pool, start, PMD_SIZE);
> +}
> +
> +static void secretmem_release_range(unsigned long start, unsigned long end)
>  {
> -	unsigned long start = chunk->start_addr;
> -	unsigned long end = chunk->end_addr;
>  	unsigned long nr_pages, addr;
>  
>  	nr_pages = (end - start + 1) / PAGE_SIZE;
> @@ -281,6 +316,18 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
>  		put_page(virt_to_page(addr));
>  }
>  
> +static void secretmem_cleanup_chunk(struct gen_pool *pool,
> +				    struct gen_pool_chunk *chunk, void *data)
> +{
> +	unsigned long start = chunk->start_addr;
> +	unsigned long end = chunk->end_addr;
> +
> +	if (secretmem_pool.pool)
> +		secretmem_recycle_range(start, end);
> +	else
> +		secretmem_release_range(start, end);
> +}
> +
>  static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
>  {
>  	struct gen_pool *pool = ctx->pool;
> @@ -320,14 +367,85 @@ static struct file_system_type secretmem_fs = {
>  	.kill_sb	= kill_anon_super,
>  };
>  
> +static int secretmem_reserved_mem_init(void)
> +{
> +	struct gen_pool *pool;
> +	struct page *page;
> +	void *addr;
> +	int err;
> +
> +	if (!secretmem_pool.reserved)
> +		return 0;
> +
> +	pool = gen_pool_create(PMD_SHIFT, NUMA_NO_NODE);
> +	if (!pool)
> +		return -ENOMEM;
> +
> +	err = gen_pool_add(pool, (unsigned long)secretmem_pool.reserved,
> +			   secretmem_pool.reserved_size, NUMA_NO_NODE);
> +	if (err)
> +		goto err_destroy_pool;
> +
> +	for (addr = secretmem_pool.reserved;
> +	     addr < secretmem_pool.reserved + secretmem_pool.reserved_size;
> +	     addr += PAGE_SIZE) {
> +		page = virt_to_page(addr);
> +		__ClearPageReserved(page);
> +		set_page_count(page, 1);
> +	}
> +
> +	secretmem_pool.pool = pool;
> +	page = virt_to_page(secretmem_pool.reserved);
> +	__kernel_map_pages(page, secretmem_pool.reserved_size / PAGE_SIZE, 0);
> +	return 0;
> +
> +err_destroy_pool:
> +	gen_pool_destroy(pool);
> +	return err;
> +}
> +
>  static int secretmem_init(void)
>  {
> -	int ret = 0;
> +	int ret;
> +
> +	ret = secretmem_reserved_mem_init();
> +	if (ret)
> +		return ret;
>  
>  	secretmem_mnt = kern_mount(&secretmem_fs);
> -	if (IS_ERR(secretmem_mnt))
> +	if (IS_ERR(secretmem_mnt)) {
> +		gen_pool_destroy(secretmem_pool.pool);
>  		ret = PTR_ERR(secretmem_mnt);
> +	}
>  
>  	return ret;
>  }
>  fs_initcall(secretmem_init);
> +
> +static int __init secretmem_setup(char *str)
> +{
> +	phys_addr_t align = PMD_SIZE;
> +	unsigned long reserved_size;
> +	void *reserved;
> +
> +	reserved_size = memparse(str, NULL);
> +	if (!reserved_size)
> +		return 0;
> +
> +	if (reserved_size * 2 > PUD_SIZE)
> +		align = PUD_SIZE;
> +
> +	reserved = memblock_alloc(reserved_size, align);
> +	if (!reserved) {
> +		pr_err("failed to reserve %lu bytes\n", secretmem_pool.reserved_size);
> +		return 0;
> +	}
> +
> +	secretmem_pool.reserved_size = reserved_size;
> +	secretmem_pool.reserved = reserved;
> +
> +	pr_info("reserved %luM\n", reserved_size >> 20);
> +
> +	return 1;
> +}
> +__setup("secretmem=", secretmem_setup);
> 


-- 
Thanks,

David / dhildenb

* Re: [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-08-19 10:47 ` [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas David Hildenbrand
@ 2020-08-19 11:42   ` Mike Rapoport
  2020-08-19 12:05     ` David Hildenbrand
  0 siblings, 1 reply; 20+ messages in thread
From: Mike Rapoport @ 2020-08-19 11:42 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dave Hansen, Elena Reshetova, H. Peter Anvin, Idan Yaniv,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel

On Wed, Aug 19, 2020 at 12:47:54PM +0200, David Hildenbrand wrote:
> On 18.08.20 16:15, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> > 
> > Hi,
> > 
> > This is an implementation of "secret" mappings backed by a file descriptor. 
> > 
> > v4 changes:
> > * rebase on v5.9-rc1
> > * Do not redefine PMD_PAGE_ORDER in fs/dax.c, thanks Kirill
> > * Make secret mappings exclusive by default and only require flags to
> >   memfd_secret() system call for uncached mappings, thanks again Kirill :)
> > 
> > v3 changes:
> > * Squash kernel-parameters.txt update into the commit that added the
> >   command line option.
> > * Make uncached mode explicitly selectable by architectures. For now enable
> >   it only on x86.
> > 
> > v2 changes:
> > * Follow Michael's suggestion and name the new system call 'memfd_secret'
> > * Add kernel-parameters documentation about the boot option
> > * Fix i386-tinyconfig regression reported by the kbuild bot.
> >   CONFIG_SECRETMEM now depends on !EMBEDDED to disable it on small systems
> >   from one side and still make it available unconditionally on
> >   architectures that support SET_DIRECT_MAP.
> > 
> > 
> > The file descriptor backing secret memory mappings is created using a
> > dedicated memfd_secret system call The desired protection mode for the
> > memory is configured using flags parameter of the system call. The mmap()
> > of the file descriptor created with memfd_secret() will create a "secret"
> > memory mapping. The pages in that mapping will be marked as not present in
> > the direct map and will have desired protection bits set in the user page
> > table. For instance, current implementation allows uncached mappings.
> > 
> > Although normally Linux userspace mappings are protected from other users, 
> > such secret mappings are useful for environments where a hostile tenant is
> > trying to trick the kernel into giving them access to other tenants
> > mappings.
> > 
> > Additionally, the secret mappings may be used as a mean to protect guest
> > memory in a virtual machine host.
> > 
> 
> Just a general question. I assume such pages (where the direct mapping
> was changed) cannot get migrated - I can spot a simple alloc_page(). So
> essentially a process can just allocate a whole bunch of memory that is
> unmovable, correct? Is there any limit? Is it properly accounted towards
the process (memctl)?

The memory is accounted in the same way as with mlock(), so a normal
user won't be able to allocate more than RLIMIT_MEMLOCK.
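
For reference, the accounting described above boils down to a check
against RLIMIT_MEMLOCK at allocation time. A minimal sketch of such a
check, modeled on mlock_future_check() in mm/mmap.c (the helper name
and its placement here are illustrative, not the actual code of the
series):

	/*
	 * Sketch only: charge 'npages' newly allocated secretmem pages
	 * against RLIMIT_MEMLOCK, the way mlock() does. Assumes the
	 * caller holds mmap_lock for write.
	 */
	static int secretmem_account_pages(struct mm_struct *mm,
					   unsigned long npages)
	{
		unsigned long locked = mm->locked_vm + npages;
		unsigned long limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;

		if (locked > limit && !capable(CAP_IPC_LOCK))
			return -EAGAIN;

		mm->locked_vm = locked;
		return 0;
	}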

> -- 
> Thanks,
> 
> David / dhildenb
> 

-- 
Sincerely yours,
Mike.

* Re: [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot
  2020-08-19 10:49   ` David Hildenbrand
@ 2020-08-19 11:53     ` Mike Rapoport
  2020-08-19 12:10       ` David Hildenbrand
  0 siblings, 1 reply; 20+ messages in thread
From: Mike Rapoport @ 2020-08-19 11:53 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dave Hansen, Elena Reshetova, H. Peter Anvin, Idan Yaniv,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel

On Wed, Aug 19, 2020 at 12:49:05PM +0200, David Hildenbrand wrote:
> On 18.08.20 16:15, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> > 
> > Taking pages out from the direct map and bringing them back may create
> > undesired fragmentation and usage of the smaller pages in the direct
> > mapping of the physical memory.
> > 
> > This can be avoided if a significantly large area of the physical memory
> > would be reserved for secretmem purposes at boot time.
> > 
> > Add ability to reserve physical memory for secretmem at boot time using
> > "secretmem" kernel parameter and then use that reserved memory as a global
> > pool for secret memory needs.
> 
> Wouldn't something like CMA be the better fit? Just wondering. Then, the
> memory can actually be reused for something else while not needed.

The memory allocated as secret is removed from the direct map and the
boot time reservation is intended to reduce direct map fragmentation
and to avoid splitting 1G pages there. So with CMA I'd still need to
allocate 1G chunks for this and once a 1G page is dropped from the direct
map it still cannot be reused for anything else until it is freed.

I could use CMA to do the boot time reservation, but doing the
reservation directly seemed simpler and more explicit to me.


> > Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> > ---
> >  mm/secretmem.c | 134 ++++++++++++++++++++++++++++++++++++++++++++++---
> >  1 file changed, 126 insertions(+), 8 deletions(-)
> > 
> > diff --git a/mm/secretmem.c b/mm/secretmem.c
> > index 333eb18fb483..54067ea62b2d 100644
> > --- a/mm/secretmem.c
> > +++ b/mm/secretmem.c
> > @@ -14,6 +14,7 @@
> >  #include <linux/pagemap.h>
> >  #include <linux/genalloc.h>
> >  #include <linux/syscalls.h>
> > +#include <linux/memblock.h>
> >  #include <linux/pseudo_fs.h>
> >  #include <linux/set_memory.h>
> >  #include <linux/sched/signal.h>
> > @@ -45,6 +46,39 @@ struct secretmem_ctx {
> >  	unsigned int mode;
> >  };
> >  
> > +struct secretmem_pool {
> > +	struct gen_pool *pool;
> > +	unsigned long reserved_size;
> > +	void *reserved;
> > +};
> > +
> > +static struct secretmem_pool secretmem_pool;
> > +
> > +static struct page *secretmem_alloc_huge_page(gfp_t gfp)
> > +{
> > +	struct gen_pool *pool = secretmem_pool.pool;
> > +	unsigned long addr = 0;
> > +	struct page *page = NULL;
> > +
> > +	if (pool) {
> > +		if (gen_pool_avail(pool) < PMD_SIZE)
> > +			return NULL;
> > +
> > +		addr = gen_pool_alloc(pool, PMD_SIZE);
> > +		if (!addr)
> > +			return NULL;
> > +
> > +		page = virt_to_page(addr);
> > +	} else {
> > +		page = alloc_pages(gfp, PMD_PAGE_ORDER);
> > +
> > +		if (page)
> > +			split_page(page, PMD_PAGE_ORDER);
> > +	}
> > +
> > +	return page;
> > +}
> > +
> >  static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> >  {
> >  	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
> > @@ -53,12 +87,11 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> >  	struct page *page;
> >  	int err;
> >  
> > -	page = alloc_pages(gfp, PMD_PAGE_ORDER);
> > +	page = secretmem_alloc_huge_page(gfp);
> >  	if (!page)
> >  		return -ENOMEM;
> >  
> >  	addr = (unsigned long)page_address(page);
> > -	split_page(page, PMD_PAGE_ORDER);
> >  
> >  	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
> >  	if (err) {
> > @@ -267,11 +300,13 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
> >  	return err;
> >  }
> >  
> > -static void secretmem_cleanup_chunk(struct gen_pool *pool,
> > -				    struct gen_pool_chunk *chunk, void *data)
> > +static void secretmem_recycle_range(unsigned long start, unsigned long end)
> > +{
> > +	gen_pool_free(secretmem_pool.pool, start, PMD_SIZE);
> > +}
> > +
> > +static void secretmem_release_range(unsigned long start, unsigned long end)
> >  {
> > -	unsigned long start = chunk->start_addr;
> > -	unsigned long end = chunk->end_addr;
> >  	unsigned long nr_pages, addr;
> >  
> >  	nr_pages = (end - start + 1) / PAGE_SIZE;
> > @@ -281,6 +316,18 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
> >  		put_page(virt_to_page(addr));
> >  }
> >  
> > +static void secretmem_cleanup_chunk(struct gen_pool *pool,
> > +				    struct gen_pool_chunk *chunk, void *data)
> > +{
> > +	unsigned long start = chunk->start_addr;
> > +	unsigned long end = chunk->end_addr;
> > +
> > +	if (secretmem_pool.pool)
> > +		secretmem_recycle_range(start, end);
> > +	else
> > +		secretmem_release_range(start, end);
> > +}
> > +
> >  static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
> >  {
> >  	struct gen_pool *pool = ctx->pool;
> > @@ -320,14 +367,85 @@ static struct file_system_type secretmem_fs = {
> >  	.kill_sb	= kill_anon_super,
> >  };
> >  
> > +static int secretmem_reserved_mem_init(void)
> > +{
> > +	struct gen_pool *pool;
> > +	struct page *page;
> > +	void *addr;
> > +	int err;
> > +
> > +	if (!secretmem_pool.reserved)
> > +		return 0;
> > +
> > +	pool = gen_pool_create(PMD_SHIFT, NUMA_NO_NODE);
> > +	if (!pool)
> > +		return -ENOMEM;
> > +
> > +	err = gen_pool_add(pool, (unsigned long)secretmem_pool.reserved,
> > +			   secretmem_pool.reserved_size, NUMA_NO_NODE);
> > +	if (err)
> > +		goto err_destroy_pool;
> > +
> > +	for (addr = secretmem_pool.reserved;
> > +	     addr < secretmem_pool.reserved + secretmem_pool.reserved_size;
> > +	     addr += PAGE_SIZE) {
> > +		page = virt_to_page(addr);
> > +		__ClearPageReserved(page);
> > +		set_page_count(page, 1);
> > +	}
> > +
> > +	secretmem_pool.pool = pool;
> > +	page = virt_to_page(secretmem_pool.reserved);
> > +	__kernel_map_pages(page, secretmem_pool.reserved_size / PAGE_SIZE, 0);
> > +	return 0;
> > +
> > +err_destroy_pool:
> > +	gen_pool_destroy(pool);
> > +	return err;
> > +}
> > +
> >  static int secretmem_init(void)
> >  {
> > -	int ret = 0;
> > +	int ret;
> > +
> > +	ret = secretmem_reserved_mem_init();
> > +	if (ret)
> > +		return ret;
> >  
> >  	secretmem_mnt = kern_mount(&secretmem_fs);
> > -	if (IS_ERR(secretmem_mnt))
> > +	if (IS_ERR(secretmem_mnt)) {
> > +		if (secretmem_pool.pool) gen_pool_destroy(secretmem_pool.pool);
> >  		ret = PTR_ERR(secretmem_mnt);
> > +	}
> >  
> >  	return ret;
> >  }
> >  fs_initcall(secretmem_init);
> > +
> > +static int __init secretmem_setup(char *str)
> > +{
> > +	phys_addr_t align = PMD_SIZE;
> > +	unsigned long reserved_size;
> > +	void *reserved;
> > +
> > +	reserved_size = memparse(str, NULL);
> > +	if (!reserved_size)
> > +		return 0;
> > +
> > +	if (reserved_size * 2 > PUD_SIZE)
> > +		align = PUD_SIZE;
> > +
> > +	reserved = memblock_alloc(reserved_size, align);
> > +	if (!reserved) {
> > +		pr_err("failed to reserve %lu bytes\n", reserved_size);
> > +		return 0;
> > +	}
> > +
> > +	secretmem_pool.reserved_size = reserved_size;
> > +	secretmem_pool.reserved = reserved;
> > +
> > +	pr_info("reserved %luM\n", reserved_size >> 20);
> > +
> > +	return 1;
> > +}
> > +__setup("secretmem=", secretmem_setup);
> > 
> 
> 
> -- 
> Thanks,
> 
> David / dhildenb
> 

-- 
Sincerely yours,
Mike.

* Re: [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-08-19 11:42   ` Mike Rapoport
@ 2020-08-19 12:05     ` David Hildenbrand
  0 siblings, 0 replies; 20+ messages in thread
From: David Hildenbrand @ 2020-08-19 12:05 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dave Hansen, Elena Reshetova, H. Peter Anvin, Idan Yaniv,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel

On 19.08.20 13:42, Mike Rapoport wrote:
> On Wed, Aug 19, 2020 at 12:47:54PM +0200, David Hildenbrand wrote:
>> On 18.08.20 16:15, Mike Rapoport wrote:
>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>
>>> Hi,
>>>
>>> This is an implementation of "secret" mappings backed by a file descriptor. 
>>>
>>> v4 changes:
>>> * rebase on v5.9-rc1
>>> * Do not redefine PMD_PAGE_ORDER in fs/dax.c, thanks Kirill
>>> * Make secret mappings exclusive by default and only require flags to
>>>   memfd_secret() system call for uncached mappings, thanks again Kirill :)
>>>
>>> v3 changes:
>>> * Squash kernel-parameters.txt update into the commit that added the
>>>   command line option.
>>> * Make uncached mode explicitly selectable by architectures. For now enable
>>>   it only on x86.
>>>
>>> v2 changes:
>>> * Follow Michael's suggestion and name the new system call 'memfd_secret'
>>> * Add kernel-parameters documentation about the boot option
>>> * Fix i386-tinyconfig regression reported by the kbuild bot.
>>>   CONFIG_SECRETMEM now depends on !EMBEDDED to disable it on small systems
>>>   from one side and still make it available unconditionally on
>>>   architectures that support SET_DIRECT_MAP.
>>>
>>>
>>> The file descriptor backing secret memory mappings is created using a
>>> dedicated memfd_secret system call The desired protection mode for the
>>> memory is configured using flags parameter of the system call. The mmap()
>>> of the file descriptor created with memfd_secret() will create a "secret"
>>> memory mapping. The pages in that mapping will be marked as not present in
>>> the direct map and will have desired protection bits set in the user page
>>> table. For instance, current implementation allows uncached mappings.
>>>
>>> Although normally Linux userspace mappings are protected from other users, 
>>> such secret mappings are useful for environments where a hostile tenant is
>>> trying to trick the kernel into giving them access to other tenants
>>> mappings.
>>>
>>> Additionally, the secret mappings may be used as a mean to protect guest
>>> memory in a virtual machine host.
>>>
>>
>> Just a general question. I assume such pages (where the direct mapping
>> was changed) cannot get migrated - I can spot a simple alloc_page(). So
>> essentially a process can just allocate a whole bunch of memory that is
>> unmovable, correct? Is there any limit? Is it properly accounted towards
> the process (memctl)?
> 
> The memory is accounted in the same way as with mlock(), so a normal
> user won't be able to allocate more than RLIMIT_MEMLOCK.

Okay, thanks. AFAIU the difference to mlock() is that the pages here are
not movable, fragment memory, and limit compaction. Hm.

-- 
Thanks,

David / dhildenb

* Re: [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot
  2020-08-19 11:53     ` Mike Rapoport
@ 2020-08-19 12:10       ` David Hildenbrand
  2020-08-19 17:33         ` Mike Rapoport
  0 siblings, 1 reply; 20+ messages in thread
From: David Hildenbrand @ 2020-08-19 12:10 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dave Hansen, Elena Reshetova, H. Peter Anvin, Idan Yaniv,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel

On 19.08.20 13:53, Mike Rapoport wrote:
> On Wed, Aug 19, 2020 at 12:49:05PM +0200, David Hildenbrand wrote:
>> On 18.08.20 16:15, Mike Rapoport wrote:
>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>
>>> Taking pages out from the direct map and bringing them back may create
>>> undesired fragmentation and usage of the smaller pages in the direct
>>> mapping of the physical memory.
>>>
>>> This can be avoided if a significantly large area of the physical memory
>>> would be reserved for secretmem purposes at boot time.
>>>
>>> Add ability to reserve physical memory for secretmem at boot time using
>>> "secretmem" kernel parameter and then use that reserved memory as a global
>>> pool for secret memory needs.
>>
>> Wouldn't something like CMA be the better fit? Just wondering. Then, the
>> memory can actually be reused for something else while not needed.
> 
> The memory allocated as secret is removed from the direct map and the
> boot time reservation is intended to reduce direct map fragmentation
> and to avoid splitting 1G pages there. So with CMA I'd still need to
> allocate 1G chunks for this and once a 1G page is dropped from the direct
> map it still cannot be reused for anything else until it is freed.
> 
> I could use CMA to do the boot time reservation, but doing the
> reservation directly seemed simpler and more explicit to me.

Well, using CMA would give you the possibility to let the memory be used
for other purposes until you decide it's the right time to take it +
remove the direct mapping etc.

I wonder if a sane approach would be to require allocating a pool
during boot and to only ever take pages from that pool. That would avoid
spilling many unmovable pages all over the place, locally limiting them
to your area here.

-- 
Thanks,

David / dhildenb

* Re: [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot
  2020-08-19 12:10       ` David Hildenbrand
@ 2020-08-19 17:33         ` Mike Rapoport
  2020-08-19 17:45           ` David Hildenbrand
  0 siblings, 1 reply; 20+ messages in thread
From: Mike Rapoport @ 2020-08-19 17:33 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dave Hansen, Elena Reshetova, H. Peter Anvin, Idan Yaniv,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel

On Wed, Aug 19, 2020 at 02:10:43PM +0200, David Hildenbrand wrote:
> On 19.08.20 13:53, Mike Rapoport wrote:
> > On Wed, Aug 19, 2020 at 12:49:05PM +0200, David Hildenbrand wrote:
> >> On 18.08.20 16:15, Mike Rapoport wrote:
> >>> From: Mike Rapoport <rppt@linux.ibm.com>
> >>>
> >>> Taking pages out from the direct map and bringing them back may create
> >>> undesired fragmentation and usage of the smaller pages in the direct
> >>> mapping of the physical memory.
> >>>
> >>> This can be avoided if a significantly large area of the physical memory
> >>> would be reserved for secretmem purposes at boot time.
> >>>
> >>> Add ability to reserve physical memory for secretmem at boot time using
> >>> "secretmem" kernel parameter and then use that reserved memory as a global
> >>> pool for secret memory needs.
> >>
> >> Wouldn't something like CMA be the better fit? Just wondering. Then, the
> >> memory can actually be reused for something else while not needed.
> > 
> > The memory allocated as secret is removed from the direct map and the
> > boot time reservation is intended to reduce direct map fragmentation
> > and to avoid splitting 1G pages there. So with CMA I'd still need to
> > allocate 1G chunks for this and once a 1G page is dropped from the direct
> > map it still cannot be reused for anything else until it is freed.
> > 
> > I could use CMA to do the boot time reservation, but doing the
> > reservation directly seemed simpler and more explicit to me.
> 
> Well, using CMA would give you the possibility to let the memory be used
> for other purposes until you decide it's the right time to take it +
> remove the direct mapping etc.

I still can't say I follow you here. If I reserve a CMA area as a pool
for secret memory 1G pages, it is still reserved and it still cannot be
used for other purposes, right?

> I wonder if a sane approach would be to require allocating a pool
> during boot and to only ever take pages from that pool. That would avoid
> spilling many unmovable pages all over the place, locally limiting them
> to your area here.

That's what I tried to implement. The pool reserved at boot time is in a
way similar to booting with mem=X and then splitting the remaining
memory between the VMs.
In this case, the memory reserved at boot is never in the direct map and
allocations from such a pool will not cause fragmentation.
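
For concreteness: with the patch above, booting with e.g. secretmem=16G
makes a single memblock_alloc() reservation of 16G that secretmem then
drops from the direct map in one go at initialization. The size string
is parsed with memparse(), so the usual K/M/G suffixes work, and any
reservation larger than half of PUD_SIZE is PUD-aligned, so a
multiple-of-1G size like this one only ever occupies whole 1G entries
of the direct map on x86-64.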

> -- 
> Thanks,
> 
> David / dhildenb
> 

-- 
Sincerely yours,
Mike.

* Re: [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot
  2020-08-19 17:33         ` Mike Rapoport
@ 2020-08-19 17:45           ` David Hildenbrand
  2020-08-20 15:52             ` Mike Rapoport
  0 siblings, 1 reply; 20+ messages in thread
From: David Hildenbrand @ 2020-08-19 17:45 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dave Hansen, Elena Reshetova, H. Peter Anvin, Idan Yaniv,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel

On 19.08.20 19:33, Mike Rapoport wrote:
> On Wed, Aug 19, 2020 at 02:10:43PM +0200, David Hildenbrand wrote:
>> On 19.08.20 13:53, Mike Rapoport wrote:
>>> On Wed, Aug 19, 2020 at 12:49:05PM +0200, David Hildenbrand wrote:
>>>> On 18.08.20 16:15, Mike Rapoport wrote:
>>>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>>>
>>>>> Taking pages out from the direct map and bringing them back may create
>>>>> undesired fragmentation and usage of the smaller pages in the direct
>>>>> mapping of the physical memory.
>>>>>
>>>>> This can be avoided if a significantly large area of the physical memory
>>>>> would be reserved for secretmem purposes at boot time.
>>>>>
>>>>> Add ability to reserve physical memory for secretmem at boot time using
>>>>> "secretmem" kernel parameter and then use that reserved memory as a global
>>>>> pool for secret memory needs.
>>>>
>>>> Wouldn't something like CMA be the better fit? Just wondering. Then, the
>>>> memory can actually be reused for something else while not needed.
>>>
> >>> The memory allocated as secret is removed from the direct map and the
> >>> boot time reservation is intended to reduce direct map fragmentation
> >>> and to avoid splitting 1G pages there. So with CMA I'd still need to
> >>> allocate 1G chunks for this and once a 1G page is dropped from the direct
> >>> map it still cannot be reused for anything else until it is freed.
> >>>
> >>> I could use CMA to do the boot time reservation, but doing the
> >>> reservation directly seemed simpler and more explicit to me.
>>
>> Well, using CMA would give you the possibility to let the memory be used
>> for other purposes until you decide it's the right time to take it +
>> remove the direct mapping etc.
> 
> I still can't say I follow you here. If I reserve a CMA area as a pool
> for secret memory 1G pages, it is still reserved and it still cannot be
> used for other purposes, right?

So, AFAIK, if you create a CMA pool it can be used for any MOVABLE
allocations (similar to ZONE_MOVABLE) until you actually allocate CMA
memory from that region. Other allocations in that area will then be
migrated away (using alloc_contig_range()).

For example, if you have a 1 GiB CMA area, you could allocate 4 MB pages
from that CMA area on demand (removing the direct mapping, etc.), and
free when no longer needed (instantiating the direct mapping). The free
memory in that area could be used for MOVABLE allocations.

Please let me know if I am missing something important.
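
For readers unfamiliar with the interface under discussion, a rough
sketch of the CMA usage pattern described above (illustrative only: the
area and helper names are hypothetical, while the CMA calls are as
found in mm/cma.c around v5.9):

	#include <linux/cma.h>
	#include <linux/sizes.h>

	static struct cma *secretmem_cma;	/* hypothetical CMA area */

	/* Early boot: declare a 1G, 1G-aligned CMA area for secretmem. */
	static int __init secretmem_cma_reserve(void)
	{
		return cma_declare_contiguous(0, SZ_1G, 0, SZ_1G, 0, false,
					      "secretmem", &secretmem_cma);
	}

	/*
	 * Runtime: take one PMD-sized chunk out of the area; any movable
	 * pages currently sitting there are migrated away first.
	 */
	static struct page *secretmem_cma_refill(void)
	{
		return cma_alloc(secretmem_cma, PMD_SIZE / PAGE_SIZE,
				 PMD_SHIFT - PAGE_SHIFT, false);
	}

	/* Return a chunk once its direct mapping has been restored. */
	static void secretmem_cma_recycle(struct page *page)
	{
		cma_release(secretmem_cma, page, PMD_SIZE / PAGE_SIZE);
	}

Until cma_alloc() succeeds on a given range, the pages there remain
available to the buddy allocator for MOVABLE allocations, which is the
reuse being pointed out here.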

-- 
Thanks,

David / dhildenb

* Re: [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot
  2020-08-19 17:45           ` David Hildenbrand
@ 2020-08-20 15:52             ` Mike Rapoport
  2020-09-08  9:09               ` David Hildenbrand
  0 siblings, 1 reply; 20+ messages in thread
From: Mike Rapoport @ 2020-08-20 15:52 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dave Hansen, Elena Reshetova, H. Peter Anvin, Idan Yaniv,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel

On Wed, Aug 19, 2020 at 07:45:29PM +0200, David Hildenbrand wrote:
> On 19.08.20 19:33, Mike Rapoport wrote:
> > On Wed, Aug 19, 2020 at 02:10:43PM +0200, David Hildenbrand wrote:
> >> On 19.08.20 13:53, Mike Rapoport wrote:
> >>> On Wed, Aug 19, 2020 at 12:49:05PM +0200, David Hildenbrand wrote:
> >>>> On 18.08.20 16:15, Mike Rapoport wrote:
> >>>>> From: Mike Rapoport <rppt@linux.ibm.com>
> >>>>>
> >>>>> Taking pages out from the direct map and bringing them back may create
> >>>>> undesired fragmentation and usage of the smaller pages in the direct
> >>>>> mapping of the physical memory.
> >>>>>
> >>>>> This can be avoided if a significantly large area of the physical memory
> >>>>> would be reserved for secretmem purposes at boot time.
> >>>>>
> >>>>> Add ability to reserve physical memory for secretmem at boot time using
> >>>>> "secretmem" kernel parameter and then use that reserved memory as a global
> >>>>> pool for secret memory needs.
> >>>>
> >>>> Wouldn't something like CMA be the better fit? Just wondering. Then, the
> >>>> memory can actually be reused for something else while not needed.
> >>>
> >>> The memory allocated as secret is removed from the direct map and the
> >>> boot time reservation is intended to reduce direct map fragmentation
> >>> and to avoid splitting 1G pages there. So with CMA I'd still need to
> >>> allocate 1G chunks for this and once a 1G page is dropped from the direct
> >>> map it still cannot be reused for anything else until it is freed.
> >>>
> >>> I could use CMA to do the boot time reservation, but doing the
> >>> reservation directly seemed simpler and more explicit to me.
> >>
> >> Well, using CMA would give you the possibility to let the memory be used
> >> for other purposes until you decide it's the right time to take it +
> >> remove the direct mapping etc.
> > 
> > I still can't say I follow you here. If I reserve a CMA area as a pool
> > for secret memory 1G pages, it is still reserved and it still cannot be
> > used for other purposes, right?
> 
> So, AFAIK, if you create a CMA pool it can be used for any MOVABLE
> allocations (similar to ZONE_MOVABLE) until you actually allocate CMA
> memory from that region. Other allocations in that area will then be
> migrated away (using alloc_contig_range()).
> 
> For example, if you have a 1 GiB CMA area, you could allocate 4 MB pages
> from that CMA area on demand (removing the direct mapping, etc.), and
> free when no longer needed (instantiating the direct mapping). The free
> memory in that area could be used for MOVABLE allocations.

The boot time reservation is intended to avoid splitting 1G pages in the
direct map. Without the boot time reservation, we maintain a pool of 2M
pages so the 1G pages are split and 2M pages remain unsplit.

If I scale your example to match the requirement to avoid splitting 1G
pages in the direct map, that would mean creating a CMA area of several
tens of gigabytes and then doing cma_alloc() of 1G each time we need to
refill the secretmem pool. 

It is quite probable that we won't be able to get 1G from CMA after the
system has worked for some time.

With boot time reservation we won't need physically contiguous 1G to
satisfy smaller allocation requests for secretmem because we don't need
to maintain 1G mappings in the secretmem pool.
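
(To put numbers on it: a 16G boot-time pool can serve 8192 independent
2M secretmem allocations with no contiguity requirement at run time,
whereas refilling the pool from CMA in 1G units would need a free,
fully migratable, physically contiguous 1G range each time.)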

That said, I believe the addition of the boot time reservation, either
direct or with CMA, can be added as an incremental patch after the
"core" functionality is merged.

> Please let me know if I am missing something important.
> 
> -- 
> Thanks,
> 
> David / dhildenb
> 

-- 
Sincerely yours,
Mike.

* Re: [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-08-18 14:15 [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (6 preceding siblings ...)
  2020-08-19 10:47 ` [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas David Hildenbrand
@ 2020-08-26 11:01 ` Mike Rapoport
  2020-09-03  7:46 ` Mike Rapoport
  8 siblings, 0 replies; 20+ messages in thread
From: Mike Rapoport @ 2020-08-26 11:01 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-nvdimm, linux-riscv,
	x86

Any comments on this?

On Tue, Aug 18, 2020 at 05:15:48PM +0300, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Hi,
> 
> This is an implementation of "secret" mappings backed by a file descriptor. 
> 
> v4 changes:
> * rebase on v5.9-rc1
> * Do not redefine PMD_PAGE_ORDER in fs/dax.c, thanks Kirill
> * Make secret mappings exclusive by default and only require flags to
>   memfd_secret() system call for uncached mappings, thanks again Kirill :)
> 
> v3 changes:
> * Squash kernel-parameters.txt update into the commit that added the
>   command line option.
> * Make uncached mode explicitly selectable by architectures. For now enable
>   it only on x86.
> 
> v2 changes:
> * Follow Michael's suggestion and name the new system call 'memfd_secret'
> * Add kernel-parameters documentation about the boot option
> * Fix i386-tinyconfig regression reported by the kbuild bot.
>   CONFIG_SECRETMEM now depends on !EMBEDDED to disable it on small systems
>   from one side and still make it available unconditionally on
>   architectures that support SET_DIRECT_MAP.
> 
> 
> The file descriptor backing secret memory mappings is created using a
> dedicated memfd_secret system call The desired protection mode for the
> memory is configured using flags parameter of the system call. The mmap()
> of the file descriptor created with memfd_secret() will create a "secret"
> memory mapping. The pages in that mapping will be marked as not present in
> the direct map and will have desired protection bits set in the user page
> table. For instance, current implementation allows uncached mappings.
> 
> Although normally Linux userspace mappings are protected from other users, 
> such secret mappings are useful for environments where a hostile tenant is
> trying to trick the kernel into giving them access to other tenants
> mappings.
> 
> Additionally, the secret mappings may be used as a mean to protect guest
> memory in a virtual machine host.
> 
> For demonstration of secret memory usage we've created a userspace library
> [1] that does two things: the first is act as a preloader for openssl to
> redirect all the OPENSSL_malloc calls to secret memory meaning any secret
> keys get automatically protected this way and the other thing it does is
> expose the API to the user who needs it. We anticipate that a lot of the
> use cases would be like the openssl one: many toolkits that deal with
> secret keys already have special handling for the memory to try to give
> them greater protection, so this would simply be pluggable into the
> toolkits without any need for user application modification.
> 
> I've hesitated whether to continue to use new flags to memfd_create() or to
> add a new system call and I've decided to use a new system call after I've
> started to look into man pages update. There would have been two completely
> independent descriptions and I think it would have been very confusing.
> 
> Hiding secret memory mappings behind an anonymous file allows (ab)use of
> the page cache for tracking pages allocated for the "secret" mappings as
> well as using address_space_operations for e.g. page migration callbacks.
> 
> The anonymous file may be also used implicitly, like hugetlb files, to
> implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
> ABIs in the future.
> 
> As the fragmentation of the direct map was one of the major concerns raised
> during the previous postings, I've added an amortizing cache of PMD-size
> pages to each file descriptor and an ability to reserve large chunks of the
> physical memory at boot time and then use this memory as an allocation pool
> for the secret memory areas.
> 
> v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
> v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
> v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org/
> rfc-v2: https://lore.kernel.org/lkml/20200706172051.19465-1-rppt@kernel.org/
> rfc-v1: https://lore.kernel.org/lkml/20200130162340.GA14232@rapoport-lnx/
> 
> Mike Rapoport (6):
>   mm: add definition of PMD_PAGE_ORDER
>   mmap: make mlock_future_check() global
>   mm: introduce memfd_secret system call to create "secret" memory areas
>   arch, mm: wire up memfd_secret system call were relevant
>   mm: secretmem: use PMD-size pages to amortize direct map fragmentation
>   mm: secretmem: add ability to reserve memory at boot
> 
>  arch/Kconfig                           |   7 +
>  arch/arm64/include/asm/unistd.h        |   2 +-
>  arch/arm64/include/asm/unistd32.h      |   2 +
>  arch/arm64/include/uapi/asm/unistd.h   |   1 +
>  arch/riscv/include/asm/unistd.h        |   1 +
>  arch/x86/Kconfig                       |   1 +
>  arch/x86/entry/syscalls/syscall_32.tbl |   1 +
>  arch/x86/entry/syscalls/syscall_64.tbl |   1 +
>  fs/dax.c                               |  11 +-
>  include/linux/pgtable.h                |   3 +
>  include/linux/syscalls.h               |   1 +
>  include/uapi/asm-generic/unistd.h      |   7 +-
>  include/uapi/linux/magic.h             |   1 +
>  include/uapi/linux/secretmem.h         |   8 +
>  kernel/sys_ni.c                        |   2 +
>  mm/Kconfig                             |   4 +
>  mm/Makefile                            |   1 +
>  mm/internal.h                          |   3 +
>  mm/mmap.c                              |   5 +-
>  mm/secretmem.c                         | 451 +++++++++++++++++++++++++
>  20 files changed, 501 insertions(+), 12 deletions(-)
>  create mode 100644 include/uapi/linux/secretmem.h
>  create mode 100644 mm/secretmem.c
> 
> -- 
> 2.26.2
> 

-- 
Sincerely yours,
Mike.

* Re: [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-08-18 14:15 [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (7 preceding siblings ...)
  2020-08-26 11:01 ` Mike Rapoport
@ 2020-09-03  7:46 ` Mike Rapoport
  8 siblings, 0 replies; 20+ messages in thread
From: Mike Rapoport @ 2020-09-03  7:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Idan Yaniv, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-nvdimm, linux-riscv,
	x86

Any updates on this?

On Tue, Aug 18, 2020 at 05:15:48PM +0300, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Hi,
> 
> This is an implementation of "secret" mappings backed by a file descriptor. 
> 
> v4 changes:
> * rebase on v5.9-rc1
> * Do not redefine PMD_PAGE_ORDER in fs/dax.c, thanks Kirill
> * Make secret mappings exclusive by default and only require flags to
>   memfd_secret() system call for uncached mappings, thanks again Kirill :)
> 
> v3 changes:
> * Squash kernel-parameters.txt update into the commit that added the
>   command line option.
> * Make uncached mode explicitly selectable by architectures. For now enable
>   it only on x86.
> 
> v2 changes:
> * Follow Michael's suggestion and name the new system call 'memfd_secret'
> * Add kernel-parameters documentation about the boot option
> * Fix i386-tinyconfig regression reported by the kbuild bot.
>   CONFIG_SECRETMEM now depends on !EMBEDDED to disable it on small systems
>   from one side and still make it available unconditionally on
>   architectures that support SET_DIRECT_MAP.
> 
> 
> The file descriptor backing secret memory mappings is created using a
> dedicated memfd_secret system call The desired protection mode for the
> memory is configured using flags parameter of the system call. The mmap()
> of the file descriptor created with memfd_secret() will create a "secret"
> memory mapping. The pages in that mapping will be marked as not present in
> the direct map and will have desired protection bits set in the user page
> table. For instance, current implementation allows uncached mappings.
> 
> Although normally Linux userspace mappings are protected from other users, 
> such secret mappings are useful for environments where a hostile tenant is
> trying to trick the kernel into giving them access to other tenants
> mappings.
> 
> Additionally, the secret mappings may be used as a mean to protect guest
> memory in a virtual machine host.
> 
> For demonstration of secret memory usage we've created a userspace library
> [1] that does two things: the first is act as a preloader for openssl to
> redirect all the OPENSSL_malloc calls to secret memory meaning any secret
> keys get automatically protected this way and the other thing it does is
> expose the API to the user who needs it. We anticipate that a lot of the
> use cases would be like the openssl one: many toolkits that deal with
> secret keys already have special handling for the memory to try to give
> them greater protection, so this would simply be pluggable into the
> toolkits without any need for user application modification.
> 
> I've hesitated whether to continue to use new flags to memfd_create() or to
> add a new system call and I've decided to use a new system call after I've
> started to look into man pages update. There would have been two completely
> independent descriptions and I think it would have been very confusing.
> 
> Hiding secret memory mappings behind an anonymous file allows (ab)use of
> the page cache for tracking pages allocated for the "secret" mappings as
> well as using address_space_operations for e.g. page migration callbacks.
> 
> The anonymous file may be also used implicitly, like hugetlb files, to
> implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
> ABIs in the future.
> 
> As the fragmentation of the direct map was one of the major concerns raised
> during the previous postings, I've added an amortizing cache of PMD-size
> pages to each file descriptor and an ability to reserve large chunks of the
> physical memory at boot time and then use this memory as an allocation pool
> for the secret memory areas.
> 
> v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
> v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
> v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org/
> rfc-v2: https://lore.kernel.org/lkml/20200706172051.19465-1-rppt@kernel.org/
> rfc-v1: https://lore.kernel.org/lkml/20200130162340.GA14232@rapoport-lnx/
> 
> Mike Rapoport (6):
>   mm: add definition of PMD_PAGE_ORDER
>   mmap: make mlock_future_check() global
>   mm: introduce memfd_secret system call to create "secret" memory areas
>   arch, mm: wire up memfd_secret system call were relevant
>   mm: secretmem: use PMD-size pages to amortize direct map fragmentation
>   mm: secretmem: add ability to reserve memory at boot
> 
>  arch/Kconfig                           |   7 +
>  arch/arm64/include/asm/unistd.h        |   2 +-
>  arch/arm64/include/asm/unistd32.h      |   2 +
>  arch/arm64/include/uapi/asm/unistd.h   |   1 +
>  arch/riscv/include/asm/unistd.h        |   1 +
>  arch/x86/Kconfig                       |   1 +
>  arch/x86/entry/syscalls/syscall_32.tbl |   1 +
>  arch/x86/entry/syscalls/syscall_64.tbl |   1 +
>  fs/dax.c                               |  11 +-
>  include/linux/pgtable.h                |   3 +
>  include/linux/syscalls.h               |   1 +
>  include/uapi/asm-generic/unistd.h      |   7 +-
>  include/uapi/linux/magic.h             |   1 +
>  include/uapi/linux/secretmem.h         |   8 +
>  kernel/sys_ni.c                        |   2 +
>  mm/Kconfig                             |   4 +
>  mm/Makefile                            |   1 +
>  mm/internal.h                          |   3 +
>  mm/mmap.c                              |   5 +-
>  mm/secretmem.c                         | 451 +++++++++++++++++++++++++
>  20 files changed, 501 insertions(+), 12 deletions(-)
>  create mode 100644 include/uapi/linux/secretmem.h
>  create mode 100644 mm/secretmem.c
> 
> -- 
> 2.26.2
> 

-- 
Sincerely yours,
Mike.

* Re: [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot
  2020-08-20 15:52             ` Mike Rapoport
@ 2020-09-08  9:09               ` David Hildenbrand
  2020-09-08 12:31                 ` Mike Rapoport
  0 siblings, 1 reply; 20+ messages in thread
From: David Hildenbrand @ 2020-09-08  9:09 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dave Hansen, Elena Reshetova, H. Peter Anvin, Idan Yaniv,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel

On 20.08.20 17:52, Mike Rapoport wrote:
> On Wed, Aug 19, 2020 at 07:45:29PM +0200, David Hildenbrand wrote:
>> On 19.08.20 19:33, Mike Rapoport wrote:
>>> On Wed, Aug 19, 2020 at 02:10:43PM +0200, David Hildenbrand wrote:
>>>> On 19.08.20 13:53, Mike Rapoport wrote:
>>>>> On Wed, Aug 19, 2020 at 12:49:05PM +0200, David Hildenbrand wrote:
>>>>>> On 18.08.20 16:15, Mike Rapoport wrote:
>>>>>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>>>>>
>>>>>>> Taking pages out from the direct map and bringing them back may create
>>>>>>> undesired fragmentation and usage of the smaller pages in the direct
>>>>>>> mapping of the physical memory.
>>>>>>>
>>>>>>> This can be avoided if a significantly large area of the physical memory
>>>>>>> would be reserved for secretmem purposes at boot time.
>>>>>>>
>>>>>>> Add ability to reserve physical memory for secretmem at boot time using
>>>>>>> "secretmem" kernel parameter and then use that reserved memory as a global
>>>>>>> pool for secret memory needs.
>>>>>>
>>>>>> Wouldn't something like CMA be the better fit? Just wondering. Then, the
>>>>>> memory can actually be reused for something else while not needed.
>>>>>
> >>>>> The memory allocated as secret is removed from the direct map and the
> >>>>> boot time reservation is intended to reduce direct map fragmentation
> >>>>> and to avoid splitting 1G pages there. So with CMA I'd still need to
> >>>>> allocate 1G chunks for this and once a 1G page is dropped from the direct
> >>>>> map it still cannot be reused for anything else until it is freed.
> >>>>>
> >>>>> I could use CMA to do the boot time reservation, but doing the
> >>>>> reservation directly seemed simpler and more explicit to me.
>>>>
>>>> Well, using CMA would give you the possibility to let the memory be used
>>>> for other purposes until you decide it's the right time to take it +
>>>> remove the direct mapping etc.
>>>
> >>> I still can't say I follow you here. If I reserve a CMA area as a pool
>>> for secret memory 1G pages, it is still reserved and it still cannot be
>>> used for other purposes, right?
>>
> >> So, AFAIK, if you create a CMA pool it can be used for any MOVABLE
> >> allocations (similar to ZONE_MOVABLE) until you actually allocate CMA
> >> memory from that region. Other allocations in that area will then be
> >> migrated away (using alloc_contig_range()).
> >>
> >> For example, if you have a 1 GiB CMA area, you could allocate 4 MB pages
> >> from that CMA area on demand (removing the direct mapping, etc.), and
> >> free when no longer needed (instantiating the direct mapping). The free
> >> memory in that area could be used for MOVABLE allocations.
> 
> The boot time reservation is intended to avoid splitting 1G pages in the
> direct map. Without the boot time reservation, we maintain a pool of 2M
> pages so the 1G pages are split and 2M pages remain unsplit.
> 
> If I scale your example to match the requirement to avoid splitting 1G
> pages in the direct map, that would mean creating a CMA area of several
> tens of gigabytes and then doing cma_alloc() of 1G each time we need to
> refill the secretmem pool. 
> 
> It is quite probable that we won't be able to get 1G from CMA after the
> system has worked for some time.

Why? It should only contain movable pages, and if that is not the case,
it's a bug we have to fix. It should behave just as ZONE_MOVABLE.
(although I agree that in corner cases, alloc_contig_pages() might
temporarily fail on some chunks - e.g., with long/short-term page
pinnings - in contrast to memory offlining, it won't retry forever)
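
For reference, a sketch of such a runtime 1G grab with the contig
allocator mentioned above (illustrative: alloc_contig_pages() exists
since v5.4, but the wrapper name is made up):

	#include <linux/gfp.h>
	#include <linux/sizes.h>

	/* Try to assemble a physically contiguous 1G chunk at runtime. */
	static struct page *secretmem_grab_1g(int nid)
	{
		return alloc_contig_pages(SZ_1G / PAGE_SIZE,
					  GFP_KERNEL | __GFP_NOWARN,
					  nid, NULL);
	}

As noted, this can fail if any page in the candidate ranges is pinned
for longer than the allocator is willing to wait.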

> 
> With boot time reservation we won't need physically contiguous 1G to
> satisfy smaller allocation requests for secretmem because we don't need
> to maintain 1G mappings in the secretmem pool.

You can allocate within your CMA area however you want - it doesn't need
to be whole gigabytes if there is no need for that.

Again, the big benefit of CMA is that the reserved memory can be reused
for other purposes while nobody is actually making use of it.

> 
> That said, I believe the addition of the boot time reservation, either
> direct or with CMA, can be added as an incremental patch after the
> "core" functionality is merged.

I am not convinced that we want to let random processes do
alloc_pages() in the range of tens of gigabytes. It's not just mlocked
memory. I prefer either using CMA or relying on the boot time
reservations. But let's see if there are other opinions, or if people
just don't care.

Having that said, I have no further comments.

-- 
Thanks,

David / dhildenb

* Re: [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot
  2020-09-08  9:09               ` David Hildenbrand
@ 2020-09-08 12:31                 ` Mike Rapoport
  0 siblings, 0 replies; 20+ messages in thread
From: Mike Rapoport @ 2020-09-08 12:31 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dave Hansen, Elena Reshetova, H. Peter Anvin, Idan Yaniv,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel

Hi David,

On Tue, Sep 08, 2020 at 11:09:19AM +0200, David Hildenbrand wrote:
> On 20.08.20 17:52, Mike Rapoport wrote:
> > On Wed, Aug 19, 2020 at 07:45:29PM +0200, David Hildenbrand wrote:
> >> On 19.08.20 19:33, Mike Rapoport wrote:
> >>> On Wed, Aug 19, 2020 at 02:10:43PM +0200, David Hildenbrand wrote:
> >>>> On 19.08.20 13:53, Mike Rapoport wrote:
> >>>>> On Wed, Aug 19, 2020 at 12:49:05PM +0200, David Hildenbrand wrote:
> >>>>>> On 18.08.20 16:15, Mike Rapoport wrote:
> >>>>>>> From: Mike Rapoport <rppt@linux.ibm.com>
> >>>>>>>
> >>>>>>> Taking pages out from the direct map and bringing them back may create
> >>>>>>> undesired fragmentation and usage of the smaller pages in the direct
> >>>>>>> mapping of the physical memory.
> >>>>>>>
> >>>>>>> This can be avoided if a significantly large area of the physical memory
> >>>>>>> would be reserved for secretmem purposes at boot time.
> >>>>>>>
> >>>>>>> Add ability to reserve physical memory for secretmem at boot time using
> >>>>>>> "secretmem" kernel parameter and then use that reserved memory as a global
> >>>>>>> pool for secret memory needs.
> >>>>>>
> >>>>>> Wouldn't something like CMA be the better fit? Just wondering. Then, the
> >>>>>> memory can actually be reused for something else while not needed.
> >>>>>
> >>>>> The memory allocated as secret is removed from the direct map and the
> >>>>> boot time reservation is intended to reduce direct map fragmentation
> >>>>> and to avoid splitting 1G pages there. So with CMA I'd still need to
> >>>>> allocate 1G chunks for this and once a 1G page is dropped from the direct
> >>>>> map it still cannot be reused for anything else until it is freed.
> >>>>>
> >>>>> I could use CMA to do the boot time reservation, but doing the
> >>>>> reservation directly seemed simpler and more explicit to me.
> >>>>
> >>>> Well, using CMA would give you the possibility to let the memory be used
> >>>> for other purposes until you decide it's the right time to take it +
> >>>> remove the direct mapping etc.
> >>>
> >>> I still can't say I follow you here. If I reserve a CMA area as a pool
> >>> for secret memory 1G pages, it is still reserved and it still cannot be
> >>> used for other purposes, right?
> >>
> >> So, AFAIK, if you create a CMA pool it can be used for any MOVABLE
> >> allocations (similar to ZONE_MOVABLE) until you actually allocate CMA
> >> memory from that region. Other allocations in that area will then be
> >> migrated away (using alloc_contig_range()).
> >>
> >> For example, if you have a 1 GiB CMA area, you could allocate 4 MB pages
> >> from that CMA area on demand (removing the direct mapping, etc.), and
> >> free when no longer needed (instantiating the direct mapping). The free
> >> memory in that area could be used for MOVABLE allocations.
> > 
> > The boot time reservation is intended to avoid splitting 1G pages in the
> > direct map. Without the boot time reservation, we maintain a pool of 2M
> > pages so the 1G pages are split and 2M pages remain unsplit.
> > 
> > If I scale your example to match the requirement to avoid splitting 1G
> > pages in the direct map, that would mean creating a CMA area of several
> > tens of gigabytes and then doing cma_alloc() of 1G each time we need to
> > refill the secretmem pool. 
> > 
> > It is quite probable that we won't be able to get 1G from CMA after the
> > system has worked for some time.
> 
> Why? It should only contain movable pages, and if that is not the case,
> it's a bug we have to fix. It should behave just as ZONE_MOVABLE.
> (although I agree that in corner cases, alloc_contig_pages() might
> temporarily fail on some chunks - e.g., with long/short-term page
> pinnings - in contrast to memory offlining, it won't retry forever)
 
The use-case I had in mind for the boot time reservation in secretmem is
a machine that runs VMs where there is a desire to have the VM memory
protected from the host. In a way this should be similar to booting a
host with mem=X, where most of the machine memory never gets to be used
by the host kernel.

For such a use case, boot time reservation controlled by the command
line parameter seems to me simpler than using CMA. I agree that there is
no way to use the reserved memory for other purposes, but then we won't
need to create a physically contiguous chunk of several gigabytes every
time a VM is created.

> > With boot time reservation we won't need physically contiguous 1G to
> > satisfy smaller allocation requests for secretmem because we don't need
> > to maintain 1G mappings in the secretmem pool.
> 
> You can allocate within your CMA area however you want - it doesn't need
> to be whole gigabytes if there is no need for that.

The whole point of boot time reservation is to prevent splitting 1G
pages in the direct map. Allocating smaller chunks will still cause
fragmentation of the direct map.

> Again, the big benefit of CMA is that the reserved memory can be reused
> for other purposes while nobody is actually making use of it.

Right, but I think if a user explicitly asked to use X gigabytes for the
secretmem we can allow that.

> > 
> > That said, I believe the addition of the boot time reservation, either
> > direct or with CMA, can be added as an incremental patch after the
> > "core" functionality is merged.
> 
> I am not convinced that we want to let random processes do
> alloc_pages() in the range of tens of gigabytes. It's not just mlocked
> memory. I prefer either using CMA or relying on the boot time
> reservations. But let's see if there are other opinions, or if people
> just don't care.
> 
> Having that said, I have no further comments.
> 
> -- 
> Thanks,
> 
> David / dhildenb
> 

-- 
Sincerely yours,
Mike.

Thread overview: 20+ messages
2020-08-18 14:15 [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
2020-08-18 14:15 ` [PATCH v4 1/6] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
2020-08-18 14:15 ` [PATCH v4 2/6] mmap: make mlock_future_check() global Mike Rapoport
2020-08-18 14:15 ` [PATCH v4 3/6] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
2020-08-18 14:15 ` [PATCH v4 4/6] arch, mm: wire up memfd_secret system call were relevant Mike Rapoport
2020-08-18 14:15 ` [PATCH v4 5/6] mm: secretmem: use PMD-size pages to amortize direct map fragmentation Mike Rapoport
2020-08-18 14:15 ` [PATCH v4 6/6] mm: secretmem: add ability to reserve memory at boot Mike Rapoport
2020-08-19 10:49   ` David Hildenbrand
2020-08-19 11:53     ` Mike Rapoport
2020-08-19 12:10       ` David Hildenbrand
2020-08-19 17:33         ` Mike Rapoport
2020-08-19 17:45           ` David Hildenbrand
2020-08-20 15:52             ` Mike Rapoport
2020-09-08  9:09               ` David Hildenbrand
2020-09-08 12:31                 ` Mike Rapoport
2020-08-19 10:47 ` [PATCH v4 0/6] mm: introduce memfd_secret system call to create "secret" memory areas David Hildenbrand
2020-08-19 11:42   ` Mike Rapoport
2020-08-19 12:05     ` David Hildenbrand
2020-08-26 11:01 ` Mike Rapoport
2020-09-03  7:46 ` Mike Rapoport
