* [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2020-11-10 15:14 Mike Rapoport
  2020-11-10 15:14 ` [PATCH v8 1/9] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
                   ` (9 more replies)
  0 siblings, 10 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 15:14 UTC
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

This is an implementation of "secret" mappings backed by a file descriptor.

The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call. The desired protection mode for the
memory is configured using the flags parameter of the system call. The
mmap() of the file descriptor created with memfd_secret() will create a
"secret" memory mapping. The pages in that mapping will be marked as not
present in the direct map and will have the desired protection bits set
in the user page table. For instance, the current implementation allows
uncached mappings.
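
For example, creating and populating a default ("exclusive") secret
mapping boils down to the following sketch (error handling omitted;
memfd_secret() stands for the raw system call, and MAP_SIZE,
key_material and key_len are placeholders):

	fd = memfd_secret(0);
	ftruncate(fd, MAP_SIZE);
	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	memcpy(ptr, key_material, key_len);	/* only this mapping sees it */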

Although normally Linux userspace mappings are protected from other
users, such secret mappings are useful for environments where a hostile
tenant may try to trick the kernel into giving them access to other
tenants' mappings.

Additionally, in the future the secret mappings may be used as a means to
protect guest memory in a virtual machine host.

For demonstration of secret memory usage we've created a userspace library

https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git

that does two things: first, it acts as a preloader for openssl and
redirects all the OPENSSL_malloc calls to secret memory, so any secret
keys get automatically protected this way; second, it exposes the API to
users who need it. We anticipate that many of the use cases will look
like the openssl one: toolkits that deal with secret keys already have
special handling for that memory to try to give it greater protection,
so this would simply be pluggable into the toolkits without any need for
user application modification.
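
As an illustration, a heavily simplified sketch of how such a preloader
can carve allocations out of a secret mapping (hypothetical bump
allocator; the real library linked above is more elaborate and handles
errors, locking and freeing):

	static char *secret_base;
	static size_t secret_off;
	static const size_t secret_size = 1 << 20;

	static void secret_pool_init(void)
	{
		int fd = memfd_secret(0);	/* raw syscall in practice */

		ftruncate(fd, secret_size);
		secret_base = mmap(NULL, secret_size,
				   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	}

	/* LD_PRELOAD then routes OPENSSL_malloc() here */
	void *secret_malloc(size_t len)
	{
		void *p = secret_base + secret_off;

		secret_off += (len + sizeof(long) - 1) & ~(sizeof(long) - 1);
		return secret_off <= secret_size ? p : NULL;
	}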

Hiding secret memory mappings behind an anonymous file allows (ab)use of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.

The anonymous file may also be used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native"
mm ABIs in the future.

To limit fragmentation of the direct map to splitting only PUD-size pages,
I've added an amortizing cache of PMD-size pages to each file descriptor
that is used as an allocation pool for the secret memory areas.

As the memory allocated by secretmem becomes unmovable, we use CMA to
back the large page caches so that the page allocator won't be surprised
by a failing attempt to migrate these pages.

v8:
* Use CMA for all secretmem allocations as David suggested
* Update memcg accounting after transition to CMA
* Prevent hibernation when there are active secretmem users
* Add zeroing of the memory before releasing it back to cma/page allocator
* Rebase on v5.10-rc2-mmotm-2020-11-07-21-40

v7: https://lore.kernel.org/lkml/20201026083752.13267-1-rppt@kernel.org
* Use set_direct_map() instead of __kernel_map_pages() to ensure error
  handling in case the direct map update fails
* Add accounting of large pages used to reduce the direct map fragmentation
* Teach get_user_pages() and friends to refuse to get/pin secretmem pages

v6: https://lore.kernel.org/lkml/20200924132904.1391-1-rppt@kernel.org
* Silence the warning about missing syscall, thanks to Qian Cai
* Replace spaces with tabs in Kconfig additions, per Randy
* Add a selftest.

v5: https://lore.kernel.org/lkml/20200916073539.3552-1-rppt@kernel.org
* rebase on v5.9-rc5
* drop boot time memory reservation patch

v4: https://lore.kernel.org/lkml/20200818141554.13945-1-rppt@kernel.org
* rebase on v5.9-rc1
* Do not redefine PMD_PAGE_ORDER in fs/dax.c, thanks Kirill
* Make secret mappings exclusive by default and only require flags to
  memfd_secret() system call for uncached mappings, thanks again Kirill :)

v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
* Squash kernel-parameters.txt update into the commit that added the
  command line option.
* Make uncached mode explicitly selectable by architectures. For now enable
  it only on x86.

v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
* Follow Michael's suggestion and name the new system call 'memfd_secret'
* Add kernel-parameters documentation about the boot option
* Fix i386-tinyconfig regression reported by the kbuild bot.
  CONFIG_SECRETMEM now depends on !EMBEDDED to disable it on small systems
  from one side and still make it available unconditionally on
  architectures that support SET_DIRECT_MAP.

v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org

Mike Rapoport (9):
  mm: add definition of PMD_PAGE_ORDER
  mmap: make mlock_future_check() global
  set_memory: allow set_direct_map_*_noflush() for multiple pages
  mm: introduce memfd_secret system call to create "secret" memory areas
  secretmem: use PMD-size pages to amortize direct map fragmentation
  secretmem: add memcg accounting
  PM: hibernate: disable when there are active secretmem users
  arch, mm: wire up memfd_secret system call where relevant
  secretmem: test: add basic selftest for memfd_secret(2)

 arch/Kconfig                              |   7 +
 arch/arm64/include/asm/cacheflush.h       |   4 +-
 arch/arm64/include/asm/unistd.h           |   2 +-
 arch/arm64/include/asm/unistd32.h         |   2 +
 arch/arm64/include/uapi/asm/unistd.h      |   1 +
 arch/arm64/mm/pageattr.c                  |  10 +-
 arch/riscv/include/asm/set_memory.h       |   4 +-
 arch/riscv/include/asm/unistd.h           |   1 +
 arch/riscv/mm/pageattr.c                  |   8 +-
 arch/x86/Kconfig                          |   1 +
 arch/x86/entry/syscalls/syscall_32.tbl    |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl    |   1 +
 arch/x86/include/asm/set_memory.h         |   4 +-
 arch/x86/mm/pat/set_memory.c              |   8 +-
 fs/dax.c                                  |  11 +-
 include/linux/pgtable.h                   |   3 +
 include/linux/secretmem.h                 |  30 ++
 include/linux/set_memory.h                |   4 +-
 include/linux/syscalls.h                  |   1 +
 include/uapi/asm-generic/unistd.h         |   6 +-
 include/uapi/linux/magic.h                |   1 +
 include/uapi/linux/secretmem.h            |   8 +
 kernel/power/hibernate.c                  |   5 +-
 kernel/power/snapshot.c                   |   4 +-
 kernel/sys_ni.c                           |   2 +
 mm/Kconfig                                |   5 +
 mm/Makefile                               |   1 +
 mm/filemap.c                              |   2 +-
 mm/gup.c                                  |  10 +
 mm/internal.h                             |   3 +
 mm/mmap.c                                 |   5 +-
 mm/secretmem.c                            | 451 ++++++++++++++++++++++
 mm/vmalloc.c                              |   5 +-
 scripts/checksyscalls.sh                  |   4 +
 tools/testing/selftests/vm/.gitignore     |   1 +
 tools/testing/selftests/vm/Makefile       |   3 +-
 tools/testing/selftests/vm/memfd_secret.c | 298 ++++++++++++++
 tools/testing/selftests/vm/run_vmtests    |  17 +
 38 files changed, 895 insertions(+), 39 deletions(-)
 create mode 100644 include/linux/secretmem.h
 create mode 100644 include/uapi/linux/secretmem.h
 create mode 100644 mm/secretmem.c
 create mode 100644 tools/testing/selftests/vm/memfd_secret.c


base-commit: 9f8ce377d420db12b19d6a4f636fecbd88a725a5
-- 
2.28.0



* [PATCH v8 1/9] mm: add definition of PMD_PAGE_ORDER
  2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
@ 2020-11-10 15:14 ` Mike Rapoport
  2020-11-10 15:14 ` [PATCH v8 2/9] mmap: make mlock_future_check() global Mike Rapoport
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 15:14 UTC
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

The definition of PMD_PAGE_ORDER, denoting the number of base pages in a
second-level leaf page, is already used by DAX and may be handy in other
cases as well.

Several architectures already define PMD_ORDER as the order of a
second-level page table, so to avoid conflicts with those definitions use
the name PMD_PAGE_ORDER and update DAX accordingly.
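
For example, on x86-64 with 4K base pages PMD_SHIFT is 21 and PAGE_SHIFT
is 12, so:

	PMD_PAGE_ORDER = PMD_SHIFT - PAGE_SHIFT = 21 - 12 = 9

i.e. a PMD-level leaf page covers 2^9 = 512 base pages, or 2M.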

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 fs/dax.c                | 11 ++++-------
 include/linux/pgtable.h |  3 +++
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 26d5dcd2d69e..0f109eb16196 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -49,9 +49,6 @@ static inline unsigned int pe_order(enum page_entry_size pe_size)
 #define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)
 #define PG_PMD_NR	(PMD_SIZE >> PAGE_SHIFT)
 
-/* The order of a PMD entry */
-#define PMD_ORDER	(PMD_SHIFT - PAGE_SHIFT)
-
 static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];
 
 static int __init init_dax_wait_table(void)
@@ -98,7 +95,7 @@ static bool dax_is_locked(void *entry)
 static unsigned int dax_entry_order(void *entry)
 {
 	if (xa_to_value(entry) & DAX_PMD)
-		return PMD_ORDER;
+		return PMD_PAGE_ORDER;
 	return 0;
 }
 
@@ -1470,7 +1467,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct address_space *mapping = vma->vm_file->f_mapping;
-	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_ORDER);
+	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_PAGE_ORDER);
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool sync;
@@ -1529,7 +1526,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * entry is already in the array, for instance), it will return
 	 * VM_FAULT_FALLBACK.
 	 */
-	entry = grab_mapping_entry(&xas, mapping, PMD_ORDER);
+	entry = grab_mapping_entry(&xas, mapping, PMD_PAGE_ORDER);
 	if (xa_is_internal(entry)) {
 		result = xa_to_internal(entry);
 		goto fallback;
@@ -1695,7 +1692,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
 	if (order == 0)
 		ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
 #ifdef CONFIG_FS_DAX_PMD
-	else if (order == PMD_ORDER)
+	else if (order == PMD_PAGE_ORDER)
 		ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
 #endif
 	else
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 71125a4676c4..7f718b8dc789 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -28,6 +28,9 @@
 #define USER_PGTABLES_CEILING	0UL
 #endif
 
+/* Number of base pages in a second level leaf page */
+#define PMD_PAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)
+
 /*
  * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
  *
-- 
2.28.0



* [PATCH v8 2/9] mmap: make mlock_future_check() global
  2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
  2020-11-10 15:14 ` [PATCH v8 1/9] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
@ 2020-11-10 15:14 ` Mike Rapoport
  2020-11-10 17:17   ` David Hildenbrand
  2020-11-10 15:14 ` [PATCH v8 3/9] set_memory: allow set_direct_map_*_noflush() for multiple pages Mike Rapoport
                   ` (7 subsequent siblings)
  9 siblings, 1 reply; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 15:14 UTC
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

It will be used by the upcoming secret memory implementation.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/internal.h | 3 +++
 mm/mmap.c     | 5 ++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index c43ccdddb0f6..ae146a260b14 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
 extern void mlock_vma_page(struct page *page);
 extern unsigned int munlock_vma_page(struct page *page);
 
+extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+			      unsigned long len);
+
 /*
  * Clear the page's PageMlocked().  This can be useful in a situation where
  * we want to unconditionally remove a page from the pagecache -- e.g.,
diff --git a/mm/mmap.c b/mm/mmap.c
index 61f72b09d990..c481f088bd50 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
 	return hint;
 }
 
-static inline int mlock_future_check(struct mm_struct *mm,
-				     unsigned long flags,
-				     unsigned long len)
+int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+		       unsigned long len)
 {
 	unsigned long locked, lock_limit;
 
-- 
2.28.0



* [PATCH v8 3/9] set_memory: allow set_direct_map_*_noflush() for multiple pages
  2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
  2020-11-10 15:14 ` [PATCH v8 1/9] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
  2020-11-10 15:14 ` [PATCH v8 2/9] mmap: make mlock_future_check() global Mike Rapoport
@ 2020-11-10 15:14 ` Mike Rapoport
  2020-11-13 12:26   ` Catalin Marinas
  2020-11-10 15:14 ` [PATCH v8 4/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 15:14 UTC
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

The underlying implementations of set_direct_map_invalid_noflush() and
set_direct_map_default_noflush() allow updating multiple contiguous pages
at once.

Add a numpages parameter to set_direct_map_*_noflush() to expose this
ability through these APIs.
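
A minimal sketch of the resulting calling convention (hypothetical
caller; flushing the TLB remains the caller's responsibility, as the
_noflush suffix implies):

	unsigned long addr = (unsigned long)page_address(page);
	int nr = 512, err;

	/* drop 512 contiguous pages from the direct map in one call */
	err = set_direct_map_invalid_noflush(page, nr);
	if (!err)
		flush_tlb_kernel_range(addr, addr + nr * PAGE_SIZE);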

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 arch/arm64/include/asm/cacheflush.h |  4 ++--
 arch/arm64/mm/pageattr.c            | 10 ++++++----
 arch/riscv/include/asm/set_memory.h |  4 ++--
 arch/riscv/mm/pageattr.c            |  8 ++++----
 arch/x86/include/asm/set_memory.h   |  4 ++--
 arch/x86/mm/pat/set_memory.c        |  8 ++++----
 include/linux/set_memory.h          |  4 ++--
 kernel/power/snapshot.c             |  4 ++--
 mm/vmalloc.c                        |  5 +++--
 9 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 45217f21f1fe..d3598419a284 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -138,8 +138,8 @@ static __always_inline void __flush_icache_all(void)
 
 int set_memory_valid(unsigned long addr, int numpages, int enable);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 #include <asm-generic/cacheflush.h>
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 92eccaf595c8..b53ef37bf95a 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,34 +148,36 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
 					__pgprot(PTE_VALID));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	struct page_change_data data = {
 		.set_mask = __pgprot(0),
 		.clear_mask = __pgprot(PTE_VALID),
 	};
+	unsigned long size = PAGE_SIZE * numpages;
 
 	if (!debug_pagealloc_enabled() && !rodata_full)
 		return 0;
 
 	return apply_to_page_range(&init_mm,
 				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+				   size, change_page_range, &data);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	struct page_change_data data = {
 		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
 		.clear_mask = __pgprot(PTE_RDONLY),
 	};
+	unsigned long size = PAGE_SIZE * numpages;
 
 	if (!debug_pagealloc_enabled() && !rodata_full)
 		return 0;
 
 	return apply_to_page_range(&init_mm,
 				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+				   size, change_page_range, &data);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index d690b08dff2a..92b9bb26bf5e 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -22,8 +22,8 @@ static inline int set_memory_x(unsigned long addr, int numpages) { return 0; }
 static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 87ba5a68bbb8..0454f2d052c4 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -150,11 +150,11 @@ int set_memory_nx(unsigned long addr, int numpages)
 	return __set_memory(addr, numpages, __pgprot(0), __pgprot(_PAGE_EXEC));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	int ret;
 	unsigned long start = (unsigned long)page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + PAGE_SIZE * numpages;
 	struct pageattr_masks masks = {
 		.set_mask = __pgprot(0),
 		.clear_mask = __pgprot(_PAGE_PRESENT)
@@ -167,11 +167,11 @@ int set_direct_map_invalid_noflush(struct page *page)
 	return ret;
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	int ret;
 	unsigned long start = (unsigned long)page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + PAGE_SIZE * numpages;
 	struct pageattr_masks masks = {
 		.set_mask = PAGE_KERNEL,
 		.clear_mask = __pgprot(0)
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 4352f08bfbb5..6224cb291f6c 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -80,8 +80,8 @@ int set_pages_wb(struct page *page, int numpages);
 int set_pages_ro(struct page *page, int numpages);
 int set_pages_rw(struct page *page, int numpages);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 16f878c26667..d157fd617c99 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2184,14 +2184,14 @@ static int __set_pages_np(struct page *page, int numpages)
 	return __change_page_attr_set_clr(&cpa, 0);
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
-	return __set_pages_np(page, 1);
+	return __set_pages_np(page, numpages);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
-	return __set_pages_p(page, 1);
+	return __set_pages_p(page, numpages);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index fe1aa4e54680..c650f82db813 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -15,11 +15,11 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
 #ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
-static inline int set_direct_map_invalid_noflush(struct page *page)
+static inline int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	return 0;
 }
-static inline int set_direct_map_default_noflush(struct page *page)
+static inline int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	return 0;
 }
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 069576704c57..d40bb6666735 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -89,9 +89,9 @@ static inline void hibernate_map_page(struct page *page, int enable)
 		 * changes and this will no longer be the case.
 		 */
 		if (enable)
-			ret = set_direct_map_default_noflush(page);
+			ret = set_direct_map_default_noflush(page, 1);
 		else
-			ret = set_direct_map_invalid_noflush(page);
+			ret = set_direct_map_invalid_noflush(page, 1);
 
 		if (ret) {
 			pr_warn_once("Failed to remap page\n");
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d7075ad340aa..7e903524e002 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2179,13 +2179,14 @@ struct vm_struct *remove_vm_area(const void *addr)
 }
 
 static inline void set_area_direct_map(const struct vm_struct *area,
-				       int (*set_direct_map)(struct page *page))
+				       int (*set_direct_map)(struct page *page,
+							     int numpages))
 {
 	int i;
 
 	for (i = 0; i < area->nr_pages; i++)
 		if (page_address(area->pages[i]))
-			set_direct_map(area->pages[i]);
+			set_direct_map(area->pages[i], 1);
 }
 
 /* Handle removing and resetting vm mappings related to the vm_struct. */
-- 
2.28.0



* [PATCH v8 4/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (2 preceding siblings ...)
  2020-11-10 15:14 ` [PATCH v8 3/9] set_memory: allow set_direct_map_*_noflush() for multiple pages Mike Rapoport
@ 2020-11-10 15:14 ` Mike Rapoport
  2020-11-13 13:58   ` Matthew Wilcox
  2020-11-13 14:06   ` Matthew Wilcox
  2020-11-10 15:14 ` [PATCH v8 5/9] secretmem: use PMD-size pages to amortize direct map fragmentation Mike Rapoport
                   ` (5 subsequent siblings)
  9 siblings, 2 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 15:14 UTC
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer

From: Mike Rapoport <rppt@linux.ibm.com>

Introduce "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped not
only to other processes but in the kernel page tables as well.

The user creates a file descriptor using the memfd_secret() system call,
where the flags supplied as a parameter define the desired protection
mode for the memory associated with that file descriptor.

The secret memory remains accessible in the process context using uaccess
primitives, but it is not accessible using direct/linear map addresses.

Functions in the follow_page()/get_user_page() family will refuse to return
a page that belongs to the secret memory area.

A page that was a part of the secret memory area is cleared when it is
freed.

Currently there are two protection modes:

* exclusive - the memory area is unmapped from the kernel direct map and it
              is present only in the page tables of the owning mm.
* uncached  - the memory area is present only in the page tables of the
              owning mm and it is mapped there as uncached.

The "exclusive" mode is enabled implicitly and it is the default mode for
memfd_secret().

The "uncached" mode requires architecture support and an architecture
should opt-in for this mode using HAVE_SECRETMEM_UNCACHED configuration
option.

For instance, the following example will create an uncached mapping (error
handling is omitted):

	fd = memfd_secret(SECRETMEM_UNCACHED);
	ftruncate(fd, MAP_SIZE);
	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
---
 arch/Kconfig                   |   7 +
 arch/x86/Kconfig               |   1 +
 include/linux/secretmem.h      |  24 +++
 include/uapi/linux/magic.h     |   1 +
 include/uapi/linux/secretmem.h |   8 +
 kernel/sys_ni.c                |   2 +
 mm/Kconfig                     |   3 +
 mm/Makefile                    |   1 +
 mm/gup.c                       |  10 ++
 mm/secretmem.c                 | 280 +++++++++++++++++++++++++++++++++
 10 files changed, 337 insertions(+)
 create mode 100644 include/linux/secretmem.h
 create mode 100644 include/uapi/linux/secretmem.h
 create mode 100644 mm/secretmem.c

diff --git a/arch/Kconfig b/arch/Kconfig
index e175529bfb12..0b54b9d8a21f 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1041,6 +1041,13 @@ config ARCH_SUPPORTS_DEBUG_PAGEALLOC
 config HAVE_ARCH_PFN_VALID
 	bool
 
+config HAVE_SECRETMEM_UNCACHED
+	bool
+	help
+	  An architecture can select this if its semantics of non-cached
+	  mappings can be used to prevent speculative loads and it is
+	  useful for secret protection.
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 34d5fb82f674..907e24ae7698 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -224,6 +224,7 @@ config X86
 	select HAVE_UNSTABLE_SCHED_CLOCK
 	select HAVE_USER_RETURN_NOTIFIER
 	select HAVE_GENERIC_VDSO
+	select HAVE_SECRETMEM_UNCACHED
 	select HOTPLUG_SMT			if SMP
 	select IRQ_FORCED_THREADING
 	select NEED_SG_DMA_LENGTH
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
new file mode 100644
index 000000000000..70e7db9f94fe
--- /dev/null
+++ b/include/linux/secretmem.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_SECRETMEM_H
+#define _LINUX_SECRETMEM_H
+
+#ifdef CONFIG_SECRETMEM
+
+bool vma_is_secretmem(struct vm_area_struct *vma);
+bool page_is_secretmem(struct page *page);
+
+#else
+
+static inline bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+	return false;
+}
+
+static inline bool page_is_secretmem(struct page *page)
+{
+	return false;
+}
+
+#endif /* CONFIG_SECRETMEM */
+
+#endif /* _LINUX_SECRETMEM_H */
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index f3956fc11de6..35687dcb1a42 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -97,5 +97,6 @@
 #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
 #define Z3FOLD_MAGIC		0x33
 #define PPC_CMM_MAGIC		0xc7571590
+#define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/include/uapi/linux/secretmem.h b/include/uapi/linux/secretmem.h
new file mode 100644
index 000000000000..7cf9492c70d2
--- /dev/null
+++ b/include/uapi/linux/secretmem.h
@@ -0,0 +1,8 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _UAPI_LINUX_SECRETMEM_H
+#define _UAPI_LINUX_SECRETMEM_H
+
+/* secretmem operation modes */
+#define SECRETMEM_UNCACHED	0x1
+
+#endif /* _UAPI_LINUX_SECRETMEM_H */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 2dd6cbb8cabc..805fd7a668be 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -353,6 +353,8 @@ COND_SYSCALL(pkey_mprotect);
 COND_SYSCALL(pkey_alloc);
 COND_SYSCALL(pkey_free);
 
+/* memfd_secret */
+COND_SYSCALL(memfd_secret);
 
 /*
  * Architecture specific weak syscall entries.
diff --git a/mm/Kconfig b/mm/Kconfig
index c89c5444924b..d8d170fa5210 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -884,4 +884,7 @@ config ARCH_HAS_HUGEPD
 config MAPPING_DIRTY_HELPERS
         bool
 
+config SECRETMEM
+	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 6eeb4b29efb8..dfda14c48a75 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -121,3 +121,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
 obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
 obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
 obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_SECRETMEM) += secretmem.o
diff --git a/mm/gup.c b/mm/gup.c
index 5ec98de1e5de..71164fa83114 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -10,6 +10,7 @@
 #include <linux/rmap.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/secretmem.h>
 
 #include <linux/sched/signal.h>
 #include <linux/rwsem.h>
@@ -793,6 +794,9 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 	struct follow_page_context ctx = { NULL };
 	struct page *page;
 
+	if (vma_is_secretmem(vma))
+		return NULL;
+
 	page = follow_page_mask(vma, address, foll_flags, &ctx);
 	if (ctx.pgmap)
 		put_dev_pagemap(ctx.pgmap);
@@ -923,6 +927,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if (gup_flags & FOLL_ANON && !vma_is_anonymous(vma))
 		return -EFAULT;
 
+	if (vma_is_secretmem(vma))
+		return -EFAULT;
+
 	if (write) {
 		if (!(vm_flags & VM_WRITE)) {
 			if (!(gup_flags & FOLL_FORCE))
@@ -2196,6 +2203,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
+		if (page_is_secretmem(page))
+			goto pte_unmap;
+
 		head = try_grab_compound_head(page, 1, flags);
 		if (!head)
 			goto pte_unmap;
diff --git a/mm/secretmem.c b/mm/secretmem.c
new file mode 100644
index 000000000000..7b24f0bcde7b
--- /dev/null
+++ b/mm/secretmem.c
@@ -0,0 +1,280 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/mount.h>
+#include <linux/memfd.h>
+#include <linux/bitops.h>
+#include <linux/printk.h>
+#include <linux/pagemap.h>
+#include <linux/syscalls.h>
+#include <linux/pseudo_fs.h>
+#include <linux/set_memory.h>
+#include <linux/sched/signal.h>
+
+#include <uapi/linux/secretmem.h>
+#include <uapi/linux/magic.h>
+
+#include <asm/tlbflush.h>
+
+#include "internal.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "secretmem: " fmt
+
+/*
+ * Secret memory areas are always exclusive to owning mm and they are
+ * removed from the direct map.
+ */
+#ifdef CONFIG_HAVE_SECRETMEM_UNCACHED
+#define SECRETMEM_MODE_MASK	(SECRETMEM_UNCACHED)
+#else
+#define SECRETMEM_MODE_MASK	(0x0)
+#endif
+
+#define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
+
+struct secretmem_ctx {
+	unsigned int mode;
+};
+
+static struct page *secretmem_alloc_page(gfp_t gfp)
+{
+	/*
+	 * FIXME: use a cache of large pages to reduce the direct map
+	 * fragmentation
+	 */
+	return alloc_page(gfp);
+}
+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t offset = vmf->pgoff;
+	unsigned long addr;
+	struct page *page;
+	int ret = 0;
+
+	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+		return vmf_error(-EINVAL);
+
+	page = find_get_entry(mapping, offset);
+	if (!page) {
+		page = secretmem_alloc_page(vmf->gfp_mask);
+		if (!page)
+			return vmf_error(-EINVAL);
+
+		ret = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
+		if (unlikely(ret))
+			goto err_put_page;
+
+		ret = set_direct_map_invalid_noflush(page, 1);
+		if (ret)
+			goto err_del_page_cache;
+
+		addr = (unsigned long)page_address(page);
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+		__SetPageUptodate(page);
+
+		ret = VM_FAULT_LOCKED;
+	}
+
+	vmf->page = page;
+	return ret;
+
+err_del_page_cache:
+	delete_from_page_cache(page);
+err_put_page:
+	put_page(page);
+	return vmf_error(ret);
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+	.fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	struct secretmem_ctx *ctx = file->private_data;
+	unsigned long len = vma->vm_end - vma->vm_start;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+		return -EINVAL;
+
+	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+		return -EAGAIN;
+
+	if (ctx->mode & SECRETMEM_UNCACHED)
+		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+
+	vma->vm_ops = &secretmem_vm_ops;
+	vma->vm_flags |= VM_LOCKED;
+
+	return 0;
+}
+
+bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+	return vma->vm_ops == &secretmem_vm_ops;
+}
+
+const struct file_operations secretmem_fops = {
+	.mmap		= secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+	return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+				 struct page *newpage, struct page *page,
+				 enum migrate_mode mode)
+{
+	return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+	set_direct_map_default_noflush(page, 1);
+	clear_highpage(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+	.freepage	= secretmem_freepage,
+	.migratepage	= secretmem_migratepage,
+	.isolate_page	= secretmem_isolate_page,
+};
+
+bool page_is_secretmem(struct page *page)
+{
+	struct address_space *mapping = page_mapping(page);
+
+	if (!mapping)
+		return false;
+
+	return mapping->a_ops == &secretmem_aops;
+}
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+	struct file *file = ERR_PTR(-ENOMEM);
+	struct secretmem_ctx *ctx;
+	struct inode *inode;
+
+	inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+	if (IS_ERR(inode))
+		return ERR_CAST(inode);
+
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+	if (!ctx)
+		goto err_free_inode;
+
+	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+				 O_RDWR, &secretmem_fops);
+	if (IS_ERR(file))
+		goto err_free_ctx;
+
+	mapping_set_unevictable(inode->i_mapping);
+
+	inode->i_mapping->private_data = ctx;
+	inode->i_mapping->a_ops = &secretmem_aops;
+
+	/* pretend we are a normal file with zero size */
+	inode->i_mode |= S_IFREG;
+	inode->i_size = 0;
+
+	file->private_data = ctx;
+
+	ctx->mode = flags & SECRETMEM_MODE_MASK;
+
+	return file;
+
+err_free_ctx:
+	kfree(ctx);
+err_free_inode:
+	iput(inode);
+	return file;
+}
+
+SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
+{
+	struct file *file;
+	int fd, err;
+
+	/* make sure local flags do not conflict with global fcntl.h */
+	BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(flags & O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	file = secretmem_file_create(flags);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_put_fd;
+	}
+
+	file->f_flags |= O_LARGEFILE;
+
+	fd_install(fd, file);
+	return fd;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return err;
+}
+
+static void secretmem_evict_inode(struct inode *inode)
+{
+	struct secretmem_ctx *ctx = inode->i_private;
+
+	truncate_inode_pages_final(&inode->i_data);
+	clear_inode(inode);
+	kfree(ctx);
+}
+
+static const struct super_operations secretmem_super_ops = {
+	.evict_inode = secretmem_evict_inode,
+};
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+	struct pseudo_fs_context *ctx = init_pseudo(fc, SECRETMEM_MAGIC);
+
+	if (!ctx)
+		return -ENOMEM;
+	ctx->ops = &secretmem_super_ops;
+
+	return 0;
+}
+
+static struct file_system_type secretmem_fs = {
+	.name		= "secretmem",
+	.init_fs_context = secretmem_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+	int ret = 0;
+
+	secretmem_mnt = kern_mount(&secretmem_fs);
+	if (IS_ERR(secretmem_mnt))
+		ret = PTR_ERR(secretmem_mnt);
+
+	return ret;
+}
+fs_initcall(secretmem_init);
-- 
2.28.0



* [PATCH v8 5/9] secretmem: use PMD-size pages to amortize direct map fragmentation
  2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (3 preceding siblings ...)
  2020-11-10 15:14 ` [PATCH v8 4/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
@ 2020-11-10 15:14 ` Mike Rapoport
  2020-11-10 15:14 ` [PATCH v8 6/9] secretmem: add memcg accounting Mike Rapoport
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 15:14 UTC
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Removing a PAGE_SIZE page from the direct map every time such a page is
allocated for a secret memory mapping will cause severe fragmentation of
the direct map. This fragmentation can be reduced by using PMD-size pages
as a pool of small pages for secret memory mappings.

Add a gen_pool per secretmem inode and lazily populate this pool with
PMD-size pages.

As pages allocated by secretmem become unmovable, use CMA to back the
large page caches so that the page allocator won't be surprised by a
failing attempt to migrate these pages.

The CMA area used by secretmem is controlled by the "secretmem=" kernel
parameter. This allows explicit control over the memory available for
secretmem and provides an upper hard limit on secretmem consumption.
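
For example (illustrative size), booting with

	secretmem=256M

reserves a 256M CMA area for secretmem; with 2M PMD pages that allows at
most 128 pool chunks across all secretmem users. The size accepts the
usual memparse() suffixes (K, M, G).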

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/Kconfig     |   2 +
 mm/secretmem.c | 151 +++++++++++++++++++++++++++++++++++++++++++------
 2 files changed, 135 insertions(+), 18 deletions(-)

diff --git a/mm/Kconfig b/mm/Kconfig
index d8d170fa5210..e0e789398421 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -886,5 +886,7 @@ config MAPPING_DIRTY_HELPERS
 
 config SECRETMEM
 	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+	select GENERIC_ALLOCATOR
+	select CMA
 
 endmenu
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 7b24f0bcde7b..1aa2b7cffe0d 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -7,12 +7,15 @@
 
 #include <linux/mm.h>
 #include <linux/fs.h>
+#include <linux/cma.h>
 #include <linux/mount.h>
 #include <linux/memfd.h>
 #include <linux/bitops.h>
 #include <linux/printk.h>
 #include <linux/pagemap.h>
+#include <linux/genalloc.h>
 #include <linux/syscalls.h>
+#include <linux/memblock.h>
 #include <linux/pseudo_fs.h>
 #include <linux/set_memory.h>
 #include <linux/sched/signal.h>
@@ -40,24 +43,79 @@
 #define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
 
 struct secretmem_ctx {
+	struct gen_pool *pool;
 	unsigned int mode;
 };
 
-static struct page *secretmem_alloc_page(gfp_t gfp)
+static struct cma *secretmem_cma;
+
+static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 {
+	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
+	struct gen_pool *pool = ctx->pool;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
+	if (!page)
+		return -ENOMEM;
+
+	err = set_direct_map_invalid_noflush(page, nr_pages);
+	if (err)
+		goto err_cma_release;
+
+	addr = (unsigned long)page_address(page);
+	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
+	if (err)
+		goto err_set_direct_map;
+
+	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+
+	return 0;
+
+err_set_direct_map:
 	/*
-	 * FIXME: use a cache of large pages to reduce the direct map
-	 * fragmentation
+	 * If a split of PUD-size page was required, it already happened
+	 * when we marked the pages invalid which guarantees that this call
+	 * won't fail
 	 */
-	return alloc_page(gfp);
+	set_direct_map_default_noflush(page, nr_pages);
+err_cma_release:
+	cma_release(secretmem_cma, page, nr_pages);
+	return err;
+}
+
+static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
+					 gfp_t gfp)
+{
+	struct gen_pool *pool = ctx->pool;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	if (gen_pool_avail(pool) < PAGE_SIZE) {
+		err = secretmem_pool_increase(ctx, gfp);
+		if (err)
+			return NULL;
+	}
+
+	addr = gen_pool_alloc(pool, PAGE_SIZE);
+	if (!addr)
+		return NULL;
+
+	page = virt_to_page(addr);
+	get_page(page);
+
+	return page;
 }
 
 static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 {
+	struct secretmem_ctx *ctx = vmf->vma->vm_file->private_data;
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	struct inode *inode = file_inode(vmf->vma->vm_file);
 	pgoff_t offset = vmf->pgoff;
-	unsigned long addr;
 	struct page *page;
 	int ret = 0;
 
@@ -66,7 +124,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 
 	page = find_get_entry(mapping, offset);
 	if (!page) {
-		page = secretmem_alloc_page(vmf->gfp_mask);
+		page = secretmem_alloc_page(ctx, vmf->gfp_mask);
 		if (!page)
 			return vmf_error(-EINVAL);
 
@@ -74,14 +132,8 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 		if (unlikely(ret))
 			goto err_put_page;
 
-		ret = set_direct_map_invalid_noflush(page, 1);
-		if (ret)
-			goto err_del_page_cache;
-
-		addr = (unsigned long)page_address(page);
-		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
-
 		__SetPageUptodate(page);
+		set_page_private(page, (unsigned long)ctx);
 
 		ret = VM_FAULT_LOCKED;
 	}
@@ -89,8 +141,6 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 	vmf->page = page;
 	return ret;
 
-err_del_page_cache:
-	delete_from_page_cache(page);
 err_put_page:
 	put_page(page);
 	return vmf_error(ret);
@@ -143,8 +193,11 @@ static int secretmem_migratepage(struct address_space *mapping,
 
 static void secretmem_freepage(struct page *page)
 {
-	set_direct_map_default_noflush(page, 1);
-	clear_highpage(page);
+	unsigned long addr = (unsigned long)page_address(page);
+	struct secretmem_ctx *ctx = (struct secretmem_ctx *)page_private(page);
+	struct gen_pool *pool = ctx->pool;
+
+	gen_pool_free(pool, addr, PAGE_SIZE);
 }
 
 static const struct address_space_operations secretmem_aops = {
@@ -179,13 +232,18 @@ static struct file *secretmem_file_create(unsigned long flags)
 	if (!ctx)
 		goto err_free_inode;
 
+	ctx->pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+	if (!ctx->pool)
+		goto err_free_ctx;
+
 	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
 				 O_RDWR, &secretmem_fops);
 	if (IS_ERR(file))
-		goto err_free_ctx;
+		goto err_free_pool;
 
 	mapping_set_unevictable(inode->i_mapping);
 
+	inode->i_private = ctx;
 	inode->i_mapping->private_data = ctx;
 	inode->i_mapping->a_ops = &secretmem_aops;
 
@@ -199,6 +257,8 @@ static struct file *secretmem_file_create(unsigned long flags)
 
 	return file;
 
+err_free_pool:
+	gen_pool_destroy(ctx->pool);
 err_free_ctx:
 	kfree(ctx);
 err_free_inode:
@@ -217,6 +277,9 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
 		return -EINVAL;
 
+	if (!secretmem_cma)
+		return -ENOMEM;
+
 	fd = get_unused_fd_flags(flags & O_CLOEXEC);
 	if (fd < 0)
 		return fd;
@@ -237,11 +300,37 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	return err;
 }
 
+static void secretmem_cleanup_chunk(struct gen_pool *pool,
+				    struct gen_pool_chunk *chunk, void *data)
+{
+	unsigned long start = chunk->start_addr;
+	unsigned long end = chunk->end_addr;
+	struct page *page = virt_to_page(start);
+	unsigned long nr_pages = (end - start + 1) / PAGE_SIZE;
+	int i;
+
+	set_direct_map_default_noflush(page, nr_pages);
+
+	for (i = 0; i < nr_pages; i++)
+		clear_highpage(page + i);
+
+	cma_release(secretmem_cma, page, nr_pages);
+}
+
+static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
+{
+	struct gen_pool *pool = ctx->pool;
+
+	gen_pool_for_each_chunk(pool, secretmem_cleanup_chunk, ctx);
+	gen_pool_destroy(pool);
+}
+
 static void secretmem_evict_inode(struct inode *inode)
 {
 	struct secretmem_ctx *ctx = inode->i_private;
 
 	truncate_inode_pages_final(&inode->i_data);
+	secretmem_cleanup_pool(ctx);
 	clear_inode(inode);
 	kfree(ctx);
 }
@@ -278,3 +367,29 @@ static int secretmem_init(void)
 	return ret;
 }
 fs_initcall(secretmem_init);
+
+static int __init secretmem_setup(char *str)
+{
+	phys_addr_t align = PMD_SIZE;
+	unsigned long reserved_size;
+	int err;
+
+	reserved_size = memparse(str, NULL);
+	if (!reserved_size)
+		return 0;
+
+	if (reserved_size * 2 > PUD_SIZE)
+		align = PUD_SIZE;
+
+	err = cma_declare_contiguous(0, reserved_size, 0, align, 0, false,
+				     "secretmem", &secretmem_cma);
+	if (err) {
+		pr_err("failed to create CMA: %d\n", err);
+		return err;
+	}
+
+	pr_info("reserved %luM\n", reserved_size >> 20);
+
+	return 0;
+}
+__setup("secretmem=", secretmem_setup);
-- 
2.28.0



* [PATCH v8 6/9] secretmem: add memcg accounting
  2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (4 preceding siblings ...)
  2020-11-10 15:14 ` [PATCH v8 5/9] secretmem: use PMD-size pages to amortize direct map fragmentation Mike Rapoport
@ 2020-11-10 15:14 ` Mike Rapoport
  2020-11-13  1:35   ` Andrew Morton
  2020-11-13 23:42   ` Roman Gushchin
  2020-11-10 15:14 ` [PATCH v8 7/9] PM: hibernate: disable when there are active secretmem users Mike Rapoport
                   ` (3 subsequent siblings)
  9 siblings, 2 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 15:14 UTC
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Account memory consumed by secretmem to memcg. The accounting is updated
when the memory is actually allocated and freed.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 mm/filemap.c   |  2 +-
 mm/secretmem.c | 42 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 249cf489f5df..11387a077373 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -844,7 +844,7 @@ static noinline int __add_to_page_cache_locked(struct page *page,
 	page->mapping = mapping;
 	page->index = offset;
 
-	if (!huge) {
+	if (!huge && !page->memcg_data) {
 		error = mem_cgroup_charge(page, current->mm, gfp);
 		if (error)
 			goto error;
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 1aa2b7cffe0d..1eb7667016fa 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -17,6 +17,7 @@
 #include <linux/syscalls.h>
 #include <linux/memblock.h>
 #include <linux/pseudo_fs.h>
+#include <linux/memcontrol.h>
 #include <linux/set_memory.h>
 #include <linux/sched/signal.h>
 
@@ -49,6 +50,38 @@ struct secretmem_ctx {
 
 static struct cma *secretmem_cma;
 
+static int secretmem_memcg_charge(struct page *page, gfp_t gfp, int order)
+{
+	unsigned long nr_pages = (1 << order);
+	int i, err;
+
+	err = memcg_kmem_charge_page(page, gfp, order);
+	if (err)
+		return err;
+
+	for (i = 1; i < nr_pages; i++) {
+		struct page *p = page + i;
+
+		p->memcg_data = page->memcg_data;
+	}
+
+	return 0;
+}
+
+static void secretmem_memcg_uncharge(struct page *page, int order)
+{
+	unsigned long nr_pages = (1 << order);
+	int i;
+
+	for (i = 1; i < nr_pages; i++) {
+		struct page *p = page + i;
+
+		p->memcg_data = 0;
+	}
+
+	memcg_kmem_uncharge_page(page, PMD_PAGE_ORDER);
+}
+
 static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 {
 	unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
@@ -61,10 +94,14 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	if (!page)
 		return -ENOMEM;
 
-	err = set_direct_map_invalid_noflush(page, nr_pages);
+	err = secretmem_memcg_charge(page, gfp, PMD_PAGE_ORDER);
 	if (err)
 		goto err_cma_release;
 
+	err = set_direct_map_invalid_noflush(page, nr_pages);
+	if (err)
+		goto err_memcg_uncharge;
+
 	addr = (unsigned long)page_address(page);
 	err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
 	if (err)
@@ -81,6 +118,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
 	 * won't fail
 	 */
 	set_direct_map_default_noflush(page, nr_pages);
+err_memcg_uncharge:
+	secretmem_memcg_uncharge(page, PMD_PAGE_ORDER);
 err_cma_release:
 	cma_release(secretmem_cma, page, nr_pages);
 	return err;
@@ -310,6 +349,7 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
 	int i;
 
 	set_direct_map_default_noflush(page, nr_pages);
+	secretmem_memcg_uncharge(page, PMD_PAGE_ORDER);
 
 	for (i = 0; i < nr_pages; i++)
 		clear_highpage(page + i);
-- 
2.28.0



* [PATCH v8 7/9] PM: hibernate: disable when there are active secretmem users
  2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (5 preceding siblings ...)
  2020-11-10 15:14 ` [PATCH v8 6/9] secretmem: add memcg accounting Mike Rapoport
@ 2020-11-10 15:14 ` Mike Rapoport
  2020-11-10 15:14 ` [PATCH v8 8/9] arch, mm: wire up memfd_secret system call where relevant Mike Rapoport
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 15:14 UTC
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

It is unsafe to allow saving of secretmem areas to the hibernation
snapshot as they would be visible after the resume, which would
essentially defeat the purpose of secret memory mappings.

Prevent hibernation whenever there are active secret memory users.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 include/linux/secretmem.h |  6 ++++++
 kernel/power/hibernate.c  |  5 ++++-
 mm/secretmem.c            | 16 ++++++++++++++++
 3 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 70e7db9f94fe..907a6734059c 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -6,6 +6,7 @@
 
 bool vma_is_secretmem(struct vm_area_struct *vma);
 bool page_is_secretmem(struct page *page);
+bool secretmem_active(void);
 
 #else
 
@@ -19,6 +20,11 @@ static inline bool page_is_secretmem(struct page *page)
 	return false;
 }
 
+static inline bool secretmem_active(void)
+{
+	return false;
+}
+
 #endif /* CONFIG_SECRETMEM */
 
 #endif /* _LINUX_SECRETMEM_H */
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index da0b41914177..559acef3fddb 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -31,6 +31,7 @@
 #include <linux/genhd.h>
 #include <linux/ktime.h>
 #include <linux/security.h>
+#include <linux/secretmem.h>
 #include <trace/events/power.h>
 
 #include "power.h"
@@ -81,7 +82,9 @@ void hibernate_release(void)
 
 bool hibernation_available(void)
 {
-	return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION);
+	return nohibernate == 0 &&
+		!security_locked_down(LOCKDOWN_HIBERNATION) &&
+		!secretmem_active();
 }
 
 /**
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 1eb7667016fa..5ed6b2070136 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -50,6 +50,13 @@ struct secretmem_ctx {
 
 static struct cma *secretmem_cma;
 
+static atomic_t secretmem_users;
+
+bool secretmem_active(void)
+{
+	return !!atomic_read(&secretmem_users);
+}
+
 static int secretmem_memcg_charge(struct page *page, gfp_t gfp, int order)
 {
 	unsigned long nr_pages = (1 << order);
@@ -189,6 +196,12 @@ static const struct vm_operations_struct secretmem_vm_ops = {
 	.fault = secretmem_fault,
 };
 
+static int secretmem_release(struct inode *inode, struct file *file)
+{
+	atomic_dec(&secretmem_users);
+	return 0;
+}
+
 static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	struct secretmem_ctx *ctx = file->private_data;
@@ -214,7 +227,9 @@ bool vma_is_secretmem(struct vm_area_struct *vma)
 	return vma->vm_ops == &secretmem_vm_ops;
 }
 
+
 const struct file_operations secretmem_fops = {
+	.release	= secretmem_release,
 	.mmap		= secretmem_mmap,
 };
 
@@ -332,6 +347,7 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	file->f_flags |= O_LARGEFILE;
 
 	fd_install(fd, file);
+	atomic_inc(&secretmem_users);
 	return fd;
 
 err_put_fd:
-- 
2.28.0
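
[ A quick illustration of the effect of this patch: with any process
  holding a memfd_secret() file descriptor open, an attempt to hibernate
  through the usual sysfs interface is expected to be refused, e.g.:

      # echo disk > /sys/power/state
      sh: echo: write error: Operation not permitted

  The exact error surfaced depends on the caller; the gate is the
  hibernation_available() check above. ]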


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v8 8/9] arch, mm: wire up memfd_secret system call where relevant
  2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (6 preceding siblings ...)
  2020-11-10 15:14 ` [PATCH v8 7/9] PM: hibernate: disable when there are active secretmem users Mike Rapoport
@ 2020-11-10 15:14 ` Mike Rapoport
  2020-11-13 12:25   ` Catalin Marinas
  2020-11-10 15:14 ` [PATCH v8 9/9] secretmem: test: add basic selftest for memfd_secret(2) Mike Rapoport
  2020-11-12 14:56 ` [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
  9 siblings, 1 reply; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 15:14 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

Wire up memfd_secret system call on the architectures that define
ARCH_HAS_SET_DIRECT_MAP, namely arm64, RISC-V and x86.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
---
 arch/arm64/include/asm/unistd.h        | 2 +-
 arch/arm64/include/asm/unistd32.h      | 2 ++
 arch/arm64/include/uapi/asm/unistd.h   | 1 +
 arch/riscv/include/asm/unistd.h        | 1 +
 arch/x86/entry/syscalls/syscall_32.tbl | 1 +
 arch/x86/entry/syscalls/syscall_64.tbl | 1 +
 include/linux/syscalls.h               | 1 +
 include/uapi/asm-generic/unistd.h      | 6 +++++-
 scripts/checksyscalls.sh               | 4 ++++
 9 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h
index 86a9d7b3eabe..949788f5ba40 100644
--- a/arch/arm64/include/asm/unistd.h
+++ b/arch/arm64/include/asm/unistd.h
@@ -38,7 +38,7 @@
 #define __ARM_NR_compat_set_tls		(__ARM_NR_COMPAT_BASE + 5)
 #define __ARM_NR_COMPAT_END		(__ARM_NR_COMPAT_BASE + 0x800)
 
-#define __NR_compat_syscalls		442
+#define __NR_compat_syscalls		443
 #endif
 
 #define __ARCH_WANT_SYS_CLONE
diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index 6c1dcca067e0..c71c3fe0b6cd 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -891,6 +891,8 @@ __SYSCALL(__NR_faccessat2, sys_faccessat2)
 __SYSCALL(__NR_process_madvise, sys_process_madvise)
 #define __NR_watch_mount 441
 __SYSCALL(__NR_watch_mount, sys_watch_mount)
+#define __NR_memfd_secret 442
+__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
 
 /*
  * Please add new compat syscalls above this comment and update
diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h
index f83a70e07df8..ce2ee8f1e361 100644
--- a/arch/arm64/include/uapi/asm/unistd.h
+++ b/arch/arm64/include/uapi/asm/unistd.h
@@ -20,5 +20,6 @@
 #define __ARCH_WANT_SET_GET_RLIMIT
 #define __ARCH_WANT_TIME32_SYSCALLS
 #define __ARCH_WANT_SYS_CLONE3
+#define __ARCH_WANT_MEMFD_SECRET
 
 #include <asm-generic/unistd.h>
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
index 977ee6181dab..6c316093a1e5 100644
--- a/arch/riscv/include/asm/unistd.h
+++ b/arch/riscv/include/asm/unistd.h
@@ -9,6 +9,7 @@
  */
 
 #define __ARCH_WANT_SYS_CLONE
+#define __ARCH_WANT_MEMFD_SECRET
 
 #include <uapi/asm/unistd.h>
 
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index c52ab1c4a755..109e6681b8fa 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -446,3 +446,4 @@
 439	i386	faccessat2		sys_faccessat2
 440	i386	process_madvise		sys_process_madvise
 441	i386	watch_mount		sys_watch_mount
+442	i386	memfd_secret		sys_memfd_secret
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index f3270a9ef467..742cf17d7725 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -363,6 +363,7 @@
 439	common	faccessat2		sys_faccessat2
 440	common	process_madvise		sys_process_madvise
 441	common	watch_mount		sys_watch_mount
+442	common	memfd_secret		sys_memfd_secret
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 6d55324363ab..f9d93fbf9b69 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1010,6 +1010,7 @@ asmlinkage long sys_pidfd_send_signal(int pidfd, int sig,
 asmlinkage long sys_pidfd_getfd(int pidfd, int fd, unsigned int flags);
 asmlinkage long sys_watch_mount(int dfd, const char __user *path,
 				unsigned int at_flags, int watch_fd, int watch_id);
+asmlinkage long sys_memfd_secret(unsigned long flags);
 
 /*
  * Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 5df46517260e..51151888f330 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -861,9 +861,13 @@ __SYSCALL(__NR_faccessat2, sys_faccessat2)
 __SYSCALL(__NR_process_madvise, sys_process_madvise)
 #define __NR_watch_mount 441
 __SYSCALL(__NR_watch_mount, sys_watch_mount)
+#ifdef __ARCH_WANT_MEMFD_SECRET
+#define __NR_memfd_secret 442
+__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
+#endif
 
 #undef __NR_syscalls
-#define __NR_syscalls 442
+#define __NR_syscalls 443
 
 /*
  * 32 bit systems traditionally used different
diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
index a18b47695f55..b7609958ee36 100755
--- a/scripts/checksyscalls.sh
+++ b/scripts/checksyscalls.sh
@@ -40,6 +40,10 @@ cat << EOF
 #define __IGNORE_setrlimit	/* setrlimit */
 #endif
 
+#ifndef __ARCH_WANT_MEMFD_SECRET
+#define __IGNORE_memfd_secret
+#endif
+
 /* Missing flags argument */
 #define __IGNORE_renameat	/* renameat2 */
 
-- 
2.28.0
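
[ With the syscall wired up but no libc wrapper yet, userspace is expected
  to go through syscall(2) and treat ENOSYS as "not available". A minimal
  probe, sketched along the lines of the selftest in the next patch:

      #include <unistd.h>
      #include <errno.h>
      #include <stdio.h>
      #include <sys/syscall.h>

      int main(void)
      {
              /* __NR_memfd_secret requires headers from a kernel with this series */
              long fd = syscall(__NR_memfd_secret, 0UL);

              if (fd < 0 && errno == ENOSYS) {
                      puts("memfd_secret(2) is not supported here");
                      return 1;
              }
              printf("memfd_secret(2) returned %ld\n", fd);
              return 0;
      }
  ]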


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v8 9/9] secretmem: test: add basic selftest for memfd_secret(2)
  2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (7 preceding siblings ...)
  2020-11-10 15:14 ` [PATCH v8 8/9] arch, mm: wire up memfd_secret system call where relevant Mike Rapoport
@ 2020-11-10 15:14 ` Mike Rapoport
  2020-11-12 14:56 ` [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
  9 siblings, 0 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 15:14 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

The test verifies that a file descriptor created with memfd_secret does
not allow read/write operations, that secret memory mappings respect
RLIMIT_MEMLOCK, and that remote accesses to the secret memory via
process_vm_readv() and ptrace() fail.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
---
 tools/testing/selftests/vm/.gitignore     |   1 +
 tools/testing/selftests/vm/Makefile       |   3 +-
 tools/testing/selftests/vm/memfd_secret.c | 298 ++++++++++++++++++++++
 tools/testing/selftests/vm/run_vmtests    |  17 ++
 4 files changed, 318 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/vm/memfd_secret.c

diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore
index 9a35c3f6a557..c8deddc81e7a 100644
--- a/tools/testing/selftests/vm/.gitignore
+++ b/tools/testing/selftests/vm/.gitignore
@@ -21,4 +21,5 @@ va_128TBswitch
 map_fixed_noreplace
 write_to_hugetlbfs
 hmm-tests
+memfd_secret
 local_config.*
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index 62fb15f286ee..9ab98946fbf2 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -34,6 +34,7 @@ TEST_GEN_FILES += khugepaged
 TEST_GEN_FILES += map_fixed_noreplace
 TEST_GEN_FILES += map_hugetlb
 TEST_GEN_FILES += map_populate
+TEST_GEN_FILES += memfd_secret
 TEST_GEN_FILES += mlock-random-test
 TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mremap_dontunmap
@@ -129,7 +130,7 @@ warn_32bit_failure:
 endif
 endif
 
-$(OUTPUT)/mlock-random-test: LDLIBS += -lcap
+$(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap
 
 $(OUTPUT)/gup_test: ../../../../mm/gup_test.h
 
diff --git a/tools/testing/selftests/vm/memfd_secret.c b/tools/testing/selftests/vm/memfd_secret.c
new file mode 100644
index 000000000000..79578dfd13e6
--- /dev/null
+++ b/tools/testing/selftests/vm/memfd_secret.c
@@ -0,0 +1,298 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#define _GNU_SOURCE
+#include <sys/uio.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ptrace.h>
+#include <sys/syscall.h>
+#include <sys/resource.h>
+#include <sys/capability.h>
+
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <errno.h>
+#include <stdio.h>
+
+#include "../kselftest.h"
+
+#define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__)
+#define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__)
+#define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
+
+#ifdef __NR_memfd_secret
+
+#include <linux/secretmem.h>
+
+#define PATTERN	0x55
+
+static const int prot = PROT_READ | PROT_WRITE;
+static const int mode = MAP_SHARED;
+
+static unsigned long page_size;
+static unsigned long mlock_limit_cur;
+static unsigned long mlock_limit_max;
+
+static int memfd_secret(unsigned long flags)
+{
+	return syscall(__NR_memfd_secret, flags);
+}
+
+static void test_file_apis(int fd)
+{
+	char buf[64];
+
+	if ((read(fd, buf, sizeof(buf)) >= 0) ||
+	    (write(fd, buf, sizeof(buf)) >= 0) ||
+	    (pread(fd, buf, sizeof(buf), 0) >= 0) ||
+	    (pwrite(fd, buf, sizeof(buf), 0) >= 0))
+		fail("unexpected file IO\n");
+	else
+		pass("file IO is blocked as expected\n");
+}
+
+static void test_mlock_limit(int fd)
+{
+	size_t len;
+	char *mem;
+
+	len = mlock_limit_cur;
+	mem = mmap(NULL, len, prot, mode, fd, 0);
+	if (mem == MAP_FAILED) {
+		fail("unable to mmap secret memory\n");
+		return;
+	}
+	munmap(mem, len);
+
+	len = mlock_limit_max * 2;
+	mem = mmap(NULL, len, prot, mode, fd, 0);
+	if (mem != MAP_FAILED) {
+		fail("unexpected mlock limit violation\n");
+		munmap(mem, len);
+		return;
+	}
+
+	pass("mlock limit is respected\n");
+}
+
+static void try_process_vm_read(int fd, int pipefd[2])
+{
+	struct iovec liov, riov;
+	char buf[64];
+	char *mem;
+
+	if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+		fail("pipe read: %s\n", strerror(errno));
+		exit(KSFT_FAIL);
+	}
+
+	liov.iov_len = riov.iov_len = sizeof(buf);
+	liov.iov_base = buf;
+	riov.iov_base = mem;
+
+	if (process_vm_readv(getppid(), &liov, 1, &riov, 1, 0) < 0) {
+		if (errno == ENOSYS)
+			exit(KSFT_SKIP);
+		exit(KSFT_PASS);
+	}
+
+	exit(KSFT_FAIL);
+}
+
+static void try_ptrace(int fd, int pipefd[2])
+{
+	pid_t ppid = getppid();
+	int status;
+	char *mem;
+	long ret;
+
+	if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+		perror("pipe read");
+		exit(KSFT_FAIL);
+	}
+
+	ret = ptrace(PTRACE_ATTACH, ppid, 0, 0);
+	if (ret) {
+		perror("ptrace_attach");
+		exit(KSFT_FAIL);
+	}
+
+	ret = waitpid(ppid, &status, WUNTRACED);
+	if ((ret != ppid) || !(WIFSTOPPED(status))) {
+		fprintf(stderr, "weird waitpid result %ld stat %x\n",
+			ret, status);
+		exit(KSFT_FAIL);
+	}
+
+	if (ptrace(PTRACE_PEEKDATA, ppid, mem, 0) == -1)
+		exit(KSFT_PASS);
+
+	exit(KSFT_FAIL);
+}
+
+static void check_child_status(pid_t pid, const char *name)
+{
+	int status;
+
+	waitpid(pid, &status, 0);
+
+	if (WIFEXITED(status) && WEXITSTATUS(status) == KSFT_SKIP) {
+		skip("%s is not supported\n", name);
+		return;
+	}
+
+	if ((WIFEXITED(status) && WEXITSTATUS(status) == KSFT_PASS) ||
+	    WIFSIGNALED(status)) {
+		pass("%s is blocked as expected\n", name);
+		return;
+	}
+
+	fail("%s: unexpected memory access\n", name);
+}
+
+static void test_remote_access(int fd, const char *name,
+			       void (*func)(int fd, int pipefd[2]))
+{
+	int pipefd[2];
+	pid_t pid;
+	char *mem;
+
+	if (pipe(pipefd)) {
+		fail("pipe failed: %s\n", strerror(errno));
+		return;
+	}
+
+	pid = fork();
+	if (pid < 0) {
+		fail("fork failed: %s\n", strerror(errno));
+		return;
+	}
+
+	if (pid == 0) {
+		func(fd, pipefd);
+		return;
+	}
+
+	mem = mmap(NULL, page_size, prot, mode, fd, 0);
+	if (mem == MAP_FAILED) {
+		fail("Unable to mmap secret memory\n");
+		return;
+	}
+
+	ftruncate(fd, page_size);
+	memset(mem, PATTERN, page_size);
+
+	if (write(pipefd[1], &mem, sizeof(mem)) < 0) {
+		fail("pipe write: %s\n", strerror(errno));
+		return;
+	}
+
+	check_child_status(pid, name);
+}
+
+static void test_process_vm_read(int fd)
+{
+	test_remote_access(fd, "process_vm_read", try_process_vm_read);
+}
+
+static void test_ptrace(int fd)
+{
+	test_remote_access(fd, "ptrace", try_ptrace);
+}
+
+static int set_cap_limits(rlim_t max)
+{
+	struct rlimit new;
+	cap_t cap = cap_init();
+
+	new.rlim_cur = max;
+	new.rlim_max = max;
+	if (setrlimit(RLIMIT_MEMLOCK, &new)) {
+		perror("setrlimit() returns error");
+		return -1;
+	}
+
+	/* drop capabilities including CAP_IPC_LOCK */
+	if (cap_set_proc(cap)) {
+		perror("cap_set_proc() returns error");
+		return -2;
+	}
+
+	return 0;
+}
+
+static void prepare(void)
+{
+	struct rlimit rlim;
+
+	page_size = sysconf(_SC_PAGE_SIZE);
+	if ((long)page_size <= 0)
+		ksft_exit_fail_msg("Failed to get page size %s\n",
+				   strerror(errno));
+
+	if (getrlimit(RLIMIT_MEMLOCK, &rlim))
+		ksft_exit_fail_msg("Unable to detect mlock limit: %s\n",
+				   strerror(errno));
+
+	mlock_limit_cur = rlim.rlim_cur;
+	mlock_limit_max = rlim.rlim_max;
+
+	printf("page_size: %lu, mlock.soft: %lu, mlock.hard: %lu\n",
+	       page_size, mlock_limit_cur, mlock_limit_max);
+
+	if (page_size > mlock_limit_cur)
+		mlock_limit_cur = page_size;
+	if (page_size > mlock_limit_max)
+		mlock_limit_max = page_size;
+
+	if (set_cap_limits(mlock_limit_max))
+		ksft_exit_fail_msg("Unable to set mlock limit: %s\n",
+				   strerror(errno));
+}
+
+#define NUM_TESTS 4
+
+int main(int argc, char *argv[])
+{
+	int fd;
+
+	prepare();
+
+	ksft_print_header();
+	ksft_set_plan(NUM_TESTS);
+
+	fd = memfd_secret(0);
+	if (fd < 0) {
+		if (errno == ENOSYS)
+			ksft_exit_skip("memfd_secret is not supported\n");
+		else
+			ksft_exit_fail_msg("memfd_secret failed: %s\n",
+					   strerror(errno));
+	}
+
+	test_mlock_limit(fd);
+	test_file_apis(fd);
+	test_process_vm_read(fd);
+	test_ptrace(fd);
+
+	close(fd);
+
+	ksft_exit(!ksft_get_fail_cnt());
+}
+
+#else /* __NR_memfd_secret */
+
+int main(int argc, char *argv[])
+{
+	printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n");
+	return KSFT_SKIP;
+}
+
+#endif /* __NR_memfd_secret */
diff --git a/tools/testing/selftests/vm/run_vmtests b/tools/testing/selftests/vm/run_vmtests
index e953f3cd9664..95a67382f132 100755
--- a/tools/testing/selftests/vm/run_vmtests
+++ b/tools/testing/selftests/vm/run_vmtests
@@ -346,4 +346,21 @@ else
 	exitcode=1
 fi
 
+echo "running memfd_secret test"
+echo "------------------------------------"
+./memfd_secret
+ret_val=$?
+
+if [ $ret_val -eq 0 ]; then
+	echo "[PASS]"
+elif [ $ret_val -eq $ksft_skip ]; then
+	echo "[SKIP]"
+	exitcode=$ksft_skip
+else
+	echo "[FAIL]"
+	exitcode=1
+fi
+
+exit $exitcode
+
 exit $exitcode
-- 
2.28.0
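
[ The new test can be run through the usual kselftest entry points, e.g.:

      make -C tools/testing/selftests TARGETS=vm run_tests

  or, after building, directly via the runner patched above:

      cd tools/testing/selftests/vm && ./run_vmtests
  ]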


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 2/9] mmap: make mlock_future_check() global
  2020-11-10 15:14 ` [PATCH v8 2/9] mmap: make mlock_future_check() global Mike Rapoport
@ 2020-11-10 17:17   ` David Hildenbrand
  2020-11-10 18:06     ` Mike Rapoport
  0 siblings, 1 reply; 29+ messages in thread
From: David Hildenbrand @ 2020-11-10 17:17 UTC (permalink / raw)
  To: Mike Rapoport, Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Ingo Molnar, James Bottomley,
	Kirill A. Shutemov, Matthew Wilcox, Mark Rutland, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rick Edgecombe, Shuah Khan, Thomas Gleixner, Tycho Andersen,
	Will Deacon, linux-api, linux-arch, linux-arm-kernel,
	linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
	linux-nvdimm, linux-riscv, x86

On 10.11.20 16:14, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> It will be used by the upcoming secret memory implementation.
> 
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
>   mm/internal.h | 3 +++
>   mm/mmap.c     | 5 ++---
>   2 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index c43ccdddb0f6..ae146a260b14 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
>   extern void mlock_vma_page(struct page *page);
>   extern unsigned int munlock_vma_page(struct page *page);
>   
> +extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
> +			      unsigned long len);
> +
>   /*
>    * Clear the page's PageMlocked().  This can be useful in a situation where
>    * we want to unconditionally remove a page from the pagecache -- e.g.,
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 61f72b09d990..c481f088bd50 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
>   	return hint;
>   }
>   
> -static inline int mlock_future_check(struct mm_struct *mm,
> -				     unsigned long flags,
> -				     unsigned long len)
> +int mlock_future_check(struct mm_struct *mm, unsigned long flags,
> +		       unsigned long len)
>   {
>   	unsigned long locked, lock_limit;
>   
> 

So, an interesting question is if you actually want to charge secretmem 
pages against mlock now, or if you want a dedicated secretmem cgroup 
controller instead?
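
(For reference, the body that the hunk above truncates reads roughly as
follows in mainline at this point, i.e. the check secretmem would reuse:

	unsigned long locked, lock_limit;

	/* mlock MCL_FUTURE? */
	if (flags & VM_LOCKED) {
		locked = len >> PAGE_SHIFT;
		locked += mm->locked_vm;
		lock_limit = rlimit(RLIMIT_MEMLOCK);
		lock_limit >>= PAGE_SHIFT;
		if (locked > lock_limit && !capable(CAP_IPC_LOCK))
			return -EAGAIN;
	}
	return 0;

so a new mapping is charged against RLIMIT_MEMLOCK unless the task has
CAP_IPC_LOCK.)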

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 2/9] mmap: make mlock_future_check() global
  2020-11-10 17:17   ` David Hildenbrand
@ 2020-11-10 18:06     ` Mike Rapoport
  2020-11-12 16:22       ` David Hildenbrand
  0 siblings, 1 reply; 29+ messages in thread
From: Mike Rapoport @ 2020-11-10 18:06 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Tue, Nov 10, 2020 at 06:17:26PM +0100, David Hildenbrand wrote:
> On 10.11.20 16:14, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> > 
> > It will be used by the upcoming secret memory implementation.
> > 
> > Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> > ---
> >   mm/internal.h | 3 +++
> >   mm/mmap.c     | 5 ++---
> >   2 files changed, 5 insertions(+), 3 deletions(-)
> > 
> > diff --git a/mm/internal.h b/mm/internal.h
> > index c43ccdddb0f6..ae146a260b14 100644
> > --- a/mm/internal.h
> > +++ b/mm/internal.h
> > @@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
> >   extern void mlock_vma_page(struct page *page);
> >   extern unsigned int munlock_vma_page(struct page *page);
> > +extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
> > +			      unsigned long len);
> > +
> >   /*
> >    * Clear the page's PageMlocked().  This can be useful in a situation where
> >    * we want to unconditionally remove a page from the pagecache -- e.g.,
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 61f72b09d990..c481f088bd50 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
> >   	return hint;
> >   }
> > -static inline int mlock_future_check(struct mm_struct *mm,
> > -				     unsigned long flags,
> > -				     unsigned long len)
> > +int mlock_future_check(struct mm_struct *mm, unsigned long flags,
> > +		       unsigned long len)
> >   {
> >   	unsigned long locked, lock_limit;
> > 
> 
> So, an interesting question is if you actually want to charge secretmem
> pages against mlock now, or if you want a dedicated secretmem cgroup
> controller instead?

Well, with the current implementation there are three limits an
administrator can use to control secretmem usage: mlock, memcg and a
kernel parameter.

The kernel parameter sets a global upper limit for secretmem usage,
memcg accounts all secretmem allocations, including the unused memory
in the large page cache, and mlock allows a per-task limit for
secretmem mappings, just as it does for ordinary locked memory.

I didn't consider a dedicated cgroup, as it seems we already have
enough existing knobs and a new one would be unnecessary.
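
(To make these concrete, a sketch of how an administrator might exercise
the first two knobs; the cgroup path below is a placeholder, and the
global limit is a boot-time kernel parameter defined elsewhere in the
series:

	# per-task limit, via RLIMIT_MEMLOCK, e.g. from a shell:
	ulimit -l 65536

	# memcg limit, e.g. with cgroup v2:
	echo $((256 * 1024 * 1024)) > /sys/fs/cgroup/<group>/memory.max
)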

> -- 
> Thanks,
> 
> David / dhildenb
> 

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
                   ` (8 preceding siblings ...)
  2020-11-10 15:14 ` [PATCH v8 9/9] secretmem: test: add basic selftest for memfd_secret(2) Mike Rapoport
@ 2020-11-12 14:56 ` Mike Rapoport
  9 siblings, 0 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-12 14:56 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

Hi Andrew,

It'll be great if this can be pulled back to mmotm for wider exposure to
testing and fuzzing.

On Tue, Nov 10, 2020 at 05:14:35PM +0200, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> Hi,
> 
> This is an implementation of "secret" mappings backed by a file descriptor.
> 
> The file descriptor backing secret memory mappings is created using a
> dedicated memfd_secret system call The desired protection mode for the
> memory is configured using flags parameter of the system call. The mmap()
> of the file descriptor created with memfd_secret() will create a "secret"
> memory mapping. The pages in that mapping will be marked as not present in
> the direct map and will have desired protection bits set in the user page
> table. For instance, current implementation allows uncached mappings.
> 
> Although normally Linux userspace mappings are protected from other users,
> such secret mappings are useful for environments where a hostile tenant is
> trying to trick the kernel into giving them access to other tenants
> mappings.
> 
> Additionally, in the future the secret mappings may be used as a mean to
> protect guest memory in a virtual machine host.
> 
> For demonstration of secret memory usage we've created a userspace library
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git
> 
> that does two things: the first is act as a preloader for openssl to
> redirect all the OPENSSL_malloc calls to secret memory meaning any secret
> keys get automatically protected this way and the other thing it does is
> expose the API to the user who needs it. We anticipate that a lot of the
> use cases would be like the openssl one: many toolkits that deal with
> secret keys already have special handling for the memory to try to give
> them greater protection, so this would simply be pluggable into the
> toolkits without any need for user application modification.
> 
> Hiding secret memory mappings behind an anonymous file allows (ab)use of
> the page cache for tracking pages allocated for the "secret" mappings as
> well as using address_space_operations for e.g. page migration callbacks.
> 
> The anonymous file may be also used implicitly, like hugetlb files, to
> implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
> ABIs in the future.
> 
> To limit fragmentation of the direct map to splitting only PUD-size pages,
> I've added an amortizing cache of PMD-size pages to each file descriptor
> that is used as an allocation pool for the secret memory areas.
> 
> As the memory allocated by secretmem becomes unmovable, we use CMA to back
> large page caches so that page allocator won't be surprised by failing attempt
> to migrate these pages.
> 
> v8:
> * Use CMA for all secretmem allocations as David suggested
> * Update memcg accounting after transition to CMA
> * Prevent hibernation when there are active secretmem users
> * Add zeroing of the memory before releasing it back to cma/page allocator
> * Rebase on v5.10-rc2-mmotm-2020-11-07-21-40
> 
> v7: https://lore.kernel.org/lkml/20201026083752.13267-1-rppt@kernel.org
> * Use set_direct_map() instead of __kernel_map_pages() to ensure error
>   handling in case the direct map update fails
> * Add accounting of large pages used to reduce the direct map fragmentation
> * Teach get_user_pages() and friends to refuse get/pin secretmem pages
> 
> v6: https://lore.kernel.org/lkml/20200924132904.1391-1-rppt@kernel.org
> * Silence the warning about missing syscall, thanks to Qian Cai
> * Replace spaces with tabs in Kconfig additions, per Randy
> * Add a selftest.
> 
> v5: https://lore.kernel.org/lkml/20200916073539.3552-1-rppt@kernel.org
> * rebase on v5.9-rc5
> * drop boot time memory reservation patch
> 
> v4: https://lore.kernel.org/lkml/20200818141554.13945-1-rppt@kernel.org
> * rebase on v5.9-rc1
> * Do not redefine PMD_PAGE_ORDER in fs/dax.c, thanks Kirill
> * Make secret mappings exclusive by default and only require flags to
>   memfd_secret() system call for uncached mappings, thanks again Kirill :)
> 
> v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
> * Squash kernel-parameters.txt update into the commit that added the
>   command line option.
> * Make uncached mode explicitly selectable by architectures. For now enable
>   it only on x86.
> 
> v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
> * Follow Michael's suggestion and name the new system call 'memfd_secret'
> * Add kernel-parameters documentation about the boot option
> * Fix i386-tinyconfig regression reported by the kbuild bot.
>   CONFIG_SECRETMEM now depends on !EMBEDDED to disable it on small systems
>   from one side and still make it available unconditionally on
>   architectures that support SET_DIRECT_MAP.
> 
> v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org
> 
> Mike Rapoport (9):
>   mm: add definition of PMD_PAGE_ORDER
>   mmap: make mlock_future_check() global
>   set_memory: allow set_direct_map_*_noflush() for multiple pages
>   mm: introduce memfd_secret system call to create "secret" memory areas
>   secretmem: use PMD-size pages to amortize direct map fragmentation
>   secretmem: add memcg accounting
>   PM: hibernate: disable when there are active secretmem users
>   arch, mm: wire up memfd_secret system call where relevant
>   secretmem: test: add basic selftest for memfd_secret(2)
> 
>  arch/Kconfig                              |   7 +
>  arch/arm64/include/asm/cacheflush.h       |   4 +-
>  arch/arm64/include/asm/unistd.h           |   2 +-
>  arch/arm64/include/asm/unistd32.h         |   2 +
>  arch/arm64/include/uapi/asm/unistd.h      |   1 +
>  arch/arm64/mm/pageattr.c                  |  10 +-
>  arch/riscv/include/asm/set_memory.h       |   4 +-
>  arch/riscv/include/asm/unistd.h           |   1 +
>  arch/riscv/mm/pageattr.c                  |   8 +-
>  arch/x86/Kconfig                          |   1 +
>  arch/x86/entry/syscalls/syscall_32.tbl    |   1 +
>  arch/x86/entry/syscalls/syscall_64.tbl    |   1 +
>  arch/x86/include/asm/set_memory.h         |   4 +-
>  arch/x86/mm/pat/set_memory.c              |   8 +-
>  fs/dax.c                                  |  11 +-
>  include/linux/pgtable.h                   |   3 +
>  include/linux/secretmem.h                 |  30 ++
>  include/linux/set_memory.h                |   4 +-
>  include/linux/syscalls.h                  |   1 +
>  include/uapi/asm-generic/unistd.h         |   6 +-
>  include/uapi/linux/magic.h                |   1 +
>  include/uapi/linux/secretmem.h            |   8 +
>  kernel/power/hibernate.c                  |   5 +-
>  kernel/power/snapshot.c                   |   4 +-
>  kernel/sys_ni.c                           |   2 +
>  mm/Kconfig                                |   5 +
>  mm/Makefile                               |   1 +
>  mm/filemap.c                              |   2 +-
>  mm/gup.c                                  |  10 +
>  mm/internal.h                             |   3 +
>  mm/mmap.c                                 |   5 +-
>  mm/secretmem.c                            | 451 ++++++++++++++++++++++
>  mm/vmalloc.c                              |   5 +-
>  scripts/checksyscalls.sh                  |   4 +
>  tools/testing/selftests/vm/.gitignore     |   1 +
>  tools/testing/selftests/vm/Makefile       |   3 +-
>  tools/testing/selftests/vm/memfd_secret.c | 298 ++++++++++++++
>  tools/testing/selftests/vm/run_vmtests    |  17 +
>  38 files changed, 895 insertions(+), 39 deletions(-)
>  create mode 100644 include/linux/secretmem.h
>  create mode 100644 include/uapi/linux/secretmem.h
>  create mode 100644 mm/secretmem.c
>  create mode 100644 tools/testing/selftests/vm/memfd_secret.c
> 
> 
> base-commit: 9f8ce377d420db12b19d6a4f636fecbd88a725a5
> -- 
> 2.28.0
> 

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 2/9] mmap: make mlock_future_check() global
  2020-11-10 18:06     ` Mike Rapoport
@ 2020-11-12 16:22       ` David Hildenbrand
  2020-11-12 19:08         ` Mike Rapoport
  0 siblings, 1 reply; 29+ messages in thread
From: David Hildenbrand @ 2020-11-12 16:22 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On 10.11.20 19:06, Mike Rapoport wrote:
> On Tue, Nov 10, 2020 at 06:17:26PM +0100, David Hildenbrand wrote:
>> On 10.11.20 16:14, Mike Rapoport wrote:
>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>
>>> It will be used by the upcoming secret memory implementation.
>>>
>>> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
>>> ---
>>>    mm/internal.h | 3 +++
>>>    mm/mmap.c     | 5 ++---
>>>    2 files changed, 5 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/mm/internal.h b/mm/internal.h
>>> index c43ccdddb0f6..ae146a260b14 100644
>>> --- a/mm/internal.h
>>> +++ b/mm/internal.h
>>> @@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
>>>    extern void mlock_vma_page(struct page *page);
>>>    extern unsigned int munlock_vma_page(struct page *page);
>>> +extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
>>> +			      unsigned long len);
>>> +
>>>    /*
>>>     * Clear the page's PageMlocked().  This can be useful in a situation where
>>>     * we want to unconditionally remove a page from the pagecache -- e.g.,
>>> diff --git a/mm/mmap.c b/mm/mmap.c
>>> index 61f72b09d990..c481f088bd50 100644
>>> --- a/mm/mmap.c
>>> +++ b/mm/mmap.c
>>> @@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
>>>    	return hint;
>>>    }
>>> -static inline int mlock_future_check(struct mm_struct *mm,
>>> -				     unsigned long flags,
>>> -				     unsigned long len)
>>> +int mlock_future_check(struct mm_struct *mm, unsigned long flags,
>>> +		       unsigned long len)
>>>    {
>>>    	unsigned long locked, lock_limit;
>>>
>>
>> So, an interesting question is if you actually want to charge secretmem
>> pages against mlock now, or if you want a dedicated secretmem cgroup
>> controller instead?
> 
> Well, with the current implementation there are three limits an
> administrator can use to control secretmem usage: mlock, memcg and a
> kernel parameter.
> 
> The kernel parameter sets a global upper limit for secretmem usage,
> memcg accounts all secretmem allocations, including the unused memory
> in the large page cache, and mlock allows a per-task limit for
> secretmem mappings, just as it does for ordinary locked memory.
> 
> I didn't consider a dedicated cgroup, as it seems we already have
> enough existing knobs and a new one would be unnecessary.

To me it feels like the mlock() limit is a wrong fit for secretmem. But 
maybe there are other cases of using the mlock() limit without actually 
doing mlock() that I am not aware of (most probably :) )?

I mean, my concern is not earth-shattering; this can be reworked later. 
As I said, it just feels wrong.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 2/9] mmap: make mlock_future_check() global
  2020-11-12 16:22       ` David Hildenbrand
@ 2020-11-12 19:08         ` Mike Rapoport
  2020-11-12 20:15           ` David Hildenbrand
  0 siblings, 1 reply; 29+ messages in thread
From: Mike Rapoport @ 2020-11-12 19:08 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Thu, Nov 12, 2020 at 05:22:00PM +0100, David Hildenbrand wrote:
> On 10.11.20 19:06, Mike Rapoport wrote:
> > On Tue, Nov 10, 2020 at 06:17:26PM +0100, David Hildenbrand wrote:
> > > On 10.11.20 16:14, Mike Rapoport wrote:
> > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > 
> > > > It will be used by the upcoming secret memory implementation.
> > > > 
> > > > Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> > > > ---
> > > >    mm/internal.h | 3 +++
> > > >    mm/mmap.c     | 5 ++---
> > > >    2 files changed, 5 insertions(+), 3 deletions(-)
> > > > 
> > > > diff --git a/mm/internal.h b/mm/internal.h
> > > > index c43ccdddb0f6..ae146a260b14 100644
> > > > --- a/mm/internal.h
> > > > +++ b/mm/internal.h
> > > > @@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
> > > >    extern void mlock_vma_page(struct page *page);
> > > >    extern unsigned int munlock_vma_page(struct page *page);
> > > > +extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
> > > > +			      unsigned long len);
> > > > +
> > > >    /*
> > > >     * Clear the page's PageMlocked().  This can be useful in a situation where
> > > >     * we want to unconditionally remove a page from the pagecache -- e.g.,
> > > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > > index 61f72b09d990..c481f088bd50 100644
> > > > --- a/mm/mmap.c
> > > > +++ b/mm/mmap.c
> > > > @@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
> > > >    	return hint;
> > > >    }
> > > > -static inline int mlock_future_check(struct mm_struct *mm,
> > > > -				     unsigned long flags,
> > > > -				     unsigned long len)
> > > > +int mlock_future_check(struct mm_struct *mm, unsigned long flags,
> > > > +		       unsigned long len)
> > > >    {
> > > >    	unsigned long locked, lock_limit;
> > > > 
> > > 
> > > So, an interesting question is if you actually want to charge secretmem
> > > pages against mlock now, or if you want a dedicated secretmem cgroup
> > > controller instead?
> > 
> > Well, with the current implementation there are three limits an
> > administrator can use to control secretmem usage: mlock, memcg and a
> > kernel parameter.
> > 
> > The kernel parameter sets a global upper limit for secretmem usage,
> > memcg accounts all secretmem allocations, including the unused memory
> > in the large page cache, and mlock allows a per-task limit for
> > secretmem mappings, just as it does for ordinary locked memory.
> > 
> > I didn't consider a dedicated cgroup, as it seems we already have
> > enough existing knobs and a new one would be unnecessary.
> 
> To me it feels like the mlock() limit is a wrong fit for secretmem. But
> maybe there are other cases of using the mlock() limit without actually
> doing mlock() that I am not aware of (most probably :) )?

Secretmem does not explicitly call mlock(), but it does what mlock()
does and a bit more. Citing mlock(2):

  mlock(),  mlock2(),  and  mlockall()  lock  part  or all of the calling
  process's virtual address space into RAM, preventing that  memory  from
  being paged to the swap area.

So, given that secretmem pages are not swappable, I think that
RLIMIT_MEMLOCK is appropriate here.

> I mean, my concern is not earth-shattering; this can be reworked later. As I
> said, it just feels wrong.
> 
> -- 
> Thanks,
> 
> David / dhildenb
> 

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 2/9] mmap: make mlock_future_check() global
  2020-11-12 19:08         ` Mike Rapoport
@ 2020-11-12 20:15           ` David Hildenbrand
  2020-11-15  8:26             ` Mike Rapoport
  0 siblings, 1 reply; 29+ messages in thread
From: David Hildenbrand @ 2020-11-12 20:15 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: David Hildenbrand, Andrew Morton, Alexander Viro,
	Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mark Rutland, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86


> On 12.11.2020 at 20:08, Mike Rapoport <rppt@kernel.org> wrote:
> 
> On Thu, Nov 12, 2020 at 05:22:00PM +0100, David Hildenbrand wrote:
>>> On 10.11.20 19:06, Mike Rapoport wrote:
>>> On Tue, Nov 10, 2020 at 06:17:26PM +0100, David Hildenbrand wrote:
>>>> On 10.11.20 16:14, Mike Rapoport wrote:
>>>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>>> 
>>>>> It will be used by the upcoming secret memory implementation.
>>>>> 
>>>>> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
>>>>> ---
>>>>>   mm/internal.h | 3 +++
>>>>>   mm/mmap.c     | 5 ++---
>>>>>   2 files changed, 5 insertions(+), 3 deletions(-)
>>>>> 
>>>>> diff --git a/mm/internal.h b/mm/internal.h
>>>>> index c43ccdddb0f6..ae146a260b14 100644
>>>>> --- a/mm/internal.h
>>>>> +++ b/mm/internal.h
>>>>> @@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
>>>>>   extern void mlock_vma_page(struct page *page);
>>>>>   extern unsigned int munlock_vma_page(struct page *page);
>>>>> +extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
>>>>> +                  unsigned long len);
>>>>> +
>>>>>   /*
>>>>>    * Clear the page's PageMlocked().  This can be useful in a situation where
>>>>>    * we want to unconditionally remove a page from the pagecache -- e.g.,
>>>>> diff --git a/mm/mmap.c b/mm/mmap.c
>>>>> index 61f72b09d990..c481f088bd50 100644
>>>>> --- a/mm/mmap.c
>>>>> +++ b/mm/mmap.c
>>>>> @@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
>>>>>       return hint;
>>>>>   }
>>>>> -static inline int mlock_future_check(struct mm_struct *mm,
>>>>> -                     unsigned long flags,
>>>>> -                     unsigned long len)
>>>>> +int mlock_future_check(struct mm_struct *mm, unsigned long flags,
>>>>> +               unsigned long len)
>>>>>   {
>>>>>       unsigned long locked, lock_limit;
>>>>> 
>>>> 
>>>> So, an interesting question is if you actually want to charge secretmem
>>>> pages against mlock now, or if you want a dedicated secretmem cgroup
>>>> controller instead?
>>> 
>>> Well, with the current implementation there are three limits an
>>> administrator can use to control secretmem usage: mlock, memcg and a
>>> kernel parameter.
>>> 
>>> The kernel parameter sets a global upper limit for secretmem usage,
>>> memcg accounts all secretmem allocations, including the unused memory
>>> in the large page cache, and mlock allows a per-task limit for
>>> secretmem mappings, just as it does for ordinary locked memory.
>>> 
>>> I didn't consider a dedicated cgroup, as it seems we already have
>>> enough existing knobs and a new one would be unnecessary.
>> 
>> To me it feels like the mlock() limit is a wrong fit for secretmem. But
>> maybe there are other cases of using the mlock() limit without actually
>> doing mlock() that I am not aware of (most probably :) )?
> 
> Secretmem does not explicitly call mlock(), but it does what mlock()
> does and a bit more. Citing mlock(2):
> 
>  mlock(),  mlock2(),  and  mlockall()  lock  part  or all of the calling
>  process's virtual address space into RAM, preventing that  memory  from
>  being paged to the swap area.
> 
> So, given that secretmem pages are not swappable, I think that
> RLIMIT_MEMLOCK is appropriate here.
> 

The man page explicitly lists the mlock() system calls. E.g., we also don't account for gigantic pages - which might be allocated from CMA and are not swappable.



>> I mean, my concern is not earth-shattering; this can be reworked later. As I
>> said, it just feels wrong.
>> 
>> -- 
>> Thanks,
>> 
>> David / dhildenb
>> 
> 
> -- 
> Sincerely yours,
> Mike.
> 


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 6/9] secretmem: add memcg accounting
  2020-11-10 15:14 ` [PATCH v8 6/9] secretmem: add memcg accounting Mike Rapoport
@ 2020-11-13  1:35   ` Andrew Morton
  2020-11-13 23:42   ` Roman Gushchin
  1 sibling, 0 replies; 29+ messages in thread
From: Andrew Morton @ 2020-11-13  1:35 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Tue, 10 Nov 2020 17:14:41 +0200 Mike Rapoport <rppt@kernel.org> wrote:

> Account memory consumed by secretmem to memcg. The accounting is updated
> when the memory is actually allocated and freed.

From: Andrew Morton <akpm@linux-foundation.org>
Subject: secretmem-add-memcg-accounting-fix

fix CONFIG_MEMCG=n build

Cc: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/filemap.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/filemap.c~secretmem-add-memcg-accounting-fix
+++ a/mm/filemap.c
@@ -844,7 +844,7 @@ static noinline int __add_to_page_cache_
 	page->mapping = mapping;
 	page->index = offset;
 
-	if (!huge && !page->memcg_data) {
+	if (!huge && !page_memcg(page)) {
 		error = mem_cgroup_charge(page, current->mm, gfp);
 		if (error)
 			goto error;
_


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 8/9] arch, mm: wire up memfd_secret system call where relevant
  2020-11-10 15:14 ` [PATCH v8 8/9] arch, mm: wire up memfd_secret system call where relevant Mike Rapoport
@ 2020-11-13 12:25   ` Catalin Marinas
  2020-11-15  8:56     ` Mike Rapoport
  0 siblings, 1 reply; 29+ messages in thread
From: Catalin Marinas @ 2020-11-13 12:25 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Palmer Dabbelt

Hi Mike,

On Tue, Nov 10, 2020 at 05:14:43PM +0200, Mike Rapoport wrote:
> diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
> index 6c1dcca067e0..c71c3fe0b6cd 100644
> --- a/arch/arm64/include/asm/unistd32.h
> +++ b/arch/arm64/include/asm/unistd32.h
> @@ -891,6 +891,8 @@ __SYSCALL(__NR_faccessat2, sys_faccessat2)
>  __SYSCALL(__NR_process_madvise, sys_process_madvise)
>  #define __NR_watch_mount 441
>  __SYSCALL(__NR_watch_mount, sys_watch_mount)
> +#define __NR_memfd_secret 442
> +__SYSCALL(__NR_memfd_secret, sys_memfd_secret)

arch/arm doesn't select ARCH_HAS_SET_DIRECT_MAP and doesn't support
memfd_secret(), so I wouldn't add it to the compat layer.

-- 
Catalin

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 3/9] set_memory: allow set_direct_map_*_noflush() for multiple pages
  2020-11-10 15:14 ` [PATCH v8 3/9] set_memory: allow set_direct_map_*_noflush() for multiple pages Mike Rapoport
@ 2020-11-13 12:26   ` Catalin Marinas
  0 siblings, 0 replies; 29+ messages in thread
From: Catalin Marinas @ 2020-11-13 12:26 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Tue, Nov 10, 2020 at 05:14:38PM +0200, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
> 
> The underlying implementations of set_direct_map_invalid_noflush() and
> set_direct_map_default_noflush() allow updating multiple contiguous pages
> at once.
> 
> Add numpages parameter to set_direct_map_*_noflush() to expose this ability
> with these APIs.
> 
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
>  arch/arm64/include/asm/cacheflush.h |  4 ++--
>  arch/arm64/mm/pageattr.c            | 10 ++++++----

For arm64:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 4/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-11-10 15:14 ` [PATCH v8 4/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
@ 2020-11-13 13:58   ` Matthew Wilcox
  2020-11-15  8:53     ` Mike Rapoport
  2020-11-13 14:06   ` Matthew Wilcox
  1 sibling, 1 reply; 29+ messages in thread
From: Matthew Wilcox @ 2020-11-13 13:58 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer

On Tue, Nov 10, 2020 at 05:14:39PM +0200, Mike Rapoport wrote:
> +static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> +{
> +	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
> +	struct inode *inode = file_inode(vmf->vma->vm_file);
> +	pgoff_t offset = vmf->pgoff;
> +	unsigned long addr;
> +	struct page *page;
> +	int ret = 0;
> +
> +	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
> +		return vmf_error(-EINVAL);
> +
> +	page = find_get_entry(mapping, offset);

Why did you decide to use find_get_entry() here?  You don't handle
swap or shadow entries.

> +	if (!page) {
> +		page = secretmem_alloc_page(vmf->gfp_mask);
> +		if (!page)
> +			return vmf_error(-EINVAL);

Why is this EINVAL and not ENOMEM?

> +		ret = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
> +		if (unlikely(ret))
> +			goto err_put_page;
> +
> +		ret = set_direct_map_invalid_noflush(page, 1);
> +		if (ret)
> +			goto err_del_page_cache;
> +
> +		addr = (unsigned long)page_address(page);
> +		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +
> +		__SetPageUptodate(page);
> +
> +		ret = VM_FAULT_LOCKED;
> +	}
> +
> +	vmf->page = page;
> +	return ret;

Does sparse not warn you about this abuse of vm_fault_t?  Separate out
'ret' and 'err'.
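
(A sketch of the separation being asked for, folded together with the
find_get_page() change below; illustrative only, not the actual fix:

static vm_fault_t secretmem_fault(struct vm_fault *vmf)
{
	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
	struct inode *inode = file_inode(vmf->vma->vm_file);
	pgoff_t offset = vmf->pgoff;
	unsigned long addr;
	struct page *page;
	vm_fault_t ret = 0;
	int err;

	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
		return vmf_error(-EINVAL);

	page = find_get_page(mapping, offset);
	if (!page) {
		page = secretmem_alloc_page(vmf->gfp_mask);
		if (!page)
			return VM_FAULT_OOM;

		err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
		if (unlikely(err))
			goto err_put_page;

		err = set_direct_map_invalid_noflush(page, 1);
		if (err)
			goto err_del_page_cache;

		addr = (unsigned long)page_address(page);
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

		__SetPageUptodate(page);

		ret = VM_FAULT_LOCKED;
	}

	vmf->page = page;
	return ret;

err_del_page_cache:
	delete_from_page_cache(page);
err_put_page:
	put_page(page);
	return vmf_error(err);
}
)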


Andrew, please fold in this fix.  I suspect Mike will want to fix
the other things I mention above.

diff --git a/mm/secretmem.c b/mm/secretmem.c
index 3dfdbd85ba00..09ca27f21661 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -172,7 +172,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
 		return vmf_error(-EINVAL);
 
-	page = find_get_entry(mapping, offset);
+	page = find_get_page(mapping, offset);
 	if (!page) {
 		page = secretmem_alloc_page(ctx, vmf->gfp_mask);
 		if (!page)

^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 4/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-11-10 15:14 ` [PATCH v8 4/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
  2020-11-13 13:58   ` Matthew Wilcox
@ 2020-11-13 14:06   ` Matthew Wilcox
  2020-11-15  8:45     ` Mike Rapoport
  1 sibling, 1 reply; 29+ messages in thread
From: Matthew Wilcox @ 2020-11-13 14:06 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer

On Tue, Nov 10, 2020 at 05:14:39PM +0200, Mike Rapoport wrote:
> diff --git a/mm/Kconfig b/mm/Kconfig
> index c89c5444924b..d8d170fa5210 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -884,4 +884,7 @@ config ARCH_HAS_HUGEPD
>  config MAPPING_DIRTY_HELPERS
>          bool
>  
> +config SECRETMEM
> +	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED

So I now have to build this in, whether I want it or not?
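
(One way to make it user-selectable while keeping the current default,
sketched; the prompt text and default policy are open questions:

config SECRETMEM
	bool "Enable memfd_secret() system call"
	depends on ARCH_HAS_SET_DIRECT_MAP
	default !EMBEDDED
)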

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 6/9] secretmem: add memcg accounting
  2020-11-10 15:14 ` [PATCH v8 6/9] secretmem: add memcg accounting Mike Rapoport
  2020-11-13  1:35   ` Andrew Morton
@ 2020-11-13 23:42   ` Roman Gushchin
  2020-11-15  9:17     ` Mike Rapoport
  1 sibling, 1 reply; 29+ messages in thread
From: Roman Gushchin @ 2020-11-13 23:42 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mark Rutland, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Tue, Nov 10, 2020 at 07:16, Mike Rapoport <rppt@kernel.org> wrote:
>
> From: Mike Rapoport <rppt@linux.ibm.com>
>
> Account memory consumed by secretmem to memcg. The accounting is updated
> when the memory is actually allocated and freed.
>
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> ---
>  mm/filemap.c   |  2 +-
>  mm/secretmem.c | 42 +++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 42 insertions(+), 2 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 249cf489f5df..11387a077373 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -844,7 +844,7 @@ static noinline int __add_to_page_cache_locked(struct page *page,
>         page->mapping = mapping;
>         page->index = offset;
>
> -       if (!huge) {
> +       if (!huge && !page->memcg_data) {
>                 error = mem_cgroup_charge(page, current->mm, gfp);
>                 if (error)
>                         goto error;
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index 1aa2b7cffe0d..1eb7667016fa 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -17,6 +17,7 @@
>  #include <linux/syscalls.h>
>  #include <linux/memblock.h>
>  #include <linux/pseudo_fs.h>
> +#include <linux/memcontrol.h>
>  #include <linux/set_memory.h>
>  #include <linux/sched/signal.h>
>
> @@ -49,6 +50,38 @@ struct secretmem_ctx {
>
>  static struct cma *secretmem_cma;
>

Hi Mike!

> +static int secretmem_memcg_charge(struct page *page, gfp_t gfp, int order)
> +{
> +       unsigned long nr_pages = (1 << order);
> +       int i, err;
> +
> +       err = memcg_kmem_charge_page(page, gfp, order);
> +       if (err)
> +               return err;
> +
> +       for (i = 1; i < nr_pages; i++) {
> +               struct page *p = page + i;
> +
> +               p->memcg_data = page->memcg_data;
> +       }

Hm, it looks very strange to me. Why do we need to copy memcg_data?
What about css reference counting?

And what about statistics?

I'm sorry for being late.

Thank you!

> +
> +       return 0;
> +}
> +
> +static void secretmem_memcg_uncharge(struct page *page, int order)
> +{
> +       unsigned long nr_pages = (1 << order);
> +       int i;
> +
> +       for (i = 1; i < nr_pages; i++) {
> +               struct page *p = page + i;
> +
> +               p->memcg_data = 0;
> +       }
> +
> +       memcg_kmem_uncharge_page(page, PMD_PAGE_ORDER);
> +}
> +
>  static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>  {
>         unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
> @@ -61,10 +94,14 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>         if (!page)
>                 return -ENOMEM;
>
> -       err = set_direct_map_invalid_noflush(page, nr_pages);
> +       err = secretmem_memcg_charge(page, gfp, PMD_PAGE_ORDER);
>         if (err)
>                 goto err_cma_release;
>
> +       err = set_direct_map_invalid_noflush(page, nr_pages);
> +       if (err)
> +               goto err_memcg_uncharge;
> +
>         addr = (unsigned long)page_address(page);
>         err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
>         if (err)
> @@ -81,6 +118,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>          * won't fail
>          */
>         set_direct_map_default_noflush(page, nr_pages);
> +err_memcg_uncharge:
> +       secretmem_memcg_uncharge(page, PMD_PAGE_ORDER);
>  err_cma_release:
>         cma_release(secretmem_cma, page, nr_pages);
>         return err;
> @@ -310,6 +349,7 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
>         int i;
>
>         set_direct_map_default_noflush(page, nr_pages);
> +       secretmem_memcg_uncharge(page, PMD_PAGE_ORDER);
>
>         for (i = 0; i < nr_pages; i++)
>                 clear_highpage(page + i);
> --
> 2.28.0
>
>

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 2/9] mmap: make mlock_future_check() global
  2020-11-12 20:15           ` David Hildenbrand
@ 2020-11-15  8:26             ` Mike Rapoport
  2020-11-17 15:09               ` David Hildenbrand
  0 siblings, 1 reply; 29+ messages in thread
From: Mike Rapoport @ 2020-11-15  8:26 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Thu, Nov 12, 2020 at 09:15:18PM +0100, David Hildenbrand wrote:
> 
> > Am 12.11.2020 um 20:08 schrieb Mike Rapoport <rppt@kernel.org>:
> > 
> > On Thu, Nov 12, 2020 at 05:22:00PM +0100, David Hildenbrand wrote:
> >>> On 10.11.20 19:06, Mike Rapoport wrote:
> >>> On Tue, Nov 10, 2020 at 06:17:26PM +0100, David Hildenbrand wrote:
> >>>> On 10.11.20 16:14, Mike Rapoport wrote:
> >>>>> From: Mike Rapoport <rppt@linux.ibm.com>
> >>>>> 
> >>>>> It will be used by the upcoming secret memory implementation.
> >>>>> 
> >>>>> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> >>>>> ---
> >>>>>   mm/internal.h | 3 +++
> >>>>>   mm/mmap.c     | 5 ++---
> >>>>>   2 files changed, 5 insertions(+), 3 deletions(-)
> >>>>> 
> >>>>> diff --git a/mm/internal.h b/mm/internal.h
> >>>>> index c43ccdddb0f6..ae146a260b14 100644
> >>>>> --- a/mm/internal.h
> >>>>> +++ b/mm/internal.h
> >>>>> @@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
> >>>>>   extern void mlock_vma_page(struct page *page);
> >>>>>   extern unsigned int munlock_vma_page(struct page *page);
> >>>>> +extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
> >>>>> +                  unsigned long len);
> >>>>> +
> >>>>>   /*
> >>>>>    * Clear the page's PageMlocked().  This can be useful in a situation where
> >>>>>    * we want to unconditionally remove a page from the pagecache -- e.g.,
> >>>>> diff --git a/mm/mmap.c b/mm/mmap.c
> >>>>> index 61f72b09d990..c481f088bd50 100644
> >>>>> --- a/mm/mmap.c
> >>>>> +++ b/mm/mmap.c
> >>>>> @@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
> >>>>>       return hint;
> >>>>>   }
> >>>>> -static inline int mlock_future_check(struct mm_struct *mm,
> >>>>> -                     unsigned long flags,
> >>>>> -                     unsigned long len)
> >>>>> +int mlock_future_check(struct mm_struct *mm, unsigned long flags,
> >>>>> +               unsigned long len)
> >>>>>   {
> >>>>>       unsigned long locked, lock_limit;
> >>>>> 
> >>>> 
> >>>> So, an interesting question is if you actually want to charge secretmem
> >>>> pages against mlock now, or if you want a dedicated secretmem cgroup
> >>>> controller instead?
> >>> 
> >>> Well, with the current implementation there are three limits an
> >>> administrator can use to control secretmem: mlock, memcg and a
> >>> kernel parameter.
> >>> 
> >>> The kernel parameter puts a global upper limit on secretmem usage,
> >>> memcg accounts all secretmem allocations, including the unused memory
> >>> in the large page cache, and mlock allows a per-task limit for
> >>> secretmem mappings, just as it does for mlock()ed memory.
> >>> 
> >>> I didn't consider a dedicated cgroup, as it seems we already have enough
> >>> existing knobs and a new one would be unnecessary.
> >> 
> >> To me it feels like the mlock() limit is a wrong fit for secretmem. But
> >> maybe there are other cases of using the mlock() limit without actually
> >> doing mlock() that I am not aware of (most probably :) )?
> > 
> > Secretmem does not explicitly call mlock(), but it does what mlock()
> > does and a bit more. Citing mlock(2):
> > 
> >  mlock(),  mlock2(),  and  mlockall()  lock  part  or all of the calling
> >  process's virtual address space into RAM, preventing that  memory  from
> >  being paged to the swap area.
> > 
> > So, given that secretmem pages are not swappable, I think that
> > RLIMIT_MEMLOCK is appropriate here.
> > 
> 
> The page explicitly lists mlock() system calls.

Well, it's the mlock() man page, isn't it? ;-)

My thinking was that since secretmem does what mlock() does wrt
swappability, it should at least obey the same limit, i.e.
RLIMIT_MEMLOCK.
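
For illustration, the check could be wired into secretmem's mmap path
roughly like this (just a sketch: secretmem_mmap() and its exact flags
handling are my shorthand here, only mlock_future_check() comes from
this patch):

static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long len = vma->vm_end - vma->vm_start;

	/* charge the mapping against RLIMIT_MEMLOCK, as mlock() would */
	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
		return -EAGAIN;

	/* secretmem pages are never swappable, so mark the VMA locked */
	vma->vm_flags |= VM_LOCKED;

	return 0;
}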

> E.g., we also don't
> account for gigantic pages - which might be allocated from CMA and are
> not swappable.
 
Do you mean gigantic pages in hugetlbfs?
It seems to me that hugetlbfs accounting is a completely different
story.

> >> I mean, my concern is not earth shattering, this can be reworked later. As I
> >> said, it just feels wrong.
> >> 
> >> -- 
> >> Thanks,
> >> 
> >> David / dhildenb
> >> 
> > 
> > -- 
> > Sincerely yours,
> > Mike.
> > 
> 

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 4/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-11-13 14:06   ` Matthew Wilcox
@ 2020-11-15  8:45     ` Mike Rapoport
  0 siblings, 0 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-15  8:45 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer

On Fri, Nov 13, 2020 at 02:06:56PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 10, 2020 at 05:14:39PM +0200, Mike Rapoport wrote:
> > diff --git a/mm/Kconfig b/mm/Kconfig
> > index c89c5444924b..d8d170fa5210 100644
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -884,4 +884,7 @@ config ARCH_HAS_HUGEPD
> >  config MAPPING_DIRTY_HELPERS
> >          bool
> >  
> > +config SECRETMEM
> > +	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
> 
> So I now have to build this in, whether I want it or not?

Why wouldn't anybody want this nice feature? ;-)

Now, seriously, I hesitated a lot about having a prompt here, but in the
end I've decided to go without it.

The added footprint is not large: with x86 defconfig it's less than 8K,
and with a distro config (I checked Fedora's) the difference is less
than 1K because they already have CMA=y.

As this is a "security" feature, distros would most probably enable it
anyway, and I believe users who see a prompt like "Allow hiding memory
from the kernel" would say Y there.
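
FWIW, if we ever decide a prompt is needed, it would be a trivial
change, something like this (hypothetical wording, not part of the
patch):

config SECRETMEM
	bool "Allow hiding memory from the kernel (memfd_secret())"
	depends on ARCH_HAS_SET_DIRECT_MAP
	default !EMBEDDED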

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 4/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2020-11-13 13:58   ` Matthew Wilcox
@ 2020-11-15  8:53     ` Mike Rapoport
  0 siblings, 0 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-15  8:53 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer

On Fri, Nov 13, 2020 at 01:58:48PM +0000, Matthew Wilcox wrote:
> On Tue, Nov 10, 2020 at 05:14:39PM +0200, Mike Rapoport wrote:
> > +static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> > +{
> > +	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
> > +	struct inode *inode = file_inode(vmf->vma->vm_file);
> > +	pgoff_t offset = vmf->pgoff;
> > +	unsigned long addr;
> > +	struct page *page;
> > +	int ret = 0;
> > +
> > +	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
> > +		return vmf_error(-EINVAL);
> > +
> > +	page = find_get_entry(mapping, offset);
> 
> Why did you decide to use find_get_entry() here?  You don't handle
> swap or shadow entries.

Right, I've missed that. 

> > +	if (!page) {
> > +		page = secretmem_alloc_page(vmf->gfp_mask);
> > +		if (!page)
> > +			return vmf_error(-EINVAL);
> 
> Why is this EINVAL and not ENOMEM?

Ah, I was annoyed by the OOMs I got when I simulated various allocation
failures, so I changed it to get SIGBUS instead and then forgot to
restore it. Will fix.

> > +		ret = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
> > +		if (unlikely(ret))
> > +			goto err_put_page;
> > +
> > +		ret = set_direct_map_invalid_noflush(page, 1);
> > +		if (ret)
> > +			goto err_del_page_cache;
> > +
> > +		addr = (unsigned long)page_address(page);
> > +		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> > +
> > +		__SetPageUptodate(page);
> > +
> > +		ret = VM_FAULT_LOCKED;
> > +	}
> > +
> > +	vmf->page = page;
> > +	return ret;
> 
> Does sparse not warn you about this abuse of vm_fault_t?  Separate out
> 'ret' and 'err'.
 
Will fix.

> Andrew, please fold in this fix.  I suspect Mike will want to fix
> the other things I mention above.
> 
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index 3dfdbd85ba00..09ca27f21661 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -172,7 +172,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
>  	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
>  		return vmf_error(-EINVAL);
>  
> -	page = find_get_entry(mapping, offset);
> +	page = find_get_page(mapping, offset);
>  	if (!page) {
>  		page = secretmem_alloc_page(ctx, vmf->gfp_mask);
>  		if (!page)
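
For the record, with find_get_page(), -ENOMEM for allocation failures,
and 'ret'/'err' separated, I'd expect the function to end up roughly
like the sketch below. It is untested; the error labels and their
cleanup calls are my reading of the unquoted part of the function:

static vm_fault_t secretmem_fault(struct vm_fault *vmf)
{
	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
	struct inode *inode = file_inode(vmf->vma->vm_file);
	pgoff_t offset = vmf->pgoff;
	vm_fault_t ret = 0;
	unsigned long addr;
	struct page *page;
	int err;

	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
		return vmf_error(-EINVAL);

	/* find_get_page() does not return swap or shadow entries */
	page = find_get_page(mapping, offset);
	if (!page) {
		page = secretmem_alloc_page(vmf->gfp_mask);
		if (!page)
			return vmf_error(-ENOMEM);

		err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
		if (unlikely(err))
			goto err_put_page;

		err = set_direct_map_invalid_noflush(page, 1);
		if (err)
			goto err_del_page_cache;

		addr = (unsigned long)page_address(page);
		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

		__SetPageUptodate(page);

		ret = VM_FAULT_LOCKED;
	}

	vmf->page = page;
	return ret;

err_del_page_cache:
	delete_from_page_cache(page);
err_put_page:
	put_page(page);
	return vmf_error(err);
}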

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 8/9] arch, mm: wire up memfd_secret system call where relevant
  2020-11-13 12:25   ` Catalin Marinas
@ 2020-11-15  8:56     ` Mike Rapoport
  0 siblings, 0 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-15  8:56 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Palmer Dabbelt

On Fri, Nov 13, 2020 at 12:25:08PM +0000, Catalin Marinas wrote:
> Hi Mike,
> 
> On Tue, Nov 10, 2020 at 05:14:43PM +0200, Mike Rapoport wrote:
> > diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
> > index 6c1dcca067e0..c71c3fe0b6cd 100644
> > --- a/arch/arm64/include/asm/unistd32.h
> > +++ b/arch/arm64/include/asm/unistd32.h
> > @@ -891,6 +891,8 @@ __SYSCALL(__NR_faccessat2, sys_faccessat2)
> >  __SYSCALL(__NR_process_madvise, sys_process_madvise)
> >  #define __NR_watch_mount 441
> >  __SYSCALL(__NR_watch_mount, sys_watch_mount)
> > +#define __NR_memfd_secret 442
> > +__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
> 
> arch/arm doesn't select ARCH_HAS_SET_DIRECT_MAP and doesn't support
> memfd_secret(), so I wouldn't add it to the compat layer.

Ok, I'll drop it.
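
i.e., simply reverting the two lines the patch added there (untested
sketch):

--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -891,8 +891,6 @@ __SYSCALL(__NR_faccessat2, sys_faccessat2)
 __SYSCALL(__NR_process_madvise, sys_process_madvise)
 #define __NR_watch_mount 441
 __SYSCALL(__NR_watch_mount, sys_watch_mount)
-#define __NR_memfd_secret 442
-__SYSCALL(__NR_memfd_secret, sys_memfd_secret)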

> -- 
> Catalin

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 6/9] secretmem: add memcg accounting
  2020-11-13 23:42   ` Roman Gushchin
@ 2020-11-15  9:17     ` Mike Rapoport
  0 siblings, 0 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-15  9:17 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
	Matthew Wilcox, Mark Rutland, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Fri, Nov 13, 2020 at 03:42:25PM -0800, Roman Gushchin wrote:
> On Tue, Nov 10, 2020 at 07:16, Mike Rapoport <rppt@kernel.org> wrote:
> >
> > From: Mike Rapoport <rppt@linux.ibm.com>
> >
> > Account memory consumed by secretmem to memcg. The accounting is updated
> > when the memory is actually allocated and freed.
> >
> > Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> > ---
> >  mm/filemap.c   |  2 +-
> >  mm/secretmem.c | 42 +++++++++++++++++++++++++++++++++++++++++-
> >  2 files changed, 42 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/filemap.c b/mm/filemap.c
> > index 249cf489f5df..11387a077373 100644
> > --- a/mm/filemap.c
> > +++ b/mm/filemap.c
> > @@ -844,7 +844,7 @@ static noinline int __add_to_page_cache_locked(struct page *page,
> >         page->mapping = mapping;
> >         page->index = offset;
> >
> > -       if (!huge) {
> > +       if (!huge && !page->memcg_data) {
> >                 error = mem_cgroup_charge(page, current->mm, gfp);
> >                 if (error)
> >                         goto error;
> > diff --git a/mm/secretmem.c b/mm/secretmem.c
> > index 1aa2b7cffe0d..1eb7667016fa 100644
> > --- a/mm/secretmem.c
> > +++ b/mm/secretmem.c
> > @@ -17,6 +17,7 @@
> >  #include <linux/syscalls.h>
> >  #include <linux/memblock.h>
> >  #include <linux/pseudo_fs.h>
> > +#include <linux/memcontrol.h>
> >  #include <linux/set_memory.h>
> >  #include <linux/sched/signal.h>
> >
> > @@ -49,6 +50,38 @@ struct secretmem_ctx {
> >
> >  static struct cma *secretmem_cma;
> >
> 
> Hi Mike!
> 
> > +static int secretmem_memcg_charge(struct page *page, gfp_t gfp, int order)
> > +{
> > +       unsigned long nr_pages = (1 << order);
> > +       int i, err;
> > +
> > +       err = memcg_kmem_charge_page(page, gfp, order);
> > +       if (err)
> > +               return err;
> > +
> > +       for (i = 1; i < nr_pages; i++) {
> > +               struct page *p = page + i;
> > +
> > +               p->memcg_data = page->memcg_data;
> > +       }
> 
> Hm, it looks very strange to me. Why do we need to copy memcg_data?
> What about css reference counting?

I need to copy memcg_data to mark a page as already accounted, so it
won't be charged again when it is added to the page cache.

What happens here is that I allocate a large page and then use it as a
local cache for allocations in secretmem_fault(). I charge the large
page as kmem. 

During secretmem_fault() a small sub-page from that large page goes into
the page cache, and there I skip its memcg accounting.

In the end, when the large page is freed, the memcg_data for all its
sub-pages is cleared and I uncharge the memcg with the order of the
large page.

An alternative would be to uncharge a small page from kmem in
secretmem_fault() and make this page charged in add_to_page_cache(), but
that would complicate the release path as I would need to re-charge the
small page back to kmem at secretmem_freepage() and track all the
participating memcgs till the large page is freed.
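
To spell out the flow (this only summarizes the patch above, no new
code):

/*
 * secretmem_pool_increase():
 *	cma_alloc() a PMD-size large page, then
 *	secretmem_memcg_charge(page, gfp, PMD_PAGE_ORDER):
 *	  - memcg_kmem_charge_page() charges the whole large page once
 *	  - memcg_data is copied from the head to every sub-page
 *
 * secretmem_fault():
 *	add_to_page_cache(sub_page, ...):
 *	  - __add_to_page_cache_locked() sees sub_page->memcg_data set
 *	    and skips mem_cgroup_charge()
 *
 * secretmem_cleanup_chunk():
 *	secretmem_memcg_uncharge(page, PMD_PAGE_ORDER):
 *	  - memcg_data is cleared on the sub-pages
 *	  - the whole large page is uncharged once
 */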

> And what about statistics?

Hmm, those probably won't be accurate :-/

> I'm sorry for being late.
> 
> Thank you!
> 
> > +
> > +       return 0;
> > +}
> > +
> > +static void secretmem_memcg_uncharge(struct page *page, int order)
> > +{
> > +       unsigned long nr_pages = (1 << order);
> > +       int i;
> > +
> > +       for (i = 1; i < nr_pages; i++) {
> > +               struct page *p = page + i;
> > +
> > +               p->memcg_data = 0;
> > +       }
> > +
> > +       memcg_kmem_uncharge_page(page, PMD_PAGE_ORDER);
> > +}
> > +
> >  static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> >  {
> >         unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
> > @@ -61,10 +94,14 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> >         if (!page)
> >                 return -ENOMEM;
> >
> > -       err = set_direct_map_invalid_noflush(page, nr_pages);
> > +       err = secretmem_memcg_charge(page, gfp, PMD_PAGE_ORDER);
> >         if (err)
> >                 goto err_cma_release;
> >
> > +       err = set_direct_map_invalid_noflush(page, nr_pages);
> > +       if (err)
> > +               goto err_memcg_uncharge;
> > +
> >         addr = (unsigned long)page_address(page);
> >         err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
> >         if (err)
> > @@ -81,6 +118,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> >          * won't fail
> >          */
> >         set_direct_map_default_noflush(page, nr_pages);
> > +err_memcg_uncharge:
> > +       secretmem_memcg_uncharge(page, PMD_PAGE_ORDER);
> >  err_cma_release:
> >         cma_release(secretmem_cma, page, nr_pages);
> >         return err;
> > @@ -310,6 +349,7 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
> >         int i;
> >
> >         set_direct_map_default_noflush(page, nr_pages);
> > +       secretmem_memcg_uncharge(page, PMD_PAGE_ORDER);
> >
> >         for (i = 0; i < nr_pages; i++)
> >                 clear_highpage(page + i);
> > --
> > 2.28.0
> >
> >

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 2/9] mmap: make mlock_future_check() global
  2020-11-15  8:26             ` Mike Rapoport
@ 2020-11-17 15:09               ` David Hildenbrand
  2020-11-17 15:58                 ` Mike Rapoport
  0 siblings, 1 reply; 29+ messages in thread
From: David Hildenbrand @ 2020-11-17 15:09 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On 15.11.20 09:26, Mike Rapoport wrote:
> On Thu, Nov 12, 2020 at 09:15:18PM +0100, David Hildenbrand wrote:
>>
>>> Am 12.11.2020 um 20:08 schrieb Mike Rapoport <rppt@kernel.org>:
>>>
>>> On Thu, Nov 12, 2020 at 05:22:00PM +0100, David Hildenbrand wrote:
>>>>> On 10.11.20 19:06, Mike Rapoport wrote:
>>>>> On Tue, Nov 10, 2020 at 06:17:26PM +0100, David Hildenbrand wrote:
>>>>>> On 10.11.20 16:14, Mike Rapoport wrote:
>>>>>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>>>>>
>>>>>>> It will be used by the upcoming secret memory implementation.
>>>>>>>
>>>>>>> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
>>>>>>> ---
>>>>>>>    mm/internal.h | 3 +++
>>>>>>>    mm/mmap.c     | 5 ++---
>>>>>>>    2 files changed, 5 insertions(+), 3 deletions(-)
>>>>>>>
>>>>>>> diff --git a/mm/internal.h b/mm/internal.h
>>>>>>> index c43ccdddb0f6..ae146a260b14 100644
>>>>>>> --- a/mm/internal.h
>>>>>>> +++ b/mm/internal.h
>>>>>>> @@ -348,6 +348,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
>>>>>>>    extern void mlock_vma_page(struct page *page);
>>>>>>>    extern unsigned int munlock_vma_page(struct page *page);
>>>>>>> +extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
>>>>>>> +                  unsigned long len);
>>>>>>> +
>>>>>>>    /*
>>>>>>>     * Clear the page's PageMlocked().  This can be useful in a situation where
>>>>>>>     * we want to unconditionally remove a page from the pagecache -- e.g.,
>>>>>>> diff --git a/mm/mmap.c b/mm/mmap.c
>>>>>>> index 61f72b09d990..c481f088bd50 100644
>>>>>>> --- a/mm/mmap.c
>>>>>>> +++ b/mm/mmap.c
>>>>>>> @@ -1348,9 +1348,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
>>>>>>>        return hint;
>>>>>>>    }
>>>>>>> -static inline int mlock_future_check(struct mm_struct *mm,
>>>>>>> -                     unsigned long flags,
>>>>>>> -                     unsigned long len)
>>>>>>> +int mlock_future_check(struct mm_struct *mm, unsigned long flags,
>>>>>>> +               unsigned long len)
>>>>>>>    {
>>>>>>>        unsigned long locked, lock_limit;
>>>>>>>
>>>>>>
>>>>>> So, an interesting question is if you actually want to charge secretmem
>>>>>> pages against mlock now, or if you want a dedicated secretmem cgroup
>>>>>> controller instead?
>>>>>
>>>>> Well, with the current implementation there are three limits an
>>>>> administrator can use to control secretmem: mlock, memcg and a
>>>>> kernel parameter.
>>>>>
>>>>> The kernel parameter puts a global upper limit on secretmem usage,
>>>>> memcg accounts all secretmem allocations, including the unused memory
>>>>> in the large page cache, and mlock allows a per-task limit for
>>>>> secretmem mappings, just as it does for mlock()ed memory.
>>>>>
>>>>> I didn't consider a dedicated cgroup, as it seems we already have enough
>>>>> existing knobs and a new one would be unnecessary.
>>>>
>>>> To me it feels like the mlock() limit is a wrong fit for secretmem. But
>>>> maybe there are other cases of using the mlock() limit without actually
>>>> doing mlock() that I am not aware of (most probably :) )?
>>>
>>> Secretmem does not explicitly call mlock(), but it does what mlock()
>>> does and a bit more. Citing mlock(2):
>>>
>>>   mlock(),  mlock2(),  and  mlockall()  lock  part  or all of the calling
>>>   process's virtual address space into RAM, preventing that  memory  from
>>>   being paged to the swap area.
>>>
>>> So, given that secretmem pages are not swappable, I think that
>>> RLIMIT_MEMLOCK is appropriate here.
>>>
>>
>> The page explicitly lists mlock() system calls.
> 
> Well, it's the mlock() man page, isn't it? ;-)

;)

> 
> My thinking was that since secretmem does what mlock() does wrt
> swappability, it should at least obey the same limit, i.e.
> RLIMIT_MEMLOCK.

Right, but at least currently, it behaves like any other CMA allocation
(IIRC they are all unmovable and, therefore, not swappable). In the
future, if pages become movable (but not swappable), I guess it might
make more sense. I assume we never ever want to swap secretmem.

"man getrlimit" states for RLIMIT_MEMLOCK:

"This is the maximum number of bytes of memory that may be
  locked into RAM.  [...] This limit affects
  mlock(2), mlockall(2), and the mmap(2) MAP_LOCKED operation.
  Since Linux 2.6.9, it also affects the shmctl(2) SHM_LOCK
  operation [...]"

So that place has to be updated as well, I guess? Otherwise this might
come as a surprise to users.

> 
>> E.g., we also don't
>> account for gigantic pages - which might be allocated from CMA and are
>> not swappable.
>   
> Do you mean gigantic pages in hugetlbfs?

Yes

> It seems to me that hugetlbfs accounting is a completely different
> story.

I'd say it is right now comparable to secretmem - which is why I thought
similar accounting would make sense.


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v8 2/9] mmap: make mlock_future_check() global
  2020-11-17 15:09               ` David Hildenbrand
@ 2020-11-17 15:58                 ` Mike Rapoport
  0 siblings, 0 replies; 29+ messages in thread
From: Mike Rapoport @ 2020-11-17 15:58 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
	Borislav Petkov, Catalin Marinas, Christopher Lameter,
	Dan Williams, Dave Hansen, Elena Reshetova, H. Peter Anvin,
	Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
	Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Tue, Nov 17, 2020 at 04:09:39PM +0100, David Hildenbrand wrote:
> On 15.11.20 09:26, Mike Rapoport wrote:
> > On Thu, Nov 12, 2020 at 09:15:18PM +0100, David Hildenbrand wrote:

...

> > My thinking was that since secretmem does what mlock() does wrt
> > swappability, it should at least obey the same limit, i.e.
> > RLIMIT_MEMLOCK.
> 
> Right, but at least currently, it behaves like any other CMA allocation
> (IIRC they are all unmovable and, therefore, not swappable). In the future,
> if pages become movable (but not swappable), I guess it might make more
> sense. I assume we never ever want to swap secretmem.
> 
> "man getrlimit" states for RLIMIT_MEMLOCK:
> 
> "This is the maximum number of bytes of memory that may be
>  locked into RAM.  [...] This limit affects
>  mlock(2), mlockall(2), and the mmap(2) MAP_LOCKED operation.
>  Since Linux 2.6.9, it also affects the shmctl(2) SHM_LOCK
>  operation [...]"
> 
> So that place has to be updated as well, I guess? Otherwise this might come
> as a surprise to users.

Sure.

> > 
> > > E.g., we also don't
> > > account for gigantic pages - which might be allocated from CMA and are
> > > not swappable.
> > Do you mean gigantic pages in hugetlbfs?
> 
> Yes
> 
> > It seems to me that hugetlbfs accounting is a completely different
> > story.
> 
> I'd say it is right now comparable to secretmem - which is why I thought
> similar accounting would make sense.

IMHO, using RLIMIT_MEMLOCK and memcg is more straightforward than
adding a custom cgroup controller.

And if we see a need for an additional mechanism, we can always add it.
 
> -- 
> Thanks,
> 
> David / dhildenb
> 
> 

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread

Thread overview: 29+ messages
2020-11-10 15:14 [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
2020-11-10 15:14 ` [PATCH v8 1/9] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
2020-11-10 15:14 ` [PATCH v8 2/9] mmap: make mlock_future_check() global Mike Rapoport
2020-11-10 17:17   ` David Hildenbrand
2020-11-10 18:06     ` Mike Rapoport
2020-11-12 16:22       ` David Hildenbrand
2020-11-12 19:08         ` Mike Rapoport
2020-11-12 20:15           ` David Hildenbrand
2020-11-15  8:26             ` Mike Rapoport
2020-11-17 15:09               ` David Hildenbrand
2020-11-17 15:58                 ` Mike Rapoport
2020-11-10 15:14 ` [PATCH v8 3/9] set_memory: allow set_direct_map_*_noflush() for multiple pages Mike Rapoport
2020-11-13 12:26   ` Catalin Marinas
2020-11-10 15:14 ` [PATCH v8 4/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
2020-11-13 13:58   ` Matthew Wilcox
2020-11-15  8:53     ` Mike Rapoport
2020-11-13 14:06   ` Matthew Wilcox
2020-11-15  8:45     ` Mike Rapoport
2020-11-10 15:14 ` [PATCH v8 5/9] secretmem: use PMD-size pages to amortize direct map fragmentation Mike Rapoport
2020-11-10 15:14 ` [PATCH v8 6/9] secretmem: add memcg accounting Mike Rapoport
2020-11-13  1:35   ` Andrew Morton
2020-11-13 23:42   ` Roman Gushchin
2020-11-15  9:17     ` Mike Rapoport
2020-11-10 15:14 ` [PATCH v8 7/9] PM: hibernate: disable when there are active secretmem users Mike Rapoport
2020-11-10 15:14 ` [PATCH v8 8/9] arch, mm: wire up memfd_secret system call where relevant Mike Rapoport
2020-11-13 12:25   ` Catalin Marinas
2020-11-15  8:56     ` Mike Rapoport
2020-11-10 15:14 ` [PATCH v8 9/9] secretmem: test: add basic selftest for memfd_secret(2) Mike Rapoport
2020-11-12 14:56 ` [PATCH v8 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
