* [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-03-03 16:22 ` Mike Rapoport
  0 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

@Andrew, this is based on v5.12-rc1, I can rebase whatever way you prefer.

This is an implementation of "secret" mappings backed by a file descriptor.

The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call. The desired protection mode for the
memory is configured using the flags parameter of the system call. An
mmap() of a file descriptor created with memfd_secret() will create a
"secret" memory mapping. The pages in that mapping will be marked as not
present in the direct map and will be present only in the page table of
the owning mm.

Although normally Linux userspace mappings are protected from other users,
such secret mappings are useful for environments where a hostile tenant is
trying to trick the kernel into giving them access to other tenants'
mappings.

Additionally, in the future the secret mappings may be used as a means to
protect guest memory in a virtual machine host.

To demonstrate secret memory usage, we've created a userspace library

https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git

that does two things: first, it acts as a preloader for OpenSSL,
redirecting all OPENSSL_malloc calls to secret memory so that any secret
keys are automatically protected this way; second, it exposes the API to
users who need it. We anticipate that a lot of the use cases will be like
the OpenSSL one: many toolkits that deal with secret keys already have
special handling for that memory to try to give it greater protection, so
this could simply be plugged into the toolkits without any need to modify
user applications.

Hiding secret memory mappings behind an anonymous file allows usage of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.

The anonymous file may also be used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
ABIs in the future.

Removing pages from the direct map may cause its fragmentation on
architectures that use large pages to map physical memory, which affects
system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to have
secretmem disabled by default with the ability of a system administrator to
enable it at boot time.

In addition, there is also a long term goal to improve management of the
direct map.

[1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/

v18:
* rebase on v5.12-rc1
* merge kfence fix into the original patch
* massage commit message of the patch introducing the memfd_secret syscall

v17: https://lore.kernel.org/lkml/20210208084920.2884-1-rppt@kernel.org
* Remove pool of large pages backing secretmem allocations, per Michal Hocko
* Add secretmem pages to unevictable LRU, per Michal Hocko
* Use GFP_HIGHUSER as secretmem mapping mask, per Michal Hocko
* Make secretmem an opt-in feature that is disabled by default
 
v16: https://lore.kernel.org/lkml/20210121122723.3446-1-rppt@kernel.org
* Fix memory leak introduced in v15
* Clean the data left from previous page user before handing the page to
  the userspace

v15: https://lore.kernel.org/lkml/20210120180612.1058-1-rppt@kernel.org
* Add riscv/Kconfig update to disable set_memory operations for nommu
  builds (patch 3)
* Update the code around add_to_page_cache() per Matthew's comments
  (patches 6,7)
* Add fixups for build/checkpatch errors discovered by CI systems

v14: https://lore.kernel.org/lkml/20201203062949.5484-1-rppt@kernel.org
* Finally s/mod_node_page_state/mod_lruvec_page_state/

v13: https://lore.kernel.org/lkml/20201201074559.27742-1-rppt@kernel.org
* Added Reviewed-by, thanks Catalin and David
* s/mod_node_page_state/mod_lruvec_page_state/ as Shakeel suggested

Older history:
v12: https://lore.kernel.org/lkml/20201125092208.12544-1-rppt@kernel.org
v11: https://lore.kernel.org/lkml/20201124092556.12009-1-rppt@kernel.org
v10: https://lore.kernel.org/lkml/20201123095432.5860-1-rppt@kernel.org
v9: https://lore.kernel.org/lkml/20201117162932.13649-1-rppt@kernel.org
v8: https://lore.kernel.org/lkml/20201110151444.20662-1-rppt@kernel.org
v7: https://lore.kernel.org/lkml/20201026083752.13267-1-rppt@kernel.org
v6: https://lore.kernel.org/lkml/20200924132904.1391-1-rppt@kernel.org
v5: https://lore.kernel.org/lkml/20200916073539.3552-1-rppt@kernel.org
v4: https://lore.kernel.org/lkml/20200818141554.13945-1-rppt@kernel.org
v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org
rfc-v2: https://lore.kernel.org/lkml/20200706172051.19465-1-rppt@kernel.org/
rfc-v1: https://lore.kernel.org/lkml/20200130162340.GA14232@rapoport-lnx/
rfc-v0: https://lore.kernel.org/lkml/1572171452-7958-1-git-send-email-rppt@kernel.org/

Mike Rapoport (9):
  mm: add definition of PMD_PAGE_ORDER
  mmap: make mlock_future_check() global
  riscv/Kconfig: make direct map manipulation options depend on MMU
  set_memory: allow set_direct_map_*_noflush() for multiple pages
  set_memory: allow querying whether set_direct_map_*() is actually enabled
  mm: introduce memfd_secret system call to create "secret" memory areas
  PM: hibernate: disable when there are active secretmem users
  arch, mm: wire up memfd_secret system call where relevant
  secretmem: test: add basic selftest for memfd_secret(2)

 arch/arm64/include/asm/Kbuild             |   1 -
 arch/arm64/include/asm/cacheflush.h       |   6 -
 arch/arm64/include/asm/kfence.h           |   2 +-
 arch/arm64/include/asm/set_memory.h       |  17 ++
 arch/arm64/include/uapi/asm/unistd.h      |   1 +
 arch/arm64/kernel/machine_kexec.c         |   1 +
 arch/arm64/mm/mmu.c                       |   6 +-
 arch/arm64/mm/pageattr.c                  |  23 +-
 arch/riscv/Kconfig                        |   4 +-
 arch/riscv/include/asm/set_memory.h       |   4 +-
 arch/riscv/include/asm/unistd.h           |   1 +
 arch/riscv/mm/pageattr.c                  |   8 +-
 arch/x86/entry/syscalls/syscall_32.tbl    |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl    |   1 +
 arch/x86/include/asm/set_memory.h         |   4 +-
 arch/x86/mm/pat/set_memory.c              |   8 +-
 fs/dax.c                                  |  11 +-
 include/linux/pgtable.h                   |   3 +
 include/linux/secretmem.h                 |  30 +++
 include/linux/set_memory.h                |  16 +-
 include/linux/syscalls.h                  |   1 +
 include/uapi/asm-generic/unistd.h         |   6 +-
 include/uapi/linux/magic.h                |   1 +
 kernel/power/hibernate.c                  |   5 +-
 kernel/power/snapshot.c                   |   4 +-
 kernel/sys_ni.c                           |   2 +
 mm/Kconfig                                |   3 +
 mm/Makefile                               |   1 +
 mm/gup.c                                  |  10 +
 mm/internal.h                             |   3 +
 mm/mlock.c                                |   3 +-
 mm/mmap.c                                 |   5 +-
 mm/secretmem.c                            | 261 +++++++++++++++++++
 mm/vmalloc.c                              |   5 +-
 scripts/checksyscalls.sh                  |   4 +
 tools/testing/selftests/vm/.gitignore     |   1 +
 tools/testing/selftests/vm/Makefile       |   3 +-
 tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++
 tools/testing/selftests/vm/run_vmtests.sh |  17 ++
 39 files changed, 726 insertions(+), 53 deletions(-)
 create mode 100644 arch/arm64/include/asm/set_memory.h
 create mode 100644 include/linux/secretmem.h
 create mode 100644 mm/secretmem.c
 create mode 100644 tools/testing/selftests/vm/memfd_secret.c

-- 
2.28.0
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-03-03 16:22 ` Mike Rapoport
  0 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

@Andrew, this is based on v5.12-rc1, I can rebase whatever way you prefer.

This is an implementation of "secret" mappings backed by a file descriptor.

The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call The desired protection mode for the
memory is configured using flags parameter of the system call. The mmap()
of the file descriptor created with memfd_secret() will create a "secret"
memory mapping. The pages in that mapping will be marked as not present in
the direct map and will be present only in the page table of the owning mm.

Although normally Linux userspace mappings are protected from other users,
such secret mappings are useful for environments where a hostile tenant is
trying to trick the kernel into giving them access to other tenants
mappings.

Additionally, in the future the secret mappings may be used as a mean to
protect guest memory in a virtual machine host.

For demonstration of secret memory usage we've created a userspace library

https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git

that does two things: the first is act as a preloader for openssl to
redirect all the OPENSSL_malloc calls to secret memory meaning any secret
keys get automatically protected this way and the other thing it does is
expose the API to the user who needs it. We anticipate that a lot of the
use cases would be like the openssl one: many toolkits that deal with
secret keys already have special handling for the memory to try to give
them greater protection, so this would simply be pluggable into the
toolkits without any need for user application modification.

Hiding secret memory mappings behind an anonymous file allows usage of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.

The anonymous file may be also used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
ABIs in the future.

Removing of the pages from the direct map may cause its fragmentation on
architectures that use large pages to map the physical memory which affects
the system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to have
secretmem disabled by default with the ability of a system administrator to
enable it at boot time.

In addition, there is also a long term goal to improve management of the
direct map.

[1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/

v18:
* rebase on v5.12-rc1
* merge kfence fix into the original patch
* massage commit message of the patch introducing the memfd_secret syscall

v17: https://lore.kernel.org/lkml/20210208084920.2884-1-rppt@kernel.org
* Remove pool of large pages backing secretmem allocations, per Michal Hocko
* Add secretmem pages to unevictable LRU, per Michal Hocko
* Use GFP_HIGHUSER as secretmem mapping mask, per Michal Hocko
* Make secretmem an opt-in feature that is disabled by default
 
v16: https://lore.kernel.org/lkml/20210121122723.3446-1-rppt@kernel.org
* Fix memory leak intorduced in v15
* Clean the data left from previous page user before handing the page to
  the userspace

v15: https://lore.kernel.org/lkml/20210120180612.1058-1-rppt@kernel.org
* Add riscv/Kconfig update to disable set_memory operations for nommu
  builds (patch 3)
* Update the code around add_to_page_cache() per Matthew's comments
  (patches 6,7)
* Add fixups for build/checkpatch errors discovered by CI systems

v14: https://lore.kernel.org/lkml/20201203062949.5484-1-rppt@kernel.org
* Finally s/mod_node_page_state/mod_lruvec_page_state/

v13: https://lore.kernel.org/lkml/20201201074559.27742-1-rppt@kernel.org
* Added Reviewed-by, thanks Catalin and David
* s/mod_node_page_state/mod_lruvec_page_state/ as Shakeel suggested

Older history:
v12: https://lore.kernel.org/lkml/20201125092208.12544-1-rppt@kernel.org
v11: https://lore.kernel.org/lkml/20201124092556.12009-1-rppt@kernel.org
v10: https://lore.kernel.org/lkml/20201123095432.5860-1-rppt@kernel.org
v9: https://lore.kernel.org/lkml/20201117162932.13649-1-rppt@kernel.org
v8: https://lore.kernel.org/lkml/20201110151444.20662-1-rppt@kernel.org
v7: https://lore.kernel.org/lkml/20201026083752.13267-1-rppt@kernel.org
v6: https://lore.kernel.org/lkml/20200924132904.1391-1-rppt@kernel.org
v5: https://lore.kernel.org/lkml/20200916073539.3552-1-rppt@kernel.org
v4: https://lore.kernel.org/lkml/20200818141554.13945-1-rppt@kernel.org
v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org
rfc-v2: https://lore.kernel.org/lkml/20200706172051.19465-1-rppt@kernel.org/
rfc-v1: https://lore.kernel.org/lkml/20200130162340.GA14232@rapoport-lnx/
rfc-v0: https://lore.kernel.org/lkml/1572171452-7958-1-git-send-email-rppt@kernel.org/

Mike Rapoport (9):
  mm: add definition of PMD_PAGE_ORDER
  mmap: make mlock_future_check() global
  riscv/Kconfig: make direct map manipulation options depend on MMU
  set_memory: allow set_direct_map_*_noflush() for multiple pages
  set_memory: allow querying whether set_direct_map_*() is actually enabled
  mm: introduce memfd_secret system call to create "secret" memory areas
  PM: hibernate: disable when there are active secretmem users
  arch, mm: wire up memfd_secret system call where relevant
  secretmem: test: add basic selftest for memfd_secret(2)

 arch/arm64/include/asm/Kbuild             |   1 -
 arch/arm64/include/asm/cacheflush.h       |   6 -
 arch/arm64/include/asm/kfence.h           |   2 +-
 arch/arm64/include/asm/set_memory.h       |  17 ++
 arch/arm64/include/uapi/asm/unistd.h      |   1 +
 arch/arm64/kernel/machine_kexec.c         |   1 +
 arch/arm64/mm/mmu.c                       |   6 +-
 arch/arm64/mm/pageattr.c                  |  23 +-
 arch/riscv/Kconfig                        |   4 +-
 arch/riscv/include/asm/set_memory.h       |   4 +-
 arch/riscv/include/asm/unistd.h           |   1 +
 arch/riscv/mm/pageattr.c                  |   8 +-
 arch/x86/entry/syscalls/syscall_32.tbl    |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl    |   1 +
 arch/x86/include/asm/set_memory.h         |   4 +-
 arch/x86/mm/pat/set_memory.c              |   8 +-
 fs/dax.c                                  |  11 +-
 include/linux/pgtable.h                   |   3 +
 include/linux/secretmem.h                 |  30 +++
 include/linux/set_memory.h                |  16 +-
 include/linux/syscalls.h                  |   1 +
 include/uapi/asm-generic/unistd.h         |   6 +-
 include/uapi/linux/magic.h                |   1 +
 kernel/power/hibernate.c                  |   5 +-
 kernel/power/snapshot.c                   |   4 +-
 kernel/sys_ni.c                           |   2 +
 mm/Kconfig                                |   3 +
 mm/Makefile                               |   1 +
 mm/gup.c                                  |  10 +
 mm/internal.h                             |   3 +
 mm/mlock.c                                |   3 +-
 mm/mmap.c                                 |   5 +-
 mm/secretmem.c                            | 261 +++++++++++++++++++
 mm/vmalloc.c                              |   5 +-
 scripts/checksyscalls.sh                  |   4 +
 tools/testing/selftests/vm/.gitignore     |   1 +
 tools/testing/selftests/vm/Makefile       |   3 +-
 tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++
 tools/testing/selftests/vm/run_vmtests.sh |  17 ++
 39 files changed, 726 insertions(+), 53 deletions(-)
 create mode 100644 arch/arm64/include/asm/set_memory.h
 create mode 100644 include/linux/secretmem.h
 create mode 100644 mm/secretmem.c
 create mode 100644 tools/testing/selftests/vm/memfd_secret.c

-- 
2.28.0


^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-03-03 16:22 ` Mike Rapoport
  0 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

@Andrew, this is based on v5.12-rc1, I can rebase whatever way you prefer.

This is an implementation of "secret" mappings backed by a file descriptor.

The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call The desired protection mode for the
memory is configured using flags parameter of the system call. The mmap()
of the file descriptor created with memfd_secret() will create a "secret"
memory mapping. The pages in that mapping will be marked as not present in
the direct map and will be present only in the page table of the owning mm.

Although normally Linux userspace mappings are protected from other users,
such secret mappings are useful for environments where a hostile tenant is
trying to trick the kernel into giving them access to other tenants
mappings.

Additionally, in the future the secret mappings may be used as a mean to
protect guest memory in a virtual machine host.

For demonstration of secret memory usage we've created a userspace library

https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git

that does two things: the first is act as a preloader for openssl to
redirect all the OPENSSL_malloc calls to secret memory meaning any secret
keys get automatically protected this way and the other thing it does is
expose the API to the user who needs it. We anticipate that a lot of the
use cases would be like the openssl one: many toolkits that deal with
secret keys already have special handling for the memory to try to give
them greater protection, so this would simply be pluggable into the
toolkits without any need for user application modification.

Hiding secret memory mappings behind an anonymous file allows usage of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.

The anonymous file may be also used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
ABIs in the future.

Removing of the pages from the direct map may cause its fragmentation on
architectures that use large pages to map the physical memory which affects
the system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to have
secretmem disabled by default with the ability of a system administrator to
enable it at boot time.

In addition, there is also a long term goal to improve management of the
direct map.

[1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/

v18:
* rebase on v5.12-rc1
* merge kfence fix into the original patch
* massage commit message of the patch introducing the memfd_secret syscall

v17: https://lore.kernel.org/lkml/20210208084920.2884-1-rppt@kernel.org
* Remove pool of large pages backing secretmem allocations, per Michal Hocko
* Add secretmem pages to unevictable LRU, per Michal Hocko
* Use GFP_HIGHUSER as secretmem mapping mask, per Michal Hocko
* Make secretmem an opt-in feature that is disabled by default
 
v16: https://lore.kernel.org/lkml/20210121122723.3446-1-rppt@kernel.org
* Fix memory leak intorduced in v15
* Clean the data left from previous page user before handing the page to
  the userspace

v15: https://lore.kernel.org/lkml/20210120180612.1058-1-rppt@kernel.org
* Add riscv/Kconfig update to disable set_memory operations for nommu
  builds (patch 3)
* Update the code around add_to_page_cache() per Matthew's comments
  (patches 6,7)
* Add fixups for build/checkpatch errors discovered by CI systems

v14: https://lore.kernel.org/lkml/20201203062949.5484-1-rppt@kernel.org
* Finally s/mod_node_page_state/mod_lruvec_page_state/

v13: https://lore.kernel.org/lkml/20201201074559.27742-1-rppt@kernel.org
* Added Reviewed-by, thanks Catalin and David
* s/mod_node_page_state/mod_lruvec_page_state/ as Shakeel suggested

Older history:
v12: https://lore.kernel.org/lkml/20201125092208.12544-1-rppt@kernel.org
v11: https://lore.kernel.org/lkml/20201124092556.12009-1-rppt@kernel.org
v10: https://lore.kernel.org/lkml/20201123095432.5860-1-rppt@kernel.org
v9: https://lore.kernel.org/lkml/20201117162932.13649-1-rppt@kernel.org
v8: https://lore.kernel.org/lkml/20201110151444.20662-1-rppt@kernel.org
v7: https://lore.kernel.org/lkml/20201026083752.13267-1-rppt@kernel.org
v6: https://lore.kernel.org/lkml/20200924132904.1391-1-rppt@kernel.org
v5: https://lore.kernel.org/lkml/20200916073539.3552-1-rppt@kernel.org
v4: https://lore.kernel.org/lkml/20200818141554.13945-1-rppt@kernel.org
v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org
rfc-v2: https://lore.kernel.org/lkml/20200706172051.19465-1-rppt@kernel.org/
rfc-v1: https://lore.kernel.org/lkml/20200130162340.GA14232@rapoport-lnx/
rfc-v0: https://lore.kernel.org/lkml/1572171452-7958-1-git-send-email-rppt@kernel.org/

Mike Rapoport (9):
  mm: add definition of PMD_PAGE_ORDER
  mmap: make mlock_future_check() global
  riscv/Kconfig: make direct map manipulation options depend on MMU
  set_memory: allow set_direct_map_*_noflush() for multiple pages
  set_memory: allow querying whether set_direct_map_*() is actually enabled
  mm: introduce memfd_secret system call to create "secret" memory areas
  PM: hibernate: disable when there are active secretmem users
  arch, mm: wire up memfd_secret system call where relevant
  secretmem: test: add basic selftest for memfd_secret(2)

 arch/arm64/include/asm/Kbuild             |   1 -
 arch/arm64/include/asm/cacheflush.h       |   6 -
 arch/arm64/include/asm/kfence.h           |   2 +-
 arch/arm64/include/asm/set_memory.h       |  17 ++
 arch/arm64/include/uapi/asm/unistd.h      |   1 +
 arch/arm64/kernel/machine_kexec.c         |   1 +
 arch/arm64/mm/mmu.c                       |   6 +-
 arch/arm64/mm/pageattr.c                  |  23 +-
 arch/riscv/Kconfig                        |   4 +-
 arch/riscv/include/asm/set_memory.h       |   4 +-
 arch/riscv/include/asm/unistd.h           |   1 +
 arch/riscv/mm/pageattr.c                  |   8 +-
 arch/x86/entry/syscalls/syscall_32.tbl    |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl    |   1 +
 arch/x86/include/asm/set_memory.h         |   4 +-
 arch/x86/mm/pat/set_memory.c              |   8 +-
 fs/dax.c                                  |  11 +-
 include/linux/pgtable.h                   |   3 +
 include/linux/secretmem.h                 |  30 +++
 include/linux/set_memory.h                |  16 +-
 include/linux/syscalls.h                  |   1 +
 include/uapi/asm-generic/unistd.h         |   6 +-
 include/uapi/linux/magic.h                |   1 +
 kernel/power/hibernate.c                  |   5 +-
 kernel/power/snapshot.c                   |   4 +-
 kernel/sys_ni.c                           |   2 +
 mm/Kconfig                                |   3 +
 mm/Makefile                               |   1 +
 mm/gup.c                                  |  10 +
 mm/internal.h                             |   3 +
 mm/mlock.c                                |   3 +-
 mm/mmap.c                                 |   5 +-
 mm/secretmem.c                            | 261 +++++++++++++++++++
 mm/vmalloc.c                              |   5 +-
 scripts/checksyscalls.sh                  |   4 +
 tools/testing/selftests/vm/.gitignore     |   1 +
 tools/testing/selftests/vm/Makefile       |   3 +-
 tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++
 tools/testing/selftests/vm/run_vmtests.sh |  17 ++
 39 files changed, 726 insertions(+), 53 deletions(-)
 create mode 100644 arch/arm64/include/asm/set_memory.h
 create mode 100644 include/linux/secretmem.h
 create mode 100644 mm/secretmem.c
 create mode 100644 tools/testing/selftests/vm/memfd_secret.c

-- 
2.28.0


_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-03-03 16:22 ` Mike Rapoport
  0 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86

From: Mike Rapoport <rppt@linux.ibm.com>

Hi,

@Andrew, this is based on v5.12-rc1, I can rebase whatever way you prefer.

This is an implementation of "secret" mappings backed by a file descriptor.

The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call The desired protection mode for the
memory is configured using flags parameter of the system call. The mmap()
of the file descriptor created with memfd_secret() will create a "secret"
memory mapping. The pages in that mapping will be marked as not present in
the direct map and will be present only in the page table of the owning mm.

Although normally Linux userspace mappings are protected from other users,
such secret mappings are useful for environments where a hostile tenant is
trying to trick the kernel into giving them access to other tenants
mappings.

Additionally, in the future the secret mappings may be used as a mean to
protect guest memory in a virtual machine host.

For demonstration of secret memory usage we've created a userspace library

https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git

that does two things: the first is act as a preloader for openssl to
redirect all the OPENSSL_malloc calls to secret memory meaning any secret
keys get automatically protected this way and the other thing it does is
expose the API to the user who needs it. We anticipate that a lot of the
use cases would be like the openssl one: many toolkits that deal with
secret keys already have special handling for the memory to try to give
them greater protection, so this would simply be pluggable into the
toolkits without any need for user application modification.

Hiding secret memory mappings behind an anonymous file allows usage of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.

The anonymous file may be also used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
ABIs in the future.

Removing of the pages from the direct map may cause its fragmentation on
architectures that use large pages to map the physical memory which affects
the system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to have
secretmem disabled by default with the ability of a system administrator to
enable it at boot time.

In addition, there is also a long term goal to improve management of the
direct map.

[1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/

v18:
* rebase on v5.12-rc1
* merge kfence fix into the original patch
* massage commit message of the patch introducing the memfd_secret syscall

v17: https://lore.kernel.org/lkml/20210208084920.2884-1-rppt@kernel.org
* Remove pool of large pages backing secretmem allocations, per Michal Hocko
* Add secretmem pages to unevictable LRU, per Michal Hocko
* Use GFP_HIGHUSER as secretmem mapping mask, per Michal Hocko
* Make secretmem an opt-in feature that is disabled by default
 
v16: https://lore.kernel.org/lkml/20210121122723.3446-1-rppt@kernel.org
* Fix memory leak introduced in v15
* Clean the data left from previous page user before handing the page to
  the userspace

v15: https://lore.kernel.org/lkml/20210120180612.1058-1-rppt@kernel.org
* Add riscv/Kconfig update to disable set_memory operations for nommu
  builds (patch 3)
* Update the code around add_to_page_cache() per Matthew's comments
  (patches 6,7)
* Add fixups for build/checkpatch errors discovered by CI systems

v14: https://lore.kernel.org/lkml/20201203062949.5484-1-rppt@kernel.org
* Finally s/mod_node_page_state/mod_lruvec_page_state/

v13: https://lore.kernel.org/lkml/20201201074559.27742-1-rppt@kernel.org
* Added Reviewed-by, thanks Catalin and David
* s/mod_node_page_state/mod_lruvec_page_state/ as Shakeel suggested

Older history:
v12: https://lore.kernel.org/lkml/20201125092208.12544-1-rppt@kernel.org
v11: https://lore.kernel.org/lkml/20201124092556.12009-1-rppt@kernel.org
v10: https://lore.kernel.org/lkml/20201123095432.5860-1-rppt@kernel.org
v9: https://lore.kernel.org/lkml/20201117162932.13649-1-rppt@kernel.org
v8: https://lore.kernel.org/lkml/20201110151444.20662-1-rppt@kernel.org
v7: https://lore.kernel.org/lkml/20201026083752.13267-1-rppt@kernel.org
v6: https://lore.kernel.org/lkml/20200924132904.1391-1-rppt@kernel.org
v5: https://lore.kernel.org/lkml/20200916073539.3552-1-rppt@kernel.org
v4: https://lore.kernel.org/lkml/20200818141554.13945-1-rppt@kernel.org
v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org
rfc-v2: https://lore.kernel.org/lkml/20200706172051.19465-1-rppt@kernel.org/
rfc-v1: https://lore.kernel.org/lkml/20200130162340.GA14232@rapoport-lnx/
rfc-v0: https://lore.kernel.org/lkml/1572171452-7958-1-git-send-email-rppt@kernel.org/

Mike Rapoport (9):
  mm: add definition of PMD_PAGE_ORDER
  mmap: make mlock_future_check() global
  riscv/Kconfig: make direct map manipulation options depend on MMU
  set_memory: allow set_direct_map_*_noflush() for multiple pages
  set_memory: allow querying whether set_direct_map_*() is actually enabled
  mm: introduce memfd_secret system call to create "secret" memory areas
  PM: hibernate: disable when there are active secretmem users
  arch, mm: wire up memfd_secret system call where relevant
  secretmem: test: add basic selftest for memfd_secret(2)

 arch/arm64/include/asm/Kbuild             |   1 -
 arch/arm64/include/asm/cacheflush.h       |   6 -
 arch/arm64/include/asm/kfence.h           |   2 +-
 arch/arm64/include/asm/set_memory.h       |  17 ++
 arch/arm64/include/uapi/asm/unistd.h      |   1 +
 arch/arm64/kernel/machine_kexec.c         |   1 +
 arch/arm64/mm/mmu.c                       |   6 +-
 arch/arm64/mm/pageattr.c                  |  23 +-
 arch/riscv/Kconfig                        |   4 +-
 arch/riscv/include/asm/set_memory.h       |   4 +-
 arch/riscv/include/asm/unistd.h           |   1 +
 arch/riscv/mm/pageattr.c                  |   8 +-
 arch/x86/entry/syscalls/syscall_32.tbl    |   1 +
 arch/x86/entry/syscalls/syscall_64.tbl    |   1 +
 arch/x86/include/asm/set_memory.h         |   4 +-
 arch/x86/mm/pat/set_memory.c              |   8 +-
 fs/dax.c                                  |  11 +-
 include/linux/pgtable.h                   |   3 +
 include/linux/secretmem.h                 |  30 +++
 include/linux/set_memory.h                |  16 +-
 include/linux/syscalls.h                  |   1 +
 include/uapi/asm-generic/unistd.h         |   6 +-
 include/uapi/linux/magic.h                |   1 +
 kernel/power/hibernate.c                  |   5 +-
 kernel/power/snapshot.c                   |   4 +-
 kernel/sys_ni.c                           |   2 +
 mm/Kconfig                                |   3 +
 mm/Makefile                               |   1 +
 mm/gup.c                                  |  10 +
 mm/internal.h                             |   3 +
 mm/mlock.c                                |   3 +-
 mm/mmap.c                                 |   5 +-
 mm/secretmem.c                            | 261 +++++++++++++++++++
 mm/vmalloc.c                              |   5 +-
 scripts/checksyscalls.sh                  |   4 +
 tools/testing/selftests/vm/.gitignore     |   1 +
 tools/testing/selftests/vm/Makefile       |   3 +-
 tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++
 tools/testing/selftests/vm/run_vmtests.sh |  17 ++
 39 files changed, 726 insertions(+), 53 deletions(-)
 create mode 100644 arch/arm64/include/asm/set_memory.h
 create mode 100644 include/linux/secretmem.h
 create mode 100644 mm/secretmem.c
 create mode 100644 tools/testing/selftests/vm/memfd_secret.c

-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 1/9] mm: add definition of PMD_PAGE_ORDER
  2021-03-03 16:22 ` Mike Rapoport
@ 2021-03-03 16:22   ` Mike Rapoport
  -1 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

The definition of PMD_PAGE_ORDER, denoting the number of base pages in a
second-level leaf page, is already used by DAX and may be handy in other
cases as well.

Several architectures already define PMD_ORDER as the size of a
second-level page table, so to avoid conflicts with those definitions use
the name PMD_PAGE_ORDER and update DAX accordingly.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
---
 fs/dax.c                | 11 ++++-------
 include/linux/pgtable.h |  3 +++
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index b3d27fdc6775..12ff48bcee5b 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -49,9 +49,6 @@ static inline unsigned int pe_order(enum page_entry_size pe_size)
 #define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)
 #define PG_PMD_NR	(PMD_SIZE >> PAGE_SHIFT)
 
-/* The order of a PMD entry */
-#define PMD_ORDER	(PMD_SHIFT - PAGE_SHIFT)
-
 static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];
 
 static int __init init_dax_wait_table(void)
@@ -98,7 +95,7 @@ static bool dax_is_locked(void *entry)
 static unsigned int dax_entry_order(void *entry)
 {
 	if (xa_to_value(entry) & DAX_PMD)
-		return PMD_ORDER;
+		return PMD_PAGE_ORDER;
 	return 0;
 }
 
@@ -1471,7 +1468,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 {
 	struct vm_area_struct *vma = vmf->vma;
 	struct address_space *mapping = vma->vm_file->f_mapping;
-	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_ORDER);
+	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_PAGE_ORDER);
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool sync;
@@ -1530,7 +1527,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	 * entry is already in the array, for instance), it will return
 	 * VM_FAULT_FALLBACK.
 	 */
-	entry = grab_mapping_entry(&xas, mapping, PMD_ORDER);
+	entry = grab_mapping_entry(&xas, mapping, PMD_PAGE_ORDER);
 	if (xa_is_internal(entry)) {
 		result = xa_to_internal(entry);
 		goto fallback;
@@ -1696,7 +1693,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
 	if (order == 0)
 		ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
 #ifdef CONFIG_FS_DAX_PMD
-	else if (order == PMD_ORDER)
+	else if (order == PMD_PAGE_ORDER)
 		ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
 #endif
 	else
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index cdfc4e9f253e..3562cccf84ee 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -28,6 +28,9 @@
 #define USER_PGTABLES_CEILING	0UL
 #endif
 
+/* Number of base pages in a second level leaf page */
+#define PMD_PAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)
+
 /*
  * A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
  *
-- 
2.28.0
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 2/9] mmap: make mlock_future_check() global
  2021-03-03 16:22 ` Mike Rapoport
@ 2021-03-03 16:22   ` Mike Rapoport
  -1 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

It will be used by the upcoming secret memory implementation.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 mm/internal.h | 3 +++
 mm/mmap.c     | 5 ++---
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 9902648f2206..8e9c660f33ca 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -353,6 +353,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
 extern void mlock_vma_page(struct page *page);
 extern unsigned int munlock_vma_page(struct page *page);
 
+extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+			      unsigned long len);
+
 /*
  * Clear the page's PageMlocked().  This can be useful in a situation where
  * we want to unconditionally remove a page from the pagecache -- e.g.,
diff --git a/mm/mmap.c b/mm/mmap.c
index 3f287599a7a3..f989aa170de4 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1346,9 +1346,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
 	return hint;
 }
 
-static inline int mlock_future_check(struct mm_struct *mm,
-				     unsigned long flags,
-				     unsigned long len)
+int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+		       unsigned long len)
 {
 	unsigned long locked, lock_limit;
 
-- 
2.28.0

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 3/9] riscv/Kconfig: make direct map manipulation options depend on MMU
  2021-03-03 16:22 ` Mike Rapoport
  (?)
  (?)
@ 2021-03-03 16:22   ` Mike Rapoport
  -1 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	kernel test robot

From: Mike Rapoport <rppt@linux.ibm.com>

The ARCH_HAS_SET_DIRECT_MAP and ARCH_HAS_SET_MEMORY configuration options
have no meaning when CONFIG_MMU is disabled, and there is no point in
enabling them for the nommu case.

Add an explicit dependency on MMU for these options.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reported-by: kernel test robot <lkp@intel.com>
---
 arch/riscv/Kconfig | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 85d626b8ce5e..87ccc29fc31d 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -25,8 +25,8 @@ config RISCV
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_MMIOWB
 	select ARCH_HAS_PTE_SPECIAL
-	select ARCH_HAS_SET_DIRECT_MAP
-	select ARCH_HAS_SET_MEMORY
+	select ARCH_HAS_SET_DIRECT_MAP if MMU
+	select ARCH_HAS_SET_MEMORY if MMU
 	select ARCH_HAS_STRICT_KERNEL_RWX if MMU
 	select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
 	select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
-- 
2.28.0
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 4/9] set_memory: allow set_direct_map_*_noflush() for multiple pages
  2021-03-03 16:22 ` Mike Rapoport
  (?)
  (?)
@ 2021-03-03 16:22   ` Mike Rapoport
  -1 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

The underlying implementations of set_direct_map_invalid_noflush() and
set_direct_map_default_noflush() allow updating multiple contiguous pages
at once.

Add a numpages parameter to set_direct_map_*_noflush() to expose this
capability through these APIs.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cacheflush.h |  4 ++--
 arch/arm64/mm/pageattr.c            | 10 ++++++----
 arch/riscv/include/asm/set_memory.h |  4 ++--
 arch/riscv/mm/pageattr.c            |  8 ++++----
 arch/x86/include/asm/set_memory.h   |  4 ++--
 arch/x86/mm/pat/set_memory.c        |  8 ++++----
 include/linux/set_memory.h          |  4 ++--
 kernel/power/snapshot.c             |  4 ++--
 mm/vmalloc.c                        |  5 +++--
 9 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 52e5c1623224..ace2c3d7ae7e 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -133,8 +133,8 @@ static __always_inline void __flush_icache_all(void)
 
 int set_memory_valid(unsigned long addr, int numpages, int enable);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 #include <asm-generic/cacheflush.h>
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 92eccaf595c8..b53ef37bf95a 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,34 +148,36 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
 					__pgprot(PTE_VALID));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	struct page_change_data data = {
 		.set_mask = __pgprot(0),
 		.clear_mask = __pgprot(PTE_VALID),
 	};
+	unsigned long size = PAGE_SIZE * numpages;
 
 	if (!debug_pagealloc_enabled() && !rodata_full)
 		return 0;
 
 	return apply_to_page_range(&init_mm,
 				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+				   size, change_page_range, &data);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	struct page_change_data data = {
 		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
 		.clear_mask = __pgprot(PTE_RDONLY),
 	};
+	unsigned long size = PAGE_SIZE * numpages;
 
 	if (!debug_pagealloc_enabled() && !rodata_full)
 		return 0;
 
 	return apply_to_page_range(&init_mm,
 				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+				   size, change_page_range, &data);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index 6887b3d9f371..018e26732940 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -26,8 +26,8 @@ static inline void protect_kernel_text_data(void) {}
 static inline int set_memory_rw_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 5e49e4b4a4cc..9618181b70be 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -156,11 +156,11 @@ int set_memory_nx(unsigned long addr, int numpages)
 	return __set_memory(addr, numpages, __pgprot(0), __pgprot(_PAGE_EXEC));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	int ret;
 	unsigned long start = (unsigned long)page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + PAGE_SIZE * numpages;
 	struct pageattr_masks masks = {
 		.set_mask = __pgprot(0),
 		.clear_mask = __pgprot(_PAGE_PRESENT)
@@ -173,11 +173,11 @@ int set_direct_map_invalid_noflush(struct page *page)
 	return ret;
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	int ret;
 	unsigned long start = (unsigned long)page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + PAGE_SIZE * numpages;
 	struct pageattr_masks masks = {
 		.set_mask = PAGE_KERNEL,
 		.clear_mask = __pgprot(0)
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 4352f08bfbb5..6224cb291f6c 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -80,8 +80,8 @@ int set_pages_wb(struct page *page, int numpages);
 int set_pages_ro(struct page *page, int numpages);
 int set_pages_rw(struct page *page, int numpages);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 16f878c26667..d157fd617c99 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2184,14 +2184,14 @@ static int __set_pages_np(struct page *page, int numpages)
 	return __change_page_attr_set_clr(&cpa, 0);
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
-	return __set_pages_np(page, 1);
+	return __set_pages_np(page, numpages);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
-	return __set_pages_p(page, 1);
+	return __set_pages_p(page, numpages);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index fe1aa4e54680..c650f82db813 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -15,11 +15,11 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
 #ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
-static inline int set_direct_map_invalid_noflush(struct page *page)
+static inline int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	return 0;
 }
-static inline int set_direct_map_default_noflush(struct page *page)
+static inline int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	return 0;
 }
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index d63560e1cf87..64b7aab9aee4 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -86,7 +86,7 @@ static inline void hibernate_restore_unprotect_page(void *page_address) {}
 static inline void hibernate_map_page(struct page *page)
 {
 	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
-		int ret = set_direct_map_default_noflush(page);
+		int ret = set_direct_map_default_noflush(page, 1);
 
 		if (ret)
 			pr_warn_once("Failed to remap page\n");
@@ -99,7 +99,7 @@ static inline void hibernate_unmap_page(struct page *page)
 {
 	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
 		unsigned long addr = (unsigned long)page_address(page);
-		int ret  = set_direct_map_invalid_noflush(page);
+		int ret = set_direct_map_invalid_noflush(page, 1);
 
 		if (ret)
 			pr_warn_once("Failed to remap page\n");
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4f5f8c907897..8ab83fbecadd 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2195,13 +2195,14 @@ struct vm_struct *remove_vm_area(const void *addr)
 }
 
 static inline void set_area_direct_map(const struct vm_struct *area,
-				       int (*set_direct_map)(struct page *page))
+				       int (*set_direct_map)(struct page *page,
+							     int numpages))
 {
 	int i;
 
 	for (i = 0; i < area->nr_pages; i++)
 		if (page_address(area->pages[i]))
-			set_direct_map(area->pages[i]);
+			set_direct_map(area->pages[i], 1);
 }
 
 /* Handle removing and resetting vm mappings related to the vm_struct. */
-- 
2.28.0

^ permalink raw reply	[flat|nested] 84+ messages in thread

+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	int ret;
 	unsigned long start = (unsigned long)page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + PAGE_SIZE * numpages;
 	struct pageattr_masks masks = {
 		.set_mask = __pgprot(0),
 		.clear_mask = __pgprot(_PAGE_PRESENT)
@@ -173,11 +173,11 @@ int set_direct_map_invalid_noflush(struct page *page)
 	return ret;
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	int ret;
 	unsigned long start = (unsigned long)page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + PAGE_SIZE * numpages;
 	struct pageattr_masks masks = {
 		.set_mask = PAGE_KERNEL,
 		.clear_mask = __pgprot(0)
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 4352f08bfbb5..6224cb291f6c 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -80,8 +80,8 @@ int set_pages_wb(struct page *page, int numpages);
 int set_pages_ro(struct page *page, int numpages);
 int set_pages_rw(struct page *page, int numpages);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 16f878c26667..d157fd617c99 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2184,14 +2184,14 @@ static int __set_pages_np(struct page *page, int numpages)
 	return __change_page_attr_set_clr(&cpa, 0);
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
-	return __set_pages_np(page, 1);
+	return __set_pages_np(page, numpages);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
-	return __set_pages_p(page, 1);
+	return __set_pages_p(page, numpages);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index fe1aa4e54680..c650f82db813 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -15,11 +15,11 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
 #ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
-static inline int set_direct_map_invalid_noflush(struct page *page)
+static inline int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	return 0;
 }
-static inline int set_direct_map_default_noflush(struct page *page)
+static inline int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	return 0;
 }
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index d63560e1cf87..64b7aab9aee4 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -86,7 +86,7 @@ static inline void hibernate_restore_unprotect_page(void *page_address) {}
 static inline void hibernate_map_page(struct page *page)
 {
 	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
-		int ret = set_direct_map_default_noflush(page);
+		int ret = set_direct_map_default_noflush(page, 1);
 
 		if (ret)
 			pr_warn_once("Failed to remap page\n");
@@ -99,7 +99,7 @@ static inline void hibernate_unmap_page(struct page *page)
 {
 	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
 		unsigned long addr = (unsigned long)page_address(page);
-		int ret  = set_direct_map_invalid_noflush(page);
+		int ret = set_direct_map_invalid_noflush(page, 1);
 
 		if (ret)
 			pr_warn_once("Failed to remap page\n");
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4f5f8c907897..8ab83fbecadd 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2195,13 +2195,14 @@ struct vm_struct *remove_vm_area(const void *addr)
 }
 
 static inline void set_area_direct_map(const struct vm_struct *area,
-				       int (*set_direct_map)(struct page *page))
+				       int (*set_direct_map)(struct page *page,
+							     int numpages))
 {
 	int i;
 
 	for (i = 0; i < area->nr_pages; i++)
 		if (page_address(area->pages[i]))
-			set_direct_map(area->pages[i]);
+			set_direct_map(area->pages[i], 1);
 }
 
 /* Handle removing and resetting vm mappings related to the vm_struct. */
-- 
2.28.0


^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 4/9] set_memory: allow set_direct_map_*_noflush() for multiple pages
@ 2021-03-03 16:22   ` Mike Rapoport
  0 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

The underlying implementations of set_direct_map_invalid_noflush() and
set_direct_map_default_noflush() allow updating multiple contiguous pages
at once.

Add a numpages parameter to set_direct_map_*_noflush() to expose this
ability through these APIs.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>	[arm64]
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/cacheflush.h |  4 ++--
 arch/arm64/mm/pageattr.c            | 10 ++++++----
 arch/riscv/include/asm/set_memory.h |  4 ++--
 arch/riscv/mm/pageattr.c            |  8 ++++----
 arch/x86/include/asm/set_memory.h   |  4 ++--
 arch/x86/mm/pat/set_memory.c        |  8 ++++----
 include/linux/set_memory.h          |  4 ++--
 kernel/power/snapshot.c             |  4 ++--
 mm/vmalloc.c                        |  5 +++--
 9 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 52e5c1623224..ace2c3d7ae7e 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -133,8 +133,8 @@ static __always_inline void __flush_icache_all(void)
 
 int set_memory_valid(unsigned long addr, int numpages, int enable);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 #include <asm-generic/cacheflush.h>
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 92eccaf595c8..b53ef37bf95a 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,34 +148,36 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
 					__pgprot(PTE_VALID));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	struct page_change_data data = {
 		.set_mask = __pgprot(0),
 		.clear_mask = __pgprot(PTE_VALID),
 	};
+	unsigned long size = PAGE_SIZE * numpages;
 
 	if (!debug_pagealloc_enabled() && !rodata_full)
 		return 0;
 
 	return apply_to_page_range(&init_mm,
 				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+				   size, change_page_range, &data);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	struct page_change_data data = {
 		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
 		.clear_mask = __pgprot(PTE_RDONLY),
 	};
+	unsigned long size = PAGE_SIZE * numpages;
 
 	if (!debug_pagealloc_enabled() && !rodata_full)
 		return 0;
 
 	return apply_to_page_range(&init_mm,
 				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+				   size, change_page_range, &data);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index 6887b3d9f371..018e26732940 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -26,8 +26,8 @@ static inline void protect_kernel_text_data(void) {}
 static inline int set_memory_rw_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 5e49e4b4a4cc..9618181b70be 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -156,11 +156,11 @@ int set_memory_nx(unsigned long addr, int numpages)
 	return __set_memory(addr, numpages, __pgprot(0), __pgprot(_PAGE_EXEC));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	int ret;
 	unsigned long start = (unsigned long)page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + PAGE_SIZE * numpages;
 	struct pageattr_masks masks = {
 		.set_mask = __pgprot(0),
 		.clear_mask = __pgprot(_PAGE_PRESENT)
@@ -173,11 +173,11 @@ int set_direct_map_invalid_noflush(struct page *page)
 	return ret;
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	int ret;
 	unsigned long start = (unsigned long)page_address(page);
-	unsigned long end = start + PAGE_SIZE;
+	unsigned long end = start + PAGE_SIZE * numpages;
 	struct pageattr_masks masks = {
 		.set_mask = PAGE_KERNEL,
 		.clear_mask = __pgprot(0)
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 4352f08bfbb5..6224cb291f6c 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -80,8 +80,8 @@ int set_pages_wb(struct page *page, int numpages);
 int set_pages_ro(struct page *page, int numpages);
 int set_pages_rw(struct page *page, int numpages);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
 bool kernel_page_present(struct page *page);
 
 extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 16f878c26667..d157fd617c99 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2184,14 +2184,14 @@ static int __set_pages_np(struct page *page, int numpages)
 	return __change_page_attr_set_clr(&cpa, 0);
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
-	return __set_pages_np(page, 1);
+	return __set_pages_np(page, numpages);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
 {
-	return __set_pages_p(page, 1);
+	return __set_pages_p(page, numpages);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index fe1aa4e54680..c650f82db813 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -15,11 +15,11 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
 #endif
 
 #ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
-static inline int set_direct_map_invalid_noflush(struct page *page)
+static inline int set_direct_map_invalid_noflush(struct page *page, int numpages)
 {
 	return 0;
 }
-static inline int set_direct_map_default_noflush(struct page *page)
+static inline int set_direct_map_default_noflush(struct page *page, int numpages)
 {
 	return 0;
 }
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index d63560e1cf87..64b7aab9aee4 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -86,7 +86,7 @@ static inline void hibernate_restore_unprotect_page(void *page_address) {}
 static inline void hibernate_map_page(struct page *page)
 {
 	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
-		int ret = set_direct_map_default_noflush(page);
+		int ret = set_direct_map_default_noflush(page, 1);
 
 		if (ret)
 			pr_warn_once("Failed to remap page\n");
@@ -99,7 +99,7 @@ static inline void hibernate_unmap_page(struct page *page)
 {
 	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
 		unsigned long addr = (unsigned long)page_address(page);
-		int ret  = set_direct_map_invalid_noflush(page);
+		int ret = set_direct_map_invalid_noflush(page, 1);
 
 		if (ret)
 			pr_warn_once("Failed to remap page\n");
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 4f5f8c907897..8ab83fbecadd 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2195,13 +2195,14 @@ struct vm_struct *remove_vm_area(const void *addr)
 }
 
 static inline void set_area_direct_map(const struct vm_struct *area,
-				       int (*set_direct_map)(struct page *page))
+				       int (*set_direct_map)(struct page *page,
+							     int numpages))
 {
 	int i;
 
 	for (i = 0; i < area->nr_pages; i++)
 		if (page_address(area->pages[i]))
-			set_direct_map(area->pages[i]);
+			set_direct_map(area->pages[i], 1);
 }
 
 /* Handle removing and resetting vm mappings related to the vm_struct. */
-- 
2.28.0


_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 84+ messages in thread


* [PATCH v18 5/9] set_memory: allow querying whether set_direct_map_*() is actually enabled
  2021-03-03 16:22 ` Mike Rapoport
  (?)
  (?)
@ 2021-03-03 16:22   ` Mike Rapoport
  -1 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

On arm64, set_direct_map_*() functions may return 0 without actually
changing the linear map.  This behaviour can be controlled using kernel
parameters, so we need a way to determine at runtime whether calls to
set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have
any effect.

Extend the set_memory API with a can_set_direct_map() function that allows
checking whether calls to set_direct_map_*() will actually change the page
table. Replace several occurrences of open-coded checks in arm64 with the
new function, and provide a generic stub for architectures that always
modify page tables upon calls to the set_direct_map APIs.

[arnd@arndb.de: arm64: kfence: fix header inclusion]

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/Kbuild       |  1 -
 arch/arm64/include/asm/cacheflush.h |  6 ------
 arch/arm64/include/asm/kfence.h     |  2 +-
 arch/arm64/include/asm/set_memory.h | 17 +++++++++++++++++
 arch/arm64/kernel/machine_kexec.c   |  1 +
 arch/arm64/mm/mmu.c                 |  6 +++---
 arch/arm64/mm/pageattr.c            | 13 +++++++++----
 include/linux/set_memory.h          | 12 ++++++++++++
 8 files changed, 43 insertions(+), 15 deletions(-)
 create mode 100644 arch/arm64/include/asm/set_memory.h

diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index 07ac208edc89..73aa25843f65 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -3,5 +3,4 @@ generic-y += early_ioremap.h
 generic-y += mcs_spinlock.h
 generic-y += qrwlock.h
 generic-y += qspinlock.h
-generic-y += set_memory.h
 generic-y += user.h
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index ace2c3d7ae7e..4e3c13799735 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -131,12 +131,6 @@ static __always_inline void __flush_icache_all(void)
 	dsb(ish);
 }
 
-int set_memory_valid(unsigned long addr, int numpages, int enable);
-
-int set_direct_map_invalid_noflush(struct page *page, int numpages);
-int set_direct_map_default_noflush(struct page *page, int numpages);
-bool kernel_page_present(struct page *page);
-
 #include <asm-generic/cacheflush.h>
 
 #endif /* __ASM_CACHEFLUSH_H */
diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
index d061176d57ea..aa855c6a0ae6 100644
--- a/arch/arm64/include/asm/kfence.h
+++ b/arch/arm64/include/asm/kfence.h
@@ -8,7 +8,7 @@
 #ifndef __ASM_KFENCE_H
 #define __ASM_KFENCE_H
 
-#include <asm/cacheflush.h>
+#include <asm/set_memory.h>
 
 static inline bool arch_kfence_init_pool(void) { return true; }
 
diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h
new file mode 100644
index 000000000000..ecb6b0f449ab
--- /dev/null
+++ b/arch/arm64/include/asm/set_memory.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _ASM_ARM64_SET_MEMORY_H
+#define _ASM_ARM64_SET_MEMORY_H
+
+#include <asm-generic/set_memory.h>
+
+bool can_set_direct_map(void);
+#define can_set_direct_map can_set_direct_map
+
+int set_memory_valid(unsigned long addr, int numpages, int enable);
+
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
+bool kernel_page_present(struct page *page);
+
+#endif /* _ASM_ARM64_SET_MEMORY_H */
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 90a335c74442..0ec94e718724 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -11,6 +11,7 @@
 #include <linux/kernel.h>
 #include <linux/kexec.h>
 #include <linux/page-flags.h>
+#include <linux/set_memory.h>
 #include <linux/smp.h>
 
 #include <asm/cacheflush.h>
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 3802cfbdd20d..9243ea9f4e9f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -22,6 +22,7 @@
 #include <linux/io.h>
 #include <linux/mm.h>
 #include <linux/vmalloc.h>
+#include <linux/set_memory.h>
 
 #include <asm/barrier.h>
 #include <asm/cputype.h>
@@ -492,7 +493,7 @@ static void __init map_mem(pgd_t *pgdp)
 	int flags = 0;
 	u64 i;
 
-	if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
+	if (can_set_direct_map() || crash_mem_map)
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
@@ -1470,8 +1471,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	 * KFENCE requires linear map to be mapped at page granularity, so that
 	 * it is possible to protect/unprotect single pages in the KFENCE pool.
 	 */
-	if (rodata_full || debug_pagealloc_enabled() ||
-	    IS_ENABLED(CONFIG_KFENCE))
+	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index b53ef37bf95a..d505172265b0 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -19,6 +19,11 @@ struct page_change_data {
 
 bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);
 
+bool can_set_direct_map(void)
+{
+	return rodata_full || debug_pagealloc_enabled();
+}
+
 static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
 {
 	struct page_change_data *cdata = data;
@@ -156,7 +161,7 @@ int set_direct_map_invalid_noflush(struct page *page, int numpages)
 	};
 	unsigned long size = PAGE_SIZE * numpages;
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -172,7 +177,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
 	};
 	unsigned long size = PAGE_SIZE * numpages;
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -183,7 +188,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return;
 
 	set_memory_valid((unsigned long)page_address(page), numpages, enable);
@@ -208,7 +213,7 @@ bool kernel_page_present(struct page *page)
 	pte_t *ptep;
 	unsigned long addr = (unsigned long)page_address(page);
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return true;
 
 	pgdp = pgd_offset_k(addr);
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index c650f82db813..7b4b6626032d 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -28,7 +28,19 @@ static inline bool kernel_page_present(struct page *page)
 {
 	return true;
 }
+#else /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+/*
+ * Some architectures, e.g. ARM64, can disable direct map modifications at
+ * boot time. Let them override this query.
+ */
+#ifndef can_set_direct_map
+static inline bool can_set_direct_map(void)
+{
+	return true;
+}
+#define can_set_direct_map can_set_direct_map
 #endif
+#endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 
 #ifndef set_mce_nospec
 static inline int set_mce_nospec(unsigned long pfn, bool unmap)
-- 
2.28.0

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 5/9] set_memory: allow querying whether set_direct_map_*() is actually enabled
@ 2021-03-03 16:22   ` Mike Rapoport
  0 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

On arm64, set_direct_map_*() functions may return 0 without actually
changing the linear map.  This behaviour can be controlled using kernel
parameters, so we need a way to determine at runtime whether calls to
set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have
any effect.

Extend the set_memory API with a can_set_direct_map() function that checks
whether calling set_direct_map_*() will actually change the page table,
replace several occurrences of open-coded checks in arm64 with the new
function, and provide a generic stub for architectures that always modify
page tables upon calls to the set_direct_map APIs.

[arnd@arndb.de: arm64: kfence: fix header inclusion]

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/Kbuild       |  1 -
 arch/arm64/include/asm/cacheflush.h |  6 ------
 arch/arm64/include/asm/kfence.h     |  2 +-
 arch/arm64/include/asm/set_memory.h | 17 +++++++++++++++++
 arch/arm64/kernel/machine_kexec.c   |  1 +
 arch/arm64/mm/mmu.c                 |  6 +++---
 arch/arm64/mm/pageattr.c            | 13 +++++++++----
 include/linux/set_memory.h          | 12 ++++++++++++
 8 files changed, 43 insertions(+), 15 deletions(-)
 create mode 100644 arch/arm64/include/asm/set_memory.h

diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index 07ac208edc89..73aa25843f65 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -3,5 +3,4 @@ generic-y += early_ioremap.h
 generic-y += mcs_spinlock.h
 generic-y += qrwlock.h
 generic-y += qspinlock.h
-generic-y += set_memory.h
 generic-y += user.h
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index ace2c3d7ae7e..4e3c13799735 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -131,12 +131,6 @@ static __always_inline void __flush_icache_all(void)
 	dsb(ish);
 }
 
-int set_memory_valid(unsigned long addr, int numpages, int enable);
-
-int set_direct_map_invalid_noflush(struct page *page, int numpages);
-int set_direct_map_default_noflush(struct page *page, int numpages);
-bool kernel_page_present(struct page *page);
-
 #include <asm-generic/cacheflush.h>
 
 #endif /* __ASM_CACHEFLUSH_H */
diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
index d061176d57ea..aa855c6a0ae6 100644
--- a/arch/arm64/include/asm/kfence.h
+++ b/arch/arm64/include/asm/kfence.h
@@ -8,7 +8,7 @@
 #ifndef __ASM_KFENCE_H
 #define __ASM_KFENCE_H
 
-#include <asm/cacheflush.h>
+#include <asm/set_memory.h>
 
 static inline bool arch_kfence_init_pool(void) { return true; }
 
diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h
new file mode 100644
index 000000000000..ecb6b0f449ab
--- /dev/null
+++ b/arch/arm64/include/asm/set_memory.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _ASM_ARM64_SET_MEMORY_H
+#define _ASM_ARM64_SET_MEMORY_H
+
+#include <asm-generic/set_memory.h>
+
+bool can_set_direct_map(void);
+#define can_set_direct_map can_set_direct_map
+
+int set_memory_valid(unsigned long addr, int numpages, int enable);
+
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
+bool kernel_page_present(struct page *page);
+
+#endif /* _ASM_ARM64_SET_MEMORY_H */
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 90a335c74442..0ec94e718724 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -11,6 +11,7 @@
 #include <linux/kernel.h>
 #include <linux/kexec.h>
 #include <linux/page-flags.h>
+#include <linux/set_memory.h>
 #include <linux/smp.h>
 
 #include <asm/cacheflush.h>
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 3802cfbdd20d..9243ea9f4e9f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -22,6 +22,7 @@
 #include <linux/io.h>
 #include <linux/mm.h>
 #include <linux/vmalloc.h>
+#include <linux/set_memory.h>
 
 #include <asm/barrier.h>
 #include <asm/cputype.h>
@@ -492,7 +493,7 @@ static void __init map_mem(pgd_t *pgdp)
 	int flags = 0;
 	u64 i;
 
-	if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
+	if (can_set_direct_map() || crash_mem_map)
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
@@ -1470,8 +1471,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	 * KFENCE requires linear map to be mapped at page granularity, so that
 	 * it is possible to protect/unprotect single pages in the KFENCE pool.
 	 */
-	if (rodata_full || debug_pagealloc_enabled() ||
-	    IS_ENABLED(CONFIG_KFENCE))
+	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index b53ef37bf95a..d505172265b0 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -19,6 +19,11 @@ struct page_change_data {
 
 bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);
 
+bool can_set_direct_map(void)
+{
+	return rodata_full || debug_pagealloc_enabled();
+}
+
 static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
 {
 	struct page_change_data *cdata = data;
@@ -156,7 +161,7 @@ int set_direct_map_invalid_noflush(struct page *page, int numpages)
 	};
 	unsigned long size = PAGE_SIZE * numpages;
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -172,7 +177,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
 	};
 	unsigned long size = PAGE_SIZE * numpages;
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return 0;
 
 	return apply_to_page_range(&init_mm,
@@ -183,7 +188,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return;
 
 	set_memory_valid((unsigned long)page_address(page), numpages, enable);
@@ -208,7 +213,7 @@ bool kernel_page_present(struct page *page)
 	pte_t *ptep;
 	unsigned long addr = (unsigned long)page_address(page);
 
-	if (!debug_pagealloc_enabled() && !rodata_full)
+	if (!can_set_direct_map())
 		return true;
 
 	pgdp = pgd_offset_k(addr);
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index c650f82db813..7b4b6626032d 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -28,7 +28,19 @@ static inline bool kernel_page_present(struct page *page)
 {
 	return true;
 }
+#else /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+/*
+ * Some architectures, e.g. arm64, can disable direct map modifications at
+ * boot time. Let them override this query.
+ */
+#ifndef can_set_direct_map
+static inline bool can_set_direct_map(void)
+{
+	return true;
+}
+#define can_set_direct_map can_set_direct_map
 #endif
+#endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 
 #ifndef set_mce_nospec
 static inline int set_mce_nospec(unsigned long pfn, bool unmap)
-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 6/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-03-03 16:22 ` Mike Rapoport
@ 2021-03-03 16:22   ` Mike Rapoport
  -1 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

Introduce the "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and mapped neither
to other processes nor in the kernel page tables.

The secretmem feature is off by default and the user must explicitly enable
it at boot time.

Once secretmem is enabled, the user will be able to create a file
descriptor using the memfd_secret() system call. The memory areas created
by mmap() calls on this file descriptor will be unmapped from the kernel
direct map and will be mapped only in the page tables of the processes that
have access to the file descriptor.

The file descriptor based memory has several advantages over the
"traditional" mm interfaces, such as mlock(), mprotect() and madvise(). The
file descriptor approach allows explicit and controlled sharing of the
memory areas and allows sealing of the operations. Besides, file descriptor
based memory paves the way for VMMs to remove the secret memory range from
the userspace hypervisor process, for instance QEMU. Andy Lutomirski says:

  "Getting fd-backed memory into a guest will take some possibly major work
   in the kernel, but getting vma-backed memory into a guest without
   mapping it in the host user address space seems much, much worse."

memfd_secret() is made a dedicated system call rather than an extension to
memfd_create() because its purpose is to allow the user to create more
secure memory mappings rather than to simply allow file based access to the
memory. Nowadays the cost of a new system call is negligible, while it is
way simpler for userspace to deal with a clear-cut system call than with a
multiplexer or an overloaded syscall. Moreover, the initial implementation
of memfd_secret() is completely distinct from memfd_create(), so there is
not much sense in overloading memfd_create() to begin with. If a need for
code sharing between these implementations arises, it can easily be
achieved without adjusting the user visible APIs.

The secret memory remains accessible in the process context using uaccess
primitives, but it is not exposed to the kernel otherwise; secret memory
areas are removed from the direct map and functions in the
follow_page()/get_user_page() family will refuse to return a page that
belongs to the secret memory area.

Once there is a use case that requires exposing secretmem to the kernel,
it will be an opt-in request in the system call flags, so that the user
will have to decide what data can be exposed to the kernel.

Removing pages from the direct map may cause its fragmentation on
architectures that use large pages to map the physical memory, which
affects the system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to have
secretmem disabled by default with the ability of a system administrator to
enable it at boot time.

Pages in the secretmem regions are unevictable and unmovable to avoid
accidental exposure of the sensitive data via swap or during page
migration.

Since the secretmem mappings are locked in memory, they cannot exceed
RLIMIT_MEMLOCK. Because these mappings are already locked independently of
mlock(), an attempt to mlock()/munlock() a secretmem range will fail and
mlockall()/munlockall() will ignore secretmem mappings.

However, unlike mlock()ed memory, secretmem currently behaves more like
long-term GUP: secretmem mappings are unmovable mappings directly consumed
by user space. With default limits, there is no excessive use of secretmem
and it poses no real problem in combination with ZONE_MOVABLE/CMA, but in
the future this should be addressed to allow balanced use of large amounts
of secretmem along with ZONE_MOVABLE/CMA.

A page that was a part of the secret memory area is cleared when it is
freed to ensure the data is not exposed to the next user of that page.

The following example demonstrates creation of a secret mapping (error
handling is omitted):

	fd = memfd_secret(0);
	ftruncate(fd, MAP_SIZE);
	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
		   MAP_SHARED, fd, 0);

[1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 include/linux/secretmem.h  |  24 ++++
 include/uapi/linux/magic.h |   1 +
 kernel/sys_ni.c            |   2 +
 mm/Kconfig                 |   3 +
 mm/Makefile                |   1 +
 mm/gup.c                   |  10 ++
 mm/mlock.c                 |   3 +-
 mm/secretmem.c             | 246 +++++++++++++++++++++++++++++++++++++
 8 files changed, 289 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/secretmem.h
 create mode 100644 mm/secretmem.c

diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
new file mode 100644
index 000000000000..70e7db9f94fe
--- /dev/null
+++ b/include/linux/secretmem.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_SECRETMEM_H
+#define _LINUX_SECRETMEM_H
+
+#ifdef CONFIG_SECRETMEM
+
+bool vma_is_secretmem(struct vm_area_struct *vma);
+bool page_is_secretmem(struct page *page);
+
+#else
+
+static inline bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+	return false;
+}
+
+static inline bool page_is_secretmem(struct page *page)
+{
+	return false;
+}
+
+#endif /* CONFIG_SECRETMEM */
+
+#endif /* _LINUX_SECRETMEM_H */
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index f3956fc11de6..35687dcb1a42 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -97,5 +97,6 @@
 #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
 #define Z3FOLD_MAGIC		0x33
 #define PPC_CMM_MAGIC		0xc7571590
+#define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 19aa806890d5..e9a2011ee4a2 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -352,6 +352,8 @@ COND_SYSCALL(pkey_mprotect);
 COND_SYSCALL(pkey_alloc);
 COND_SYSCALL(pkey_free);
 
+/* memfd_secret */
+COND_SYSCALL(memfd_secret);
 
 /*
  * Architecture specific weak syscall entries.
diff --git a/mm/Kconfig b/mm/Kconfig
index 24c045b24b95..5f8243442f66 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -872,4 +872,7 @@ config MAPPING_DIRTY_HELPERS
 config KMAP_LOCAL
 	bool
 
+config SECRETMEM
+	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 72227b24a616..b2a564eec27f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -120,3 +120,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
 obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
 obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
 obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_SECRETMEM) += secretmem.o
diff --git a/mm/gup.c b/mm/gup.c
index e40579624f10..ecadc80934b2 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -10,6 +10,7 @@
 #include <linux/rmap.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/secretmem.h>
 
 #include <linux/sched/signal.h>
 #include <linux/rwsem.h>
@@ -758,6 +759,9 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 	struct follow_page_context ctx = { NULL };
 	struct page *page;
 
+	if (vma_is_secretmem(vma))
+		return NULL;
+
 	page = follow_page_mask(vma, address, foll_flags, &ctx);
 	if (ctx.pgmap)
 		put_dev_pagemap(ctx.pgmap);
@@ -891,6 +895,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
 		return -EOPNOTSUPP;
 
+	if (vma_is_secretmem(vma))
+		return -EFAULT;
+
 	if (write) {
 		if (!(vm_flags & VM_WRITE)) {
 			if (!(gup_flags & FOLL_FORCE))
@@ -2030,6 +2037,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
+		if (page_is_secretmem(page))
+			goto pte_unmap;
+
 		head = try_grab_compound_head(page, 1, flags);
 		if (!head)
 			goto pte_unmap;
diff --git a/mm/mlock.c b/mm/mlock.c
index f8f8cc32d03d..188711c72b67 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -23,6 +23,7 @@
 #include <linux/hugetlb.h>
 #include <linux/memcontrol.h>
 #include <linux/mm_inline.h>
+#include <linux/secretmem.h>
 
 #include "internal.h"
 
@@ -503,7 +504,7 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
 
 	if (newflags == vma->vm_flags || (vma->vm_flags & VM_SPECIAL) ||
 	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
-	    vma_is_dax(vma))
+	    vma_is_dax(vma) || vma_is_secretmem(vma))
 		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
 		goto out;
 
diff --git a/mm/secretmem.c b/mm/secretmem.c
new file mode 100644
index 000000000000..fa6738e860c2
--- /dev/null
+++ b/mm/secretmem.c
@@ -0,0 +1,246 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2021
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/swap.h>
+#include <linux/mount.h>
+#include <linux/memfd.h>
+#include <linux/bitops.h>
+#include <linux/printk.h>
+#include <linux/pagemap.h>
+#include <linux/syscalls.h>
+#include <linux/pseudo_fs.h>
+#include <linux/secretmem.h>
+#include <linux/set_memory.h>
+#include <linux/sched/signal.h>
+
+#include <uapi/linux/magic.h>
+
+#include <asm/tlbflush.h>
+
+#include "internal.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "secretmem: " fmt
+
+/*
+ * Define mode and flag masks to allow validation of the system call
+ * parameters.
+ */
+#define SECRETMEM_MODE_MASK	(0x0)
+#define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
+
+static bool secretmem_enable __ro_after_init;
+module_param_named(enable, secretmem_enable, bool, 0400);
+MODULE_PARM_DESC(secretmem_enable,
+		 "Enable secretmem and memfd_secret(2) system call");
+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t offset = vmf->pgoff;
+	gfp_t gfp = vmf->gfp_mask;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+		return vmf_error(-EINVAL);
+
+retry:
+	page = find_lock_page(mapping, offset);
+	if (!page) {
+		page = alloc_page(gfp | __GFP_ZERO);
+		if (!page)
+			return VM_FAULT_OOM;
+
+		err = set_direct_map_invalid_noflush(page, 1);
+		if (err) {
+			put_page(page);
+			return vmf_error(err);
+		}
+
+		__SetPageUptodate(page);
+		err = add_to_page_cache_lru(page, mapping, offset, gfp);
+		if (unlikely(err)) {
+			put_page(page);
+			/*
+			 * If a split of large page was required, it
+			 * already happened when we marked the page invalid
+			 * which guarantees that this call won't fail
+			 */
+			set_direct_map_default_noflush(page, 1);
+			if (err == -EEXIST)
+				goto retry;
+
+			return vmf_error(err);
+		}
+
+		addr = (unsigned long)page_address(page);
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+	}
+
+	vmf->page = page;
+	return VM_FAULT_LOCKED;
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+	.fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	unsigned long len = vma->vm_end - vma->vm_start;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+		return -EINVAL;
+
+	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+		return -EAGAIN;
+
+	vma->vm_flags |= VM_LOCKED | VM_DONTDUMP;
+	vma->vm_ops = &secretmem_vm_ops;
+
+	return 0;
+}
+
+bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+	return vma->vm_ops == &secretmem_vm_ops;
+}
+
+static const struct file_operations secretmem_fops = {
+	.mmap		= secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+	return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+				 struct page *newpage, struct page *page,
+				 enum migrate_mode mode)
+{
+	return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+	set_direct_map_default_noflush(page, 1);
+	clear_highpage(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+	.freepage	= secretmem_freepage,
+	.migratepage	= secretmem_migratepage,
+	.isolate_page	= secretmem_isolate_page,
+};
+
+bool page_is_secretmem(struct page *page)
+{
+	struct address_space *mapping = page_mapping(page);
+
+	if (!mapping)
+		return false;
+
+	return mapping->a_ops == &secretmem_aops;
+}
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+	struct file *file = ERR_PTR(-ENOMEM);
+	struct inode *inode;
+
+	inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+	if (IS_ERR(inode))
+		return ERR_CAST(inode);
+
+	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+				 O_RDWR, &secretmem_fops);
+	if (IS_ERR(file))
+		goto err_free_inode;
+
+	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+	mapping_set_unevictable(inode->i_mapping);
+
+	inode->i_mapping->a_ops = &secretmem_aops;
+
+	/* pretend we are a normal file with zero size */
+	inode->i_mode |= S_IFREG;
+	inode->i_size = 0;
+
+	return file;
+
+err_free_inode:
+	iput(inode);
+	return file;
+}
+
+SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
+{
+	struct file *file;
+	int fd, err;
+
+	/* make sure local flags do not conflict with global fcntl.h */
+	BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+	if (!secretmem_enable)
+		return -ENOSYS;
+
+	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(flags & O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	file = secretmem_file_create(flags);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_put_fd;
+	}
+
+	file->f_flags |= O_LARGEFILE;
+
+	fd_install(fd, file);
+	return fd;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return err;
+}
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+	return init_pseudo(fc, SECRETMEM_MAGIC) ? 0 : -ENOMEM;
+}
+
+static struct file_system_type secretmem_fs = {
+	.name		= "secretmem",
+	.init_fs_context = secretmem_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+	int ret = 0;
+
+	if (!secretmem_enable)
+		return ret;
+
+	secretmem_mnt = kern_mount(&secretmem_fs);
+	if (IS_ERR(secretmem_mnt))
+		ret = PTR_ERR(secretmem_mnt);
+
+	return ret;
+}
+fs_initcall(secretmem_init);
-- 
2.28.0
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 6/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-03-03 16:22   ` Mike Rapoport
  0 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

Introduce "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped not
only to other processes but in the kernel page tables as well.

The secretmem feature is off by default and the user must explicitly enable
it at the boot time.

Once secretmem is enabled, the user will be able to create a file
descriptor using the memfd_secret() system call. The memory areas created
by mmap() calls from this file descriptor will be unmapped from the kernel
direct map and they will be only mapped in the page table of the processes
that have access to the file descriptor.

The file descriptor based memory has several advantages over the
"traditional" mm interfaces, such as mlock(), mprotect(), madvise(). File
descriptor approach allows explict and controlled sharing of the memory
areas, it allows to seal the operations. Besides, file descriptor based
memory paves the way for VMMs to remove the secret memory range from the
userpace hipervisor process, for instance QEMU. Andy Lutomirski says:

  "Getting fd-backed memory into a guest will take some possibly major work
   in the kernel, but getting vma-backed memory into a guest without
   mapping it in the host user address space seems much, much worse."

memfd_secret() is made a dedicated system call rather than an extention to
memfd_create() because it's purpose is to allow the user to create more
secure memory mappings rather than to simply allow file based access to the
memory. Nowadays a new system call cost is negligible while it is way
simpler for userspace to deal with a clear-cut system calls than with a
multiplexer or an overloaded syscall. Moreover, the initial implementation
of memfd_secret() is completely distinct from memfd_create() so there is no
much sense in overloading memfd_create() to begin with. If there will be a
need for code sharing between these implementation it can be easily
achieved without a need to adjust user visible APIs.

The secret memory remains accessible in the process context using uaccess
primitives, but it is not exposed to the kernel otherwise; secret memory
areas are removed from the direct map and functions in the
follow_page()/get_user_page() family will refuse to return a page that
belongs to the secret memory area.

Once there will be a use case that will require exposing secretmem to the
kernel it will be an opt-in request in the system call flags so that user
would have to decide what data can be exposed to the kernel.

Removing of the pages from the direct map may cause its fragmentation on
architectures that use large pages to map the physical memory which affects
the system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to have
secretmem disabled by default with the ability of a system administrator to
enable it at boot time.

Pages in the secretmem regions are unevictable and unmovable to avoid
accidental exposure of the sensitive data via swap or during page
migration.

Since the secretmem mappings are locked in memory they cannot exceed
RLIMIT_MEMLOCK. Since these mappings are already locked independently from
mlock(), an attempt to mlock()/munlock() secretmem range would fail and
mlockall()/munlockall() will ignore secretmem mappings.

However, unlike mlock()ed memory, secretmem currently behaves more like
long-term GUP: secretmem mappings are unmovable mappings directly consumed
by user space. With default limits, there is no excessive use of secretmem
and it poses no real problem in combination with ZONE_MOVABLE/CMA, but in
the future this should be addressed to allow balanced use of large amounts
of secretmem along with ZONE_MOVABLE/CMA.

A page that was a part of the secret memory area is cleared when it is
freed to ensure the data is not exposed to the next user of that page.

The following example demonstrates creation of a secret mapping (error
handling is omitted):

	fd = memfd_secret(0);
	ftruncate(fd, MAP_SIZE);
	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
		   MAP_SHARED, fd, 0);

[1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 include/linux/secretmem.h  |  24 ++++
 include/uapi/linux/magic.h |   1 +
 kernel/sys_ni.c            |   2 +
 mm/Kconfig                 |   3 +
 mm/Makefile                |   1 +
 mm/gup.c                   |  10 ++
 mm/mlock.c                 |   3 +-
 mm/secretmem.c             | 246 +++++++++++++++++++++++++++++++++++++
 8 files changed, 289 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/secretmem.h
 create mode 100644 mm/secretmem.c

diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
new file mode 100644
index 000000000000..70e7db9f94fe
--- /dev/null
+++ b/include/linux/secretmem.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_SECRETMEM_H
+#define _LINUX_SECRETMEM_H
+
+#ifdef CONFIG_SECRETMEM
+
+bool vma_is_secretmem(struct vm_area_struct *vma);
+bool page_is_secretmem(struct page *page);
+
+#else
+
+static inline bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+	return false;
+}
+
+static inline bool page_is_secretmem(struct page *page)
+{
+	return false;
+}
+
+#endif /* CONFIG_SECRETMEM */
+
+#endif /* _LINUX_SECRETMEM_H */
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index f3956fc11de6..35687dcb1a42 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -97,5 +97,6 @@
 #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
 #define Z3FOLD_MAGIC		0x33
 #define PPC_CMM_MAGIC		0xc7571590
+#define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 19aa806890d5..e9a2011ee4a2 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -352,6 +352,8 @@ COND_SYSCALL(pkey_mprotect);
 COND_SYSCALL(pkey_alloc);
 COND_SYSCALL(pkey_free);
 
+/* memfd_secret */
+COND_SYSCALL(memfd_secret);
 
 /*
  * Architecture specific weak syscall entries.
diff --git a/mm/Kconfig b/mm/Kconfig
index 24c045b24b95..5f8243442f66 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -872,4 +872,7 @@ config MAPPING_DIRTY_HELPERS
 config KMAP_LOCAL
 	bool
 
+config SECRETMEM
+	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 72227b24a616..b2a564eec27f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -120,3 +120,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
 obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
 obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
 obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_SECRETMEM) += secretmem.o
diff --git a/mm/gup.c b/mm/gup.c
index e40579624f10..ecadc80934b2 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -10,6 +10,7 @@
 #include <linux/rmap.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/secretmem.h>
 
 #include <linux/sched/signal.h>
 #include <linux/rwsem.h>
@@ -758,6 +759,9 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 	struct follow_page_context ctx = { NULL };
 	struct page *page;
 
+	if (vma_is_secretmem(vma))
+		return NULL;
+
 	page = follow_page_mask(vma, address, foll_flags, &ctx);
 	if (ctx.pgmap)
 		put_dev_pagemap(ctx.pgmap);
@@ -891,6 +895,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
 		return -EOPNOTSUPP;
 
+	if (vma_is_secretmem(vma))
+		return -EFAULT;
+
 	if (write) {
 		if (!(vm_flags & VM_WRITE)) {
 			if (!(gup_flags & FOLL_FORCE))
@@ -2030,6 +2037,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
+		if (page_is_secretmem(page))
+			goto pte_unmap;
+
 		head = try_grab_compound_head(page, 1, flags);
 		if (!head)
 			goto pte_unmap;
diff --git a/mm/mlock.c b/mm/mlock.c
index f8f8cc32d03d..188711c72b67 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -23,6 +23,7 @@
 #include <linux/hugetlb.h>
 #include <linux/memcontrol.h>
 #include <linux/mm_inline.h>
+#include <linux/secretmem.h>
 
 #include "internal.h"
 
@@ -503,7 +504,7 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
 
 	if (newflags == vma->vm_flags || (vma->vm_flags & VM_SPECIAL) ||
 	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
-	    vma_is_dax(vma))
+	    vma_is_dax(vma) || vma_is_secretmem(vma))
 		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
 		goto out;
 
diff --git a/mm/secretmem.c b/mm/secretmem.c
new file mode 100644
index 000000000000..fa6738e860c2
--- /dev/null
+++ b/mm/secretmem.c
@@ -0,0 +1,246 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2021
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/swap.h>
+#include <linux/mount.h>
+#include <linux/memfd.h>
+#include <linux/bitops.h>
+#include <linux/printk.h>
+#include <linux/pagemap.h>
+#include <linux/syscalls.h>
+#include <linux/pseudo_fs.h>
+#include <linux/secretmem.h>
+#include <linux/set_memory.h>
+#include <linux/sched/signal.h>
+
+#include <uapi/linux/magic.h>
+
+#include <asm/tlbflush.h>
+
+#include "internal.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "secretmem: " fmt
+
+/*
+ * Define mode and flag masks to allow validation of the system call
+ * parameters.
+ */
+#define SECRETMEM_MODE_MASK	(0x0)
+#define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
+
+static bool secretmem_enable __ro_after_init;
+module_param_named(enable, secretmem_enable, bool, 0400);
+MODULE_PARM_DESC(secretmem_enable,
+		 "Enable secretmem and memfd_secret(2) system call");
+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t offset = vmf->pgoff;
+	gfp_t gfp = vmf->gfp_mask;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+		return vmf_error(-EINVAL);
+
+retry:
+	page = find_lock_page(mapping, offset);
+	if (!page) {
+		page = alloc_page(gfp | __GFP_ZERO);
+		if (!page)
+			return VM_FAULT_OOM;
+
+		err = set_direct_map_invalid_noflush(page, 1);
+		if (err) {
+			put_page(page);
+			return vmf_error(err);
+		}
+
+		__SetPageUptodate(page);
+		err = add_to_page_cache_lru(page, mapping, offset, gfp);
+		if (unlikely(err)) {
+			put_page(page);
+			/*
+			 * If a split of a large page was required, it already
+			 * happened when we marked the page invalid, which
+			 * guarantees that this call won't fail.
+			 */
+			set_direct_map_default_noflush(page, 1);
+			if (err == -EEXIST)
+				goto retry;
+
+			return vmf_error(err);
+		}
+
+		addr = (unsigned long)page_address(page);
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+	}
+
+	vmf->page = page;
+	return VM_FAULT_LOCKED;
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+	.fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	unsigned long len = vma->vm_end - vma->vm_start;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+		return -EINVAL;
+
+	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+		return -EAGAIN;
+
+	vma->vm_flags |= VM_LOCKED | VM_DONTDUMP;
+	vma->vm_ops = &secretmem_vm_ops;
+
+	return 0;
+}
+
+bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+	return vma->vm_ops == &secretmem_vm_ops;
+}
+
+static const struct file_operations secretmem_fops = {
+	.mmap		= secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+	return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+				 struct page *newpage, struct page *page,
+				 enum migrate_mode mode)
+{
+	return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+	set_direct_map_default_noflush(page, 1);
+	clear_highpage(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+	.freepage	= secretmem_freepage,
+	.migratepage	= secretmem_migratepage,
+	.isolate_page	= secretmem_isolate_page,
+};
+
+bool page_is_secretmem(struct page *page)
+{
+	struct address_space *mapping = page_mapping(page);
+
+	if (!mapping)
+		return false;
+
+	return mapping->a_ops == &secretmem_aops;
+}
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+	struct file *file = ERR_PTR(-ENOMEM);
+	struct inode *inode;
+
+	inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+	if (IS_ERR(inode))
+		return ERR_CAST(inode);
+
+	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+				 O_RDWR, &secretmem_fops);
+	if (IS_ERR(file))
+		goto err_free_inode;
+
+	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+	mapping_set_unevictable(inode->i_mapping);
+
+	inode->i_mapping->a_ops = &secretmem_aops;
+
+	/* pretend we are a normal file with zero size */
+	inode->i_mode |= S_IFREG;
+	inode->i_size = 0;
+
+	return file;
+
+err_free_inode:
+	iput(inode);
+	return file;
+}
+
+SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
+{
+	struct file *file;
+	int fd, err;
+
+	/* make sure local flags do not conflict with global fcntl.h */
+	BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+	if (!secretmem_enable)
+		return -ENOSYS;
+
+	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(flags & O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	file = secretmem_file_create(flags);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_put_fd;
+	}
+
+	file->f_flags |= O_LARGEFILE;
+
+	fd_install(fd, file);
+	return fd;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return err;
+}
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+	return init_pseudo(fc, SECRETMEM_MAGIC) ? 0 : -ENOMEM;
+}
+
+static struct file_system_type secretmem_fs = {
+	.name		= "secretmem",
+	.init_fs_context = secretmem_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+	int ret = 0;
+
+	if (!secretmem_enable)
+		return ret;
+
+	secretmem_mnt = kern_mount(&secretmem_fs);
+	if (IS_ERR(secretmem_mnt))
+		ret = PTR_ERR(secretmem_mnt);
+
+	return ret;
+}
+fs_initcall(secretmem_init);
-- 
2.28.0
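As an aside, the secretmem_fault() handler above follows a common find-or-create pattern: look up the page cache, and on a miss allocate a zeroed page and try to insert it, retrying if another inserter won the race (the -EEXIST case). A rough sketch of that control flow, purely illustrative Python rather than kernel code, with a plain dict standing in for the page cache:

```python
def handle_fault(cache, offset, alloc):
    """Find-or-create with a retry on the insertion race.

    cache  -- dict standing in for the address_space page cache
    offset -- page offset being faulted (find_lock_page() key)
    alloc  -- callable standing in for alloc_page(gfp | __GFP_ZERO)
    """
    while True:
        page = cache.get(offset)            # find_lock_page()
        if page is not None:
            return page                     # already populated
        page = alloc()                      # allocate a fresh zeroed page
        winner = cache.setdefault(offset, page)  # add_to_page_cache_lru()
        if winner is page:
            return page                     # we inserted our page
        # lost the race (-EEXIST): drop our page and retry the lookup

cache = {}
p1 = handle_fault(cache, 0, lambda: bytearray(4096))
p2 = handle_fault(cache, 0, lambda: bytearray(4096))
print(p1 is p2)
```

Repeated faults on the same offset return the same page object, which is exactly the invariant the retry loop preserves under concurrency.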


^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 6/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-03-03 16:22   ` Mike Rapoport
  0 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

Introduce the "memfd_secret" system call, which creates memory areas that
are visible only in the context of the owning process and are not mapped
either into other processes or into the kernel page tables.

The secretmem feature is off by default and the user must explicitly enable
it at boot time.

Once secretmem is enabled, the user will be able to create a file
descriptor using the memfd_secret() system call. The memory areas created
by mmap() calls on this file descriptor will be unmapped from the kernel
direct map and mapped only in the page tables of the processes that have
access to the file descriptor.

File-descriptor-based memory has several advantages over the "traditional"
mm interfaces such as mlock(), mprotect() and madvise(): it allows explicit
and controlled sharing of the memory areas, and it allows the operations on
them to be sealed. Besides, file-descriptor-based memory paves the way for
VMMs to remove the secret memory range from the userspace hypervisor
process, for instance QEMU. Andy Lutomirski says:

  "Getting fd-backed memory into a guest will take some possibly major work
   in the kernel, but getting vma-backed memory into a guest without
   mapping it in the host user address space seems much, much worse."

memfd_secret() is made a dedicated system call rather than an extension to
memfd_create() because its purpose is to allow the user to create more
secure memory mappings rather than simply to allow file-based access to the
memory. Nowadays the cost of a new system call is negligible, while it is
far simpler for userspace to deal with a clear-cut system call than with a
multiplexer or an overloaded syscall. Moreover, the initial implementation
of memfd_secret() is completely distinct from memfd_create(), so there is
not much sense in overloading memfd_create() to begin with. If a need for
code sharing between these implementations arises, it can easily be
accommodated without adjusting user-visible APIs.

The secret memory remains accessible in the process context using uaccess
primitives, but it is not exposed to the kernel otherwise; secret memory
areas are removed from the direct map and functions in the
follow_page()/get_user_page() family will refuse to return a page that
belongs to the secret memory area.

Should a use case arise that requires exposing secretmem to the kernel, it
will be an opt-in request in the system call flags, so that the user has to
decide what data can be exposed to the kernel.

Removing pages from the direct map may cause fragmentation of the direct
map on architectures that use large pages to map physical memory, which
affects system performance. However, the original Kconfig text for
CONFIG_DIRECT_GBPAGES said that gigabyte pages in the direct map "... can
improve the kernel's performance a tiny bit ..." (commit 00d1c5e05736
("x86: add gbpages switches")) and the recent report [1] showed that "...
although 1G mappings are a good default choice, there is no compelling
evidence that it must be the only choice". Hence, it is sufficient to have
secretmem disabled by default with the ability of a system administrator to
enable it at boot time.

Pages in the secretmem regions are unevictable and unmovable to avoid
accidental exposure of the sensitive data via swap or during page
migration.

Since secretmem mappings are locked in memory, they cannot exceed
RLIMIT_MEMLOCK. Because these mappings are already locked independently of
mlock(), an attempt to mlock()/munlock() a secretmem range fails and
mlockall()/munlockall() ignores secretmem mappings.

However, unlike mlock()ed memory, secretmem currently behaves more like
long-term GUP: secretmem mappings are unmovable mappings directly consumed
by user space. With default limits, there is no excessive use of secretmem
and it poses no real problem in combination with ZONE_MOVABLE/CMA, but in
the future this should be addressed to allow balanced use of large amounts
of secretmem along with ZONE_MOVABLE/CMA.

A page that was part of a secret memory area is cleared when it is freed,
to ensure its data is not exposed to the next user of that page.

The following example demonstrates creation of a secret mapping (error
handling is omitted):

	fd = memfd_secret(0);
	ftruncate(fd, MAP_SIZE);
	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
		   MAP_SHARED, fd, 0);

[1] https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 include/linux/secretmem.h  |  24 ++++
 include/uapi/linux/magic.h |   1 +
 kernel/sys_ni.c            |   2 +
 mm/Kconfig                 |   3 +
 mm/Makefile                |   1 +
 mm/gup.c                   |  10 ++
 mm/mlock.c                 |   3 +-
 mm/secretmem.c             | 246 +++++++++++++++++++++++++++++++++++++
 8 files changed, 289 insertions(+), 1 deletion(-)
 create mode 100644 include/linux/secretmem.h
 create mode 100644 mm/secretmem.c

diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
new file mode 100644
index 000000000000..70e7db9f94fe
--- /dev/null
+++ b/include/linux/secretmem.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_SECRETMEM_H
+#define _LINUX_SECRETMEM_H
+
+#ifdef CONFIG_SECRETMEM
+
+bool vma_is_secretmem(struct vm_area_struct *vma);
+bool page_is_secretmem(struct page *page);
+
+#else
+
+static inline bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+	return false;
+}
+
+static inline bool page_is_secretmem(struct page *page)
+{
+	return false;
+}
+
+#endif /* CONFIG_SECRETMEM */
+
+#endif /* _LINUX_SECRETMEM_H */
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index f3956fc11de6..35687dcb1a42 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -97,5 +97,6 @@
 #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
 #define Z3FOLD_MAGIC		0x33
 #define PPC_CMM_MAGIC		0xc7571590
+#define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 19aa806890d5..e9a2011ee4a2 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -352,6 +352,8 @@ COND_SYSCALL(pkey_mprotect);
 COND_SYSCALL(pkey_alloc);
 COND_SYSCALL(pkey_free);
 
+/* memfd_secret */
+COND_SYSCALL(memfd_secret);
 
 /*
  * Architecture specific weak syscall entries.
diff --git a/mm/Kconfig b/mm/Kconfig
index 24c045b24b95..5f8243442f66 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -872,4 +872,7 @@ config MAPPING_DIRTY_HELPERS
 config KMAP_LOCAL
 	bool
 
+config SECRETMEM
+	def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+
 endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 72227b24a616..b2a564eec27f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -120,3 +120,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
 obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
 obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
 obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_SECRETMEM) += secretmem.o
diff --git a/mm/gup.c b/mm/gup.c
index e40579624f10..ecadc80934b2 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -10,6 +10,7 @@
 #include <linux/rmap.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/secretmem.h>
 
 #include <linux/sched/signal.h>
 #include <linux/rwsem.h>
@@ -758,6 +759,9 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 	struct follow_page_context ctx = { NULL };
 	struct page *page;
 
+	if (vma_is_secretmem(vma))
+		return NULL;
+
 	page = follow_page_mask(vma, address, foll_flags, &ctx);
 	if (ctx.pgmap)
 		put_dev_pagemap(ctx.pgmap);
@@ -891,6 +895,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
 		return -EOPNOTSUPP;
 
+	if (vma_is_secretmem(vma))
+		return -EFAULT;
+
 	if (write) {
 		if (!(vm_flags & VM_WRITE)) {
 			if (!(gup_flags & FOLL_FORCE))
@@ -2030,6 +2037,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
 		page = pte_page(pte);
 
+		if (page_is_secretmem(page))
+			goto pte_unmap;
+
 		head = try_grab_compound_head(page, 1, flags);
 		if (!head)
 			goto pte_unmap;
diff --git a/mm/mlock.c b/mm/mlock.c
index f8f8cc32d03d..188711c72b67 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -23,6 +23,7 @@
 #include <linux/hugetlb.h>
 #include <linux/memcontrol.h>
 #include <linux/mm_inline.h>
+#include <linux/secretmem.h>
 
 #include "internal.h"
 
@@ -503,7 +504,7 @@ static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
 
 	if (newflags == vma->vm_flags || (vma->vm_flags & VM_SPECIAL) ||
 	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
-	    vma_is_dax(vma))
+	    vma_is_dax(vma) || vma_is_secretmem(vma))
 		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
 		goto out;
 
diff --git a/mm/secretmem.c b/mm/secretmem.c
new file mode 100644
index 000000000000..fa6738e860c2
--- /dev/null
+++ b/mm/secretmem.c
@@ -0,0 +1,246 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2021
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/swap.h>
+#include <linux/mount.h>
+#include <linux/memfd.h>
+#include <linux/bitops.h>
+#include <linux/printk.h>
+#include <linux/pagemap.h>
+#include <linux/syscalls.h>
+#include <linux/pseudo_fs.h>
+#include <linux/secretmem.h>
+#include <linux/set_memory.h>
+#include <linux/sched/signal.h>
+
+#include <uapi/linux/magic.h>
+
+#include <asm/tlbflush.h>
+
+#include "internal.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "secretmem: " fmt
+
+/*
+ * Define mode and flag masks to allow validation of the system call
+ * parameters.
+ */
+#define SECRETMEM_MODE_MASK	(0x0)
+#define SECRETMEM_FLAGS_MASK	SECRETMEM_MODE_MASK
+
+static bool secretmem_enable __ro_after_init;
+module_param_named(enable, secretmem_enable, bool, 0400);
+MODULE_PARM_DESC(secretmem_enable,
+		 "Enable secretmem and memfd_secret(2) system call");
+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t offset = vmf->pgoff;
+	gfp_t gfp = vmf->gfp_mask;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+		return vmf_error(-EINVAL);
+
+retry:
+	page = find_lock_page(mapping, offset);
+	if (!page) {
+		page = alloc_page(gfp | __GFP_ZERO);
+		if (!page)
+			return VM_FAULT_OOM;
+
+		err = set_direct_map_invalid_noflush(page, 1);
+		if (err) {
+			put_page(page);
+			return vmf_error(err);
+		}
+
+		__SetPageUptodate(page);
+		err = add_to_page_cache_lru(page, mapping, offset, gfp);
+		if (unlikely(err)) {
+			put_page(page);
+			/*
+			 * If a split of a large page was required, it already
+			 * happened when we marked the page invalid, which
+			 * guarantees that this call won't fail.
+			 */
+			set_direct_map_default_noflush(page, 1);
+			if (err == -EEXIST)
+				goto retry;
+
+			return vmf_error(err);
+		}
+
+		addr = (unsigned long)page_address(page);
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+	}
+
+	vmf->page = page;
+	return VM_FAULT_LOCKED;
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+	.fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	unsigned long len = vma->vm_end - vma->vm_start;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+		return -EINVAL;
+
+	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+		return -EAGAIN;
+
+	vma->vm_flags |= VM_LOCKED | VM_DONTDUMP;
+	vma->vm_ops = &secretmem_vm_ops;
+
+	return 0;
+}
+
+bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+	return vma->vm_ops == &secretmem_vm_ops;
+}
+
+static const struct file_operations secretmem_fops = {
+	.mmap		= secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+	return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+				 struct page *newpage, struct page *page,
+				 enum migrate_mode mode)
+{
+	return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+	set_direct_map_default_noflush(page, 1);
+	clear_highpage(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+	.freepage	= secretmem_freepage,
+	.migratepage	= secretmem_migratepage,
+	.isolate_page	= secretmem_isolate_page,
+};
+
+bool page_is_secretmem(struct page *page)
+{
+	struct address_space *mapping = page_mapping(page);
+
+	if (!mapping)
+		return false;
+
+	return mapping->a_ops == &secretmem_aops;
+}
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+	struct file *file = ERR_PTR(-ENOMEM);
+	struct inode *inode;
+
+	inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+	if (IS_ERR(inode))
+		return ERR_CAST(inode);
+
+	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+				 O_RDWR, &secretmem_fops);
+	if (IS_ERR(file))
+		goto err_free_inode;
+
+	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+	mapping_set_unevictable(inode->i_mapping);
+
+	inode->i_mapping->a_ops = &secretmem_aops;
+
+	/* pretend we are a normal file with zero size */
+	inode->i_mode |= S_IFREG;
+	inode->i_size = 0;
+
+	return file;
+
+err_free_inode:
+	iput(inode);
+	return file;
+}
+
+SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
+{
+	struct file *file;
+	int fd, err;
+
+	/* make sure local flags do not conflict with global fcntl.h */
+	BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+	if (!secretmem_enable)
+		return -ENOSYS;
+
+	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(flags & O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	file = secretmem_file_create(flags);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_put_fd;
+	}
+
+	file->f_flags |= O_LARGEFILE;
+
+	fd_install(fd, file);
+	return fd;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return err;
+}
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+	return init_pseudo(fc, SECRETMEM_MAGIC) ? 0 : -ENOMEM;
+}
+
+static struct file_system_type secretmem_fs = {
+	.name		= "secretmem",
+	.init_fs_context = secretmem_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+	int ret = 0;
+
+	if (!secretmem_enable)
+		return ret;
+
+	secretmem_mnt = kern_mount(&secretmem_fs);
+	if (IS_ERR(secretmem_mnt))
+		ret = PTR_ERR(secretmem_mnt);
+
+	return ret;
+}
+fs_initcall(secretmem_init);
-- 
2.28.0
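A small aside on the SECRETMEM_MAGIC value added to magic.h in this patch: it is simply the ASCII string "SECM" packed into a 32-bit word, matching the /* "SECM" */ comment. A quick sanity check:

```python
# SECRETMEM_MAGIC from include/uapi/linux/magic.h in this patch
SECRETMEM_MAGIC = 0x5345434D

# reading the word big-endian recovers the ASCII name spelled
# out in the comment next to the definition
name = SECRETMEM_MAGIC.to_bytes(4, "big").decode("ascii")
print(name)
```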


_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 84+ messages in thread

+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	pgoff_t offset = vmf->pgoff;
+	gfp_t gfp = vmf->gfp_mask;
+	unsigned long addr;
+	struct page *page;
+	int err;
+
+	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+		return vmf_error(-EINVAL);
+
+retry:
+	page = find_lock_page(mapping, offset);
+	if (!page) {
+		page = alloc_page(gfp | __GFP_ZERO);
+		if (!page)
+			return VM_FAULT_OOM;
+
+		err = set_direct_map_invalid_noflush(page, 1);
+		if (err) {
+			put_page(page);
+			return vmf_error(err);
+		}
+
+		__SetPageUptodate(page);
+		err = add_to_page_cache_lru(page, mapping, offset, gfp);
+		if (unlikely(err)) {
+			put_page(page);
+			/*
+			 * If a split of a large page was required, it
+			 * already happened when we marked the page invalid,
+			 * which guarantees that this call won't fail
+			 */
+			set_direct_map_default_noflush(page, 1);
+			if (err == -EEXIST)
+				goto retry;
+
+			return vmf_error(err);
+		}
+
+		addr = (unsigned long)page_address(page);
+		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+	}
+
+	vmf->page = page;
+	return VM_FAULT_LOCKED;
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+	.fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	unsigned long len = vma->vm_end - vma->vm_start;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+		return -EINVAL;
+
+	if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+		return -EAGAIN;
+
+	vma->vm_flags |= VM_LOCKED | VM_DONTDUMP;
+	vma->vm_ops = &secretmem_vm_ops;
+
+	return 0;
+}
+
+bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+	return vma->vm_ops == &secretmem_vm_ops;
+}
+
+static const struct file_operations secretmem_fops = {
+	.mmap		= secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+	return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+				 struct page *newpage, struct page *page,
+				 enum migrate_mode mode)
+{
+	return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+	set_direct_map_default_noflush(page, 1);
+	clear_highpage(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+	.freepage	= secretmem_freepage,
+	.migratepage	= secretmem_migratepage,
+	.isolate_page	= secretmem_isolate_page,
+};
+
+bool page_is_secretmem(struct page *page)
+{
+	struct address_space *mapping = page_mapping(page);
+
+	if (!mapping)
+		return false;
+
+	return mapping->a_ops == &secretmem_aops;
+}
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+	struct file *file = ERR_PTR(-ENOMEM);
+	struct inode *inode;
+
+	inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+	if (IS_ERR(inode))
+		return ERR_CAST(inode);
+
+	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+				 O_RDWR, &secretmem_fops);
+	if (IS_ERR(file))
+		goto err_free_inode;
+
+	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+	mapping_set_unevictable(inode->i_mapping);
+
+	inode->i_mapping->a_ops = &secretmem_aops;
+
+	/* pretend we are a normal file with zero size */
+	inode->i_mode |= S_IFREG;
+	inode->i_size = 0;
+
+	return file;
+
+err_free_inode:
+	iput(inode);
+	return file;
+}
+
+SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
+{
+	struct file *file;
+	int fd, err;
+
+	/* make sure local flags do not conflict with global fcntl.h */
+	BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+	if (!secretmem_enable)
+		return -ENOSYS;
+
+	if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+		return -EINVAL;
+
+	fd = get_unused_fd_flags(flags & O_CLOEXEC);
+	if (fd < 0)
+		return fd;
+
+	file = secretmem_file_create(flags);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_put_fd;
+	}
+
+	file->f_flags |= O_LARGEFILE;
+
+	fd_install(fd, file);
+	return fd;
+
+err_put_fd:
+	put_unused_fd(fd);
+	return err;
+}
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+	return init_pseudo(fc, SECRETMEM_MAGIC) ? 0 : -ENOMEM;
+}
+
+static struct file_system_type secretmem_fs = {
+	.name		= "secretmem",
+	.init_fs_context = secretmem_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+	int ret = 0;
+
+	if (!secretmem_enable)
+		return ret;
+
+	secretmem_mnt = kern_mount(&secretmem_fs);
+	if (IS_ERR(secretmem_mnt))
+		ret = PTR_ERR(secretmem_mnt);
+
+	return ret;
+}
+fs_initcall(secretmem_init);
-- 
2.28.0


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 7/9] PM: hibernate: disable when there are active secretmem users
  2021-03-03 16:22 ` Mike Rapoport
@ 2021-03-03 16:22   ` Mike Rapoport
  -1 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

It is unsafe to allow secretmem areas to be saved to the hibernation
snapshot: they would be visible there after resume, which would defeat
the purpose of secret memory mappings.

Prevent hibernation whenever there are active secret memory users.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 include/linux/secretmem.h |  6 ++++++
 kernel/power/hibernate.c  |  5 ++++-
 mm/secretmem.c            | 15 +++++++++++++++
 3 files changed, 25 insertions(+), 1 deletion(-)

diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 70e7db9f94fe..907a6734059c 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -6,6 +6,7 @@
 
 bool vma_is_secretmem(struct vm_area_struct *vma);
 bool page_is_secretmem(struct page *page);
+bool secretmem_active(void);
 
 #else
 
@@ -19,6 +20,11 @@ static inline bool page_is_secretmem(struct page *page)
 	return false;
 }
 
+static inline bool secretmem_active(void)
+{
+	return false;
+}
+
 #endif /* CONFIG_SECRETMEM */
 
 #endif /* _LINUX_SECRETMEM_H */
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index da0b41914177..559acef3fddb 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -31,6 +31,7 @@
 #include <linux/genhd.h>
 #include <linux/ktime.h>
 #include <linux/security.h>
+#include <linux/secretmem.h>
 #include <trace/events/power.h>
 
 #include "power.h"
@@ -81,7 +82,9 @@ void hibernate_release(void)
 
 bool hibernation_available(void)
 {
-	return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION);
+	return nohibernate == 0 &&
+		!security_locked_down(LOCKDOWN_HIBERNATION) &&
+		!secretmem_active();
 }
 
 /**
diff --git a/mm/secretmem.c b/mm/secretmem.c
index fa6738e860c2..f2ae3f32a193 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -40,6 +40,13 @@ module_param_named(enable, secretmem_enable, bool, 0400);
 MODULE_PARM_DESC(secretmem_enable,
 		 "Enable secretmem and memfd_secret(2) system call");
 
+static atomic_t secretmem_users;
+
+bool secretmem_active(void)
+{
+	return !!atomic_read(&secretmem_users);
+}
+
 static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 {
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
@@ -94,6 +101,12 @@ static const struct vm_operations_struct secretmem_vm_ops = {
 	.fault = secretmem_fault,
 };
 
+static int secretmem_release(struct inode *inode, struct file *file)
+{
+	atomic_dec(&secretmem_users);
+	return 0;
+}
+
 static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
 {
 	unsigned long len = vma->vm_end - vma->vm_start;
@@ -116,6 +129,7 @@ bool vma_is_secretmem(struct vm_area_struct *vma)
 }
 
 static const struct file_operations secretmem_fops = {
+	.release	= secretmem_release,
 	.mmap		= secretmem_mmap,
 };
 
@@ -212,6 +226,7 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
 	file->f_flags |= O_LARGEFILE;
 
 	fd_install(fd, file);
+	atomic_inc(&secretmem_users);
 	return fd;
 
 err_put_fd:
-- 
2.28.0
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 8/9] arch, mm: wire up memfd_secret system call where relevant
  2021-03-03 16:22 ` Mike Rapoport
@ 2021-03-03 16:22   ` Mike Rapoport
  -1 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86, Palmer Dabbelt,
	Hagen Paul Pfeifer

From: Mike Rapoport <rppt@linux.ibm.com>

Wire up the memfd_secret system call on architectures that define
ARCH_HAS_SET_DIRECT_MAP, namely arm64, RISC-V and x86.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/uapi/asm/unistd.h   | 1 +
 arch/riscv/include/asm/unistd.h        | 1 +
 arch/x86/entry/syscalls/syscall_32.tbl | 1 +
 arch/x86/entry/syscalls/syscall_64.tbl | 1 +
 include/linux/syscalls.h               | 1 +
 include/uapi/asm-generic/unistd.h      | 6 +++++-
 scripts/checksyscalls.sh               | 4 ++++
 7 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h
index f83a70e07df8..ce2ee8f1e361 100644
--- a/arch/arm64/include/uapi/asm/unistd.h
+++ b/arch/arm64/include/uapi/asm/unistd.h
@@ -20,5 +20,6 @@
 #define __ARCH_WANT_SET_GET_RLIMIT
 #define __ARCH_WANT_TIME32_SYSCALLS
 #define __ARCH_WANT_SYS_CLONE3
+#define __ARCH_WANT_MEMFD_SECRET
 
 #include <asm-generic/unistd.h>
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
index 977ee6181dab..6c316093a1e5 100644
--- a/arch/riscv/include/asm/unistd.h
+++ b/arch/riscv/include/asm/unistd.h
@@ -9,6 +9,7 @@
  */
 
 #define __ARCH_WANT_SYS_CLONE
+#define __ARCH_WANT_MEMFD_SECRET
 
 #include <uapi/asm/unistd.h>
 
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index a1c9f496fca6..34f04076a140 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -447,3 +447,4 @@
 440	i386	process_madvise		sys_process_madvise
 441	i386	epoll_pwait2		sys_epoll_pwait2		compat_sys_epoll_pwait2
 442	i386	mount_setattr		sys_mount_setattr
+443	i386	memfd_secret		sys_memfd_secret
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 7bf01cbe582f..bd3783edf27f 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -364,6 +364,7 @@
 440	common	process_madvise		sys_process_madvise
 441	common	epoll_pwait2		sys_epoll_pwait2
 442	common	mount_setattr		sys_mount_setattr
+443	common	memfd_secret		sys_memfd_secret
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 2839dc9a7c01..4b87a2b3f442 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1041,6 +1041,7 @@ asmlinkage long sys_pidfd_send_signal(int pidfd, int sig,
 				       siginfo_t __user *info,
 				       unsigned int flags);
 asmlinkage long sys_pidfd_getfd(int pidfd, int fd, unsigned int flags);
+asmlinkage long sys_memfd_secret(unsigned long flags);
 
 /*
  * Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index ce58cff99b66..7ac0732dbaa4 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -863,9 +863,13 @@ __SYSCALL(__NR_process_madvise, sys_process_madvise)
 __SC_COMP(__NR_epoll_pwait2, sys_epoll_pwait2, compat_sys_epoll_pwait2)
 #define __NR_mount_setattr 442
 __SYSCALL(__NR_mount_setattr, sys_mount_setattr)
+#ifdef __ARCH_WANT_MEMFD_SECRET
+#define __NR_memfd_secret 443
+__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
+#endif
 
 #undef __NR_syscalls
-#define __NR_syscalls 443
+#define __NR_syscalls 444
 
 /*
  * 32 bit systems traditionally used different
diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
index a18b47695f55..b7609958ee36 100755
--- a/scripts/checksyscalls.sh
+++ b/scripts/checksyscalls.sh
@@ -40,6 +40,10 @@ cat << EOF
 #define __IGNORE_setrlimit	/* setrlimit */
 #endif
 
+#ifndef __ARCH_WANT_MEMFD_SECRET
+#define __IGNORE_memfd_secret
+#endif
+
 /* Missing flags argument */
 #define __IGNORE_renameat	/* renameat2 */
 
-- 
2.28.0
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 9/9] secretmem: test: add basic selftest for memfd_secret(2)
  2021-03-03 16:22 ` Mike Rapoport
@ 2021-03-03 16:22   ` Mike Rapoport
  -1 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

The test verifies that a file descriptor created with memfd_secret() does
not allow read/write operations, that secret memory mappings respect
RLIMIT_MEMLOCK, and that remote accesses to the secret memory with
process_vm_read() and ptrace() fail.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 tools/testing/selftests/vm/.gitignore     |   1 +
 tools/testing/selftests/vm/Makefile       |   3 +-
 tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++
 tools/testing/selftests/vm/run_vmtests.sh |  17 ++
 4 files changed, 316 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/vm/memfd_secret.c

diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore
index 9a35c3f6a557..c8deddc81e7a 100644
--- a/tools/testing/selftests/vm/.gitignore
+++ b/tools/testing/selftests/vm/.gitignore
@@ -21,4 +21,5 @@ va_128TBswitch
 map_fixed_noreplace
 write_to_hugetlbfs
 hmm-tests
+memfd_secret
 local_config.*
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index d42115e4284d..0200fb61646c 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -34,6 +34,7 @@ TEST_GEN_FILES += khugepaged
 TEST_GEN_FILES += map_fixed_noreplace
 TEST_GEN_FILES += map_hugetlb
 TEST_GEN_FILES += map_populate
+TEST_GEN_FILES += memfd_secret
 TEST_GEN_FILES += mlock-random-test
 TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mremap_dontunmap
@@ -133,7 +134,7 @@ warn_32bit_failure:
 endif
 endif
 
-$(OUTPUT)/mlock-random-test: LDLIBS += -lcap
+$(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap
 
 $(OUTPUT)/gup_test: ../../../../mm/gup_test.h
 
diff --git a/tools/testing/selftests/vm/memfd_secret.c b/tools/testing/selftests/vm/memfd_secret.c
new file mode 100644
index 000000000000..c878c2b841fc
--- /dev/null
+++ b/tools/testing/selftests/vm/memfd_secret.c
@@ -0,0 +1,296 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#define _GNU_SOURCE
+#include <sys/uio.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ptrace.h>
+#include <sys/syscall.h>
+#include <sys/resource.h>
+#include <sys/capability.h>
+
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <errno.h>
+#include <stdio.h>
+
+#include "../kselftest.h"
+
+#define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__)
+#define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__)
+#define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
+
+#ifdef __NR_memfd_secret
+
+#define PATTERN	0x55
+
+static const int prot = PROT_READ | PROT_WRITE;
+static const int mode = MAP_SHARED;
+
+static unsigned long page_size;
+static unsigned long mlock_limit_cur;
+static unsigned long mlock_limit_max;
+
+static int memfd_secret(unsigned long flags)
+{
+	return syscall(__NR_memfd_secret, flags);
+}
+
+static void test_file_apis(int fd)
+{
+	char buf[64];
+
+	if ((read(fd, buf, sizeof(buf)) >= 0) ||
+	    (write(fd, buf, sizeof(buf)) >= 0) ||
+	    (pread(fd, buf, sizeof(buf), 0) >= 0) ||
+	    (pwrite(fd, buf, sizeof(buf), 0) >= 0))
+		fail("unexpected file IO\n");
+	else
+		pass("file IO is blocked as expected\n");
+}
+
+static void test_mlock_limit(int fd)
+{
+	size_t len;
+	char *mem;
+
+	len = mlock_limit_cur;
+	mem = mmap(NULL, len, prot, mode, fd, 0);
+	if (mem == MAP_FAILED) {
+		fail("unable to mmap secret memory\n");
+		return;
+	}
+	munmap(mem, len);
+
+	len = mlock_limit_max * 2;
+	mem = mmap(NULL, len, prot, mode, fd, 0);
+	if (mem != MAP_FAILED) {
+		fail("unexpected mlock limit violation\n");
+		munmap(mem, len);
+		return;
+	}
+
+	pass("mlock limit is respected\n");
+}
+
+static void try_process_vm_read(int fd, int pipefd[2])
+{
+	struct iovec liov, riov;
+	char buf[64];
+	char *mem;
+
+	if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+		fail("pipe read: %s\n", strerror(errno));
+		exit(KSFT_FAIL);
+	}
+
+	liov.iov_len = riov.iov_len = sizeof(buf);
+	liov.iov_base = buf;
+	riov.iov_base = mem;
+
+	if (process_vm_readv(getppid(), &liov, 1, &riov, 1, 0) < 0) {
+		if (errno == ENOSYS)
+			exit(KSFT_SKIP);
+		exit(KSFT_PASS);
+	}
+
+	exit(KSFT_FAIL);
+}
+
+static void try_ptrace(int fd, int pipefd[2])
+{
+	pid_t ppid = getppid();
+	int status;
+	char *mem;
+	long ret;
+
+	if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+		perror("pipe read");
+		exit(KSFT_FAIL);
+	}
+
+	ret = ptrace(PTRACE_ATTACH, ppid, 0, 0);
+	if (ret) {
+		perror("ptrace_attach");
+		exit(KSFT_FAIL);
+	}
+
+	ret = waitpid(ppid, &status, WUNTRACED);
+	if ((ret != ppid) || !(WIFSTOPPED(status))) {
+		fprintf(stderr, "weird waitppid result %ld stat %x\n",
+			ret, status);
+		exit(KSFT_FAIL);
+	}
+
+	if (ptrace(PTRACE_PEEKDATA, ppid, mem, 0))
+		exit(KSFT_PASS);
+
+	exit(KSFT_FAIL);
+}
+
+static void check_child_status(pid_t pid, const char *name)
+{
+	int status;
+
+	waitpid(pid, &status, 0);
+
+	if (WIFEXITED(status) && WEXITSTATUS(status) == KSFT_SKIP) {
+		skip("%s is not supported\n", name);
+		return;
+	}
+
+	if ((WIFEXITED(status) && WEXITSTATUS(status) == KSFT_PASS) ||
+	    WIFSIGNALED(status)) {
+		pass("%s is blocked as expected\n", name);
+		return;
+	}
+
+	fail("%s: unexpected memory access\n", name);
+}
+
+static void test_remote_access(int fd, const char *name,
+			       void (*func)(int fd, int pipefd[2]))
+{
+	int pipefd[2];
+	pid_t pid;
+	char *mem;
+
+	if (pipe(pipefd)) {
+		fail("pipe failed: %s\n", strerror(errno));
+		return;
+	}
+
+	pid = fork();
+	if (pid < 0) {
+		fail("fork failed: %s\n", strerror(errno));
+		return;
+	}
+
+	if (pid == 0) {
+		func(fd, pipefd);
+		return;
+	}
+
+	mem = mmap(NULL, page_size, prot, mode, fd, 0);
+	if (mem == MAP_FAILED) {
+		fail("Unable to mmap secret memory\n");
+		return;
+	}
+
+	ftruncate(fd, page_size);
+	memset(mem, PATTERN, page_size);
+
+	if (write(pipefd[1], &mem, sizeof(mem)) < 0) {
+		fail("pipe write: %s\n", strerror(errno));
+		return;
+	}
+
+	check_child_status(pid, name);
+}
+
+static void test_process_vm_read(int fd)
+{
+	test_remote_access(fd, "process_vm_read", try_process_vm_read);
+}
+
+static void test_ptrace(int fd)
+{
+	test_remote_access(fd, "ptrace", try_ptrace);
+}
+
+static int set_cap_limits(rlim_t max)
+{
+	struct rlimit new;
+	cap_t cap = cap_init();
+
+	new.rlim_cur = max;
+	new.rlim_max = max;
+	if (setrlimit(RLIMIT_MEMLOCK, &new)) {
+		perror("setrlimit() returns error");
+		return -1;
+	}
+
+	/* drop capabilities including CAP_IPC_LOCK */
+	if (cap_set_proc(cap)) {
+		perror("cap_set_proc() returns error");
+		return -2;
+	}
+
+	return 0;
+}
+
+static void prepare(void)
+{
+	struct rlimit rlim;
+
+	page_size = sysconf(_SC_PAGE_SIZE);
+	if (!page_size)
+		ksft_exit_fail_msg("Failed to get page size %s\n",
+				   strerror(errno));
+
+	if (getrlimit(RLIMIT_MEMLOCK, &rlim))
+		ksft_exit_fail_msg("Unable to detect mlock limit: %s\n",
+				   strerror(errno));
+
+	mlock_limit_cur = rlim.rlim_cur;
+	mlock_limit_max = rlim.rlim_max;
+
+	printf("page_size: %ld, mlock.soft: %ld, mlock.hard: %ld\n",
+	       page_size, mlock_limit_cur, mlock_limit_max);
+
+	if (page_size > mlock_limit_cur)
+		mlock_limit_cur = page_size;
+	if (page_size > mlock_limit_max)
+		mlock_limit_max = page_size;
+
+	if (set_cap_limits(mlock_limit_max))
+		ksft_exit_fail_msg("Unable to set mlock limit: %s\n",
+				   strerror(errno));
+}
+
+#define NUM_TESTS 4
+
+int main(int argc, char *argv[])
+{
+	int fd;
+
+	prepare();
+
+	ksft_print_header();
+	ksft_set_plan(NUM_TESTS);
+
+	fd = memfd_secret(0);
+	if (fd < 0) {
+		if (errno == ENOSYS)
+			ksft_exit_skip("memfd_secret is not supported\n");
+		else
+			ksft_exit_fail_msg("memfd_secret failed: %s\n",
+					   strerror(errno));
+	}
+
+	test_mlock_limit(fd);
+	test_file_apis(fd);
+	test_process_vm_read(fd);
+	test_ptrace(fd);
+
+	close(fd);
+
+	ksft_exit(!ksft_get_fail_cnt());
+}
+
+#else /* __NR_memfd_secret */
+
+int main(int argc, char *argv[])
+{
+	printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n");
+	return KSFT_SKIP;
+}
+
+#endif /* __NR_memfd_secret */
diff --git a/tools/testing/selftests/vm/run_vmtests.sh b/tools/testing/selftests/vm/run_vmtests.sh
index e953f3cd9664..95a67382f132 100755
--- a/tools/testing/selftests/vm/run_vmtests.sh
+++ b/tools/testing/selftests/vm/run_vmtests.sh
@@ -346,4 +346,21 @@ else
 	exitcode=1
 fi
 
+echo "running memfd_secret test"
+echo "------------------------------------"
+./memfd_secret
+ret_val=$?
+
+if [ $ret_val -eq 0 ]; then
+	echo "[PASS]"
+elif [ $ret_val -eq $ksft_skip ]; then
+	echo "[SKIP]"
+	exitcode=$ksft_skip
+else
+	echo "[FAIL]"
+	exitcode=1
+fi
+
+exit $exitcode
+
 exit $exitcode
-- 
2.28.0

^ permalink raw reply	[flat|nested] 84+ messages in thread

* [PATCH v18 9/9] secretmem: test: add basic selftest for memfd_secret(2)
@ 2021-03-03 16:22   ` Mike Rapoport
  0 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-03-03 16:22 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86,
	Hagen Paul Pfeifer, Palmer Dabbelt

From: Mike Rapoport <rppt@linux.ibm.com>

The test verifies that a file descriptor created with memfd_secret() does
not allow read/write operations, that secret memory mappings respect
RLIMIT_MEMLOCK, and that remote access to the secret memory with
process_vm_readv() and ptrace() fails.

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
 tools/testing/selftests/vm/.gitignore     |   1 +
 tools/testing/selftests/vm/Makefile       |   3 +-
 tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++
 tools/testing/selftests/vm/run_vmtests.sh |  17 ++
 4 files changed, 316 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/vm/memfd_secret.c

diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore
index 9a35c3f6a557..c8deddc81e7a 100644
--- a/tools/testing/selftests/vm/.gitignore
+++ b/tools/testing/selftests/vm/.gitignore
@@ -21,4 +21,5 @@ va_128TBswitch
 map_fixed_noreplace
 write_to_hugetlbfs
 hmm-tests
+memfd_secret
 local_config.*
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index d42115e4284d..0200fb61646c 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -34,6 +34,7 @@ TEST_GEN_FILES += khugepaged
 TEST_GEN_FILES += map_fixed_noreplace
 TEST_GEN_FILES += map_hugetlb
 TEST_GEN_FILES += map_populate
+TEST_GEN_FILES += memfd_secret
 TEST_GEN_FILES += mlock-random-test
 TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mremap_dontunmap
@@ -133,7 +134,7 @@ warn_32bit_failure:
 endif
 endif
 
-$(OUTPUT)/mlock-random-test: LDLIBS += -lcap
+$(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap
 
 $(OUTPUT)/gup_test: ../../../../mm/gup_test.h
 
diff --git a/tools/testing/selftests/vm/memfd_secret.c b/tools/testing/selftests/vm/memfd_secret.c
new file mode 100644
index 000000000000..c878c2b841fc
--- /dev/null
+++ b/tools/testing/selftests/vm/memfd_secret.c
@@ -0,0 +1,296 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#define _GNU_SOURCE
+#include <sys/uio.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ptrace.h>
+#include <sys/syscall.h>
+#include <sys/resource.h>
+#include <sys/capability.h>
+
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <errno.h>
+#include <stdio.h>
+
+#include "../kselftest.h"
+
+#define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__)
+#define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__)
+#define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
+
+#ifdef __NR_memfd_secret
+
+#define PATTERN	0x55
+
+static const int prot = PROT_READ | PROT_WRITE;
+static const int mode = MAP_SHARED;
+
+static unsigned long page_size;
+static unsigned long mlock_limit_cur;
+static unsigned long mlock_limit_max;
+
+static int memfd_secret(unsigned long flags)
+{
+	return syscall(__NR_memfd_secret, flags);
+}
+
+static void test_file_apis(int fd)
+{
+	char buf[64];
+
+	if ((read(fd, buf, sizeof(buf)) >= 0) ||
+	    (write(fd, buf, sizeof(buf)) >= 0) ||
+	    (pread(fd, buf, sizeof(buf), 0) >= 0) ||
+	    (pwrite(fd, buf, sizeof(buf), 0) >= 0))
+		fail("unexpected file IO\n");
+	else
+		pass("file IO is blocked as expected\n");
+}
+
+static void test_mlock_limit(int fd)
+{
+	size_t len;
+	char *mem;
+
+	len = mlock_limit_cur;
+	mem = mmap(NULL, len, prot, mode, fd, 0);
+	if (mem == MAP_FAILED) {
+		fail("unable to mmap secret memory\n");
+		return;
+	}
+	munmap(mem, len);
+
+	len = mlock_limit_max * 2;
+	mem = mmap(NULL, len, prot, mode, fd, 0);
+	if (mem != MAP_FAILED) {
+		fail("unexpected mlock limit violation\n");
+		munmap(mem, len);
+		return;
+	}
+
+	pass("mlock limit is respected\n");
+}
+
+static void try_process_vm_read(int fd, int pipefd[2])
+{
+	struct iovec liov, riov;
+	char buf[64];
+	char *mem;
+
+	if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+		fail("pipe read: %s\n", strerror(errno));
+		exit(KSFT_FAIL);
+	}
+
+	liov.iov_len = riov.iov_len = sizeof(buf);
+	liov.iov_base = buf;
+	riov.iov_base = mem;
+
+	if (process_vm_readv(getppid(), &liov, 1, &riov, 1, 0) < 0) {
+		if (errno == ENOSYS)
+			exit(KSFT_SKIP);
+		exit(KSFT_PASS);
+	}
+
+	exit(KSFT_FAIL);
+}
+
+static void try_ptrace(int fd, int pipefd[2])
+{
+	pid_t ppid = getppid();
+	int status;
+	char *mem;
+	long ret;
+
+	if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+		perror("pipe read");
+		exit(KSFT_FAIL);
+	}
+
+	ret = ptrace(PTRACE_ATTACH, ppid, 0, 0);
+	if (ret) {
+		perror("ptrace_attach");
+		exit(KSFT_FAIL);
+	}
+
+	ret = waitpid(ppid, &status, WUNTRACED);
+	if ((ret != ppid) || !(WIFSTOPPED(status))) {
+		fprintf(stderr, "unexpected waitpid result %ld status %x\n",
+			ret, status);
+		exit(KSFT_FAIL);
+	}
+
+	if (ptrace(PTRACE_PEEKDATA, ppid, mem, 0) == -1 && errno)
+		exit(KSFT_PASS);
+
+	exit(KSFT_FAIL);
+}
+
+static void check_child_status(pid_t pid, const char *name)
+{
+	int status;
+
+	waitpid(pid, &status, 0);
+
+	if (WIFEXITED(status) && WEXITSTATUS(status) == KSFT_SKIP) {
+		skip("%s is not supported\n", name);
+		return;
+	}
+
+	if ((WIFEXITED(status) && WEXITSTATUS(status) == KSFT_PASS) ||
+	    WIFSIGNALED(status)) {
+		pass("%s is blocked as expected\n", name);
+		return;
+	}
+
+	fail("%s: unexpected memory access\n", name);
+}
+
+static void test_remote_access(int fd, const char *name,
+			       void (*func)(int fd, int pipefd[2]))
+{
+	int pipefd[2];
+	pid_t pid;
+	char *mem;
+
+	if (pipe(pipefd)) {
+		fail("pipe failed: %s\n", strerror(errno));
+		return;
+	}
+
+	pid = fork();
+	if (pid < 0) {
+		fail("fork failed: %s\n", strerror(errno));
+		return;
+	}
+
+	if (pid == 0) {
+		func(fd, pipefd);
+		return;
+	}
+
+	mem = mmap(NULL, page_size, prot, mode, fd, 0);
+	if (mem == MAP_FAILED) {
+		fail("unable to mmap secret memory\n");
+		return;
+	}
+
+	ftruncate(fd, page_size);
+	memset(mem, PATTERN, page_size);
+
+	if (write(pipefd[1], &mem, sizeof(mem)) < 0) {
+		fail("pipe write: %s\n", strerror(errno));
+		return;
+	}
+
+	check_child_status(pid, name);
+}
+
+static void test_process_vm_read(int fd)
+{
+	test_remote_access(fd, "process_vm_read", try_process_vm_read);
+}
+
+static void test_ptrace(int fd)
+{
+	test_remote_access(fd, "ptrace", try_ptrace);
+}
+
+static int set_cap_limits(rlim_t max)
+{
+	struct rlimit new;
+	cap_t cap = cap_init();
+
+	new.rlim_cur = max;
+	new.rlim_max = max;
+	if (setrlimit(RLIMIT_MEMLOCK, &new)) {
+		perror("setrlimit() returns error");
+		return -1;
+	}
+
+	/* drop capabilities including CAP_IPC_LOCK */
+	if (cap_set_proc(cap)) {
+		perror("cap_set_proc() returns error");
+		return -2;
+	}
+
+	return 0;
+}
+
+static void prepare(void)
+{
+	struct rlimit rlim;
+
+	page_size = sysconf(_SC_PAGE_SIZE);
+	if ((long)page_size < 0)
+		ksft_exit_fail_msg("Failed to get page size: %s\n",
+				   strerror(errno));
+
+	if (getrlimit(RLIMIT_MEMLOCK, &rlim))
+		ksft_exit_fail_msg("Unable to detect mlock limit: %s\n",
+				   strerror(errno));
+
+	mlock_limit_cur = rlim.rlim_cur;
+	mlock_limit_max = rlim.rlim_max;
+
+	printf("page_size: %ld, mlock.soft: %ld, mlock.hard: %ld\n",
+	       page_size, mlock_limit_cur, mlock_limit_max);
+
+	if (page_size > mlock_limit_cur)
+		mlock_limit_cur = page_size;
+	if (page_size > mlock_limit_max)
+		mlock_limit_max = page_size;
+
+	if (set_cap_limits(mlock_limit_max))
+		ksft_exit_fail_msg("Unable to set mlock limit: %s\n",
+				   strerror(errno));
+}
+
+#define NUM_TESTS 4
+
+int main(int argc, char *argv[])
+{
+	int fd;
+
+	prepare();
+
+	ksft_print_header();
+	ksft_set_plan(NUM_TESTS);
+
+	fd = memfd_secret(0);
+	if (fd < 0) {
+		if (errno == ENOSYS)
+			ksft_exit_skip("memfd_secret is not supported\n");
+		else
+			ksft_exit_fail_msg("memfd_secret failed: %s\n",
+					   strerror(errno));
+	}
+
+	test_mlock_limit(fd);
+	test_file_apis(fd);
+	test_process_vm_read(fd);
+	test_ptrace(fd);
+
+	close(fd);
+
+	ksft_exit(!ksft_get_fail_cnt());
+}
+
+#else /* __NR_memfd_secret */
+
+int main(int argc, char *argv[])
+{
+	printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n");
+	return KSFT_SKIP;
+}
+
+#endif /* __NR_memfd_secret */
diff --git a/tools/testing/selftests/vm/run_vmtests.sh b/tools/testing/selftests/vm/run_vmtests.sh
index e953f3cd9664..95a67382f132 100755
--- a/tools/testing/selftests/vm/run_vmtests.sh
+++ b/tools/testing/selftests/vm/run_vmtests.sh
@@ -346,4 +346,21 @@ else
 	exitcode=1
 fi
 
+echo "running memfd_secret test"
+echo "------------------------------------"
+./memfd_secret
+ret_val=$?
+
+if [ $ret_val -eq 0 ]; then
+	echo "[PASS]"
+elif [ $ret_val -eq $ksft_skip ]; then
+	echo "[SKIP]"
+	exitcode=$ksft_skip
+else
+	echo "[FAIL]"
+	exitcode=1
+fi
+
+exit $exitcode
+
 exit $exitcode
-- 
2.28.0



* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-03-03 16:22 ` Mike Rapoport
  (?)
  (?)
@ 2021-05-05 19:08   ` Andrew Morton
  -1 siblings, 0 replies; 84+ messages in thread
From: Andrew Morton @ 2021-05-05 19:08 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Wed,  3 Mar 2021 18:22:00 +0200 Mike Rapoport <rppt@kernel.org> wrote:

> This is an implementation of "secret" mappings backed by a file descriptor.
> 
> The file descriptor backing secret memory mappings is created using a
> dedicated memfd_secret system call. The desired protection mode for the
> memory is configured using the flags parameter of the system call. The mmap()
> of the file descriptor created with memfd_secret() will create a "secret"
> memory mapping. The pages in that mapping will be marked as not present in
> the direct map and will be present only in the page table of the owning mm.
> 
> Although normally Linux userspace mappings are protected from other users,
> such secret mappings are useful for environments where a hostile tenant is
> trying to trick the kernel into giving them access to other tenants'
> mappings.

I continue to struggle with this and I don't recall seeing much
enthusiasm from others.  Perhaps we're all missing the value point and
some additional selling is needed.

Am I correct in understanding that the overall direction here is to
protect keys (and perhaps other things) from kernel bugs?  That if the
kernel was bug-free then there would be no need for this feature?  If
so, that's a bit sad.  But realistic I guess.

Is this intended to protect keys/etc after the attacker has gained the
ability to run arbitrary kernel-mode code?  If so, that seems
optimistic, doesn't it?

I think that a very complete description of the threats which this
feature addresses would be helpful.  

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-05-05 19:08   ` Andrew Morton
  (?)
  (?)
@ 2021-05-06 15:26     ` James Bottomley
  -1 siblings, 0 replies; 84+ messages in thread
From: James Bottomley @ 2021-05-06 15:26 UTC (permalink / raw)
  To: Andrew Morton, Mike Rapoport
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	Kirill A. Shutemov, Matthew Wilcox, Matthew Garrett,
	Mark Rutland, Michal Hocko, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rafael J. Wysocki,
	Rick Edgecombe, Roman Gushchin, Shakeel Butt, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Wed, 2021-05-05 at 12:08 -0700, Andrew Morton wrote:
> On Wed,  3 Mar 2021 18:22:00 +0200 Mike Rapoport <rppt@kernel.org>
> wrote:
> 
> > This is an implementation of "secret" mappings backed by a file
> > descriptor.
> > 
> > The file descriptor backing secret memory mappings is created using
> > a dedicated memfd_secret system call. The desired protection mode
> > for the memory is configured using the flags parameter of the system
> > call. The mmap() of the file descriptor created with memfd_secret()
> > will create a "secret" memory mapping. The pages in that mapping
> > will be marked as not present in the direct map and will be present
> > only in the page table of the owning mm.
> > 
> > Although normally Linux userspace mappings are protected from other
> > users, such secret mappings are useful for environments where a
> > hostile tenant is trying to trick the kernel into giving them
> > access to other tenants' mappings.
> 
> I continue to struggle with this and I don't recall seeing much
> enthusiasm from others.  Perhaps we're all missing the value point
> and some additional selling is needed.
> 
> Am I correct in understanding that the overall direction here is to
> protect keys (and perhaps other things) from kernel bugs?  That if
> the kernel was bug-free then there would be no need for this
> feature?  If so, that's a bit sad.  But realistic I guess.

Secret memory really serves several purposes.  One purpose is the
"increase the level of difficulty of secret exfiltration" you describe,
and, as you say, if the kernel were bug free this wouldn't be necessary.

But also:

   1. Memory safety for user space code.  Once the secret memory is
      allocated, the user can't accidentally pass it into the kernel to be
      transmitted somewhere.
   2. It also serves as a basis for context protection of virtual
      machines, but other groups are working on this aspect, and it is
      broadly similar to the secret exfiltration from the kernel problem.

> 
> Is this intended to protect keys/etc after the attacker has gained
> the ability to run arbitrary kernel-mode code?  If so, that seems
> optimistic, doesn't it?

Not exactly: there are many types of kernel attack, but mostly the
attacker either manages to effect a privilege escalation to root or
gets the ability to run a ROP gadget.  The object of this code is to be
completely secure against root trying to extract the secret (somewhat
similar to the lockdown idea), thus defeating privilege escalation and
to provide "sufficient" protection against ROP gadgets.

The ROP gadget thing needs more explanation: the usual defeatist
approach is to say that once the attacker gains the stack, they can do
anything because they can find enough ROP gadgets to be turing
complete.  However, in the real world, given the kernel stack size
limit and address space layout randomization making finding gadgets
really hard, usually the attacker gets one or at most two gadgets to
string together.  Not having any in-kernel primitive for accessing
secret memory means the one gadget ROP attack can't work.  Since the
only way to access secret memory is to reconstruct the missing mapping
entry, the attacker has to recover the physical page and insert a PTE
pointing to it in the kernel and then retrieve the contents.  That
takes at least three gadgets which is a level of difficulty beyond most
standard attacks.

> I think that a very complete description of the threats which this
> feature addresses would be helpful.  

It's designed to protect against three different threats:

   1. Detection of user secret memory mismanagement
   2. Significant protection against privilege escalation
   3. Enhanced protection (in conjunction with all the other in-kernel
      attack prevention systems) against ROP attacks.

Do you want us to add this to one of the patch descriptions?

James



^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-05-06 15:26     ` James Bottomley
  (?)
  (?)
@ 2021-05-06 16:45       ` David Hildenbrand
  -1 siblings, 0 replies; 84+ messages in thread
From: David Hildenbrand @ 2021-05-06 16:45 UTC (permalink / raw)
  To: jejb, Andrew Morton, Mike Rapoport
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov,
	Matthew Wilcox, Matthew Garrett, Mark Rutland, Michal Hocko,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86

On 06.05.21 17:26, James Bottomley wrote:
> On Wed, 2021-05-05 at 12:08 -0700, Andrew Morton wrote:
>> On Wed,  3 Mar 2021 18:22:00 +0200 Mike Rapoport <rppt@kernel.org>
>> wrote:
>>
>>> This is an implementation of "secret" mappings backed by a file
>>> descriptor.
>>>
>>> The file descriptor backing secret memory mappings is created using
>>> a dedicated memfd_secret system call. The desired protection mode
>>> for the memory is configured using flags parameter of the system
>>> call. The mmap() of the file descriptor created with memfd_secret()
>>> will create a "secret" memory mapping. The pages in that mapping
>>> will be marked as not present in the direct map and will be present
>>> only in the page table of the owning mm.
>>>
>>> Although normally Linux userspace mappings are protected from other
>>> users, such secret mappings are useful for environments where a
>>> hostile tenant is trying to trick the kernel into giving them
>>> access to other tenants mappings.
>>
>> I continue to struggle with this and I don't recall seeing much
>> enthusiasm from others.  Perhaps we're all missing the value point
>> and some additional selling is needed.
>>
>> Am I correct in understanding that the overall direction here is to
>> protect keys (and perhaps other things) from kernel bugs?  That if
>> the kernel was bug-free then there would be no need for this
>> feature?  If so, that's a bit sad.  But realistic I guess.
> 
> Secret memory really serves several purposes. The "increase the level
> of difficulty of secret exfiltration" you describe.  And, as you say,
> if the kernel were bug free this wouldn't be necessary.
> 
> But also:
> 
>     1. Memory safety for user space code.  Once the secret memory is
>        allocated, the user can't accidentally pass it into the kernel to be
>        transmitted somewhere.

That's an interesting point I didn't realize so far.

>     2. It also serves as a basis for context protection of virtual
>        machines, but other groups are working on this aspect, and it is
>        broadly similar to the secret exfiltration from the kernel problem.
> 

I was wondering if this also helps against CPU microcode issues like
Spectre and friends.

>>
>> Is this intended to protect keys/etc after the attacker has gained
>> the ability to run arbitrary kernel-mode code?  If so, that seems
>> optimistic, doesn't it?
> 
> Not exactly: there are many types of kernel attack, but mostly the
> attacker either manages to effect a privilege escalation to root or
> gets the ability to run a ROP gadget.  The object of this code is to be
> completely secure against root trying to extract the secret (somewhat
> similar to the lockdown idea), thus defeating privilege escalation and
> to provide "sufficient" protection against ROP gadget.

What stops "root" from mapping /dev/mem and reading that memory?

IOW, would we want to enforce "CONFIG_STRICT_DEVMEM" with CONFIG_SECRETMEM?


Also, there is still a way to read that memory as root:

1. Having kdump active (which would often be the case, but maybe not
configured to dump user pages)
2. Triggering a kernel crash (easy via /proc as root)
3. Waiting for the reboot after kdump has created the dump, then reading
the content from disk.

Or, as an attacker, load a custom kexec kernel and read memory from 
the new environment. Of course, the latter two are advanced mechanisms, 
but they are possible when root. We might be able to mitigate, for 
example, by zeroing out secretmem pages before booting into the kexec 
kernel, if we care :)

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 84+ messages in thread


* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-05-06 16:45       ` David Hildenbrand
  (?)
  (?)
@ 2021-05-06 17:05         ` James Bottomley
  -1 siblings, 0 replies; 84+ messages in thread
From: James Bottomley @ 2021-05-06 17:05 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton, Mike Rapoport
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov,
	Matthew Wilcox, Matthew Garrett, Mark Rutland, Michal Hocko,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86

On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
> On 06.05.21 17:26, James Bottomley wrote:
> > On Wed, 2021-05-05 at 12:08 -0700, Andrew Morton wrote:
> > > On Wed,  3 Mar 2021 18:22:00 +0200 Mike Rapoport <rppt@kernel.org
> > > >
> > > wrote:
> > > 
> > > > This is an implementation of "secret" mappings backed by a file
> > > > descriptor.
> > > > 
> > > > The file descriptor backing secret memory mappings is created
> > > > using a dedicated memfd_secret system call. The desired
> > > > protection mode for the memory is configured using flags
> > > > parameter of the system call. The mmap() of the file descriptor
> > > > created with memfd_secret() will create a "secret" memory
> > > > mapping. The pages in that mapping will be marked as not
> > > > present in the direct map and will be present only in the page
> > > > table of the owning mm.
> > > > 
> > > > Although normally Linux userspace mappings are protected from
> > > > other users, such secret mappings are useful for environments
> > > > where a hostile tenant is trying to trick the kernel into
> > > > giving them access to other tenants mappings.
> > > 
> > > I continue to struggle with this and I don't recall seeing much
> > > enthusiasm from others.  Perhaps we're all missing the value
> > > point and some additional selling is needed.
> > > 
> > > Am I correct in understanding that the overall direction here is
> > > to protect keys (and perhaps other things) from kernel
> > > bugs?  That if the kernel was bug-free then there would be no
> > > need for this feature?  If so, that's a bit sad.  But realistic I
> > > guess.
> > 
> > Secret memory really serves several purposes. The "increase the
> > level of difficulty of secret exfiltration" you describe.  And, as
> > you say, if the kernel were bug free this wouldn't be necessary.
> > 
> > But also:
> > 
> >     1. Memory safety for user space code.  Once the secret memory is
> >        allocated, the user can't accidentally pass it into the
> > kernel to be
> >        transmitted somewhere.
> 
> That's an interesting point I didn't realize so far.
> 
> >     2. It also serves as a basis for context protection of virtual
> >        machines, but other groups are working on this aspect, and
> > it is
> >        broadly similar to the secret exfiltration from the kernel
> > problem.
> > 
> 
> I was wondering if this also helps against CPU microcode issues like 
> Spectre and friends.

It can for VMs, but not really for the user space secret memory use
cases ... the in-kernel mitigations already present are much more
effective.

> 
> > > Is this intended to protect keys/etc after the attacker has
> > > gained the ability to run arbitrary kernel-mode code?  If so,
> > > that seems optimistic, doesn't it?
> > 
> > Not exactly: there are many types of kernel attack, but mostly the
> > attacker either manages to effect a privilege escalation to root or
> > gets the ability to run a ROP gadget.  The object of this code is
> > to be completely secure against root trying to extract the secret
> > (somewhat similar to the lockdown idea), thus defeating privilege
> > escalation and to provide "sufficient" protection against ROP
> > gadget.
> 
> What stops "root" from mapping /dev/mem and reading that memory?

/dev/mem uses the direct map for the copy at least for read/write, so
it gets a fault in the same way root trying to use ptrace does.  I
think we've protected mmap, but Mike would know that better than I.

> IOW, would we want to enforce "CONFIG_STRICT_DEVMEM" with
> CONFIG_SECRETMEM?

Unless there's a corner case I haven't thought of, I don't think it
adds much.  However, doing a full lockdown on a public system where
users want to use secret memory is best practice I think (except I
think you want it to be the full secure boot lockdown to close all the
root holes).

> Also, there is still a way to read that memory as root:
> 
> 1. Having kdump active (which would often be the case, but maybe not
> configured to dump user pages)
> 2. Triggering a kernel crash (easy via /proc as root)
> 3. Waiting for the reboot after kdump has created the dump, then
> reading the content from disk.

Anything that can leave physical memory intact but boot to a kernel
where the missing direct map entry is restored could theoretically
extract the secret.  However, it's not exactly going to be a stealthy
extraction ...

> Or, as an attacker, load a custom kexec() kernel and read memory
> from the new environment. Of course, the latter two are advanced
> mechanisms, but they are possible when root. We might be able to
> mitigate, for example, by zeroing out secretmem pages before booting
> into the kexec kernel, if we care :)

I think we could handle it by marking the region, yes, and a zero on
shutdown might be useful ... it would prevent all warm reboot type
attacks.

James


^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-05-06 17:05         ` James Bottomley
  0 siblings, 0 replies; 84+ messages in thread
From: James Bottomley @ 2021-05-06 17:05 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton, Mike Rapoport
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov,
	Matthew Wilcox, Matthew Garrett, Mark Rutland, Michal Hocko,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86

On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
> On 06.05.21 17:26, James Bottomley wrote:
> > On Wed, 2021-05-05 at 12:08 -0700, Andrew Morton wrote:
> > > On Wed,  3 Mar 2021 18:22:00 +0200 Mike Rapoport <rppt@kernel.org
> > > >
> > > wrote:
> > > 
> > > > This is an implementation of "secret" mappings backed by a file
> > > > descriptor.
> > > > 
> > > > The file descriptor backing secret memory mappings is created
> > > > using a dedicated memfd_secret system call The desired
> > > > protection mode for the memory is configured using flags
> > > > parameter of the system call. The mmap() of the file descriptor
> > > > created with memfd_secret() will create a "secret" memory
> > > > mapping. The pages in that mapping will be marked as not
> > > > present in the direct map and will be present only in the page
> > > > table of the owning mm.
> > > > 
> > > > Although normally Linux userspace mappings are protected from
> > > > other users, such secret mappings are useful for environments
> > > > where a hostile tenant is trying to trick the kernel into
> > > > giving them access to other tenants mappings.
> > > 
> > > I continue to struggle with this and I don't recall seeing much
> > > enthusiasm from others.  Perhaps we're all missing the value
> > > point and some additional selling is needed.
> > > 
> > > Am I correct in understanding that the overall direction here is
> > > to protect keys (and perhaps other things) from kernel
> > > bugs?  That if the kernel was bug-free then there would be no
> > > need for this feature?  If so, that's a bit sad.  But realistic I
> > > guess.
> > 
> > Secret memory really serves several purposes. The "increase the
> > level of difficulty of secret exfiltration" you describe.  And, as
> > you say, if the kernel were bug free this wouldn't be necessary.
> > 
> > But also:
> > 
> >     1. Memory safety for use space code.  Once the secret memory is
> >        allocated, the user can't accidentally pass it into the
> > kernel to be
> >        transmitted somewhere.
> 
> That's an interesting point I didn't realize so far.
> 
> >     2. It also serves as a basis for context protection of virtual
> >        machines, but other groups are working on this aspect, and
> > it is
> >        broadly similar to the secret exfiltration from the kernel
> > problem.
> > 
> 
> I was wondering if this also helps against CPU microcode issues like 
> spectre and friends.

It can for VMs, but not really for the user space secret memory use
cases ... the in-kernel mitigations already present are much more
effective.

> 
> > > Is this intended to protect keys/etc after the attacker has
> > > gained the ability to run arbitrary kernel-mode code?  If so,
> > > that seems optimistic, doesn't it?
> > 
> > Not exactly: there are many types of kernel attack, but mostly the
> > attacker either manages to effect a privilege escalation to root or
> > gets the ability to run a ROP gadget.  The object of this code is
> > to be completely secure against root trying to extract the secret
> > (some what similar to the lockdown idea), thus defeating privilege
> > escalation and to provide "sufficient" protection against ROP
> > gadget.
> 
> What stops "root" from mapping /dev/mem and reading that memory?

/dev/mem uses the direct map for the copy at least for read/write, so
it gets a fault in the same way root trying to use ptrace does.  I
think we've protected mmap, but Mike would know that better than I.

> IOW, would we want to enforce "CONFIG_STRICT_DEVMEM" with
> CONFIG_SECRETMEM?

Unless there's a corner case I haven't thought of, I don't think it
adds much.  However, doing a full lockdown on a public system where
users want to use secret memory is best practice I think (except I
think you want it to be the full secure boot lockdown to close all the
root holes).

> Also, there is a way to still read that memory when root by
> 
> 1. Having kdump active (which would often be the case, but maybe not
> to dump user pages )
> 2. Triggering a kernel crash (easy via proc as root)
> 3. Waiting for the reboot after kump() created the dump and then
> reading the content from disk.

Anything that can leave physical memory intact but boot to a kernel
where the missing direct map entry is restored could theoretically
extract the secret.  However, it's not exactly going to be a stealthy
extraction ...

> Or, as an attacker, load a custom kexec() kernel and read memory
> from the new environment. Of course, the latter two are advanced
> mechanisms, but they are possible when root. We might be able to
> mitigate, for example, by zeroing out secretmem pages before booting
> into the kexec kernel, if we care :)

I think we could handle it by marking the region, yes, and a zero on
shutdown might be useful ... it would prevent all warm reboot type
attacks.

James


_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-05-06 17:05         ` James Bottomley
  0 siblings, 0 replies; 84+ messages in thread
From: James Bottomley @ 2021-05-06 17:05 UTC (permalink / raw)
  To: David Hildenbrand, Andrew Morton, Mike Rapoport
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov,
	Matthew Wilcox, Matthew Garrett, Mark Rutland, Michal Hocko,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki  <rjw@rjwysocki.net>,
	Rick Edgecombe, Roman Gushchin, Shakeel Butt, Shuah Khan,
	Thomas Gleixner, Tycho Anders en, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
> On 06.05.21 17:26, James Bottomley wrote:
> > On Wed, 2021-05-05 at 12:08 -0700, Andrew Morton wrote:
> > > On Wed,  3 Mar 2021 18:22:00 +0200 Mike Rapoport <rppt@kernel.org
> > > >
> > > wrote:
> > > 
> > > > This is an implementation of "secret" mappings backed by a file
> > > > descriptor.
> > > > 
> > > > The file descriptor backing secret memory mappings is created
> > > > using a dedicated memfd_secret system call The desired
> > > > protection mode for the memory is configured using flags
> > > > parameter of the system call. The mmap() of the file descriptor
> > > > created with memfd_secret() will create a "secret" memory
> > > > mapping. The pages in that mapping will be marked as not
> > > > present in the direct map and will be present only in the page
> > > > table of the owning mm.
> > > > 
> > > > Although normally Linux userspace mappings are protected from
> > > > other users, such secret mappings are useful for environments
> > > > where a hostile tenant is trying to trick the kernel into
> > > > giving them access to other tenants' mappings.
> > > 
> > > I continue to struggle with this and I don't recall seeing much
> > > enthusiasm from others.  Perhaps we're all missing the value
> > > point and some additional selling is needed.
> > > 
> > > Am I correct in understanding that the overall direction here is
> > > to protect keys (and perhaps other things) from kernel
> > > bugs?  That if the kernel was bug-free then there would be no
> > > need for this feature?  If so, that's a bit sad.  But realistic I
> > > guess.
> > 
> > Secret memory really serves several purposes. The "increase the
> > level of difficulty of secret exfiltration" you describe.  And, as
> > you say, if the kernel were bug free this wouldn't be necessary.
> > 
> > But also:
> > 
> >     1. Memory safety for user space code.  Once the secret memory is
> >        allocated, the user can't accidentally pass it into the
> >        kernel to be transmitted somewhere.
> 
> That's an interesting point I didn't realize so far.
> 
> >     2. It also serves as a basis for context protection of virtual
> >        machines, but other groups are working on this aspect, and it is
> >        broadly similar to the secret exfiltration from the kernel problem.
> > 
> 
> I was wondering if this also helps against CPU microcode issues like 
> spectre and friends.

It can for VMs, but not really for the user space secret memory use
cases ... the in-kernel mitigations already present are much more
effective.

> 
> > > Is this intended to protect keys/etc after the attacker has
> > > gained the ability to run arbitrary kernel-mode code?  If so,
> > > that seems optimistic, doesn't it?
> > 
> > Not exactly: there are many types of kernel attack, but mostly the
> > attacker either manages to effect a privilege escalation to root or
> > gets the ability to run a ROP gadget.  The object of this code is
> > to be completely secure against root trying to extract the secret
> > (somewhat similar to the lockdown idea), thus defeating privilege
> > escalation and to provide "sufficient" protection against ROP
> > gadget.
> 
> What stops "root" from mapping /dev/mem and reading that memory?

/dev/mem uses the direct map for the copy at least for read/write, so
it gets a fault in the same way root trying to use ptrace does.  I
think we've protected mmap, but Mike would know that better than I.

> IOW, would we want to enforce "CONFIG_STRICT_DEVMEM" with
> CONFIG_SECRETMEM?

Unless there's a corner case I haven't thought of, I don't think it
adds much.  However, doing a full lockdown on a public system where
users want to use secret memory is best practice I think (except I
think you want it to be the full secure boot lockdown to close all the
root holes).

> Also, there is a way to still read that memory when root by
> 
> 1. Having kdump active (which would often be the case, but maybe not
> to dump user pages)
> 2. Triggering a kernel crash (easy via proc as root)
> 3. Waiting for the reboot after kdump created the dump and then
> reading the content from disk.

Anything that can leave physical memory intact but boot to a kernel
where the missing direct map entry is restored could theoretically
extract the secret.  However, it's not exactly going to be a stealthy
extraction ...

> Or, as an attacker, load a custom kexec() kernel and read memory
> from the new environment. Of course, the latter two are advanced
> mechanisms, but they are possible when root. We might be able to
> mitigate, for example, by zeroing out secretmem pages before booting
> into the kexec kernel, if we care :)

I think we could handle it by marking the region, yes, and a zero on
shutdown might be useful ... it would prevent all warm reboot type
attacks.

James
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-05-06 17:05         ` James Bottomley
  (?)
  (?)
@ 2021-05-06 17:24           ` David Hildenbrand
  -1 siblings, 0 replies; 84+ messages in thread
From: David Hildenbrand @ 2021-05-06 17:24 UTC (permalink / raw)
  To: jejb, Andrew Morton, Mike Rapoport
  Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
	Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
	Elena Reshetova, H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov,
	Matthew Wilcox, Matthew Garrett, Mark Rutland, Michal Hocko,
	Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
	Peter Zijlstra, Rafael J. Wysocki, Rick Edgecombe,
	Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
	Tycho Andersen, Will Deacon, linux-api, linux-arch,
	linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
	linux-kselftest, linux-nvdimm, linux-riscv, x86

>>>> Is this intended to protect keys/etc after the attacker has
>>>> gained the ability to run arbitrary kernel-mode code?  If so,
>>>> that seems optimistic, doesn't it?
>>>
>>> Not exactly: there are many types of kernel attack, but mostly the
>>> attacker either manages to effect a privilege escalation to root or
>>> gets the ability to run a ROP gadget.  The object of this code is
>>> to be completely secure against root trying to extract the secret
>>> (some what similar to the lockdown idea), thus defeating privilege
>>> escalation and to provide "sufficient" protection against ROP
>>> gadget.
>>
>> What stops "root" from mapping /dev/mem and reading that memory?
> 
> /dev/mem uses the direct map for the copy at least for read/write, so
> it gets a fault in the same way root trying to use ptrace does.  I
> think we've protected mmap, but Mike would know that better than I.
> 

I'm more concerned about the mmap case -> remap_pfn_range(). Anybody 
going via the VMA shouldn't see the struct page, at least when 
vm_normal_page() is properly used, so you cannot detect secretmem
memory mapped via /dev/mem reliably. At least that's my theory :)

[...]

>> Also, there is a way to still read that memory when root by
>>
>> 1. Having kdump active (which would often be the case, but maybe not
>> to dump user pages)
>> 2. Triggering a kernel crash (easy via proc as root)
>> 3. Waiting for the reboot after kdump created the dump and then
>> reading the content from disk.
> 
> Anything that can leave physical memory intact but boot to a kernel
> where the missing direct map entry is restored could theoretically
> extract the secret.  However, it's not exactly going to be a stealthy
> extraction ...
> 
>> Or, as an attacker, load a custom kexec() kernel and read memory
>> from the new environment. Of course, the latter two are advanced
>> mechanisms, but they are possible when root. We might be able to
>> mitigate, for example, by zeroing out secretmem pages before booting
>> into the kexec kernel, if we care :)
> 
> I think we could handle it by marking the region, yes, and a zero on
> shutdown might be useful ... it would prevent all warm reboot type
> attacks.

Right. But I guess when you're actually root, you can just write a 
kernel module to extract the information you need (unless we have signed 
modules, so it could be harder/impossible).

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-05-06 15:26     ` James Bottomley
  (?)
  (?)
@ 2021-05-06 17:33       ` Kees Cook
  -1 siblings, 0 replies; 84+ messages in thread
From: Kees Cook @ 2021-05-06 17:33 UTC (permalink / raw)
  To: James Bottomley
  Cc: Andrew Morton, Mike Rapoport, Alexander Viro, Andy Lutomirski,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	Kirill A. Shutemov, Matthew Wilcox, Matthew Garrett,
	Mark Rutland, Michal Hocko, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rafael J. Wysocki,
	Rick Edgecombe, Roman Gushchin, Shakeel Butt, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv

On Thu, May 06, 2021 at 08:26:41AM -0700, James Bottomley wrote:
> On Wed, 2021-05-05 at 12:08 -0700, Andrew Morton wrote:
> > On Wed,  3 Mar 2021 18:22:00 +0200 Mike Rapoport <rppt@kernel.org>
> > wrote:
> > 
> > > This is an implementation of "secret" mappings backed by a file
> > > descriptor.

tl;dr: I like this series, I think there are number of clarifications
needed, though. See below.

> > > 
> > > The file descriptor backing secret memory mappings is created using
> > > a dedicated memfd_secret system call. The desired protection mode
> > > for the memory is configured using the flags parameter of the system
> > > call. The mmap() of the file descriptor created with memfd_secret()
> > > will create a "secret" memory mapping. The pages in that mapping
> > > will be marked as not present in the direct map and will be present
> > > only in the page table of the owning mm.
> > > 
> > > Although normally Linux userspace mappings are protected from other
> > > users, such secret mappings are useful for environments where a
> > > hostile tenant is trying to trick the kernel into giving them
> > > access to other tenants' mappings.
> > 
> > I continue to struggle with this and I don't recall seeing much
> > enthusiasm from others.  Perhaps we're all missing the value point
> > and some additional selling is needed.
> > 
> > Am I correct in understanding that the overall direction here is to
> > protect keys (and perhaps other things) from kernel bugs?  That if
> > the kernel was bug-free then there would be no need for this
> > feature?  If so, that's a bit sad.  But realistic I guess.
> 
> Secret memory really serves several purposes. The "increase the level
> of difficulty of secret exfiltration" you describe.  And, as you say,
> if the kernel were bug free this wouldn't be necessary.
> 
> But also:
> 
>    1. Memory safety for user space code.  Once the secret memory is
>       allocated, the user can't accidentally pass it into the kernel to be
>       transmitted somewhere.

In my first read through, I didn't see how cross-userspace operations
were blocked, but it looks like it's the various gup paths where
{vma,page}_is_secretmem() is called. (Thank you for the self-test! That
helped me follow along.) I think this access pattern should be more
clearly spelled out in the cover letter (i.e. "This will block things
like process_vm_readv()").

I like the results (inaccessible outside the process), though I suspect
this will absolutely melt gdb or other ptracers that try to see into
the memory. Don't get me wrong, I'm a big fan of such concepts[0], but
I see nothing in the cover letter about it (e.g. the effects on "ptrace"
or "gdb" are not mentioned.)

There is also a risk here of this becoming a forensics nightmare:
userspace malware will just download their entire executable region
into a memfd_secret region. Can we, perhaps, disallow mmap/mprotect
with PROT_EXEC when vma_is_secretmem()? The OpenSSL example, for
example, certainly doesn't need PROT_EXEC.

What's happening with O_CLOEXEC in this code? I don't see that mentioned
in the cover letter either. Why is it disallowed? That seems a strange
limitation for something trying to avoid leaking secrets into other
processes.

And just so I'm sure I understand: if a vma_is_secretmem() check is
missed in future mm code evolutions, it seems there is nothing to block
the kernel from accessing the contents directly through copy_from_user()
via the userspace virtual address, yes?

>    2. It also serves as a basis for context protection of virtual
>       machines, but other groups are working on this aspect, and it is
>       broadly similar to the secret exfiltration from the kernel problem.
> 
> > 
> > Is this intended to protect keys/etc after the attacker has gained
> > the ability to run arbitrary kernel-mode code?  If so, that seems
> > optimistic, doesn't it?
> 
> Not exactly: there are many types of kernel attack, but mostly the
> attacker either manages to effect a privilege escalation to root or
> gets the ability to run a ROP gadget.  The object of this code is to be
> completely secure against root trying to extract the secret (somewhat
> similar to the lockdown idea), thus defeating privilege escalation and
> to provide "sufficient" protection against ROP gadgets.
> 
> The ROP gadget thing needs more explanation: the usual defeatist
> approach is to say that once the attacker gains the stack, they can do
> anything because they can find enough ROP gadgets to be turing
> complete.  However, in the real world, given the kernel stack size
> limit and address space layout randomization making finding gadgets
> really hard, usually the attacker gets one or at most two gadgets to
> string together.  Not having any in-kernel primitive for accessing
> secret memory means the one gadget ROP attack can't work.  Since the
> only way to access secret memory is to reconstruct the missing mapping
> entry, the attacker has to recover the physical page and insert a PTE
> pointing to it in the kernel and then retrieve the contents.  That
> takes at least three gadgets which is a level of difficulty beyond most
> standard attacks.

As for protecting against exploited kernel flaws I also see benefits
here. While the kernel is already blocked from directly reading contents
from userspace virtual addresses (i.e. SMAP), this feature does help by
blocking the kernel from directly reading contents via the direct map
alias. (i.e. this feature is a specialized version of XPFO[1], which
tried to do this for ALL user memory.) So in that regard, yes, this has
value in the sense that to perform exfiltration, an attacker would need
a significant level of control over kernel execution or over page table
contents.

Sufficient control over PTE allocation and positioning is possible
without kernel execution control[3], and "only" having an arbitrary
write primitive can lead to direct PTE control. Because of this, it
would be nice to have page tables strongly protected[2] in the kernel.
They remain a viable "data only" attack given a sufficiently "capable"
write flaw.

I would argue that page table entries are a more important asset to
protect than userspace secrets, but given the difficulties with XPFO
and the not-yet-available PKS I can understand starting here. It does,
absolutely, narrow the ways exploits must be written to exfiltrate secret
contents. (We are starting to now constrict[4] many attack methods
into attacking the page table itself, which is good in the sense that
protecting page tables will be a big win, and bad in the sense that
focusing attack research on page tables means we're going to see some
very powerful attacks.)

> > I think that a very complete description of the threats which this
> > feature addresses would be helpful.  
> 
> It's designed to protect against three different threats:
> 
>    1. Detection of user secret memory mismanagement

I would say "cross-process secret userspace memory exposures" (via a
number of common interfaces by blocking it at the GUP level).

>    2. significant protection against privilege escalation

I don't see how this series protects against privilege escalation. (It
protects against exfiltration.) Maybe you mean to include this in the first
bullet point (i.e. "cross-process secret userspace memory exposures,
even in the face of privileged processes")?

>    3. enhanced protection (in conjunction with all the other in-kernel
>       attack prevention systems) against ROP attacks.

Same here, I don't see it preventing ROP, but I see it making "simple"
ROP insufficient to perform exfiltration.

-Kees

[0] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/security/yama/yama_lsm.c?h=v5.12#n410
[1] https://lore.kernel.org/linux-mm/cover.1554248001.git.khalid.aziz@oracle.com/
[2] https://lore.kernel.org/lkml/20210505003032.489164-1-rick.p.edgecombe@intel.com/
[3] https://googleprojectzero.blogspot.com/2015/03/exploiting-dram-rowhammer-bug-to-gain.html
[4] https://git.kernel.org/linus/cf68fffb66d60d96209446bfc4a15291dc5a5d41

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-05-06 17:33       ` Kees Cook
@ 2021-05-06 18:47         ` James Bottomley
  -1 siblings, 0 replies; 84+ messages in thread
From: James Bottomley @ 2021-05-06 18:47 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andrew Morton, Mike Rapoport, Alexander Viro, Andy Lutomirski,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	Kirill A. Shutemov, Matthew Wilcox, Matthew Garrett,
	Mark Rutland, Michal Hocko, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rafael J. Wysocki,
	Rick Edgecombe, Roman Gushchin, Shakeel Butt, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv

On Thu, 2021-05-06 at 10:33 -0700, Kees Cook wrote:
> On Thu, May 06, 2021 at 08:26:41AM -0700, James Bottomley wrote:
[...]
> >    1. Memory safety for user space code.  Once the secret memory is
> >       allocated, the user can't accidentally pass it into the
> > kernel to be
> >       transmitted somewhere.
> 
> In my first read through, I didn't see how cross-userspace operations
> were blocked, but it looks like it's the various gup paths where
> {vma,page}_is_secretmem() is called. (Thank you for the self-test!
> That helped me follow along.) I think this access pattern should be
> more clearly spelled out in the cover letter (i.e. "This will block
> things like process_vm_readv()").

I'm sure Mike can add it.

> I like the results (inaccessible outside the process), though I
> suspect this will absolutely melt gdb or other ptracers that try to
> see into the memory.

I wouldn't say "melt" ... one of the demos we did at FOSDEM was using
gdb/ptrace to extract secrets and then showing it couldn't be done if
secret memory was used.  You can still trace the execution of the
process (and thus you could extract the secret as it's processed in
registers, for instance) but you just can't extract the actual secret
memory contents ... that's a fairly limited and well defined
restriction.

>  Don't get me wrong, I'm a big fan of such concepts[0], but I see
> nothing in the cover letter about it (e.g. the effects on "ptrace" or
> "gdb" are not mentioned.)

Sure, but we thought "secret" covered it.  It wouldn't be secret if
gdb/ptrace from another process could see it.

> There is also a risk here of this becoming a forensics nightmare:
> userspace malware will just download their entire executable region
> into a memfd_secret region. Can we, perhaps, disallow mmap/mprotect
> with PROT_EXEC when vma_is_secretmem()? The OpenSSL example, for
> example, certainly doesn't need PROT_EXEC.

I think disallowing PROT_EXEC is a great enhancement.

> What's happening with O_CLOEXEC in this code? I don't see that
> mentioned in the cover letter either. Why is it disallowed? That
> seems a strange limitation for something trying to avoid leaking
> secrets into other processes.

I actually thought we forced it, so I'll let Mike address this.  I
think allowing it is great, so the secret memory isn't inherited by
children, but I can see use cases where a process would want its child
to inherit the secrets.

> And just so I'm sure I understand: if a vma_is_secretmem() check is
> missed in future mm code evolutions, it seems there is nothing to
> block the kernel from accessing the contents directly through
> copy_from_user() via the userspace virtual address, yes?

Technically there's nothing to block it: copy_from_user goes via the
userspace page tables, which do have access.

> >    2. It also serves as a basis for context protection of virtual
> >       machines, but other groups are working on this aspect, and it
> > is
> >       broadly similar to the secret exfiltration from the kernel
> > problem.
> > 
> > > Is this intended to protect keys/etc after the attacker has
> > > gained the ability to run arbitrary kernel-mode code?  If so,
> > > that seems optimistic, doesn't it?
> > 
> > Not exactly: there are many types of kernel attack, but mostly the
> > attacker either manages to effect a privilege escalation to root or
> > gets the ability to run a ROP gadget.  The object of this code is
> > to be completely secure against root trying to extract the secret
> > (somewhat similar to the lockdown idea), thus defeating privilege
> > escalation and to provide "sufficient" protection against ROP
> > gadgets.
> > 
> > The ROP gadget thing needs more explanation: the usual defeatist
> > approach is to say that once the attacker gains the stack, they can
> > do anything because they can find enough ROP gadgets to be Turing
> > complete.  However, in the real world, given the kernel stack size
> > limit and address space layout randomization making finding gadgets
> > really hard, usually the attacker gets one or at most two gadgets
> > to string together.  Not having any in-kernel primitive for
> > accessing secret memory means the one gadget ROP attack can't
> > work.  Since the only way to access secret memory is to reconstruct
> > the missing mapping entry, the attacker has to recover the physical
> > page and insert a PTE pointing to it in the kernel and then
> > retrieve the contents.  That takes at least three gadgets which is
> > a level of difficulty beyond most standard attacks.
> 
> As for protecting against exploited kernel flaws I also see benefits
> here. While the kernel is already blocked from directly reading
> contents from userspace virtual addresses (i.e. SMAP), this feature
> does help by blocking the kernel from directly reading contents via
> the direct map alias. (i.e. this feature is a specialized version of
> XPFO[1], which tried to do this for ALL user memory.) So in that
> regard, yes, this has value in the sense that to perform
> exfiltration, an attacker would need a significant level of control
> over kernel execution or over page table contents.
> 
> Sufficient control over PTE allocation and positioning is possible
> without kernel execution control[3], and "only" having an arbitrary
> write primitive can lead to direct PTE control. Because of this, it
> would be nice to have page tables strongly protected[2] in the
> kernel. They remain a viable "data only" attack given a sufficiently
> "capable" write flaw.

Right, but this is on the radar of several people and when fixed will
strengthen the value of secret memory.

> I would argue that page table entries are a more important asset to
> protect than userspace secrets, but given the difficulties with XPFO
> and the not-yet-available PKS I can understand starting here. It
> does, absolutely, narrow the ways exploits must be written to
> exfiltrate secret contents. (We are starting to now constrict[4] many
> attack methods into attacking the page table itself, which is good in
> the sense that protecting page tables will be a big win, and bad in
> the sense that focusing attack research on page tables means we're
> going to see some very powerful attacks.)
> 
> > > I think that a very complete description of the threats which
> > > this feature addresses would be helpful.  
> > 
> > It's designed to protect against three different threats:
> > 
> >    1. Detection of user secret memory mismanagement
> 
> I would say "cross-process secret userspace memory exposures" (via a
> number of common interfaces by blocking it at the GUP level).
> 
> >    2. significant protection against privilege escalation
> 
> I don't see how this series protects against privilege escalation.
> (It protects against exfiltration.) Maybe you mean to include this in
> the first bullet point (i.e. "cross-process secret userspace memory
> exposures, even in the face of privileged processes")?

It doesn't prevent privilege escalation from happening in the first
place, but once the escalation has happened it protects against
exfiltration by the newly minted root attacker.

> >    3. enhanced protection (in conjunction with all the other in-
> > kernel
> >       attack prevention systems) against ROP attacks.
> 
> Same here, I don't see it preventing ROP, but I see it making
> "simple" ROP insufficient to perform exfiltration.

Right, that's why I call it "enhanced protection".  With ROP the design
goal is to take exfiltration beyond the simple, and require increasing
complexity in the attack ... the usual security whack-a-mole approach
... in the hope that script kiddies get bored by the level of
difficulty and move on to something easier.

James



^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-05-06 17:05         ` James Bottomley
  (?)
  (?)
@ 2021-05-06 23:16           ` Nick Kossifidis
  -1 siblings, 0 replies; 84+ messages in thread
From: Nick Kossifidis @ 2021-05-06 23:16 UTC (permalink / raw)
  To: jejb
  Cc: David Hildenbrand, Andrew Morton, Mike Rapoport, Alexander Viro,
	Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On 2021-05-06 20:05, James Bottomley wrote:
> On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
>> 
>> Also, there is a way to still read that memory when root by
>> 
>> 1. Having kdump active (which would often be the case, but maybe not
>> to dump user pages)
>> 2. Triggering a kernel crash (easy via proc as root)
>> 3. Waiting for the reboot after kdump() created the dump and then
>> reading the content from disk.
> 
> Anything that can leave physical memory intact but boot to a kernel
> where the missing direct map entry is restored could theoretically
> extract the secret.  However, it's not exactly going to be a stealthy
> extraction ...
> 
>> Or, as an attacker, load a custom kexec() kernel and read memory
>> from the new environment. Of course, the latter two are advanced
>> mechanisms, but they are possible when root. We might be able to
>> mitigate, for example, by zeroing out secretmem pages before booting
>> into the kexec kernel, if we care :)
> 
> I think we could handle it by marking the region, yes, and a zero on
> shutdown might be useful ... it would prevent all warm reboot type
> attacks.
> 

I had similar concerns about recovering secrets with kdump, and 
considered cleaning up keyrings before jumping to the new kernel. The 
problem is that we can't provide guarantees in that case: once the 
kernel has crashed and we are on our way to run the crashkernel, we 
can't be sure we can reliably zero out anything, and the more code we 
add to that path the riskier it gets. However, during reboot/normal 
kexec() we should do some cleanup; it makes sense, and secretmem can 
indeed be useful in that case. Regarding loading custom kexec() 
kernels, we mitigate this with the kexec file-based API, where we can 
verify the signature of the loaded kimage (assuming the system runs a 
kernel provided by a trusted 3rd party and we've maintained a chain of 
trust since booting).


* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-05-06 23:16           ` Nick Kossifidis
  (?)
  (?)
@ 2021-05-07  7:35             ` David Hildenbrand
  -1 siblings, 0 replies; 84+ messages in thread
From: David Hildenbrand @ 2021-05-07  7:35 UTC (permalink / raw)
  To: Nick Kossifidis, jejb
  Cc: Andrew Morton, Mike Rapoport, Alexander Viro, Andy Lutomirski,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On 07.05.21 01:16, Nick Kossifidis wrote:
> On 2021-05-06 20:05, James Bottomley wrote:
>> On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
>>>
>>> Also, there is a way to still read that memory when root by
>>>
>>> 1. Having kdump active (which would often be the case, but maybe not
>>> to dump user pages)
>>> 2. Triggering a kernel crash (easy via proc as root)
>>> 3. Waiting for the reboot after kdump() created the dump and then
>>> reading the content from disk.
>>
>> Anything that can leave physical memory intact but boot to a kernel
>> where the missing direct map entry is restored could theoretically
>> extract the secret.  However, it's not exactly going to be a stealthy
>> extraction ...
>>
>>> Or, as an attacker, load a custom kexec() kernel and read memory
>>> from the new environment. Of course, the latter two are advanced
>>> mechanisms, but they are possible when root. We might be able to
>>> mitigate, for example, by zeroing out secretmem pages before booting
>>> into the kexec kernel, if we care :)
>>
>> I think we could handle it by marking the region, yes, and a zero on
>> shutdown might be useful ... it would prevent all warm reboot type
>> attacks.
>>
> 
> I had similar concerns about recovering secrets with kdump, and
> considered cleaning up keyrings before jumping to the new kernel. The
> problem is we can't provide guarantees in that case, once the kernel has
> crashed and we are on our way to run crashkernel, we can't be sure we
> can reliably zero-out anything, the more code we add to that path the

Well, I think it depends. Assume we do the following:

1) Zero out any secretmem pages when handing them back to the buddy 
allocator (alternative: init_on_free=1) -- if that's not already done; 
I didn't check the code.

2) On kdump(), zero out all allocated secretmem. It'd be easier if we'd 
just allocated from a fixed physical memory area; otherwise we have to 
walk process page tables or use a PFN walker. And zeroing out secretmem 
pages without a direct mapping is a different challenge.

Now, during 2) it can happen that

a) We crash in our clearing code (e.g., something is seriously messed 
up) and fail to start the kdump kernel. That's actually good, instead of 
leaking data we fail hard.

b) We don't find all secretmem pages, for example, because process page 
tables are messed up or something messed up our memmap (if we'd use that 
to identify secretmem pages via a PFN walker somehow).


But for the simple cases (e.g., malicious root tries to crash the kernel 
via /proc/sysrq-trigger) both a) and b) wouldn't apply.

Obviously, an admin who wanted to mitigate right now would disable 
kdump completely, meaning any attempt to load a crashkernel would fail 
and it could not be re-enabled for that kernel (also not via a cmdline 
an attacker could modify to reboot into a system with the option of a 
crashkernel). Disabling kdump in the kernel whenever secretmem pages 
are allocated is one approach, although sub-optimal.

> more risky it gets. However during reboot/normal kexec() we should do
> some cleanup, it makes sense and secretmem can indeed be useful in that
> case. Regarding loading custom kexec() kernels, we mitigate this with
> the kexec file-based API where we can verify the signature of the loaded
> kimage (assuming the system runs a kernel provided by a trusted 3rd
> party and we've maintained a chain of trust since booting).

For example in VMs (like QEMU), physical memory is often not cleared 
during a reboot. So if an attacker manages to load a kernel that can be 
tricked into reading arbitrary physical memory areas, secretmem data 
can leak, I think.

And there might be ways to achieve that just using the cmdline, not 
necessarily loading a different kernel. For example, if you limit the 
kernel footprint ("mem=256M") and relax the strict iomem checks 
("iomem=relaxed"), you can just extract that memory via /dev/mem, if I 
am not wrong.

So as an attacker, modify the (grub) cmdline to "mem=256M 
iomem=relaxed", reboot, and read all memory via /dev/mem. 
Or load a signed kexec kernel with that cmdline and boot into it.

Interesting problem :)

-- 
Thanks,

David / dhildenb



* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-05-07  7:35             ` David Hildenbrand
  0 siblings, 0 replies; 84+ messages in thread
From: David Hildenbrand @ 2021-05-07  7:35 UTC (permalink / raw)
  To: Nick Kossifidis, jejb
  Cc: Andrew Morton, Mike Rapoport, Alexander Viro, Andy Lutomirski,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On 07.05.21 01:16, Nick Kossifidis wrote:
> Στις 2021-05-06 20:05, James Bottomley έγραψε:
>> On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
>>>
>>> Also, there is a way to still read that memory when root by
>>>
>>> 1. Having kdump active (which would often be the case, but maybe not
>>> to dump user pages )
>>> 2. Triggering a kernel crash (easy via proc as root)
>>> 3. Waiting for the reboot after kump() created the dump and then
>>> reading the content from disk.
>>
>> Anything that can leave physical memory intact but boot to a kernel
>> where the missing direct map entry is restored could theoretically
>> extract the secret.  However, it's not exactly going to be a stealthy
>> extraction ...
>>
>>> Or, as an attacker, load a custom kexec() kernel and read memory
>>> from the new environment. Of course, the latter two are advanced
>>> mechanisms, but they are possible when root. We might be able to
>>> mitigate, for example, by zeroing out secretmem pages before booting
>>> into the kexec kernel, if we care :)
>>
>> I think we could handle it by marking the region, yes, and a zero on
>> shutdown might be useful ... it would prevent all warm reboot type
>> attacks.
>>
> 
> I had similar concerns about recovering secrets with kdump, and
> considered cleaning up keyrings before jumping to the new kernel. The
> problem is we can't provide guarantees in that case, once the kernel has
> crashed and we are on our way to run crashkernel, we can't be sure we
> can reliably zero-out anything, the more code we add to that path the

Well, I think it depends. Assume we do the following

1) Zero out any secretmem pages when handing them back to the buddy. 
(alternative: init_on_free=1) -- if not already done, I didn't check the 
code.

2) On kdump(), zero out all allocated secretmem. It'd be easier if we'd 
just allocated from a fixed physical memory area; otherwise we have to 
walk process page tables or use a PFN walker. And zeroing out secretmem 
pages without a direct mapping is a different challenge.

Now, during 2) it can happen that

a) We crash in our clearing code (e.g., something is seriously messed 
up) and fail to start the kdump kernel. That's actually good, instead of 
leaking data we fail hard.

b) We don't find all secretmem pages, for example, because process page 
tables are messed up or something messed up our memmap (if we'd use that 
to identify secretmem pages via a PFN walker somehow)


But for the simple cases (e.g., malicious root tries to crash the kernel 
via /proc/sysrq-trigger) both a) and b) wouldn't apply.

Obviously, if an admin would want to mitigate right now, he would want 
to disable kdump completely, meaning any attempt to load a crashkernel 
would fail and cannot be enabled again for that kernel (also not via 
cmdline an attacker could modify to reboot into a system with the option 
for a crashkernel). Disabling kdump in the kernel when secretmem pages 
are allocated is one approach, although sub-optimal.

> more risky it gets. However during reboot/normal kexec() we should do
> some cleanup, it makes sense and secretmem can indeed be useful in that
> case. Regarding loading custom kexec() kernels, we mitigate this with
> the kexec file-based API where we can verify the signature of the loaded
> kimage (assuming the system runs a kernel provided by a trusted 3rd
> party and we 've maintained a chain of trust since booting).

For example in VMs (like QEMU), we often don't clear physical memory 
during a reboot. So if an attacker manages to load a kernel that you can 
trick into reading random physical memory areas, we can leak secretmem 
data I think.

And there might be ways to achieve that just using the cmdline, not 
necessarily loading a different kernel. For example if you limit the 
kernel footprint ("mem=256M") and disable strict_iomem_checks 
("strict_iomem_checks=relaxed") you can just extract that memory via 
/dev/mem if I am not wrong.

So as an attacker, modify the (grub) cmdline to "mem=256M 
strict_iomem_checks=relaxed", reboot, and read all memory via /dev/mem. 
Or load a signed kexec kernel with that cmdline and boot into it.

Interesting problem :)

-- 
Thanks,

David / dhildenb


_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-05-07  7:35             ` David Hildenbrand
  0 siblings, 0 replies; 84+ messages in thread
From: David Hildenbrand @ 2021-05-07  7:35 UTC (permalink / raw)
  To: Nick Kossifidis, jejb
  Cc: Andrew Morton, Mike Rapoport, Alexander Viro, Andy Lutomirski,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On 07.05.21 01:16, Nick Kossifidis wrote:
> Στις 2021-05-06 20:05, James Bottomley έγραψε:
>> On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
>>>
>>> Also, there is a way to still read that memory when root by
>>>
>>> 1. Having kdump active (which would often be the case, but maybe not
>>> to dump user pages )
>>> 2. Triggering a kernel crash (easy via proc as root)
>>> 3. Waiting for the reboot after kump() created the dump and then
>>> reading the content from disk.
>>
>> Anything that can leave physical memory intact but boot to a kernel
>> where the missing direct map entry is restored could theoretically
>> extract the secret.  However, it's not exactly going to be a stealthy
>> extraction ...
>>
>>> Or, as an attacker, load a custom kexec() kernel and read memory
>>> from the new environment. Of course, the latter two are advanced
>>> mechanisms, but they are possible when root. We might be able to
>>> mitigate, for example, by zeroing out secretmem pages before booting
>>> into the kexec kernel, if we care :)
>>
>> I think we could handle it by marking the region, yes, and a zero on
>> shutdown might be useful ... it would prevent all warm reboot type
>> attacks.
>>
> 
> I had similar concerns about recovering secrets with kdump, and
> considered cleaning up keyrings before jumping to the new kernel. The
> problem is we can't provide guarantees in that case, once the kernel has
> crashed and we are on our way to run crashkernel, we can't be sure we
> can reliably zero-out anything, the more code we add to that path the

Well, I think it depends. Assume we do the following

1) Zero out any secretmem pages when handing them back to the buddy. 
(alternative: init_on_free=1) -- if not already done, I didn't check the 
code.

2) On kdump(), zero out all allocated secretmem. It'd be easier if we'd 
just allocated from a fixed physical memory area; otherwise we have to 
walk process page tables or use a PFN walker. And zeroing out secretmem 
pages without a direct mapping is a different challenge.

Now, during 2) it can happen that

a) We crash in our clearing code (e.g., something is seriously messed 
up) and fail to start the kdump kernel. That's actually good, instead of 
leaking data we fail hard.

b) We don't find all secretmem pages, for example, because process page 
tables are messed up or something messed up our memmap (if we'd use that 
to identify secretmem pages via a PFN walker somehow)


But for the simple cases (e.g., malicious root tries to crash the kernel 
via /proc/sysrq-trigger) both a) and b) wouldn't apply.

Obviously, if an admin would want to mitigate right now, he would want 
to disable kdump completely, meaning any attempt to load a crashkernel 
would fail and cannot be enabled again for that kernel (also not via 
cmdline an attacker could modify to reboot into a system with the option 
for a crashkernel). Disabling kdump in the kernel when secretmem pages 
are allocated is one approach, although sub-optimal.

> more risky it gets. However during reboot/normal kexec() we should do
> some cleanup, it makes sense and secretmem can indeed be useful in that
> case. Regarding loading custom kexec() kernels, we mitigate this with
> the kexec file-based API where we can verify the signature of the loaded
> kimage (assuming the system runs a kernel provided by a trusted 3rd
> party and we 've maintained a chain of trust since booting).

For example in VMs (like QEMU), we often don't clear physical memory 
during a reboot. So if an attacker manages to load a kernel that you can 
trick into reading random physical memory areas, we can leak secretmem 
data I think.

And there might be ways to achieve that just using the cmdline, not 
necessarily loading a different kernel. For example if you limit the 
kernel footprint ("mem=256M") and disable strict_iomem_checks 
("strict_iomem_checks=relaxed") you can just extract that memory via 
/dev/mem if I am not wrong.

So as an attacker, modify the (grub) cmdline to "mem=256M 
strict_iomem_checks=relaxed", reboot, and read all memory via /dev/mem. 
Or load a signed kexec kernel with that cmdline and boot into it.

Interesting problem :)

-- 
Thanks,

David / dhildenb
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-05-07  7:35             ` David Hildenbrand
  0 siblings, 0 replies; 84+ messages in thread
From: David Hildenbrand @ 2021-05-07  7:35 UTC (permalink / raw)
  To: Nick Kossifidis, jejb
  Cc: Andrew Morton, Mike Rapoport, Alexander Viro, Andy Lutomirski,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen, Elena Reshetova,
	H. Peter Anvin, Ingo Molnar, Kirill A. Shutemov, Matthew Wilcox,
	Matthew Garrett, Mark Rutland, Michal Hocko, Mike Rapoport,
	Michael Kerrisk, Palmer Dabbelt, Paul Walmsley, Peter Zijlstra,
	Rafael J. Wysocki, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
	Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
	linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86

On 07.05.21 01:16, Nick Kossifidis wrote:
> Στις 2021-05-06 20:05, James Bottomley έγραψε:
>> On Thu, 2021-05-06 at 18:45 +0200, David Hildenbrand wrote:
>>>
>>> Also, there is a way to still read that memory when root by
>>>
>>> 1. Having kdump active (which would often be the case, but maybe not
>>> to dump user pages )
>>> 2. Triggering a kernel crash (easy via proc as root)
>>> 3. Waiting for the reboot after kump() created the dump and then
>>> reading the content from disk.
>>
>> Anything that can leave physical memory intact but boot to a kernel
>> where the missing direct map entry is restored could theoretically
>> extract the secret.  However, it's not exactly going to be a stealthy
>> extraction ...
>>
>>> Or, as an attacker, load a custom kexec() kernel and read memory
>>> from the new environment. Of course, the latter two are advanced
>>> mechanisms, but they are possible when root. We might be able to
>>> mitigate, for example, by zeroing out secretmem pages before booting
>>> into the kexec kernel, if we care :)
>>
>> I think we could handle it by marking the region, yes, and a zero on
>> shutdown might be useful ... it would prevent all warm reboot type
>> attacks.
>>
> 
> I had similar concerns about recovering secrets with kdump, and
> considered cleaning up keyrings before jumping to the new kernel. The
> problem is we can't provide guarantees in that case, once the kernel has
> crashed and we are on our way to run crashkernel, we can't be sure we
> can reliably zero-out anything, the more code we add to that path the

Well, I think it depends. Assume we do the following

1) Zero out any secretmem pages when handing them back to the buddy. 
(alternative: init_on_free=1) -- if not already done, I didn't check the 
code.

2) On kdump(), zero out all allocated secretmem. It'd be easier if we'd 
just allocated from a fixed physical memory area; otherwise we have to 
walk process page tables or use a PFN walker. And zeroing out secretmem 
pages without a direct mapping is a different challenge.

Now, during 2) it can happen that

a) We crash in our clearing code (e.g., something is seriously messed 
up) and fail to start the kdump kernel. That's actually good, instead of 
leaking data we fail hard.

b) We don't find all secretmem pages, for example, because process page 
tables are messed up or something messed up our memmap (if we'd use that 
to identify secretmem pages via a PFN walker somehow)


But for the simple cases (e.g., malicious root tries to crash the kernel 
via /proc/sysrq-trigger) both a) and b) wouldn't apply.

Obviously, if an admin wanted to mitigate right now, they would disable 
kdump completely, meaning any attempt to load a crashkernel would fail 
and it could not be enabled again for that kernel (also not via a 
cmdline an attacker could modify to reboot into a system with the option 
for a crashkernel). Disabling kdump in the kernel when secretmem pages 
are allocated is one approach, although sub-optimal.

> more risky it gets. However during reboot/normal kexec() we should do
> some cleanup, it makes sense and secretmem can indeed be useful in that
> case. Regarding loading custom kexec() kernels, we mitigate this with
> the kexec file-based API where we can verify the signature of the loaded
> kimage (assuming the system runs a kernel provided by a trusted 3rd
> party and we've maintained a chain of trust since booting).

For example in VMs (like QEMU), we often don't clear physical memory 
during a reboot. So if an attacker manages to load a kernel that you can 
trick into reading random physical memory areas, we can leak secretmem 
data I think.

And there might be ways to achieve that just using the cmdline, not 
necessarily loading a different kernel. For example, if you limit the 
kernel footprint ("mem=256M") and relax the strict /dev/mem checks 
("iomem=relaxed"), you can just extract that memory via /dev/mem if I 
am not wrong.

So as an attacker, modify the (grub) cmdline to "mem=256M 
iomem=relaxed", reboot, and read all memory via /dev/mem. Or load a 
signed kexec kernel with that cmdline and boot into it.

Interesting problem :)

-- 
Thanks,

David / dhildenb


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-05-06 18:47         ` James Bottomley
  (?)
  (?)
@ 2021-05-07 23:57           ` Kees Cook
  -1 siblings, 0 replies; 84+ messages in thread
From: Kees Cook @ 2021-05-07 23:57 UTC (permalink / raw)
  To: James Bottomley
  Cc: Andrew Morton, Mike Rapoport, Alexander Viro, Andy Lutomirski,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	Kirill A. Shutemov, Matthew Wilcox, Matthew Garrett,
	Mark Rutland, Michal Hocko, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rafael J. Wysocki,
	Rick Edgecombe, Roman Gushchin, Shakeel Butt, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv

On Thu, May 06, 2021 at 11:47:47AM -0700, James Bottomley wrote:
> On Thu, 2021-05-06 at 10:33 -0700, Kees Cook wrote:
> > On Thu, May 06, 2021 at 08:26:41AM -0700, James Bottomley wrote:
> [...]
> > > > I think that a very complete description of the threats which
> > > > this feature addresses would be helpful.  
> > > 
> > > It's designed to protect against three different threats:
> > > 
> > >    1. Detection of user secret memory mismanagement
> > 
> > I would say "cross-process secret userspace memory exposures" (via a
> > number of common interfaces by blocking it at the GUP level).
> > 
> > >    2. significant protection against privilege escalation
> > 
> > I don't see how this series protects against privilege escalation.
> > (It protects against exfiltration.) Maybe you mean to include this in
> > the first bullet point (i.e. "cross-process secret userspace memory
> > exposures, even in the face of privileged processes")?
> 
> It doesn't prevent privilege escalation from happening in the first
> place, but once the escalation has happened it protects against
> exfiltration by the newly minted root attacker.

So, after thinking a bit more about this, I don't think there is
protection here against privileged execution. This feature kind of helps
against cross-process read/write attempts, but it doesn't help with
sufficiently privileged (i.e. ptraced) execution, since we can just ask
the process itself to do the reading:

$ gdb ./memfd_secret
...
ready: 0x7ffff7ffb000
Breakpoint 1, ...
(gdb) compile code unsigned long addr = 0x7ffff7ffb000UL; printf("%016lx\n", *((unsigned long *)addr));
55555555555555555

And since process_vm_readv() requires PTRACE_ATTACH, there's very little
difference in effort between process_vm_readv() and the above.

So, what other paths through GUP exist that aren't covered by
PTRACE_ATTACH? And if none, then should this actually just be done by
setting the process undumpable? (This is already what things like gnupg
do.)

So, the user-space side of this doesn't seem to really help. The kernel
side protection is interesting for kernel read/write flaws, though, in
the sense that the process is likely not being attacked from "current",
so a kernel-side attack would need to either walk the page tables and
create new ones, or spawn a new userspace process to do the ptracing.

So, while I like the idea of this stuff, and I see how it provides
certain coverages, I'm curious to learn more about the threat model to
make sure it's actually providing meaningful hurdles to attacks.

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 84+ messages in thread

* Re: [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas
  2021-05-06 18:47         ` James Bottomley
  (?)
  (?)
@ 2021-05-10 18:02           ` Mike Rapoport
  -1 siblings, 0 replies; 84+ messages in thread
From: Mike Rapoport @ 2021-05-10 18:02 UTC (permalink / raw)
  To: James Bottomley
  Cc: Kees Cook, Andrew Morton, Alexander Viro, Andy Lutomirski,
	Arnd Bergmann, Borislav Petkov, Catalin Marinas,
	Christopher Lameter, Dan Williams, Dave Hansen,
	David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
	Kirill A. Shutemov, Matthew Wilcox, Matthew Garrett,
	Mark Rutland, Michal Hocko, Mike Rapoport, Michael Kerrisk,
	Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rafael J. Wysocki,
	Rick Edgecombe, Roman Gushchin, Shakeel Butt, Shuah Khan,
	Thomas Gleixner, Tycho Andersen, Will Deacon, linux-api,
	linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
	linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv

On Thu, May 06, 2021 at 11:47:47AM -0700, James Bottomley wrote:
> On Thu, 2021-05-06 at 10:33 -0700, Kees Cook wrote:
> > On Thu, May 06, 2021 at 08:26:41AM -0700, James Bottomley wrote:
> 
> > What's happening with O_CLOEXEC in this code? I don't see that
> > mentioned in the cover letter either. Why is it disallowed? That
> > seems a strange limitation for something trying to avoid leaking
> > secrets into other processes.
> 
> I actually thought we forced it, so I'll let Mike address this.  I
> think allowing it is great, so the secret memory isn't inherited by
> children, but I can see use cases where a process would want its child
> to inherit the secrets.

We do not enforce O_CLOEXEC, but if the user explicitly requested O_CLOEXEC
it would be passed to get_unused_fd_flags().

-- 
Sincerely yours,
Mike.

^ permalink raw reply	[flat|nested] 84+ messages in thread

end of thread, other threads:[~2021-05-10 18:08 UTC | newest]

Thread overview: 84+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-03 16:22 [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
2021-03-03 16:22 ` [PATCH v18 1/9] mm: add definition of PMD_PAGE_ORDER Mike Rapoport
2021-03-03 16:22 ` [PATCH v18 2/9] mmap: make mlock_future_check() global Mike Rapoport
2021-03-03 16:22 ` [PATCH v18 3/9] riscv/Kconfig: make direct map manipulation options depend on MMU Mike Rapoport
2021-03-03 16:22 ` [PATCH v18 4/9] set_memory: allow set_direct_map_*_noflush() for multiple pages Mike Rapoport
2021-03-03 16:22 ` [PATCH v18 5/9] set_memory: allow querying whether set_direct_map_*() is actually enabled Mike Rapoport
2021-03-03 16:22 ` [PATCH v18 6/9] mm: introduce memfd_secret system call to create "secret" memory areas Mike Rapoport
2021-03-03 16:22 ` [PATCH v18 7/9] PM: hibernate: disable when there are active secretmem users Mike Rapoport
2021-03-03 16:22 ` [PATCH v18 8/9] arch, mm: wire up memfd_secret system call where relevant Mike Rapoport
2021-03-03 16:22 ` [PATCH v18 9/9] secretmem: test: add basic selftest for memfd_secret(2) Mike Rapoport
2021-05-05 19:08 ` [PATCH v18 0/9] mm: introduce memfd_secret system call to create "secret" memory areas Andrew Morton
2021-05-06 15:26   ` James Bottomley
2021-05-06 16:45     ` David Hildenbrand
2021-05-06 17:05       ` James Bottomley
2021-05-06 17:24         ` David Hildenbrand
2021-05-06 23:16         ` Nick Kossifidis
2021-05-07  7:35           ` David Hildenbrand
2021-05-06 17:33     ` Kees Cook
2021-05-06 18:47       ` James Bottomley
2021-05-07 23:57         ` Kees Cook
2021-05-10 18:02         ` Mike Rapoport
