* [PATCH v16 00/11] mm: introduce memfd_secret system call to create "secret" memory areas
From: Mike Rapoport @ 2021-01-21 12:27 UTC
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86
From: Mike Rapoport <rppt@linux.ibm.com>
Hi,
@Andrew, this is based on v5.11-rc4-mmots-2021-01-19-13-54 with the
secretmem patches dropped from there; I can rebase whatever way you prefer.
This is an implementation of "secret" mappings backed by a file descriptor.
The file descriptor backing secret memory mappings is created using a
dedicated memfd_secret system call. The desired protection mode for the
memory is configured using the flags parameter of the system call. The
mmap() of the file descriptor created with memfd_secret() will create a
"secret" memory mapping. The pages in that mapping will be marked as not
present in the direct map and will be present only in the page table of
the owning mm.
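A minimal usage sketch from the userspace side (illustration only, not
taken from the patches; it assumes __NR_memfd_secret is provided by the
installed kernel headers and that a flags value of 0 selects the default
protection mode):

#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	size_t len = 4096;

	/* create the file descriptor backing the secret mapping */
	int fd = syscall(__NR_memfd_secret, 0);
	if (fd < 0)
		return 1;

	/* size the backing file before mapping it */
	if (ftruncate(fd, len) < 0)
		return 1;

	/* pages of this mapping are dropped from the direct map and
	 * are present only in this process's page table */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
		       fd, 0);
	if (p == MAP_FAILED)
		return 1;

	strcpy(p, "top secret");

	munmap(p, len);
	close(fd);
	return 0;
}

The mapping is created MAP_SHARED, matching the file-backed semantics
described above.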
Although normally Linux userspace mappings are protected from other users,
such secret mappings are useful for environments where a hostile tenant is
trying to trick the kernel into giving them access to other tenants'
mappings.
Additionally, in the future the secret mappings may be used as a means to
protect guest memory in a virtual machine host.
For demonstration of secret memory usage we've created a userspace library
https://git.kernel.org/pub/scm/linux/kernel/git/jejb/secret-memory-preloader.git
that does two things: first, it acts as a preloader for OpenSSL and
redirects all OPENSSL_malloc calls to secret memory, so any secret keys
get automatically protected this way; second, it exposes the API to users
who need it. We anticipate that a lot of the use cases would be like the
OpenSSL one: many toolkits that deal with secret keys already have special
handling for the memory to try to give them greater protection, so this
would simply be pluggable into the toolkits without any need for user
application modification.
Hiding secret memory mappings behind an anonymous file allows (ab)use of
the page cache for tracking pages allocated for the "secret" mappings as
well as using address_space_operations for e.g. page migration callbacks.
The anonymous file may also be used implicitly, like hugetlb files, to
implement mmap(MAP_SECRET) and use the secret memory areas with "native" mm
ABIs in the future.
To limit fragmentation of the direct map to splitting only PUD-size pages,
I've added an amortizing cache of PMD-size pages to each file descriptor
that serves as an allocation pool for the secret memory areas.
As the memory allocated by secretmem becomes unmovable, we use CMA to back
large page caches so that the page allocator won't be surprised by a
failing attempt to migrate these pages.
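For concreteness (x86-64 numbers, used here purely as an illustration):
with 4 KiB base pages a PMD leaf maps 2 MiB and a PUD leaf maps 1 GiB, so
allocating secretmem from a per-descriptor pool of PMD-size pages means
that, at worst, a 1 GiB page of the direct map is split into 2 MiB pages;
the direct map is never shattered all the way down to 4 KiB entries on
behalf of secretmem.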
v16:
* Fix memory leak introduced in v15
* Clear the data left by the previous page user before handing the page
to userspace
v15: https://lore.kernel.org/lkml/20210120180612.1058-1-rppt@kernel.org
* Add riscv/Kconfig update to disable set_memory operations for nommu
builds (patch 3)
* Update the code around add_to_page_cache() per Matthew's comments
(patches 6,7)
* Add fixups for build/checkpatch errors discovered by CI systems
v14: https://lore.kernel.org/lkml/20201203062949.5484-1-rppt@kernel.org
* Finally s/mod_node_page_state/mod_lruvec_page_state/
v13: https://lore.kernel.org/lkml/20201201074559.27742-1-rppt@kernel.org
* Added Reviewed-by, thanks Catalin and David
* s/mod_node_page_state/mod_lruvec_page_state/ as Shakeel suggested
v12: https://lore.kernel.org/lkml/20201125092208.12544-1-rppt@kernel.org
* Add detection of whether set_direct_map has actual effect on arm64 and bail
out of CMA allocation for secretmem and the memfd_secret() syscall if pages
would not be removed from the direct map
Older history:
v11: https://lore.kernel.org/lkml/20201124092556.12009-1-rppt@kernel.org
v10: https://lore.kernel.org/lkml/20201123095432.5860-1-rppt@kernel.org
v9: https://lore.kernel.org/lkml/20201117162932.13649-1-rppt@kernel.org
v8: https://lore.kernel.org/lkml/20201110151444.20662-1-rppt@kernel.org
v7: https://lore.kernel.org/lkml/20201026083752.13267-1-rppt@kernel.org
v6: https://lore.kernel.org/lkml/20200924132904.1391-1-rppt@kernel.org
v5: https://lore.kernel.org/lkml/20200916073539.3552-1-rppt@kernel.org
v4: https://lore.kernel.org/lkml/20200818141554.13945-1-rppt@kernel.org
v3: https://lore.kernel.org/lkml/20200804095035.18778-1-rppt@kernel.org
v2: https://lore.kernel.org/lkml/20200727162935.31714-1-rppt@kernel.org
v1: https://lore.kernel.org/lkml/20200720092435.17469-1-rppt@kernel.org
Mike Rapoport (11):
mm: add definition of PMD_PAGE_ORDER
mmap: make mlock_future_check() global
riscv/Kconfig: make direct map manipulation options depend on MMU
set_memory: allow set_direct_map_*_noflush() for multiple pages
set_memory: allow querying whether set_direct_map_*() is actually enabled
mm: introduce memfd_secret system call to create "secret" memory areas
secretmem: use PMD-size pages to amortize direct map fragmentation
secretmem: add memcg accounting
PM: hibernate: disable when there are active secretmem users
arch, mm: wire up memfd_secret system call where relevant
secretmem: test: add basic selftest for memfd_secret(2)
arch/arm64/include/asm/Kbuild | 1 -
arch/arm64/include/asm/cacheflush.h | 6 -
arch/arm64/include/asm/set_memory.h | 17 +
arch/arm64/include/uapi/asm/unistd.h | 1 +
arch/arm64/kernel/machine_kexec.c | 1 +
arch/arm64/mm/mmu.c | 6 +-
arch/arm64/mm/pageattr.c | 23 +-
arch/riscv/Kconfig | 4 +-
arch/riscv/include/asm/set_memory.h | 4 +-
arch/riscv/include/asm/unistd.h | 1 +
arch/riscv/mm/pageattr.c | 8 +-
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/x86/include/asm/set_memory.h | 4 +-
arch/x86/mm/pat/set_memory.c | 8 +-
fs/dax.c | 11 +-
include/linux/pgtable.h | 3 +
include/linux/secretmem.h | 30 ++
include/linux/set_memory.h | 16 +-
include/linux/syscalls.h | 1 +
include/uapi/asm-generic/unistd.h | 6 +-
include/uapi/linux/magic.h | 1 +
kernel/power/hibernate.c | 5 +-
kernel/power/snapshot.c | 4 +-
kernel/sys_ni.c | 2 +
mm/Kconfig | 5 +
mm/Makefile | 1 +
mm/filemap.c | 3 +-
mm/gup.c | 10 +
mm/internal.h | 3 +
mm/mmap.c | 5 +-
mm/secretmem.c | 451 ++++++++++++++++++++++
mm/vmalloc.c | 5 +-
scripts/checksyscalls.sh | 4 +
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 3 +-
tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++
tools/testing/selftests/vm/run_vmtests | 17 +
38 files changed, 917 insertions(+), 52 deletions(-)
create mode 100644 arch/arm64/include/asm/set_memory.h
create mode 100644 include/linux/secretmem.h
create mode 100644 mm/secretmem.c
create mode 100644 tools/testing/selftests/vm/memfd_secret.c
--
2.28.0
* [PATCH v16 01/11] mm: add definition of PMD_PAGE_ORDER
From: Mike Rapoport @ 2021-01-21 12:27 UTC
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
The definition of PMD_PAGE_ORDER denoting the number of base pages in a
second-level leaf page is already used by DAX and may be handy in other
cases as well.
Several architectures already define PMD_ORDER as the size of a
second-level page table, so to avoid a conflict with those definitions use
the name PMD_PAGE_ORDER and update DAX accordingly.
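To make the arithmetic concrete (x86-64 values, given purely as an
illustration): with PAGE_SHIFT == 12 and PMD_SHIFT == 21, PMD_PAGE_ORDER
is 21 - 12 == 9, i.e. a second-level leaf page consists of 1 << 9 == 512
base pages, or 2 MiB.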
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
---
fs/dax.c | 11 ++++-------
include/linux/pgtable.h | 3 +++
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/fs/dax.c b/fs/dax.c
index 26d5dcd2d69e..0f109eb16196 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -49,9 +49,6 @@ static inline unsigned int pe_order(enum page_entry_size pe_size)
#define PG_PMD_COLOUR ((PMD_SIZE >> PAGE_SHIFT) - 1)
#define PG_PMD_NR (PMD_SIZE >> PAGE_SHIFT)
-/* The order of a PMD entry */
-#define PMD_ORDER (PMD_SHIFT - PAGE_SHIFT)
-
static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];
static int __init init_dax_wait_table(void)
@@ -98,7 +95,7 @@ static bool dax_is_locked(void *entry)
static unsigned int dax_entry_order(void *entry)
{
if (xa_to_value(entry) & DAX_PMD)
- return PMD_ORDER;
+ return PMD_PAGE_ORDER;
return 0;
}
@@ -1470,7 +1467,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
{
struct vm_area_struct *vma = vmf->vma;
struct address_space *mapping = vma->vm_file->f_mapping;
- XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_ORDER);
+ XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, PMD_PAGE_ORDER);
unsigned long pmd_addr = vmf->address & PMD_MASK;
bool write = vmf->flags & FAULT_FLAG_WRITE;
bool sync;
@@ -1529,7 +1526,7 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
* entry is already in the array, for instance), it will return
* VM_FAULT_FALLBACK.
*/
- entry = grab_mapping_entry(&xas, mapping, PMD_ORDER);
+ entry = grab_mapping_entry(&xas, mapping, PMD_PAGE_ORDER);
if (xa_is_internal(entry)) {
result = xa_to_internal(entry);
goto fallback;
@@ -1695,7 +1692,7 @@ dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
if (order == 0)
ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
#ifdef CONFIG_FS_DAX_PMD
- else if (order == PMD_ORDER)
+ else if (order == PMD_PAGE_ORDER)
ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
#endif
else
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 8fcdfa52eb4b..ea5c4102c23e 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -28,6 +28,9 @@
#define USER_PGTABLES_CEILING 0UL
#endif
+/* Number of base pages in a second level leaf page */
+#define PMD_PAGE_ORDER (PMD_SHIFT - PAGE_SHIFT)
+
/*
* A page table page can be thought of an array like this: pXd_t[PTRS_PER_PxD]
*
--
2.28.0
* [PATCH v16 02/11] mmap: make mlock_future_check() global
From: Mike Rapoport @ 2021-01-21 12:27 UTC
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
It will be used by the upcoming secret memory implementation.
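As a sketch of the intended use (illustrative only, not part of this
patch; secretmem_vm_ops and the exact flag handling are assumptions), the
secretmem mmap handler can charge the mapping against the mlock limit
before establishing it:

static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
	unsigned long len = vma->vm_end - vma->vm_start;
	int err;

	/* secretmem pages are unevictable, so account them against
	 * RLIMIT_MEMLOCK as if the area were mlock()ed */
	err = mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len);
	if (err)
		return err;

	vma->vm_flags |= VM_LOCKED;
	vma->vm_ops = &secretmem_vm_ops;	/* assumed, defined elsewhere */

	return 0;
}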
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
mm/internal.h | 3 +++
mm/mmap.c | 5 ++---
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 9902648f2206..8e9c660f33ca 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -353,6 +353,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
extern void mlock_vma_page(struct page *page);
extern unsigned int munlock_vma_page(struct page *page);
+extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+ unsigned long len);
+
/*
* Clear the page's PageMlocked(). This can be useful in a situation where
* we want to unconditionally remove a page from the pagecache -- e.g.,
diff --git a/mm/mmap.c b/mm/mmap.c
index 28ef5e29152a..10b9b8b88913 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1346,9 +1346,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
return hint;
}
-static inline int mlock_future_check(struct mm_struct *mm,
- unsigned long flags,
- unsigned long len)
+int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+ unsigned long len)
{
unsigned long locked, lock_limit;
--
2.28.0
{
unsigned long locked, lock_limit;
--
2.28.0
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
^ permalink raw reply related [flat|nested] 318+ messages in thread
* [PATCH v16 02/11] mmap: make mlock_future_check() global
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Mark Rutland, David Hildenbrand, Peter Zijlstra, Catalin Marinas,
Dave Hansen, linux-mm, linux-kselftest, H. Peter Anvin,
Christopher Lameter, Shuah Khan, Thomas Gleixner,
Elena Reshetova, linux-arch, Tycho Andersen, linux-nvdimm,
Will Deacon, x86, Matthew Wilcox, Mike Rapoport, Ingo Molnar,
Michael Kerrisk, Palmer Dabbelt, Arnd Bergmann, James Bottomley,
Hagen Paul Pfeifer, Borislav Petkov, Alexander Viro,
Andy Lutomirski, Paul Walmsley, Kirill A. Shutemov, Dan Williams,
linux-arm-kernel, linux-api, linux-kernel, linux-riscv,
Palmer Dabbelt, linux-fsdevel, Shakeel Butt, Rick Edgecombe,
Roman Gushchin, Mike Rapoport
From: Mike Rapoport <rppt@linux.ibm.com>
It will be used by the upcoming secret memory implementation.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
mm/internal.h | 3 +++
mm/mmap.c | 5 ++---
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 9902648f2206..8e9c660f33ca 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -353,6 +353,9 @@ static inline void munlock_vma_pages_all(struct vm_area_struct *vma)
extern void mlock_vma_page(struct page *page);
extern unsigned int munlock_vma_page(struct page *page);
+extern int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+ unsigned long len);
+
/*
* Clear the page's PageMlocked(). This can be useful in a situation where
* we want to unconditionally remove a page from the pagecache -- e.g.,
diff --git a/mm/mmap.c b/mm/mmap.c
index 28ef5e29152a..10b9b8b88913 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1346,9 +1346,8 @@ static inline unsigned long round_hint_to_min(unsigned long hint)
return hint;
}
-static inline int mlock_future_check(struct mm_struct *mm,
- unsigned long flags,
- unsigned long len)
+int mlock_future_check(struct mm_struct *mm, unsigned long flags,
+ unsigned long len)
{
unsigned long locked, lock_limit;
--
2.28.0
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply related [flat|nested] 318+ messages in thread
* [PATCH v16 03/11] riscv/Kconfig: make direct map manipulation options depend on MMU
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
kernel test robot
From: Mike Rapoport <rppt@linux.ibm.com>
The ARCH_HAS_SET_DIRECT_MAP and ARCH_HAS_SET_MEMORY configuration options have
no meaning when CONFIG_MMU is disabled, and there is no point in enabling them
for the nommu case.
Add an explicit dependency on MMU for these options.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reported-by: kernel test robot <lkp@intel.com>
---
arch/riscv/Kconfig | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index d82303dcc6b6..d35ce19ab1fa 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -25,8 +25,8 @@ config RISCV
select ARCH_HAS_KCOV
select ARCH_HAS_MMIOWB
select ARCH_HAS_PTE_SPECIAL
- select ARCH_HAS_SET_DIRECT_MAP
- select ARCH_HAS_SET_MEMORY
+ select ARCH_HAS_SET_DIRECT_MAP if MMU
+ select ARCH_HAS_SET_MEMORY if MMU
select ARCH_HAS_STRICT_KERNEL_RWX if MMU
select ARCH_OPTIONAL_KERNEL_RWX if ARCH_HAS_STRICT_KERNEL_RWX
select ARCH_OPTIONAL_KERNEL_RWX_DEFAULT
--
2.28.0
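A note on the Kconfig construct used here: "select SYM if COND" makes the
select itself conditional, so SYM is forced on only when COND holds; it does
not add a dependency to SYM's own definition. A minimal illustration with
hypothetical symbols (not from the kernel tree):

	config FOO
		bool "Foo feature"
		# BAR is forced on only in configurations where BAZ=y
		select BAR if BAZ

With this change, ARCH_HAS_SET_DIRECT_MAP and ARCH_HAS_SET_MEMORY are
therefore selected only on MMU builds, matching the existing
ARCH_HAS_STRICT_KERNEL_RWX line.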
* [PATCH v16 04/11] set_memory: allow set_direct_map_*_noflush() for multiple pages
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
The underlying implementations of set_direct_map_invalid_noflush() and
set_direct_map_default_noflush() allow updating multiple contiguous pages
at once.
Add a numpages parameter to set_direct_map_*_noflush() to expose this ability
through these APIs.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com> [arm64]
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/cacheflush.h | 4 ++--
arch/arm64/mm/pageattr.c | 10 ++++++----
arch/riscv/include/asm/set_memory.h | 4 ++--
arch/riscv/mm/pageattr.c | 8 ++++----
arch/x86/include/asm/set_memory.h | 4 ++--
arch/x86/mm/pat/set_memory.c | 8 ++++----
include/linux/set_memory.h | 4 ++--
kernel/power/snapshot.c | 4 ++--
mm/vmalloc.c | 5 +++--
9 files changed, 27 insertions(+), 24 deletions(-)
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index 45217f21f1fe..d3598419a284 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -138,8 +138,8 @@ static __always_inline void __flush_icache_all(void)
int set_memory_valid(unsigned long addr, int numpages, int enable);
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
bool kernel_page_present(struct page *page);
#include <asm-generic/cacheflush.h>
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 92eccaf595c8..b53ef37bf95a 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -148,34 +148,36 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
__pgprot(PTE_VALID));
}
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
{
struct page_change_data data = {
.set_mask = __pgprot(0),
.clear_mask = __pgprot(PTE_VALID),
};
+ unsigned long size = PAGE_SIZE * numpages;
if (!debug_pagealloc_enabled() && !rodata_full)
return 0;
return apply_to_page_range(&init_mm,
(unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ size, change_page_range, &data);
}
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
{
struct page_change_data data = {
.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
.clear_mask = __pgprot(PTE_RDONLY),
};
+ unsigned long size = PAGE_SIZE * numpages;
if (!debug_pagealloc_enabled() && !rodata_full)
return 0;
return apply_to_page_range(&init_mm,
(unsigned long)page_address(page),
- PAGE_SIZE, change_page_range, &data);
+ size, change_page_range, &data);
}
#ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index 211eb8244a45..1aaf2720b8f6 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -26,8 +26,8 @@ static inline void protect_kernel_text_data(void) {};
static inline int set_memory_rw_nx(unsigned long addr, int numpages) { return 0; }
#endif
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
bool kernel_page_present(struct page *page);
#endif /* __ASSEMBLY__ */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 5e49e4b4a4cc..9618181b70be 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -156,11 +156,11 @@ int set_memory_nx(unsigned long addr, int numpages)
return __set_memory(addr, numpages, __pgprot(0), __pgprot(_PAGE_EXEC));
}
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
{
int ret;
unsigned long start = (unsigned long)page_address(page);
- unsigned long end = start + PAGE_SIZE;
+ unsigned long end = start + PAGE_SIZE * numpages;
struct pageattr_masks masks = {
.set_mask = __pgprot(0),
.clear_mask = __pgprot(_PAGE_PRESENT)
@@ -173,11 +173,11 @@ int set_direct_map_invalid_noflush(struct page *page)
return ret;
}
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
{
int ret;
unsigned long start = (unsigned long)page_address(page);
- unsigned long end = start + PAGE_SIZE;
+ unsigned long end = start + PAGE_SIZE * numpages;
struct pageattr_masks masks = {
.set_mask = PAGE_KERNEL,
.clear_mask = __pgprot(0)
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 4352f08bfbb5..6224cb291f6c 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -80,8 +80,8 @@ int set_pages_wb(struct page *page, int numpages);
int set_pages_ro(struct page *page, int numpages);
int set_pages_rw(struct page *page, int numpages);
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
bool kernel_page_present(struct page *page);
extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 16f878c26667..d157fd617c99 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2184,14 +2184,14 @@ static int __set_pages_np(struct page *page, int numpages)
return __change_page_attr_set_clr(&cpa, 0);
}
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
{
- return __set_pages_np(page, 1);
+ return __set_pages_np(page, numpages);
}
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
{
- return __set_pages_p(page, 1);
+ return __set_pages_p(page, numpages);
}
#ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index fe1aa4e54680..c650f82db813 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -15,11 +15,11 @@ static inline int set_memory_nx(unsigned long addr, int numpages) { return 0; }
#endif
#ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
-static inline int set_direct_map_invalid_noflush(struct page *page)
+static inline int set_direct_map_invalid_noflush(struct page *page, int numpages)
{
return 0;
}
-static inline int set_direct_map_default_noflush(struct page *page)
+static inline int set_direct_map_default_noflush(struct page *page, int numpages)
{
return 0;
}
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index d63560e1cf87..64b7aab9aee4 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -86,7 +86,7 @@ static inline void hibernate_restore_unprotect_page(void *page_address) {}
static inline void hibernate_map_page(struct page *page)
{
if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
- int ret = set_direct_map_default_noflush(page);
+ int ret = set_direct_map_default_noflush(page, 1);
if (ret)
pr_warn_once("Failed to remap page\n");
@@ -99,7 +99,7 @@ static inline void hibernate_unmap_page(struct page *page)
{
if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
unsigned long addr = (unsigned long)page_address(page);
- int ret = set_direct_map_invalid_noflush(page);
+ int ret = set_direct_map_invalid_noflush(page, 1);
if (ret)
pr_warn_once("Failed to remap page\n");
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d5f2a84e488a..1da9cd1d0758 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2195,13 +2195,14 @@ struct vm_struct *remove_vm_area(const void *addr)
}
static inline void set_area_direct_map(const struct vm_struct *area,
- int (*set_direct_map)(struct page *page))
+ int (*set_direct_map)(struct page *page,
+ int numpages))
{
int i;
for (i = 0; i < area->nr_pages; i++)
if (page_address(area->pages[i]))
- set_direct_map(area->pages[i]);
+ set_direct_map(area->pages[i], 1);
}
/* Handle removing and resetting vm mappings related to the vm_struct. */
--
2.28.0
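With the numpages parameter in place, callers can update a contiguous run of
direct-map entries with a single call instead of looping page by page. A
hypothetical caller (illustrative only; page and npages are assumed to
describe physically contiguous pages) invalidating a range and then doing the
TLB flush that the _noflush variants deliberately leave to the caller:

	unsigned long addr = (unsigned long)page_address(page);
	int err;

	/* Drop npages contiguous entries from the kernel direct map. */
	err = set_direct_map_invalid_noflush(page, npages);
	if (err)
		return err;

	/* _noflush: TLB invalidation is the caller's responsibility. */
	flush_tlb_kernel_range(addr, addr + npages * PAGE_SIZE);

For a PMD-size chunk this would be npages = PMD_SIZE / PAGE_SIZE, i.e. 512 on
x86-64 with 4K pages.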
* [PATCH v16 05/11] set_memory: allow querying whether set_direct_map_*() is actually enabled
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
On arm64, set_direct_map_*() functions may return 0 without actually
changing the linear map. This behaviour can be controlled using kernel
parameters, so we need a way to determine at runtime whether calls to
set_direct_map_invalid_noflush() and set_direct_map_default_noflush() have
any effect.
Extend the set_memory API with a can_set_direct_map() function that checks
whether calling set_direct_map_*() will actually change the page table,
replace several occurrences of open-coded checks in arm64 with the new
function, and provide a generic stub for architectures that always modify
page tables upon calls to the set_direct_map APIs.
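For illustration, a minimal sketch of how a caller might use the new query
(the helper name here is hypothetical; the calls match the API as changed
by this series):

	#include <linux/errno.h>
	#include <linux/mm_types.h>
	#include <linux/set_memory.h>

	/* Hypothetical helper: bail out when the linear map is immutable. */
	static int hide_page_from_linear_map(struct page *page)
	{
		/* On arm64 this reflects rodata_full/debug_pagealloc state. */
		if (!can_set_direct_map())
			return -EOPNOTSUPP;

		/* Drop one page from the kernel direct map, no TLB flush. */
		return set_direct_map_invalid_noflush(page, 1);
	}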
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/asm/Kbuild | 1 -
arch/arm64/include/asm/cacheflush.h | 6 ------
arch/arm64/include/asm/set_memory.h | 17 +++++++++++++++++
arch/arm64/kernel/machine_kexec.c | 1 +
arch/arm64/mm/mmu.c | 6 +++---
arch/arm64/mm/pageattr.c | 13 +++++++++----
include/linux/set_memory.h | 12 ++++++++++++
7 files changed, 42 insertions(+), 14 deletions(-)
create mode 100644 arch/arm64/include/asm/set_memory.h
diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild
index 07ac208edc89..73aa25843f65 100644
--- a/arch/arm64/include/asm/Kbuild
+++ b/arch/arm64/include/asm/Kbuild
@@ -3,5 +3,4 @@ generic-y += early_ioremap.h
generic-y += mcs_spinlock.h
generic-y += qrwlock.h
generic-y += qspinlock.h
-generic-y += set_memory.h
generic-y += user.h
diff --git a/arch/arm64/include/asm/cacheflush.h b/arch/arm64/include/asm/cacheflush.h
index d3598419a284..b1bdf83a73db 100644
--- a/arch/arm64/include/asm/cacheflush.h
+++ b/arch/arm64/include/asm/cacheflush.h
@@ -136,12 +136,6 @@ static __always_inline void __flush_icache_all(void)
dsb(ish);
}
-int set_memory_valid(unsigned long addr, int numpages, int enable);
-
-int set_direct_map_invalid_noflush(struct page *page, int numpages);
-int set_direct_map_default_noflush(struct page *page, int numpages);
-bool kernel_page_present(struct page *page);
-
#include <asm-generic/cacheflush.h>
#endif /* __ASM_CACHEFLUSH_H */
diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h
new file mode 100644
index 000000000000..ecb6b0f449ab
--- /dev/null
+++ b/arch/arm64/include/asm/set_memory.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _ASM_ARM64_SET_MEMORY_H
+#define _ASM_ARM64_SET_MEMORY_H
+
+#include <asm-generic/set_memory.h>
+
+bool can_set_direct_map(void);
+#define can_set_direct_map can_set_direct_map
+
+int set_memory_valid(unsigned long addr, int numpages, int enable);
+
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
+bool kernel_page_present(struct page *page);
+
+#endif /* _ASM_ARM64_SET_MEMORY_H */
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index a0b144cfaea7..0cbc50c4fa5a 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -11,6 +11,7 @@
#include <linux/kernel.h>
#include <linux/kexec.h>
#include <linux/page-flags.h>
+#include <linux/set_memory.h>
#include <linux/smp.h>
#include <asm/cacheflush.h>
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 30c6dd02e706..79604049fff5 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -22,6 +22,7 @@
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>
+#include <linux/set_memory.h>
#include <asm/barrier.h>
#include <asm/cputype.h>
@@ -492,7 +493,7 @@ static void __init map_mem(pgd_t *pgdp)
int flags = 0;
u64 i;
- if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
+ if (can_set_direct_map() || crash_mem_map)
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
/*
@@ -1468,8 +1469,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
* KFENCE requires linear map to be mapped at page granularity, so that
* it is possible to protect/unprotect single pages in the KFENCE pool.
*/
- if (rodata_full || debug_pagealloc_enabled() ||
- IS_ENABLED(CONFIG_KFENCE))
+ if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index b53ef37bf95a..d505172265b0 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -19,6 +19,11 @@ struct page_change_data {
bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED);
+bool can_set_direct_map(void)
+{
+ return rodata_full || debug_pagealloc_enabled();
+}
+
static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
{
struct page_change_data *cdata = data;
@@ -156,7 +161,7 @@ int set_direct_map_invalid_noflush(struct page *page, int numpages)
};
unsigned long size = PAGE_SIZE * numpages;
- if (!debug_pagealloc_enabled() && !rodata_full)
+ if (!can_set_direct_map())
return 0;
return apply_to_page_range(&init_mm,
@@ -172,7 +177,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
};
unsigned long size = PAGE_SIZE * numpages;
- if (!debug_pagealloc_enabled() && !rodata_full)
+ if (!can_set_direct_map())
return 0;
return apply_to_page_range(&init_mm,
@@ -183,7 +188,7 @@ int set_direct_map_default_noflush(struct page *page, int numpages)
#ifdef CONFIG_DEBUG_PAGEALLOC
void __kernel_map_pages(struct page *page, int numpages, int enable)
{
- if (!debug_pagealloc_enabled() && !rodata_full)
+ if (!can_set_direct_map())
return;
set_memory_valid((unsigned long)page_address(page), numpages, enable);
@@ -208,7 +213,7 @@ bool kernel_page_present(struct page *page)
pte_t *ptep;
unsigned long addr = (unsigned long)page_address(page);
- if (!debug_pagealloc_enabled() && !rodata_full)
+ if (!can_set_direct_map())
return true;
pgdp = pgd_offset_k(addr);
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index c650f82db813..7b4b6626032d 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -28,7 +28,19 @@ static inline bool kernel_page_present(struct page *page)
{
return true;
}
+#else /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
+/*
+ * Some architectures, e.g. ARM64, can disable direct map modifications at
+ * boot time. Let them override this query.
+ */
+#ifndef can_set_direct_map
+static inline bool can_set_direct_map(void)
+{
+ return true;
+}
+#define can_set_direct_map can_set_direct_map
#endif
+#endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
#ifndef set_mce_nospec
static inline int set_mce_nospec(unsigned long pfn, bool unmap)
--
2.28.0
* [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
Introduce "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped not
only to other processes but in the kernel page tables as well.
The user will create a file descriptor using the memfd_secret() system
call. The memory areas created by mmap() calls from this file descriptor
will be unmapped from the kernel direct map and they will be only mapped in
the page table of the owning mm.
The secret memory remains accessible in the process context using uaccess
primitives, but it is not accessible using direct/linear map addresses.
Functions in the follow_page()/get_user_page() family will refuse to return
a page that belongs to the secret memory area.
A page that was part of the secret memory area is cleared when it is
freed.
The following example demonstrates creation of a secret mapping (error
handling is omitted):
fd = memfd_secret(0);
ftruncate(fd, MAP_SIZE);
ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
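For completeness, a fuller (still illustrative) sketch: it assumes the call
goes through syscall(2), since no libc wrapper exists yet, and that the
syscall number wired up later in this series is exposed as
__NR_memfd_secret:

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#define MAP_SIZE 4096UL

	int main(void)
	{
		/* __NR_memfd_secret is assumed to be wired up by this series */
		int fd = syscall(__NR_memfd_secret, 0);
		if (fd < 0) {
			perror("memfd_secret");
			return 1;
		}
		if (ftruncate(fd, MAP_SIZE) < 0) {
			perror("ftruncate");
			return 1;
		}
		char *ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, 0);
		if (ptr == MAP_FAILED) {
			perror("mmap");
			return 1;
		}
		/* usable like ordinary memory, but absent from the direct map */
		strcpy(ptr, "secret");
		munmap(ptr, MAP_SIZE);
		close(fd);
		return 0;
	}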
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
include/linux/secretmem.h | 24 ++++
include/uapi/linux/magic.h | 1 +
kernel/sys_ni.c | 2 +
mm/Kconfig | 3 +
mm/Makefile | 1 +
mm/gup.c | 10 ++
mm/secretmem.c | 278 +++++++++++++++++++++++++++++++++++++
7 files changed, 319 insertions(+)
create mode 100644 include/linux/secretmem.h
create mode 100644 mm/secretmem.c
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
new file mode 100644
index 000000000000..70e7db9f94fe
--- /dev/null
+++ b/include/linux/secretmem.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_SECRETMEM_H
+#define _LINUX_SECRETMEM_H
+
+#ifdef CONFIG_SECRETMEM
+
+bool vma_is_secretmem(struct vm_area_struct *vma);
+bool page_is_secretmem(struct page *page);
+
+#else
+
+static inline bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+ return false;
+}
+
+static inline bool page_is_secretmem(struct page *page)
+{
+ return false;
+}
+
+#endif /* CONFIG_SECRETMEM */
+
+#endif /* _LINUX_SECRETMEM_H */
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index f3956fc11de6..35687dcb1a42 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -97,5 +97,6 @@
#define DEVMEM_MAGIC 0x454d444d /* "DMEM" */
#define Z3FOLD_MAGIC 0x33
#define PPC_CMM_MAGIC 0xc7571590
+#define SECRETMEM_MAGIC 0x5345434d /* "SECM" */
#endif /* __LINUX_MAGIC_H__ */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 769ad6225ab1..869aa6b5bf34 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -355,6 +355,8 @@ COND_SYSCALL(pkey_mprotect);
COND_SYSCALL(pkey_alloc);
COND_SYSCALL(pkey_free);
+/* memfd_secret */
+COND_SYSCALL(memfd_secret);
/*
* Architecture specific weak syscall entries.
diff --git a/mm/Kconfig b/mm/Kconfig
index 24c045b24b95..5f8243442f66 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -872,4 +872,7 @@ config MAPPING_DIRTY_HELPERS
config KMAP_LOCAL
bool
+config SECRETMEM
+ def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+
endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 72227b24a616..b2a564eec27f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -120,3 +120,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_SECRETMEM) += secretmem.o
diff --git a/mm/gup.c b/mm/gup.c
index e4c224cd9661..3e086b073624 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -10,6 +10,7 @@
#include <linux/rmap.h>
#include <linux/swap.h>
#include <linux/swapops.h>
+#include <linux/secretmem.h>
#include <linux/sched/signal.h>
#include <linux/rwsem.h>
@@ -759,6 +760,9 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
struct follow_page_context ctx = { NULL };
struct page *page;
+ if (vma_is_secretmem(vma))
+ return NULL;
+
page = follow_page_mask(vma, address, foll_flags, &ctx);
if (ctx.pgmap)
put_dev_pagemap(ctx.pgmap);
@@ -892,6 +896,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
return -EOPNOTSUPP;
+ if (vma_is_secretmem(vma))
+ return -EFAULT;
+
if (write) {
if (!(vm_flags & VM_WRITE)) {
if (!(gup_flags & FOLL_FORCE))
@@ -2031,6 +2038,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
page = pte_page(pte);
+ if (page_is_secretmem(page))
+ goto pte_unmap;
+
head = try_grab_compound_head(page, 1, flags);
if (!head)
goto pte_unmap;
diff --git a/mm/secretmem.c b/mm/secretmem.c
new file mode 100644
index 000000000000..904351d12c33
--- /dev/null
+++ b/mm/secretmem.c
@@ -0,0 +1,278 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/mount.h>
+#include <linux/memfd.h>
+#include <linux/bitops.h>
+#include <linux/printk.h>
+#include <linux/pagemap.h>
+#include <linux/syscalls.h>
+#include <linux/pseudo_fs.h>
+#include <linux/secretmem.h>
+#include <linux/set_memory.h>
+#include <linux/sched/signal.h>
+
+#include <uapi/linux/magic.h>
+
+#include <asm/tlbflush.h>
+
+#include "internal.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "secretmem: " fmt
+
+/*
+ * Define mode and flag masks to allow validation of the system call
+ * parameters.
+ */
+#define SECRETMEM_MODE_MASK (0x0)
+#define SECRETMEM_FLAGS_MASK SECRETMEM_MODE_MASK
+
+struct secretmem_ctx {
+ unsigned int mode;
+};
+
+static struct page *secretmem_alloc_page(gfp_t gfp)
+{
+ /*
+ * FIXME: use a cache of large pages to reduce the direct map
+ * fragmentation
+ */
+ return alloc_page(gfp | __GFP_ZERO);
+}
+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+ struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+ struct inode *inode = file_inode(vmf->vma->vm_file);
+ pgoff_t offset = vmf->pgoff;
+ unsigned long addr;
+ struct page *page;
+ int err;
+
+ if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+ return vmf_error(-EINVAL);
+
+retry:
+ page = find_lock_page(mapping, offset);
+ if (!page) {
+ page = secretmem_alloc_page(vmf->gfp_mask);
+ if (!page)
+ return VM_FAULT_OOM;
+
+ err = set_direct_map_invalid_noflush(page, 1);
+ if (err) {
+ put_page(page);
+ return vmf_error(err);
+ }
+
+ __SetPageUptodate(page);
+ err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
+ if (unlikely(err)) {
+ put_page(page);
+ if (err == -EEXIST)
+ goto retry;
+ goto err_restore_direct_map;
+ }
+
+ addr = (unsigned long)page_address(page);
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+ }
+
+ vmf->page = page;
+ return VM_FAULT_LOCKED;
+
+err_restore_direct_map:
+ /*
+ * If a split of a large page was required, it already happened
+ * when we marked the page invalid, which guarantees that this
+ * call won't fail.
+ */
+ set_direct_map_default_noflush(page, 1);
+ return vmf_error(err);
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+ .fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ unsigned long len = vma->vm_end - vma->vm_start;
+
+ if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+ return -EINVAL;
+
+ if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+ return -EAGAIN;
+
+ vma->vm_ops = &secretmem_vm_ops;
+ vma->vm_flags |= VM_LOCKED;
+
+ return 0;
+}
+
+bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+ return vma->vm_ops == &secretmem_vm_ops;
+}
+
+static const struct file_operations secretmem_fops = {
+ .mmap = secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+ return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+ struct page *newpage, struct page *page,
+ enum migrate_mode mode)
+{
+ return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+ set_direct_map_default_noflush(page, 1);
+ clear_highpage(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+ .freepage = secretmem_freepage,
+ .migratepage = secretmem_migratepage,
+ .isolate_page = secretmem_isolate_page,
+};
+
+bool page_is_secretmem(struct page *page)
+{
+ struct address_space *mapping = page_mapping(page);
+
+ if (!mapping)
+ return false;
+
+ return mapping->a_ops == &secretmem_aops;
+}
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+ struct file *file = ERR_PTR(-ENOMEM);
+ struct secretmem_ctx *ctx;
+ struct inode *inode;
+
+ inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx)
+ goto err_free_inode;
+
+ file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+ O_RDWR, &secretmem_fops);
+ if (IS_ERR(file))
+ goto err_free_ctx;
+
+ mapping_set_unevictable(inode->i_mapping);
+
+ inode->i_mapping->private_data = ctx;
+ inode->i_mapping->a_ops = &secretmem_aops;
+
+ /* pretend we are a normal file with zero size */
+ inode->i_mode |= S_IFREG;
+ inode->i_size = 0;
+
+ file->private_data = ctx;
+
+ ctx->mode = flags & SECRETMEM_MODE_MASK;
+
+ return file;
+
+err_free_ctx:
+ kfree(ctx);
+err_free_inode:
+ iput(inode);
+ return file;
+}
+
+SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
+{
+ struct file *file;
+ int fd, err;
+
+ /* make sure local flags do not conflict with global fcntl.h */
+ BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+ if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+ return -EINVAL;
+
+ fd = get_unused_fd_flags(flags & O_CLOEXEC);
+ if (fd < 0)
+ return fd;
+
+ file = secretmem_file_create(flags);
+ if (IS_ERR(file)) {
+ err = PTR_ERR(file);
+ goto err_put_fd;
+ }
+
+ file->f_flags |= O_LARGEFILE;
+
+ fd_install(fd, file);
+ return fd;
+
+err_put_fd:
+ put_unused_fd(fd);
+ return err;
+}
+
+static void secretmem_evict_inode(struct inode *inode)
+{
+ struct secretmem_ctx *ctx = inode->i_private;
+
+ truncate_inode_pages_final(&inode->i_data);
+ clear_inode(inode);
+ kfree(ctx);
+}
+
+static const struct super_operations secretmem_super_ops = {
+ .evict_inode = secretmem_evict_inode,
+};
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+ struct pseudo_fs_context *ctx = init_pseudo(fc, SECRETMEM_MAGIC);
+
+ if (!ctx)
+ return -ENOMEM;
+ ctx->ops = &secretmem_super_ops;
+
+ return 0;
+}
+
+static struct file_system_type secretmem_fs = {
+ .name = "secretmem",
+ .init_fs_context = secretmem_init_fs_context,
+ .kill_sb = kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+ int ret = 0;
+
+ secretmem_mnt = kern_mount(&secretmem_fs);
+ if (IS_ERR(secretmem_mnt))
+ ret = PTR_ERR(secretmem_mnt);
+
+ return ret;
+}
+fs_initcall(secretmem_init);
--
2.28.0
* [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
Introduce "memfd_secret" system call with the ability to create memory
areas visible only in the context of the owning process and not mapped not
only to other processes but in the kernel page tables as well.
The user will create a file descriptor using the memfd_secret() system
call. The memory areas created by mmap() calls from this file descriptor
will be unmapped from the kernel direct map and they will be only mapped in
the page table of the owning mm.
The secret memory remains accessible in the process context using uaccess
primitives, but it is not accessible using direct/linear map addresses.
Functions in the follow_page()/get_user_page() family will refuse to return
a page that belongs to the secret memory area.
A page that was a part of the secret memory area is cleared when it is
freed.
The following example demonstrates creation of a secret mapping (error
handling is omitted):
fd = memfd_secret(0);
ftruncate(fd, MAP_SIZE);
ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
include/linux/secretmem.h | 24 ++++
include/uapi/linux/magic.h | 1 +
kernel/sys_ni.c | 2 +
mm/Kconfig | 3 +
mm/Makefile | 1 +
mm/gup.c | 10 ++
mm/secretmem.c | 278 +++++++++++++++++++++++++++++++++++++
7 files changed, 319 insertions(+)
create mode 100644 include/linux/secretmem.h
create mode 100644 mm/secretmem.c
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
new file mode 100644
index 000000000000..70e7db9f94fe
--- /dev/null
+++ b/include/linux/secretmem.h
@@ -0,0 +1,24 @@
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+#ifndef _LINUX_SECRETMEM_H
+#define _LINUX_SECRETMEM_H
+
+#ifdef CONFIG_SECRETMEM
+
+bool vma_is_secretmem(struct vm_area_struct *vma);
+bool page_is_secretmem(struct page *page);
+
+#else
+
+static inline bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+ return false;
+}
+
+static inline bool page_is_secretmem(struct page *page)
+{
+ return false;
+}
+
+#endif /* CONFIG_SECRETMEM */
+
+#endif /* _LINUX_SECRETMEM_H */
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index f3956fc11de6..35687dcb1a42 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -97,5 +97,6 @@
#define DEVMEM_MAGIC 0x454d444d /* "DMEM" */
#define Z3FOLD_MAGIC 0x33
#define PPC_CMM_MAGIC 0xc7571590
+#define SECRETMEM_MAGIC 0x5345434d /* "SECM" */
#endif /* __LINUX_MAGIC_H__ */
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 769ad6225ab1..869aa6b5bf34 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -355,6 +355,8 @@ COND_SYSCALL(pkey_mprotect);
COND_SYSCALL(pkey_alloc);
COND_SYSCALL(pkey_free);
+/* memfd_secret */
+COND_SYSCALL(memfd_secret);
/*
* Architecture specific weak syscall entries.
diff --git a/mm/Kconfig b/mm/Kconfig
index 24c045b24b95..5f8243442f66 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -872,4 +872,7 @@ config MAPPING_DIRTY_HELPERS
config KMAP_LOCAL
bool
+config SECRETMEM
+ def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+
endmenu
diff --git a/mm/Makefile b/mm/Makefile
index 72227b24a616..b2a564eec27f 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -120,3 +120,4 @@ obj-$(CONFIG_MEMFD_CREATE) += memfd.o
obj-$(CONFIG_MAPPING_DIRTY_HELPERS) += mapping_dirty_helpers.o
obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
obj-$(CONFIG_PAGE_REPORTING) += page_reporting.o
+obj-$(CONFIG_SECRETMEM) += secretmem.o
diff --git a/mm/gup.c b/mm/gup.c
index e4c224cd9661..3e086b073624 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -10,6 +10,7 @@
#include <linux/rmap.h>
#include <linux/swap.h>
#include <linux/swapops.h>
+#include <linux/secretmem.h>
#include <linux/sched/signal.h>
#include <linux/rwsem.h>
@@ -759,6 +760,9 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
struct follow_page_context ctx = { NULL };
struct page *page;
+ if (vma_is_secretmem(vma))
+ return NULL;
+
page = follow_page_mask(vma, address, foll_flags, &ctx);
if (ctx.pgmap)
put_dev_pagemap(ctx.pgmap);
@@ -892,6 +896,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
return -EOPNOTSUPP;
+ if (vma_is_secretmem(vma))
+ return -EFAULT;
+
if (write) {
if (!(vm_flags & VM_WRITE)) {
if (!(gup_flags & FOLL_FORCE))
@@ -2031,6 +2038,9 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
page = pte_page(pte);
+ if (page_is_secretmem(page))
+ goto pte_unmap;
+
head = try_grab_compound_head(page, 1, flags);
if (!head)
goto pte_unmap;
diff --git a/mm/secretmem.c b/mm/secretmem.c
new file mode 100644
index 000000000000..904351d12c33
--- /dev/null
+++ b/mm/secretmem.c
@@ -0,0 +1,278 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/mount.h>
+#include <linux/memfd.h>
+#include <linux/bitops.h>
+#include <linux/printk.h>
+#include <linux/pagemap.h>
+#include <linux/syscalls.h>
+#include <linux/pseudo_fs.h>
+#include <linux/secretmem.h>
+#include <linux/set_memory.h>
+#include <linux/sched/signal.h>
+
+#include <uapi/linux/magic.h>
+
+#include <asm/tlbflush.h>
+
+#include "internal.h"
+
+#undef pr_fmt
+#define pr_fmt(fmt) "secretmem: " fmt
+
+/*
+ * Define mode and flag masks to allow validation of the system call
+ * parameters.
+ */
+#define SECRETMEM_MODE_MASK (0x0)
+#define SECRETMEM_FLAGS_MASK SECRETMEM_MODE_MASK
+
+struct secretmem_ctx {
+ unsigned int mode;
+};
+
+static struct page *secretmem_alloc_page(gfp_t gfp)
+{
+ /*
+ * FIXME: use a cache of large pages to reduce the direct map
+ * fragmentation
+ */
+ return alloc_page(gfp | __GFP_ZERO);
+}
+
+static vm_fault_t secretmem_fault(struct vm_fault *vmf)
+{
+ struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+ struct inode *inode = file_inode(vmf->vma->vm_file);
+ pgoff_t offset = vmf->pgoff;
+ unsigned long addr;
+ struct page *page;
+ int err;
+
+ if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+ return vmf_error(-EINVAL);
+
+retry:
+ page = find_lock_page(mapping, offset);
+ if (!page) {
+ page = secretmem_alloc_page(vmf->gfp_mask);
+ if (!page)
+ return VM_FAULT_OOM;
+
+ err = set_direct_map_invalid_noflush(page, 1);
+ if (err) {
+ put_page(page);
+ return vmf_error(err);
+ }
+
+ __SetPageUptodate(page);
+ err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
+ if (unlikely(err)) {
+ put_page(page);
+ if (err == -EEXIST)
+ goto retry;
+ goto err_restore_direct_map;
+ }
+
+ addr = (unsigned long)page_address(page);
+ flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+ }
+
+ vmf->page = page;
+ return VM_FAULT_LOCKED;
+
+err_restore_direct_map:
+ /*
+ * If a split of a large page was required, it already happened
+ * when we marked the page invalid, which guarantees that this
+ * call won't fail
+ */
+ set_direct_map_default_noflush(page, 1);
+ return vmf_error(err);
+}
+
+static const struct vm_operations_struct secretmem_vm_ops = {
+ .fault = secretmem_fault,
+};
+
+static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ unsigned long len = vma->vm_end - vma->vm_start;
+
+ if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
+ return -EINVAL;
+
+ if (mlock_future_check(vma->vm_mm, vma->vm_flags | VM_LOCKED, len))
+ return -EAGAIN;
+
+ vma->vm_ops = &secretmem_vm_ops;
+ vma->vm_flags |= VM_LOCKED;
+
+ return 0;
+}
+
+bool vma_is_secretmem(struct vm_area_struct *vma)
+{
+ return vma->vm_ops == &secretmem_vm_ops;
+}
+
+static const struct file_operations secretmem_fops = {
+ .mmap = secretmem_mmap,
+};
+
+static bool secretmem_isolate_page(struct page *page, isolate_mode_t mode)
+{
+ return false;
+}
+
+static int secretmem_migratepage(struct address_space *mapping,
+ struct page *newpage, struct page *page,
+ enum migrate_mode mode)
+{
+ return -EBUSY;
+}
+
+static void secretmem_freepage(struct page *page)
+{
+ set_direct_map_default_noflush(page, 1);
+ clear_highpage(page);
+}
+
+static const struct address_space_operations secretmem_aops = {
+ .freepage = secretmem_freepage,
+ .migratepage = secretmem_migratepage,
+ .isolate_page = secretmem_isolate_page,
+};
+
+bool page_is_secretmem(struct page *page)
+{
+ struct address_space *mapping = page_mapping(page);
+
+ if (!mapping)
+ return false;
+
+ return mapping->a_ops == &secretmem_aops;
+}
+
+static struct vfsmount *secretmem_mnt;
+
+static struct file *secretmem_file_create(unsigned long flags)
+{
+ struct file *file = ERR_PTR(-ENOMEM);
+ struct secretmem_ctx *ctx;
+ struct inode *inode;
+
+ inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+ if (IS_ERR(inode))
+ return ERR_CAST(inode);
+
+ ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+ if (!ctx)
+ goto err_free_inode;
+
+ file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
+ O_RDWR, &secretmem_fops);
+ if (IS_ERR(file))
+ goto err_free_ctx;
+
+ mapping_set_unevictable(inode->i_mapping);
+
+ inode->i_mapping->private_data = ctx;
+ inode->i_mapping->a_ops = &secretmem_aops;
+
+ /* pretend we are a normal file with zero size */
+ inode->i_mode |= S_IFREG;
+ inode->i_size = 0;
+
+ file->private_data = ctx;
+
+ ctx->mode = flags & SECRETMEM_MODE_MASK;
+
+ return file;
+
+err_free_ctx:
+ kfree(ctx);
+err_free_inode:
+ iput(inode);
+ return file;
+}
+
+SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
+{
+ struct file *file;
+ int fd, err;
+
+ /* make sure local flags do not conflict with global fcntl.h */
+ BUILD_BUG_ON(SECRETMEM_FLAGS_MASK & O_CLOEXEC);
+
+ if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
+ return -EINVAL;
+
+ fd = get_unused_fd_flags(flags & O_CLOEXEC);
+ if (fd < 0)
+ return fd;
+
+ file = secretmem_file_create(flags);
+ if (IS_ERR(file)) {
+ err = PTR_ERR(file);
+ goto err_put_fd;
+ }
+
+ file->f_flags |= O_LARGEFILE;
+
+ fd_install(fd, file);
+ return fd;
+
+err_put_fd:
+ put_unused_fd(fd);
+ return err;
+}
+
+static void secretmem_evict_inode(struct inode *inode)
+{
+ struct secretmem_ctx *ctx = inode->i_private;
+
+ truncate_inode_pages_final(&inode->i_data);
+ clear_inode(inode);
+ kfree(ctx);
+}
+
+static const struct super_operations secretmem_super_ops = {
+ .evict_inode = secretmem_evict_inode,
+};
+
+static int secretmem_init_fs_context(struct fs_context *fc)
+{
+ struct pseudo_fs_context *ctx = init_pseudo(fc, SECRETMEM_MAGIC);
+
+ if (!ctx)
+ return -ENOMEM;
+ ctx->ops = &secretmem_super_ops;
+
+ return 0;
+}
+
+static struct file_system_type secretmem_fs = {
+ .name = "secretmem",
+ .init_fs_context = secretmem_init_fs_context,
+ .kill_sb = kill_anon_super,
+};
+
+static int secretmem_init(void)
+{
+ int ret = 0;
+
+ secretmem_mnt = kern_mount(&secretmem_fs);
+ if (IS_ERR(secretmem_mnt))
+ ret = PTR_ERR(secretmem_mnt);
+
+ return ret;
+}
+fs_initcall(secretmem_init);
--
2.28.0
^ permalink raw reply related [flat|nested] 318+ messages in thread
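As a concrete illustration of the new ABI (a sketch, not part of the series): userspace obtains the file descriptor, sizes it with ftruncate(), and maps it MAP_SHARED. There is no libc wrapper at this point, so the example goes through syscall(2) and assumes __NR_memfd_secret is provided by the headers of a kernel with the syscall-wiring patches of this series applied.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/syscall.h>

#define MAP_SIZE 4096UL

int main(void)
{
	char *ptr;
	int fd;

	/* hypothetical demo; needs a kernel with this series applied,
	 * and there is no libc wrapper, so call the syscall directly */
	fd = syscall(__NR_memfd_secret, 0);
	if (fd < 0) {
		perror("memfd_secret");
		return EXIT_FAILURE;
	}

	/* size the backing file before mapping, as with other memfds;
	 * secretmem_fault() rejects accesses beyond i_size */
	if (ftruncate(fd, MAP_SIZE) < 0) {
		perror("ftruncate");
		return EXIT_FAILURE;
	}

	/* secretmem_mmap() rejects private mappings, use MAP_SHARED */
	ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
		   MAP_SHARED, fd, 0);
	if (ptr == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/* from here on the page is not present in the kernel direct map */
	strcpy(ptr, "secret");

	munmap(ptr, MAP_SIZE);
	close(fd);
	return EXIT_SUCCESS;
}

Two behaviors of the patch are worth keeping in mind when trying this: the mapping is implicitly VM_LOCKED, so mmap() can fail with EAGAIN once RLIMIT_MEMLOCK is exhausted (see the mlock_future_check() call in secretmem_mmap()), and because the follow_page()/get_user_pages() paths refuse secretmem pages, accessing the area from another process via e.g. process_vm_readv() is expected to fail rather than expose the contents.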
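The next message reworks this allocator: instead of pulling individual pages with alloc_page(), each secretmem inode gets a gen_pool that is lazily refilled with PMD-size chunks taken from a CMA area. For orientation, a back-of-the-envelope sketch of the refill granularity, assuming the PMD_PAGE_ORDER helper introduced earlier in the series and the common x86-64 4K/2M page geometry:

#include <stdio.h>

/* assumed x86-64 values; in the kernel these come from PAGE_SHIFT,
 * PMD_SHIFT and PMD_PAGE_ORDER = PMD_SHIFT - PAGE_SHIFT */
#define PAGE_SHIFT	12
#define PMD_SHIFT	21
#define PMD_PAGE_ORDER	(PMD_SHIFT - PAGE_SHIFT)

int main(void)
{
	unsigned long nr_pages = 1UL << PMD_PAGE_ORDER;

	/* one secretmem_pool_increase() call grabs one PMD worth of pages */
	printf("pages per refill:  %lu\n", nr_pages);		/* 512 */
	printf("refill chunk size: %luK\n",
	       (nr_pages << PAGE_SHIFT) >> 10);			/* 2048K */
	return 0;
}

The chunks come from the CMA area reserved with the new "secretmem=" boot parameter, e.g. secretmem=1G. As the setup code in the patch shows, a reservation larger than half of PUD_SIZE (above 512M with 1G PUDs) is aligned to PUD_SIZE, which keeps the reservation aligned with the 1G mappings used in the direct map.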
* [PATCH v16 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Mark Rutland, David Hildenbrand, Peter Zijlstra, Catalin Marinas,
Dave Hansen, linux-mm, linux-kselftest, H. Peter Anvin,
Christopher Lameter, Shuah Khan, Thomas Gleixner,
Elena Reshetova, linux-arch, Tycho Andersen, linux-nvdimm,
Will Deacon, x86, Matthew Wilcox, Mike Rapoport, Ingo Molnar,
Michael Kerrisk, Palmer Dabbelt, Arnd Bergmann, James Bottomley,
Hagen Paul Pfeifer, Borislav Petkov, Alexander Viro,
Andy Lutomirski, Paul Walmsley, Kirill A. Shutemov, Dan Williams,
linux-arm-kernel, linux-api, linux-kernel, linux-riscv,
Palmer Dabbelt, linux-fsdevel, Shakeel Butt, Rick Edgecombe,
Roman Gushchin, Mike Rapoport
From: Mike Rapoport <rppt@linux.ibm.com>
Removing a PAGE_SIZE page from the direct map every time such a page is
allocated for a secret memory mapping will cause severe fragmentation of
the direct map. This fragmentation can be reduced by using PMD-size pages
as a pool of small pages for secret memory mappings.
Add a gen_pool per secretmem inode and lazily populate this pool with
PMD-size pages.
As pages allocated by secretmem become unmovable, use CMA to back the
large page caches so that the page allocator won't be surprised by a
failing attempt to migrate these pages.
The CMA area used by secretmem is controlled by the "secretmem=" kernel
parameter. This allows explicit control over the memory available for
secretmem and provides an upper hard limit on secretmem consumption.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
mm/Kconfig | 2 +
mm/secretmem.c | 175 +++++++++++++++++++++++++++++++++++++++++--------
2 files changed, 150 insertions(+), 27 deletions(-)
diff --git a/mm/Kconfig b/mm/Kconfig
index 5f8243442f66..ec35bf406439 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -874,5 +874,7 @@ config KMAP_LOCAL
config SECRETMEM
def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
+ select GENERIC_ALLOCATOR
+ select CMA
endmenu
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 904351d12c33..469211c7cc3a 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -7,12 +7,15 @@
#include <linux/mm.h>
#include <linux/fs.h>
+#include <linux/cma.h>
#include <linux/mount.h>
#include <linux/memfd.h>
#include <linux/bitops.h>
#include <linux/printk.h>
#include <linux/pagemap.h>
+#include <linux/genalloc.h>
#include <linux/syscalls.h>
+#include <linux/memblock.h>
#include <linux/pseudo_fs.h>
#include <linux/secretmem.h>
#include <linux/set_memory.h>
@@ -35,24 +38,94 @@
#define SECRETMEM_FLAGS_MASK SECRETMEM_MODE_MASK
struct secretmem_ctx {
+ struct gen_pool *pool;
unsigned int mode;
};
-static struct page *secretmem_alloc_page(gfp_t gfp)
+static struct cma *secretmem_cma;
+
+static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
{
+ unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
+ struct gen_pool *pool = ctx->pool;
+ unsigned long addr;
+ struct page *page;
+ int i, err;
+
+ page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
+ if (!page)
+ return -ENOMEM;
+
/*
- * FIXME: use a cache of large pages to reduce the direct map
- * fragmentation
+ * clear the data left from the previous user before dropping the
+ * pages from the direct map
*/
- return alloc_page(gfp | __GFP_ZERO);
+ for (i = 0; i < nr_pages; i++)
+ clear_highpage(page + i);
+
+ err = set_direct_map_invalid_noflush(page, nr_pages);
+ if (err)
+ goto err_cma_release;
+
+ addr = (unsigned long)page_address(page);
+ err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
+ if (err)
+ goto err_set_direct_map;
+
+ flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+
+ return 0;
+
+err_set_direct_map:
+ /*
+ * If a split of a PUD-size page was required, it already happened
+ * when we marked the pages invalid, which guarantees that this call
+ * won't fail
+ */
+ set_direct_map_default_noflush(page, nr_pages);
+err_cma_release:
+ cma_release(secretmem_cma, page, nr_pages);
+ return err;
+}
+
+static void secretmem_free_page(struct secretmem_ctx *ctx, struct page *page)
+{
+ unsigned long addr = (unsigned long)page_address(page);
+ struct gen_pool *pool = ctx->pool;
+
+ gen_pool_free(pool, addr, PAGE_SIZE);
+}
+
+static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
+ gfp_t gfp)
+{
+ struct gen_pool *pool = ctx->pool;
+ unsigned long addr;
+ struct page *page;
+ int err;
+
+ if (gen_pool_avail(pool) < PAGE_SIZE) {
+ err = secretmem_pool_increase(ctx, gfp);
+ if (err)
+ return NULL;
+ }
+
+ addr = gen_pool_alloc(pool, PAGE_SIZE);
+ if (!addr)
+ return NULL;
+
+ page = virt_to_page(addr);
+ get_page(page);
+
+ return page;
}
static vm_fault_t secretmem_fault(struct vm_fault *vmf)
{
+ struct secretmem_ctx *ctx = vmf->vma->vm_file->private_data;
struct address_space *mapping = vmf->vma->vm_file->f_mapping;
struct inode *inode = file_inode(vmf->vma->vm_file);
pgoff_t offset = vmf->pgoff;
- unsigned long addr;
struct page *page;
int err;
@@ -62,40 +135,25 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
retry:
page = find_lock_page(mapping, offset);
if (!page) {
- page = secretmem_alloc_page(vmf->gfp_mask);
+ page = secretmem_alloc_page(ctx, vmf->gfp_mask);
if (!page)
return VM_FAULT_OOM;
- err = set_direct_map_invalid_noflush(page, 1);
- if (err) {
- put_page(page);
- return vmf_error(err);
- }
-
__SetPageUptodate(page);
err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
if (unlikely(err)) {
+ secretmem_free_page(ctx, page);
put_page(page);
if (err == -EEXIST)
goto retry;
- goto err_restore_direct_map;
+ return vmf_error(err);
}
- addr = (unsigned long)page_address(page);
- flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+ set_page_private(page, (unsigned long)ctx);
}
vmf->page = page;
return VM_FAULT_LOCKED;
-
-err_restore_direct_map:
- /*
- * If a split of large page was required, it already happened
- * when we marked the page invalid which guarantees that this call
- * won't fail
- */
- set_direct_map_default_noflush(page, 1);
- return vmf_error(err);
}
static const struct vm_operations_struct secretmem_vm_ops = {
@@ -141,8 +199,9 @@ static int secretmem_migratepage(struct address_space *mapping,
static void secretmem_freepage(struct page *page)
{
- set_direct_map_default_noflush(page, 1);
- clear_highpage(page);
+ struct secretmem_ctx *ctx = (struct secretmem_ctx *)page_private(page);
+
+ secretmem_free_page(ctx, page);
}
static const struct address_space_operations secretmem_aops = {
@@ -177,13 +236,18 @@ static struct file *secretmem_file_create(unsigned long flags)
if (!ctx)
goto err_free_inode;
+ ctx->pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
+ if (!ctx->pool)
+ goto err_free_ctx;
+
file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
O_RDWR, &secretmem_fops);
if (IS_ERR(file))
- goto err_free_ctx;
+ goto err_free_pool;
mapping_set_unevictable(inode->i_mapping);
+ inode->i_private = ctx;
inode->i_mapping->private_data = ctx;
inode->i_mapping->a_ops = &secretmem_aops;
@@ -197,6 +261,8 @@ static struct file *secretmem_file_create(unsigned long flags)
return file;
+err_free_pool:
+ gen_pool_destroy(ctx->pool);
err_free_ctx:
kfree(ctx);
err_free_inode:
@@ -215,6 +281,9 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
return -EINVAL;
+ if (!secretmem_cma)
+ return -ENOMEM;
+
fd = get_unused_fd_flags(flags & O_CLOEXEC);
if (fd < 0)
return fd;
@@ -235,11 +304,37 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
return err;
}
+static void secretmem_cleanup_chunk(struct gen_pool *pool,
+ struct gen_pool_chunk *chunk, void *data)
+{
+ unsigned long start = chunk->start_addr;
+ unsigned long end = chunk->end_addr;
+ struct page *page = virt_to_page(start);
+ unsigned long nr_pages = (end - start + 1) / PAGE_SIZE;
+ int i;
+
+ set_direct_map_default_noflush(page, nr_pages);
+
+ for (i = 0; i < nr_pages; i++)
+ clear_highpage(page + i);
+
+ cma_release(secretmem_cma, page, nr_pages);
+}
+
+static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
+{
+ struct gen_pool *pool = ctx->pool;
+
+ gen_pool_for_each_chunk(pool, secretmem_cleanup_chunk, ctx);
+ gen_pool_destroy(pool);
+}
+
static void secretmem_evict_inode(struct inode *inode)
{
struct secretmem_ctx *ctx = inode->i_private;
truncate_inode_pages_final(&inode->i_data);
+ secretmem_cleanup_pool(ctx);
clear_inode(inode);
kfree(ctx);
}
@@ -276,3 +371,29 @@ static int secretmem_init(void)
return ret;
}
fs_initcall(secretmem_init);
+
+static int __init secretmem_setup(char *str)
+{
+ phys_addr_t align = PMD_SIZE;
+ unsigned long reserved_size;
+ int err;
+
+ reserved_size = memparse(str, NULL);
+ if (!reserved_size)
+ return 0;
+
+ if (reserved_size * 2 > PUD_SIZE)
+ align = PUD_SIZE;
+
+ err = cma_declare_contiguous(0, reserved_size, 0, align, 0, false,
+ "secretmem", &secretmem_cma);
+ if (err) {
+ pr_err("failed to create CMA: %d\n", err);
+ return err;
+ }
+
+ pr_info("reserved %luM\n", reserved_size >> 20);
+
+ return 0;
+}
+__setup("secretmem=", secretmem_setup);
--
2.28.0
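For illustration, the allocation scheme in this patch boils down to a small
amount of gen_pool glue. The following is a minimal sketch, not the patch
itself: it assumes a kernel context, substitutes alloc_pages() for
cma_alloc() purely for brevity, elides the direct-map invalidation and TLB
flush, and relies on the PMD_PAGE_ORDER helper introduced earlier in this
series.

#include <linux/genalloc.h>
#include <linux/gfp.h>
#include <linux/mm.h>

static struct gen_pool *pool;

static int pool_init(void)
{
        /* the real patch creates one pool per secretmem inode */
        pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
        return pool ? 0 : -ENOMEM;
}

static int pool_refill(gfp_t gfp)
{
        /* grab one PMD-size run; the patch uses cma_alloc() here */
        struct page *page = alloc_pages(gfp | __GFP_ZERO, PMD_PAGE_ORDER);

        if (!page)
                return -ENOMEM;
        /* direct-map invalidation and TLB flush elided for brevity */
        return gen_pool_add(pool, (unsigned long)page_address(page),
                            PMD_SIZE, NUMA_NO_NODE);
}

static struct page *pool_alloc_one(gfp_t gfp)
{
        unsigned long addr;

        /* refill lazily, exactly like secretmem_alloc_page() above */
        if (gen_pool_avail(pool) < PAGE_SIZE && pool_refill(gfp))
                return NULL;
        addr = gen_pool_alloc(pool, PAGE_SIZE);
        return addr ? virt_to_page(addr) : NULL;
}

With the CMA reservation in place, booting with e.g. "secretmem=256M"
reserves a 256M area that also caps total secretmem consumption, as the
memparse()-based setup at the end of the diff shows.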
* [PATCH v16 08/11] secretmem: add memcg accounting
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
Account memory consumed by secretmem to memcg. The accounting is updated
when the memory is actually allocated and freed.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
mm/filemap.c | 3 ++-
mm/secretmem.c | 36 +++++++++++++++++++++++++++++++++++-
2 files changed, 37 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 2d0c6721879d..bb28dd6d9e22 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -42,6 +42,7 @@
#include <linux/psi.h>
#include <linux/ramfs.h>
#include <linux/page_idle.h>
+#include <linux/secretmem.h>
#include "internal.h"
#define CREATE_TRACE_POINTS
@@ -839,7 +840,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
page->mapping = mapping;
page->index = offset;
- if (!huge) {
+ if (!huge && !page_is_secretmem(page)) {
error = mem_cgroup_charge(page, current->mm, gfp);
if (error)
goto error;
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 469211c7cc3a..05026460e2ee 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -18,6 +18,7 @@
#include <linux/memblock.h>
#include <linux/pseudo_fs.h>
#include <linux/secretmem.h>
+#include <linux/memcontrol.h>
#include <linux/set_memory.h>
#include <linux/sched/signal.h>
@@ -44,6 +45,32 @@ struct secretmem_ctx {
static struct cma *secretmem_cma;
+static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
+{
+ int err;
+
+ err = memcg_kmem_charge_page(page, gfp, order);
+ if (err)
+ return err;
+
+ /*
+ * secretmem caches are unreclaimable kernel allocations, so treat
+ * them as unreclaimable slab memory for VM statistics purposes
+ */
+ mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+ PAGE_SIZE << order);
+
+ return 0;
+}
+
+static void secretmem_unaccount_pages(struct page *page, int order)
+{
+
+ mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+ -PAGE_SIZE << order);
+ memcg_kmem_uncharge_page(page, order);
+}
+
static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
{
unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
@@ -56,6 +83,10 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
if (!page)
return -ENOMEM;
+ err = secretmem_account_pages(page, gfp, PMD_PAGE_ORDER);
+ if (err)
+ goto err_cma_release;
+
/*
* clear the data left from the previous user before dropping the
* pages from the direct map
@@ -65,7 +96,7 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
err = set_direct_map_invalid_noflush(page, nr_pages);
if (err)
- goto err_cma_release;
+ goto err_memcg_uncharge;
addr = (unsigned long)page_address(page);
err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
@@ -83,6 +114,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
* won't fail
*/
set_direct_map_default_noflush(page, nr_pages);
+err_memcg_uncharge:
+ secretmem_unaccount_pages(page, PMD_PAGE_ORDER);
err_cma_release:
cma_release(secretmem_cma, page, nr_pages);
return err;
@@ -314,6 +347,7 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
int i;
set_direct_map_default_noflush(page, nr_pages);
+ secretmem_unaccount_pages(page, PMD_PAGE_ORDER);
for (i = 0; i < nr_pages; i++)
clear_highpage(page + i);
--
2.28.0
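Because the pool chunks are charged when allocated and uncharged when the
pool is torn down, a userspace process that faults secret pages in sees the
charge land in its own memcg (surfacing under NR_SLAB_UNRECLAIMABLE_B). A
minimal illustrative caller follows; it is not part of the patch, and
__NR_memfd_secret is a placeholder here — the real number comes from the
arch wiring later in the series.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447        /* placeholder, arch specific */
#endif

int main(void)
{
        long fd = syscall(__NR_memfd_secret, 0);
        char *p;

        if (fd < 0 || ftruncate(fd, 4096) < 0) {
                perror("memfd_secret");
                return 1;
        }
        p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        /* the write faults the page in; the memcg charge happens then */
        memset(p, 0x5a, 4096);
        pause();        /* keep the mapping alive for inspection */
        return 0;
}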
* [PATCH v16 09/11] PM: hibernate: disable when there are active secretmem users
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
It is unsafe to allow saving of secretmem areas to the hibernation snapshot
as they would be visible after resume, which would defeat the purpose of
secret memory mappings.
Prevent hibernation whenever there are active secret memory users.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
include/linux/secretmem.h | 6 ++++++
kernel/power/hibernate.c | 5 ++++-
mm/secretmem.c | 15 +++++++++++++++
3 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 70e7db9f94fe..907a6734059c 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -6,6 +6,7 @@
bool vma_is_secretmem(struct vm_area_struct *vma);
bool page_is_secretmem(struct page *page);
+bool secretmem_active(void);
#else
@@ -19,6 +20,11 @@ static inline bool page_is_secretmem(struct page *page)
return false;
}
+static inline bool secretmem_active(void)
+{
+ return false;
+}
+
#endif /* CONFIG_SECRETMEM */
#endif /* _LINUX_SECRETMEM_H */
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index da0b41914177..559acef3fddb 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -31,6 +31,7 @@
#include <linux/genhd.h>
#include <linux/ktime.h>
#include <linux/security.h>
+#include <linux/secretmem.h>
#include <trace/events/power.h>
#include "power.h"
@@ -81,7 +82,9 @@ void hibernate_release(void)
bool hibernation_available(void)
{
- return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION);
+ return nohibernate == 0 &&
+ !security_locked_down(LOCKDOWN_HIBERNATION) &&
+ !secretmem_active();
}
/**
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 05026460e2ee..6ef32ad08184 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -45,6 +45,13 @@ struct secretmem_ctx {
static struct cma *secretmem_cma;
+static atomic_t secretmem_users;
+
+bool secretmem_active(void)
+{
+ return !!atomic_read(&secretmem_users);
+}
+
static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
{
int err;
@@ -193,6 +200,12 @@ static const struct vm_operations_struct secretmem_vm_ops = {
.fault = secretmem_fault,
};
+static int secretmem_release(struct inode *inode, struct file *file)
+{
+ atomic_dec(&secretmem_users);
+ return 0;
+}
+
static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
unsigned long len = vma->vm_end - vma->vm_start;
@@ -215,6 +228,7 @@ bool vma_is_secretmem(struct vm_area_struct *vma)
}
static const struct file_operations secretmem_fops = {
+ .release = secretmem_release,
.mmap = secretmem_mmap,
};
@@ -330,6 +344,7 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
file->f_flags |= O_LARGEFILE;
fd_install(fd, file);
+ atomic_inc(&secretmem_users);
return fd;
err_put_fd:
--
2.28.0
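The whole mechanism reduces to one global usage counter. A minimal sketch
of the pattern follows, using the same names as the patch; it is
illustrative only, not a drop-in replacement for the diff above.

#include <linux/atomic.h>
#include <linux/types.h>

static atomic_t secretmem_users;

bool secretmem_active(void)
{
        return !!atomic_read(&secretmem_users);
}

/*
 * The syscall path does atomic_inc(&secretmem_users) right after
 * fd_install(), the file's ->release() does the matching atomic_dec(),
 * and hibernation_available() appends "&& !secretmem_active()" to its
 * existing checks, so a snapshot can never contain live secret pages.
 */

Note the counter is per-file, not per-mapping: merely holding a secretmem
file descriptor open is enough to block hibernation.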
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org
^ permalink raw reply related [flat|nested] 318+ messages in thread
* [PATCH v16 09/11] PM: hibernate: disable when there are active secretmem users
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
It is unsafe to allow saving of secretmem areas to the hibernation snapshot
as they would be visible after the resume and this essentially will defeat
the purpose of secret memory mappings.
Prevent hibernation whenever there are active secret memory users.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
include/linux/secretmem.h | 6 ++++++
kernel/power/hibernate.c | 5 ++++-
mm/secretmem.c | 15 +++++++++++++++
3 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 70e7db9f94fe..907a6734059c 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -6,6 +6,7 @@
bool vma_is_secretmem(struct vm_area_struct *vma);
bool page_is_secretmem(struct page *page);
+bool secretmem_active(void);
#else
@@ -19,6 +20,11 @@ static inline bool page_is_secretmem(struct page *page)
return false;
}
+static inline bool secretmem_active(void)
+{
+ return false;
+}
+
#endif /* CONFIG_SECRETMEM */
#endif /* _LINUX_SECRETMEM_H */
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index da0b41914177..559acef3fddb 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -31,6 +31,7 @@
#include <linux/genhd.h>
#include <linux/ktime.h>
#include <linux/security.h>
+#include <linux/secretmem.h>
#include <trace/events/power.h>
#include "power.h"
@@ -81,7 +82,9 @@ void hibernate_release(void)
bool hibernation_available(void)
{
- return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION);
+ return nohibernate == 0 &&
+ !security_locked_down(LOCKDOWN_HIBERNATION) &&
+ !secretmem_active();
}
/**
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 05026460e2ee..6ef32ad08184 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -45,6 +45,13 @@ struct secretmem_ctx {
static struct cma *secretmem_cma;
+static atomic_t secretmem_users;
+
+bool secretmem_active(void)
+{
+ return !!atomic_read(&secretmem_users);
+}
+
static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
{
int err;
@@ -193,6 +200,12 @@ static const struct vm_operations_struct secretmem_vm_ops = {
.fault = secretmem_fault,
};
+static int secretmem_release(struct inode *inode, struct file *file)
+{
+ atomic_dec(&secretmem_users);
+ return 0;
+}
+
static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
unsigned long len = vma->vm_end - vma->vm_start;
@@ -215,6 +228,7 @@ bool vma_is_secretmem(struct vm_area_struct *vma)
}
static const struct file_operations secretmem_fops = {
+ .release = secretmem_release,
.mmap = secretmem_mmap,
};
@@ -330,6 +344,7 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
file->f_flags |= O_LARGEFILE;
fd_install(fd, file);
+ atomic_inc(&secretmem_users);
return fd;
err_put_fd:
--
2.28.0
^ permalink raw reply related [flat|nested] 318+ messages in thread
* [PATCH v16 09/11] PM: hibernate: disable when there are active secretmem users
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Mark Rutland, David Hildenbrand, Peter Zijlstra, Catalin Marinas,
Dave Hansen, linux-mm, linux-kselftest, H. Peter Anvin,
Christopher Lameter, Shuah Khan, Thomas Gleixner,
Elena Reshetova, linux-arch, Tycho Andersen, linux-nvdimm,
Will Deacon, x86, Matthew Wilcox, Mike Rapoport, Ingo Molnar,
Michael Kerrisk, Palmer Dabbelt, Arnd Bergmann, James Bottomley,
Hagen Paul Pfeifer, Borislav Petkov, Alexander Viro,
Andy Lutomirski, Paul Walmsley, Kirill A. Shutemov, Dan Williams,
linux-arm-kernel, linux-api, linux-kernel, linux-riscv,
Palmer Dabbelt, linux-fsdevel, Shakeel Butt, Rick Edgecombe,
Roman Gushchin, Mike Rapoport
From: Mike Rapoport <rppt@linux.ibm.com>
It is unsafe to allow saving of secretmem areas to the hibernation snapshot
as they would be visible after the resume and this essentially will defeat
the purpose of secret memory mappings.
Prevent hibernation whenever there are active secret memory users.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
include/linux/secretmem.h | 6 ++++++
kernel/power/hibernate.c | 5 ++++-
mm/secretmem.c | 15 +++++++++++++++
3 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 70e7db9f94fe..907a6734059c 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -6,6 +6,7 @@
bool vma_is_secretmem(struct vm_area_struct *vma);
bool page_is_secretmem(struct page *page);
+bool secretmem_active(void);
#else
@@ -19,6 +20,11 @@ static inline bool page_is_secretmem(struct page *page)
return false;
}
+static inline bool secretmem_active(void)
+{
+ return false;
+}
+
#endif /* CONFIG_SECRETMEM */
#endif /* _LINUX_SECRETMEM_H */
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index da0b41914177..559acef3fddb 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -31,6 +31,7 @@
#include <linux/genhd.h>
#include <linux/ktime.h>
#include <linux/security.h>
+#include <linux/secretmem.h>
#include <trace/events/power.h>
#include "power.h"
@@ -81,7 +82,9 @@ void hibernate_release(void)
bool hibernation_available(void)
{
- return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION);
+ return nohibernate == 0 &&
+ !security_locked_down(LOCKDOWN_HIBERNATION) &&
+ !secretmem_active();
}
/**
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 05026460e2ee..6ef32ad08184 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -45,6 +45,13 @@ struct secretmem_ctx {
static struct cma *secretmem_cma;
+static atomic_t secretmem_users;
+
+bool secretmem_active(void)
+{
+ return !!atomic_read(&secretmem_users);
+}
+
static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
{
int err;
@@ -193,6 +200,12 @@ static const struct vm_operations_struct secretmem_vm_ops = {
.fault = secretmem_fault,
};
+static int secretmem_release(struct inode *inode, struct file *file)
+{
+ atomic_dec(&secretmem_users);
+ return 0;
+}
+
static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
unsigned long len = vma->vm_end - vma->vm_start;
@@ -215,6 +228,7 @@ bool vma_is_secretmem(struct vm_area_struct *vma)
}
static const struct file_operations secretmem_fops = {
+ .release = secretmem_release,
.mmap = secretmem_mmap,
};
@@ -330,6 +344,7 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
file->f_flags |= O_LARGEFILE;
fd_install(fd, file);
+ atomic_inc(&secretmem_users);
return fd;
err_put_fd:
--
2.28.0
_______________________________________________
linux-riscv mailing list
linux-riscv@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-riscv
^ permalink raw reply related [flat|nested] 318+ messages in thread
* [PATCH v16 09/11] PM: hibernate: disable when there are active secretmem users
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Mark Rutland, David Hildenbrand, Peter Zijlstra, Catalin Marinas,
Dave Hansen, linux-mm, linux-kselftest, H. Peter Anvin,
Christopher Lameter, Shuah Khan, Thomas Gleixner,
Elena Reshetova, linux-arch, Tycho Andersen, linux-nvdimm,
Will Deacon, x86, Matthew Wilcox, Mike Rapoport, Ingo Molnar,
Michael Kerrisk, Palmer Dabbelt, Arnd Bergmann, James Bottomley,
Hagen Paul Pfeifer, Borislav Petkov, Alexander Viro,
Andy Lutomirski, Paul Walmsley, Kirill A. Shutemov, Dan Williams,
linux-arm-kernel, linux-api, linux-kernel, linux-riscv,
Palmer Dabbelt, linux-fsdevel, Shakeel Butt, Rick Edgecombe,
Roman Gushchin, Mike Rapoport
From: Mike Rapoport <rppt@linux.ibm.com>
It is unsafe to allow saving of secretmem areas to the hibernation snapshot
as they would be visible after the resume and this essentially will defeat
the purpose of secret memory mappings.
Prevent hibernation whenever there are active secret memory users.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
include/linux/secretmem.h | 6 ++++++
kernel/power/hibernate.c | 5 ++++-
mm/secretmem.c | 15 +++++++++++++++
3 files changed, 25 insertions(+), 1 deletion(-)
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index 70e7db9f94fe..907a6734059c 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -6,6 +6,7 @@
bool vma_is_secretmem(struct vm_area_struct *vma);
bool page_is_secretmem(struct page *page);
+bool secretmem_active(void);
#else
@@ -19,6 +20,11 @@ static inline bool page_is_secretmem(struct page *page)
return false;
}
+static inline bool secretmem_active(void)
+{
+ return false;
+}
+
#endif /* CONFIG_SECRETMEM */
#endif /* _LINUX_SECRETMEM_H */
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index da0b41914177..559acef3fddb 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -31,6 +31,7 @@
#include <linux/genhd.h>
#include <linux/ktime.h>
#include <linux/security.h>
+#include <linux/secretmem.h>
#include <trace/events/power.h>
#include "power.h"
@@ -81,7 +82,9 @@ void hibernate_release(void)
bool hibernation_available(void)
{
- return nohibernate == 0 && !security_locked_down(LOCKDOWN_HIBERNATION);
+ return nohibernate == 0 &&
+ !security_locked_down(LOCKDOWN_HIBERNATION) &&
+ !secretmem_active();
}
/**
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 05026460e2ee..6ef32ad08184 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -45,6 +45,13 @@ struct secretmem_ctx {
static struct cma *secretmem_cma;
+static atomic_t secretmem_users;
+
+bool secretmem_active(void)
+{
+ return !!atomic_read(&secretmem_users);
+}
+
static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
{
int err;
@@ -193,6 +200,12 @@ static const struct vm_operations_struct secretmem_vm_ops = {
.fault = secretmem_fault,
};
+static int secretmem_release(struct inode *inode, struct file *file)
+{
+ atomic_dec(&secretmem_users);
+ return 0;
+}
+
static int secretmem_mmap(struct file *file, struct vm_area_struct *vma)
{
unsigned long len = vma->vm_end - vma->vm_start;
@@ -215,6 +228,7 @@ bool vma_is_secretmem(struct vm_area_struct *vma)
}
static const struct file_operations secretmem_fops = {
+ .release = secretmem_release,
.mmap = secretmem_mmap,
};
@@ -330,6 +344,7 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
file->f_flags |= O_LARGEFILE;
fd_install(fd, file);
+ atomic_inc(&secretmem_users);
return fd;
err_put_fd:
--
2.28.0
* [PATCH v16 10/11] arch, mm: wire up memfd_secret system call where relevant
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86, Palmer Dabbelt,
Hagen Paul Pfeifer
From: Mike Rapoport <rppt@linux.ibm.com>
Wire up the memfd_secret system call on the architectures that define
ARCH_HAS_SET_DIRECT_MAP, namely arm64, RISC-V and x86.
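Until libc grows a wrapper, userspace is expected to reach the new system
call through syscall(3). A minimal sketch, assuming a kernel with this
series applied (the hard-coded fallback 443 matches the tables added
below):
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <stdio.h>
#include <unistd.h>
#ifndef __NR_memfd_secret
#define __NR_memfd_secret 443 /* per the syscall tables in this patch */
#endif
int main(void)
{
        int fd = syscall(__NR_memfd_secret, 0UL);
        if (fd < 0) {
                perror("memfd_secret"); /* ENOSYS without this series */
                return 1;
        }
        printf("secret memfd: %d\n", fd);
        close(fd);
        return 0;
}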
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
arch/arm64/include/uapi/asm/unistd.h | 1 +
arch/riscv/include/asm/unistd.h | 1 +
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
include/linux/syscalls.h | 1 +
include/uapi/asm-generic/unistd.h | 6 +++++-
mm/secretmem.c | 3 +++
scripts/checksyscalls.sh | 4 ++++
8 files changed, 17 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h
index f83a70e07df8..ce2ee8f1e361 100644
--- a/arch/arm64/include/uapi/asm/unistd.h
+++ b/arch/arm64/include/uapi/asm/unistd.h
@@ -20,5 +20,6 @@
#define __ARCH_WANT_SET_GET_RLIMIT
#define __ARCH_WANT_TIME32_SYSCALLS
#define __ARCH_WANT_SYS_CLONE3
+#define __ARCH_WANT_MEMFD_SECRET
#include <asm-generic/unistd.h>
diff --git a/arch/riscv/include/asm/unistd.h b/arch/riscv/include/asm/unistd.h
index 977ee6181dab..6c316093a1e5 100644
--- a/arch/riscv/include/asm/unistd.h
+++ b/arch/riscv/include/asm/unistd.h
@@ -9,6 +9,7 @@
*/
#define __ARCH_WANT_SYS_CLONE
+#define __ARCH_WANT_MEMFD_SECRET
#include <uapi/asm/unistd.h>
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 02a349afaf9c..a1578cdf6d91 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -447,3 +447,4 @@
440 i386 process_madvise sys_process_madvise
441 i386 epoll_pwait2 sys_epoll_pwait2 compat_sys_epoll_pwait2
442 i386 watch_mount sys_watch_mount
+443 i386 memfd_secret sys_memfd_secret
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index d9bcc4e02588..d8ecd9df0942 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -364,6 +364,7 @@
440 common process_madvise sys_process_madvise
441 common epoll_pwait2 sys_epoll_pwait2
442 common watch_mount sys_watch_mount
+443 common memfd_secret sys_memfd_secret
#
# Due to a historical design error, certain syscalls are numbered differently
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index 28bde029109d..4bc70ac0e993 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -1039,6 +1039,7 @@ asmlinkage long sys_pidfd_send_signal(int pidfd, int sig,
asmlinkage long sys_pidfd_getfd(int pidfd, int fd, unsigned int flags);
asmlinkage long sys_watch_mount(int dfd, const char __user *path,
unsigned int at_flags, int watch_fd, int watch_id);
+asmlinkage long sys_memfd_secret(unsigned long flags);
/*
* Architecture-specific system calls
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index ad58f661f4aa..26125974a8a2 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -863,9 +863,13 @@ __SYSCALL(__NR_process_madvise, sys_process_madvise)
__SC_COMP(__NR_epoll_pwait2, sys_epoll_pwait2, compat_sys_epoll_pwait2)
#define __NR_watch_mount 442
__SYSCALL(__NR_watch_mount, sys_watch_mount)
+#ifdef __ARCH_WANT_MEMFD_SECRET
+#define __NR_memfd_secret 443
+__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
+#endif
#undef __NR_syscalls
-#define __NR_syscalls 443
+#define __NR_syscalls 444
/*
* 32 bit systems traditionally used different
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 6ef32ad08184..3d78b2807a2e 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -427,6 +427,9 @@ static int __init secretmem_setup(char *str)
unsigned long reserved_size;
int err;
+ if (!can_set_direct_map())
+ return 0;
+
reserved_size = memparse(str, NULL);
if (!reserved_size)
return 0;
diff --git a/scripts/checksyscalls.sh b/scripts/checksyscalls.sh
index a18b47695f55..b7609958ee36 100755
--- a/scripts/checksyscalls.sh
+++ b/scripts/checksyscalls.sh
@@ -40,6 +40,10 @@ cat << EOF
#define __IGNORE_setrlimit /* setrlimit */
#endif
+#ifndef __ARCH_WANT_MEMFD_SECRET
+#define __IGNORE_memfd_secret
+#endif
+
/* Missing flags argument */
#define __IGNORE_renameat /* renameat2 */
--
2.28.0
* [PATCH v16 11/11] secretmem: test: add basic selftest for memfd_secret(2)
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
The test verifies that a file descriptor created with memfd_secret() does
not allow read/write operations, that secret memory mappings respect
RLIMIT_MEMLOCK, and that remote accesses via process_vm_read() and
ptrace() to the secret memory fail.
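As a condensed illustration of the file IO property (the full test below
is authoritative), a hypothetical stand-alone snippet assuming syscall
number 443 from the previous patch; read() on the descriptor should fail
while a page-sized mmap() within RLIMIT_MEMLOCK should succeed:
#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <stdio.h>
#include <unistd.h>
int main(void)
{
        long page = sysconf(_SC_PAGE_SIZE);
        int fd = syscall(443 /* __NR_memfd_secret */, 0UL);
        char buf[64];
        if (fd < 0)
                return 1;
        /* regular file IO on a secretmem descriptor is expected to fail */
        if (read(fd, buf, sizeof(buf)) >= 0)
                fprintf(stderr, "unexpected: read() succeeded\n");
        /* while mapping it within RLIMIT_MEMLOCK is expected to succeed */
        if (mmap(NULL, page, PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, 0) == MAP_FAILED)
                fprintf(stderr, "unexpected: mmap() failed\n");
        return 0;
}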
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 3 +-
tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++
tools/testing/selftests/vm/run_vmtests | 17 ++
4 files changed, 316 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/vm/memfd_secret.c
diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore
index 9a35c3f6a557..c8deddc81e7a 100644
--- a/tools/testing/selftests/vm/.gitignore
+++ b/tools/testing/selftests/vm/.gitignore
@@ -21,4 +21,5 @@ va_128TBswitch
map_fixed_noreplace
write_to_hugetlbfs
hmm-tests
+memfd_secret
local_config.*
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index d42115e4284d..0200fb61646c 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -34,6 +34,7 @@ TEST_GEN_FILES += khugepaged
TEST_GEN_FILES += map_fixed_noreplace
TEST_GEN_FILES += map_hugetlb
TEST_GEN_FILES += map_populate
+TEST_GEN_FILES += memfd_secret
TEST_GEN_FILES += mlock-random-test
TEST_GEN_FILES += mlock2-tests
TEST_GEN_FILES += mremap_dontunmap
@@ -133,7 +134,7 @@ warn_32bit_failure:
endif
endif
-$(OUTPUT)/mlock-random-test: LDLIBS += -lcap
+$(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap
$(OUTPUT)/gup_test: ../../../../mm/gup_test.h
diff --git a/tools/testing/selftests/vm/memfd_secret.c b/tools/testing/selftests/vm/memfd_secret.c
new file mode 100644
index 000000000000..c878c2b841fc
--- /dev/null
+++ b/tools/testing/selftests/vm/memfd_secret.c
@@ -0,0 +1,296 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#define _GNU_SOURCE
+#include <sys/uio.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ptrace.h>
+#include <sys/syscall.h>
+#include <sys/resource.h>
+#include <sys/capability.h>
+
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <errno.h>
+#include <stdio.h>
+
+#include "../kselftest.h"
+
+#define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__)
+#define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__)
+#define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
+
+#ifdef __NR_memfd_secret
+
+#define PATTERN 0x55
+
+static const int prot = PROT_READ | PROT_WRITE;
+static const int mode = MAP_SHARED;
+
+static unsigned long page_size;
+static unsigned long mlock_limit_cur;
+static unsigned long mlock_limit_max;
+
+static int memfd_secret(unsigned long flags)
+{
+ return syscall(__NR_memfd_secret, flags);
+}
+
+static void test_file_apis(int fd)
+{
+ char buf[64];
+
+ if ((read(fd, buf, sizeof(buf)) >= 0) ||
+ (write(fd, buf, sizeof(buf)) >= 0) ||
+ (pread(fd, buf, sizeof(buf), 0) >= 0) ||
+ (pwrite(fd, buf, sizeof(buf), 0) >= 0))
+ fail("unexpected file IO\n");
+ else
+ pass("file IO is blocked as expected\n");
+}
+
+static void test_mlock_limit(int fd)
+{
+ size_t len;
+ char *mem;
+
+ len = mlock_limit_cur;
+ mem = mmap(NULL, len, prot, mode, fd, 0);
+ if (mem == MAP_FAILED) {
+ fail("unable to mmap secret memory\n");
+ return;
+ }
+ munmap(mem, len);
+
+ len = mlock_limit_max * 2;
+ mem = mmap(NULL, len, prot, mode, fd, 0);
+ if (mem != MAP_FAILED) {
+ fail("unexpected mlock limit violation\n");
+ munmap(mem, len);
+ return;
+ }
+
+ pass("mlock limit is respected\n");
+}
+
+static void try_process_vm_read(int fd, int pipefd[2])
+{
+ struct iovec liov, riov;
+ char buf[64];
+ char *mem;
+
+ if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+ fail("pipe write: %s\n", strerror(errno));
+ exit(KSFT_FAIL);
+ }
+
+ liov.iov_len = riov.iov_len = sizeof(buf);
+ liov.iov_base = buf;
+ riov.iov_base = mem;
+
+ if (process_vm_readv(getppid(), &liov, 1, &riov, 1, 0) < 0) {
+ if (errno == ENOSYS)
+ exit(KSFT_SKIP);
+ exit(KSFT_PASS);
+ }
+
+ exit(KSFT_FAIL);
+}
+
+static void try_ptrace(int fd, int pipefd[2])
+{
+ pid_t ppid = getppid();
+ int status;
+ char *mem;
+ long ret;
+
+ if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+ perror("pipe write");
+ exit(KSFT_FAIL);
+ }
+
+ ret = ptrace(PTRACE_ATTACH, ppid, 0, 0);
+ if (ret) {
+ perror("ptrace_attach");
+ exit(KSFT_FAIL);
+ }
+
+ ret = waitpid(ppid, &status, WUNTRACED);
+ if ((ret != ppid) || !(WIFSTOPPED(status))) {
+ fprintf(stderr, "weird waitppid result %ld stat %x\n",
+ ret, status);
+ exit(KSFT_FAIL);
+ }
+
+ if (ptrace(PTRACE_PEEKDATA, ppid, mem, 0))
+ exit(KSFT_PASS);
+
+ exit(KSFT_FAIL);
+}
+
+static void check_child_status(pid_t pid, const char *name)
+{
+ int status;
+
+ waitpid(pid, &status, 0);
+
+ if (WIFEXITED(status) && WEXITSTATUS(status) == KSFT_SKIP) {
+ skip("%s is not supported\n", name);
+ return;
+ }
+
+ if ((WIFEXITED(status) && WEXITSTATUS(status) == KSFT_PASS) ||
+ WIFSIGNALED(status)) {
+ pass("%s is blocked as expected\n", name);
+ return;
+ }
+
+ fail("%s: unexpected memory access\n", name);
+}
+
+static void test_remote_access(int fd, const char *name,
+ void (*func)(int fd, int pipefd[2]))
+{
+ int pipefd[2];
+ pid_t pid;
+ char *mem;
+
+ if (pipe(pipefd)) {
+ fail("pipe failed: %s\n", strerror(errno));
+ return;
+ }
+
+ pid = fork();
+ if (pid < 0) {
+ fail("fork failed: %s\n", strerror(errno));
+ return;
+ }
+
+ if (pid == 0) {
+ func(fd, pipefd);
+ return;
+ }
+
+ mem = mmap(NULL, page_size, prot, mode, fd, 0);
+ if (mem == MAP_FAILED) {
+ fail("Unable to mmap secret memory\n");
+ return;
+ }
+
+ ftruncate(fd, page_size);
+ memset(mem, PATTERN, page_size);
+
+ if (write(pipefd[1], &mem, sizeof(mem)) < 0) {
+ fail("pipe write: %s\n", strerror(errno));
+ return;
+ }
+
+ check_child_status(pid, name);
+}
+
+static void test_process_vm_read(int fd)
+{
+ test_remote_access(fd, "process_vm_read", try_process_vm_read);
+}
+
+static void test_ptrace(int fd)
+{
+ test_remote_access(fd, "ptrace", try_ptrace);
+}
+
+static int set_cap_limits(rlim_t max)
+{
+ struct rlimit new;
+ cap_t cap = cap_init();
+
+ new.rlim_cur = max;
+ new.rlim_max = max;
+ if (setrlimit(RLIMIT_MEMLOCK, &new)) {
+ perror("setrlimit() returns error");
+ return -1;
+ }
+
+ /* drop capabilities including CAP_IPC_LOCK */
+ if (cap_set_proc(cap)) {
+ perror("cap_set_proc() returns error");
+ return -2;
+ }
+
+ return 0;
+}
+
+static void prepare(void)
+{
+ struct rlimit rlim;
+
+ page_size = sysconf(_SC_PAGE_SIZE);
+ if (!page_size)
+ ksft_exit_fail_msg("Failed to get page size %s\n",
+ strerror(errno));
+
+ if (getrlimit(RLIMIT_MEMLOCK, &rlim))
+ ksft_exit_fail_msg("Unable to detect mlock limit: %s\n",
+ strerror(errno));
+
+ mlock_limit_cur = rlim.rlim_cur;
+ mlock_limit_max = rlim.rlim_max;
+
+ printf("page_size: %ld, mlock.soft: %ld, mlock.hard: %ld\n",
+ page_size, mlock_limit_cur, mlock_limit_max);
+
+ if (page_size > mlock_limit_cur)
+ mlock_limit_cur = page_size;
+ if (page_size > mlock_limit_max)
+ mlock_limit_max = page_size;
+
+ if (set_cap_limits(mlock_limit_max))
+ ksft_exit_fail_msg("Unable to set mlock limit: %s\n",
+ strerror(errno));
+}
+
+#define NUM_TESTS 4
+
+int main(int argc, char *argv[])
+{
+ int fd;
+
+ prepare();
+
+ ksft_print_header();
+ ksft_set_plan(NUM_TESTS);
+
+ fd = memfd_secret(0);
+ if (fd < 0) {
+ if (errno == ENOSYS)
+ ksft_exit_skip("memfd_secret is not supported\n");
+ else
+ ksft_exit_fail_msg("memfd_secret failed: %s\n",
+ strerror(errno));
+ }
+
+ test_mlock_limit(fd);
+ test_file_apis(fd);
+ test_process_vm_read(fd);
+ test_ptrace(fd);
+
+ close(fd);
+
+ ksft_exit(!ksft_get_fail_cnt());
+}
+
+#else /* __NR_memfd_secret */
+
+int main(int argc, char *argv[])
+{
+ printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n");
+ return KSFT_SKIP;
+}
+
+#endif /* __NR_memfd_secret */
diff --git a/tools/testing/selftests/vm/run_vmtests b/tools/testing/selftests/vm/run_vmtests
index e953f3cd9664..95a67382f132 100755
--- a/tools/testing/selftests/vm/run_vmtests
+++ b/tools/testing/selftests/vm/run_vmtests
@@ -346,4 +346,21 @@ else
exitcode=1
fi
+echo "running memfd_secret test"
+echo "------------------------------------"
+./memfd_secret
+ret_val=$?
+
+if [ $ret_val -eq 0 ]; then
+ echo "[PASS]"
+elif [ $ret_val -eq $ksft_skip ]; then
+ echo "[SKIP]"
+ exitcode=$ksft_skip
+else
+ echo "[FAIL]"
+ exitcode=1
+fi
+
+exit $exitcode
+
exit $exitcode
--
2.28.0
* [PATCH v16 11/11] secretmem: test: add basic selftest for memfd_secret(2)
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dan Williams, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
From: Mike Rapoport <rppt@linux.ibm.com>
The test verifies that file descriptor created with memfd_secret does
not allow read/write operations, that secret memory mappings respect
RLIMIT_MEMLOCK and that remote accesses with process_vm_read() and
ptrace() to the secret memory fail.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 3 +-
tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++
tools/testing/selftests/vm/run_vmtests | 17 ++
4 files changed, 316 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/vm/memfd_secret.c
diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore
index 9a35c3f6a557..c8deddc81e7a 100644
--- a/tools/testing/selftests/vm/.gitignore
+++ b/tools/testing/selftests/vm/.gitignore
@@ -21,4 +21,5 @@ va_128TBswitch
map_fixed_noreplace
write_to_hugetlbfs
hmm-tests
+memfd_secret
local_config.*
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index d42115e4284d..0200fb61646c 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -34,6 +34,7 @@ TEST_GEN_FILES += khugepaged
TEST_GEN_FILES += map_fixed_noreplace
TEST_GEN_FILES += map_hugetlb
TEST_GEN_FILES += map_populate
+TEST_GEN_FILES += memfd_secret
TEST_GEN_FILES += mlock-random-test
TEST_GEN_FILES += mlock2-tests
TEST_GEN_FILES += mremap_dontunmap
@@ -133,7 +134,7 @@ warn_32bit_failure:
endif
endif
-$(OUTPUT)/mlock-random-test: LDLIBS += -lcap
+$(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap
$(OUTPUT)/gup_test: ../../../../mm/gup_test.h
diff --git a/tools/testing/selftests/vm/memfd_secret.c b/tools/testing/selftests/vm/memfd_secret.c
new file mode 100644
index 000000000000..c878c2b841fc
--- /dev/null
+++ b/tools/testing/selftests/vm/memfd_secret.c
@@ -0,0 +1,296 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#define _GNU_SOURCE
+#include <sys/uio.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ptrace.h>
+#include <sys/syscall.h>
+#include <sys/resource.h>
+#include <sys/capability.h>
+
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <errno.h>
+#include <stdio.h>
+
+#include "../kselftest.h"
+
+#define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__)
+#define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__)
+#define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
+
+#ifdef __NR_memfd_secret
+
+#define PATTERN 0x55
+
+static const int prot = PROT_READ | PROT_WRITE;
+static const int mode = MAP_SHARED;
+
+static unsigned long page_size;
+static unsigned long mlock_limit_cur;
+static unsigned long mlock_limit_max;
+
+static int memfd_secret(unsigned long flags)
+{
+ return syscall(__NR_memfd_secret, flags);
+}
+
+static void test_file_apis(int fd)
+{
+ char buf[64];
+
+ if ((read(fd, buf, sizeof(buf)) >= 0) ||
+ (write(fd, buf, sizeof(buf)) >= 0) ||
+ (pread(fd, buf, sizeof(buf), 0) >= 0) ||
+ (pwrite(fd, buf, sizeof(buf), 0) >= 0))
+ fail("unexpected file IO\n");
+ else
+ pass("file IO is blocked as expected\n");
+}
+
+static void test_mlock_limit(int fd)
+{
+ size_t len;
+ char *mem;
+
+ len = mlock_limit_cur;
+ mem = mmap(NULL, len, prot, mode, fd, 0);
+ if (mem == MAP_FAILED) {
+ fail("unable to mmap secret memory\n");
+ return;
+ }
+ munmap(mem, len);
+
+ len = mlock_limit_max * 2;
+ mem = mmap(NULL, len, prot, mode, fd, 0);
+ if (mem != MAP_FAILED) {
+ fail("unexpected mlock limit violation\n");
+ munmap(mem, len);
+ return;
+ }
+
+ pass("mlock limit is respected\n");
+}
+
+static void try_process_vm_read(int fd, int pipefd[2])
+{
+ struct iovec liov, riov;
+ char buf[64];
+ char *mem;
+
+ if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+ fail("pipe write: %s\n", strerror(errno));
+ exit(KSFT_FAIL);
+ }
+
+ liov.iov_len = riov.iov_len = sizeof(buf);
+ liov.iov_base = buf;
+ riov.iov_base = mem;
+
+ if (process_vm_readv(getppid(), &liov, 1, &riov, 1, 0) < 0) {
+ if (errno == ENOSYS)
+ exit(KSFT_SKIP);
+ exit(KSFT_PASS);
+ }
+
+ exit(KSFT_FAIL);
+}
+
+static void try_ptrace(int fd, int pipefd[2])
+{
+ pid_t ppid = getppid();
+ int status;
+ char *mem;
+ long ret;
+
+ if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+ perror("pipe write");
+ exit(KSFT_FAIL);
+ }
+
+ ret = ptrace(PTRACE_ATTACH, ppid, 0, 0);
+ if (ret) {
+ perror("ptrace_attach");
+ exit(KSFT_FAIL);
+ }
+
+ ret = waitpid(ppid, &status, WUNTRACED);
+ if ((ret != ppid) || !(WIFSTOPPED(status))) {
+ fprintf(stderr, "weird waitppid result %ld stat %x\n",
+ ret, status);
+ exit(KSFT_FAIL);
+ }
+
+ if (ptrace(PTRACE_PEEKDATA, ppid, mem, 0))
+ exit(KSFT_PASS);
+
+ exit(KSFT_FAIL);
+}
+
+static void check_child_status(pid_t pid, const char *name)
+{
+ int status;
+
+ waitpid(pid, &status, 0);
+
+ if (WIFEXITED(status) && WEXITSTATUS(status) == KSFT_SKIP) {
+ skip("%s is not supported\n", name);
+ return;
+ }
+
+ if ((WIFEXITED(status) && WEXITSTATUS(status) == KSFT_PASS) ||
+ WIFSIGNALED(status)) {
+ pass("%s is blocked as expected\n", name);
+ return;
+ }
+
+ fail("%s: unexpected memory access\n", name);
+}
+
+static void test_remote_access(int fd, const char *name,
+ void (*func)(int fd, int pipefd[2]))
+{
+ int pipefd[2];
+ pid_t pid;
+ char *mem;
+
+ if (pipe(pipefd)) {
+ fail("pipe failed: %s\n", strerror(errno));
+ return;
+ }
+
+ pid = fork();
+ if (pid < 0) {
+ fail("fork failed: %s\n", strerror(errno));
+ return;
+ }
+
+ if (pid == 0) {
+ func(fd, pipefd);
+ return;
+ }
+
+ mem = mmap(NULL, page_size, prot, mode, fd, 0);
+ if (mem == MAP_FAILED) {
+ fail("Unable to mmap secret memory\n");
+ return;
+ }
+
+ ftruncate(fd, page_size);
+ memset(mem, PATTERN, page_size);
+
+ if (write(pipefd[1], &mem, sizeof(mem)) < 0) {
+ fail("pipe write: %s\n", strerror(errno));
+ return;
+ }
+
+ check_child_status(pid, name);
+}
+
+static void test_process_vm_read(int fd)
+{
+ test_remote_access(fd, "process_vm_read", try_process_vm_read);
+}
+
+static void test_ptrace(int fd)
+{
+ test_remote_access(fd, "ptrace", try_ptrace);
+}
+
+static int set_cap_limits(rlim_t max)
+{
+ struct rlimit new;
+ cap_t cap = cap_init();
+
+ new.rlim_cur = max;
+ new.rlim_max = max;
+ if (setrlimit(RLIMIT_MEMLOCK, &new)) {
+ perror("setrlimit() returns error");
+ return -1;
+ }
+
+ /* drop capabilities including CAP_IPC_LOCK */
+ if (cap_set_proc(cap)) {
+ perror("cap_set_proc() returns error");
+ return -2;
+ }
+
+ return 0;
+}
+
+static void prepare(void)
+{
+ struct rlimit rlim;
+
+ page_size = sysconf(_SC_PAGE_SIZE);
+ if (!page_size)
+ ksft_exit_fail_msg("Failed to get page size %s\n",
+ strerror(errno));
+
+ if (getrlimit(RLIMIT_MEMLOCK, &rlim))
+ ksft_exit_fail_msg("Unable to detect mlock limit: %s\n",
+ strerror(errno));
+
+ mlock_limit_cur = rlim.rlim_cur;
+ mlock_limit_max = rlim.rlim_max;
+
+ printf("page_size: %ld, mlock.soft: %ld, mlock.hard: %ld\n",
+ page_size, mlock_limit_cur, mlock_limit_max);
+
+ if (page_size > mlock_limit_cur)
+ mlock_limit_cur = page_size;
+ if (page_size > mlock_limit_max)
+ mlock_limit_max = page_size;
+
+ if (set_cap_limits(mlock_limit_max))
+ ksft_exit_fail_msg("Unable to set mlock limit: %s\n",
+ strerror(errno));
+}
+
+#define NUM_TESTS 4
+
+int main(int argc, char *argv[])
+{
+ int fd;
+
+ prepare();
+
+ ksft_print_header();
+ ksft_set_plan(NUM_TESTS);
+
+ fd = memfd_secret(0);
+ if (fd < 0) {
+ if (errno == ENOSYS)
+ ksft_exit_skip("memfd_secret is not supported\n");
+ else
+ ksft_exit_fail_msg("memfd_secret failed: %s\n",
+ strerror(errno));
+ }
+
+ test_mlock_limit(fd);
+ test_file_apis(fd);
+ test_process_vm_read(fd);
+ test_ptrace(fd);
+
+ close(fd);
+
+ ksft_exit(!ksft_get_fail_cnt());
+}
+
+#else /* __NR_memfd_secret */
+
+int main(int argc, char *argv[])
+{
+ printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n");
+ return KSFT_SKIP;
+}
+
+#endif /* __NR_memfd_secret */
diff --git a/tools/testing/selftests/vm/run_vmtests b/tools/testing/selftests/vm/run_vmtests
index e953f3cd9664..95a67382f132 100755
--- a/tools/testing/selftests/vm/run_vmtests
+++ b/tools/testing/selftests/vm/run_vmtests
@@ -346,4 +346,21 @@ else
exitcode=1
fi
+echo "running memfd_secret test"
+echo "------------------------------------"
+./memfd_secret
+ret_val=$?
+
+if [ $ret_val -eq 0 ]; then
+ echo "[PASS]"
+elif [ $ret_val -eq $ksft_skip ]; then
+ echo "[SKIP]"
+ exitcode=$ksft_skip
+else
+ echo "[FAIL]"
+ exitcode=1
+fi
+
+exit $exitcode
+
exit $exitcode
--
2.28.0
^ permalink raw reply related [flat|nested] 318+ messages in thread
* [PATCH v16 11/11] secretmem: test: add basic selftest for memfd_secret(2)
@ 2021-01-21 12:27 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-21 12:27 UTC (permalink / raw)
To: Andrew Morton
Cc: Mark Rutland, David Hildenbrand, Peter Zijlstra, Catalin Marinas,
Dave Hansen, linux-mm, linux-kselftest, H. Peter Anvin,
Christopher Lameter, Shuah Khan, Thomas Gleixner,
Elena Reshetova, linux-arch, Tycho Andersen, linux-nvdimm,
Will Deacon, x86, Matthew Wilcox, Mike Rapoport, Ingo Molnar,
Michael Kerrisk, Palmer Dabbelt, Arnd Bergmann, James Bottomley,
Hagen Paul Pfeifer, Borislav Petkov, Alexander Viro,
Andy Lutomirski, Paul Walmsley, Kirill A. Shutemov, Dan Williams,
linux-arm-kernel, linux-api, linux-kernel, linux-riscv,
Palmer Dabbelt, linux-fsdevel, Shakeel Butt, Rick Edgecombe,
Roman Gushchin, Mike Rapoport
From: Mike Rapoport <rppt@linux.ibm.com>
The test verifies that file descriptor created with memfd_secret does
not allow read/write operations, that secret memory mappings respect
RLIMIT_MEMLOCK and that remote accesses with process_vm_read() and
ptrace() to the secret memory fail.
Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christopher Lameter <cl@linux.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Elena Reshetova <elena.reshetova@intel.com>
Cc: Hagen Paul Pfeifer <hagen@jauu.net>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: James Bottomley <jejb@linux.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael Kerrisk <mtk.manpages@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Palmer Dabbelt <palmerdabbelt@google.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tycho Andersen <tycho@tycho.ws>
Cc: Will Deacon <will@kernel.org>
---
tools/testing/selftests/vm/.gitignore | 1 +
tools/testing/selftests/vm/Makefile | 3 +-
tools/testing/selftests/vm/memfd_secret.c | 296 ++++++++++++++++++++++
tools/testing/selftests/vm/run_vmtests | 17 ++
4 files changed, 316 insertions(+), 1 deletion(-)
create mode 100644 tools/testing/selftests/vm/memfd_secret.c
diff --git a/tools/testing/selftests/vm/.gitignore b/tools/testing/selftests/vm/.gitignore
index 9a35c3f6a557..c8deddc81e7a 100644
--- a/tools/testing/selftests/vm/.gitignore
+++ b/tools/testing/selftests/vm/.gitignore
@@ -21,4 +21,5 @@ va_128TBswitch
map_fixed_noreplace
write_to_hugetlbfs
hmm-tests
+memfd_secret
local_config.*
diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
index d42115e4284d..0200fb61646c 100644
--- a/tools/testing/selftests/vm/Makefile
+++ b/tools/testing/selftests/vm/Makefile
@@ -34,6 +34,7 @@ TEST_GEN_FILES += khugepaged
TEST_GEN_FILES += map_fixed_noreplace
TEST_GEN_FILES += map_hugetlb
TEST_GEN_FILES += map_populate
+TEST_GEN_FILES += memfd_secret
TEST_GEN_FILES += mlock-random-test
TEST_GEN_FILES += mlock2-tests
TEST_GEN_FILES += mremap_dontunmap
@@ -133,7 +134,7 @@ warn_32bit_failure:
endif
endif
-$(OUTPUT)/mlock-random-test: LDLIBS += -lcap
+$(OUTPUT)/mlock-random-test $(OUTPUT)/memfd_secret: LDLIBS += -lcap
$(OUTPUT)/gup_test: ../../../../mm/gup_test.h
diff --git a/tools/testing/selftests/vm/memfd_secret.c b/tools/testing/selftests/vm/memfd_secret.c
new file mode 100644
index 000000000000..c878c2b841fc
--- /dev/null
+++ b/tools/testing/selftests/vm/memfd_secret.c
@@ -0,0 +1,296 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright IBM Corporation, 2020
+ *
+ * Author: Mike Rapoport <rppt@linux.ibm.com>
+ */
+
+#define _GNU_SOURCE
+#include <sys/uio.h>
+#include <sys/mman.h>
+#include <sys/wait.h>
+#include <sys/types.h>
+#include <sys/ptrace.h>
+#include <sys/syscall.h>
+#include <sys/resource.h>
+#include <sys/capability.h>
+
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+#include <errno.h>
+#include <stdio.h>
+
+#include "../kselftest.h"
+
+#define fail(fmt, ...) ksft_test_result_fail(fmt, ##__VA_ARGS__)
+#define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__)
+#define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
+
+#ifdef __NR_memfd_secret
+
+#define PATTERN 0x55
+
+static const int prot = PROT_READ | PROT_WRITE;
+static const int mode = MAP_SHARED;
+
+static unsigned long page_size;
+static unsigned long mlock_limit_cur;
+static unsigned long mlock_limit_max;
+
+static int memfd_secret(unsigned long flags)
+{
+ return syscall(__NR_memfd_secret, flags);
+}
+
+static void test_file_apis(int fd)
+{
+ char buf[64];
+
+ if ((read(fd, buf, sizeof(buf)) >= 0) ||
+ (write(fd, buf, sizeof(buf)) >= 0) ||
+ (pread(fd, buf, sizeof(buf), 0) >= 0) ||
+ (pwrite(fd, buf, sizeof(buf), 0) >= 0))
+ fail("unexpected file IO\n");
+ else
+ pass("file IO is blocked as expected\n");
+}
+
+static void test_mlock_limit(int fd)
+{
+ size_t len;
+ char *mem;
+
+ len = mlock_limit_cur;
+ mem = mmap(NULL, len, prot, mode, fd, 0);
+ if (mem == MAP_FAILED) {
+ fail("unable to mmap secret memory\n");
+ return;
+ }
+ munmap(mem, len);
+
+ len = mlock_limit_max * 2;
+ mem = mmap(NULL, len, prot, mode, fd, 0);
+ if (mem != MAP_FAILED) {
+ fail("unexpected mlock limit violation\n");
+ munmap(mem, len);
+ return;
+ }
+
+ pass("mlock limit is respected\n");
+}
+
+static void try_process_vm_read(int fd, int pipefd[2])
+{
+ struct iovec liov, riov;
+ char buf[64];
+ char *mem;
+
+ if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+ fail("pipe write: %s\n", strerror(errno));
+ exit(KSFT_FAIL);
+ }
+
+ liov.iov_len = riov.iov_len = sizeof(buf);
+ liov.iov_base = buf;
+ riov.iov_base = mem;
+
+ if (process_vm_readv(getppid(), &liov, 1, &riov, 1, 0) < 0) {
+ if (errno == ENOSYS)
+ exit(KSFT_SKIP);
+ exit(KSFT_PASS);
+ }
+
+ exit(KSFT_FAIL);
+}
+
+static void try_ptrace(int fd, int pipefd[2])
+{
+ pid_t ppid = getppid();
+ int status;
+ char *mem;
+ long ret;
+
+ if (read(pipefd[0], &mem, sizeof(mem)) < 0) {
+ perror("pipe write");
+ exit(KSFT_FAIL);
+ }
+
+ ret = ptrace(PTRACE_ATTACH, ppid, 0, 0);
+ if (ret) {
+ perror("ptrace_attach");
+ exit(KSFT_FAIL);
+ }
+
+ ret = waitpid(ppid, &status, WUNTRACED);
+ if ((ret != ppid) || !(WIFSTOPPED(status))) {
+ fprintf(stderr, "weird waitppid result %ld stat %x\n",
+ ret, status);
+ exit(KSFT_FAIL);
+ }
+
+ if (ptrace(PTRACE_PEEKDATA, ppid, mem, 0))
+ exit(KSFT_PASS);
+
+ exit(KSFT_FAIL);
+}
+
+static void check_child_status(pid_t pid, const char *name)
+{
+ int status;
+
+ waitpid(pid, &status, 0);
+
+ if (WIFEXITED(status) && WEXITSTATUS(status) == KSFT_SKIP) {
+ skip("%s is not supported\n", name);
+ return;
+ }
+
+ if ((WIFEXITED(status) && WEXITSTATUS(status) == KSFT_PASS) ||
+ WIFSIGNALED(status)) {
+ pass("%s is blocked as expected\n", name);
+ return;
+ }
+
+ fail("%s: unexpected memory access\n", name);
+}
+
+static void test_remote_access(int fd, const char *name,
+ void (*func)(int fd, int pipefd[2]))
+{
+ int pipefd[2];
+ pid_t pid;
+ char *mem;
+
+ if (pipe(pipefd)) {
+ fail("pipe failed: %s\n", strerror(errno));
+ return;
+ }
+
+ pid = fork();
+ if (pid < 0) {
+ fail("fork failed: %s\n", strerror(errno));
+ return;
+ }
+
+ if (pid == 0) {
+ func(fd, pipefd);
+ return;
+ }
+
+ mem = mmap(NULL, page_size, prot, mode, fd, 0);
+ if (mem == MAP_FAILED) {
+ fail("Unable to mmap secret memory\n");
+ return;
+ }
+
+ ftruncate(fd, page_size);
+ memset(mem, PATTERN, page_size);
+
+ if (write(pipefd[1], &mem, sizeof(mem)) < 0) {
+ fail("pipe write: %s\n", strerror(errno));
+ return;
+ }
+
+ check_child_status(pid, name);
+}
+
+static void test_process_vm_read(int fd)
+{
+ test_remote_access(fd, "process_vm_read", try_process_vm_read);
+}
+
+static void test_ptrace(int fd)
+{
+ test_remote_access(fd, "ptrace", try_ptrace);
+}
+
+static int set_cap_limits(rlim_t max)
+{
+ struct rlimit new;
+ cap_t cap = cap_init();
+
+ if (!cap) {
+ perror("cap_init() failed");
+ return -1;
+ }
+
+ new.rlim_cur = max;
+ new.rlim_max = max;
+ if (setrlimit(RLIMIT_MEMLOCK, &new)) {
+ perror("setrlimit() failed");
+ cap_free(cap);
+ return -1;
+ }
+
+ /* drop all capabilities, including CAP_IPC_LOCK */
+ if (cap_set_proc(cap)) {
+ perror("cap_set_proc() failed");
+ cap_free(cap);
+ return -2;
+ }
+
+ cap_free(cap);
+ return 0;
+}
+
+static void prepare(void)
+{
+ struct rlimit rlim;
+
+ page_size = sysconf(_SC_PAGE_SIZE);
+ if ((long)page_size < 0)
+ ksft_exit_fail_msg("Failed to get page size: %s\n",
+ strerror(errno));
+
+ if (getrlimit(RLIMIT_MEMLOCK, &rlim))
+ ksft_exit_fail_msg("Unable to detect mlock limit: %s\n",
+ strerror(errno));
+
+ mlock_limit_cur = rlim.rlim_cur;
+ mlock_limit_max = rlim.rlim_max;
+
+ printf("page_size: %ld, mlock.soft: %ld, mlock.hard: %ld\n",
+ page_size, mlock_limit_cur, mlock_limit_max);
+
+ if (page_size > mlock_limit_cur)
+ mlock_limit_cur = page_size;
+ if (page_size > mlock_limit_max)
+ mlock_limit_max = page_size;
+
+ if (set_cap_limits(mlock_limit_max))
+ ksft_exit_fail_msg("Unable to set mlock limit: %s\n",
+ strerror(errno));
+}
+
+#define NUM_TESTS 4
+
+int main(int argc, char *argv[])
+{
+ int fd;
+
+ prepare();
+
+ ksft_print_header();
+ ksft_set_plan(NUM_TESTS);
+
+ fd = memfd_secret(0);
+ if (fd < 0) {
+ if (errno == ENOSYS)
+ ksft_exit_skip("memfd_secret is not supported\n");
+ else
+ ksft_exit_fail_msg("memfd_secret failed: %s\n",
+ strerror(errno));
+ }
+
+ test_mlock_limit(fd);
+ test_file_apis(fd);
+ test_process_vm_read(fd);
+ test_ptrace(fd);
+
+ close(fd);
+
+ ksft_exit(!ksft_get_fail_cnt());
+}
+
+#else /* __NR_memfd_secret */
+
+int main(int argc, char *argv[])
+{
+ printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n");
+ return KSFT_SKIP;
+}
+
+#endif /* __NR_memfd_secret */
diff --git a/tools/testing/selftests/vm/run_vmtests b/tools/testing/selftests/vm/run_vmtests
index e953f3cd9664..95a67382f132 100755
--- a/tools/testing/selftests/vm/run_vmtests
+++ b/tools/testing/selftests/vm/run_vmtests
@@ -346,4 +346,21 @@ else
exitcode=1
fi
+echo "running memfd_secret test"
+echo "------------------------------------"
+./memfd_secret
+ret_val=$?
+
+if [ $ret_val -eq 0 ]; then
+ echo "[PASS]"
+elif [ $ret_val -eq $ksft_skip ]; then
+ echo "[SKIP]"
+ exitcode=$ksft_skip
+else
+ echo "[FAIL]"
+ exitcode=1
+fi
+
+exit $exitcode
+
exit $exitcode
--
2.28.0
^ permalink raw reply related [flat|nested] 318+ messages in thread
* Re: [PATCH v16 00/11] mm: introduce memfd_secret system call to create "secret" memory areas
2021-01-21 12:27 ` Mike Rapoport
@ 2021-01-21 22:18 ` Andrew Morton
-1 siblings, 0 replies; 318+ messages in thread
From: Andrew Morton @ 2021-01-21 22:18 UTC (permalink / raw)
To: Mike Rapoport
Cc: Alexander Viro, Andy Lutomirski, Arnd Bergmann, Borislav Petkov,
Catalin Marinas, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon
On Thu, 21 Jan 2021 14:27:12 +0200 Mike Rapoport <rppt@kernel.org> wrote:
> @Andrew, this is based on v5.11-rc4-mmots-2021-01-19-13-54 with secretmem
> patches dropped from there, I can rebase whatever way you prefer.
Thanks. I merged this version.
Silently, to avoid spraying out all those emails again ;)
^ permalink raw reply [flat|nested] 318+ messages in thread
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-21 12:27 ` Mike Rapoport
@ 2021-01-25 16:17 ` Matthew Wilcox
-1 siblings, 0 replies; 318+ messages in thread
From: Matthew Wilcox @ 2021-01-25 16:17 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Mark Rutland,
Mike Rapoport, Michael Kerrisk, Palmer Dabbelt, Paul Walmsley,
Peter Zijlstra, Rick Edgecombe, Roman Gushchin, Shakeel Butt,
Shuah Khan, Thomas Gleixner, Tycho Andersen, Will Deacon,
linux-api, linux-arch, linux-arm-kernel, linux-fsdevel, linux-mm,
linux-kernel, linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
On Thu, Jan 21, 2021 at 02:27:20PM +0200, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
>
> Account memory consumed by secretmem to memcg. The accounting is updated
> when the memory is actually allocated and freed.
I think this is wrong. It fails to account subsequent allocators from
the same PMD. If you want to track like this, you need separate pools
per memcg.
I think you shouldn't try to track like this; better to just track on
a per-page basis. After all, the page allocator doesn't track order-10
pages to the memcg that initially caused them to be split.
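As a minimal sketch of that per-page alternative (illustrative only: the
helper names and their placement in the secretmem fault path are
assumptions, not code from this series; mem_cgroup_charge() and
mem_cgroup_uncharge() are the existing per-page charging APIs used by
the page cache):

	/*
	 * Hypothetical: charge each page to the memcg of the task that
	 * faults it in from the PMD-sized pool, rather than charging
	 * the whole pool page to its first allocator.
	 */
	static int secretmem_charge_page(struct page *page, gfp_t gfp)
	{
		return mem_cgroup_charge(page, current->mm, gfp);
	}

	/* Hypothetical: uncharge when the page is released to the pool. */
	static void secretmem_uncharge_page(struct page *page)
	{
		mem_cgroup_uncharge(page);
	}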
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> Acked-by: Roman Gushchin <guro@fb.com>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christopher Lameter <cl@linux.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Elena Reshetova <elena.reshetova@intel.com>
> Cc: Hagen Paul Pfeifer <hagen@jauu.net>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: James Bottomley <jejb@linux.ibm.com>
> Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Michael Kerrisk <mtk.manpages@gmail.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tycho Andersen <tycho@tycho.ws>
> Cc: Will Deacon <will@kernel.org>
> ---
> mm/filemap.c | 3 ++-
> mm/secretmem.c | 36 +++++++++++++++++++++++++++++++++++-
> 2 files changed, 37 insertions(+), 2 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 2d0c6721879d..bb28dd6d9e22 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -42,6 +42,7 @@
> #include <linux/psi.h>
> #include <linux/ramfs.h>
> #include <linux/page_idle.h>
> +#include <linux/secretmem.h>
> #include "internal.h"
>
> #define CREATE_TRACE_POINTS
> @@ -839,7 +840,7 @@ noinline int __add_to_page_cache_locked(struct page *page,
> page->mapping = mapping;
> page->index = offset;
>
> - if (!huge) {
> + if (!huge && !page_is_secretmem(page)) {
> error = mem_cgroup_charge(page, current->mm, gfp);
> if (error)
> goto error;
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index 469211c7cc3a..05026460e2ee 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -18,6 +18,7 @@
> #include <linux/memblock.h>
> #include <linux/pseudo_fs.h>
> #include <linux/secretmem.h>
> +#include <linux/memcontrol.h>
> #include <linux/set_memory.h>
> #include <linux/sched/signal.h>
>
> @@ -44,6 +45,32 @@ struct secretmem_ctx {
>
> static struct cma *secretmem_cma;
>
> +static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
> +{
> + int err;
> +
> + err = memcg_kmem_charge_page(page, gfp, order);
> + if (err)
> + return err;
> +
> + /*
> + * secretmem caches are unreclaimable kernel allocations, so treat
> + * them as unreclaimable slab memory for VM statistics purposes
> + */
> + mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> + PAGE_SIZE << order);
> +
> + return 0;
> +}
> +
> +static void secretmem_unaccount_pages(struct page *page, int order)
> +{
> +
> + mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> + -PAGE_SIZE << order);
> + memcg_kmem_uncharge_page(page, order);
> +}
> +
> static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> {
> unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
> @@ -56,6 +83,10 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> if (!page)
> return -ENOMEM;
>
> + err = secretmem_account_pages(page, gfp, PMD_PAGE_ORDER);
> + if (err)
> + goto err_cma_release;
> +
> /*
> + * clear the data left from the previous user before dropping the
> * pages from the direct map
> @@ -65,7 +96,7 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
>
> err = set_direct_map_invalid_noflush(page, nr_pages);
> if (err)
> - goto err_cma_release;
> + goto err_memcg_uncharge;
>
> addr = (unsigned long)page_address(page);
> err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
> @@ -83,6 +114,8 @@ static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> * won't fail
> */
> set_direct_map_default_noflush(page, nr_pages);
> +err_memcg_uncharge:
> + secretmem_unaccount_pages(page, PMD_PAGE_ORDER);
> err_cma_release:
> cma_release(secretmem_cma, page, nr_pages);
> return err;
> @@ -314,6 +347,7 @@ static void secretmem_cleanup_chunk(struct gen_pool *pool,
> int i;
>
> set_direct_map_default_noflush(page, nr_pages);
> + secretmem_unaccount_pages(page, PMD_PAGE_ORDER);
>
> for (i = 0; i < nr_pages; i++)
> clear_highpage(page + i);
> --
> 2.28.0
>
^ permalink raw reply [flat|nested] 318+ messages in thread
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-21 12:27 ` Mike Rapoport
@ 2021-01-25 16:54 ` Michal Hocko
-1 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-25 16:54 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Thu 21-01-21 14:27:20, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
>
> Account memory consumed by secretmem to memcg. The accounting is updated
> when the memory is actually allocated and freed.
What does this mean? What are the lifetime rules?
[...]
> +static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
> +{
> + int err;
> +
> + err = memcg_kmem_charge_page(page, gfp, order);
> + if (err)
> + return err;
> +
> + /*
> + * secretmem caches are unreclaimable kernel allocations, so treat
> + * them as unreclaimable slab memory for VM statistics purposes
> + */
> + mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> + PAGE_SIZE << order);
A lot of memcg-accounted memory is not reclaimable. Why do you abuse
the SLAB counter when this is not slab-owned memory? Why do you use the
kmem accounting API when __GFP_ACCOUNT should give you the same without
these details?
--
Michal Hocko
SUSE Labs
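As an illustration of the __GFP_ACCOUNT variant suggested above (a
sketch only; the series allocates its pool via cma_alloc(), where this
flag does not apply directly, so this shows the idea rather than a
drop-in change):

	/*
	 * With __GFP_ACCOUNT the page allocator itself charges the
	 * pages to the current memcg, so the explicit
	 * memcg_kmem_charge_page()/memcg_kmem_uncharge_page() calls
	 * become unnecessary.
	 */
	page = alloc_pages(gfp | __GFP_ACCOUNT, PMD_PAGE_ORDER);
	if (!page)
		return -ENOMEM;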
^ permalink raw reply [flat|nested] 318+ messages in thread
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-01-25 17:01 ` Michal Hocko
0 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-25 17:01 UTC (permalink / raw)
To: Mike Rapoport
Cc: Mark Rutland, David Hildenbrand, Peter Zijlstra, Catalin Marinas,
Dave Hansen, linux-mm, linux-kselftest, H. Peter Anvin,
Christopher Lameter, Shuah Khan, Thomas Gleixner,
Elena Reshetova, linux-arch, Tycho Andersen, linux-nvdimm,
Will Deacon, x86, Matthew Wilcox, Mike Rapoport, Ingo Molnar,
Michael Kerrisk, Palmer Dabbelt, Arnd Bergmann, James Bottomley,
Hagen Paul Pfeifer, Borislav Petkov, Alexander Viro,
Andy Lutomirski, Paul Walmsley, Kirill A. Shutemov, Dan Williams,
linux-arm-kernel, linux-api, linux-kernel, linux-riscv,
Palmer Dabbelt, linux-fsdevel, Shakeel Butt, Andrew Morton,
Rick Edgecombe, Roman Gushchin
On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
>
> Introduce "memfd_secret" system call with the ability to create memory
> areas visible only in the context of the owning process and not mapped not
> only to other processes but in the kernel page tables as well.
>
> The user will create a file descriptor using the memfd_secret() system
> call. The memory areas created by mmap() calls from this file descriptor
> will be unmapped from the kernel direct map and they will be only mapped in
> the page table of the owning mm.
>
> The secret memory remains accessible in the process context using uaccess
> primitives, but it is not accessible using direct/linear map addresses.
>
> Functions in the follow_page()/get_user_page() family will refuse to return
> a page that belongs to the secret memory area.
>
> A page that was a part of the secret memory area is cleared when it is
> freed.
>
> The following example demonstrates creation of a secret mapping (error
> handling is omitted):
>
> fd = memfd_secret(0);
> ftruncate(fd, MAP_SIZE);
> ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
I do not see any access control or permission model for this feature.
Is this feature generally safe to anybody?
--
Michal Hocko
SUSE Labs
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 318+ messages in thread
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-25 16:17 ` Matthew Wilcox
@ 2021-01-25 17:18 ` Shakeel Butt
0 siblings, 0 replies; 318+ messages in thread
From: Shakeel Butt @ 2021-01-25 17:18 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Mike Rapoport, Andrew Morton, Alexander Viro, Andy Lutomirski,
Arnd Bergmann, Borislav Petkov, Catalin Marinas,
Christopher Lameter, Dave Hansen, David Hildenbrand,
Elena Reshetova, H. Peter Anvin, Ingo Molnar, James Bottomley,
Kirill A. Shutemov, Mark Rutland, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon
On Mon, Jan 25, 2021 at 8:20 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Thu, Jan 21, 2021 at 02:27:20PM +0200, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> >
> > Account memory consumed by secretmem to memcg. The accounting is updated
> > when the memory is actually allocated and freed.
>
> I think this is wrong. It fails to account subsequent allocators from
> the same PMD. If you want to track like this, you need separate pools
> per memcg.
>
Are these secretmem pools shared between different jobs/memcgs?
^ permalink raw reply [flat|nested] 318+ messages in thread
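The scenario behind this question can be made concrete with a userspace
sketch (using the same hypothetical __NR_memfd_secret as in the note
further up): a process opens many secretmem descriptors and faults a single
page in each. With the pool-granular charging used by the series, each
first fault pulls a full PMD-size chunk into that descriptor's pool, while
per-page accounting would show only one page per descriptor.

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_memfd_secret
#define __NR_memfd_secret 447	/* assumption */
#endif

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	for (int i = 0; i < 16; i++) {
		int fd = syscall(__NR_memfd_secret, 0);
		if (fd < 0)
			return 1;
		if (ftruncate(fd, page) < 0)
			return 1;
		char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
			       MAP_SHARED, fd, 0);
		if (p == MAP_FAILED)
			return 1;
		p[0] = 1;	/* fault one page; refills this fd's pool */
	}
	return 0;
}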
* Re: [PATCH v16 10/11] arch, mm: wire up memfd_secret system call where relevant
2021-01-21 12:27 ` Mike Rapoport
@ 2021-01-25 18:18 ` Catalin Marinas
0 siblings, 0 replies; 318+ messages in thread
From: Catalin Marinas @ 2021-01-25 18:18 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Christopher Lameter, Dave Hansen,
David Hildenbrand, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon
On Thu, Jan 21, 2021 at 02:27:22PM +0200, Mike Rapoport wrote:
> diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h
> index f83a70e07df8..ce2ee8f1e361 100644
> --- a/arch/arm64/include/uapi/asm/unistd.h
> +++ b/arch/arm64/include/uapi/asm/unistd.h
> @@ -20,5 +20,6 @@
> #define __ARCH_WANT_SET_GET_RLIMIT
> #define __ARCH_WANT_TIME32_SYSCALLS
> #define __ARCH_WANT_SYS_CLONE3
> +#define __ARCH_WANT_MEMFD_SECRET
I thought I already acked v10 of this patch. Here it is again:
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
^ permalink raw reply [flat|nested] 318+ messages in thread
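For context, __ARCH_WANT_MEMFD_SECRET is the per-architecture opt-in that
the common syscall table consumes, along the following lines. This is a
sketch of the corresponding include/uapi/asm-generic/unistd.h hunk; 447 is
the number eventually assigned upstream and may differ in this series:

#ifdef __ARCH_WANT_MEMFD_SECRET
#define __NR_memfd_secret 447
__SYSCALL(__NR_memfd_secret, sys_memfd_secret)
#endif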
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-25 17:18 ` Shakeel Butt
@ 2021-01-25 21:35 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-25 21:35 UTC (permalink / raw)
To: Shakeel Butt
Cc: Matthew Wilcox, Andrew Morton, Alexander Viro, Andy Lutomirski,
Arnd Bergmann, Borislav Petkov, Catalin Marinas,
Christopher Lameter, Dave Hansen, David Hildenbrand,
Elena Reshetova, H. Peter Anvin, Ingo Molnar, James Bottomley,
Kirill A. Shutemov, Mark Rutland, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, Linux MM, LKML, linux-kselftest, linux-nvdimm,
linux-riscv, x86, Hagen Paul Pfeifer, Palmer Dabbelt
On Mon, Jan 25, 2021 at 09:18:04AM -0800, Shakeel Butt wrote:
> On Mon, Jan 25, 2021 at 8:20 AM Matthew Wilcox <willy@infradead.org> wrote:
> >
> > On Thu, Jan 21, 2021 at 02:27:20PM +0200, Mike Rapoport wrote:
> > > From: Mike Rapoport <rppt@linux.ibm.com>
> > >
> > > Account memory consumed by secretmem to memcg. The accounting is updated
> > > when the memory is actually allocated and freed.
I thought about doing per-page accounting, but then one would be able to
create a lot of secretmem file descriptors and use only a page from each,
while the actual memory consumption would be way higher.
> > I think this is wrong. It fails to account subsequent allocators from
> > the same PMD. If you want to track like this, you need separate pools
> > per memcg.
> >
>
> Are these secretmem pools shared between different jobs/memcgs?
A secretmem pool is per anonymous file descriptor and this file descriptor
can be shared only explicitly between several processes. So, the secretmem
pool should not be shared between different jobs/memcg. Of course, it's
possible to spread threads of a process across different memcgs, but in
that case the accounting will be similar to what's happening today with
sl*b. The first thread to cause kmalloc() will be charged for the
allocation of the entire slab and subsequent allocations from that slab
will not be accounted.
That said, having a pool per memcg would add a ton of complexity with very
dubious value.
--
Sincerely yours,
Mike.
^ permalink raw reply [flat|nested] 318+ messages in thread
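The pool behaviour described above, charging the whole chunk to whoever
triggers the refill, can be sketched as follows. The names and details are
illustrative, not the series' exact code; the point is that the memcg
charge happens once per PMD-size CMA allocation, and later sub-page
allocations served from the same per-fd pool are not re-charged, mirroring
the slab analogy:

/* illustrative sketch of a per-fd pool refill */
static int secretmem_pool_refill(struct secretmem_ctx *ctx, gfp_t gfp)
{
	unsigned int order = PMD_SHIFT - PAGE_SHIFT;
	unsigned long nr_pages = 1UL << order;
	struct page *page;
	int err;

	page = cma_alloc(secretmem_cma, nr_pages, order, gfp & __GFP_NOWARN);
	if (!page)
		return -ENOMEM;

	/* the entire chunk is charged to the caller's memcg here;
	 * sub-page allocations from the pool are not re-charged */
	err = secretmem_account_pages(page, gfp, order);
	if (err) {
		cma_release(secretmem_cma, page, nr_pages);
		return err;
	}

	return gen_pool_add(ctx->pool, (unsigned long)page_address(page),
			    PMD_SIZE, NUMA_NO_NODE);
}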
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
2021-01-25 17:01 ` Michal Hocko
@ 2021-01-25 21:36 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-25 21:36 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Mon, Jan 25, 2021 at 06:01:22PM +0100, Michal Hocko wrote:
> On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> >
> > Introduce "memfd_secret" system call with the ability to create memory
> > areas visible only in the context of the owning process and not mapped not
> > only to other processes but in the kernel page tables as well.
> >
> > The user will create a file descriptor using the memfd_secret() system
> > call. The memory areas created by mmap() calls from this file descriptor
> > will be unmapped from the kernel direct map and they will be only mapped in
> > the page table of the owning mm.
> >
> > The secret memory remains accessible in the process context using uaccess
> > primitives, but it is not accessible using direct/linear map addresses.
> >
> > Functions in the follow_page()/get_user_page() family will refuse to return
> > a page that belongs to the secret memory area.
> >
> > A page that was a part of the secret memory area is cleared when it is
> > freed.
> >
> > The following example demonstrates creation of a secret mapping (error
> > handling is omitted):
> >
> > fd = memfd_secret(0);
> > ftruncate(fd, MAP_SIZE);
> > ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>
> I do not see any access control or permission model for this feature.
> Is this feature generally safe to expose to anybody?
The mappings obey the memlock limit. Besides, this feature has to be enabled
explicitly at boot with a kernel parameter that sets the maximal amount of
memory secretmem can consume.
--
Sincerely yours,
Mike.
^ permalink raw reply [flat|nested] 318+ messages in thread
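The memlock point is observable from userspace: secretmem mappings are
charged against RLIMIT_MEMLOCK, so a process can query the bound it will
run into with plain POSIX calls (a minimal sketch, nothing series-specific):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;

	/* secret mappings count toward this limit, like mlock()ed memory */
	if (getrlimit(RLIMIT_MEMLOCK, &rl) == 0)
		printf("RLIMIT_MEMLOCK: soft=%llu hard=%llu\n",
		       (unsigned long long)rl.rlim_cur,
		       (unsigned long long)rl.rlim_max);
	return 0;
}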
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-25 16:54 ` Michal Hocko
@ 2021-01-25 21:38 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-25 21:38 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Mon, Jan 25, 2021 at 05:54:51PM +0100, Michal Hocko wrote:
> On Thu 21-01-21 14:27:20, Mike Rapoport wrote:
> > From: Mike Rapoport <rppt@linux.ibm.com>
> >
> > Account memory consumed by secretmem to memcg. The accounting is updated
> > when the memory is actually allocated and freed.
>
> What does this mean?
That means that the accounting is updated when secretmem does cma_alloc()
and cma_release().
> What are the lifetime rules?
Hmm, what do you mean by lifetime rules?
> [...]
>
> > +static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
> > +{
> > + int err;
> > +
> > + err = memcg_kmem_charge_page(page, gfp, order);
> > + if (err)
> > + return err;
> > +
> > + /*
> > + * secretmem caches are unreclaimable kernel allocations, so treat
> > + * them as unreclaimable slab memory for VM statistics purposes
> > + */
> > + mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> > + PAGE_SIZE << order);
>
> A lot of memcg accounted memory is not reclaimable. Why do you abuse
> SLAB counter when this is not a slab owned memory? Why do you use the
> kmem accounting API when __GFP_ACCOUNT should give you the same without
> these details?
I cannot use __GFP_ACCOUNT because cma_alloc() does not use gfp.
Besides, kmem accounting with __GFP_ACCOUNT does not seem
to update stats and there was an explicit request for statistics:
https://lore.kernel.org/lkml/CALo0P13aq3GsONnZrksZNU9RtfhMsZXGWhK1n=xYJWQizCd4Zw@mail.gmail.com/
As for (ab)using NR_SLAB_UNRECLAIMABLE_B, as was already discussed here:
https://lore.kernel.org/lkml/20201129172625.GD557259@kernel.org/
I think that a dedicated stats counter would be too much at the moment and
NR_SLAB_UNRECLAIMABLE_B is the only explicit stat for unreclaimable memory.
--
Sincerely yours,
Mike.
^ permalink raw reply [flat|nested] 318+ messages in thread
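For symmetry with the hunk quoted above, the release side presumably looks
like the following (a sketch; the series' actual helper may differ in
detail):

/* back out the vmstat adjustment, then uncharge the memcg */
static void secretmem_unaccount_pages(struct page *page, int order)
{
	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
			      -(PAGE_SIZE << order));
	memcg_kmem_uncharge_page(page, order);
}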
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
2021-01-25 21:36 ` Mike Rapoport
@ 2021-01-26 7:16 ` Michal Hocko
0 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 7:16 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Mon 25-01-21 23:36:18, Mike Rapoport wrote:
> On Mon, Jan 25, 2021 at 06:01:22PM +0100, Michal Hocko wrote:
> > On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
> > > From: Mike Rapoport <rppt@linux.ibm.com>
> > >
> > > Introduce "memfd_secret" system call with the ability to create memory
> > > areas visible only in the context of the owning process and not mapped not
> > > only to other processes but in the kernel page tables as well.
> > >
> > > The user will create a file descriptor using the memfd_secret() system
> > > call. The memory areas created by mmap() calls from this file descriptor
> > > will be unmapped from the kernel direct map and they will be only mapped in
> > > the page table of the owning mm.
> > >
> > > The secret memory remains accessible in the process context using uaccess
> > > primitives, but it is not accessible using direct/linear map addresses.
> > >
> > > Functions in the follow_page()/get_user_page() family will refuse to return
> > > a page that belongs to the secret memory area.
> > >
> > > A page that was a part of the secret memory area is cleared when it is
> > > freed.
> > >
> > > The following example demonstrates creation of a secret mapping (error
> > > handling is omitted):
> > >
> > > fd = memfd_secret(0);
> > > ftruncate(fd, MAP_SIZE);
> > > ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> >
> > I do not see any access control or permission model for this feature.
> > Is this feature generally safe to expose to anybody?
>
> The mappings obey the memlock limit. Besides, this feature has to be enabled
> explicitly at boot with a kernel parameter that sets the maximal amount of
> memory secretmem can consume.
Why is such a model sufficient and future proof? I mean, even when it has
to be enabled by an admin, it is still an all-or-nothing approach. The mlock
limit is not really useful because it is per mm rather than per user.
Is there any reason why this is allowed for non-privileged processes?
Maybe this has been discussed in the past, but is there any reason why
this cannot be done by a special device which would allow providing at
least some permission policy?
Please make sure to describe all those details in the changelog.
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 318+ messages in thread
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-01-26 7:16 ` Michal Hocko
0 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 7:16 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
Matthew Wilcox, Mark Rutland, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
On Mon 25-01-21 23:36:18, Mike Rapoport wrote:
> On Mon, Jan 25, 2021 at 06:01:22PM +0100, Michal Hocko wrote:
> > On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
> > > From: Mike Rapoport <rppt@linux.ibm.com>
> > >
> > > Introduce "memfd_secret" system call with the ability to create memory
> > > areas visible only in the context of the owning process and not mapped not
> > > only to other processes but in the kernel page tables as well.
> > >
> > > The user will create a file descriptor using the memfd_secret() system
> > > call. The memory areas created by mmap() calls from this file descriptor
> > > will be unmapped from the kernel direct map and they will be only mapped in
> > > the page table of the owning mm.
> > >
> > > The secret memory remains accessible in the process context using uaccess
> > > primitives, but it is not accessible using direct/linear map addresses.
> > >
> > > Functions in the follow_page()/get_user_page() family will refuse to return
> > > a page that belongs to the secret memory area.
> > >
> > > A page that was a part of the secret memory area is cleared when it is
> > > freed.
> > >
> > > The following example demonstrates creation of a secret mapping (error
> > > handling is omitted):
> > >
> > > fd = memfd_secret(0);
> > > ftruncate(fd, MAP_SIZE);
> > > ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> >
> > I do not see any access control or permission model for this feature.
> > Is this feature generally safe to anybody?
>
> The mappings obey memlock limit. Besides, this feature should be enabled
> explicitly at boot with the kernel parameter that says what is the maximal
> memory size secretmem can consume.
Why is such a model sufficient and future proof? I mean even when it has
to be enabled by an admin it is still all or nothing approach. Mlock
limit is not really useful because it is per mm rather than per user.
Is there any reason why this is allowed for non-privileged processes?
Maybe this has been discussed in the past but is there any reason why
this cannot be done by a special device which will allow to provide at
least some permission policy?
Please make sure to describe all those details in the changelog.
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 318+ messages in thread
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-25 21:38 ` Mike Rapoport
@ 2021-01-26 7:31 ` Michal Hocko
0 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 7:31 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Mon 25-01-21 23:38:17, Mike Rapoport wrote:
> On Mon, Jan 25, 2021 at 05:54:51PM +0100, Michal Hocko wrote:
> > On Thu 21-01-21 14:27:20, Mike Rapoport wrote:
> > > From: Mike Rapoport <rppt@linux.ibm.com>
> > >
> > > Account memory consumed by secretmem to memcg. The accounting is updated
> > > when the memory is actually allocated and freed.
> >
> > What does this mean?
>
> That means that the accounting is updated when secretmem does cma_alloc()
> and cma_release().
>
> > What are the lifetime rules?
>
> Hmm, what do you mean by lifetime rules?
OK, so let's start with reservation time (mmap time, right?), then the
instantiation time (faulting in memory). What if the calling process of
the former has a different memcg context than the latter? E.g. you may
send the fd to another process, or an fd inherited over fork may end up
in a different memcg.
What about the freeing path? E.g. when you punch a hole in the middle of
a mapping?
Please make sure to document all this.
> > [...]
> >
> > > +static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
> > > +{
> > > + int err;
> > > +
> > > + err = memcg_kmem_charge_page(page, gfp, order);
> > > + if (err)
> > > + return err;
> > > +
> > > + /*
> > > + * secretmem caches are unreclaimable kernel allocations, so treat
> > > + * them as unreclaimable slab memory for VM statistics purposes
> > > + */
> > > + mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> > > + PAGE_SIZE << order);
> >
> > A lot of memcg-accounted memory is not reclaimable. Why do you abuse the
> > SLAB counter when this is not slab-owned memory? Why do you use the
> > kmem accounting API when __GFP_ACCOUNT should give you the same without
> > these details?
>
> I cannot use __GFP_ACCOUNT because cma_alloc() does not use gfp.
Other people are working on changing this. But OK, I do see that this
can be done later, even if it looks rather awkward.
> Besides, kmem accounting with __GFP_ACCOUNT does not seem
> to update stats and there was an explicit request for statistics:
>
> https://lore.kernel.org/lkml/CALo0P13aq3GsONnZrksZNU9RtfhMsZXGWhK1n=xYJWQizCd4Zw@mail.gmail.com/
charging and stats are two different things. You can still take care of
your stats without explicitly using the charging API. But this is a mere
detail. It just caught my eye.
> As for (ab)using NR_SLAB_UNRECLAIMABLE_B, as it was already discussed here:
>
> https://lore.kernel.org/lkml/20201129172625.GD557259@kernel.org/
Those arguments should be a part of the changelog.
> I think that a dedicated stats counter would be too much at the moment and
> NR_SLAB_UNRECLAIMABLE_B is the only explicit stat for unreclaimable memory.
Why do you think it would be too much? If secret memory becomes a
prevalent memory user because it happens to back whole virtual
machines, then hiding it in an existing counter would be less than
useful.
Please note that all of this is user-visible stuff that will become a PITA
(if it is possible at all) to change later on. You should really have strong
arguments in your justification here.
--
Michal Hocko
SUSE Labs
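For readers tracing the lifetime question, the charge path quoted above has a
counterpart on the release side. Below is a minimal sketch of such an uncharge
helper, assuming the memcg_kmem_uncharge_page() and mod_lruvec_page_state()
APIs of this kernel generation; the patch's actual code may differ in detail.

    static void secretmem_unaccount_pages(struct page *page, int order)
    {
            /* reverse of secretmem_account_pages(): drop the stat ... */
            mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
                                  -(long)(PAGE_SIZE << order));
            /* ... and return the charge to the memory cgroup */
            memcg_kmem_uncharge_page(page, order);
    }

NR_SLAB_UNRECLAIMABLE_B is the counter that surfaces as SUnreclaim in
/proc/meminfo, which is why charging non-slab pages to it draws the objection
above.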
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
2021-01-26 7:16 ` Michal Hocko
@ 2021-01-26 8:33 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-26 8:33 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Tue, Jan 26, 2021 at 08:16:14AM +0100, Michal Hocko wrote:
> On Mon 25-01-21 23:36:18, Mike Rapoport wrote:
> > On Mon, Jan 25, 2021 at 06:01:22PM +0100, Michal Hocko wrote:
> > > On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
> > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > >
> > > > Introduce "memfd_secret" system call with the ability to create memory
> > > > areas visible only in the context of the owning process and not mapped not
> > > > only to other processes but in the kernel page tables as well.
> > > >
> > > > The user will create a file descriptor using the memfd_secret() system
> > > > call. The memory areas created by mmap() calls from this file descriptor
> > > > will be unmapped from the kernel direct map and they will be only mapped in
> > > > the page table of the owning mm.
> > > >
> > > > The secret memory remains accessible in the process context using uaccess
> > > > primitives, but it is not accessible using direct/linear map addresses.
> > > >
> > > > Functions in the follow_page()/get_user_page() family will refuse to return
> > > > a page that belongs to the secret memory area.
> > > >
> > > > A page that was a part of the secret memory area is cleared when it is
> > > > freed.
> > > >
> > > > The following example demonstrates creation of a secret mapping (error
> > > > handling is omitted):
> > > >
> > > > fd = memfd_secret(0);
> > > > ftruncate(fd, MAP_SIZE);
> > > > ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > >
> > > I do not see any access control or permission model for this feature.
> > > Is this feature generally safe to expose to anybody?
> >
> > The mappings obey the memlock limit. Besides, this feature has to be enabled
> > explicitly at boot with a kernel parameter that specifies the maximum
> > memory size secretmem can consume.
>
> Why is such a model sufficient and future-proof? I mean, even when it has
> to be enabled by an admin it is still an all-or-nothing approach. The mlock
> limit is not really useful because it is per-mm rather than per-user.
>
> Is there any reason why this is allowed for non-privileged processes?
> Maybe this has been discussed in the past, but is there any reason why
> this cannot be done by a special device which would allow providing at
> least some permission policy?
Why should this not be allowed for non-privileged processes? This behaves
similarly to mlocked memory, so I don't see a reason why secretmem should
have a different permission model.
> Please make sure to describe all those details in the changelog.
--
Sincerely yours,
Mike.
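To make the mlock analogy concrete: as noted above, a secretmem mapping is
charged against RLIMIT_MEMLOCK, so an unprivileged process cannot exceed its
own limit. A hedged userspace sketch of sizing a request to that limit
(purely illustrative; the kernel enforces the limit at mmap() time
regardless):

    #include <sys/resource.h>

    /* clamp a requested secret-area size to the caller's memlock limit;
     * advisory only -- the mmap() itself is what the kernel checks */
    static size_t clamp_to_memlock(size_t want)
    {
            struct rlimit rl;

            if (getrlimit(RLIMIT_MEMLOCK, &rl) < 0)
                    return 0;
            if (rl.rlim_cur == RLIM_INFINITY || want < rl.rlim_cur)
                    return want;
            return rl.rlim_cur;
    }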
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-26 7:31 ` Michal Hocko
@ 2021-01-26 8:56 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-26 8:56 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Tue, Jan 26, 2021 at 08:31:42AM +0100, Michal Hocko wrote:
> On Mon 25-01-21 23:38:17, Mike Rapoport wrote:
> > On Mon, Jan 25, 2021 at 05:54:51PM +0100, Michal Hocko wrote:
> > > On Thu 21-01-21 14:27:20, Mike Rapoport wrote:
> > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > >
> > > > Account memory consumed by secretmem to memcg. The accounting is updated
> > > > when the memory is actually allocated and freed.
> > >
> > > What does this mean?
> >
> > That means that the accounting is updated when secretmem does cma_alloc()
> > and cma_release().
> >
> > > What are the lifetime rules?
> >
> > Hmm, what do you mean by lifetime rules?
>
> OK, so let's start with reservation time (mmap time, right?), then the
> instantiation time (faulting in memory). What if the calling process of
> the former has a different memcg context than the latter? E.g. you may
> send the fd to another process, or an fd inherited over fork may end up
> in a different memcg.
>
> What about the freeing path? E.g. when you punch a hole in the middle of
> a mapping?
>
> Please make sure to document all this.
So, does something like this answer your question:
---
The memory cgroup is charged when secretmem allocates pages from CMA to
grow the large page pool during ->fault() processing.
The pages are uncharged from the memory cgroup when they are released back
to CMA at the time the secretmem inode is evicted.
---
> > > [...]
> > >
> > > > +static int secretmem_account_pages(struct page *page, gfp_t gfp, int order)
> > > > +{
> > > > + int err;
> > > > +
> > > > + err = memcg_kmem_charge_page(page, gfp, order);
> > > > + if (err)
> > > > + return err;
> > > > +
> > > > + /*
> > > > + * secretmem caches are unreclaimable kernel allocations, so treat
> > > > + * them as unreclaimable slab memory for VM statistics purposes
> > > > + */
> > > > + mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
> > > > + PAGE_SIZE << order);
> > >
> > > A lot of memcg-accounted memory is not reclaimable. Why do you abuse the
> > > SLAB counter when this is not slab-owned memory? Why do you use the
> > > kmem accounting API when __GFP_ACCOUNT should give you the same without
> > > these details?
> >
> > I cannot use __GFP_ACCOUNT because cma_alloc() does not use gfp.
>
> Other people are working on changing this. But OK, I do see that this
> can be done later, even if it looks rather awkward.
>
> > Besides, kmem accounting with __GFP_ACCOUNT does not seem
> > to update stats and there was an explicit request for statistics:
> >
> > https://lore.kernel.org/lkml/CALo0P13aq3GsONnZrksZNU9RtfhMsZXGWhK1n=xYJWQizCd4Zw@mail.gmail.com/
>
> charging and stats are two different things. You can still take care of
> your stats without explicitly using the charging API. But this is a mere
> detail. It just caught my eye.
>
> > As for (ab)using NR_SLAB_UNRECLAIMABLE_B, as it was already discussed here:
> >
> > https://lore.kernel.org/lkml/20201129172625.GD557259@kernel.org/
>
> Those arguments should be a part of the changelog.
>
> > I think that a dedicated stats counter would be too much at the moment and
> > NR_SLAB_UNRECLAIMABLE_B is the only explicit stat for unreclaimable memory.
>
> Why do you think it would be too much? If secret memory becomes a
> prevalent memory user because it happens to back whole virtual
> machines, then hiding it in an existing counter would be less than
> useful.
>
> Please note that all of this is user-visible stuff that will become a PITA
> (if it is possible at all) to change later on. You should really have strong
> arguments in your justification here.
I think that adding a dedicated counter for a few 2M areas per container is
not worth the churn.
When we get to the point that secretmem can be used to back the entire
guest memory, we can add a new counter; that does not seem like a PITA to me.
--
Sincerely yours,
Mike.
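The charge and uncharge points described above can be spelled out against
Michal's fd-passing scenario. In this sketch (error handling omitted as in
the original example; LEN and the raw syscall use are illustrative
assumptions), the page is charged to the memcg of whichever task faults it
in, and uncharged only when the inode is evicted:

    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <sys/wait.h>

    #define LEN 4096UL  /* hypothetical */

    int main(void)
    {
            int fd = syscall(__NR_memfd_secret, 0); /* nothing charged yet */

            ftruncate(fd, LEN);     /* reservation only, still no pages */

            if (fork() == 0) {
                    /* if the child was moved to another memcg, the CMA page
                     * allocated by the fault below is charged there, not to
                     * the parent that created the fd */
                    char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
                    p[0] = 1;       /* ->fault(): allocate and charge */
                    _exit(0);
            }
            wait(NULL);
            close(fd);      /* last reference gone: eviction uncharges */
            return 0;
    }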
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
2021-01-26 8:33 ` Mike Rapoport
@ 2021-01-26 9:00 ` Michal Hocko
0 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 9:00 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Tue 26-01-21 10:33:11, Mike Rapoport wrote:
> On Tue, Jan 26, 2021 at 08:16:14AM +0100, Michal Hocko wrote:
> > On Mon 25-01-21 23:36:18, Mike Rapoport wrote:
> > > On Mon, Jan 25, 2021 at 06:01:22PM +0100, Michal Hocko wrote:
> > > > On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
> > > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > >
> > > > > Introduce a "memfd_secret" system call with the ability to create
> > > > > memory areas visible only in the context of the owning process and
> > > > > mapped neither to other processes nor in the kernel page tables.
> > > > >
> > > > > The user will create a file descriptor using the memfd_secret() system
> > > > > call. The memory areas created by mmap() calls from this file descriptor
> > > > > will be unmapped from the kernel direct map and they will be only mapped in
> > > > > the page table of the owning mm.
> > > > >
> > > > > The secret memory remains accessible in the process context using uaccess
> > > > > primitives, but it is not accessible using direct/linear map addresses.
> > > > >
> > > > > Functions in the follow_page()/get_user_page() family will refuse to return
> > > > > a page that belongs to the secret memory area.
> > > > >
> > > > > A page that was a part of the secret memory area is cleared when it is
> > > > > freed.
> > > > >
> > > > > The following example demonstrates creation of a secret mapping (error
> > > > > handling is omitted):
> > > > >
> > > > > fd = memfd_secret(0);
> > > > > ftruncate(fd, MAP_SIZE);
> > > > > ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > > >
> > > > I do not see any access control or permission model for this feature.
> > > > Is this feature generally safe to anybody?
> > >
> > > The mappings obey the memlock limit. Besides, this feature must be enabled
> > > explicitly at boot with a kernel parameter that sets the maximal
> > > memory size secretmem can consume.
> >
> > Why is such a model sufficient and future proof? I mean even when it has
> > to be enabled by an admin it is still an all or nothing approach. The mlock
> > limit is not really useful because it is per mm rather than per user.
> >
> > Is there any reason why this is allowed for non-privileged processes?
> > Maybe this has been discussed in the past, but is there any reason why
> > this cannot be done by a special device which would allow providing at
> > least some permission policy?
>
> Why should this not be allowed for non-privileged processes? This behaves
> similarly to mlocked memory, so I don't see a reason why secretmem should
> have a different permissions model.
Because apart from the reclaim aspect it fragments the direct mapping
IIUC. That might have an impact on all others, right?
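Stepping back to the example quoted at the top: filled out with the error
handling the changelog omits, it would look like the userspace sketch below.
The __NR_memfd_secret number and the 2M size are assumptions for
illustration; there is no libc wrapper for the new syscall.

	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	#define MAP_SIZE (2UL << 20)	/* one 2M area, for illustration */

	int main(void)
	{
		/* No libc wrapper; __NR_memfd_secret is arch-specific. */
		int fd = syscall(__NR_memfd_secret, 0);
		if (fd < 0) {
			perror("memfd_secret");
			return 1;
		}
		if (ftruncate(fd, MAP_SIZE) < 0) {
			perror("ftruncate");
			return 1;
		}
		char *ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
				 MAP_SHARED, fd, 0);
		if (ptr == MAP_FAILED) {
			perror("mmap");	/* e.g. memlock limit exceeded */
			return 1;
		}
		strcpy(ptr, "only mapped in this mm");
		munmap(ptr, MAP_SIZE);
		close(fd);
		return 0;
	}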
--
Michal Hocko
SUSE Labs
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-26 8:56 ` Mike Rapoport
@ 2021-01-26 9:15 ` Michal Hocko
0 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 9:15 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Tue 26-01-21 10:56:54, Mike Rapoport wrote:
> On Tue, Jan 26, 2021 at 08:31:42AM +0100, Michal Hocko wrote:
> > On Mon 25-01-21 23:38:17, Mike Rapoport wrote:
> > > On Mon, Jan 25, 2021 at 05:54:51PM +0100, Michal Hocko wrote:
> > > > On Thu 21-01-21 14:27:20, Mike Rapoport wrote:
> > > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > >
> > > > > Account memory consumed by secretmem to memcg. The accounting is updated
> > > > > when the memory is actually allocated and freed.
> > > >
> > > > What does this mean?
> > >
> > > That means that the accounting is updated when secretmem does cma_alloc()
> > > and cma_release().
> > >
> > > > What are the lifetime rules?
> > >
> > > Hmm, what do you mean by lifetime rules?
> >
> > OK, so let's start with reservation time (mmap time, right?), then the
> > instantiation time (faulting in memory). What if the calling process of
> > the former has a different memcg context than the latter? E.g. when you
> > send your fd, or an fd inherited over fork moves to a different memcg.
> >
> > What about freeing path? E.g. when you punch a hole in the middle of
> > a mapping?
> >
> > Please make sure to document all this.
>
> So, does something like this answer your question:
>
> ---
> The memory cgroup is charged when secretmem allocates pages from CMA to
> grow the large page pool during ->fault() processing.
OK, so that is when the memory is faulted in. Good, that is a standard
model we have. The memcg context of the creator of the secret memory is
not really important. So whoever created it is not charged.
> The pages are uncharged from the memory cgroup when they are released back to
> CMA at the time the secretmem inode is evicted.
> ---
so effectively when they are unmapped, right? This is similar to
anonymous memory.
As I've said it would be really great to have this life cycle documented
properly.
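To make that life cycle concrete, the freeing path described above could be
summarized by the following sketch (assumed shape, condensed from the
discussion; secretmem_cma and the helper name are illustrative, not the
literal patch):

	/* On inode eviction each page goes back to CMA and is uncharged. */
	static void secretmem_free_page(struct page *page)
	{
		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -PAGE_SIZE);
		memcg_kmem_uncharge_page(page, 0);
		cma_release(secretmem_cma, page, 1);
	}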
> > Please note that all of this is user visible stuff that will become a PITA
> > (if at all possible) to change later on. You should really have strong
> > arguments in your justification here.
>
> I think that adding a dedicated counter for a few 2M areas per container is
> not worth the churn.
What kind of churn do you have in mind? What is the downside?
> When we get to the point that secretmem can be used to back the entire
> guest memory we can add a new counter, and that does not seem like a PITA to me.
What really prevents a larger use with this implementation?
--
Michal Hocko
SUSE Labs
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
2021-01-26 9:00 ` Michal Hocko
@ 2021-01-26 9:20 ` Mike Rapoport
0 siblings, 0 replies; 318+ messages in thread
From: Mike Rapoport @ 2021-01-26 9:20 UTC (permalink / raw)
To: Michal Hocko
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Tue, Jan 26, 2021 at 10:00:13AM +0100, Michal Hocko wrote:
> On Tue 26-01-21 10:33:11, Mike Rapoport wrote:
> > On Tue, Jan 26, 2021 at 08:16:14AM +0100, Michal Hocko wrote:
> > > On Mon 25-01-21 23:36:18, Mike Rapoport wrote:
> > > > On Mon, Jan 25, 2021 at 06:01:22PM +0100, Michal Hocko wrote:
> > > > > On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
> > > > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > > >
> > > > > > Introduce a "memfd_secret" system call with the ability to create
> > > > > > memory areas visible only in the context of the owning process and
> > > > > > mapped neither to other processes nor in the kernel page tables.
> > > > > >
> > > > > > The user will create a file descriptor using the memfd_secret() system
> > > > > > call. The memory areas created by mmap() calls from this file descriptor
> > > > > > will be unmapped from the kernel direct map and they will be only mapped in
> > > > > > the page table of the owning mm.
> > > > > >
> > > > > > The secret memory remains accessible in the process context using uaccess
> > > > > > primitives, but it is not accessible using direct/linear map addresses.
> > > > > >
> > > > > > Functions in the follow_page()/get_user_page() family will refuse to return
> > > > > > a page that belongs to the secret memory area.
> > > > > >
> > > > > > A page that was a part of the secret memory area is cleared when it is
> > > > > > freed.
> > > > > >
> > > > > > The following example demonstrates creation of a secret mapping (error
> > > > > > handling is omitted):
> > > > > >
> > > > > > fd = memfd_secret(0);
> > > > > > ftruncate(fd, MAP_SIZE);
> > > > > > ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > > > >
> > > > > I do not see any access control or permission model for this feature.
> > > > > Is this feature generally safe to anybody?
> > > >
> > > > The mappings obey the memlock limit. Besides, this feature must be enabled
> > > > explicitly at boot with a kernel parameter that sets the maximal
> > > > memory size secretmem can consume.
> > >
> > > Why is such a model sufficient and future proof? I mean even when it has
> > > to be enabled by an admin it is still an all or nothing approach. The mlock
> > > limit is not really useful because it is per mm rather than per user.
> > >
> > > Is there any reason why this is allowed for non-privileged processes?
> > > Maybe this has been discussed in the past, but is there any reason why
> > > this cannot be done by a special device which would allow providing at
> > > least some permission policy?
> >
> > Why should this not be allowed for non-privileged processes? This behaves
> > similarly to mlocked memory, so I don't see a reason why secretmem should
> > have a different permissions model.
>
> Because apart from the reclaim aspect it fragments the direct mapping
> IIUC. That might have an impact on all others, right?
It does fragment the direct map, but for now it only splits 1G pages into 2M
pages, and as was discussed several times already it's not that clear which
page size in the direct map is best; this is very much workload
dependent.
These are the results of the benchmarks I've run with the default direct
mapping covered with 1G pages, with 1G pages disabled via "nogbpages" on
the kernel command line, and with the entire direct map forced to use 4K
pages by a simple patch to arch/x86/mm/init.c:
https://docs.google.com/spreadsheets/d/1tdD-cu8e93vnfGsTFxZ5YdaEfs2E1GELlvWNOGkJV2U/edit?usp=sharing
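As an aside, the resulting split is directly observable: on x86,
/proc/meminfo exposes the DirectMap4k/DirectMap2M/DirectMap1G breakdown.
A trivial reader, for illustration only:

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		char line[256];
		FILE *f = fopen("/proc/meminfo", "r");

		if (!f)
			return 1;
		/* Print only the direct-map page-size counters. */
		while (fgets(line, sizeof(line), f))
			if (strncmp(line, "DirectMap", 9) == 0)
				fputs(line, stdout);
		fclose(f);
		return 0;
	}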
--
Sincerely yours,
Mike.
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
2021-01-26 9:00 ` Michal Hocko
@ 2021-01-26 9:20 ` Michal Hocko
0 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 9:20 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Tue 26-01-21 10:00:14, Michal Hocko wrote:
> On Tue 26-01-21 10:33:11, Mike Rapoport wrote:
> > On Tue, Jan 26, 2021 at 08:16:14AM +0100, Michal Hocko wrote:
> > > On Mon 25-01-21 23:36:18, Mike Rapoport wrote:
> > > > On Mon, Jan 25, 2021 at 06:01:22PM +0100, Michal Hocko wrote:
> > > > > On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
> > > > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > > >
> > > > > > Introduce a "memfd_secret" system call with the ability to create
> > > > > > memory areas visible only in the context of the owning process and
> > > > > > mapped neither to other processes nor in the kernel page tables.
> > > > > >
> > > > > > The user will create a file descriptor using the memfd_secret() system
> > > > > > call. The memory areas created by mmap() calls from this file descriptor
> > > > > > will be unmapped from the kernel direct map and they will be only mapped in
> > > > > > the page table of the owning mm.
> > > > > >
> > > > > > The secret memory remains accessible in the process context using uaccess
> > > > > > primitives, but it is not accessible using direct/linear map addresses.
> > > > > >
> > > > > > Functions in the follow_page()/get_user_page() family will refuse to return
> > > > > > a page that belongs to the secret memory area.
> > > > > >
> > > > > > A page that was a part of the secret memory area is cleared when it is
> > > > > > freed.
> > > > > >
> > > > > > The following example demonstrates creation of a secret mapping (error
> > > > > > handling is omitted):
> > > > > >
> > > > > > fd = memfd_secret(0);
> > > > > > ftruncate(fd, MAP_SIZE);
> > > > > > ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > > > >
> > > > > I do not see any access control or permission model for this feature.
> > > > > Is this feature generally safe to anybody?
> > > >
> > > > The mappings obey the memlock limit. Besides, this feature must be enabled
> > > > explicitly at boot with a kernel parameter that sets the maximal
> > > > memory size secretmem can consume.
> > >
> > > Why is such a model sufficient and future proof? I mean even when it has
> > > to be enabled by an admin it is still an all or nothing approach. The mlock
> > > limit is not really useful because it is per mm rather than per user.
> > >
> > > Is there any reason why this is allowed for non-privileged processes?
> > > Maybe this has been discussed in the past, but is there any reason why
> > > this cannot be done by a special device which would allow providing at
> > > least some permission policy?
> >
> > Why should this not be allowed for non-privileged processes? This behaves
> > similarly to mlocked memory, so I don't see a reason why secretmem should
> > have a different permissions model.
>
> Because apart from the reclaim aspect it fragments the direct mapping
> IIUC. That might have an impact on all others, right?
Also, I forgot to mention that you rely on contiguous allocations, and
those can become a very scarce resource, so what prevents one abuser
from using them all and denying access to others? And unless I am missing
something, an allocation failure would lead to OOM, which cannot really help
because the oom killer cannot compensate for the CMA reservation.
--
Michal Hocko
SUSE Labs
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-01-26 9:20 ` Michal Hocko
0 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 9:20 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dan Williams, Dave Hansen, David Hildenbrand, Elena Reshetova,
H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
Matthew Wilcox, Mark Rutland, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
On Tue 26-01-21 10:00:14, Michal Hocko wrote:
> On Tue 26-01-21 10:33:11, Mike Rapoport wrote:
> > On Tue, Jan 26, 2021 at 08:16:14AM +0100, Michal Hocko wrote:
> > > On Mon 25-01-21 23:36:18, Mike Rapoport wrote:
> > > > On Mon, Jan 25, 2021 at 06:01:22PM +0100, Michal Hocko wrote:
> > > > > On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
> > > > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > > >
> > > > > > Introduce "memfd_secret" system call with the ability to create memory
> > > > > > areas visible only in the context of the owning process and not mapped not
> > > > > > only to other processes but in the kernel page tables as well.
> > > > > >
> > > > > > The user will create a file descriptor using the memfd_secret() system
> > > > > > call. The memory areas created by mmap() calls from this file descriptor
> > > > > > will be unmapped from the kernel direct map and they will be only mapped in
> > > > > > the page table of the owning mm.
> > > > > >
> > > > > > The secret memory remains accessible in the process context using uaccess
> > > > > > primitives, but it is not accessible using direct/linear map addresses.
> > > > > >
> > > > > > Functions in the follow_page()/get_user_page() family will refuse to return
> > > > > > a page that belongs to the secret memory area.
> > > > > >
> > > > > > A page that was a part of the secret memory area is cleared when it is
> > > > > > freed.
> > > > > >
> > > > > > The following example demonstrates creation of a secret mapping (error
> > > > > > handling is omitted):
> > > > > >
> > > > > > fd = memfd_secret(0);
> > > > > > ftruncate(fd, MAP_SIZE);
> > > > > > ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > > > >
> > > > > I do not see any access control or permission model for this feature.
> > > > > Is this feature generally safe to anybody?
> > > >
> > > > The mappings obey memlock limit. Besides, this feature should be enabled
> > > > explicitly at boot with the kernel parameter that says what is the maximal
> > > > memory size secretmem can consume.
> > >
> > > Why is such a model sufficient and future proof? I mean even when it has
> > > to be enabled by an admin it is still all or nothing approach. Mlock
> > > limit is not really useful because it is per mm rather than per user.
> > >
> > > Is there any reason why this is allowed for non-privileged processes?
> > > Maybe this has been discussed in the past but is there any reason why
> > > this cannot be done by a special device which will allow to provide at
> > > least some permission policy?
> >
> > Why this should not be allowed for non-privileged processes? This behaves
> > similarly to mlocked memory, so I don't see a reason why secretmem should
> > have different permissions model.
>
> Because appart from the reclaim aspect it fragments the direct mapping
> IIUC. That might have an impact on all others, right?
Also forgot to mention that you rely on a contiguous allocations and
that can become a very scarce resource so what does prevent one abuser
from using it all and deny the access to others. And unless I am missing
something allocation failure would lead to OOM which cannot really help
because the oom killer cannot compensate for the CMA reservation.
--
Michal Hocko
SUSE Labs
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-01-26 9:49 ` Michal Hocko
0 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 9:49 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Tue 26-01-21 11:20:11, Mike Rapoport wrote:
> On Tue, Jan 26, 2021 at 10:00:13AM +0100, Michal Hocko wrote:
> > On Tue 26-01-21 10:33:11, Mike Rapoport wrote:
> > > On Tue, Jan 26, 2021 at 08:16:14AM +0100, Michal Hocko wrote:
> > > > On Mon 25-01-21 23:36:18, Mike Rapoport wrote:
> > > > > On Mon, Jan 25, 2021 at 06:01:22PM +0100, Michal Hocko wrote:
> > > > > > On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
> > > > > > > From: Mike Rapoport <rppt@linux.ibm.com>
> > > > > > >
> > > > > > > Introduce the "memfd_secret" system call with the ability to create memory
> > > > > > > areas that are visible only in the context of the owning process and are
> > > > > > > not mapped by other processes or in the kernel page tables.
> > > > > > >
> > > > > > > The user will create a file descriptor using the memfd_secret() system
> > > > > > > call. The memory areas created by mmap() calls from this file descriptor
> > > > > > > will be unmapped from the kernel direct map and will be mapped only in
> > > > > > > the page table of the owning mm.
> > > > > > >
> > > > > > > The secret memory remains accessible in the process context using uaccess
> > > > > > > primitives, but it is not accessible using direct/linear map addresses.
> > > > > > >
> > > > > > > Functions in the follow_page()/get_user_page() family will refuse to return
> > > > > > > a page that belongs to the secret memory area.
> > > > > > >
> > > > > > > A page that was a part of the secret memory area is cleared when it is
> > > > > > > freed.
> > > > > > >
> > > > > > > The following example demonstrates creation of a secret mapping (error
> > > > > > > handling is omitted):
> > > > > > >
> > > > > > > fd = memfd_secret(0);
> > > > > > > ftruncate(fd, MAP_SIZE);
> > > > > > > ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > > > > >
> > > > > > I do not see any access control or permission model for this feature.
> > > > > > Is this feature generally safe to anybody?
> > > > >
> > > > > The mappings obey the memlock limit. Besides, this feature should be enabled
> > > > > explicitly at boot with a kernel parameter that sets the maximal memory
> > > > > size secretmem can consume.
> > > >
> > > > Why is such a model sufficient and future-proof? I mean, even when it has
> > > > to be enabled by an admin, it is still an all-or-nothing approach. The mlock
> > > > limit is not really useful because it is per mm rather than per user.
> > > >
> > > > Is there any reason why this is allowed for non-privileged processes?
> > > > Maybe this has been discussed in the past but is there any reason why
> > > > this cannot be done by a special device which will allow to provide at
> > > > least some permission policy?
> > >
> > > Why should this not be allowed for non-privileged processes? This behaves
> > > similarly to mlocked memory, so I don't see a reason why secretmem should
> > > have a different permissions model.
> >
> > Because apart from the reclaim aspect it fragments the direct mapping,
> > IIUC. That might have an impact on all others, right?
>
> It does fragment the direct map but, first, it only splits 1G pages into
> 2M pages, and, as was discussed several times already, it's not that clear
> which page size in the direct map is the best; this is very much workload
> dependent.
I do appreciate this has been discussed, but this changelog is not
specific on any of that reasoning, and I am pretty sure nobody will
remember the details a few years from now. Also, some numbers would be
appropriate.
> These are the results of the benchmarks I've run with the default direct
> mapping covered with 1G pages, with 1G pages disabled using "nogbpages" on
> the kernel command line, and with the entire direct map forced to use 4K
> pages via a simple patch to arch/x86/mm/init.c.
>
> https://docs.google.com/spreadsheets/d/1tdD-cu8e93vnfGsTFxZ5YdaEfs2E1GELlvWNOGkJV2U/edit?usp=sharing
A good start for the data I am asking for above.
--
Michal Hocko
SUSE Labs
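For readers who want to reproduce the measurement behind this exchange: on
x86-64 the kernel exposes the direct-map page-size breakdown in
/proc/meminfo as DirectMap4k, DirectMap2M and, when gigabyte pages are in
use, DirectMap1G. A small reader along the following lines, run before and
after a secretmem workload, shows how much 1G/2M coverage has been split;
the program is an illustration, not part of the series.

/*
 * Sketch: print the x86-64 direct-map breakdown from /proc/meminfo.
 * Each PUD/PMD split moves kilobytes from DirectMap1G/DirectMap2M into
 * the smaller counters, so diffing two runs around a secretmem workload
 * makes the fragmentation visible.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[128];

	if (!f) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), f))
		if (strncmp(line, "DirectMap", 9) == 0)
			fputs(line, stdout);	/* e.g. "DirectMap2M: ... kB" */
	fclose(f);
	return 0;
}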
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-01-26 9:53 ` David Hildenbrand
0 siblings, 0 replies; 318+ messages in thread
From: David Hildenbrand @ 2021-01-26 9:53 UTC (permalink / raw)
To: Michal Hocko, Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On 26.01.21 10:49, Michal Hocko wrote:
> On Tue 26-01-21 11:20:11, Mike Rapoport wrote:
>> On Tue, Jan 26, 2021 at 10:00:13AM +0100, Michal Hocko wrote:
>>> On Tue 26-01-21 10:33:11, Mike Rapoport wrote:
>>>> On Tue, Jan 26, 2021 at 08:16:14AM +0100, Michal Hocko wrote:
>>>>> On Mon 25-01-21 23:36:18, Mike Rapoport wrote:
>>>>>> On Mon, Jan 25, 2021 at 06:01:22PM +0100, Michal Hocko wrote:
>>>>>>> On Thu 21-01-21 14:27:18, Mike Rapoport wrote:
>>>>>>>> From: Mike Rapoport <rppt@linux.ibm.com>
>>>>>>>>
>>>>>>>> Introduce the "memfd_secret" system call with the ability to create memory
>>>>>>>> areas that are visible only in the context of the owning process and are
>>>>>>>> not mapped by other processes or in the kernel page tables.
>>>>>>>>
>>>>>>>> The user will create a file descriptor using the memfd_secret() system
>>>>>>>> call. The memory areas created by mmap() calls from this file descriptor
>>>>>>>> will be unmapped from the kernel direct map and will be mapped only in
>>>>>>>> the page table of the owning mm.
>>>>>>>>
>>>>>>>> The secret memory remains accessible in the process context using uaccess
>>>>>>>> primitives, but it is not accessible using direct/linear map addresses.
>>>>>>>>
>>>>>>>> Functions in the follow_page()/get_user_page() family will refuse to return
>>>>>>>> a page that belongs to the secret memory area.
>>>>>>>>
>>>>>>>> A page that was a part of the secret memory area is cleared when it is
>>>>>>>> freed.
>>>>>>>>
>>>>>>>> The following example demonstrates creation of a secret mapping (error
>>>>>>>> handling is omitted):
>>>>>>>>
>>>>>>>> fd = memfd_secret(0);
>>>>>>>> ftruncate(fd, MAP_SIZE);
>>>>>>>> ptr = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>>>>>>>
>>>>>>> I do not see any access control or permission model for this feature.
>>>>>>> Is this feature generally safe to anybody?
>>>>>>
>>>>>> The mappings obey the memlock limit. Besides, this feature should be enabled
>>>>>> explicitly at boot with a kernel parameter that sets the maximal memory
>>>>>> size secretmem can consume.
>>>>>
>>>>> Why is such a model sufficient and future-proof? I mean, even when it has
>>>>> to be enabled by an admin, it is still an all-or-nothing approach. The mlock
>>>>> limit is not really useful because it is per mm rather than per user.
>>>>>
>>>>> Is there any reason why this is allowed for non-privileged processes?
>>>>> Maybe this has been discussed in the past but is there any reason why
>>>>> this cannot be done by a special device which will allow to provide at
>>>>> least some permission policy?
>>>>
>>>> Why should this not be allowed for non-privileged processes? This behaves
>>>> similarly to mlocked memory, so I don't see a reason why secretmem should
>>>> have a different permissions model.
>>>
>>> Because apart from the reclaim aspect it fragments the direct mapping,
>>> IIUC. That might have an impact on all others, right?
>>
>> It does fragment the direct map but, first, it only splits 1G pages into
>> 2M pages, and, as was discussed several times already, it's not that clear
>> which page size in the direct map is the best; this is very much workload
>> dependent.
>
> I do appreciate this has been discussed, but this changelog is not
> specific on any of that reasoning, and I am pretty sure nobody will
> remember the details a few years from now. Also, some numbers would be
> appropriate.
>
>> These are the results of the benchmarks I've run with the default direct
>> mapping covered with 1G pages, with 1G pages disabled using "nogbpages" on
>> the kernel command line, and with the entire direct map forced to use 4K
>> pages via a simple patch to arch/x86/mm/init.c.
>>
>> https://docs.google.com/spreadsheets/d/1tdD-cu8e93vnfGsTFxZ5YdaEfs2E1GELlvWNOGkJV2U/edit?usp=sharing
>
> A good start for the data I am asking for above.
I assume you've seen the benchmark results provided by Xing Zhengjun
https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/
--
Thanks,
David / dhildenb
* Re: [PATCH v16 06/11] mm: introduce memfd_secret system call to create "secret" memory areas
@ 2021-01-26 10:19 ` Michal Hocko
0 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 10:19 UTC (permalink / raw)
To: David Hildenbrand
Cc: Mike Rapoport, Andrew Morton, Alexander Viro, Andy Lutomirski,
Arnd Bergmann, Borislav Petkov, Catalin Marinas,
Christopher Lameter, Dave Hansen, Elena Reshetova,
H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
Matthew Wilcox, Mark Rutland, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
On Tue 26-01-21 10:53:08, David Hildenbrand wrote:
[...]
> I assume you've seen the benchmark results provided by Xing Zhengjun
>
> https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/
I was not. Thanks for the pointer. I will have a look.
--
Michal Hocko
SUSE Labs
* Re: [PATCH v16 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation
@ 2021-01-26 11:46 ` Michal Hocko
-1 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 11:46 UTC (permalink / raw)
To: Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, David Hildenbrand, Elena Reshetova, H. Peter Anvin,
Ingo Molnar, James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
> From: Mike Rapoport <rppt@linux.ibm.com>
>
> Removing a PAGE_SIZE page from the direct map every time such a page is
> allocated for a secret memory mapping will cause severe fragmentation of
> the direct map. This fragmentation can be reduced by using PMD-size pages
> as a pool of small pages for secret memory mappings.
>
> Add a gen_pool per secretmem inode and lazily populate this pool with
> PMD-size pages.
>
> As pages allocated by secretmem become unmovable, use CMA to back large
> page caches so that the page allocator won't be surprised by a failing
> attempt to migrate these pages.
>
> The CMA area used by secretmem is controlled by the "secretmem=" kernel
> parameter. This allows explicit control over the memory available for
> secretmem and provides upper hard limit for secretmem consumption.
OK, so I have finally had a closer look at this, and it is really not
acceptable. I have already mentioned this in a response to another patch,
but any task is able to deprive other tasks of access to secret memory
and trigger the OOM killer, which wouldn't really ever recover and could
potentially panic the system. Now, you could be less drastic and only
SIGBUS on fault, but that would still be quite terrible. There is a very
good reason why hugetlb implements its non-trivial reservation system: to
avoid exactly these problems.
So unless I am really misreading the code
Nacked-by: Michal Hocko <mhocko@suse.com>
That doesn't mean I reject the whole idea. There are some details to
sort out as mentioned elsewhere, but you cannot really depend on a
pre-allocated pool that can fail at fault time like that.
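The objection can be made concrete with a runnable userspace toy; it is
not kernel code and every name in it is invented. A fixed pool stands in
for the secretmem= CMA reservation: with allocate-at-fault, an
overcommitted mapping succeeds at mmap() time and fails only when touched,
which is the OOM path described above, while hugetlb-style reserve-at-mmap
rejects the overcommit up front with a clean error.

/*
 * Toy contrast of allocate-at-fault vs reserve-at-mmap over a fixed
 * pool that stands in for the secretmem= CMA reservation.
 */
#include <stdbool.h>
#include <stdio.h>

#define POOL_PAGES 8			/* pretend CMA reservation */

static int pool_free = POOL_PAGES;
static int pool_reserved;

/* allocate-at-fault: "mmap" always succeeds, first touch may not */
static bool fault_in_page(void)
{
	if (pool_free == 0)
		return false;		/* fault-time failure -> OOM path */
	pool_free--;
	return true;
}

/* reserve-at-mmap: hugetlb-style up-front accounting */
static bool mmap_reserve(int pages)
{
	if (pool_reserved + pages > POOL_PAGES)
		return false;		/* clean ENOMEM at mmap() time */
	pool_reserved += pages;
	return true;
}

int main(void)
{
	int i;

	/* abuser and victim each "mmap" 6 pages; both calls succeed */
	for (i = 0; i < 6; i++)
		fault_in_page();	/* abuser touches all of its pages */
	for (i = 0; i < 6; i++)
		if (!fault_in_page())	/* victim runs the pool dry */
			printf("victim: fault-time failure on page %d\n", i);

	/* with up-front reservation the second mmap() fails instead */
	printf("abuser reserve: %s\n", mmap_reserve(6) ? "ok" : "ENOMEM");
	printf("victim reserve: %s\n", mmap_reserve(6) ? "ok" : "ENOMEM");
	return 0;
}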
> Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
> Cc: Alexander Viro <viro@zeniv.linux.org.uk>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Borislav Petkov <bp@alien8.de>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christopher Lameter <cl@linux.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Elena Reshetova <elena.reshetova@intel.com>
> Cc: Hagen Paul Pfeifer <hagen@jauu.net>
> Cc: "H. Peter Anvin" <hpa@zytor.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: James Bottomley <jejb@linux.ibm.com>
> Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Michael Kerrisk <mtk.manpages@gmail.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Palmer Dabbelt <palmerdabbelt@google.com>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
> Cc: Roman Gushchin <guro@fb.com>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Shuah Khan <shuah@kernel.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Tycho Andersen <tycho@tycho.ws>
> Cc: Will Deacon <will@kernel.org>
> ---
> mm/Kconfig | 2 +
> mm/secretmem.c | 175 +++++++++++++++++++++++++++++++++++++++++--------
> 2 files changed, 150 insertions(+), 27 deletions(-)
>
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 5f8243442f66..ec35bf406439 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -874,5 +874,7 @@ config KMAP_LOCAL
>
> config SECRETMEM
> def_bool ARCH_HAS_SET_DIRECT_MAP && !EMBEDDED
> + select GENERIC_ALLOCATOR
> + select CMA
>
> endmenu
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index 904351d12c33..469211c7cc3a 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -7,12 +7,15 @@
>
> #include <linux/mm.h>
> #include <linux/fs.h>
> +#include <linux/cma.h>
> #include <linux/mount.h>
> #include <linux/memfd.h>
> #include <linux/bitops.h>
> #include <linux/printk.h>
> #include <linux/pagemap.h>
> +#include <linux/genalloc.h>
> #include <linux/syscalls.h>
> +#include <linux/memblock.h>
> #include <linux/pseudo_fs.h>
> #include <linux/secretmem.h>
> #include <linux/set_memory.h>
> @@ -35,24 +38,94 @@
> #define SECRETMEM_FLAGS_MASK SECRETMEM_MODE_MASK
>
> struct secretmem_ctx {
> + struct gen_pool *pool;
> unsigned int mode;
> };
>
> -static struct page *secretmem_alloc_page(gfp_t gfp)
> +static struct cma *secretmem_cma;
> +
> +static int secretmem_pool_increase(struct secretmem_ctx *ctx, gfp_t gfp)
> {
> + unsigned long nr_pages = (1 << PMD_PAGE_ORDER);
> + struct gen_pool *pool = ctx->pool;
> + unsigned long addr;
> + struct page *page;
> + int i, err;
> +
> + page = cma_alloc(secretmem_cma, nr_pages, PMD_SIZE, gfp & __GFP_NOWARN);
> + if (!page)
> + return -ENOMEM;
> +
> /*
> - * FIXME: use a cache of large pages to reduce the direct map
> - * fragmentation
> + * clear the data left from the previous user before dropping the
> + * pages from the direct map
> */
> - return alloc_page(gfp | __GFP_ZERO);
> + for (i = 0; i < nr_pages; i++)
> + clear_highpage(page + i);
> +
> + err = set_direct_map_invalid_noflush(page, nr_pages);
> + if (err)
> + goto err_cma_release;
> +
> + addr = (unsigned long)page_address(page);
> + err = gen_pool_add(pool, addr, PMD_SIZE, NUMA_NO_NODE);
> + if (err)
> + goto err_set_direct_map;
> +
> + flush_tlb_kernel_range(addr, addr + PMD_SIZE);
> +
> + return 0;
> +
> +err_set_direct_map:
> + /*
> + * If a split of PUD-size page was required, it already happened
> + * when we marked the pages invalid which guarantees that this call
> + * won't fail
> + */
> + set_direct_map_default_noflush(page, nr_pages);
> +err_cma_release:
> + cma_release(secretmem_cma, page, nr_pages);
> + return err;
> +}
> +
> +static void secretmem_free_page(struct secretmem_ctx *ctx, struct page *page)
> +{
> + unsigned long addr = (unsigned long)page_address(page);
> + struct gen_pool *pool = ctx->pool;
> +
> + gen_pool_free(pool, addr, PAGE_SIZE);
> +}
> +
> +static struct page *secretmem_alloc_page(struct secretmem_ctx *ctx,
> + gfp_t gfp)
> +{
> + struct gen_pool *pool = ctx->pool;
> + unsigned long addr;
> + struct page *page;
> + int err;
> +
> + if (gen_pool_avail(pool) < PAGE_SIZE) {
> + err = secretmem_pool_increase(ctx, gfp);
> + if (err)
> + return NULL;
> + }
> +
> + addr = gen_pool_alloc(pool, PAGE_SIZE);
> + if (!addr)
> + return NULL;
> +
> + page = virt_to_page(addr);
> + get_page(page);
> +
> + return page;
> }
>
> static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> {
> + struct secretmem_ctx *ctx = vmf->vma->vm_file->private_data;
> struct address_space *mapping = vmf->vma->vm_file->f_mapping;
> struct inode *inode = file_inode(vmf->vma->vm_file);
> pgoff_t offset = vmf->pgoff;
> - unsigned long addr;
> struct page *page;
> int err;
>
> @@ -62,40 +135,25 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> retry:
> page = find_lock_page(mapping, offset);
> if (!page) {
> - page = secretmem_alloc_page(vmf->gfp_mask);
> + page = secretmem_alloc_page(ctx, vmf->gfp_mask);
> if (!page)
> return VM_FAULT_OOM;
>
> - err = set_direct_map_invalid_noflush(page, 1);
> - if (err) {
> - put_page(page);
> - return vmf_error(err);
> - }
> -
> __SetPageUptodate(page);
> err = add_to_page_cache(page, mapping, offset, vmf->gfp_mask);
> if (unlikely(err)) {
> + secretmem_free_page(ctx, page);
> put_page(page);
> if (err == -EEXIST)
> goto retry;
> - goto err_restore_direct_map;
> + return vmf_error(err);
> }
>
> - addr = (unsigned long)page_address(page);
> - flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> + set_page_private(page, (unsigned long)ctx);
> }
>
> vmf->page = page;
> return VM_FAULT_LOCKED;
> -
> -err_restore_direct_map:
> - /*
> - * If a split of large page was required, it already happened
> - * when we marked the page invalid which guarantees that this call
> - * won't fail
> - */
> - set_direct_map_default_noflush(page, 1);
> - return vmf_error(err);
> }
>
> static const struct vm_operations_struct secretmem_vm_ops = {
> @@ -141,8 +199,9 @@ static int secretmem_migratepage(struct address_space *mapping,
>
> static void secretmem_freepage(struct page *page)
> {
> - set_direct_map_default_noflush(page, 1);
> - clear_highpage(page);
> + struct secretmem_ctx *ctx = (struct secretmem_ctx *)page_private(page);
> +
> + secretmem_free_page(ctx, page);
> }
>
> static const struct address_space_operations secretmem_aops = {
> @@ -177,13 +236,18 @@ static struct file *secretmem_file_create(unsigned long flags)
> if (!ctx)
> goto err_free_inode;
>
> + ctx->pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
> + if (!ctx->pool)
> + goto err_free_ctx;
> +
> file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem",
> O_RDWR, &secretmem_fops);
> if (IS_ERR(file))
> - goto err_free_ctx;
> + goto err_free_pool;
>
> mapping_set_unevictable(inode->i_mapping);
>
> + inode->i_private = ctx;
> inode->i_mapping->private_data = ctx;
> inode->i_mapping->a_ops = &secretmem_aops;
>
> @@ -197,6 +261,8 @@ static struct file *secretmem_file_create(unsigned long flags)
>
> return file;
>
> +err_free_pool:
> + gen_pool_destroy(ctx->pool);
> err_free_ctx:
> kfree(ctx);
> err_free_inode:
> @@ -215,6 +281,9 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
> if (flags & ~(SECRETMEM_FLAGS_MASK | O_CLOEXEC))
> return -EINVAL;
>
> + if (!secretmem_cma)
> + return -ENOMEM;
> +
> fd = get_unused_fd_flags(flags & O_CLOEXEC);
> if (fd < 0)
> return fd;
> @@ -235,11 +304,37 @@ SYSCALL_DEFINE1(memfd_secret, unsigned long, flags)
> return err;
> }
>
> +static void secretmem_cleanup_chunk(struct gen_pool *pool,
> + struct gen_pool_chunk *chunk, void *data)
> +{
> + unsigned long start = chunk->start_addr;
> + unsigned long end = chunk->end_addr;
> + struct page *page = virt_to_page(start);
> + unsigned long nr_pages = (end - start + 1) / PAGE_SIZE;
> + int i;
> +
> + set_direct_map_default_noflush(page, nr_pages);
> +
> + for (i = 0; i < nr_pages; i++)
> + clear_highpage(page + i);
> +
> + cma_release(secretmem_cma, page, nr_pages);
> +}
> +
> +static void secretmem_cleanup_pool(struct secretmem_ctx *ctx)
> +{
> + struct gen_pool *pool = ctx->pool;
> +
> + gen_pool_for_each_chunk(pool, secretmem_cleanup_chunk, ctx);
> + gen_pool_destroy(pool);
> +}
> +
> static void secretmem_evict_inode(struct inode *inode)
> {
> struct secretmem_ctx *ctx = inode->i_private;
>
> truncate_inode_pages_final(&inode->i_data);
> + secretmem_cleanup_pool(ctx);
> clear_inode(inode);
> kfree(ctx);
> }
> @@ -276,3 +371,29 @@ static int secretmem_init(void)
> return ret;
> }
> fs_initcall(secretmem_init);
> +
> +static int __init secretmem_setup(char *str)
> +{
> + phys_addr_t align = PMD_SIZE;
> + unsigned long reserved_size;
> + int err;
> +
> + reserved_size = memparse(str, NULL);
> + if (!reserved_size)
> + return 0;
> +
> + if (reserved_size * 2 > PUD_SIZE)
> + align = PUD_SIZE;
> +
> + err = cma_declare_contiguous(0, reserved_size, 0, align, 0, false,
> + "secretmem", &secretmem_cma);
> + if (err) {
> + pr_err("failed to create CMA: %d\n", err);
> + return err;
> + }
> +
> + pr_info("reserved %luM\n", reserved_size >> 20);
> +
> + return 0;
> +}
> +__setup("secretmem=", secretmem_setup);
> --
> 2.28.0
>
--
Michal Hocko
SUSE Labs
* Re: [PATCH v16 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation
2021-01-26 11:46 ` Michal Hocko
@ 2021-01-26 11:56 ` David Hildenbrand
-1 siblings, 0 replies; 318+ messages in thread
From: David Hildenbrand @ 2021-01-26 11:56 UTC (permalink / raw)
To: Michal Hocko, Mike Rapoport
Cc: Andrew Morton, Alexander Viro, Andy Lutomirski, Arnd Bergmann,
Borislav Petkov, Catalin Marinas, Christopher Lameter,
Dave Hansen, Elena Reshetova, H. Peter Anvin, Ingo Molnar,
James Bottomley, Kirill A. Shutemov, Matthew Wilcox,
Mark Rutland, Mike Rapoport, Michael Kerrisk, Palmer Dabbelt,
Paul Walmsley, Peter Zijlstra, Rick Edgecombe, Roman Gushchin,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On 26.01.21 12:46, Michal Hocko wrote:
> On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
>> From: Mike Rapoport <rppt@linux.ibm.com>
>>
>> Removing a PAGE_SIZE page from the direct map every time such page is
>> allocated for a secret memory mapping will cause severe fragmentation of
>> the direct map. This fragmentation can be reduced by using PMD-size pages
>> as a pool for small pages for secret memory mappings.
>>
>> Add a gen_pool per secretmem inode and lazily populate this pool with
>> PMD-size pages.
>>
>> As pages allocated by secretmem become unmovable, use CMA to back large
>> page caches so that page allocator won't be surprised by failing attempt to
>> migrate these pages.
>>
>> The CMA area used by secretmem is controlled by the "secretmem=" kernel
>> parameter. This allows explicit control over the memory available for
>> secretmem and provides upper hard limit for secretmem consumption.
>
> OK, so I have finally had a closer look at this and it is really not
> acceptable. I have already mentioned this in a response to another
> patch: any task is able to deprive other tasks of access to secret
> memory and trigger the OOM killer, which would never really recover and
> could potentially panic the system. You could be less drastic and only
> deliver SIGBUS on fault, but that would still be quite terrible. There
> is a very good reason why hugetlb implements its non-trivial
> reservation system: to avoid exactly these problems.
>
> So unless I am really misreading the code:
> Nacked-by: Michal Hocko <mhocko@suse.com>
>
> That doesn't mean I reject the whole idea. There are some details to
> sort out, as mentioned elsewhere, but you cannot really depend on a
> pre-allocated pool that can fail at fault time like that.
So, to do it similarly to hugetlbfs (e.g., with CMA), there would have to
be a mechanism to actually try pre-reserving (e.g., from the CMA area),
at which point the pages would get moved to the secretmem pool, and a
mechanism for mmap() etc. to "reserve" from this secretmem pool, such
that there are guarantees at fault time?
What we have right now feels like some kind of overcommit (read: like
overcommitting huge pages, so we might get SIGBUS at fault time).
TBH, the SIGBUS thing doesn't sound terrible to me - if this behavior is
expected right now by applications using it and they can handle it, then
there are simply no guarantees. I fully agree that some kind of
reservation/guarantee mechanism would be preferable.
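A rough sketch of that mmap()-time reservation, purely as an
illustration - the function below is not from this series and the
ctx->reserved field is invented:

/*
 * Hypothetical: charge the per-inode pool when the mapping is
 * created, so the fault path can no longer fail the allocation.
 */
static int secretmem_mmap_reserved(struct file *file,
				   struct vm_area_struct *vma)
{
	struct secretmem_ctx *ctx = file->private_data;
	unsigned long nr_pages = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;

	/*
	 * Grow the gen_pool from the CMA area up front and fail the
	 * mmap() call instead of hitting VM_FAULT_OOM/SIGBUS later.
	 */
	while (gen_pool_avail(ctx->pool) < nr_pages * PAGE_SIZE) {
		int err = secretmem_pool_increase(ctx, GFP_KERNEL);

		if (err)
			return err;
	}

	/*
	 * Record the reservation; faults would consume it and
	 * munmap()/eviction would return the surplus to the pool.
	 */
	ctx->reserved += nr_pages;

	vma->vm_ops = &secretmem_vm_ops;
	return 0;
}

Even with this, the global secretmem= area stays first-come-first-served
between inodes, so callers still have to handle a failing mmap() - but
the failure at least moves out of the fault path.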
--
Thanks,
David / dhildenb
* Re: [PATCH v16 07/11] secretmem: use PMD-size pages to amortize direct map fragmentation
2021-01-26 11:56 ` David Hildenbrand
@ 2021-01-26 12:08 ` Michal Hocko
-1 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 12:08 UTC (permalink / raw)
To: David Hildenbrand
Cc: Mike Rapoport, Andrew Morton, Alexander Viro, Andy Lutomirski,
Arnd Bergmann, Borislav Petkov, Catalin Marinas,
Christopher Lameter, Dave Hansen, Elena Reshetova,
H. Peter Anvin, Ingo Molnar, James Bottomley, Kirill A. Shutemov,
Matthew Wilcox, Mark Rutland, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
On Tue 26-01-21 12:56:48, David Hildenbrand wrote:
> On 26.01.21 12:46, Michal Hocko wrote:
> > On Thu 21-01-21 14:27:19, Mike Rapoport wrote:
> > > From: Mike Rapoport <rppt@linux.ibm.com>
> > >
> > > Removing a PAGE_SIZE page from the direct map every time such a page is
> > > allocated for a secret memory mapping will cause severe fragmentation of
> > > the direct map. This fragmentation can be reduced by using PMD-size pages
> > > as a pool for small pages for secret memory mappings.
> > >
> > > Add a gen_pool per secretmem inode and lazily populate this pool with
> > > PMD-size pages.
> > >
> > > As pages allocated by secretmem become unmovable, use CMA to back large
> > > page caches so that the page allocator won't be surprised by a failing
> > > attempt to migrate these pages.
> > >
> > > The CMA area used by secretmem is controlled by the "secretmem=" kernel
> > > parameter. This allows explicit control over the memory available for
> > > secretmem and provides an upper hard limit for secretmem consumption.
> >
> > OK, so I have finally had a closer look at this and it is really not
> > acceptable. I have already mentioned this in a response to another
> > patch: any task is able to deprive other tasks of access to secret
> > memory and to trigger the OOM killer, which wouldn't really ever
> > recover and could potentially panic the system. Now you could be less
> > drastic and only deliver SIGBUS on fault, but that would still be
> > quite terrible. There is a very good reason why hugetlb implements
> > its non-trivial reservation system to avoid exactly these problems.
> >
> > So unless I am really misreading the code
> > Nacked-by: Michal Hocko <mhocko@suse.com>
> >
> > That doesn't mean I reject the whole idea. There are some details to
> > sort out as mentioned elsewhere, but you cannot really depend on a
> > pre-allocated pool which can fail at fault time like that.
>
> So, to do it similarly to hugetlbfs (e.g., with CMA), there would have to
> be a mechanism to actually try pre-reserving (e.g., from the CMA area), at
> which point in time the pages would get moved to the secretmem pool, and a
> mechanism for mmap() etc. to "reserve" from this secretmem pool, such that
> there are guarantees at fault time?
yes, reserve at mmap time and use during the fault. But this all sounds
like a self-inflicted problem to me. Sure, you can have a pre-allocated
or more dynamic pool to reduce the direct mapping fragmentation, but you
can always fall back to regular allocations. In other words, have the pool
as an optimization rather than a hard requirement. With careful access
control this sounds like a manageable solution to me.
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 318+ messages in thread
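Michal's "pool as an optimization" idea might look roughly like the sketch
below, reusing secretmem_alloc_page() from the earlier sketch as the fast
path. The names are hypothetical; the point is only that a depleted pool
degrades into extra direct-map fragmentation rather than into a failed
fault.

/*
 * Rough sketch of the fallback idea above, not actual kernel code.
 */
static struct page *secretmem_fault_page(struct gen_pool *pool, gfp_t gfp)
{
	struct page *page;

	/* fast path: a small page carved out of a PMD-size pool chunk */
	page = secretmem_alloc_page(pool, gfp);
	if (page)
		return page;

	/* fallback: a regular order-0 allocation; this fragments the
	 * direct map, but the fault can no longer fail just because
	 * the pool ran dry */
	page = alloc_page(gfp);
	if (!page)
		return NULL;

	if (set_direct_map_invalid_noflush(page)) {
		__free_page(page);
		return NULL;
	}
	return page;
}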
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-25 21:38 ` Mike Rapoport
@ 2021-01-26 14:48 ` Matthew Wilcox
-1 siblings, 0 replies; 318+ messages in thread
From: Matthew Wilcox @ 2021-01-26 14:48 UTC (permalink / raw)
To: Mike Rapoport
Cc: Michal Hocko, Andrew Morton, Alexander Viro, Andy Lutomirski,
Arnd Bergmann, Borislav Petkov, Catalin Marinas,
Christopher Lameter, Dave Hansen, David Hildenbrand,
Elena Reshetova, H. Peter Anvin, Ingo Molnar, James Bottomley,
Kirill A. Shutemov, Mark Rutland, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
On Mon, Jan 25, 2021 at 11:38:17PM +0200, Mike Rapoport wrote:
> I cannot use __GFP_ACCOUNT because cma_alloc() does not use gfp.
> Besides, kmem accounting with __GFP_ACCOUNT does not seem
> to update stats and there was an explicit request for statistics:
>
> https://lore.kernel.org/lkml/CALo0P13aq3GsONnZrksZNU9RtfhMsZXGWhK1n=xYJWQizCd4Zw@mail.gmail.com/
>
> As for (ab)using NR_SLAB_UNRECLAIMABLE_B, as was already discussed here:
>
> https://lore.kernel.org/lkml/20201129172625.GD557259@kernel.org/
>
> I think that a dedicated stats counter would be too much at the moment and
> NR_SLAB_UNRECLAIMABLE_B is the only explicit stat for unreclaimable memory.
That's not true -- Mlocked is also unreclaimable. And doesn't this
feel more like mlocked memory than unreclaimable slab? It's also
Unevictable, so could be counted there instead.
^ permalink raw reply [flat|nested] 318+ messages in thread
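For illustration, the constraint Mike describes means charging by hand
after cma_alloc() succeeds, along these lines (a sketch with assumed
helper names, not the actual patch):

/*
 * Sketch only: cma_alloc() takes no gfp mask, so __GFP_ACCOUNT cannot
 * be passed down; instead the 2^order freshly allocated pages are
 * charged to the current memcg explicitly.
 */
static int secretmem_charge(struct page *page, gfp_t gfp, int order)
{
	return memcg_kmem_charge_page(page, gfp, order);
}

static void secretmem_uncharge(struct page *page, int order)
{
	memcg_kmem_uncharge_page(page, order);
}

The statistics question, i.e. which counter such pages should show up
under, is what the rest of this subthread is about.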
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-26 14:48 ` Matthew Wilcox
@ 2021-01-26 15:05 ` Michal Hocko
-1 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-26 15:05 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Mike Rapoport, Andrew Morton, Alexander Viro, Andy Lutomirski,
Arnd Bergmann, Borislav Petkov, Catalin Marinas,
Christopher Lameter, Dave Hansen, David Hildenbrand,
Elena Reshetova, H. Peter Anvin, Ingo Molnar, James Bottomley,
Kirill A. Shutemov, Mark Rutland, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Roman Gushchin, Shakeel Butt, Shuah Khan, Thomas Gleixner,
Tycho Andersen, Will Deacon, linux-api, linux-arch,
linux-arm-kernel, linux-fsdevel, linux-mm, linux-kernel,
linux-kselftest, linux-nvdimm, linux-riscv, x86,
Hagen Paul Pfeifer, Palmer Dabbelt
On Tue 26-01-21 14:48:38, Matthew Wilcox wrote:
> On Mon, Jan 25, 2021 at 11:38:17PM +0200, Mike Rapoport wrote:
> > I cannot use __GFP_ACCOUNT because cma_alloc() does not use gfp.
> > Besides, kmem accounting with __GFP_ACCOUNT does not seem
> > to update stats and there was an explicit request for statistics:
> >
> > https://lore.kernel.org/lkml/CALo0P13aq3GsONnZrksZNU9RtfhMsZXGWhK1n=xYJWQizCd4Zw@mail.gmail.com/
> >
> > As for (ab)using NR_SLAB_UNRECLAIMABLE_B, as was already discussed here:
> >
> > https://lore.kernel.org/lkml/20201129172625.GD557259@kernel.org/
> >
> > I think that a dedicated stats counter would be too much at the moment and
> > NR_SLAB_UNRECLAIMABLE_B is the only explicit stat for unreclaimable memory.
>
> That's not true -- Mlocked is also unreclaimable. And doesn't this
> feel more like mlocked memory than unreclaimable slab? It's also
> Unevictable, so could be counted there instead.
yes, that is indeed true, except the unevictable counter is tracking
the unevictable LRUs. These pages are not on any LRU and that can cause
some confusion. Maybe they shouldn't be so special and should live
on the unevictable LRU and get their stats automagically.
I definitely do agree that this would be a better fit than the NR_SLAB
abuse. But considering that this is somehow even more special than mlock,
a dedicated counter sounds like an even better fit.
--
Michal Hocko
SUSE Labs
^ permalink raw reply [flat|nested] 318+ messages in thread
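Purely as an illustration of the "dedicated counter" option: it would
amount to a new node_stat_item updated as pages enter and leave
secretmem, as in the sketch below. NR_SECRETMEM and the helper are
hypothetical and do not exist upstream.

/*
 * Hypothetical sketch: NR_SECRETMEM would be a new entry in the
 * node_stat_item enum in include/linux/mmzone.h, making the count
 * visible in /proc/vmstat and per-node meminfo.
 */
static void secretmem_mod_stat(struct page *page, long nr_pages)
{
	mod_node_page_state(page_pgdat(page), NR_SECRETMEM, nr_pages);
}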
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-26 15:05 ` Michal Hocko
@ 2021-01-27 18:42 ` Roman Gushchin
-1 siblings, 0 replies; 318+ messages in thread
From: Roman Gushchin @ 2021-01-27 18:42 UTC (permalink / raw)
To: Michal Hocko
Cc: Matthew Wilcox, Mike Rapoport, Andrew Morton, Alexander Viro,
Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Catalin Marinas,
Christopher Lameter, Dave Hansen, David Hildenbrand,
Elena Reshetova, H. Peter Anvin, Ingo Molnar, James Bottomley,
Kirill A. Shutemov, Mark Rutland, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Tue, Jan 26, 2021 at 04:05:55PM +0100, Michal Hocko wrote:
> On Tue 26-01-21 14:48:38, Matthew Wilcox wrote:
> > On Mon, Jan 25, 2021 at 11:38:17PM +0200, Mike Rapoport wrote:
> > > I cannot use __GFP_ACCOUNT because cma_alloc() does not use gfp.
> > > Besides, kmem accounting with __GFP_ACCOUNT does not seem
> > > to update stats and there was an explicit request for statistics:
> > >
> > > https://lore.kernel.org/lkml/CALo0P13aq3GsONnZrksZNU9RtfhMsZXGWhK1n=xYJWQizCd4Zw@mail.gmail.com/
> > >
> > > As for (ab)using NR_SLAB_UNRECLAIMABLE_B, as was already discussed here:
> > >
> > > https://lore.kernel.org/lkml/20201129172625.GD557259@kernel.org/
> > >
> > > I think that a dedicated stats counter would be too much at the moment and
> > > NR_SLAB_UNRECLAIMABLE_B is the only explicit stat for unreclaimable memory.
> >
> > That's not true -- Mlocked is also unreclaimable. And doesn't this
> > feel more like mlocked memory than unreclaimable slab? It's also
> > Unevictable, so could be counted there instead.
>
> yes, that is indeed true, except the unevictable counter is tracking
> the unevictable LRUs. These pages are not on any LRU and that can cause
> some confusion. Maybe they shouldn't be so special and should live
> on the unevictable LRU and get their stats automagically.
>
> I definitely do agree that this would be a better fit than the NR_SLAB
> abuse. But considering that this is somehow even more special than mlock,
> a dedicated counter sounds like an even better fit.
I think it depends on how large these areas will be in practice.
If they will be measured in single- or double-digit MBs, a separate entry
is hardly a good choice: because of the batching the displayed value
will be in the noise range, plus every new vmstat item adds to the
struct mem_cgroup size.
If it will be measured in GBs, of course, a separate counter is preferred.
So I'd suggest going with NR_SLAB (which should have been named NR_KMEM)
for now and conditionally switching to a separate counter later.
Thanks!
^ permalink raw reply [flat|nested] 318+ messages in thread
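Roman's interim suggestion, accounting secretmem under the existing slab
counter and revisiting later, would reduce to something like the sketch
below. Note that the _B vmstat counters are byte-based, hence the
PAGE_SIZE scaling; this is an illustration, not the patch itself, and the
helper names are assumptions.

/*
 * Sketch of (ab)using NR_SLAB_UNRECLAIMABLE_B for secretmem pages,
 * as suggested above; the _B counters are kept in bytes.
 */
static void secretmem_stat_add(struct page *page, int order)
{
	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
			      PAGE_SIZE << order);
}

static void secretmem_stat_sub(struct page *page, int order)
{
	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
			      -(PAGE_SIZE << order));
}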
* Re: [PATCH v16 08/11] secretmem: add memcg accounting
2021-01-27 18:42 ` Roman Gushchin
@ 2021-01-28 7:58 ` Michal Hocko
-1 siblings, 0 replies; 318+ messages in thread
From: Michal Hocko @ 2021-01-28 7:58 UTC (permalink / raw)
To: Roman Gushchin
Cc: Matthew Wilcox, Mike Rapoport, Andrew Morton, Alexander Viro,
Andy Lutomirski, Arnd Bergmann, Borislav Petkov, Catalin Marinas,
Christopher Lameter, Dave Hansen, David Hildenbrand,
Elena Reshetova, H. Peter Anvin, Ingo Molnar, James Bottomley,
Kirill A. Shutemov, Mark Rutland, Mike Rapoport, Michael Kerrisk,
Palmer Dabbelt, Paul Walmsley, Peter Zijlstra, Rick Edgecombe,
Shakeel Butt, Shuah Khan, Thomas Gleixner, Tycho Andersen,
Will Deacon, linux-api, linux-arch, linux-arm-kernel,
linux-fsdevel, linux-mm, linux-kernel, linux-kselftest,
linux-nvdimm, linux-riscv, x86, Hagen Paul Pfeifer,
Palmer Dabbelt
On Wed 27-01-21 10:42:13, Roman Gushchin wrote:
> On Tue, Jan 26, 2021 at 04:05:55PM +0100, Michal Hocko wrote:
> > On Tue 26-01-21 14:48:38, Matthew Wilcox wrote:
> > > On Mon, Jan 25, 2021 at 11:38:17PM +0200, Mike Rapoport wrote:
> > > > I cannot use __GFP_ACCOUNT because cma_alloc() does not use gfp.
> > > > Besides, kmem accounting with __GFP_ACCOUNT does not seem
> > > > to update stats and there was an explicit request for statistics:
> > > >
> > > > https://lore.kernel.org/lkml/CALo0P13aq3GsONnZrksZNU9RtfhMsZXGWhK1n=xYJWQizCd4Zw@mail.gmail.com/
> > > >
> > > > As for (ab)using NR_SLAB_UNRECLAIMABLE_B, as was already discussed here:
> > > >
> > > > https://lore.kernel.org/lkml/20201129172625.GD557259@kernel.org/
> > > >
> > > > I think that a dedicated stats counter would be too much at the moment and
> > > > NR_SLAB_UNRECLAIMABLE_B is the only explicit stat for unreclaimable memory.
> > >
> > > That's not true -- Mlocked is also unreclaimable. And doesn't this
> > > feel more like mlocked memory than unreclaimable slab? It's also
> > > Unevictable, so could be counted there instead.
> >
> > yes, that is indeed true, except the unevictable counter is tracking
> > the unevictable LRUs. These pages are not on any LRU and that can cause
> > some confusion. Maybe they shouldn't be so special and should live
> > on the unevictable LRU and get their stats automagically.
> >
> > I definitely do agree that this would be a better fit than the NR_SLAB
> > abuse. But considering that this is somehow even more special than mlock,
> > a dedicated counter sounds like an even better fit.
>
> I think it depends on how large these areas will be in practice.
> If they will be measured in single- or double-digit MBs, a separate entry
> is hardly a good choice: because of the batching the displayed value
> will be in the noise range, plus every new vmstat item adds to the
> struct mem_cgroup size.
>
> If it will be measured in GBs, of course, a separate counter is pref