linux-mm.kvack.org archive mirror
* [PATCH v7 0/4] Introduce mseal()
@ 2024-01-22 15:28 jeffxu
  2024-01-22 15:28 ` [PATCH v7 1/4] mseal: Wire up mseal syscall jeffxu
                   ` (5 more replies)
  0 siblings, 6 replies; 23+ messages in thread
From: jeffxu @ 2024-01-22 15:28 UTC (permalink / raw)
  To: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap
  Cc: jeffxu, jorgelo, groeck, linux-kernel, linux-kselftest, linux-mm,
	pedro.falcato, dave.hansen, linux-hardening, deraadt, Jeff Xu

From: Jeff Xu <jeffxu@chromium.org>

This patchset proposes a new mseal() syscall for the Linux kernel.

In a nutshell, mseal() protects the VMAs of a given virtual memory
range against modifications, such as changes to their permission bits.

Modern CPUs support memory permissions, such as the read/write (RW)
and no-execute (NX) bits. Linux has supported NX since the release of
kernel version 2.6.8 in August 2004 [1]. The memory permission feature
improves the security stance on memory corruption bugs, as an attacker
cannot simply write to arbitrary memory and point the code to it. The
memory must be marked with the X bit, or else an exception will occur.
Internally, the kernel maintains the memory permissions in a data
structure called VMA (vm_area_struct). mseal() additionally protects
the VMA itself against modifications of the selected seal type.

Memory sealing is useful to mitigate memory corruption issues where a
corrupted pointer is passed to a memory management system. For
example, such an attacker primitive can break control-flow integrity
guarantees since read-only memory that is supposed to be trusted can
become writable or .text pages can get remapped. Memory sealing can
automatically be applied by the runtime loader to seal .text and
.rodata pages and applications can additionally seal security critical
data at runtime. A similar feature already exists in the XNU kernel
with the VM_FLAGS_PERMANENT [3] flag and on OpenBSD with the
mimmutable syscall [4]. Also, Chrome wants to adopt this feature for
their CFI work [2] and this patchset has been designed to be
compatible with the Chrome use case.

Two system calls are involved in sealing the map:  mmap() and mseal().

The new mseal() is a syscall available on 64-bit CPUs, with the
following signature:

int mseal(void *addr, size_t len, unsigned long flags)
addr/len: memory range.
flags: reserved.

mseal() blocks the following operations for the given memory range:

1> Unmapping, moving to another location, and shrinking the size,
   via munmap() and mremap(); these can leave an empty space that can
   then be filled by a VMA with a new set of attributes.

2> Moving or expanding a different VMA into the current location,
   via mremap().

3> Modifying a VMA via mmap(MAP_FIXED).

4> Size expansion, via mremap(), does not appear to pose any specific
   risks to sealed VMAs. It is included anyway because the use case is
   unclear. In any case, users can rely on merging to expand a sealed VMA.

5> mprotect() and pkey_mprotect().

6> Some destructive madvise() behaviors (e.g. MADV_DONTNEED) for anonymous
   memory, when users don't have write permission to the memory. Those
   behaviors can alter region contents by discarding pages, effectively a
   memset(0) for anonymous memory.

In addition, mmap() has two related changes.

The PROT_SEAL bit in the prot field of mmap(). When present, it marks
the map as sealed from creation.

The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
the map as sealable. A map created without MAP_SEALABLE will not support
sealing, i.e. mseal() will fail.

Applications that don't care about sealing can expect their behavior
to remain unchanged. Those that need sealing support opt in by adding
MAP_SEALABLE to their mmap() calls.
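
To illustrate the opt-in flow, here is a minimal userspace sketch (not
part of this patch set). It assumes the MAP_SEALABLE value and the
mseal() syscall number proposed in this series, since neither is in
released kernel headers yet; sealing succeeds, and a later mprotect()
on the sealed range is rejected:

#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef MAP_SEALABLE
#define MAP_SEALABLE 0x8000000	/* value proposed in this series */
#endif
#ifndef __NR_mseal
#define __NR_mseal 462		/* number proposed in this series */
#endif

int main(void)
{
	size_t len = 4 * getpagesize();

	/* Opt in to sealing at creation time with MAP_SEALABLE. */
	void *p = mmap(NULL, len, PROT_READ,
		       MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Seal the whole range; flags are reserved and must be 0. */
	if (syscall(__NR_mseal, p, len, 0) < 0) {
		perror("mseal");
		return 1;
	}

	/* Permission changes on the sealed range are now rejected. */
	if (mprotect(p, len, PROT_READ | PROT_WRITE) < 0)
		printf("mprotect blocked: %s\n", strerror(errno));

	return 0;
}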

The idea that inspired this patch comes from Stephen Röttger’s work in
V8 CFI [5]. Chrome browser in ChromeOS will be the first user of this
API.

Indeed, the Chrome browser has very specific requirements for sealing,
which are distinct from those of most applications. For example, in
the case of libc, sealing is only applied to read-only (RO) or
read-execute (RX) memory segments (such as .text and .RELRO) to
prevent them from becoming writable; the lifetime of those mappings
is tied to the lifetime of the process.

Chrome wants to seal two large address space reservations that are
managed by different allocators. The memory is mapped RW- and RWX
respectively but write access to it is restricted using pkeys (or in
the future ARM permission overlay extensions). The lifetime of those
mappings is not tied to the lifetime of the process; therefore, while
the memory is sealed, the allocators still need to free or discard the
unused memory, for example with madvise(DONTNEED).

However, always allowing madvise(DONTNEED) on this range poses a
security risk. For example, if a jump instruction crosses a page
boundary and the second page gets discarded, it will overwrite the
target bytes with zeros and change the control flow. Checking
write-permission before the discard operation allows us to control
when the operation is valid. In this case, the madvise will only
succeed if the executing thread has PKEY write permissions and PKRU
changes are protected in software by control-flow integrity.
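
A rough sketch of that allocator pattern follows (not part of this
patch set; it assumes x86 pkeys, the MAP_SEALABLE value and syscall
number proposed in this series, and omits error handling for brevity):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef MAP_SEALABLE
#define MAP_SEALABLE 0x8000000	/* value proposed in this series */
#endif
#ifndef __NR_mseal
#define __NR_mseal 462		/* number proposed in this series */
#endif

int main(void)
{
	size_t len = 1UL << 20;
	int pkey = pkey_alloc(0, 0);	/* needs pkey-capable hardware */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);

	/* Protect the RW reservation with the pkey, then seal it. */
	pkey_mprotect(p, len, PROT_READ | PROT_WRITE, pkey);
	syscall(__NR_mseal, p, len, 0);

	/* While PKRU disallows writes, a destructive discard on the
	 * sealed anonymous range is rejected (EPERM under this series).
	 */
	pkey_set(pkey, PKEY_DISABLE_WRITE);
	madvise(p, len, MADV_DONTNEED);		/* rejected */

	/* The allocator re-enables write access before discarding. */
	pkey_set(pkey, 0);
	madvise(p, len, MADV_DONTNEED);		/* succeeds */

	return 0;
}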

Although the initial version of this patch series targets the
Chrome browser as its first user, it became evident during upstream
discussions that we also want to ensure the patch set eventually
becomes a complete solution for memory sealing, compatible with
other use cases. The specific scenario currently in mind is
glibc's use case of loading and sealing ELF executables. To this end,
Stephen is working on a change to glibc to add sealing support to the
dynamic linker, which will seal all non-writable segments at startup.
Once this work is completed, all applications will be able to
automatically benefit from these new protections.

In closing, I would like to formally acknowledge the valuable
contributions received during the RFC process, which were instrumental
in shaping this patch:

Jann Horn: raising awareness and providing valuable insights on the
destructive madvise operations.
Linus Torvalds: assisting in defining system call signature and scope.
Pedro Falcato: suggesting sealing in the mmap().
Theo de Raadt: sharing the experiences and insights gained from
implementing mimmutable() in OpenBSD.

Change history:
===============
V7:
- fix index.rst (Randy Dunlap)
- fix arm build (Randy Dunlap)
- return EPERM for blocked operations (Theo de Raadt)

V6:
- Drop RFC from subject, given Linus's general approval.
- Adjust syscall number for mseal (main Jan.11/2024) 
- Code style fix (Matthew Wilcox)
- selftest: use ksft macros (Muhammad Usama Anjum)
- Document fix. (Randy Dunlap)
https://lore.kernel.org/all/20240111234142.2944934-1-jeffxu@chromium.org/

V5:
- fix build issue in mseal-Wire-up-mseal-syscall
  (Suggested by Linus Torvalds, and Greg KH)
- updates on selftest.
https://lore.kernel.org/lkml/20240109154547.1839886-1-jeffxu@chromium.org/#r

V4:
(Suggested by Linus Torvalds)
- new signature: mseal(start,len,flags)
- 32 bit is not supported. vm_seal is removed, use vm_flags instead.
- single bit in vm_flags for sealed state.
- CONFIG_MSEAL kernel config is removed.
- single bit of PROT_SEAL in the "Prot" field of mmap().
Other changes:
- update selftest (Suggested by Muhammad Usama Anjum)
- update documentation.
https://lore.kernel.org/all/20240104185138.169307-1-jeffxu@chromium.org/

V3:
- Abandon per-syscall approach (Suggested by Linus Torvalds).
- Organize sealing types around their functionality, such as
  MM_SEAL_BASE, MM_SEAL_PROT_PKEY.
- Extend the scope of sealing from calls originated in userspace to
  both kernel and userspace. (Suggested by Linus Torvalds)
- Add seal type support in mmap(). (Suggested by Pedro Falcato)
- Add a new sealing type: MM_SEAL_DISCARD_RO_ANON to prevent
  destructive operations of madvise. (Suggested by Jann Horn and
  Stephen Röttger)
- Make sealed VMAs mergeable. (Suggested by Jann Horn)
- Add MAP_SEALABLE to mmap()
- Add documentation - mseal.rst
https://lore.kernel.org/linux-mm/20231212231706.2680890-2-jeffxu@chromium.org/

v2:
Use _BITUL to define MM_SEAL_XX type.
Use unsigned long for seal type in sys_mseal() and other functions.
Remove internal VM_SEAL_XX type and convert_user_seal_type().
Remove MM_ACTION_XX type.
Remove caller_origin(ON_BEHALF_OF_XX) and replace with sealing bitmask.
Add more comments in code.
Add a detailed commit message.
https://lore.kernel.org/lkml/20231017090815.1067790-1-jeffxu@chromium.org/

v1:
https://lore.kernel.org/lkml/20231016143828.647848-1-jeffxu@chromium.org/

----------------------------------------------------------------
[1] https://kernelnewbies.org/Linux_2_6_8
[2] https://v8.dev/blog/control-flow-integrity
[3] https://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9f69dbd3c8c3fd30a/osfmk/mach/vm_statistics.h#L274
[4] https://man.openbsd.org/mimmutable.2
[5] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyXgeaRHo/edit#heading=h.bvaojj9fu6hc
[6] https://lore.kernel.org/lkml/CAG48ez3ShUYey+ZAFsU2i1RpQn0a5eOs2hzQ426FkcgnfUGLvA@mail.gmail.com/
[7] https://lore.kernel.org/lkml/20230515130553.2311248-1-jeffxu@chromium.org/

Jeff Xu (4):
  mseal: Wire up mseal syscall
  mseal: add mseal syscall
  selftest mm/mseal memory sealing
  mseal:add documentation

 Documentation/userspace-api/index.rst       |    1 +
 Documentation/userspace-api/mseal.rst       |  183 ++
 arch/alpha/kernel/syscalls/syscall.tbl      |    1 +
 arch/arm/tools/syscall.tbl                  |    1 +
 arch/arm64/include/asm/unistd.h             |    2 +-
 arch/arm64/include/asm/unistd32.h           |    2 +
 arch/m68k/kernel/syscalls/syscall.tbl       |    1 +
 arch/microblaze/kernel/syscalls/syscall.tbl |    1 +
 arch/mips/kernel/syscalls/syscall_n32.tbl   |    1 +
 arch/mips/kernel/syscalls/syscall_n64.tbl   |    1 +
 arch/mips/kernel/syscalls/syscall_o32.tbl   |    1 +
 arch/parisc/kernel/syscalls/syscall.tbl     |    1 +
 arch/powerpc/kernel/syscalls/syscall.tbl    |    1 +
 arch/s390/kernel/syscalls/syscall.tbl       |    1 +
 arch/sh/kernel/syscalls/syscall.tbl         |    1 +
 arch/sparc/kernel/syscalls/syscall.tbl      |    1 +
 arch/x86/entry/syscalls/syscall_32.tbl      |    1 +
 arch/x86/entry/syscalls/syscall_64.tbl      |    1 +
 arch/xtensa/kernel/syscalls/syscall.tbl     |    1 +
 include/linux/mm.h                          |   48 +
 include/linux/syscalls.h                    |    1 +
 include/uapi/asm-generic/mman-common.h      |    8 +
 include/uapi/asm-generic/unistd.h           |    5 +-
 kernel/sys_ni.c                             |    1 +
 mm/Makefile                                 |    4 +
 mm/madvise.c                                |   12 +
 mm/mmap.c                                   |   27 +
 mm/mprotect.c                               |   10 +
 mm/mremap.c                                 |   31 +
 mm/mseal.c                                  |  343 ++++
 tools/testing/selftests/mm/.gitignore       |    1 +
 tools/testing/selftests/mm/Makefile         |    1 +
 tools/testing/selftests/mm/mseal_test.c     | 1997 +++++++++++++++++++
 33 files changed, 2690 insertions(+), 2 deletions(-)
 create mode 100644 Documentation/userspace-api/mseal.rst
 create mode 100644 mm/mseal.c
 create mode 100644 tools/testing/selftests/mm/mseal_test.c

-- 
2.43.0.429.g432eaa2c6b-goog




* [PATCH v7 1/4] mseal: Wire up mseal syscall
  2024-01-22 15:28 [PATCH v7 0/4] Introduce mseal() jeffxu
@ 2024-01-22 15:28 ` jeffxu
  2024-01-22 15:28 ` [PATCH v7 2/4] mseal: add " jeffxu
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 23+ messages in thread
From: jeffxu @ 2024-01-22 15:28 UTC (permalink / raw)
  To: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap
  Cc: jeffxu, jorgelo, groeck, linux-kernel, linux-kselftest, linux-mm,
	pedro.falcato, dave.hansen, linux-hardening, deraadt, Jeff Xu

From: Jeff Xu <jeffxu@chromium.org>

Wire up mseal syscall for all architectures.

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 arch/alpha/kernel/syscalls/syscall.tbl      | 1 +
 arch/arm/tools/syscall.tbl                  | 1 +
 arch/arm64/include/asm/unistd.h             | 2 +-
 arch/arm64/include/asm/unistd32.h           | 2 ++
 arch/m68k/kernel/syscalls/syscall.tbl       | 1 +
 arch/microblaze/kernel/syscalls/syscall.tbl | 1 +
 arch/mips/kernel/syscalls/syscall_n32.tbl   | 1 +
 arch/mips/kernel/syscalls/syscall_n64.tbl   | 1 +
 arch/mips/kernel/syscalls/syscall_o32.tbl   | 1 +
 arch/parisc/kernel/syscalls/syscall.tbl     | 1 +
 arch/powerpc/kernel/syscalls/syscall.tbl    | 1 +
 arch/s390/kernel/syscalls/syscall.tbl       | 1 +
 arch/sh/kernel/syscalls/syscall.tbl         | 1 +
 arch/sparc/kernel/syscalls/syscall.tbl      | 1 +
 arch/x86/entry/syscalls/syscall_32.tbl      | 1 +
 arch/x86/entry/syscalls/syscall_64.tbl      | 1 +
 arch/xtensa/kernel/syscalls/syscall.tbl     | 1 +
 include/uapi/asm-generic/unistd.h           | 5 ++++-
 kernel/sys_ni.c                             | 1 +
 19 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
index 8ff110826ce2..d8f96362e9f8 100644
--- a/arch/alpha/kernel/syscalls/syscall.tbl
+++ b/arch/alpha/kernel/syscalls/syscall.tbl
@@ -501,3 +501,4 @@
 569	common	lsm_get_self_attr		sys_lsm_get_self_attr
 570	common	lsm_set_self_attr		sys_lsm_set_self_attr
 571	common	lsm_list_modules		sys_lsm_list_modules
+572	common  mseal				sys_mseal
diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl
index b6c9e01e14f5..2ed7d229c8f9 100644
--- a/arch/arm/tools/syscall.tbl
+++ b/arch/arm/tools/syscall.tbl
@@ -475,3 +475,4 @@
 459	common	lsm_get_self_attr		sys_lsm_get_self_attr
 460	common	lsm_set_self_attr		sys_lsm_set_self_attr
 461	common	lsm_list_modules		sys_lsm_list_modules
+462	common	mseal				sys_mseal
diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h
index 491b2b9bd553..1346579f802f 100644
--- a/arch/arm64/include/asm/unistd.h
+++ b/arch/arm64/include/asm/unistd.h
@@ -39,7 +39,7 @@
 #define __ARM_NR_compat_set_tls		(__ARM_NR_COMPAT_BASE + 5)
 #define __ARM_NR_COMPAT_END		(__ARM_NR_COMPAT_BASE + 0x800)
 
-#define __NR_compat_syscalls		462
+#define __NR_compat_syscalls		463
 #endif
 
 #define __ARCH_WANT_SYS_CLONE
diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index 7118282d1c79..266b96acc014 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -929,6 +929,8 @@ __SYSCALL(__NR_lsm_get_self_attr, sys_lsm_get_self_attr)
 __SYSCALL(__NR_lsm_set_self_attr, sys_lsm_set_self_attr)
 #define __NR_lsm_list_modules 461
 __SYSCALL(__NR_lsm_list_modules, sys_lsm_list_modules)
+#define __NR_mseal 462
+__SYSCALL(__NR_mseal, sys_mseal)
 
 /*
  * Please add new compat syscalls above this comment and update
diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl
index 7fd43fd4c9f2..22a3cbd4c602 100644
--- a/arch/m68k/kernel/syscalls/syscall.tbl
+++ b/arch/m68k/kernel/syscalls/syscall.tbl
@@ -461,3 +461,4 @@
 459	common	lsm_get_self_attr		sys_lsm_get_self_attr
 460	common	lsm_set_self_attr		sys_lsm_set_self_attr
 461	common	lsm_list_modules		sys_lsm_list_modules
+462	common	mseal				sys_mseal
diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl
index b00ab2cabab9..2b81a6bd78b2 100644
--- a/arch/microblaze/kernel/syscalls/syscall.tbl
+++ b/arch/microblaze/kernel/syscalls/syscall.tbl
@@ -467,3 +467,4 @@
 459	common	lsm_get_self_attr		sys_lsm_get_self_attr
 460	common	lsm_set_self_attr		sys_lsm_set_self_attr
 461	common	lsm_list_modules		sys_lsm_list_modules
+462	common	mseal				sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl
index 83cfc9eb6b88..cc869f5d5693 100644
--- a/arch/mips/kernel/syscalls/syscall_n32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n32.tbl
@@ -400,3 +400,4 @@
 459	n32	lsm_get_self_attr		sys_lsm_get_self_attr
 460	n32	lsm_set_self_attr		sys_lsm_set_self_attr
 461	n32	lsm_list_modules		sys_lsm_list_modules
+462	n32	mseal				sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_n64.tbl b/arch/mips/kernel/syscalls/syscall_n64.tbl
index 532b855df589..1464c6be6eb3 100644
--- a/arch/mips/kernel/syscalls/syscall_n64.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n64.tbl
@@ -376,3 +376,4 @@
 459	n64	lsm_get_self_attr		sys_lsm_get_self_attr
 460	n64	lsm_set_self_attr		sys_lsm_set_self_attr
 461	n64	lsm_list_modules		sys_lsm_list_modules
+462	n64	mseal				sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl
index f45c9530ea93..008ebe60263e 100644
--- a/arch/mips/kernel/syscalls/syscall_o32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_o32.tbl
@@ -449,3 +449,4 @@
 459	o32	lsm_get_self_attr		sys_lsm_get_self_attr
 460	o32	lsm_set_self_attr		sys_lsm_set_self_attr
 461	o32	lsm_list_modules		sys_lsm_list_modules
+462	o32	mseal				sys_mseal
diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
index b236a84c4e12..b13c21373974 100644
--- a/arch/parisc/kernel/syscalls/syscall.tbl
+++ b/arch/parisc/kernel/syscalls/syscall.tbl
@@ -460,3 +460,4 @@
 459	common	lsm_get_self_attr		sys_lsm_get_self_attr
 460	common	lsm_set_self_attr		sys_lsm_set_self_attr
 461	common	lsm_list_modules		sys_lsm_list_modules
+462	common	mseal				sys_mseal
diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl
index 17173b82ca21..3656f1ca7a21 100644
--- a/arch/powerpc/kernel/syscalls/syscall.tbl
+++ b/arch/powerpc/kernel/syscalls/syscall.tbl
@@ -548,3 +548,4 @@
 459	common	lsm_get_self_attr		sys_lsm_get_self_attr
 460	common	lsm_set_self_attr		sys_lsm_set_self_attr
 461	common	lsm_list_modules		sys_lsm_list_modules
+462	common	mseal				sys_mseal
diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl
index 095bb86339a7..bd0fee24ad10 100644
--- a/arch/s390/kernel/syscalls/syscall.tbl
+++ b/arch/s390/kernel/syscalls/syscall.tbl
@@ -464,3 +464,4 @@
 459  common	lsm_get_self_attr	sys_lsm_get_self_attr		sys_lsm_get_self_attr
 460  common	lsm_set_self_attr	sys_lsm_set_self_attr		sys_lsm_set_self_attr
 461  common	lsm_list_modules	sys_lsm_list_modules		sys_lsm_list_modules
+462  common	mseal			sys_mseal			sys_mseal
diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl
index 86fe269f0220..bbf83a2db986 100644
--- a/arch/sh/kernel/syscalls/syscall.tbl
+++ b/arch/sh/kernel/syscalls/syscall.tbl
@@ -464,3 +464,4 @@
 459	common	lsm_get_self_attr		sys_lsm_get_self_attr
 460	common	lsm_set_self_attr		sys_lsm_set_self_attr
 461	common	lsm_list_modules		sys_lsm_list_modules
+462	common	mseal				sys_mseal
diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl
index b23d59313589..ac6c281ccfe0 100644
--- a/arch/sparc/kernel/syscalls/syscall.tbl
+++ b/arch/sparc/kernel/syscalls/syscall.tbl
@@ -507,3 +507,4 @@
 459	common	lsm_get_self_attr		sys_lsm_get_self_attr
 460	common	lsm_set_self_attr		sys_lsm_set_self_attr
 461	common	lsm_list_modules		sys_lsm_list_modules
+462	common	mseal 				sys_mseal
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index 5f8591ce7f25..7fd1f57ad3d3 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -466,3 +466,4 @@
 459	i386	lsm_get_self_attr	sys_lsm_get_self_attr
 460	i386	lsm_set_self_attr	sys_lsm_set_self_attr
 461	i386	lsm_list_modules	sys_lsm_list_modules
+462	i386	mseal 			sys_mseal
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 7e8d46f4147f..52df0dec70da 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -383,6 +383,7 @@
 459	common	lsm_get_self_attr	sys_lsm_get_self_attr
 460	common	lsm_set_self_attr	sys_lsm_set_self_attr
 461	common	lsm_list_modules	sys_lsm_list_modules
+462 	common  mseal			sys_mseal
 
 #
 # Due to a historical design error, certain syscalls are numbered differently
diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl
index dd116598fb25..67083fc1b2f5 100644
--- a/arch/xtensa/kernel/syscalls/syscall.tbl
+++ b/arch/xtensa/kernel/syscalls/syscall.tbl
@@ -432,3 +432,4 @@
 459	common	lsm_get_self_attr		sys_lsm_get_self_attr
 460	common	lsm_set_self_attr		sys_lsm_set_self_attr
 461	common	lsm_list_modules		sys_lsm_list_modules
+462	common	mseal 				sys_mseal
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 75f00965ab15..d983c48a3b6a 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -842,8 +842,11 @@ __SYSCALL(__NR_lsm_set_self_attr, sys_lsm_set_self_attr)
 #define __NR_lsm_list_modules 461
 __SYSCALL(__NR_lsm_list_modules, sys_lsm_list_modules)
 
+#define __NR_mseal 462
+__SYSCALL(__NR_mseal, sys_mseal)
+
 #undef __NR_syscalls
-#define __NR_syscalls 462
+#define __NR_syscalls 463
 
 /*
  * 32 bit systems traditionally used different
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index faad00cce269..d7eee421d4bc 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -196,6 +196,7 @@ COND_SYSCALL(migrate_pages);
 COND_SYSCALL(move_pages);
 COND_SYSCALL(set_mempolicy_home_node);
 COND_SYSCALL(cachestat);
+COND_SYSCALL(mseal);
 
 COND_SYSCALL(perf_event_open);
 COND_SYSCALL(accept4);
-- 
2.43.0.429.g432eaa2c6b-goog




* [PATCH v7 2/4] mseal: add mseal syscall
  2024-01-22 15:28 [PATCH v7 0/4] Introduce mseal() jeffxu
  2024-01-22 15:28 ` [PATCH v7 1/4] mseal: Wire up mseal syscall jeffxu
@ 2024-01-22 15:28 ` jeffxu
  2024-01-23 18:14   ` Liam R. Howlett
  2024-01-22 15:28 ` [PATCH v7 3/4] selftest mm/mseal memory sealing jeffxu
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 23+ messages in thread
From: jeffxu @ 2024-01-22 15:28 UTC (permalink / raw)
  To: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap
  Cc: jeffxu, jorgelo, groeck, linux-kernel, linux-kselftest, linux-mm,
	pedro.falcato, dave.hansen, linux-hardening, deraadt, Jeff Xu

From: Jeff Xu <jeffxu@chromium.org>

The new mseal() is a syscall available on 64-bit CPUs, with the
following signature:

int mseal(void *addr, size_t len, unsigned long flags)
addr/len: memory range.
flags: reserved.

mseal() blocks the following operations for the given memory range:

1> Unmapping, moving to another location, and shrinking the size,
   via munmap() and mremap(); these can leave an empty space that can
   then be filled by a VMA with a new set of attributes.

2> Moving or expanding a different VMA into the current location,
   via mremap().

3> Modifying a VMA via mmap(MAP_FIXED).

4> Size expansion, via mremap(), does not appear to pose any specific
   risks to sealed VMAs. It is included anyway because the use case is
   unclear. In any case, users can rely on merging to expand a sealed VMA.

5> mprotect() and pkey_mprotect().

6> Some destructive madvise() behaviors (e.g. MADV_DONTNEED) for anonymous
   memory, when users don't have write permission to the memory. Those
   behaviors can alter region contents by discarding pages, effectively a
   memset(0) for anonymous memory.

In addition, mmap() has two related changes.

The PROT_SEAL bit in the prot field of mmap(). When present, it marks
the map as sealed from creation.

The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
the map as sealable. A map created without MAP_SEALABLE will not support
sealing, i.e. mseal() will fail.

Applications that don't care about sealing can expect their behavior
to remain unchanged. Those that need sealing support opt in by adding
MAP_SEALABLE to their mmap() calls.

I would like to formally acknowledge the valuable contributions
received during the RFC process, which were instrumental
in shaping this patch:

Jann Horn: raising awareness and providing valuable insights on the
destructive madvise operations.
Linus Torvalds: assisting in defining system call signature and scope.
Pedro Falcato: suggesting sealing in the mmap().
Theo de Raadt: sharing the experiences and insights gained from
implementing mimmutable() in OpenBSD.

Finally, the idea that inspired this patch comes from Stephen Röttger’s
work in Chrome V8 CFI.

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 include/linux/mm.h                     |  48 ++++
 include/linux/syscalls.h               |   1 +
 include/uapi/asm-generic/mman-common.h |   8 +
 mm/Makefile                            |   4 +
 mm/madvise.c                           |  12 +
 mm/mmap.c                              |  27 ++
 mm/mprotect.c                          |  10 +
 mm/mremap.c                            |  31 +++
 mm/mseal.c                             | 343 +++++++++++++++++++++++++
 9 files changed, 484 insertions(+)
 create mode 100644 mm/mseal.c

diff --git a/include/linux/mm.h b/include/linux/mm.h
index f5a97dec5169..bdd9a53e9291 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -328,6 +328,14 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_HIGH_ARCH_5	BIT(VM_HIGH_ARCH_BIT_5)
 #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
 
+#ifdef CONFIG_64BIT
+/* VM is sealable, in vm_flags */
+#define VM_SEALABLE	_BITUL(63)
+
+/* VM is sealed, in vm_flags */
+#define VM_SEALED	_BITUL(62)
+#endif
+
 #ifdef CONFIG_ARCH_HAS_PKEYS
 # define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_0
 # define VM_PKEY_BIT0	VM_HIGH_ARCH_0	/* A protection key is a 4-bit value */
@@ -4182,4 +4190,44 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
 	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
 }
 
+#ifdef CONFIG_64BIT
+static inline int can_do_mseal(unsigned long flags)
+{
+	if (flags)
+		return -EINVAL;
+
+	return 0;
+}
+
+bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+		unsigned long end);
+bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
+		unsigned long end, int behavior);
+unsigned long get_mmap_seals(unsigned long prot,
+		unsigned long flags);
+#else
+static inline int can_do_mseal(unsigned long flags)
+{
+	return -EPERM;
+}
+
+static inline bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+		unsigned long end)
+{
+	return true;
+}
+
+static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
+		unsigned long end, int behavior)
+{
+	return true;
+}
+
+static inline unsigned long get_mmap_seals(unsigned long prot,
+	unsigned long flags)
+{
+	return 0;
+}
+#endif
+
 #endif /* _LINUX_MM_H */
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index cdba4d0c6d4a..2d44e0d99e37 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -820,6 +820,7 @@ asmlinkage long sys_process_mrelease(int pidfd, unsigned int flags);
 asmlinkage long sys_remap_file_pages(unsigned long start, unsigned long size,
 			unsigned long prot, unsigned long pgoff,
 			unsigned long flags);
+asmlinkage long sys_mseal(unsigned long start, size_t len, unsigned long flags);
 asmlinkage long sys_mbind(unsigned long start, unsigned long len,
 				unsigned long mode,
 				const unsigned long __user *nmask,
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 6ce1f1ceb432..3ca4d694a621 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -17,6 +17,11 @@
 #define PROT_GROWSDOWN	0x01000000	/* mprotect flag: extend change to start of growsdown vma */
 #define PROT_GROWSUP	0x02000000	/* mprotect flag: extend change to end of growsup vma */
 
+/*
+ * The PROT_SEAL defines memory sealing in the prot argument of mmap().
+ */
+#define PROT_SEAL	0x04000000	/* _BITUL(26) */
+
 /* 0x01 - 0x03 are defined in linux/mman.h */
 #define MAP_TYPE	0x0f		/* Mask for type of mapping */
 #define MAP_FIXED	0x10		/* Interpret addr exactly */
@@ -33,6 +38,9 @@
 #define MAP_UNINITIALIZED 0x4000000	/* For anonymous mmap, memory could be
 					 * uninitialized */
 
+/* map is sealable */
+#define MAP_SEALABLE	0x8000000	/* _BITUL(27) */
+
 /*
  * Flags for mlock
  */
diff --git a/mm/Makefile b/mm/Makefile
index e4b5b75aaec9..cbae83f74642 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -43,6 +43,10 @@ ifdef CONFIG_CROSS_MEMORY_ATTACH
 mmu-$(CONFIG_MMU)	+= process_vm_access.o
 endif
 
+ifdef CONFIG_64BIT
+mmu-$(CONFIG_MMU)	+= mseal.o
+endif
+
 obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 			   maccess.o page-writeback.o folio-compat.o \
 			   readahead.o swap.o truncate.o vmscan.o shrinker.o \
diff --git a/mm/madvise.c b/mm/madvise.c
index 912155a94ed5..41eb5163ed1f 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1393,6 +1393,7 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
  *  -EIO    - an I/O error occurred while paging in data.
  *  -EBADF  - map exists, but area maps something that isn't a file.
  *  -EAGAIN - a kernel resource was temporarily unavailable.
+ *  -EPERM  - memory is sealed.
  */
 int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior)
 {
@@ -1436,10 +1437,21 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
 	start = untagged_addr_remote(mm, start);
 	end = start + len;
 
+	/*
+	 * Check if the address range is sealed for do_madvise().
+	 * can_modify_mm_madv assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm_madv(mm, start, end, behavior)) {
+		error = -EPERM;
+		goto out;
+	}
+
 	blk_start_plug(&plug);
 	error = madvise_walk_vmas(mm, start, end, behavior,
 			madvise_vma_behavior);
 	blk_finish_plug(&plug);
+
+out:
 	if (write)
 		mmap_write_unlock(mm);
 	else
diff --git a/mm/mmap.c b/mm/mmap.c
index b78e83d351d2..32bc2179aed0 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1213,6 +1213,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 {
 	struct mm_struct *mm = current->mm;
 	int pkey = 0;
+	unsigned long vm_seals;
 
 	*populate = 0;
 
@@ -1233,6 +1234,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 	if (flags & MAP_FIXED_NOREPLACE)
 		flags |= MAP_FIXED;
 
+	vm_seals = get_mmap_seals(prot, flags);
+
 	if (!(flags & MAP_FIXED))
 		addr = round_hint_to_min(addr);
 
@@ -1261,6 +1264,13 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 			return -EEXIST;
 	}
 
+	/*
+	 * Check if the address range is sealed for do_mmap().
+	 * can_modify_mm assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm(mm, addr, addr + len))
+		return -EPERM;
+
 	if (prot == PROT_EXEC) {
 		pkey = execute_only_pkey(mm);
 		if (pkey < 0)
@@ -1376,6 +1386,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 			vm_flags |= VM_NORESERVE;
 	}
 
+	vm_flags |= vm_seals;
 	addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
 	if (!IS_ERR_VALUE(addr) &&
 	    ((vm_flags & VM_LOCKED) ||
@@ -2679,6 +2690,14 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
 	if (end == start)
 		return -EINVAL;
 
+	/*
+	 * Check if memory is sealed before arch_unmap.
+	 * Prevent unmapping a sealed VMA.
+	 * can_modify_mm assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm(mm, start, end))
+		return -EPERM;
+
 	 /* arch_unmap() might do unmaps itself.  */
 	arch_unmap(mm, start, end);
 
@@ -3102,6 +3121,14 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
 {
 	struct mm_struct *mm = vma->vm_mm;
 
+	/*
+	 * Check if memory is sealed before arch_unmap.
+	 * Prevent unmapping a sealed VMA.
+	 * can_modify_mm assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm(mm, start, end))
+		return -EPERM;
+
 	arch_unmap(mm, start, end);
 	return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
 }
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 81991102f785..5f0f716bf4ae 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -32,6 +32,7 @@
 #include <linux/sched/sysctl.h>
 #include <linux/userfaultfd_k.h>
 #include <linux/memory-tiers.h>
+#include <uapi/linux/mman.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
@@ -743,6 +744,15 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
 		}
 	}
 
+	/*
+	 * checking if memory is sealed.
+	 * can_modify_mm assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm(current->mm, start, end)) {
+		error = -EPERM;
+		goto out;
+	}
+
 	prev = vma_prev(&vmi);
 	if (start > vma->vm_start)
 		prev = vma;
diff --git a/mm/mremap.c b/mm/mremap.c
index 38d98465f3d8..d69b438dcf83 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -902,7 +902,25 @@ static unsigned long mremap_to(unsigned long addr, unsigned long old_len,
 	if ((mm->map_count + 2) >= sysctl_max_map_count - 3)
 		return -ENOMEM;
 
+	/*
+	 * In mremap_to().
+	 * Move a VMA to another location, check if src addr is sealed.
+	 *
+	 * Place can_modify_mm here because mremap_to()
+	 * does its own checking for address range, and we only
+	 * check the sealing after passing those checks.
+	 *
+	 * can_modify_mm assumes we have acquired the lock on MM.
+	 */
+	if (!can_modify_mm(mm, addr, addr + old_len))
+		return -EPERM;
+
 	if (flags & MREMAP_FIXED) {
+		/*
+		 * In mremap_to().
+		 * VMA is moved to dst address, and munmap dst first.
+		 * do_munmap will check if dst is sealed.
+		 */
 		ret = do_munmap(mm, new_addr, new_len, uf_unmap_early);
 		if (ret)
 			goto out;
@@ -1061,6 +1079,19 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 		goto out;
 	}
 
+	/*
+	 * Below is shrink/expand case (not mremap_to())
+	 * Check if src address is sealed, if so, reject.
+	 * In other words, prevent shrinking or expanding a sealed VMA.
+	 *
+	 * Place can_modify_mm here so we can keep the logic related to
+	 * shrink/expand together.
+	 */
+	if (!can_modify_mm(mm, addr, addr + old_len)) {
+		ret = -EPERM;
+		goto out;
+	}
+
 	/*
 	 * Always allow a shrinking remap: that just unmaps
 	 * the unnecessary pages..
diff --git a/mm/mseal.c b/mm/mseal.c
new file mode 100644
index 000000000000..abc00c0b9895
--- /dev/null
+++ b/mm/mseal.c
@@ -0,0 +1,343 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  Implement mseal() syscall.
+ *
+ *  Copyright (c) 2023,2024 Google, Inc.
+ *
+ *  Author: Jeff Xu <jeffxu@chromium.org>
+ */
+
+#include <linux/mempolicy.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/mm_inline.h>
+#include <linux/mmu_context.h>
+#include <linux/syscalls.h>
+#include <linux/sched.h>
+#include "internal.h"
+
+static inline bool vma_is_sealed(struct vm_area_struct *vma)
+{
+	return (vma->vm_flags & VM_SEALED);
+}
+
+static inline bool vma_is_sealable(struct vm_area_struct *vma)
+{
+	return vma->vm_flags & VM_SEALABLE;
+}
+
+static inline void set_vma_sealed(struct vm_area_struct *vma)
+{
+	vm_flags_set(vma, VM_SEALED);
+}
+
+/*
+ * Check if a vma is sealed against modification.
+ * Return true if modification is allowed.
+ */
+static bool can_modify_vma(struct vm_area_struct *vma)
+{
+	if (vma_is_sealed(vma))
+		return false;
+
+	return true;
+}
+
+static bool is_madv_discard(int behavior)
+{
+	return	behavior &
+		(MADV_FREE | MADV_DONTNEED | MADV_DONTNEED_LOCKED |
+		 MADV_REMOVE | MADV_DONTFORK | MADV_WIPEONFORK);
+}
+
+static bool is_ro_anon(struct vm_area_struct *vma)
+{
+	/* check anonymous mapping. */
+	if (vma->vm_file || vma->vm_flags & VM_SHARED)
+		return false;
+
+	/*
+	 * check for non-writable:
+	 * PROT=RO or PKRU is not writeable.
+	 */
+	if (!(vma->vm_flags & VM_WRITE) ||
+		!arch_vma_access_permitted(vma, true, false, false))
+		return true;
+
+	return false;
+}
+
+/*
+ * Check if the vmas of a memory range are allowed to be modified.
+ * The memory range can have a gap (unallocated memory).
+ * Return true if it is allowed.
+ */
+bool can_modify_mm(struct mm_struct *mm, unsigned long start, unsigned long end)
+{
+	struct vm_area_struct *vma;
+
+	VMA_ITERATOR(vmi, mm, start);
+
+	/* going through each vma to check. */
+	for_each_vma_range(vmi, vma, end) {
+		if (!can_modify_vma(vma))
+			return false;
+	}
+
+	/* Allow by default. */
+	return true;
+}
+
+/*
+ * Check if the vmas of a memory range are allowed to be modified by madvise.
+ * The memory range can have a gap (unallocated memory).
+ * Return true if it is allowed.
+ */
+bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long end,
+		int behavior)
+{
+	struct vm_area_struct *vma;
+
+	VMA_ITERATOR(vmi, mm, start);
+
+	if (!is_madv_discard(behavior))
+		return true;
+
+	/* going through each vma to check. */
+	for_each_vma_range(vmi, vma, end)
+		if (is_ro_anon(vma) && !can_modify_vma(vma))
+			return false;
+
+	/* Allow by default. */
+	return true;
+}
+
+unsigned long get_mmap_seals(unsigned long prot,
+		unsigned long flags)
+{
+	unsigned long vm_seals;
+
+	if (prot & PROT_SEAL)
+		vm_seals = VM_SEALED | VM_SEALABLE;
+	else
+		vm_seals = (flags & MAP_SEALABLE) ? VM_SEALABLE : 0;
+
+	return vm_seals;
+}
+
+/*
+ * Check if a seal type can be added to VMA.
+ */
+static bool can_add_vma_seal(struct vm_area_struct *vma)
+{
+	/* if map is not sealable, reject. */
+	if (!vma_is_sealable(vma))
+		return false;
+
+	return true;
+}
+
+static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
+		struct vm_area_struct **prev, unsigned long start,
+		unsigned long end, vm_flags_t newflags)
+{
+	int ret = 0;
+	vm_flags_t oldflags = vma->vm_flags;
+
+	if (newflags == oldflags)
+		goto out;
+
+	vma = vma_modify_flags(vmi, *prev, vma, start, end, newflags);
+	if (IS_ERR(vma)) {
+		ret = PTR_ERR(vma);
+		goto out;
+	}
+
+	set_vma_sealed(vma);
+out:
+	*prev = vma;
+	return ret;
+}
+
+/*
+ * Check for do_mseal:
+ * 1> start is part of a valid vma.
+ * 2> end is part of a valid vma.
+ * 3> No gap (unallocated address) between start and end.
+ * 4> map is sealable.
+ */
+static int check_mm_seal(unsigned long start, unsigned long end)
+{
+	struct vm_area_struct *vma;
+	unsigned long nstart = start;
+
+	VMA_ITERATOR(vmi, current->mm, start);
+
+	/* going through each vma to check. */
+	for_each_vma_range(vmi, vma, end) {
+		if (vma->vm_start > nstart)
+			/* unallocated memory found. */
+			return -ENOMEM;
+
+		if (!can_add_vma_seal(vma))
+			return -EACCES;
+
+		if (vma->vm_end >= end)
+			return 0;
+
+		nstart = vma->vm_end;
+	}
+
+	return -ENOMEM;
+}
+
+/*
+ * Apply sealing.
+ */
+static int apply_mm_seal(unsigned long start, unsigned long end)
+{
+	unsigned long nstart;
+	struct vm_area_struct *vma, *prev;
+
+	VMA_ITERATOR(vmi, current->mm, start);
+
+	vma = vma_iter_load(&vmi);
+	/*
+	 * Note: check_mm_seal should have already checked the ENOMEM case,
+	 * so vma should not be NULL; same for the other ENOMEM cases.
+	 */
+	prev = vma_prev(&vmi);
+	if (start > vma->vm_start)
+		prev = vma;
+
+	nstart = start;
+	for_each_vma_range(vmi, vma, end) {
+		int error;
+		unsigned long tmp;
+		vm_flags_t newflags;
+
+		newflags = vma->vm_flags | VM_SEALED;
+		tmp = vma->vm_end;
+		if (tmp > end)
+			tmp = end;
+		error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
+		if (error)
+			return error;
+		tmp = vma_iter_end(&vmi);
+		nstart = tmp;
+	}
+
+	return 0;
+}
+
+/*
+ * mseal(2) seals the VMA metadata from
+ * selected syscalls.
+ *
+ * addr/len: VM address range.
+ *
+ *  The address range by addr/len must meet:
+ *   start (addr) must be in a valid VMA.
+ *   end (addr + len) must be in a valid VMA.
+ *   no gap (unallocated memory) between start and end.
+ *   start (addr) must be page aligned.
+ *
+ *  len: len will be page aligned implicitly.
+ *
+ *   Below VMA operations are blocked after sealing.
+ *   1> Unmapping, moving to another location, and shrinking
+ *	the size, via munmap() and mremap(), can leave an empty
+ *	space, therefore can be replaced with a VMA with a new
+ *	set of attributes.
+ *   2> Moving or expanding a different vma into the current location,
+ *	via mremap().
+ *   3> Modifying a VMA via mmap(MAP_FIXED).
+ *   4> Size expansion, via mremap(), does not appear to pose any
+ *	specific risks to sealed VMAs. It is included anyway because
+ *	the use case is unclear. In any case, users can rely on
+ *	merging to expand a sealed VMA.
+ *   5> mprotect and pkey_mprotect.
+ *   6> Some destructive madvise() behaviors (e.g. MADV_DONTNEED)
+ *      for anonymous memory, when users don't have write permission to the
+ *	memory. Those behaviors can alter region contents by discarding pages,
+ *	effectively a memset(0) for anonymous memory.
+ *
+ *  flags: reserved.
+ *
+ * return values:
+ *  zero: success.
+ *  -EINVAL:
+ *   invalid input flags.
+ *   start address is not page aligned.
+ *   Address range (start + len) overflow.
+ *  -ENOMEM:
+ *   addr is not a valid address (not allocated).
+ *   end (start + len) is not a valid address.
+ *   a gap (unallocated memory) between start and end.
+ *  -EACCES:
+ *   MAP_SEALABLE is not set.
+ *  -EPERM:
+ *  - On 32-bit architectures, sealing is not supported.
+ * Note:
+ *  A user can call mseal(2) multiple times; adding a seal to
+ *  already-sealed memory is a no-op (no error).
+ *
+ *  unseal() is not supported.
+ */
+static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
+{
+	size_t len;
+	int ret = 0;
+	unsigned long end;
+	struct mm_struct *mm = current->mm;
+
+	ret = can_do_mseal(flags);
+	if (ret)
+		return ret;
+
+	start = untagged_addr(start);
+	if (!PAGE_ALIGNED(start))
+		return -EINVAL;
+
+	len = PAGE_ALIGN(len_in);
+	/* Check to see whether len was rounded up from small -ve to zero. */
+	if (len_in && !len)
+		return -EINVAL;
+
+	end = start + len;
+	if (end < start)
+		return -EINVAL;
+
+	if (end == start)
+		return 0;
+
+	if (mmap_write_lock_killable(mm))
+		return -EINTR;
+
+	/*
+	 * First pass, this helps to avoid
+	 * partial sealing in case of error in input address range,
+	 * e.g. ENOMEM and EACCES errors.
+	 */
+	ret = check_mm_seal(start, end);
+	if (ret)
+		goto out;
+
+	/*
+	 * Second pass, this should succeed, unless there are errors
+	 * from vma_modify_flags, e.g. a merge/split error, or the process
+	 * reaching the max supported number of VMAs; however, those cases shall
+	 * be rare.
+	 */
+	ret = apply_mm_seal(start, end);
+
+out:
+	mmap_write_unlock(current->mm);
+	return ret;
+}
+
+SYSCALL_DEFINE3(mseal, unsigned long, start, size_t, len, unsigned long,
+		flags)
+{
+	return do_mseal(start, len, flags);
+}
-- 
2.43.0.429.g432eaa2c6b-goog




* [PATCH v7 3/4] selftest mm/mseal memory sealing
  2024-01-22 15:28 [PATCH v7 0/4] Introduce mseal() jeffxu
  2024-01-22 15:28 ` [PATCH v7 1/4] mseal: Wire up mseal syscall jeffxu
  2024-01-22 15:28 ` [PATCH v7 2/4] mseal: add " jeffxu
@ 2024-01-22 15:28 ` jeffxu
  2024-01-22 15:28 ` [PATCH v7 4/4] mseal:add documentation jeffxu
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 23+ messages in thread
From: jeffxu @ 2024-01-22 15:28 UTC (permalink / raw)
  To: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap
  Cc: jeffxu, jorgelo, groeck, linux-kernel, linux-kselftest, linux-mm,
	pedro.falcato, dave.hansen, linux-hardening, deraadt, Jeff Xu

From: Jeff Xu <jeffxu@chromium.org>

selftest for memory sealing change in mmap() and mseal().

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 tools/testing/selftests/mm/.gitignore   |    1 +
 tools/testing/selftests/mm/Makefile     |    1 +
 tools/testing/selftests/mm/mseal_test.c | 1997 +++++++++++++++++++++++
 3 files changed, 1999 insertions(+)
 create mode 100644 tools/testing/selftests/mm/mseal_test.c

diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
index 4ff10ea61461..76474c51c786 100644
--- a/tools/testing/selftests/mm/.gitignore
+++ b/tools/testing/selftests/mm/.gitignore
@@ -46,3 +46,4 @@ gup_longterm
 mkdirty
 va_high_addr_switch
 hugetlb_fault_after_madv
+mseal_test
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index 2453add65d12..ba36a5c2b1fc 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests
 TEST_GEN_FILES += mrelease_test
 TEST_GEN_FILES += mremap_dontunmap
 TEST_GEN_FILES += mremap_test
+TEST_GEN_FILES += mseal_test
 TEST_GEN_FILES += on-fault-limit
 TEST_GEN_FILES += pagemap_ioctl
 TEST_GEN_FILES += thuge-gen
diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
new file mode 100644
index 000000000000..0d8b7041a7a0
--- /dev/null
+++ b/tools/testing/selftests/mm/mseal_test.c
@@ -0,0 +1,1997 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <sys/mman.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdbool.h>
+#include "../kselftest.h"
+#include <syscall.h>
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <assert.h>
+#include <fcntl.h>
+#include <assert.h>
+#include <sys/ioctl.h>
+#include <sys/vfs.h>
+#include <sys/stat.h>
+
+/*
+ * Definitions needed when building manually with gcc:
+ * gcc -I ../../../../usr/include -DDEBUG -O3 mseal_test.c -o mseal_test
+ */
+#ifndef MAP_SEALABLE
+#define MAP_SEALABLE 0x8000000
+#endif
+
+#ifndef PROT_SEAL
+#define PROT_SEAL 0x04000000
+#endif
+
+#ifndef PKEY_DISABLE_ACCESS
+# define PKEY_DISABLE_ACCESS    0x1
+#endif
+
+#ifndef PKEY_DISABLE_WRITE
+# define PKEY_DISABLE_WRITE     0x2
+#endif
+
+#ifndef PKEY_BITS_PER_PKEY
+#define PKEY_BITS_PER_PKEY      2
+#endif
+
+#ifndef PKEY_MASK
+#define PKEY_MASK       (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)
+#endif
+
+#define FAIL_TEST_IF_FALSE(c) do {\
+		if (!(c)) {\
+			ksft_test_result_fail("%s, line:%d\n", __func__, __LINE__);\
+			goto test_end;\
+		} \
+	} \
+	while (0)
+
+#define SKIP_TEST_IF_FALSE(c) do {\
+		if (!(c)) {\
+			ksft_test_result_skip("%s, line:%d\n", __func__, __LINE__);\
+			goto test_end;\
+		} \
+	} \
+	while (0)
+
+
+#define TEST_END_CHECK() {\
+		ksft_test_result_pass("%s\n", __func__);\
+		return;\
+test_end:\
+		return;\
+}
+
+#ifndef u64
+#define u64 unsigned long long
+#endif
+
+static unsigned long get_vma_size(void *addr)
+{
+	FILE *maps;
+	char line[256];
+	int size = 0;
+	uintptr_t  addr_start, addr_end;
+
+	maps = fopen("/proc/self/maps", "r");
+	if (!maps)
+		return 0;
+
+	while (fgets(line, sizeof(line), maps)) {
+		if (sscanf(line, "%lx-%lx", &addr_start, &addr_end) == 2) {
+			if (addr_start == (uintptr_t) addr) {
+				size = addr_end - addr_start;
+				break;
+			}
+		}
+	}
+	fclose(maps);
+	return size;
+}
+
+/*
+ * define sys_xyx to call syscall directly.
+ */
+static int sys_mseal(void *start, size_t len)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_mseal, start, len, 0);
+	return sret;
+}
+
+static int sys_mprotect(void *ptr, size_t size, unsigned long prot)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(SYS_mprotect, ptr, size, prot);
+	return sret;
+}
+
+static int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
+		unsigned long pkey)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_pkey_mprotect, ptr, size, orig_prot, pkey);
+	return sret;
+}
+
+static void *sys_mmap(void *addr, unsigned long len, unsigned long prot,
+	unsigned long flags, unsigned long fd, unsigned long offset)
+{
+	void *sret;
+
+	errno = 0;
+	sret = (void *) syscall(__NR_mmap, addr, len, prot,
+		flags, fd, offset);
+	return sret;
+}
+
+static int sys_munmap(void *ptr, size_t size)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(SYS_munmap, ptr, size);
+	return sret;
+}
+
+static int sys_madvise(void *start, size_t len, int types)
+{
+	int sret;
+
+	errno = 0;
+	sret = syscall(__NR_madvise, start, len, types);
+	return sret;
+}
+
+static int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
+{
+	int ret = syscall(SYS_pkey_alloc, flags, init_val);
+
+	return ret;
+}
+
+static unsigned int __read_pkey_reg(void)
+{
+	unsigned int eax, edx;
+	unsigned int ecx = 0;
+	unsigned int pkey_reg;
+
+	asm volatile(".byte 0x0f,0x01,0xee\n\t"
+			: "=a" (eax), "=d" (edx)
+			: "c" (ecx));
+	pkey_reg = eax;
+	return pkey_reg;
+}
+
+static void __write_pkey_reg(u64 pkey_reg)
+{
+	unsigned int eax = pkey_reg;
+	unsigned int ecx = 0;
+	unsigned int edx = 0;
+
+	asm volatile(".byte 0x0f,0x01,0xef\n\t"
+			: : "a" (eax), "c" (ecx), "d" (edx));
+	assert(pkey_reg == __read_pkey_reg());
+}
+
+static unsigned long pkey_bit_position(int pkey)
+{
+	return pkey * PKEY_BITS_PER_PKEY;
+}
+
+static u64 set_pkey_bits(u64 reg, int pkey, u64 flags)
+{
+	unsigned long shift = pkey_bit_position(pkey);
+
+	/* mask out bits from pkey in old value */
+	reg &= ~((u64)PKEY_MASK << shift);
+	/* OR in new bits for pkey */
+	reg |= (flags & PKEY_MASK) << shift;
+	return reg;
+}
+
+static void set_pkey(int pkey, unsigned long pkey_value)
+{
+	unsigned long mask = (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE);
+	u64 new_pkey_reg;
+
+	assert(!(pkey_value & ~mask));
+	new_pkey_reg = set_pkey_bits(__read_pkey_reg(), pkey, pkey_value);
+	__write_pkey_reg(new_pkey_reg);
+}
+
+static void setup_single_address(int size, void **ptrOut)
+{
+	void *ptr;
+
+	ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+	assert(ptr != (void *)-1);
+	*ptrOut = ptr;
+}
+
+static void setup_single_address_rw_sealable(int size, void **ptrOut, bool sealable)
+{
+	void *ptr;
+	unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;
+
+	if (sealable)
+		mapflags |= MAP_SEALABLE;
+
+	ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0);
+	assert(ptr != (void *)-1);
+	*ptrOut = ptr;
+}
+
+static void clean_single_address(void *ptr, int size)
+{
+	int ret;
+
+	ret = munmap(ptr, size);
+	assert(!ret);
+}
+
+static void seal_single_address(void *ptr, int size)
+{
+	int ret;
+
+	ret = sys_mseal(ptr, size);
+	assert(!ret);
+}
+
+bool seal_support(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+
+	ptr = sys_mmap(NULL, page_size, PROT_READ | PROT_SEAL, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	if (ptr == (void *) -1)
+		return false;
+
+	ret = sys_mseal(ptr, page_size);
+	if (ret < 0)
+		return false;
+
+	return true;
+}
+
+bool pkey_supported(void)
+{
+	int pkey = sys_pkey_alloc(0, 0);
+
+	if (pkey > 0)
+		return true;
+	return false;
+}
+
+static void test_seal_addseal(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_unmapped_start(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* munmap 2 pages from ptr. */
+	ret = sys_munmap(ptr, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mprotect will fail because 2 pages from ptr are unmapped. */
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* mseal will fail because 2 pages from ptr are unmapped. */
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_unmapped_middle(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* munmap 2 pages from ptr + page. */
+	ret = sys_munmap(ptr + page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mprotect will fail, since middle 2 pages are unmapped. */
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* mseal will fail as well. */
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* we can still add seals to the first page and last page */
+	ret = sys_mseal(ptr, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_mseal(ptr + 3 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_unmapped_end(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* unmap last 2 pages. */
+	ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mprotect will fail since last 2 pages are unmapped. */
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* mseal will fail as well. */
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* The first 2 pages are not sealed, and we can add seals */
+	ret = sys_mseal(ptr, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_multiple_vmas(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split the vma into 3. */
+	ret = sys_mprotect(ptr + page_size, 2 * page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mprotect will get applied to all 4 pages - 3 VMAs. */
+	ret = sys_mprotect(ptr, size, PROT_READ);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* use mprotect to split the vma into 3. */
+	ret = sys_mprotect(ptr + page_size, 2 * page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mseal gets applied to all 4 pages - 3 VMAs. */
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_split_start(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split at middle */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the first page, this will split the VMA */
+	ret = sys_mseal(ptr, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* add seals to the remaining 3 pages */
+	ret = sys_mseal(ptr + page_size, 3 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_split_end(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split at middle */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the last page */
+	ret = sys_mseal(ptr + 3 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* Adding seals to the first 3 pages */
+	ret = sys_mseal(ptr, 3 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_invalid_input(void)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(8 * page_size, &ptr);
+	clean_single_address(ptr + 4 * page_size, 4 * page_size);
+
+	/* invalid flag */
+	ret = syscall(__NR_mseal, ptr, size, 0x20);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* unaligned address */
+	ret = sys_mseal(ptr + 1, 2 * page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* length too big */
+	ret = sys_mseal(ptr, 5 * page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* length overflow */
+	ret = sys_mseal(ptr, UINT64_MAX/page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* start is not in a valid VMA */
+	ret = sys_mseal(ptr - page_size, 5 * page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_zero_length(void)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_mprotect(ptr, 0, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal 0 length will be OK, same as mprotect */
+	ret = sys_mseal(ptr, 0);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* verify the 4 pages are not sealed by previous call. */
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_twice(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* apply the same seal will be OK. idempotent. */
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_single_address(ptr, size);
+
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_start_mprotect(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_single_address(ptr, page_size);
+
+	/* the first page is sealed. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	/* pages after the first page are not sealed. */
+	ret = sys_mprotect(ptr + page_size, page_size * 3,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_end_mprotect(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_single_address(ptr + page_size, 3 * page_size);
+
+	/* first page is not sealed */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* last 3 pages are sealed */
+	ret = sys_mprotect(ptr + page_size, page_size * 3,
+			PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_unalign_len(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_single_address(ptr, page_size * 2 - 1);
+
+	/* 2 pages are sealed. */
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_mprotect(ptr + page_size * 2, page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_unalign_len_variant_2(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+	if (seal)
+		seal_single_address(ptr, page_size * 2 + 1);
+
+	/* 3 pages are sealed. */
+	ret = sys_mprotect(ptr, page_size * 3, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_mprotect(ptr + page_size * 3, page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_two_vma(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split */
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal)
+		seal_single_address(ptr, page_size * 4);
+
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_mprotect(ptr + page_size * 2, page_size * 2,
+			PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_two_vma_with_split(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split as two vma. */
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mseal can apply across 2 vma, also split them. */
+	if (seal)
+		seal_single_address(ptr + page_size, page_size * 2);
+
+	/* the first page is not sealed. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* the second page is sealed. */
+	ret = sys_mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	/* the third page is sealed. */
+	ret = sys_mprotect(ptr + 2 * page_size, page_size,
+			PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	/* the fourth page is not sealed. */
+	ret = sys_mprotect(ptr + 3 * page_size, page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_partial_mprotect(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* seal one page. */
+	if (seal)
+		seal_single_address(ptr, page_size);
+
+	/* mprotect on the first 2 pages will fail, since the first page is sealed. */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_two_vma_with_gap(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* use mprotect to split. */
+	ret = sys_mprotect(ptr + 3 * page_size, page_size,
+			PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* use munmap to free two pages in the middle */
+	ret = sys_munmap(ptr + page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* mprotect will fail because there is a gap in the address range. */
+	/* note: internally, mprotect still updated the first page. */
+	ret = sys_mprotect(ptr, 4 * page_size, PROT_READ);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* mseal will fail as well. */
+	ret = sys_mseal(ptr, 4 * page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* the first page is not sealed. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ);
+	FAIL_TEST_IF_FALSE(ret == 0);
+
+	/* the last page is not sealed. */
+	ret = sys_mprotect(ptr + 3 * page_size, page_size, PROT_READ);
+	FAIL_TEST_IF_FALSE(ret == 0);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_split(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal all 4 pages. */
+	if (seal) {
+		ret = sys_mseal(ptr, 4 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* mprotect is blocked by the seal. */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+
+	ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mprotect_merge(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split one page. */
+	ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal first two pages. */
+	if (seal) {
+		ret = sys_mseal(ptr, 2 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* 2 pages are sealed. */
+	ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	/* last 2 pages are not sealed. */
+	ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
+	FAIL_TEST_IF_FALSE(ret == 0);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_munmap(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* 4 pages are sealed. */
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+/*
+ * allocate 4 pages,
+ * use mprotect to split it as two VMAs
+ * seal the whole range
+ * munmap will fail on both
+ */
+static void test_seal_munmap_two_vma(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* use mprotect to split */
+	ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	ret = sys_munmap(ptr, page_size * 2);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr + page_size, page_size * 2);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+/*
+ * allocate a VMA with 4 pages.
+ * munmap the middle 2 pages.
+ * sealing the whole 4 pages will fail.
+ * note: none of the pages is sealed.
+ * munmap the first page will be OK.
+ * munmap the last page will be OK.
+ */
+static void test_seal_munmap_vma_with_gap(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	ret = sys_munmap(ptr + page_size, page_size * 2);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal) {
+		/* can't have gap in the middle. */
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(ret < 0);
+	}
+
+	ret = sys_munmap(ptr, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr + page_size * 2, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_munmap_start_freed(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	/* unmap the first page. */
+	ret = sys_munmap(ptr, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the last 3 pages. */
+	if (seal) {
+		ret = sys_mseal(ptr + page_size, 3 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* unmap from the first page. */
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		/* note: this will be OK, even though the first page */
+		/* is already unmapped. */
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_munmap_end_freed(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+	/* unmap last page. */
+	ret = sys_munmap(ptr + page_size * 3, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the first 3 pages. */
+	if (seal) {
+		ret = sys_mseal(ptr, 3 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* unmap all pages. */
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_munmap_middle_freed(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+	/* unmap 2 pages in the middle. */
+	ret = sys_munmap(ptr + page_size, page_size * 2);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* seal the first page. */
+	if (seal) {
+		ret = sys_mseal(ptr, page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* munmap all 4 pages. */
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_shrink(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* shrink from 4 pages to 2 pages. */
+	ret2 = mremap(ptr, size, 2 * page_size, 0, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_expand(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	/* unmap the last 2 pages. */
+	ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal) {
+		ret = sys_mseal(ptr, 2 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* expand from 2 page to 4 pages. */
+	ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 == ptr);
+
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_move(bool seal)
+{
+	void *ptr, *newPtr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	setup_single_address(size, &newPtr);
+	clean_single_address(newPtr, size);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* move from ptr to fixed address. */
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mmap_overwrite_prot(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* use mmap to change protection. */
+	ret2 = sys_mmap(ptr, size, PROT_NONE,
+			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == ptr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mmap_expand(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 12 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	/* unmap the last 4 pages. */
+	ret = sys_munmap(ptr + 8 * page_size, 4 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal) {
+		ret = sys_mseal(ptr, 8 * page_size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* use mmap to expand. */
+	ret2 = sys_mmap(ptr, size, PROT_READ,
+			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == ptr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mmap_shrink(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 12 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* use mmap to shrink. */
+	ret2 = sys_mmap(ptr, 8 * page_size, PROT_READ,
+			MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == ptr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_shrink_fixed(bool seal)
+{
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	setup_single_address(size, &newAddr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* mremap to move and shrink to fixed address */
+	ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+			newAddr);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == newAddr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_expand_fixed(bool seal)
+{
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(page_size, &ptr);
+	setup_single_address(size, &newAddr);
+
+	if (seal) {
+		ret = sys_mseal(newAddr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* mremap to move and expand to fixed address */
+	ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
+			newAddr);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == newAddr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_move_fixed(bool seal)
+{
+	void *ptr;
+	void *newAddr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+	setup_single_address(size, &newAddr);
+
+	if (seal) {
+		ret = sys_mseal(newAddr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* mremap to move to fixed address */
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else
+		FAIL_TEST_IF_FALSE(ret2 == newAddr);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_move_fixed_zero(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/*
+	 * MREMAP_FIXED can move the mapping to zero address
+	 */
+	ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+			0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 == 0);
+
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_move_dontunmap(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* mremap to move, and don't unmap src addr. */
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+
+	}
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	void *ret2;
+
+	setup_single_address(size, &ptr);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/*
+	 * The 0xdeaddead should have no effect on the dest addr
+	 * when MREMAP_DONTUNMAP is set.
+	 */
+	ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+			0xdeaddead);
+	if (seal) {
+		FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED);
+		FAIL_TEST_IF_FALSE(errno == EPERM);
+	} else {
+		FAIL_TEST_IF_FALSE(ret2 != MAP_FAILED);
+		FAIL_TEST_IF_FALSE((long)ret2 != 0xdeaddead);
+
+	}
+
+	TEST_END_CHECK();
+}
+
+
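+/*
+ * A mapping created with mmap(PROT_SEAL) is sealed at creation:
+ * munmap, mprotect and destructive madvise are all expected to fail.
+ */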
+static void test_seal_mmap_seal(void)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	ptr = sys_mmap(NULL, size, PROT_READ | PROT_SEAL, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+
+	ret = sys_munmap(ptr, size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_merge_and_split(void)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size;
+	int ret;
+
+	/* (24 RO) */
+	setup_single_address(24 * page_size, &ptr);
+
+	/* use mprotect(NONE) to set the outer boundary */
+	/* (1 NONE) (22 RO) (1 NONE) */
+	ret = sys_mprotect(ptr, page_size, PROT_NONE);
+	FAIL_TEST_IF_FALSE(!ret);
+	ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + page_size);
+	FAIL_TEST_IF_FALSE(size == 22 * page_size);
+
+	/* use mseal to split from beginning */
+	/* (1 NONE) (1 RO_SEAL) (21 RO) (1 NONE) */
+	ret = sys_mseal(ptr + page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + page_size);
+	FAIL_TEST_IF_FALSE(size == page_size);
+	size = get_vma_size(ptr + 2 * page_size);
+	FAIL_TEST_IF_FALSE(size == 21 * page_size);
+
+	/* use mseal to split from the end. */
+	/* (1 NONE) (1 RO_SEAL) (20 RO) (1 RO_SEAL) (1 NONE) */
+	ret = sys_mseal(ptr + 22 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + 22 * page_size);
+	FAIL_TEST_IF_FALSE(size == page_size);
+	size = get_vma_size(ptr + 2 * page_size);
+	FAIL_TEST_IF_FALSE(size == 20 * page_size);
+
+	/* merge with prev. */
+	/* (1 NONE) (2 RO_SEAL) (19 RO) (1 RO_SEAL) (1 NONE) */
+	ret = sys_mseal(ptr + 2 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr +  page_size);
+	FAIL_TEST_IF_FALSE(size ==  2 * page_size);
+
+	/* merge with after. */
+	/* (1 NONE) (2 RO_SEAL) (18 RO) (2 RO_SEALS) (1 NONE) */
+	ret = sys_mseal(ptr + 21 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr +  21 * page_size);
+	FAIL_TEST_IF_FALSE(size ==  2 * page_size);
+
+	/* split and merge from prev */
+	/* (1 NONE) (3 RO_SEAL) (17 RO) (2 RO_SEALS) (1 NONE) */
+	ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr +  1 * page_size);
+	FAIL_TEST_IF_FALSE(size ==  3 * page_size);
+	ret = sys_munmap(ptr + page_size,  page_size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+	ret = sys_mprotect(ptr + 2 * page_size, page_size,  PROT_NONE);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	/* split and merge from next */
+	/* (1 NONE) (3 RO_SEAL) (16 RO) (3 RO_SEALS) (1 NONE) */
+	ret = sys_mseal(ptr + 20 * page_size, 2 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr +  20 * page_size);
+	FAIL_TEST_IF_FALSE(size ==  3 * page_size);
+
+	/* merge from middle of prev and middle of next. */
+	/* (1 NONE) (22 RO_SEAL) (1 NONE) */
+	ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr +  page_size);
+	FAIL_TEST_IF_FALSE(size ==  22 * page_size);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_mmap_merge(void)
+{
+
+	void *ptr, *ptr2;
+	unsigned long page_size = getpagesize();
+	unsigned long size;
+	int ret;
+
+	/* (24 RO) */
+	setup_single_address(24 * page_size, &ptr);
+
+	/* use mprotect(NONE) to set the outer boundary */
+	/* (1 NONE) (22 RO) (1 NONE) */
+	ret = sys_mprotect(ptr, page_size, PROT_NONE);
+	FAIL_TEST_IF_FALSE(!ret);
+	ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + page_size);
+	FAIL_TEST_IF_FALSE(size == 22 * page_size);
+
+	/* use munmap to free 2 segments of memory. */
+	/* (1 NONE) (1 free) (20 RO) (1 free) (1 NONE) */
+	ret = sys_munmap(ptr + page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr + 22 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* apply seal to the middle */
+	/* (1 NONE) (1 free) (20 RO_SEAL) (1 free) (1 NONE) */
+	ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + 2 * page_size);
+	FAIL_TEST_IF_FALSE(size == 20 * page_size);
+
+	/* allocate a mapping at beginning, and make sure it merges. */
+	/* (1 NONE) (21 RO_SEAL) (1 free) (1 NONE) */
+	ptr2 = sys_mmap(ptr + page_size, page_size, PROT_READ | PROT_SEAL,
+		MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr2 != (void *)-1);
+	size = get_vma_size(ptr + page_size);
+	FAIL_TEST_IF_FALSE(size == 21 * page_size);
+
+	/* allocate a mapping at end, and make sure it merges. */
+	/* (1 NONE) (22 RO_SEAL) (1 NONE) */
+	ptr2 = sys_mmap(ptr + 22 * page_size, page_size, PROT_READ | PROT_SEAL,
+		MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr2 != (void *)-1);
+	size = get_vma_size(ptr + page_size);
+	FAIL_TEST_IF_FALSE(size == 22 * page_size);
+
+	TEST_END_CHECK();
+}
+
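+/*
+ * A mapping created without MAP_SEALABLE does not support sealing:
+ * mseal() is expected to fail.
+ */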
+static void test_not_sealable(void)
+{
+	int ret;
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	TEST_END_CHECK();
+}
+
+static void test_mmap_fixed_change_to_sealable(void)
+{
+	int ret;
+	void *ptr, *ptr2;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	ptr2 = sys_mmap(ptr, size, PROT_READ,
+		MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr2 == ptr);
+
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_mmap_fixed_change_to_not_sealable(void)
+{
+	int ret;
+	void *ptr, *ptr2;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+
+	ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+
+	ptr2 = sys_mmap(ptr, size, PROT_READ,
+		MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr2 == ptr);
+
+	ret = sys_mseal(ptr, size);
+	FAIL_TEST_IF_FALSE(ret < 0);
+
+	TEST_END_CHECK();
+}
+
+static void test_merge_sealable(void)
+{
+	int ret;
+	void *ptr, *ptr2;
+	unsigned long page_size = getpagesize();
+	unsigned long size;
+
+	/* (24 RO) */
+	setup_single_address(24 * page_size, &ptr);
+
+	/* use mprotect(NONE) to set the outer boundary */
+	/* (1 NONE) (22 RO) (1 NONE) */
+	ret = sys_mprotect(ptr, page_size, PROT_NONE);
+	FAIL_TEST_IF_FALSE(!ret);
+	ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + page_size);
+	FAIL_TEST_IF_FALSE(size == 22 * page_size);
+
+	/* (1 NONE) (RO) (4 free) (17 RO) (1 NONE) */
+	ret = sys_munmap(ptr + 2 * page_size,  4 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	size = get_vma_size(ptr + page_size);
+	FAIL_TEST_IF_FALSE(size == 1 * page_size);
+	size = get_vma_size(ptr +  6 * page_size);
+	FAIL_TEST_IF_FALSE(size == 17 * page_size);
+
+	/* (1 NONE) (RO) (1 free) (2 RO) (1 free) (17 RO) (1 NONE) */
+	ptr2 = sys_mmap(ptr + 3 * page_size, 2 * page_size, PROT_READ,
+		MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+	size = get_vma_size(ptr + 3 * page_size);
+	FAIL_TEST_IF_FALSE(size == 2 * page_size);
+
+	/* (1 NONE) (RO) (1 free) (20 RO) (1 NONE) */
+	ptr2 = sys_mmap(ptr + 5 * page_size, 1 * page_size, PROT_READ,
+		MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr2 != (void *)-1);
+	size = get_vma_size(ptr + 3 * page_size);
+	FAIL_TEST_IF_FALSE(size == 20 * page_size);
+
+	/* (1 NONE) (RO) (1 free) (19 RO) (1 RO_SEAL) (1 NONE) */
+	ret = sys_mseal(ptr + 22 * page_size, page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* (1 NONE) (RO) (not sealable) (19 RO) (1 RO_SEAL) (1 NONE) */
+	ptr2 = sys_mmap(ptr + 2 * page_size, page_size, PROT_READ,
+		MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr2 != (void *)-1);
+	size = get_vma_size(ptr + page_size);
+	FAIL_TEST_IF_FALSE(size == page_size);
+	size = get_vma_size(ptr + 2 * page_size);
+	FAIL_TEST_IF_FALSE(size == page_size);
+
+	/* (1 NONE) (1 free) (1 NOT_SEALABLE) (19 free) (1 RO_SEAL) (1 NONE) */
+	ret = sys_munmap(ptr + page_size,  page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+	ret = sys_munmap(ptr + 3 * page_size,  19 * page_size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* (1 NONE) (2 NOT_SEALABLE) (19 free) (1 RO_SEAL) (1 NONE) */
+	ptr2 = sys_mmap(ptr + page_size, page_size, PROT_READ,
+		MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr2 != (void *)-1);
+	size = get_vma_size(ptr + page_size);
+	FAIL_TEST_IF_FALSE(size == 2 * page_size);
+
+	/* (1 NONE) (21 NOT_SEALABLE)(1 RO_SEAL) (1 NONE) */
+	ptr2 = sys_mmap(ptr + 3 * page_size, 19 * page_size, PROT_READ,
+		MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr2 != (void *)-1);
+	size = get_vma_size(ptr + page_size);
+	FAIL_TEST_IF_FALSE(size == 21 * page_size);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_discard_ro_anon_on_rw(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address_rw_sealable(size, &ptr, seal);
+	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* sealing doesn't take effect on RW memory. */
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* the base seal still applies. */
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_discard_ro_anon_on_pkey(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	int pkey;
+
+	SKIP_TEST_IF_FALSE(pkey_supported());
+
+	setup_single_address_rw_sealable(size, &ptr, seal);
+	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+
+	pkey = sys_pkey_alloc(0, 0);
+	FAIL_TEST_IF_FALSE(pkey > 0);
+
+	ret = sys_mprotect_pkey((void *)ptr, size, PROT_READ | PROT_WRITE, pkey);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* sealing doesn't take effect if PKRU allows write. */
+	set_pkey(pkey, 0);
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	/* sealing will take effect if PKRU denies write. */
+	set_pkey(pkey, PKEY_DISABLE_WRITE);
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	/* the base seal still applies. */
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_discard_ro_anon_on_filebacked(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	int fd;
+	unsigned long mapflags = MAP_PRIVATE;
+
+	if (seal)
+		mapflags |= MAP_SEALABLE;
+
+	fd = memfd_create("test", 0);
+	FAIL_TEST_IF_FALSE(fd > 0);
+
+	ret = fallocate(fd, 0, 0, size);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ptr = sys_mmap(NULL, size, PROT_READ, mapflags, fd, 0);
+	FAIL_TEST_IF_FALSE(ptr != MAP_FAILED);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* sealing doesn't apply to file-backed mappings. */
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+	close(fd);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_discard_ro_anon_on_shared(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+	unsigned long mapflags = MAP_ANONYMOUS | MAP_SHARED;
+
+	if (seal)
+		mapflags |= MAP_SEALABLE;
+
+	ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
+	FAIL_TEST_IF_FALSE(ptr != (void *)-1);
+
+	if (seal) {
+		ret = sys_mseal(ptr, size);
+		FAIL_TEST_IF_FALSE(!ret);
+	}
+
+	/* sealing doesn't apply to shared mappings. */
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+static void test_seal_discard_ro_anon(bool seal)
+{
+	void *ptr;
+	unsigned long page_size = getpagesize();
+	unsigned long size = 4 * page_size;
+	int ret;
+
+	setup_single_address(size, &ptr);
+
+	if (seal)
+		seal_single_address(ptr, size);
+
+	ret = sys_madvise(ptr, size, MADV_DONTNEED);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	ret = sys_munmap(ptr, size);
+	if (seal)
+		FAIL_TEST_IF_FALSE(ret < 0);
+	else
+		FAIL_TEST_IF_FALSE(!ret);
+
+	TEST_END_CHECK();
+}
+
+int main(int argc, char **argv)
+{
+	bool test_seal = seal_support();
+
+	ksft_print_header();
+
+	if (!test_seal)
+		ksft_exit_skip("sealing not supported, check CONFIG_64BIT\n");
+
+	if (!pkey_supported())
+		ksft_print_msg("PKEY not supported\n");
+
+	ksft_set_plan(85);
+
+	test_seal_addseal();
+	test_seal_unmapped_start();
+	test_seal_unmapped_middle();
+	test_seal_unmapped_end();
+	test_seal_multiple_vmas();
+	test_seal_split_start();
+	test_seal_split_end();
+	test_seal_invalid_input();
+	test_seal_zero_length();
+	test_seal_twice();
+
+	test_seal_mprotect(false);
+	test_seal_mprotect(true);
+
+	test_seal_start_mprotect(false);
+	test_seal_start_mprotect(true);
+
+	test_seal_end_mprotect(false);
+	test_seal_end_mprotect(true);
+
+	test_seal_mprotect_unalign_len(false);
+	test_seal_mprotect_unalign_len(true);
+
+	test_seal_mprotect_unalign_len_variant_2(false);
+	test_seal_mprotect_unalign_len_variant_2(true);
+
+	test_seal_mprotect_two_vma(false);
+	test_seal_mprotect_two_vma(true);
+
+	test_seal_mprotect_two_vma_with_split(false);
+	test_seal_mprotect_two_vma_with_split(true);
+
+	test_seal_mprotect_partial_mprotect(false);
+	test_seal_mprotect_partial_mprotect(true);
+
+	test_seal_mprotect_two_vma_with_gap(false);
+	test_seal_mprotect_two_vma_with_gap(true);
+
+	test_seal_mprotect_merge(false);
+	test_seal_mprotect_merge(true);
+
+	test_seal_mprotect_split(false);
+	test_seal_mprotect_split(true);
+
+	test_seal_munmap(false);
+	test_seal_munmap(true);
+	test_seal_munmap_two_vma(false);
+	test_seal_munmap_two_vma(true);
+	test_seal_munmap_vma_with_gap(false);
+	test_seal_munmap_vma_with_gap(true);
+
+	test_munmap_start_freed(false);
+	test_munmap_start_freed(true);
+	test_munmap_middle_freed(false);
+	test_munmap_middle_freed(true);
+	test_munmap_end_freed(false);
+	test_munmap_end_freed(true);
+
+	test_seal_mremap_shrink(false);
+	test_seal_mremap_shrink(true);
+	test_seal_mremap_expand(false);
+	test_seal_mremap_expand(true);
+	test_seal_mremap_move(false);
+	test_seal_mremap_move(true);
+
+	test_seal_mremap_shrink_fixed(false);
+	test_seal_mremap_shrink_fixed(true);
+	test_seal_mremap_expand_fixed(false);
+	test_seal_mremap_expand_fixed(true);
+	test_seal_mremap_move_fixed(false);
+	test_seal_mremap_move_fixed(true);
+	test_seal_mremap_move_dontunmap(false);
+	test_seal_mremap_move_dontunmap(true);
+	test_seal_mremap_move_fixed_zero(false);
+	test_seal_mremap_move_fixed_zero(true);
+	test_seal_mremap_move_dontunmap_anyaddr(false);
+	test_seal_mremap_move_dontunmap_anyaddr(true);
+	test_seal_discard_ro_anon(false);
+	test_seal_discard_ro_anon(true);
+	test_seal_discard_ro_anon_on_rw(false);
+	test_seal_discard_ro_anon_on_rw(true);
+	test_seal_discard_ro_anon_on_shared(false);
+	test_seal_discard_ro_anon_on_shared(true);
+	test_seal_discard_ro_anon_on_filebacked(false);
+	test_seal_discard_ro_anon_on_filebacked(true);
+	test_seal_mmap_overwrite_prot(false);
+	test_seal_mmap_overwrite_prot(true);
+	test_seal_mmap_expand(false);
+	test_seal_mmap_expand(true);
+	test_seal_mmap_shrink(false);
+	test_seal_mmap_shrink(true);
+
+	test_seal_mmap_seal();
+	test_seal_merge_and_split();
+	test_seal_mmap_merge();
+
+	test_not_sealable();
+	test_merge_sealable();
+	test_mmap_fixed_change_to_sealable();
+	test_mmap_fixed_change_to_not_sealable();
+
+	test_seal_discard_ro_anon_on_pkey(false);
+	test_seal_discard_ro_anon_on_pkey(true);
+
+	ksft_finished();
+	return 0;
+}
-- 
2.43.0.429.g432eaa2c6b-goog



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v7 4/4] mseal:add documentation
  2024-01-22 15:28 [PATCH v7 0/4] Introduce mseal() jeffxu
                   ` (2 preceding siblings ...)
  2024-01-22 15:28 ` [PATCH v7 3/4] selftest mm/mseal memory sealing jeffxu
@ 2024-01-22 15:28 ` jeffxu
  2024-01-22 15:49 ` [PATCH v7 0/4] Introduce mseal() Theo de Raadt
  2024-01-29 22:36 ` Jonathan Corbet
  5 siblings, 0 replies; 23+ messages in thread
From: jeffxu @ 2024-01-22 15:28 UTC (permalink / raw)
  To: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap
  Cc: jeffxu, jorgelo, groeck, linux-kernel, linux-kselftest, linux-mm,
	pedro.falcato, dave.hansen, linux-hardening, deraadt, Jeff Xu

From: Jeff Xu <jeffxu@chromium.org>

Add documentation for mseal().

Signed-off-by: Jeff Xu <jeffxu@chromium.org>
---
 Documentation/userspace-api/index.rst |   1 +
 Documentation/userspace-api/mseal.rst | 183 ++++++++++++++++++++++++++
 2 files changed, 184 insertions(+)
 create mode 100644 Documentation/userspace-api/mseal.rst

diff --git a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst
index 09f61bd2ac2e..178f6a1d79cb 100644
--- a/Documentation/userspace-api/index.rst
+++ b/Documentation/userspace-api/index.rst
@@ -26,6 +26,7 @@ place where this information is gathered.
    iommu
    iommufd
    media/index
+   mseal
    netlink/index
    sysfs-platform_profile
    vduse
diff --git a/Documentation/userspace-api/mseal.rst b/Documentation/userspace-api/mseal.rst
new file mode 100644
index 000000000000..929a706b70eb
--- /dev/null
+++ b/Documentation/userspace-api/mseal.rst
@@ -0,0 +1,183 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====================
+Introduction of mseal
+=====================
+
+:Author: Jeff Xu <jeffxu@chromium.org>
+
+Modern CPUs support memory permissions such as the RW and NX bits. The memory
+permission feature improves the security stance on memory corruption bugs, i.e.
+the attacker can't just write to arbitrary memory and point the code to it;
+the memory has to be marked with the X bit, or else an exception will occur.
+
+Memory sealing additionally protects the mapping itself against
+modifications. This is useful to mitigate memory corruption issues where a
+corrupted pointer is passed to a memory management system. For example,
+such an attacker primitive can break control-flow integrity guarantees
+since read-only memory that is supposed to be trusted can become writable
+or .text pages can get remapped. Memory sealing can automatically be
+applied by the runtime loader to seal .text and .rodata pages and
+applications can additionally seal security critical data at runtime.
+
+A similar feature already exists in the XNU kernel with the
+VM_FLAGS_PERMANENT flag [1] and on OpenBSD with the mimmutable syscall [2].
+
+User API
+========
+Two system calls are involved in virtual memory sealing, mseal() and mmap().
+
+mseal()
+-----------
+The mseal() syscall has the following signature:
+
+``int mseal(void addr, size_t len, unsigned long flags)``
+
+**addr/len**: virtual memory address range.
+
+The address range set by ``addr``/``len`` must meet:
+   - The start address must be in an allocated VMA.
+   - The start address must be page aligned.
+   - The end address (``addr`` + ``len``) must be in an allocated VMA.
+   - No gap (unallocated memory) between the start and end address.
+
+The ``len`` will be page aligned implicitly by the kernel.
+
+**flags**: reserved for future use.
+
+**return values**:
+
+- ``0``: Success.
+
+- ``-EINVAL``:
+    - Invalid input ``flags``.
+    - The start address (``addr``) is not page aligned.
+    - Address range (``addr`` + ``len``) overflow.
+
+- ``-ENOMEM``:
+    - The start address (``addr``) is not allocated.
+    - The end address (``addr`` + ``len``) is not allocated.
+    - A gap (unallocated memory) between start and end address.
+
+- ``-EACCES``:
+    - ``MAP_SEALABLE`` is not set during mmap().
+
+- ``-EPERM``:
+    - sealing is supported only on 64-bit CPUs, 32-bit is not supported.
+
+- For the above error cases, users can expect the given memory range to be
+  left unmodified, i.e. no partial update.
+
+- There might be other internal errors/cases not listed here, e.g. an
+  error during merging/splitting VMAs, or the process reaching the maximum
+  number of supported VMAs. In those cases, partial updates to the given
+  memory range could happen. However, those cases should be rare.
+
+**Blocked operations after sealing**:
+    Unmapping, moving to another location, and shrinking the size,
+    via munmap() and mremap(), can leave an empty space, therefore
+    can be replaced with a VMA with a new set of attributes.
+
+    Moving or expanding a different VMA into the current location,
+    via mremap().
+
+    Modifying a VMA via mmap(MAP_FIXED).
+
+    Size expansion, via mremap(), does not appear to pose any
+    specific risks to sealed VMAs. It is included anyway because
+    the use case is unclear. In any case, users can rely on
+    merging to expand a sealed VMA.
+
+    mprotect() and pkey_mprotect().
+
+    Some destructive madvise() behaviors (e.g. MADV_DONTNEED)
+    for anonymous memory, when users don't have write permission to the
+    memory. Those behaviors can alter region contents by discarding pages,
+    effectively a memset(0) for anonymous memory.
+
+    The kernel will return -EPERM for blocked operations.
+
+**Note**:
+
+- mseal() only works on 64-bit CPUs, not 32-bit CPUs.
+
+- users can call mseal() multiple times; calling mseal() on already sealed
+  memory is a no-op (not an error).
+
+- munseal() is not supported.
+
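+Below is a minimal usage sketch (illustrative only, not part of the kernel
+sources). It assumes the uapi headers from this patch series, which define
+``MAP_SEALABLE`` and ``__NR_mseal``; since libc does not provide a wrapper
+yet, the raw syscall is used, and error handling is kept to a minimum::
+
+    #include <errno.h>
+    #include <stdio.h>
+    #include <sys/mman.h>
+    #include <sys/syscall.h>
+    #include <unistd.h>
+
+    int main(void)
+    {
+            size_t len = 4UL * getpagesize();
+
+            /* The mapping must be created with MAP_SEALABLE to allow sealing. */
+            void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
+                           MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+            if (p == MAP_FAILED)
+                    return 1;
+
+            /* Fill the region as needed, then drop write permission. */
+            if (mprotect(p, len, PROT_READ))
+                    return 1;
+
+            /* Seal the range; flags must be 0. */
+            if (syscall(__NR_mseal, p, len, 0))
+                    return 1;
+
+            /* Blocked operations now fail with EPERM. */
+            if (munmap(p, len) == -1 && errno == EPERM)
+                    printf("munmap on sealed memory rejected\n");
+
+            return 0;
+    }
+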
+mmap()
+----------
+``void *mmap(void* addr, size_t length, int prot, int flags, int fd,
+off_t offset);``
+
+We add two changes to the ``prot`` and ``flags`` fields of mmap() related
+to memory sealing.
+
+**prot**
+
+The ``PROT_SEAL`` bit in ``prot`` field of mmap().
+
+When present, it marks the memory as sealed from creation.
+
+This is useful as an optimization because it avoids having to make two
+system calls: one for mmap() and one for mseal().
+
+It's worth noting that even though sealing is set via the ``prot``
+field of mmap(), it can't be set in the ``prot`` field of a later
+mprotect(). This is unlike the ``PROT_READ``, ``PROT_WRITE`` and
+``PROT_EXEC`` bits, where e.g. leaving ``PROT_WRITE`` out of mprotect()
+means that the region is not writable.
+
+Setting ``PROT_SEAL`` implies setting ``MAP_SEALABLE`` below.
+
+**flags**
+
+The ``MAP_SEALABLE`` bit in the ``flags`` field of mmap().
+
+When present, it marks the map as sealable. A map created
+without ``MAP_SEALABLE`` will not support sealing. In other words,
+mseal() will fail for such a map.
+
+
+Applications that don't care about sealing can expect their
+behavior to be unchanged. Those that need sealing support opt in
+by adding ``MAP_SEALABLE`` to their mmap() calls.
+
+Note: for a map created without ``MAP_SEALABLE`` or a map created
+with ``MAP_SEALABLE`` but not sealed yet, mmap(MAP_FIXED) can
+change the sealable or sealing bit.
+
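+For comparison, a mapping can also be created and sealed in a single step
+with ``PROT_SEAL``. The helper below is a hypothetical sketch (not part of
+any existing API) and assumes the uapi headers from this patch series that
+define ``PROT_SEAL``::
+
+    #include <sys/mman.h>
+
+    /*
+     * Allocate a read-only region that is sealed from creation,
+     * avoiding a separate mseal() call. PROT_SEAL implies
+     * MAP_SEALABLE, as described above.
+     */
+    static void *alloc_sealed_ro(size_t len)
+    {
+            return mmap(NULL, len, PROT_READ | PROT_SEAL,
+                        MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+    }
+
+Any later mprotect(), munmap() or mmap(MAP_FIXED) over the returned range
+is expected to fail with -EPERM.
+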
+Use Case:
+=========
+- glibc:
+  The dynamic linker, while loading ELF executables, can apply sealing to
+  non-writable memory segments.
+
+- Chrome browser: protect some security-sensitive data structures.
+
+Additional notes:
+=================
+As Jann Horn pointed out in [3], there are still a few ways to write
+to RO memory, which is, in a way, by design. Those cases are not covered
+by mseal(). If applications want to block such cases, sandboxing tools (such as
+seccomp, LSM, etc.) might be considered.
+
+Those cases are:
+
+- Write to read-only memory through /proc/self/mem interface.
+- Write to read-only memory through ptrace (such as PTRACE_POKETEXT).
+- userfaultfd.
+
+The idea that inspired this patch comes from Stephen Röttger’s work in V8
+CFI [4]. Chrome browser in ChromeOS will be the first user of this API.
+
+Reference:
+==========
+[1] https://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9f69dbd3c8c3fd30a/osfmk/mach/vm_statistics.h#L274
+
+[2] https://man.openbsd.org/mimmutable.2
+
+[3] https://lore.kernel.org/lkml/CAG48ez3ShUYey+ZAFsU2i1RpQn0a5eOs2hzQ426FkcgnfUGLvA@mail.gmail.com
+
+[4] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyXgeaRHo/edit#heading=h.bvaojj9fu6hc
-- 
2.43.0.429.g432eaa2c6b-goog



^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-22 15:28 [PATCH v7 0/4] Introduce mseal() jeffxu
                   ` (3 preceding siblings ...)
  2024-01-22 15:28 ` [PATCH v7 4/4] mseal:add documentation jeffxu
@ 2024-01-22 15:49 ` Theo de Raadt
  2024-01-22 22:10   ` Jeff Xu
  2024-01-29 22:36 ` Jonathan Corbet
  5 siblings, 1 reply; 23+ messages in thread
From: Theo de Raadt @ 2024-01-22 15:49 UTC (permalink / raw)
  To: jeffxu
  Cc: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap, jeffxu, jorgelo, groeck, linux-kernel,
	linux-kselftest, linux-mm, pedro.falcato, dave.hansen,
	linux-hardening

Regarding these pieces

> The PROT_SEAL bit in prot field of mmap(). When present, it marks
> the map sealed since creation.

OpenBSD won't be doing this.  I had PROT_IMMUTABLE as a draft.  In my
research I found basically zero circumstances where userland does
that.  The most common circumstance is you create a RW mapping, fill it,
and then change to a more restrictive mapping, and lock it.

There are a few regions in the address space that can be locked while RW.
For instance, the stack.  But the kernel does that, not userland.  I
found regions where the kernel wants to do this to the address space,
but there is no need to export useless functionality to userland.

OpenBSD now uses this for a high percent of the address space.  It might
be worth re-reading a description of the split of responsibility regarding
who locks different types of memory in a process;
- kernel (the majority, based upon what ELF layout tell us),
- shared library linker (the next majority, dealing with shared
  library mappings and left-overs not determinable at kernel time),
- libc (a small minority, mostly regarding forced mutable objects)
- and the applications themselves (only 1 application today)

    https://lwn.net/Articles/915662/

> The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> the map as sealable. A map created without MAP_SEALABLE will not support
> sealing, i.e. mseal() will fail.

We definitely won't be doing this.  We allow a process to lock any and all
its memory that isn't locked already, even if it means it is shooting
itself in the foot.

I think you are going to severely hurt the power of this mechanism,
because you won't be able to lock memory that has been allocated by a
different callsite not under your source-code control which lacks the
MAP_SEALABLE flag.  (Which is extremely common with the system-parts of
a process, meaning not just libc but kernel allocated objects).

It may be fine inside a program like chrome, but I expect that flag to make
it harder to use in libc, and it will hinder adoption.



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-22 15:49 ` [PATCH v7 0/4] Introduce mseal() Theo de Raadt
@ 2024-01-22 22:10   ` Jeff Xu
  2024-01-22 22:34     ` Theo de Raadt
  0 siblings, 1 reply; 23+ messages in thread
From: Jeff Xu @ 2024-01-22 22:10 UTC (permalink / raw)
  To: Theo de Raadt
  Cc: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap, jeffxu, jorgelo, groeck, linux-kernel,
	linux-kselftest, linux-mm, pedro.falcato, dave.hansen,
	linux-hardening

On Mon, Jan 22, 2024 at 7:49 AM Theo de Raadt <deraadt@openbsd.org> wrote:
>
> Regarding these pieces
>
> > The PROT_SEAL bit in prot field of mmap(). When present, it marks
> > the map sealed since creation.
>
> OpenBSD won't be doing this.  I had PROT_IMMUTABLE as a draft.  In my
> research I found basically zero circumstances when you userland does
> that.  The most common circumstance is you create a RW mapping, fill it,
> and then change to a more restrictve mapping, and lock it.
>
> There are a few regions in the addressspace that can be locked while RW.
> For instance, the stack.  But the kernel does that, not userland.  I
> found regions where the kernel wants to do this to the address space,
> but there is no need to export useless functionality to userland.
>
I have a feeling that most apps that need to use mmap() in their code
are likely using RW mappings. Adding sealing to mmap() could stop
those mappings from being executable. Of course, those apps would
need to change their code. We can't do it for them.

Also, I believe adding this to mmap() has no downsides, only
performance gain, as Pedro Falcato pointed out in [1].

[1] https://lore.kernel.org/lkml/CAKbZUD2A+=bp_sd+Q0Yif7NJqMu8p__eb4yguq0agEcmLH8SDQ@mail.gmail.com/

> OpenBSD now uses this for a high percent of the address space.  It might
> be worth re-reading a description of the split of responsibility regarding
> who locks different types of memory in a process;
> - kernel (the majority, based upon what ELF layout tell us),
> - shared library linker (the next majority, dealing with shared
>   library mappings and left-overs not determinable at kernel time),
> - libc (a small minority, mostly regarding forced mutable objects)
> - and the applications themselves (only 1 application today)
>
>     https://lwn.net/Articles/915662/
>
> > The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> > the map as sealable. A map created without MAP_SEALABLE will not support
> > sealing, i.e. mseal() will fail.
>
> We definately won't be doing this.  We allow a process to lock any and all
> it's memory that isn't locked already, even if it means it is shooting
> itself in the foot.
>
> I think you are going to severely hurt the power of this mechanism,
> because you won't be able to lock memory that has been allocated by a
> different callsite not under your source-code control which lacks the
> MAP_SEALABLE flag.  (Which is extremely common with the system-parts of
> a process, meaning not just libc but kernel allocated objects).
>
MAP_SEALABLE was an open discussion item called out on V3 [2] and V4 [3].

I acknowledge that additional coordination would be required if
a mapping were to be allocated by one software component and sealed in
another. However, this is feasible.

Considering the side effect of not having this flag (as discussed in
V3/V4) and the significant implications of altering the lifetime of
the mapping (since unmapping would not be possible), I believe it is
reasonable to expect developers to exercise additional care and
caution when utilizing memory sealing.

[2] https://lore.kernel.org/linux-mm/20231212231706.2680890-2-jeffxu@chromium.org/
[3] https://lore.kernel.org/all/20240104185138.169307-1-jeffxu@chromium.org/

> It may be fine inside a program like chrome, but I expect that flag to make
> it harder to use in libc, and it will hinder adoption.
>
In the case of glibc and Linux, as stated in the cover letter, Stephen
is working on a change to glibc to add sealing support to the dynamic
linker, and I also plan to make the necessary code changes in the Linux kernel.


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-22 22:10   ` Jeff Xu
@ 2024-01-22 22:34     ` Theo de Raadt
  2024-01-23 17:33       ` Liam R. Howlett
  2024-01-24 18:55       ` Jeff Xu
  0 siblings, 2 replies; 23+ messages in thread
From: Theo de Raadt @ 2024-01-22 22:34 UTC (permalink / raw)
  To: Jeff Xu
  Cc: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap, jeffxu, jorgelo, groeck, linux-kernel,
	linux-kselftest, linux-mm, pedro.falcato, dave.hansen,
	linux-hardening

Jeff Xu <jeffxu@chromium.org> wrote:

> On Mon, Jan 22, 2024 at 7:49 AM Theo de Raadt <deraadt@openbsd.org> wrote:
> >
> > Regarding these pieces
> >
> > > The PROT_SEAL bit in prot field of mmap(). When present, it marks
> > > the map sealed since creation.
> >
> > OpenBSD won't be doing this.  I had PROT_IMMUTABLE as a draft.  In my
> > research I found basically zero circumstances when you userland does
> > that.  The most common circumstance is you create a RW mapping, fill it,
> > and then change to a more restrictve mapping, and lock it.
> >
> > There are a few regions in the addressspace that can be locked while RW.
> > For instance, the stack.  But the kernel does that, not userland.  I
> > found regions where the kernel wants to do this to the address space,
> > but there is no need to export useless functionality to userland.
> >
> I have a feeling that most apps that need to use mmap() in their code
> are likely using RW mappings. Adding sealing to mmap() could stop
> those mappings from being executable. Of course, those apps would
> need to change their code. We can't do it for them.

I don't have a feeling about it.

I spent a year engineering a complete system which exercises the maximum
amount of memory you can lock.

I saw nothing like what you are describing.  I had PROT_IMMUTABLE in my
drafts, and saw it turning into a dangerous anti-pattern.

> Also, I believe adding this to mmap() has no downsides, only
> performance gain, as Pedro Falcato pointed out in [1].
> 
> [1] https://lore.kernel.org/lkml/CAKbZUD2A+=bp_sd+Q0Yif7NJqMu8p__eb4yguq0agEcmLH8SDQ@mail.gmail.com/

Are you joking?  You don't have any code doing that today.  More feelings?

OpenBSD userland has zero places it can use mmap() MAP_IMMUTABLE.

It has two places where it has mprotect() + mimmutable() adjacent to each
other, two codepaths for late mprotect() of RELRO, and then make the RELRO
immutable.

I think this idea is a premature optimization, and intentionally incompatible.

Like I say, I had a similar MAP_ flag for mprotect() and mmap() in my
development trees, and I recognized it was pointless, distracting developers
into the wrong patterns, and I threw it out.

> > OpenBSD now uses this for a high percent of the address space.  It might
> > be worth re-reading a description of the split of responsibility regarding
> > who locks different types of memory in a process;
> > - kernel (the majority, based upon what ELF layout tell us),
> > - shared library linker (the next majority, dealing with shared
> >   library mappings and left-overs not determinable at kernel time),
> > - libc (a small minority, mostly regarding forced mutable objects)
> > - and the applications themselves (only 1 application today)
> >
> >     https://lwn.net/Articles/915662/
> >
> > > The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> > > the map as sealable. A map created without MAP_SEALABLE will not support
> > > sealing, i.e. mseal() will fail.
> >
> > We definately won't be doing this.  We allow a process to lock any and all
> > it's memory that isn't locked already, even if it means it is shooting
> > itself in the foot.
> >
> > I think you are going to severely hurt the power of this mechanism,
> > because you won't be able to lock memory that has been allocated by a
> > different callsite not under your source-code control which lacks the
> > MAP_SEALABLE flag.  (Which is extremely common with the system-parts of
> > a process, meaning not just libc but kernel allocated objects).
> >
> MAP_SEALABLE was an open discussion item called out on V3 [2] and V4 [3].
> 
> I acknowledge that additional coordination would be required if
> mapping were to be allocated by one software component and sealed in
> another. However, this is feasible.
> 
> Considering the side effect of not having this flag (as discussed in
> V3/V4) and the significant implications of altering the lifetime of
> the mapping (since unmapping would not be possible), I believe it is
> reasonable to expect developers to exercise additional care and
> caution when utilizing memory sealing.
>
> [2] https://lore.kernel.org/linux-mm/20231212231706.2680890-2-jeffxu@chromium.org/
> [3] https://lore.kernel.org/all/20240104185138.169307-1-jeffxu@chromium.org/

I disagree *strongly*.  Developers need to exercise additional care on
memory, period.  Memory sealing issues are the least of their worries.

(Except for handling RELRO, but only the ld.so developers will lose
their hair).


OK, so mseal and mimmutable are very different.

mimmutable can be used by any developer on the address space easily.

mseal requires control of the whole stack between allocation and consumption.

I'm sorry, but I don't think you understand how dangerous this MAP_SEALABLE
proposal is because of the difficulties it will create for use.

The immutable memory management we have today in OpenBSD would be completely
impossible with such a flag.  Separation between allocator (that doesn't know
what is going to happen), and consumer (that does know), is completely common
in the systems environment (meaning the interaction between DSO, libc, other
libraries, and the underside of applications).

This is not like an application where you can simply sprinkle the flag
into the mmap() calls that cause you problems.  That mmap() call is now in
someone else's code, and you CANNOT gain security advantage unless you
convince them to gain an understanding of what that flag means -- and it is
a flag that other Linux variants don't have, not even in their #include
files.

> > It may be fine inside a program like chrome, but I expect that flag to make
> > it harder to use in libc, and it will hinder adoption.
> >
> In the case of glibc and linux, as stated in the cover letter, Stephen
> is working on a change to glibc to add sealing support to the dynamic
> linker,  also I plan to make necessary code changes in the linux kernel.

How will ld.so seal memory which the kernel mapped?  The kernel will now
automatically put MAP_SEALABLE on the text segment and stack?  Why not
put it on all mmap() allocations?  Why not just skip the flag entirely?

To me, this is all very bizarre.

I don't understand what the MAP_SEALABLE flag is trying to solve.


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-22 22:34     ` Theo de Raadt
@ 2024-01-23 17:33       ` Liam R. Howlett
  2024-01-23 18:58         ` Theo de Raadt
  2024-01-24 18:55       ` Jeff Xu
  1 sibling, 1 reply; 23+ messages in thread
From: Liam R. Howlett @ 2024-01-23 17:33 UTC (permalink / raw)
  To: Theo de Raadt
  Cc: Jeff Xu, akpm, keescook, jannh, sroettger, willy, gregkh,
	torvalds, usama.anjum, rdunlap, jeffxu, jorgelo, groeck,
	linux-kernel, linux-kselftest, linux-mm, pedro.falcato,
	dave.hansen, linux-hardening

* Theo de Raadt <deraadt@openbsd.org> [240122 17:35]:
> Jeff Xu <jeffxu@chromium.org> wrote:
> 
> > On Mon, Jan 22, 2024 at 7:49 AM Theo de Raadt <deraadt@openbsd.org> wrote:
> > >
> > > Regarding these pieces
> > >
> > > > The PROT_SEAL bit in prot field of mmap(). When present, it marks
> > > > the map sealed since creation.
> > >
> > > OpenBSD won't be doing this.  I had PROT_IMMUTABLE as a draft.  In my
> > > research I found basically zero circumstances when you userland does
> > > that.  The most common circumstance is you create a RW mapping, fill it,
> > > and then change to a more restrictve mapping, and lock it.
> > >
> > > There are a few regions in the addressspace that can be locked while RW.
> > > For instance, the stack.  But the kernel does that, not userland.  I
> > > found regions where the kernel wants to do this to the address space,
> > > but there is no need to export useless functionality to userland.
> > >
> > I have a feeling that most apps that need to use mmap() in their code
> > are likely using RW mappings. Adding sealing to mmap() could stop
> > those mappings from being executable. Of course, those apps would
> > need to change their code. We can't do it for them.
> 
> I don't have a feeling about it.
> 
> I spent a year engineering a complete system which exercises the maximum
> amount of memory you can lock.
> 
> I saw nothing like what you are describing.  I had PROT_IMMUTABLE in my
> drafts, and saw it turning into a dangerous anti-pattern.
> 
> > Also, I believe adding this to mmap() has no downsides, only
> > performance gain, as Pedro Falcato pointed out in [1].
> > 
> > [1] https://lore.kernel.org/lkml/CAKbZUD2A+=bp_sd+Q0Yif7NJqMu8p__eb4yguq0agEcmLH8SDQ@mail.gmail.com/
> 
> Are you joking?  You don't have any code doing that today.  More feelings?

The "no downside" is in combining the two calls together, mmap() & mseal();
at least that is how I read the linked discussion.

The common case (since there are no users today) of just calling
mmap()/munmap() will have the downside.

There will be a performance impact once you have can_modify_mm() doing
more than just returning true.  Certainly, the impact will be larger
in munmap where multiple VMAs may need to be checked (assuming that's
the plan?).

This will require a new and earlier walk of the vma tree while holding
the mmap_lock.  Since you are checking (potentially multiple) VMAs for
something, I don't think there is a way around holding the lock.

I'm not saying the cost will be large, but it will be a positive
non-zero number.

Thanks,
Liam


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 2/4] mseal: add mseal syscall
  2024-01-22 15:28 ` [PATCH v7 2/4] mseal: add " jeffxu
@ 2024-01-23 18:14   ` Liam R. Howlett
  2024-01-24 17:50     ` Jeff Xu
  0 siblings, 1 reply; 23+ messages in thread
From: Liam R. Howlett @ 2024-01-23 18:14 UTC (permalink / raw)
  To: jeffxu
  Cc: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap, jeffxu, jorgelo, groeck, linux-kernel,
	linux-kselftest, linux-mm, pedro.falcato, dave.hansen,
	linux-hardening, deraadt

* jeffxu@chromium.org <jeffxu@chromium.org> [240122 10:29]:
> From: Jeff Xu <jeffxu@chromium.org>
> 
> The new mseal() is an syscall on 64 bit CPU, and with
> following signature:
> 
> int mseal(void addr, size_t len, unsigned long flags)
> addr/len: memory range.
> flags: reserved.
> 
> mseal() blocks following operations for the given memory range.
> 
> 1> Unmapping, moving to another location, and shrinking the size,
>    via munmap() and mremap(), can leave an empty space, therefore can
>    be replaced with a VMA with a new set of attributes.
> 
> 2> Moving or expanding a different VMA into the current location,
>    via mremap().
> 
> 3> Modifying a VMA via mmap(MAP_FIXED).
> 
> 4> Size expansion, via mremap(), does not appear to pose any specific
>    risks to sealed VMAs. It is included anyway because the use case is
>    unclear. In any case, users can rely on merging to expand a sealed VMA.
> 
> 5> mprotect() and pkey_mprotect().
> 
> 6> Some destructive madvice() behaviors (e.g. MADV_DONTNEED) for anonymous
>    memory, when users don't have write permission to the memory. Those
>    behaviors can alter region contents by discarding pages, effectively a
>    memset(0) for anonymous memory.
> 
> In addition: mmap() has two related changes.
> 
> The PROT_SEAL bit in prot field of mmap(). When present, it marks
> the map sealed since creation.
> 
> The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> the map as sealable. A map created without MAP_SEALABLE will not support
> sealing, i.e. mseal() will fail.
> 
> Applications that don't care about sealing will expect their behavior
> unchanged. For those that need sealing support, opt-in by adding
> MAP_SEALABLE in mmap().
> 
> I would like to formally acknowledge the valuable contributions
> received during the RFC process, which were instrumental
> in shaping this patch:
> 
> Jann Horn: raising awareness and providing valuable insights on the
> destructive madvise operations.
> Linus Torvalds: assisting in defining system call signature and scope.
> Pedro Falcato: suggesting sealing in the mmap().
> Theo de Raadt: sharing the experiences and insights gained from
> implementing mimmutable() in OpenBSD.
> 
> Finally, the idea that inspired this patch comes from Stephen Röttger’s
> work in Chrome V8 CFI.
> 
> Signed-off-by: Jeff Xu <jeffxu@chromium.org>
> ---
>  include/linux/mm.h                     |  48 ++++
>  include/linux/syscalls.h               |   1 +
>  include/uapi/asm-generic/mman-common.h |   8 +
>  mm/Makefile                            |   4 +
>  mm/madvise.c                           |  12 +
>  mm/mmap.c                              |  27 ++
>  mm/mprotect.c                          |  10 +
>  mm/mremap.c                            |  31 +++
>  mm/mseal.c                             | 343 +++++++++++++++++++++++++
>  9 files changed, 484 insertions(+)
>  create mode 100644 mm/mseal.c
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index f5a97dec5169..bdd9a53e9291 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h

None of this can live in mm/internal.h ?

> @@ -328,6 +328,14 @@ extern unsigned int kobjsize(const void *objp);
>  #define VM_HIGH_ARCH_5	BIT(VM_HIGH_ARCH_BIT_5)
>  #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
>  
> +#ifdef CONFIG_64BIT
> +/* VM is sealable, in vm_flags */
> +#define VM_SEALABLE	_BITUL(63)
> +
> +/* VM is sealed, in vm_flags */
> +#define VM_SEALED	_BITUL(62)
> +#endif
> +
>  #ifdef CONFIG_ARCH_HAS_PKEYS
>  # define VM_PKEY_SHIFT	VM_HIGH_ARCH_BIT_0
>  # define VM_PKEY_BIT0	VM_HIGH_ARCH_0	/* A protection key is a 4-bit value */
> @@ -4182,4 +4190,44 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
>  	return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
>  }
>  
> +#ifdef CONFIG_64BIT
> +static inline int can_do_mseal(unsigned long flags)
> +{
> +	if (flags)
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +bool can_modify_mm(struct mm_struct *mm, unsigned long start,
> +		unsigned long end);
> +bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
> +		unsigned long end, int behavior);
> +unsigned long get_mmap_seals(unsigned long prot,
> +		unsigned long flags);
> +#else
> +static inline int can_do_mseal(unsigned long flags)
> +{
> +	return -EPERM;
> +}
> +
> +static inline bool can_modify_mm(struct mm_struct *mm, unsigned long start,
> +		unsigned long end)
> +{
> +	return true;
> +}
> +
> +static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
> +		unsigned long end, int behavior)
> +{
> +	return true;
> +}
> +
> +static inline unsigned long get_mmap_seals(unsigned long prot,
> +	unsigned long flags)
> +{
> +	return 0;
> +}
> +#endif
> +
>  #endif /* _LINUX_MM_H */

...

> diff --git a/mm/mmap.c b/mm/mmap.c
> index b78e83d351d2..32bc2179aed0 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1213,6 +1213,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
>  {
>  	struct mm_struct *mm = current->mm;
>  	int pkey = 0;
> +	unsigned long vm_seals;
>  
>  	*populate = 0;
>  
> @@ -1233,6 +1234,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
>  	if (flags & MAP_FIXED_NOREPLACE)
>  		flags |= MAP_FIXED;
>  
> +	vm_seals = get_mmap_seals(prot, flags);
> +
>  	if (!(flags & MAP_FIXED))
>  		addr = round_hint_to_min(addr);
>  
> @@ -1261,6 +1264,13 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
>  			return -EEXIST;
>  	}
>  
> +	/*
> +	 * Check if the address range is sealed for do_mmap().
> +	 * can_modify_mm assumes we have acquired the lock on MM.
> +	 */
> +	if (!can_modify_mm(mm, addr, addr + len))
> +		return -EPERM;
> +

This is called after get_unmapped_area(), so this area is either going
to be MAP_FIXED and return the "hint" addr or it's going to be empty.
You can probably avoid walking the VMAs in the non-FIXED case.  This
would remove the overhead of your check in the most common case.
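
A rough sketch of that (just a sketch; it assumes the check only matters
when an existing, possibly sealed, range can be replaced):

	/* Only MAP_FIXED requests can land on top of an existing mapping. */
	if ((flags & MAP_FIXED) && !can_modify_mm(mm, addr, addr + len))
		return -EPERM;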

>  	if (prot == PROT_EXEC) {
>  		pkey = execute_only_pkey(mm);
>  		if (pkey < 0)
> @@ -1376,6 +1386,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
>  			vm_flags |= VM_NORESERVE;
>  	}
>  
> +	vm_flags |= vm_seals;
>  	addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
>  	if (!IS_ERR_VALUE(addr) &&
>  	    ((vm_flags & VM_LOCKED) ||
> @@ -2679,6 +2690,14 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
>  	if (end == start)
>  		return -EINVAL;
>  
> +	/*
> +	 * Check if memory is sealed before arch_unmap.
> +	 * Prevent unmapping a sealed VMA.
> +	 * can_modify_mm assumes we have acquired the lock on MM.
> +	 */
> +	if (!can_modify_mm(mm, start, end))
> +		return -EPERM;
> +

This function is currently called from mmap_region(), so we are going to
run this check twice as you have it; once in do_mmap() then again in
mmap_region() -> do_vmi_munmap().  This effectively doubles your impact
to MAP_FIXED calls.

>  	 /* arch_unmap() might do unmaps itself.  */
>  	arch_unmap(mm, start, end);
>  
> @@ -3102,6 +3121,14 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
>  {
>  	struct mm_struct *mm = vma->vm_mm;
>  
> +	/*
> +	 * Check if memory is sealed before arch_unmap.
> +	 * Prevent unmapping a sealed VMA.
> +	 * can_modify_mm assumes we have acquired the lock on MM.
> +	 */
> +	if (!can_modify_mm(mm, start, end))
> +		return -EPERM;
> +

I am sure you've looked at the callers, from what I found there are two:

The brk call uses this function, so it may check more than one VMA in
that path.  Will the brk VMAs potentially be msealed?  I guess someone
could do that?

The other place this is used is in ipc/shm.c where the start/end is just
the vma start/end, so we only really need to check that one vma.

Is there a way to avoid walking the tree for the single known VMA?  Does
it make sense to deny mseal writing to brk VMAs?


>  	arch_unmap(mm, start, end);
>  	return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
>  }

...


Ah, I see them now.  Yes, this is what I expected to see.  Does this not
have any impact on mmap/munmap benchmarks?

> +bool can_modify_mm(struct mm_struct *mm, unsigned long start, unsigned long end)
> +{
> +	struct vm_area_struct *vma;
> +
> +	VMA_ITERATOR(vmi, mm, start);
> +
> +	/* going through each vma to check. */
> +	for_each_vma_range(vmi, vma, end) {
> +		if (!can_modify_vma(vma))
> +			return false;
> +	}
> +
> +	/* Allow by default. */
> +	return true;
> +}
> +
> +/*
> + * Check if the vmas of a memory range are allowed to be modified by madvise.
> + * the memory ranger can have a gap (unallocated memory).
> + * return true, if it is allowed.
> + */
> +bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long end,
> +		int behavior)
> +{
> +	struct vm_area_struct *vma;
> +
> +	VMA_ITERATOR(vmi, mm, start);
> +
> +	if (!is_madv_discard(behavior))
> +		return true;
> +
> +	/* going through each vma to check. */
> +	for_each_vma_range(vmi, vma, end)
> +		if (is_ro_anon(vma) && !can_modify_vma(vma))
> +			return false;
> +
> +	/* Allow by default. */
> +	return true;
> +}
> +

...

> +static int check_mm_seal(unsigned long start, unsigned long end)
> +{
> +	struct vm_area_struct *vma;
> +	unsigned long nstart = start;
> +
> +	VMA_ITERATOR(vmi, current->mm, start);
> +
> +	/* going through each vma to check. */
> +	for_each_vma_range(vmi, vma, end) {
> +		if (vma->vm_start > nstart)
> +			/* unallocated memory found. */
> +			return -ENOMEM;

Ah, another potential user for a contiguous iterator of VMAs.

> +
> +		if (!can_add_vma_seal(vma))
> +			return -EACCES;
> +
> +		if (vma->vm_end >= end)
> +			return 0;
> +
> +		nstart = vma->vm_end;
> +	}
> +
> +	return -ENOMEM;
> +}
> +
> +/*
> + * Apply sealing.
> + */
> +static int apply_mm_seal(unsigned long start, unsigned long end)
> +{
> +	unsigned long nstart;
> +	struct vm_area_struct *vma, *prev;
> +
> +	VMA_ITERATOR(vmi, current->mm, start);
> +
> +	vma = vma_iter_load(&vmi);
> +	/*
> +	 * Note: check_mm_seal should already checked ENOMEM case.
> +	 * so vma should not be null, same for the other ENOMEM cases.

The start to end is contiguous, right?

> +	 */
> +	prev = vma_prev(&vmi);
> +	if (start > vma->vm_start)
> +		prev = vma;
> +
> +	nstart = start;
> +	for_each_vma_range(vmi, vma, end) {
> +		int error;
> +		unsigned long tmp;
> +		vm_flags_t newflags;
> +
> +		newflags = vma->vm_flags | VM_SEALED;
> +		tmp = vma->vm_end;
> +		if (tmp > end)
> +			tmp = end;
> +		error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
> +		if (error)
> +			return error;

> +		tmp = vma_iter_end(&vmi);
> +		nstart = tmp;

You set tmp before using it unconditionally to vma->vm_end above, so you
can set nstart = vma_iter_end(&vmi) here.  But, also we know the
VMAs are contiguous from your check_mm_seal() call, so we know nstart ==
vma->vm_start on the next loop.
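
For illustration, a sketch of the simplified loop under that assumption
(the contiguity guarantee comes from check_mm_seal(); min() is the kernel
macro):

	nstart = start;
	for_each_vma_range(vmi, vma, end) {
		vm_flags_t newflags = vma->vm_flags | VM_SEALED;
		unsigned long tmp = min(vma->vm_end, end);
		int error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);

		if (error)
			return error;
		/* Contiguous range, so this equals the next vma->vm_start. */
		nstart = vma_iter_end(&vmi);
	}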

...


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-23 17:33       ` Liam R. Howlett
@ 2024-01-23 18:58         ` Theo de Raadt
  2024-01-24 18:56           ` Jeff Xu
  0 siblings, 1 reply; 23+ messages in thread
From: Theo de Raadt @ 2024-01-23 18:58 UTC (permalink / raw)
  To: Liam R. Howlett, Jeff Xu, akpm, keescook, jannh, sroettger,
	willy, gregkh, torvalds, usama.anjum, rdunlap, jeffxu, jorgelo,
	groeck, linux-kernel, linux-kselftest, linux-mm, pedro.falcato,
	dave.hansen, linux-hardening

Liam R. Howlett <Liam.Howlett@Oracle.com> wrote:

> * Theo de Raadt <deraadt@openbsd.org> [240122 17:35]:
> > Jeff Xu <jeffxu@chromium.org> wrote:
> > 
> > > On Mon, Jan 22, 2024 at 7:49 AM Theo de Raadt <deraadt@openbsd.org> wrote:
> > > >
> > > > Regarding these pieces
> > > >
> > > > > The PROT_SEAL bit in prot field of mmap(). When present, it marks
> > > > > the map sealed since creation.
> > > >
> > > > OpenBSD won't be doing this.  I had PROT_IMMUTABLE as a draft.  In my
> > > > research I found basically zero circumstances when you userland does
> > > > that.  The most common circumstance is you create a RW mapping, fill it,
> > > > and then change to a more restrictve mapping, and lock it.
> > > >
> > > > There are a few regions in the addressspace that can be locked while RW.
> > > > For instance, the stack.  But the kernel does that, not userland.  I
> > > > found regions where the kernel wants to do this to the address space,
> > > > but there is no need to export useless functionality to userland.
> > > >
> > > I have a feeling that most apps that need to use mmap() in their code
> > > are likely using RW mappings. Adding sealing to mmap() could stop
> > > those mappings from being executable. Of course, those apps would
> > > need to change their code. We can't do it for them.
> > 
> > I don't have a feeling about it.
> > 
> > I spent a year engineering a complete system which exercises the maximum
> > amount of memory you can lock.
> > 
> > I saw nothing like what you are describing.  I had PROT_IMMUTABLE in my
> > drafts, and saw it turning into a dangerous anti-pattern.
> > 
> > > Also, I believe adding this to mmap() has no downsides, only
> > > performance gain, as Pedro Falcato pointed out in [1].
> > > 
> > > [1] https://lore.kernel.org/lkml/CAKbZUD2A+=bp_sd+Q0Yif7NJqMu8p__eb4yguq0agEcmLH8SDQ@mail.gmail.com/
> > 
> > Are you joking?  You don't have any code doing that today.  More feelings?
> 
> The 'no downside" is to combining two calls together; mmap() & mseal(),
> at least that is how I read the linked discussion.
> 
> The common case (since there are no users today) of just calling
> mmap()/munmap() will have the downside.
> 
> There will be a performance impact once you have can_modify_mm() doing
> more than just returning true.  Certainly, the impact will be larger
> in munmap where multiple VMAs may need to be checked (assuming that's
> the plan?).
> 
> This will require a new and earlier walk of the vma tree while holding
> the mmap_lock.  Since you are checking (potentially multiple) VMAs for
> something, I don't think there is a way around holding the lock.
> 
> I'm not saying the cost will be large, but it will be a positive
> non-zero number.

For future glibc changes, I predict you will have zero cases where you
can call mmap+immutable or mprotect+immutable; I say so because I ended
up having none.  You always have to fill the memory.  (At first glance
you might think it works for a new DSO's BSS, but RELRO overlaps it, and
since RELRO mprotect happens quite late, the permission locking is quite
delayed relative to the allocation).

I think Chrome also won't lock memory at allocation.  I suspect the
generic allocator is quite separate from the code using the allocation,
which knows which objects can have their permissions locked and which
objects can't.

In OpenBSD, the only cases where we could set immutable at the same time
as creating the mapping was in execve, for a new process's stack regions,
and that is kernel code, not the userland exposed system call APIs.
 
This change could skip adding PROT_SEAL today, and add it later when
there are facts showing the need.


It's the same with MAP_MSEALABLE.  I don't get it. So now there are 3
memory types:
       - cannot be sealed, ever
       - not yet sealed
       - sealed

What purpose does the first type serve?  Please explain the use case.

Today, processes have control over their entire address space.

What is the purpose of "permissions cannot be locked"?  Please supply
an example.  If I am wrong, I'd like to know where I went wrong.



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 2/4] mseal: add mseal syscall
  2024-01-23 18:14   ` Liam R. Howlett
@ 2024-01-24 17:50     ` Jeff Xu
  2024-01-24 20:06       ` Liam R. Howlett
  0 siblings, 1 reply; 23+ messages in thread
From: Jeff Xu @ 2024-01-24 17:50 UTC (permalink / raw)
  To: Liam R. Howlett, jeffxu, akpm, keescook, jannh, sroettger, willy,
	gregkh, torvalds, usama.anjum, rdunlap, jeffxu, jorgelo, groeck,
	linux-kernel, linux-kselftest, linux-mm, pedro.falcato,
	dave.hansen, linux-hardening, deraadt

On Tue, Jan 23, 2024 at 10:15 AM Liam R. Howlett
<Liam.Howlett@oracle.com> wrote:
>
> * jeffxu@chromium.org <jeffxu@chromium.org> [240122 10:29]:
> > From: Jeff Xu <jeffxu@chromium.org>
> >
> > The new mseal() is an syscall on 64 bit CPU, and with
> > following signature:
> >
> > int mseal(void addr, size_t len, unsigned long flags)
> > addr/len: memory range.
> > flags: reserved.
> >
> > mseal() blocks following operations for the given memory range.
> >
> > 1> Unmapping, moving to another location, and shrinking the size,
> >    via munmap() and mremap(), can leave an empty space, therefore can
> >    be replaced with a VMA with a new set of attributes.
> >
> > 2> Moving or expanding a different VMA into the current location,
> >    via mremap().
> >
> > 3> Modifying a VMA via mmap(MAP_FIXED).
> >
> > 4> Size expansion, via mremap(), does not appear to pose any specific
> >    risks to sealed VMAs. It is included anyway because the use case is
> >    unclear. In any case, users can rely on merging to expand a sealed VMA.
> >
> > 5> mprotect() and pkey_mprotect().
> >
> > 6> Some destructive madvice() behaviors (e.g. MADV_DONTNEED) for anonymous
> >    memory, when users don't have write permission to the memory. Those
> >    behaviors can alter region contents by discarding pages, effectively a
> >    memset(0) for anonymous memory.
> >
> > In addition: mmap() has two related changes.
> >
> > The PROT_SEAL bit in prot field of mmap(). When present, it marks
> > the map sealed since creation.
> >
> > The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> > the map as sealable. A map created without MAP_SEALABLE will not support
> > sealing, i.e. mseal() will fail.
> >
> > Applications that don't care about sealing will expect their behavior
> > unchanged. For those that need sealing support, opt-in by adding
> > MAP_SEALABLE in mmap().
> >
> > I would like to formally acknowledge the valuable contributions
> > received during the RFC process, which were instrumental
> > in shaping this patch:
> >
> > Jann Horn: raising awareness and providing valuable insights on the
> > destructive madvise operations.
> > Linus Torvalds: assisting in defining system call signature and scope.
> > Pedro Falcato: suggesting sealing in the mmap().
> > Theo de Raadt: sharing the experiences and insights gained from
> > implementing mimmutable() in OpenBSD.
> >
> > Finally, the idea that inspired this patch comes from Stephen Röttger’s
> > work in Chrome V8 CFI.
> >
> > Signed-off-by: Jeff Xu <jeffxu@chromium.org>
> > ---
> >  include/linux/mm.h                     |  48 ++++
> >  include/linux/syscalls.h               |   1 +
> >  include/uapi/asm-generic/mman-common.h |   8 +
> >  mm/Makefile                            |   4 +
> >  mm/madvise.c                           |  12 +
> >  mm/mmap.c                              |  27 ++
> >  mm/mprotect.c                          |  10 +
> >  mm/mremap.c                            |  31 +++
> >  mm/mseal.c                             | 343 +++++++++++++++++++++++++
> >  9 files changed, 484 insertions(+)
> >  create mode 100644 mm/mseal.c
> >
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index f5a97dec5169..bdd9a53e9291 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
>
> None of this can live in mm/internal.h ?
>
Will move. Thanks.


> > @@ -328,6 +328,14 @@ extern unsigned int kobjsize(const void *objp);
> >  #define VM_HIGH_ARCH_5       BIT(VM_HIGH_ARCH_BIT_5)
> >  #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
> >
> > +#ifdef CONFIG_64BIT
> > +/* VM is sealable, in vm_flags */
> > +#define VM_SEALABLE  _BITUL(63)
> > +
> > +/* VM is sealed, in vm_flags */
> > +#define VM_SEALED    _BITUL(62)
> > +#endif
> > +
> >  #ifdef CONFIG_ARCH_HAS_PKEYS
> >  # define VM_PKEY_SHIFT       VM_HIGH_ARCH_BIT_0
> >  # define VM_PKEY_BIT0        VM_HIGH_ARCH_0  /* A protection key is a 4-bit value */
> > @@ -4182,4 +4190,44 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
> >       return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
> >  }
> >
> > +#ifdef CONFIG_64BIT
> > +static inline int can_do_mseal(unsigned long flags)
> > +{
> > +     if (flags)
> > +             return -EINVAL;
> > +
> > +     return 0;
> > +}
> > +
> > +bool can_modify_mm(struct mm_struct *mm, unsigned long start,
> > +             unsigned long end);
> > +bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
> > +             unsigned long end, int behavior);
> > +unsigned long get_mmap_seals(unsigned long prot,
> > +             unsigned long flags);
> > +#else
> > +static inline int can_do_mseal(unsigned long flags)
> > +{
> > +     return -EPERM;
> > +}
> > +
> > +static inline bool can_modify_mm(struct mm_struct *mm, unsigned long start,
> > +             unsigned long end)
> > +{
> > +     return true;
> > +}
> > +
> > +static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
> > +             unsigned long end, int behavior)
> > +{
> > +     return true;
> > +}
> > +
> > +static inline unsigned long get_mmap_seals(unsigned long prot,
> > +     unsigned long flags)
> > +{
> > +     return 0;
> > +}
> > +#endif
> > +
> >  #endif /* _LINUX_MM_H */
>
> ...
>
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index b78e83d351d2..32bc2179aed0 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -1213,6 +1213,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> >  {
> >       struct mm_struct *mm = current->mm;
> >       int pkey = 0;
> > +     unsigned long vm_seals;
> >
> >       *populate = 0;
> >
> > @@ -1233,6 +1234,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> >       if (flags & MAP_FIXED_NOREPLACE)
> >               flags |= MAP_FIXED;
> >
> > +     vm_seals = get_mmap_seals(prot, flags);
> > +
> >       if (!(flags & MAP_FIXED))
> >               addr = round_hint_to_min(addr);
> >
> > @@ -1261,6 +1264,13 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> >                       return -EEXIST;
> >       }
> >
> > +     /*
> > +      * Check if the address range is sealed for do_mmap().
> > +      * can_modify_mm assumes we have acquired the lock on MM.
> > +      */
> > +     if (!can_modify_mm(mm, addr, addr + len))
> > +             return -EPERM;
> > +
>
> This is called after get_unmapped_area(), so this area is either going
> to be MAP_FIXED and return the "hint" addr or it's going to be empty.
> You can probably avoid walking the VMAs in the non-FIXED case.  This
> would remove the overhead of your check in the most common case.
>

Thanks for flagging this!

I wasn't entirely sure about get_unmapped_area() after reading the
code; it calls a few variants of the arch_get_unmapped_area_xxx()
functions.

For example, it seems that generic_get_unmapped_area_topdown() returns
a non-null address even when MAP_FIXED is not set:

----------------------------------------------------------------------------
generic_get_unmapped_area_topdown(
...
	if (flags & MAP_FIXED)			/* <-- MAP_FIXED case. */
		return addr;

	/* requesting a specific address */
	if (addr) {				/* <-- note: not MAP_FIXED */
		addr = PAGE_ALIGN(addr);
		vma = find_vma_prev(mm, addr, &prev);
		if (mmap_end - len >= addr && addr >= mmap_min_addr &&
		    (!vma || addr + len <= vm_start_gap(vma)) &&
		    (!prev || addr >= vm_end_gap(prev)))
			return addr;		/* <-- note: returns a non-null addr here. */
	}

----------------------------------------------------------------------------
I also thought about adding a check for addr != 0 instead, i.e.

	if (addr && !can_modify_mm(mm, addr, addr + len))
		return -EPERM;

But using MAP_FIXED to allocate memory at address 0 is legitimate, e.g.
allocating a PROT_NONE | PROT_SEAL mapping at address 0.

Another factor to consider: what is the cost of passing a zero address
into can_modify_mm()?  The search would then cover the range 0 to len.

> >       if (prot == PROT_EXEC) {
> >               pkey = execute_only_pkey(mm);
> >               if (pkey < 0)
> > @@ -1376,6 +1386,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> >                       vm_flags |= VM_NORESERVE;
> >       }
> >
> > +     vm_flags |= vm_seals;
> >       addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
> >       if (!IS_ERR_VALUE(addr) &&
> >           ((vm_flags & VM_LOCKED) ||
> > @@ -2679,6 +2690,14 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
> >       if (end == start)
> >               return -EINVAL;
> >
> > +     /*
> > +      * Check if memory is sealed before arch_unmap.
> > +      * Prevent unmapping a sealed VMA.
> > +      * can_modify_mm assumes we have acquired the lock on MM.
> > +      */
> > +     if (!can_modify_mm(mm, start, end))
> > +             return -EPERM;
> > +
>
> This function is currently called from mmap_region(), so we are going to
> run this check twice as you have it; once in do_mmap() then again in
> mma_region() -> do_vmi_munmap().  This effectively doubles your impact
> to MAP_FIXED calls.
>
Yes. Addressing this would require a new flag in do_vmi_munmap():
after passing the first check in mmap(), we could set the flag to false
so that do_vmi_munmap() would not check again.

However, this approach was attempted in V1 and V2 of the patch [1] [2],
and was strongly opposed by Linus. It was considered too ad hoc and to
hurt readability.
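
A rough sketch of the rejected shape (the parameter name is made up here,
just to show the idea):

	/* Hypothetical: thread a flag down so the seal check runs only once. */
	if (check_seal && !can_modify_mm(mm, start, end))
		return -EPERM;

with check_seal set to false on the do_vmi_munmap() call made from
mmap_region(), since do_mmap() has already performed the check.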

Below is my text from V2: [3]

"When handling mmap/munmap/mremap/mmap, once the code has passed
can_modify_mm(), it means the memory area is not sealed; if the code
continues to call the other utility functions, we don't need to check
the seal again. This is the case for mremap(): the seals of the src
address and dest address (when applicable) are checked first, and later
when the code calls do_vmi_munmap(), it no longer needs to check the
seal again."

Considering this is the MAP_FIXED case, which is perhaps not used that
often in practice, I think this is acceptable performance-wise, unless
you know of another solution.

[1] https://lore.kernel.org/lkml/20231016143828.647848-6-jeffxu@chromium.org/
[2] https://lore.kernel.org/lkml/20231017090815.1067790-6-jeffxu@chromium.org/
[3] https://lore.kernel.org/lkml/CALmYWFux2m=9189Gs0o8-xhPNW4dnFvtqj7ptcT5QvzxVgfvYQ@mail.gmail.com/


> >        /* arch_unmap() might do unmaps itself.  */
> >       arch_unmap(mm, start, end);
> >
> > @@ -3102,6 +3121,14 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> >  {
> >       struct mm_struct *mm = vma->vm_mm;
> >
> > +     /*
> > +      * Check if memory is sealed before arch_unmap.
> > +      * Prevent unmapping a sealed VMA.
> > +      * can_modify_mm assumes we have acquired the lock on MM.
> > +      */
> > +     if (!can_modify_mm(mm, start, end))
> > +             return -EPERM;
> > +
>
> I am sure you've looked at the callers, from what I found there are two:
>
> The brk call uses this function, so it may check more than one VMA in
> that path.  Will the brk VMAs potentially be msealed?  I guess someone
> could do that?
>
> The other place this is use is in ipc/shm.c whhere the start/end is just
> the vma start/end, so we only really need to check that one vma.
>
Yes. Those two cases were looked at, and they were the main reason
that MAP_SEALABLE was introduced as part of mmap().

As in the open discussion of the V3/V4 patch: [4] [5]

[4] https://lore.kernel.org/linux-mm/20231212231706.2680890-1-jeffxu@chromium.org/T/
[5] https://lore.kernel.org/linux-mm/20240104185138.169307-3-jeffxu@chromium.org/T/

Copied here for ease of reading:
---------------------------------------------------------------------------------------------

During the development of V3, I had new questions and thoughts and
wished to discuss.

1> shm/aio
From reading the code, it seems to me that aio/shm can mmap/munmap
maps on behalf of userspace, e.g. ksys_shmdt() in shm.c. The lifetime
of those mappings is not tied to the lifetime of the process. If those
mappings are sealed from userspace, then unmap will fail. This isn’t a
huge problem, since the memory will eventually be freed at exit or
exec. However, it feels like the solution is not complete, because of
the leaks in VMA address space during the lifetime of the process.

2> Brk (heap/stack)
Currently, userspace applications can seal parts of the heap by
calling malloc() and mseal(). This raises the question of what the
expected behavior is when sealing the heap is attempted.

let's assume following calls from user space:

ptr = malloc(size);
mprotect(ptr, size, RO);
mseal(ptr, size, SEAL_PROT_PKEY);
free(ptr);

Technically, before mseal() is added, the user can change the
protection of the heap by calling mprotect(RO). As long as the user
changes the protection back to RW before free(), the memory can be
reused.

Adding mseal() into the picture, however, the heap is then partially
sealed; the user can still free it, but the memory remains RO, and the
result of a brk shrink is nondeterministic, depending on whether
munmap() tries to free the sealed memory (brk uses munmap to shrink
the heap).

3> The above two cases led to the third topic:
There is one option to address the problems mentioned above.
Option 1:  A “MAP_SEALABLE” flag in mmap().
If a map is created without this flag, the mseal() operation will
fail. Applications that are not concerned with sealing will expect
their behavior to be unchanged. For those that are concerned, adding a
flag at mmap time to opt in is not difficult. For the short term, this
solves problems 1 and 2 above. The memory in shm/aio/brk will not have
the MAP_SEALABLE flag at mmap(), and the same is true for the heap.

If we choose not to go with this path, all mappings will be sealable
by default. We could document the above-mentioned limitations so devs
are more careful when choosing what memory to seal. I think denial of
service through mseal() by an attacker is probably not a concern: if
attackers have access to mseal() and unsealed memory, then they can
also do other harmful things to the memory, such as munmap, etc.

4>
I think it might be possible to seal the stack or other special
mappings created at runtime (vdso, vsyscall, vvar). This means we can
enforce and seal W^X for certain types of application. For instance,
the stack is typically used in read-write mode, but in some cases, it
can become executable. To defend against unintended addition of the
executable bit to the stack, we could let the application seal it.

Sealing the heap (for adding X) requires special handling, since the
heap can shrink, and shrink is implemented through munmap().

Indeed, it might be possible that all virtual memory accessible to user
space, regardless of its usage pattern, could be sealed. However, this
would require additional research and development work.

-----------------------------------------------------------------------------------------------------


> Is there a way to avoid walking the tree for the single known VMA?
Are you thinking about a hash table to record brk VMAs, or a dedicated
tree for sealed VMAs?  That is possible, but it would be a lot more code.

> Does
> it make sense to deny mseal writing to brk VMAs?
>
Yes. It makes sense. Since brk memory doesn't have MAP_SEALABLE at
this moment, mseal() will fail even if someone tries to seal it.
Sealing brk memory would require more research and design.

>
> >       arch_unmap(mm, start, end);
> >       return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
> >  }
>
> ...
>
>
> Ah, I see them now.  Yes, this is what I expected to see.  Does this not
> have any impact on mmap/munmap benchmarks?
>
Thanks for bringing up this topic! I was actually hoping for
performance-related questions.

I haven't done any benchmarks, due to lack of knowledge on how those
tests are usually performed.

For mseal(), since it will be called only in a few places (libc/ELF
loading), I'm expecting no real-world impact, and that can be measured
once we have implementations in place in libc and the ELF loader.

The hot path could be on mmap() and munmap(), as you pointed out.

mmap() was discussed above (adding a check for FIXED )

For munmap(), there is a cost in calling can_modify_mm(). I thought
about calling can_modify_vma() in do_vmi_align_munmap() instead, but
decided against that for two reasons (see the sketch below):

a. It skips arch_unmap(), and arch_unmap() can unmap the memory.
b. The current logic of checking sealing is: if any VMA between start
and end is sealed, mprotect/mmap/munmap fails without any VMA being
modified.  This means we would need an additional walk over the VMA
tree anyway.
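
In other words, the all-or-nothing semantics force a separate early pass,
roughly like this sketch (not the patch text itself):

	/* Pass 1: fail before touching anything if any VMA in range is sealed. */
	if (!can_modify_mm(mm, start, end))
		return -EPERM;

	/* Pass 2: only then start unmapping. */
	arch_unmap(mm, start, end);
	return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);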

> > +bool can_modify_mm(struct mm_struct *mm, unsigned long start, unsigned long end)
> > +{
> > +     struct vm_area_struct *vma;
> > +
> > +     VMA_ITERATOR(vmi, mm, start);
> > +
> > +     /* going through each vma to check. */
> > +     for_each_vma_range(vmi, vma, end) {
> > +             if (!can_modify_vma(vma))
> > +                     return false;
> > +     }
> > +
> > +     /* Allow by default. */
> > +     return true;
> > +}
> > +
> > +/*
> > + * Check if the vmas of a memory range are allowed to be modified by madvise.
> > + * the memory ranger can have a gap (unallocated memory).
> > + * return true, if it is allowed.
> > + */
> > +bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long end,
> > +             int behavior)
> > +{
> > +     struct vm_area_struct *vma;
> > +
> > +     VMA_ITERATOR(vmi, mm, start);
> > +
> > +     if (!is_madv_discard(behavior))
> > +             return true;
> > +
> > +     /* going through each vma to check. */
> > +     for_each_vma_range(vmi, vma, end)
> > +             if (is_ro_anon(vma) && !can_modify_vma(vma))
> > +                     return false;
> > +
> > +     /* Allow by default. */
> > +     return true;
> > +}
> > +
>
> ...
>
> > +static int check_mm_seal(unsigned long start, unsigned long end)
> > +{
> > +     struct vm_area_struct *vma;
> > +     unsigned long nstart = start;
> > +
> > +     VMA_ITERATOR(vmi, current->mm, start);
> > +
> > +     /* going through each vma to check. */
> > +     for_each_vma_range(vmi, vma, end) {
> > +             if (vma->vm_start > nstart)
> > +                     /* unallocated memory found. */
> > +                     return -ENOMEM;
>
> Ah, another potential user for a contiguous iterator of VMAs.
>
> > +
> > +             if (!can_add_vma_seal(vma))
> > +                     return -EACCES;
> > +
> > +             if (vma->vm_end >= end)
> > +                     return 0;
> > +
> > +             nstart = vma->vm_end;
> > +     }
> > +
> > +     return -ENOMEM;
> > +}
> > +
> > +/*
> > + * Apply sealing.
> > + */
> > +static int apply_mm_seal(unsigned long start, unsigned long end)
> > +{
> > +     unsigned long nstart;
> > +     struct vm_area_struct *vma, *prev;
> > +
> > +     VMA_ITERATOR(vmi, current->mm, start);
> > +
> > +     vma = vma_iter_load(&vmi);
> > +     /*
> > +      * Note: check_mm_seal should already checked ENOMEM case.
> > +      * so vma should not be null, same for the other ENOMEM cases.
>
> The start to end is contiguous, right?
Yes.  check_mm_seal makes sure the start to end is contiguous.

>
> > +      */
> > +     prev = vma_prev(&vmi);
> > +     if (start > vma->vm_start)
> > +             prev = vma;
> > +
> > +     nstart = start;
> > +     for_each_vma_range(vmi, vma, end) {
> > +             int error;
> > +             unsigned long tmp;
> > +             vm_flags_t newflags;
> > +
> > +             newflags = vma->vm_flags | VM_SEALED;
> > +             tmp = vma->vm_end;
> > +             if (tmp > end)
> > +                     tmp = end;
> > +             error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
> > +             if (error)
> > +                     return error;
>
> > +             tmp = vma_iter_end(&vmi);
> > +             nstart = tmp;
>
> You set tmp before using it unconditionally to vma->vm_end above, so you
> can set nstart = vma_iter_end(&vmi) here.  But, also we know the
> VMAs are contiguous from your check_mm_seal() call, so we know nstart ==
> vma->vm_start on the next loop.
The code is almost the same as in mlock.c, except that we know the
VMAs are contiguous, so we don't check for some of the ENOMEM cases.
There might be ways to improve this code. For ease of code review, I
chose consistency with mlock for now.

>
> ...


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-22 22:34     ` Theo de Raadt
  2024-01-23 17:33       ` Liam R. Howlett
@ 2024-01-24 18:55       ` Jeff Xu
  2024-01-24 19:17         ` Theo de Raadt
  1 sibling, 1 reply; 23+ messages in thread
From: Jeff Xu @ 2024-01-24 18:55 UTC (permalink / raw)
  To: Theo de Raadt
  Cc: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap, jeffxu, jorgelo, groeck, linux-kernel,
	linux-kselftest, linux-mm, pedro.falcato, dave.hansen,
	linux-hardening

On Mon, Jan 22, 2024 at 2:34 PM Theo de Raadt <deraadt@openbsd.org> wrote:
>
> Jeff Xu <jeffxu@chromium.org> wrote:
>
> > On Mon, Jan 22, 2024 at 7:49 AM Theo de Raadt <deraadt@openbsd.org> wrote:
> > >
> > > Regarding these pieces
> > >
> > > > The PROT_SEAL bit in prot field of mmap(). When present, it marks
> > > > the map sealed since creation.
> > >
> > > OpenBSD won't be doing this.  I had PROT_IMMUTABLE as a draft.  In my
> > > research I found basically zero circumstances when you userland does
> > > that.  The most common circumstance is you create a RW mapping, fill it,
> > > and then change to a more restrictve mapping, and lock it.
> > >
> > > There are a few regions in the addressspace that can be locked while RW.
> > > For instance, the stack.  But the kernel does that, not userland.  I
> > > found regions where the kernel wants to do this to the address space,
> > > but there is no need to export useless functionality to userland.
> > >
> > I have a feeling that most apps that need to use mmap() in their code
> > are likely using RW mappings. Adding sealing to mmap() could stop
> > those mappings from being executable. Of course, those apps would
> > need to change their code. We can't do it for them.
>
> I don't have a feeling about it.
>
> I spent a year engineering a complete system which exercises the maximum
> amount of memory you can lock.
>
> I saw nothing like what you are describing.  I had PROT_IMMUTABLE in my
> drafts, and saw it turning into a dangerous anti-pattern.
>
I'm sorry, I have never looked at a single line of OpenBSD code,
prototype or not, nor have I ever installed OpenBSD.

Because of this, I fail to understand why you have such a strong
opinion on PROT_SEAL in mmap() in the Linux kernel, based on your own
OpenBSD experience.

For PROT_SEAL in mmap(), I see it as a good and reasonable suggestion
raised during the RFC process, and I incorporated it into the patch
set; there is nothing more and nothing less to it.
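
To illustrate, a sketch of the two userspace paths this enables (PROT_SEAL
and MAP_SEALABLE are the flags from this patch set; len is whatever size
the caller picks, and mseal() is invoked via syscall() since no libc
wrapper exists yet):

	/* Seal at creation time. */
	void *p = mmap(NULL, len, PROT_READ | PROT_SEAL,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_SEALABLE, -1, 0);

	/* Or: create a sealable RW mapping, fill it, restrict it, then seal it. */
	void *q = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_SEALABLE, -1, 0);
	/* ... populate q ... */
	mprotect(q, len, PROT_READ);
	syscall(__NR_mseal, q, len, 0);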

If OpenBSD doesn't want it, that is fine with me; I am not trying to
force this into OpenBSD's kernel, and I understand it is a different
code base.

> > Also, I believe adding this to mmap() has no downsides, only
> > performance gain, as Pedro Falcato pointed out in [1].
> >
> > [1] https://lore.kernel.org/lkml/CAKbZUD2A+=bp_sd+Q0Yif7NJqMu8p__eb4yguq0agEcmLH8SDQ@mail.gmail.com/
>
> Are you joking?  You don't have any code doing that today.  More feelings?
>
> OpenBSD userland has zero places it can use mmap() MAP_IMMUTABLE.
>
> It has two places where it has mprotect() + mimmutable() adjacent to each
> other, two codepaths for late mprotect() of RELRO, and then make the RELRO
> immutable.
>
> I think this idea is a premature optimization, and intentionally incompatible.
>
> Like I say, I had a similar MAP_ flag for mprotect() and mmap() in my
> development trees, and I recognized it was pointless, distracting developers
> into the wrong patterns, and I threw it out.
>
> > > OpenBSD now uses this for a high percent of the address space.  It might
> > > be worth re-reading a description of the split of responsibility regarding
> > > who locks different types of memory in a process;
> > > - kernel (the majority, based upon what ELF layout tell us),
> > > - shared library linker (the next majority, dealing with shared
> > >   library mappings and left-overs not determinable at kernel time),
> > > - libc (a small minority, mostly regarding forced mutable objects)
> > > - and the applications themselves (only 1 application today)
> > >
> > >     https://lwn.net/Articles/915662/
> > >
> > > > The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> > > > the map as sealable. A map created without MAP_SEALABLE will not support
> > > > sealing, i.e. mseal() will fail.
> > >
> > > We definately won't be doing this.  We allow a process to lock any and all
> > > it's memory that isn't locked already, even if it means it is shooting
> > > itself in the foot.
> > >
> > > I think you are going to severely hurt the power of this mechanism,
> > > because you won't be able to lock memory that has been allocated by a
> > > different callsite not under your source-code control which lacks the
> > > MAP_SEALABLE flag.  (Which is extremely common with the system-parts of
> > > a process, meaning not just libc but kernel allocated objects).
> > >
> > MAP_SEALABLE was an open discussion item called out on V3 [2] and V4 [3].
> >
> > I acknowledge that additional coordination would be required if
> > mapping were to be allocated by one software component and sealed in
> > another. However, this is feasible.
> >
> > Considering the side effect of not having this flag (as discussed in
> > V3/V4) and the significant implications of altering the lifetime of
> > the mapping (since unmapping would not be possible), I believe it is
> > reasonable to expect developers to exercise additional care and
> > caution when utilizing memory sealing.
> >
> > [2] https://lore.kernel.org/linux-mm/20231212231706.2680890-2-jeffxu@chromium.org/
> > [3] https://lore.kernel.org/all/20240104185138.169307-1-jeffxu@chromium.org/
>
> I disagree *strongly*.  Developers need to exercise additional care on
> memory, period.  Memory sealing issues is the least of their worries.
>
> (Except for handling RELRO, but only the ld.so developers will lose
> their hair).
>
>
> OK, so mseal and mimmutable are very different.
>
> mimmutable can be used by any developer on the address space easily.
>
> mseal requires control of the whole stack between allocation and consumption.
>
> I'm sorry, but I don't think you understand how dangerous this MAP_SEALABLE
> proposal is because of the difficulties it will create for use.
>
> The immutable memory management we have today in OpenBSD would completely
> impossible with such a flag.  Seperation between allocator (that doesn't know
> what is going to happen), and consumer (that does know), is completely common
> in the systems environment (meaning the interaction between DSO, libc, other
> libraries, and the underside of applications).
>
> This is not not like an application where you can simply sprinkle the flag
> into the mmap() calls that cause you problems.  That mmap() call is now in
> someone else's code, and you CANNOT gain security advantage unless you
> convince them to gain an understanding of what that flag means -- and it is
> a flag that other Linux variants don't have, not even in their #include
> files.
>
I respect your reasoning with OpenBSD, but do you have a real example
of where this will be problematic for Linux?

In my opinion, the extra communication with the mmap() call's owner has
its pros and cons.

The con is what you mentioned: extra time for convincing and approval.

The pro is that there won't be unexpected behavior from the code
owner's point of view once this communication is completed, which can
reduce the possibility of introducing bugs.

So far, I do not have enough information to say this is a bad idea.
If you can provide a real example in the context of Linux, e.g. the DSO
and libc cases you mentioned, with details, that would be helpful.


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-23 18:58         ` Theo de Raadt
@ 2024-01-24 18:56           ` Jeff Xu
  0 siblings, 0 replies; 23+ messages in thread
From: Jeff Xu @ 2024-01-24 18:56 UTC (permalink / raw)
  To: Liam R. Howlett, Jeff Xu, akpm, keescook, jannh, sroettger,
	willy, gregkh, torvalds, usama.anjum, rdunlap, jeffxu, jorgelo,
	groeck, linux-kernel, linux-kselftest, linux-mm, pedro.falcato,
	dave.hansen, linux-hardening

On Tue, Jan 23, 2024 at 10:58 AM Theo de Raadt <deraadt@openbsd.org> wrote:
>
> It's the same with MAP_MSEALABLE.  I don't get it. So now there are 3
> memory types:
>        - cannot be sealed, ever
>        - not yet sealed
>        - sealed
>
> What purpose does the first type serve?  Please explain the use case.
>
> Today, processes have control over their entire address space.
>
> What is the purpose of "permissions cannot be locked".  Please supply
> an example.  If I am wrong, I'd like to know where I went wrong.
>
The Linux example is in the open discussion section of the V3 and V4
cover letters [1] [2].

[1] https://lore.kernel.org/linux-mm/20231212231706.2680890-1-jeffxu@chromium.org/T/
[2] https://lore.kernel.org/linux-mm/20240104185138.169307-3-jeffxu@chromium.org/T/

Copied below for ease of reading.
-----------------------------------------------------------------------------------------
During the development of V3, I had new questions and thoughts and
wished to discuss.

1> shm/aio
From reading the code, it seems to me that aio/shm can mmap/munmap
maps on behalf of userspace, e.g. ksys_shmdt() in shm.c. The lifetime
of those mappings is not tied to the lifetime of the process. If those
mappings are sealed from userspace, then unmap will fail. This isn’t a
huge problem, since the memory will eventually be freed at exit or
exec. However, it feels like the solution is not complete, because of
the leaks in VMA address space during the lifetime of the process.

2> Brk (heap/stack)
Currently, userspace applications can seal parts of the heap by
calling malloc() and mseal(). This raises the question of what the
expected behavior is when sealing the heap is attempted.

let's assume following calls from user space:

ptr = malloc(size);
mprotect(ptr, size, RO);
mseal(ptr, size, SEAL_PROT_PKEY);
free(ptr);

Technically, before mseal() is added, the user can change the
protection of the heap by calling mprotect(RO). As long as the user
changes the protection back to RW before free(), the memory can be
reused.

Adding mseal() into the picture, however, the heap is then partially
sealed; the user can still free it, but the memory remains RO, and the
result of a brk shrink is nondeterministic, depending on whether
munmap() tries to free the sealed memory (brk uses munmap to shrink
the heap).

3> The above two cases led to the third topic:
There is one option to address the problems mentioned above.
Option 1:  A “MAP_SEALABLE” flag in mmap().
If a map is created without this flag, the mseal() operation will
fail. Applications that are not concerned with sealing will expect
their behavior to be unchanged. For those that are concerned, adding a
flag at mmap time to opt in is not difficult. For the short term, this
solves problems 1 and 2 above. The memory in shm/aio/brk will not have
the MAP_SEALABLE flag at mmap(), and the same is true for the heap.

If we choose not to go with this path, all mappings will be sealable
by default. We could document the above-mentioned limitations so devs
are more careful when choosing what memory to seal. I think denial of
service through mseal() by an attacker is probably not a concern: if
attackers have access to mseal() and unsealed memory, then they can
also do other harmful things to the memory, such as munmap, etc.

4>
I think it might be possible to seal the stack or other special
mappings created at runtime (vdso, vsyscall, vvar). This means we can
enforce and seal W^X for certain types of application. For instance,
the stack is typically used in read-write mode, but in some cases, it
can become executable. To defend against unintended addition of the
executable bit to the stack, we could let the application seal it.

Sealing the heap (for adding X) requires special handling, since the
heap can shrink, and shrink is implemented through munmap().

Indeed, it might be possible that all virtual memory accessible to user
space, regardless of its usage pattern, could be sealed. However, this
would require additional research and development work.

-----------------------------------------------------------------------------------------------------


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-24 18:55       ` Jeff Xu
@ 2024-01-24 19:17         ` Theo de Raadt
  0 siblings, 0 replies; 23+ messages in thread
From: Theo de Raadt @ 2024-01-24 19:17 UTC (permalink / raw)
  To: Jeff Xu
  Cc: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap, jeffxu, jorgelo, groeck, linux-kernel,
	linux-kselftest, linux-mm, pedro.falcato, dave.hansen,
	linux-hardening

Jeff Xu <jeffxu@chromium.org> wrote:

> > I don't have a feeling about it.
> >
> > I spent a year engineering a complete system which exercises the maximum
> > amount of memory you can lock.
> >
> > I saw nothing like what you are describing.  I had PROT_IMMUTABLE in my
> > drafts, and saw it turning into a dangerous anti-pattern.
> >
> I'm sorry, I have never looked at one line of openBSD code, prototype
> or not, nor did I install openBSD before.

That is really disingenuous.

It is obvious to everyone that mseal is a derivative of the mimmutable
mechanism; the raw idea stems directly from it, and you didn't need to
stay at a Holiday Inn Express.

> Because of this situation on my side, I failed to understand why you
> have such a strong opinion on PROC_SEAL in mmap() in linux kernel,
> based on your own OpenBSD's experience ?

Portable and compatible interfaces are good.

Historically, incompatible interfaces are less good.

> For PROT_SEAL in mmap(), I see it as a good and reasonable suggestion
> raised during the RFC process, and incorporate it into the patch set,
> there is nothing more and nothing less.

Yet, you and those who suggested it don't have a single line of userland
code ready which will use this.
 
> If openBSD doesn't want it, that is fine to me, it is not that I'm
> trying to force this into openBSD's kernel, I understand it is a
> different code base.

This has nothing to do with code base.

It is about attempting to decrease differences between systems, an
approach which has always been valuable.

Divergence has always been painful.

> > > > OpenBSD now uses this for a high percent of the address space.  It might
> > > > be worth re-reading a description of the split of responsibility regarding
> > > > who locks different types of memory in a process;
> > > > - kernel (the majority, based upon what ELF layout tell us),
> > > > - shared library linker (the next majority, dealing with shared
> > > >   library mappings and left-overs not determinable at kernel time),
> > > > - libc (a small minority, mostly regarding forced mutable objects)
> > > > - and the applications themselves (only 1 application today)
> > > >
> > > >     https://lwn.net/Articles/915662/
> > > >
> > > > > The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> > > > > the map as sealable. A map created without MAP_SEALABLE will not support
> > > > > sealing, i.e. mseal() will fail.
> > > >
> > > > We definitely won't be doing this.  We allow a process to lock any and all
> > > > its memory that isn't locked already, even if it means it is shooting
> > > > itself in the foot.
> > > >
> > > > I think you are going to severely hurt the power of this mechanism,
> > > > because you won't be able to lock memory that has been allocated by a
> > > > different callsite not under your source-code control which lacks the
> > > > MAP_SEALABLE flag.  (Which is extremely common with the system-parts of
> > > > a process, meaning not just libc but kernel allocated objects).
> > > >
> > > MAP_SEALABLE was an open discussion item called out on V3 [2] and V4 [3].
> > >
> > > I acknowledge that additional coordination would be required if
> > > mapping were to be allocated by one software component and sealed in
> > > another. However, this is feasible.
> > >
> > > Considering the side effect of not having this flag (as discussed in
> > > V3/V4) and the significant implications of altering the lifetime of
> > > the mapping (since unmapping would not be possible), I believe it is
> > > reasonable to expect developers to exercise additional care and
> > > caution when utilizing memory sealing.
> > >
> > > [2] https://lore.kernel.org/linux-mm/20231212231706.2680890-2-jeffxu@chromium.org/
> > > [3] https://lore.kernel.org/all/20240104185138.169307-1-jeffxu@chromium.org/
> >
> > I disagree *strongly*.  Developers need to exercise additional care on
> > memory, period.  Memory sealing issues is the least of their worries.
> >
> > (Except for handling RELRO, but only the ld.so developers will lose
> > their hair).
> >
> >
> > OK, so mseal and mimmutable are very different.
> >
> > mimmutable can be used by any developer on the address space easily.
> >
> > mseal requires control of the whole stack between allocation and consumption.
> >
> > I'm sorry, but I don't think you understand how dangerous this MAP_SEALABLE
> > proposal is because of the difficulties it will create for use.
> >
> > The immutable memory management we have today in OpenBSD would be completely
> > impossible with such a flag.  Separation between allocator (that doesn't know
> > what is going to happen), and consumer (that does know), is completely common
> > in the systems environment (meaning the interaction between DSO, libc, other
> > libraries, and the underside of applications).
> >
> > This is not like an application where you can simply sprinkle the flag
> > into the mmap() calls that cause you problems.  That mmap() call is now in
> > someone else's code, and you CANNOT gain security advantage unless you
> > convince them to gain an understanding of what that flag means -- and it is
> > a flag that other Linux variants don't have, not even in their #include
> > files.
> >
> I respect your reasoning with OpenBSD, but do you have a real example
> where this will be problematic for Linux?

See below.

> In my opinion, the extra communication part with mmap()'s owner has
> its pros and cons.

See below.

> The cons is what you mentioned: extra time for convincing and approval.

No, it is much worse than that.  See below.

> The pro is that there won't be unexpected behavior from the code owner
> point of view, once this communication process is completed. It can
> reduce the possibility of introducing bugs.
> 
> So far, I do not have enough information to say this is a bad idea.
> if you can provide a real example in the context of linux, e.g. DSO
> and libc you mentioned with details, that will be helpful.

Does the kernel map the main program's text segment, data segment, bss
segment, and stack with MAP_SEALABLE or without MAP_SEALABLE?

Once it is mapped, userland starts running.

If those objects don't have MAP_SEALABLE, then ld.so and libc cannot
perform locking of those mappings.  And ld.so or libc must do some of
that locking later; some of these map lockings cannot be performed in
the kernel because userland makes data modifications and permission
modifications before proceeding into main().

This is unavoidable, because of RELRO; binaries with text relocation; binaries
with W|X mappings; it is probably required for IFUNC setup; and I strongly
suspect there are additional circumstances which require this, *just for glibc*
to use the mechanism.

If the kernel does map those regions with MAP_SEALABLE, then it seems
the most important parts of the address space are going to have MAP_SEALABLE
anyways.  So what were you trying to defend against?

So why are you doing this MAP_SEALABLE dance?   It makes no sense.

I'm sorry, but it is you who must justify these strange semantics which
you are introducing -- to change a mechanism previously engineered and
fully deployed in another operating system.  To me, not being able to
justify these behaviours seems to be based on intentional ignorance.
"Not Invented Here", is what I see.

You say glibc will use this.  I call bollocks.  I see a specific behaviour
which will prevent use by glibc.  I designed my mechanism with libc specifically
considered -- it was a whole system environment.

You work on Chrome.  You don't work on glibc.  The glibc people aren't publicly
talking about this.  From my perspective, this is looking really dumb.


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 2/4] mseal: add mseal syscall
  2024-01-24 17:50     ` Jeff Xu
@ 2024-01-24 20:06       ` Liam R. Howlett
  2024-01-24 20:37         ` Theo de Raadt
  2024-01-24 22:49         ` Jeff Xu
  0 siblings, 2 replies; 23+ messages in thread
From: Liam R. Howlett @ 2024-01-24 20:06 UTC (permalink / raw)
  To: Jeff Xu
  Cc: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap, jeffxu, jorgelo, groeck, linux-kernel,
	linux-kselftest, linux-mm, pedro.falcato, dave.hansen,
	linux-hardening, deraadt

* Jeff Xu <jeffxu@chromium.org> [240124 12:50]:
> On Tue, Jan 23, 2024 at 10:15 AM Liam R. Howlett
> <Liam.Howlett@oracle.com> wrote:
> >
> > * jeffxu@chromium.org <jeffxu@chromium.org> [240122 10:29]:
> > > From: Jeff Xu <jeffxu@chromium.org>
> > >
> > > The new mseal() is an syscall on 64 bit CPU, and with
> > > following signature:
> > >
> > > int mseal(void addr, size_t len, unsigned long flags)
> > > addr/len: memory range.
> > > flags: reserved.
> > >
> > > mseal() blocks following operations for the given memory range.
> > >
> > > 1> Unmapping, moving to another location, and shrinking the size,
> > >    via munmap() and mremap(), can leave an empty space, therefore can
> > >    be replaced with a VMA with a new set of attributes.
> > >
> > > 2> Moving or expanding a different VMA into the current location,
> > >    via mremap().
> > >
> > > 3> Modifying a VMA via mmap(MAP_FIXED).
> > >
> > > 4> Size expansion, via mremap(), does not appear to pose any specific
> > >    risks to sealed VMAs. It is included anyway because the use case is
> > >    unclear. In any case, users can rely on merging to expand a sealed VMA.
> > >
> > > 5> mprotect() and pkey_mprotect().
> > >
> > > 6> Some destructive madvice() behaviors (e.g. MADV_DONTNEED) for anonymous
> > >    memory, when users don't have write permission to the memory. Those
> > >    behaviors can alter region contents by discarding pages, effectively a
> > >    memset(0) for anonymous memory.
> > >
> > > In addition: mmap() has two related changes.
> > >
> > > The PROT_SEAL bit in prot field of mmap(). When present, it marks
> > > the map sealed since creation.
> > >
> > > The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> > > the map as sealable. A map created without MAP_SEALABLE will not support
> > > sealing, i.e. mseal() will fail.
> > >
> > > Applications that don't care about sealing will expect their behavior
> > > unchanged. For those that need sealing support, opt-in by adding
> > > MAP_SEALABLE in mmap().
> > >
> > > I would like to formally acknowledge the valuable contributions
> > > received during the RFC process, which were instrumental
> > > in shaping this patch:
> > >
> > > Jann Horn: raising awareness and providing valuable insights on the
> > > destructive madvise operations.
> > > Linus Torvalds: assisting in defining system call signature and scope.
> > > Pedro Falcato: suggesting sealing in the mmap().
> > > Theo de Raadt: sharing the experiences and insights gained from
> > > implementing mimmutable() in OpenBSD.
> > >
> > > Finally, the idea that inspired this patch comes from Stephen Röttger’s
> > > work in Chrome V8 CFI.
> > >
> > > Signed-off-by: Jeff Xu <jeffxu@chromium.org>
> > > ---
> > >  include/linux/mm.h                     |  48 ++++
> > >  include/linux/syscalls.h               |   1 +
> > >  include/uapi/asm-generic/mman-common.h |   8 +
> > >  mm/Makefile                            |   4 +
> > >  mm/madvise.c                           |  12 +
> > >  mm/mmap.c                              |  27 ++
> > >  mm/mprotect.c                          |  10 +
> > >  mm/mremap.c                            |  31 +++
> > >  mm/mseal.c                             | 343 +++++++++++++++++++++++++
> > >  9 files changed, 484 insertions(+)
> > >  create mode 100644 mm/mseal.c
> > >

...

> >
> > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > index b78e83d351d2..32bc2179aed0 100644
> > > --- a/mm/mmap.c
> > > +++ b/mm/mmap.c
> > > @@ -1213,6 +1213,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > >  {
> > >       struct mm_struct *mm = current->mm;
> > >       int pkey = 0;
> > > +     unsigned long vm_seals;
> > >
> > >       *populate = 0;
> > >
> > > @@ -1233,6 +1234,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > >       if (flags & MAP_FIXED_NOREPLACE)
> > >               flags |= MAP_FIXED;
> > >
> > > +     vm_seals = get_mmap_seals(prot, flags);
> > > +
> > >       if (!(flags & MAP_FIXED))
> > >               addr = round_hint_to_min(addr);
> > >
> > > @@ -1261,6 +1264,13 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > >                       return -EEXIST;
> > >       }
> > >
> > > +     /*
> > > +      * Check if the address range is sealed for do_mmap().
> > > +      * can_modify_mm assumes we have acquired the lock on MM.
> > > +      */
> > > +     if (!can_modify_mm(mm, addr, addr + len))
> > > +             return -EPERM;
> > > +
> >
> > This is called after get_unmapped_area(), so this area is either going
> > to be MAP_FIXED and return the "hint" addr or it's going to be empty.
> > You can probably avoid walking the VMAs in the non-FIXED case.  This
> > would remove the overhead of your check in the most common case.
> >
> 
> Thanks for flagging this!
> 
> I wasn't entirely sure about get_unmapped_area() after reading the
> code,  It calls a few variants of  arch_get_unmapped_area_xxx()
> functions.
> 
> e.g. it seems like the generic_get_unmapped_area_topdown  is returning
> a non-null address even when MAP_FIXED is set to false
> 
>  ----------------------------------------------------------------------------
> generic_get_unmapped_area_topdown (
> ...
>         if (flags & MAP_FIXED)            <-- MAP_FIXED case.
>                 return addr;
>
>         /* requesting a specific address */
>         if (addr) {                       <-- note: not MAP_FIXED
>                 addr = PAGE_ALIGN(addr);
>                 vma = find_vma_prev(mm, addr, &prev);
>                 if (mmap_end - len >= addr && addr >= mmap_min_addr &&
>                     (!vma || addr + len <= vm_start_gap(vma)) &&
>                     (!prev || addr >= vm_end_gap(prev)))
>                         return addr;      <-- note: returns a non-null addr here.
>         }

Sorry, I was not clear.  Either MAP_FIXED will just return the addr, or
the addr that is returned has no VMA (the memory area is empty).  This
function finds a gap to place your data and the gap is (at least) as big
as you want (usually oversized, but that doesn't matter here).  The
mmap_lock is held, so we know it's going to remain empty.

So there are two scenarios:
1. MAP_FIXED which may or may not have a VMA over the range
2. An address which has no VMA over the range

Anyways, this is probably not needed, because of what I say later.
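
(For reference, the narrower check being discussed would have looked
roughly like the sketch below, using the can_modify_mm() helper from
this patch; it was ultimately dropped in favour of the do_vmi_munmap()
approach described later.)

        /*
         * Sketch: only a MAP_FIXED request can land on existing,
         * possibly sealed, VMAs; the non-FIXED path already got an
         * empty gap from get_unmapped_area(), so skip the walk there.
         */
        if ((flags & MAP_FIXED) && !can_modify_mm(mm, addr, addr + len))
                return -EPERM;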

> 
> ----------------------------------------------------------------------------
> I thought also about adding a check for addr != null  instead, i.e.
> if (addr && !can_modify_mm(mm, addr, addr + len))
>     return -EPERM;
> }
> 
> But using MAP_FIXED to allocate memory at address 0 is legit, e.g.
> allocating a PROT_NONE | PROT_SEAL mapping at address 0.
> 
> Another factor to consider is: what will be the cost of passing an
> empty address into can_modify_mm() ? the search will be 0 to len.

Almost always zero VMAs to check, it's not worth optimising.  The maple
tree will walk to the first range and it'll be 0 to some very large
number, most likely.

> 
> > >       if (prot == PROT_EXEC) {
> > >               pkey = execute_only_pkey(mm);
> > >               if (pkey < 0)
> > > @@ -1376,6 +1386,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > >                       vm_flags |= VM_NORESERVE;
> > >       }
> > >
> > > +     vm_flags |= vm_seals;
> > >       addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
> > >       if (!IS_ERR_VALUE(addr) &&
> > >           ((vm_flags & VM_LOCKED) ||
> > > @@ -2679,6 +2690,14 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
> > >       if (end == start)
> > >               return -EINVAL;
> > >
> > > +     /*
> > > +      * Check if memory is sealed before arch_unmap.
> > > +      * Prevent unmapping a sealed VMA.
> > > +      * can_modify_mm assumes we have acquired the lock on MM.
> > > +      */
> > > +     if (!can_modify_mm(mm, start, end))
> > > +             return -EPERM;
> > > +
> >
> > This function is currently called from mmap_region(), so we are going to
> > run this check twice as you have it; once in do_mmap() then again in
> > mmap_region() -> do_vmi_munmap().  This effectively doubles your impact
> > to MAP_FIXED calls.
> >
> Yes. Addressing this would require a new flag in do_vmi_munmap():
> after passing the first check in mmap(), we could set the flag to false
> so that do_vmi_munmap() would not check it again.
> 
> However, this approach was attempted in v1 and V2 of the patch [1] [2],
> and was strongly opposed by Linus. It was considered too random and
> it decreased readability.

Oh yes, I recall that now.  He was not pleased.

> 
> Below is my  text in V2: [3]
> 
> "When handing the mmap/munmap/mremap/mmap, once the code passed
> can_modify_mm(), it means the memory area is not sealed, if the code
> continues to call the other utility functions, we don't need to check
> the seal again. This is the case for mremap(), the seal of src address
> and dest address (when applicable) are checked first, later when the
> code calls  do_vmi_munmap(), it no longer needs to check the seal
> again."
> 
> Considering this is the MAP_FIXED case, and maybe that is not used
> that often in practice, I think this is acceptable performance-wise,
> unless you know another solution to help this.

Okay, sure, I haven't been yelled at on the ML for a few weeks.  Here
goes:

do_mmap() will call get_unmapped_area(), which will return an empty area
(no need to check mseal, I hope - or we have larger issues here) or a
MAP_FIXED address.

do_mmap() will pass the address along to mmap_region()

mmap_region() will then call do_vmi_munmap() - which will either remove
the VMA(s) in the way, or do nothing... or error.

mmap_region() will return -ENOMEM in the case of an error returned from
do_vmi_munmap() today.  Change that to return the error code, and let
do_vmi_munmap() do the mseal check.  If mseal check fails then the error
is propagated the same way -ENOMEM is propagated today.

This relies on the fact that we only really need to check the mseal
status of existing VMAs and we can only really map over existing VMAs by
first munmapping them.

It does move your error return to much later in the call stack, but it
removes duplicate work and means less code.  Considering this should be
a rare event, I don't think that's of concern.
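
A rough sketch of that outline (illustrative only; the exact call sites
in mmap_region() differ between kernel versions):

        /* in do_vmi_munmap(): the single, authoritative seal check */
        if (!can_modify_mm(mm, start, end))
                return -EPERM;

        /* in mmap_region(): propagate do_vmi_munmap()'s error instead
         * of flattening it to -ENOMEM */
        error = do_vmi_munmap(&vmi, mm, addr, len, uf, false);
        if (error)
                return error;   /* was: return -ENOMEM; */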

> 
> [1] https://lore.kernel.org/lkml/20231016143828.647848-6-jeffxu@chromium.org/
> [2] https://lore.kernel.org/lkml/20231017090815.1067790-6-jeffxu@chromium.org/
> [3] https://lore.kernel.org/lkml/CALmYWFux2m=9189Gs0o8-xhPNW4dnFvtqj7ptcT5QvzxVgfvYQ@mail.gmail.com/
> 
> 
> > >        /* arch_unmap() might do unmaps itself.  */
> > >       arch_unmap(mm, start, end);
> > >
> > > @@ -3102,6 +3121,14 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> > >  {
> > >       struct mm_struct *mm = vma->vm_mm;
> > >
> > > +     /*
> > > +      * Check if memory is sealed before arch_unmap.
> > > +      * Prevent unmapping a sealed VMA.
> > > +      * can_modify_mm assumes we have acquired the lock on MM.
> > > +      */
> > > +     if (!can_modify_mm(mm, start, end))
> > > +             return -EPERM;
> > > +
> >
> > I am sure you've looked at the callers, from what I found there are two:
> >
> > The brk call uses this function, so it may check more than one VMA in
> > that path.  Will the brk VMAs potentially be msealed?  I guess someone
> > could do that?
> >
> > The other place this is used is in ipc/shm.c where the start/end is just
> > the vma start/end, so we only really need to check that one vma.
> >
> Yes. Those two cases were looked at, and they were the main reason that
> MAP_SEALABLE was introduced as part of mmap().
> 
> As in the open discussion of the V3/V4 patch: [4] [5]
> 
> [4] https://lore.kernel.org/linux-mm/20231212231706.2680890-1-jeffxu@chromium.org/T/
> [5] https://lore.kernel.org/linux-mm/20240104185138.169307-3-jeffxu@chromium.org/T/
> 
> Copied here for ease of reading:
> ---------------------------------------------------------------------------------------------
> 
> During the development of V3, I had new questions and thoughts and
> wished to discuss.
> 
> 1> shm/aio
> From reading the code, it seems to me that aio/shm can mmap/munmap
> maps on behalf of userspace, e.g. ksys_shmdt() in shm.c. The lifetime
> of those mapping are not tied to the lifetime of the process. If those
> memories are sealed from userspace, then unmap will fail. This isn’t a
> huge problem, since the memory will eventually be freed at exit or
> exec. However, it feels like the solution is not complete, because of
> the leaks in VMA address space during the lifetime of the process.
> 
> 2> Brk (heap/stack)
> Currently, userspace applications can seal parts of the heap by
> calling malloc() and mseal(). This raises the question of what the
> expected behavior is when sealing the heap is attempted.
> 
> let's assume following calls from user space:
> 
> ptr = malloc(size);
> mprotect(ptr, size, RO);
> mseal(ptr, size, SEAL_PROT_PKEY);
> free(ptr);
> 
> Technically, before mseal() is added, the user can change the
> protection of the heap by calling mprotect(RO). As long as the user
> changes the protection back to RW before free(), the memory can be
> reused.
> 
> Adding mseal() into the picture, however, the heap is then partially
> sealed; the user can still free it, but the memory remains RO, and the
> result of brk-shrink is nondeterministic, depending on whether
> munmap() will try to free the sealed memory (brk uses munmap to shrink
> the heap).
> 
> 3> Above two cases led to the third topic:
> There is one option to address the problem mentioned above.
> Option 1:  A “MAP_SEALABLE” flag in mmap().
> If a map is created without this flag, the mseal() operation will
> fail. Applications that are not concerned with sealing will expect
> their behavior to be unchanged. For those that are concerned, adding a
> flag at mmap time to opt in is not difficult. For the short term, this
> solves problems 1 and 2 above. The memory in shm/aio/brk will not have
> the MAP_SEALABLE flag at mmap(), and the same is true for the heap.
> 
> If we choose not to go with this path, all mappings will be sealable
> by default. We could document the above-mentioned limitations so devs
> are more careful when choosing what memory to seal. I think denial of
> service through mseal() by an attacker is probably not a concern; if
> attackers have access to mseal() and unsealed memory, then they can
> also do other harmful things to the memory, such as munmap(), etc.
> 
> 4>
> I think it might be possible to seal the stack or other special
> mappings created at runtime (vdso, vsyscall, vvar). This means we can
> enforce and seal W^X for certain types of application. For instance,
> the stack is typically used in read-write mode, but in some cases, it
> can become executable. To defend against unintended addition of the
> executable bit to the stack, we could let the application seal it.
> 
> Sealing the heap (for adding X) requires special handling, since the
> heap can shrink, and shrink is implemented through munmap().
> 
> Indeed, it might be possible that all virtual memory accessible to user
> space, regardless of its usage pattern, could be sealed. However, this
> would require additional research and development work.
> 
> -----------------------------------------------------------------------------------------------------
> 
> 
> > Is there a way to avoid walking the tree for the single known VMA?
> Are you thinking about a hash table to record brk VMA ? or a dedicated
> tree for sealed VMAs? possible. code will be a lot more though.

No, instead of calling a loop to walk the tree to find the same VMA,
just check the single VMA.

ipc/shm.c: do_vma_munmap(&vmi, vma, vma->vm_start, vma->vm_end...

So if you just check the single VMA then we don't need to worry about
re-walking.
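
Something along these lines (a hypothetical helper; it assumes the
VM_SEALED vma flag introduced by this patch):

        static inline bool can_modify_vma(struct vm_area_struct *vma)
        {
                return !(vma->vm_flags & VM_SEALED);
        }

        /* ipc/shm.c path: one VMA, one flag test, no tree walk */
        if (!can_modify_vma(vma))
                return -EPERM;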

I think this is a moot point if my outline above works.

> 
> > Does
> > it make sense to deny mseal writing to brk VMAs?
> >
> Yes. It makes sense. Since brk memory doesn't have MAP_SEALABLE at
> this moment,  mseal will fail even if someone tries to seal it.
> Sealing brk memory would require more research and design.
> 
> >
> > >       arch_unmap(mm, start, end);
> > >       return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
> > >  }
> >
> > ...
> >
> >
> > Ah, I see them now.  Yes, this is what I expected to see.  Does this not
> > have any impact on mmap/munmap benchmarks?
> >
> Thanks for bringing this topic! I'm kind of hoping for performance related
> questions.
> 
> I haven't done any benchmarks, due to lack of knowledge on how those
> tests are usually performed.
> 
> For mseal(), since it will be called only in a few places (libc/elf
> loading),  I'm expecting no real world  impact, and that can be
> measured when we have implementations in place in libc and
> elf-loading.
> 
> The hot path could be on mmap() and munmap(), as you pointed out.
> 
> mmap() was discussed above (adding a check for FIXED )

That can probably be dropped as discussed above.

> 
> munmap(): there is a cost in calling can_modify_mm(). I thought about
> calling can_modify_vma() in do_vmi_align_munmap(), but there are two
> reasons not to:
> 
> a. it skips arch_unmap, and arch_unmap can unmap the memory.
> b. Current logic of checking sealing is: if one of VMAs between start to end is
> sealed, mprotect/mmap/munmap will fail without any of VMAs being modified.
> This means we will need additional walking over the VMA tree.

Certainly, but it comes at a cost.  I was just surprised by the
statement that there is no negative from the previous discussion, as I
replied to the cover letter.

> > > +/*
> > > + * Apply sealing.
> > > + */
> > > +static int apply_mm_seal(unsigned long start, unsigned long end)
> > > +{
> > > +     unsigned long nstart;
> > > +     struct vm_area_struct *vma, *prev;
> > > +
> > > +     VMA_ITERATOR(vmi, current->mm, start);
> > > +
> > > +     vma = vma_iter_load(&vmi);
> > > +     /*
> > > +      * Note: check_mm_seal should already checked ENOMEM case.
> > > +      * so vma should not be null, same for the other ENOMEM cases.
> >
> > The start to end is contiguous, right?
> Yes.  check_mm_seal makes sure the start to end is contiguous.
> 
> >
> > > +      */
> > > +     prev = vma_prev(&vmi);
> > > +     if (start > vma->vm_start)
> > > +             prev = vma;
> > > +
> > > +     nstart = start;
> > > +     for_each_vma_range(vmi, vma, end) {
> > > +             int error;
> > > +             unsigned long tmp;
> > > +             vm_flags_t newflags;
> > > +
> > > +             newflags = vma->vm_flags | VM_SEALED;
> > > +             tmp = vma->vm_end;
> > > +             if (tmp > end)
> > > +                     tmp = end;
> > > +             error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
> > > +             if (error)
> > > +                     return error;
> >
> > > +             tmp = vma_iter_end(&vmi);
> > > +             nstart = tmp;
> >
> > You set tmp before using it unconditionally to vma->vm_end above, so you
> > can set nstart = vma_iter_end(&vmi) here.  But, also we know the
> > VMAs are contiguous from your check_mm_seal() call, so we know nstart ==
> > vma->vm_start on the next loop.
> The code is almost the same as in mlock.c, except that we know the
> VMAs are contiguous, so we don't check for some of the ENOMEM cases.
> There might be ways to improve this code. For ease of code review, I
> chose consistency (same as mlock) for now.

Yes, I thought that was the case.  tmp is updated in that code to ensure
we have reached the end of the range without a gap at the end.  Since
you already checked that the VMAs are contiguous, the last tmp update in
your loop is not needed.
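
(With the contiguity already verified by check_mm_seal(), the loop body
could be reduced to roughly the following sketch; untested, shown only
to illustrate the simplification being suggested:)

        for_each_vma_range(vmi, vma, end) {
                vm_flags_t newflags = vma->vm_flags | VM_SEALED;
                unsigned long tmp = min(vma->vm_end, end);
                int error;

                error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
                if (error)
                        return error;
                nstart = vma_iter_end(&vmi);
        }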

Thanks,
Liam


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 2/4] mseal: add mseal syscall
  2024-01-24 20:06       ` Liam R. Howlett
@ 2024-01-24 20:37         ` Theo de Raadt
  2024-01-24 20:51           ` Theo de Raadt
  2024-01-24 22:49         ` Jeff Xu
  1 sibling, 1 reply; 23+ messages in thread
From: Theo de Raadt @ 2024-01-24 20:37 UTC (permalink / raw)
  To: Liam R. Howlett, Jeff Xu, akpm, keescook, jannh, sroettger,
	willy, gregkh, torvalds, usama.anjum, rdunlap, jeffxu, jorgelo,
	groeck, linux-kernel, linux-kselftest, linux-mm, pedro.falcato,
	dave.hansen, linux-hardening

Liam R. Howlett <Liam.Howlett@Oracle.com> wrote:

> > Adding mseal() into picture, however, the heap is then sealed
> > partially, user can still free it, but the memory remains to be RO,
> > and the result of brk-shrink is nondeterministic, depending on if
> > munmap() will try to free the sealed memory.(brk uses munmap to shrink
> > the heap).

"You are holding it wrong".

> > [...]. We could document above mentioned limitations so devs are
> > more careful at the time to choose what memory to seal.

You mean like they need to be careful what memory they map, careful
what memory they unmap, careful what they do with mprotect, careful
about not writing or reading out of bounds, etc.  They need to be
careful about everything.

Programmers have complete control over the address space in a program.
This is Linux we are talking about; it still doesn't have a strict policy
on W | X memory, but misuse of mseal is suddenly a developer crisis?

Why is this memory attribute different, and how does it actually help?

When they use mseal on objects with unproven future, the program will
crash later, beautifully demonstrating that they held it wrong.  Then
they can fix their abusive incorrect code.

This discussion about the malloc heap is ridiculous.  Obviously it is
programmer error to lock the permissions on memory you will free for
reuse.  But you can't fix this problem with malloc(), without breaking
other extremely common circumstances where the allocation of memory
and PERMANENT-USE-WITHOUT-RELEASE of such memory are separated over a
memory boundary, unless you start telling all open source library authors
to always use MAP_SEALABLE in their mmap() calls.



^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 2/4] mseal: add mseal syscall
  2024-01-24 20:37         ` Theo de Raadt
@ 2024-01-24 20:51           ` Theo de Raadt
  0 siblings, 0 replies; 23+ messages in thread
From: Theo de Raadt @ 2024-01-24 20:51 UTC (permalink / raw)
  To: Liam R. Howlett, Jeff Xu, akpm, keescook, jannh, sroettger,
	willy, gregkh, torvalds, usama.anjum, rdunlap, jeffxu, jorgelo,
	groeck, linux-kernel, linux-kselftest, linux-mm, pedro.falcato,
	dave.hansen, linux-hardening

Theo de Raadt <deraadt@openbsd.org> wrote:

> This discussion about the malloc heap is ridiculous.  Obviously it is
> programmer error to lock the permissions on memory you will free for
> reuse.  But you can't fix this problem with malloc(), without breaking
> other extremely common circumstances where the allocation of memory
> and PERMANENT-USE-WITHOUT-RELEASE of such memory are separated over a
> memory boundary, unless you start telling all open source library authors

  ^^^^^^^^^^^^^^^ library boundary, sorry

> to always use MAP_SEALABLE in their mmap() calls.

Example:

1. libcrypto (or some other library) has some ways to allocate memory and
   provide it to an application.
2. Even if this is using malloc(), heap allocations over a pagesize are
   page-aligned, so even then the following assumptions are sound.
3. I have an application which uses that memory, but will never release the memory
   until program termination
4. The library interface is public and used by many programs, so the library
   author has a choice of using MAP_SEALABLE or not using MAP_SEALABLE

Due to your choice, my application cannot lock the memory permissions
unless that library author chooses MAP_SEALABLE.

If they choose to use MAP_SEALABLE, all programs get this memory you consider
less safe.

Exactly what is being gained here?
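
To make the split concrete, a hypothetical sketch (the library function
name is made up, and mseal() is shown as if a libc wrapper existed):

#include <sys/mman.h>

/* library side (allocator): knows nothing about sealing */
void *crypto_alloc_keys(size_t len)
{
        /* no MAP_SEALABLE here, and the author has no reason to add it */
        return mmap(NULL, len, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
}

/* application side (consumer): keeps the memory until program exit */
int lock_keys(void *keys, size_t len)
{
        if (mprotect(keys, len, PROT_READ))
                return -1;
        /* with the MAP_SEALABLE requirement this fails: the library never
           opted the mapping in, so the application cannot seal it */
        return mseal(keys, len, 0);
}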







^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 2/4] mseal: add mseal syscall
  2024-01-24 20:06       ` Liam R. Howlett
  2024-01-24 20:37         ` Theo de Raadt
@ 2024-01-24 22:49         ` Jeff Xu
  2024-01-25  2:04           ` Jeff Xu
  1 sibling, 1 reply; 23+ messages in thread
From: Jeff Xu @ 2024-01-24 22:49 UTC (permalink / raw)
  To: Liam R. Howlett, Jeff Xu, akpm, keescook, jannh, sroettger,
	willy, gregkh, torvalds, usama.anjum, rdunlap, jeffxu, jorgelo,
	groeck, linux-kernel, linux-kselftest, linux-mm, pedro.falcato,
	dave.hansen, linux-hardening, deraadt

On Wed, Jan 24, 2024 at 12:06 PM Liam R. Howlett
<Liam.Howlett@oracle.com> wrote:
>
> * Jeff Xu <jeffxu@chromium.org> [240124 12:50]:
> > On Tue, Jan 23, 2024 at 10:15 AM Liam R. Howlett
> > <Liam.Howlett@oracle.com> wrote:
> > >
> > > * jeffxu@chromium.org <jeffxu@chromium.org> [240122 10:29]:
> > > > From: Jeff Xu <jeffxu@chromium.org>
> > > >
> > > > The new mseal() is an syscall on 64 bit CPU, and with
> > > > following signature:
> > > >
> > > > int mseal(void addr, size_t len, unsigned long flags)
> > > > addr/len: memory range.
> > > > flags: reserved.
> > > >
> > > > mseal() blocks following operations for the given memory range.
> > > >
> > > > 1> Unmapping, moving to another location, and shrinking the size,
> > > >    via munmap() and mremap(), can leave an empty space, therefore can
> > > >    be replaced with a VMA with a new set of attributes.
> > > >
> > > > 2> Moving or expanding a different VMA into the current location,
> > > >    via mremap().
> > > >
> > > > 3> Modifying a VMA via mmap(MAP_FIXED).
> > > >
> > > > 4> Size expansion, via mremap(), does not appear to pose any specific
> > > >    risks to sealed VMAs. It is included anyway because the use case is
> > > >    unclear. In any case, users can rely on merging to expand a sealed VMA.
> > > >
> > > > 5> mprotect() and pkey_mprotect().
> > > >
> > > > 6> Some destructive madvice() behaviors (e.g. MADV_DONTNEED) for anonymous
> > > >    memory, when users don't have write permission to the memory. Those
> > > >    behaviors can alter region contents by discarding pages, effectively a
> > > >    memset(0) for anonymous memory.
> > > >
> > > > In addition: mmap() has two related changes.
> > > >
> > > > The PROT_SEAL bit in prot field of mmap(). When present, it marks
> > > > the map sealed since creation.
> > > >
> > > > The MAP_SEALABLE bit in the flags field of mmap(). When present, it marks
> > > > the map as sealable. A map created without MAP_SEALABLE will not support
> > > > sealing, i.e. mseal() will fail.
> > > >
> > > > Applications that don't care about sealing will expect their behavior
> > > > unchanged. For those that need sealing support, opt-in by adding
> > > > MAP_SEALABLE in mmap().
> > > >
> > > > I would like to formally acknowledge the valuable contributions
> > > > received during the RFC process, which were instrumental
> > > > in shaping this patch:
> > > >
> > > > Jann Horn: raising awareness and providing valuable insights on the
> > > > destructive madvise operations.
> > > > Linus Torvalds: assisting in defining system call signature and scope.
> > > > Pedro Falcato: suggesting sealing in the mmap().
> > > > Theo de Raadt: sharing the experiences and insights gained from
> > > > implementing mimmutable() in OpenBSD.
> > > >
> > > > Finally, the idea that inspired this patch comes from Stephen Röttger’s
> > > > work in Chrome V8 CFI.
> > > >
> > > > Signed-off-by: Jeff Xu <jeffxu@chromium.org>
> > > > ---
> > > >  include/linux/mm.h                     |  48 ++++
> > > >  include/linux/syscalls.h               |   1 +
> > > >  include/uapi/asm-generic/mman-common.h |   8 +
> > > >  mm/Makefile                            |   4 +
> > > >  mm/madvise.c                           |  12 +
> > > >  mm/mmap.c                              |  27 ++
> > > >  mm/mprotect.c                          |  10 +
> > > >  mm/mremap.c                            |  31 +++
> > > >  mm/mseal.c                             | 343 +++++++++++++++++++++++++
> > > >  9 files changed, 484 insertions(+)
> > > >  create mode 100644 mm/mseal.c
> > > >
>
> ...
>
> > >
> > > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > > index b78e83d351d2..32bc2179aed0 100644
> > > > --- a/mm/mmap.c
> > > > +++ b/mm/mmap.c
> > > > @@ -1213,6 +1213,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > > >  {
> > > >       struct mm_struct *mm = current->mm;
> > > >       int pkey = 0;
> > > > +     unsigned long vm_seals;
> > > >
> > > >       *populate = 0;
> > > >
> > > > @@ -1233,6 +1234,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > > >       if (flags & MAP_FIXED_NOREPLACE)
> > > >               flags |= MAP_FIXED;
> > > >
> > > > +     vm_seals = get_mmap_seals(prot, flags);
> > > > +
> > > >       if (!(flags & MAP_FIXED))
> > > >               addr = round_hint_to_min(addr);
> > > >
> > > > @@ -1261,6 +1264,13 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > > >                       return -EEXIST;
> > > >       }
> > > >
> > > > +     /*
> > > > +      * Check if the address range is sealed for do_mmap().
> > > > +      * can_modify_mm assumes we have acquired the lock on MM.
> > > > +      */
> > > > +     if (!can_modify_mm(mm, addr, addr + len))
> > > > +             return -EPERM;
> > > > +
> > >
> > > This is called after get_unmapped_area(), so this area is either going
> > > to be MAP_FIXED and return the "hint" addr or it's going to be empty.
> > > You can probably avoid walking the VMAs in the non-FIXED case.  This
> > > would remove the overhead of your check in the most common case.
> > >
> >
> > Thanks for flagging this!
> >
> > I wasn't entirely sure about get_unmapped_area() after reading the
> > code,  It calls a few variants of  arch_get_unmapped_area_xxx()
> > functions.
> >
> > e.g. it seems like the generic_get_unmapped_area_topdown  is returning
> > a non-null address even when MAP_FIXED is set to false
> >
> >  ----------------------------------------------------------------------------
> > generic_get_unmapped_area_topdown (
> > ...
> >         if (flags & MAP_FIXED)            <-- MAP_FIXED case.
> >                 return addr;
> >
> >         /* requesting a specific address */
> >         if (addr) {                       <-- note: not MAP_FIXED
> >                 addr = PAGE_ALIGN(addr);
> >                 vma = find_vma_prev(mm, addr, &prev);
> >                 if (mmap_end - len >= addr && addr >= mmap_min_addr &&
> >                     (!vma || addr + len <= vm_start_gap(vma)) &&
> >                     (!prev || addr >= vm_end_gap(prev)))
> >                         return addr;      <-- note: returns a non-null addr here.
> >         }
>
> Sorry, I was not clear.  Either MAP_FIXED will just return the addr, or
> the addr that is returned has no VMA (the memory area is empty).  This
> function finds a gap to place your data and the gap is (at least) as big
> as you want (usually oversized, but that doesn't matter here).  The
> mmap_lock is held, so we know it's going to remain empty.
>
> So there are two scenarios:
> 1. MAP_FIXED which may or may not have a VMA over the range
> 2. An address which has no VMA over the range
>
> Anyways, this is probably not needed, because of what I say later.
>
> >
> > ----------------------------------------------------------------------------
> > I thought also about adding a check for addr != null  instead, i.e.
> > if (addr && !can_modify_mm(mm, addr, addr + len))
> >     return -EPERM;
> > }
> >
> > But using MAP_FIXED to allocate memory at address 0 is legit, e.g.
> > allocating a PROT_NONE | PROT_SEAL at address 0.
> >
> > Another factor to consider is: what will be the cost of passing an
> > empty address into can_modify_mm() ? the search will be 0 to len.
>
> Almost always zero VMAs to check, it's not worth optimising.  The maple
> tree will walk to the first range and it'll be 0 to some very large
> number, most likely.
>
Got you.

> >
> > > >       if (prot == PROT_EXEC) {
> > > >               pkey = execute_only_pkey(mm);
> > > >               if (pkey < 0)
> > > > @@ -1376,6 +1386,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
> > > >                       vm_flags |= VM_NORESERVE;
> > > >       }
> > > >
> > > > +     vm_flags |= vm_seals;
> > > >       addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
> > > >       if (!IS_ERR_VALUE(addr) &&
> > > >           ((vm_flags & VM_LOCKED) ||
> > > > @@ -2679,6 +2690,14 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
> > > >       if (end == start)
> > > >               return -EINVAL;
> > > >
> > > > +     /*
> > > > +      * Check if memory is sealed before arch_unmap.
> > > > +      * Prevent unmapping a sealed VMA.
> > > > +      * can_modify_mm assumes we have acquired the lock on MM.
> > > > +      */
> > > > +     if (!can_modify_mm(mm, start, end))
> > > > +             return -EPERM;
> > > > +
> > >
> > > This function is currently called from mmap_region(), so we are going to
> > > run this check twice as you have it; once in do_mmap() then again in
> > > mmap_region() -> do_vmi_munmap().  This effectively doubles your impact
> > > to MAP_FIXED calls.
> > >
> > Yes. To address this would require a new flag in the do_vmi_munmap(),
> > after passing the first check in mmap(), we could set the flag as false,
> > so do_vmi_munmap() would not check it again.
> >
> > However, this approach was attempted in v1 and V2 of the patch [1] [2],
> > and was strongly opposed by Linus. It was considered as too random and
> > decreased the readability.
>
> Oh yes, I recall that now.  He was not pleased.
>
> >
> > Below is my  text in V2: [3]
> >
> > "When handing the mmap/munmap/mremap/mmap, once the code passed
> > can_modify_mm(), it means the memory area is not sealed, if the code
> > continues to call the other utility functions, we don't need to check
> > the seal again. This is the case for mremap(), the seal of src address
> > and dest address (when applicable) are checked first, later when the
> > code calls  do_vmi_munmap(), it no longer needs to check the seal
> > again."
> >
> > Considering this is the MAP_FIXED case, and maybe that is not used
> > that often in practice, I think this is acceptable performance-wise,
> > unless you know another solution to help this.
>
> Okay, sure, I haven't been yelled at on the ML for a few weeks.  Here
> goes:
>
> do_mmap() will call get_unmapped_area(), which will return an empty area
> (no need to check mseal, I hope - or we have larger issues here) or a
> MAP_FIXED address.
>
> do_mmap() will pass the address along to mmap_region()
>
> mmap_region() will then call do_vmi_munmap() - which will either remove
> the VMA(s) in the way, or do nothing... or error.
>
> mmap_region() will return -ENOMEM in the case of an error returned from
> do_vmi_munmap() today.  Change that to return the error code, and let
> do_vmi_munmap() do the mseal check.  If mseal check fails then the error
> is propagated the same way -ENOMEM is propagated today.
>
> This relies on the fact that we only really need to check the mseal
> status of existing VMAs and we can only really map over existing VMAs by
> first munmapping them.
>
> It does move your error return to much later in the call stack, but it
> removes duplicate work and less code.  Considering this should be a rare
> event, I don't think that's of concern.
>
I think that is a great idea; I will try to implement it and get back
to you on this.

> >
> > [1] https://lore.kernel.org/lkml/20231016143828.647848-6-jeffxu@chromium.org/
> > [2] https://lore.kernel.org/lkml/20231017090815.1067790-6-jeffxu@chromium.org/
> > [3] https://lore.kernel.org/lkml/CALmYWFux2m=9189Gs0o8-xhPNW4dnFvtqj7ptcT5QvzxVgfvYQ@mail.gmail.com/
> >
> >
> > > >        /* arch_unmap() might do unmaps itself.  */
> > > >       arch_unmap(mm, start, end);
> > > >
> > > > @@ -3102,6 +3121,14 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> > > >  {
> > > >       struct mm_struct *mm = vma->vm_mm;
> > > >
> > > > +     /*
> > > > +      * Check if memory is sealed before arch_unmap.
> > > > +      * Prevent unmapping a sealed VMA.
> > > > +      * can_modify_mm assumes we have acquired the lock on MM.
> > > > +      */
> > > > +     if (!can_modify_mm(mm, start, end))
> > > > +             return -EPERM;
> > > > +
> > >
> > > I am sure you've looked at the callers, from what I found there are two:
> > >
> > > The brk call uses this function, so it may check more than one VMA in
> > > that path.  Will the brk VMAs potentially be msealed?  I guess someone
> > > could do that?
> > >
> > > The other place this is used is in ipc/shm.c where the start/end is just
> > > the vma start/end, so we only really need to check that one vma.
> > >
> > Yes. Those two cases were looked at, and was the main reason that
> > MAP_SEALABLE is introduced as part of mmap().
> >
> > As in the open discussion of the V3/V4 patch: [4] [5]
> >
> > [4] https://lore.kernel.org/linux-mm/20231212231706.2680890-1-jeffxu@chromium.org/T/
> > [5] https://lore.kernel.org/linux-mm/20240104185138.169307-3-jeffxu@chromium.org/T/
> >
> > Copied here for ease of reading:
> > ---------------------------------------------------------------------------------------------
> >
> > During the development of V3, I had new questions and thoughts and
> > wished to discuss.
> >
> > 1> shm/aio
> > From reading the code, it seems to me that aio/shm can mmap/munmap
> > maps on behalf of userspace, e.g. ksys_shmdt() in shm.c. The lifetime
> > of those mapping are not tied to the lifetime of the process. If those
> > memories are sealed from userspace, then unmap will fail. This isn’t a
> > huge problem, since the memory will eventually be freed at exit or
> > exec. However, it feels like the solution is not complete, because of
> > the leaks in VMA address space during the lifetime of the process.
> >
> > 2> Brk (heap/stack)
> > Currently, userspace applications can seal parts of the heap by
> > calling malloc() and mseal(). This raises the question of what the
> > expected behavior is when sealing the heap is attempted.
> >
> > let's assume following calls from user space:
> >
> > ptr = malloc(size);
> > mprotect(ptr, size, RO);
> > mseal(ptr, size, SEAL_PROT_PKEY);
> > free(ptr);
> >
> > Technically, before mseal() is added, the user can change the
> > protection of the heap by calling mprotect(RO). As long as the user
> > changes the protection back to RW before free(), the memory can be
> > reused.
> >
> > Adding mseal() into picture, however, the heap is then sealed
> > partially, user can still free it, but the memory remains to be RO,
> > and the result of brk-shrink is nondeterministic, depending on if
> > munmap() will try to free the sealed memory.(brk uses munmap to shrink
> > the heap).
> >
> > 3> Above two cases led to the third topic:
> > There one option to address the problem mentioned above.
> > Option 1:  A “MAP_SEALABLE” flag in mmap().
> > If a map is created without this flag, the mseal() operation will
> > fail. Applications that are not concerned with sealing will expect
> > their behavior to be unchanged. For those that are concerned, adding a
> > flag at mmap time to opt in is not difficult. For the short term, this
> > solves problems 1 and 2 above. The memory in shm/aio/brk will not have
> > the MAP_SEALABLE flag at mmap(), and the same is true for the heap.
> >
> > If we choose not to go with path, all mapping will by default
> > sealable. We could document above mentioned limitations so devs are
> > more careful at the time to choose what memory to seal. I think
> > deny of service through mseal() by attacker is probably not a concern,
> > if attackers have access to mseal() and unsealed memory, then they can
> > also do other harmful thing to the memory, such as munmap, etc.
> >
> > 4>
> > I think it might be possible to seal the stack or other special
> > mappings created at runtime (vdso, vsyscall, vvar). This means we can
> > enforce and seal W^X for certain types of application. For instance,
> > the stack is typically used in read-write mode, but in some cases, it
> > can become executable. To defend against unintented addition of
> > executable bit to stack, we could let the application to seal it.
> >
> > Sealing the heap (for adding X) requires special handling, since the
> > heap can shrink, and shrink is implemented through munmap().
> >
> > Indeed, it might be possible that all virtual memory accessible to user
> > space, regardless of its usage pattern, could be sealed. However, this
> > would require additional research and development work.
> >
> > -----------------------------------------------------------------------------------------------------
> >
> >
> > > Is there a way to avoid walking the tree for the single known VMA?
> > Are you thinking about a hash table to record brk VMA ? or a dedicated
> > tree for sealed VMAs? possible. code will be a lot more though.
>
> No, instead of calling a loop to walk the tree to find the same VMA,
> just check the single VMA.
>
> ipc/shm.c: do_vma_munmap(&vmi, vma, vma->vm_start, vma->vm_end...
>
> So if you just check the single VMA then we don't need to worry about
> re-walking.
>
If you meant: have a new function do_single_vma_munmap() which checks
the sealing flag and munmaps a single VMA, and is used by ipc/shm.c,
then yes, we can have that.
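
Roughly (a hypothetical name and shape, following the existing
do_vma_munmap() pattern quoted above; untested):

        /* unmap exactly one VMA, with a single seal test, no tree walk */
        static int do_single_vma_munmap(struct vma_iterator *vmi,
                                        struct vm_area_struct *vma,
                                        struct list_head *uf, bool unlock)
        {
                struct mm_struct *mm = vma->vm_mm;

                if (vma->vm_flags & VM_SEALED)
                        return -EPERM;

                arch_unmap(mm, vma->vm_start, vma->vm_end);
                return do_vmi_align_munmap(vmi, vma, mm, vma->vm_start,
                                           vma->vm_end, uf, unlock);
        }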

> I think this is a moot point if my outline above works.
>
Yes, I agree. That has a performance impact only for shm. We can do
this optimization as a follow-up patch set.

> >
> > > Does
> > > it make sense to deny mseal writing to brk VMAs?
> > >
> > Yes. It makes sense. Since brk memory doesn't have MAP_SEALABLE at
> > this moment,  mseal will fail even if someone tries to seal it.
> > Sealing brk memory would require more research and design.
> >
> > >
> > > >       arch_unmap(mm, start, end);
> > > >       return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
> > > >  }
> > >
> > > ...
> > >
> > >
> > > Ah, I see them now.  Yes, this is what I expected to see.  Does this not
> > > have any impact on mmap/munmap benchmarks?
> > >
> > Thanks for bringing this topic! I'm kind of hoping for performance related
> > questions.
> >
> > I haven't done any benchmarks, due to lack of knowledge on how those
> > tests are usually performed.
> >
> > For mseal(), since it will be called only in a few places (libc/elf
> > loading),  I'm expecting no real world  impact, and that can be
> > measured when we have implementations in place in libc and
> > elf-loading.
> >
> > The hot path could be on mmap() and munmap(), as you pointed out.
> >
> > mmap() was discussed above (adding a check for FIXED )
>
> That can probably be dropped as discussed above.
>
Ok.

> >
> > munmap(), There is a cost in calling can_modify_mm(). I thought about
> > calling can_modify_vma in do_vmi_align_munmap, but there are two reasons:
> >
> > a. it skips arch_unmap, and arch_unmap can unmap the memory.
> > b. Current logic of checking sealing is: if one of VMAs between start to end is
> > sealed, mprotect/mmap/munmap will fail without any of VMAs being modified.
> > This means we will need additional walking over the VMA tree.
>
> Certainly, but it comes at a cost.  I was just surprised with the
> statement that there is no negative from the previous discussion, as I
> replied to the cover letter.
>
Ah, the context of my "no downside" comment is specifically
"having the PROT_SEAL flag in mmap()", i.e. combining mmap() and
mseal() in one call.
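
That is, the difference between the two-call and the one-call form
(illustrative sketch; PROT_SEAL and MAP_SEALABLE as proposed in this
series):

        /* two calls: map first, seal afterwards */
        p = mmap(NULL, len, PROT_READ,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_SEALABLE, -1, 0);
        mseal(p, len, 0);

        /* one call: sealed from creation, no window in between */
        p = mmap(NULL, len, PROT_READ | PROT_SEAL,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);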


> > > > +/*
> > > > + * Apply sealing.
> > > > + */
> > > > +static int apply_mm_seal(unsigned long start, unsigned long end)
> > > > +{
> > > > +     unsigned long nstart;
> > > > +     struct vm_area_struct *vma, *prev;
> > > > +
> > > > +     VMA_ITERATOR(vmi, current->mm, start);
> > > > +
> > > > +     vma = vma_iter_load(&vmi);
> > > > +     /*
> > > > +      * Note: check_mm_seal should already checked ENOMEM case.
> > > > +      * so vma should not be null, same for the other ENOMEM cases.
> > >
> > > The start to end is contiguous, right?
> > Yes.  check_mm_seal makes sure the start to end is contiguous.
> >
> > >
> > > > +      */
> > > > +     prev = vma_prev(&vmi);
> > > > +     if (start > vma->vm_start)
> > > > +             prev = vma;
> > > > +
> > > > +     nstart = start;
> > > > +     for_each_vma_range(vmi, vma, end) {
> > > > +             int error;
> > > > +             unsigned long tmp;
> > > > +             vm_flags_t newflags;
> > > > +
> > > > +             newflags = vma->vm_flags | VM_SEALED;
> > > > +             tmp = vma->vm_end;
> > > > +             if (tmp > end)
> > > > +                     tmp = end;
> > > > +             error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
> > > > +             if (error)
> > > > +                     return error;
> > >
> > > > +             tmp = vma_iter_end(&vmi);
> > > > +             nstart = tmp;
> > >
> > > You set tmp before using it unconditionally to vma->vm_end above, so you
> > > can set nstart = vma_iter_end(&vmi) here.  But, also we know the
> > > VMAs are contiguous from your check_mm_seal() call, so we know nstart ==
> > > vma->vm_start on the next loop.
> > The code is almost the same as in mlock.c, except that we know the
> > VMAs are contiguous, so we don't check for some of the ENOMEM cases.
> > There might be ways to improve this code. For ease of code review, I
> > choose a consistency (same as mlock)  for now.
>
> Yes, I thought that was the case.  tmp is updated in that code to ensure
> we have reached the end of the range without a gap at the end.  Since
> you already checked that the VMAs are contiguous, the last tmp update in
> your loop is not needed.
>
> Thanks,
> Liam


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 2/4] mseal: add mseal syscall
  2024-01-24 22:49         ` Jeff Xu
@ 2024-01-25  2:04           ` Jeff Xu
  0 siblings, 0 replies; 23+ messages in thread
From: Jeff Xu @ 2024-01-25  2:04 UTC (permalink / raw)
  To: Liam R. Howlett, Jeff Xu, akpm, keescook, jannh, sroettger,
	willy, gregkh, torvalds, usama.anjum, rdunlap, jeffxu, jorgelo,
	groeck, linux-kernel, linux-kselftest, linux-mm, pedro.falcato,
	dave.hansen, linux-hardening, deraadt

On Wed, Jan 24, 2024 at 2:49 PM Jeff Xu <jeffxu@chromium.org> wrote:
>
> On Wed, Jan 24, 2024 at 12:06 PM Liam R. Howlett
> <Liam.Howlett@oracle.com> wrote:
> >
> > > Considering this is the MAP_FIXED case, and maybe that is not used
> > > that often in practice, I think this is acceptable performance-wise,
> > > unless you know another solution to help this.
> >
> > Okay, sure, I haven't been yelled at on the ML for a few weeks.  Here
> > goes:
> >
> > do_mmap() will call get_unmapped_area(), which will return an empty area
> > (no need to check mseal, I hope - or we have larger issues here) or a
> > MAP_FIXED address.
> >
> > do_mmap() will pass the address along to mmap_region()
> >
> > mmap_region() will then call do_vmi_munmap() - which will either remove
> > the VMA(s) in the way, or do nothing... or error.
> >
> > mmap_region() will return -ENOMEM in the case of an error returned from
> > do_vmi_munmap() today.  Change that to return the error code, and let
> > do_vmi_munmap() do the mseal check.  If mseal check fails then the error
> > is propagated the same way -ENOMEM is propagated today.
> >
> > This relies on the fact that we only really need to check the mseal
> > status of existing VMAs and we can only really map over existing VMAs by
> > first munmapping them.
> >
> > It does move your error return to much later in the call stack, but it
> > removes duplicate work and less code.  Considering this should be a rare
> > event, I don't think that's of concern.
> >
> I think that is a great idea, I will try to implement it and get back
> to you on this.
>
I confirm this works. I will add that in the next version. Thanks for
the suggestion.

-Jeff


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-22 15:28 [PATCH v7 0/4] Introduce mseal() jeffxu
                   ` (4 preceding siblings ...)
  2024-01-22 15:49 ` [PATCH v7 0/4] Introduce mseal() Theo de Raadt
@ 2024-01-29 22:36 ` Jonathan Corbet
  2024-01-31 17:49   ` Jeff Xu
  5 siblings, 1 reply; 23+ messages in thread
From: Jonathan Corbet @ 2024-01-29 22:36 UTC (permalink / raw)
  To: jeffxu, akpm, keescook, jannh, sroettger, willy, gregkh,
	torvalds, usama.anjum, rdunlap
  Cc: jeffxu, jorgelo, groeck, linux-kernel, linux-kselftest, linux-mm,
	pedro.falcato, dave.hansen, linux-hardening, deraadt, Jeff Xu

jeffxu@chromium.org writes:

> Although the initial version of this patch series is targeting the
> Chrome browser as its first user, it became evident during upstream
> discussions that we would also want to ensure that the patch set
> eventually is a complete solution for memory sealing and compatible
> with other use cases. The specific scenario currently in mind is
> glibc's use case of loading and sealing ELF executables. To this end,
> Stephen is working on a change to glibc to add sealing support to the
> dynamic linker, which will seal all non-writable segments at startup.
> Once this work is completed, all applications will be able to
> automatically benefit from these new protections.

Is this work posted somewhere?  Having a second - and more generally
useful - user for this API would do a lot to show that the design is, in
fact, right and useful beyond the Chrome browser.

Thanks,

jon


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-29 22:36 ` Jonathan Corbet
@ 2024-01-31 17:49   ` Jeff Xu
  2024-01-31 20:51     ` Jonathan Corbet
  0 siblings, 1 reply; 23+ messages in thread
From: Jeff Xu @ 2024-01-31 17:49 UTC (permalink / raw)
  To: Jonathan Corbet
  Cc: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap, jeffxu, jorgelo, groeck, linux-kernel,
	linux-kselftest, linux-mm, pedro.falcato, dave.hansen,
	linux-hardening, deraadt

On Mon, Jan 29, 2024 at 2:37 PM Jonathan Corbet <corbet@lwn.net> wrote:
>
> jeffxu@chromium.org writes:
>
> > Although the initial version of this patch series is targeting the
> > Chrome browser as its first user, it became evident during upstream
> > discussions that we would also want to ensure that the patch set
> > eventually is a complete solution for memory sealing and compatible
> > with other use cases. The specific scenario currently in mind is
> > glibc's use case of loading and sealing ELF executables. To this end,
> > Stephen is working on a change to glibc to add sealing support to the
> > dynamic linker, which will seal all non-writable segments at startup.
> > Once this work is completed, all applications will be able to
> > automatically benefit from these new protections.
>
> Is this work posted somewhere?  Having a second - and more generally
> useful - user for this API would do a lot to show that the design is, in
> fact, right and useful beyond the Chrome browser.
>
Stephen conducted a PoC last year; it will be published once it is complete.
We're super excited about introducing this as a general safety measure
for all of Linux!
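
To give a flavor of what the loader side could look like, here is a
minimal userspace sketch (my own illustration, not Stephen's glibc
change; the wrapper names are made up and the syscall number is an
assumption taken from the wire-up patch).  It walks the program headers
of every loaded object and seals the non-writable PT_LOAD segments:

/*
 * Minimal userspace sketch: seal the non-writable PT_LOAD segments of
 * every loaded object.  Illustrative only; error handling omitted.
 */
#define _GNU_SOURCE
#include <link.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_mseal
#define __NR_mseal 462	/* assumed number from the wire-up patch */
#endif

static long mseal_range(void *addr, size_t len)
{
	return syscall(__NR_mseal, addr, len, 0UL);
}

static int seal_ro_segments(struct dl_phdr_info *info, size_t size,
			    void *data)
{
	uintptr_t page = (uintptr_t)sysconf(_SC_PAGESIZE);

	for (int i = 0; i < info->dlpi_phnum; i++) {
		const ElfW(Phdr) *ph = &info->dlpi_phdr[i];

		/* Only seal mapped segments that are not writable. */
		if (ph->p_type != PT_LOAD || (ph->p_flags & PF_W))
			continue;

		uintptr_t start = (info->dlpi_addr + ph->p_vaddr)
				  & ~(page - 1);
		uintptr_t end = (info->dlpi_addr + ph->p_vaddr
				 + ph->p_memsz + page - 1) & ~(page - 1);

		mseal_range((void *)start, end - start);
	}
	return 0;
}

int main(void)
{
	/* Called once everything is mapped and relocated. */
	dl_iterate_phdr(seal_ro_segments, NULL);
	return 0;
}

A real dynamic-linker change would do the equivalent internally, before
handing control to the application.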

Thanks
-Jeff

> Thanks,
>
> jon


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v7 0/4] Introduce mseal()
  2024-01-31 17:49   ` Jeff Xu
@ 2024-01-31 20:51     ` Jonathan Corbet
  0 siblings, 0 replies; 23+ messages in thread
From: Jonathan Corbet @ 2024-01-31 20:51 UTC (permalink / raw)
  To: Jeff Xu
  Cc: akpm, keescook, jannh, sroettger, willy, gregkh, torvalds,
	usama.anjum, rdunlap, jeffxu, jorgelo, groeck, linux-kernel,
	linux-kselftest, linux-mm, pedro.falcato, dave.hansen,
	linux-hardening, deraadt

Jeff Xu <jeffxu@chromium.org> writes:

> On Mon, Jan 29, 2024 at 2:37 PM Jonathan Corbet <corbet@lwn.net> wrote:
>>
>> jeffxu@chromium.org writes:
>>
>> > Although the initial version of this patch series is targeting the
>> > Chrome browser as its first user, it became evident during upstream
>> > discussions that we would also want to ensure that the patch set
>> > eventually is a complete solution for memory sealing and compatible
>> > with other use cases. The specific scenario currently in mind is
>> > glibc's use case of loading and sealing ELF executables. To this end,
>> > Stephen is working on a change to glibc to add sealing support to the
>> > dynamic linker, which will seal all non-writable segments at startup.
>> > Once this work is completed, all applications will be able to
>> > automatically benefit from these new protections.
>>
>> Is this work posted somewhere?  Having a second - and more generally
>> useful - user for this API would do a lot to show that the design is, in
>> fact, right and useful beyond the Chrome browser.
>>
> Stephen conducted a PoC last year; it will be published once it is complete.
> We're super excited about introducing this as a general safety measure
> for all of Linux!

We're excited too; something like mseal() seems like a good thing to
have.  My point, though, is that it would be good to see this second
(and more general) user of the API *before* merging it.  As others have
noted, once mseal() is in a released kernel, it will be difficult to
change if adjustments turn out to be necessary.

Thanks,

jon


^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2024-01-31 20:51 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-01-22 15:28 [PATCH v7 0/4] Introduce mseal() jeffxu
2024-01-22 15:28 ` [PATCH v7 1/4] mseal: Wire up mseal syscall jeffxu
2024-01-22 15:28 ` [PATCH v7 2/4] mseal: add mseal syscall jeffxu
2024-01-23 18:14   ` Liam R. Howlett
2024-01-24 17:50     ` Jeff Xu
2024-01-24 20:06       ` Liam R. Howlett
2024-01-24 20:37         ` Theo de Raadt
2024-01-24 20:51           ` Theo de Raadt
2024-01-24 22:49         ` Jeff Xu
2024-01-25  2:04           ` Jeff Xu
2024-01-22 15:28 ` [PATCH v7 3/4] selftest mm/mseal memory sealing jeffxu
2024-01-22 15:28 ` [PATCH v7 4/4] mseal: add documentation jeffxu
2024-01-22 15:49 ` [PATCH v7 0/4] Introduce mseal() Theo de Raadt
2024-01-22 22:10   ` Jeff Xu
2024-01-22 22:34     ` Theo de Raadt
2024-01-23 17:33       ` Liam R. Howlett
2024-01-23 18:58         ` Theo de Raadt
2024-01-24 18:56           ` Jeff Xu
2024-01-24 18:55       ` Jeff Xu
2024-01-24 19:17         ` Theo de Raadt
2024-01-29 22:36 ` Jonathan Corbet
2024-01-31 17:49   ` Jeff Xu
2024-01-31 20:51     ` Jonathan Corbet
