* [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack
@ 2020-11-10 16:21 Yu-cheng Yu
  2020-11-10 16:21 ` [PATCH v15 01/26] Documentation/x86: Add CET description Yu-cheng Yu
                   ` (26 more replies)
  0 siblings, 27 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

Control-flow Enforcement Technology (CET) is a new Intel processor feature
that blocks return/jump-oriented programming attacks.  Details are in the
"Intel 64 and IA-32 Architectures Software Developer's Manual" [1].

CET can protect applications and the kernel.  This series enables only
application-level protection, and has three parts:

  - Shadow stack [2],
  - Indirect branch tracking [3], and
  - Selftests [4].

I have run tests on these patches for quite some time, and they have been
very stable.  Linux distributions with CET are available now, and Intel
processors with CET are becoming available.  It would be nice if CET
support could be accepted into the kernel.  I will work to address any
issues that come up.

Changes in v15:
- Rebase to v5.10-rc3.
- Small changes to the documentation to make meanings clear.
- Remove changes to tools/arch/x86/include/ files.
- Remove Reviewed-by tags from patches that have been revised too many
  times.

[1] Intel 64 and IA-32 Architectures Software Developer's Manual:

    https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-1-2a-2b-2c-2d-3a-3b-3c-3d-and-4

[2] CET Shadow Stack patches v14:

    https://lkml.kernel.org/r/20201012153850.26996-1-yu-cheng.yu@intel.com/

[3] Indirect Branch Tracking patches v14:

    https://lkml.kernel.org/r/20201012154530.28382-1-yu-cheng.yu@intel.com/

[4] I am holding off the selftests changes and working to get Acked-by's.
    The earlier version of the selftests patches:

    https://lkml.kernel.org/r/20200521211720.20236-1-yu-cheng.yu@intel.com/

[5] The kernel ptrace patch is tested with an Intel-internal updated GDB.
    I am holding off the kernel ptrace patch to re-test it with my earlier
    patch for fixing regset holes.

Yu-cheng Yu (26):
  Documentation/x86: Add CET description
  x86/cpufeatures: Add CET CPU feature flags for Control-flow
    Enforcement Technology (CET)
  x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states
  x86/cet: Add control-protection fault handler
  x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack
  x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW
  x86/mm: Remove _PAGE_DIRTY_HW from kernel RO pages
  x86/mm: Introduce _PAGE_COW
  drm/i915/gvt: Change _PAGE_DIRTY to _PAGE_DIRTY_BITS
  x86/mm: Update pte_modify for _PAGE_COW
  x86/mm: Update ptep_set_wrprotect() and pmdp_set_wrprotect() for
    transition from _PAGE_DIRTY_HW to _PAGE_COW
  mm: Introduce VM_SHSTK for shadow stack memory
  x86/mm: Shadow Stack page fault error checking
  x86/mm: Update maybe_mkwrite() for shadow stack
  mm: Fixup places that call pte_mkwrite() directly
  mm: Add guard pages around a shadow stack.
  mm/mmap: Add shadow stack pages to memory accounting
  mm: Update can_follow_write_pte() for shadow stack
  mm: Re-introduce vm_flags to do_mmap()
  x86/cet/shstk: User-mode shadow stack support
  x86/cet/shstk: Handle signals for shadow stack
  binfmt_elf: Define GNU_PROPERTY_X86_FEATURE_1_AND properties
  ELF: Introduce arch_setup_elf_property()
  x86/cet/shstk: Handle thread shadow stack
  x86/cet/shstk: Add arch_prctl functions for shadow stack
  mm: Introduce PROT_SHSTK for shadow stack

 .../admin-guide/kernel-parameters.txt         |   6 +
 Documentation/x86/index.rst                   |   1 +
 Documentation/x86/intel_cet.rst               | 138 +++++++
 arch/arm64/include/asm/elf.h                  |   5 +
 arch/x86/Kconfig                              |  39 ++
 arch/x86/ia32/ia32_signal.c                   |  17 +
 arch/x86/include/asm/cet.h                    |  42 +++
 arch/x86/include/asm/cpufeatures.h            |   2 +
 arch/x86/include/asm/disabled-features.h      |   8 +-
 arch/x86/include/asm/elf.h                    |  13 +
 arch/x86/include/asm/fpu/internal.h           |  10 +
 arch/x86/include/asm/fpu/types.h              |  23 +-
 arch/x86/include/asm/fpu/xstate.h             |   6 +-
 arch/x86/include/asm/idtentry.h               |   4 +
 arch/x86/include/asm/mman.h                   |  83 +++++
 arch/x86/include/asm/mmu_context.h            |   3 +
 arch/x86/include/asm/msr-index.h              |  20 +
 arch/x86/include/asm/page_64_types.h          |  10 +
 arch/x86/include/asm/pgtable.h                | 209 ++++++++++-
 arch/x86/include/asm/pgtable_types.h          |  57 ++-
 arch/x86/include/asm/processor.h              |   5 +
 arch/x86/include/asm/special_insns.h          |  32 ++
 arch/x86/include/asm/trap_pf.h                |   2 +
 arch/x86/include/uapi/asm/mman.h              |  28 +-
 arch/x86/include/uapi/asm/prctl.h             |   4 +
 arch/x86/include/uapi/asm/processor-flags.h   |   2 +
 arch/x86/include/uapi/asm/sigcontext.h        |   9 +
 arch/x86/kernel/Makefile                      |   2 +
 arch/x86/kernel/cet.c                         | 343 ++++++++++++++++++
 arch/x86/kernel/cet_prctl.c                   |  68 ++++
 arch/x86/kernel/cpu/common.c                  |  28 ++
 arch/x86/kernel/cpu/cpuid-deps.c              |   2 +
 arch/x86/kernel/fpu/signal.c                  | 100 +++++
 arch/x86/kernel/fpu/xstate.c                  |  25 +-
 arch/x86/kernel/idt.c                         |   4 +
 arch/x86/kernel/process.c                     |  14 +-
 arch/x86/kernel/process_64.c                  |  32 ++
 arch/x86/kernel/relocate_kernel_64.S          |   2 +-
 arch/x86/kernel/signal.c                      |  10 +
 arch/x86/kernel/signal_compat.c               |   2 +-
 arch/x86/kernel/traps.c                       |  59 +++
 arch/x86/kvm/vmx/vmx.c                        |   2 +-
 arch/x86/mm/fault.c                           |  19 +
 arch/x86/mm/mmap.c                            |   2 +
 arch/x86/mm/pat/set_memory.c                  |   2 +-
 arch/x86/mm/pgtable.c                         |  25 ++
 drivers/gpu/drm/i915/gvt/gtt.c                |   2 +-
 fs/aio.c                                      |   2 +-
 fs/binfmt_elf.c                               |   4 +
 fs/proc/task_mmu.c                            |   3 +
 include/linux/elf.h                           |   6 +
 include/linux/mm.h                            |  38 +-
 include/linux/pgtable.h                       |  35 ++
 include/uapi/asm-generic/siginfo.h            |   3 +-
 include/uapi/linux/elf.h                      |   9 +
 ipc/shm.c                                     |   2 +-
 mm/gup.c                                      |   8 +-
 mm/huge_memory.c                              |  10 +-
 mm/memory.c                                   |   5 +-
 mm/migrate.c                                  |   3 +-
 mm/mmap.c                                     |  23 +-
 mm/mprotect.c                                 |   2 +-
 mm/nommu.c                                    |   4 +-
 mm/util.c                                     |   2 +-
 scripts/as-x86_64-has-shadow-stack.sh         |   4 +
 65 files changed, 1594 insertions(+), 90 deletions(-)
 create mode 100644 Documentation/x86/intel_cet.rst
 create mode 100644 arch/x86/include/asm/cet.h
 create mode 100644 arch/x86/include/asm/mman.h
 create mode 100644 arch/x86/kernel/cet.c
 create mode 100644 arch/x86/kernel/cet_prctl.c
 create mode 100755 scripts/as-x86_64-has-shadow-stack.sh

-- 
2.21.0



* [PATCH v15 01/26] Documentation/x86: Add CET description
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-30 18:26   ` Nick Desaulniers
  2020-11-10 16:21 ` [PATCH v15 02/26] x86/cpufeatures: Add CET CPU feature flags for Control-flow Enforcement Technology (CET) Yu-cheng Yu
                   ` (25 subsequent siblings)
  26 siblings, 1 reply; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

Explain no_user_shstk/no_user_ibt kernel parameters, and introduce a new
document on Control-flow Enforcement Technology (CET).

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 .../admin-guide/kernel-parameters.txt         |   6 +
 Documentation/x86/index.rst                   |   1 +
 Documentation/x86/intel_cet.rst               | 138 ++++++++++++++++++
 3 files changed, 145 insertions(+)
 create mode 100644 Documentation/x86/intel_cet.rst

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 526d65d8573a..0ca8fb4d4d1e 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3193,6 +3193,12 @@
 			noexec=on: enable non-executable mappings (default)
 			noexec=off: disable non-executable mappings
 
+	no_user_shstk	[X86-64] Disable Shadow Stack for user-mode
+			applications
+
+	no_user_ibt	[X86-64] Disable Indirect Branch Tracking for user-mode
+			applications
+
 	nosmap		[X86,PPC]
 			Disable SMAP (Supervisor Mode Access Prevention)
 			even if it is supported by processor.
diff --git a/Documentation/x86/index.rst b/Documentation/x86/index.rst
index b224d12c880b..e88dcea4300b 100644
--- a/Documentation/x86/index.rst
+++ b/Documentation/x86/index.rst
@@ -21,6 +21,7 @@ x86-specific Documentation
    tlb
    mtrr
    pat
+   intel_cet
    intel-iommu
    intel_txt
    amd-memory-encryption
diff --git a/Documentation/x86/intel_cet.rst b/Documentation/x86/intel_cet.rst
new file mode 100644
index 000000000000..4a81e7c9b29a
--- /dev/null
+++ b/Documentation/x86/intel_cet.rst
@@ -0,0 +1,138 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=========================================
+Control-flow Enforcement Technology (CET)
+=========================================
+
+[1] Overview
+============
+
+Control-flow Enforcement Technology (CET) is an Intel processor feature
+that provides protection against return/jump-oriented programming (ROP/JOP)
+attacks.  It can be set up to protect both applications and the kernel.
+Only user-mode protection is implemented in the 64-bit kernel, including
+support for running legacy 32-bit applications.
+
+CET introduces Shadow Stack and Indirect Branch Tracking.  Shadow stack is
+a secondary stack allocated from memory and cannot be directly modified by
+applications.  When executing a CALL instruction, the processor pushes the
+return address to both the normal stack and the shadow stack.  Upon
+function return, the processor pops the shadow stack copy and compares it
+to the normal stack copy.  If the two differ, the processor raises a
+control-protection fault.  Indirect branch tracking verifies that an
+indirect CALL/JMP target is intended, as marked by the compiler with an
+'ENDBR' opcode.
+
+There are two kernel configuration options:
+
+    X86_SHADOW_STACK_USER, and
+    X86_BRANCH_TRACKING_USER.
+
+Both need to be enabled to build a CET-enabled kernel, which requires
+Binutils v2.31 and GCC v8.1 or later.  To build a CET-enabled
+application, GLIBC v2.28 or later is also required.
+
+There are two command-line options for disabling CET features::
+
+    no_user_shstk - disables user shadow stack, and
+    no_user_ibt   - disables user indirect branch tracking.
+
+At run time, /proc/cpuinfo shows CET features if the processor supports
+CET.
+
+[2] Application Enabling
+========================
+
+An application's CET capability is marked in its ELF header and can be
+verified from the following command output, in the NT_GNU_PROPERTY_TYPE_0
+field::
+
+    readelf -n <application> | grep SHSTK
+        properties: x86 feature: IBT, SHSTK
+
+If an application supports CET and is statically linked, it will run with
+CET protection.  If the application needs any shared libraries, the loader
+checks all dependencies and enables CET when all requirements are met.
+
+[3] Backward Compatibility
+==========================
+
+GLIBC provides a few CET tunables via the GLIBC_TUNABLES environment
+variable:
+
+GLIBC_TUNABLES=glibc.tune.hwcaps=-SHSTK,-IBT
+    Turn off SHSTK/IBT.
+
+GLIBC_TUNABLES=glibc.tune.x86_shstk=<on, permissive>
+    This controls how dlopen() handles SHSTK legacy libraries::
+
+        on         - continue with SHSTK enabled;
+        permissive - continue with SHSTK off.
+
+Details can be found in the GLIBC manual pages.
+
+[4] CET arch_prctl()'s
+======================
+
+Several arch_prctl()'s have been added for CET:
+
+arch_prctl(ARCH_X86_CET_STATUS, u64 *addr)
+    Return CET feature status.
+
+    The parameter 'addr' is a pointer to a user buffer.
+    On returning to the caller, the kernel fills the following
+    information::
+
+        *addr       = shadow stack/indirect branch tracking status
+        *(addr + 1) = shadow stack base address
+        *(addr + 2) = shadow stack size
+
+arch_prctl(ARCH_X86_CET_DISABLE, unsigned int features)
+    Disable shadow stack and/or indirect branch tracking as specified in
+    'features'.  Return -EPERM if CET is locked.
+
+arch_prctl(ARCH_X86_CET_LOCK)
+    Lock in all CET features.  They cannot be turned off afterwards.
+
+Note:
+  There is no CET-enabling arch_prctl function.  By design, CET is enabled
+  automatically if the binary and the system can support it.
+
+[5] The implementation of the Shadow Stack
+==========================================
+
+Shadow Stack size
+-----------------
+
+A task's shadow stack is allocated from memory to a fixed size of
+MIN(RLIMIT_STACK, 4 GB).  In other words, the shadow stack is allocated to
+the maximum size of the normal stack, but capped to 4 GB.  However,
+because a compat-mode application's address space is smaller, each of its
+threads' shadow stacks is sized MIN(1/4 RLIMIT_STACK, 4 GB).
+
+Signal
+------
+
+The main program and its signal handlers use the same shadow stack.
+Because the shadow stack stores only return addresses, a large enough
+shadow stack covers the case where both the program stack and the
+signal alternate stack are exhausted.
+
+The kernel creates a restore token for the shadow stack restore address
+and verifies that token when returning from the signal handler.
+
+Fork
+----
+
+The shadow stack's vma has the VM_SHSTK flag set; its PTEs must be
+read-only and dirty.  When a shadow stack PTE is not both RO and dirty,
+a shadow stack access triggers a page fault with the shadow-stack bit
+set in the page fault error code.
+
+When a task forks a child, its shadow stack PTEs are copied and both the
+parent's and the child's shadow stack PTEs are cleared of the dirty bit.
+Upon the next shadow stack access, the resulting shadow stack page fault
+is handled by page copy/re-use.
+
+When a pthread child is created, the kernel allocates a new shadow stack
+for the new thread.
-- 
2.21.0
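
For readers who want to poke at the interface from user space, a minimal
sketch of the arch_prctl() call described in the new document follows.  It
is not part of the patch: ARCH_X86_CET_STATUS is assumed to come from the
uapi <asm/prctl.h> added later in this series, and glibc provides no
wrapper, so the raw syscall is used.

    /*
     * Illustration only, not part of the patch: query CET status via
     * arch_prctl(ARCH_X86_CET_STATUS) as documented in intel_cet.rst.
     */
    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <asm/prctl.h>          /* ARCH_X86_CET_STATUS, from this series */

    int main(void)
    {
            uint64_t buf[3] = { 0 };

            /* glibc has no arch_prctl() wrapper; use the raw syscall. */
            if (syscall(SYS_arch_prctl, ARCH_X86_CET_STATUS, buf)) {
                    perror("arch_prctl(ARCH_X86_CET_STATUS)");
                    return 1;
            }

            printf("features %#llx, shstk base %#llx, shstk size %#llx\n",
                   (unsigned long long)buf[0],
                   (unsigned long long)buf[1],
                   (unsigned long long)buf[2]);
            return 0;
    }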



* [PATCH v15 02/26] x86/cpufeatures: Add CET CPU feature flags for Control-flow Enforcement Technology (CET)
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
  2020-11-10 16:21 ` [PATCH v15 01/26] Documentation/x86: Add CET description Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-10 16:21 ` [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states Yu-cheng Yu
                   ` (24 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

Add CPU feature flags for Control-flow Enforcement Technology (CET).

CPUID.(EAX=7,ECX=0):ECX[bit 7] Shadow stack
CPUID.(EAX=7,ECX=0):EDX[bit 20] Indirect Branch Tracking

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/cpufeatures.h | 2 ++
 arch/x86/kernel/cpu/cpuid-deps.c   | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index dad350d42ecf..c9f6d62da463 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -343,6 +343,7 @@
 #define X86_FEATURE_OSPKE		(16*32+ 4) /* OS Protection Keys Enable */
 #define X86_FEATURE_WAITPKG		(16*32+ 5) /* UMONITOR/UMWAIT/TPAUSE Instructions */
 #define X86_FEATURE_AVX512_VBMI2	(16*32+ 6) /* Additional AVX512 Vector Bit Manipulation Instructions */
+#define X86_FEATURE_SHSTK		(16*32+ 7) /* Shadow Stack */
 #define X86_FEATURE_GFNI		(16*32+ 8) /* Galois Field New Instructions */
 #define X86_FEATURE_VAES		(16*32+ 9) /* Vector AES */
 #define X86_FEATURE_VPCLMULQDQ		(16*32+10) /* Carry-Less Multiplication Double Quadword */
@@ -374,6 +375,7 @@
 #define X86_FEATURE_TSXLDTRK		(18*32+16) /* TSX Suspend Load Address Tracking */
 #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
 #define X86_FEATURE_ARCH_LBR		(18*32+19) /* Intel ARCH LBR */
+#define X86_FEATURE_IBT			(18*32+20) /* Indirect Branch Tracking */
 #define X86_FEATURE_SPEC_CTRL		(18*32+26) /* "" Speculation Control (IBRS + IBPB) */
 #define X86_FEATURE_INTEL_STIBP		(18*32+27) /* "" Single Thread Indirect Branch Predictors */
 #define X86_FEATURE_FLUSH_L1D		(18*32+28) /* Flush L1D cache */
diff --git a/arch/x86/kernel/cpu/cpuid-deps.c b/arch/x86/kernel/cpu/cpuid-deps.c
index d502241995a3..9a3971e2f98f 100644
--- a/arch/x86/kernel/cpu/cpuid-deps.c
+++ b/arch/x86/kernel/cpu/cpuid-deps.c
@@ -71,6 +71,8 @@ static const struct cpuid_dep cpuid_deps[] = {
 	{ X86_FEATURE_AVX512_BF16,		X86_FEATURE_AVX512VL  },
 	{ X86_FEATURE_ENQCMD,			X86_FEATURE_XSAVES    },
 	{ X86_FEATURE_PER_THREAD_MBA,		X86_FEATURE_MBA       },
+	{ X86_FEATURE_SHSTK,			X86_FEATURE_XSAVES    },
+	{ X86_FEATURE_IBT,			X86_FEATURE_XSAVES    },
 	{}
 };
 
-- 
2.21.0
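
As an illustration of the CPUID bits named in the commit message
(ECX[7] for shadow stack, EDX[20] for indirect branch tracking), the
following user-space sketch checks them with GCC's <cpuid.h>; it is not
part of the patch.

    /*
     * Illustration only: check CPUID.(EAX=7,ECX=0):ECX[7] (SHSTK) and
     * EDX[20] (IBT), the bits behind the feature flags added here.
     */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

            if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
                    return 1;

            printf("SHSTK: %s\n", (ecx & (1u << 7))  ? "yes" : "no");
            printf("IBT:   %s\n", (edx & (1u << 20)) ? "yes" : "no");
            return 0;
    }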



* [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
  2020-11-10 16:21 ` [PATCH v15 01/26] Documentation/x86: Add CET description Yu-cheng Yu
  2020-11-10 16:21 ` [PATCH v15 02/26] x86/cpufeatures: Add CET CPU feature flags for Control-flow Enforcement Technology (CET) Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-26 11:02   ` Borislav Petkov
  2020-11-30 17:45   ` [NEEDS-REVIEW] " Dave Hansen
  2020-11-10 16:21 ` [PATCH v15 04/26] x86/cet: Add control-protection fault handler Yu-cheng Yu
                   ` (23 subsequent siblings)
  26 siblings, 2 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

Control-flow Enforcement Technology (CET) adds five MSRs.  Introduce them
and their XSAVES supervisor states:

    MSR_IA32_U_CET (user-mode CET settings),
    MSR_IA32_PL3_SSP (user-mode Shadow Stack pointer),
    MSR_IA32_PL0_SSP (kernel-mode Shadow Stack pointer),
    MSR_IA32_PL1_SSP (Privilege Level 1 Shadow Stack pointer),
    MSR_IA32_PL2_SSP (Privilege Level 2 Shadow Stack pointer).

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/fpu/types.h            | 23 +++++++++++++++++--
 arch/x86/include/asm/fpu/xstate.h           |  6 +++--
 arch/x86/include/asm/msr-index.h            | 20 +++++++++++++++++
 arch/x86/include/uapi/asm/processor-flags.h |  2 ++
 arch/x86/kernel/fpu/xstate.c                | 25 ++++++++++++++++++---
 5 files changed, 69 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/fpu/types.h b/arch/x86/include/asm/fpu/types.h
index f5a38a5f3ae1..035eb0ec665e 100644
--- a/arch/x86/include/asm/fpu/types.h
+++ b/arch/x86/include/asm/fpu/types.h
@@ -115,8 +115,8 @@ enum xfeature {
 	XFEATURE_PT_UNIMPLEMENTED_SO_FAR,
 	XFEATURE_PKRU,
 	XFEATURE_PASID,
-	XFEATURE_RSRVD_COMP_11,
-	XFEATURE_RSRVD_COMP_12,
+	XFEATURE_CET_USER,
+	XFEATURE_CET_KERNEL,
 	XFEATURE_RSRVD_COMP_13,
 	XFEATURE_RSRVD_COMP_14,
 	XFEATURE_LBR,
@@ -135,6 +135,8 @@ enum xfeature {
 #define XFEATURE_MASK_PT		(1 << XFEATURE_PT_UNIMPLEMENTED_SO_FAR)
 #define XFEATURE_MASK_PKRU		(1 << XFEATURE_PKRU)
 #define XFEATURE_MASK_PASID		(1 << XFEATURE_PASID)
+#define XFEATURE_MASK_CET_USER		(1 << XFEATURE_CET_USER)
+#define XFEATURE_MASK_CET_KERNEL	(1 << XFEATURE_CET_KERNEL)
 #define XFEATURE_MASK_LBR		(1 << XFEATURE_LBR)
 
 #define XFEATURE_MASK_FPSSE		(XFEATURE_MASK_FP | XFEATURE_MASK_SSE)
@@ -237,6 +239,23 @@ struct pkru_state {
 	u32				pad;
 } __packed;
 
+/*
+ * State component 11 is Control-flow Enforcement user states
+ */
+struct cet_user_state {
+	u64 user_cet;			/* user control-flow settings */
+	u64 user_ssp;			/* user shadow stack pointer */
+};
+
+/*
+ * State component 12 is Control-flow Enforcement kernel states
+ */
+struct cet_kernel_state {
+	u64 kernel_ssp;			/* kernel shadow stack */
+	u64 pl1_ssp;			/* privilege level 1 shadow stack */
+	u64 pl2_ssp;			/* privilege level 2 shadow stack */
+};
+
 /*
  * State component 15: Architectural LBR configuration state.
  * The size of Arch LBR state depends on the number of LBRs (lbr_depth).
diff --git a/arch/x86/include/asm/fpu/xstate.h b/arch/x86/include/asm/fpu/xstate.h
index 47a92232d595..582f3575e0bd 100644
--- a/arch/x86/include/asm/fpu/xstate.h
+++ b/arch/x86/include/asm/fpu/xstate.h
@@ -35,7 +35,8 @@
 				      XFEATURE_MASK_BNDCSR)
 
 /* All currently supported supervisor features */
-#define XFEATURE_MASK_SUPERVISOR_SUPPORTED (XFEATURE_MASK_PASID)
+#define XFEATURE_MASK_SUPERVISOR_SUPPORTED (XFEATURE_MASK_PASID | \
+					    XFEATURE_MASK_CET_USER)
 
 /*
  * A supervisor state component may not always contain valuable information,
@@ -62,7 +63,8 @@
  * Unsupported supervisor features. When a supervisor feature in this mask is
  * supported in the future, move it to the supported supervisor feature mask.
  */
-#define XFEATURE_MASK_SUPERVISOR_UNSUPPORTED (XFEATURE_MASK_PT)
+#define XFEATURE_MASK_SUPERVISOR_UNSUPPORTED (XFEATURE_MASK_PT | \
+					      XFEATURE_MASK_CET_KERNEL)
 
 /* All supervisor states including supported and unsupported states. */
 #define XFEATURE_MASK_SUPERVISOR_ALL (XFEATURE_MASK_SUPERVISOR_SUPPORTED | \
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 972a34d93505..6f05ab2a1fa4 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -922,4 +922,24 @@
 #define MSR_VM_IGNNE                    0xc0010115
 #define MSR_VM_HSAVE_PA                 0xc0010117
 
+/* Control-flow Enforcement Technology MSRs */
+#define MSR_IA32_U_CET		0x6a0 /* user mode cet setting */
+#define MSR_IA32_S_CET		0x6a2 /* kernel mode cet setting */
+#define MSR_IA32_PL0_SSP	0x6a4 /* kernel shstk pointer */
+#define MSR_IA32_PL1_SSP	0x6a5 /* ring-1 shstk pointer */
+#define MSR_IA32_PL2_SSP	0x6a6 /* ring-2 shstk pointer */
+#define MSR_IA32_PL3_SSP	0x6a7 /* user shstk pointer */
+#define MSR_IA32_INT_SSP_TAB	0x6a8 /* exception shstk table */
+
+/* MSR_IA32_U_CET and MSR_IA32_S_CET bits */
+#define CET_SHSTK_EN		BIT_ULL(0)
+#define CET_WRSS_EN		BIT_ULL(1)
+#define CET_ENDBR_EN		BIT_ULL(2)
+#define CET_LEG_IW_EN		BIT_ULL(3)
+#define CET_NO_TRACK_EN		BIT_ULL(4)
+#define CET_SUPPRESS_DISABLE	BIT_ULL(5)
+#define CET_RESERVED		(BIT_ULL(6) | BIT_ULL(7) | BIT_ULL(8) | BIT_ULL(9))
+#define CET_SUPPRESS		BIT_ULL(10)
+#define CET_WAIT_ENDBR		BIT_ULL(11)
+
 #endif /* _ASM_X86_MSR_INDEX_H */
diff --git a/arch/x86/include/uapi/asm/processor-flags.h b/arch/x86/include/uapi/asm/processor-flags.h
index bcba3c643e63..a8df907e8017 100644
--- a/arch/x86/include/uapi/asm/processor-flags.h
+++ b/arch/x86/include/uapi/asm/processor-flags.h
@@ -130,6 +130,8 @@
 #define X86_CR4_SMAP		_BITUL(X86_CR4_SMAP_BIT)
 #define X86_CR4_PKE_BIT		22 /* enable Protection Keys support */
 #define X86_CR4_PKE		_BITUL(X86_CR4_PKE_BIT)
+#define X86_CR4_CET_BIT		23 /* enable Control-flow Enforcement */
+#define X86_CR4_CET		_BITUL(X86_CR4_CET_BIT)
 
 /*
  * x86-64 Task Priority Register, CR8
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index 5d8047441a0a..9a4307227c1f 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -38,6 +38,8 @@ static const char *xfeature_names[] =
 	"Processor Trace (unused)"	,
 	"Protection Keys User registers",
 	"PASID state",
+	"Control-flow User registers"	,
+	"Control-flow Kernel registers"	,
 	"unknown xstate feature"	,
 };
 
@@ -53,6 +55,8 @@ static short xsave_cpuid_features[] __initdata = {
 	X86_FEATURE_INTEL_PT,
 	X86_FEATURE_PKU,
 	X86_FEATURE_ENQCMD,
+	X86_FEATURE_SHSTK, /* XFEATURE_CET_USER */
+	X86_FEATURE_SHSTK, /* XFEATURE_CET_KERNEL */
 };
 
 /*
@@ -321,6 +325,8 @@ static void __init print_xstate_features(void)
 	print_xstate_feature(XFEATURE_MASK_Hi16_ZMM);
 	print_xstate_feature(XFEATURE_MASK_PKRU);
 	print_xstate_feature(XFEATURE_MASK_PASID);
+	print_xstate_feature(XFEATURE_MASK_CET_USER);
+	print_xstate_feature(XFEATURE_MASK_CET_KERNEL);
 }
 
 /*
@@ -596,6 +602,8 @@ static void check_xstate_against_struct(int nr)
 	XCHECK_SZ(sz, nr, XFEATURE_Hi16_ZMM,  struct avx_512_hi16_state);
 	XCHECK_SZ(sz, nr, XFEATURE_PKRU,      struct pkru_state);
 	XCHECK_SZ(sz, nr, XFEATURE_PASID,     struct ia32_pasid_state);
+	XCHECK_SZ(sz, nr, XFEATURE_CET_USER,   struct cet_user_state);
+	XCHECK_SZ(sz, nr, XFEATURE_CET_KERNEL, struct cet_kernel_state);
 
 	/*
 	 * Make *SURE* to add any feature numbers in below if
@@ -605,7 +613,7 @@ static void check_xstate_against_struct(int nr)
 	if ((nr < XFEATURE_YMM) ||
 	    (nr >= XFEATURE_MAX) ||
 	    (nr == XFEATURE_PT_UNIMPLEMENTED_SO_FAR) ||
-	    ((nr >= XFEATURE_RSRVD_COMP_11) && (nr <= XFEATURE_LBR))) {
+	    ((nr >= XFEATURE_RSRVD_COMP_13) && (nr <= XFEATURE_LBR))) {
 		WARN_ONCE(1, "no structure for xstate: %d\n", nr);
 		XSTATE_WARN_ON(1);
 	}
@@ -835,8 +843,19 @@ void __init fpu__init_system_xstate(void)
 	 * Clear XSAVE features that are disabled in the normal CPUID.
 	 */
 	for (i = 0; i < ARRAY_SIZE(xsave_cpuid_features); i++) {
-		if (!boot_cpu_has(xsave_cpuid_features[i]))
-			xfeatures_mask_all &= ~BIT_ULL(i);
+		if (xsave_cpuid_features[i] == X86_FEATURE_SHSTK) {
+			/*
+			 * X86_FEATURE_SHSTK and X86_FEATURE_IBT share
+			 * same states, but can be enabled separately.
+			 */
+			if (!boot_cpu_has(X86_FEATURE_SHSTK) &&
+			    !boot_cpu_has(X86_FEATURE_IBT))
+				xfeatures_mask_all &= ~BIT_ULL(i);
+		} else {
+			if ((xsave_cpuid_features[i] == -1) ||
+			    !boot_cpu_has(xsave_cpuid_features[i]))
+				xfeatures_mask_all &= ~BIT_ULL(i);
+		}
 	}
 
 	xfeatures_mask_all &= fpu__get_supported_xfeatures_mask();
-- 
2.21.0
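
The MSRs and bit definitions introduced above are consumed by later patches
in the series.  As a rough sketch of how they fit together (assumptions:
kernel context, SHSTK enumerated; the real enabling code arrives later in
arch/x86/kernel/cet.c and works on the task's XSAVES buffer rather than the
live MSRs):

    /* Rough sketch, not part of the patch: enable a user shadow stack. */
    static void sketch_enable_user_shstk(unsigned long ssp)
    {
            u64 msrval;

            rdmsrl(MSR_IA32_U_CET, msrval);
            wrmsrl(MSR_IA32_PL3_SSP, ssp);                  /* user shadow stack pointer */
            wrmsrl(MSR_IA32_U_CET, msrval | CET_SHSTK_EN);  /* bit 0: enable SHSTK */
    }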



* [PATCH v15 04/26] x86/cet: Add control-protection fault handler
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (2 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-26 18:49   ` Borislav Petkov
  2020-11-10 16:21 ` [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack Yu-cheng Yu
                   ` (22 subsequent siblings)
  26 siblings, 1 reply; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

A control-protection fault is triggered when a control-flow transfer
attempt violates Shadow Stack or Indirect Branch Tracking constraints.
For example, the return address for a RET instruction differs from the copy
on the Shadow Stack; or an indirect JMP instruction, without the NOTRACK
prefix, arrives at a non-ENDBR opcode.

The control-protection fault handler works in a similar way to the general
protection fault handler.  It provides the si_code SEGV_CPERR to the signal
handler.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/idtentry.h    |  4 ++
 arch/x86/kernel/idt.c              |  4 ++
 arch/x86/kernel/signal_compat.c    |  2 +-
 arch/x86/kernel/traps.c            | 59 ++++++++++++++++++++++++++++++
 include/uapi/asm-generic/siginfo.h |  3 +-
 5 files changed, 70 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index b2442eb0ac2f..f519b8ce0273 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -577,6 +577,10 @@ DECLARE_IDTENTRY_ERRORCODE(X86_TRAP_SS,	exc_stack_segment);
 DECLARE_IDTENTRY_ERRORCODE(X86_TRAP_GP,	exc_general_protection);
 DECLARE_IDTENTRY_ERRORCODE(X86_TRAP_AC,	exc_alignment_check);
 
+#ifdef CONFIG_X86_CET
+DECLARE_IDTENTRY_ERRORCODE(X86_TRAP_CP, exc_control_protection);
+#endif
+
 /* Raw exception entries which need extra work */
 DECLARE_IDTENTRY_RAW(X86_TRAP_UD,		exc_invalid_op);
 DECLARE_IDTENTRY_RAW(X86_TRAP_BP,		exc_int3);
diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
index ee1a283f8e96..e8166d9bbb10 100644
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -105,6 +105,10 @@ static const __initconst struct idt_data def_idts[] = {
 #elif defined(CONFIG_X86_32)
 	SYSG(IA32_SYSCALL_VECTOR,	entry_INT80_32),
 #endif
+
+#ifdef CONFIG_X86_CET
+	INTG(X86_TRAP_CP,		asm_exc_control_protection),
+#endif
 };
 
 /*
diff --git a/arch/x86/kernel/signal_compat.c b/arch/x86/kernel/signal_compat.c
index a7f3e12cfbdb..c44d4bebea07 100644
--- a/arch/x86/kernel/signal_compat.c
+++ b/arch/x86/kernel/signal_compat.c
@@ -27,7 +27,7 @@ static inline void signal_compat_build_tests(void)
 	 */
 	BUILD_BUG_ON(NSIGILL  != 11);
 	BUILD_BUG_ON(NSIGFPE  != 15);
-	BUILD_BUG_ON(NSIGSEGV != 9);
+	BUILD_BUG_ON(NSIGSEGV != 10);
 	BUILD_BUG_ON(NSIGBUS  != 5);
 	BUILD_BUG_ON(NSIGTRAP != 5);
 	BUILD_BUG_ON(NSIGCHLD != 6);
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index e19df6cde35d..6c21c1e92605 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -598,6 +598,65 @@ DEFINE_IDTENTRY_ERRORCODE(exc_general_protection)
 	cond_local_irq_disable(regs);
 }
 
+#ifdef CONFIG_X86_CET
+static const char * const control_protection_err[] = {
+	"unknown",
+	"near-ret",
+	"far-ret/iret",
+	"endbranch",
+	"rstorssp",
+	"setssbsy",
+};
+
+/*
+ * When a control protection exception occurs, send a signal
+ * to the responsible application.  Currently, control
+ * protection is only enabled for the user mode.  This
+ * exception should not come from the kernel mode.
+ */
+DEFINE_IDTENTRY_ERRORCODE(exc_control_protection)
+{
+	struct task_struct *tsk;
+
+	if (notify_die(DIE_TRAP, "control protection fault", regs,
+		       error_code, X86_TRAP_CP, SIGSEGV) == NOTIFY_STOP)
+		return;
+	cond_local_irq_enable(regs);
+
+	if (!user_mode(regs))
+		die("kernel control protection fault", regs, error_code);
+
+	if (!static_cpu_has(X86_FEATURE_SHSTK) &&
+	    !static_cpu_has(X86_FEATURE_IBT))
+		WARN_ONCE(1, "CET is disabled but got control protection fault\n");
+
+	tsk = current;
+	tsk->thread.error_code = error_code;
+	tsk->thread.trap_nr = X86_TRAP_CP;
+
+	if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) &&
+	    printk_ratelimit()) {
+		unsigned int max_err;
+		unsigned long ssp;
+
+		max_err = ARRAY_SIZE(control_protection_err) - 1;
+		if ((error_code < 0) || (error_code > max_err))
+			error_code = 0;
+		rdmsrl(MSR_IA32_PL3_SSP, ssp);
+		pr_info("%s[%d] control protection ip:%lx sp:%lx ssp:%lx error:%lx(%s)",
+			tsk->comm, task_pid_nr(tsk),
+			regs->ip, regs->sp, ssp, error_code,
+			control_protection_err[error_code]);
+		print_vma_addr(KERN_CONT " in ", regs->ip);
+		pr_cont("\n");
+	}
+
+	force_sig_fault(SIGSEGV, SEGV_CPERR,
+			(void __user *)uprobe_get_trap_addr(regs));
+	cond_local_irq_disable(regs);
+}
+#endif
+
 static bool do_int3(struct pt_regs *regs)
 {
 	int res;
diff --git a/include/uapi/asm-generic/siginfo.h b/include/uapi/asm-generic/siginfo.h
index 7aacf9389010..96b9647d14ae 100644
--- a/include/uapi/asm-generic/siginfo.h
+++ b/include/uapi/asm-generic/siginfo.h
@@ -231,7 +231,8 @@ typedef struct siginfo {
 #define SEGV_ADIPERR	7	/* Precise MCD exception */
 #define SEGV_MTEAERR	8	/* Asynchronous ARM MTE error */
 #define SEGV_MTESERR	9	/* Synchronous ARM MTE exception */
-#define NSIGSEGV	9
+#define SEGV_CPERR	10	/* Control protection fault */
+#define NSIGSEGV	10
 
 /*
  * SIGBUS si_codes
-- 
2.21.0
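
From user space, the new si_code can be observed with an ordinary
SA_SIGINFO handler.  A minimal sketch, not part of the patch; SEGV_CPERR
is defined locally in case installed headers predate this change:

    /*
     * Illustration only: report control-protection faults via the
     * SEGV_CPERR si_code added by this patch.
     */
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    #ifndef SEGV_CPERR
    #define SEGV_CPERR 10   /* matches this patch's siginfo.h change */
    #endif

    static void handler(int sig, siginfo_t *info, void *uc)
    {
            (void)uc;
            if (info->si_code == SEGV_CPERR)
                    write(2, "control-protection fault\n", 25);
            _exit(128 + sig);
    }

    int main(void)
    {
            struct sigaction sa;

            memset(&sa, 0, sizeof(sa));
            sa.sa_sigaction = handler;
            sa.sa_flags = SA_SIGINFO;
            sigaction(SIGSEGV, &sa, NULL);

            /* ... run code that may raise #CP ... */
            return 0;
    }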



* [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (3 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 04/26] x86/cet: Add control-protection fault handler Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-27 17:10   ` Borislav Petkov
  2020-11-30 19:56   ` Nick Desaulniers
  2020-11-10 16:21 ` [PATCH v15 06/26] x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW Yu-cheng Yu
                   ` (21 subsequent siblings)
  26 siblings, 2 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

Shadow Stack provides protection against function return address
corruption.  It is active when the processor supports it, the kernel is
built with CONFIG_X86_SHADOW_STACK_USER, and the application is built for
the feature.
This is only implemented for the 64-bit kernel.  When it is enabled, legacy
non-shadow stack applications continue to work, but without protection.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/Kconfig                      | 33 +++++++++++++++++++++++++++
 scripts/as-x86_64-has-shadow-stack.sh |  4 ++++
 2 files changed, 37 insertions(+)
 create mode 100755 scripts/as-x86_64-has-shadow-stack.sh

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index f6946b81f74a..a51d2a3de166 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1930,6 +1930,39 @@ config X86_INTEL_TSX_MODE_AUTO
 	  side channel attacks- equals the tsx=auto command line parameter.
 endchoice
 
+config AS_HAS_SHADOW_STACK
+	def_bool $(success,$(srctree)/scripts/as-x86_64-has-shadow-stack.sh $(CC))
+	help
+	  Test the assembler for shadow stack instructions.
+
+config X86_CET
+	def_bool n
+
+config ARCH_HAS_SHADOW_STACK
+	def_bool n
+
+config X86_SHADOW_STACK_USER
+	prompt "Intel Shadow Stacks for user-mode"
+	def_bool n
+	depends on CPU_SUP_INTEL && X86_64
+	depends on AS_HAS_SHADOW_STACK
+	select ARCH_USES_HIGH_VMA_FLAGS
+	select X86_CET
+	select ARCH_HAS_SHADOW_STACK
+	help
+	  Shadow Stack provides protection against program stack
+	  corruption.  It is a hardware feature, so it only matters if
+	  you have the right hardware.  It is a security hardening
+	  feature and applications must be built for it.  You get no
+	  protection "for free" on old userspace.  The hardware can
+	  support user and kernel, but this option is for user space
+	  only.
+	  Support for this feature is only known to be present on
+	  processors released in 2020 or later.  CET features are also
+	  known to increase kernel text size by 3.7 KB.
+
+	  If unsure, say N.
+
 config EFI
 	bool "EFI runtime service support"
 	depends on ACPI
diff --git a/scripts/as-x86_64-has-shadow-stack.sh b/scripts/as-x86_64-has-shadow-stack.sh
new file mode 100755
index 000000000000..fac1d363a1b8
--- /dev/null
+++ b/scripts/as-x86_64-has-shadow-stack.sh
@@ -0,0 +1,4 @@
+#!/bin/sh
+# SPDX-License-Identifier: GPL-2.0
+
+echo "wrussq %rax, (%rbx)" | $* -x assembler -c -
-- 
2.21.0



* [PATCH v15 06/26] x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (4 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-12-03  9:19   ` Borislav Petkov
  2020-11-10 16:21 ` [PATCH v15 07/26] x86/mm: Remove _PAGE_DIRTY_HW from kernel RO pages Yu-cheng Yu
                   ` (20 subsequent siblings)
  26 siblings, 1 reply; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu, Dave Hansen

Before introducing _PAGE_COW for non-hardware memory management purposes in
the next patch, rename _PAGE_DIRTY to _PAGE_DIRTY_HW and _PAGE_BIT_DIRTY to
_PAGE_BIT_DIRTY_HW to make the meaning clearer.  There are no functional
changes in this patch.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
---
 arch/x86/include/asm/pgtable.h       | 18 +++++++++---------
 arch/x86/include/asm/pgtable_types.h | 10 +++++-----
 arch/x86/kernel/relocate_kernel_64.S |  2 +-
 arch/x86/kvm/vmx/vmx.c               |  2 +-
 4 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index a02c67291cfc..b23697658b28 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -123,7 +123,7 @@ extern pmdval_t early_pmd_flags;
  */
 static inline int pte_dirty(pte_t pte)
 {
-	return pte_flags(pte) & _PAGE_DIRTY;
+	return pte_flags(pte) & _PAGE_DIRTY_HW;
 }
 
 
@@ -162,7 +162,7 @@ static inline int pte_young(pte_t pte)
 
 static inline int pmd_dirty(pmd_t pmd)
 {
-	return pmd_flags(pmd) & _PAGE_DIRTY;
+	return pmd_flags(pmd) & _PAGE_DIRTY_HW;
 }
 
 static inline int pmd_young(pmd_t pmd)
@@ -172,7 +172,7 @@ static inline int pmd_young(pmd_t pmd)
 
 static inline int pud_dirty(pud_t pud)
 {
-	return pud_flags(pud) & _PAGE_DIRTY;
+	return pud_flags(pud) & _PAGE_DIRTY_HW;
 }
 
 static inline int pud_young(pud_t pud)
@@ -333,7 +333,7 @@ static inline pte_t pte_clear_uffd_wp(pte_t pte)
 
 static inline pte_t pte_mkclean(pte_t pte)
 {
-	return pte_clear_flags(pte, _PAGE_DIRTY);
+	return pte_clear_flags(pte, _PAGE_DIRTY_HW);
 }
 
 static inline pte_t pte_mkold(pte_t pte)
@@ -353,7 +353,7 @@ static inline pte_t pte_mkexec(pte_t pte)
 
 static inline pte_t pte_mkdirty(pte_t pte)
 {
-	return pte_set_flags(pte, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+	return pte_set_flags(pte, _PAGE_DIRTY_HW | _PAGE_SOFT_DIRTY);
 }
 
 static inline pte_t pte_mkyoung(pte_t pte)
@@ -434,7 +434,7 @@ static inline pmd_t pmd_mkold(pmd_t pmd)
 
 static inline pmd_t pmd_mkclean(pmd_t pmd)
 {
-	return pmd_clear_flags(pmd, _PAGE_DIRTY);
+	return pmd_clear_flags(pmd, _PAGE_DIRTY_HW);
 }
 
 static inline pmd_t pmd_wrprotect(pmd_t pmd)
@@ -444,7 +444,7 @@ static inline pmd_t pmd_wrprotect(pmd_t pmd)
 
 static inline pmd_t pmd_mkdirty(pmd_t pmd)
 {
-	return pmd_set_flags(pmd, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+	return pmd_set_flags(pmd, _PAGE_DIRTY_HW | _PAGE_SOFT_DIRTY);
 }
 
 static inline pmd_t pmd_mkdevmap(pmd_t pmd)
@@ -488,7 +488,7 @@ static inline pud_t pud_mkold(pud_t pud)
 
 static inline pud_t pud_mkclean(pud_t pud)
 {
-	return pud_clear_flags(pud, _PAGE_DIRTY);
+	return pud_clear_flags(pud, _PAGE_DIRTY_HW);
 }
 
 static inline pud_t pud_wrprotect(pud_t pud)
@@ -498,7 +498,7 @@ static inline pud_t pud_wrprotect(pud_t pud)
 
 static inline pud_t pud_mkdirty(pud_t pud)
 {
-	return pud_set_flags(pud, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+	return pud_set_flags(pud, _PAGE_DIRTY_HW | _PAGE_SOFT_DIRTY);
 }
 
 static inline pud_t pud_mkdevmap(pud_t pud)
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 816b31c68550..810eb1567050 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -15,7 +15,7 @@
 #define _PAGE_BIT_PWT		3	/* page write through */
 #define _PAGE_BIT_PCD		4	/* page cache disabled */
 #define _PAGE_BIT_ACCESSED	5	/* was accessed (raised by CPU) */
-#define _PAGE_BIT_DIRTY		6	/* was written to (raised by CPU) */
+#define _PAGE_BIT_DIRTY_HW	6	/* was written to (raised by CPU) */
 #define _PAGE_BIT_PSE		7	/* 4 MB (or 2MB) page */
 #define _PAGE_BIT_PAT		7	/* on 4KB pages */
 #define _PAGE_BIT_GLOBAL	8	/* Global TLB entry PPro+ */
@@ -46,7 +46,7 @@
 #define _PAGE_PWT	(_AT(pteval_t, 1) << _PAGE_BIT_PWT)
 #define _PAGE_PCD	(_AT(pteval_t, 1) << _PAGE_BIT_PCD)
 #define _PAGE_ACCESSED	(_AT(pteval_t, 1) << _PAGE_BIT_ACCESSED)
-#define _PAGE_DIRTY	(_AT(pteval_t, 1) << _PAGE_BIT_DIRTY)
+#define _PAGE_DIRTY_HW	(_AT(pteval_t, 1) << _PAGE_BIT_DIRTY_HW)
 #define _PAGE_PSE	(_AT(pteval_t, 1) << _PAGE_BIT_PSE)
 #define _PAGE_GLOBAL	(_AT(pteval_t, 1) << _PAGE_BIT_GLOBAL)
 #define _PAGE_SOFTW1	(_AT(pteval_t, 1) << _PAGE_BIT_SOFTW1)
@@ -74,7 +74,7 @@
 			 _PAGE_PKEY_BIT3)
 
 #if defined(CONFIG_X86_64) || defined(CONFIG_X86_PAE)
-#define _PAGE_KNL_ERRATUM_MASK (_PAGE_DIRTY | _PAGE_ACCESSED)
+#define _PAGE_KNL_ERRATUM_MASK (_PAGE_DIRTY_HW | _PAGE_ACCESSED)
 #else
 #define _PAGE_KNL_ERRATUM_MASK 0
 #endif
@@ -126,7 +126,7 @@
  * pte_modify() does modify it.
  */
 #define _PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |		\
-			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |	\
+			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY_HW |	\
 			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC |  \
 			 _PAGE_UFFD_WP)
 #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
@@ -163,7 +163,7 @@ enum page_cache_mode {
 #define __RW _PAGE_RW
 #define _USR _PAGE_USER
 #define ___A _PAGE_ACCESSED
-#define ___D _PAGE_DIRTY
+#define ___D _PAGE_DIRTY_HW
 #define ___G _PAGE_GLOBAL
 #define __NX _PAGE_NX
 
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index a4d9a261425b..e3bb4ff95523 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -17,7 +17,7 @@
  */
 
 #define PTR(x) (x << 3)
-#define PAGE_ATTR (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
+#define PAGE_ATTR (_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY_HW)
 
 /*
  * control_page + KEXEC_CONTROL_CODE_MAX_SIZE
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 47b8357b9751..3962284843ee 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3574,7 +3574,7 @@ static int init_rmode_identity_map(struct kvm *kvm)
 	/* Set up identity-mapping pagetable for EPT in real mode */
 	for (i = 0; i < PT32_ENT_PER_PAGE; i++) {
 		tmp = (i << 22) + (_PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
-			_PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
+			_PAGE_ACCESSED | _PAGE_DIRTY_HW | _PAGE_PSE);
 		r = kvm_write_guest_page(kvm, identity_map_pfn,
 				&tmp, i * sizeof(tmp), sizeof(tmp));
 		if (r < 0)
-- 
2.21.0



* [PATCH v15 07/26] x86/mm: Remove _PAGE_DIRTY_HW from kernel RO pages
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (5 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 06/26] x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-12-07 16:36   ` Borislav Petkov
  2020-11-10 16:21 ` [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW Yu-cheng Yu
                   ` (19 subsequent siblings)
  26 siblings, 1 reply; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu, Christoph Hellwig

Kernel read-only PTEs are set up with _PAGE_DIRTY_HW.  Since a Write=0,
Dirty=1 PTE would look like a shadow stack PTE, remove the dirty bit.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 arch/x86/include/asm/pgtable_types.h | 6 +++---
 arch/x86/mm/pat/set_memory.c         | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 810eb1567050..7462a574fc93 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -193,10 +193,10 @@ enum page_cache_mode {
 #define _KERNPG_TABLE		 (__PP|__RW|   0|___A|   0|___D|   0|   0| _ENC)
 #define _PAGE_TABLE_NOENC	 (__PP|__RW|_USR|___A|   0|___D|   0|   0)
 #define _PAGE_TABLE		 (__PP|__RW|_USR|___A|   0|___D|   0|   0| _ENC)
-#define __PAGE_KERNEL_RO	 (__PP|   0|   0|___A|__NX|___D|   0|___G)
-#define __PAGE_KERNEL_ROX	 (__PP|   0|   0|___A|   0|___D|   0|___G)
+#define __PAGE_KERNEL_RO	 (__PP|   0|   0|___A|__NX|   0|   0|___G)
+#define __PAGE_KERNEL_ROX	 (__PP|   0|   0|___A|   0|   0|   0|___G)
 #define __PAGE_KERNEL_NOCACHE	 (__PP|__RW|   0|___A|__NX|___D|   0|___G| __NC)
-#define __PAGE_KERNEL_VVAR	 (__PP|   0|_USR|___A|__NX|___D|   0|___G)
+#define __PAGE_KERNEL_VVAR	 (__PP|   0|_USR|___A|__NX|   0|   0|___G)
 #define __PAGE_KERNEL_LARGE	 (__PP|__RW|   0|___A|__NX|___D|_PSE|___G)
 #define __PAGE_KERNEL_LARGE_EXEC (__PP|__RW|   0|___A|   0|___D|_PSE|___G)
 #define __PAGE_KERNEL_WP	 (__PP|__RW|   0|___A|__NX|___D|   0|___G| __WP)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 40baa90e74f4..207bbf796f5f 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1932,7 +1932,7 @@ int set_memory_nx(unsigned long addr, int numpages)
 
 int set_memory_ro(unsigned long addr, int numpages)
 {
-	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW), 0);
+	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW | _PAGE_DIRTY_HW), 0);
 }
 
 int set_memory_rw(unsigned long addr, int numpages)
-- 
2.21.0



* [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (6 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 07/26] x86/mm: Remove _PAGE_DIRTY_HW from kernel RO pages Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-12-08 17:50   ` Borislav Petkov
  2020-11-10 16:21 ` [PATCH v15 09/26] drm/i915/gvt: Change _PAGE_DIRTY to _PAGE_DIRTY_BITS Yu-cheng Yu
                   ` (18 subsequent siblings)
  26 siblings, 1 reply; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

There is essentially no room left in the x86 hardware PTEs on some OSes
(not Linux).  That left the hardware architects looking for a way to
represent a new memory type (shadow stack) within the existing bits.
They chose to repurpose a lightly-used state: Write=0,Dirty=1.

The reason it's lightly used is that Dirty=1 is normally set by hardware
and cannot normally be set by hardware on a Write=0 PTE.  Software must
normally be involved to create one of these PTEs, so software can simply
opt to not create them.

But that leaves us with a Linux problem: we need to ensure we never create
Write=0,Dirty=1 PTEs.  In places where we do create them, we need to find
an alternative way to represent them _without_ using the same hardware bit
combination.  Thus, enter _PAGE_COW.  This results in the following:

(a) A modified, copy-on-write (COW) page: (R/O + _PAGE_COW)
(b) A R/O page that has been COW'ed: (R/O + _PAGE_COW)
    The user page is in a R/O VMA, and get_user_pages() needs a writable
    copy.  The page fault handler creates a copy of the page and sets
    the new copy's PTE as R/O and _PAGE_COW.
(c) A shadow stack PTE: (R/O + _PAGE_DIRTY_HW)
(d) A shared shadow stack PTE: (R/O + _PAGE_COW)
    When a shadow stack page is being shared among processes (this happens
    at fork()), its PTE is cleared of _PAGE_DIRTY_HW, so the next shadow
    stack access causes a fault, and the page is duplicated and
    _PAGE_DIRTY_HW is set again.  This is the COW equivalent for shadow
    stack pages, even though it's copy-on-access rather than copy-on-write.
(e) A page where the processor observed a Write=1 PTE, started a write, set
    Dirty=1, but then observed a Write=0 PTE.  That's possible today, but
    will not happen on processors that support shadow stack.

Use _PAGE_COW in pte_wrprotect() and _PAGE_DIRTY_HW in pte_mkwrite().
Apply the same changes to pmd and pud.

When this patch is applied, there are six free bits left in the 64-bit PTE.
There are no more free bits in the 32-bit PTE (except for PAE) and shadow
stack is not implemented for the 32-bit kernel.
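
To keep the cases straight, here is a small decoding sketch (illustration
only, using the names this patch introduces; not part of the change
itself):

    /* Illustration: decode a PTE's Write/Dirty/COW bits into the cases above. */
    static const char *pte_kind(pteval_t flags)
    {
            bool w = flags & _PAGE_RW;
            bool d = flags & _PAGE_DIRTY_HW;
            bool c = flags & _PAGE_COW;

            if (!w && d)
                    return "shadow stack: case (c)";
            if (!w && c)
                    return "COW or shared shadow stack: cases (a), (b), (d)";
            if (w && d)
                    return "ordinary writable, dirty page";
            return "ordinary page";
    }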

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/include/asm/pgtable.h       | 120 ++++++++++++++++++++++++---
 arch/x86/include/asm/pgtable_types.h |  41 ++++++++-
 2 files changed, 150 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index b23697658b28..c88c7ccf0318 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -121,9 +121,9 @@ extern pmdval_t early_pmd_flags;
  * The following only work if pte_present() is true.
  * Undefined behaviour if not..
  */
-static inline int pte_dirty(pte_t pte)
+static inline bool pte_dirty(pte_t pte)
 {
-	return pte_flags(pte) & _PAGE_DIRTY_HW;
+	return pte_flags(pte) & _PAGE_DIRTY_BITS;
 }
 
 
@@ -160,9 +160,9 @@ static inline int pte_young(pte_t pte)
 	return pte_flags(pte) & _PAGE_ACCESSED;
 }
 
-static inline int pmd_dirty(pmd_t pmd)
+static inline bool pmd_dirty(pmd_t pmd)
 {
-	return pmd_flags(pmd) & _PAGE_DIRTY_HW;
+	return pmd_flags(pmd) & _PAGE_DIRTY_BITS;
 }
 
 static inline int pmd_young(pmd_t pmd)
@@ -170,9 +170,9 @@ static inline int pmd_young(pmd_t pmd)
 	return pmd_flags(pmd) & _PAGE_ACCESSED;
 }
 
-static inline int pud_dirty(pud_t pud)
+static inline bool pud_dirty(pud_t pud)
 {
-	return pud_flags(pud) & _PAGE_DIRTY_HW;
+	return pud_flags(pud) & _PAGE_DIRTY_BITS;
 }
 
 static inline int pud_young(pud_t pud)
@@ -182,6 +182,12 @@ static inline int pud_young(pud_t pud)
 
 static inline int pte_write(pte_t pte)
 {
+	/*
+	 * If _PAGE_DIRTY_HW is set, the PTE must either have
+	 * _PAGE_RW or be a shadow stack PTE, which is logically writable.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK))
+		return pte_flags(pte) & (_PAGE_RW | _PAGE_DIRTY_HW);
 	return pte_flags(pte) & _PAGE_RW;
 }
 
@@ -333,7 +339,7 @@ static inline pte_t pte_clear_uffd_wp(pte_t pte)
 
 static inline pte_t pte_mkclean(pte_t pte)
 {
-	return pte_clear_flags(pte, _PAGE_DIRTY_HW);
+	return pte_clear_flags(pte, _PAGE_DIRTY_BITS);
 }
 
 static inline pte_t pte_mkold(pte_t pte)
@@ -343,6 +349,17 @@ static inline pte_t pte_mkold(pte_t pte)
 
 static inline pte_t pte_wrprotect(pte_t pte)
 {
+	/*
+	 * Blindly clearing _PAGE_RW might accidentally create
+	 * a shadow stack PTE (RW=0,Dirty=1).  Move the hardware
+	 * dirty value to the software bit.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		pte.pte |= (pte.pte & _PAGE_DIRTY_HW) >>
+			   _PAGE_BIT_DIRTY_HW << _PAGE_BIT_COW;
+		pte = pte_clear_flags(pte, _PAGE_DIRTY_HW);
+	}
+
 	return pte_clear_flags(pte, _PAGE_RW);
 }
 
@@ -353,6 +370,18 @@ static inline pte_t pte_mkexec(pte_t pte)
 
 static inline pte_t pte_mkdirty(pte_t pte)
 {
+	pteval_t dirty = _PAGE_DIRTY_HW;
+
+	/* Avoid creating (HW)Dirty=1,Write=0 PTEs */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK) && !pte_write(pte))
+		dirty = _PAGE_COW;
+
+	return pte_set_flags(pte, dirty | _PAGE_SOFT_DIRTY);
+}
+
+static inline pte_t pte_mkwrite_shstk(pte_t pte)
+{
+	pte = pte_clear_flags(pte, _PAGE_COW);
 	return pte_set_flags(pte, _PAGE_DIRTY_HW | _PAGE_SOFT_DIRTY);
 }
 
@@ -363,6 +392,13 @@ static inline pte_t pte_mkyoung(pte_t pte)
 
 static inline pte_t pte_mkwrite(pte_t pte)
 {
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		if (pte_flags(pte) & _PAGE_COW) {
+			pte = pte_clear_flags(pte, _PAGE_COW);
+			pte = pte_set_flags(pte, _PAGE_DIRTY_HW);
+		}
+	}
+
 	return pte_set_flags(pte, _PAGE_RW);
 }
 
@@ -434,16 +470,41 @@ static inline pmd_t pmd_mkold(pmd_t pmd)
 
 static inline pmd_t pmd_mkclean(pmd_t pmd)
 {
-	return pmd_clear_flags(pmd, _PAGE_DIRTY_HW);
+	return pmd_clear_flags(pmd, _PAGE_DIRTY_BITS);
 }
 
 static inline pmd_t pmd_wrprotect(pmd_t pmd)
 {
+	/*
+	 * Blindly clearing _PAGE_RW might accidentally create
+	 * a shadow stack PMD (RW=0,Dirty=1).  Move the hardware
+	 * dirty value to the software bit.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		pmdval_t v = native_pmd_val(pmd);
+
+		v |= (v & _PAGE_DIRTY_HW) >> _PAGE_BIT_DIRTY_HW <<
+		     _PAGE_BIT_COW;
+		pmd = pmd_clear_flags(__pmd(v), _PAGE_DIRTY_HW);
+	}
+
 	return pmd_clear_flags(pmd, _PAGE_RW);
 }
 
 static inline pmd_t pmd_mkdirty(pmd_t pmd)
 {
+	pmdval_t dirty = _PAGE_DIRTY_HW;
+
+	/* Avoid creating (HW)Dirty=1,Write=0 PMDs */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK) && !(pmd_flags(pmd) & _PAGE_RW))
+		dirty = _PAGE_COW;
+
+	return pmd_set_flags(pmd, dirty | _PAGE_SOFT_DIRTY);
+}
+
+static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd)
+{
+	pmd = pmd_clear_flags(pmd, _PAGE_COW);
 	return pmd_set_flags(pmd, _PAGE_DIRTY_HW | _PAGE_SOFT_DIRTY);
 }
 
@@ -464,6 +525,13 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd)
 
 static inline pmd_t pmd_mkwrite(pmd_t pmd)
 {
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		if (pmd_flags(pmd) & _PAGE_COW) {
+			pmd = pmd_clear_flags(pmd, _PAGE_COW);
+			pmd = pmd_set_flags(pmd, _PAGE_DIRTY_HW);
+		}
+	}
+
 	return pmd_set_flags(pmd, _PAGE_RW);
 }
 
@@ -488,17 +556,36 @@ static inline pud_t pud_mkold(pud_t pud)
 
 static inline pud_t pud_mkclean(pud_t pud)
 {
-	return pud_clear_flags(pud, _PAGE_DIRTY_HW);
+	return pud_clear_flags(pud, _PAGE_DIRTY_BITS);
 }
 
 static inline pud_t pud_wrprotect(pud_t pud)
 {
+	/*
+	 * Blindly clearing _PAGE_RW might accidentally create
+	 * a shadow stack PUD (RW=0,Dirty=1).  Move the hardware
+	 * dirty value to the software bit.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		pudval_t v = native_pud_val(pud);
+
+		v |= (v & _PAGE_DIRTY_HW) >> _PAGE_BIT_DIRTY_HW <<
+		     _PAGE_BIT_COW;
+		pud = pud_clear_flags(__pud(v), _PAGE_DIRTY_HW);
+	}
+
 	return pud_clear_flags(pud, _PAGE_RW);
 }
 
 static inline pud_t pud_mkdirty(pud_t pud)
 {
-	return pud_set_flags(pud, _PAGE_DIRTY_HW | _PAGE_SOFT_DIRTY);
+	pudval_t dirty = _PAGE_DIRTY_HW;
+
+	/* Avoid creating (HW)Dirty=1,Write=0 PUDs */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK) && !(pud_flags(pud) & _PAGE_RW))
+		dirty = _PAGE_COW;
+
+	return pud_set_flags(pud, dirty | _PAGE_SOFT_DIRTY);
 }
 
 static inline pud_t pud_mkdevmap(pud_t pud)
@@ -518,6 +605,13 @@ static inline pud_t pud_mkyoung(pud_t pud)
 
 static inline pud_t pud_mkwrite(pud_t pud)
 {
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		if (pud_flags(pud) & _PAGE_COW) {
+			pud = pud_clear_flags(pud, _PAGE_COW);
+			pud = pud_set_flags(pud, _PAGE_DIRTY_HW);
+		}
+	}
+
 	return pud_set_flags(pud, _PAGE_RW);
 }
 
@@ -1131,6 +1225,12 @@ extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
 #define pmd_write pmd_write
 static inline int pmd_write(pmd_t pmd)
 {
+	/*
+	 * If _PAGE_DIRTY_HW is set, then the PMD must either have
+	 * _PAGE_RW or be a shadow stack PMD, which is logically writable.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK))
+		return pmd_flags(pmd) & (_PAGE_RW | _PAGE_DIRTY_HW);
 	return pmd_flags(pmd) & _PAGE_RW;
 }
 
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index 7462a574fc93..5f764d8d9bae 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -23,7 +23,8 @@
 #define _PAGE_BIT_SOFTW2	10	/* " */
 #define _PAGE_BIT_SOFTW3	11	/* " */
 #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
-#define _PAGE_BIT_SOFTW4	58	/* available for programmer */
+#define _PAGE_BIT_SOFTW4	57	/* available for programmer */
+#define _PAGE_BIT_SOFTW5	58	/* available for programmer */
 #define _PAGE_BIT_PKEY_BIT0	59	/* Protection Keys, bit 1/4 */
 #define _PAGE_BIT_PKEY_BIT1	60	/* Protection Keys, bit 2/4 */
 #define _PAGE_BIT_PKEY_BIT2	61	/* Protection Keys, bit 3/4 */
@@ -36,6 +37,16 @@
 #define _PAGE_BIT_SOFT_DIRTY	_PAGE_BIT_SOFTW3 /* software dirty tracking */
 #define _PAGE_BIT_DEVMAP	_PAGE_BIT_SOFTW4
 
+/*
+ * This bit indicates a copy-on-write page, and is different from
+ * _PAGE_BIT_SOFT_DIRTY, which tracks which pages a task writes to.
+ */
+#ifdef CONFIG_X86_64
+#define _PAGE_BIT_COW		_PAGE_BIT_SOFTW5 /* copy-on-write */
+#else
+#define _PAGE_BIT_COW		0
+#endif
+
 /* If _PAGE_BIT_PRESENT is clear, we use these: */
 /* - if the user mapped it with PROT_NONE; pte_present gives true */
 #define _PAGE_BIT_PROTNONE	_PAGE_BIT_GLOBAL
@@ -117,6 +128,34 @@
 #define _PAGE_DEVMAP	(_AT(pteval_t, 0))
 #endif
 
+/*
+ * _PAGE_COW is used to separate R/O and copy-on-write PTEs created by
+ * software from the shadow stack PTE setting required by the hardware:
+ * (a) A modified, copy-on-write (COW) page: (R/O + _PAGE_COW)
+ * (b) A R/O page that has been COW'ed: (R/O + _PAGE_COW)
+ *     The user page is in a R/O VMA, and get_user_pages() needs a
+ *     writable copy.  The page fault handler creates a copy of the page
+ *     and sets the new copy's PTE as R/O and _PAGE_COW.
+ * (c) A shadow stack PTE: (R/O + _PAGE_DIRTY_HW)
+ * (d) A shared (copy-on-access) shadow stack PTE: (R/O + _PAGE_COW)
+ *     When a shadow stack page is being shared among processes (this
+ *     happens at fork()), its PTE is cleared of _PAGE_DIRTY_HW, so the
+ *     next shadow stack access causes a fault, and the page is duplicated
+ *     and _PAGE_DIRTY_HW is set again.  This is the COW equivalent for
+ *     shadow stack pages, even though it's copy-on-access rather than
+ *     copy-on-write.
+ * (e) A page where the processor observed a Write=1 PTE, started a write,
+ *     set Dirty=1, but then observed a Write=0 PTE.  That's possible
+ *     today, but will not happen on processors that support shadow stack.
+ */
+#ifdef CONFIG_X86_SHADOW_STACK_USER
+#define _PAGE_COW	(_AT(pteval_t, 1) << _PAGE_BIT_COW)
+#else
+#define _PAGE_COW	(_AT(pteval_t, 0))
+#endif
+
+#define _PAGE_DIRTY_BITS (_PAGE_DIRTY_HW | _PAGE_COW)
+
 #define _PAGE_PROTNONE	(_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
 
 /*
-- 
2.21.0



* [PATCH v15 09/26] drm/i915/gvt: Change _PAGE_DIRTY to _PAGE_DIRTY_BITS
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (7 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-10 16:21 ` [PATCH v15 10/26] x86/mm: Update pte_modify for _PAGE_COW Yu-cheng Yu
                   ` (17 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu, David Airlie, Joonas Lahtinen, Jani Nikula,
	Daniel Vetter, Rodrigo Vivi, Zhenyu Wang, Zhi Wang

After the introduction of _PAGE_COW, a modified page's PTE can have either
_PAGE_DIRTY_HW or _PAGE_COW set.  Change the _PAGE_DIRTY clearing to
_PAGE_DIRTY_BITS so that both dirty bits are covered.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: David Airlie <airlied@linux.ie>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Zhenyu Wang <zhenyuw@linux.intel.com>
Cc: Zhi Wang <zhi.a.wang@intel.com>
---
 drivers/gpu/drm/i915/gvt/gtt.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/gvt/gtt.c b/drivers/gpu/drm/i915/gvt/gtt.c
index a3a4305eda01..dd0ab28cfe7d 100644
--- a/drivers/gpu/drm/i915/gvt/gtt.c
+++ b/drivers/gpu/drm/i915/gvt/gtt.c
@@ -1207,7 +1207,7 @@ static int split_2MB_gtt_entry(struct intel_vgpu *vgpu,
 	}
 
 	/* Clear dirty field. */
-	se->val64 &= ~_PAGE_DIRTY;
+	se->val64 &= ~_PAGE_DIRTY_BITS;
 
 	ops->clear_pse(se);
 	ops->clear_ips(se);
-- 
2.21.0



* [PATCH v15 10/26] x86/mm: Update pte_modify for _PAGE_COW
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (8 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 09/26] drm/i915/gvt: Change _PAGE_DIRTY to _PAGE_DIRTY_BITS Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-10 16:21 ` [PATCH v15 11/26] x86/mm: Update ptep_set_wrprotect() and pmdp_set_wrprotect() for transition from _PAGE_DIRTY_HW to _PAGE_COW Yu-cheng Yu
                   ` (16 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

pte_modify() changes a PTE to 'newprot'.  It doesn't use the pte_*()
helpers that a previous patch fixed up, so the dirty bits need to be fixed
up here as well.

Introduce fixup_dirty_pte() to set the appropriate dirty bit based on
_PAGE_RW, and apply the same change to pmd_modify().
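
A worked example of what the fixup prevents (bit names abbreviated, values
illustrative):

    old PTE:        RW=1, DIRTY_HW=1    ordinary writable, dirty page
    newprot:        read-only, e.g. from mprotect(PROT_READ)
    after masking:  RW=0, DIRTY_HW=1    would look like a shadow stack PTE
    after fixup:    RW=0, COW=1         ordinary read-only, dirty page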

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/include/asm/pgtable.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index c88c7ccf0318..07a08e763bce 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -726,6 +726,21 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd)
 
 static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask);
 
+static inline pteval_t fixup_dirty_pte(pteval_t pteval)
+{
+	pte_t pte = __pte(pteval);
+
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK) && pte_dirty(pte)) {
+		pte = pte_mkclean(pte);
+
+		if (pte_flags(pte) & _PAGE_RW)
+			pte = pte_set_flags(pte, _PAGE_DIRTY_HW);
+		else
+			pte = pte_set_flags(pte, _PAGE_COW);
+	}
+	return pte_val(pte);
+}
+
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
 	pteval_t val = pte_val(pte), oldval = val;
@@ -736,16 +751,34 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 	 */
 	val &= _PAGE_CHG_MASK;
 	val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK;
+	val = fixup_dirty_pte(val);
 	val = flip_protnone_guard(oldval, val, PTE_PFN_MASK);
 	return __pte(val);
 }
 
+static inline int pmd_write(pmd_t pmd);
+static inline pmdval_t fixup_dirty_pmd(pmdval_t pmdval)
+{
+	pmd_t pmd = __pmd(pmdval);
+
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK) && pmd_dirty(pmd)) {
+		pmd = pmd_mkclean(pmd);
+
+		if (pmd_flags(pmd) & _PAGE_RW)
+			pmd = pmd_set_flags(pmd, _PAGE_DIRTY_HW);
+		else
+			pmd = pmd_set_flags(pmd, _PAGE_COW);
+	}
+	return pmd_val(pmd);
+}
+
 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 {
 	pmdval_t val = pmd_val(pmd), oldval = val;
 
 	val &= _HPAGE_CHG_MASK;
 	val |= check_pgprot(newprot) & ~_HPAGE_CHG_MASK;
+	val = fixup_dirty_pmd(val);
 	val = flip_protnone_guard(oldval, val, PHYSICAL_PMD_PAGE_MASK);
 	return __pmd(val);
 }
-- 
2.21.0



* [PATCH v15 11/26] x86/mm: Update ptep_set_wrprotect() and pmdp_set_wrprotect() for transition from _PAGE_DIRTY_HW to _PAGE_COW
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (9 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 10/26] x86/mm: Update pte_modify for _PAGE_COW Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-10 16:21 ` [PATCH v15 12/26] mm: Introduce VM_SHSTK for shadow stack memory Yu-cheng Yu
                   ` (15 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

When shadow stack is introduced, a [R/O + _PAGE_DIRTY_HW] PTE is reserved
for shadow stack.  Copy-on-write PTEs have [R/O + _PAGE_COW].

When a PTE goes from [R/W + _PAGE_DIRTY_HW] to [R/O + _PAGE_COW], it could
become a transient shadow stack PTE in two cases:

The first case is that some processors can start a write but end up seeing
a read-only PTE by the time they get to the Dirty bit, creating a transient
shadow stack PTE.  However, this will not occur on processors supporting
shadow stack, so no TLB flush is needed here.

The second case is that, when the software non-atomically tests and
replaces _PAGE_DIRTY_HW with _PAGE_COW, a transient shadow stack PTE can
be exposed.  This is prevented with cmpxchg.
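
To illustrate the second case, a hypothetical non-atomic, two-step
conversion (not what this patch does; bit names abbreviated) would expose
the transient value:

    Writable, dirty PTE:            RW=1, DIRTY_HW=1
    After clearing only RW:         RW=0, DIRTY_HW=1   <- transient shadow stack PTE
    After moving DIRTY_HW to COW:   RW=0, COW=1        <- intended result

The try_cmpxchg() loops below compute the final [R/O + _PAGE_COW] value
with pte_wrprotect()/pmd_wrprotect() first and install it in a single
atomic operation, so the intermediate value is never visible.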

Dave Hansen, Jann Horn, Andy Lutomirski, and Peter Zijlstra provided many
insights to the issue.  Jann Horn provided the cmpxchg solution.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/pgtable.h | 52 ++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 07a08e763bce..5fd4d6b60383 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1229,6 +1229,32 @@ static inline pte_t ptep_get_and_clear_full(struct mm_struct *mm,
 static inline void ptep_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pte_t *ptep)
 {
+	/*
+	 * Some processors can start a write, but end up seeing a read-only
+	 * PTE by the time they get to the Dirty bit.  In this case, they
+	 * will set the Dirty bit, leaving a read-only, Dirty PTE which
+	 * looks like a shadow stack PTE.
+	 *
+	 * However, this behavior has been improved and will not occur on
+	 * processors supporting shadow stack.  Without this guarantee, a
+	 * transition to a non-present PTE and flush the TLB would be
+	 * needed.
+	 *
+	 * When changing a writable PTE to read-only and if the PTE has
+	 * _PAGE_DIRTY_HW set, move that bit to _PAGE_COW so that the
+	 * PTE is not a shadow stack PTE.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		pte_t old_pte, new_pte;
+
+		do {
+			old_pte = READ_ONCE(*ptep);
+			new_pte = pte_wrprotect(old_pte);
+
+		} while (!try_cmpxchg(&ptep->pte, &old_pte.pte, new_pte.pte));
+
+		return;
+	}
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)&ptep->pte);
 }
 
@@ -1285,6 +1311,32 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pmd_t *pmdp)
 {
+	/*
+	 * Some processors can start a write, but end up seeing a read-only
+	 * PMD by the time they get to the Dirty bit.  In this case, they
+	 * will set the Dirty bit, leaving a read-only, Dirty PMD which
+	 * looks like a Shadow Stack PMD.
+	 *
+	 * However, this behavior has been improved and will not occur on
+	 * processors supporting Shadow Stack.  Without this guarantee, a
+	 * transition to a non-present PMD and flush the TLB would be
+	 * needed.
+	 *
+	 * When changing a writable PMD to read-only and if the PMD has
+	 * _PAGE_DIRTY_HW set, we move that bit to _PAGE_COW so that the
+	 * PMD is not a shadow stack PMD.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		pmd_t old_pmd, new_pmd;
+
+		do {
+			old_pmd = READ_ONCE(*pmdp);
+			new_pmd = pmd_wrprotect(old_pmd);
+
+		} while (!try_cmpxchg((pmdval_t *)pmdp, (pmdval_t *)&old_pmd, pmd_val(new_pmd)));
+
+		return;
+	}
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
 }
 
-- 
2.21.0



* [PATCH v15 12/26] mm: Introduce VM_SHSTK for shadow stack memory
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (10 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 11/26] x86/mm: Update ptep_set_wrprotect() and pmdp_set_wrprotect() for transition from _PAGE_DIRTY_HW to _PAGE_COW Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-10 16:21 ` [PATCH v15 13/26] x86/mm: Shadow Stack page fault error checking Yu-cheng Yu
                   ` (14 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

A shadow stack PTE must be read-only and have _PAGE_DIRTY_HW set.
However, read-only and Dirty PTEs also exist for copy-on-write (COW)
pages.  These two cases are handled differently for page faults.
Introduce VM_SHSTK to track shadow stack VMAs.
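
With this flag, a shadow stack mapping also becomes recognizable from user
space: /proc/<pid>/maps shows the arch_vma_name() string and the VmFlags
line in /proc/<pid>/smaps gains "ss".  A hypothetical maps entry (address
range and permissions illustrative) would look like:

    7f0000200000-7f0000a00000 r--p 00000000 00:00 0      [shadow stack]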

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/mm/mmap.c | 2 ++
 fs/proc/task_mmu.c | 3 +++
 include/linux/mm.h | 8 ++++++++
 3 files changed, 13 insertions(+)

diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index c90c20904a60..a22c6b6fc607 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -165,6 +165,8 @@ unsigned long get_mmap_base(int is_legacy)
 
 const char *arch_vma_name(struct vm_area_struct *vma)
 {
+	if (vma->vm_flags & VM_SHSTK)
+		return "[shadow stack]";
 	return NULL;
 }
 
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 217aa2705d5d..c72143cdbb5d 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -661,6 +661,9 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
 		[ilog2(VM_PKEY_BIT4)]	= "",
 #endif
 #endif /* CONFIG_ARCH_HAS_PKEYS */
+#ifdef CONFIG_X86_SHADOW_STACK_USER
+		[ilog2(VM_SHSTK)]	= "ss",
+#endif
 	};
 	size_t i;
 
diff --git a/include/linux/mm.h b/include/linux/mm.h
index db6ae4d3fb4e..c7f527bd21fb 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -304,11 +304,13 @@ extern unsigned int kobjsize(const void *objp);
 #define VM_HIGH_ARCH_BIT_2	34	/* bit only usable on 64-bit architectures */
 #define VM_HIGH_ARCH_BIT_3	35	/* bit only usable on 64-bit architectures */
 #define VM_HIGH_ARCH_BIT_4	36	/* bit only usable on 64-bit architectures */
+#define VM_HIGH_ARCH_BIT_5	37	/* bit only usable on 64-bit architectures */
 #define VM_HIGH_ARCH_0	BIT(VM_HIGH_ARCH_BIT_0)
 #define VM_HIGH_ARCH_1	BIT(VM_HIGH_ARCH_BIT_1)
 #define VM_HIGH_ARCH_2	BIT(VM_HIGH_ARCH_BIT_2)
 #define VM_HIGH_ARCH_3	BIT(VM_HIGH_ARCH_BIT_3)
 #define VM_HIGH_ARCH_4	BIT(VM_HIGH_ARCH_BIT_4)
+#define VM_HIGH_ARCH_5	BIT(VM_HIGH_ARCH_BIT_5)
 #endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
 
 #ifdef CONFIG_ARCH_HAS_PKEYS
@@ -324,6 +326,12 @@ extern unsigned int kobjsize(const void *objp);
 #endif
 #endif /* CONFIG_ARCH_HAS_PKEYS */
 
+#ifdef CONFIG_X86_SHADOW_STACK_USER
+# define VM_SHSTK	VM_HIGH_ARCH_5
+#else
+# define VM_SHSTK	VM_NONE
+#endif
+
 #if defined(CONFIG_X86)
 # define VM_PAT		VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
 #elif defined(CONFIG_PPC)
-- 
2.21.0



* [PATCH v15 13/26] x86/mm: Shadow Stack page fault error checking
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (11 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 12/26] mm: Introduce VM_SHSTK for shadow stack memory Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-10 16:21 ` [PATCH v15 14/26] x86/mm: Update maybe_mkwrite() for shadow stack Yu-cheng Yu
                   ` (13 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

Shadow stack accesses are those that are performed by the CPU where it
expects to encounter a shadow stack mapping.  These accesses are performed
implicitly by CALL/RET at the location of the shadow stack pointer, or
explicitly by shadow stack management instructions such as WRUSSQ.

Shadow stack accesses to shadow stack mappings can see faults in normal,
valid operation, just like regular accesses to regular mappings.  Shadow
stacks need some of the same features, such as delayed allocation, swap,
and copy-on-write.

Shadow stack accesses can also result in errors, such as when a shadow
stack overflows, or when a shadow stack access occurs to a
non-shadow-stack mapping.

When handling a shadow stack page fault, verify that it occurred within a
shadow stack mapping; it is always an error otherwise.  For valid shadow
stack accesses, set FAULT_FLAG_WRITE to effect copy-on-write.  Because
clearing _PAGE_DIRTY_HW (rather than _PAGE_RW) is used to trigger the
fault, shadow stack read faults and write faults are not differentiated,
and both are handled as write accesses.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/trap_pf.h |  2 ++
 arch/x86/mm/fault.c            | 19 +++++++++++++++++++
 2 files changed, 21 insertions(+)

diff --git a/arch/x86/include/asm/trap_pf.h b/arch/x86/include/asm/trap_pf.h
index 305bc1214aef..205766c438b3 100644
--- a/arch/x86/include/asm/trap_pf.h
+++ b/arch/x86/include/asm/trap_pf.h
@@ -11,6 +11,7 @@
  *   bit 3 ==				1: use of reserved bit detected
  *   bit 4 ==				1: fault was an instruction fetch
  *   bit 5 ==				1: protection keys block access
+ *   bit 6 ==				1: shadow stack access fault
  */
 enum x86_pf_error_code {
 	X86_PF_PROT	=		1 << 0,
@@ -19,6 +20,7 @@ enum x86_pf_error_code {
 	X86_PF_RSVD	=		1 << 3,
 	X86_PF_INSTR	=		1 << 4,
 	X86_PF_PK	=		1 << 5,
+	X86_PF_SHSTK	=		1 << 6,
 };
 
 #endif /* _ASM_X86_TRAP_PF_H */
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 82bf37a5c9ec..941f55ee7c75 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1110,6 +1110,17 @@ access_error(unsigned long error_code, struct vm_area_struct *vma)
 				       (error_code & X86_PF_INSTR), foreign))
 		return 1;
 
+	/*
+	 * Verify a shadow stack access is within a shadow stack VMA.
+	 * It is always an error otherwise.  Normal data access to a
+	 * shadow stack area is checked in the cases below.
+	 */
+	if (error_code & X86_PF_SHSTK) {
+		if (!(vma->vm_flags & VM_SHSTK))
+			return 1;
+		return 0;
+	}
+
 	if (error_code & X86_PF_WRITE) {
 		/* write, present and write, not present: */
 		if (unlikely(!(vma->vm_flags & VM_WRITE)))
@@ -1275,6 +1286,14 @@ void do_user_addr_fault(struct pt_regs *regs,
 
 	perf_sw_event(PERF_COUNT_SW_PAGE_FAULTS, 1, regs, address);
 
+	/*
+	 * Clearing _PAGE_DIRTY_HW is used to detect shadow stack access.
+	 * This method cannot distinguish shadow stack read vs. write.
+	 * For valid shadow stack accesses, set FAULT_FLAG_WRITE to effect
+	 * copy-on-write.
+	 */
+	if (hw_error_code & X86_PF_SHSTK)
+		flags |= FAULT_FLAG_WRITE;
 	if (hw_error_code & X86_PF_WRITE)
 		flags |= FAULT_FLAG_WRITE;
 	if (hw_error_code & X86_PF_INSTR)
-- 
2.21.0



* [PATCH v15 14/26] x86/mm: Update maybe_mkwrite() for shadow stack
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (12 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 13/26] x86/mm: Shadow Stack page fault error checking Yu-cheng Yu
@ 2020-11-10 16:21 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 15/26] mm: Fixup places that call pte_mkwrite() directly Yu-cheng Yu
                   ` (12 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:21 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

Shadow stack memory is writable, but its VMA has VM_SHSTK instead of
VM_WRITE.  Update maybe_mkwrite() to handle shadow stack VMAs as well.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/Kconfig        |  4 ++++
 arch/x86/mm/pgtable.c   | 18 ++++++++++++++++++
 include/linux/mm.h      |  2 ++
 include/linux/pgtable.h | 24 ++++++++++++++++++++++++
 mm/huge_memory.c        |  2 ++
 5 files changed, 50 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a51d2a3de166..960993862b96 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1938,6 +1938,9 @@ config AS_HAS_SHADOW_STACK
 config X86_CET
 	def_bool n
 
+config ARCH_MAYBE_MKWRITE
+	def_bool n
+
 config ARCH_HAS_SHADOW_STACK
 	def_bool n
 
@@ -1948,6 +1951,7 @@ config X86_SHADOW_STACK_USER
 	depends on AS_HAS_SHADOW_STACK
 	select ARCH_USES_HIGH_VMA_FLAGS
 	select X86_CET
+	select ARCH_MAYBE_MKWRITE
 	select ARCH_HAS_SHADOW_STACK
 	help
 	  Shadow Stacks provides protection against program stack
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index dfd82f51ba66..a9666b64bc05 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -610,6 +610,24 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_ARCH_MAYBE_MKWRITE
+pte_t arch_maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+{
+	if (likely(vma->vm_flags & VM_SHSTK))
+		pte = pte_mkwrite_shstk(pte);
+	return pte;
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+pmd_t arch_maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
+{
+	if (likely(vma->vm_flags & VM_SHSTK))
+		pmd = pmd_mkwrite_shstk(pmd);
+	return pmd;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#endif /* CONFIG_ARCH_MAYBE_MKWRITE */
+
 /**
  * reserve_top_address - reserves a hole in the top of kernel address space
  * @reserve - size of hole to reserve
diff --git a/include/linux/mm.h b/include/linux/mm.h
index c7f527bd21fb..09f07d07a8ff 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -977,6 +977,8 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 {
 	if (likely(vma->vm_flags & VM_WRITE))
 		pte = pte_mkwrite(pte);
+	else
+		pte = arch_maybe_mkwrite(pte, vma);
 	return pte;
 }
 
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 71125a4676c4..ea66461610ae 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1384,6 +1384,30 @@ static inline bool arch_has_pfn_modify_check(void)
 }
 #endif /* !_HAVE_ARCH_PFN_MODIFY_ALLOWED */
 
+#ifdef CONFIG_MMU
+#ifdef CONFIG_ARCH_MAYBE_MKWRITE
+pte_t arch_maybe_mkwrite(pte_t pte, struct vm_area_struct *vma);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+pmd_t arch_maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+#else /* !CONFIG_ARCH_MAYBE_MKWRITE */
+static inline pte_t arch_maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
+{
+	return pte;
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline pmd_t arch_maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
+{
+	return pmd;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+#endif /* CONFIG_ARCH_MAYBE_MKWRITE */
+#endif /* CONFIG_MMU */
+
 /*
  * Architecture PAGE_KERNEL_* fallbacks
  *
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9474dbc150ed..ecd23777b8bf 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -464,6 +464,8 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 {
 	if (likely(vma->vm_flags & VM_WRITE))
 		pmd = pmd_mkwrite(pmd);
+	else
+		pmd = arch_maybe_pmd_mkwrite(pmd, vma);
 	return pmd;
 }
 
-- 
2.21.0



* [PATCH v15 15/26] mm: Fixup places that call pte_mkwrite() directly
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (13 preceding siblings ...)
  2020-11-10 16:21 ` [PATCH v15 14/26] x86/mm: Update maybe_mkwrite() for shadow stack Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 16/26] mm: Add guard pages around a shadow stack Yu-cheng Yu
                   ` (11 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

A shadow stack page is made writable by pte_mkwrite_shstk(), which sets
_PAGE_DIRTY_HW.  There are a few places that call pte_mkwrite() directly
and miss the maybe_mkwrite() fixup in the previous patch.  Fix them with
maybe_mkwrite():

- do_anonymous_page() and migrate_vma_insert_page() check VM_WRITE directly
  and call pte_mkwrite(), which is the same as maybe_mkwrite().  Change
  them to maybe_mkwrite().

- In do_numa_page(), if the numa entry 'was-writable', then pte_mkwrite()
  is called directly.  Fix it by doing maybe_mkwrite().

- In change_pte_range(), pte_mkwrite() is called directly.  Replace it with
  maybe_mkwrite().

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 mm/memory.c   | 5 ++---
 mm/migrate.c  | 3 +--
 mm/mprotect.c | 2 +-
 3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index c48f8df6e502..65c56a5de418 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3536,8 +3536,7 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
 
 	entry = mk_pte(page, vma->vm_page_prot);
 	entry = pte_sw_mkyoung(entry);
-	if (vma->vm_flags & VM_WRITE)
-		entry = pte_mkwrite(pte_mkdirty(entry));
+	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 
 	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
 			&vmf->ptl);
@@ -4192,7 +4191,7 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	pte = pte_modify(old_pte, vma->vm_page_prot);
 	pte = pte_mkyoung(pte);
 	if (was_writable)
-		pte = pte_mkwrite(pte);
+		pte = maybe_mkwrite(pte, vma);
 	ptep_modify_prot_commit(vma, vmf->address, vmf->pte, old_pte, pte);
 	update_mmu_cache(vma, vmf->address, vmf->pte);
 
diff --git a/mm/migrate.c b/mm/migrate.c
index 5ca5842df5db..f27ec0436fce 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2914,8 +2914,7 @@ static void migrate_vma_insert_page(struct migrate_vma *migrate,
 		}
 	} else {
 		entry = mk_pte(page, vma->vm_page_prot);
-		if (vma->vm_flags & VM_WRITE)
-			entry = pte_mkwrite(pte_mkdirty(entry));
+		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
 	}
 
 	ptep = pte_offset_map_lock(mm, pmdp, addr, &ptl);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 56c02beb6041..7235b2409422 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -135,7 +135,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			if (dirty_accountable && pte_dirty(ptent) &&
 					(pte_soft_dirty(ptent) ||
 					 !(vma->vm_flags & VM_SOFTDIRTY))) {
-				ptent = pte_mkwrite(ptent);
+				ptent = maybe_mkwrite(ptent, vma);
 			}
 			ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
 			pages++;
-- 
2.21.0



* [PATCH v15 16/26] mm: Add guard pages around a shadow stack.
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (14 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 15/26] mm: Fixup places that call pte_mkwrite() directly Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 17/26] mm/mmap: Add shadow stack pages to memory accounting Yu-cheng Yu
                   ` (10 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

INCSSP(Q/D) increments the shadow stack pointer and 'pops and discards'
the first and the last elements in the range, effectively touching those
memory areas.

The maximum distance INCSSPQ can move the pointer is 255 * 8 = 2040 bytes,
and 255 * 4 = 1020 bytes for INCSSPD.  Both are well below PAGE_SIZE.
Thus, putting a guard page at both ends of a shadow stack prevents INCSSP,
CALL, and RET from going beyond it.
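
A quick worked check of why a single 4 KB guard page is enough:

    max INCSSPQ displacement:  255 * 8 = 2040 bytes
    max INCSSPD displacement:  255 * 4 = 1020 bytes
    guard gap (PAGE_SIZE):               4096 bytes

Even with the shadow stack pointer at the very edge of the shadow stack,
the farthest element these instructions can touch still lands inside the
guard page, which is not a shadow stack mapping, so the access faults
rather than reaching an adjacent mapping.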

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/include/asm/page_64_types.h | 10 ++++++++++
 include/linux/mm.h                   | 24 ++++++++++++++++++++----
 2 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 3f49dac03617..0fbee6dcd3ca 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -97,6 +97,16 @@
 #define STACK_TOP		TASK_SIZE_LOW
 #define STACK_TOP_MAX		TASK_SIZE_MAX
 
+/*
+ * Shadow stack pointer is moved by CALL, RET, and INCSSP(Q/D).  INCSSPQ
+ * moves shadow stack pointer up to 255 * 8 = ~2 KB (~1KB for INCSSPD) and
+ * touches the first and the last element in the range, which triggers a
+ * page fault if the range is not in a shadow stack.  Because of this,
+ * creating 4-KB guard pages around a shadow stack prevents these
+ * instructions from going beyond.
+ */
+#define ARCH_SHADOW_STACK_GUARD_GAP PAGE_SIZE
+
 /*
  * Maximum kernel image size is limited to 1 GiB, due to the fixmap living
  * in the next 1 GiB (see level2_kernel_pgt in arch/x86/kernel/head_64.S).
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 09f07d07a8ff..80cda65bb1ae 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2639,6 +2639,10 @@ extern vm_fault_t filemap_page_mkwrite(struct vm_fault *vmf);
 int __must_check write_one_page(struct page *page);
 void task_dirty_inc(struct task_struct *tsk);
 
+#ifndef ARCH_SHADOW_STACK_GUARD_GAP
+#define ARCH_SHADOW_STACK_GUARD_GAP 0
+#endif
+
 extern unsigned long stack_guard_gap;
 /* Generic expand stack which grows the stack according to GROWS{UP,DOWN} */
 extern int expand_stack(struct vm_area_struct *vma, unsigned long address);
@@ -2671,9 +2675,15 @@ static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * m
 static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 {
 	unsigned long vm_start = vma->vm_start;
+	unsigned long gap = 0;
 
-	if (vma->vm_flags & VM_GROWSDOWN) {
-		vm_start -= stack_guard_gap;
+	if (vma->vm_flags & VM_GROWSDOWN)
+		gap = stack_guard_gap;
+	else if (vma->vm_flags & VM_SHSTK)
+		gap = ARCH_SHADOW_STACK_GUARD_GAP;
+
+	if (gap != 0) {
+		vm_start -= gap;
 		if (vm_start > vma->vm_start)
 			vm_start = 0;
 	}
@@ -2683,9 +2693,15 @@ static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
 static inline unsigned long vm_end_gap(struct vm_area_struct *vma)
 {
 	unsigned long vm_end = vma->vm_end;
+	unsigned long gap = 0;
+
+	if (vma->vm_flags & VM_GROWSUP)
+		gap = stack_guard_gap;
+	else if (vma->vm_flags & VM_SHSTK)
+		gap = ARCH_SHADOW_STACK_GUARD_GAP;
 
-	if (vma->vm_flags & VM_GROWSUP) {
-		vm_end += stack_guard_gap;
+	if (gap != 0) {
+		vm_end += gap;
 		if (vm_end < vma->vm_end)
 			vm_end = -PAGE_SIZE;
 	}
-- 
2.21.0



* [PATCH v15 17/26] mm/mmap: Add shadow stack pages to memory accounting
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (15 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 16/26] mm: Add guard pages around a shadow stack Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 18/26] mm: Update can_follow_write_pte() for shadow stack Yu-cheng Yu
                   ` (9 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

Account shadow stack pages to stack memory.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/mm/pgtable.c   |  7 +++++++
 include/linux/pgtable.h | 11 +++++++++++
 mm/mmap.c               |  5 +++++
 3 files changed, 23 insertions(+)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index a9666b64bc05..68e98f70298b 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -893,3 +893,10 @@ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr)
 
 #endif /* CONFIG_X86_64 */
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
+
+#ifdef CONFIG_ARCH_HAS_SHADOW_STACK
+bool arch_shadow_stack_mapping(vm_flags_t vm_flags)
+{
+	return (vm_flags & VM_SHSTK);
+}
+#endif
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index ea66461610ae..13cb5fe3c2be 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1408,6 +1408,17 @@ static inline pmd_t arch_maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma
 #endif /* CONFIG_ARCH_MAYBE_MKWRITE */
 #endif /* CONFIG_MMU */
 
+#ifdef CONFIG_MMU
+#ifdef CONFIG_ARCH_HAS_SHADOW_STACK
+bool arch_shadow_stack_mapping(vm_flags_t vm_flags);
+#else
+static inline bool arch_shadow_stack_mapping(vm_flags_t vm_flags)
+{
+	return false;
+}
+#endif /* CONFIG_ARCH_HAS_SHADOW_STACK */
+#endif /* CONFIG_MMU */
+
 /*
  * Architecture PAGE_KERNEL_* fallbacks
  *
diff --git a/mm/mmap.c b/mm/mmap.c
index d91ecb00d38c..2290a67f4d3f 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1720,6 +1720,9 @@ static inline int accountable_mapping(struct file *file, vm_flags_t vm_flags)
 	if (file && is_file_hugepages(file))
 		return 0;
 
+	if (arch_shadow_stack_mapping(vm_flags))
+		return 1;
+
 	return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
 }
 
@@ -3391,6 +3394,8 @@ void vm_stat_account(struct mm_struct *mm, vm_flags_t flags, long npages)
 		mm->stack_vm += npages;
 	else if (is_data_mapping(flags))
 		mm->data_vm += npages;
+	else if (arch_shadow_stack_mapping(flags))
+		mm->stack_vm += npages;
 }
 
 static vm_fault_t special_mapping_fault(struct vm_fault *vmf);
-- 
2.21.0



* [PATCH v15 18/26] mm: Update can_follow_write_pte() for shadow stack
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (16 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 17/26] mm/mmap: Add shadow stack pages to memory accounting Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 19/26] mm: Re-introduce vm_flags to do_mmap() Yu-cheng Yu
                   ` (8 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

can_follow_write_pte() ensures a read-only page has gone through a COW
cycle by checking the FOLL_COW flag, and uses pte_dirty() to validate that
the flag is still valid.

Like a writable data page, a shadow stack page is writable and becomes
read-only during copy-on-write, but it is always dirty.  Thus, in the
can_follow_write_pte() check, it belongs to the writable-page case and
must be excluded from the read-only pte_dirty() check.  Apply the same
change to can_follow_write_pmd().

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 mm/gup.c         | 8 +++++---
 mm/huge_memory.c | 8 +++++---
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 102877ed77a4..cae6e7eec0a4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -391,10 +391,12 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
  * FOLL_FORCE can write to even unwritable pte's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+static inline bool can_follow_write_pte(pte_t pte, unsigned int flags,
+					struct vm_area_struct *vma)
 {
 	return pte_write(pte) ||
-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
+		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte) &&
+				  !arch_shadow_stack_mapping(vma->vm_flags));
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
@@ -437,7 +439,7 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 	if ((flags & FOLL_NUMA) && pte_protnone(pte))
 		goto no_page;
-	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags)) {
+	if ((flags & FOLL_WRITE) && !can_follow_write_pte(pte, flags, vma)) {
 		pte_unmap_unlock(ptep, ptl);
 		return NULL;
 	}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ecd23777b8bf..c6301c6d93b7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1324,10 +1324,12 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
  * FOLL_FORCE can write to even unwritable pmd's, but only
  * after we've gone through a COW cycle and they are dirty.
  */
-static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags,
+					struct vm_area_struct *vma)
 {
 	return pmd_write(pmd) ||
-	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
+	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd) &&
+				  !arch_shadow_stack_mapping(vma->vm_flags));
 }
 
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
@@ -1340,7 +1342,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 
 	assert_spin_locked(pmd_lockptr(mm, pmd));
 
-	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
+	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags, vma))
 		goto out;
 
 	/* Avoid dumping huge zero page */
-- 
2.21.0



* [PATCH v15 19/26] mm: Re-introduce vm_flags to do_mmap()
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (17 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 18/26] mm: Update can_follow_write_pte() for shadow stack Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 20/26] x86/cet/shstk: User-mode shadow stack support Yu-cheng Yu
                   ` (7 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu, Peter Collingbourne, Andrew Morton

There were no more callers passing vm_flags to do_mmap(), so vm_flags was
removed from the function's parameters by:

    commit 45e55300f114 ("mm: remove unnecessary wrapper function do_mmap_pgoff()").

There is a new user now: shadow stack allocation passes VM_SHSTK to
do_mmap().  Re-introduce vm_flags to do_mmap(), but without restoring the
old wrapper do_mmap_pgoff().  Instead, make all former callers of the
wrapper pass a zero vm_flags to do_mmap().
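
As an illustration of the new user, the shadow stack allocator introduced
later in this series calls:

    addr = do_mmap(NULL, 0, size, PROT_READ,
                   MAP_ANONYMOUS | MAP_PRIVATE, VM_SHSTK, 0,
                   &populate, NULL);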

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Reviewed-by: Peter Collingbourne <pcc@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: linux-mm@kvack.org
---
 fs/aio.c           |  2 +-
 include/linux/mm.h |  3 ++-
 ipc/shm.c          |  2 +-
 mm/mmap.c          | 10 +++++-----
 mm/nommu.c         |  4 ++--
 mm/util.c          |  2 +-
 6 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index c45c20d87538..641640288b50 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -527,7 +527,7 @@ static int aio_setup_ring(struct kioctx *ctx, unsigned int nr_events)
 
 	ctx->mmap_base = do_mmap(ctx->aio_ring_file, 0, ctx->mmap_size,
 				 PROT_READ | PROT_WRITE,
-				 MAP_SHARED, 0, &unused, NULL);
+				 MAP_SHARED, 0, 0, &unused, NULL);
 	mmap_write_unlock(mm);
 	if (IS_ERR((void *)ctx->mmap_base)) {
 		ctx->mmap_size = 0;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 80cda65bb1ae..ef1bd7c7e88b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2584,7 +2584,8 @@ extern unsigned long mmap_region(struct file *file, unsigned long addr,
 	struct list_head *uf);
 extern unsigned long do_mmap(struct file *file, unsigned long addr,
 	unsigned long len, unsigned long prot, unsigned long flags,
-	unsigned long pgoff, unsigned long *populate, struct list_head *uf);
+	vm_flags_t vm_flags, unsigned long pgoff, unsigned long *populate,
+	struct list_head *uf);
 extern int __do_munmap(struct mm_struct *, unsigned long, size_t,
 		       struct list_head *uf, bool downgrade);
 extern int do_munmap(struct mm_struct *, unsigned long, size_t,
diff --git a/ipc/shm.c b/ipc/shm.c
index e25c7c6106bc..91474258933d 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -1556,7 +1556,7 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg,
 			goto invalid;
 	}
 
-	addr = do_mmap(file, addr, size, prot, flags, 0, &populate, NULL);
+	addr = do_mmap(file, addr, size, prot, flags, 0, 0, &populate, NULL);
 	*raddr = addr;
 	err = 0;
 	if (IS_ERR_VALUE(addr))
diff --git a/mm/mmap.c b/mm/mmap.c
index 2290a67f4d3f..c4938e4b789b 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1403,11 +1403,11 @@ static inline bool file_mmap_ok(struct file *file, struct inode *inode,
  */
 unsigned long do_mmap(struct file *file, unsigned long addr,
 			unsigned long len, unsigned long prot,
-			unsigned long flags, unsigned long pgoff,
-			unsigned long *populate, struct list_head *uf)
+			unsigned long flags, vm_flags_t vm_flags,
+			unsigned long pgoff, unsigned long *populate,
+			struct list_head *uf)
 {
 	struct mm_struct *mm = current->mm;
-	vm_flags_t vm_flags;
 	int pkey = 0;
 
 	*populate = 0;
@@ -1469,7 +1469,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 	 * to. we assume access permissions have been handled by the open
 	 * of the memory object, so we don't do any here.
 	 */
-	vm_flags = calc_vm_prot_bits(prot, pkey) | calc_vm_flag_bits(flags) |
+	vm_flags |= calc_vm_prot_bits(prot, pkey) | calc_vm_flag_bits(flags) |
 			mm->def_flags | VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;
 
 	if (flags & MAP_LOCKED)
@@ -3051,7 +3051,7 @@ SYSCALL_DEFINE5(remap_file_pages, unsigned long, start, unsigned long, size,
 
 	file = get_file(vma->vm_file);
 	ret = do_mmap(vma->vm_file, start, size,
-			prot, flags, pgoff, &populate, NULL);
+			prot, flags, 0, pgoff, &populate, NULL);
 	fput(file);
 out:
 	mmap_write_unlock(mm);
diff --git a/mm/nommu.c b/mm/nommu.c
index 0faf39b32cdb..a03c72f0c3f8 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -1071,6 +1071,7 @@ unsigned long do_mmap(struct file *file,
 			unsigned long len,
 			unsigned long prot,
 			unsigned long flags,
+			vm_flags_t vm_flags,
 			unsigned long pgoff,
 			unsigned long *populate,
 			struct list_head *uf)
@@ -1078,7 +1079,6 @@ unsigned long do_mmap(struct file *file,
 	struct vm_area_struct *vma;
 	struct vm_region *region;
 	struct rb_node *rb;
-	vm_flags_t vm_flags;
 	unsigned long capabilities, result;
 	int ret;
 
@@ -1097,7 +1097,7 @@ unsigned long do_mmap(struct file *file,
 
 	/* we've determined that we can make the mapping, now translate what we
 	 * now know into VMA flags */
-	vm_flags = determine_vm_flags(file, prot, flags, capabilities);
+	vm_flags |= determine_vm_flags(file, prot, flags, capabilities);
 
 	/* we're going to need to record the mapping */
 	region = kmem_cache_zalloc(vm_region_jar, GFP_KERNEL);
diff --git a/mm/util.c b/mm/util.c
index 4ddb6e186dd5..6fd9a272b7f9 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -504,7 +504,7 @@ unsigned long vm_mmap_pgoff(struct file *file, unsigned long addr,
 	if (!ret) {
 		if (mmap_write_lock_killable(mm))
 			return -EINTR;
-		ret = do_mmap(file, addr, len, prot, flag, pgoff, &populate,
+		ret = do_mmap(file, addr, len, prot, flag, 0, pgoff, &populate,
 			      &uf);
 		mmap_write_unlock(mm);
 		userfaultfd_unmap_complete(mm, &uf);
-- 
2.21.0



* [PATCH v15 20/26] x86/cet/shstk: User-mode shadow stack support
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (18 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 19/26] mm: Re-introduce vm_flags to do_mmap() Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 21/26] x86/cet/shstk: Handle signals for shadow stack Yu-cheng Yu
                   ` (6 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

Add basic shadow stack enabling/disabling routines.  A task's shadow stack
is allocated from memory with the VM_SHSTK flag set and has a fixed size
of min(RLIMIT_STACK, 4 GB).
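
As a worked example of the sizing done in cet_setup_shstk() below
(assuming the common 8 MiB default RLIMIT_STACK):

    size = round_up(min(rlimit(RLIMIT_STACK), 1UL << 32), PAGE_SIZE)
         = round_up(min(8 MiB, 4 GiB), 4 KiB)
         = 8 MiB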

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/include/asm/cet.h               |  28 +++++
 arch/x86/include/asm/disabled-features.h |   8 +-
 arch/x86/include/asm/processor.h         |   5 +
 arch/x86/kernel/Makefile                 |   2 +
 arch/x86/kernel/cet.c                    | 147 +++++++++++++++++++++++
 arch/x86/kernel/cpu/common.c             |  28 +++++
 arch/x86/kernel/process.c                |   1 +
 7 files changed, 218 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/include/asm/cet.h
 create mode 100644 arch/x86/kernel/cet.c

diff --git a/arch/x86/include/asm/cet.h b/arch/x86/include/asm/cet.h
new file mode 100644
index 000000000000..5750fbcbb952
--- /dev/null
+++ b/arch/x86/include/asm/cet.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_CET_H
+#define _ASM_X86_CET_H
+
+#ifndef __ASSEMBLY__
+#include <linux/types.h>
+
+struct task_struct;
+/*
+ * Per-thread CET status
+ */
+struct cet_status {
+	unsigned long	shstk_base;
+	unsigned long	shstk_size;
+};
+
+#ifdef CONFIG_X86_CET
+int cet_setup_shstk(void);
+void cet_disable_shstk(void);
+void cet_free_shstk(struct task_struct *p);
+#else
+static inline void cet_disable_shstk(void) {}
+static inline void cet_free_shstk(struct task_struct *p) {}
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_X86_CET_H */
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index 5861d34f9771..6a9fb7f9bc01 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -62,6 +62,12 @@
 # define DISABLE_ENQCMD (1 << (X86_FEATURE_ENQCMD & 31))
 #endif
 
+#ifdef CONFIG_X86_SHADOW_STACK_USER
+#define DISABLE_SHSTK	0
+#else
+#define DISABLE_SHSTK	(1 << (X86_FEATURE_SHSTK & 31))
+#endif
+
 /*
  * Make sure to add features to the correct mask
  */
@@ -82,7 +88,7 @@
 #define DISABLED_MASK14	0
 #define DISABLED_MASK15	0
 #define DISABLED_MASK16	(DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP| \
-			 DISABLE_ENQCMD)
+			 DISABLE_ENQCMD|DISABLE_SHSTK)
 #define DISABLED_MASK17	0
 #define DISABLED_MASK18	0
 #define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 82a08b585818..2e0d9286f6cf 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -27,6 +27,7 @@ struct vm86;
 #include <asm/unwind_hints.h>
 #include <asm/vmxfeatures.h>
 #include <asm/vdso/processor.h>
+#include <asm/cet.h>
 
 #include <linux/personality.h>
 #include <linux/cache.h>
@@ -536,6 +537,10 @@ struct thread_struct {
 
 	unsigned int		sig_on_uaccess_err:1;
 
+#ifdef CONFIG_X86_CET
+	struct cet_status	cet;
+#endif
+
 	/* Floating point and extended processor state */
 	struct fpu		fpu;
 	/*
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 68608bd892c0..4a89d0f3792e 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -151,6 +151,8 @@ obj-$(CONFIG_UNWINDER_FRAME_POINTER)	+= unwind_frame.o
 obj-$(CONFIG_UNWINDER_GUESS)		+= unwind_guess.o
 
 obj-$(CONFIG_AMD_MEM_ENCRYPT)		+= sev-es.o
+obj-$(CONFIG_X86_CET)			+= cet.o
+
 ###
 # 64 bit specific files
 ifeq ($(CONFIG_X86_64),y)
diff --git a/arch/x86/kernel/cet.c b/arch/x86/kernel/cet.c
new file mode 100644
index 000000000000..f8b0a077594f
--- /dev/null
+++ b/arch/x86/kernel/cet.c
@@ -0,0 +1,147 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * cet.c - Control-flow Enforcement (CET)
+ *
+ * Copyright (c) 2019, Intel Corporation.
+ * Yu-cheng Yu <yu-cheng.yu@intel.com>
+ */
+
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/sched/signal.h>
+#include <linux/compat.h>
+#include <asm/msr.h>
+#include <asm/user.h>
+#include <asm/fpu/internal.h>
+#include <asm/fpu/xstate.h>
+#include <asm/fpu/types.h>
+#include <asm/cet.h>
+
+static void start_update_msrs(void)
+{
+	fpregs_lock();
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		__fpregs_load_activate();
+}
+
+static void end_update_msrs(void)
+{
+	fpregs_unlock();
+}
+
+static unsigned long cet_get_shstk_addr(void)
+{
+	struct fpu *fpu = &current->thread.fpu;
+	unsigned long ssp = 0;
+
+	fpregs_lock();
+
+	if (fpregs_state_valid(fpu, smp_processor_id())) {
+		rdmsrl(MSR_IA32_PL3_SSP, ssp);
+	} else {
+		struct cet_user_state *p;
+
+		p = get_xsave_addr(&fpu->state.xsave, XFEATURE_CET_USER);
+		if (p)
+			ssp = p->user_ssp;
+	}
+
+	fpregs_unlock();
+	return ssp;
+}
+
+static unsigned long alloc_shstk(unsigned long size, int flags)
+{
+	struct mm_struct *mm = current->mm;
+	unsigned long addr, populate;
+
+	/* VM_SHSTK requires MAP_ANONYMOUS, MAP_PRIVATE */
+	flags |= MAP_ANONYMOUS | MAP_PRIVATE;
+
+	mmap_write_lock(mm);
+	addr = do_mmap(NULL, 0, size, PROT_READ, flags, VM_SHSTK, 0,
+		       &populate, NULL);
+	mmap_write_unlock(mm);
+
+	if (populate)
+		mm_populate(addr, populate);
+
+	return addr;
+}
+
+int cet_setup_shstk(void)
+{
+	unsigned long addr, size;
+	struct cet_status *cet = &current->thread.cet;
+
+	if (!static_cpu_has(X86_FEATURE_SHSTK))
+		return -EOPNOTSUPP;
+
+	size = round_up(min(rlimit(RLIMIT_STACK), 1UL << 32), PAGE_SIZE);
+	addr = alloc_shstk(size, 0);
+
+	if (IS_ERR_VALUE(addr))
+		return PTR_ERR((void *)addr);
+
+	cet->shstk_base = addr;
+	cet->shstk_size = size;
+
+	start_update_msrs();
+	wrmsrl(MSR_IA32_PL3_SSP, addr + size);
+	wrmsrl(MSR_IA32_U_CET, CET_SHSTK_EN);
+	end_update_msrs();
+	return 0;
+}
+
+void cet_disable_shstk(void)
+{
+	struct cet_status *cet = &current->thread.cet;
+	u64 msr_val;
+
+	if (!static_cpu_has(X86_FEATURE_SHSTK) ||
+	    !cet->shstk_size || !cet->shstk_base)
+		return;
+
+	start_update_msrs();
+	rdmsrl(MSR_IA32_U_CET, msr_val);
+	wrmsrl(MSR_IA32_U_CET, msr_val & ~CET_SHSTK_EN);
+	wrmsrl(MSR_IA32_PL3_SSP, 0);
+	end_update_msrs();
+
+	cet_free_shstk(current);
+}
+
+void cet_free_shstk(struct task_struct *tsk)
+{
+	struct cet_status *cet = &tsk->thread.cet;
+
+	if (!static_cpu_has(X86_FEATURE_SHSTK) ||
+	    !cet->shstk_size || !cet->shstk_base)
+		return;
+
+	if (!tsk->mm || (tsk->mm != current->mm))
+		return;
+
+	while (1) {
+		int r;
+
+		r = vm_munmap(cet->shstk_base, cet->shstk_size);
+
+		/*
+		 * Retry if mmap_lock is not available.
+		 */
+		if (r == -EINTR) {
+			cond_resched();
+			continue;
+		}
+
+		WARN_ON_ONCE(r);
+		break;
+	}
+
+	cet->shstk_base = 0;
+	cet->shstk_size = 0;
+}
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 35ad8480c464..3d38ae02d9d3 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -57,6 +57,7 @@
 #include <asm/microcode_intel.h>
 #include <asm/intel-family.h>
 #include <asm/cpu_device_id.h>
+#include <asm/cet.h>
 #include <asm/uv/uv.h>
 
 #include "cpu.h"
@@ -510,6 +511,32 @@ static __init int setup_disable_pku(char *arg)
 __setup("nopku", setup_disable_pku);
 #endif /* CONFIG_X86_64 */
 
+static __always_inline void setup_cet(struct cpuinfo_x86 *c)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_SHSTK) &&
+	    !cpu_feature_enabled(X86_FEATURE_IBT))
+		return;
+
+	cr4_set_bits(X86_CR4_CET);
+}
+
+#ifdef CONFIG_X86_SHADOW_STACK_USER
+static __init int setup_disable_shstk(char *s)
+{
+	/* require an exact match without trailing characters */
+	if (s[0] != '\0')
+		return 0;
+
+	if (!boot_cpu_has(X86_FEATURE_SHSTK))
+		return 1;
+
+	setup_clear_cpu_cap(X86_FEATURE_SHSTK);
+	pr_info("x86: 'no_user_shstk' specified, disabling user Shadow Stack\n");
+	return 1;
+}
+__setup("no_user_shstk", setup_disable_shstk);
+#endif
+
 /*
  * Some CPU features depend on higher CPUID levels, which may not always
  * be available due to CPUID level capping or broken virtualization
@@ -1591,6 +1618,7 @@ static void identify_cpu(struct cpuinfo_x86 *c)
 
 	x86_init_rdrand(c);
 	setup_pku(c);
+	setup_cet(c);
 
 	/*
 	 * Clear/Set all flags overridden by options, need do it
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index ba4593a913fa..ff3b44d6740b 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -43,6 +43,7 @@
 #include <asm/io_bitmap.h>
 #include <asm/proto.h>
 #include <asm/frame.h>
+#include <asm/cet.h>
 
 #include "process.h"
 
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v15 21/26] x86/cet/shstk: Handle signals for shadow stack
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (19 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 20/26] x86/cet/shstk: User-mode shadow stack support Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 22/26] binfmt_elf: Define GNU_PROPERTY_X86_FEATURE_1_AND properties Yu-cheng Yu
                   ` (5 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

To deliver a signal, create a shadow stack restore token and push the token
and the signal restorer address onto the shadow stack.  For sigreturn, verify
the token and restore the shadow stack pointer from it.

Introduce WRUSS, a kernel-mode instruction that writes directly to the user
shadow stack.  It is used to construct the user shadow stack for signals as
described above.

Introduce a signal context extension struct 'sc_ext', which is used to save
shadow stack restore token address and WAIT_ENDBR status.  WAIT_ENDBR will
be introduced later in the Indirect Branch Tracking (IBT) series, but add
that into sc_ext now to keep the struct stable in case the IBT series is
applied later.
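
(Illustration only, not part of the patch.)  Right before the handler runs,
the user shadow stack looks roughly like this (higher addresses at the top;
the shadow stack grows down):

    |        ...        |  pre-signal shadow stack contents
    +-------------------+
    |   restore token   |  pre-signal SSP, bit 0 set for 64-bit mode;
    +-------------------+  the token's address is saved as sc_ext.ssp
    |  restorer address |  <- SSP when the handler starts
    +-------------------+

At sigreturn, cet_verify_rstor_token() checks the token's alignment, mode bit
and placement, and the shadow stack pointer is restored from the pre-signal
SSP recorded in the token.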

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/ia32/ia32_signal.c            |  17 +++
 arch/x86/include/asm/cet.h             |   8 ++
 arch/x86/include/asm/fpu/internal.h    |  10 ++
 arch/x86/include/asm/special_insns.h   |  32 ++++++
 arch/x86/include/uapi/asm/sigcontext.h |   9 ++
 arch/x86/kernel/cet.c                  | 152 +++++++++++++++++++++++++
 arch/x86/kernel/fpu/signal.c           | 100 ++++++++++++++++
 arch/x86/kernel/signal.c               |  10 ++
 8 files changed, 338 insertions(+)

diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
index 81cf22398cd1..cec9cf0a00cf 100644
--- a/arch/x86/ia32/ia32_signal.c
+++ b/arch/x86/ia32/ia32_signal.c
@@ -35,6 +35,7 @@
 #include <asm/sigframe.h>
 #include <asm/sighandling.h>
 #include <asm/smap.h>
+#include <asm/cet.h>
 
 static inline void reload_segments(struct sigcontext_32 *sc)
 {
@@ -205,6 +206,7 @@ static void __user *get_sigframe(struct ksignal *ksig, struct pt_regs *regs,
 				 void __user **fpstate)
 {
 	unsigned long sp, fx_aligned, math_size;
+	void __user *restorer = NULL;
 
 	/* Default to using normal stack */
 	sp = regs->sp;
@@ -218,8 +220,23 @@ static void __user *get_sigframe(struct ksignal *ksig, struct pt_regs *regs,
 		 ksig->ka.sa.sa_restorer)
 		sp = (unsigned long) ksig->ka.sa.sa_restorer;
 
+	if (ksig->ka.sa.sa_flags & SA_RESTORER) {
+		restorer = ksig->ka.sa.sa_restorer;
+	} else if (current->mm->context.vdso) {
+		if (ksig->ka.sa.sa_flags & SA_SIGINFO)
+			restorer = current->mm->context.vdso +
+				vdso_image_32.sym___kernel_rt_sigreturn;
+		else
+			restorer = current->mm->context.vdso +
+				vdso_image_32.sym___kernel_sigreturn;
+	}
+
 	sp = fpu__alloc_mathframe(sp, 1, &fx_aligned, &math_size);
 	*fpstate = (struct _fpstate_32 __user *) sp;
+
+	if (save_cet_to_sigframe(1, *fpstate, (unsigned long)restorer))
+		return (void __user *) -1L;
+
 	if (copy_fpstate_to_sigframe(*fpstate, (void __user *)fx_aligned,
 				     math_size) < 0)
 		return (void __user *) -1L;
diff --git a/arch/x86/include/asm/cet.h b/arch/x86/include/asm/cet.h
index 5750fbcbb952..73435856ce54 100644
--- a/arch/x86/include/asm/cet.h
+++ b/arch/x86/include/asm/cet.h
@@ -6,6 +6,8 @@
 #include <linux/types.h>
 
 struct task_struct;
+struct sc_ext;
+
 /*
  * Per-thread CET status
  */
@@ -18,9 +20,15 @@ struct cet_status {
 int cet_setup_shstk(void);
 void cet_disable_shstk(void);
 void cet_free_shstk(struct task_struct *p);
+int cet_verify_rstor_token(bool ia32, unsigned long ssp, unsigned long *new_ssp);
+void cet_restore_signal(struct sc_ext *sc);
+int cet_setup_signal(bool ia32, unsigned long rstor, struct sc_ext *sc);
 #else
 static inline void cet_disable_shstk(void) {}
 static inline void cet_free_shstk(struct task_struct *p) {}
+static inline void cet_restore_signal(struct sc_ext *sc) { return; }
+static inline int cet_setup_signal(bool ia32, unsigned long rstor,
+				   struct sc_ext *sc) { return -EINVAL; }
 #endif
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index 8d33ad80704f..c1dedec2281b 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -443,6 +443,16 @@ static inline void copy_kernel_to_fpregs(union fpregs_state *fpstate)
 	__copy_kernel_to_fpregs(fpstate, -1);
 }
 
+#ifdef CONFIG_X86_CET
+extern int save_cet_to_sigframe(int ia32, void __user *fp,
+				unsigned long restorer);
+#else
+static inline int save_cet_to_sigframe(int ia32, void __user *fp,
+				unsigned long restorer)
+{
+	return 0;
+}
+#endif
 extern int copy_fpstate_to_sigframe(void __user *buf, void __user *fp, int size);
 
 /*
diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
index cc177b4431ae..d979d0deb3ae 100644
--- a/arch/x86/include/asm/special_insns.h
+++ b/arch/x86/include/asm/special_insns.h
@@ -234,6 +234,38 @@ static inline void clwb(volatile void *__p)
 		: [pax] "a" (p));
 }
 
+#ifdef CONFIG_X86_CET
+#if defined(CONFIG_IA32_EMULATION) || defined(CONFIG_X86_X32)
+static inline int write_user_shstk_32(unsigned long addr, unsigned int val)
+{
+	asm_volatile_goto("1: wrussd %1, (%0)\n"
+			  _ASM_EXTABLE(1b, %l[fail])
+			  :: "r" (addr), "r" (val)
+			  :: fail);
+	return 0;
+fail:
+	return -EPERM;
+}
+#else
+static inline int write_user_shstk_32(unsigned long addr, unsigned int val)
+{
+	WARN_ONCE(1, "%s used but not supported.\n", __func__);
+	return -EFAULT;
+}
+#endif
+
+static inline int write_user_shstk_64(unsigned long addr, unsigned long val)
+{
+	asm_volatile_goto("1: wrussq %1, (%0)\n"
+			  _ASM_EXTABLE(1b, %l[fail])
+			  :: "r" (addr), "r" (val)
+			  :: fail);
+	return 0;
+fail:
+	return -EPERM;
+}
+#endif /* CONFIG_X86_CET */
+
 #define nop() asm volatile ("nop")
 
 static inline void serialize(void)
diff --git a/arch/x86/include/uapi/asm/sigcontext.h b/arch/x86/include/uapi/asm/sigcontext.h
index 844d60eb1882..cf2d55db3be4 100644
--- a/arch/x86/include/uapi/asm/sigcontext.h
+++ b/arch/x86/include/uapi/asm/sigcontext.h
@@ -196,6 +196,15 @@ struct _xstate {
 	/* New processor state extensions go here: */
 };
 
+/*
+ * Located at the end of sigcontext->fpstate, aligned to 8.
+ */
+struct sc_ext {
+	unsigned long total_size;
+	unsigned long ssp;
+	unsigned long wait_endbr;
+};
+
 /*
  * The 32-bit signal frame:
  */
diff --git a/arch/x86/kernel/cet.c b/arch/x86/kernel/cet.c
index f8b0a077594f..728d9baceb74 100644
--- a/arch/x86/kernel/cet.c
+++ b/arch/x86/kernel/cet.c
@@ -19,6 +19,8 @@
 #include <asm/fpu/xstate.h>
 #include <asm/fpu/types.h>
 #include <asm/cet.h>
+#include <asm/special_insns.h>
+#include <uapi/asm/sigcontext.h>
 
 static void start_update_msrs(void)
 {
@@ -72,6 +74,80 @@ static unsigned long alloc_shstk(unsigned long size, int flags)
 	return addr;
 }
 
+#define TOKEN_MODE_MASK	3UL
+#define TOKEN_MODE_64	1UL
+#define IS_TOKEN_64(token) (((token) & TOKEN_MODE_MASK) == TOKEN_MODE_64)
+#define IS_TOKEN_32(token) (((token) & TOKEN_MODE_MASK) == 0)
+
+/*
+ * Verify the restore token at the address of 'ssp' is
+ * valid and then set shadow stack pointer according to the
+ * token.
+ */
+int cet_verify_rstor_token(bool ia32, unsigned long ssp,
+			   unsigned long *new_ssp)
+{
+	unsigned long token;
+
+	*new_ssp = 0;
+
+	if (!IS_ALIGNED(ssp, 8))
+		return -EINVAL;
+
+	if (get_user(token, (unsigned long __user *)ssp))
+		return -EFAULT;
+
+	/* Is 64-bit mode flag correct? */
+	if (!ia32 && !IS_TOKEN_64(token))
+		return -EINVAL;
+	else if (ia32 && !IS_TOKEN_32(token))
+		return -EINVAL;
+
+	token &= ~TOKEN_MODE_MASK;
+
+	/*
+	 * Restore address properly aligned?
+	 */
+	if ((!ia32 && !IS_ALIGNED(token, 8)) || !IS_ALIGNED(token, 4))
+		return -EINVAL;
+
+	/*
+	 * Token was placed properly?
+	 */
+	if (((ALIGN_DOWN(token, 8) - 8) != ssp) || (token >= TASK_SIZE_MAX))
+		return -EINVAL;
+
+	*new_ssp = token;
+	return 0;
+}
+
+/*
+ * Create a restore token on the shadow stack.
+ * A token is always 8-byte and aligned to 8.
+ */
+static int create_rstor_token(bool ia32, unsigned long ssp,
+			      unsigned long *new_ssp)
+{
+	unsigned long addr;
+
+	*new_ssp = 0;
+
+	if ((!ia32 && !IS_ALIGNED(ssp, 8)) || !IS_ALIGNED(ssp, 4))
+		return -EINVAL;
+
+	addr = ALIGN_DOWN(ssp, 8) - 8;
+
+	/* Is the token for 64-bit? */
+	if (!ia32)
+		ssp |= TOKEN_MODE_64;
+
+	if (write_user_shstk_64(addr, ssp))
+		return -EFAULT;
+
+	*new_ssp = addr;
+	return 0;
+}
+
 int cet_setup_shstk(void)
 {
 	unsigned long addr, size;
@@ -145,3 +221,79 @@ void cet_free_shstk(struct task_struct *tsk)
 	cet->shstk_base = 0;
 	cet->shstk_size = 0;
 }
+
+/*
+ * Called from __fpu__restore_sig() and XSAVES buffer is protected by
+ * set_thread_flag(TIF_NEED_FPU_LOAD) in the slow path.
+ */
+void cet_restore_signal(struct sc_ext *sc_ext)
+{
+	struct cet_user_state *cet_user_state;
+	struct cet_status *cet = &current->thread.cet;
+	u64 msr_val = 0;
+
+	if (!static_cpu_has(X86_FEATURE_SHSTK))
+		return;
+
+	cet_user_state = get_xsave_addr(&current->thread.fpu.state.xsave,
+					XFEATURE_CET_USER);
+	if (!cet_user_state)
+		return;
+
+	if (cet->shstk_size) {
+		if (test_thread_flag(TIF_NEED_FPU_LOAD))
+			cet_user_state->user_ssp = sc_ext->ssp;
+		else
+			wrmsrl(MSR_IA32_PL3_SSP, sc_ext->ssp);
+
+		msr_val |= CET_SHSTK_EN;
+	}
+
+	if (test_thread_flag(TIF_NEED_FPU_LOAD))
+		cet_user_state->user_cet = msr_val;
+	else
+		wrmsrl(MSR_IA32_U_CET, msr_val);
+}
+
+/*
+ * Setup the shadow stack for the signal handler: first,
+ * create a restore token to keep track of the current ssp,
+ * and then the return address of the signal handler.
+ */
+int cet_setup_signal(bool ia32, unsigned long rstor_addr, struct sc_ext *sc_ext)
+{
+	struct cet_status *cet = &current->thread.cet;
+	unsigned long ssp = 0, new_ssp = 0;
+	int err;
+
+	if (cet->shstk_size) {
+		if (!rstor_addr)
+			return -EINVAL;
+
+		ssp = cet_get_shstk_addr();
+		err = create_rstor_token(ia32, ssp, &new_ssp);
+		if (err)
+			return err;
+
+		if (ia32) {
+			ssp = new_ssp - sizeof(u32);
+			err = write_user_shstk_32(ssp, (unsigned int)rstor_addr);
+		} else {
+			ssp = new_ssp - sizeof(u64);
+			err = write_user_shstk_64(ssp, rstor_addr);
+		}
+
+		if (err)
+			return err;
+
+		sc_ext->ssp = new_ssp;
+	}
+
+	if (ssp) {
+		start_update_msrs();
+		wrmsrl(MSR_IA32_PL3_SSP, ssp);
+		end_update_msrs();
+	}
+
+	return 0;
+}
diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index a4ec65317a7f..c0c2141cb4b3 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -52,6 +52,74 @@ static inline int check_for_xstate(struct fxregs_state __user *buf,
 	return 0;
 }
 
+#ifdef CONFIG_X86_CET
+int save_cet_to_sigframe(int ia32, void __user *fp, unsigned long restorer)
+{
+	int err = 0;
+
+	if (!current->thread.cet.shstk_size)
+		return 0;
+
+	if (fp) {
+		struct sc_ext ext = {0, 0, 0};
+
+		err = cet_setup_signal(ia32, restorer, &ext);
+		if (!err) {
+			void __user *p = fp;
+
+			ext.total_size = sizeof(ext);
+
+			if (ia32)
+				p += sizeof(struct fregs_state);
+
+			p += fpu_user_xstate_size + FP_XSTATE_MAGIC2_SIZE;
+			p = (void __user *)ALIGN((unsigned long)p, 8);
+
+			if (copy_to_user(p, &ext, sizeof(ext)))
+				return -EFAULT;
+		}
+	}
+
+	return err;
+}
+
+static int get_cet_from_sigframe(int ia32, void __user *fp, struct sc_ext *ext)
+{
+	int err = 0;
+
+	memset(ext, 0, sizeof(*ext));
+
+	if (!current->thread.cet.shstk_size)
+		return 0;
+
+	if (fp) {
+		void __user *p = fp;
+
+		if (ia32)
+			p += sizeof(struct fregs_state);
+
+		p += fpu_user_xstate_size + FP_XSTATE_MAGIC2_SIZE;
+		p = (void __user *)ALIGN((unsigned long)p, 8);
+
+		if (copy_from_user(ext, p, sizeof(*ext)))
+			return -EFAULT;
+
+		if (ext->total_size != sizeof(*ext))
+			return -EFAULT;
+
+		if (current->thread.cet.shstk_size)
+			err = cet_verify_rstor_token(ia32, ext->ssp, &ext->ssp);
+	}
+
+	return err;
+}
+#else
+static int get_cet_from_sigframe(int ia32, void __user *fp, struct sc_ext *ext)
+{
+	return 0;
+}
+#endif
+
 /*
  * Signal frame handlers.
  */
@@ -295,6 +363,7 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 	struct task_struct *tsk = current;
 	struct fpu *fpu = &tsk->thread.fpu;
 	struct user_i387_ia32_struct env;
+	struct sc_ext sc_ext;
 	u64 user_xfeatures = 0;
 	int fx_only = 0;
 	int ret = 0;
@@ -335,6 +404,10 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 	if ((unsigned long)buf_fx % 64)
 		fx_only = 1;
 
+	ret = get_cet_from_sigframe(ia32_fxstate, buf, &sc_ext);
+	if (ret)
+		return ret;
+
 	if (!ia32_fxstate) {
 		/*
 		 * Attempt to restore the FPU registers directly from user
@@ -349,6 +422,8 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 		pagefault_enable();
 		if (!ret) {
 
+			cet_restore_signal(&sc_ext);
+
 			/*
 			 * Restore supervisor states: previous context switch
 			 * etc has done XSAVES and saved the supervisor states
@@ -423,6 +498,8 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 		if (unlikely(init_bv))
 			copy_kernel_to_xregs(&init_fpstate.xsave, init_bv);
 
+		cet_restore_signal(&sc_ext);
+
 		/*
 		 * Restore previously saved supervisor xstates along with
 		 * copied-in user xstates.
@@ -491,12 +568,35 @@ int fpu__restore_sig(void __user *buf, int ia32_frame)
 	return __fpu__restore_sig(buf, buf_fx, size);
 }
 
+#ifdef CONFIG_X86_CET
+static unsigned long fpu__alloc_sigcontext_ext(unsigned long sp)
+{
+	struct cet_status *cet = &current->thread.cet;
+
+	/*
+	 * sigcontext_ext is at: fpu + fpu_user_xstate_size +
+	 * FP_XSTATE_MAGIC2_SIZE, then aligned to 8.
+	 */
+	if (cet->shstk_size)
+		sp -= (sizeof(struct sc_ext) + 8);
+
+	return sp;
+}
+#else
+static unsigned long fpu__alloc_sigcontext_ext(unsigned long sp)
+{
+	return sp;
+}
+#endif
+
 unsigned long
 fpu__alloc_mathframe(unsigned long sp, int ia32_frame,
 		     unsigned long *buf_fx, unsigned long *size)
 {
 	unsigned long frame_size = xstate_sigframe_size();
 
+	sp = fpu__alloc_sigcontext_ext(sp);
+
 	*buf_fx = sp = round_down(sp - frame_size, 64);
 	if (ia32_frame && use_fxsr()) {
 		frame_size += sizeof(struct fregs_state);
diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
index be0d7d4152ec..f39335ed4f7e 100644
--- a/arch/x86/kernel/signal.c
+++ b/arch/x86/kernel/signal.c
@@ -46,6 +46,7 @@
 #include <asm/syscall.h>
 #include <asm/sigframe.h>
 #include <asm/signal.h>
+#include <asm/cet.h>
 
 #ifdef CONFIG_X86_64
 /*
@@ -239,6 +240,9 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
 	unsigned long buf_fx = 0;
 	int onsigstack = on_sig_stack(sp);
 	int ret;
+#ifdef CONFIG_X86_64
+	void __user *restorer = NULL;
+#endif
 
 	/* redzone */
 	if (IS_ENABLED(CONFIG_X86_64))
@@ -270,6 +274,12 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
 	if (onsigstack && !likely(on_sig_stack(sp)))
 		return (void __user *)-1L;
 
+#ifdef CONFIG_X86_64
+	if (ka->sa.sa_flags & SA_RESTORER)
+		restorer = ka->sa.sa_restorer;
+	ret = save_cet_to_sigframe(0, *fpstate, (unsigned long)restorer);
+#endif
+
 	/* save i387 and extended state */
 	ret = copy_fpstate_to_sigframe(*fpstate, (void __user *)buf_fx, math_size);
 	if (ret < 0)
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v15 22/26] binfmt_elf: Define GNU_PROPERTY_X86_FEATURE_1_AND properties
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (20 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 21/26] x86/cet/shstk: Handle signals for shadow stack Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 23/26] ELF: Introduce arch_setup_elf_property() Yu-cheng Yu
                   ` (4 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

An ELF file's .note.gnu.property indicates architecture features of the
file.  Introduce feature definitions for Shadow Stack and Indirect Branch
Tracking.

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 include/uapi/linux/elf.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/include/uapi/linux/elf.h b/include/uapi/linux/elf.h
index 30f68b42eeb5..7f0e46780a35 100644
--- a/include/uapi/linux/elf.h
+++ b/include/uapi/linux/elf.h
@@ -455,4 +455,13 @@ typedef struct elf64_note {
 /* Bits for GNU_PROPERTY_AARCH64_FEATURE_1_BTI */
 #define GNU_PROPERTY_AARCH64_FEATURE_1_BTI	(1U << 0)
 
+/* .note.gnu.property types for x86: */
+#define GNU_PROPERTY_X86_FEATURE_1_AND		0xc0000002
+
+/* Bits for GNU_PROPERTY_X86_FEATURE_1_AND */
+#define GNU_PROPERTY_X86_FEATURE_1_IBT		0x00000001
+#define GNU_PROPERTY_X86_FEATURE_1_SHSTK	0x00000002
+#define GNU_PROPERTY_X86_FEATURE_1_INVAL ~(GNU_PROPERTY_X86_FEATURE_1_IBT | \
+					    GNU_PROPERTY_X86_FEATURE_1_SHSTK)
+
 #endif /* _UAPI_LINUX_ELF_H */
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v15 23/26] ELF: Introduce arch_setup_elf_property()
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (21 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 22/26] binfmt_elf: Define GNU_PROPERTY_X86_FEATURE_1_AND properties Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 24/26] x86/cet/shstk: Handle thread shadow stack Yu-cheng Yu
                   ` (3 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu, Mark Brown, Catalin Marinas

An ELF file's .note.gnu.property indicates arch features supported by the
file.  These features are extracted by arch_parse_elf_property() and stored
in 'arch_elf_state'.  Introduce arch_setup_elf_property() for enabling such
features.  The first use-case of this function is shadow stack.

ARM64 is the other arch that has ARCH_USE_GNU_PROPERTY and
arch_parse_elf_property().  Add arch_setup_elf_property() for it.
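
For context, the resulting flow in fs/binfmt_elf.c is roughly the following
(simplified sketch, not an exact call sequence):

    load_elf_binary()
        parse_elf_properties()        /* calls arch_parse_elf_property()   */
                                      /* and fills 'arch_elf_state'        */
        ...
        set_binfmt(&elf_format);
        arch_setup_elf_property(&arch_state);  /* acts on the recorded     */
                                               /* bits, e.g. calls         */
                                               /* cet_setup_shstk()        */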

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Dave Martin <Dave.Martin@arm.com>
---
 arch/arm64/include/asm/elf.h |  5 +++++
 arch/x86/Kconfig             |  2 ++
 arch/x86/include/asm/elf.h   | 13 +++++++++++++
 arch/x86/kernel/process_64.c | 32 ++++++++++++++++++++++++++++++++
 fs/binfmt_elf.c              |  4 ++++
 include/linux/elf.h          |  6 ++++++
 6 files changed, 62 insertions(+)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 8d1c8dcb87fd..d37bc7915935 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -281,6 +281,11 @@ static inline int arch_parse_elf_property(u32 type, const void *data,
 	return 0;
 }
 
+static inline int arch_setup_elf_property(struct arch_elf_state *arch)
+{
+	return 0;
+}
+
 static inline int arch_elf_pt_proc(void *ehdr, void *phdr,
 				   struct file *f, bool is_interp,
 				   struct arch_elf_state *state)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 960993862b96..18fd8cb549ad 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1953,6 +1953,8 @@ config X86_SHADOW_STACK_USER
 	select X86_CET
 	select ARCH_MAYBE_MKWRITE
 	select ARCH_HAS_SHADOW_STACK
+	select ARCH_USE_GNU_PROPERTY
+	select ARCH_BINFMT_ELF_STATE
 	help
 	  Shadow Stacks provides protection against program stack
 	  corruption.  It's a hardware feature.  This only matters
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index b9a5d488f1a5..0e1be2a13359 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -385,6 +385,19 @@ extern int compat_arch_setup_additional_pages(struct linux_binprm *bprm,
 					      int uses_interp);
 #define compat_arch_setup_additional_pages compat_arch_setup_additional_pages
 
+#ifdef CONFIG_ARCH_BINFMT_ELF_STATE
+struct arch_elf_state {
+	unsigned int gnu_property;
+};
+
+#define INIT_ARCH_ELF_STATE {	\
+	.gnu_property = 0,	\
+}
+
+#define arch_elf_pt_proc(ehdr, phdr, elf, interp, state) (0)
+#define arch_check_elf(ehdr, interp, interp_ehdr, state) (0)
+#endif
+
 /* Do not change the values. See get_align_mask() */
 enum align_flags {
 	ALIGN_VA_32	= BIT(0),
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index df342bedea88..7c4687a0f001 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -837,3 +837,35 @@ unsigned long KSTK_ESP(struct task_struct *task)
 {
 	return task_pt_regs(task)->sp;
 }
+
+#ifdef CONFIG_ARCH_USE_GNU_PROPERTY
+int arch_parse_elf_property(u32 type, const void *data, size_t datasz,
+			     bool compat, struct arch_elf_state *state)
+{
+	if (type != GNU_PROPERTY_X86_FEATURE_1_AND)
+		return 0;
+
+	if (datasz != sizeof(unsigned int))
+		return -ENOEXEC;
+
+	state->gnu_property = *(unsigned int *)data;
+	return 0;
+}
+
+int arch_setup_elf_property(struct arch_elf_state *state)
+{
+	int r = 0;
+
+	if (!IS_ENABLED(CONFIG_X86_CET))
+		return r;
+
+	memset(&current->thread.cet, 0, sizeof(struct cet_status));
+
+	if (static_cpu_has(X86_FEATURE_SHSTK)) {
+		if (state->gnu_property & GNU_PROPERTY_X86_FEATURE_1_SHSTK)
+			r = cet_setup_shstk();
+	}
+
+	return r;
+}
+#endif
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index fa50e8936f5f..1ae32cc0f61b 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -1245,6 +1245,10 @@ static int load_elf_binary(struct linux_binprm *bprm)
 
 	set_binfmt(&elf_format);
 
+	retval = arch_setup_elf_property(&arch_state);
+	if (retval < 0)
+		goto out;
+
 #ifdef ARCH_HAS_SETUP_ADDITIONAL_PAGES
 	retval = arch_setup_additional_pages(bprm, !!interpreter);
 	if (retval < 0)
diff --git a/include/linux/elf.h b/include/linux/elf.h
index 5d5b0321da0b..4827695ca415 100644
--- a/include/linux/elf.h
+++ b/include/linux/elf.h
@@ -82,9 +82,15 @@ static inline int arch_parse_elf_property(u32 type, const void *data,
 {
 	return 0;
 }
+
+static inline int arch_setup_elf_property(struct arch_elf_state *arch)
+{
+	return 0;
+}
 #else
 extern int arch_parse_elf_property(u32 type, const void *data, size_t datasz,
 				   bool compat, struct arch_elf_state *arch);
+extern int arch_setup_elf_property(struct arch_elf_state *arch);
 #endif
 
 #ifdef CONFIG_ARCH_HAVE_ELF_PROT
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v15 24/26] x86/cet/shstk: Handle thread shadow stack
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (22 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 23/26] ELF: Introduce arch_setup_elf_property() Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 25/26] x86/cet/shstk: Add arch_prctl functions for " Yu-cheng Yu
                   ` (2 subsequent siblings)
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

The kernel allocates (and frees on thread exit) a new shadow stack for a
pthread child.

    It is possible for the kernel to complete the clone syscall and set the
    child's shadow stack pointer to NULL and let the child thread allocate
    a shadow stack for itself.  There are two issues with this approach: it
    is not compatible with existing code that makes the clone syscall inline,
    and it cannot handle signals before the child has successfully allocated
    its own shadow stack.

A 64-bit shadow stack has a size of min(RLIMIT_STACK, 4 GB).  A compat-mode
thread shadow stack has a size of 1/4 min(RLIMIT_STACK, 4 GB).  This allows
more threads to run in a 32-bit address space.
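
For example, with a typical RLIMIT_STACK of 8 MB, a 64-bit pthread gets an
8 MB shadow stack and a compat-mode pthread gets a 2 MB shadow stack.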

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/include/asm/cet.h         |  3 ++
 arch/x86/include/asm/mmu_context.h |  3 ++
 arch/x86/kernel/cet.c              | 44 ++++++++++++++++++++++++++++++
 arch/x86/kernel/process.c          |  7 +++++
 4 files changed, 57 insertions(+)

diff --git a/arch/x86/include/asm/cet.h b/arch/x86/include/asm/cet.h
index 73435856ce54..ec4b5e62d0ce 100644
--- a/arch/x86/include/asm/cet.h
+++ b/arch/x86/include/asm/cet.h
@@ -18,12 +18,15 @@ struct cet_status {
 
 #ifdef CONFIG_X86_CET
 int cet_setup_shstk(void);
+int cet_setup_thread_shstk(struct task_struct *p, unsigned long clone_flags);
 void cet_disable_shstk(void);
 void cet_free_shstk(struct task_struct *p);
 int cet_verify_rstor_token(bool ia32, unsigned long ssp, unsigned long *new_ssp);
 void cet_restore_signal(struct sc_ext *sc);
 int cet_setup_signal(bool ia32, unsigned long rstor, struct sc_ext *sc);
 #else
+static inline int cet_setup_thread_shstk(struct task_struct *p,
+					 unsigned long clone_flags) { return 0; }
 static inline void cet_disable_shstk(void) {}
 static inline void cet_free_shstk(struct task_struct *p) {}
 static inline void cet_restore_signal(struct sc_ext *sc) { return; }
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index d98016b83755..ceb593e405e1 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -11,6 +11,7 @@
 
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
+#include <asm/cet.h>
 #include <asm/debugreg.h>
 
 extern atomic64_t last_mm_ctx_id;
@@ -142,6 +143,8 @@ do {						\
 #else
 #define deactivate_mm(tsk, mm)			\
 do {						\
+	if (!tsk->vfork_done)			\
+		cet_free_shstk(tsk);		\
 	load_gs_index(0);			\
 	loadsegment(fs, 0);			\
 } while (0)
diff --git a/arch/x86/kernel/cet.c b/arch/x86/kernel/cet.c
index 728d9baceb74..d57f3a433af9 100644
--- a/arch/x86/kernel/cet.c
+++ b/arch/x86/kernel/cet.c
@@ -172,6 +172,50 @@ int cet_setup_shstk(void)
 	return 0;
 }
 
+int cet_setup_thread_shstk(struct task_struct *tsk, unsigned long clone_flags)
+{
+	unsigned long addr, size;
+	struct cet_user_state *state;
+	struct cet_status *cet = &tsk->thread.cet;
+
+	if (!cet->shstk_size)
+		return 0;
+
+	if ((clone_flags & (CLONE_VFORK | CLONE_VM)) != CLONE_VM)
+		return 0;
+
+	state = get_xsave_addr(&tsk->thread.fpu.state.xsave,
+			       XFEATURE_CET_USER);
+
+	if (!state)
+		return -EINVAL;
+
+	/* Cap shadow stack size to 4 GB */
+	size = min(rlimit(RLIMIT_STACK), 1UL << 32);
+
+	/*
+	 * Compat-mode pthreads share a limited address space.
+	 * If each function call takes an average of four slots
+	 * stack space, we need 1/4 of stack size for shadow stack.
+	 */
+	if (in_compat_syscall())
+		size /= 4;
+	size = round_up(size, PAGE_SIZE);
+	addr = alloc_shstk(size, 0);
+
+	if (IS_ERR_VALUE(addr)) {
+		cet->shstk_base = 0;
+		cet->shstk_size = 0;
+		return PTR_ERR((void *)addr);
+	}
+
+	fpu__prepare_write(&tsk->thread.fpu);
+	state->user_ssp = (u64)(addr + size);
+	cet->shstk_base = addr;
+	cet->shstk_size = size;
+	return 0;
+}
+
 void cet_disable_shstk(void)
 {
 	struct cet_status *cet = &current->thread.cet;
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index ff3b44d6740b..67632ba893b7 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -110,6 +110,7 @@ void exit_thread(struct task_struct *tsk)
 
 	free_vm86(t);
 
+	cet_free_shstk(tsk);
 	fpu__drop(fpu);
 }
 
@@ -182,6 +183,12 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
 	if (clone_flags & CLONE_SETTLS)
 		ret = set_new_tls(p, tls);
 
+#ifdef CONFIG_X86_64
+	/* Allocate a new shadow stack for pthread */
+	if (!ret)
+		ret = cet_setup_thread_shstk(p, clone_flags);
+#endif
+
 	if (!ret && unlikely(test_tsk_thread_flag(current, TIF_IO_BITMAP)))
 		io_bitmap_share(p);
 
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v15 25/26] x86/cet/shstk: Add arch_prctl functions for shadow stack
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (23 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 24/26] x86/cet/shstk: Handle thread shadow stack Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-10 16:22 ` [PATCH v15 26/26] mm: Introduce PROT_SHSTK " Yu-cheng Yu
  2020-11-27  9:29 ` [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Balbir Singh
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

arch_prctl(ARCH_X86_CET_STATUS, u64 *args)
    Get CET feature status.

    The parameter 'args' is a pointer to a user buffer.  The kernel returns
    the following information:

    *args = shadow stack/IBT status
    *(args + 1) = shadow stack base address
    *(args + 2) = shadow stack size

arch_prctl(ARCH_X86_CET_DISABLE, unsigned int features)
    Disable CET features specified in 'features'.  Return -EPERM if CET is
    locked.

arch_prctl(ARCH_X86_CET_LOCK)
    Lock in CET features.

Also change do_arch_prctl_common()'s parameter 'cpuid_enabled' to
'arg2', as it is now also passed to prctl_cet().
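
(Not part of the patch: a minimal user-space sketch of querying the shadow
stack status with the interface proposed above.  The ARCH_X86_CET_STATUS
value comes from this series and is not an existing ABI.)

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #define ARCH_X86_CET_STATUS 0x3001      /* from this series */

    int main(void)
    {
            uint64_t buf[3] = { 0, 0, 0 };

            /* arg2 is a pointer to a 3 * u64 user buffer, see above */
            if (syscall(SYS_arch_prctl, ARCH_X86_CET_STATUS, buf)) {
                    perror("arch_prctl");
                    return 1;
            }

            printf("features %#llx, shstk base %#llx, size %#llx\n",
                   (unsigned long long)buf[0],
                   (unsigned long long)buf[1],
                   (unsigned long long)buf[2]);
            return 0;
    }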

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/include/asm/cet.h        |  3 ++
 arch/x86/include/uapi/asm/prctl.h |  4 ++
 arch/x86/kernel/Makefile          |  2 +-
 arch/x86/kernel/cet_prctl.c       | 68 +++++++++++++++++++++++++++++++
 arch/x86/kernel/process.c         |  6 +--
 5 files changed, 79 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/kernel/cet_prctl.c

diff --git a/arch/x86/include/asm/cet.h b/arch/x86/include/asm/cet.h
index ec4b5e62d0ce..16870e5bc8eb 100644
--- a/arch/x86/include/asm/cet.h
+++ b/arch/x86/include/asm/cet.h
@@ -14,9 +14,11 @@ struct sc_ext;
 struct cet_status {
 	unsigned long	shstk_base;
 	unsigned long	shstk_size;
+	unsigned int	locked:1;
 };
 
 #ifdef CONFIG_X86_CET
+int prctl_cet(int option, u64 arg2);
 int cet_setup_shstk(void);
 int cet_setup_thread_shstk(struct task_struct *p, unsigned long clone_flags);
 void cet_disable_shstk(void);
@@ -25,6 +27,7 @@ int cet_verify_rstor_token(bool ia32, unsigned long ssp, unsigned long *new_ssp)
 void cet_restore_signal(struct sc_ext *sc);
 int cet_setup_signal(bool ia32, unsigned long rstor, struct sc_ext *sc);
 #else
+static inline int prctl_cet(int option, u64 arg2) { return -EINVAL; }
 static inline int cet_setup_thread_shstk(struct task_struct *p,
 					 unsigned long clone_flags) { return 0; }
 static inline void cet_disable_shstk(void) {}
diff --git a/arch/x86/include/uapi/asm/prctl.h b/arch/x86/include/uapi/asm/prctl.h
index 5a6aac9fa41f..9245bf629120 100644
--- a/arch/x86/include/uapi/asm/prctl.h
+++ b/arch/x86/include/uapi/asm/prctl.h
@@ -14,4 +14,8 @@
 #define ARCH_MAP_VDSO_32	0x2002
 #define ARCH_MAP_VDSO_64	0x2003
 
+#define ARCH_X86_CET_STATUS		0x3001
+#define ARCH_X86_CET_DISABLE		0x3002
+#define ARCH_X86_CET_LOCK		0x3003
+
 #endif /* _ASM_X86_PRCTL_H */
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 4a89d0f3792e..5d04c4f21485 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -151,7 +151,7 @@ obj-$(CONFIG_UNWINDER_FRAME_POINTER)	+= unwind_frame.o
 obj-$(CONFIG_UNWINDER_GUESS)		+= unwind_guess.o
 
 obj-$(CONFIG_AMD_MEM_ENCRYPT)		+= sev-es.o
-obj-$(CONFIG_X86_CET)			+= cet.o
+obj-$(CONFIG_X86_CET)			+= cet.o cet_prctl.o
 
 ###
 # 64 bit specific files
diff --git a/arch/x86/kernel/cet_prctl.c b/arch/x86/kernel/cet_prctl.c
new file mode 100644
index 000000000000..bd5ad11763e4
--- /dev/null
+++ b/arch/x86/kernel/cet_prctl.c
@@ -0,0 +1,68 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/errno.h>
+#include <linux/uaccess.h>
+#include <linux/prctl.h>
+#include <linux/compat.h>
+#include <linux/mman.h>
+#include <linux/elfcore.h>
+#include <asm/processor.h>
+#include <asm/prctl.h>
+#include <asm/cet.h>
+
+/* See Documentation/x86/intel_cet.rst. */
+
+static int copy_status_to_user(struct cet_status *cet, u64 arg2)
+{
+	u64 buf[3] = {0, 0, 0};
+
+	if (cet->shstk_size) {
+		buf[0] |= GNU_PROPERTY_X86_FEATURE_1_SHSTK;
+		buf[1] = (u64)cet->shstk_base;
+		buf[2] = (u64)cet->shstk_size;
+	}
+
+	return copy_to_user((u64 __user *)arg2, buf, sizeof(buf));
+}
+
+int prctl_cet(int option, u64 arg2)
+{
+	struct cet_status *cet;
+	unsigned int features;
+
+	/*
+	 * GLIBC's ENOTSUPP == EOPNOTSUPP == 95, and it does not recognize
+	 * the kernel's ENOTSUPP (524).  So return EOPNOTSUPP here.
+	 */
+	if (!IS_ENABLED(CONFIG_X86_CET))
+		return -EOPNOTSUPP;
+
+	cet = &current->thread.cet;
+
+	if (option == ARCH_X86_CET_STATUS)
+		return copy_status_to_user(cet, arg2);
+
+	if (!static_cpu_has(X86_FEATURE_SHSTK))
+		return -EOPNOTSUPP;
+
+	switch (option) {
+	case ARCH_X86_CET_DISABLE:
+		if (cet->locked)
+			return -EPERM;
+
+		features = (unsigned int)arg2;
+
+		if (features & GNU_PROPERTY_X86_FEATURE_1_INVAL)
+			return -EINVAL;
+		if (features & GNU_PROPERTY_X86_FEATURE_1_SHSTK)
+			cet_disable_shstk();
+		return 0;
+
+	case ARCH_X86_CET_LOCK:
+		cet->locked = 1;
+		return 0;
+
+	default:
+		return -ENOSYS;
+	}
+}
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 67632ba893b7..33cb6da22ef0 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -977,14 +977,14 @@ unsigned long get_wchan(struct task_struct *p)
 }
 
 long do_arch_prctl_common(struct task_struct *task, int option,
-			  unsigned long cpuid_enabled)
+			  unsigned long arg2)
 {
 	switch (option) {
 	case ARCH_GET_CPUID:
 		return get_cpuid_mode();
 	case ARCH_SET_CPUID:
-		return set_cpuid_mode(task, cpuid_enabled);
+		return set_cpuid_mode(task, arg2);
 	}
 
-	return -EINVAL;
+	return prctl_cet(option, arg2);
 }
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH v15 26/26] mm: Introduce PROT_SHSTK for shadow stack
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (24 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 25/26] x86/cet/shstk: Add arch_prctl functions for " Yu-cheng Yu
@ 2020-11-10 16:22 ` Yu-cheng Yu
  2020-11-27  9:29 ` [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Balbir Singh
  26 siblings, 0 replies; 60+ messages in thread
From: Yu-cheng Yu @ 2020-11-10 16:22 UTC (permalink / raw)
  To: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
	Dave Hansen, Eugene Syromiatnikov, Florian Weimer, H.J. Lu,
	Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu
  Cc: Yu-cheng Yu

There are three possible options to create a shadow stack allocation API:
an arch_prctl, a new syscall, or adding PROT_SHSTK to mmap()/mprotect().
Each has its advantages and compromises.

An arch_prctl() is the least intrusive.  However, the existing x86
arch_prctl() takes only two parameters.  Multiple parameters must be
passed in a memory buffer.  There is a proposal to pass more parameters in
registers [1], but no active discussion on that.

A new syscall minimizes compatibility issues and offers an extensible
framework to other architectures, but it would likely overlap with
mmap()/mprotect().

The introduction of PROT_SHSTK to mmap()/mprotect() takes advantage of
existing APIs.  The x86-specific PROT_SHSTK is translated to VM_SHSTK and
a shadow stack mapping is created without reinventing the wheel.  There are
potential pitfalls though.  The most obvious one would be using this as a
bypass to shadow stack protection.  However, the attacker would have to get
to the syscall first.

Since arch_calc_vm_prot_bits() is modified, I have moved arch_vm_get_page_prot()
and arch_calc_vm_prot_bits() to arch/x86/include/asm/mman.h.  This is more
consistent with other architectures.

[1] https://lore.kernel.org/lkml/20200828121624.108243-1-hjl.tools@gmail.com/
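
(Illustration only, not part of the patch: a runtime that already runs with
shadow stack enabled could create an additional shadow stack mapping roughly
as below.  PROT_SHSTK (0x10) is defined by this series and is not an existing
ABI.)

    #include <stdio.h>
    #include <sys/mman.h>

    #ifndef PROT_SHSTK
    #define PROT_SHSTK 0x10                 /* from this series */
    #endif

    int main(void)
    {
            void *ssp;

            /* Anonymous and private; PROT_WRITE must not be set. */
            ssp = mmap(NULL, 0x200000, PROT_READ | PROT_SHSTK,
                       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
            if (ssp == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            printf("shadow stack mapping at %p\n", ssp);
            return 0;
    }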

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/include/asm/mman.h      | 83 ++++++++++++++++++++++++++++++++
 arch/x86/include/uapi/asm/mman.h | 28 ++---------
 include/linux/mm.h               |  1 +
 mm/mmap.c                        |  8 ++-
 4 files changed, 95 insertions(+), 25 deletions(-)
 create mode 100644 arch/x86/include/asm/mman.h

diff --git a/arch/x86/include/asm/mman.h b/arch/x86/include/asm/mman.h
new file mode 100644
index 000000000000..0dcaef6f889a
--- /dev/null
+++ b/arch/x86/include/asm/mman.h
@@ -0,0 +1,83 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_X86_MMAN_H
+#define _ASM_X86_MMAN_H
+
+#include <linux/mm.h>
+#include <uapi/asm/mman.h>
+
+#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+/*
+ * Take the 4 protection key bits out of the vma->vm_flags
+ * value and turn them in to the bits that we can put in
+ * to a pte.
+ *
+ * Only override these if Protection Keys are available
+ * (which is only on 64-bit).
+ */
+#define arch_vm_get_page_prot(vm_flags)	__pgprot(	\
+		((vm_flags) & VM_PKEY_BIT0 ? _PAGE_PKEY_BIT0 : 0) |	\
+		((vm_flags) & VM_PKEY_BIT1 ? _PAGE_PKEY_BIT1 : 0) |	\
+		((vm_flags) & VM_PKEY_BIT2 ? _PAGE_PKEY_BIT2 : 0) |	\
+		((vm_flags) & VM_PKEY_BIT3 ? _PAGE_PKEY_BIT3 : 0))
+
+#define pkey_vm_prot_bits(prot, key) (			\
+		((key) & 0x1 ? VM_PKEY_BIT0 : 0) |      \
+		((key) & 0x2 ? VM_PKEY_BIT1 : 0) |      \
+		((key) & 0x4 ? VM_PKEY_BIT2 : 0) |      \
+		((key) & 0x8 ? VM_PKEY_BIT3 : 0))
+#else
+#define pkey_vm_prot_bits(prot, key) (0)
+#endif
+
+static inline unsigned long arch_calc_vm_prot_bits(unsigned long prot,
+	unsigned long pkey)
+{
+	unsigned long vm_prot_bits = pkey_vm_prot_bits(prot, pkey);
+
+	if (!(prot & PROT_WRITE) && (prot & PROT_SHSTK))
+		vm_prot_bits |= VM_SHSTK;
+
+	return vm_prot_bits;
+}
+#define arch_calc_vm_prot_bits(prot, pkey) arch_calc_vm_prot_bits(prot, pkey)
+
+#ifdef CONFIG_X86_SHADOW_STACK_USER
+static inline bool arch_validate_prot(unsigned long prot, unsigned long addr)
+{
+	unsigned long valid = PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM;
+
+	if (prot & ~(valid | PROT_SHSTK))
+		return false;
+
+	if (prot & PROT_SHSTK) {
+		struct vm_area_struct *vma;
+
+		if (!current->thread.cet.shstk_size)
+			return false;
+
+		/*
+		 * A shadow stack mapping is indirectly writable by only
+		 * the CALL and WRUSS instructions, but not other write
+		 * instructions.  PROT_SHSTK and PROT_WRITE are mutually
+		 * exclusive.
+		 */
+		if (prot & PROT_WRITE)
+			return false;
+
+		vma = find_vma(current->mm, addr);
+		if (!vma)
+			return false;
+
+		/*
+		 * Shadow stack cannot be backed by a file or shared.
+		 */
+		if (vma->vm_file || (vma->vm_flags & VM_SHARED))
+			return false;
+	}
+
+	return true;
+}
+#define arch_validate_prot arch_validate_prot
+#endif
+
+#endif /* _ASM_X86_MMAN_H */
diff --git a/arch/x86/include/uapi/asm/mman.h b/arch/x86/include/uapi/asm/mman.h
index d4a8d0424bfb..39bb7db344a6 100644
--- a/arch/x86/include/uapi/asm/mman.h
+++ b/arch/x86/include/uapi/asm/mman.h
@@ -1,31 +1,11 @@
 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-#ifndef _ASM_X86_MMAN_H
-#define _ASM_X86_MMAN_H
+#ifndef _UAPI_ASM_X86_MMAN_H
+#define _UAPI_ASM_X86_MMAN_H
 
 #define MAP_32BIT	0x40		/* only give out 32bit addresses */
 
-#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
-/*
- * Take the 4 protection key bits out of the vma->vm_flags
- * value and turn them in to the bits that we can put in
- * to a pte.
- *
- * Only override these if Protection Keys are available
- * (which is only on 64-bit).
- */
-#define arch_vm_get_page_prot(vm_flags)	__pgprot(	\
-		((vm_flags) & VM_PKEY_BIT0 ? _PAGE_PKEY_BIT0 : 0) |	\
-		((vm_flags) & VM_PKEY_BIT1 ? _PAGE_PKEY_BIT1 : 0) |	\
-		((vm_flags) & VM_PKEY_BIT2 ? _PAGE_PKEY_BIT2 : 0) |	\
-		((vm_flags) & VM_PKEY_BIT3 ? _PAGE_PKEY_BIT3 : 0))
-
-#define arch_calc_vm_prot_bits(prot, key) (		\
-		((key) & 0x1 ? VM_PKEY_BIT0 : 0) |      \
-		((key) & 0x2 ? VM_PKEY_BIT1 : 0) |      \
-		((key) & 0x4 ? VM_PKEY_BIT2 : 0) |      \
-		((key) & 0x8 ? VM_PKEY_BIT3 : 0))
-#endif
+#define PROT_SHSTK	0x10		/* shadow stack pages */
 
 #include <asm-generic/mman.h>
 
-#endif /* _ASM_X86_MMAN_H */
+#endif /* _UAPI_ASM_X86_MMAN_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef1bd7c7e88b..4c5ff13ed332 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -334,6 +334,7 @@ extern unsigned int kobjsize(const void *objp);
 
 #if defined(CONFIG_X86)
 # define VM_PAT		VM_ARCH_1	/* PAT reserves whole VMA at once (x86) */
+# define VM_ARCH_CLEAR	VM_SHSTK
 #elif defined(CONFIG_PPC)
 # define VM_SAO		VM_ARCH_1	/* Strong Access Ordering (powerpc) */
 #elif defined(CONFIG_PARISC)
diff --git a/mm/mmap.c b/mm/mmap.c
index c4938e4b789b..da7e0c6689ee 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1483,6 +1483,12 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 		struct inode *inode = file_inode(file);
 		unsigned long flags_mask;
 
+		/*
+		 * Call stack cannot be backed by a file.
+		 */
+		if (vm_flags & VM_SHSTK)
+			return -EINVAL;
+
 		if (!file_mmap_ok(file, inode, pgoff, len))
 			return -EOVERFLOW;
 
@@ -1547,7 +1553,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 	} else {
 		switch (flags & MAP_TYPE) {
 		case MAP_SHARED:
-			if (vm_flags & (VM_GROWSDOWN|VM_GROWSUP))
+			if (vm_flags & (VM_GROWSDOWN|VM_GROWSUP|VM_SHSTK))
 				return -EINVAL;
 			/*
 			 * Ignore pgoff.
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states
  2020-11-10 16:21 ` [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states Yu-cheng Yu
@ 2020-11-26 11:02   ` Borislav Petkov
  2020-11-30 17:45   ` [NEEDS-REVIEW] " Dave Hansen
  1 sibling, 0 replies; 60+ messages in thread
From: Borislav Petkov @ 2020-11-26 11:02 UTC (permalink / raw)
  To: Yu-cheng Yu
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On Tue, Nov 10, 2020 at 08:21:48AM -0800, Yu-cheng Yu wrote:
> diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
> index 972a34d93505..6f05ab2a1fa4 100644
> --- a/arch/x86/include/asm/msr-index.h
> +++ b/arch/x86/include/asm/msr-index.h
> @@ -922,4 +922,24 @@
>  #define MSR_VM_IGNNE                    0xc0010115
>  #define MSR_VM_HSAVE_PA                 0xc0010117
>  
> +/* Control-flow Enforcement Technology MSRs */
> +#define MSR_IA32_U_CET		0x6a0 /* user mode cet setting */
> +#define MSR_IA32_S_CET		0x6a2 /* kernel mode cet setting */
> +#define MSR_IA32_PL0_SSP	0x6a4 /* kernel shstk pointer */
> +#define MSR_IA32_PL1_SSP	0x6a5 /* ring-1 shstk pointer */
> +#define MSR_IA32_PL2_SSP	0x6a6 /* ring-2 shstk pointer */
> +#define MSR_IA32_PL3_SSP	0x6a7 /* user shstk pointer */
> +#define MSR_IA32_INT_SSP_TAB	0x6a8 /* exception shstk table */
> +
> +/* MSR_IA32_U_CET and MSR_IA32_S_CET bits */

Pls put the bit defines under the MSRs they belong to.

> +#define CET_SHSTK_EN		BIT_ULL(0)
> +#define CET_WRSS_EN		BIT_ULL(1)
> +#define CET_ENDBR_EN		BIT_ULL(2)
> +#define CET_LEG_IW_EN		BIT_ULL(3)
> +#define CET_NO_TRACK_EN		BIT_ULL(4)
> +#define CET_SUPPRESS_DISABLE	BIT_ULL(5)
> +#define CET_RESERVED		(BIT_ULL(6) | BIT_ULL(7) | BIT_ULL(8) | BIT_ULL(9))
> +#define CET_SUPPRESS		BIT_ULL(10)
> +#define CET_WAIT_ENDBR		BIT_ULL(11)

...

>  	 * Clear XSAVE features that are disabled in the normal CPUID.
>  	 */
>  	for (i = 0; i < ARRAY_SIZE(xsave_cpuid_features); i++) {
> -		if (!boot_cpu_has(xsave_cpuid_features[i]))
> -			xfeatures_mask_all &= ~BIT_ULL(i);
> +		if (xsave_cpuid_features[i] == X86_FEATURE_SHSTK) {
> +			/*
> +			 * X86_FEATURE_SHSTK and X86_FEATURE_IBT share
> +			 * same states, but can be enabled separately.
> +			 */
> +			if (!boot_cpu_has(X86_FEATURE_SHSTK) &&
> +			    !boot_cpu_has(X86_FEATURE_IBT))
> +				xfeatures_mask_all &= ~BIT_ULL(i);
> +		} else {
> +			if ((xsave_cpuid_features[i] == -1) ||
			     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

That is a new check. I guess it could be done first to simplify the
code:

	for (i = 0; i < ARRAY_SIZE(xsave_cpuid_features); i++) {
		if (xsave_cpuid_features[i] == -1) {
			xfeatures_mask_all &= ~BIT_ULL(i);
			continue;
		}

		/* the rest of the bla */

Yes?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 04/26] x86/cet: Add control-protection fault handler
  2020-11-10 16:21 ` [PATCH v15 04/26] x86/cet: Add control-protection fault handler Yu-cheng Yu
@ 2020-11-26 18:49   ` Borislav Petkov
  0 siblings, 0 replies; 60+ messages in thread
From: Borislav Petkov @ 2020-11-26 18:49 UTC (permalink / raw)
  To: Yu-cheng Yu
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On Tue, Nov 10, 2020 at 08:21:49AM -0800, Yu-cheng Yu wrote:
> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> index e19df6cde35d..6c21c1e92605 100644
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -598,6 +598,65 @@ DEFINE_IDTENTRY_ERRORCODE(exc_general_protection)
>  	cond_local_irq_disable(regs);
>  }
>  
> +#ifdef CONFIG_X86_CET
> +static const char * const control_protection_err[] = {
> +	"unknown",
> +	"near-ret",
> +	"far-ret/iret",
> +	"endbranch",
> +	"rstorssp",
> +	"setssbsy",
> +};
> +
> +/*
> + * When a control protection exception occurs, send a signal
> + * to the responsible application.  Currently, control
> + * protection is only enabled for the user mode.  This
> + * exception should not come from the kernel mode.
> + */

Make that 80 cols wide.

> +DEFINE_IDTENTRY_ERRORCODE(exc_control_protection)
> +{
> +	struct task_struct *tsk;
> +
> +	if (notify_die(DIE_TRAP, "control protection fault", regs,
> +		       error_code, X86_TRAP_CP, SIGSEGV) == NOTIFY_STOP)
> +		return;

What is the intent here, notifiers can prevent the machine from printing
the CP error below?

> +	cond_local_irq_enable(regs);
> +
> +	if (!user_mode(regs))
> +		die("kernel control protection fault", regs, error_code);

Let's write that more explicitly:

		die("Unexpected/unsupported control protection fault"...

> +
> +	if (!static_cpu_has(X86_FEATURE_SHSTK) &&
> +	    !static_cpu_has(X86_FEATURE_IBT))

Why static_cpu_has?

> +		WARN_ONCE(1, "CET is disabled but got control protection fault\n");

			     "Control protection fault with CET support disabled\n"

> +
> +	tsk = current;
> +	tsk->thread.error_code = error_code;
> +	tsk->thread.trap_nr = X86_TRAP_CP;
> +
> +	if (show_unhandled_signals && unhandled_signal(tsk, SIGSEGV) &&
> +	    printk_ratelimit()) {
> +		unsigned int max_err;
> +		unsigned long ssp;
> +
> +		max_err = ARRAY_SIZE(control_protection_err) - 1;
> +		if ((error_code < 0) || (error_code > max_err))
> +			error_code = 0;

<---- newline here.

> +		rdmsrl(MSR_IA32_PL3_SSP, ssp);
> +		pr_info("%s[%d] control protection ip:%lx sp:%lx ssp:%lx error:%lx(%s)",
> +			tsk->comm, task_pid_nr(tsk),
> +			regs->ip, regs->sp, ssp, error_code,
> +			control_protection_err[error_code]);
> +		print_vma_addr(KERN_CONT " in ", regs->ip);
> +		pr_cont("\n");
> +	}

...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack
  2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
                   ` (25 preceding siblings ...)
  2020-11-10 16:22 ` [PATCH v15 26/26] mm: Introduce PROT_SHSTK " Yu-cheng Yu
@ 2020-11-27  9:29 ` Balbir Singh
  2020-11-28 16:31   ` Yu, Yu-cheng
  26 siblings, 1 reply; 60+ messages in thread
From: Balbir Singh @ 2020-11-27  9:29 UTC (permalink / raw)
  To: Yu-cheng Yu
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On Tue, Nov 10, 2020 at 08:21:45AM -0800, Yu-cheng Yu wrote:
> Control-flow Enforcement (CET) is a new Intel processor feature that blocks
> return/jump-oriented programming attacks.  Details are in "Intel 64 and
> IA-32 Architectures Software Developer's Manual" [1].
> 
> CET can protect applications and the kernel.  This series enables only
> application-level protection, and has three parts:
> 
>   - Shadow stack [2],
>   - Indirect branch tracking [3], and
>   - Selftests [4].
> 
> I have run tests on these patches for quite some time, and they have been
> very stable.  Linux distributions with CET are available now, and Intel
> processors with CET are becoming available.  It would be nice if CET
> support can be accepted into the kernel.  I will be working to address any
> issues should they come up.
>

Is there a way to run these patches for testing? Bochs emulation or anything
else? I presume you've been testing against violations of CET in user space?
Can you share your testing?
 
Balbir Singh.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack
  2020-11-10 16:21 ` [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack Yu-cheng Yu
@ 2020-11-27 17:10   ` Borislav Petkov
  2020-11-28 16:23     ` Yu, Yu-cheng
  2020-11-30 19:56   ` Nick Desaulniers
  1 sibling, 1 reply; 60+ messages in thread
From: Borislav Petkov @ 2020-11-27 17:10 UTC (permalink / raw)
  To: Yu-cheng Yu
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On Tue, Nov 10, 2020 at 08:21:50AM -0800, Yu-cheng Yu wrote:
> +config X86_CET
> +	def_bool n
> +
> +config ARCH_HAS_SHADOW_STACK
> +	def_bool n
> +
> +config X86_SHADOW_STACK_USER

Is X86_SHADOW_STACK_KERNEL coming too?

Regardless, you can add it when it comes and you can use only X86_CET
for now and drop this one and simplify this pile of Kconfig symbols.

> +	prompt "Intel Shadow Stacks for user-mode"
> +	def_bool n
> +	depends on CPU_SUP_INTEL && X86_64
> +	depends on AS_HAS_SHADOW_STACK
> +	select ARCH_USES_HIGH_VMA_FLAGS
> +	select X86_CET
> +	select ARCH_HAS_SHADOW_STACK
> +	help
> +	  Shadow Stacks provides protection against program stack
> +	  corruption.  It's a hardware feature.  This only matters
> +	  if you have the right hardware.  It's a security hardening
> +	  feature and apps must be enabled to use it.  You get no
> +	  protection "for free" on old userspace.  The hardware can
> +	  support user and kernel, but this option is for user space
> +	  only.
> +	  Support for this feature is only known to be present on
> +	  processors released in 2020 or later.  CET features are also
> +	  known to increase kernel text size by 3.7 KB.

This help text needs some rewriting. You can find an inspiration about
more adequate style in that same Kconfig file.

> +
> +	  If unsure, say N.
> +
>  config EFI
>  	bool "EFI runtime service support"
>  	depends on ACPI
> diff --git a/scripts/as-x86_64-has-shadow-stack.sh b/scripts/as-x86_64-has-shadow-stack.sh
> new file mode 100755
> index 000000000000..fac1d363a1b8
> --- /dev/null
> +++ b/scripts/as-x86_64-has-shadow-stack.sh
> @@ -0,0 +1,4 @@
> +#!/bin/sh
> +# SPDX-License-Identifier: GPL-2.0
> +
> +echo "wrussq %rax, (%rbx)" | $* -x assembler -c -

						      2> /dev/null

otherwise you get

{standard input}: Assembler messages:
{standard input}:1: Error: no such instruction: `wrussq %rax,(%rbx)

on non-enlightened toolchains during build.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack
  2020-11-27 17:10   ` Borislav Petkov
@ 2020-11-28 16:23     ` Yu, Yu-cheng
  2020-11-30 18:15       ` Borislav Petkov
  0 siblings, 1 reply; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-11-28 16:23 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On 11/27/2020 9:10 AM, Borislav Petkov wrote:
> On Tue, Nov 10, 2020 at 08:21:50AM -0800, Yu-cheng Yu wrote:
>> +config X86_CET
>> +	def_bool n
>> +
>> +config ARCH_HAS_SHADOW_STACK
>> +	def_bool n
>> +
>> +config X86_SHADOW_STACK_USER
> 
> Is X86_SHADOW_STACK_KERNEL coming too?
> 
> Regardless, you can add it when it comes and you can use only X86_CET
> for now and drop this one and simplify this pile of Kconfig symbols.

We have X86_BRANCH_TRACKING_USER too.  My thought was, X86_CET means any 
of kernel/user shadow stack/ibt.

> 
>> +	prompt "Intel Shadow Stacks for user-mode"
>> +	def_bool n
>> +	depends on CPU_SUP_INTEL && X86_64
>> +	depends on AS_HAS_SHADOW_STACK
>> +	select ARCH_USES_HIGH_VMA_FLAGS
>> +	select X86_CET
>> +	select ARCH_HAS_SHADOW_STACK
>> +	help
>> +	  Shadow Stacks provides protection against program stack
>> +	  corruption.  It's a hardware feature.  This only matters
>> +	  if you have the right hardware.  It's a security hardening
>> +	  feature and apps must be enabled to use it.  You get no
>> +	  protection "for free" on old userspace.  The hardware can
>> +	  support user and kernel, but this option is for user space
>> +	  only.
>> +	  Support for this feature is only known to be present on
>> +	  processors released in 2020 or later.  CET features are also
>> +	  known to increase kernel text size by 3.7 KB.
> 
> This help text needs some rewriting. You can find an inspiration about
> more adequate style in that same Kconfig file.
> 

I will work on it.

>> +
>> +	  If unsure, say N.
>> +
>>   config EFI
>>   	bool "EFI runtime service support"
>>   	depends on ACPI
>> diff --git a/scripts/as-x86_64-has-shadow-stack.sh b/scripts/as-x86_64-has-shadow-stack.sh
>> new file mode 100755
>> index 000000000000..fac1d363a1b8
>> --- /dev/null
>> +++ b/scripts/as-x86_64-has-shadow-stack.sh
>> @@ -0,0 +1,4 @@
>> +#!/bin/sh
>> +# SPDX-License-Identifier: GPL-2.0
>> +
>> +echo "wrussq %rax, (%rbx)" | $* -x assembler -c -
> 
> 						      2> /dev/null
> 
> otherwise you get
> 
> {standard input}: Assembler messages:
> {standard input}:1: Error: no such instruction: `wrussq %rax,(%rbx)
> 
> on non-enlightened toolchains during build.
> 

Yes, I will fix this in the next revision.

Yu-cheng

> Thx.
> 


^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack
  2020-11-27  9:29 ` [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Balbir Singh
@ 2020-11-28 16:31   ` Yu, Yu-cheng
  0 siblings, 0 replies; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-11-28 16:31 UTC (permalink / raw)
  To: Balbir Singh
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On 11/27/2020 1:29 AM, Balbir Singh wrote:
> On Tue, Nov 10, 2020 at 08:21:45AM -0800, Yu-cheng Yu wrote:
>> Control-flow Enforcement (CET) is a new Intel processor feature that blocks
>> return/jump-oriented programming attacks.  Details are in "Intel 64 and
>> IA-32 Architectures Software Developer's Manual" [1].
>>
>> CET can protect applications and the kernel.  This series enables only
>> application-level protection, and has three parts:
>>
>>    - Shadow stack [2],
>>    - Indirect branch tracking [3], and
>>    - Selftests [4].
>>
>> I have run tests on these patches for quite some time, and they have been
>> very stable.  Linux distributions with CET are available now, and Intel
>> processors with CET are becoming available.  It would be nice if CET
>> support can be accepted into the kernel.  I will be working to address any
>> issues should they come up.
>>
> 
> Is there a way to run these patches for testing? Bochs emulation or anything
> else? I presume you've been testing against violations of CET in user space?
> Can you share your testing?
>   
> Balbir Singh.
> 

Machines with CET are already available on the market.  I tested these 
on real machines with Fedora.  There is a quick test in my earlier 
selftest patches:

https://lore.kernel.org/linux-api/20200521211720.20236-6-yu-cheng.yu@intel.com/

Thanks,
Yu-cheng

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [NEEDS-REVIEW] [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states
  2020-11-10 16:21 ` [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states Yu-cheng Yu
  2020-11-26 11:02   ` Borislav Petkov
@ 2020-11-30 17:45   ` Dave Hansen
  2020-11-30 18:06     ` Yu, Yu-cheng
  2020-11-30 23:16     ` Yu, Yu-cheng
  1 sibling, 2 replies; 60+ messages in thread
From: Dave Hansen @ 2020-11-30 17:45 UTC (permalink / raw)
  To: Yu-cheng Yu, x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
	linux-kernel, linux-doc, linux-mm, linux-arch, linux-api,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
	Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov,
	Florian Weimer, H.J. Lu, Jann Horn, Jonathan Corbet, Kees Cook,
	Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
	Peter Zijlstra, Randy Dunlap, Ravi V. Shankar,
	Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu

On 11/10/20 8:21 AM, Yu-cheng Yu wrote:
> Control-flow Enforcement Technology (CET) adds five MSRs.  Introduce
> them and their XSAVES supervisor states:
> 
>     MSR_IA32_U_CET (user-mode CET settings),
>     MSR_IA32_PL3_SSP (user-mode Shadow Stack pointer),
>     MSR_IA32_PL0_SSP (kernel-mode Shadow Stack pointer),
>     MSR_IA32_PL1_SSP (Privilege Level 1 Shadow Stack pointer),
>     MSR_IA32_PL2_SSP (Privilege Level 2 Shadow Stack pointer).

This patch goes into a bunch of XSAVE work that this changelog only
briefly touches on.  I think it needs to be beefed up a bit.

> @@ -835,8 +843,19 @@ void __init fpu__init_system_xstate(void)
>  	 * Clear XSAVE features that are disabled in the normal CPUID.
>  	 */
>  	for (i = 0; i < ARRAY_SIZE(xsave_cpuid_features); i++) {
> -		if (!boot_cpu_has(xsave_cpuid_features[i]))
> -			xfeatures_mask_all &= ~BIT_ULL(i);
> +		if (xsave_cpuid_features[i] == X86_FEATURE_SHSTK) {
> +			/*
> +			 * X86_FEATURE_SHSTK and X86_FEATURE_IBT share
> +			 * same states, but can be enabled separately.
> +			 */
> +			if (!boot_cpu_has(X86_FEATURE_SHSTK) &&
> +			    !boot_cpu_has(X86_FEATURE_IBT))
> +				xfeatures_mask_all &= ~BIT_ULL(i);
> +		} else {
> +			if ((xsave_cpuid_features[i] == -1) ||

Where did the -1 come from?  Was that introduced earlier in this series?
 I don't see any way a xsave_cpuid_features[] can be -1 in the current tree.

> +			    !boot_cpu_has(xsave_cpuid_features[i]))
> +				xfeatures_mask_all &= ~BIT_ULL(i);
> +		}
>  	}

Do we have any other spots in the kernel where we care about:

	boot_cpu_has(X86_FEATURE_SHSTK) ||
	boot_cpu_has(X86_FEATURE_IBT)

?  If so, we could also address this by declaring a software-defined
X86_FEATURE_CET and then setting it if SHSTK||IBT is supported, then we
just put that one feature in xsave_cpuid_features[].

I'm also not crazy about the loop as it is.  I'd much rather see this in
a helper like:

bool cpu_supports_xsave_deps(int xfeature)
{
	bool ret;

	ret = boot_cpu_has(xsave_cpuid_features[xfeature])

	/*
	 * X86_FEATURE_SHSTK is checked in xsave_cpuid_features()
	 * but the CET states are needed if either SHSTK or IBT are
	 * available.
	 */
	if (xfeature == XFEATURE_CET_USER ||
	    xfeature == XFEATURE_CET_KERNEL)
		ret |= boot_cpu_has(X86_FEATURE_IBT)
		
	return ret;
}

See how that's extensible?  You can add as many special cases as you want.
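
With such a helper, the loop from the quoted hunk would reduce to something
like this (a sketch only, reusing the helper name from the example above):

	for (i = 0; i < ARRAY_SIZE(xsave_cpuid_features); i++) {
		if (!cpu_supports_xsave_deps(i))
			xfeatures_mask_all &= ~BIT_ULL(i);
	}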

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [NEEDS-REVIEW] [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states
  2020-11-30 17:45   ` [NEEDS-REVIEW] " Dave Hansen
@ 2020-11-30 18:06     ` Yu, Yu-cheng
  2020-11-30 18:12       ` Dave Hansen
  2020-11-30 23:16     ` Yu, Yu-cheng
  1 sibling, 1 reply; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-11-30 18:06 UTC (permalink / raw)
  To: Dave Hansen, x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
	linux-kernel, linux-doc, linux-mm, linux-arch, linux-api,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
	Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov,
	Florian Weimer, H.J. Lu, Jann Horn, Jonathan Corbet, Kees Cook,
	Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
	Peter Zijlstra, Randy Dunlap, Ravi V. Shankar,
	Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu

On 11/30/2020 9:45 AM, Dave Hansen wrote:
> On 11/10/20 8:21 AM, Yu-cheng Yu wrote:
>> Control-flow Enforcement Technology (CET) adds five MSRs.  Introduce
>> them and their XSAVES supervisor states:
>>
>>      MSR_IA32_U_CET (user-mode CET settings),
>>      MSR_IA32_PL3_SSP (user-mode Shadow Stack pointer),
>>      MSR_IA32_PL0_SSP (kernel-mode Shadow Stack pointer),
>>      MSR_IA32_PL1_SSP (Privilege Level 1 Shadow Stack pointer),
>>      MSR_IA32_PL2_SSP (Privilege Level 2 Shadow Stack pointer).
> 
> This patch goes into a bunch of XSAVE work that this changelog only
> briefly touches on.  I think it needs to be beefed up a bit.

I will do that.

> 
>> @@ -835,8 +843,19 @@ void __init fpu__init_system_xstate(void)
>>   	 * Clear XSAVE features that are disabled in the normal CPUID.
>>   	 */
>>   	for (i = 0; i < ARRAY_SIZE(xsave_cpuid_features); i++) {
>> -		if (!boot_cpu_has(xsave_cpuid_features[i]))
>> -			xfeatures_mask_all &= ~BIT_ULL(i);
>> +		if (xsave_cpuid_features[i] == X86_FEATURE_SHSTK) {
>> +			/*
>> +			 * X86_FEATURE_SHSTK and X86_FEATURE_IBT share
>> +			 * same states, but can be enabled separately.
>> +			 */
>> +			if (!boot_cpu_has(X86_FEATURE_SHSTK) &&
>> +			    !boot_cpu_has(X86_FEATURE_IBT))
>> +				xfeatures_mask_all &= ~BIT_ULL(i);
>> +		} else {
>> +			if ((xsave_cpuid_features[i] == -1) ||
> 
> Where did the -1 come from?  Was that introduced earlier in this series?
>   I don't see any way a xsave_cpuid_features[] can be -1 in the current tree.
> 

Yes, we used to have a hole in xsave_cpuid_features[] and put -1 there. 
Do we want to keep this in case we again have holes in the future?

>> +			    !boot_cpu_has(xsave_cpuid_features[i]))
>> +				xfeatures_mask_all &= ~BIT_ULL(i);
>> +		}
>>   	}
> 
> Do we have any other spots in the kernel where we care about:
> 
> 	boot_cpu_has(X86_FEATURE_SHSTK) ||
> 	boot_cpu_has(X86_FEATURE_IBT)
> 
> ?  If so, we could also address this by declaring a software-defined
> X86_FEATURE_CET and then setting it if SHSTK||IBT is supported, then we
> just put that one feature in xsave_cpuid_features[].

That is a better solution.  I will look into that.

> 
> I'm also not crazy about the loop as it is.  I'd much rather see this in
> a helper like:
> 
> bool cpu_supports_xsave_deps(int xfeature)
> {
> 	bool ret;
> 
> 	ret = boot_cpu_has(xsave_cpuid_features[xfeature])
> 
> 	/*
> 	 * X86_FEATURE_SHSTK is checked in xsave_cpuid_features()
> 	 * but the CET states are needed if either SHSTK or IBT are
> 	 * available.
> 	 */
> 	if (xfeature == XFEATURE_CET_USER ||
> 	    xfeature == XFEATURE_CET_KERNEL)
> 		ret |= boot_cpu_has(X86_FEATURE_IBT)
> 		
> 	return ret;
> }
> 
> See how that's extensible?  You can add as many special cases as you want.
> 

Yes.

Thanks,
Yu-cheng

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [NEEDS-REVIEW] [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states
  2020-11-30 18:06     ` Yu, Yu-cheng
@ 2020-11-30 18:12       ` Dave Hansen
  2020-11-30 18:17         ` Yu, Yu-cheng
  0 siblings, 1 reply; 60+ messages in thread
From: Dave Hansen @ 2020-11-30 18:12 UTC (permalink / raw)
  To: Yu, Yu-cheng, x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
	linux-kernel, linux-doc, linux-mm, linux-arch, linux-api,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
	Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov,
	Florian Weimer, H.J. Lu, Jann Horn, Jonathan Corbet, Kees Cook,
	Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
	Peter Zijlstra, Randy Dunlap, Ravi V. Shankar,
	Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu

On 11/30/20 10:06 AM, Yu, Yu-cheng wrote:
>>> +            if (!boot_cpu_has(X86_FEATURE_SHSTK) &&
>>> +                !boot_cpu_has(X86_FEATURE_IBT))
>>> +                xfeatures_mask_all &= ~BIT_ULL(i);
>>> +        } else {
>>> +            if ((xsave_cpuid_features[i] == -1) ||
>>
>> Where did the -1 come from?  Was that introduced earlier in this series?
>>   I don't see any way a xsave_cpuid_features[] can be -1 in the
>> current tree.
> 
> Yes, we used to have a hole in xsave_cpuid_features[] and put -1 there.
> Do we want to keep this in case we again have holes in the future?

So, it's dead code for the moment and it's impossible to tell what -1
means without looking at git history?  That seems, um, suboptimal.

Shouldn't we have:

#define XFEATURE_NO_DEP -1

?

And then this code becomes:

	if ((xsave_cpuid_features[i] == XFEATURE_NO_DEP))
		// skip it...

We can even put a comment in xsave_cpuid_features[] to tell folks to use
it.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack
  2020-11-28 16:23     ` Yu, Yu-cheng
@ 2020-11-30 18:15       ` Borislav Petkov
  2020-11-30 22:48         ` Yu, Yu-cheng
  0 siblings, 1 reply; 60+ messages in thread
From: Borislav Petkov @ 2020-11-30 18:15 UTC (permalink / raw)
  To: Yu, Yu-cheng
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On Sat, Nov 28, 2020 at 08:23:59AM -0800, Yu, Yu-cheng wrote:
> We have X86_BRANCH_TRACKING_USER too.  My thought was, X86_CET means any of
> kernel/user shadow stack/ibt.

It is not about what it means - it is what you're going to use/need. You have
ifdeffery both with X86_CET and X86_SHADOW_STACK_USER.

This one

+#ifdef CONFIG_X86_SHADOW_STACK_USER
+#define DISABLE_SHSTK	0
+#else
+#define DISABLE_SHSTK	(1 << (X86_FEATURE_SHSTK & 31))
+#endif

for example, is clearly wrong and wants to be #ifdef CONFIG_X86_CET, for
example. Unless I'm missing something totally obvious.
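
Spelled out, that suggested change would be roughly (a sketch, not a final
hunk):

#ifdef CONFIG_X86_CET
#define DISABLE_SHSTK	0
#else
#define DISABLE_SHSTK	(1 << (X86_FEATURE_SHSTK & 31))
#endif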

In any case, you need to analyze what Kconfig defines the code will
need and to what they belong and add only the minimal subset needed.
Our Kconfig symbols space is already nuts so adding more needs to be
absolutely justified.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [NEEDS-REVIEW] [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states
  2020-11-30 18:12       ` Dave Hansen
@ 2020-11-30 18:17         ` Yu, Yu-cheng
  0 siblings, 0 replies; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-11-30 18:17 UTC (permalink / raw)
  To: Dave Hansen, x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
	linux-kernel, linux-doc, linux-mm, linux-arch, linux-api,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
	Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov,
	Florian Weimer, H.J. Lu, Jann Horn, Jonathan Corbet, Kees Cook,
	Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
	Peter Zijlstra, Randy Dunlap, Ravi V. Shankar,
	Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu

On 11/30/2020 10:12 AM, Dave Hansen wrote:
> On 11/30/20 10:06 AM, Yu, Yu-cheng wrote:
>>>> +            if (!boot_cpu_has(X86_FEATURE_SHSTK) &&
>>>> +                !boot_cpu_has(X86_FEATURE_IBT))
>>>> +                xfeatures_mask_all &= ~BIT_ULL(i);
>>>> +        } else {
>>>> +            if ((xsave_cpuid_features[i] == -1) ||
>>>
>>> Where did the -1 come from?  Was that introduced earlier in this series?
>>>    I don't see any way a xsave_cpuid_features[] can be -1 in the
>>> current tree.
>>
>> Yes, we used to have a hole in xsave_cpuid_features[] and put -1 there.
>> Do we want to keep this in case we again have holes in the future?
> 
> So, it's dead code for the moment and it's impossible to tell what -1
> means without looking at git history?  That seems, um, suboptimal.
> 
> Shouldn't we have:
> 
> #define XFEATURE_NO_DEP -1
> 
> ?
> 
> And then this code becomes:
> 
> 	if ((xsave_cpuid_features[i] == XFEATURE_NO_DEP))
> 		// skip it...
> 
> We can even put a comment in xsave_cpuid_features[] to tell folks to use
> it.
> 

Yes, I will work on that.

Yu-cheng

^ permalink raw reply	[flat|nested] 60+ messages in thread

* RE: [PATCH v15 01/26] Documentation/x86: Add CET description
  2020-11-10 16:21 ` [PATCH v15 01/26] Documentation/x86: Add CET description Yu-cheng Yu
@ 2020-11-30 18:26   ` Nick Desaulniers
  2020-11-30 18:34     ` Yu, Yu-cheng
  0 siblings, 1 reply; 60+ messages in thread
From: Nick Desaulniers @ 2020-11-30 18:26 UTC (permalink / raw)
  To: yu-cheng.yu
  Cc: Dave.Martin, arnd, bp, bsingharora, corbet, dave.hansen, esyr,
	fweimer, gorcunov, hjl.tools, hpa, jannh, keescook, linux-api,
	linux-arch, linux-doc, linux-kernel, linux-mm, luto,
	mike.kravetz, mingo, nadav.amit, oleg, pavel, pengfei.xu, peterz,
	ravi.v.shankar, rdunlap, tglx, vedvyas.shanbhogue, weijiang.yang,
	x86, maskray, llozano, clang-built-linux, erich.keane

(In response to https://lore.kernel.org/lkml/20201110162211.9207-2-yu-cheng.yu@intel.com/)

> These need to be enabled to build a CET-enabled kernel, and Binutils v2.31
> and GCC v8.1 or later are required to build a CET kernel.

What about LLVM? Surely CrOS might be of interest to ship this on (we ship the
equivalent for aarch64 on Android).

> An application's CET capability is marked in its ELF header and can be
> verified from the following command output, in the NT_GNU_PROPERTY_TYPE_0
> field:
>
>     readelf -n <application> | grep SHSTK
>         properties: x86 feature: IBT, SHSTK

Same for llvm-readelf.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 01/26] Documentation/x86: Add CET description
  2020-11-30 18:26   ` Nick Desaulniers
@ 2020-11-30 18:34     ` Yu, Yu-cheng
  2020-11-30 19:38       ` Fāng-ruì Sòng
  0 siblings, 1 reply; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-11-30 18:34 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Dave.Martin, arnd, bp, bsingharora, corbet, dave.hansen, esyr,
	fweimer, gorcunov, hjl.tools, hpa, jannh, keescook, linux-api,
	linux-arch, linux-doc, linux-kernel, linux-mm, luto,
	mike.kravetz, mingo, nadav.amit, oleg, pavel, pengfei.xu, peterz,
	ravi.v.shankar, rdunlap, tglx, vedvyas.shanbhogue, weijiang.yang,
	x86, maskray, llozano, clang-built-linux, erich.keane

On 11/30/2020 10:26 AM, Nick Desaulniers wrote:
> (In response to https://lore.kernel.org/lkml/20201110162211.9207-2-yu-cheng.yu@intel.com/)
> 
>> These need to be enabled to build a CET-enabled kernel, and Binutils v2.31
>> and GCC v8.1 or later are required to build a CET kernel.
> 
> What about LLVM? Surely CrOS might be of interest to ship this on (we ship the
> equivalent for aarch64 on Android).
> 

I have not built with LLVM, but think it probably will work as well.  I 
will test it.

>> An application's CET capability is marked in its ELF header and can be
>> verified from the following command output, in the NT_GNU_PROPERTY_TYPE_0
>> field:
>>
>>      readelf -n <application> | grep SHSTK
>>          properties: x86 feature: IBT, SHSTK
> 
> Same for llvm-readelf.
> 

I will add that to the document.

Thanks,
Yu-cheng

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 01/26] Documentation/x86: Add CET description
  2020-11-30 18:34     ` Yu, Yu-cheng
@ 2020-11-30 19:38       ` Fāng-ruì Sòng
  2020-11-30 19:47         ` Yu, Yu-cheng
  0 siblings, 1 reply; 60+ messages in thread
From: Fāng-ruì Sòng @ 2020-11-30 19:38 UTC (permalink / raw)
  To: Yu, Yu-cheng
  Cc: Nick Desaulniers, Dave P Martin, Arnd Bergmann, Borislav Petkov,
	bsingharora, Jonathan Corbet, dave.hansen, esyr, Florian Weimer,
	gorcunov, H.J. Lu, H. Peter Anvin, jannh, Kees Cook, linux-api,
	linux-arch, Linux Doc Mailing List, LKML, linux-mm, luto,
	mike.kravetz, Ingo Molnar, nadav.amit, oleg, pavel, pengfei.xu,
	Peter Zijlstra, ravi.v.shankar, Randy Dunlap, Thomas Gleixner,
	vedvyas.shanbhogue, weijiang.yang, X86 ML, Luis Lozano,
	clang-built-linux, erich.keane

On Mon, Nov 30, 2020 at 10:34 AM Yu, Yu-cheng <yu-cheng.yu@intel.com> wrote:
>
> On 11/30/2020 10:26 AM, Nick Desaulniers wrote:
> > (In response to https://lore.kernel.org/lkml/20201110162211.9207-2-yu-cheng.yu@intel.com/)
> >
> >> These need to be enabled to build a CET-enabled kernel, and Binutils v2.31
> >> and GCC v8.1 or later are required to build a CET kernel.
> >
> > What about LLVM? Surely CrOS might be of interest to ship this on (we ship the
> > equivalent for aarch64 on Android).
> >
>
> I have not built with LLVM, but think it probably will work as well.  I
> will test it.
>
> >> An application's CET capability is marked in its ELF header and can be
> >> verified from the following command output, in the NT_GNU_PROPERTY_TYPE_0
> >> field:
> >>
> >>      readelf -n <application> | grep SHSTK
> >>          properties: x86 feature: IBT, SHSTK
> >
> > Same for llvm-readelf.
> >
>
> I will add that to the document.
>
> Thanks,
> Yu-cheng

The baseline LLVM version is 10.0.1, which is good enough for  clang
-fcf-protection=full, llvm-readelf -n, LLD's .note.gnu.property
handling (the LLD option is `-z force-ibt`, though)


-- 
宋方睿

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 01/26] Documentation/x86: Add CET description
  2020-11-30 19:38       ` Fāng-ruì Sòng
@ 2020-11-30 19:47         ` Yu, Yu-cheng
  0 siblings, 0 replies; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-11-30 19:47 UTC (permalink / raw)
  To: Fāng-ruì Sòng
  Cc: Nick Desaulniers, Dave P Martin, Arnd Bergmann, Borislav Petkov,
	bsingharora, Jonathan Corbet, dave.hansen, esyr, Florian Weimer,
	gorcunov, H.J. Lu, H. Peter Anvin, jannh, Kees Cook, linux-api,
	linux-arch, Linux Doc Mailing List, LKML, linux-mm, luto,
	mike.kravetz, Ingo Molnar, nadav.amit, oleg, pavel, pengfei.xu,
	Peter Zijlstra, ravi.v.shankar, Randy Dunlap, Thomas Gleixner,
	vedvyas.shanbhogue, weijiang.yang, X86 ML, Luis Lozano,
	clang-built-linux, erich.keane

On 11/30/2020 11:38 AM, Fāng-ruì Sòng wrote:
> On Mon, Nov 30, 2020 at 10:34 AM Yu, Yu-cheng <yu-cheng.yu@intel.com> wrote:
>>
>> On 11/30/2020 10:26 AM, Nick Desaulniers wrote:
>>> (In response to https://lore.kernel.org/lkml/20201110162211.9207-2-yu-cheng.yu@intel.com/)
>>>
>>>> These need to be enabled to build a CET-enabled kernel, and Binutils v2.31
>>>> and GCC v8.1 or later are required to build a CET kernel.
>>>
>>> What about LLVM? Surely CrOS might be of interest to ship this on (we ship the
>>> equivalent for aarch64 on Android).
>>>
>>
>> I have not built with LLVM, but think it probably will work as well.  I
>> will test it.
>>
>>>> An application's CET capability is marked in its ELF header and can be
>>>> verified from the following command output, in the NT_GNU_PROPERTY_TYPE_0
>>>> field:
>>>>
>>>>       readelf -n <application> | grep SHSTK
>>>>           properties: x86 feature: IBT, SHSTK
>>>
>>> Same for llvm-readelf.
>>>
>>
>> I will add that to the document.
>>
>> Thanks,
>> Yu-cheng
> 
> The baseline LLVM version is 10.0.1, which is good enough for  clang
> -fcf-protection=full, llvm-readelf -n, LLD's .note.gnu.property
> handling (the LLD option is `-z force-ibt`, though)
> 
> 

Thanks!

Yu-cheng

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack
  2020-11-10 16:21 ` [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack Yu-cheng Yu
  2020-11-27 17:10   ` Borislav Petkov
@ 2020-11-30 19:56   ` Nick Desaulniers
  2020-11-30 20:30     ` Yu, Yu-cheng
  1 sibling, 1 reply; 60+ messages in thread
From: Nick Desaulniers @ 2020-11-30 19:56 UTC (permalink / raw)
  To: yu-cheng.yu
  Cc: Dave.Martin, arnd, bp, bsingharora, corbet, dave.hansen, esyr,
	fweimer, gorcunov, hjl.tools, hpa, jannh, keescook, linux-api,
	linux-arch, linux-doc, linux-kernel, linux-mm, luto,
	mike.kravetz, mingo, nadav.amit, oleg, pavel, pengfei.xu, peterz,
	ravi.v.shankar, rdunlap, tglx, vedvyas.shanbhogue, weijiang.yang,
	x86, Sami Tolvanen, Will Deacon, Masahiro Yamada

In response to https://lore.kernel.org/lkml/20201110162211.9207-6-yu-cheng.yu@intel.com/.

Hi Yu-cheng,
This feature reminds me very much of
ARCH_SUPPORTS_SHADOW_CALL_STACK/CC_HAVE_SHADOW_CALL_STACK implemented in
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5287569a790d2546a06db07e391bf84b8bd6cf51.

Do you think it would be worthwhile to share the same config name between x86
and aarch64?

(Though, it seems on x86 there will be a distinction between kernel mode and
user mode configs, if I understand correctly?)

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack
  2020-11-30 19:56   ` Nick Desaulniers
@ 2020-11-30 20:30     ` Yu, Yu-cheng
  0 siblings, 0 replies; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-11-30 20:30 UTC (permalink / raw)
  To: Nick Desaulniers
  Cc: Dave.Martin, arnd, bp, bsingharora, corbet, dave.hansen, esyr,
	fweimer, gorcunov, hjl.tools, hpa, jannh, keescook, linux-api,
	linux-arch, linux-doc, linux-kernel, linux-mm, luto,
	mike.kravetz, mingo, nadav.amit, oleg, pavel, pengfei.xu, peterz,
	ravi.v.shankar, rdunlap, tglx, vedvyas.shanbhogue, weijiang.yang,
	x86, Sami Tolvanen, Will Deacon, Masahiro Yamada

On 11/30/2020 11:56 AM, Nick Desaulniers wrote:
> In response to https://lore.kernel.org/lkml/20201110162211.9207-6-yu-cheng.yu@intel.com/.
> 
> Hi Yu-cheng,
> This feature reminds me very much of
> ARCH_SUPPORTS_SHADOW_CALL_STACK/CC_HAVE_SHADOW_CALL_STACK implemented in
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5287569a790d2546a06db07e391bf84b8bd6cf51.
> 
> Do you think it would be worthwhile to share the same config name between x86
> and aarch64?

The CET series has ARCH_HAS_SHADOW_STACK.  In response to Boris' earlier 
comment, I think this may be eliminated.  In case it is still needed, I 
think it is better to have different names (but I am open to changing it).

> 
> (Though, it seems on x86 there will be a distinction between kernel mode and
> user mode configs, if I understand correctly?)
> 

Yes, on x86, kernel and user-mode can be enabled separately.

Yu-cheng

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack
  2020-11-30 18:15       ` Borislav Petkov
@ 2020-11-30 22:48         ` Yu, Yu-cheng
  2020-12-01 16:02           ` Borislav Petkov
  0 siblings, 1 reply; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-11-30 22:48 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On 11/30/2020 10:15 AM, Borislav Petkov wrote:
> On Sat, Nov 28, 2020 at 08:23:59AM -0800, Yu, Yu-cheng wrote:
>> We have X86_BRANCH_TRACKING_USER too.  My thought was, X86_CET means any of
>> kernel/user shadow stack/ibt.
> 
> It is not about what it means - it is what you're going to use/need. You have
> ifdeffery both with X86_CET and X86_SHADOW_STACK_USER.
> 
> This one
> 
> +#ifdef CONFIG_X86_SHADOW_STACK_USER
> +#define DISABLE_SHSTK	0
> +#else
> +#define DISABLE_SHSTK	(1 << (X86_FEATURE_SHSTK & 31))
> +#endif
> 
> for example, is clearly wrong and wants to be #ifdef CONFIG_X86_CET, for
> example. Unless I'm missing something totally obvious.

Logically, enabling IBT without shadow stack does not make sense, but 
these features have different CPUIDs, and CONFIG_X86_SHADOW_STACK_USER 
and CONFIG_X86_BRANCH_TRACKING_USER can be selected separately.

Do we want to have only one selection for both features?  In other 
words, we turn on both or neither.

Thanks,
Yu-cheng

> 
> In any case, you need to analyze what Kconfig defines the code will
> need and to what they belong and add only the minimal subset needed.
> Our Kconfig symbols space is already nuts so adding more needs to be
> absolutely justified.
> 
> Thx.
> 

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [NEEDS-REVIEW] [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states
  2020-11-30 17:45   ` [NEEDS-REVIEW] " Dave Hansen
  2020-11-30 18:06     ` Yu, Yu-cheng
@ 2020-11-30 23:16     ` Yu, Yu-cheng
  2020-12-01 22:26       ` Dave Hansen
  1 sibling, 1 reply; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-11-30 23:16 UTC (permalink / raw)
  To: Dave Hansen, x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
	linux-kernel, linux-doc, linux-mm, linux-arch, linux-api,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
	Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov,
	Florian Weimer, H.J. Lu, Jann Horn, Jonathan Corbet, Kees Cook,
	Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
	Peter Zijlstra, Randy Dunlap, Ravi V. Shankar,
	Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu

On 11/30/2020 9:45 AM, Dave Hansen wrote:
> On 11/10/20 8:21 AM, Yu-cheng Yu wrote:
>> Control-flow Enforcement Technology (CET) adds five MSRs.  Introduce
>> them and their XSAVES supervisor states:
>>
>>      MSR_IA32_U_CET (user-mode CET settings),
>>      MSR_IA32_PL3_SSP (user-mode Shadow Stack pointer),
>>      MSR_IA32_PL0_SSP (kernel-mode Shadow Stack pointer),
>>      MSR_IA32_PL1_SSP (Privilege Level 1 Shadow Stack pointer),
>>      MSR_IA32_PL2_SSP (Privilege Level 2 Shadow Stack pointer).
> 
> This patch goes into a bunch of XSAVE work that this changelog only
> briefly touches on.  I think it needs to be beefed up a bit.
> 
[...]
> 
> Do we have any other spots in the kernel where we care about:
> 
> 	boot_cpu_has(X86_FEATURE_SHSTK) ||
> 	boot_cpu_has(X86_FEATURE_IBT)
> 
> ?  If so, we could also address this by declaring a software-defined
> X86_FEATURE_CET and then setting it if SHSTK||IBT is supported, then we
> just put that one feature in xsave_cpuid_features[].
> 

These features have different CPUIDs but are complementary parts.  I 
don't know if someday there will be shadow-stack-only CPUs, but an 
IBT-only CPU is weird.  What if the kernel checks that the CPU has both 
features and presents only one feature flag (X86_FEATURE_CET), no 
X86_FEATURE_SHSTK or X86_FEATURE_IBT?

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack
  2020-11-30 22:48         ` Yu, Yu-cheng
@ 2020-12-01 16:02           ` Borislav Petkov
  0 siblings, 0 replies; 60+ messages in thread
From: Borislav Petkov @ 2020-12-01 16:02 UTC (permalink / raw)
  To: Yu, Yu-cheng
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On Mon, Nov 30, 2020 at 02:48:09PM -0800, Yu, Yu-cheng wrote:
> Logically, enabling IBT without shadow stack does not make sense, but these
> features have different CPUIDs, and CONFIG_X86_SHADOW_STACK_USER and
> CONFIG_X86_BRANCH_TRACKING_USER can be selected separately.
> 
> Do we want to have only one selection for both features?  In other words, we
> turn on both or neither.

Question is, do they need to be handled separately at all?

If not and IOW, I like dhansen's X86_FEATURE_CET synthetic feature
suggestion.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states
  2020-11-30 23:16     ` Yu, Yu-cheng
@ 2020-12-01 22:26       ` Dave Hansen
  2020-12-01 22:35         ` Yu, Yu-cheng
  0 siblings, 1 reply; 60+ messages in thread
From: Dave Hansen @ 2020-12-01 22:26 UTC (permalink / raw)
  To: Yu, Yu-cheng, x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
	linux-kernel, linux-doc, linux-mm, linux-arch, linux-api,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
	Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov,
	Florian Weimer, H.J. Lu, Jann Horn, Jonathan Corbet, Kees Cook,
	Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
	Peter Zijlstra, Randy Dunlap, Ravi V. Shankar,
	Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu

On 11/30/20 3:16 PM, Yu, Yu-cheng wrote:
>>
>> Do we have any other spots in the kernel where we care about:
>>
>>     boot_cpu_has(X86_FEATURE_SHSTK) ||
>>     boot_cpu_has(X86_FEATURE_IBT)
>>
>> ?  If so, we could also address this by declaring a software-defined
>> X86_FEATURE_CET and then setting it if SHSTK||IBT is supported, then we
>> just put that one feature in xsave_cpuid_features[].
>>
> 
> These features have different CPUIDs but are complementary parts.  I
> don't know if someday there will be shadow-stack-only CPUs, but an
> IBT-only CPU is weird.  What if the kernel checks that the CPU has both
> features and presents only one feature flag (X86_FEATURE_CET), no
> X86_FEATURE_SHSTK or X86_FEATURE_IBT?

Logically, that's probably fine.  But, X86_FEATURE_IBT/SHSTK are in a
non-scattered leaf, so we'll kinda define them whether we like it or
not.  We'd have to go out of our way to *not* define them.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states
  2020-12-01 22:26       ` Dave Hansen
@ 2020-12-01 22:35         ` Yu, Yu-cheng
  0 siblings, 0 replies; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-12-01 22:35 UTC (permalink / raw)
  To: Dave Hansen, x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar,
	linux-kernel, linux-doc, linux-mm, linux-arch, linux-api,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
	Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov,
	Florian Weimer, H.J. Lu, Jann Horn, Jonathan Corbet, Kees Cook,
	Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
	Peter Zijlstra, Randy Dunlap, Ravi V. Shankar,
	Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu

On 12/1/2020 2:26 PM, Dave Hansen wrote:
> On 11/30/20 3:16 PM, Yu, Yu-cheng wrote:
>>>
>>> Do we have any other spots in the kernel where we care about:
>>>
>>>      boot_cpu_has(X86_FEATURE_SHSTK) ||
>>>      boot_cpu_has(X86_FEATURE_IBT)
>>>
>>> ?  If so, we could also address this by declaring a software-defined
>>> X86_FEATURE_CET and then setting it if SHSTK||IBT is supported, then we
>>> just put that one feature in xsave_cpuid_features[].
>>>
>>
>> These features have different CPUIDs but are complementary parts.  I
>> don't know if someday there will be shadow-stack-only CPUs, but an
>> IBT-only CPU is weird.  What if the kernel checks that the CPU has both
>> features and presents only one feature flag (X86_FEATURE_CET), no
>> X86_FEATURE_SHSTK or X86_FEATURE_IBT?
> 
> Logically, that's probably fine.  But, X86_FEATURE_IBT/SHSTK are in a
> non-scattered leaf, so we'll kinda define them whether we like it or
> not.  We'd have to go out of our way to *not* define them.
> 

After more thought, I think it is better to just add X86_FEATURE_CET 
and nothing more.  We cannot predict what is going to happen later.
So, like what you suggested, X86_FEATURE_CET means (X86_FEATURE_SHSTK | 
X86_FEATURE_IBT).

Thanks,
Yu-cheng
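
For illustration, such a software-defined flag could be derived during CPU
feature setup roughly as follows.  This is a sketch only: setup_cet_feature()
is a hypothetical name, and X86_FEATURE_CET is the proposed synthetic bit,
which would still need a free slot in cpufeatures.h:

/* Sketch: set the synthetic CET flag when either hardware feature is present. */
static void setup_cet_feature(struct cpuinfo_x86 *c)
{
	if (cpu_has(c, X86_FEATURE_SHSTK) || cpu_has(c, X86_FEATURE_IBT))
		set_cpu_cap(c, X86_FEATURE_CET);
}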

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 06/26] x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW
  2020-11-10 16:21 ` [PATCH v15 06/26] x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW Yu-cheng Yu
@ 2020-12-03  9:19   ` Borislav Petkov
  2020-12-03 15:12     ` Dave Hansen
  0 siblings, 1 reply; 60+ messages in thread
From: Borislav Petkov @ 2020-12-03  9:19 UTC (permalink / raw)
  To: Yu-cheng Yu
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu, Dave Hansen

On Tue, Nov 10, 2020 at 08:21:51AM -0800, Yu-cheng Yu wrote:
> Before introducing _PAGE_COW for non-hardware memory management purposes in
> the next patch, rename _PAGE_DIRTY to _PAGE_DIRTY_HW and _PAGE_BIT_DIRTY to
> _PAGE_BIT_DIRTY_HW to make meanings more clear.  There are no functional
> changes from this patch.

There's no guarantee for "next" or "this" patch when a patch gets
applied so reword your commit message pls.

Also, I fail to understand here what _PAGE_DIRTY_HW makes more clear?
The page dirty bit is clear enough to me so why the churn?

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 06/26] x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW
  2020-12-03  9:19   ` Borislav Petkov
@ 2020-12-03 15:12     ` Dave Hansen
  2020-12-03 15:56       ` Yu, Yu-cheng
  0 siblings, 1 reply; 60+ messages in thread
From: Dave Hansen @ 2020-12-03 15:12 UTC (permalink / raw)
  To: Borislav Petkov, Yu-cheng Yu
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On 12/3/20 1:19 AM, Borislav Petkov wrote:
> On Tue, Nov 10, 2020 at 08:21:51AM -0800, Yu-cheng Yu wrote:
>> Before introducing _PAGE_COW for non-hardware memory management purposes in
>> the next patch, rename _PAGE_DIRTY to _PAGE_DIRTY_HW and _PAGE_BIT_DIRTY to
>> _PAGE_BIT_DIRTY_HW to make meanings more clear.  There are no functional
>> changes from this patch.
> There's no guarantee for "next" or "this" patch when a patch gets
> applied so reword your commit message pls.
> 
> Also, I fail to understand here what _PAGE_DIRTY_HW makes more clear?
> The page dirty bit is clear enough to me so why the churn?

Once upon a time in this set, we had:

	_PAGE_DIRTY	(the old hardware bit)
and
	_PAGE_DIRTY_SW	(the new shadow stack necessitated bit)

In *that* case, it made sense to change the name of the hardware one to
help differentiate them.  But, over time, we changed _PAGE_DIRTY_SW to
_PAGE_COW.

I think you're right.  The renaming is just churn now with the current
naming.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 06/26] x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW
  2020-12-03 15:12     ` Dave Hansen
@ 2020-12-03 15:56       ` Yu, Yu-cheng
  0 siblings, 0 replies; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-12-03 15:56 UTC (permalink / raw)
  To: Dave Hansen, Borislav Petkov
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On 12/3/2020 7:12 AM, Dave Hansen wrote:
> On 12/3/20 1:19 AM, Borislav Petkov wrote:
>> On Tue, Nov 10, 2020 at 08:21:51AM -0800, Yu-cheng Yu wrote:
>>> Before introducing _PAGE_COW for non-hardware memory management purposes in
>>> the next patch, rename _PAGE_DIRTY to _PAGE_DIRTY_HW and _PAGE_BIT_DIRTY to
>>> _PAGE_BIT_DIRTY_HW to make meanings more clear.  There are no functional
>>> changes from this patch.
>> There's no guarantee for "next" or "this" patch when a patch gets
>> applied so reword your commit message pls.
>>
>> Also, I fail to understand here what _PAGE_DIRTY_HW makes more clear?
>> The page dirty bit is clear enough to me so why the churn?
> 
> Once upon a time in this set, we had:
> 
> 	_PAGE_DIRTY	(the old hardware bit)
> and
> 	_PAGE_DIRTY_SW	(the new shadow stack necessitated bit)
> 
> In *that* case, it made sense to change the name of the hardware one to
> help differentiate them.  But, over time, we changed _PAGE_DIRTY_SW to
> _PAGE_COW.
> 
> I think you're right.  The renaming is just churn now with the current
> naming.
> 

Ok, I will drop this patch.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 07/26] x86/mm: Remove _PAGE_DIRTY_HW from kernel RO pages
  2020-11-10 16:21 ` [PATCH v15 07/26] x86/mm: Remove _PAGE_DIRTY_HW from kernel RO pages Yu-cheng Yu
@ 2020-12-07 16:36   ` Borislav Petkov
  2020-12-07 17:11     ` Yu, Yu-cheng
  0 siblings, 1 reply; 60+ messages in thread
From: Borislav Petkov @ 2020-12-07 16:36 UTC (permalink / raw)
  To: Yu-cheng Yu
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu, Christoph Hellwig

On Tue, Nov 10, 2020 at 08:21:52AM -0800, Yu-cheng Yu wrote:
> Kernel read-only PTEs are setup as _PAGE_DIRTY_HW.  Since these become
> shadow stack PTEs, remove the dirty bit.

This commit message is laconic to say the least. You need to start
explaining what you're doing because every time I look at a patch of
yours, I'm always grepping the SDM and looking forward in the patchset,
trying to rhyme up what that is all about.

Like for this one. I had to fast-forward to the next patch where all
that is explained. But this is not how review works - each patch's
commit message needs to be understandable on its own because when
they land upstream, they're not in a patchset like here. And review
should be done in the order the patches are numbered - not by jumping
back'n'forth.

So please think of the readers of your patches when writing those commit
messages. Latter are *not* write-only and not unimportant.

And those readers haven't spent copious amounts of time on the
technology so being more verbose and explaining things is a Good
Thing(tm). Don't worry about explaining too much - better too much than
too little.

And last but not least, having understandable and properly written
commit messages increases the chances of your patches landing upstream
considerably.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 07/26] x86/mm: Remove _PAGE_DIRTY_HW from kernel RO pages
  2020-12-07 16:36   ` Borislav Petkov
@ 2020-12-07 17:11     ` Yu, Yu-cheng
  0 siblings, 0 replies; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-12-07 17:11 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu, Christoph Hellwig

On 12/7/2020 8:36 AM, Borislav Petkov wrote:
> On Tue, Nov 10, 2020 at 08:21:52AM -0800, Yu-cheng Yu wrote:
>> Kernel read-only PTEs are setup as _PAGE_DIRTY_HW.  Since these become
>> shadow stack PTEs, remove the dirty bit.
> 
> This commit message is laconic to say the least. You need to start
> explaining what you're doing because every time I look at a patch of
> yours, I'm always grepping the SDM and looking forward in the patchset,
> trying to rhyme up what that is all about.
> 
> Like for this one. I had to fast-forward to the next patch where all
> that is explained. But this is not how review works - each patch's
> commit message needs to be understandable on its own because when
> they land upstream, they're not in a patchset like here. And review
> should be done in the order the patches are numbered - not by jumping
> back'n'forth.
> 
> So please think of the readers of your patches when writing those commit
> messages. Latter are *not* write-only and not unimportant.
> 
> And those readers haven't spent copious amounts of time on the
> technology so being more verbose and explaining things is a Good
> Thing(tm). Don't worry about explaining too much - better too much than
> too little.
> 
> And last but not least, having understandable and properly written
> commit messages increases the chances of your patches landing upstream
> considerably.
> 
> Thx.
> 

Thanks for your feedback.  I will improve the commit logs.

--
Yu-cheng

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW
  2020-11-10 16:21 ` [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW Yu-cheng Yu
@ 2020-12-08 17:50   ` Borislav Petkov
  2020-12-08 18:25     ` Yu, Yu-cheng
  0 siblings, 1 reply; 60+ messages in thread
From: Borislav Petkov @ 2020-12-08 17:50 UTC (permalink / raw)
  To: Yu-cheng Yu
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On Tue, Nov 10, 2020 at 08:21:53AM -0800, Yu-cheng Yu wrote:
> There is essentially no room left in the x86 hardware PTEs on some OSes
> (not Linux).  That left the hardware architects looking for a way to
> represent a new memory type (shadow stack) within the existing bits.
> They chose to repurpose a lightly-used state: Write=0,Dirty=1.

It is not clear to me what the definition and semantics of that bit is.

+#define _PAGE_BIT_COW          _PAGE_BIT_SOFTW5 /* copy-on-write */

Is it set by hw or by sw and hw uses it to know it is a shadow stack
page, and so on.

I think you should lead with its definition.

> The reason it's lightly used is that Dirty=1 is normally set by hardware
> and cannot normally be set by hardware on a Write=0 PTE.  Software must
> normally be involved to create one of these PTEs, so software can simply
> opt to not create them.
> 
> But that leaves us with a Linux problem: we need to ensure we never create

Please use passive voice in your commit message: no "we" or "I", etc.

> Write=0,Dirty=1 PTEs.  In places where we do create them, we need to find
> an alternative way to represent them _without_ using the same hardware bit
> combination.  Thus, enter _PAGE_COW.  This results in the following:
> 
> (a) A modified, copy-on-write (COW) page: (R/O + _PAGE_COW)
> (b) A R/O page that has been COW'ed: (R/O + _PAGE_COW)

Both are "R/O + _PAGE_COW". Where's the difference? The dirty bit?

>     The user page is in a R/O VMA, and get_user_pages() needs a writable
>     copy.  The page fault handler creates a copy of the page and sets
>     the new copy's PTE as R/O and _PAGE_COW.
> (c) A shadow stack PTE: (R/O + _PAGE_DIRTY_HW)

So W=0, D=1 ?

> (d) A shared shadow stack PTE: (R/O + _PAGE_COW)
>     When a shadow stack page is being shared among processes (this happens
>     at fork()), its PTE is cleared of _PAGE_DIRTY_HW, so the next shadow
>     stack access causes a fault, and the page is duplicated and
>     _PAGE_DIRTY_HW is set again.  This is the COW equivalent for shadow
>     stack pages, even though it's copy-on-access rather than copy-on-write.
> (e) A page where the processor observed a Write=1 PTE, started a write, set
>     Dirty=1, but then observed a Write=0 PTE.

How does that happen? Something changed the PTE's W bit to 0 in-between?

> That's possible today, but
>     will not happen on processors that support shadow stack.
> 
> Use _PAGE_COW in pte_wrprotect() and _PAGE_DIRTY_HW in pte_mkwrite().
> Apply the same changes to pmd and pud.
> 
> When this patch is applied, there are six free bits left in the 64-bit PTE.

s/When this patch is applied/After this/

Avoid having "This patch" or "This commit" in the commit message. It is
tautologically useless.

Also, do

$ git grep 'This patch' Documentation/process

for more details.

> There are no more free bits in the 32-bit PTE (except for PAE) and shadow
> stack is not implemented for the 32-bit kernel.
> 
> Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
> ---
>  arch/x86/include/asm/pgtable.h       | 120 ++++++++++++++++++++++++---
>  arch/x86/include/asm/pgtable_types.h |  41 ++++++++-
>  2 files changed, 150 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index b23697658b28..c88c7ccf0318 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -121,9 +121,9 @@ extern pmdval_t early_pmd_flags;
>   * The following only work if pte_present() is true.
>   * Undefined behaviour if not..
>   */
> -static inline int pte_dirty(pte_t pte)
> +static inline bool pte_dirty(pte_t pte)
>  {
> -	return pte_flags(pte) & _PAGE_DIRTY_HW;
> +	return pte_flags(pte) & _PAGE_DIRTY_BITS;

Why?

Does _PAGE_COW mean dirty too?

> @@ -343,6 +349,17 @@ static inline pte_t pte_mkold(pte_t pte)
>  
>  static inline pte_t pte_wrprotect(pte_t pte)
>  {
> +	/*
> +	 * Blindly clearing _PAGE_RW might accidentally create
> +	 * a shadow stack PTE (RW=0,Dirty=1).  Move the hardware
> +	 * dirty value to the software bit.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
> +		pte.pte |= (pte.pte & _PAGE_DIRTY_HW) >>
> +			   _PAGE_BIT_DIRTY_HW << _PAGE_BIT_COW;

Let that line stick out. And that shifting is not grokkable at a quick
glance, at least not to me. Simplify?

>  static inline pmd_t pmd_wrprotect(pmd_t pmd)
>  {
> +	/*
> +	 * Blindly clearing _PAGE_RW might accidentally create
> +	 * a shadow stack PMD (RW=0,Dirty=1).  Move the hardware
> +	 * dirty value to the software bit.

This whole carefully sidestepping the possibility of creating a shadow
stack pXd is kinda sucky...

> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index 7462a574fc93..5f764d8d9bae 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -23,7 +23,8 @@
>  #define _PAGE_BIT_SOFTW2	10	/* " */
>  #define _PAGE_BIT_SOFTW3	11	/* " */
>  #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
> -#define _PAGE_BIT_SOFTW4	58	/* available for programmer */
> +#define _PAGE_BIT_SOFTW4	57	/* available for programmer */
> +#define _PAGE_BIT_SOFTW5	58	/* available for programmer */
>  #define _PAGE_BIT_PKEY_BIT0	59	/* Protection Keys, bit 1/4 */
>  #define _PAGE_BIT_PKEY_BIT1	60	/* Protection Keys, bit 2/4 */
>  #define _PAGE_BIT_PKEY_BIT2	61	/* Protection Keys, bit 3/4 */
> @@ -36,6 +37,16 @@
>  #define _PAGE_BIT_SOFT_DIRTY	_PAGE_BIT_SOFTW3 /* software dirty tracking */
>  #define _PAGE_BIT_DEVMAP	_PAGE_BIT_SOFTW4
>  
> +/*
> + * This bit indicates a copy-on-write page, and is different from
> + * _PAGE_BIT_SOFT_DIRTY, which tracks which pages a task writes to.
> + */
> +#ifdef CONFIG_X86_64

CONFIG_X86_64 ? Do all x86 machines out there support CET?

If anything, CONFIG_X86_CET...

> +#define _PAGE_BIT_COW		_PAGE_BIT_SOFTW5 /* copy-on-write */
> +#else
> +#define _PAGE_BIT_COW		0
> +#endif
> +
>  /* If _PAGE_BIT_PRESENT is clear, we use these: */
>  /* - if the user mapped it with PROT_NONE; pte_present gives true */
-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW
  2020-12-08 17:50   ` Borislav Petkov
@ 2020-12-08 18:25     ` Yu, Yu-cheng
  2020-12-08 18:47       ` Borislav Petkov
  0 siblings, 1 reply; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-12-08 18:25 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On 12/8/2020 9:50 AM, Borislav Petkov wrote:
> On Tue, Nov 10, 2020 at 08:21:53AM -0800, Yu-cheng Yu wrote:
>> There is essentially no room left in the x86 hardware PTEs on some OSes
>> (not Linux).  That left the hardware architects looking for a way to
>> represent a new memory type (shadow stack) within the existing bits.
>> They chose to repurpose a lightly-used state: Write=0,Dirty=1.
> 
> It is not clear to me what the definition and semantics of that bit are.
> 
> +#define _PAGE_BIT_COW          _PAGE_BIT_SOFTW5 /* copy-on-write */
> 
> Is it set by hw or by sw, and does hw use it to know it is a shadow stack
> page, and so on?
> 
> I think you should lead with its definition.

Ok.

...

>> Write=0,Dirty=1 PTEs.  In places where we do create them, we need to find
>> an alternative way to represent them _without_ using the same hardware bit
>> combination.  Thus, enter _PAGE_COW.  This results in the following:
>>
>> (a) A modified, copy-on-write (COW) page: (R/O + _PAGE_COW)
>> (b) A R/O page that has been COW'ed: (R/O + _PAGE_COW)
> 
> Both are "R/O + _PAGE_COW". Where's the difference? The dirty bit?

The PTEs are the same for both (a) and (b), but come from different routes.

>>      The user page is in an R/O VMA, and get_user_pages() needs a writable
>>      copy.  The page fault handler creates a copy of the page and sets
>>      the new copy's PTE as R/O and _PAGE_COW.
>> (c) A shadow stack PTE: (R/O + _PAGE_DIRTY_HW)
> 
> So W=0, D=1 ?

Yes.

>> (d) A shared shadow stack PTE: (R/O + _PAGE_COW)
>>      When a shadow stack page is being shared among processes (this happens
>>      at fork()), its PTE is cleared of _PAGE_DIRTY_HW, so the next shadow
>>      stack access causes a fault, and the page is duplicated and
>>      _PAGE_DIRTY_HW is set again.  This is the COW equivalent for shadow
>>      stack pages, even though it's copy-on-access rather than copy-on-write.
>> (e) A page where the processor observed a Write=1 PTE, started a write, set
>>      Dirty=1, but then observed a Write=0 PTE.
> 
> How does that happen? Something changed the PTE's W bit to 0 in-between?

Yes.

...

>> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
>> index b23697658b28..c88c7ccf0318 100644
>> --- a/arch/x86/include/asm/pgtable.h
>> +++ b/arch/x86/include/asm/pgtable.h
>> @@ -121,9 +121,9 @@ extern pmdval_t early_pmd_flags;
>>    * The following only work if pte_present() is true.
>>    * Undefined behaviour if not..
>>    */
>> -static inline int pte_dirty(pte_t pte)
>> +static inline bool pte_dirty(pte_t pte)
>>   {
>> -	return pte_flags(pte) & _PAGE_DIRTY_HW;
>> +	return pte_flags(pte) & _PAGE_DIRTY_BITS;
> 
> Why?
> 
> Does _PAGE_COW mean dirty too?

Yes.  Basically [read-only & dirty] is created by software.  Now the 
software uses a different bit.
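
[ Illustrative note, not quoted from the patch: the mask checked above is
  presumably defined along these lines in the pgtable_types.h hunk, so that
  either the hardware Dirty bit or the software COW bit counts as dirty:

	/* Either encoding of "dirty" counts for pte_dirty() and friends. */
	#define _PAGE_DIRTY_BITS	(_PAGE_DIRTY_HW | _PAGE_COW)
]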

>> @@ -343,6 +349,17 @@ static inline pte_t pte_mkold(pte_t pte)
>>   
>>   static inline pte_t pte_wrprotect(pte_t pte)
>>   {
>> +	/*
>> +	 * Blindly clearing _PAGE_RW might accidentally create
>> +	 * a shadow stack PTE (RW=0,Dirty=1).  Move the hardware
>> +	 * dirty value to the software bit.
>> +	 */
>> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
>> +		pte.pte |= (pte.pte & _PAGE_DIRTY_HW) >>
>> +			   _PAGE_BIT_DIRTY_HW << _PAGE_BIT_COW;
> 
> Let that line stick out. And that shifting is not grokkable at a quick
> glance, at least not to me. Simplify?

Ok.
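
[ As an illustrative sketch only, one way the double shift could be
  rewritten without the bit arithmetic; this assumes the existing
  pte_flags()/pte_set_flags()/pte_clear_flags() helpers and is not the
  actual respin:

	static inline pte_t pte_wrprotect(pte_t pte)
	{
		/*
		 * Move the hardware Dirty bit to the software COW bit
		 * first, so that clearing RW cannot accidentally create
		 * the shadow stack encoding (RW=0, Dirty=1).
		 */
		if (cpu_feature_enabled(X86_FEATURE_SHSTK) &&
		    (pte_flags(pte) & _PAGE_DIRTY_HW)) {
			pte = pte_clear_flags(pte, _PAGE_DIRTY_HW);
			pte = pte_set_flags(pte, _PAGE_COW);
		}

		return pte_clear_flags(pte, _PAGE_RW);
	}
]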

>>   static inline pmd_t pmd_wrprotect(pmd_t pmd)
>>   {
>> +	/*
>> +	 * Blindly clearing _PAGE_RW might accidentally create
>> +	 * a shadow stack PMD (RW=0,Dirty=1).  Move the hardware
>> +	 * dirty value to the software bit.
> 
> This whole careful sidestepping of the possibility of creating a shadow
> stack pXd is kinda sucky...
> 
>> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
>> index 7462a574fc93..5f764d8d9bae 100644
>> --- a/arch/x86/include/asm/pgtable_types.h
>> +++ b/arch/x86/include/asm/pgtable_types.h
>> @@ -23,7 +23,8 @@
>>   #define _PAGE_BIT_SOFTW2	10	/* " */
>>   #define _PAGE_BIT_SOFTW3	11	/* " */
>>   #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
>> -#define _PAGE_BIT_SOFTW4	58	/* available for programmer */
>> +#define _PAGE_BIT_SOFTW4	57	/* available for programmer */
>> +#define _PAGE_BIT_SOFTW5	58	/* available for programmer */
>>   #define _PAGE_BIT_PKEY_BIT0	59	/* Protection Keys, bit 1/4 */
>>   #define _PAGE_BIT_PKEY_BIT1	60	/* Protection Keys, bit 2/4 */
>>   #define _PAGE_BIT_PKEY_BIT2	61	/* Protection Keys, bit 3/4 */
>> @@ -36,6 +37,16 @@
>>   #define _PAGE_BIT_SOFT_DIRTY	_PAGE_BIT_SOFTW3 /* software dirty tracking */
>>   #define _PAGE_BIT_DEVMAP	_PAGE_BIT_SOFTW4
>>   
>> +/*
>> + * This bit indicates a copy-on-write page, and is different from
>> + * _PAGE_BIT_SOFT_DIRTY, which tracks which pages a task writes to.
>> + */
>> +#ifdef CONFIG_X86_64
> 
> CONFIG_X86_64 ? Do all x86 machines out there support CET?
> 
> If anything, CONFIG_X86_CET...

Ok.

--
Yu-cheng

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW
  2020-12-08 18:25     ` Yu, Yu-cheng
@ 2020-12-08 18:47       ` Borislav Petkov
  2020-12-08 19:24         ` Yu, Yu-cheng
  0 siblings, 1 reply; 60+ messages in thread
From: Borislav Petkov @ 2020-12-08 18:47 UTC (permalink / raw)
  To: Yu, Yu-cheng
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On Tue, Dec 08, 2020 at 10:25:15AM -0800, Yu, Yu-cheng wrote:
> > Both are "R/O + _PAGE_COW". Where's the difference? The dirty bit?
> 
> The PTEs are the same for both (a) and (b), but come from different routes.

Do not be afraid to go into detail and explain to me what those routes
are please.

> > > (e) A page where the processor observed a Write=1 PTE, started a write, set
> > >      Dirty=1, but then observed a Write=0 PTE.
> > 
> > How does that happen? Something changed the PTE's W bit to 0 in-between?
> 
> Yes.

Also do not shy away from going into detail and explaining what you mean
here. Example?

> > Does _PAGE_COW mean dirty too?
> 
> Yes.  Basically [read-only & dirty] is created by software.  Now the
> software uses a different bit.

That convention:

"[read-only & dirty] is created by software."

needs some prominent writeup somewhere explaining what it is.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW
  2020-12-08 18:47       ` Borislav Petkov
@ 2020-12-08 19:24         ` Yu, Yu-cheng
  2020-12-10 17:41           ` Borislav Petkov
  0 siblings, 1 reply; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-12-08 19:24 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On 12/8/2020 10:47 AM, Borislav Petkov wrote:
> On Tue, Dec 08, 2020 at 10:25:15AM -0800, Yu, Yu-cheng wrote:
>>> Both are "R/O + _PAGE_COW". Where's the difference? The dirty bit?
>>
>> The PTEs are the same for both (a) and (b), but come from different routes.
> 
> Do not be afraid to go into detail and explain to me what those routes
> are please.

Case (a) is a normal writable data page that has gone through fork(). 
So it has W=0, D=1.  But here, the software chooses not to use the D 
bit, and instead, W=0, COW=1.

Case (b) is a normal read-only data page.  Since it is read-only, fork() 
won't affect it.  In __get_user_pages(), a copy of the read-only page is 
needed, and the page is duplicated.  The software sets COW=1 for the new 
copy.
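
[ Sketch of the two routes, inferred from the explanation above rather
  than taken from the changelog:

	(a) fork() write-protects the parent's writable, dirty PTE:
	      Write=1, Dirty=1  ->  Write=0, COW=1   (Dirty moved to COW)

	(b) __get_user_pages() needs a writable copy of a page in a
	    read-only VMA; the duplicated page is installed directly as:
	      Write=0, COW=1
]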

>>>> (e) A page where the processor observed a Write=1 PTE, started a write, set
>>>>       Dirty=1, but then observed a Write=0 PTE.
>>>
>>> How does that happen? Something changed the PTE's W bit to 0 in-between?
>>
>> Yes.
> 
> Also do not shy away from going into detail and explaining what you mean
> here. Example?

Thread-A is writing to a writable page, and the page's PTE is becoming 
W=1, D=1.  In the middle of it, Thread-B is changing the PTE to W=0.
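
[ Illustrative timeline of case (e), an assumption based on the reply
  above rather than on the changelog:

	Thread-A (CPU doing the store)     Thread-B (software)
	------------------------------     -------------------
	reads PTE: Write=1, Dirty=0
	                                   clears Write (wrprotect)
	completes the store, sets Dirty=1

  leaving a transient Write=0, Dirty=1 PTE, which is exactly the hardware
  shadow stack encoding even though the page never was a shadow stack. ]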

>>> Does _PAGE_COW mean dirty too?
>>
>> Yes.  Basically [read-only & dirty] is created by software.  Now the
>> software uses a different bit.
> 
> That convention:
> 
> "[read-only & dirty] is created by software."
> 
> needs some prominent writeup somewhere explaining what it is.
> 
> Thx.
> 

I will put these into the comments.

--
Yu-cheng

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW
  2020-12-08 19:24         ` Yu, Yu-cheng
@ 2020-12-10 17:41           ` Borislav Petkov
  2020-12-10 18:10             ` Yu, Yu-cheng
  0 siblings, 1 reply; 60+ messages in thread
From: Borislav Petkov @ 2020-12-10 17:41 UTC (permalink / raw)
  To: Yu, Yu-cheng
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On Tue, Dec 08, 2020 at 11:24:16AM -0800, Yu, Yu-cheng wrote:
> Case (a) is a normal writable data page that has gone through fork(). So it

Writable?

> has W=0, D=1.  But here, the software chooses not to use the D bit, and

But it has W=0. So not writable?

> instead, W=0, COW=1.

So the "new" way of denoting that the page is modified is COW=1
*when* on CET hw. The D=1 bit is still used on the rest thus the two
_PAGE_DIRTY_BITS.

Am I close?

> Case (b) is a normal read-only data page.  Since it is read-only, fork()
> won't affect it.  In __get_user_pages(), a copy of the read-only page is
> needed, and the page is duplicated.  The software sets COW=1 for the new
> copy.

That makes more sense.

> Thread-A is writing to a writable page, and the page's PTE is becoming W=1,
> D=1.  In the middle of it, Thread-B is changing the PTE to W=0.

Yah, add that to the explanation pls.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW
  2020-12-10 17:41           ` Borislav Petkov
@ 2020-12-10 18:10             ` Yu, Yu-cheng
  0 siblings, 0 replies; 60+ messages in thread
From: Yu, Yu-cheng @ 2020-12-10 18:10 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: x86, H. Peter Anvin, Thomas Gleixner, Ingo Molnar, linux-kernel,
	linux-doc, linux-mm, linux-arch, linux-api, Arnd Bergmann,
	Andy Lutomirski, Balbir Singh, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, H.J. Lu, Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	Ravi V. Shankar, Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
	Pengfei Xu

On 12/10/2020 9:41 AM, Borislav Petkov wrote:
> On Tue, Dec 08, 2020 at 11:24:16AM -0800, Yu, Yu-cheng wrote:
>> Case (a) is a normal writable data page that has gone through fork(). So it
> 
> Writable?
> 
>> has W=0, D=1.  But here, the software chooses not to use the D bit, and
> 
> But it has W=0. So not writable?

Maybe I will change it to: a page in a writable vma that has been
modified and gone through fork().

>> instead, W=0, COW=1.
> 
> So the "new" way of denoting that the page is modified is COW=1
> *when* on CET hw. The D=1 bit is still used on the rest thus the two
> _PAGE_DIRTY_BITS.
> 
> Am I close?

COW=1 is only used in a copy-on-write situation (when CET is enabled).  If 
W=1, the D bit is used.
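
[ Summarizing the encodings discussed in this thread (a summary only, not
  text from the patch):

	Write=1, Dirty=1, COW=0 :  ordinary page that has been written to
	Write=0, Dirty=1, COW=0 :  shadow stack page (hardware meaning)
	Write=0, Dirty=0, COW=1 :  software "dirty": a COW'ed page, or a
	                           shared shadow stack page awaiting its
	                           copy-on-access fault
]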

>> Case (b) is a normal read-only data page.  Since it is read-only, fork()
>> won't affect it.  In __get_user_pages(), a copy of the read-only page is
>> needed, and the page is duplicated.  The software sets COW=1 for the new
>> copy.
> 
> That makes more sense.
> 
>> Thread-A is writing to a writable page, and the page's PTE is becoming W=1,
>> D=1.  In the middle of it, Thread-B is changing the PTE to W=0.
> 
> Yah, add that to the explanation pls.
> 

Sure.

^ permalink raw reply	[flat|nested] 60+ messages in thread

end of thread, other threads:[~2020-12-10 18:12 UTC | newest]

Thread overview: 60+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-10 16:21 [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Yu-cheng Yu
2020-11-10 16:21 ` [PATCH v15 01/26] Documentation/x86: Add CET description Yu-cheng Yu
2020-11-30 18:26   ` Nick Desaulniers
2020-11-30 18:34     ` Yu, Yu-cheng
2020-11-30 19:38       ` Fāng-ruì Sòng
2020-11-30 19:47         ` Yu, Yu-cheng
2020-11-10 16:21 ` [PATCH v15 02/26] x86/cpufeatures: Add CET CPU feature flags for Control-flow Enforcement Technology (CET) Yu-cheng Yu
2020-11-10 16:21 ` [PATCH v15 03/26] x86/fpu/xstate: Introduce CET MSR XSAVES supervisor states Yu-cheng Yu
2020-11-26 11:02   ` Borislav Petkov
2020-11-30 17:45   ` [NEEDS-REVIEW] " Dave Hansen
2020-11-30 18:06     ` Yu, Yu-cheng
2020-11-30 18:12       ` Dave Hansen
2020-11-30 18:17         ` Yu, Yu-cheng
2020-11-30 23:16     ` Yu, Yu-cheng
2020-12-01 22:26       ` Dave Hansen
2020-12-01 22:35         ` Yu, Yu-cheng
2020-11-10 16:21 ` [PATCH v15 04/26] x86/cet: Add control-protection fault handler Yu-cheng Yu
2020-11-26 18:49   ` Borislav Petkov
2020-11-10 16:21 ` [PATCH v15 05/26] x86/cet/shstk: Add Kconfig option for user-mode Shadow Stack Yu-cheng Yu
2020-11-27 17:10   ` Borislav Petkov
2020-11-28 16:23     ` Yu, Yu-cheng
2020-11-30 18:15       ` Borislav Petkov
2020-11-30 22:48         ` Yu, Yu-cheng
2020-12-01 16:02           ` Borislav Petkov
2020-11-30 19:56   ` Nick Desaulniers
2020-11-30 20:30     ` Yu, Yu-cheng
2020-11-10 16:21 ` [PATCH v15 06/26] x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW Yu-cheng Yu
2020-12-03  9:19   ` Borislav Petkov
2020-12-03 15:12     ` Dave Hansen
2020-12-03 15:56       ` Yu, Yu-cheng
2020-11-10 16:21 ` [PATCH v15 07/26] x86/mm: Remove _PAGE_DIRTY_HW from kernel RO pages Yu-cheng Yu
2020-12-07 16:36   ` Borislav Petkov
2020-12-07 17:11     ` Yu, Yu-cheng
2020-11-10 16:21 ` [PATCH v15 08/26] x86/mm: Introduce _PAGE_COW Yu-cheng Yu
2020-12-08 17:50   ` Borislav Petkov
2020-12-08 18:25     ` Yu, Yu-cheng
2020-12-08 18:47       ` Borislav Petkov
2020-12-08 19:24         ` Yu, Yu-cheng
2020-12-10 17:41           ` Borislav Petkov
2020-12-10 18:10             ` Yu, Yu-cheng
2020-11-10 16:21 ` [PATCH v15 09/26] drm/i915/gvt: Change _PAGE_DIRTY to _PAGE_DIRTY_BITS Yu-cheng Yu
2020-11-10 16:21 ` [PATCH v15 10/26] x86/mm: Update pte_modify for _PAGE_COW Yu-cheng Yu
2020-11-10 16:21 ` [PATCH v15 11/26] x86/mm: Update ptep_set_wrprotect() and pmdp_set_wrprotect() for transition from _PAGE_DIRTY_HW to _PAGE_COW Yu-cheng Yu
2020-11-10 16:21 ` [PATCH v15 12/26] mm: Introduce VM_SHSTK for shadow stack memory Yu-cheng Yu
2020-11-10 16:21 ` [PATCH v15 13/26] x86/mm: Shadow Stack page fault error checking Yu-cheng Yu
2020-11-10 16:21 ` [PATCH v15 14/26] x86/mm: Update maybe_mkwrite() for shadow stack Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 15/26] mm: Fixup places that call pte_mkwrite() directly Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 16/26] mm: Add guard pages around a shadow stack Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 17/26] mm/mmap: Add shadow stack pages to memory accounting Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 18/26] mm: Update can_follow_write_pte() for shadow stack Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 19/26] mm: Re-introduce vm_flags to do_mmap() Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 20/26] x86/cet/shstk: User-mode shadow stack support Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 21/26] x86/cet/shstk: Handle signals for shadow stack Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 22/26] binfmt_elf: Define GNU_PROPERTY_X86_FEATURE_1_AND properties Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 23/26] ELF: Introduce arch_setup_elf_property() Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 24/26] x86/cet/shstk: Handle thread shadow stack Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 25/26] x86/cet/shstk: Add arch_prctl functions for " Yu-cheng Yu
2020-11-10 16:22 ` [PATCH v15 26/26] mm: Introduce PROT_SHSTK " Yu-cheng Yu
2020-11-27  9:29 ` [PATCH v15 00/26] Control-flow Enforcement: Shadow Stack Balbir Singh
2020-11-28 16:31   ` Yu, Yu-cheng
