* [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation
@ 2020-11-25  5:16 Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 01/22] powerpc: Add new macro to handle NESTED_IFCLR Aneesh Kumar K.V
                   ` (21 more replies)
  0 siblings, 22 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

This patch series implements KUAP and KUEP with hash translation mode
using memory protection keys. The kernel now uses memory protection
key 3 to control access to itself: kernel page table entries are
configured with key 3, and access to locations configured with any
other key value is denied when in kernel mode (MSR_PR=0). This
includes userspace, which is by default configured with key 0.

null-syscall benchmark results:

With smap/smep disabled:
Without patch:
	845.29 ns    2451.44 cycles
With patch series:
	858.38 ns    2489.30 cycles

With smap/smep enabled:
Without patch:
	NA
With patch series:
	1021.51 ns    2962.44 cycles

Changes from v5:
* Rework the patch based on a suggestion from Michael to avoid the
  usage of CONFIG_PPC_PKEY on BOOKE platforms.

Changes from v4:
* Repost with other pkey related changes split out as a separate series.
* Improve null-syscall benchmark by optimizing SPRN save and restore.

Changes from v3:
* Fix build error reported by kernel test robot <lkp@intel.com>

Changes from v2:
* Rebase to the latest kernel.
* Fixed a bug with disabling KUEP/KUAP on kernel command line
* Added a patch to make kup key dynamic.

Changes from V1:
* Rebased on latest kernel

Aneesh Kumar K.V (22):
  powerpc: Add new macro to handle NESTED_IFCLR
  KVM: PPC: BOOK3S: PR: Ignore UAMOR SPR
  powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of
    PPC_MEM_KEYS
  powerpc/book3s64/kuap/kuep: Move uamor setup to pkey init
  powerpc/book3s64/kuap: Move KUAP related function outside radix
  powerpc/book3s64/kuep: Move KUEP related function outside radix
  powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP
  powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash
    translation
  powerpc/exec: Set thread.regs early during exec
  powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on
    entry and exit from kernel
  powerpc/book3s64/pkeys: Inherit correctly on fork.
  powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec
  powerpc/ptrace-view: Use pt_regs values instead of thread_struct based
    one.
  powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode.
  powerpc/book3s64/kuap: Restrict access to userspace based on userspace
    AMR
  powerpc/book3s64/kuap: Improve error reporting with KUAP
  powerpc/book3s64/kuap: Use Key 3 to implement KUAP with hash
    translation.
  powerpc/book3s64/kuep: Use Key 3 to implement KUEP with hash
    translation.
  powerpc/book3s64/hash/kuap: Enable kuap on hash
  powerpc/book3s64/hash/kuep: Enable KUEP on hash
  powerpc/book3s64/hash/kup: Don't hardcode kup key
  powerpc/book3s64/pkeys: Optimize FTR_KUAP and FTR_KUEP disabled case

 arch/powerpc/include/asm/book3s/32/kup.h      |   4 +-
 .../powerpc/include/asm/book3s/64/hash-pkey.h |  10 +-
 arch/powerpc/include/asm/book3s/64/hash.h     |   2 +-
 .../powerpc/include/asm/book3s/64/kup-radix.h | 203 --------
 arch/powerpc/include/asm/book3s/64/kup.h      | 440 ++++++++++++++++++
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |   1 +
 arch/powerpc/include/asm/book3s/64/mmu.h      |   2 +-
 arch/powerpc/include/asm/book3s/64/pkeys.h    |   3 +
 arch/powerpc/include/asm/feature-fixups.h     |   3 +
 arch/powerpc/include/asm/kup.h                |   8 +-
 arch/powerpc/include/asm/mmu.h                |  14 +-
 arch/powerpc/include/asm/mmu_context.h        |   2 +-
 arch/powerpc/include/asm/nohash/32/kup-8xx.h  |   4 +-
 arch/powerpc/include/asm/processor.h          |   4 -
 arch/powerpc/include/asm/ptrace.h             |  12 +-
 arch/powerpc/include/asm/thread_info.h        |   2 -
 arch/powerpc/kernel/asm-offsets.c             |   5 +
 arch/powerpc/kernel/entry_64.S                |   6 +-
 arch/powerpc/kernel/exceptions-64s.S          |   4 +-
 arch/powerpc/kernel/process.c                 |  58 ++-
 arch/powerpc/kernel/ptrace/ptrace-view.c      |   7 +-
 arch/powerpc/kernel/syscall_64.c              |  38 +-
 arch/powerpc/kernel/traps.c                   |   6 -
 arch/powerpc/kvm/book3s_emulate.c             |   6 +
 arch/powerpc/mm/book3s64/Makefile             |   2 +-
 arch/powerpc/mm/book3s64/hash_4k.c            |   2 +-
 arch/powerpc/mm/book3s64/hash_64k.c           |   4 +-
 arch/powerpc/mm/book3s64/hash_hugepage.c      |   2 +-
 arch/powerpc/mm/book3s64/hash_hugetlbpage.c   |   2 +-
 arch/powerpc/mm/book3s64/hash_pgtable.c       |   2 +-
 arch/powerpc/mm/book3s64/hash_utils.c         |  10 +-
 arch/powerpc/mm/book3s64/pkeys.c              | 177 ++++---
 arch/powerpc/mm/book3s64/radix_pgtable.c      |  47 +-
 arch/powerpc/mm/fault.c                       |   2 +-
 arch/powerpc/platforms/Kconfig.cputype        |   5 +
 35 files changed, 715 insertions(+), 384 deletions(-)
 delete mode 100644 arch/powerpc/include/asm/book3s/64/kup-radix.h
 create mode 100644 arch/powerpc/include/asm/book3s/64/kup.h

-- 
2.28.0



* [PATCH v6 01/22] powerpc: Add new macro to handle NESTED_IFCLR
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 02/22] KVM: PPC: BOOK3S: PR: Ignore UAMOR SPR Aneesh Kumar K.V
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

This macro will be used by the following patches.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/feature-fixups.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/include/asm/feature-fixups.h b/arch/powerpc/include/asm/feature-fixups.h
index fbd406cd6916..5cdba929a8ae 100644
--- a/arch/powerpc/include/asm/feature-fixups.h
+++ b/arch/powerpc/include/asm/feature-fixups.h
@@ -100,6 +100,9 @@ label##5:							\
 #define END_MMU_FTR_SECTION_NESTED_IFSET(msk, label)	\
 	END_MMU_FTR_SECTION_NESTED((msk), (msk), label)
 
+#define END_MMU_FTR_SECTION_NESTED_IFCLR(msk, label)	\
+	END_MMU_FTR_SECTION_NESTED((msk), 0, label)
+
 #define END_MMU_FTR_SECTION_IFSET(msk)	END_MMU_FTR_SECTION((msk), (msk))
 #define END_MMU_FTR_SECTION_IFCLR(msk)	END_MMU_FTR_SECTION((msk), 0)
 
-- 
2.28.0



* [PATCH v6 02/22] KVM: PPC: BOOK3S: PR: Ignore UAMOR SPR
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 01/22] powerpc: Add new macro to handle NESTED_IFCLR Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 03/22] powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of PPC_MEM_KEYS Aneesh Kumar K.V
                   ` (19 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

With POWER7 and above we expect the CPU to support memory protection
keys. The number of keys is firmware controlled, based on the device
tree. PR KVM does not expose key details via the device tree, hence
when running with PR KVM we run with MMU_FTR_PKEY support disabled.
But the guest can still issue UAMOR updates. Hence ignore writes to
these SPRs, and for mfspr return 0, indicating that no AMR/IAMR
update is allowed.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/kvm/book3s_emulate.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/powerpc/kvm/book3s_emulate.c b/arch/powerpc/kvm/book3s_emulate.c
index 0effd48c8f4d..b08cc15f31c7 100644
--- a/arch/powerpc/kvm/book3s_emulate.c
+++ b/arch/powerpc/kvm/book3s_emulate.c
@@ -840,6 +840,9 @@ int kvmppc_core_emulate_mtspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong spr_val)
 	case SPRN_MMCR1:
 	case SPRN_MMCR2:
 	case SPRN_UMMCR2:
+	case SPRN_UAMOR:
+	case SPRN_IAMR:
+	case SPRN_AMR:
 #endif
 		break;
 unprivileged:
@@ -1004,6 +1007,9 @@ int kvmppc_core_emulate_mfspr_pr(struct kvm_vcpu *vcpu, int sprn, ulong *spr_val
 	case SPRN_MMCR2:
 	case SPRN_UMMCR2:
 	case SPRN_TIR:
+	case SPRN_UAMOR:
+	case SPRN_IAMR:
+	case SPRN_AMR:
 #endif
 		*spr_val = 0;
 		break;
-- 
2.28.0



* [PATCH v6 03/22] powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of PPC_MEM_KEYS
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 01/22] powerpc: Add new macro to handle NESTED_IFCLR Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 02/22] KVM: PPC: BOOK3S: PR: Ignore UAMOR SPR Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25 13:30   ` Christophe Leroy
  2020-11-26  3:25   ` Michael Ellerman
  2020-11-25  5:16 ` [PATCH v6 04/22] powerpc/book3s64/kuap/kuep: Move uamor setup to pkey init Aneesh Kumar K.V
                   ` (18 subsequent siblings)
  21 siblings, 2 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

The next set of patches adds support for KUAP with hash translation.
Hence make KUAP a BOOK3S_64 feature, and a subfeature of
PPC_MEM_KEYS. Hash translation is going to use pkeys to support
KUAP/KUEP. Adding this dependency reduces the code complexity and
enables us to move some of the initialization code to pkeys.c.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 .../powerpc/include/asm/book3s/64/kup-radix.h |  4 ++--
 arch/powerpc/include/asm/book3s/64/mmu.h      |  2 +-
 arch/powerpc/include/asm/ptrace.h             |  7 +++++-
 arch/powerpc/kernel/asm-offsets.c             |  3 +++
 arch/powerpc/mm/book3s64/Makefile             |  2 +-
 arch/powerpc/mm/book3s64/pkeys.c              | 24 ++++++++++++-------
 arch/powerpc/platforms/Kconfig.cputype        |  5 ++++
 7 files changed, 33 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
index 28716e2f13e3..68eaa2fac3ab 100644
--- a/arch/powerpc/include/asm/book3s/64/kup-radix.h
+++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
@@ -16,7 +16,7 @@
 #ifdef CONFIG_PPC_KUAP
 	BEGIN_MMU_FTR_SECTION_NESTED(67)
 	mfspr	\gpr1, SPRN_AMR
-	ld	\gpr2, STACK_REGS_KUAP(r1)
+	ld	\gpr2, STACK_REGS_AMR(r1)
 	cmpd	\gpr1, \gpr2
 	beq	998f
 	isync
@@ -48,7 +48,7 @@
 	bne	\msr_pr_cr, 99f
 	.endif
 	mfspr	\gpr1, SPRN_AMR
-	std	\gpr1, STACK_REGS_KUAP(r1)
+	std	\gpr1, STACK_REGS_AMR(r1)
 	li	\gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
 	sldi	\gpr2, \gpr2, AMR_KUAP_SHIFT
 	cmpd	\use_cr, \gpr1, \gpr2
diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index e0b52940e43c..a2a015066bae 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -199,7 +199,7 @@ extern int mmu_io_psize;
 void mmu_early_init_devtree(void);
 void hash__early_init_devtree(void);
 void radix__early_init_devtree(void);
-#ifdef CONFIG_PPC_MEM_KEYS
+#ifdef CONFIG_PPC_PKEY
 void pkey_early_init_devtree(void);
 #else
 static inline void pkey_early_init_devtree(void) {}
diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index e2c778c176a3..e7f1caa007a4 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -53,9 +53,14 @@ struct pt_regs
 #ifdef CONFIG_PPC64
 			unsigned long ppr;
 #endif
+			union {
 #ifdef CONFIG_PPC_KUAP
-			unsigned long kuap;
+				unsigned long kuap;
 #endif
+#ifdef CONFIG_PPC_PKEY
+				unsigned long amr;
+#endif
+			};
 		};
 		unsigned long __pad[2];	/* Maintain 16 byte interrupt stack alignment */
 	};
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index c2722ff36e98..418a0b314a33 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -354,6 +354,9 @@ int main(void)
 	STACK_PT_REGS_OFFSET(_PPR, ppr);
 #endif /* CONFIG_PPC64 */
 
+#ifdef CONFIG_PPC_PKEY
+	STACK_PT_REGS_OFFSET(STACK_REGS_AMR, amr);
+#endif
 #ifdef CONFIG_PPC_KUAP
 	STACK_PT_REGS_OFFSET(STACK_REGS_KUAP, kuap);
 #endif
diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
index fd393b8be14f..1b56d3af47d4 100644
--- a/arch/powerpc/mm/book3s64/Makefile
+++ b/arch/powerpc/mm/book3s64/Makefile
@@ -17,7 +17,7 @@ endif
 obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o
 obj-$(CONFIG_PPC_SUBPAGE_PROT)	+= subpage_prot.o
 obj-$(CONFIG_SPAPR_TCE_IOMMU)	+= iommu_api.o
-obj-$(CONFIG_PPC_MEM_KEYS)	+= pkeys.o
+obj-$(CONFIG_PPC_PKEY)	+= pkeys.o
 
 # Instrumenting the SLB fault path can lead to duplicate SLB entries
 KCOV_INSTRUMENT_slb.o := n
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index b1d091a97611..7dc71f85683d 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -89,12 +89,14 @@ static int scan_pkey_feature(void)
 		}
 	}
 
+#ifdef CONFIG_PPC_MEM_KEYS
 	/*
 	 * Adjust the upper limit, based on the number of bits supported by
 	 * arch-neutral code.
 	 */
 	pkeys_total = min_t(int, pkeys_total,
 			    ((ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT) + 1));
+#endif
 	return pkeys_total;
 }
 
@@ -102,6 +104,7 @@ void __init pkey_early_init_devtree(void)
 {
 	int pkeys_total, i;
 
+#ifdef CONFIG_PPC_MEM_KEYS
 	/*
 	 * We define PKEY_DISABLE_EXECUTE in addition to the arch-neutral
 	 * generic defines for PKEY_DISABLE_ACCESS and PKEY_DISABLE_WRITE.
@@ -117,7 +120,7 @@ void __init pkey_early_init_devtree(void)
 	BUILD_BUG_ON(__builtin_clzl(ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT) +
 		     __builtin_popcountl(ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT)
 				!= (sizeof(u64) * BITS_PER_BYTE));
-
+#endif
 	/*
 	 * Only P7 and above supports SPRN_AMR update with MSR[PR] = 1
 	 */
@@ -223,14 +226,6 @@ void __init pkey_early_init_devtree(void)
 	return;
 }
 
-void pkey_mm_init(struct mm_struct *mm)
-{
-	if (!mmu_has_feature(MMU_FTR_PKEY))
-		return;
-	mm_pkey_allocation_map(mm) = initial_allocation_mask;
-	mm->context.execute_only_pkey = execute_only_key;
-}
-
 static inline u64 read_amr(void)
 {
 	return mfspr(SPRN_AMR);
@@ -257,6 +252,15 @@ static inline void write_iamr(u64 value)
 	mtspr(SPRN_IAMR, value);
 }
 
+#ifdef CONFIG_PPC_MEM_KEYS
+void pkey_mm_init(struct mm_struct *mm)
+{
+	if (!mmu_has_feature(MMU_FTR_PKEY))
+		return;
+	mm_pkey_allocation_map(mm) = initial_allocation_mask;
+	mm->context.execute_only_pkey = execute_only_key;
+}
+
 static inline void init_amr(int pkey, u8 init_bits)
 {
 	u64 new_amr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey));
@@ -445,3 +449,5 @@ void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm)
 	mm_pkey_allocation_map(mm) = mm_pkey_allocation_map(oldmm);
 	mm->context.execute_only_pkey = oldmm->context.execute_only_pkey;
 }
+
+#endif /* CONFIG_PPC_MEM_KEYS */
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index c194c4ae8bc7..f255e8f32155 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -395,6 +395,11 @@ config PPC_KUAP_DEBUG
 	  Add extra debugging for Kernel Userspace Access Protection (KUAP)
 	  If you're unsure, say N.
 
+config PPC_PKEY
+	def_bool y
+	depends on PPC_BOOK3S_64
+	depends on PPC_MEM_KEYS || PPC_KUAP || PPC_KUEP
+
 config ARCH_ENABLE_HUGEPAGE_MIGRATION
 	def_bool y
 	depends on PPC_BOOK3S_64 && HUGETLB_PAGE && MIGRATION
-- 
2.28.0



* [PATCH v6 04/22] powerpc/book3s64/kuap/kuep: Move uamor setup to pkey init
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (2 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 03/22] powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of PPC_MEM_KEYS Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25 13:32   ` Christophe Leroy
  2020-11-26  3:28   ` Michael Ellerman
  2020-11-25  5:16 ` [PATCH v6 05/22] powerpc/book3s64/kuap: Move KUAP related function outside radix Aneesh Kumar K.V
                   ` (17 subsequent siblings)
  21 siblings, 2 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

This patch consolidates the UAMOR update across the pkey, KUAP and
KUEP features. The boot CPU initializes UAMOR via pkey init, and both
radix and hash do the secondary CPU UAMOR init in
early_init_mmu_secondary.

We don't check for an mmu_feature in the radix secondary init because
UAMOR is a supported SPR on all CPUs capable of radix translation.
The old code was not updating UAMOR if we had smap disabled and smep
enabled. This change handles that case.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/mm/book3s64/radix_pgtable.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 3adcf730f478..bfe441af916a 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -620,9 +620,6 @@ void setup_kuap(bool disabled)
 		cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
 	}
 
-	/* Make sure userspace can't change the AMR */
-	mtspr(SPRN_UAMOR, 0);
-
 	/*
 	 * Set the default kernel AMR values on all cpus.
 	 */
@@ -721,6 +718,11 @@ void radix__early_init_mmu_secondary(void)
 
 	radix__switch_mmu_context(NULL, &init_mm);
 	tlbiel_all();
+
+#ifdef CONFIG_PPC_PKEY
+	/* Make sure userspace can't change the AMR */
+	mtspr(SPRN_UAMOR, 0);
+#endif
 }
 
 void radix__mmu_cleanup_all(void)
-- 
2.28.0



* [PATCH v6 05/22] powerpc/book3s64/kuap: Move KUAP related function outside radix
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (3 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 04/22] powerpc/book3s64/kuap/kuep: Move uamor setup to pkey init Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 06/22] powerpc/book3s64/kuep: Move KUEP " Aneesh Kumar K.V
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

The next set of patches adds support for KUAP with hash translation.
In preparation for that, rename/move the KUAP-related functions to
non-radix names.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 .../asm/book3s/64/{kup-radix.h => kup.h}      |  6 ++---
 arch/powerpc/include/asm/kup.h                |  4 +++-
 arch/powerpc/mm/book3s64/pkeys.c              | 22 +++++++++++++++++++
 arch/powerpc/mm/book3s64/radix_pgtable.c      | 19 ----------------
 4 files changed, 28 insertions(+), 23 deletions(-)
 rename arch/powerpc/include/asm/book3s/64/{kup-radix.h => kup.h} (97%)

diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup.h
similarity index 97%
rename from arch/powerpc/include/asm/book3s/64/kup-radix.h
rename to arch/powerpc/include/asm/book3s/64/kup.h
index 68eaa2fac3ab..56dbe3666dc8 100644
--- a/arch/powerpc/include/asm/book3s/64/kup-radix.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H
-#define _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H
+#ifndef _ASM_POWERPC_BOOK3S_64_KUP_H
+#define _ASM_POWERPC_BOOK3S_64_KUP_H
 
 #include <linux/const.h>
 #include <asm/reg.h>
@@ -200,4 +200,4 @@ static inline void restore_user_access(unsigned long flags)
 }
 #endif /* __ASSEMBLY__ */
 
-#endif /* _ASM_POWERPC_BOOK3S_64_KUP_RADIX_H */
+#endif /* _ASM_POWERPC_BOOK3S_64_KUP_H */
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index 0d93331d0fab..a06e50b68d40 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -15,11 +15,13 @@
 #define KUAP_CURRENT		(KUAP_CURRENT_READ | KUAP_CURRENT_WRITE)
 
 #ifdef CONFIG_PPC_BOOK3S_64
-#include <asm/book3s/64/kup-radix.h>
+#include <asm/book3s/64/kup.h>
 #endif
+
 #ifdef CONFIG_PPC_8xx
 #include <asm/nohash/32/kup-8xx.h>
 #endif
+
 #ifdef CONFIG_PPC_BOOK3S_32
 #include <asm/book3s/32/kup.h>
 #endif
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 7dc71f85683d..c75994cf50a7 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -9,9 +9,12 @@
 #include <asm/mmu_context.h>
 #include <asm/mmu.h>
 #include <asm/setup.h>
+#include <asm/smp.h>
+
 #include <linux/pkeys.h>
 #include <linux/of_fdt.h>
 
+
 int  num_pkey;		/* Max number of pkeys supported */
 /*
  *  Keys marked in the reservation list cannot be allocated by  userspace
@@ -226,6 +229,25 @@ void __init pkey_early_init_devtree(void)
 	return;
 }
 
+#ifdef CONFIG_PPC_KUAP
+void __init setup_kuap(bool disabled)
+{
+	if (disabled || !early_radix_enabled())
+		return;
+
+	if (smp_processor_id() == boot_cpuid) {
+		pr_info("Activating Kernel Userspace Access Prevention\n");
+		cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
+	}
+
+	/*
+	 * Set the default kernel AMR values on all cpus.
+	 */
+	mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
+	isync();
+}
+#endif
+
 static inline u64 read_amr(void)
 {
 	return mfspr(SPRN_AMR);
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index bfe441af916a..cd9872894552 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -609,25 +609,6 @@ void setup_kuep(bool disabled)
 }
 #endif
 
-#ifdef CONFIG_PPC_KUAP
-void setup_kuap(bool disabled)
-{
-	if (disabled || !early_radix_enabled())
-		return;
-
-	if (smp_processor_id() == boot_cpuid) {
-		pr_info("Activating Kernel Userspace Access Prevention\n");
-		cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
-	}
-
-	/*
-	 * Set the default kernel AMR values on all cpus.
-	 */
-	mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
-	isync();
-}
-#endif
-
 void __init radix__early_init_mmu(void)
 {
 	unsigned long lpcr;
-- 
2.28.0



* [PATCH v6 06/22] powerpc/book3s64/kuep: Move KUEP related function outside radix
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (4 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 05/22] powerpc/book3s64/kuap: Move KUAP related function outside radix Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 07/22] powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP Aneesh Kumar K.V
                   ` (15 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

The next set of patches adds support for KUEP with hash translation.
In preparation for that, rename/move the KUEP-related functions to
non-radix names.

Also set MMU_FTR_KUEP and add the missing isync().

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/kup.h |  1 +
 arch/powerpc/mm/book3s64/pkeys.c         | 21 +++++++++++++++++++++
 arch/powerpc/mm/book3s64/radix_pgtable.c | 20 --------------------
 3 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 56dbe3666dc8..39d2e3a0d64d 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -7,6 +7,7 @@
 
 #define AMR_KUAP_BLOCK_READ	UL(0x4000000000000000)
 #define AMR_KUAP_BLOCK_WRITE	UL(0x8000000000000000)
+#define AMR_KUEP_BLOCKED	(1UL << 62)
 #define AMR_KUAP_BLOCKED	(AMR_KUAP_BLOCK_READ | AMR_KUAP_BLOCK_WRITE)
 #define AMR_KUAP_SHIFT		62
 
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index c75994cf50a7..82c722fbce52 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -229,6 +229,27 @@ void __init pkey_early_init_devtree(void)
 	return;
 }
 
+#ifdef CONFIG_PPC_KUEP
+void __init setup_kuep(bool disabled)
+{
+	if (disabled || !early_radix_enabled())
+		return;
+
+	if (smp_processor_id() == boot_cpuid) {
+		pr_info("Activating Kernel Userspace Execution Prevention\n");
+		cur_cpu_spec->mmu_features |= MMU_FTR_KUEP;
+	}
+
+	/*
+	 * Radix always uses key0 of the IAMR to determine if an access is
+	 * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
+	 * fetch.
+	 */
+	mtspr(SPRN_IAMR, AMR_KUEP_BLOCKED);
+	isync();
+}
+#endif
+
 #ifdef CONFIG_PPC_KUAP
 void __init setup_kuap(bool disabled)
 {
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index cd9872894552..fd22a5e9f0ff 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -589,26 +589,6 @@ static void radix_init_amor(void)
 	mtspr(SPRN_AMOR, (3ul << 62));
 }
 
-#ifdef CONFIG_PPC_KUEP
-void setup_kuep(bool disabled)
-{
-	if (disabled || !early_radix_enabled())
-		return;
-
-	if (smp_processor_id() == boot_cpuid) {
-		pr_info("Activating Kernel Userspace Execution Prevention\n");
-		cur_cpu_spec->mmu_features |= MMU_FTR_KUEP;
-	}
-
-	/*
-	 * Radix always uses key0 of the IAMR to determine if an access is
-	 * allowed. We set bit 0 (IBM bit 1) of key0, to prevent instruction
-	 * fetch.
-	 */
-	mtspr(SPRN_IAMR, (1ul << 62));
-}
-#endif
-
 void __init radix__early_init_mmu(void)
 {
 	unsigned long lpcr;
-- 
2.28.0



* [PATCH v6 07/22] powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (5 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 06/22] powerpc/book3s64/kuep: Move KUEP " Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25 13:43   ` Christophe Leroy
  2020-11-26  3:16   ` Michael Ellerman
  2020-11-25  5:16 ` [PATCH v6 08/22] powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation Aneesh Kumar K.V
                   ` (14 subsequent siblings)
  21 siblings, 2 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

This is in preparation for adding KUAP support with hash translation.
Rename the KUAP-related feature bit to a non-radix name, and move it
closer to MMU_FTR_KUEP.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/kup.h | 18 +++++++++---------
 arch/powerpc/include/asm/mmu.h           | 14 +++++++-------
 arch/powerpc/mm/book3s64/pkeys.c         |  2 +-
 3 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 39d2e3a0d64d..1d38eab83d48 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -24,7 +24,7 @@
 	mtspr	SPRN_AMR, \gpr2
 	/* No isync required, see kuap_restore_amr() */
 998:
-	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
+	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
 #endif
 .endm
 
@@ -37,7 +37,7 @@
 	sldi	\gpr2, \gpr2, AMR_KUAP_SHIFT
 999:	tdne	\gpr1, \gpr2
 	EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
-	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
+	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
 #endif
 .endm
 #endif
@@ -58,7 +58,7 @@
 	mtspr	SPRN_AMR, \gpr2
 	isync
 99:
-	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
+	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
 #endif
 .endm
 
@@ -73,7 +73,7 @@ DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
 
 static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
 {
-	if (mmu_has_feature(MMU_FTR_RADIX_KUAP) && unlikely(regs->kuap != amr)) {
+	if (mmu_has_feature(MMU_FTR_KUAP) && unlikely(regs->kuap != amr)) {
 		isync();
 		mtspr(SPRN_AMR, regs->kuap);
 		/*
@@ -86,7 +86,7 @@ static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
 
 static inline unsigned long kuap_get_and_check_amr(void)
 {
-	if (mmu_has_feature(MMU_FTR_RADIX_KUAP)) {
+	if (mmu_has_feature(MMU_FTR_KUAP)) {
 		unsigned long amr = mfspr(SPRN_AMR);
 		if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG)) /* kuap_check_amr() */
 			WARN_ON_ONCE(amr != AMR_KUAP_BLOCKED);
@@ -97,7 +97,7 @@ static inline unsigned long kuap_get_and_check_amr(void)
 
 static inline void kuap_check_amr(void)
 {
-	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_RADIX_KUAP))
+	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_KUAP))
 		WARN_ON_ONCE(mfspr(SPRN_AMR) != AMR_KUAP_BLOCKED);
 }
 
@@ -116,7 +116,7 @@ static inline unsigned long get_kuap(void)
 	 * This has no effect in terms of actually blocking things on hash,
 	 * so it doesn't break anything.
 	 */
-	if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
+	if (!early_mmu_has_feature(MMU_FTR_KUAP))
 		return AMR_KUAP_BLOCKED;
 
 	return mfspr(SPRN_AMR);
@@ -124,7 +124,7 @@ static inline unsigned long get_kuap(void)
 
 static inline void set_kuap(unsigned long value)
 {
-	if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
+	if (!early_mmu_has_feature(MMU_FTR_KUAP))
 		return;
 
 	/*
@@ -139,7 +139,7 @@ static inline void set_kuap(unsigned long value)
 static inline bool
 bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 {
-	return WARN(mmu_has_feature(MMU_FTR_RADIX_KUAP) &&
+	return WARN(mmu_has_feature(MMU_FTR_KUAP) &&
 		    (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
 		    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
 }
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 255a1837e9f7..f5c7a17c198a 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -28,6 +28,11 @@
  * Individual features below.
  */
 
+/*
+ * Supports KUAP (key 0 controlling userspace addresses) on radix
+ */
+#define MMU_FTR_KUAP			ASM_CONST(0x00000200)
+
 /*
  * Support for KUEP feature.
  */
@@ -120,11 +125,6 @@
  */
 #define MMU_FTR_1T_SEGMENT		ASM_CONST(0x40000000)
 
-/*
- * Supports KUAP (key 0 controlling userspace addresses) on radix
- */
-#define MMU_FTR_RADIX_KUAP		ASM_CONST(0x80000000)
-
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
 	MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
@@ -187,10 +187,10 @@ enum {
 #ifdef CONFIG_PPC_RADIX_MMU
 		MMU_FTR_TYPE_RADIX |
 		MMU_FTR_GTSE |
+#endif /* CONFIG_PPC_RADIX_MMU */
 #ifdef CONFIG_PPC_KUAP
-		MMU_FTR_RADIX_KUAP |
+	MMU_FTR_KUAP |
 #endif /* CONFIG_PPC_KUAP */
-#endif /* CONFIG_PPC_RADIX_MMU */
 #ifdef CONFIG_PPC_MEM_KEYS
 	MMU_FTR_PKEY |
 #endif
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 82c722fbce52..bfc27f1f0ab0 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -258,7 +258,7 @@ void __init setup_kuap(bool disabled)
 
 	if (smp_processor_id() == boot_cpuid) {
 		pr_info("Activating Kernel Userspace Access Prevention\n");
-		cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
+		cur_cpu_spec->mmu_features |= MMU_FTR_KUAP;
 	}
 
 	/*
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v6 08/22] powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (6 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 07/22] powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 09/22] powerpc/exec: Set thread.regs early during exec Aneesh Kumar K.V
                   ` (13 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V, Sandipan Das

This patch updates kernel hash page table entries to use storage key 3
for their mapping. This means all kernel accesses will now use key 3 to
control READ/WRITE. The patch also prevents the allocation of key 3 from
userspace, and the UAMOR value is updated so that userspace cannot modify key 3.

Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 .../powerpc/include/asm/book3s/64/hash-pkey.h | 24 ++++++++++++++-----
 arch/powerpc/include/asm/book3s/64/hash.h     |  2 +-
 arch/powerpc/include/asm/book3s/64/mmu-hash.h |  1 +
 arch/powerpc/include/asm/mmu_context.h        |  2 +-
 arch/powerpc/mm/book3s64/hash_4k.c            |  2 +-
 arch/powerpc/mm/book3s64/hash_64k.c           |  4 ++--
 arch/powerpc/mm/book3s64/hash_hugepage.c      |  2 +-
 arch/powerpc/mm/book3s64/hash_hugetlbpage.c   |  2 +-
 arch/powerpc/mm/book3s64/hash_pgtable.c       |  2 +-
 arch/powerpc/mm/book3s64/hash_utils.c         | 10 ++++----
 arch/powerpc/mm/book3s64/pkeys.c              |  4 ++++
 11 files changed, 37 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash-pkey.h b/arch/powerpc/include/asm/book3s/64/hash-pkey.h
index 795010897e5d..9f44e208f036 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-pkey.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-pkey.h
@@ -2,6 +2,9 @@
 #ifndef _ASM_POWERPC_BOOK3S_64_HASH_PKEY_H
 #define _ASM_POWERPC_BOOK3S_64_HASH_PKEY_H
 
+/*  We use key 3 for KERNEL */
+#define HASH_DEFAULT_KERNEL_KEY (HPTE_R_KEY_BIT0 | HPTE_R_KEY_BIT1)
+
 static inline u64 hash__vmflag_to_pte_pkey_bits(u64 vm_flags)
 {
 	return (((vm_flags & VM_PKEY_BIT0) ? H_PTE_PKEY_BIT0 : 0x0UL) |
@@ -11,13 +14,22 @@ static inline u64 hash__vmflag_to_pte_pkey_bits(u64 vm_flags)
 		((vm_flags & VM_PKEY_BIT4) ? H_PTE_PKEY_BIT4 : 0x0UL));
 }
 
-static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
+static inline u64 pte_to_hpte_pkey_bits(u64 pteflags, unsigned long flags)
 {
-	return (((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL) |
-		((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
-		((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
-		((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
-		((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
+	unsigned long pte_pkey;
+
+	pte_pkey = (((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL) |
+		    ((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
+		    ((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
+		    ((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
+		    ((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
+
+	if (mmu_has_feature(MMU_FTR_KUAP) || mmu_has_feature(MMU_FTR_KUEP)) {
+		if ((pte_pkey == 0) && (flags & HPTE_USE_KERNEL_KEY))
+			return HASH_DEFAULT_KERNEL_KEY;
+	}
+
+	return pte_pkey;
 }
 
 static inline u16 hash__pte_to_pkey_bits(u64 pteflags)
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 73ad038ed10b..d959b0195ad9 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -145,7 +145,7 @@ extern void hash__mark_initmem_nx(void);
 
 extern void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
 			    pte_t *ptep, unsigned long pte, int huge);
-extern unsigned long htab_convert_pte_flags(unsigned long pteflags);
+unsigned long htab_convert_pte_flags(unsigned long pteflags, unsigned long flags);
 /* Atomic PTE updates */
 static inline unsigned long hash__pte_update(struct mm_struct *mm,
 					 unsigned long addr,
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 683a9c7d1b03..9192cb05a6ab 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -452,6 +452,7 @@ static inline unsigned long hpt_hash(unsigned long vpn,
 
 #define HPTE_LOCAL_UPDATE	0x1
 #define HPTE_NOHPTE_UPDATE	0x2
+#define HPTE_USE_KERNEL_KEY	0x4
 
 extern int __hash_page_4K(unsigned long ea, unsigned long access,
 			  unsigned long vsid, pte_t *ptep, unsigned long trap,
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index e02aa793420b..4b5e1cb49dce 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -284,7 +284,7 @@ static inline bool arch_vma_access_permitted(struct vm_area_struct *vma,
 #define thread_pkey_regs_init(thread)
 #define arch_dup_pkeys(oldmm, mm)
 
-static inline u64 pte_to_hpte_pkey_bits(u64 pteflags)
+static inline u64 pte_to_hpte_pkey_bits(u64 pteflags, unsigned long flags)
 {
 	return 0x0UL;
 }
diff --git a/arch/powerpc/mm/book3s64/hash_4k.c b/arch/powerpc/mm/book3s64/hash_4k.c
index 22e787123cdf..7de1a8a0c62a 100644
--- a/arch/powerpc/mm/book3s64/hash_4k.c
+++ b/arch/powerpc/mm/book3s64/hash_4k.c
@@ -54,7 +54,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
 	 * PP bits. _PAGE_USER is already PP bit 0x2, so we only
 	 * need to add in 0x1 if it's a read-only user page
 	 */
-	rflags = htab_convert_pte_flags(new_pte);
+	rflags = htab_convert_pte_flags(new_pte, flags);
 	rpte = __real_pte(__pte(old_pte), ptep, PTRS_PER_PTE);
 
 	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
diff --git a/arch/powerpc/mm/book3s64/hash_64k.c b/arch/powerpc/mm/book3s64/hash_64k.c
index 7084ce2951e6..998c6817ed47 100644
--- a/arch/powerpc/mm/book3s64/hash_64k.c
+++ b/arch/powerpc/mm/book3s64/hash_64k.c
@@ -72,7 +72,7 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
 	 * Handle the subpage protection bits
 	 */
 	subpg_pte = new_pte & ~subpg_prot;
-	rflags = htab_convert_pte_flags(subpg_pte);
+	rflags = htab_convert_pte_flags(subpg_pte, flags);
 
 	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
 	    !cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) {
@@ -260,7 +260,7 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
 			new_pte |= _PAGE_DIRTY;
 	} while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
 
-	rflags = htab_convert_pte_flags(new_pte);
+	rflags = htab_convert_pte_flags(new_pte, flags);
 	rpte = __real_pte(__pte(old_pte), ptep, PTRS_PER_PTE);
 
 	if (cpu_has_feature(CPU_FTR_NOEXECUTE) &&
diff --git a/arch/powerpc/mm/book3s64/hash_hugepage.c b/arch/powerpc/mm/book3s64/hash_hugepage.c
index 440823797de7..c0fabe6c5a12 100644
--- a/arch/powerpc/mm/book3s64/hash_hugepage.c
+++ b/arch/powerpc/mm/book3s64/hash_hugepage.c
@@ -57,7 +57,7 @@ int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid,
 	if (!(old_pmd & (H_PAGE_THP_HUGE | _PAGE_DEVMAP)))
 		return 0;
 
-	rflags = htab_convert_pte_flags(new_pmd);
+	rflags = htab_convert_pte_flags(new_pmd, flags);
 
 #if 0
 	if (!cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) {
diff --git a/arch/powerpc/mm/book3s64/hash_hugetlbpage.c b/arch/powerpc/mm/book3s64/hash_hugetlbpage.c
index 964467b3a776..b5e9fff8c217 100644
--- a/arch/powerpc/mm/book3s64/hash_hugetlbpage.c
+++ b/arch/powerpc/mm/book3s64/hash_hugetlbpage.c
@@ -70,7 +70,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 	if (old_pte & (H_PAGE_THP_HUGE | _PAGE_DEVMAP))
 		return 0;
 
-	rflags = htab_convert_pte_flags(new_pte);
+	rflags = htab_convert_pte_flags(new_pte, flags);
 	if (unlikely(mmu_psize == MMU_PAGE_16G))
 		offset = PTRS_PER_PUD;
 	else
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index fd9c7f91b092..567e0c6b3978 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -443,7 +443,7 @@ void hash__mark_initmem_nx(void)
 	start = (unsigned long)__init_begin;
 	end = (unsigned long)__init_end;
 
-	pp = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL));
+	pp = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL), HPTE_USE_KERNEL_KEY);
 
 	WARN_ON(!hash__change_memory_range(start, end, pp));
 }
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 24702c0a92e0..41ab053e17c9 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -186,7 +186,7 @@ static struct mmu_psize_def mmu_psize_defaults_gp[] = {
  *    - We make sure R is always set and never lost
  *    - C is _PAGE_DIRTY, and *should* always be set for a writeable mapping
  */
-unsigned long htab_convert_pte_flags(unsigned long pteflags)
+unsigned long htab_convert_pte_flags(unsigned long pteflags, unsigned long flags)
 {
 	unsigned long rflags = 0;
 
@@ -240,7 +240,7 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags)
 		 */
 		rflags |= HPTE_R_M;
 
-	rflags |= pte_to_hpte_pkey_bits(pteflags);
+	rflags |= pte_to_hpte_pkey_bits(pteflags, flags);
 	return rflags;
 }
 
@@ -255,7 +255,7 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
 	shift = mmu_psize_defs[psize].shift;
 	step = 1 << shift;
 
-	prot = htab_convert_pte_flags(prot);
+	prot = htab_convert_pte_flags(prot, HPTE_USE_KERNEL_KEY);
 
 	DBG("htab_bolt_mapping(%lx..%lx -> %lx (%lx,%d,%d)\n",
 	    vstart, vend, pstart, prot, psize, ssize);
@@ -1317,12 +1317,14 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
 		vsid = get_kernel_vsid(ea, mmu_kernel_ssize);
 		psize = mmu_vmalloc_psize;
 		ssize = mmu_kernel_ssize;
+		flags |= HPTE_USE_KERNEL_KEY;
 		break;
 
 	case IO_REGION_ID:
 		vsid = get_kernel_vsid(ea, mmu_kernel_ssize);
 		psize = mmu_io_psize;
 		ssize = mmu_kernel_ssize;
+		flags |= HPTE_USE_KERNEL_KEY;
 		break;
 	default:
 		/*
@@ -1901,7 +1903,7 @@ static void kernel_map_linear_page(unsigned long vaddr, unsigned long lmi)
 	unsigned long hash;
 	unsigned long vsid = get_kernel_vsid(vaddr, mmu_kernel_ssize);
 	unsigned long vpn = hpt_vpn(vaddr, vsid, mmu_kernel_ssize);
-	unsigned long mode = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL));
+	unsigned long mode = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL), HPTE_USE_KERNEL_KEY);
 	long ret;
 
 	hash = hpt_hash(vpn, PAGE_SHIFT, mmu_kernel_ssize);
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index bfc27f1f0ab0..640f090b9f9d 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -205,6 +205,10 @@ void __init pkey_early_init_devtree(void)
 	reserved_allocation_mask |= (0x1 << 1);
 	default_uamor &= ~(0x3ul << pkeyshift(1));
 
+	/* Handle key 3 which is used by the kernel for KUAP */
+	reserved_allocation_mask |= (0x1 << 3);
+	default_uamor &= ~(0x3ul << pkeyshift(3));
+
 	/*
 	 * Prevent the usage of OS reserved keys. Update UAMOR
 	 * for those keys. Also mark the rest of the bits in the
-- 
2.28.0


* [PATCH v6 09/22] powerpc/exec: Set thread.regs early during exec
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (7 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 08/22] powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25 13:47   ` Christophe Leroy
  2020-11-25  5:16 ` [PATCH v6 10/22] powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel Aneesh Kumar K.V
                   ` (12 subsequent siblings)
  21 siblings, 1 reply; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

In later patches, during exec we would like to access the default regs.amr to
control access to the user mapping. Having thread.regs set early makes the
code changes simpler.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/thread_info.h |  2 --
 arch/powerpc/kernel/process.c          | 37 +++++++++++++++++---------
 2 files changed, 25 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index 46a210b03d2b..de4c911d9ced 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -77,10 +77,8 @@ struct thread_info {
 /* how to get the thread information struct from C */
 extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
 
-#ifdef CONFIG_PPC_BOOK3S_64
 void arch_setup_new_exec(void);
 #define arch_setup_new_exec arch_setup_new_exec
-#endif
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index d421a2c7f822..b6b8a845e454 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1530,10 +1530,32 @@ void flush_thread(void)
 #ifdef CONFIG_PPC_BOOK3S_64
 void arch_setup_new_exec(void)
 {
-	if (radix_enabled())
-		return;
-	hash__setup_new_exec();
+	if (!radix_enabled())
+		hash__setup_new_exec();
+
+	/*
+	 * If we exec out of a kernel thread then thread.regs will not be
+	 * set.  Do it now.
+	 */
+	if (!current->thread.regs) {
+		struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
+		current->thread.regs = regs - 1;
+	}
+
+}
+#else
+void arch_setup_new_exec(void)
+{
+	/*
+	 * If we exec out of a kernel thread then thread.regs will not be
+	 * set.  Do it now.
+	 */
+	if (!current->thread.regs) {
+		struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
+		current->thread.regs = regs - 1;
+	}
 }
+
 #endif
 
 #ifdef CONFIG_PPC64
@@ -1765,15 +1787,6 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
 		preload_new_slb_context(start, sp);
 #endif
 
-	/*
-	 * If we exec out of a kernel thread then thread.regs will not be
-	 * set.  Do it now.
-	 */
-	if (!current->thread.regs) {
-		struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
-		current->thread.regs = regs - 1;
-	}
-
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 	/*
 	 * Clear any transactional state, we're exec()ing. The cause is
-- 
2.28.0


* [PATCH v6 10/22] powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (8 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 09/22] powerpc/exec: Set thread.regs early during exec Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25 13:52   ` Christophe Leroy
  2020-11-25  5:16 ` [PATCH v6 11/22] powerpc/book3s64/pkeys: Inherit correctly on fork Aneesh Kumar K.V
                   ` (11 subsequent siblings)
  21 siblings, 1 reply; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V, Sandipan Das

This prepares the kernel to operate with AMR/IAMR values different from the
userspace ones. For this, AMR/IAMR need to be saved and restored on entry to
and return from the kernel.

With KUAP we modify the kernel AMR when accessing user addresses from the
kernel via the copy_to/from_user interfaces. We don't need to modify the IAMR
value in a similar fashion.

If MMU_FTR_PKEY is enabled, we need to save AMR/IAMR in pt_regs on entering
the kernel from userspace. If not, we can assume that AMR/IAMR is not modified
from userspace.

We need to save the AMR if the MMU_FTR_KUAP feature is enabled and we are
interrupted within the kernel. This is required so that if we get interrupted
within copy_to/from_user we continue with the right AMR value.

If we have MMU_FTR_KUEP enabled, we need to restore the IAMR on return to
userspace because the kernel will be running with a different IAMR value.

Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/kup.h | 222 +++++++++++++++++++----
 arch/powerpc/include/asm/ptrace.h        |   5 +-
 arch/powerpc/kernel/asm-offsets.c        |   2 +
 arch/powerpc/kernel/entry_64.S           |   6 +-
 arch/powerpc/kernel/exceptions-64s.S     |   4 +-
 arch/powerpc/kernel/syscall_64.c         |  32 +++-
 6 files changed, 225 insertions(+), 46 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 1d38eab83d48..4dbb2d53fd8f 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -13,17 +13,46 @@
 
 #ifdef __ASSEMBLY__
 
-.macro kuap_restore_amr	gpr1, gpr2
-#ifdef CONFIG_PPC_KUAP
+.macro kuap_restore_user_amr gpr1
+#if defined(CONFIG_PPC_PKEY)
 	BEGIN_MMU_FTR_SECTION_NESTED(67)
-	mfspr	\gpr1, SPRN_AMR
+	/*
+	 * AMR and IAMR are going to be different when
+	 * returning to userspace.
+	 */
+	ld	\gpr1, STACK_REGS_AMR(r1)
+	isync
+	mtspr	SPRN_AMR, \gpr1
+	/*
+	 * Restore IAMR only when returning to userspace
+	 */
+	ld	\gpr1, STACK_REGS_IAMR(r1)
+	mtspr	SPRN_IAMR, \gpr1
+
+	/* No isync required, see kuap_restore_user_amr() */
+	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_PKEY, 67)
+#endif
+.endm
+
+.macro kuap_restore_kernel_amr	gpr1, gpr2
+#if defined(CONFIG_PPC_PKEY)
+
+	BEGIN_MMU_FTR_SECTION_NESTED(67)
+	/*
+	 * AMR is going to be mostly the same since we are
+	 * returning to the kernel. Compare and do a mtspr.
+	 */
 	ld	\gpr2, STACK_REGS_AMR(r1)
+	mfspr	\gpr1, SPRN_AMR
 	cmpd	\gpr1, \gpr2
-	beq	998f
+	beq	100f
 	isync
 	mtspr	SPRN_AMR, \gpr2
-	/* No isync required, see kuap_restore_amr() */
-998:
+	/*
+	 * No isync required, see kuap_restore_amr()
+	 * No need to restore IAMR when returning to kernel space.
+	 */
+100:
 	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
 #endif
 .endm
@@ -42,23 +71,98 @@
 .endm
 #endif
 
+/*
+ *	if (pkey) {
+ *
+ *		save AMR -> stack;
+ *		if (kuap) {
+ *			if (AMR != BLOCKED)
+ *				KUAP_BLOCKED -> AMR;
+ *		}
+ *		if (from_user) {
+ *			save IAMR -> stack;
+ *			if (kuep) {
+ *				KUEP_BLOCKED ->IAMR
+ *			}
+ *		}
+ *		return;
+ *	}
+ *
+ *	if (kuap) {
+ *		if (from_kernel) {
+ *			save AMR -> stack;
+ *			if (AMR != BLOCKED)
+ *				KUAP_BLOCKED -> AMR;
+ *		}
+ *
+ *	}
+ */
 .macro kuap_save_amr_and_lock gpr1, gpr2, use_cr, msr_pr_cr
-#ifdef CONFIG_PPC_KUAP
+#if defined(CONFIG_PPC_PKEY)
+
+	/*
+	 * if both pkey and kuap is disabled, nothing to do
+	 */
+	BEGIN_MMU_FTR_SECTION_NESTED(68)
+	b	100f  // skip_save_amr
+	END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY | MMU_FTR_KUAP, 68)
+
+	/*
+	 * if pkey is disabled and we are entering from userspace
+	 * don't do anything.
+	 */
 	BEGIN_MMU_FTR_SECTION_NESTED(67)
 	.ifnb \msr_pr_cr
-	bne	\msr_pr_cr, 99f
+	/*
+	 * Without pkey we are not changing AMR outside the kernel
+	 * hence skip this completely.
+	 */
+	bne	\msr_pr_cr, 100f  // from userspace
 	.endif
+        END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY, 67)
+
+	/*
+	 * pkey is enabled or pkey is disabled but entering from kernel
+	 */
 	mfspr	\gpr1, SPRN_AMR
 	std	\gpr1, STACK_REGS_AMR(r1)
-	li	\gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
-	sldi	\gpr2, \gpr2, AMR_KUAP_SHIFT
+
+	/*
+	 * update kernel AMR with AMR_KUAP_BLOCKED only
+	 * if KUAP feature is enabled
+	 */
+	BEGIN_MMU_FTR_SECTION_NESTED(69)
+	LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
 	cmpd	\use_cr, \gpr1, \gpr2
-	beq	\use_cr, 99f
-	// We don't isync here because we very recently entered via rfid
+	beq	\use_cr, 102f
+	/*
+	 * We don't isync here because we very recently entered via an interrupt
+	 */
 	mtspr	SPRN_AMR, \gpr2
 	isync
-99:
-	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
+102:
+	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 69)
+
+	/*
+	 * if entering from kernel we don't need save IAMR
+	 */
+	.ifnb \msr_pr_cr
+	beq	\msr_pr_cr, 100f // from kernel space
+	mfspr	\gpr1, SPRN_IAMR
+	std	\gpr1, STACK_REGS_IAMR(r1)
+
+	/*
+	 * update kernel IAMR with AMR_KUEP_BLOCKED only
+	 * if KUEP feature is enabled
+	 */
+	BEGIN_MMU_FTR_SECTION_NESTED(70)
+	LOAD_REG_IMMEDIATE(\gpr2, AMR_KUEP_BLOCKED)
+	mtspr	SPRN_IAMR, \gpr2
+	isync
+	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUEP, 70)
+	.endif
+
+100: // skip_save_amr
 #endif
 .endm
 
@@ -66,22 +170,42 @@
 
 DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
 
-#ifdef CONFIG_PPC_KUAP
+#ifdef CONFIG_PPC_PKEY
 
 #include <asm/mmu.h>
 #include <asm/ptrace.h>
 
-static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
+static inline void kuap_restore_user_amr(struct pt_regs *regs)
+{
+	if (!mmu_has_feature(MMU_FTR_PKEY))
+		return;
+
+	isync();
+	mtspr(SPRN_AMR, regs->amr);
+	mtspr(SPRN_IAMR, regs->iamr);
+	/*
+	 * No isync required here because we are about to rfi
+	 * back to previous context before any user accesses
+	 * would be made, which is a CSI.
+	 */
+}
+static inline void kuap_restore_kernel_amr(struct pt_regs *regs,
+					   unsigned long amr)
 {
-	if (mmu_has_feature(MMU_FTR_KUAP) && unlikely(regs->kuap != amr)) {
-		isync();
-		mtspr(SPRN_AMR, regs->kuap);
-		/*
-		 * No isync required here because we are about to RFI back to
-		 * previous context before any user accesses would be made,
-		 * which is a CSI.
-		 */
+	if (mmu_has_feature(MMU_FTR_KUAP)) {
+		if (unlikely(regs->amr != amr)) {
+			isync();
+			mtspr(SPRN_AMR, regs->amr);
+			/*
+			 * No isync required here because we are about to rfi
+			 * back to previous context before any user accesses
+			 * would be made, which is a CSI.
+			 */
+		}
 	}
+	/*
+	 * No need to restore IAMR when returning to kernel space.
+	 */
 }
 
 static inline unsigned long kuap_get_and_check_amr(void)
@@ -95,6 +219,26 @@ static inline unsigned long kuap_get_and_check_amr(void)
 	return 0;
 }
 
+#else /* CONFIG_PPC_PKEY */
+
+static inline void kuap_restore_user_amr(struct pt_regs *regs)
+{
+}
+
+static inline void kuap_restore_kernel_amr(struct pt_regs *regs, unsigned long amr)
+{
+}
+
+static inline unsigned long kuap_get_and_check_amr(void)
+{
+	return 0;
+}
+
+#endif /* CONFIG_PPC_PKEY */
+
+
+#ifdef CONFIG_PPC_KUAP
+
 static inline void kuap_check_amr(void)
 {
 	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_KUAP))
@@ -143,21 +287,6 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 		    (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
 		    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
 }
-#else /* CONFIG_PPC_KUAP */
-static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr) { }
-
-static inline unsigned long kuap_get_and_check_amr(void)
-{
-	return 0UL;
-}
-
-static inline unsigned long get_kuap(void)
-{
-	return AMR_KUAP_BLOCKED;
-}
-
-static inline void set_kuap(unsigned long value) { }
-#endif /* !CONFIG_PPC_KUAP */
 
 static __always_inline void allow_user_access(void __user *to, const void __user *from,
 					      unsigned long size, unsigned long dir)
@@ -174,6 +303,21 @@ static __always_inline void allow_user_access(void __user *to, const void __user
 		BUILD_BUG();
 }
 
+#else /* CONFIG_PPC_KUAP */
+
+static inline unsigned long get_kuap(void)
+{
+	return AMR_KUAP_BLOCKED;
+}
+
+static inline void set_kuap(unsigned long value) { }
+
+static __always_inline void allow_user_access(void __user *to, const void __user *from,
+					      unsigned long size, unsigned long dir)
+{ }
+
+#endif /* !CONFIG_PPC_KUAP */
+
 static inline void prevent_user_access(void __user *to, const void __user *from,
 				       unsigned long size, unsigned long dir)
 {
diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index e7f1caa007a4..f240f0cdcec2 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -61,8 +61,11 @@ struct pt_regs
 				unsigned long amr;
 #endif
 			};
+#ifdef CONFIG_PPC_PKEY
+			unsigned long iamr;
+#endif
 		};
-		unsigned long __pad[2];	/* Maintain 16 byte interrupt stack alignment */
+		unsigned long __pad[4];	/* Maintain 16 byte interrupt stack alignment */
 	};
 };
 #endif
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 418a0b314a33..76545cdc7f8a 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -356,11 +356,13 @@ int main(void)
 
 #ifdef CONFIG_PPC_PKEY
 	STACK_PT_REGS_OFFSET(STACK_REGS_AMR, amr);
+	STACK_PT_REGS_OFFSET(STACK_REGS_IAMR, iamr);
 #endif
 #ifdef CONFIG_PPC_KUAP
 	STACK_PT_REGS_OFFSET(STACK_REGS_KUAP, kuap);
 #endif
 
+
 #if defined(CONFIG_PPC32)
 #if defined(CONFIG_BOOKE) || defined(CONFIG_40x)
 	DEFINE(EXC_LVL_SIZE, STACK_EXC_LVL_FRAME_SIZE);
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 2f3846192ec7..e49291594c68 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -653,8 +653,8 @@ _ASM_NOKPROBE_SYMBOL(fast_interrupt_return)
 	kuap_check_amr r3, r4
 	ld	r5,_MSR(r1)
 	andi.	r0,r5,MSR_PR
-	bne	.Lfast_user_interrupt_return
-	kuap_restore_amr r3, r4
+	bne	.Lfast_user_interrupt_return_amr
+	kuap_restore_kernel_amr r3, r4
 	andi.	r0,r5,MSR_RI
 	li	r3,0 /* 0 return value, no EMULATE_STACK_STORE */
 	bne+	.Lfast_kernel_interrupt_return
@@ -674,6 +674,8 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return)
 	cmpdi	r3,0
 	bne-	.Lrestore_nvgprs
 
+.Lfast_user_interrupt_return_amr:
+	kuap_restore_user_amr r3
 .Lfast_user_interrupt_return:
 	ld	r11,_NIP(r1)
 	ld	r12,_MSR(r1)
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 4d01f09ecf80..84cc23529cdb 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1059,7 +1059,7 @@ EXC_COMMON_BEGIN(system_reset_common)
 	ld	r10,SOFTE(r1)
 	stb	r10,PACAIRQSOFTMASK(r13)
 
-	kuap_restore_amr r9, r10
+	kuap_restore_kernel_amr r9, r10
 	EXCEPTION_RESTORE_REGS
 	RFI_TO_USER_OR_KERNEL
 
@@ -2875,7 +2875,7 @@ EXC_COMMON_BEGIN(soft_nmi_common)
 	ld	r10,SOFTE(r1)
 	stb	r10,PACAIRQSOFTMASK(r13)
 
-	kuap_restore_amr r9, r10
+	kuap_restore_kernel_amr r9, r10
 	EXCEPTION_RESTORE_REGS hsrr=0
 	RFI_TO_KERNEL
 
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index 310bcd768cd5..60c57609d316 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -35,7 +35,25 @@ notrace long system_call_exception(long r3, long r4, long r5,
 	BUG_ON(!FULL_REGS(regs));
 	BUG_ON(regs->softe != IRQS_ENABLED);
 
-	kuap_check_amr();
+#ifdef CONFIG_PPC_PKEY
+	if (mmu_has_feature(MMU_FTR_PKEY)) {
+		unsigned long amr, iamr;
+		/*
+		 * When entering from userspace we mostly have the AMR/IAMR
+		 * different from kernel default values. Hence don't compare.
+		 */
+		amr = mfspr(SPRN_AMR);
+		iamr = mfspr(SPRN_IAMR);
+		regs->amr  = amr;
+		regs->iamr = iamr;
+		if (mmu_has_feature(MMU_FTR_KUAP))
+			mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
+		if (mmu_has_feature(MMU_FTR_KUEP))
+			mtspr(SPRN_IAMR, AMR_KUEP_BLOCKED);
+		isync();
+	} else
+#endif
+		kuap_check_amr();
 
 	account_cpu_user_entry();
 
@@ -245,6 +263,12 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
 
 	account_cpu_user_exit();
 
+#ifdef CONFIG_PPC_BOOK3S /* BOOK3E not yet using this */
+	/*
+	 * We do this at the end so that we do context switch with KERNEL AMR
+	 */
+	kuap_restore_user_amr(regs);
+#endif
 	return ret;
 }
 
@@ -330,6 +354,10 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
 
 	account_cpu_user_exit();
 
+	/*
+	 * We do this at the end so that we do context switch with KERNEL AMR
+	 */
+	kuap_restore_user_amr(regs);
 	return ret;
 }
 
@@ -400,7 +428,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
 	 * which would cause Read-After-Write stalls. Hence, we take the AMR
 	 * value from the check above.
 	 */
-	kuap_restore_amr(regs, amr);
+	kuap_restore_kernel_amr(regs, amr);
 
 	return ret;
 }
-- 
2.28.0


* [PATCH v6 11/22] powerpc/book3s64/pkeys: Inherit correctly on fork.
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (9 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 10/22] powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25 13:54   ` Christophe Leroy
  2020-11-25  5:16 ` [PATCH v6 12/22] powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec Aneesh Kumar K.V
                   ` (10 subsequent siblings)
  21 siblings, 1 reply; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V, Sandipan Das

The child's thread.kuap value is inherited from the parent in copy_thread_tls.
We still need to make sure that when the child returns from fork in the
kernel, it starts with the kernel default AMR value.

Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/kernel/process.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index b6b8a845e454..733680de0ba4 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1768,6 +1768,17 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
 		childregs->ppr = DEFAULT_PPR;
 
 	p->thread.tidr = 0;
+#endif
+	/*
+	 * Run with the current AMR value of the kernel
+	 */
+#ifdef CONFIG_PPC_KUAP
+	if (mmu_has_feature(MMU_FTR_KUAP))
+		kregs->kuap = AMR_KUAP_BLOCKED;
+#endif
+#ifdef CONFIG_PPC_KUEP
+	if (mmu_has_feature(MMU_FTR_KUEP))
+		kregs->iamr = AMR_KUEP_BLOCKED;
 #endif
 	kregs->nip = ppc_function_entry(f);
 	return 0;
-- 
2.28.0


* [PATCH v6 12/22] powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (10 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 11/22] powerpc/book3s64/pkeys: Inherit correctly on fork Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 13/22] powerpc/ptrace-view: Use pt_regs values instead of thread_struct based one Aneesh Kumar K.V
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V, Sandipan Das

On fork we inherit from the parent, and on exec we should switch to the
default_amr values.

Also, avoid changing the AMR register value within the kernel. The kernel now
runs with different AMR values.

Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/pkeys.h |  2 ++
 arch/powerpc/kernel/process.c              |  6 +++++-
 arch/powerpc/mm/book3s64/pkeys.c           | 16 ++--------------
 3 files changed, 9 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pkeys.h b/arch/powerpc/include/asm/book3s/64/pkeys.h
index b7d9f4267bcd..3b8640498f5b 100644
--- a/arch/powerpc/include/asm/book3s/64/pkeys.h
+++ b/arch/powerpc/include/asm/book3s/64/pkeys.h
@@ -6,6 +6,8 @@
 #include <asm/book3s/64/hash-pkey.h>
 
 extern u64 __ro_after_init default_uamor;
+extern u64 __ro_after_init default_amr;
+extern u64 __ro_after_init default_iamr;
 
 static inline u64 vmflag_to_pte_pkey_bits(u64 vm_flags)
 {
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 733680de0ba4..98f7e9ec766f 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1542,6 +1542,11 @@ void arch_setup_new_exec(void)
 		current->thread.regs = regs - 1;
 	}
 
+#ifdef CONFIG_PPC_MEM_KEYS
+	current->thread.regs->amr  = default_amr;
+	current->thread.regs->iamr  = default_iamr;
+#endif
+
 }
 #else
 void arch_setup_new_exec(void)
@@ -1902,7 +1907,6 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
 	current->thread.load_tm = 0;
 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
 
-	thread_pkey_regs_init(&current->thread);
 }
 EXPORT_SYMBOL(start_thread);
 
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 640f090b9f9d..f47d11f2743d 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -28,8 +28,8 @@ static u32 initial_allocation_mask __ro_after_init;
  * Even if we allocate keys with sys_pkey_alloc(), we need to make sure
  * other thread still find the access denied using the same keys.
  */
-static u64 default_amr = ~0x0UL;
-static u64 default_iamr = 0x5555555555555555UL;
+u64 default_amr __ro_after_init  = ~0x0UL;
+u64 default_iamr __ro_after_init = 0x5555555555555555UL;
 u64 default_uamor __ro_after_init;
 /*
  * Key used to implement PROT_EXEC mmap. Denies READ/WRITE
@@ -388,18 +388,6 @@ void thread_pkey_regs_restore(struct thread_struct *new_thread,
 		write_iamr(new_thread->iamr);
 }
 
-void thread_pkey_regs_init(struct thread_struct *thread)
-{
-	if (!mmu_has_feature(MMU_FTR_PKEY))
-		return;
-
-	thread->amr   = default_amr;
-	thread->iamr  = default_iamr;
-
-	write_amr(default_amr);
-	write_iamr(default_iamr);
-}
-
 int execute_only_pkey(struct mm_struct *mm)
 {
 	return mm->context.execute_only_pkey;
-- 
2.28.0



* [PATCH v6 13/22] powerpc/ptrace-view: Use pt_regs values instead of thread_struct based one.
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (11 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 12/22] powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 14/22] powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode Aneesh Kumar K.V
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

We will remove thread.amr/iamr/uamor in a later patch.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/kernel/ptrace/ptrace-view.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/ptrace/ptrace-view.c b/arch/powerpc/kernel/ptrace/ptrace-view.c
index 7e6478e7ed07..bdbe8cfdafc7 100644
--- a/arch/powerpc/kernel/ptrace/ptrace-view.c
+++ b/arch/powerpc/kernel/ptrace/ptrace-view.c
@@ -470,12 +470,12 @@ static int pkey_active(struct task_struct *target, const struct user_regset *reg
 static int pkey_get(struct task_struct *target, const struct user_regset *regset,
 		    struct membuf to)
 {
-	BUILD_BUG_ON(TSO(amr) + sizeof(unsigned long) != TSO(iamr));
 
 	if (!arch_pkeys_enabled())
 		return -ENODEV;
 
-	membuf_write(&to, &target->thread.amr, 2 * sizeof(unsigned long));
+	membuf_store(&to, target->thread.regs->amr);
+	membuf_store(&to, target->thread.regs->iamr);
 	return membuf_store(&to, default_uamor);
 }
 
@@ -508,7 +508,8 @@ static int pkey_set(struct task_struct *target, const struct user_regset *regset
 	 * Pick the AMR values for the keys that kernel is using. This
 	 * will be indicated by the ~default_uamor bits.
 	 */
-	target->thread.amr = (new_amr & default_uamor) | (target->thread.amr & ~default_uamor);
+	target->thread.regs->amr = (new_amr & default_uamor) |
+		(target->thread.regs->amr & ~default_uamor);
 
 	return 0;
 }
-- 
2.28.0



* [PATCH v6 14/22] powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode.
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (12 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 13/22] powerpc/ptrace-view: Use pt_regs values instead of thread_struct based one Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 15/22] powerpc/book3s64/kuap: Restrict access to userspace based on userspace AMR Aneesh Kumar K.V
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V, Sandipan Das

Now that the kernel correctly stores/restores the userspace AMR/IAMR values, avoid
manipulating AMR and IAMR from the kernel on behalf of userspace.

Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/kup.h | 21 +++++++++
 arch/powerpc/include/asm/processor.h     |  4 --
 arch/powerpc/kernel/process.c            |  4 --
 arch/powerpc/kernel/traps.c              |  6 ---
 arch/powerpc/mm/book3s64/pkeys.c         | 57 +++++-------------------
 5 files changed, 31 insertions(+), 61 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 4dbb2d53fd8f..47270596215b 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -175,6 +175,27 @@ DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
 #include <asm/mmu.h>
 #include <asm/ptrace.h>
 
+/*
+ * For kernel thread that doesn't have thread.regs return
+ * default AMR/IAMR values.
+ */
+static inline u64 current_thread_amr(void)
+{
+	if (current->thread.regs)
+		return current->thread.regs->amr;
+	return AMR_KUAP_BLOCKED;
+}
+
+static inline u64 current_thread_iamr(void)
+{
+	if (current->thread.regs)
+		return current->thread.regs->iamr;
+	return AMR_KUEP_BLOCKED;
+}
+#endif /* CONFIG_PPC_PKEY */
+
+#ifdef CONFIG_PPC_KUAP
+
 static inline void kuap_restore_user_amr(struct pt_regs *regs)
 {
 	if (!mmu_has_feature(MMU_FTR_PKEY))
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index c61c859b51a8..c3df3a420c92 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -230,10 +230,6 @@ struct thread_struct {
 	struct thread_vr_state ckvr_state; /* Checkpointed VR state */
 	unsigned long	ckvrsave; /* Checkpointed VRSAVE */
 #endif /* CONFIG_PPC_TRANSACTIONAL_MEM */
-#ifdef CONFIG_PPC_MEM_KEYS
-	unsigned long	amr;
-	unsigned long	iamr;
-#endif
 #ifdef CONFIG_KVM_BOOK3S_32_HANDLER
 	void*		kvm_shadow_vcpu; /* KVM internal data */
 #endif /* CONFIG_KVM_BOOK3S_32_HANDLER */
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 98f7e9ec766f..5ffdac46a187 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -589,7 +589,6 @@ static void save_all(struct task_struct *tsk)
 		__giveup_spe(tsk);
 
 	msr_check_and_clear(msr_all_available);
-	thread_pkey_regs_save(&tsk->thread);
 }
 
 void flush_all_to_thread(struct task_struct *tsk)
@@ -1160,8 +1159,6 @@ static inline void save_sprs(struct thread_struct *t)
 		t->tar = mfspr(SPRN_TAR);
 	}
 #endif
-
-	thread_pkey_regs_save(t);
 }
 
 static inline void restore_sprs(struct thread_struct *old_thread,
@@ -1202,7 +1199,6 @@ static inline void restore_sprs(struct thread_struct *old_thread,
 		mtspr(SPRN_TIDR, new_thread->tidr);
 #endif
 
-	thread_pkey_regs_restore(new_thread, old_thread);
 }
 
 struct task_struct *__switch_to(struct task_struct *prev,
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 5006dcbe1d9f..419028d53fd6 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -347,12 +347,6 @@ static bool exception_common(int signr, struct pt_regs *regs, int code,
 
 	current->thread.trap_nr = code;
 
-	/*
-	 * Save all the pkey registers AMR/IAMR/UAMOR. Eg: Core dumps need
-	 * to capture the content, if the task gets killed.
-	 */
-	thread_pkey_regs_save(&current->thread);
-
 	return true;
 }
 
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index f47d11f2743d..f747d66cc87d 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -273,30 +273,17 @@ void __init setup_kuap(bool disabled)
 }
 #endif
 
-static inline u64 read_amr(void)
+static inline void update_current_thread_amr(u64 value)
 {
-	return mfspr(SPRN_AMR);
+	current->thread.regs->amr = value;
 }
 
-static inline void write_amr(u64 value)
-{
-	mtspr(SPRN_AMR, value);
-}
-
-static inline u64 read_iamr(void)
-{
-	if (!likely(pkey_execute_disable_supported))
-		return 0x0UL;
-
-	return mfspr(SPRN_IAMR);
-}
-
-static inline void write_iamr(u64 value)
+static inline void update_current_thread_iamr(u64 value)
 {
 	if (!likely(pkey_execute_disable_supported))
 		return;
 
-	mtspr(SPRN_IAMR, value);
+	current->thread.regs->iamr = value;
 }
 
 #ifdef CONFIG_PPC_MEM_KEYS
@@ -311,17 +298,17 @@ void pkey_mm_init(struct mm_struct *mm)
 static inline void init_amr(int pkey, u8 init_bits)
 {
 	u64 new_amr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey));
-	u64 old_amr = read_amr() & ~((u64)(0x3ul) << pkeyshift(pkey));
+	u64 old_amr = current_thread_amr() & ~((u64)(0x3ul) << pkeyshift(pkey));
 
-	write_amr(old_amr | new_amr_bits);
+	update_current_thread_amr(old_amr | new_amr_bits);
 }
 
 static inline void init_iamr(int pkey, u8 init_bits)
 {
 	u64 new_iamr_bits = (((u64)init_bits & 0x1UL) << pkeyshift(pkey));
-	u64 old_iamr = read_iamr() & ~((u64)(0x1ul) << pkeyshift(pkey));
+	u64 old_iamr = current_thread_iamr() & ~((u64)(0x1ul) << pkeyshift(pkey));
 
-	write_iamr(old_iamr | new_iamr_bits);
+	update_current_thread_iamr(old_iamr | new_iamr_bits);
 }
 
 /*
@@ -364,30 +351,6 @@ int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
 	return 0;
 }
 
-void thread_pkey_regs_save(struct thread_struct *thread)
-{
-	if (!mmu_has_feature(MMU_FTR_PKEY))
-		return;
-
-	/*
-	 * TODO: Skip saving registers if @thread hasn't used any keys yet.
-	 */
-	thread->amr = read_amr();
-	thread->iamr = read_iamr();
-}
-
-void thread_pkey_regs_restore(struct thread_struct *new_thread,
-			      struct thread_struct *old_thread)
-{
-	if (!mmu_has_feature(MMU_FTR_PKEY))
-		return;
-
-	if (old_thread->amr != new_thread->amr)
-		write_amr(new_thread->amr);
-	if (old_thread->iamr != new_thread->iamr)
-		write_iamr(new_thread->iamr);
-}
-
 int execute_only_pkey(struct mm_struct *mm)
 {
 	return mm->context.execute_only_pkey;
@@ -436,9 +399,9 @@ static bool pkey_access_permitted(int pkey, bool write, bool execute)
 
 	pkey_shift = pkeyshift(pkey);
 	if (execute)
-		return !(read_iamr() & (IAMR_EX_BIT << pkey_shift));
+		return !(current_thread_iamr() & (IAMR_EX_BIT << pkey_shift));
 
-	amr = read_amr();
+	amr = current_thread_amr();
 	if (write)
 		return !(amr & (AMR_WR_BIT << pkey_shift));
 
-- 
2.28.0



* [PATCH v6 15/22] powerpc/book3s64/kuap: Restrict access to userspace based on userspace AMR
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (13 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 14/22] powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 16/22] powerpc/book3s64/kuap: Improve error reporting with KUAP Aneesh Kumar K.V
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V, Sandipan Das

If an application has configured address protection such that read/write access is
denied using a pkey, the kernel should also receive a fault when accessing that
address.

This patch uses the user AMR value stored in pt_regs.amr to achieve that.

Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/kup.h | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 47270596215b..4a3d0d601745 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -312,14 +312,20 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
 static __always_inline void allow_user_access(void __user *to, const void __user *from,
 					      unsigned long size, unsigned long dir)
 {
+	unsigned long thread_amr = 0;
+
 	// This is written so we can resolve to a single case at build time
 	BUILD_BUG_ON(!__builtin_constant_p(dir));
+
+	if (mmu_has_feature(MMU_FTR_PKEY))
+		thread_amr = current_thread_amr();
+
 	if (dir == KUAP_READ)
-		set_kuap(AMR_KUAP_BLOCK_WRITE);
+		set_kuap(thread_amr | AMR_KUAP_BLOCK_WRITE);
 	else if (dir == KUAP_WRITE)
-		set_kuap(AMR_KUAP_BLOCK_READ);
+		set_kuap(thread_amr | AMR_KUAP_BLOCK_READ);
 	else if (dir == KUAP_READ_WRITE)
-		set_kuap(0);
+		set_kuap(thread_amr);
 	else
 		BUILD_BUG();
 }
-- 
2.28.0



* [PATCH v6 16/22] powerpc/book3s64/kuap: Improve error reporting with KUAP
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (14 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 15/22] powerpc/book3s64/kuap: Restrict access to userspace based on userspace AMR Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25 14:04   ` Christophe Leroy
  2020-11-25  5:16 ` [PATCH v6 17/22] powerpc/book3s64/kuap: Use Key 3 to implement KUAP with hash translation Aneesh Kumar K.V
                   ` (5 subsequent siblings)
  21 siblings, 1 reply; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

With hash translation, use DSISR_KEYFAULT to identify a bad access. With radix,
look at the AMR value and the type of fault.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/32/kup.h     |  4 +--
 arch/powerpc/include/asm/book3s/64/kup.h     | 27 ++++++++++++++++----
 arch/powerpc/include/asm/kup.h               |  4 +--
 arch/powerpc/include/asm/nohash/32/kup-8xx.h |  4 +--
 arch/powerpc/mm/fault.c                      |  2 +-
 5 files changed, 29 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
index 32fd4452e960..b18cd931e325 100644
--- a/arch/powerpc/include/asm/book3s/32/kup.h
+++ b/arch/powerpc/include/asm/book3s/32/kup.h
@@ -177,8 +177,8 @@ static inline void restore_user_access(unsigned long flags)
 		allow_user_access(to, to, end - addr, KUAP_READ_WRITE);
 }
 
-static inline bool
-bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
+				  bool is_write, unsigned long error_code)
 {
 	unsigned long begin = regs->kuap & 0xf0000000;
 	unsigned long end = regs->kuap << 28;
diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 4a3d0d601745..2922c442a218 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -301,12 +301,29 @@ static inline void set_kuap(unsigned long value)
 	isync();
 }
 
-static inline bool
-bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+#define RADIX_KUAP_BLOCK_READ	UL(0x4000000000000000)
+#define RADIX_KUAP_BLOCK_WRITE	UL(0x8000000000000000)
+
+static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
+				  bool is_write, unsigned long error_code)
 {
-	return WARN(mmu_has_feature(MMU_FTR_KUAP) &&
-		    (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
-		    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
+	if (!mmu_has_feature(MMU_FTR_KUAP))
+		return false;
+
+	if (radix_enabled()) {
+		/*
+		 * Will be a storage protection fault.
+		 * Only check the details of AMR[0]
+		 */
+		return WARN((regs->kuap & (is_write ? RADIX_KUAP_BLOCK_WRITE : RADIX_KUAP_BLOCK_READ)),
+			    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
+	}
+	/*
+	 * We don't want to WARN here because userspace can setup
+	 * keys such that a kernel access to user address can cause
+	 * fault
+	 */
+	return !!(error_code & DSISR_KEYFAULT);
 }
 
 static __always_inline void allow_user_access(void __user *to, const void __user *from,
diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
index a06e50b68d40..952be0414f43 100644
--- a/arch/powerpc/include/asm/kup.h
+++ b/arch/powerpc/include/asm/kup.h
@@ -59,8 +59,8 @@ void setup_kuap(bool disabled);
 #else
 static inline void setup_kuap(bool disabled) { }
 
-static inline bool
-bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
+				  bool is_write, unsigned long error_code)
 {
 	return false;
 }
diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
index 567cdc557402..7bdd9e5b63ed 100644
--- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
@@ -60,8 +60,8 @@ static inline void restore_user_access(unsigned long flags)
 	mtspr(SPRN_MD_AP, flags);
 }
 
-static inline bool
-bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
+static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
+				  bool is_write, unsigned long error_code)
 {
 	return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xff000000),
 		    "Bug: fault blocked by AP register !");
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 0add963a849b..c91621df0c61 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -227,7 +227,7 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
 
 	// Read/write fault in a valid region (the exception table search passed
 	// above), but blocked by KUAP is bad, it can never succeed.
-	if (bad_kuap_fault(regs, address, is_write))
+	if (bad_kuap_fault(regs, address, is_write, error_code))
 		return true;
 
 	// What's left? Kernel fault on user in well defined regions (extable
-- 
2.28.0



* [PATCH v6 17/22] powerpc/book3s64/kuap: Use Key 3 to implement KUAP with hash translation.
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (15 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 16/22] powerpc/book3s64/kuap: Improve error reporting with KUAP Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 18/22] powerpc/book3s64/kuep: Use Key 3 to implement KUEP " Aneesh Kumar K.V
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V, Sandipan Das

Radix uses AMR key 0, and hash translation uses AMR key 3.

Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/kup.h | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 2922c442a218..b8861cc2b6c7 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -5,11 +5,10 @@
 #include <linux/const.h>
 #include <asm/reg.h>
 
-#define AMR_KUAP_BLOCK_READ	UL(0x4000000000000000)
-#define AMR_KUAP_BLOCK_WRITE	UL(0x8000000000000000)
+#define AMR_KUAP_BLOCK_READ	UL(0x5455555555555555)
+#define AMR_KUAP_BLOCK_WRITE	UL(0xa8aaaaaaaaaaaaaa)
 #define AMR_KUEP_BLOCKED	(1UL << 62)
 #define AMR_KUAP_BLOCKED	(AMR_KUAP_BLOCK_READ | AMR_KUAP_BLOCK_WRITE)
-#define AMR_KUAP_SHIFT		62
 
 #ifdef __ASSEMBLY__
 
@@ -62,8 +61,8 @@
 #ifdef CONFIG_PPC_KUAP_DEBUG
 	BEGIN_MMU_FTR_SECTION_NESTED(67)
 	mfspr	\gpr1, SPRN_AMR
-	li	\gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
-	sldi	\gpr2, \gpr2, AMR_KUAP_SHIFT
+	/* Prevent access to userspace using any key values */
+	LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
 999:	tdne	\gpr1, \gpr2
 	EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
 	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
-- 
2.28.0



* [PATCH v6 18/22] powerpc/book3s64/kuep: Use Key 3 to implement KUEP with hash translation.
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (16 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 17/22] powerpc/book3s64/kuap: Use Key 3 to implement KUAP with hash translation Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash Aneesh Kumar K.V
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V, Sandipan Das

Radix uses IAMR key 0, and hash translation uses IAMR key 3.

Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/kup.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index b8861cc2b6c7..7026d1b5d0c6 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -7,7 +7,7 @@
 
 #define AMR_KUAP_BLOCK_READ	UL(0x5455555555555555)
 #define AMR_KUAP_BLOCK_WRITE	UL(0xa8aaaaaaaaaaaaaa)
-#define AMR_KUEP_BLOCKED	(1UL << 62)
+#define AMR_KUEP_BLOCKED	UL(0x5455555555555555)
 #define AMR_KUAP_BLOCKED	(AMR_KUAP_BLOCK_READ | AMR_KUAP_BLOCK_WRITE)
 
 #ifdef __ASSEMBLY__
-- 
2.28.0



* [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (17 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 18/22] powerpc/book3s64/kuep: Use Key 3 to implement KUEP " Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2021-03-15 12:06   ` Christophe Leroy
  2021-10-08  9:32   ` Christophe Leroy
  2020-11-25  5:16 ` [PATCH v6 20/22] powerpc/book3s64/hash/kuep: Enable KUEP " Aneesh Kumar K.V
                   ` (2 subsequent siblings)
  21 siblings, 2 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V, Sandipan Das

Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/mm/book3s64/pkeys.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index f747d66cc87d..84f8664ffc47 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -257,7 +257,12 @@ void __init setup_kuep(bool disabled)
 #ifdef CONFIG_PPC_KUAP
 void __init setup_kuap(bool disabled)
 {
-	if (disabled || !early_radix_enabled())
+	if (disabled)
+		return;
+	/*
+	 * On hash if PKEY feature is not enabled, disable KUAP too.
+	 */
+	if (!early_radix_enabled() && !early_mmu_has_feature(MMU_FTR_PKEY))
 		return;
 
 	if (smp_processor_id() == boot_cpuid) {
-- 
2.28.0



* [PATCH v6 20/22] powerpc/book3s64/hash/kuep: Enable KUEP on hash
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (18 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 21/22] powerpc/book3s64/hash/kup: Don't hardcode kup key Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 22/22] powerpc/book3s64/pkeys: Optimize FTR_KUAP and FTR_KUEP disabled case Aneesh Kumar K.V
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V, Sandipan Das

Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/mm/book3s64/pkeys.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index 84f8664ffc47..f029e7bf5ca2 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -236,7 +236,12 @@ void __init pkey_early_init_devtree(void)
 #ifdef CONFIG_PPC_KUEP
 void __init setup_kuep(bool disabled)
 {
-	if (disabled || !early_radix_enabled())
+	if (disabled)
+		return;
+	/*
+	 * On hash if PKEY feature is not enabled, disable KUAP too.
+	 */
+	if (!early_radix_enabled() && !early_mmu_has_feature(MMU_FTR_PKEY))
 		return;
 
 	if (smp_processor_id() == boot_cpuid) {
-- 
2.28.0



* [PATCH v6 21/22] powerpc/book3s64/hash/kup: Don't hardcode kup key
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (19 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 20/22] powerpc/book3s64/hash/kuep: Enable KUEP " Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  2020-11-25  5:16 ` [PATCH v6 22/22] powerpc/book3s64/pkeys: Optimize FTR_KUAP and FTR_KUEP disabled case Aneesh Kumar K.V
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

Make the KUAP/KUEP key a variable, and also check whether the platform limits
the max key such that we can't use the chosen key for KUAP/KUEP.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 .../powerpc/include/asm/book3s/64/hash-pkey.h | 22 +-------
 arch/powerpc/include/asm/book3s/64/pkeys.h    |  1 +
 arch/powerpc/mm/book3s64/pkeys.c              | 53 ++++++++++++++++---
 3 files changed, 49 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash-pkey.h b/arch/powerpc/include/asm/book3s/64/hash-pkey.h
index 9f44e208f036..ff9907c72ee3 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-pkey.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-pkey.h
@@ -2,9 +2,7 @@
 #ifndef _ASM_POWERPC_BOOK3S_64_HASH_PKEY_H
 #define _ASM_POWERPC_BOOK3S_64_HASH_PKEY_H
 
-/*  We use key 3 for KERNEL */
-#define HASH_DEFAULT_KERNEL_KEY (HPTE_R_KEY_BIT0 | HPTE_R_KEY_BIT1)
-
+u64 pte_to_hpte_pkey_bits(u64 pteflags, unsigned long flags);
 static inline u64 hash__vmflag_to_pte_pkey_bits(u64 vm_flags)
 {
 	return (((vm_flags & VM_PKEY_BIT0) ? H_PTE_PKEY_BIT0 : 0x0UL) |
@@ -14,24 +12,6 @@ static inline u64 hash__vmflag_to_pte_pkey_bits(u64 vm_flags)
 		((vm_flags & VM_PKEY_BIT4) ? H_PTE_PKEY_BIT4 : 0x0UL));
 }
 
-static inline u64 pte_to_hpte_pkey_bits(u64 pteflags, unsigned long flags)
-{
-	unsigned long pte_pkey;
-
-	pte_pkey = (((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL) |
-		    ((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
-		    ((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
-		    ((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
-		    ((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
-
-	if (mmu_has_feature(MMU_FTR_KUAP) || mmu_has_feature(MMU_FTR_KUEP)) {
-		if ((pte_pkey == 0) && (flags & HPTE_USE_KERNEL_KEY))
-			return HASH_DEFAULT_KERNEL_KEY;
-	}
-
-	return pte_pkey;
-}
-
 static inline u16 hash__pte_to_pkey_bits(u64 pteflags)
 {
 	return (((pteflags & H_PTE_PKEY_BIT4) ? 0x10 : 0x0UL) |
diff --git a/arch/powerpc/include/asm/book3s/64/pkeys.h b/arch/powerpc/include/asm/book3s/64/pkeys.h
index 3b8640498f5b..a2b6c4a7275f 100644
--- a/arch/powerpc/include/asm/book3s/64/pkeys.h
+++ b/arch/powerpc/include/asm/book3s/64/pkeys.h
@@ -8,6 +8,7 @@
 extern u64 __ro_after_init default_uamor;
 extern u64 __ro_after_init default_amr;
 extern u64 __ro_after_init default_iamr;
+extern int kup_key;
 
 static inline u64 vmflag_to_pte_pkey_bits(u64 vm_flags)
 {
diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
index f029e7bf5ca2..204e4598b45c 100644
--- a/arch/powerpc/mm/book3s64/pkeys.c
+++ b/arch/powerpc/mm/book3s64/pkeys.c
@@ -37,7 +37,10 @@ u64 default_uamor __ro_after_init;
  */
 static int execute_only_key = 2;
 static bool pkey_execute_disable_supported;
-
+/*
+ * key used to implement KUAP/KUEP with hash translation.
+ */
+int kup_key = 3;
 
 #define AMR_BITS_PER_PKEY 2
 #define AMR_RD_BIT 0x1UL
@@ -185,6 +188,25 @@ void __init pkey_early_init_devtree(void)
 		default_uamor &= ~(0x3ul << pkeyshift(execute_only_key));
 	}
 
+	if (unlikely(num_pkey <= kup_key)) {
+		/*
+		 * Insufficient number of keys to support
+		 * KUAP/KUEP feature.
+		 */
+		kup_key = -1;
+	} else {
+		/*  handle key which is used by kernel for KAUP */
+		reserved_allocation_mask |= (0x1 << kup_key);
+		/*
+		 * Mark access for kup_key in default amr so that
+		 * we continue to operate with that AMR in
+		 * copy_to/from_user().
+		 */
+		default_amr   &= ~(0x3ul << pkeyshift(kup_key));
+		default_iamr  &= ~(0x1ul << pkeyshift(kup_key));
+		default_uamor &= ~(0x3ul << pkeyshift(kup_key));
+	}
+
 	/*
 	 * Allow access for only key 0. And prevent any other modification.
 	 */
@@ -205,9 +227,6 @@ void __init pkey_early_init_devtree(void)
 	reserved_allocation_mask |= (0x1 << 1);
 	default_uamor &= ~(0x3ul << pkeyshift(1));
 
-	/*  handle key 3 which is used by kernel for KAUP */
-	reserved_allocation_mask |= (0x1 << 3);
-	default_uamor &= ~(0x3ul << pkeyshift(3));
 
 	/*
 	 * Prevent the usage of OS reserved keys. Update UAMOR
@@ -236,7 +255,7 @@ void __init pkey_early_init_devtree(void)
 #ifdef CONFIG_PPC_KUEP
 void __init setup_kuep(bool disabled)
 {
-	if (disabled)
+	if (disabled || kup_key == -1)
 		return;
 	/*
 	 * On hash if PKEY feature is not enabled, disable KUAP too.
@@ -262,7 +281,7 @@ void __init setup_kuep(bool disabled)
 #ifdef CONFIG_PPC_KUAP
 void __init setup_kuap(bool disabled)
 {
-	if (disabled)
+	if (disabled || kup_key == -1)
 		return;
 	/*
 	 * On hash if PKEY feature is not enabled, disable KUAP too.
@@ -458,4 +477,26 @@ void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm)
 	mm->context.execute_only_pkey = oldmm->context.execute_only_pkey;
 }
 
+u64 pte_to_hpte_pkey_bits(u64 pteflags, unsigned long flags)
+{
+	unsigned long pte_pkey;
+
+	pte_pkey = (((pteflags & H_PTE_PKEY_BIT4) ? HPTE_R_KEY_BIT4 : 0x0UL) |
+		    ((pteflags & H_PTE_PKEY_BIT3) ? HPTE_R_KEY_BIT3 : 0x0UL) |
+		    ((pteflags & H_PTE_PKEY_BIT2) ? HPTE_R_KEY_BIT2 : 0x0UL) |
+		    ((pteflags & H_PTE_PKEY_BIT1) ? HPTE_R_KEY_BIT1 : 0x0UL) |
+		    ((pteflags & H_PTE_PKEY_BIT0) ? HPTE_R_KEY_BIT0 : 0x0UL));
+
+	if (mmu_has_feature(MMU_FTR_KUAP) || mmu_has_feature(MMU_FTR_KUEP)) {
+		if ((pte_pkey == 0) &&
+		    (flags & HPTE_USE_KERNEL_KEY) && (kup_key != -1)) {
+			u64 vm_flag = pkey_to_vmflag_bits(kup_key);
+			u64 pte_flag = hash__vmflag_to_pte_pkey_bits(vm_flag);
+			return pte_to_hpte_pkey_bits(pte_flag, 0);
+		}
+	}
+
+	return pte_pkey;
+}
+
 #endif /* CONFIG_PPC_MEM_KEYS */
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* [PATCH v6 22/22] powerpc/book3s64/pkeys: Optimize FTR_KUAP and FTR_KUEP disabled case
  2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
                   ` (20 preceding siblings ...)
  2020-11-25  5:16 ` [PATCH v6 21/22] powerpc/book3s64/hash/kup: Don't hardcode kup key Aneesh Kumar K.V
@ 2020-11-25  5:16 ` Aneesh Kumar K.V
  21 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25  5:16 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: Aneesh Kumar K.V

If FTR_KUAP is disabled, the kernel will continue to run with the same
AMR value with which it was entered. Hence there is a high chance that
we can return without restoring the AMR value. This also helps the case
when applications are not using the pkey feature: there, different
applications will have the same AMR values, so we can avoid restoring
AMR in that case too.

Also avoid the isync() when it is not really needed.

Do the same for IAMR.

null-syscall benchmark results:

With smap/smep disabled:
Without patch:
	957.95 ns    2778.17 cycles
With patch:
	858.38 ns    2489.30 cycles

With smap/smep enabled:
Without patch:
	1017.26 ns    2950.36 cycles
With patch:
	1021.51 ns    2962.44 cycles

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 arch/powerpc/include/asm/book3s/64/kup.h | 61 +++++++++++++++++++++---
 arch/powerpc/kernel/entry_64.S           |  2 +-
 arch/powerpc/kernel/syscall_64.c         | 12 +++--
 3 files changed, 65 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
index 7026d1b5d0c6..e063e439b0a8 100644
--- a/arch/powerpc/include/asm/book3s/64/kup.h
+++ b/arch/powerpc/include/asm/book3s/64/kup.h
@@ -12,28 +12,54 @@
 
 #ifdef __ASSEMBLY__
 
-.macro kuap_restore_user_amr gpr1
+.macro kuap_restore_user_amr gpr1, gpr2
 #if defined(CONFIG_PPC_PKEY)
 	BEGIN_MMU_FTR_SECTION_NESTED(67)
+	b	100f  // skip_restore_amr
+	END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY, 67)
 	/*
 	 * AMR and IAMR are going to be different when
 	 * returning to userspace.
 	 */
 	ld	\gpr1, STACK_REGS_AMR(r1)
+
+	/*
+	 * If kuap feature is not enabled, do the mtspr
+	 * only if AMR value is different.
+	 */
+	BEGIN_MMU_FTR_SECTION_NESTED(68)
+	mfspr	\gpr2, SPRN_AMR
+	cmpd	\gpr1, \gpr2
+	beq	99f
+	END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_KUAP, 68)
+
 	isync
 	mtspr	SPRN_AMR, \gpr1
+99:
 	/*
 	 * Restore IAMR only when returning to userspace
 	 */
 	ld	\gpr1, STACK_REGS_IAMR(r1)
+
+	/*
+	 * If kuep feature is not enabled, do the mtspr
+	 * only if IAMR value is different.
+	 */
+	BEGIN_MMU_FTR_SECTION_NESTED(69)
+	mfspr	\gpr2, SPRN_IAMR
+	cmpd	\gpr1, \gpr2
+	beq	100f
+	END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_KUEP, 69)
+
+	isync
 	mtspr	SPRN_IAMR, \gpr1
 
+100: // skip_restore_amr
 	/* No isync required, see kuap_restore_user_amr() */
-	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_PKEY, 67)
 #endif
 .endm
 
-.macro kuap_restore_kernel_amr	gpr1, gpr2
+.macro kuap_restore_kernel_amr gpr1, gpr2
 #if defined(CONFIG_PPC_PKEY)
 
 	BEGIN_MMU_FTR_SECTION_NESTED(67)
@@ -197,18 +223,41 @@ static inline u64 current_thread_iamr(void)
 
 static inline void kuap_restore_user_amr(struct pt_regs *regs)
 {
+	bool restore_amr = false, restore_iamr = false;
+	unsigned long amr, iamr;
+
 	if (!mmu_has_feature(MMU_FTR_PKEY))
 		return;
 
-	isync();
-	mtspr(SPRN_AMR, regs->amr);
-	mtspr(SPRN_IAMR, regs->iamr);
+	if (!mmu_has_feature(MMU_FTR_KUAP)) {
+		amr = mfspr(SPRN_AMR);
+		if (amr != regs->amr)
+			restore_amr = true;
+	} else
+		restore_amr = true;
+
+	if (!mmu_has_feature(MMU_FTR_KUEP)) {
+		iamr = mfspr(SPRN_IAMR);
+		if (iamr != regs->iamr)
+			restore_iamr = true;
+	} else
+		restore_iamr = true;
+
+
+	if (restore_amr || restore_iamr) {
+		isync();
+		if (restore_amr)
+			mtspr(SPRN_AMR, regs->amr);
+		if (restore_iamr)
+			mtspr(SPRN_IAMR, regs->iamr);
+	}
 	/*
 	 * No isync required here because we are about to rfi
 	 * back to previous context before any user accesses
 	 * would be made, which is a CSI.
 	 */
 }
+
 static inline void kuap_restore_kernel_amr(struct pt_regs *regs,
 					   unsigned long amr)
 {
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index e49291594c68..a68517e99fd2 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -675,7 +675,7 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return)
 	bne-	.Lrestore_nvgprs
 
 .Lfast_user_interrupt_return_amr:
-	kuap_restore_user_amr r3
+	kuap_restore_user_amr r3, r4
 .Lfast_user_interrupt_return:
 	ld	r11,_NIP(r1)
 	ld	r12,_MSR(r1)
diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
index 60c57609d316..681f9afafc6f 100644
--- a/arch/powerpc/kernel/syscall_64.c
+++ b/arch/powerpc/kernel/syscall_64.c
@@ -38,6 +38,7 @@ notrace long system_call_exception(long r3, long r4, long r5,
 #ifdef CONFIG_PPC_PKEY
 	if (mmu_has_feature(MMU_FTR_PKEY)) {
 		unsigned long amr, iamr;
+		bool flush_needed = false;
 		/*
 		 * When entering from userspace we mostly have the AMR/IAMR
 		 * different from kernel default values. Hence don't compare.
@@ -46,11 +47,16 @@ notrace long system_call_exception(long r3, long r4, long r5,
 		iamr = mfspr(SPRN_IAMR);
 		regs->amr  = amr;
 		regs->iamr = iamr;
-		if (mmu_has_feature(MMU_FTR_KUAP))
+		if (mmu_has_feature(MMU_FTR_KUAP)) {
 			mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
-		if (mmu_has_feature(MMU_FTR_KUEP))
+			flush_needed = true;
+		}
+		if (mmu_has_feature(MMU_FTR_KUEP)) {
 			mtspr(SPRN_IAMR, AMR_KUEP_BLOCKED);
-		isync();
+			flush_needed = true;
+		}
+		if (flush_needed)
+			isync();
 	} else
 #endif
 		kuap_check_amr();
-- 
2.28.0


^ permalink raw reply related	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 03/22] powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of PPC_MEM_KEYS
  2020-11-25  5:16 ` [PATCH v6 03/22] powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of PPC_MEM_KEYS Aneesh Kumar K.V
@ 2020-11-25 13:30   ` Christophe Leroy
  2020-11-25 14:57     ` Aneesh Kumar K.V
  2020-11-26  3:25   ` Michael Ellerman
  1 sibling, 1 reply; 48+ messages in thread
From: Christophe Leroy @ 2020-11-25 13:30 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe



On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
> The next set of patches adds support for kuap with hash translation.
> Hence make KUAP a BOOK3S_64 feature. Also make it a subfeature of
> PPC_MEM_KEYS. Hash translation is going to use pkeys to support
> KUAP/KUEP. Adding this dependency reduces the code complexity and
> enables us to move some of the initialization code to pkeys.c
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>   .../powerpc/include/asm/book3s/64/kup-radix.h |  4 ++--
>   arch/powerpc/include/asm/book3s/64/mmu.h      |  2 +-
>   arch/powerpc/include/asm/ptrace.h             |  7 +++++-
>   arch/powerpc/kernel/asm-offsets.c             |  3 +++
>   arch/powerpc/mm/book3s64/Makefile             |  2 +-
>   arch/powerpc/mm/book3s64/pkeys.c              | 24 ++++++++++++-------
>   arch/powerpc/platforms/Kconfig.cputype        |  5 ++++
>   7 files changed, 33 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
> index 28716e2f13e3..68eaa2fac3ab 100644
> --- a/arch/powerpc/include/asm/book3s/64/kup-radix.h
> +++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
> @@ -16,7 +16,7 @@
>   #ifdef CONFIG_PPC_KUAP
>   	BEGIN_MMU_FTR_SECTION_NESTED(67)
>   	mfspr	\gpr1, SPRN_AMR
> -	ld	\gpr2, STACK_REGS_KUAP(r1)
> +	ld	\gpr2, STACK_REGS_AMR(r1)
>   	cmpd	\gpr1, \gpr2
>   	beq	998f
>   	isync
> @@ -48,7 +48,7 @@
>   	bne	\msr_pr_cr, 99f
>   	.endif
>   	mfspr	\gpr1, SPRN_AMR
> -	std	\gpr1, STACK_REGS_KUAP(r1)
> +	std	\gpr1, STACK_REGS_AMR(r1)
>   	li	\gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
>   	sldi	\gpr2, \gpr2, AMR_KUAP_SHIFT
>   	cmpd	\use_cr, \gpr1, \gpr2
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index e0b52940e43c..a2a015066bae 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -199,7 +199,7 @@ extern int mmu_io_psize;
>   void mmu_early_init_devtree(void);
>   void hash__early_init_devtree(void);
>   void radix__early_init_devtree(void);
> -#ifdef CONFIG_PPC_MEM_KEYS
> +#ifdef CONFIG_PPC_PKEY
>   void pkey_early_init_devtree(void);
>   #else
>   static inline void pkey_early_init_devtree(void) {}
> diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
> index e2c778c176a3..e7f1caa007a4 100644
> --- a/arch/powerpc/include/asm/ptrace.h
> +++ b/arch/powerpc/include/asm/ptrace.h
> @@ -53,9 +53,14 @@ struct pt_regs
>   #ifdef CONFIG_PPC64
>   			unsigned long ppr;
>   #endif
> +			union {
>   #ifdef CONFIG_PPC_KUAP
> -			unsigned long kuap;
> +				unsigned long kuap;
>   #endif
> +#ifdef CONFIG_PPC_PKEY
> +				unsigned long amr;
> +#endif
> +			};
>   		};
>   		unsigned long __pad[2];	/* Maintain 16 byte interrupt stack alignment */
>   	};
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index c2722ff36e98..418a0b314a33 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -354,6 +354,9 @@ int main(void)
>   	STACK_PT_REGS_OFFSET(_PPR, ppr);
>   #endif /* CONFIG_PPC64 */
>   
> +#ifdef CONFIG_PPC_PKEY
> +	STACK_PT_REGS_OFFSET(STACK_REGS_AMR, amr);
> +#endif
>   #ifdef CONFIG_PPC_KUAP
>   	STACK_PT_REGS_OFFSET(STACK_REGS_KUAP, kuap);
>   #endif
> diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
> index fd393b8be14f..1b56d3af47d4 100644
> --- a/arch/powerpc/mm/book3s64/Makefile
> +++ b/arch/powerpc/mm/book3s64/Makefile
> @@ -17,7 +17,7 @@ endif
>   obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o
>   obj-$(CONFIG_PPC_SUBPAGE_PROT)	+= subpage_prot.o
>   obj-$(CONFIG_SPAPR_TCE_IOMMU)	+= iommu_api.o
> -obj-$(CONFIG_PPC_MEM_KEYS)	+= pkeys.o
> +obj-$(CONFIG_PPC_PKEY)	+= pkeys.o
>   
>   # Instrumenting the SLB fault path can lead to duplicate SLB entries
>   KCOV_INSTRUMENT_slb.o := n
> diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
> index b1d091a97611..7dc71f85683d 100644
> --- a/arch/powerpc/mm/book3s64/pkeys.c
> +++ b/arch/powerpc/mm/book3s64/pkeys.c
> @@ -89,12 +89,14 @@ static int scan_pkey_feature(void)
>   		}
>   	}
>   
> +#ifdef CONFIG_PPC_MEM_KEYS
>   	/*
>   	 * Adjust the upper limit, based on the number of bits supported by
>   	 * arch-neutral code.
>   	 */
>   	pkeys_total = min_t(int, pkeys_total,
>   			    ((ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT) + 1));

I don't think we need an #ifdef here. I think an 'if (IS_ENABLED(CONFIG_PPC_MEM_KEYS))' should do it.

> +#endif
>   	return pkeys_total;
>   }
>   
> @@ -102,6 +104,7 @@ void __init pkey_early_init_devtree(void)
>   {
>   	int pkeys_total, i;
>   
> +#ifdef CONFIG_PPC_MEM_KEYS
>   	/*
>   	 * We define PKEY_DISABLE_EXECUTE in addition to the arch-neutral
>   	 * generic defines for PKEY_DISABLE_ACCESS and PKEY_DISABLE_WRITE.
> @@ -117,7 +120,7 @@ void __init pkey_early_init_devtree(void)
>   	BUILD_BUG_ON(__builtin_clzl(ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT) +
>   		     __builtin_popcountl(ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT)
>   				!= (sizeof(u64) * BITS_PER_BYTE));
> -
> +#endif
>   	/*
>   	 * Only P7 and above supports SPRN_AMR update with MSR[PR] = 1
>   	 */
> @@ -223,14 +226,6 @@ void __init pkey_early_init_devtree(void)
>   	return;
>   }
>   
> -void pkey_mm_init(struct mm_struct *mm)
> -{
> -	if (!mmu_has_feature(MMU_FTR_PKEY))
> -		return;
> -	mm_pkey_allocation_map(mm) = initial_allocation_mask;
> -	mm->context.execute_only_pkey = execute_only_key;
> -}
> -
>   static inline u64 read_amr(void)
>   {
>   	return mfspr(SPRN_AMR);
> @@ -257,6 +252,15 @@ static inline void write_iamr(u64 value)
>   	mtspr(SPRN_IAMR, value);
>   }
>   
> +#ifdef CONFIG_PPC_MEM_KEYS
> +void pkey_mm_init(struct mm_struct *mm)
> +{
> +	if (!mmu_has_feature(MMU_FTR_PKEY))
> +		return;
> +	mm_pkey_allocation_map(mm) = initial_allocation_mask;
> +	mm->context.execute_only_pkey = execute_only_key;
> +}
> +
>   static inline void init_amr(int pkey, u8 init_bits)
>   {
>   	u64 new_amr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey));
> @@ -445,3 +449,5 @@ void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm)
>   	mm_pkey_allocation_map(mm) = mm_pkey_allocation_map(oldmm);
>   	mm->context.execute_only_pkey = oldmm->context.execute_only_pkey;
>   }
> +
> +#endif /* CONFIG_PPC_MEM_KEYS */
> diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
> index c194c4ae8bc7..f255e8f32155 100644
> --- a/arch/powerpc/platforms/Kconfig.cputype
> +++ b/arch/powerpc/platforms/Kconfig.cputype
> @@ -395,6 +395,11 @@ config PPC_KUAP_DEBUG
>   	  Add extra debugging for Kernel Userspace Access Protection (KUAP)
>   	  If you're unsure, say N.
>   
> +config PPC_PKEY
> +	def_bool y
> +	depends on PPC_BOOK3S_64
> +	depends on PPC_MEM_KEYS || PPC_KUAP || PPC_KUEP
> +
>   config ARCH_ENABLE_HUGEPAGE_MIGRATION
>   	def_bool y
>   	depends on PPC_BOOK3S_64 && HUGETLB_PAGE && MIGRATION
> 

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 04/22] powerpc/book3s64/kuap/kuep: Move uamor setup to pkey init
  2020-11-25  5:16 ` [PATCH v6 04/22] powerpc/book3s64/kuap/kuep: Move uamor setup to pkey init Aneesh Kumar K.V
@ 2020-11-25 13:32   ` Christophe Leroy
  2020-11-26  3:28   ` Michael Ellerman
  1 sibling, 0 replies; 48+ messages in thread
From: Christophe Leroy @ 2020-11-25 13:32 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe



On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
> This patch consolidates UAMOR update across pkey, kuap and kuep features.
> The boot cpu initializes UAMOR via pkey init, and both radix/hash do the
> secondary cpu UAMOR init in early_init_mmu_secondary.
> 
> We don't check for mmu_feature in radix secondary init because UAMOR
> is a supported SPRN with all CPUs supporting radix translation.
> The old code was not updating UAMOR if we had smap disabled and smep enabled.
> This change handles that case.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>   arch/powerpc/mm/book3s64/radix_pgtable.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 3adcf730f478..bfe441af916a 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -620,9 +620,6 @@ void setup_kuap(bool disabled)
>   		cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
>   	}
>   
> -	/* Make sure userspace can't change the AMR */
> -	mtspr(SPRN_UAMOR, 0);
> -
>   	/*
>   	 * Set the default kernel AMR values on all cpus.
>   	 */
> @@ -721,6 +718,11 @@ void radix__early_init_mmu_secondary(void)
>   
>   	radix__switch_mmu_context(NULL, &init_mm);
>   	tlbiel_all();
> +
> +#ifdef CONFIG_PPC_PKEY

It should be possible to use an 'if' with IS_ENABLED(CONFIG_PPC_PKEY) instead of this #ifdef

> +	/* Make sure userspace can't change the AMR */
> +	mtspr(SPRN_UAMOR, 0);
> +#endif
>   }
>   
>   void radix__mmu_cleanup_all(void)
> 

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 07/22] powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP
  2020-11-25  5:16 ` [PATCH v6 07/22] powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP Aneesh Kumar K.V
@ 2020-11-25 13:43   ` Christophe Leroy
  2020-11-25 13:52     ` Aneesh Kumar K.V
  2020-11-26  3:16   ` Michael Ellerman
  1 sibling, 1 reply; 48+ messages in thread
From: Christophe Leroy @ 2020-11-25 13:43 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe



On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
> This is in preparation for adding support for kuap with hash translation.
> To that end, rename/move the kuap-related functions to
> non-radix names. Also move the feature bit closer to MMU_FTR_KUEP.

It was obvious with MMU_FTR_RADIX_KUAP that it was only for Radix PPC64.
Now, do we expect it to be applied on PPC32 as well, or is it still for PPC64 only?

> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/book3s/64/kup.h | 18 +++++++++---------
>   arch/powerpc/include/asm/mmu.h           | 14 +++++++-------
>   arch/powerpc/mm/book3s64/pkeys.c         |  2 +-
>   3 files changed, 17 insertions(+), 17 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
> index 39d2e3a0d64d..1d38eab83d48 100644
> --- a/arch/powerpc/include/asm/book3s/64/kup.h
> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
> @@ -24,7 +24,7 @@
>   	mtspr	SPRN_AMR, \gpr2
>   	/* No isync required, see kuap_restore_amr() */
>   998:
> -	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
> +	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>   #endif
>   .endm
>   
> @@ -37,7 +37,7 @@
>   	sldi	\gpr2, \gpr2, AMR_KUAP_SHIFT
>   999:	tdne	\gpr1, \gpr2
>   	EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
> -	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
> +	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>   #endif
>   .endm
>   #endif
> @@ -58,7 +58,7 @@
>   	mtspr	SPRN_AMR, \gpr2
>   	isync
>   99:
> -	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
> +	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>   #endif
>   .endm
>   
> @@ -73,7 +73,7 @@ DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
>   
>   static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
>   {
> -	if (mmu_has_feature(MMU_FTR_RADIX_KUAP) && unlikely(regs->kuap != amr)) {
> +	if (mmu_has_feature(MMU_FTR_KUAP) && unlikely(regs->kuap != amr)) {
>   		isync();
>   		mtspr(SPRN_AMR, regs->kuap);
>   		/*
> @@ -86,7 +86,7 @@ static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
>   
>   static inline unsigned long kuap_get_and_check_amr(void)
>   {
> -	if (mmu_has_feature(MMU_FTR_RADIX_KUAP)) {
> +	if (mmu_has_feature(MMU_FTR_KUAP)) {
>   		unsigned long amr = mfspr(SPRN_AMR);
>   		if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG)) /* kuap_check_amr() */
>   			WARN_ON_ONCE(amr != AMR_KUAP_BLOCKED);
> @@ -97,7 +97,7 @@ static inline unsigned long kuap_get_and_check_amr(void)
>   
>   static inline void kuap_check_amr(void)
>   {
> -	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_RADIX_KUAP))
> +	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_KUAP))
>   		WARN_ON_ONCE(mfspr(SPRN_AMR) != AMR_KUAP_BLOCKED);
>   }
>   
> @@ -116,7 +116,7 @@ static inline unsigned long get_kuap(void)
>   	 * This has no effect in terms of actually blocking things on hash,
>   	 * so it doesn't break anything.
>   	 */
> -	if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
> +	if (!early_mmu_has_feature(MMU_FTR_KUAP))
>   		return AMR_KUAP_BLOCKED;
>   
>   	return mfspr(SPRN_AMR);
> @@ -124,7 +124,7 @@ static inline unsigned long get_kuap(void)
>   
>   static inline void set_kuap(unsigned long value)
>   {
> -	if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
> +	if (!early_mmu_has_feature(MMU_FTR_KUAP))
>   		return;
>   
>   	/*
> @@ -139,7 +139,7 @@ static inline void set_kuap(unsigned long value)
>   static inline bool
>   bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
>   {
> -	return WARN(mmu_has_feature(MMU_FTR_RADIX_KUAP) &&
> +	return WARN(mmu_has_feature(MMU_FTR_KUAP) &&
>   		    (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
>   		    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
>   }
> diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
> index 255a1837e9f7..f5c7a17c198a 100644
> --- a/arch/powerpc/include/asm/mmu.h
> +++ b/arch/powerpc/include/asm/mmu.h
> @@ -28,6 +28,11 @@
>    * Individual features below.
>    */
>   
> +/*
> + * Supports KUAP (key 0 controlling userspace addresses) on radix
> + */
> +#define MMU_FTR_KUAP			ASM_CONST(0x00000200)
> +
>   /*
>    * Support for KUEP feature.
>    */
> @@ -120,11 +125,6 @@
>    */
>   #define MMU_FTR_1T_SEGMENT		ASM_CONST(0x40000000)
>   
> -/*
> - * Supports KUAP (key 0 controlling userspace addresses) on radix
> - */
> -#define MMU_FTR_RADIX_KUAP		ASM_CONST(0x80000000)
> -
>   /* MMU feature bit sets for various CPUs */
>   #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2	\
>   	MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
> @@ -187,10 +187,10 @@ enum {
>   #ifdef CONFIG_PPC_RADIX_MMU
>   		MMU_FTR_TYPE_RADIX |
>   		MMU_FTR_GTSE |
> +#endif /* CONFIG_PPC_RADIX_MMU */
>   #ifdef CONFIG_PPC_KUAP
> -		MMU_FTR_RADIX_KUAP |
> +	MMU_FTR_KUAP |
>   #endif /* CONFIG_PPC_KUAP */
> -#endif /* CONFIG_PPC_RADIX_MMU */
>   #ifdef CONFIG_PPC_MEM_KEYS
>   	MMU_FTR_PKEY |
>   #endif
> diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
> index 82c722fbce52..bfc27f1f0ab0 100644
> --- a/arch/powerpc/mm/book3s64/pkeys.c
> +++ b/arch/powerpc/mm/book3s64/pkeys.c
> @@ -258,7 +258,7 @@ void __init setup_kuap(bool disabled)
>   
>   	if (smp_processor_id() == boot_cpuid) {
>   		pr_info("Activating Kernel Userspace Access Prevention\n");
> -		cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
> +		cur_cpu_spec->mmu_features |= MMU_FTR_KUAP;
>   	}
>   
>   	/*
> 

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 09/22] powerpc/exec: Set thread.regs early during exec
  2020-11-25  5:16 ` [PATCH v6 09/22] powerpc/exec: Set thread.regs early during exec Aneesh Kumar K.V
@ 2020-11-25 13:47   ` Christophe Leroy
  2020-11-26  7:38     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 48+ messages in thread
From: Christophe Leroy @ 2020-11-25 13:47 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe



On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
> In later patches during exec, we would like to access default regs.amr to
> control access to the user mapping. Having thread.regs set early makes the
> code changes simpler.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/thread_info.h |  2 --
>   arch/powerpc/kernel/process.c          | 37 +++++++++++++++++---------
>   2 files changed, 25 insertions(+), 14 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
> index 46a210b03d2b..de4c911d9ced 100644
> --- a/arch/powerpc/include/asm/thread_info.h
> +++ b/arch/powerpc/include/asm/thread_info.h
> @@ -77,10 +77,8 @@ struct thread_info {
>   /* how to get the thread information struct from C */
>   extern int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
>   
> -#ifdef CONFIG_PPC_BOOK3S_64
>   void arch_setup_new_exec(void);
>   #define arch_setup_new_exec arch_setup_new_exec
> -#endif
>   
>   #endif /* __ASSEMBLY__ */
>   
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index d421a2c7f822..b6b8a845e454 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -1530,10 +1530,32 @@ void flush_thread(void)
>   #ifdef CONFIG_PPC_BOOK3S_64
>   void arch_setup_new_exec(void)
>   {
> -	if (radix_enabled())
> -		return;
> -	hash__setup_new_exec();
> +	if (!radix_enabled())
> +		hash__setup_new_exec();
> +
> +	/*
> +	 * If we exec out of a kernel thread then thread.regs will not be
> +	 * set.  Do it now.
> +	 */
> +	if (!current->thread.regs) {
> +		struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
> +		current->thread.regs = regs - 1;
> +	}
> +
> +}
> +#else
> +void arch_setup_new_exec(void)
> +{
> +	/*
> +	 * If we exec out of a kernel thread then thread.regs will not be
> +	 * set.  Do it now.
> +	 */
> +	if (!current->thread.regs) {
> +		struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
> +		current->thread.regs = regs - 1;
> +	}
>   }
> +
>   #endif

No need to duplicate arch_setup_new_exec() I think. radix_enabled() is defined at all
times, so the first function should be valid at all times.

>   
>   #ifdef CONFIG_PPC64
> @@ -1765,15 +1787,6 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
>   		preload_new_slb_context(start, sp);
>   #endif
>   
> -	/*
> -	 * If we exec out of a kernel thread then thread.regs will not be
> -	 * set.  Do it now.
> -	 */
> -	if (!current->thread.regs) {
> -		struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
> -		current->thread.regs = regs - 1;
> -	}
> -
>   #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
>   	/*
>   	 * Clear any transactional state, we're exec()ing. The cause is
> 

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 07/22] powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP
  2020-11-25 13:43   ` Christophe Leroy
@ 2020-11-25 13:52     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25 13:52 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev, mpe

On 11/25/20 7:13 PM, Christophe Leroy wrote:
> 
> 
> On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
>> This is in preparation for adding support for kuap with hash translation.
>> To that end, rename/move the kuap-related functions to
>> non-radix names. Also move the feature bit closer to MMU_FTR_KUEP.
> 
> It was obvious with MMU_FTR_RADIX_KUAP that it was only for Radix PPC64.
> Now, do we expect it to be applied on PPC32 as well, or is it still for
> PPC64 only?

Right now this is PPC64 only. I added

+config PPC_PKEY
+	def_bool y
+	depends on PPC_BOOK3S_64
+	depends on PPC_MEM_KEYS || PPC_KUAP || PPC_KUEP

to select the base bits needed for both KUAP and MEM_KEYS. I haven't 
looked at PPC32 to see if we can implement it there also.

> 
>>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>> ---
>>   arch/powerpc/include/asm/book3s/64/kup.h | 18 +++++++++---------
>>   arch/powerpc/include/asm/mmu.h           | 14 +++++++-------
>>   arch/powerpc/mm/book3s64/pkeys.c         |  2 +-
>>   3 files changed, 17 insertions(+), 17 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h 
>> b/arch/powerpc/include/asm/book3s/64/kup.h
>> index 39d2e3a0d64d..1d38eab83d48 100644
>> --- a/arch/powerpc/include/asm/book3s/64/kup.h
>> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
>> @@ -24,7 +24,7 @@
>>       mtspr    SPRN_AMR, \gpr2
>>       /* No isync required, see kuap_restore_amr() */
>>   998:
>> -    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
>> +    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>>   #endif
>>   .endm
>> @@ -37,7 +37,7 @@
>>       sldi    \gpr2, \gpr2, AMR_KUAP_SHIFT
>>   999:    tdne    \gpr1, \gpr2
>>       EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | 
>> BUGFLAG_ONCE)
>> -    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
>> +    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>>   #endif
>>   .endm
>>   #endif
>> @@ -58,7 +58,7 @@
>>       mtspr    SPRN_AMR, \gpr2
>>       isync
>>   99:
>> -    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_RADIX_KUAP, 67)
>> +    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>>   #endif
>>   .endm
>> @@ -73,7 +73,7 @@ DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
>>   static inline void kuap_restore_amr(struct pt_regs *regs, unsigned 
>> long amr)
>>   {
>> -    if (mmu_has_feature(MMU_FTR_RADIX_KUAP) && unlikely(regs->kuap != 
>> amr)) {
>> +    if (mmu_has_feature(MMU_FTR_KUAP) && unlikely(regs->kuap != amr)) {
>>           isync();
>>           mtspr(SPRN_AMR, regs->kuap);
>>           /*
>> @@ -86,7 +86,7 @@ static inline void kuap_restore_amr(struct pt_regs 
>> *regs, unsigned long amr)
>>   static inline unsigned long kuap_get_and_check_amr(void)
>>   {
>> -    if (mmu_has_feature(MMU_FTR_RADIX_KUAP)) {
>> +    if (mmu_has_feature(MMU_FTR_KUAP)) {
>>           unsigned long amr = mfspr(SPRN_AMR);
>>           if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG)) /* kuap_check_amr() */
>>               WARN_ON_ONCE(amr != AMR_KUAP_BLOCKED);
>> @@ -97,7 +97,7 @@ static inline unsigned long 
>> kuap_get_and_check_amr(void)
>>   static inline void kuap_check_amr(void)
>>   {
>> -    if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && 
>> mmu_has_feature(MMU_FTR_RADIX_KUAP))
>> +    if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && 
>> mmu_has_feature(MMU_FTR_KUAP))
>>           WARN_ON_ONCE(mfspr(SPRN_AMR) != AMR_KUAP_BLOCKED);
>>   }
>> @@ -116,7 +116,7 @@ static inline unsigned long get_kuap(void)
>>        * This has no effect in terms of actually blocking things on hash,
>>        * so it doesn't break anything.
>>        */
>> -    if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
>> +    if (!early_mmu_has_feature(MMU_FTR_KUAP))
>>           return AMR_KUAP_BLOCKED;
>>       return mfspr(SPRN_AMR);
>> @@ -124,7 +124,7 @@ static inline unsigned long get_kuap(void)
>>   static inline void set_kuap(unsigned long value)
>>   {
>> -    if (!early_mmu_has_feature(MMU_FTR_RADIX_KUAP))
>> +    if (!early_mmu_has_feature(MMU_FTR_KUAP))
>>           return;
>>       /*
>> @@ -139,7 +139,7 @@ static inline void set_kuap(unsigned long value)
>>   static inline bool
>>   bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool 
>> is_write)
>>   {
>> -    return WARN(mmu_has_feature(MMU_FTR_RADIX_KUAP) &&
>> +    return WARN(mmu_has_feature(MMU_FTR_KUAP) &&
>>               (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : 
>> AMR_KUAP_BLOCK_READ)),
>>               "Bug: %s fault blocked by AMR!", is_write ? "Write" : 
>> "Read");
>>   }
>> diff --git a/arch/powerpc/include/asm/mmu.h 
>> b/arch/powerpc/include/asm/mmu.h
>> index 255a1837e9f7..f5c7a17c198a 100644
>> --- a/arch/powerpc/include/asm/mmu.h
>> +++ b/arch/powerpc/include/asm/mmu.h
>> @@ -28,6 +28,11 @@
>>    * Individual features below.
>>    */
>> +/*
>> + * Supports KUAP (key 0 controlling userspace addresses) on radix
>> + */
>> +#define MMU_FTR_KUAP            ASM_CONST(0x00000200)
>> +
>>   /*
>>    * Support for KUEP feature.
>>    */
>> @@ -120,11 +125,6 @@
>>    */
>>   #define MMU_FTR_1T_SEGMENT        ASM_CONST(0x40000000)
>> -/*
>> - * Supports KUAP (key 0 controlling userspace addresses) on radix
>> - */
>> -#define MMU_FTR_RADIX_KUAP        ASM_CONST(0x80000000)
>> -
>>   /* MMU feature bit sets for various CPUs */
>>   #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2    \
>>       MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
>> @@ -187,10 +187,10 @@ enum {
>>   #ifdef CONFIG_PPC_RADIX_MMU
>>           MMU_FTR_TYPE_RADIX |
>>           MMU_FTR_GTSE |
>> +#endif /* CONFIG_PPC_RADIX_MMU */
>>   #ifdef CONFIG_PPC_KUAP
>> -        MMU_FTR_RADIX_KUAP |
>> +    MMU_FTR_KUAP |
>>   #endif /* CONFIG_PPC_KUAP */
>> -#endif /* CONFIG_PPC_RADIX_MMU */
>>   #ifdef CONFIG_PPC_MEM_KEYS
>>       MMU_FTR_PKEY |
>>   #endif
>> diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
>> index 82c722fbce52..bfc27f1f0ab0 100644
>> --- a/arch/powerpc/mm/book3s64/pkeys.c
>> +++ b/arch/powerpc/mm/book3s64/pkeys.c
>> @@ -258,7 +258,7 @@ void __init setup_kuap(bool disabled)
>>       if (smp_processor_id() == boot_cpuid) {
>>           pr_info("Activating Kernel Userspace Access Prevention\n");
>> -        cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
>> +        cur_cpu_spec->mmu_features |= MMU_FTR_KUAP;
>>       }
>>       /*
>>


^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 10/22] powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel
  2020-11-25  5:16 ` [PATCH v6 10/22] powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel Aneesh Kumar K.V
@ 2020-11-25 13:52   ` Christophe Leroy
  2020-11-25 13:55     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 48+ messages in thread
From: Christophe Leroy @ 2020-11-25 13:52 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe; +Cc: Sandipan Das



Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
> This prepares the kernel to operate with different AMR/IAMR values than userspace.
> For this, AMR/IAMR need to be saved and restored on entry and return from the
> kernel.
> 
> With KUAP we modify the kernel AMR when accessing a user address from the
> kernel via the copy_to/from_user interfaces. We don't need to modify the
> IAMR value in a similar fashion.
> 
> If MMU_FTR_PKEY is enabled we need to save AMR/IAMR in pt_regs on entering
> kernel from userspace. If not we can assume that AMR/IAMR is not modified
> from userspace.
> 
> We need to save AMR if we have MMU_FTR_KUAP feature enabled and we are
> interrupted within kernel. This is required so that if we get interrupted
> within copy_to/from_user we continue with the right AMR value.
> 
> If we have MMU_FTR_KUEP enabled we need to restore IAMR on return to userspace
> because the kernel will be running with a different IAMR value.
> 
> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/book3s/64/kup.h | 222 +++++++++++++++++++----
>   arch/powerpc/include/asm/ptrace.h        |   5 +-
>   arch/powerpc/kernel/asm-offsets.c        |   2 +
>   arch/powerpc/kernel/entry_64.S           |   6 +-
>   arch/powerpc/kernel/exceptions-64s.S     |   4 +-
>   arch/powerpc/kernel/syscall_64.c         |  32 +++-
>   6 files changed, 225 insertions(+), 46 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
> index 1d38eab83d48..4dbb2d53fd8f 100644
> --- a/arch/powerpc/include/asm/book3s/64/kup.h
> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
> @@ -13,17 +13,46 @@
>   
>   #ifdef __ASSEMBLY__
>   
> -.macro kuap_restore_amr	gpr1, gpr2
> -#ifdef CONFIG_PPC_KUAP
> +.macro kuap_restore_user_amr gpr1
> +#if defined(CONFIG_PPC_PKEY)
>   	BEGIN_MMU_FTR_SECTION_NESTED(67)
> -	mfspr	\gpr1, SPRN_AMR
> +	/*
> +	 * AMR and IAMR are going to be different when
> +	 * returning to userspace.
> +	 */
> +	ld	\gpr1, STACK_REGS_AMR(r1)
> +	isync
> +	mtspr	SPRN_AMR, \gpr1
> +	/*
> +	 * Restore IAMR only when returning to userspace
> +	 */
> +	ld	\gpr1, STACK_REGS_IAMR(r1)
> +	mtspr	SPRN_IAMR, \gpr1
> +
> +	/* No isync required, see kuap_restore_user_amr() */
> +	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_PKEY, 67)
> +#endif
> +.endm
> +
> +.macro kuap_restore_kernel_amr	gpr1, gpr2
> +#if defined(CONFIG_PPC_PKEY)
> +
> +	BEGIN_MMU_FTR_SECTION_NESTED(67)
> +	/*
> +	 * AMR is going to be mostly the same since we are
> +	 * returning to the kernel. Compare and do a mtspr.
> +	 */
>   	ld	\gpr2, STACK_REGS_AMR(r1)
> +	mfspr	\gpr1, SPRN_AMR
>   	cmpd	\gpr1, \gpr2
> -	beq	998f
> +	beq	100f
>   	isync
>   	mtspr	SPRN_AMR, \gpr2
> -	/* No isync required, see kuap_restore_amr() */
> -998:
> +	/*
> +	 * No isync required, see kuap_restore_amr()
> +	 * No need to restore IAMR when returning to kernel space.
> +	 */
> +100:
>   	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>   #endif
>   .endm
> @@ -42,23 +71,98 @@
>   .endm
>   #endif
>   
> +/*
> + *	if (pkey) {
> + *
> + *		save AMR -> stack;
> + *		if (kuap) {
> + *			if (AMR != BLOCKED)
> + *				KUAP_BLOCKED -> AMR;
> + *		}
> + *		if (from_user) {
> + *			save IAMR -> stack;
> + *			if (kuep) {
> + *				KUEP_BLOCKED ->IAMR
> + *			}
> + *		}
> + *		return;
> + *	}
> + *
> + *	if (kuap) {
> + *		if (from_kernel) {
> + *			save AMR -> stack;
> + *			if (AMR != BLOCKED)
> + *				KUAP_BLOCKED -> AMR;
> + *		}
> + *
> + *	}
> + */
>   .macro kuap_save_amr_and_lock gpr1, gpr2, use_cr, msr_pr_cr
> -#ifdef CONFIG_PPC_KUAP
> +#if defined(CONFIG_PPC_PKEY)
> +
> +	/*
> +	 * if both pkey and kuap are disabled, nothing to do
> +	 */
> +	BEGIN_MMU_FTR_SECTION_NESTED(68)
> +	b	100f  // skip_save_amr
> +	END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY | MMU_FTR_KUAP, 68)
> +
> +	/*
> +	 * if pkey is disabled and we are entering from userspace
> +	 * don't do anything.
> +	 */
>   	BEGIN_MMU_FTR_SECTION_NESTED(67)
>   	.ifnb \msr_pr_cr
> -	bne	\msr_pr_cr, 99f
> +	/*
> +	 * Without pkey we are not changing AMR outside the kernel
> +	 * hence skip this completely.
> +	 */
> +	bne	\msr_pr_cr, 100f  // from userspace
>   	.endif
> +        END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY, 67)
> +
> +	/*
> +	 * pkey is enabled or pkey is disabled but entering from kernel
> +	 */
>   	mfspr	\gpr1, SPRN_AMR
>   	std	\gpr1, STACK_REGS_AMR(r1)
> -	li	\gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
> -	sldi	\gpr2, \gpr2, AMR_KUAP_SHIFT
> +
> +	/*
> +	 * update kernel AMR with AMR_KUAP_BLOCKED only
> +	 * if KUAP feature is enabled
> +	 */
> +	BEGIN_MMU_FTR_SECTION_NESTED(69)
> +	LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
>   	cmpd	\use_cr, \gpr1, \gpr2
> -	beq	\use_cr, 99f
> -	// We don't isync here because we very recently entered via rfid
> +	beq	\use_cr, 102f
> +	/*
> +	 * We don't isync here because we very recently entered via an interrupt
> +	 */
>   	mtspr	SPRN_AMR, \gpr2
>   	isync
> -99:
> -	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
> +102:
> +	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 69)
> +
> +	/*
> +	 * if entering from kernel we don't need to save IAMR
> +	 */
> +	.ifnb \msr_pr_cr
> +	beq	\msr_pr_cr, 100f // from kernel space
> +	mfspr	\gpr1, SPRN_IAMR
> +	std	\gpr1, STACK_REGS_IAMR(r1)
> +
> +	/*
> +	 * update kernel IAMR with AMR_KUEP_BLOCKED only
> +	 * if KUEP feature is enabled
> +	 */
> +	BEGIN_MMU_FTR_SECTION_NESTED(70)
> +	LOAD_REG_IMMEDIATE(\gpr2, AMR_KUEP_BLOCKED)
> +	mtspr	SPRN_IAMR, \gpr2
> +	isync
> +	END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUEP, 70)
> +	.endif
> +
> +100: // skip_save_amr
>   #endif
>   .endm
>   
> @@ -66,22 +170,42 @@
>   
>   DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
>   
> -#ifdef CONFIG_PPC_KUAP
> +#ifdef CONFIG_PPC_PKEY
>   
>   #include <asm/mmu.h>
>   #include <asm/ptrace.h>
>   
> -static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
> +static inline void kuap_restore_user_amr(struct pt_regs *regs)

While we are at changing the function's names, could we remove the _amr from the names in order to 
make it more generic and allow to re-use that name when we migrate PPC32 to C interrupt/syscall 
entries/exits ? (see 
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/302a0e88e15ce4569d9619631b36248041d5ed27.1586196948.git.christophe.leroy@c-s.fr/)

> +{
> +	if (!mmu_has_feature(MMU_FTR_PKEY))
> +		return;
> +
> +	isync();
> +	mtspr(SPRN_AMR, regs->amr);
> +	mtspr(SPRN_IAMR, regs->iamr);
> +	/*
> +	 * No isync required here because we are about to rfi
> +	 * back to previous context before any user accesses
> +	 * would be made, which is a CSI.
> +	 */
> +}
> +static inline void kuap_restore_kernel_amr(struct pt_regs *regs,
> +					   unsigned long amr)
>   {
> -	if (mmu_has_feature(MMU_FTR_KUAP) && unlikely(regs->kuap != amr)) {
> -		isync();
> -		mtspr(SPRN_AMR, regs->kuap);
> -		/*
> -		 * No isync required here because we are about to RFI back to
> -		 * previous context before any user accesses would be made,
> -		 * which is a CSI.
> -		 */
> +	if (mmu_has_feature(MMU_FTR_KUAP)) {
> +		if (unlikely(regs->amr != amr)) {
> +			isync();
> +			mtspr(SPRN_AMR, regs->amr);
> +			/*
> +			 * No isync required here because we are about to rfi
> +			 * back to previous context before any user accesses
> +			 * would be made, which is a CSI.
> +			 */
> +		}
>   	}
> +	/*
> +	 * No need to restore IAMR when returning to kernel space.
> +	 */
>   }
>   
>   static inline unsigned long kuap_get_and_check_amr(void)
> @@ -95,6 +219,26 @@ static inline unsigned long kuap_get_and_check_amr(void)
>   	return 0;
>   }
>   
> +#else /* CONFIG_PPC_PKEY */
> +
> +static inline void kuap_restore_user_amr(struct pt_regs *regs)
> +{
> +}
> +
> +static inline void kuap_restore_kernel_amr(struct pt_regs *regs, unsigned long amr)
> +{
> +}
> +
> +static inline unsigned long kuap_get_and_check_amr(void)
> +{
> +	return 0;
> +}
> +
> +#endif /* CONFIG_PPC_PKEY */
> +
> +
> +#ifdef CONFIG_PPC_KUAP
> +
>   static inline void kuap_check_amr(void)
>   {
>   	if (IS_ENABLED(CONFIG_PPC_KUAP_DEBUG) && mmu_has_feature(MMU_FTR_KUAP))
> @@ -143,21 +287,6 @@ bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
>   		    (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
>   		    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
>   }
> -#else /* CONFIG_PPC_KUAP */
> -static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr) { }
> -
> -static inline unsigned long kuap_get_and_check_amr(void)
> -{
> -	return 0UL;
> -}
> -
> -static inline unsigned long get_kuap(void)
> -{
> -	return AMR_KUAP_BLOCKED;
> -}
> -
> -static inline void set_kuap(unsigned long value) { }
> -#endif /* !CONFIG_PPC_KUAP */
>   
>   static __always_inline void allow_user_access(void __user *to, const void __user *from,
>   					      unsigned long size, unsigned long dir)
> @@ -174,6 +303,21 @@ static __always_inline void allow_user_access(void __user *to, const void __user
>   		BUILD_BUG();
>   }
>   
> +#else /* CONFIG_PPC_KUAP */
> +
> +static inline unsigned long get_kuap(void)
> +{
> +	return AMR_KUAP_BLOCKED;
> +}
> +
> +static inline void set_kuap(unsigned long value) { }
> +
> +static __always_inline void allow_user_access(void __user *to, const void __user *from,
> +					      unsigned long size, unsigned long dir)
> +{ }
> +
> +#endif /* !CONFIG_PPC_KUAP */
> +
>   static inline void prevent_user_access(void __user *to, const void __user *from,
>   				       unsigned long size, unsigned long dir)
>   {
> diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
> index e7f1caa007a4..f240f0cdcec2 100644
> --- a/arch/powerpc/include/asm/ptrace.h
> +++ b/arch/powerpc/include/asm/ptrace.h
> @@ -61,8 +61,11 @@ struct pt_regs
>   				unsigned long amr;
>   #endif
>   			};
> +#ifdef CONFIG_PPC_PKEY
> +			unsigned long iamr;
> +#endif
>   		};
> -		unsigned long __pad[2];	/* Maintain 16 byte interrupt stack alignment */
> +		unsigned long __pad[4];	/* Maintain 16 byte interrupt stack alignment */
>   	};
>   };
>   #endif
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index 418a0b314a33..76545cdc7f8a 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -356,11 +356,13 @@ int main(void)
>   
>   #ifdef CONFIG_PPC_PKEY
>   	STACK_PT_REGS_OFFSET(STACK_REGS_AMR, amr);
> +	STACK_PT_REGS_OFFSET(STACK_REGS_IAMR, iamr);
>   #endif
>   #ifdef CONFIG_PPC_KUAP
>   	STACK_PT_REGS_OFFSET(STACK_REGS_KUAP, kuap);
>   #endif
>   
> +
>   #if defined(CONFIG_PPC32)
>   #if defined(CONFIG_BOOKE) || defined(CONFIG_40x)
>   	DEFINE(EXC_LVL_SIZE, STACK_EXC_LVL_FRAME_SIZE);
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 2f3846192ec7..e49291594c68 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -653,8 +653,8 @@ _ASM_NOKPROBE_SYMBOL(fast_interrupt_return)
>   	kuap_check_amr r3, r4
>   	ld	r5,_MSR(r1)
>   	andi.	r0,r5,MSR_PR
> -	bne	.Lfast_user_interrupt_return
> -	kuap_restore_amr r3, r4
> +	bne	.Lfast_user_interrupt_return_amr
> +	kuap_restore_kernel_amr r3, r4
>   	andi.	r0,r5,MSR_RI
>   	li	r3,0 /* 0 return value, no EMULATE_STACK_STORE */
>   	bne+	.Lfast_kernel_interrupt_return
> @@ -674,6 +674,8 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return)
>   	cmpdi	r3,0
>   	bne-	.Lrestore_nvgprs
>   
> +.Lfast_user_interrupt_return_amr:
> +	kuap_restore_user_amr r3
>   .Lfast_user_interrupt_return:
>   	ld	r11,_NIP(r1)
>   	ld	r12,_MSR(r1)
> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
> index 4d01f09ecf80..84cc23529cdb 100644
> --- a/arch/powerpc/kernel/exceptions-64s.S
> +++ b/arch/powerpc/kernel/exceptions-64s.S
> @@ -1059,7 +1059,7 @@ EXC_COMMON_BEGIN(system_reset_common)
>   	ld	r10,SOFTE(r1)
>   	stb	r10,PACAIRQSOFTMASK(r13)
>   
> -	kuap_restore_amr r9, r10
> +	kuap_restore_kernel_amr r9, r10
>   	EXCEPTION_RESTORE_REGS
>   	RFI_TO_USER_OR_KERNEL
>   
> @@ -2875,7 +2875,7 @@ EXC_COMMON_BEGIN(soft_nmi_common)
>   	ld	r10,SOFTE(r1)
>   	stb	r10,PACAIRQSOFTMASK(r13)
>   
> -	kuap_restore_amr r9, r10
> +	kuap_restore_kernel_amr r9, r10
>   	EXCEPTION_RESTORE_REGS hsrr=0
>   	RFI_TO_KERNEL
>   
> diff --git a/arch/powerpc/kernel/syscall_64.c b/arch/powerpc/kernel/syscall_64.c
> index 310bcd768cd5..60c57609d316 100644
> --- a/arch/powerpc/kernel/syscall_64.c
> +++ b/arch/powerpc/kernel/syscall_64.c
> @@ -35,7 +35,25 @@ notrace long system_call_exception(long r3, long r4, long r5,
>   	BUG_ON(!FULL_REGS(regs));
>   	BUG_ON(regs->softe != IRQS_ENABLED);
>   
> -	kuap_check_amr();
> +#ifdef CONFIG_PPC_PKEY
> +	if (mmu_has_feature(MMU_FTR_PKEY)) {
> +		unsigned long amr, iamr;
> +		/*
> +		 * When entering from userspace we mostly have the AMR/IAMR
> +		 * different from kernel default values. Hence don't compare.
> +		 */
> +		amr = mfspr(SPRN_AMR);
> +		iamr = mfspr(SPRN_IAMR);
> +		regs->amr  = amr;
> +		regs->iamr = iamr;
> +		if (mmu_has_feature(MMU_FTR_KUAP))
> +			mtspr(SPRN_AMR, AMR_KUAP_BLOCKED);
> +		if (mmu_has_feature(MMU_FTR_KUEP))
> +			mtspr(SPRN_IAMR, AMR_KUEP_BLOCKED);
> +		isync();
> +	} else
> +#endif
> +		kuap_check_amr();
>   
>   	account_cpu_user_entry();
>   
> @@ -245,6 +263,12 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>   
>   	account_cpu_user_exit();
>   
> +#ifdef CONFIG_PPC_BOOK3S /* BOOK3E not yet using this */
> +	/*
> +	 * We do this at the end so that we do context switch with KERNEL AMR
> +	 */
> +	kuap_restore_user_amr(regs);
> +#endif
>   	return ret;
>   }
>   
> @@ -330,6 +354,10 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>   
>   	account_cpu_user_exit();
>   
> +	/*
> +	 * We do this at the end so that we do context switch with KERNEL AMR
> +	 */
> +	kuap_restore_user_amr(regs);
>   	return ret;
>   }
>   
> @@ -400,7 +428,7 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
>   	 * which would cause Read-After-Write stalls. Hence, we take the AMR
>   	 * value from the check above.
>   	 */
> -	kuap_restore_amr(regs, amr);
> +	kuap_restore_kernel_amr(regs, amr);
>   
>   	return ret;
>   }
> 


* Re: [PATCH v6 11/22] powerpc/book3s64/pkeys: Inherit correctly on fork.
  2020-11-25  5:16 ` [PATCH v6 11/22] powerpc/book3s64/pkeys: Inherit correctly on fork Aneesh Kumar K.V
@ 2020-11-25 13:54   ` Christophe Leroy
  2020-11-25 13:56     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 48+ messages in thread
From: Christophe Leroy @ 2020-11-25 13:54 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe; +Cc: Sandipan Das



Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
> The child's thread.kuap value is inherited from the parent in copy_thread_tls.
> We still need to make sure that when the child returns from fork in the kernel,
> it starts with the kernel default AMR value.
> 
> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>   arch/powerpc/kernel/process.c | 11 +++++++++++
>   1 file changed, 11 insertions(+)
> 
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index b6b8a845e454..733680de0ba4 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -1768,6 +1768,17 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
>   		childregs->ppr = DEFAULT_PPR;
>   
>   	p->thread.tidr = 0;
> +#endif
> +	/*
> +	 * Run with the current AMR value of the kernel
> +	 */
> +#ifdef CONFIG_PPC_KUAP
> +	if (mmu_has_feature(MMU_FTR_KUAP))
> +		kregs->kuap = AMR_KUAP_BLOCKED;
> +#endif

Do we need that ifdef at all ?

Shouldn't mmu_has_feature(MMU_FTR_KUAP) be always false and get optimised out when CONFIG_PPC_KUAP 
is not defined ?

> +#ifdef CONFIG_PPC_KUEP
> +	if (mmu_has_feature(MMU_FTR_KUEP))
> +		kregs->iamr = AMR_KUEP_BLOCKED;

Same ?

>   #endif
>   	kregs->nip = ppc_function_entry(f);
>   	return 0;
> 


* Re: [PATCH v6 10/22] powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel
  2020-11-25 13:52   ` Christophe Leroy
@ 2020-11-25 13:55     ` Aneesh Kumar K.V
  2020-11-25 14:16       ` Christophe Leroy
  0 siblings, 1 reply; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25 13:55 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev, mpe; +Cc: Sandipan Das

On 11/25/20 7:22 PM, Christophe Leroy wrote:
> 
> 
> Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
>> This prepares the kernel to operate with different AMR/IAMR values than userspace.
>> For this, AMR/IAMR need to be saved and restored on entry and return 
>> from the
>> kernel.
>>
>> With KUAP we modify kernel AMR when accessing user address from the 
>> kernel
>> via copy_to/from_user interfaces. We don't need to modify IAMR value in
>> similar fashion.
>>
>> If MMU_FTR_PKEY is enabled we need to save AMR/IAMR in pt_regs on 
>> entering
>> kernel from userspace. If not we can assume that AMR/IAMR is not modified
>> from userspace.
>>
>> We need to save AMR if we have MMU_FTR_KUAP feature enabled and we are
>> interrupted within kernel. This is required so that if we get interrupted
>> within copy_to/from_user we continue with the right AMR value.
>>
>> If we have MMU_FTR_KUEP enabled we need to restore IAMR on return to
>> userspace because the kernel will be running with a different IAMR value.
>>
>> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>> ---
>>   arch/powerpc/include/asm/book3s/64/kup.h | 222 +++++++++++++++++++----
>>   arch/powerpc/include/asm/ptrace.h        |   5 +-
>>   arch/powerpc/kernel/asm-offsets.c        |   2 +
>>   arch/powerpc/kernel/entry_64.S           |   6 +-
>>   arch/powerpc/kernel/exceptions-64s.S     |   4 +-
>>   arch/powerpc/kernel/syscall_64.c         |  32 +++-
>>   6 files changed, 225 insertions(+), 46 deletions(-)
>>
>> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
>> index 1d38eab83d48..4dbb2d53fd8f 100644
>> --- a/arch/powerpc/include/asm/book3s/64/kup.h
>> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
>> @@ -13,17 +13,46 @@
>>   #ifdef __ASSEMBLY__
>> -.macro kuap_restore_amr    gpr1, gpr2
>> -#ifdef CONFIG_PPC_KUAP
>> +.macro kuap_restore_user_amr gpr1
>> +#if defined(CONFIG_PPC_PKEY)
>>       BEGIN_MMU_FTR_SECTION_NESTED(67)
>> -    mfspr    \gpr1, SPRN_AMR
>> +    /*
>> +     * AMR and IAMR are going to be different when
>> +     * returning to userspace.
>> +     */
>> +    ld    \gpr1, STACK_REGS_AMR(r1)
>> +    isync
>> +    mtspr    SPRN_AMR, \gpr1
>> +    /*
>> +     * Restore IAMR only when returning to userspace
>> +     */
>> +    ld    \gpr1, STACK_REGS_IAMR(r1)
>> +    mtspr    SPRN_IAMR, \gpr1
>> +
>> +    /* No isync required, see kuap_restore_user_amr() */
>> +    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_PKEY, 67)
>> +#endif
>> +.endm
>> +
>> +.macro kuap_restore_kernel_amr    gpr1, gpr2
>> +#if defined(CONFIG_PPC_PKEY)
>> +
>> +    BEGIN_MMU_FTR_SECTION_NESTED(67)
>> +    /*
>> +     * AMR is going to be mostly the same since we are
>> +     * returning to the kernel. Compare and do a mtspr.
>> +     */
>>       ld    \gpr2, STACK_REGS_AMR(r1)
>> +    mfspr    \gpr1, SPRN_AMR
>>       cmpd    \gpr1, \gpr2
>> -    beq    998f
>> +    beq    100f
>>       isync
>>       mtspr    SPRN_AMR, \gpr2
>> -    /* No isync required, see kuap_restore_amr() */
>> -998:
>> +    /*
>> +     * No isync required, see kuap_restore_amr()
>> +     * No need to restore IAMR when returning to kernel space.
>> +     */
>> +100:
>>       END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>>   #endif
>>   .endm
>> @@ -42,23 +71,98 @@
>>   .endm
>>   #endif
>> +/*
>> + *    if (pkey) {
>> + *
>> + *        save AMR -> stack;
>> + *        if (kuap) {
>> + *            if (AMR != BLOCKED)
>> + *                KUAP_BLOCKED -> AMR;
>> + *        }
>> + *        if (from_user) {
>> + *            save IAMR -> stack;
>> + *            if (kuep) {
>> + *                KUEP_BLOCKED ->IAMR
>> + *            }
>> + *        }
>> + *        return;
>> + *    }
>> + *
>> + *    if (kuap) {
>> + *        if (from_kernel) {
>> + *            save AMR -> stack;
>> + *            if (AMR != BLOCKED)
>> + *                KUAP_BLOCKED -> AMR;
>> + *        }
>> + *
>> + *    }
>> + */
>>   .macro kuap_save_amr_and_lock gpr1, gpr2, use_cr, msr_pr_cr
>> -#ifdef CONFIG_PPC_KUAP
>> +#if defined(CONFIG_PPC_PKEY)
>> +
>> +    /*
>> +     * if both pkey and kuap are disabled, nothing to do
>> +     */
>> +    BEGIN_MMU_FTR_SECTION_NESTED(68)
>> +    b    100f  // skip_save_amr
>> +    END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY | MMU_FTR_KUAP, 68)
>> +
>> +    /*
>> +     * if pkey is disabled and we are entering from userspace
>> +     * don't do anything.
>> +     */
>>       BEGIN_MMU_FTR_SECTION_NESTED(67)
>>       .ifnb \msr_pr_cr
>> -    bne    \msr_pr_cr, 99f
>> +    /*
>> +     * Without pkey we are not changing AMR outside the kernel
>> +     * hence skip this completely.
>> +     */
>> +    bne    \msr_pr_cr, 100f  // from userspace
>>       .endif
>> +        END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY, 67)
>> +
>> +    /*
>> +     * pkey is enabled or pkey is disabled but entering from kernel
>> +     */
>>       mfspr    \gpr1, SPRN_AMR
>>       std    \gpr1, STACK_REGS_AMR(r1)
>> -    li    \gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
>> -    sldi    \gpr2, \gpr2, AMR_KUAP_SHIFT
>> +
>> +    /*
>> +     * update kernel AMR with AMR_KUAP_BLOCKED only
>> +     * if KUAP feature is enabled
>> +     */
>> +    BEGIN_MMU_FTR_SECTION_NESTED(69)
>> +    LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
>>       cmpd    \use_cr, \gpr1, \gpr2
>> -    beq    \use_cr, 99f
>> -    // We don't isync here because we very recently entered via rfid
>> +    beq    \use_cr, 102f
>> +    /*
>> +     * We don't isync here because we very recently entered via an interrupt
>> +     */
>>       mtspr    SPRN_AMR, \gpr2
>>       isync
>> -99:
>> -    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>> +102:
>> +    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 69)
>> +
>> +    /*
>> +     * if entering from kernel we don't need to save IAMR
>> +     */
>> +    .ifnb \msr_pr_cr
>> +    beq    \msr_pr_cr, 100f // from kernel space
>> +    mfspr    \gpr1, SPRN_IAMR
>> +    std    \gpr1, STACK_REGS_IAMR(r1)
>> +
>> +    /*
>> +     * update kernel IAMR with AMR_KUEP_BLOCKED only
>> +     * if KUEP feature is enabled
>> +     */
>> +    BEGIN_MMU_FTR_SECTION_NESTED(70)
>> +    LOAD_REG_IMMEDIATE(\gpr2, AMR_KUEP_BLOCKED)
>> +    mtspr    SPRN_IAMR, \gpr2
>> +    isync
>> +    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUEP, 70)
>> +    .endif
>> +
>> +100: // skip_save_amr
>>   #endif
>>   .endm
>> @@ -66,22 +170,42 @@
>>   DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
>> -#ifdef CONFIG_PPC_KUAP
>> +#ifdef CONFIG_PPC_PKEY
>>   #include <asm/mmu.h>
>>   #include <asm/ptrace.h>
>> -static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
>> +static inline void kuap_restore_user_amr(struct pt_regs *regs)
> 
> While we are at changing the function's names, could we remove the _amr 
> from the names in order to make it more generic and allow to re-use that 
> name when we migrate PPC32 to C interrupt/syscall entries/exits ? (see 
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/302a0e88e15ce4569d9619631b36248041d5ed27.1586196948.git.christophe.leroy@c-s.fr/) 
> 

How do you suggest we rename it? kuap_restore_user is a bit ambiguous right?

-aneesh


* Re: [PATCH v6 11/22] powerpc/book3s64/pkeys: Inherit correctly on fork.
  2020-11-25 13:54   ` Christophe Leroy
@ 2020-11-25 13:56     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25 13:56 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev, mpe; +Cc: Sandipan Das

On 11/25/20 7:24 PM, Christophe Leroy wrote:
> 
> 
> Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
>> The child's thread.kuap value is inherited from the parent in copy_thread_tls.
>> We still need to make sure that when the child returns from fork in the kernel,
>> it starts with the kernel default AMR value.
>>
>> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>> ---
>>   arch/powerpc/kernel/process.c | 11 +++++++++++
>>   1 file changed, 11 insertions(+)
>>
>> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
>> index b6b8a845e454..733680de0ba4 100644
>> --- a/arch/powerpc/kernel/process.c
>> +++ b/arch/powerpc/kernel/process.c
>> @@ -1768,6 +1768,17 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
>>           childregs->ppr = DEFAULT_PPR;
>>       p->thread.tidr = 0;
>> +#endif
>> +    /*
>> +     * Run with the current AMR value of the kernel
>> +     */
>> +#ifdef CONFIG_PPC_KUAP
>> +    if (mmu_has_feature(MMU_FTR_KUAP))
>> +        kregs->kuap = AMR_KUAP_BLOCKED;
>> +#endif
> 
> Do we need that ifdef at all ?
> 
> Shouldn't mmu_has_feature(MMU_FTR_KUAP) be always false and get 
> optimised out when CONFIG_PPC_KUAP is not defined ?
> 
>> +#ifdef CONFIG_PPC_KUEP
>> +    if (mmu_has_feature(MMU_FTR_KUEP))
>> +        kregs->iamr = AMR_KUEP_BLOCKED;
> 
> Same ?
> 
>>   #endif
>>       kregs->nip = ppc_function_entry(f);
>>       return 0;
>>

Not really. I did hit a compile error with this patch on 
mpc885_ads_defconfig and that required me to do

modified   arch/powerpc/kernel/process.c
@@ -1772,11 +1772,10 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
  	/*
  	 * Run with the current AMR value of the kernel
  	 */
-#ifdef CONFIG_PPC_KUAP
+#ifdef CONFIG_PPC_PKEY
  	if (mmu_has_feature(MMU_FTR_KUAP))
-		kregs->kuap = AMR_KUAP_BLOCKED;
-#endif
-#ifdef CONFIG_PPC_KUEP
+		kregs->amr = AMR_KUAP_BLOCKED;
+
  	if (mmu_has_feature(MMU_FTR_KUEP))
  		kregs->iamr = AMR_KUEP_BLOCKED;
  #endif





* Re: [PATCH v6 16/22] powerpc/book3s64/kuap: Improve error reporting with KUAP
  2020-11-25  5:16 ` [PATCH v6 16/22] powerpc/book3s64/kuap: Improve error reporting with KUAP Aneesh Kumar K.V
@ 2020-11-25 14:04   ` Christophe Leroy
  2020-11-26  7:44     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 48+ messages in thread
From: Christophe Leroy @ 2020-11-25 14:04 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe



Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
> With hash translation we use DSISR_KEYFAULT to identify a bad access.
> With radix we look at the AMR value and the type of fault.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>   arch/powerpc/include/asm/book3s/32/kup.h     |  4 +--
>   arch/powerpc/include/asm/book3s/64/kup.h     | 27 ++++++++++++++++----
>   arch/powerpc/include/asm/kup.h               |  4 +--
>   arch/powerpc/include/asm/nohash/32/kup-8xx.h |  4 +--
>   arch/powerpc/mm/fault.c                      |  2 +-
>   5 files changed, 29 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
> index 32fd4452e960..b18cd931e325 100644
> --- a/arch/powerpc/include/asm/book3s/32/kup.h
> +++ b/arch/powerpc/include/asm/book3s/32/kup.h
> @@ -177,8 +177,8 @@ static inline void restore_user_access(unsigned long flags)
>   		allow_user_access(to, to, end - addr, KUAP_READ_WRITE);
>   }
>   
> -static inline bool
> -bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
> +static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
> +				  bool is_write, unsigned long error_code)
>   {
>   	unsigned long begin = regs->kuap & 0xf0000000;
>   	unsigned long end = regs->kuap << 28;
> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
> index 4a3d0d601745..2922c442a218 100644
> --- a/arch/powerpc/include/asm/book3s/64/kup.h
> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
> @@ -301,12 +301,29 @@ static inline void set_kuap(unsigned long value)
>   	isync();
>   }
>   
> -static inline bool
> -bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
> +#define RADIX_KUAP_BLOCK_READ	UL(0x4000000000000000)
> +#define RADIX_KUAP_BLOCK_WRITE	UL(0x8000000000000000)
> +
> +static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
> +				  bool is_write, unsigned long error_code)
>   {
> -	return WARN(mmu_has_feature(MMU_FTR_KUAP) &&
> -		    (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
> -		    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
> +	if (!mmu_has_feature(MMU_FTR_KUAP))
> +		return false;
> +
> +	if (radix_enabled()) {
> +		/*
> +		 * Will be a storage protection fault.
> +		 * Only check the details of AMR[0]
> +		 */
> +		return WARN((regs->kuap & (is_write ? RADIX_KUAP_BLOCK_WRITE : RADIX_KUAP_BLOCK_READ)),
> +			    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");

I think it is pointless to keep the WARN() here.

I have a series aiming at removing them. See 
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/cc9129bdda1dbc2f0a09cf45fece7d0b0e690784.1605541983.git.christophe.leroy@csgroup.eu/

> +	}
> +	/*
> +	 * We don't want to WARN here because userspace can set up
> +	 * keys such that a kernel access to a user address can cause
> +	 * a fault
> +	 */
> +	return !!(error_code & DSISR_KEYFAULT);
>   }
>   
>   static __always_inline void allow_user_access(void __user *to, const void __user *from,
> diff --git a/arch/powerpc/include/asm/kup.h b/arch/powerpc/include/asm/kup.h
> index a06e50b68d40..952be0414f43 100644
> --- a/arch/powerpc/include/asm/kup.h
> +++ b/arch/powerpc/include/asm/kup.h
> @@ -59,8 +59,8 @@ void setup_kuap(bool disabled);
>   #else
>   static inline void setup_kuap(bool disabled) { }
>   
> -static inline bool
> -bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
> +static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
> +				  bool is_write, unsigned long error_code)
>   {
>   	return false;
>   }
> diff --git a/arch/powerpc/include/asm/nohash/32/kup-8xx.h b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
> index 567cdc557402..7bdd9e5b63ed 100644
> --- a/arch/powerpc/include/asm/nohash/32/kup-8xx.h
> +++ b/arch/powerpc/include/asm/nohash/32/kup-8xx.h
> @@ -60,8 +60,8 @@ static inline void restore_user_access(unsigned long flags)
>   	mtspr(SPRN_MD_AP, flags);
>   }
>   
> -static inline bool
> -bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
> +static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
> +				  bool is_write, unsigned long error_code)
>   {
>   	return WARN(!((regs->kuap ^ MD_APG_KUAP) & 0xff000000),
>   		    "Bug: fault blocked by AP register !");
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 0add963a849b..c91621df0c61 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -227,7 +227,7 @@ static bool bad_kernel_fault(struct pt_regs *regs, unsigned long error_code,
>   
>   	// Read/write fault in a valid region (the exception table search passed
>   	// above), but blocked by KUAP is bad, it can never succeed.
> -	if (bad_kuap_fault(regs, address, is_write))
> +	if (bad_kuap_fault(regs, address, is_write, error_code))
>   		return true;
>   
>   	// What's left? Kernel fault on user in well defined regions (extable
> 

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 10/22] powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel
  2020-11-25 13:55     ` Aneesh Kumar K.V
@ 2020-11-25 14:16       ` Christophe Leroy
  0 siblings, 0 replies; 48+ messages in thread
From: Christophe Leroy @ 2020-11-25 14:16 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe; +Cc: Sandipan Das



On 25/11/2020 at 14:55, Aneesh Kumar K.V wrote:
> On 11/25/20 7:22 PM, Christophe Leroy wrote:
>>
>>
>> On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
>>> This prepares the kernel to operate with AMR/IAMR values different from userspace's.
>>> For this, AMR/IAMR need to be saved and restored on entry and return from the
>>> kernel.
>>>
>>> With KUAP we modify kernel AMR when accessing user address from the kernel
>>> via copy_to/from_user interfaces. We don't need to modify IAMR value in
>>> similar fashion.
>>>
>>> If MMU_FTR_PKEY is enabled we need to save AMR/IAMR in pt_regs on entering
>>> kernel from userspace. If not we can assume that AMR/IAMR is not modified
>>> from userspace.
>>>
>>> We need to save AMR if we have MMU_FTR_KUAP feature enabled and we are
>>> interrupted within kernel. This is required so that if we get interrupted
>>> within copy_to/from_user we continue with the right AMR value.
>>>
>>> If we have MMU_FTR_KUEP enabled we need to restore IAMR on return to userspace
>>> because the kernel will be running with a different IAMR value.
>>>
>>> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>>> ---
>>>   arch/powerpc/include/asm/book3s/64/kup.h | 222 +++++++++++++++++++----
>>>   arch/powerpc/include/asm/ptrace.h        |   5 +-
>>>   arch/powerpc/kernel/asm-offsets.c        |   2 +
>>>   arch/powerpc/kernel/entry_64.S           |   6 +-
>>>   arch/powerpc/kernel/exceptions-64s.S     |   4 +-
>>>   arch/powerpc/kernel/syscall_64.c         |  32 +++-
>>>   6 files changed, 225 insertions(+), 46 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
>>> index 1d38eab83d48..4dbb2d53fd8f 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/kup.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
>>> @@ -13,17 +13,46 @@
>>>   #ifdef __ASSEMBLY__
>>> -.macro kuap_restore_amr    gpr1, gpr2
>>> -#ifdef CONFIG_PPC_KUAP
>>> +.macro kuap_restore_user_amr gpr1
>>> +#if defined(CONFIG_PPC_PKEY)
>>>       BEGIN_MMU_FTR_SECTION_NESTED(67)
>>> -    mfspr    \gpr1, SPRN_AMR
>>> +    /*
>>> +     * AMR and IAMR are going to be different when
>>> +     * returning to userspace.
>>> +     */
>>> +    ld    \gpr1, STACK_REGS_AMR(r1)
>>> +    isync
>>> +    mtspr    SPRN_AMR, \gpr1
>>> +    /*
>>> +     * Restore IAMR only when returning to userspace
>>> +     */
>>> +    ld    \gpr1, STACK_REGS_IAMR(r1)
>>> +    mtspr    SPRN_IAMR, \gpr1
>>> +
>>> +    /* No isync required, see kuap_restore_user_amr() */
>>> +    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_PKEY, 67)
>>> +#endif
>>> +.endm
>>> +
>>> +.macro kuap_restore_kernel_amr    gpr1, gpr2
>>> +#if defined(CONFIG_PPC_PKEY)
>>> +
>>> +    BEGIN_MMU_FTR_SECTION_NESTED(67)
>>> +    /*
>>> +     * AMR is going to be mostly the same since we are
>>> +     * returning to the kernel. Compare and do a mtspr.
>>> +     */
>>>       ld    \gpr2, STACK_REGS_AMR(r1)
>>> +    mfspr    \gpr1, SPRN_AMR
>>>       cmpd    \gpr1, \gpr2
>>> -    beq    998f
>>> +    beq    100f
>>>       isync
>>>       mtspr    SPRN_AMR, \gpr2
>>> -    /* No isync required, see kuap_restore_amr() */
>>> -998:
>>> +    /*
>>> +     * No isync required, see kuap_restore_amr()
>>> +     * No need to restore IAMR when returning to kernel space.
>>> +     */
>>> +100:
>>>       END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>>>   #endif
>>>   .endm
>>> @@ -42,23 +71,98 @@
>>>   .endm
>>>   #endif
>>> +/*
>>> + *    if (pkey) {
>>> + *
>>> + *        save AMR -> stack;
>>> + *        if (kuap) {
>>> + *            if (AMR != BLOCKED)
>>> + *                KUAP_BLOCKED -> AMR;
>>> + *        }
>>> + *        if (from_user) {
>>> + *            save IAMR -> stack;
>>> + *            if (kuep) {
>>> + *                KUEP_BLOCKED ->IAMR
>>> + *            }
>>> + *        }
>>> + *        return;
>>> + *    }
>>> + *
>>> + *    if (kuap) {
>>> + *        if (from_kernel) {
>>> + *            save AMR -> stack;
>>> + *            if (AMR != BLOCKED)
>>> + *                KUAP_BLOCKED -> AMR;
>>> + *        }
>>> + *
>>> + *    }
>>> + */
>>>   .macro kuap_save_amr_and_lock gpr1, gpr2, use_cr, msr_pr_cr
>>> -#ifdef CONFIG_PPC_KUAP
>>> +#if defined(CONFIG_PPC_PKEY)
>>> +
>>> +    /*
>>> +     * if both pkey and kuap is disabled, nothing to do
>>> +     */
>>> +    BEGIN_MMU_FTR_SECTION_NESTED(68)
>>> +    b    100f  // skip_save_amr
>>> +    END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY | MMU_FTR_KUAP, 68)
>>> +
>>> +    /*
>>> +     * if pkey is disabled and we are entering from userspace
>>> +     * don't do anything.
>>> +     */
>>>       BEGIN_MMU_FTR_SECTION_NESTED(67)
>>>       .ifnb \msr_pr_cr
>>> -    bne    \msr_pr_cr, 99f
>>> +    /*
>>> +     * Without pkey we are not changing AMR outside the kernel
>>> +     * hence skip this completely.
>>> +     */
>>> +    bne    \msr_pr_cr, 100f  // from userspace
>>>       .endif
>>> +        END_MMU_FTR_SECTION_NESTED_IFCLR(MMU_FTR_PKEY, 67)
>>> +
>>> +    /*
>>> +     * pkey is enabled or pkey is disabled but entering from kernel
>>> +     */
>>>       mfspr    \gpr1, SPRN_AMR
>>>       std    \gpr1, STACK_REGS_AMR(r1)
>>> -    li    \gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
>>> -    sldi    \gpr2, \gpr2, AMR_KUAP_SHIFT
>>> +
>>> +    /*
>>> +     * update kernel AMR with AMR_KUAP_BLOCKED only
>>> +     * if KUAP feature is enabled
>>> +     */
>>> +    BEGIN_MMU_FTR_SECTION_NESTED(69)
>>> +    LOAD_REG_IMMEDIATE(\gpr2, AMR_KUAP_BLOCKED)
>>>       cmpd    \use_cr, \gpr1, \gpr2
>>> -    beq    \use_cr, 99f
>>> -    // We don't isync here because we very recently entered via rfid
>>> +    beq    \use_cr, 102f
>>> +    /*
>>> +     * We don't isync here because we very recently entered via an interrupt
>>> +     */
>>>       mtspr    SPRN_AMR, \gpr2
>>>       isync
>>> -99:
>>> -    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 67)
>>> +102:
>>> +    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUAP, 69)
>>> +
>>> +    /*
>>> +     * if entering from kernel we don't need to save IAMR
>>> +     */
>>> +    .ifnb \msr_pr_cr
>>> +    beq    \msr_pr_cr, 100f // from kernel space
>>> +    mfspr    \gpr1, SPRN_IAMR
>>> +    std    \gpr1, STACK_REGS_IAMR(r1)
>>> +
>>> +    /*
>>> +     * update kernel IAMR with AMR_KUEP_BLOCKED only
>>> +     * if KUEP feature is enabled
>>> +     */
>>> +    BEGIN_MMU_FTR_SECTION_NESTED(70)
>>> +    LOAD_REG_IMMEDIATE(\gpr2, AMR_KUEP_BLOCKED)
>>> +    mtspr    SPRN_IAMR, \gpr2
>>> +    isync
>>> +    END_MMU_FTR_SECTION_NESTED_IFSET(MMU_FTR_KUEP, 70)
>>> +    .endif
>>> +
>>> +100: // skip_save_amr
>>>   #endif
>>>   .endm
>>> @@ -66,22 +170,42 @@
>>>   DECLARE_STATIC_KEY_FALSE(uaccess_flush_key);
>>> -#ifdef CONFIG_PPC_KUAP
>>> +#ifdef CONFIG_PPC_PKEY
>>>   #include <asm/mmu.h>
>>>   #include <asm/ptrace.h>
>>> -static inline void kuap_restore_amr(struct pt_regs *regs, unsigned long amr)
>>> +static inline void kuap_restore_user_amr(struct pt_regs *regs)
>>
>> While we are at changing the function's names, could we remove the _amr from the names in order to 
>> make it more generic and allow to re-use that name when we migrate PPC32 to C interrupt/syscall 
>> entries/exits ? (see 
>> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/302a0e88e15ce4569d9619631b36248041d5ed27.1586196948.git.christophe.leroy@c-s.fr/) 
>>
> 
> How do you suggest we rename it? kuap_restore_user is a bit ambiguous, right?
> 

I have never been really good at finding names.

What about kuap_user_restore() and kuap_kernel_restore() ?


* Re: [PATCH v6 03/22] powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of PPC_MEM_KEYS
  2020-11-25 13:30   ` Christophe Leroy
@ 2020-11-25 14:57     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-25 14:57 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev, mpe

Christophe Leroy <christophe.leroy@csgroup.eu> writes:

> On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
....

> diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
>> index b1d091a97611..7dc71f85683d 100644
>> --- a/arch/powerpc/mm/book3s64/pkeys.c
>> +++ b/arch/powerpc/mm/book3s64/pkeys.c
>> @@ -89,12 +89,14 @@ static int scan_pkey_feature(void)
>>   		}
>>   	}
>>   
>> +#ifdef CONFIG_PPC_MEM_KEYS
>>   	/*
>>   	 * Adjust the upper limit, based on the number of bits supported by
>>   	 * arch-neutral code.
>>   	 */
>>   	pkeys_total = min_t(int, pkeys_total,
>>   			    ((ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT) + 1));
>
> I don't think we need an #ifdef here. I think an 'if (IS_ENABLED(CONFIG_PPC_MEM_KEYS))' should do it.

ppc64/arch/powerpc/mm/book3s64/pkeys.c: In function ‘scan_pkey_feature’:                                                                                     
ppc64/arch/powerpc/mm/book3s64/pkeys.c:98:33: error: ‘VM_PKEY_SHIFT’ undeclared (first use in this function)                                                 
   98 |         ((ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT) + 1));                                
      |                                 ^~~~~~~~~~~~~                                          
The pkey headers only include the arch headers when PPC_MEM_KEYS is enabled, i.e.:

#ifdef CONFIG_ARCH_HAS_PKEYS
#include <asm/pkeys.h>
#else /* ! CONFIG_ARCH_HAS_PKEYS */
#define arch_max_pkey() (1)
#define execute_only_pkey(mm) (0)
#define arch_override_mprotect_pkey(vma, prot, pkey) (0)
#define PKEY_DEDICATED_EXECUTE_ONLY 0
#define ARCH_VM_PKEY_FLAGS 0
..

Sorting that out should be another patch series. 



>
>> +#endif
>>   	return pkeys_total;
>>   }


* Re: [PATCH v6 07/22] powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP
  2020-11-25  5:16 ` [PATCH v6 07/22] powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP Aneesh Kumar K.V
  2020-11-25 13:43   ` Christophe Leroy
@ 2020-11-26  3:16   ` Michael Ellerman
  1 sibling, 0 replies; 48+ messages in thread
From: Michael Ellerman @ 2020-11-26  3:16 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev; +Cc: Aneesh Kumar K.V

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
> diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
> index 255a1837e9f7..f5c7a17c198a 100644
> --- a/arch/powerpc/include/asm/mmu.h
> +++ b/arch/powerpc/include/asm/mmu.h
> @@ -28,6 +28,11 @@
>   * Individual features below.
>   */
>  
> +/*
> + * Supports KUAP (key 0 controlling userspace addresses) on radix
> + */

That comment needs updating.

I think this feature now means we have either key 0 controlling uaccess
on radix OR we're using the AMR to manually implement KUAP.

> +#define MMU_FTR_KUAP			ASM_CONST(0x00000200)

I agree with Christophe that this name is now too generic.

With that name one would expect it to be enabled on the 32-bit CPUs that
implement KUAP.

Maybe MMU_FTR_BOOK3S_KUAP ?


If in future the other MMUs want an MMU feature for KUAP then we could
rename it to MMU_FTR_KUAP, but we'd need to be careful with ifdefs to
make sure it guards the right things.

cheers


* Re: [PATCH v6 03/22] powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of PPC_MEM_KEYS
  2020-11-25  5:16 ` [PATCH v6 03/22] powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of PPC_MEM_KEYS Aneesh Kumar K.V
  2020-11-25 13:30   ` Christophe Leroy
@ 2020-11-26  3:25   ` Michael Ellerman
  1 sibling, 0 replies; 48+ messages in thread
From: Michael Ellerman @ 2020-11-26  3:25 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev; +Cc: Aneesh Kumar K.V

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
> The next set of patches adds support for kuap with hash translation.
> Hence make KUAP a BOOK3S_64 feature. Also make it a subfeature of
> PPC_MEM_KEYS. Hash translation is going to use pkeys to support
> KUAP/KUEP. Adding this dependency reduces the code complexity and
> enables us to move some of the initialization code to pkeys.c

The subject and change log don't really match the patch anymore since
you incorporated my changes.

This adds a new CONFIG called PPC_PKEY which is enabled if either PKEY
or KUAP/KUEP is enabled etc.

cheers

> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>  .../powerpc/include/asm/book3s/64/kup-radix.h |  4 ++--
>  arch/powerpc/include/asm/book3s/64/mmu.h      |  2 +-
>  arch/powerpc/include/asm/ptrace.h             |  7 +++++-
>  arch/powerpc/kernel/asm-offsets.c             |  3 +++
>  arch/powerpc/mm/book3s64/Makefile             |  2 +-
>  arch/powerpc/mm/book3s64/pkeys.c              | 24 ++++++++++++-------
>  arch/powerpc/platforms/Kconfig.cputype        |  5 ++++
>  7 files changed, 33 insertions(+), 14 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/kup-radix.h b/arch/powerpc/include/asm/book3s/64/kup-radix.h
> index 28716e2f13e3..68eaa2fac3ab 100644
> --- a/arch/powerpc/include/asm/book3s/64/kup-radix.h
> +++ b/arch/powerpc/include/asm/book3s/64/kup-radix.h
> @@ -16,7 +16,7 @@
>  #ifdef CONFIG_PPC_KUAP
>  	BEGIN_MMU_FTR_SECTION_NESTED(67)
>  	mfspr	\gpr1, SPRN_AMR
> -	ld	\gpr2, STACK_REGS_KUAP(r1)
> +	ld	\gpr2, STACK_REGS_AMR(r1)
>  	cmpd	\gpr1, \gpr2
>  	beq	998f
>  	isync
> @@ -48,7 +48,7 @@
>  	bne	\msr_pr_cr, 99f
>  	.endif
>  	mfspr	\gpr1, SPRN_AMR
> -	std	\gpr1, STACK_REGS_KUAP(r1)
> +	std	\gpr1, STACK_REGS_AMR(r1)
>  	li	\gpr2, (AMR_KUAP_BLOCKED >> AMR_KUAP_SHIFT)
>  	sldi	\gpr2, \gpr2, AMR_KUAP_SHIFT
>  	cmpd	\use_cr, \gpr1, \gpr2
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index e0b52940e43c..a2a015066bae 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -199,7 +199,7 @@ extern int mmu_io_psize;
>  void mmu_early_init_devtree(void);
>  void hash__early_init_devtree(void);
>  void radix__early_init_devtree(void);
> -#ifdef CONFIG_PPC_MEM_KEYS
> +#ifdef CONFIG_PPC_PKEY
>  void pkey_early_init_devtree(void);
>  #else
>  static inline void pkey_early_init_devtree(void) {}
> diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
> index e2c778c176a3..e7f1caa007a4 100644
> --- a/arch/powerpc/include/asm/ptrace.h
> +++ b/arch/powerpc/include/asm/ptrace.h
> @@ -53,9 +53,14 @@ struct pt_regs
>  #ifdef CONFIG_PPC64
>  			unsigned long ppr;
>  #endif
> +			union {
>  #ifdef CONFIG_PPC_KUAP
> -			unsigned long kuap;
> +				unsigned long kuap;
>  #endif
> +#ifdef CONFIG_PPC_PKEY
> +				unsigned long amr;
> +#endif
> +			};
>  		};
>  		unsigned long __pad[2];	/* Maintain 16 byte interrupt stack alignment */
>  	};
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index c2722ff36e98..418a0b314a33 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -354,6 +354,9 @@ int main(void)
>  	STACK_PT_REGS_OFFSET(_PPR, ppr);
>  #endif /* CONFIG_PPC64 */
>  
> +#ifdef CONFIG_PPC_PKEY
> +	STACK_PT_REGS_OFFSET(STACK_REGS_AMR, amr);
> +#endif
>  #ifdef CONFIG_PPC_KUAP
>  	STACK_PT_REGS_OFFSET(STACK_REGS_KUAP, kuap);
>  #endif
> diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
> index fd393b8be14f..1b56d3af47d4 100644
> --- a/arch/powerpc/mm/book3s64/Makefile
> +++ b/arch/powerpc/mm/book3s64/Makefile
> @@ -17,7 +17,7 @@ endif
>  obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o
>  obj-$(CONFIG_PPC_SUBPAGE_PROT)	+= subpage_prot.o
>  obj-$(CONFIG_SPAPR_TCE_IOMMU)	+= iommu_api.o
> -obj-$(CONFIG_PPC_MEM_KEYS)	+= pkeys.o
> +obj-$(CONFIG_PPC_PKEY)	+= pkeys.o
>  
>  # Instrumenting the SLB fault path can lead to duplicate SLB entries
>  KCOV_INSTRUMENT_slb.o := n
> diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
> index b1d091a97611..7dc71f85683d 100644
> --- a/arch/powerpc/mm/book3s64/pkeys.c
> +++ b/arch/powerpc/mm/book3s64/pkeys.c
> @@ -89,12 +89,14 @@ static int scan_pkey_feature(void)
>  		}
>  	}
>  
> +#ifdef CONFIG_PPC_MEM_KEYS
>  	/*
>  	 * Adjust the upper limit, based on the number of bits supported by
>  	 * arch-neutral code.
>  	 */
>  	pkeys_total = min_t(int, pkeys_total,
>  			    ((ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT) + 1));
> +#endif
>  	return pkeys_total;
>  }
>  
> @@ -102,6 +104,7 @@ void __init pkey_early_init_devtree(void)
>  {
>  	int pkeys_total, i;
>  
> +#ifdef CONFIG_PPC_MEM_KEYS
>  	/*
>  	 * We define PKEY_DISABLE_EXECUTE in addition to the arch-neutral
>  	 * generic defines for PKEY_DISABLE_ACCESS and PKEY_DISABLE_WRITE.
> @@ -117,7 +120,7 @@ void __init pkey_early_init_devtree(void)
>  	BUILD_BUG_ON(__builtin_clzl(ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT) +
>  		     __builtin_popcountl(ARCH_VM_PKEY_FLAGS >> VM_PKEY_SHIFT)
>  				!= (sizeof(u64) * BITS_PER_BYTE));
> -
> +#endif
>  	/*
>  	 * Only P7 and above supports SPRN_AMR update with MSR[PR] = 1
>  	 */
> @@ -223,14 +226,6 @@ void __init pkey_early_init_devtree(void)
>  	return;
>  }
>  
> -void pkey_mm_init(struct mm_struct *mm)
> -{
> -	if (!mmu_has_feature(MMU_FTR_PKEY))
> -		return;
> -	mm_pkey_allocation_map(mm) = initial_allocation_mask;
> -	mm->context.execute_only_pkey = execute_only_key;
> -}
> -
>  static inline u64 read_amr(void)
>  {
>  	return mfspr(SPRN_AMR);
> @@ -257,6 +252,15 @@ static inline void write_iamr(u64 value)
>  	mtspr(SPRN_IAMR, value);
>  }
>  
> +#ifdef CONFIG_PPC_MEM_KEYS
> +void pkey_mm_init(struct mm_struct *mm)
> +{
> +	if (!mmu_has_feature(MMU_FTR_PKEY))
> +		return;
> +	mm_pkey_allocation_map(mm) = initial_allocation_mask;
> +	mm->context.execute_only_pkey = execute_only_key;
> +}
> +
>  static inline void init_amr(int pkey, u8 init_bits)
>  {
>  	u64 new_amr_bits = (((u64)init_bits & 0x3UL) << pkeyshift(pkey));
> @@ -445,3 +449,5 @@ void arch_dup_pkeys(struct mm_struct *oldmm, struct mm_struct *mm)
>  	mm_pkey_allocation_map(mm) = mm_pkey_allocation_map(oldmm);
>  	mm->context.execute_only_pkey = oldmm->context.execute_only_pkey;
>  }
> +
> +#endif /* CONFIG_PPC_MEM_KEYS */
> diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
> index c194c4ae8bc7..f255e8f32155 100644
> --- a/arch/powerpc/platforms/Kconfig.cputype
> +++ b/arch/powerpc/platforms/Kconfig.cputype
> @@ -395,6 +395,11 @@ config PPC_KUAP_DEBUG
>  	  Add extra debugging for Kernel Userspace Access Protection (KUAP)
>  	  If you're unsure, say N.
>  
> +config PPC_PKEY
> +	def_bool y
> +	depends on PPC_BOOK3S_64
> +	depends on PPC_MEM_KEYS || PPC_KUAP || PPC_KUEP
> +
>  config ARCH_ENABLE_HUGEPAGE_MIGRATION
>  	def_bool y
>  	depends on PPC_BOOK3S_64 && HUGETLB_PAGE && MIGRATION
> -- 
> 2.28.0


* Re: [PATCH v6 04/22] powerpc/book3s64/kuap/kuep: Move uamor setup to pkey init
  2020-11-25  5:16 ` [PATCH v6 04/22] powerpc/book3s64/kuap/kuep: Move uamor setup to pkey init Aneesh Kumar K.V
  2020-11-25 13:32   ` Christophe Leroy
@ 2020-11-26  3:28   ` Michael Ellerman
  1 sibling, 0 replies; 48+ messages in thread
From: Michael Ellerman @ 2020-11-26  3:28 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev; +Cc: Aneesh Kumar K.V

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
> This patch consolidates UAMOR update across pkey, kuap and kuep features.
> The boot CPU initializes UAMOR via pkey init, and both radix/hash do the
> secondary CPU UAMOR init in early_init_mmu_secondary.
>
> We don't check for mmu_feature in radix secondary init because UAMOR
> is a supported SPRN with all CPUs supporting radix translation.
> The old code was not updating UAMOR if we had smap disabled and smep enabled.
> This change handles that case.
>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>  arch/powerpc/mm/book3s64/radix_pgtable.c | 8 +++++---
>  1 file changed, 5 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 3adcf730f478..bfe441af916a 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -620,9 +620,6 @@ void setup_kuap(bool disabled)
>  		cur_cpu_spec->mmu_features |= MMU_FTR_RADIX_KUAP;
>  	}
>  
> -	/* Make sure userspace can't change the AMR */
> -	mtspr(SPRN_UAMOR, 0);
> -
>  	/*
>  	 * Set the default kernel AMR values on all cpus.
>  	 */
> @@ -721,6 +718,11 @@ void radix__early_init_mmu_secondary(void)
>  
>  	radix__switch_mmu_context(NULL, &init_mm);
>  	tlbiel_all();
> +
> +#ifdef CONFIG_PPC_PKEY
> +	/* Make sure userspace can't change the AMR */
> +	mtspr(SPRN_UAMOR, 0);
> +#endif

If PPC_PKEY is disabled I think this leaves UAMOR unset, which means it
could potentially allow AMR to be used as a covert channel between
processes.

cheers


* Re: [PATCH v6 09/22] powerpc/exec: Set thread.regs early during exec
  2020-11-25 13:47   ` Christophe Leroy
@ 2020-11-26  7:38     ` Aneesh Kumar K.V
  2020-11-26  7:43       ` Christophe Leroy
  0 siblings, 1 reply; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-26  7:38 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev, mpe

Christophe Leroy <christophe.leroy@csgroup.eu> writes:

> On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
....

> +++ b/arch/powerpc/kernel/process.c
>> @@ -1530,10 +1530,32 @@ void flush_thread(void)
>>   #ifdef CONFIG_PPC_BOOK3S_64
>>   void arch_setup_new_exec(void)
>>   {
>> -	if (radix_enabled())
>> -		return;
>> -	hash__setup_new_exec();
>> +	if (!radix_enabled())
>> +		hash__setup_new_exec();
>> +
>> +	/*
>> +	 * If we exec out of a kernel thread then thread.regs will not be
>> +	 * set.  Do it now.
>> +	 */
>> +	if (!current->thread.regs) {
>> +		struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
>> +		current->thread.regs = regs - 1;
>> +	}
>> +
>> +}
>> +#else
>> +void arch_setup_new_exec(void)
>> +{
>> +	/*
>> +	 * If we exec out of a kernel thread then thread.regs will not be
>> +	 * set.  Do it now.
>> +	 */
>> +	if (!current->thread.regs) {
>> +		struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
>> +		current->thread.regs = regs - 1;
>> +	}
>>   }
>> +
>>   #endif
>
> No need to duplicate arch_setup_new_exec(), I think. radix_enabled() is defined at all times, so the
> first function should be valid at all times.
>

arch/powerpc/kernel/process.c: In function ‘arch_setup_new_exec’:
arch/powerpc/kernel/process.c:1529:3: error: implicit declaration of function ‘hash__setup_new_exec’; did you mean ‘arch_setup_new_exec’? [-Werror=implicit-function-declaration]
 1529 |   hash__setup_new_exec();
      |   ^~~~~~~~~~~~~~~~~~~~
      |   arch_setup_new_exec


That requires us to have hash__setup_new_exec prototype for all platforms.

-aneesh


* Re: [PATCH v6 09/22] powerpc/exec: Set thread.regs early during exec
  2020-11-26  7:38     ` Aneesh Kumar K.V
@ 2020-11-26  7:43       ` Christophe Leroy
  0 siblings, 0 replies; 48+ messages in thread
From: Christophe Leroy @ 2020-11-26  7:43 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe



On 26/11/2020 at 08:38, Aneesh Kumar K.V wrote:
> Christophe Leroy <christophe.leroy@csgroup.eu> writes:
> 
>> On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
> ....
> 
>> +++ b/arch/powerpc/kernel/process.c
>>> @@ -1530,10 +1530,32 @@ void flush_thread(void)
>>>    #ifdef CONFIG_PPC_BOOK3S_64
>>>    void arch_setup_new_exec(void)
>>>    {
>>> -	if (radix_enabled())
>>> -		return;
>>> -	hash__setup_new_exec();
>>> +	if (!radix_enabled())
>>> +		hash__setup_new_exec();
>>> +
>>> +	/*
>>> +	 * If we exec out of a kernel thread then thread.regs will not be
>>> +	 * set.  Do it now.
>>> +	 */
>>> +	if (!current->thread.regs) {
>>> +		struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
>>> +		current->thread.regs = regs - 1;
>>> +	}
>>> +
>>> +}
>>> +#else
>>> +void arch_setup_new_exec(void)
>>> +{
>>> +	/*
>>> +	 * If we exec out of a kernel thread then thread.regs will not be
>>> +	 * set.  Do it now.
>>> +	 */
>>> +	if (!current->thread.regs) {
>>> +		struct pt_regs *regs = task_stack_page(current) + THREAD_SIZE;
>>> +		current->thread.regs = regs - 1;
>>> +	}
>>>    }
>>> +
>>>    #endif
>>
>> No need to duplicate arch_setup_new_exec(), I think. radix_enabled() is defined at all times, so the
>> first function should be valid at all times.
>>
> 
> arch/powerpc/kernel/process.c: In function ‘arch_setup_new_exec’:
> arch/powerpc/kernel/process.c:1529:3: error: implicit declaration of function ‘hash__setup_new_exec’; did you mean ‘arch_setup_new_exec’? [-Werror=implicit-function-declaration]
>   1529 |   hash__setup_new_exec();
>        |   ^~~~~~~~~~~~~~~~~~~~
>        |   arch_setup_new_exec
> 
> 
> That requires us to have hash__setup_new_exec prototype for all platforms.

Yes indeed.

So maybe just enclose that part in the #ifdef instead of duplicating the common part?

Christophe


* Re: [PATCH v6 16/22] powerpc/book3s64/kuap: Improve error reporting with KUAP
  2020-11-25 14:04   ` Christophe Leroy
@ 2020-11-26  7:44     ` Aneesh Kumar K.V
  2020-11-26  9:29       ` Michael Ellerman
  0 siblings, 1 reply; 48+ messages in thread
From: Aneesh Kumar K.V @ 2020-11-26  7:44 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev, mpe

Christophe Leroy <christophe.leroy@csgroup.eu> writes:

> On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
>> With hash translation use DSISR_KEYFAULT to identify a wrong access.
>> With Radix we look at the AMR value and type of fault.
>> 
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>> ---
>>   arch/powerpc/include/asm/book3s/32/kup.h     |  4 +--
>>   arch/powerpc/include/asm/book3s/64/kup.h     | 27 ++++++++++++++++----
>>   arch/powerpc/include/asm/kup.h               |  4 +--
>>   arch/powerpc/include/asm/nohash/32/kup-8xx.h |  4 +--
>>   arch/powerpc/mm/fault.c                      |  2 +-
>>   5 files changed, 29 insertions(+), 12 deletions(-)
>> 
>> diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
>> index 32fd4452e960..b18cd931e325 100644
>> --- a/arch/powerpc/include/asm/book3s/32/kup.h
>> +++ b/arch/powerpc/include/asm/book3s/32/kup.h
>> @@ -177,8 +177,8 @@ static inline void restore_user_access(unsigned long flags)
>>   		allow_user_access(to, to, end - addr, KUAP_READ_WRITE);
>>   }
>>   
>> -static inline bool
>> -bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
>> +static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
>> +				  bool is_write, unsigned long error_code)
>>   {
>>   	unsigned long begin = regs->kuap & 0xf0000000;
>>   	unsigned long end = regs->kuap << 28;
>> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
>> index 4a3d0d601745..2922c442a218 100644
>> --- a/arch/powerpc/include/asm/book3s/64/kup.h
>> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
>> @@ -301,12 +301,29 @@ static inline void set_kuap(unsigned long value)
>>   	isync();
>>   }
>>   
>> -static inline bool
>> -bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
>> +#define RADIX_KUAP_BLOCK_READ	UL(0x4000000000000000)
>> +#define RADIX_KUAP_BLOCK_WRITE	UL(0x8000000000000000)
>> +
>> +static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
>> +				  bool is_write, unsigned long error_code)
>>   {
>> -	return WARN(mmu_has_feature(MMU_FTR_KUAP) &&
>> -		    (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
>> -		    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
>> +	if (!mmu_has_feature(MMU_FTR_KUAP))
>> +		return false;
>> +
>> +	if (radix_enabled()) {
>> +		/*
>> +		 * Will be a storage protection fault.
>> +		 * Only check the details of AMR[0]
>> +		 */
>> +		return WARN((regs->kuap & (is_write ? RADIX_KUAP_BLOCK_WRITE : RADIX_KUAP_BLOCK_READ)),
>> +			    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
>
> I think it is pointless to keep the WARN() here.
>
> I have a series aiming at removing them. See 
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/cc9129bdda1dbc2f0a09cf45fece7d0b0e690784.1605541983.git.christophe.leroy@csgroup.eu/

Can we do this as a separate patch, as you posted above? We can drop the
WARN there while keeping the hash branch checking the DSISR value.

-aneesh


* Re: [PATCH v6 16/22] powerpc/book3s64/kuap: Improve error reporting with KUAP
  2020-11-26  7:44     ` Aneesh Kumar K.V
@ 2020-11-26  9:29       ` Michael Ellerman
  2020-11-26 10:39         ` Christophe Leroy
  0 siblings, 1 reply; 48+ messages in thread
From: Michael Ellerman @ 2020-11-26  9:29 UTC (permalink / raw)
  To: Aneesh Kumar K.V, Christophe Leroy, linuxppc-dev

"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
> Christophe Leroy <christophe.leroy@csgroup.eu> writes:
>
>> On 25/11/2020 at 06:16, Aneesh Kumar K.V wrote:
>>> With hash translation use DSISR_KEYFAULT to identify a wrong access.
>>> With Radix we look at the AMR value and type of fault.
>>> 
>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>>> ---
>>>   arch/powerpc/include/asm/book3s/32/kup.h     |  4 +--
>>>   arch/powerpc/include/asm/book3s/64/kup.h     | 27 ++++++++++++++++----
>>>   arch/powerpc/include/asm/kup.h               |  4 +--
>>>   arch/powerpc/include/asm/nohash/32/kup-8xx.h |  4 +--
>>>   arch/powerpc/mm/fault.c                      |  2 +-
>>>   5 files changed, 29 insertions(+), 12 deletions(-)
>>> 
>>> diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
>>> index 32fd4452e960..b18cd931e325 100644
>>> --- a/arch/powerpc/include/asm/book3s/32/kup.h
>>> +++ b/arch/powerpc/include/asm/book3s/32/kup.h
>>> @@ -177,8 +177,8 @@ static inline void restore_user_access(unsigned long flags)
>>>   		allow_user_access(to, to, end - addr, KUAP_READ_WRITE);
>>>   }
>>>   
>>> -static inline bool
>>> -bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
>>> +static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
>>> +				  bool is_write, unsigned long error_code)
>>>   {
>>>   	unsigned long begin = regs->kuap & 0xf0000000;
>>>   	unsigned long end = regs->kuap << 28;
>>> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
>>> index 4a3d0d601745..2922c442a218 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/kup.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
>>> @@ -301,12 +301,29 @@ static inline void set_kuap(unsigned long value)
>>>   	isync();
>>>   }
>>>   
>>> -static inline bool
>>> -bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
>>> +#define RADIX_KUAP_BLOCK_READ	UL(0x4000000000000000)
>>> +#define RADIX_KUAP_BLOCK_WRITE	UL(0x8000000000000000)
>>> +
>>> +static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
>>> +				  bool is_write, unsigned long error_code)
>>>   {
>>> -	return WARN(mmu_has_feature(MMU_FTR_KUAP) &&
>>> -		    (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
>>> -		    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
>>> +	if (!mmu_has_feature(MMU_FTR_KUAP))
>>> +		return false;
>>> +
>>> +	if (radix_enabled()) {
>>> +		/*
>>> +		 * Will be a storage protection fault.
>>> +		 * Only check the details of AMR[0]
>>> +		 */
>>> +		return WARN((regs->kuap & (is_write ? RADIX_KUAP_BLOCK_WRITE : RADIX_KUAP_BLOCK_READ)),
>>> +			    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
>>
>> I think it is pointless to keep the WARN() here.
>>
>> I have a series aiming at removing them. See 
>> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/cc9129bdda1dbc2f0a09cf45fece7d0b0e690784.1605541983.git.christophe.leroy@csgroup.eu/
>
> Can we do this as a separate patch, as you posted above? We can drop the
> WARN there while keeping the hash branch that looks at the DSISR value.

Yeah we can reconcile Christophe's series with yours later.

I'm still not 100% convinced I want to drop that WARN.

cheers

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 16/22] powerpc/book3s64/kuap: Improve error reporting with KUAP
  2020-11-26  9:29       ` Michael Ellerman
@ 2020-11-26 10:39         ` Christophe Leroy
  0 siblings, 0 replies; 48+ messages in thread
From: Christophe Leroy @ 2020-11-26 10:39 UTC (permalink / raw)
  To: Michael Ellerman, Aneesh Kumar K.V, linuxppc-dev



Le 26/11/2020 à 10:29, Michael Ellerman a écrit :
> "Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com> writes:
>> Christophe Leroy <christophe.leroy@csgroup.eu> writes:
>>
>>> Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
>>>> With hash translation use DSISR_KEYFAULT to identify a wrong access.
>>>> With Radix we look at the AMR value and type of fault.
>>>>
>>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>>>> ---
>>>>    arch/powerpc/include/asm/book3s/32/kup.h     |  4 +--
>>>>    arch/powerpc/include/asm/book3s/64/kup.h     | 27 ++++++++++++++++----
>>>>    arch/powerpc/include/asm/kup.h               |  4 +--
>>>>    arch/powerpc/include/asm/nohash/32/kup-8xx.h |  4 +--
>>>>    arch/powerpc/mm/fault.c                      |  2 +-
>>>>    5 files changed, 29 insertions(+), 12 deletions(-)
>>>>
>>>> diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
>>>> index 32fd4452e960..b18cd931e325 100644
>>>> --- a/arch/powerpc/include/asm/book3s/32/kup.h
>>>> +++ b/arch/powerpc/include/asm/book3s/32/kup.h
>>>> @@ -177,8 +177,8 @@ static inline void restore_user_access(unsigned long flags)
>>>>    		allow_user_access(to, to, end - addr, KUAP_READ_WRITE);
>>>>    }
>>>>    
>>>> -static inline bool
>>>> -bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
>>>> +static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
>>>> +				  bool is_write, unsigned long error_code)
>>>>    {
>>>>    	unsigned long begin = regs->kuap & 0xf0000000;
>>>>    	unsigned long end = regs->kuap << 28;
>>>> diff --git a/arch/powerpc/include/asm/book3s/64/kup.h b/arch/powerpc/include/asm/book3s/64/kup.h
>>>> index 4a3d0d601745..2922c442a218 100644
>>>> --- a/arch/powerpc/include/asm/book3s/64/kup.h
>>>> +++ b/arch/powerpc/include/asm/book3s/64/kup.h
>>>> @@ -301,12 +301,29 @@ static inline void set_kuap(unsigned long value)
>>>>    	isync();
>>>>    }
>>>>    
>>>> -static inline bool
>>>> -bad_kuap_fault(struct pt_regs *regs, unsigned long address, bool is_write)
>>>> +#define RADIX_KUAP_BLOCK_READ	UL(0x4000000000000000)
>>>> +#define RADIX_KUAP_BLOCK_WRITE	UL(0x8000000000000000)
>>>> +
>>>> +static inline bool bad_kuap_fault(struct pt_regs *regs, unsigned long address,
>>>> +				  bool is_write, unsigned long error_code)
>>>>    {
>>>> -	return WARN(mmu_has_feature(MMU_FTR_KUAP) &&
>>>> -		    (regs->kuap & (is_write ? AMR_KUAP_BLOCK_WRITE : AMR_KUAP_BLOCK_READ)),
>>>> -		    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
>>>> +	if (!mmu_has_feature(MMU_FTR_KUAP))
>>>> +		return false;
>>>> +
>>>> +	if (radix_enabled()) {
>>>> +		/*
>>>> +		 * Will be a storage protection fault.
>>>> +		 * Only check the details of AMR[0]
>>>> +		 */
>>>> +		return WARN((regs->kuap & (is_write ? RADIX_KUAP_BLOCK_WRITE : RADIX_KUAP_BLOCK_READ)),
>>>> +			    "Bug: %s fault blocked by AMR!", is_write ? "Write" : "Read");
>>>
>>> I think it is pointless to keep the WARN() here.
>>>
>>> I have a series aiming at removing them. See
>>> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/cc9129bdda1dbc2f0a09cf45fece7d0b0e690784.1605541983.git.christophe.leroy@csgroup.eu/
>>
>> Can we do this as a separate patch, as you posted above? We can drop the
>> WARN there while keeping the hash branch that looks at the DSISR value.
> 
> Yeah we can reconcile Christophe's series with yours later.
> 
> I'm still not 100% convinced I want to drop that WARN.

Ok, you can still take the rest of the series as that patch is the last one.

But I really can't see the point of the WARN. When I hit a KUAP bad fault, I get a double dump 
(see below). The second one is the interesting one: it tells me everything about the fault, while the 
WARN only shows the internals of the do_page_fault() function. What useful information do I get from it?

[   37.842509] lkdtm: attempting bad write at b7bae000
[   37.842526] ------------[ cut here ]------------
[   37.842536] Bug: write fault blocked by segment registers !
[   37.842598] WARNING: CPU: 0 PID: 434 at arch/powerpc/include/asm/book3s/32/kup.h:189 
do_page_fault+0x3c8/0x5f0
[   37.842630] CPU: 0 PID: 434 Comm: busybox Not tainted 5.10.0-rc5-s3k-dev-01343-g8bec80f73baa #4165
[   37.842650] NIP:  c00155e4 LR: c00155e4 CTR: 00000000
[   37.842670] REGS: e6719c78 TRAP: 0700   Not tainted  (5.10.0-rc5-s3k-dev-01343-g8bec80f73baa)
[   37.842683] MSR:  00021032 <ME,IR,DR,RI>  CR: 22002224  XER: 20000000
[   37.842750]
[   37.842750] GPR00: c00155e4 e6719d30 c113c660 0000002f c097adf8 c097af10 00000027 00000027
[   37.842750] GPR08: c0b0afbc 00000000 00000023 00000001 24002224 100d166a 100a0920 00000000
[   37.842750] GPR16: 100cac0c 100b0000 10169444 1016a685 100d0000 100d0000 00000000 100a0900
[   37.842750] GPR24: ffffffef ffffffff c1392220 00000300 c076f424 02000000 b7bae000 e6719d70
[   37.843049] NIP [c00155e4] do_page_fault+0x3c8/0x5f0
[   37.843074] LR [c00155e4] do_page_fault+0x3c8/0x5f0
[   37.843087] Call Trace:
[   37.843114] [e6719d30] [c00155e4] do_page_fault+0x3c8/0x5f0 (unreliable)
[   37.843154] [e6719d60] [c0014384] handle_page_fault+0x10/0x3c
[   37.843211] --- interrupt: 301 at lkdtm_ACCESS_USERSPACE+0xdc/0xe4
[   37.843211]     LR = lkdtm_ACCESS_USERSPACE+0xd0/0xe4
[   37.843238] [e6719e48] [c039d76c] direct_entry+0xe0/0x164
[   37.843281] [e6719e68] [c0286730] full_proxy_write+0x78/0xbc
[   37.843325] [e6719e88] [c01657a8] vfs_write+0xdc/0x458
[   37.843359] [e6719f08] [c0165cb0] ksys_write+0x6c/0x11c
[   37.843397] [e6719f38] [c0014164] ret_from_syscall+0x0/0x34
[   37.843426] --- interrupt: c01 at 0xfd55784
[   37.843426]     LR = 0xfe16244
[   37.843438] Instruction dump:
[   37.843459] 38600007 4bff7a19 3bc00000 4bfffdbc 419e0110 813f00b0 55280006 7c1e4040
[   37.843529] 408000f4 3c60c080 3863e148 4801552d <0fe00000> 3c80c072 3c60c097 38840d84
[   37.843602] ---[ end trace 29c115c8ef352681 ]---
[   37.843627] Kernel attempted to write user page (b7bae000) - exploit attempt? (uid: 0)
[   37.851531] BUG: Unable to handle kernel data access on write at 0xb7bae000
[   37.858472] Faulting instruction address: 0xc039e550
[   37.863432] Oops: Kernel access of bad area, sig: 11 [#1]
[   37.868822] BE PAGE_SIZE=4K PREEMPT CMPCPRO
[   37.873029] SAF3000 DIE NOTIFICATION
[   37.876624] CPU: 0 PID: 434 Comm: busybox Tainted: G        W 
5.10.0-rc5-s3k-dev-01343-g8bec80f73baa #4165
[   37.886940] NIP:  c039e550 LR: c039e544 CTR: 00000000
[   37.891988] REGS: e6719d70 TRAP: 0300   Tainted: G        W 
(5.10.0-rc5-s3k-dev-01343-g8bec80f73baa)
[   37.901866] MSR:  00009032 <EE,ME,IR,DR,RI>  CR: 24002224  XER: 00000000
[   37.908617] DAR: b7bae000 DSISR: 0a000000
[   37.908617] GPR00: c039e544 e6719e28 c113c660 c083aad8 c097adf8 c097af10 00000027 00000027
[   37.908617] GPR08: c0b0afbc c0dec0de 00000023 00000001 28002224 100d166a 100a0920 00000000
[   37.908617] GPR16: 100cac0c 100b0000 10169444 1016a685 100d0000 100d0000 00000000 100a0900
[   37.908617] GPR24: ffffffef ffffffff e6719f10 00000011 c076f424 c1cb1000 c0839e24 b7bae000
[   37.946267] NIP [c039e550] lkdtm_ACCESS_USERSPACE+0xdc/0xe4
[   37.951842] LR [c039e544] lkdtm_ACCESS_USERSPACE+0xd0/0xe4
[   37.957316] Call Trace:
[   37.959782] [e6719e28] [c039e544] lkdtm_ACCESS_USERSPACE+0xd0/0xe4 (unreliable)
[   37.967102] [e6719e48] [c039d76c] direct_entry+0xe0/0x164
[   37.972524] [e6719e68] [c0286730] full_proxy_write+0x78/0xbc
[   37.978204] [e6719e88] [c01657a8] vfs_write+0xdc/0x458
[   37.983358] [e6719f08] [c0165cb0] ksys_write+0x6c/0x11c
[   37.988605] [e6719f38] [c0014164] ret_from_syscall+0x0/0x34
[   37.994185] --- interrupt: c01 at 0xfd55784
[   37.994185]     LR = 0xfe16244
[   38.001385] Instruction dump:
[   38.004360] 3863ac00 3d29c0df 3929c0de 91210008 4bcd04c9 3c60c084 3863ac24 7fe4fb78
[   38.012149] 4bcd04b9 3c60c084 81210008 3863aad8 <913f0000> 4bffff80 3c60c084 3863aa00
[   38.020120] ---[ end trace 29c115c8ef352682 ]---

Christophe

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash
  2020-11-25  5:16 ` [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash Aneesh Kumar K.V
@ 2021-03-15 12:06   ` Christophe Leroy
  2021-03-15 12:59     ` Aneesh Kumar K.V
  2021-10-08  9:32   ` Christophe Leroy
  1 sibling, 1 reply; 48+ messages in thread
From: Christophe Leroy @ 2021-03-15 12:06 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe; +Cc: Sandipan Das



Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

PPC_HAVE_KUAP is only selected on book3s/64 when PPC_RADIX_MMU is selected. Is that normal ?


> ---
>   arch/powerpc/mm/book3s64/pkeys.c | 7 ++++++-
>   1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
> index f747d66cc87d..84f8664ffc47 100644
> --- a/arch/powerpc/mm/book3s64/pkeys.c
> +++ b/arch/powerpc/mm/book3s64/pkeys.c
> @@ -257,7 +257,12 @@ void __init setup_kuep(bool disabled)
>   #ifdef CONFIG_PPC_KUAP
>   void __init setup_kuap(bool disabled)
>   {
> -	if (disabled || !early_radix_enabled())
> +	if (disabled)
> +		return;
> +	/*
> +	 * On hash if PKEY feature is not enabled, disable KUAP too.
> +	 */
> +	if (!early_radix_enabled() && !early_mmu_has_feature(MMU_FTR_PKEY))
>   		return;
>   
>   	if (smp_processor_id() == boot_cpuid) {
> 

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash
  2021-03-15 12:06   ` Christophe Leroy
@ 2021-03-15 12:59     ` Aneesh Kumar K.V
  2021-03-15 13:02       ` Christophe Leroy
  0 siblings, 1 reply; 48+ messages in thread
From: Aneesh Kumar K.V @ 2021-03-15 12:59 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev, mpe; +Cc: Sandipan Das

On 3/15/21 5:36 PM, Christophe Leroy wrote:
> 
> 
> Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
>> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> 
> PPC_HAVE_KUAP is only selected on book3s/64 when PPC_RADIX_MMU is 
> selected. Is that normal ?
> 


I guess we missed fixing that with this patch. How about

modified   arch/powerpc/platforms/Kconfig.cputype
@@ -103,6 +103,8 @@ config PPC_BOOK3S_64
  	select ARCH_SUPPORTS_NUMA_BALANCING
  	select IRQ_WORK
  	select PPC_MM_SLICES
+	select PPC_HAVE_KUEP
+	select PPC_HAVE_KUAP

  config PPC_BOOK3E_64
  	bool "Embedded processors"
@@ -365,8 +367,6 @@ config PPC_RADIX_MMU
  	bool "Radix MMU Support"
  	depends on PPC_BOOK3S_64
  	select ARCH_HAS_GIGANTIC_PAGE
-	select PPC_HAVE_KUEP
-	select PPC_HAVE_KUAP
  	default y
  	help
  	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this





-aneesh

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash
  2021-03-15 12:59     ` Aneesh Kumar K.V
@ 2021-03-15 13:02       ` Christophe Leroy
  0 siblings, 0 replies; 48+ messages in thread
From: Christophe Leroy @ 2021-03-15 13:02 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe; +Cc: Sandipan Das



Le 15/03/2021 à 13:59, Aneesh Kumar K.V a écrit :
> On 3/15/21 5:36 PM, Christophe Leroy wrote:
>>
>>
>> Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
>>> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
>>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>>
>> PPC_HAVE_KUAP is only selected on book3s/64 when PPC_RADIX_MMU is selected. Is that normal ?
>>
> 
> 
> I guess we missed fixing that with this patch. How about

Yes, looks good to me.

> 
> modified   arch/powerpc/platforms/Kconfig.cputype
> @@ -103,6 +103,8 @@ config PPC_BOOK3S_64
>       select ARCH_SUPPORTS_NUMA_BALANCING
>       select IRQ_WORK
>       select PPC_MM_SLICES
> +    select PPC_HAVE_KUEP
> +    select PPC_HAVE_KUAP
> 
>   config PPC_BOOK3E_64
>       bool "Embedded processors"
> @@ -365,8 +367,6 @@ config PPC_RADIX_MMU
>       bool "Radix MMU Support"
>       depends on PPC_BOOK3S_64
>       select ARCH_HAS_GIGANTIC_PAGE
> -    select PPC_HAVE_KUEP
> -    select PPC_HAVE_KUAP
>       default y
>       help
>         Enable support for the Power ISA 3.0 Radix style MMU. Currently this
> 
> 
> 
> 
> 
> -aneesh

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash
  2020-11-25  5:16 ` [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash Aneesh Kumar K.V
  2021-03-15 12:06   ` Christophe Leroy
@ 2021-10-08  9:32   ` Christophe Leroy
  2021-10-11  3:28     ` Michael Ellerman
  1 sibling, 1 reply; 48+ messages in thread
From: Christophe Leroy @ 2021-10-08  9:32 UTC (permalink / raw)
  To: Aneesh Kumar K.V, linuxppc-dev, mpe; +Cc: Sandipan Das



Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>   arch/powerpc/mm/book3s64/pkeys.c | 7 ++++++-
>   1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
> index f747d66cc87d..84f8664ffc47 100644
> --- a/arch/powerpc/mm/book3s64/pkeys.c
> +++ b/arch/powerpc/mm/book3s64/pkeys.c
> @@ -257,7 +257,12 @@ void __init setup_kuep(bool disabled)
>   #ifdef CONFIG_PPC_KUAP
>   void __init setup_kuap(bool disabled)
>   {
> -	if (disabled || !early_radix_enabled())
> +	if (disabled)
> +		return;
> +	/*
> +	 * On hash if PKEY feature is not enabled, disable KUAP too.
> +	 */
> +	if (!early_radix_enabled() && !early_mmu_has_feature(MMU_FTR_PKEY))

pkey_early_init_devtree() bails out without setting MMU_FTR_PKEY with 
the following:

	/*
	 * Only P7 and above supports SPRN_AMR update with MSR[PR] = 1
	 */
	if (!early_cpu_has_feature(CPU_FTR_ARCH_206))
		return;



Why would it be impossible to do KUAP in that case? KUAP doesn't 
require updating SPRN_AMR with MSR[PR] = 1.


>   		return;
>   
>   	if (smp_processor_id() == boot_cpuid) {
> 

^ permalink raw reply	[flat|nested] 48+ messages in thread

* Re: [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash
  2021-10-08  9:32   ` Christophe Leroy
@ 2021-10-11  3:28     ` Michael Ellerman
  0 siblings, 0 replies; 48+ messages in thread
From: Michael Ellerman @ 2021-10-11  3:28 UTC (permalink / raw)
  To: Christophe Leroy, Aneesh Kumar K.V, linuxppc-dev; +Cc: Sandipan Das

Christophe Leroy <christophe.leroy@csgroup.eu> writes:
> Le 25/11/2020 à 06:16, Aneesh Kumar K.V a écrit :
>> Reviewed-by: Sandipan Das <sandipan@linux.ibm.com>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
>> ---
>>   arch/powerpc/mm/book3s64/pkeys.c | 7 ++++++-
>>   1 file changed, 6 insertions(+), 1 deletion(-)
>> 
>> diff --git a/arch/powerpc/mm/book3s64/pkeys.c b/arch/powerpc/mm/book3s64/pkeys.c
>> index f747d66cc87d..84f8664ffc47 100644
>> --- a/arch/powerpc/mm/book3s64/pkeys.c
>> +++ b/arch/powerpc/mm/book3s64/pkeys.c
>> @@ -257,7 +257,12 @@ void __init setup_kuep(bool disabled)
>>   #ifdef CONFIG_PPC_KUAP
>>   void __init setup_kuap(bool disabled)
>>   {
>> -	if (disabled || !early_radix_enabled())
>> +	if (disabled)
>> +		return;
>> +	/*
>> +	 * On hash if PKEY feature is not enabled, disable KUAP too.
>> +	 */
>> +	if (!early_radix_enabled() && !early_mmu_has_feature(MMU_FTR_PKEY))
>
> pkey_early_init_devtree() bails out without setting MMU_FTR_PKEY with 
> the following:
>
> 	/*
> 	 * Only P7 and above supports SPRN_AMR update with MSR[PR] = 1
> 	 */
> 	if (!early_cpu_has_feature(CPU_FTR_ARCH_206))
> 		return;
>
>
>
> Why would it be impossible to do KUAP in that case? KUAP doesn't 
> require updating SPRN_AMR with MSR[PR] = 1.

You're right, it would be possible to do KUAP in that case.

That's an artifact of KUAP being implemented using PKEYs on hash. For
the PKEYs user-visible API we want AMR to be user controlled.

It's possible we could untangle all that, allowing KUAP to be
implemented on earlier CPUs, but I'm not sure it's going to be high on
anyone's todo list.

cheers

^ permalink raw reply	[flat|nested] 48+ messages in thread

end of thread, other threads:[~2021-10-11  3:28 UTC | newest]

Thread overview: 48+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-25  5:16 [PATCH v6 00/22] Kernel userspace access/execution prevention with hash translation Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 01/22] powerpc: Add new macro to handle NESTED_IFCLR Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 02/22] KVM: PPC: BOOK3S: PR: Ignore UAMOR SPR Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 03/22] powerpc/book3s64/kuap/kuep: Make KUAP and KUEP a subfeature of PPC_MEM_KEYS Aneesh Kumar K.V
2020-11-25 13:30   ` Christophe Leroy
2020-11-25 14:57     ` Aneesh Kumar K.V
2020-11-26  3:25   ` Michael Ellerman
2020-11-25  5:16 ` [PATCH v6 04/22] powerpc/book3s64/kuap/kuep: Move uamor setup to pkey init Aneesh Kumar K.V
2020-11-25 13:32   ` Christophe Leroy
2020-11-26  3:28   ` Michael Ellerman
2020-11-25  5:16 ` [PATCH v6 05/22] powerpc/book3s64/kuap: Move KUAP related function outside radix Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 06/22] powerpc/book3s64/kuep: Move KUEP " Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 07/22] powerpc/book3s64/kuap: Rename MMU_FTR_RADIX_KUAP to MMU_FTR_KUAP Aneesh Kumar K.V
2020-11-25 13:43   ` Christophe Leroy
2020-11-25 13:52     ` Aneesh Kumar K.V
2020-11-26  3:16   ` Michael Ellerman
2020-11-25  5:16 ` [PATCH v6 08/22] powerpc/book3s64/kuap: Use Key 3 for kernel mapping with hash translation Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 09/22] powerpc/exec: Set thread.regs early during exec Aneesh Kumar K.V
2020-11-25 13:47   ` Christophe Leroy
2020-11-26  7:38     ` Aneesh Kumar K.V
2020-11-26  7:43       ` Christophe Leroy
2020-11-25  5:16 ` [PATCH v6 10/22] powerpc/book3s64/pkeys: Store/restore userspace AMR/IAMR correctly on entry and exit from kernel Aneesh Kumar K.V
2020-11-25 13:52   ` Christophe Leroy
2020-11-25 13:55     ` Aneesh Kumar K.V
2020-11-25 14:16       ` Christophe Leroy
2020-11-25  5:16 ` [PATCH v6 11/22] powerpc/book3s64/pkeys: Inherit correctly on fork Aneesh Kumar K.V
2020-11-25 13:54   ` Christophe Leroy
2020-11-25 13:56     ` Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 12/22] powerpc/book3s64/pkeys: Reset userspace AMR correctly on exec Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 13/22] powerpc/ptrace-view: Use pt_regs values instead of thread_struct based one Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 14/22] powerpc/book3s64/pkeys: Don't update SPRN_AMR when in kernel mode Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 15/22] powerpc/book3s64/kuap: Restrict access to userspace based on userspace AMR Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 16/22] powerpc/book3s64/kuap: Improve error reporting with KUAP Aneesh Kumar K.V
2020-11-25 14:04   ` Christophe Leroy
2020-11-26  7:44     ` Aneesh Kumar K.V
2020-11-26  9:29       ` Michael Ellerman
2020-11-26 10:39         ` Christophe Leroy
2020-11-25  5:16 ` [PATCH v6 17/22] powerpc/book3s64/kuap: Use Key 3 to implement KUAP with hash translation Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 18/22] powerpc/book3s64/kuep: Use Key 3 to implement KUEP " Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 19/22] powerpc/book3s64/hash/kuap: Enable kuap on hash Aneesh Kumar K.V
2021-03-15 12:06   ` Christophe Leroy
2021-03-15 12:59     ` Aneesh Kumar K.V
2021-03-15 13:02       ` Christophe Leroy
2021-10-08  9:32   ` Christophe Leroy
2021-10-11  3:28     ` Michael Ellerman
2020-11-25  5:16 ` [PATCH v6 20/22] powerpc/book3s64/hash/kuep: Enable KUEP " Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 21/22] powerpc/book3s64/hash/kup: Don't hardcode kup key Aneesh Kumar K.V
2020-11-25  5:16 ` [PATCH v6 22/22] powerpc/book3s64/pkeys: Optimize FTR_KUAP and FTR_KUEP disabled case Aneesh Kumar K.V
