* [PATCH v1 0/8] Enable STRICT_KERNEL_RWX
@ 2017-05-25  3:36 Balbir Singh
  2017-05-25  3:36 ` [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching Balbir Singh
                   ` (8 more replies)
  0 siblings, 9 replies; 31+ messages in thread
From: Balbir Singh @ 2017-05-25  3:36 UTC (permalink / raw)
  To: linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, christophe.leroy, paulus, rashmica.g, Balbir Singh

Enable STRICT_KERNEL_RWX for PPC64/BOOK3S

These patches enable RX mappings of kernel text.
As a trade-off, rodata is mapped RX as well; there
are more details in the patch descriptions.

As a prerequisite for R/O text, patch_instruction()
is moved over to using a separate mapping that
allows writes to kernel text. xmon/ftrace/kprobes
have been moved over to work with patch_instruction().

There are also a few bug fixes: updatepp and
updateboltedpp did not use the flags as described in
PAPR, and the ptdump utility ignored the first PFN.

Balbir Singh (8):
  powerpc/lib/code-patching: Enhance code patching
  powerpc/kprobes: Move kprobes over to patch_instruction
  powerpc/xmon: Add patch_instruction support for xmon
  powerpc/vmlinux.lds: Align __init_begin to 16M
  powerpc/platform/pseries/lpar: Fix updatepp and updateboltedpp
  powerpc/mm/hash: Implement mark_rodata_ro() for hash
  powerpc/Kconfig: Enable STRICT_KERNEL_RWX
  powerpc/mm/ptdump: Dump the first entry of the linear mapping as well

 arch/powerpc/Kconfig                       |  1 +
 arch/powerpc/include/asm/book3s/64/hash.h  |  3 +
 arch/powerpc/include/asm/book3s/64/radix.h |  4 ++
 arch/powerpc/kernel/kprobes.c              |  4 +-
 arch/powerpc/kernel/vmlinux.lds.S          | 10 +++-
 arch/powerpc/lib/code-patching.c           | 88 ++++++++++++++++++++++++++++--
 arch/powerpc/mm/dump_hashpagetable.c       |  2 +-
 arch/powerpc/mm/pgtable-hash64.c           | 35 ++++++++++++
 arch/powerpc/mm/pgtable-radix.c            |  7 +++
 arch/powerpc/mm/pgtable_64.c               |  9 +++
 arch/powerpc/platforms/pseries/lpar.c      | 13 ++++-
 arch/powerpc/xmon/xmon.c                   |  7 ++-
 12 files changed, 170 insertions(+), 13 deletions(-)

-- 
2.9.3

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching
  2017-05-25  3:36 [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
@ 2017-05-25  3:36 ` Balbir Singh
  2017-05-25  9:11   ` kbuild test robot
                     ` (3 more replies)
  2017-05-25  3:36 ` [PATCH v1 2/8] powerpc/kprobes: Move kprobes over to patch_instruction Balbir Singh
                   ` (7 subsequent siblings)
  8 siblings, 4 replies; 31+ messages in thread
From: Balbir Singh @ 2017-05-25  3:36 UTC (permalink / raw)
  To: linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, christophe.leroy, paulus, rashmica.g, Balbir Singh

Today our patching happens via direct copy and
patch_instruction(). The patching code is well
contained, in the sense that the copying paths are
limited.

While considering an implementation of
CONFIG_STRICT_KERNEL_RWX, the first requirement is
to create another mapping that will allow patching.
We create the window using text_poke_area, allocated
via get_vm_area(), which might be overkill; we could
do per-cpu mappings as well. The downside of these
patches is that patch_instruction() is now
synchronized using a lock. Other arches do similar
things, but use fixmaps. The reason for not using
fixmaps here is to make use of any randomization in
the future. The code also relies on set_pte_at() and
pte_clear() to do the appropriate TLB flushing.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/lib/code-patching.c | 88 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 84 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
index 500b0f6..0a16b2f 100644
--- a/arch/powerpc/lib/code-patching.c
+++ b/arch/powerpc/lib/code-patching.c
@@ -16,19 +16,98 @@
 #include <asm/code-patching.h>
 #include <linux/uaccess.h>
 #include <linux/kprobes.h>
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
 
+struct vm_struct *text_poke_area;
+static DEFINE_RAW_SPINLOCK(text_poke_lock);
 
-int patch_instruction(unsigned int *addr, unsigned int instr)
+/*
+ * This is an early_initcall and early_initcalls happen at the right time
+ * for us: after slab is enabled and before we mark rodata R/O. In the
+ * future, if get_vm_area() is randomized, this will be more flexible
+ * than a fixmap
+ */
+static int __init setup_text_poke_area(void)
 {
+	text_poke_area = get_vm_area(PAGE_SIZE, VM_ALLOC);
+	if (!text_poke_area) {
+		WARN_ONCE(1, "could not create area for mapping kernel addrs"
+				" which allow for patching kernel code\n");
+		return 0;
+	}
+	pr_info("text_poke area ready...\n");
+	raw_spin_lock_init(&text_poke_lock);
+	return 0;
+}
+
+/*
+ * This can be called for kernel text or a module.
+ */
+static int kernel_map_addr(void *addr)
+{
+	unsigned long pfn;
 	int err;
 
-	__put_user_size(instr, addr, 4, err);
+	if (is_vmalloc_addr(addr))
+		pfn = vmalloc_to_pfn(addr);
+	else
+		pfn = __pa_symbol(addr) >> PAGE_SHIFT;
+
+	err = map_kernel_page((unsigned long)text_poke_area->addr,
+			(pfn << PAGE_SHIFT), _PAGE_KERNEL_RW | _PAGE_PRESENT);
+	pr_devel("Mapped addr %p with pfn %lx\n", text_poke_area->addr, pfn);
 	if (err)
-		return err;
-	asm ("dcbst 0, %0; sync; icbi 0,%0; sync; isync" : : "r" (addr));
+		return -1;
 	return 0;
 }
 
+static inline void kernel_unmap_addr(void *addr)
+{
+	pte_t *pte;
+	unsigned long kaddr = (unsigned long)addr;
+
+	pte = pte_offset_kernel(pmd_offset(pud_offset(pgd_offset_k(kaddr),
+				kaddr), kaddr), kaddr);
+	pr_devel("clearing mm %p, pte %p, kaddr %lx\n", &init_mm, pte, kaddr);
+	pte_clear(&init_mm, kaddr, pte);
+}
+
+int patch_instruction(unsigned int *addr, unsigned int instr)
+{
+	int err;
+	unsigned int *dest = NULL;
+	unsigned long flags;
+	unsigned long kaddr = (unsigned long)addr;
+
+	/*
+	 * During early boot, patch_instruction() is called
+	 * when text_poke_area is not ready, but we still need
+	 * to allow patching. We just do the plain old patching
+	 */
+	if (!text_poke_area) {
+		__put_user_size(instr, addr, 4, err);
+		asm ("dcbst 0, %0; sync; icbi 0,%0; sync; isync" :: "r" (addr));
+		return 0;
+	}
+
+	raw_spin_lock_irqsave(&text_poke_lock, flags);
+	if (kernel_map_addr(addr)) {
+		err = -1;
+		goto out;
+	}
+
+	dest = (unsigned int *)(text_poke_area->addr) +
+		((kaddr & ~PAGE_MASK) / sizeof(unsigned int));
+	__put_user_size(instr, dest, 4, err);
+	asm ("dcbst 0, %0; sync; icbi 0,%0; sync; isync" :: "r" (dest));
+	kernel_unmap_addr(text_poke_area->addr);
+out:
+	raw_spin_unlock_irqrestore(&text_poke_lock, flags);
+	return err;
+}
+NOKPROBE_SYMBOL(patch_instruction);
+
 int patch_branch(unsigned int *addr, unsigned long target, int flags)
 {
 	return patch_instruction(addr, create_branch(addr, target, flags));
@@ -514,3 +593,4 @@ static int __init test_code_patching(void)
 late_initcall(test_code_patching);
 
 #endif /* CONFIG_CODE_PATCHING_SELFTEST */
+early_initcall(setup_text_poke_area);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v1 2/8] powerpc/kprobes: Move kprobes over to patch_instruction
  2017-05-25  3:36 [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
  2017-05-25  3:36 ` [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching Balbir Singh
@ 2017-05-25  3:36 ` Balbir Singh
  2017-05-29  8:50   ` Christophe LEROY
  2017-05-25  3:36 ` [PATCH v1 3/8] powerpc/xmon: Add patch_instruction support for xmon Balbir Singh
                   ` (6 subsequent siblings)
  8 siblings, 1 reply; 31+ messages in thread
From: Balbir Singh @ 2017-05-25  3:36 UTC (permalink / raw)
  To: linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, christophe.leroy, paulus, rashmica.g, Balbir Singh

arch_arm_kprobe()/arch_disarm_kprobe() use direct
assignment for copying instructions; replace that
with patch_instruction().

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/kernel/kprobes.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
index 160ae0f..5e1fa86 100644
--- a/arch/powerpc/kernel/kprobes.c
+++ b/arch/powerpc/kernel/kprobes.c
@@ -158,7 +158,7 @@ NOKPROBE_SYMBOL(arch_prepare_kprobe);
 
 void arch_arm_kprobe(struct kprobe *p)
 {
-	*p->addr = BREAKPOINT_INSTRUCTION;
+	patch_instruction(p->addr, BREAKPOINT_INSTRUCTION);
 	flush_icache_range((unsigned long) p->addr,
 			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
 }
@@ -166,7 +166,7 @@ NOKPROBE_SYMBOL(arch_arm_kprobe);
 
 void arch_disarm_kprobe(struct kprobe *p)
 {
-	*p->addr = p->opcode;
+	patch_instruction(p->addr, p->opcode);
 	flush_icache_range((unsigned long) p->addr,
 			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
 }
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v1 3/8] powerpc/xmon: Add patch_instruction support for xmon
  2017-05-25  3:36 [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
  2017-05-25  3:36 ` [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching Balbir Singh
  2017-05-25  3:36 ` [PATCH v1 2/8] powerpc/kprobes: Move kprobes over to patch_instruction Balbir Singh
@ 2017-05-25  3:36 ` Balbir Singh
  2017-05-25  3:36 ` [PATCH v1 4/8] powerpc/vmlinux.lds: Align __init_begin to 16M Balbir Singh
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Balbir Singh @ 2017-05-25  3:36 UTC (permalink / raw)
  To: linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, christophe.leroy, paulus, rashmica.g, Balbir Singh

Move from mwrite() to patch_instruction() for xmon for
breakpoint addition and removal.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/xmon/xmon.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index f11f656..0952014 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -53,6 +53,7 @@
 #include <asm/xive.h>
 #include <asm/opal.h>
 #include <asm/firmware.h>
+#include <asm/code-patching.h>
 
 #ifdef CONFIG_PPC64
 #include <asm/hvcall.h>
@@ -837,7 +838,8 @@ static void insert_bpts(void)
 		store_inst(&bp->instr[0]);
 		if (bp->enabled & BP_CIABR)
 			continue;
-		if (mwrite(bp->address, &bpinstr, 4) != 4) {
+		if (patch_instruction((unsigned int *)bp->address,
+							bpinstr) != 0) {
 			printf("Couldn't write instruction at %lx, "
 			       "disabling breakpoint there\n", bp->address);
 			bp->enabled &= ~BP_TRAP;
@@ -874,7 +876,8 @@ static void remove_bpts(void)
 			continue;
 		if (mread(bp->address, &instr, 4) == 4
 		    && instr == bpinstr
-		    && mwrite(bp->address, &bp->instr, 4) != 4)
+		    && patch_instruction(
+			(unsigned int *)bp->address, bp->instr[0]) != 0)
 			printf("Couldn't remove breakpoint at %lx\n",
 			       bp->address);
 		else
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v1 4/8] powerpc/vmlinux.lds: Align __init_begin to 16M
  2017-05-25  3:36 [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
                   ` (2 preceding siblings ...)
  2017-05-25  3:36 ` [PATCH v1 3/8] powerpc/xmon: Add patch_instruction support for xmon Balbir Singh
@ 2017-05-25  3:36 ` Balbir Singh
  2017-05-25  3:36 ` [PATCH v1 5/8] powerpc/platform/pseries/lpar: Fix updatepp and updateboltedpp Balbir Singh
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Balbir Singh @ 2017-05-25  3:36 UTC (permalink / raw)
  To: linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, christophe.leroy, paulus, rashmica.g, Balbir Singh

For CONFIG_STRICT_KERNEL_RWX, align __init_begin to
16M. We use 16M since it is the larger of the two
linear mapping sizes: 2M on radix and 16M on hash.
The plan is to have .text, .rodata and everything up
to __init_begin marked as RX. Note we still have
executable read-only data; we could further align
the read-only data to another 16M boundary, but then
the linker starts using stubs and that breaks our
assembler code in head_64.S.

We don't use multiple PT_LOAD segments in PHDRS
because we are not sure all bootloaders support them.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/kernel/vmlinux.lds.S | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 2f793be..3973829 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -8,6 +8,12 @@
 #include <asm/cache.h>
 #include <asm/thread_info.h>
 
+#ifdef CONFIG_STRICT_KERNEL_RWX
+#define STRICT_ALIGN_SIZE	(1 << 24)
+#else
+#define STRICT_ALIGN_SIZE	PAGE_SIZE
+#endif
+
 ENTRY(_stext)
 
 PHDRS {
@@ -132,7 +138,7 @@ SECTIONS
 	PROVIDE32 (etext = .);
 
 	/* Read-only data */
-	RODATA
+	RO_DATA(PAGE_SIZE)
 
 	EXCEPTION_TABLE(0)
 
@@ -149,7 +155,7 @@ SECTIONS
 /*
  * Init sections discarded at runtime
  */
-	. = ALIGN(PAGE_SIZE);
+	. = ALIGN(STRICT_ALIGN_SIZE);
 	__init_begin = .;
 	INIT_TEXT_SECTION(PAGE_SIZE) :kernel
 
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v1 5/8] powerpc/platform/pseries/lpar: Fix updatepp and updateboltedpp
  2017-05-25  3:36 [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
                   ` (3 preceding siblings ...)
  2017-05-25  3:36 ` [PATCH v1 4/8] powerpc/vmlinux.lds: Align __init_begin to 16M Balbir Singh
@ 2017-05-25  3:36 ` Balbir Singh
  2017-05-25  3:36 ` [PATCH v1 6/8] powerpc/mm/hash: Implement mark_rodata_ro() for hash Balbir Singh
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Balbir Singh @ 2017-05-25  3:36 UTC (permalink / raw)
  To: linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, christophe.leroy, paulus, rashmica.g, Balbir Singh

PAPR has pp0 in bit 55; we currently assumed that
pp0 is bit 0 (all bits in IBM order). This patch
fixes the pp0 bit for both routines that use
H_PROTECT.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/platforms/pseries/lpar.c | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 6541d0b..83db643 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -301,7 +301,7 @@ static long pSeries_lpar_hpte_updatepp(unsigned long slot,
 				       int ssize, unsigned long inv_flags)
 {
 	unsigned long lpar_rc;
-	unsigned long flags = (newpp & 7) | H_AVPN;
+	unsigned long flags;
 	unsigned long want_v;
 
 	want_v = hpte_encode_avpn(vpn, psize, ssize);
@@ -309,6 +309,11 @@ static long pSeries_lpar_hpte_updatepp(unsigned long slot,
 	pr_devel("    update: avpnv=%016lx, hash=%016lx, f=%lx, psize: %d ...",
 		 want_v, slot, flags, psize);
 
+	/*
+	 * Move pp0 and set the mask, pp0 is bit 55
+	 * We ignore the keys for now.
+	 */
+	flags = ((newpp & HPTE_R_PP0) >> 55) | (newpp & 7) | H_AVPN;
 	lpar_rc = plpar_pte_protect(flags, slot, want_v);
 
 	if (lpar_rc == H_NOT_FOUND) {
@@ -379,7 +384,11 @@ static void pSeries_lpar_hpte_updateboltedpp(unsigned long newpp,
 	slot = pSeries_lpar_hpte_find(vpn, psize, ssize);
 	BUG_ON(slot == -1);
 
-	flags = newpp & 7;
+	/*
+	 * Move pp0 and set the mask, pp0 is bit 55
+	 * We ignore the keys for now.
+	 */
+	flags = ((newpp & HPTE_R_PP0) >> 55) | (newpp & 7);
 	lpar_rc = plpar_pte_protect(flags, slot, 0);
 
 	BUG_ON(lpar_rc != H_SUCCESS);
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v1 6/8] powerpc/mm/hash: Implement mark_rodata_ro() for hash
  2017-05-25  3:36 [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
                   ` (4 preceding siblings ...)
  2017-05-25  3:36 ` [PATCH v1 5/8] powerpc/platform/pseries/lpar: Fix updatepp and updateboltedpp Balbir Singh
@ 2017-05-25  3:36 ` Balbir Singh
  2017-05-25  3:36 ` [PATCH v1 7/8] powerpc/Kconfig: Enable STRICT_KERNEL_RWX Balbir Singh
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 31+ messages in thread
From: Balbir Singh @ 2017-05-25  3:36 UTC (permalink / raw)
  To: linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, christophe.leroy, paulus, rashmica.g, Balbir Singh

With hash we update the bolted PTE to mark it
read-only. We rely on MMU_FTR_KERNEL_RO to generate
the correct permissions for read-only text. The
radix implementation just prints a warning for now.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/include/asm/book3s/64/hash.h  |  3 +++
 arch/powerpc/include/asm/book3s/64/radix.h |  4 ++++
 arch/powerpc/mm/pgtable-hash64.c           | 35 ++++++++++++++++++++++++++++++
 arch/powerpc/mm/pgtable-radix.c            |  7 ++++++
 arch/powerpc/mm/pgtable_64.c               |  9 ++++++++
 5 files changed, 58 insertions(+)

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 4e957b0..0ce513f 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -89,6 +89,9 @@ static inline int hash__pgd_bad(pgd_t pgd)
 {
 	return (pgd_val(pgd) == 0);
 }
+#ifdef CONFIG_STRICT_KERNEL_RWX
+extern void hash__mark_rodata_ro(void);
+#endif
 
 extern void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
 			    pte_t *ptep, unsigned long pte, int huge);
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index ac16d19..368cb54 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -116,6 +116,10 @@
 #define RADIX_PUD_TABLE_SIZE	(sizeof(pud_t) << RADIX_PUD_INDEX_SIZE)
 #define RADIX_PGD_TABLE_SIZE	(sizeof(pgd_t) << RADIX_PGD_INDEX_SIZE)
 
+#ifdef CONFIG_STRICT_KERNEL_RWX
+extern void radix__mark_rodata_ro(void);
+#endif
+
 static inline unsigned long __radix_pte_update(pte_t *ptep, unsigned long clr,
 					       unsigned long set)
 {
diff --git a/arch/powerpc/mm/pgtable-hash64.c b/arch/powerpc/mm/pgtable-hash64.c
index 8b85a14..dda808b 100644
--- a/arch/powerpc/mm/pgtable-hash64.c
+++ b/arch/powerpc/mm/pgtable-hash64.c
@@ -11,8 +11,12 @@
 
 #include <linux/sched.h>
 #include <linux/mm_types.h>
+#include <linux/mm.h>
 
 #include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/sections.h>
+#include <asm/mmu.h>
 #include <asm/tlb.h>
 
 #include "mmu_decl.h"
@@ -342,3 +346,34 @@ int hash__has_transparent_hugepage(void)
 	return 1;
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+#ifdef CONFIG_STRICT_KERNEL_RWX
+void hash__mark_rodata_ro(void)
+{
+	unsigned long start = (unsigned long)_stext;
+	unsigned long end = (unsigned long)__init_begin;
+	unsigned long idx;
+	unsigned int step, shift;
+	unsigned long newpp = PP_RXXX;
+
+	if (!mmu_has_feature(MMU_FTR_KERNEL_RO)) {
+		pr_info("R/O rodata not supported\n");
+		return;
+	}
+
+	shift = mmu_psize_defs[mmu_linear_psize].shift;
+	step = 1 << shift;
+
+	start = ((start + step - 1) >> shift) << shift;
+	end = (end >> shift) << shift;
+
+	pr_devel("marking ro start %lx, end %lx, step %x\n",
+			start, end, step);
+
+	for (idx = start; idx < end; idx += step)
+		/* Not sure if we can do much with the return value */
+		mmu_hash_ops.hpte_updateboltedpp(newpp, idx, mmu_linear_psize,
+							mmu_kernel_ssize);
+
+}
+#endif
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index c28165d..8f42309 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -108,6 +108,13 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
 	return 0;
 }
 
+#ifdef CONFIG_STRICT_KERNEL_RWX
+void radix__mark_rodata_ro(void)
+{
+	pr_warn("Not yet implemented for radix\n");
+}
+#endif
+
 static inline void __meminit print_mapping(unsigned long start,
 					   unsigned long end,
 					   unsigned long size)
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index db93cf7..a769e4f 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -479,3 +479,12 @@ void mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
 }
 EXPORT_SYMBOL_GPL(mmu_partition_table_set_entry);
 #endif /* CONFIG_PPC_BOOK3S_64 */
+
+#ifdef CONFIG_STRICT_KERNEL_RWX
+void mark_rodata_ro(void)
+{
+	if (radix_enabled())
+		return radix__mark_rodata_ro();
+	return hash__mark_rodata_ro();
+}
+#endif
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v1 7/8] powerpc/Kconfig: Enable STRICT_KERNEL_RWX
  2017-05-25  3:36 [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
                   ` (5 preceding siblings ...)
  2017-05-25  3:36 ` [PATCH v1 6/8] powerpc/mm/hash: Implement mark_rodata_ro() for hash Balbir Singh
@ 2017-05-25  3:36 ` Balbir Singh
  2017-05-25 16:45   ` kbuild test robot
  2017-05-25  3:36 ` [PATCH v1 8/8] powerpc/mm/ptdump: Dump the first entry of the linear mapping as well Balbir Singh
  2017-05-25  6:57 ` [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
  8 siblings, 1 reply; 31+ messages in thread
From: Balbir Singh @ 2017-05-25  3:36 UTC (permalink / raw)
  To: linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, christophe.leroy, paulus, rashmica.g, Balbir Singh

We have the basic support in place: patching of R/O
text sections, linker script changes extending the
alignment of text and data, and mark_rodata_ro().

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index f7c8f99..8b3f03b 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -171,6 +171,7 @@ config PPC
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
+	select ARCH_HAS_STRICT_KERNEL_RWX	if PPC64 && PPC_BOOK3S
 	select HAVE_CBPF_JIT			if !PPC64
 	select HAVE_CONTEXT_TRACKING		if PPC64
 	select HAVE_DEBUG_KMEMLEAK
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v1 8/8] powerpc/mm/ptdump: Dump the first entry of the linear mapping as well
  2017-05-25  3:36 [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
                   ` (6 preceding siblings ...)
  2017-05-25  3:36 ` [PATCH v1 7/8] powerpc/Kconfig: Enable STRICT_KERNEL_RWX Balbir Singh
@ 2017-05-25  3:36 ` Balbir Singh
  2017-06-05 10:21   ` [v1, " Michael Ellerman
  2017-05-25  6:57 ` [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
  8 siblings, 1 reply; 31+ messages in thread
From: Balbir Singh @ 2017-05-25  3:36 UTC (permalink / raw)
  To: linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, christophe.leroy, paulus, rashmica.g, Balbir Singh

The check in hpte_find() should be < and not <= for PAGE_OFFSET

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/mm/dump_hashpagetable.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/mm/dump_hashpagetable.c b/arch/powerpc/mm/dump_hashpagetable.c
index c6b900f..b1c144b 100644
--- a/arch/powerpc/mm/dump_hashpagetable.c
+++ b/arch/powerpc/mm/dump_hashpagetable.c
@@ -335,7 +335,7 @@ static unsigned long hpte_find(struct pg_state *st, unsigned long ea, int psize)
 	unsigned long rpn, lp_bits;
 	int base_psize = 0, actual_psize = 0;
 
-	if (ea <= PAGE_OFFSET)
+	if (ea < PAGE_OFFSET)
 		return -1;
 
 	/* Look in primary table */
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 0/8] Enable STRICT_KERNEL_RWX
  2017-05-25  3:36 [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
                   ` (7 preceding siblings ...)
  2017-05-25  3:36 ` [PATCH v1 8/8] powerpc/mm/ptdump: Dump the first entry of the linear mapping as well Balbir Singh
@ 2017-05-25  6:57 ` Balbir Singh
  2017-05-30 14:32   ` Naveen N. Rao
  8 siblings, 1 reply; 31+ messages in thread
From: Balbir Singh @ 2017-05-25  6:57 UTC (permalink / raw)
  To: linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, christophe.leroy, paulus, rashmica.g

On Thu, 25 May 2017 13:36:42 +1000
Balbir Singh <bsingharora@gmail.com> wrote:

> Enable STRICT_KERNEL_RWX for PPC64/BOOK3S
> 
> These patches enable RX mappings of kernel text.
> rodata is mapped RX as well as a trade-off, there
> are more details in the patch description
> 
> As a prerequisite for R/O text, patch_instruction
> is moved over to using a separate mapping that
> allows write to kernel text. xmon/ftrace/kprobes
> have been moved over to work with patch_instruction
> 

I think optprobes needs to be moved over as well.
I did not realize we have the optprobe_trampoline in
text (yikes!!)

> There are a few bug fixes, the updatepp and updateboltedpp
> did not use flags as described in PAPR and the ptdump
> utility ignored the first PFN
> 

Radix support is going to be added incrementally. I would
request a review of what's there. Maybe the bug fixes for
lpar.c and the hash page table dumper can go in
independently, along with the basic cleanup of kprobes.

Balbir Singh.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching
  2017-05-25  3:36 ` [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching Balbir Singh
@ 2017-05-25  9:11   ` kbuild test robot
  2017-05-28 14:29   ` christophe leroy
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 31+ messages in thread
From: kbuild test robot @ 2017-05-25  9:11 UTC (permalink / raw)
  To: Balbir Singh
  Cc: kbuild-all, linuxppc-dev, mpe, paulus, naveen.n.rao, rashmica.g

[-- Attachment #1: Type: text/plain, Size: 1721 bytes --]

Hi Balbir,

[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.12-rc2 next-20170525]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Balbir-Singh/Enable-STRICT_KERNEL_RWX/20170525-150234
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-allnoconfig (attached as .config)
compiler: powerpc-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
        wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=powerpc 

All errors (new ones prefixed by >>):

   arch/powerpc/lib/code-patching.c: In function 'kernel_map_addr':
>> arch/powerpc/lib/code-patching.c:57:8: error: implicit declaration of function 'map_kernel_page' [-Werror=implicit-function-declaration]
     err = map_kernel_page((unsigned long)text_poke_area->addr,
           ^~~~~~~~~~~~~~~
   cc1: all warnings being treated as errors

vim +/map_kernel_page +57 arch/powerpc/lib/code-patching.c

    51	
    52		if (is_vmalloc_addr(addr))
    53			pfn = vmalloc_to_pfn(addr);
    54		else
    55			pfn = __pa_symbol(addr) >> PAGE_SHIFT;
    56	
  > 57		err = map_kernel_page((unsigned long)text_poke_area->addr,
    58				(pfn << PAGE_SHIFT), _PAGE_KERNEL_RW | _PAGE_PRESENT);
    59		pr_devel("Mapped addr %p with pfn %lx\n", text_poke_area->addr, pfn);
    60		if (err)

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 6299 bytes --]

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 7/8] powerpc/Kconfig: Enable STRICT_KERNEL_RWX
  2017-05-25  3:36 ` [PATCH v1 7/8] powerpc/Kconfig: Enable STRICT_KERNEL_RWX Balbir Singh
@ 2017-05-25 16:45   ` kbuild test robot
  2017-05-29  8:00     ` Christophe LEROY
  0 siblings, 1 reply; 31+ messages in thread
From: kbuild test robot @ 2017-05-25 16:45 UTC (permalink / raw)
  To: Balbir Singh
  Cc: kbuild-all, linuxppc-dev, mpe, paulus, naveen.n.rao, rashmica.g

[-- Attachment #1: Type: text/plain, Size: 3170 bytes --]

Hi Balbir,

[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.12-rc2 next-20170525]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Balbir-Singh/Enable-STRICT_KERNEL_RWX/20170525-150234
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-allmodconfig (attached as .config)
compiler: powerpc64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
reproduce:
        wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=powerpc 

All errors (new ones prefixed by >>):

>> kernel//power/snapshot.c:40:28: fatal error: asm/set_memory.h: No such file or directory
    #include <asm/set_memory.h>
                               ^
   compilation terminated.

vim +40 kernel//power/snapshot.c

25761b6eb Rafael J. Wysocki    2005-10-30  24  #include <linux/bootmem.h>
38b8d208a Ingo Molnar          2017-02-08  25  #include <linux/nmi.h>
25761b6eb Rafael J. Wysocki    2005-10-30  26  #include <linux/syscalls.h>
25761b6eb Rafael J. Wysocki    2005-10-30  27  #include <linux/console.h>
25761b6eb Rafael J. Wysocki    2005-10-30  28  #include <linux/highmem.h>
846705deb Rafael J. Wysocki    2008-11-26  29  #include <linux/list.h>
5a0e3ad6a Tejun Heo            2010-03-24  30  #include <linux/slab.h>
52f5684c8 Gideon Israel Dsouza 2014-04-07  31  #include <linux/compiler.h>
db5976058 Tina Ruchandani      2014-10-30  32  #include <linux/ktime.h>
25761b6eb Rafael J. Wysocki    2005-10-30  33  
7c0f6ba68 Linus Torvalds       2016-12-24  34  #include <linux/uaccess.h>
25761b6eb Rafael J. Wysocki    2005-10-30  35  #include <asm/mmu_context.h>
25761b6eb Rafael J. Wysocki    2005-10-30  36  #include <asm/pgtable.h>
25761b6eb Rafael J. Wysocki    2005-10-30  37  #include <asm/tlbflush.h>
25761b6eb Rafael J. Wysocki    2005-10-30  38  #include <asm/io.h>
50327ddfb Laura Abbott         2017-05-08  39  #ifdef CONFIG_STRICT_KERNEL_RWX
50327ddfb Laura Abbott         2017-05-08 @40  #include <asm/set_memory.h>
50327ddfb Laura Abbott         2017-05-08  41  #endif
25761b6eb Rafael J. Wysocki    2005-10-30  42  
25761b6eb Rafael J. Wysocki    2005-10-30  43  #include "power.h"
25761b6eb Rafael J. Wysocki    2005-10-30  44  
0f5bf6d0a Laura Abbott         2017-02-06  45  #ifdef CONFIG_STRICT_KERNEL_RWX
4c0b6c10f Rafael J. Wysocki    2016-07-10  46  static bool hibernate_restore_protection;
4c0b6c10f Rafael J. Wysocki    2016-07-10  47  static bool hibernate_restore_protection_active;
4c0b6c10f Rafael J. Wysocki    2016-07-10  48  

:::::: The code at line 40 was first introduced by commit
:::::: 50327ddfbc926e68da1958e4fac51f1106f5e730 kernel/power/snapshot.c: use set_memory.h header

:::::: TO: Laura Abbott <labbott@redhat.com>
:::::: CC: Linus Torvalds <torvalds@linux-foundation.org>

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 53775 bytes --]

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching
  2017-05-25  3:36 ` [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching Balbir Singh
  2017-05-25  9:11   ` kbuild test robot
@ 2017-05-28 14:29   ` christophe leroy
  2017-05-28 22:58     ` Balbir Singh
  2017-05-28 15:59   ` christophe leroy
  2017-05-28 18:00   ` christophe leroy
  3 siblings, 1 reply; 31+ messages in thread
From: christophe leroy @ 2017-05-28 14:29 UTC (permalink / raw)
  To: Balbir Singh, linuxppc-dev, mpe; +Cc: naveen.n.rao, ananth, paulus, rashmica.g



On 25/05/2017 at 05:36, Balbir Singh wrote:
> Today our patching happens via direct copy and
> patch_instruction. The patching code is well
> contained in the sense that copying bits are limited.
>
> While considering implementation of CONFIG_STRICT_RWX,
> the first requirement is to create another mapping
> that will allow for patching. We create the window using
> text_poke_area, allocated via get_vm_area(), which might
> be overkill. We can do per-cpu stuff as well. The
> downside of these patches is that patch_instruction is
> now synchronized using a lock. Other arches do similar
> things, but use fixmaps. The reason for not using
> fixmaps is to make use of any randomization in the
> future. The code also relies on set_pte_at and pte_clear
> to do the appropriate tlb flushing.
>
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>

[...]

> +static int kernel_map_addr(void *addr)
> +{
> +	unsigned long pfn;
>  	int err;
>
> -	__put_user_size(instr, addr, 4, err);
> +	if (is_vmalloc_addr(addr))
> +		pfn = vmalloc_to_pfn(addr);
> +	else
> +		pfn = __pa_symbol(addr) >> PAGE_SHIFT;
> +
> +	err = map_kernel_page((unsigned long)text_poke_area->addr,
> +			(pfn << PAGE_SHIFT), _PAGE_KERNEL_RW | _PAGE_PRESENT);

map_kernel_page() doesn't exist on powerpc32, so compilation fails.

However, a similar function exists and is called map_page().

Maybe the below modification could help (not tested yet):

Christophe



---
  arch/powerpc/include/asm/book3s/32/pgtable.h | 2 ++
  arch/powerpc/include/asm/nohash/32/pgtable.h | 2 ++
  arch/powerpc/mm/8xx_mmu.c                    | 2 +-
  arch/powerpc/mm/dma-noncoherent.c            | 2 +-
  arch/powerpc/mm/mem.c                        | 4 ++--
  arch/powerpc/mm/mmu_decl.h                   | 1 -
  arch/powerpc/mm/pgtable_32.c                 | 8 ++++----
  7 files changed, 12 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 26ed228..7fb7558 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -297,6 +297,8 @@ static inline void __ptep_set_access_flags(struct mm_struct *mm,
  extern int get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep,
  		      pmd_t **pmdp);

+int map_kernel_page(unsigned long va, phys_addr_t pa, int flags);
+
  /* Generic accessors to PTE bits */
  static inline int pte_write(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_RW);}
  static inline int pte_dirty(pte_t pte)		{ return !!(pte_val(pte) & _PAGE_DIRTY); }
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
index 5134ade..9131426 100644
--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
@@ -340,6 +340,8 @@ static inline void __ptep_set_access_flags(struct mm_struct *mm,
  extern int get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep,
  		      pmd_t **pmdp);

+int map_kernel_page(unsigned long va, phys_addr_t pa, int flags);
+
  #endif /* !__ASSEMBLY__ */

  #endif /* __ASM_POWERPC_NOHASH_32_PGTABLE_H */
diff --git a/arch/powerpc/mm/8xx_mmu.c b/arch/powerpc/mm/8xx_mmu.c
index 6c5025e..f4c6472 100644
--- a/arch/powerpc/mm/8xx_mmu.c
+++ b/arch/powerpc/mm/8xx_mmu.c
@@ -88,7 +88,7 @@ static void mmu_mapin_immr(void)
  	int offset;

  	for (offset = 0; offset < IMMR_SIZE; offset += PAGE_SIZE)
-		map_page(v + offset, p + offset, f);
+		map_kernel_page(v + offset, p + offset, f);
  }

  /* Address of instructions to patch */
diff --git a/arch/powerpc/mm/dma-noncoherent.c b/arch/powerpc/mm/dma-noncoherent.c
index 2dc74e5..3825284 100644
--- a/arch/powerpc/mm/dma-noncoherent.c
+++ b/arch/powerpc/mm/dma-noncoherent.c
@@ -227,7 +227,7 @@ __dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *handle, gfp_t

  		do {
  			SetPageReserved(page);
-			map_page(vaddr, page_to_phys(page),
+			map_kernel_page(vaddr, page_to_phys(page),
  				 pgprot_val(pgprot_noncached(PAGE_KERNEL)));
  			page++;
  			vaddr += PAGE_SIZE;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 9ee536e..04f4c98 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -313,11 +313,11 @@ void __init paging_init(void)
  	unsigned long end = __fix_to_virt(FIX_HOLE);

  	for (; v < end; v += PAGE_SIZE)
-		map_page(v, 0, 0); /* XXX gross */
+		map_kernel_page(v, 0, 0); /* XXX gross */
  #endif

  #ifdef CONFIG_HIGHMEM
-	map_page(PKMAP_BASE, 0, 0);	/* XXX gross */
+	map_kernel_page(PKMAP_BASE, 0, 0);	/* XXX gross */
  	pkmap_page_table = virt_to_kpte(PKMAP_BASE);

  	kmap_pte = virt_to_kpte(__fix_to_virt(FIX_KMAP_BEGIN));
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index f988db6..d46128b 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -94,7 +94,6 @@ extern void _tlbia(void);
  #ifdef CONFIG_PPC32

  extern void mapin_ram(void);
-extern int map_page(unsigned long va, phys_addr_t pa, int flags);
  extern void setbat(int index, unsigned long virt, phys_addr_t phys,
  		   unsigned int size, pgprot_t prot);

diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index a65c0b4..9c23c09 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -189,7 +189,7 @@ __ioremap_caller(phys_addr_t addr, unsigned long size, unsigned long flags,

  	err = 0;
  	for (i = 0; i < size && err == 0; i += PAGE_SIZE)
-		err = map_page(v+i, p+i, flags);
+		err = map_kernel_page(v+i, p+i, flags);
  	if (err) {
  		if (slab_is_available())
  			vunmap((void *)v);
@@ -215,7 +215,7 @@ void iounmap(volatile void __iomem *addr)
  }
  EXPORT_SYMBOL(iounmap);

-int map_page(unsigned long va, phys_addr_t pa, int flags)
+int map_kernel_page(unsigned long va, phys_addr_t pa, int flags)
  {
  	pmd_t *pd;
  	pte_t *pg;
@@ -255,7 +255,7 @@ void __init __mapin_ram_chunk(unsigned long offset, unsigned long top)
  		ktext = ((char *)v >= _stext && (char *)v < etext) ||
  			((char *)v >= _sinittext && (char *)v < _einittext);
  		f = ktext ? pgprot_val(PAGE_KERNEL_TEXT) : pgprot_val(PAGE_KERNEL);
-		map_page(v, p, f);
+		map_kernel_page(v, p, f);
  #ifdef CONFIG_PPC_STD_MMU_32
  		if (ktext)
  			hash_preload(&init_mm, v, 0, 0x300);
@@ -387,7 +387,7 @@ void __set_fixmap (enum fixed_addresses idx, phys_addr_t phys, pgprot_t flags)
  		return;
  	}

-	map_page(address, phys, pgprot_val(flags));
+	map_kernel_page(address, phys, pgprot_val(flags));
  	fixmaps++;
  }

-- 
2.2.2

---
The absence of viruses in this email has been verified by the Avast antivirus software.
https://www.avast.com/antivirus

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching
  2017-05-25  3:36 ` [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching Balbir Singh
  2017-05-25  9:11   ` kbuild test robot
  2017-05-28 14:29   ` christophe leroy
@ 2017-05-28 15:59   ` christophe leroy
  2017-05-28 22:50     ` Balbir Singh
  2017-05-28 18:00   ` christophe leroy
  3 siblings, 1 reply; 31+ messages in thread
From: christophe leroy @ 2017-05-28 15:59 UTC (permalink / raw)
  To: Balbir Singh, linuxppc-dev, mpe; +Cc: naveen.n.rao, ananth, paulus, rashmica.g



On 25/05/2017 at 05:36, Balbir Singh wrote:
> Today our patching happens via direct copy and
> patch_instruction. The patching code is well
> contained in the sense that copying bits are limited.
>
> While considering implementation of CONFIG_STRICT_RWX,
> the first requirement is to create another mapping
> that will allow for patching. We create the window using
> text_poke_area, allocated via get_vm_area(), which might
> be overkill. We can do per-cpu stuff as well. The
> downside of these patches is that patch_instruction is
> now synchronized using a lock. Other arches do similar
> things, but use fixmaps. The reason for not using
> fixmaps is to make use of any randomization in the
> future. The code also relies on set_pte_at and pte_clear
> to do the appropriate tlb flushing.

Isn't it overkill to remap the text in another area?

Among the 6 arches implementing CONFIG_STRICT_KERNEL_RWX (arm, arm64, parisc, s390, x86/32, x86/64):
- arm, x86/32 and x86/64 set text RW during the modification
- s390 seems to use a special instruction which bypasses write protection
- parisc doesn't seem to implement any function which modifies kernel text.

Therefore it seems only arm64 does it via another mapping.
Wouldn't it be lighter to just unprotect the memory during the modification, as done on arm and x86?

Or another alternative could be to disable the DMMU and do the write at the physical address?

Christophe

>
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> ---
>  arch/powerpc/lib/code-patching.c | 88 ++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 84 insertions(+), 4 deletions(-)
>
> diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
> index 500b0f6..0a16b2f 100644
> --- a/arch/powerpc/lib/code-patching.c
> +++ b/arch/powerpc/lib/code-patching.c
> @@ -16,19 +16,98 @@
>  #include <asm/code-patching.h>
>  #include <linux/uaccess.h>
>  #include <linux/kprobes.h>
> +#include <asm/pgtable.h>
> +#include <asm/tlbflush.h>
>
> +struct vm_struct *text_poke_area;
> +static DEFINE_RAW_SPINLOCK(text_poke_lock);
>
> -int patch_instruction(unsigned int *addr, unsigned int instr)
> +/*
> + * This is an early_initcall and early_initcalls happen at the right time
> + * for us, after slab is enabled and before we mark ro pages R/O. In the
> + * future if get_vm_area is randomized, this will be more flexible than
> + * fixmap
> + */
> +static int __init setup_text_poke_area(void)
>  {
> +	text_poke_area = get_vm_area(PAGE_SIZE, VM_ALLOC);
> +	if (!text_poke_area) {
> +		WARN_ONCE(1, "could not create area for mapping kernel addrs"
> +				" which allow for patching kernel code\n");
> +		return 0;
> +	}
> +	pr_info("text_poke area ready...\n");
> +	raw_spin_lock_init(&text_poke_lock);
> +	return 0;
> +}
> +
> +/*
> + * This can be called for kernel text or a module.
> + */
> +static int kernel_map_addr(void *addr)
> +{
> +	unsigned long pfn;
>  	int err;
>
> -	__put_user_size(instr, addr, 4, err);
> +	if (is_vmalloc_addr(addr))
> +		pfn = vmalloc_to_pfn(addr);
> +	else
> +		pfn = __pa_symbol(addr) >> PAGE_SHIFT;
> +
> +	err = map_kernel_page((unsigned long)text_poke_area->addr,
> +			(pfn << PAGE_SHIFT), _PAGE_KERNEL_RW | _PAGE_PRESENT);
> +	pr_devel("Mapped addr %p with pfn %lx\n", text_poke_area->addr, pfn);
>  	if (err)
> -		return err;
> -	asm ("dcbst 0, %0; sync; icbi 0,%0; sync; isync" : : "r" (addr));
> +		return -1;
>  	return 0;
>  }
>
> +static inline void kernel_unmap_addr(void *addr)
> +{
> +	pte_t *pte;
> +	unsigned long kaddr = (unsigned long)addr;
> +
> +	pte = pte_offset_kernel(pmd_offset(pud_offset(pgd_offset_k(kaddr),
> +				kaddr), kaddr), kaddr);
> +	pr_devel("clearing mm %p, pte %p, kaddr %lx\n", &init_mm, pte, kaddr);
> +	pte_clear(&init_mm, kaddr, pte);
> +}
> +
> +int patch_instruction(unsigned int *addr, unsigned int instr)
> +{
> +	int err;
> +	unsigned int *dest = NULL;
> +	unsigned long flags;
> +	unsigned long kaddr = (unsigned long)addr;
> +
> +	/*
> +	 * During early early boot patch_instruction is called
> +	 * when text_poke_area is not ready, but we still need
> +	 * to allow patching. We just do the plain old patching
> +	 */
> +	if (!text_poke_area) {
> +		__put_user_size(instr, addr, 4, err);
> +		asm ("dcbst 0, %0; sync; icbi 0,%0; sync; isync" :: "r" (addr));
> +		return 0;
> +	}
> +
> +	raw_spin_lock_irqsave(&text_poke_lock, flags);
> +	if (kernel_map_addr(addr)) {
> +		err = -1;
> +		goto out;
> +	}
> +
> +	dest = (unsigned int *)(text_poke_area->addr) +
> +		((kaddr & ~PAGE_MASK) / sizeof(unsigned int));
> +	__put_user_size(instr, dest, 4, err);
> +	asm ("dcbst 0, %0; sync; icbi 0,%0; sync; isync" :: "r" (dest));
> +	kernel_unmap_addr(text_poke_area->addr);
> +out:
> +	raw_spin_unlock_irqrestore(&text_poke_lock, flags);
> +	return err;
> +}
> +NOKPROBE_SYMBOL(patch_instruction);
> +
>  int patch_branch(unsigned int *addr, unsigned long target, int flags)
>  {
>  	return patch_instruction(addr, create_branch(addr, target, flags));
> @@ -514,3 +593,4 @@ static int __init test_code_patching(void)
>  late_initcall(test_code_patching);
>
>  #endif /* CONFIG_CODE_PATCHING_SELFTEST */
> +early_initcall(setup_text_poke_area);
>


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching
  2017-05-25  3:36 ` [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching Balbir Singh
                     ` (2 preceding siblings ...)
  2017-05-28 15:59   ` christophe leroy
@ 2017-05-28 18:00   ` christophe leroy
  2017-05-28 22:15     ` Balbir Singh
  3 siblings, 1 reply; 31+ messages in thread
From: christophe leroy @ 2017-05-28 18:00 UTC (permalink / raw)
  To: Balbir Singh, linuxppc-dev, mpe; +Cc: naveen.n.rao, ananth, paulus, rashmica.g



On 25/05/2017 at 05:36, Balbir Singh wrote:
> Today our patching happens via direct copy and
> patch_instruction. The patching code is well
> contained in the sense that copying bits are limited.
>
> While considering implementation of CONFIG_STRICT_RWX,
> the first requirement is to create another mapping
> that will allow for patching. We create the window using
> text_poke_area, allocated via get_vm_area(), which might
> be overkill. We can do per-cpu stuff as well. The
> downside of these patches is that patch_instruction is
> now synchronized using a lock. Other arches do similar
> things, but use fixmaps. The reason for not using
> fixmaps is to make use of any randomization in the
> future. The code also relies on set_pte_at and pte_clear
> to do the appropriate tlb flushing.
>
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> ---
>  arch/powerpc/lib/code-patching.c | 88 ++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 84 insertions(+), 4 deletions(-)
>

[...]

> +static int kernel_map_addr(void *addr)
> +{
> +	unsigned long pfn;
>  	int err;
>
> -	__put_user_size(instr, addr, 4, err);
> +	if (is_vmalloc_addr(addr))
> +		pfn = vmalloc_to_pfn(addr);
> +	else
> +		pfn = __pa_symbol(addr) >> PAGE_SHIFT;
> +
> +	err = map_kernel_page((unsigned long)text_poke_area->addr,
> +			(pfn << PAGE_SHIFT), _PAGE_KERNEL_RW | _PAGE_PRESENT);



Why not use PAGE_KERNEL instead of _PAGE_KERNEL_RW | _PAGE_PRESENT?

From asm/pte-common.h:

#define PAGE_KERNEL	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
#define _PAGE_BASE	(_PAGE_BASE_NC)
#define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)

Also, in pte-common.h, maybe the following defines could/should be reworked once your series is applied, shouldn't they?

/* Protection used for kernel text. We want the debuggers to be able to
  * set breakpoints anywhere, so don't write protect the kernel text
  * on platforms where such control is possible.
  */
#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
	defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
#define PAGE_KERNEL_TEXT	PAGE_KERNEL_X
#else
#define PAGE_KERNEL_TEXT	PAGE_KERNEL_ROX
#endif


Christophe


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching
  2017-05-28 18:00   ` christophe leroy
@ 2017-05-28 22:15     ` Balbir Singh
  0 siblings, 0 replies; 31+ messages in thread
From: Balbir Singh @ 2017-05-28 22:15 UTC (permalink / raw)
  To: christophe leroy, linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, paulus, rashmica.g

On Sun, 2017-05-28 at 20:00 +0200, christophe leroy wrote:
> 
> On 25/05/2017 at 05:36, Balbir Singh wrote:
> > Today our patching happens via direct copy and
> > patch_instruction. The patching code is well
> > contained in the sense that copying bits are limited.
> >
> > While considering implementation of CONFIG_STRICT_RWX,
> > the first requirement is to create another mapping
> > that will allow for patching. We create the window using
> > text_poke_area, allocated via get_vm_area(), which might
> > be overkill. We can do per-cpu stuff as well. The
> > downside of these patches is that patch_instruction is
> > now synchronized using a lock. Other arches do similar
> > things, but use fixmaps. The reason for not using
> > fixmaps is to make use of any randomization in the
> > future. The code also relies on set_pte_at and pte_clear
> > to do the appropriate tlb flushing.
> > 
> > Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> > ---
> >  arch/powerpc/lib/code-patching.c | 88 ++++++++++++++++++++++++++++++++++++++--
> >  1 file changed, 84 insertions(+), 4 deletions(-)
> > 
> 
> [...]
> 
> > +static int kernel_map_addr(void *addr)
> > +{
> > +	unsigned long pfn;
> >  	int err;
> > 
> > -	__put_user_size(instr, addr, 4, err);
> > +	if (is_vmalloc_addr(addr))
> > +		pfn = vmalloc_to_pfn(addr);
> > +	else
> > +		pfn = __pa_symbol(addr) >> PAGE_SHIFT;
> > +
> > +	err = map_kernel_page((unsigned long)text_poke_area->addr,
> > +			(pfn << PAGE_SHIFT), _PAGE_KERNEL_RW | _PAGE_PRESENT);
> 
> 
> 
> Why not use PAGE_KERNEL instead of _PAGE_KERNEL_RW | _PAGE_PRESENT?
>

Will do
 
> From asm/pte-common.h:
> 
> #define PAGE_KERNEL	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
> #define _PAGE_BASE	(_PAGE_BASE_NC)
> #define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
> 
> Also, in pte-common.h, maybe the following defines could/should be reworked once your series is applied, shouldn't they?
> 
> /* Protection used for kernel text. We want the debuggers to be able to
>   * set breakpoints anywhere, so don't write protect the kernel text
>   * on platforms where such control is possible.
>   */
> #if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
> 	defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
> #define PAGE_KERNEL_TEXT	PAGE_KERNEL_X
> #else
> #define PAGE_KERNEL_TEXT	PAGE_KERNEL_ROX
> #endif

Yes, I did see them and I want to rework them.

Thanks,
Balbir Singh.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching
  2017-05-28 15:59   ` christophe leroy
@ 2017-05-28 22:50     ` Balbir Singh
  2017-05-29  5:50       ` Christophe LEROY
  0 siblings, 1 reply; 31+ messages in thread
From: Balbir Singh @ 2017-05-28 22:50 UTC (permalink / raw)
  To: christophe leroy, linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, paulus, rashmica.g

On Sun, 2017-05-28 at 17:59 +0200, christophe leroy wrote:
> 
> On 25/05/2017 at 05:36, Balbir Singh wrote:
> > Today our patching happens via direct copy and
> > patch_instruction. The patching code is well
> > contained in the sense that copying bits are limited.
> >
> > While considering implementation of CONFIG_STRICT_RWX,
> > the first requirement is to create another mapping
> > that will allow for patching. We create the window using
> > text_poke_area, allocated via get_vm_area(), which might
> > be overkill. We can do per-cpu stuff as well. The
> > downside of these patches is that patch_instruction is
> > now synchronized using a lock. Other arches do similar
> > things, but use fixmaps. The reason for not using
> > fixmaps is to make use of any randomization in the
> > future. The code also relies on set_pte_at and pte_clear
> > to do the appropriate tlb flushing.
> 
> Isn't it overkill to remap the text in another area?
>
> Among the 6 arches implementing CONFIG_STRICT_KERNEL_RWX (arm, arm64, parisc, s390, x86/32, x86/64):
> - arm, x86/32 and x86/64 set text RW during the modification

x86 uses set_fixmap() in text_poke(), am I missing something?

> - s390 seems to use a special instruction which bypasses write protection
> - parisc doesn't seem to implement any function which modifies kernel text.
> 
> Therefore it seems only arm64 does it via another mapping.
> Wouldn't it be lighter to just unprotect the memory during the modification, as done on arm and x86?
> 

I am not sure the trade-off is quite that simple. For security, I thought:

1. It would be better to randomize text_poke_area(), which is why I dynamically
allocated it. If we start randomizing get_vm_area(), we get that benefit.
2. text_poke_area() is RW and the normal text is RX. For any attack
to succeed, it would need to find text_poke_area() at the time of patching,
patch the kernel in that small window, and use the normal mapping for
execution.

Generally patch_instruction() is not a fast path, except for ftrace/tracing.
In my tests I did not find the slowdown noticeable.

> Or another alternative could be to disable the DMMU and do the write at the physical address?
>

This would be worse, I think, though we were discussing doing something
like that for xmon. For other cases, I think it opens up a bigger window.

> Christophe

Balbir Singh

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching
  2017-05-28 14:29   ` christophe leroy
@ 2017-05-28 22:58     ` Balbir Singh
  2017-05-29  6:55       ` Christophe LEROY
  0 siblings, 1 reply; 31+ messages in thread
From: Balbir Singh @ 2017-05-28 22:58 UTC (permalink / raw)
  To: christophe leroy, linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, paulus, rashmica.g

On Sun, 2017-05-28 at 16:29 +0200, christophe leroy wrote:
> 
> On 25/05/2017 at 05:36, Balbir Singh wrote:
> > Today our patching happens via direct copy and
> > patch_instruction. The patching code is well
> > contained in the sense that copying bits are limited.
> >
> > While considering implementation of CONFIG_STRICT_RWX,
> > the first requirement is to create another mapping
> > that will allow for patching. We create the window using
> > text_poke_area, allocated via get_vm_area(), which might
> > be overkill. We can do per-cpu stuff as well. The
> > downside of these patches is that patch_instruction is
> > now synchronized using a lock. Other arches do similar
> > things, but use fixmaps. The reason for not using
> > fixmaps is to make use of any randomization in the
> > future. The code also relies on set_pte_at and pte_clear
> > to do the appropriate tlb flushing.
> > 
> > Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> 
> [...]
> 
> > +static int kernel_map_addr(void *addr)
> > +{
> > +	unsigned long pfn;
> >  	int err;
> > 
> > -	__put_user_size(instr, addr, 4, err);
> > +	if (is_vmalloc_addr(addr))
> > +		pfn = vmalloc_to_pfn(addr);
> > +	else
> > +		pfn = __pa_symbol(addr) >> PAGE_SHIFT;
> > +
> > +	err = map_kernel_page((unsigned long)text_poke_area->addr,
> > +			(pfn << PAGE_SHIFT), _PAGE_KERNEL_RW | _PAGE_PRESENT);
> 
> map_kernel_page() doesn't exist on powerpc32, so compilation fails.
> 
> However, a similar function exists and is called map_page().
> 
> Maybe the below modification could help (not tested yet):
> 
> Christophe
>

Thanks, I'll try to get it to compile. As an alternative, how about:

#ifdef CONFIG_PPC32
#define map_kernel_page map_page
#endif

Balbir Singh.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching
  2017-05-28 22:50     ` Balbir Singh
@ 2017-05-29  5:50       ` Christophe LEROY
  0 siblings, 0 replies; 31+ messages in thread
From: Christophe LEROY @ 2017-05-29  5:50 UTC (permalink / raw)
  To: Balbir Singh, linuxppc-dev, mpe; +Cc: naveen.n.rao, ananth, paulus, rashmica.g



On 29/05/2017 at 00:50, Balbir Singh wrote:
> On Sun, 2017-05-28 at 17:59 +0200, christophe leroy wrote:
>>
>> On 25/05/2017 at 05:36, Balbir Singh wrote:
>>> Today our patching happens via direct copy and
>>> patch_instruction. The patching code is well
>>> contained in the sense that copying bits are limited.
>>>
>>> While considering implementation of CONFIG_STRICT_RWX,
>>> the first requirement is to create another mapping
>>> that will allow for patching. We create the window using
>>> text_poke_area, allocated via get_vm_area(), which might
>>> be overkill. We can do per-cpu stuff as well. The
>>> downside of these patches is that patch_instruction is
>>> now synchronized using a lock. Other arches do similar
>>> things, but use fixmaps. The reason for not using
>>> fixmaps is to make use of any randomization in the
>>> future. The code also relies on set_pte_at and pte_clear
>>> to do the appropriate tlb flushing.
>>
>> Isn't it overkill to remap the text in another area?
>>
>> Among the 6 arches implementing CONFIG_STRICT_KERNEL_RWX (arm, arm64, parisc, s390, x86/32, x86/64):
>> - arm, x86/32 and x86/64 set text RW during the modification
> 
> x86 uses set_fixmap() in text_poke(), am I missing something?
> 

Indeed, I had looked at how it is done in ftrace. On x86, text
modifications are done using ftrace_write(), which calls
probe_kernel_write(), which doesn't remap anything. It first calls
ftrace_arch_code_modify_prepare(), which sets the kernel text to RW.

Indeed you are right: text_poke() remaps via fixmap. However, it looks
like text_poke() is used only for kgdb and kprobes.

Christophe

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching
  2017-05-28 22:58     ` Balbir Singh
@ 2017-05-29  6:55       ` Christophe LEROY
  0 siblings, 0 replies; 31+ messages in thread
From: Christophe LEROY @ 2017-05-29  6:55 UTC (permalink / raw)
  To: Balbir Singh, linuxppc-dev, mpe; +Cc: naveen.n.rao, ananth, paulus, rashmica.g



On 29/05/2017 at 00:58, Balbir Singh wrote:
> On Sun, 2017-05-28 at 16:29 +0200, christophe leroy wrote:
>>
>> On 25/05/2017 at 05:36, Balbir Singh wrote:
>>> Today our patching happens via direct copy and
>>> patch_instruction. The patching code is well
>>> contained in the sense that copying bits are limited.
>>>
>>> While considering implementation of CONFIG_STRICT_RWX,
>>> the first requirement is to create another mapping
>>> that will allow for patching. We create the window using
>>> text_poke_area, allocated via get_vm_area(), which might
>>> be overkill. We can do per-cpu stuff as well. The
>>> downside of these patches is that patch_instruction is
>>> now synchronized using a lock. Other arches do similar
>>> things, but use fixmaps. The reason for not using
>>> fixmaps is to make use of any randomization in the
>>> future. The code also relies on set_pte_at and pte_clear
>>> to do the appropriate tlb flushing.
>>>
>>> Signed-off-by: Balbir Singh <bsingharora@gmail.com>
>>
>> [...]
>>
>>> +static int kernel_map_addr(void *addr)
>>> +{
>>> +	unsigned long pfn;
>>>   	int err;
>>>
>>> -	__put_user_size(instr, addr, 4, err);
>>> +	if (is_vmalloc_addr(addr))
>>> +		pfn = vmalloc_to_pfn(addr);
>>> +	else
>>> +		pfn = __pa_symbol(addr) >> PAGE_SHIFT;
>>> +
>>> +	err = map_kernel_page((unsigned long)text_poke_area->addr,
>>> +			(pfn << PAGE_SHIFT), _PAGE_KERNEL_RW | _PAGE_PRESENT);
>>
>> map_kernel_page() doesn't exist on powerpc32, so compilation fails.
>>
>> However, a similar function exists and is called map_page().
>>
>> Maybe the below modification could help (not tested yet):
>>
>> Christophe
>>
> 
> Thanks, I'll try to get it to compile. As an alternative, how about:
> 
> #ifdef CONFIG_PPC32
> #define map_kernel_page map_page
> #endif
> 

My preference goes to renaming the PPC32 function: first because the
PPC64 name fits better, second because too many defines kill
readability, third because two functions doing the same thing are worth
being called the same, and fourth because we will surely have an
opportunity to merge both functions one day.

Christophe

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 7/8] powerpc/Kconfig: Enable STRICT_KERNEL_RWX
  2017-05-25 16:45   ` kbuild test robot
@ 2017-05-29  8:00     ` Christophe LEROY
  2017-06-03  5:42         ` Balbir Singh
  0 siblings, 1 reply; 31+ messages in thread
From: Christophe LEROY @ 2017-05-29  8:00 UTC (permalink / raw)
  To: kbuild test robot, Balbir Singh, Laura Abbott, Linus Torvalds
  Cc: paulus, kbuild-all, naveen.n.rao, linuxppc-dev, rashmica.g,
	linux-kernel, linux-pm



On 25/05/2017 at 18:45, the kbuild test robot wrote:
> Hi Balbir,
> 
> [auto build test ERROR on powerpc/next]
> [also build test ERROR on v4.12-rc2 next-20170525]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Balbir-Singh/Enable-STRICT_KERNEL_RWX/20170525-150234
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
> config: powerpc-allmodconfig (attached as .config)
> compiler: powerpc64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
> reproduce:
>          wget https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>          chmod +x ~/bin/make.cross
>          # save the attached .config to linux build tree
>          make.cross ARCH=powerpc
> 
> All errors (new ones prefixed by >>):
> 
>>> kernel//power/snapshot.c:40:28: fatal error: asm/set_memory.h: No such file or directory
>      #include <asm/set_memory.h>

Looks like it is linked to commit 50327ddfb ("kernel/power/snapshot.c: use set_memory.h header").
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=50327ddfb

I believe that inclusion should be conditional on CONFIG_ARCH_HAS_SET_MEMORY, which is not set by the powerpc arch, just like in include/linux/filter.c.

Christophe


>                                 ^
>     compilation terminated.
> 
> vim +40 kernel//power/snapshot.c
> 
> 25761b6eb Rafael J. Wysocki    2005-10-30  24  #include <linux/bootmem.h>
> 38b8d208a Ingo Molnar          2017-02-08  25  #include <linux/nmi.h>
> 25761b6eb Rafael J. Wysocki    2005-10-30  26  #include <linux/syscalls.h>
> 25761b6eb Rafael J. Wysocki    2005-10-30  27  #include <linux/console.h>
> 25761b6eb Rafael J. Wysocki    2005-10-30  28  #include <linux/highmem.h>
> 846705deb Rafael J. Wysocki    2008-11-26  29  #include <linux/list.h>
> 5a0e3ad6a Tejun Heo            2010-03-24  30  #include <linux/slab.h>
> 52f5684c8 Gideon Israel Dsouza 2014-04-07  31  #include <linux/compiler.h>
> db5976058 Tina Ruchandani      2014-10-30  32  #include <linux/ktime.h>
> 25761b6eb Rafael J. Wysocki    2005-10-30  33
> 7c0f6ba68 Linus Torvalds       2016-12-24  34  #include <linux/uaccess.h>
> 25761b6eb Rafael J. Wysocki    2005-10-30  35  #include <asm/mmu_context.h>
> 25761b6eb Rafael J. Wysocki    2005-10-30  36  #include <asm/pgtable.h>
> 25761b6eb Rafael J. Wysocki    2005-10-30  37  #include <asm/tlbflush.h>
> 25761b6eb Rafael J. Wysocki    2005-10-30  38  #include <asm/io.h>
> 50327ddfb Laura Abbott         2017-05-08  39  #ifdef CONFIG_STRICT_KERNEL_RWX
> 50327ddfb Laura Abbott         2017-05-08 @40  #include <asm/set_memory.h>
> 50327ddfb Laura Abbott         2017-05-08  41  #endif
> 25761b6eb Rafael J. Wysocki    2005-10-30  42
> 25761b6eb Rafael J. Wysocki    2005-10-30  43  #include "power.h"
> 25761b6eb Rafael J. Wysocki    2005-10-30  44
> 0f5bf6d0a Laura Abbott         2017-02-06  45  #ifdef CONFIG_STRICT_KERNEL_RWX
> 4c0b6c10f Rafael J. Wysocki    2016-07-10  46  static bool hibernate_restore_protection;
> 4c0b6c10f Rafael J. Wysocki    2016-07-10  47  static bool hibernate_restore_protection_active;
> 4c0b6c10f Rafael J. Wysocki    2016-07-10  48
> 
> :::::: The code at line 40 was first introduced by commit
> :::::: 50327ddfbc926e68da1958e4fac51f1106f5e730 kernel/power/snapshot.c: use set_memory.h header
> 
> :::::: TO: Laura Abbott <labbott@redhat.com>
> :::::: CC: Linus Torvalds <torvalds@linux-foundation.org>
> 
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 2/8] powerpc/kprobes: Move kprobes over to patch_instruction
  2017-05-25  3:36 ` [PATCH v1 2/8] powerpc/kprobes: Move kprobes over to patch_instruction Balbir Singh
@ 2017-05-29  8:50   ` Christophe LEROY
  2017-05-29 22:11     ` Balbir Singh
  0 siblings, 1 reply; 31+ messages in thread
From: Christophe LEROY @ 2017-05-29  8:50 UTC (permalink / raw)
  To: Balbir Singh, linuxppc-dev, mpe; +Cc: naveen.n.rao, ananth, paulus, rashmica.g



On 25/05/2017 at 05:36, Balbir Singh wrote:
> arch_arm/disarm_probe use direct assignment for copying
> instructions, replace them with patch_instruction
> 
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> ---
>   arch/powerpc/kernel/kprobes.c | 4 ++--
>   1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> index 160ae0f..5e1fa86 100644
> --- a/arch/powerpc/kernel/kprobes.c
> +++ b/arch/powerpc/kernel/kprobes.c
> @@ -158,7 +158,7 @@ NOKPROBE_SYMBOL(arch_prepare_kprobe);
>   
>   void arch_arm_kprobe(struct kprobe *p)
>   {
> -	*p->addr = BREAKPOINT_INSTRUCTION;
> +	patch_instruction(p->addr, BREAKPOINT_INSTRUCTION);
>   	flush_icache_range((unsigned long) p->addr,
>   			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
>   }
> @@ -166,7 +166,7 @@ NOKPROBE_SYMBOL(arch_arm_kprobe);
>   
>   void arch_disarm_kprobe(struct kprobe *p)
>   {
> -	*p->addr = p->opcode;
> +	patch_instruction(p->addr, BREAKPOINT_INSTRUCTION);

Shouldn't it be the following instead ?

patch_instruction(p->addr, p->opcode);

Christophe


>   	flush_icache_range((unsigned long) p->addr,
>   			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
>   }
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 2/8] powerpc/kprobes: Move kprobes over to patch_instruction
  2017-05-29  8:50   ` Christophe LEROY
@ 2017-05-29 22:11     ` Balbir Singh
  0 siblings, 0 replies; 31+ messages in thread
From: Balbir Singh @ 2017-05-29 22:11 UTC (permalink / raw)
  To: Christophe LEROY, linuxppc-dev, mpe
  Cc: naveen.n.rao, ananth, paulus, rashmica.g

On Mon, 2017-05-29 at 10:50 +0200, Christophe LEROY wrote:
> 
> On 25/05/2017 at 05:36, Balbir Singh wrote:
> > arch_arm/disarm_probe use direct assignment for copying
> > instructions, replace them with patch_instruction
> > 
> > Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> > ---
> >   arch/powerpc/kernel/kprobes.c | 4 ++--
> >   1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
> > index 160ae0f..5e1fa86 100644
> > --- a/arch/powerpc/kernel/kprobes.c
> > +++ b/arch/powerpc/kernel/kprobes.c
> > @@ -158,7 +158,7 @@ NOKPROBE_SYMBOL(arch_prepare_kprobe);
> >   
> >   void arch_arm_kprobe(struct kprobe *p)
> >   {
> > -	*p->addr = BREAKPOINT_INSTRUCTION;
> > +	patch_instruction(p->addr, BREAKPOINT_INSTRUCTION);
> >   	flush_icache_range((unsigned long) p->addr,
> >   			   (unsigned long) p->addr + sizeof(kprobe_opcode_t));
> >   }
> > @@ -166,7 +166,7 @@ NOKPROBE_SYMBOL(arch_arm_kprobe);
> >   
> >   void arch_disarm_kprobe(struct kprobe *p)
> >   {
> > -	*p->addr = p->opcode;
> > +	patch_instruction(p->addr, BREAKPOINT_INSTRUCTION);
> 
> Shouldn't it be the following instead ?
> 
> patch_instruction(p->addr, p->opcode);

Yes, thanks for catching this!  I'll do a v2 on top of what you
posted.

Balbir Singh

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 0/8] Enable STRICT_KERNEL_RWX
  2017-05-25  6:57 ` [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
@ 2017-05-30 14:32   ` Naveen N. Rao
  0 siblings, 0 replies; 31+ messages in thread
From: Naveen N. Rao @ 2017-05-30 14:32 UTC (permalink / raw)
  To: Balbir Singh
  Cc: linuxppc-dev, mpe, ananth, christophe.leroy, paulus, rashmica.g

On 2017/05/25 04:57PM, Balbir Singh wrote:
> On Thu, 25 May 2017 13:36:42 +1000
> Balbir Singh <bsingharora@gmail.com> wrote:
> 
> > Enable STRICT_KERNEL_RWX for PPC64/BOOK3S
> > 
> > These patches enable RX mappings of kernel text.
> > rodata is mapped RX as well as a trade-off, there
> > are more details in the patch description
> > 
> > As a prerequisite for R/O text, patch_instruction
> > is moved over to using a separate mapping that
> > allows write to kernel text. xmon/ftrace/kprobes
> > have been moved over to work with patch_instruction
> > 
> 
> I think optprobes needs to be moved over as well.
> I did not realize we have the optprobe_trampoline in
> text (yikes!!)

Ah, you noticed. Yes, this is a product of the limited range we have for 
immediate branches.

- Naveen

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 7/8] powerpc/Kconfig: Enable STRICT_KERNEL_RWX
  2017-05-29  8:00     ` Christophe LEROY
@ 2017-06-03  5:42         ` Balbir Singh
  0 siblings, 0 replies; 31+ messages in thread
From: Balbir Singh @ 2017-06-03  5:42 UTC (permalink / raw)
  To: Christophe LEROY
  Cc: kbuild test robot, Laura Abbott, Linus Torvalds, Paul Mackerras,
	kbuild-all, Naveen N. Rao,
	open list:LINUX FOR POWERPC (32-BIT AND 64-BIT),
	Rashmica Gupta, linux-kernel, linux-pm

On Mon, May 29, 2017 at 6:00 PM, Christophe LEROY
<christophe.leroy@c-s.fr> wrote:
>
>
> On 25/05/2017 at 18:45, kbuild test robot wrote:
>>
>> Hi Balbir,
>>
>> [auto build test ERROR on powerpc/next]
>> [also build test ERROR on v4.12-rc2 next-20170525]
>> [if your patch is applied to the wrong git tree, please drop us a note to
>> help improve the system]
>>
>> url:
>> https://github.com/0day-ci/linux/commits/Balbir-Singh/Enable-STRICT_KERNEL_RWX/20170525-150234
>> base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
>> next
>> config: powerpc-allmodconfig (attached as .config)
>> compiler: powerpc64-linux-gnu-gcc (Debian 6.1.1-9) 6.1.1 20160705
>> reproduce:
>>          wget
>> https://raw.githubusercontent.com/01org/lkp-tests/master/sbin/make.cross -O
>> ~/bin/make.cross
>>          chmod +x ~/bin/make.cross
>>          # save the attached .config to linux build tree
>>          make.cross ARCH=powerpc
>>
>> All errors (new ones prefixed by >>):
>>
>>>> kernel//power/snapshot.c:40:28: fatal error: asm/set_memory.h: No such
>>>> file or directory
>>
>>      #include <asm/set_memory.h>
>
>
> Looks like it is linked to commit 50327ddfb ("kernel/power/snapshot.c: use
> set_memory.h header").
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=50327ddfb
>
> I believe that inclusion should be conditional to
> CONFIG_ARCH_HAS_SET_MEMORY, which is not set by powerpc arch, just like in
> include/linux/filter.c
>
> Christophe
>
>
Agreed, it needs to be fixed in kernel/power/snapshot.c

Balbir Singh

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v1 7/8] powerpc/Kconfig: Enable STRICT_KERNEL_RWX
  2017-06-03  5:42         ` Balbir Singh
@ 2017-06-05  5:46           ` Michael Ellerman
  -1 siblings, 0 replies; 31+ messages in thread
From: Michael Ellerman @ 2017-06-05  5:46 UTC (permalink / raw)
  To: Balbir Singh, Christophe LEROY
  Cc: kbuild test robot, linux-pm,
	open list:LINUX FOR POWERPC (32-BIT AND 64-BIT),
	linux-kernel, Paul Mackerras, kbuild-all, Naveen N. Rao,
	Laura Abbott, Linus Torvalds, Rashmica Gupta

Balbir Singh <bsingharora@gmail.com> writes:
> On Mon, May 29, 2017 at 6:00 PM, Christophe LEROY
> <christophe.leroy@c-s.fr> wrote:
>> On 25/05/2017 at 18:45, kbuild test robot wrote:
>>> All errors (new ones prefixed by >>):
>>>
>>>>> kernel//power/snapshot.c:40:28: fatal error: asm/set_memory.h: No such
>>>>> file or directory
>>>
>>>      #include <asm/set_memory.h>
>>
>>
>> Looks like it is linked to commit 50327ddfb ("kernel/power/snapshot.c: use
>> set_memory.h header").
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=50327ddfb
>>
>> I believe that inclusion should be conditional to
>> CONFIG_ARCH_HAS_SET_MEMORY, which is not set by powerpc arch, just like in
>> include/linux/filter.c
>>
> Agreed, it needs to be fixed in kernel/power/sanpshot.c

It should be fixed in asm-generic/set_memory.h, there should be no need
to guard an include of a generic header.

cheers

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [v1, 8/8] powerpc/mm/ptdump: Dump the first entry of the linear mapping as well
  2017-05-25  3:36 ` [PATCH v1 8/8] powerpc/mm/ptdump: Dump the first entry of the linear mapping as well Balbir Singh
@ 2017-06-05 10:21   ` Michael Ellerman
  0 siblings, 0 replies; 31+ messages in thread
From: Michael Ellerman @ 2017-06-05 10:21 UTC (permalink / raw)
  To: Balbir Singh, linuxppc-dev; +Cc: paulus, naveen.n.rao, rashmica.g

On Thu, 2017-05-25 at 03:36:50 UTC, Balbir Singh wrote:
> The check in hpte_find() should be < and not <= for PAGE_OFFSET
> 
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/e63739b1687ea37390e5463f43d054

cheers

^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2017-06-05 10:21 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-25  3:36 [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
2017-05-25  3:36 ` [PATCH v1 1/8] powerpc/lib/code-patching: Enhance code patching Balbir Singh
2017-05-25  9:11   ` kbuild test robot
2017-05-28 14:29   ` christophe leroy
2017-05-28 22:58     ` Balbir Singh
2017-05-29  6:55       ` Christophe LEROY
2017-05-28 15:59   ` christophe leroy
2017-05-28 22:50     ` Balbir Singh
2017-05-29  5:50       ` Christophe LEROY
2017-05-28 18:00   ` christophe leroy
2017-05-28 22:15     ` Balbir Singh
2017-05-25  3:36 ` [PATCH v1 2/8] powerpc/kprobes: Move kprobes over to patch_instruction Balbir Singh
2017-05-29  8:50   ` Christophe LEROY
2017-05-29 22:11     ` Balbir Singh
2017-05-25  3:36 ` [PATCH v1 3/8] powerpc/xmon: Add patch_instruction supporf for xmon Balbir Singh
2017-05-25  3:36 ` [PATCH v1 4/8] powerpc/vmlinux.lds: Align __init_begin to 16M Balbir Singh
2017-05-25  3:36 ` [PATCH v1 5/8] powerpc/platform/pseries/lpar: Fix updatepp and updateboltedpp Balbir Singh
2017-05-25  3:36 ` [PATCH v1 6/8] powerpc/mm/hash: Implement mark_rodata_ro() for hash Balbir Singh
2017-05-25  3:36 ` [PATCH v1 7/8] powerpc/Kconfig: Enable STRICT_KERNEL_RWX Balbir Singh
2017-05-25 16:45   ` kbuild test robot
2017-05-29  8:00     ` Christophe LEROY
2017-06-03  5:42       ` Balbir Singh
2017-06-05  5:46         ` Michael Ellerman
2017-05-25  3:36 ` [PATCH v1 8/8] powerpc/mm/ptdump: Dump the first entry of the linear mapping as well Balbir Singh
2017-06-05 10:21   ` [v1, " Michael Ellerman
2017-05-25  6:57 ` [PATCH v1 0/8] Enable STRICT_KERNEL_RWX Balbir Singh
2017-05-30 14:32   ` Naveen N. Rao

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.