linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/3] powerpc/mm: Fix kernel protection and implement CONFIG_DEBUG_RODATA on PPC32
@ 2017-04-21 13:02 Christophe Leroy
  2017-04-21 13:02 ` [PATCH v2 1/3] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Christophe Leroy @ 2017-04-21 13:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

This patch set implements CONFIG_DEBUG_RODATA on PPC32
after fixing a few issues related to kernel code page protection.

Changes in v2:
  Instead of making the entire kernel RW to patch code in ftrace,
  we now only change the permissions of the page being modified

Christophe Leroy (3):
  powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs
  powerpc/mm: Fix kernel RAM protection after freeing unused memory on
    PPC32
  powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32

 arch/powerpc/Kconfig.debug         | 11 +++++++
 arch/powerpc/include/asm/pgtable.h |  8 ++++++
 arch/powerpc/kernel/ftrace.c       |  8 ++++--
 arch/powerpc/mm/init_32.c          |  3 +-
 arch/powerpc/mm/mem.c              |  1 +
 arch/powerpc/mm/mmu_decl.h         |  3 ++
 arch/powerpc/mm/pgtable_32.c       | 59 +++++++++++++++++++++++++++++++++-----
 7 files changed, 82 insertions(+), 11 deletions(-)

-- 
2.12.0

^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH v2 1/3] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs
  2017-04-21 13:02 [PATCH v2 0/3] powerpc/mm: Fix kernel protection and implement CONFIG_DEBUG_RODATA on PPC32 Christophe Leroy
@ 2017-04-21 13:02 ` Christophe Leroy
  2017-04-21 13:02 ` [PATCH v2 2/3] powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32 Christophe Leroy
  2017-04-21 13:02 ` [PATCH v2 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA " Christophe Leroy
  2 siblings, 0 replies; 9+ messages in thread
From: Christophe Leroy @ 2017-04-21 13:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

__change_page_attr() uses flush_tlb_page().
flush_tlb_page() uses the tlbie instruction, which also invalidates
pinned TLBs, which is not what we expect.

This patch modifies the implementation to use flush_tlb_kernel_range()
instead, which uses tlbia and therefore preserves pinned TLBs.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 v2: No change

 arch/powerpc/mm/pgtable_32.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index a65c0b4c0669..8e8940fad12f 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -325,7 +325,7 @@ get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep, pmd_t **pmdp)
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
 
-static int __change_page_attr(struct page *page, pgprot_t prot)
+static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
 {
 	pte_t *kpte;
 	pmd_t *kpmd;
@@ -339,8 +339,6 @@ static int __change_page_attr(struct page *page, pgprot_t prot)
 	if (!get_pteptr(&init_mm, address, &kpte, &kpmd))
 		return -EINVAL;
 	__set_pte_at(&init_mm, address, kpte, mk_pte(page, prot), 0);
-	wmb();
-	flush_tlb_page(NULL, address);
 	pte_unmap(kpte);
 
 	return 0;
@@ -355,13 +353,17 @@ static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
 {
 	int i, err = 0;
 	unsigned long flags;
+	struct page *start = page;
 
 	local_irq_save(flags);
 	for (i = 0; i < numpages; i++, page++) {
-		err = __change_page_attr(page, prot);
+		err = __change_page_attr_noflush(page, prot);
 		if (err)
 			break;
 	}
+	wmb();
+	flush_tlb_kernel_range((unsigned long)page_address(start),
+			       (unsigned long)page_address(page));
 	local_irq_restore(flags);
 	return err;
 }
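For readers unfamiliar with the pinned-TLB behaviour the patch relies on, the difference can be sketched as a toy userspace model (plain C, invented names; nothing here is kernel code): a tlbie-style flush drops a matching entry even if it is pinned, while a tlbia-style flush clears everything except pinned entries.

```c
#include <assert.h>
#include <stdbool.h>

#define NENTRIES 4

/* Toy model of a TLB with pinned entries (hypothetical, not kernel code). */
struct tlb_entry {
	unsigned long addr;
	bool pinned;
	bool valid;
};

/* tlbie-style: invalidate any matching entry, pinned or not. */
static void flush_one(struct tlb_entry *tlb, unsigned long addr)
{
	for (int i = 0; i < NENTRIES; i++)
		if (tlb[i].valid && tlb[i].addr == addr)
			tlb[i].valid = false;
}

/* tlbia-style: invalidate everything except pinned entries. */
static void flush_all(struct tlb_entry *tlb)
{
	for (int i = 0; i < NENTRIES; i++)
		if (!tlb[i].pinned)
			tlb[i].valid = false;
}
```

In this model, switching change_page_attr() from a per-page flush_one() to one flush_all() over the whole range both batches the flushes and leaves pinned entries intact, which is the point of the patch.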
-- 
2.12.0


* [PATCH v2 2/3] powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32
  2017-04-21 13:02 [PATCH v2 0/3] powerpc/mm: Fix kernel protection and implement CONFIG_DEBUG_RODATA on PPC32 Christophe Leroy
  2017-04-21 13:02 ` [PATCH v2 1/3] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
@ 2017-04-21 13:02 ` Christophe Leroy
  2017-04-21 13:02 ` [PATCH v2 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA " Christophe Leroy
  2 siblings, 0 replies; 9+ messages in thread
From: Christophe Leroy @ 2017-04-21 13:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

As seen below, although the init sections have been freed, the
associated memory area is still marked as executable in the
page tables.

~ dmesg
[    5.860093] Freeing unused kernel memory: 592K (c0570000 - c0604000)

~ cat /sys/kernel/debug/kernel_page_tables
---[ Start of kernel VM ]---
0xc0000000-0xc0497fff        4704K  rw  X  present dirty accessed shared
0xc0498000-0xc056ffff         864K  rw     present dirty accessed shared
0xc0570000-0xc059ffff         192K  rw  X  present dirty accessed shared
0xc05a0000-0xc7ffffff      125312K  rw     present dirty accessed shared
---[ vmalloc() Area ]---

This patch fixes that.

The implementation reuses the change_page_attr()
function implemented for CONFIG_DEBUG_PAGEALLOC.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 v2: No change

 arch/powerpc/mm/mem.c        |  1 +
 arch/powerpc/mm/mmu_decl.h   |  3 +++
 arch/powerpc/mm/pgtable_32.c | 13 ++++++++++---
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 9ee536ec0739..f261dac36a68 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -401,6 +401,7 @@ void free_initmem(void)
 {
 	ppc_md.progress = ppc_printk_progress;
 	free_initmem_default(POISON_FREE_INITMEM);
+	remap_init_ram();
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index f988db655e5b..718fa45310be 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -95,6 +95,7 @@ extern void _tlbia(void);
 
 extern void mapin_ram(void);
 extern int map_page(unsigned long va, phys_addr_t pa, int flags);
+void remap_init_ram(void);
 extern void setbat(int index, unsigned long virt, phys_addr_t phys,
 		   unsigned int size, pgprot_t prot);
 
@@ -106,6 +107,8 @@ struct hash_pte;
 extern struct hash_pte *Hash, *Hash_end;
 extern unsigned long Hash_size, Hash_mask;
 
+#else
+static inline void remap_init_ram(void) {}
 #endif /* CONFIG_PPC32 */
 
 extern unsigned long ioremap_bot;
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 8e8940fad12f..31728f3cdd20 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -323,8 +323,6 @@ get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep, pmd_t **pmdp)
         return(retval);
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
-
 static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
 {
 	pte_t *kpte;
@@ -347,7 +345,7 @@ static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
 /*
  * Change the page attributes of an page in the linear mapping.
  *
- * THIS CONFLICTS WITH BAT MAPPINGS, DEBUG USE ONLY
+ * THIS DOES NOTHING WITH BAT MAPPINGS, DEBUG USE ONLY
  */
 static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
 {
@@ -368,7 +366,16 @@ static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
 	return err;
 }
 
+void remap_init_ram(void)
+{
+	struct page *page = virt_to_page(_sinittext);
+	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
+				 PFN_DOWN((unsigned long)_sinittext);
+
+	change_page_attr(page, numpages, PAGE_KERNEL);
+}
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
-- 
2.12.0


* [PATCH v2 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
  2017-04-21 13:02 [PATCH v2 0/3] powerpc/mm: Fix kernel protection and implement CONFIG_DEBUG_RODATA on PPC32 Christophe Leroy
  2017-04-21 13:02 ` [PATCH v2 1/3] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
  2017-04-21 13:02 ` [PATCH v2 2/3] powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32 Christophe Leroy
@ 2017-04-21 13:02 ` Christophe Leroy
       [not found]   ` <97d45054364142af48b8767f9f9e115504d7568b.1492778567.git.christophe.leroy@c-s.fr>
  2 siblings, 1 reply; 9+ messages in thread
From: Christophe Leroy @ 2017-04-21 13:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

This patch implements CONFIG_DEBUG_RODATA on PPC32.

As with CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
so that page protection can be set up at the granularity of individual
pages.

As BAT/LTLB mappings are deactivated, there may be a performance
impact. For this reason, the option is OFF by default.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 v2: For ftrace, only change the attributes of the page to be modified

 arch/powerpc/Kconfig.debug         | 11 +++++++++++
 arch/powerpc/include/asm/pgtable.h |  8 ++++++++
 arch/powerpc/kernel/ftrace.c       |  8 +++++---
 arch/powerpc/mm/init_32.c          |  3 ++-
 arch/powerpc/mm/pgtable_32.c       | 36 ++++++++++++++++++++++++++++++++++++
 5 files changed, 62 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index c86df246339e..047f91564e52 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -369,4 +369,15 @@ config PPC_HTDUMP
 	def_bool y
 	depends on PPC_PTDUMP && PPC_BOOK3S
 
+config DEBUG_RODATA
+       bool "Write protect kernel read-only data structures"
+       depends on DEBUG_KERNEL && PPC32
+       default n
+       help
+	 Mark the kernel read-only data as write-protected in the pagetables,
+	 in order to catch accidental (and incorrect) writes to such const
+	 data. This option may have a performance impact because block
+	 mapping via BATs etc... will be disabled.
+	 If in doubt, say "N".
+
 endmenu
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index dd01212935ac..142337f3b745 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -80,6 +80,14 @@ unsigned long vmalloc_to_phys(void *vmalloc_addr);
 
 void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
 void pgtable_cache_init(void);
+
+#ifdef CONFIG_DEBUG_RODATA
+void set_kernel_text_rw(unsigned long addr);
+void set_kernel_text_ro(unsigned long addr);
+#else
+static inline void set_kernel_text_rw(unsigned long addr) {}
+static inline void set_kernel_text_ro(unsigned long addr) {}
+#endif
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_PGTABLE_H */
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 32509de6ce4c..06d2ac53f471 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -46,6 +46,7 @@ static int
 ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
 {
 	unsigned int replaced;
+	int err;
 
 	/*
 	 * Note:
@@ -67,10 +68,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
 	}
 
 	/* replace the text with the new text */
-	if (patch_instruction((unsigned int *)ip, new))
-		return -EPERM;
+	set_kernel_text_rw(ip);
+	err = patch_instruction((unsigned int *)ip, new);
+	set_kernel_text_ro(ip);
 
-	return 0;
+	return err ? -EPERM : 0;
 }
 
 /*
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 8a7c38b8d335..e39c812b97ca 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -109,7 +109,8 @@ void __init MMU_setup(void)
 	if (strstr(boot_command_line, "noltlbs")) {
 		__map_without_ltlbs = 1;
 	}
-	if (debug_pagealloc_enabled()) {
+	if (debug_pagealloc_enabled() ||
+	    IS_ENABLED(CONFIG_DEBUG_RODATA)) {
 		__map_without_bats = 1;
 		__map_without_ltlbs = 1;
 	}
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 31728f3cdd20..972effec1bb2 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -34,6 +34,7 @@
 #include <asm/fixmap.h>
 #include <asm/io.h>
 #include <asm/setup.h>
+#include <asm/sections.h>
 
 #include "mmu_decl.h"
 
@@ -375,6 +376,41 @@ void remap_init_ram(void)
 	change_page_attr(page, numpages, PAGE_KERNEL);
 }
 
+#ifdef CONFIG_DEBUG_RODATA
+void set_kernel_text_rw(unsigned long addr)
+{
+	if (core_kernel_text(addr))
+		change_page_attr(virt_to_page(addr), 1, PAGE_KERNEL_X);
+}
+
+void set_kernel_text_ro(unsigned long addr)
+{
+	if (core_kernel_text(addr))
+		change_page_attr(virt_to_page(addr), 1, PAGE_KERNEL_ROX);
+}
+
+void mark_rodata_ro(void)
+{
+	struct page *page;
+	unsigned long numpages;
+
+	page = virt_to_page(_stext);
+	numpages = PFN_UP((unsigned long)_etext) -
+		   PFN_DOWN((unsigned long)_stext);
+
+	change_page_attr(page, numpages, PAGE_KERNEL_ROX);
+	/*
+	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
+	 * to cover NOTES and EXCEPTION_TABLE.
+	 */
+	page = virt_to_page(__start_rodata);
+	numpages = PFN_UP((unsigned long)__init_begin) -
+		   PFN_DOWN((unsigned long)__start_rodata);
+
+	change_page_attr(page, numpages, PAGE_KERNEL_RO);
+}
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-- 
2.12.0


* Re: [PATCH v2 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
       [not found]   ` <97d45054364142af48b8767f9f9e115504d7568b.1492778567.git.christophe.leroy@c-s.fr>
@ 2017-04-21 13:32     ` Naveen N. Rao
  2017-04-22  6:08       ` Michael Ellerman
  0 siblings, 1 reply; 9+ messages in thread
From: Naveen N. Rao @ 2017-04-21 13:32 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Christophe Leroy, Michael Ellerman,
	Scott Wood, Paul Mackerras
  Cc: linux-kernel, linuxppc-dev

Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
> This patch implements CONFIG_DEBUG_RODATA on PPC32.
> 
> As with CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
> so that page protection can be set up at the granularity of individual
> pages.
> 
> As BAT/LTLB mappings are deactivated, there may be a performance
> impact. For this reason, the option is OFF by default.
> 
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>

[snip]

> diff --git a/arch/powerpc/kernel/ftrace.c 
> b/arch/powerpc/kernel/ftrace.c
> index 32509de6ce4c..06d2ac53f471 100644
> --- a/arch/powerpc/kernel/ftrace.c
> +++ b/arch/powerpc/kernel/ftrace.c
> @@ -46,6 +46,7 @@ static int
>  ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
>  {
>  	unsigned int replaced;
> +	int err;
> 
>  	/*
>  	 * Note:
> @@ -67,10 +68,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
>  	}
> 
>  	/* replace the text with the new text */
> -	if (patch_instruction((unsigned int *)ip, new))
> -		return -EPERM;
> +	set_kernel_text_rw(ip);
> +	err = patch_instruction((unsigned int *)ip, new);
> +	set_kernel_text_ro(ip);

Is there a reason to not put those inside patch_instruction()?

- Naveen


* Re: [PATCH v2 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
  2017-04-21 13:32     ` Naveen N. Rao
@ 2017-04-22  6:08       ` Michael Ellerman
  2017-04-22  6:58         ` christophe leroy
  0 siblings, 1 reply; 9+ messages in thread
From: Michael Ellerman @ 2017-04-22  6:08 UTC (permalink / raw)
  To: Naveen N. Rao, Benjamin Herrenschmidt, Christophe Leroy,
	Scott Wood, Paul Mackerras
  Cc: linux-kernel, linuxppc-dev

"Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
> Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
>> diff --git a/arch/powerpc/kernel/ftrace.c 
>> b/arch/powerpc/kernel/ftrace.c
>> index 32509de6ce4c..06d2ac53f471 100644
>> --- a/arch/powerpc/kernel/ftrace.c
>> +++ b/arch/powerpc/kernel/ftrace.c
>> @@ -46,6 +46,7 @@ static int
>> @@ -67,10 +68,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
>>  	}
>> 
>>  	/* replace the text with the new text */
>> -	if (patch_instruction((unsigned int *)ip, new))
>> -		return -EPERM;
>> +	set_kernel_text_rw(ip);
>> +	err = patch_instruction((unsigned int *)ip, new);
>> +	set_kernel_text_ro(ip);
>
> Is there a reason to not put those inside patch_instruction()?

Yes and no.

patch_instruction() is called quite early from apply_feature_fixups(), I
haven't looked closely but I suspect the set_kernel_text_rx() routines
won't work that early.

But on the other hand patch_instruction() is used by things other than
ftrace, like jump labels, so we probably want the rw/ro setting in there
so that we don't have to go and fixup jump labels etc.

So probably we need a raw_patch_instruction() which does just the
patching (what patch_instruction() does now), and can be used early in
boot. And then patch_instruction() would have the rw/ro change in it, so
that all users of it are OK.

eg ~=:

int raw_patch_instruction(unsigned int *addr, unsigned int instr)
{
  ...
}

int patch_instruction(unsigned int *addr, unsigned int instr)
{
	int err;

	set_kernel_text_rw((unsigned long)addr);
	err = raw_patch_instruction(addr, instr);
	set_kernel_text_ro((unsigned long)addr);

	return err;
}

cheers


* Re: [PATCH v2 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
  2017-04-22  6:08       ` Michael Ellerman
@ 2017-04-22  6:58         ` christophe leroy
  2017-04-23 10:26           ` Michael Ellerman
  0 siblings, 1 reply; 9+ messages in thread
From: christophe leroy @ 2017-04-22  6:58 UTC (permalink / raw)
  To: Michael Ellerman, Naveen N. Rao, Benjamin Herrenschmidt,
	Scott Wood, Paul Mackerras
  Cc: linux-kernel, linuxppc-dev



On 22/04/2017 at 08:08, Michael Ellerman wrote:
> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
>> Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
>>> diff --git a/arch/powerpc/kernel/ftrace.c
>>> b/arch/powerpc/kernel/ftrace.c
>>> index 32509de6ce4c..06d2ac53f471 100644
>>> --- a/arch/powerpc/kernel/ftrace.c
>>> +++ b/arch/powerpc/kernel/ftrace.c
>>> @@ -46,6 +46,7 @@ static int
>>> @@ -67,10 +68,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
>>>  	}
>>>
>>>  	/* replace the text with the new text */
>>> -	if (patch_instruction((unsigned int *)ip, new))
>>> -		return -EPERM;
>>> +	set_kernel_text_rw(ip);
>>> +	err = patch_instruction((unsigned int *)ip, new);
>>> +	set_kernel_text_ro(ip);
>>
>> Is there a reason to not put those inside patch_instruction()?
>
> Yes and no.
>
> patch_instruction() is called quite early from apply_feature_fixups(), I
> haven't looked closely but I suspect the set_kernel_text_rx() routines
> won't work that early.
>
> But on the other hand patch_instruction() is used by things other than
> ftrace, like jump labels, so we probably want the rw/ro setting in there
> so that we don't have to go and fixup jump labels etc.
>
> So probably we need a raw_patch_instruction() which does just the
> patching (what patch_instruction() does now), and can be used early in
> boot. And then patch_instruction() would have the rw/ro change in it, so
> that all users of it are OK.
>
> eg ~=:
>
> int raw_patch_instruction(unsigned int *addr, unsigned int instr)
> {
>   ...
> }
>
> int patch_instruction(unsigned int *addr, unsigned int instr)
> {
> 	int err;
>
> 	set_kernel_text_rw(ip);
> 	err = raw_patch_instruction((unsigned int *)ip, new);
> 	set_kernel_text_ro(ip);
>
> 	return err;
> }
>

Shouldn't we then also have some kind of protection against parallel use
of patch_instruction(), like a spin_lock_irqsave(), or is it guaranteed
not to happen for other reasons?

Otherwise, we might end up with one instance setting the kernel text
back to RO while another one has just made it RW and is about to patch
the instruction.
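The race described above can be made concrete with a small userspace sketch (hypothetical names; a C11 stdatomic spinlock stands in for spin_lock_irqsave()): as long as the RW -> write -> RO sequence sits inside one critical section, a second patcher cannot flip the page back to RO underneath the first.

```c
#include <assert.h>
#include <stdatomic.h>

/* Userspace analogue (invented names, not kernel code). */
static atomic_flag patch_lock = ATOMIC_FLAG_INIT;
static int text_writable;      /* stands in for the page's RW PTE bit */
static unsigned int text_word; /* stands in for the patched instruction site */

static int patch_one(unsigned int insn)
{
	int err = 0;

	/* kernel equivalent: spin_lock_irqsave(&patch_lock, flags) */
	while (atomic_flag_test_and_set(&patch_lock))
		;
	text_writable = 1;         /* set_kernel_text_rw() */
	/* Without the lock, a concurrent patcher's set_kernel_text_ro()
	 * could run right here, and the store below would fault. */
	if (text_writable)
		text_word = insn;  /* the actual patch */
	else
		err = -1;
	text_writable = 0;         /* set_kernel_text_ro() */
	atomic_flag_clear(&patch_lock);
	return err;
}
```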

Christophe



* Re: [PATCH v2 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
  2017-04-22  6:58         ` christophe leroy
@ 2017-04-23 10:26           ` Michael Ellerman
  2017-04-24 14:31             ` Christophe LEROY
  0 siblings, 1 reply; 9+ messages in thread
From: Michael Ellerman @ 2017-04-23 10:26 UTC (permalink / raw)
  To: christophe leroy, Naveen N. Rao, Benjamin Herrenschmidt,
	Scott Wood, Paul Mackerras
  Cc: linux-kernel, linuxppc-dev

christophe leroy <christophe.leroy@c-s.fr> writes:

> On 22/04/2017 at 08:08, Michael Ellerman wrote:
>> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
>>> Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
>>>> diff --git a/arch/powerpc/kernel/ftrace.c
>>>> b/arch/powerpc/kernel/ftrace.c
>>>> index 32509de6ce4c..06d2ac53f471 100644
>>>> --- a/arch/powerpc/kernel/ftrace.c
>>>> +++ b/arch/powerpc/kernel/ftrace.c
>>>> @@ -46,6 +46,7 @@ static int
>>>> @@ -67,10 +68,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
>>>>  	}
>>>>
>>>>  	/* replace the text with the new text */
>>>> -	if (patch_instruction((unsigned int *)ip, new))
>>>> -		return -EPERM;
>>>> +	set_kernel_text_rw(ip);
>>>> +	err = patch_instruction((unsigned int *)ip, new);
>>>> +	set_kernel_text_ro(ip);
>>>
>>> Is there a reason to not put those inside patch_instruction()?
>>
>> Yes and no.
>>
>> patch_instruction() is called quite early from apply_feature_fixups(), I
>> haven't looked closely but I suspect the set_kernel_text_rx() routines
>> won't work that early.
>>
>> But on the other hand patch_instruction() is used by things other than
>> ftrace, like jump labels, so we probably want the rw/ro setting in there
>> so that we don't have to go and fixup jump labels etc.
>>
>> So probably we need a raw_patch_instruction() which does just the
>> patching (what patch_instruction() does now), and can be used early in
>> boot. And then patch_instruction() would have the rw/ro change in it, so
>> that all users of it are OK.
>>
>> eg ~=:
>>
>> int raw_patch_instruction(unsigned int *addr, unsigned int instr)
>> {
>>   ...
>> }
>>
>> int patch_instruction(unsigned int *addr, unsigned int instr)
>> {
>> 	int err;
>>
>> 	set_kernel_text_rw(ip);
>> 	err = raw_patch_instruction((unsigned int *)ip, new);
>> 	set_kernel_text_ro(ip);
>>
>> 	return err;
>> }
>
> Shouldn't we then also have some kind of protection against parallel use
> of patch_instruction(), like a spin_lock_irqsave(), or is it guaranteed
> not to happen for other reasons?
>
> Otherwise, we might end up with one instance setting the kernel text
> back to RO while another one has just made it RW and is about to patch
> the instruction.

Yes it'd need some locking for sure.

"Locking left as an exercise for the reader." ;)

cheers


* Re: [PATCH v2 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
  2017-04-23 10:26           ` Michael Ellerman
@ 2017-04-24 14:31             ` Christophe LEROY
  0 siblings, 0 replies; 9+ messages in thread
From: Christophe LEROY @ 2017-04-24 14:31 UTC (permalink / raw)
  To: Michael Ellerman, Naveen N. Rao, Benjamin Herrenschmidt,
	Scott Wood, Paul Mackerras
  Cc: linux-kernel, linuxppc-dev



On 23/04/2017 at 12:26, Michael Ellerman wrote:
> christophe leroy <christophe.leroy@c-s.fr> writes:
>
>> Le 22/04/2017 à 08:08, Michael Ellerman a écrit :
>>> "Naveen N. Rao" <naveen.n.rao@linux.vnet.ibm.com> writes:
>>>> Excerpts from Christophe Leroy's message of April 21, 2017 18:32:
>>>>> diff --git a/arch/powerpc/kernel/ftrace.c
>>>>> b/arch/powerpc/kernel/ftrace.c
>>>>> index 32509de6ce4c..06d2ac53f471 100644
>>>>> --- a/arch/powerpc/kernel/ftrace.c
>>>>> +++ b/arch/powerpc/kernel/ftrace.c
>>>>> @@ -46,6 +46,7 @@ static int
>>>>> @@ -67,10 +68,11 @@ ftrace_modify_code(unsigned long ip, unsigned int old, unsigned int new)
>>>>>  	}
>>>>>
>>>>>  	/* replace the text with the new text */
>>>>> -	if (patch_instruction((unsigned int *)ip, new))
>>>>> -		return -EPERM;
>>>>> +	set_kernel_text_rw(ip);
>>>>> +	err = patch_instruction((unsigned int *)ip, new);
>>>>> +	set_kernel_text_ro(ip);
>>>>
>>>> Is there a reason to not put those inside patch_instruction()?
>>>
>>> Yes and no.
>>>
>>> patch_instruction() is called quite early from apply_feature_fixups(), I
>>> haven't looked closely but I suspect the set_kernel_text_rx() routines
>>> won't work that early.
>>>
>>> But on the other hand patch_instruction() is used by things other than
>>> ftrace, like jump labels, so we probably want the rw/ro setting in there
>>> so that we don't have to go and fixup jump labels etc.
>>>
>>> So probably we need a raw_patch_instruction() which does just the
>>> patching (what patch_instruction() does now), and can be used early in
>>> boot. And then patch_instruction() would have the rw/ro change in it, so
>>> that all users of it are OK.
>>>
>>> eg ~=:
>>>
>>> int raw_patch_instruction(unsigned int *addr, unsigned int instr)
>>> {
>>>   ...
>>> }
>>>
>>> int patch_instruction(unsigned int *addr, unsigned int instr)
>>> {
>>> 	int err;
>>>
>>> 	set_kernel_text_rw(ip);
>>> 	err = raw_patch_instruction((unsigned int *)ip, new);
>>> 	set_kernel_text_ro(ip);
>>>
>>> 	return err;
>>> }
>>
>> Shouldn't we then also have some kind of protection against parallel use
>> of patch_instruction(), like a spin_lock_irqsave(), or is it guaranteed
>> not to happen for other reasons?
>
>> Otherwise, we might end up with one instance setting the kernel text
>> back to RO while another one has just made it RW and is about to patch
>> the instruction.
>
> Yes it'd need some locking for sure.
>
> "Locking left as an exercise for the reader." ;)
>
> cheers
>

Not so easy indeed: patch_instruction() is called from many higher
level functions like patch_branch(), which are themselves called from
other functions like do_feature_fixups() that run during init but also
when loading a module, for instance. It is therefore not easy to
implement it via a raw_patch_instruction() as proposed.

So I took another approach, borrowed from x86: a static bool records
whether the kernel text has been marked RO yet. Until then,
set_kernel_text_ro/rw() return without doing anything.

As for the locking, I put a spin_lock_irqsave() there, as I was not sure
whether patch_instruction() can be called from interrupt context or not.
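The approach described can be sketched like this (a hypothetical userspace model; all names and the flag are stand-ins for what the real patch would do, not actual kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Until mark_rodata_ro() runs, set_kernel_text_rw/ro() are no-ops,
 * so early callers of patch_instruction() (feature fixups) still work. */
static bool kernel_text_is_ro;
static bool page_writable = true;  /* stands in for the PTE RW bit */
static unsigned int text_word;     /* stands in for the patched site */

static void set_kernel_text_rw(void)
{
	if (kernel_text_is_ro)
		page_writable = true;
}

static void set_kernel_text_ro(void)
{
	if (kernel_text_is_ro)
		page_writable = false;
}

static void mark_rodata_ro(void)
{
	kernel_text_is_ro = true;
	page_writable = false;
}

static int patch_instruction(unsigned int insn)
{
	int err = -1;

	/* kernel: spin_lock_irqsave(&patch_lock, flags); */
	set_kernel_text_rw();
	if (page_writable) {
		text_word = insn;
		err = 0;
	}
	set_kernel_text_ro();
	/* kernel: spin_unlock_irqrestore(&patch_lock, flags); */
	return err;
}
```

Early in boot the flag is still false, so patching proceeds against the still-writable text; once mark_rodata_ro() has run, every patch toggles the page RW and back to RO inside the lock.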

Christophe


end of thread, other threads:[~2017-04-24 14:31 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-04-21 13:02 [PATCH v2 0/3] powerpc/mm: Fix kernel protection and implement CONFIG_DEBUG_RODATA on PPC32 Christophe Leroy
2017-04-21 13:02 ` [PATCH v2 1/3] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
2017-04-21 13:02 ` [PATCH v2 2/3] powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32 Christophe Leroy
2017-04-21 13:02 ` [PATCH v2 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA " Christophe Leroy
     [not found]   ` <97d45054364142af48b8767f9f9e115504d7568b.1492778567.git.christophe.leroy@c-s.fr>
2017-04-21 13:32     ` Naveen N. Rao
2017-04-22  6:08       ` Michael Ellerman
2017-04-22  6:58         ` christophe leroy
2017-04-23 10:26           ` Michael Ellerman
2017-04-24 14:31             ` Christophe LEROY
