linux-kernel.vger.kernel.org archive mirror
* [PATCH 0/3] powerpc/mm: Fix kernel protection and implement CONFIG_DEBUG_RODATA on PPC32
@ 2017-04-19 10:59 Christophe Leroy
  2017-04-19 10:59 ` [PATCH 1/3] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Christophe Leroy @ 2017-04-19 10:59 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

This patch set implements CONFIG_DEBUG_RODATA on PPC32
after fixing a few issues related to kernel code page protection.

The second patch of the set was initially submitted standalone.
This new version takes Michael's comments into account. It is part
of the set because it is now based on the change_page_attr() function.

Christophe Leroy (3):
  powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs
  powerpc/mm: Fix kernel RAM protection after freeing unused memory on
    PPC32
  powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32

 arch/powerpc/Kconfig.debug         | 11 +++++++
 arch/powerpc/include/asm/pgtable.h |  8 ++++++
 arch/powerpc/kernel/ftrace.c       |  2 ++
 arch/powerpc/mm/init_32.c          |  3 +-
 arch/powerpc/mm/mem.c              |  1 +
 arch/powerpc/mm/mmu_decl.h         |  3 ++
 arch/powerpc/mm/pgtable_32.c       | 59 +++++++++++++++++++++++++++++++++-----
 7 files changed, 79 insertions(+), 8 deletions(-)

-- 
2.12.0

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/3] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs
  2017-04-19 10:59 [PATCH 0/3] powerpc/mm: Fix kernel protection and implement CONFIG_DEBUG_RODATA on PPC32 Christophe Leroy
@ 2017-04-19 10:59 ` Christophe Leroy
  2017-04-19 11:00 ` [PATCH 2/3] powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32 Christophe Leroy
  2017-04-19 11:00 ` [PATCH 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA " Christophe Leroy
  2 siblings, 0 replies; 8+ messages in thread
From: Christophe Leroy @ 2017-04-19 10:59 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

__change_page_attr() uses flush_tlb_page().
flush_tlb_page() uses the tlbie instruction, which also invalidates
pinned TLBs, which is not what we expect.

This patch modifies the implementation to use flush_tlb_kernel_range()
instead. This makes use of tlbia, which preserves pinned TLBs.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/mm/pgtable_32.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index a65c0b4c0669..8e8940fad12f 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -325,7 +325,7 @@ get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep, pmd_t **pmdp)
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
 
-static int __change_page_attr(struct page *page, pgprot_t prot)
+static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
 {
 	pte_t *kpte;
 	pmd_t *kpmd;
@@ -339,8 +339,6 @@ static int __change_page_attr(struct page *page, pgprot_t prot)
 	if (!get_pteptr(&init_mm, address, &kpte, &kpmd))
 		return -EINVAL;
 	__set_pte_at(&init_mm, address, kpte, mk_pte(page, prot), 0);
-	wmb();
-	flush_tlb_page(NULL, address);
 	pte_unmap(kpte);
 
 	return 0;
@@ -355,13 +353,17 @@ static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
 {
 	int i, err = 0;
 	unsigned long flags;
+	struct page *start = page;
 
 	local_irq_save(flags);
 	for (i = 0; i < numpages; i++, page++) {
-		err = __change_page_attr(page, prot);
+		err = __change_page_attr_noflush(page, prot);
 		if (err)
 			break;
 	}
+	wmb();
+	flush_tlb_kernel_range((unsigned long)page_address(start),
+			       (unsigned long)page_address(page));
 	local_irq_restore(flags);
 	return err;
 }
-- 
2.12.0


* [PATCH 2/3] powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32
  2017-04-19 10:59 [PATCH 0/3] powerpc/mm: Fix kernel protection and implement CONFIG_DEBUG_RODATA on PPC32 Christophe Leroy
  2017-04-19 10:59 ` [PATCH 1/3] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
@ 2017-04-19 11:00 ` Christophe Leroy
  2017-04-19 11:00 ` [PATCH 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA " Christophe Leroy
  2 siblings, 0 replies; 8+ messages in thread
From: Christophe Leroy @ 2017-04-19 11:00 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

As seen below, although the init sections have been freed, the
associated memory area is still marked as executable in the
page tables.

~ dmesg
[    5.860093] Freeing unused kernel memory: 592K (c0570000 - c0604000)

~ cat /sys/kernel/debug/kernel_page_tables
---[ Start of kernel VM ]---
0xc0000000-0xc0497fff        4704K  rw  X  present dirty accessed shared
0xc0498000-0xc056ffff         864K  rw     present dirty accessed shared
0xc0570000-0xc059ffff         192K  rw  X  present dirty accessed shared
0xc05a0000-0xc7ffffff      125312K  rw     present dirty accessed shared
---[ vmalloc() Area ]---

This patch fixes that.

The implementation reuses the change_page_attr() function
implemented for CONFIG_DEBUG_PAGEALLOC.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/mm/mem.c        |  1 +
 arch/powerpc/mm/mmu_decl.h   |  3 +++
 arch/powerpc/mm/pgtable_32.c | 13 ++++++++++---
 3 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 9ee536ec0739..f261dac36a68 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -401,6 +401,7 @@ void free_initmem(void)
 {
 	ppc_md.progress = ppc_printk_progress;
 	free_initmem_default(POISON_FREE_INITMEM);
+	remap_init_ram();
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index f988db655e5b..718fa45310be 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -95,6 +95,7 @@ extern void _tlbia(void);
 
 extern void mapin_ram(void);
 extern int map_page(unsigned long va, phys_addr_t pa, int flags);
+void remap_init_ram(void);
 extern void setbat(int index, unsigned long virt, phys_addr_t phys,
 		   unsigned int size, pgprot_t prot);
 
@@ -106,6 +107,8 @@ struct hash_pte;
 extern struct hash_pte *Hash, *Hash_end;
 extern unsigned long Hash_size, Hash_mask;
 
+#else
+static inline void remap_init_ram(void) {}
 #endif /* CONFIG_PPC32 */
 
 extern unsigned long ioremap_bot;
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 8e8940fad12f..31728f3cdd20 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -323,8 +323,6 @@ get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep, pmd_t **pmdp)
         return(retval);
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
-
 static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
 {
 	pte_t *kpte;
@@ -347,7 +345,7 @@ static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
 /*
  * Change the page attributes of an page in the linear mapping.
  *
- * THIS CONFLICTS WITH BAT MAPPINGS, DEBUG USE ONLY
+ * THIS DOES NOTHING WITH BAT MAPPINGS, DEBUG USE ONLY
  */
 static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
 {
@@ -368,7 +366,16 @@ static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
 	return err;
 }
 
+void remap_init_ram(void)
+{
+	struct page *page = virt_to_page(_sinittext);
+	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
+				 PFN_DOWN((unsigned long)_sinittext);
+
+	change_page_attr(page, numpages, PAGE_KERNEL);
+}
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
-- 
2.12.0


* [PATCH 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
  2017-04-19 10:59 [PATCH 0/3] powerpc/mm: Fix kernel protection and implement CONFIG_DEBUG_RODATA on PPC32 Christophe Leroy
  2017-04-19 10:59 ` [PATCH 1/3] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
  2017-04-19 11:00 ` [PATCH 2/3] powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32 Christophe Leroy
@ 2017-04-19 11:00 ` Christophe Leroy
  2017-04-19 14:01   ` Michael Ellerman
  2 siblings, 1 reply; 8+ messages in thread
From: Christophe Leroy @ 2017-04-19 11:00 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

This patch implements CONFIG_DEBUG_RODATA on PPC32.

As with CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
so that protection can be set up at the granularity of each page.

As BAT/LTLB mappings are deactivated, there might be a performance
impact. For this reason, the option is kept off by default.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/Kconfig.debug         | 11 +++++++++++
 arch/powerpc/include/asm/pgtable.h |  8 ++++++++
 arch/powerpc/kernel/ftrace.c       |  2 ++
 arch/powerpc/mm/init_32.c          |  3 ++-
 arch/powerpc/mm/pgtable_32.c       | 36 ++++++++++++++++++++++++++++++++++++
 5 files changed, 59 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index c86df246339e..047f91564e52 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -369,4 +369,15 @@ config PPC_HTDUMP
 	def_bool y
 	depends on PPC_PTDUMP && PPC_BOOK3S
 
+config DEBUG_RODATA
+       bool "Write protect kernel read-only data structures"
+       depends on DEBUG_KERNEL && PPC32
+       default n
+       help
+	 Mark the kernel read-only data as write-protected in the pagetables,
+	 in order to catch accidental (and incorrect) writes to such const
+	 data. This option may have a performance impact because block
+	 mapping via BATs etc... will be disabled.
+	 If in doubt, say "N".
+
 endmenu
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index dd01212935ac..5de74b60ed00 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -80,6 +80,14 @@ unsigned long vmalloc_to_phys(void *vmalloc_addr);
 
 void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
 void pgtable_cache_init(void);
+
+#ifdef CONFIG_DEBUG_RODATA
+void set_kernel_text_rw(void);
+void set_kernel_text_ro(void);
+#else
+static inline void set_kernel_text_rw(void) {}
+static inline void set_kernel_text_ro(void) {}
+#endif
 #endif /* __ASSEMBLY__ */
 
 #endif /* _ASM_POWERPC_PGTABLE_H */
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 32509de6ce4c..4af81fb23653 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -526,7 +526,9 @@ void ftrace_replace_code(int enable)
  */
 void arch_ftrace_update_code(int command)
 {
+	set_kernel_text_rw();
 	ftrace_modify_all_code(command);
+	set_kernel_text_ro();
 }
 
 int __init ftrace_dyn_arch_init(void)
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 8a7c38b8d335..e39c812b97ca 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -109,7 +109,8 @@ void __init MMU_setup(void)
 	if (strstr(boot_command_line, "noltlbs")) {
 		__map_without_ltlbs = 1;
 	}
-	if (debug_pagealloc_enabled()) {
+	if (debug_pagealloc_enabled() ||
+	    IS_ENABLED(CONFIG_DEBUG_RODATA)) {
 		__map_without_bats = 1;
 		__map_without_ltlbs = 1;
 	}
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 31728f3cdd20..b071ce64d173 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -41,6 +41,7 @@ unsigned long ioremap_bot;
 EXPORT_SYMBOL(ioremap_bot);	/* aka VMALLOC_END */
 
 extern char etext[], _stext[], _sinittext[], _einittext[];
+extern char __start_rodata[], __init_begin[];
 
 __ref pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long address)
 {
@@ -375,6 +376,41 @@ void remap_init_ram(void)
 	change_page_attr(page, numpages, PAGE_KERNEL);
 }
 
+#ifdef CONFIG_DEBUG_RODATA
+void set_kernel_text_rw(void)
+{
+	struct page *page = virt_to_page(_stext);
+	unsigned long numpages = PFN_UP((unsigned long)etext) -
+				 PFN_DOWN((unsigned long)_stext);
+
+	change_page_attr(page, numpages, PAGE_KERNEL_X);
+}
+
+void set_kernel_text_ro(void)
+{
+	struct page *page = virt_to_page(_stext);
+	unsigned long numpages = PFN_UP((unsigned long)etext) -
+				 PFN_DOWN((unsigned long)_stext);
+
+	change_page_attr(page, numpages, PAGE_KERNEL_ROX);
+}
+
+void mark_rodata_ro(void)
+{
+	/*
+	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
+	 * to cover NOTES and EXCEPTION_TABLE.
+	 */
+	struct page *page = virt_to_page(__start_rodata);
+	unsigned long numpages = PFN_UP((unsigned long)__init_begin) -
+				 PFN_DOWN((unsigned long)__start_rodata);
+
+	set_kernel_text_ro();
+
+	change_page_attr(page, numpages, PAGE_KERNEL_RO);
+}
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-- 
2.12.0


* Re: [PATCH 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
  2017-04-19 11:00 ` [PATCH 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA " Christophe Leroy
@ 2017-04-19 14:01   ` Michael Ellerman
  2017-04-19 14:22     ` Christophe LEROY
  0 siblings, 1 reply; 8+ messages in thread
From: Michael Ellerman @ 2017-04-19 14:01 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras, Scott Wood
  Cc: linux-kernel, linuxppc-dev

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
> index 32509de6ce4c..4af81fb23653 100644
> --- a/arch/powerpc/kernel/ftrace.c
> +++ b/arch/powerpc/kernel/ftrace.c
> @@ -526,7 +526,9 @@ void ftrace_replace_code(int enable)
>   */
>  void arch_ftrace_update_code(int command)
>  {
> +	set_kernel_text_rw();
>  	ftrace_modify_all_code(command);
> +	set_kernel_text_ro();
>  }
  
I'm not sure that's the right place for that.

If you look at other arches they hook into ftrace_modify_code(), where
you have the address that's being modified.

cheers


* Re: [PATCH 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
  2017-04-19 14:01   ` Michael Ellerman
@ 2017-04-19 14:22     ` Christophe LEROY
  2017-04-19 18:36       ` christophe leroy
  0 siblings, 1 reply; 8+ messages in thread
From: Christophe LEROY @ 2017-04-19 14:22 UTC (permalink / raw)
  To: Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, Scott Wood
  Cc: linux-kernel, linuxppc-dev



On 19/04/2017 at 16:01, Michael Ellerman wrote:
> Christophe Leroy <christophe.leroy@c-s.fr> writes:
>
>> diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
>> index 32509de6ce4c..4af81fb23653 100644
>> --- a/arch/powerpc/kernel/ftrace.c
>> +++ b/arch/powerpc/kernel/ftrace.c
>> @@ -526,7 +526,9 @@ void ftrace_replace_code(int enable)
>>   */
>>  void arch_ftrace_update_code(int command)
>>  {
>> +	set_kernel_text_rw();
>>  	ftrace_modify_all_code(command);
>> +	set_kernel_text_ro();
>>  }
>
> I'm not sure that's the right place for that.


I took arch/arm/ as a model. It does the following. Is that wrong?


static int __ftrace_modify_code(void *data)
{
	int *command = data;

	set_kernel_text_rw();
	ftrace_modify_all_code(*command);
	set_kernel_text_ro();

	return 0;
}

void arch_ftrace_update_code(int command)
{
	stop_machine(__ftrace_modify_code, &command, NULL);
}



> If you look at other arches they hook into ftrace_modify_code(), where
> you have the address that's being modified.
>

Ok, I will look at other arches.

Christophe


* Re: [PATCH 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
  2017-04-19 14:22     ` Christophe LEROY
@ 2017-04-19 18:36       ` christophe leroy
  2017-04-20  6:04         ` Michael Ellerman
  0 siblings, 1 reply; 8+ messages in thread
From: christophe leroy @ 2017-04-19 18:36 UTC (permalink / raw)
  To: Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, Scott Wood
  Cc: linuxppc-dev, linux-kernel



On 19/04/2017 at 16:22, Christophe LEROY wrote:
>
>
> On 19/04/2017 at 16:01, Michael Ellerman wrote:
>> Christophe Leroy <christophe.leroy@c-s.fr> writes:
>>
>>> diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
>>> index 32509de6ce4c..4af81fb23653 100644
>>> --- a/arch/powerpc/kernel/ftrace.c
>>> +++ b/arch/powerpc/kernel/ftrace.c
>>> @@ -526,7 +526,9 @@ void ftrace_replace_code(int enable)
>>>   */
>>>  void arch_ftrace_update_code(int command)
>>>  {
>>> +    set_kernel_text_rw();
>>>      ftrace_modify_all_code(command);
>>> +    set_kernel_text_ro();
>>>  }
>>
>> I'm not sure that's the right place for that.
>
>
> I took arch/arm/ as model. It does the following. Is that wrong ?
>
>
> static int __ftrace_modify_code(void *data)
> {
>     int *command = data;
>
>     set_kernel_text_rw();
>     ftrace_modify_all_code(*command);
>     set_kernel_text_ro();
>
>     return 0;
> }
>
> void arch_ftrace_update_code(int command)
> {
>     stop_machine(__ftrace_modify_code, &command, NULL);
> }
>
>
>
>> If you look at other arches they hook into ftrace_modify_code(), where
>> you have the address that's being modified.
>>
>
> Ok, I will look at other arches.

Could you provide more details on what you have seen on other arches? I
didn't notice anything related in any other arch's ftrace_modify_code().

What I did find, however, is that x86 uses
ftrace_arch_code_modify_prepare() and
ftrace_arch_code_modify_post_process():

int ftrace_arch_code_modify_prepare(void)
{
	set_kernel_text_rw();
	set_all_modules_text_rw();
	return 0;
}

int ftrace_arch_code_modify_post_process(void)
{
	set_all_modules_text_ro();
	set_kernel_text_ro();
	return 0;
}


Those functions are also used on arm but only for modules:

int ftrace_arch_code_modify_prepare(void)
{
	set_all_modules_text_rw();
	return 0;
}

int ftrace_arch_code_modify_post_process(void)
{
	set_all_modules_text_ro();
	/* Make sure any TLB misses during machine stop are cleared. */
	flush_tlb_all();
	return 0;
}


Should I apply the x86 approach, or something else?
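
Applied here, with the set_kernel_text_rw()/set_kernel_text_ro() helpers
from this patch, the x86 approach would look something like this (untested
sketch, without the module handling, since set_all_modules_text_rw/ro()
don't exist on powerpc):

	int ftrace_arch_code_modify_prepare(void)
	{
		set_kernel_text_rw();
		return 0;
	}

	int ftrace_arch_code_modify_post_process(void)
	{
		set_kernel_text_ro();
		return 0;
	}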

Christophe



* Re: [PATCH 3/3] powerpc/mm: Implement CONFIG_DEBUG_RODATA on PPC32
  2017-04-19 18:36       ` christophe leroy
@ 2017-04-20  6:04         ` Michael Ellerman
  0 siblings, 0 replies; 8+ messages in thread
From: Michael Ellerman @ 2017-04-20  6:04 UTC (permalink / raw)
  To: christophe leroy, Benjamin Herrenschmidt, Paul Mackerras, Scott Wood
  Cc: linuxppc-dev, linux-kernel

christophe leroy <christophe.leroy@c-s.fr> writes:

> On 19/04/2017 at 16:22, Christophe LEROY wrote:
>>
>>
>> On 19/04/2017 at 16:01, Michael Ellerman wrote:
>>> Christophe Leroy <christophe.leroy@c-s.fr> writes:
>>>
>>>> diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
>>>> index 32509de6ce4c..4af81fb23653 100644
>>>> --- a/arch/powerpc/kernel/ftrace.c
>>>> +++ b/arch/powerpc/kernel/ftrace.c
>>>> @@ -526,7 +526,9 @@ void ftrace_replace_code(int enable)
>>>>   */
>>>>  void arch_ftrace_update_code(int command)
>>>>  {
>>>> +    set_kernel_text_rw();
>>>>      ftrace_modify_all_code(command);
>>>> +    set_kernel_text_ro();
>>>>  }
>>>
>>> I'm not sure that's the right place for that.
>>
>> I took arch/arm/ as model. It does the following. Is that wrong ?

It's not wrong, it's just not optimal.

You're setting all text RW to modify one instruction, which is more work
than necessary and also creates a larger attack window.

> Could you provide more details on what you have seen on other arches ? I 
> didn't notice anything related in any other arches' ftrace_modify_code()

I was looking at arm64 specifically:

static int ftrace_modify_code(unsigned long pc, u32 old, u32 new,
			      bool validate)
{
...
	if (aarch64_insn_patch_text_nosync((void *)pc, new))
		return -EPERM;

	return 0;
}

int __kprobes aarch64_insn_patch_text_nosync(void *addr, u32 insn)
{
...
	ret = aarch64_insn_write(tp, insn);
	if (ret == 0)
		flush_icache_range((uintptr_t)tp,
				   (uintptr_t)tp + AARCH64_INSN_SIZE);

	return ret;
}

static int __kprobes __aarch64_insn_write(void *addr, u32 insn)
{
...
	raw_spin_lock_irqsave(&patch_lock, flags);
	waddr = patch_map(addr, FIX_TEXT_POKE0);

	ret = probe_kernel_write(waddr, &insn, AARCH64_INSN_SIZE);

	patch_unmap(FIX_TEXT_POKE0);
	raw_spin_unlock_irqrestore(&patch_lock, flags);

	return ret;
}


So if I'm reading it right they actually create a new RW mapping, patch
that, and then unmap, before flushing the icache.

That's definitely the nicest approach, but is maybe more work than you
signed up for :)


But even if you just shift your logic into ftrace_modify_code(), you
then have the ip, so you can call change_page_attr() on the single page
you're patching rather than all of text.
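
Roughly something like this (untested sketch; change_page_attr() is
currently static in pgtable_32.c so it would need to be made visible, and
patch_instruction() is the existing powerpc helper for rewriting an
instruction):

	/* Untested sketch: make only the page containing 'ip' writable
	 * around the patch, instead of toggling all of kernel text.
	 */
	static int patch_one_insn(unsigned long ip, unsigned int new_insn)
	{
		struct page *page = virt_to_page((void *)ip);
		int err;

		change_page_attr(page, 1, PAGE_KERNEL_X);   /* RW + X while patching */
		err = patch_instruction((unsigned int *)ip, new_insn);
		change_page_attr(page, 1, PAGE_KERNEL_ROX); /* back to RO + X */

		return err;
	}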

cheers

