* [PATCH v3 0/5] powerpc/mm: Fix kernel protection and implement STRICT_KERNEL_RWX on PPC32
@ 2017-08-02 13:50 Christophe Leroy
  2017-08-02 13:51 ` [PATCH v3 1/5] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
                   ` (5 more replies)
  0 siblings, 6 replies; 8+ messages in thread
From: Christophe Leroy @ 2017-08-02 13:50 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Scott Wood, Balbir Singh
  Cc: linux-kernel, linuxppc-dev

This patch set implements STRICT_KERNEL_RWX on PPC32
after fixing a few issues related to kernel code page protection.

At the end we take the opportunity to get rid of some unnecessary/outdated
fixmap stuff.

Changes from v2 to v3:
* Rebased on latest linux-powerpc/merge branch
* Function remap_init_ram() renamed to mark_initmem_nx() to match the new
  PPC64 implementation

Changes from v1 to v2:
* Rebased on latest linux-next following the inclusion of STRICT_KERNEL_RWX for PPC64
* Removed from the series the two patches already applied.

Christophe Leroy (5):
  powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs
  powerpc/mm: Fix kernel RAM protection after freeing unused memory on
    PPC32
  powerpc/mm: Implement STRICT_KERNEL_RWX on PPC32
  powerpc/mm: declare some local functions static
  powerpc/mm: Simplify __set_fixmap()

 arch/powerpc/Kconfig                         |  2 +-
 arch/powerpc/include/asm/book3s/32/pgtable.h |  3 --
 arch/powerpc/include/asm/fixmap.h            | 10 +++--
 arch/powerpc/include/asm/nohash/32/pgtable.h |  3 --
 arch/powerpc/include/asm/pgtable.h           |  2 +-
 arch/powerpc/kernel/vmlinux.lds.S            |  2 +-
 arch/powerpc/mm/init_32.c                    |  6 +++
 arch/powerpc/mm/pgtable_32.c                 | 66 ++++++++++++++++++----------
 8 files changed, 58 insertions(+), 36 deletions(-)

-- 
2.13.3

* [PATCH v3 1/5] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs
  2017-08-02 13:50 [PATCH v3 0/5] powerpc/mm: Fix kernel protection and implement STRICT_KERNEL_RWX on PPC32 Christophe Leroy
@ 2017-08-02 13:51 ` Christophe Leroy
  2017-08-16 12:29   ` [v3, " Michael Ellerman
  2017-08-02 13:51 ` [PATCH v3 2/5] powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32 Christophe Leroy
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 8+ messages in thread
From: Christophe Leroy @ 2017-08-02 13:51 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Scott Wood, Balbir Singh
  Cc: linux-kernel, linuxppc-dev

__change_page_attr() uses flush_tlb_page().
flush_tlb_page() uses the tlbie instruction, which also invalidates
pinned TLBs, which is not what we expect.

This patch modifies the implementation to use flush_tlb_kernel_range()
instead. That uses tlbia, which preserves pinned TLBs.
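
The net effect of the hunks below, condensed into one before/after
sketch (not the literal diff):

	/* Before: one tlbie per page, issued inside the loop */
	for (i = 0; i < numpages; i++, page++)
		__change_page_attr(page, prot);	/* did flush_tlb_page() */

	/* After: update every PTE first, then do a single ranged flush.
	 * On these cores flush_tlb_kernel_range() uses tlbia, which
	 * leaves pinned TLB entries in place. */
	for (i = 0; i < numpages; i++, page++)
		__change_page_attr_noflush(page, prot);
	wmb();
	flush_tlb_kernel_range((unsigned long)page_address(start),
			       (unsigned long)page_address(page));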

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 v3: no change

 arch/powerpc/mm/pgtable_32.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index a9e4bfc025bc..991036f818bb 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -325,7 +325,7 @@ get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep, pmd_t **pmdp)
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
 
-static int __change_page_attr(struct page *page, pgprot_t prot)
+static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
 {
 	pte_t *kpte;
 	pmd_t *kpmd;
@@ -339,8 +339,6 @@ static int __change_page_attr(struct page *page, pgprot_t prot)
 	if (!get_pteptr(&init_mm, address, &kpte, &kpmd))
 		return -EINVAL;
 	__set_pte_at(&init_mm, address, kpte, mk_pte(page, prot), 0);
-	wmb();
-	flush_tlb_page(NULL, address);
 	pte_unmap(kpte);
 
 	return 0;
@@ -355,13 +353,17 @@ static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
 {
 	int i, err = 0;
 	unsigned long flags;
+	struct page *start = page;
 
 	local_irq_save(flags);
 	for (i = 0; i < numpages; i++, page++) {
-		err = __change_page_attr(page, prot);
+		err = __change_page_attr_noflush(page, prot);
 		if (err)
 			break;
 	}
+	wmb();
+	flush_tlb_kernel_range((unsigned long)page_address(start),
+			       (unsigned long)page_address(page));
 	local_irq_restore(flags);
 	return err;
 }
-- 
2.13.3

* [PATCH v3 2/5] powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32
  2017-08-02 13:50 [PATCH v3 0/5] powerpc/mm: Fix kernel protection and implement STRICT_KERNEL_RWX on PPC32 Christophe Leroy
  2017-08-02 13:51 ` [PATCH v3 1/5] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
@ 2017-08-02 13:51 ` Christophe Leroy
  2017-08-02 13:51 ` [PATCH v3 3/5] powerpc/mm: Implement STRICT_KERNEL_RWX " Christophe Leroy
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Christophe Leroy @ 2017-08-02 13:51 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Scott Wood, Balbir Singh
  Cc: linux-kernel, linuxppc-dev

As seen below, although the init sections have been freed, the
associated memory area is still marked as executable in the
page tables.

~ dmesg
[    5.860093] Freeing unused kernel memory: 592K (c0570000 - c0604000)

~ cat /sys/kernel/debug/kernel_page_tables
---[ Start of kernel VM ]---
0xc0000000-0xc0497fff        4704K  rw  X  present dirty accessed shared
0xc0498000-0xc056ffff         864K  rw     present dirty accessed shared
0xc0570000-0xc059ffff         192K  rw  X  present dirty accessed shared
0xc05a0000-0xc7ffffff      125312K  rw     present dirty accessed shared
---[ vmalloc() Area ]---

This patch fixes that.

The implementation is done by reusing the change_page_attr()
function implemented for CONFIG_DEBUG_PAGEALLOC.
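
As a worked check of the page-range arithmetic used by the new
mark_initmem_nx() below, take the init section bounds from the dmesg
trace above (4k pages assumed):

	numpages = PFN_UP(0xc0604000) - PFN_DOWN(0xc0570000)
	         = 0xc0604 - 0xc0570
	         = 0x94 pages		/* 148 * 4k = the 592K freed */

PFN_DOWN rounds the start down and PFN_UP rounds the end up, so a
partially used boundary page still gets its protection updated.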

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 v3: Function remap_init_ram() renamed to mark_initmem_nx() to match the new
  PPC64 implementation

 arch/powerpc/include/asm/pgtable.h |  2 +-
 arch/powerpc/mm/pgtable_32.c       | 13 ++++++++++---
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index afae9a336136..ab7f44475b1f 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -81,7 +81,7 @@ unsigned long vmalloc_to_phys(void *vmalloc_addr);
 void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
 void pgtable_cache_init(void);
 
-#ifdef CONFIG_STRICT_KERNEL_RWX
+#if defined(CONFIG_STRICT_KERNEL_RWX) || defined(CONFIG_PPC32)
 void mark_initmem_nx(void);
 #else
 static inline void mark_initmem_nx(void) { }
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 991036f818bb..85e8f0e0efe6 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -323,8 +323,6 @@ get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep, pmd_t **pmdp)
         return(retval);
 }
 
-#ifdef CONFIG_DEBUG_PAGEALLOC
-
 static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
 {
 	pte_t *kpte;
@@ -347,7 +345,7 @@ static int __change_page_attr_noflush(struct page *page, pgprot_t prot)
 /*
  * Change the page attributes of an page in the linear mapping.
  *
- * THIS CONFLICTS WITH BAT MAPPINGS, DEBUG USE ONLY
+ * THIS DOES NOTHING WITH BAT MAPPINGS, DEBUG USE ONLY
  */
 static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
 {
@@ -368,7 +366,16 @@ static int change_page_attr(struct page *page, int numpages, pgprot_t prot)
 	return err;
 }
 
+void mark_initmem_nx(void)
+{
+	struct page *page = virt_to_page(_sinittext);
+	unsigned long numpages = PFN_UP((unsigned long)_einittext) -
+				 PFN_DOWN((unsigned long)_sinittext);
+
+	change_page_attr(page, numpages, PAGE_KERNEL);
+}
 
+#ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
 	if (PageHighMem(page))
-- 
2.13.3

* [PATCH v3 3/5] powerpc/mm: Implement STRICT_KERNEL_RWX on PPC32
  2017-08-02 13:50 [PATCH v3 0/5] powerpc/mm: Fix kernel protection and implement STRICT_KERNEL_RWX on PPC32 Christophe Leroy
  2017-08-02 13:51 ` [PATCH v3 1/5] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
  2017-08-02 13:51 ` [PATCH v3 2/5] powerpc/mm: Fix kernel RAM protection after freeing unused memory on PPC32 Christophe Leroy
@ 2017-08-02 13:51 ` Christophe Leroy
  2017-08-02 13:51 ` [PATCH v3 4/5] powerpc/mm: declare some local functions static Christophe Leroy
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 8+ messages in thread
From: Christophe Leroy @ 2017-08-02 13:51 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Scott Wood, Balbir Singh
  Cc: linux-kernel, linuxppc-dev

This patch implements STRICT_KERNEL_RWX on PPC32.

As with CONFIG_DEBUG_PAGEALLOC, it deactivates BAT and LTLB mappings
so that protection can be set up at the level of each page.

Since BAT/LTLB mappings are deactivated, there may be a performance
impact.
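
Condensing the new mark_rodata_ro() below, the resulting protection
layout of the linear mapping is roughly (a sketch; the section symbols
come from the kernel linker script):

	_stext .. _etext		PAGE_KERNEL_ROX	/* r-x kernel text */
	__start_rodata .. __init_begin	PAGE_KERNEL_RO	/* r-- rodata, NOTES,
							   EXCEPTION_TABLE */
	remaining lowmem		PAGE_KERNEL	/* rw- non-executable */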

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 v3: no change

 arch/powerpc/Kconfig              |  2 +-
 arch/powerpc/kernel/vmlinux.lds.S |  2 +-
 arch/powerpc/mm/init_32.c         |  6 ++++++
 arch/powerpc/mm/pgtable_32.c      | 24 ++++++++++++++++++++++++
 4 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5e9de178b557..02d91ee1b601 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -165,7 +165,7 @@ config PPC
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
-	select ARCH_HAS_STRICT_KERNEL_RWX	if (PPC_BOOK3S_64 && !RELOCATABLE && !HIBERNATION)
+	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !RELOCATABLE && !HIBERNATION)
 	select ARCH_OPTIONAL_KERNEL_RWX		if ARCH_HAS_STRICT_KERNEL_RWX
 	select HAVE_CBPF_JIT			if !PPC64
 	select HAVE_CONTEXT_TRACKING		if PPC64
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index b1a250560198..882628fa6987 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -8,7 +8,7 @@
 #include <asm/cache.h>
 #include <asm/thread_info.h>
 
-#ifdef CONFIG_STRICT_KERNEL_RWX
+#if defined(CONFIG_STRICT_KERNEL_RWX) && !defined(CONFIG_PPC32)
 #define STRICT_ALIGN_SIZE	(1 << 24)
 #else
 #define STRICT_ALIGN_SIZE	PAGE_SIZE
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index 8a7c38b8d335..7d5fee1bb116 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -113,6 +113,12 @@ void __init MMU_setup(void)
 		__map_without_bats = 1;
 		__map_without_ltlbs = 1;
 	}
+#ifdef CONFIG_STRICT_KERNEL_RWX
+	if (rodata_enabled) {
+		__map_without_bats = 1;
+		__map_without_ltlbs = 1;
+	}
+#endif
 }
 
 /*
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 85e8f0e0efe6..4a3dd9fc6989 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -34,6 +34,7 @@
 #include <asm/fixmap.h>
 #include <asm/io.h>
 #include <asm/setup.h>
+#include <asm/sections.h>
 
 #include "mmu_decl.h"
 
@@ -375,6 +376,29 @@ void mark_initmem_nx(void)
 	change_page_attr(page, numpages, PAGE_KERNEL);
 }
 
+#ifdef CONFIG_STRICT_KERNEL_RWX
+void mark_rodata_ro(void)
+{
+	struct page *page;
+	unsigned long numpages;
+
+	page = virt_to_page(_stext);
+	numpages = PFN_UP((unsigned long)_etext) -
+		   PFN_DOWN((unsigned long)_stext);
+
+	change_page_attr(page, numpages, PAGE_KERNEL_ROX);
+	/*
+	 * mark .rodata as read only. Use __init_begin rather than __end_rodata
+	 * to cover NOTES and EXCEPTION_TABLE.
+	 */
+	page = virt_to_page(__start_rodata);
+	numpages = PFN_UP((unsigned long)__init_begin) -
+		   PFN_DOWN((unsigned long)__start_rodata);
+
+	change_page_attr(page, numpages, PAGE_KERNEL_RO);
+}
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-- 
2.13.3

* [PATCH v3 4/5] powerpc/mm: declare some local functions static
  2017-08-02 13:50 [PATCH v3 0/5] powerpc/mm: Fix kernel protection and implement STRICT_KERNEL_RWX on PPC32 Christophe Leroy
                   ` (2 preceding siblings ...)
  2017-08-02 13:51 ` [PATCH v3 3/5] powerpc/mm: Implement STRICT_KERNEL_RWX " Christophe Leroy
@ 2017-08-02 13:51 ` Christophe Leroy
  2017-08-02 13:51 ` [PATCH v3 5/5] powerpc/mm: Simplify __set_fixmap() Christophe Leroy
  2017-08-03  7:24 ` [PATCH v3 0/5] powerpc/mm: Fix kernel protection and implement STRICT_KERNEL_RWX on PPC32 Balbir Singh
  5 siblings, 0 replies; 8+ messages in thread
From: Christophe Leroy @ 2017-08-02 13:51 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Scott Wood, Balbir Singh
  Cc: linux-kernel, linuxppc-dev

get_pteptr() and __mapin_ram_chunk() are only used locally,
so make them static.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 v3: no change

 arch/powerpc/include/asm/book3s/32/pgtable.h | 3 ---
 arch/powerpc/include/asm/nohash/32/pgtable.h | 3 ---
 arch/powerpc/mm/pgtable_32.c                 | 4 ++--
 3 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/32/pgtable.h b/arch/powerpc/include/asm/book3s/32/pgtable.h
index 7fb755880409..17c8766777f1 100644
--- a/arch/powerpc/include/asm/book3s/32/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/32/pgtable.h
@@ -294,9 +294,6 @@ static inline void __ptep_set_access_flags(struct mm_struct *mm,
 #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val(pte) >> 3 })
 #define __swp_entry_to_pte(x)		((pte_t) { (x).val << 3 })
 
-extern int get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep,
-		      pmd_t **pmdp);
-
 int map_kernel_page(unsigned long va, phys_addr_t pa, int flags);
 
 /* Generic accessors to PTE bits */
diff --git a/arch/powerpc/include/asm/nohash/32/pgtable.h b/arch/powerpc/include/asm/nohash/32/pgtable.h
index 91314268f04f..589206bf0358 100644
--- a/arch/powerpc/include/asm/nohash/32/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/32/pgtable.h
@@ -337,9 +337,6 @@ static inline void __ptep_set_access_flags(struct mm_struct *mm,
 #define __pte_to_swp_entry(pte)		((swp_entry_t) { pte_val(pte) >> 3 })
 #define __swp_entry_to_pte(x)		((pte_t) { (x).val << 3 })
 
-extern int get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep,
-		      pmd_t **pmdp);
-
 int map_kernel_page(unsigned long va, phys_addr_t pa, int flags);
 
 #endif /* !__ASSEMBLY__ */
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 4a3dd9fc6989..57f89cd88568 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -243,7 +243,7 @@ int map_kernel_page(unsigned long va, phys_addr_t pa, int flags)
 /*
  * Map in a chunk of physical memory starting at start.
  */
-void __init __mapin_ram_chunk(unsigned long offset, unsigned long top)
+static void __init __mapin_ram_chunk(unsigned long offset, unsigned long top)
 {
 	unsigned long v, s, f;
 	phys_addr_t p;
@@ -295,7 +295,7 @@ void __init mapin_ram(void)
  * Returns true (1) if PTE was found, zero otherwise.  The pointer to
  * the PTE pointer is unmodified if PTE is not found.
  */
-int
+static int
 get_pteptr(struct mm_struct *mm, unsigned long addr, pte_t **ptep, pmd_t **pmdp)
 {
         pgd_t	*pgd;
-- 
2.13.3

* [PATCH v3 5/5] powerpc/mm: Simplify __set_fixmap()
  2017-08-02 13:50 [PATCH v3 0/5] powerpc/mm: Fix kernel protection and implement STRICT_KERNEL_RWX on PPC32 Christophe Leroy
                   ` (3 preceding siblings ...)
  2017-08-02 13:51 ` [PATCH v3 4/5] powerpc/mm: declare some local functions static Christophe Leroy
@ 2017-08-02 13:51 ` Christophe Leroy
  2017-08-03  7:24 ` [PATCH v3 0/5] powerpc/mm: Fix kernel protection and implement STRICT_KERNEL_RWX on PPC32 Balbir Singh
  5 siblings, 0 replies; 8+ messages in thread
From: Christophe Leroy @ 2017-08-02 13:51 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Scott Wood, Balbir Singh
  Cc: linux-kernel, linuxppc-dev

__set_fixmap() uses __fix_to_virt() and then does the boundary check
by itself. Instead, we can use fix_to_virt(), which does the
verification at build time. For this, __set_fixmap() needs to be
inline so that GCC can see the real value of idx at build time.
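
For reference, the generic fix_to_virt() in include/asm-generic/fixmap.h
is essentially:

	static __always_inline unsigned long fix_to_virt(const unsigned int idx)
	{
		/* fails the build if idx is out of range */
		BUILD_BUG_ON(idx >= __end_of_fixed_addresses);
		return __fix_to_virt(idx);
	}

BUILD_BUG_ON() only works when its condition folds to a compile-time
constant, which is why __set_fixmap() has to be inline: inlining lets
GCC see the constant idx at each call site.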

While at it, we remove the 'fixmaps' variable, which is set but has
never been used since its introduction in
commit 2c419bdeca1d9 ("[POWERPC] Port fixmap from x86 and use
for kmap_atomic").

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 v3: no change

 arch/powerpc/include/asm/fixmap.h | 10 +++++++---
 arch/powerpc/mm/pgtable_32.c      | 15 ---------------
 2 files changed, 7 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/include/asm/fixmap.h b/arch/powerpc/include/asm/fixmap.h
index 4508b322f2cd..6c40dfda5912 100644
--- a/arch/powerpc/include/asm/fixmap.h
+++ b/arch/powerpc/include/asm/fixmap.h
@@ -17,6 +17,7 @@
 #ifndef __ASSEMBLY__
 #include <linux/kernel.h>
 #include <asm/page.h>
+#include <asm/pgtable.h>
 #ifdef CONFIG_HIGHMEM
 #include <linux/threads.h>
 #include <asm/kmap_types.h>
@@ -62,9 +63,6 @@ enum fixed_addresses {
 	__end_of_fixed_addresses
 };
 
-extern void __set_fixmap (enum fixed_addresses idx,
-					phys_addr_t phys, pgprot_t flags);
-
 #define __FIXADDR_SIZE	(__end_of_fixed_addresses << PAGE_SHIFT)
 #define FIXADDR_START		(FIXADDR_TOP - __FIXADDR_SIZE)
 
@@ -72,5 +70,11 @@ extern void __set_fixmap (enum fixed_addresses idx,
 
 #include <asm-generic/fixmap.h>
 
+static inline void __set_fixmap(enum fixed_addresses idx,
+				phys_addr_t phys, pgprot_t flags)
+{
+	map_kernel_page(fix_to_virt(idx), phys, pgprot_val(flags));
+}
+
 #endif /* !__ASSEMBLY__ */
 #endif
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 57f89cd88568..65eda1997c3f 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -408,18 +408,3 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
 	change_page_attr(page, numpages, enable ? PAGE_KERNEL : __pgprot(0));
 }
 #endif /* CONFIG_DEBUG_PAGEALLOC */
-
-static int fixmaps;
-
-void __set_fixmap (enum fixed_addresses idx, phys_addr_t phys, pgprot_t flags)
-{
-	unsigned long address = __fix_to_virt(idx);
-
-	if (idx >= __end_of_fixed_addresses) {
-		BUG();
-		return;
-	}
-
-	map_kernel_page(address, phys, pgprot_val(flags));
-	fixmaps++;
-}
-- 
2.13.3

* Re: [PATCH v3 0/5] powerpc/mm: Fix kernel protection and implement STRICT_KERNEL_RWX on PPC32
  2017-08-02 13:50 [PATCH v3 0/5] powerpc/mm: Fix kernel protection and implement STRICT_KERNEL_RWX on PPC32 Christophe Leroy
                   ` (4 preceding siblings ...)
  2017-08-02 13:51 ` [PATCH v3 5/5] powerpc/mm: Simplify __set_fixmap() Christophe Leroy
@ 2017-08-03  7:24 ` Balbir Singh
  5 siblings, 0 replies; 8+ messages in thread
From: Balbir Singh @ 2017-08-03  7:24 UTC (permalink / raw)
  To: Christophe Leroy
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Scott Wood, linux-kernel,
	open list:LINUX FOR POWERPC (32-BIT AND 64-BIT)

On Wed, Aug 2, 2017 at 11:50 PM, Christophe Leroy
<christophe.leroy@c-s.fr> wrote:
> This patch set implements STRICT_KERNEL_RWX on PPC32
> after fixing a few issues related to kernel code page protection.
>
> At the end we take the opportunity to get rid of some unnecessary/outdated
> fixmap stuff.

I looked at it and it looks good overall; I reviewed the previous
series. Have you checked this series against CONFIG_RELOCATABLE?

Balbir Singh.

* Re: [v3, 1/5] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs
  2017-08-02 13:51 ` [PATCH v3 1/5] powerpc/mm: Ensure change_page_attr() doesn't invalidate pinned TLBs Christophe Leroy
@ 2017-08-16 12:29   ` Michael Ellerman
  0 siblings, 0 replies; 8+ messages in thread
From: Michael Ellerman @ 2017-08-16 12:29 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Scott Wood, Balbir Singh
  Cc: linuxppc-dev, linux-kernel

On Wed, 2017-08-02 at 13:51:01 UTC, Christophe Leroy wrote:
> __change_page_attr() uses flush_tlb_page().
> flush_tlb_page() uses the tlbie instruction, which also invalidates
> pinned TLBs, which is not what we expect.
> 
> This patch modifies the implementation to use flush_tlb_kernel_range()
> instead. That uses tlbia, which preserves pinned TLBs.
> 
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/e611939fc8ec13387018df88083de7

cheers
