linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH v1 0/3] Implement set_memory_xx for ppc64 book3s
@ 2017-08-01 11:25 Balbir Singh
  2017-08-01 11:25 ` [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines Balbir Singh
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Balbir Singh @ 2017-08-01 11:25 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: naveen.n.rao, Balbir Singh

After implementing STRICT_KERNEL_RWX, it turns out that implementing
set_memory_ro/rw/x/nx is quite easy. The first patch applies on top
of http://patchwork.ozlabs.org/patch/795745/.

The first patch implements the various routines, the second patch
enables ARCH_HAS_SET_MEMORY for PPC_BOOK3S_64 and the third patch
enables the BPF infrastructure to use the set_memory_ro and
set_memory_rw routines.
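
As an illustration (hypothetical caller, not part of this series), the
interface is meant to be used on vmalloc'ed memory, e.g.:

	/* needs <linux/vmalloc.h> and <asm/set_memory.h> */
	void *p = vmalloc(PAGE_SIZE);

	/* ... populate p ... */
	set_memory_ro((unsigned long)p, 1);	/* write-protect one page */
	/* ... */
	set_memory_rw((unsigned long)p, 1);	/* make it writable again */
	vfree(p);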

Balbir Singh (3):
  arch/powerpc/set_memory: Implement set_memory_xx routines
  Enable ARCH_HAS_SET_MEMORY
  arch/powerpc/net/bpf: Basic EBPF support

 arch/powerpc/Kconfig                       |  1 +
 arch/powerpc/include/asm/book3s/64/hash.h  |  6 +++
 arch/powerpc/include/asm/book3s/64/radix.h |  6 +++
 arch/powerpc/include/asm/set_memory.h      | 34 +++++++++++++++
 arch/powerpc/mm/pgtable-hash64.c           | 51 ++++++++++++++++++++--
 arch/powerpc/mm/pgtable-radix.c            | 26 ++++++------
 arch/powerpc/mm/pgtable_64.c               | 68 ++++++++++++++++++++++++++++++
 arch/powerpc/net/bpf_jit_comp64.c          | 13 +-----
 8 files changed, 177 insertions(+), 28 deletions(-)
 create mode 100644 arch/powerpc/include/asm/set_memory.h

-- 
2.9.4


* [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines
  2017-08-01 11:25 [PATCH v1 0/3] Implement set_memory_xx for ppc64 book3s Balbir Singh
@ 2017-08-01 11:25 ` Balbir Singh
  2017-08-01 19:08   ` christophe leroy
  2017-08-02 10:09   ` Aneesh Kumar K.V
  2017-08-01 11:25 ` [PATCH v1 2/3] Enable ARCH_HAS_SET_MEMORY Balbir Singh
  2017-08-01 11:25 ` [PATCH v1 3/3] arch/powerpc/net/bpf: Basic EBPF support Balbir Singh
  2 siblings, 2 replies; 11+ messages in thread
From: Balbir Singh @ 2017-08-01 11:25 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: naveen.n.rao, Balbir Singh

Add support for the set_memory_xx routines. With the STRICT_KERNEL_RWX
feature we gained the ability to change the page permissions
for pte ranges. This patch adds support for both radix and hash
so that we can change their permissions via set/clear masks.

A new helper is required for hash (the existing
hash__change_memory_range() is renamed to
hash__change_boot_memory_range() as it deals with bolted PTEs).

The new hash__change_memory_range() works with vmalloc'ed PAGE_SIZE
requests for permission changes. It does not invoke updatepp;
instead it changes the software PTE and invalidates the PTE.

For radix, radix__change_memory_range() is set up to do the right
thing for vmalloc'd addresses. It takes a new parameter to decide
what attributes to set.
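
With the set/clear mask interface each set_memory_xx variant becomes a
single call into change_memory_common() (added below), e.g.:

	/* set_memory_ro(): clear the write bit */
	change_memory_common(addr, numpages, 0, _PAGE_WRITE);
	/* set_memory_x(): set the exec bit */
	change_memory_common(addr, numpages, _PAGE_EXEC, 0);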

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/include/asm/book3s/64/hash.h  |  6 +++
 arch/powerpc/include/asm/book3s/64/radix.h |  6 +++
 arch/powerpc/include/asm/set_memory.h      | 34 +++++++++++++++
 arch/powerpc/mm/pgtable-hash64.c           | 51 ++++++++++++++++++++--
 arch/powerpc/mm/pgtable-radix.c            | 26 ++++++------
 arch/powerpc/mm/pgtable_64.c               | 68 ++++++++++++++++++++++++++++++
 6 files changed, 175 insertions(+), 16 deletions(-)
 create mode 100644 arch/powerpc/include/asm/set_memory.h

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 36fc7bf..65003c9 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -94,6 +94,12 @@ extern void hash__mark_rodata_ro(void);
 extern void hash__mark_initmem_nx(void);
 #endif
 
+/*
+ * For set_memory_*
+ */
+extern int hash__change_memory_range(unsigned long start, unsigned long end,
+				     unsigned long set, unsigned long clear);
+
 extern void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
 			    pte_t *ptep, unsigned long pte, int huge);
 extern unsigned long htab_convert_pte_flags(unsigned long pteflags);
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index 544440b..5ca0636 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -121,6 +121,12 @@ extern void radix__mark_rodata_ro(void);
 extern void radix__mark_initmem_nx(void);
 #endif
 
+/*
+ * For set_memory_*
+ */
+extern int radix__change_memory_range(unsigned long start, unsigned long end,
+				      unsigned long set, unsigned long clear);
+
 static inline unsigned long __radix_pte_update(pte_t *ptep, unsigned long clr,
 					       unsigned long set)
 {
diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
new file mode 100644
index 0000000..b19c67c
--- /dev/null
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -0,0 +1,34 @@
+/*
+ * set_memory.h
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, you can access it online at
+ * http://www.gnu.org/licenses/gpl-2.0.html.
+ *
+ * Copyright IBM Corporation, 2017
+ *
+ * Authors: Balbir Singh <bsingharora@gmail.com>
+ */
+
+#ifndef __ASM_SET_MEMORY_H
+#define __ASM_SET_MEMORY_H
+
+/*
+ * Functions to change memory attributes.
+ */
+int set_memory_ro(unsigned long addr, int numpages);
+int set_memory_rw(unsigned long addr, int numpages);
+int set_memory_x(unsigned long addr, int numpages);
+int set_memory_nx(unsigned long addr, int numpages);
+
+#endif
diff --git a/arch/powerpc/mm/pgtable-hash64.c b/arch/powerpc/mm/pgtable-hash64.c
index 656f7f3..db5b477 100644
--- a/arch/powerpc/mm/pgtable-hash64.c
+++ b/arch/powerpc/mm/pgtable-hash64.c
@@ -424,9 +424,52 @@ int hash__has_transparent_hugepage(void)
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+/*
+ * This routine will change pte protection only for vmalloc'd
+ * PAGE_SIZE pages, do not invoke for bolted pages
+ */
+int hash__change_memory_range(unsigned long start, unsigned long end,
+				unsigned long set, unsigned long clear)
+{
+	unsigned long idx;
+	pgd_t *pgdp;
+	pud_t *pudp;
+	pmd_t *pmdp;
+	pte_t *ptep;
+
+	start = ALIGN_DOWN(start, PAGE_SIZE);
+	end = PAGE_ALIGN(end); // aligns up
+
+	/*
+	 * Update the software PTE and flush the entry.
+	 * This should cause a new fault with the right
+	 * things setup in the hash page table
+	 */
+	pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
+		 start, end, set, clear);
+
+	for (idx = start; idx < end; idx += PAGE_SIZE) {
+		pgdp = pgd_offset_k(idx);
+		pudp = pud_alloc(&init_mm, pgdp, idx);
+		if (!pudp)
+			return -1;
+		pmdp = pmd_alloc(&init_mm, pudp, idx);
+		if (!pmdp)
+			return -1;
+		ptep = pte_alloc_kernel(pmdp, idx);
+		if (!ptep)
+			return -1;
+		hash__pte_update(&init_mm, idx, ptep, clear, set, 0);
+		hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
+	}
+	return 0;
+
+}
+EXPORT_SYMBOL(hash__change_memory_range);
+
 #ifdef CONFIG_STRICT_KERNEL_RWX
-static bool hash__change_memory_range(unsigned long start, unsigned long end,
-				      unsigned long newpp)
+bool hash__change_boot_memory_range(unsigned long start, unsigned long end,
+				unsigned long newpp)
 {
 	unsigned long idx;
 	unsigned int step, shift;
@@ -482,7 +525,7 @@ void hash__mark_rodata_ro(void)
 	start = (unsigned long)_stext;
 	end = (unsigned long)__init_begin;
 
-	WARN_ON(!hash__change_memory_range(start, end, PP_RXXX));
+	WARN_ON(!hash__change_boot_memory_range(start, end, PP_RXXX));
 }
 
 void hash__mark_initmem_nx(void)
@@ -494,6 +537,6 @@ void hash__mark_initmem_nx(void)
 
 	pp = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL));
 
-	WARN_ON(!hash__change_memory_range(start, end, pp));
+	WARN_ON(!hash__change_boot_memory_range(start, end, pp));
 }
 #endif
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 6e0176d..0e66324 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -114,9 +114,8 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
 	return 0;
 }
 
-#ifdef CONFIG_STRICT_KERNEL_RWX
-void radix__change_memory_range(unsigned long start, unsigned long end,
-				unsigned long clear)
+int radix__change_memory_range(unsigned long start, unsigned long end,
+				unsigned long set, unsigned long clear)
 {
 	unsigned long idx;
 	pgd_t *pgdp;
@@ -127,35 +126,38 @@ void radix__change_memory_range(unsigned long start, unsigned long end,
 	start = ALIGN_DOWN(start, PAGE_SIZE);
 	end = PAGE_ALIGN(end); // aligns up
 
-	pr_debug("Changing flags on range %lx-%lx removing 0x%lx\n",
-		 start, end, clear);
+	pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
+		 start, end, set, clear);
 
 	for (idx = start; idx < end; idx += PAGE_SIZE) {
 		pgdp = pgd_offset_k(idx);
 		pudp = pud_alloc(&init_mm, pgdp, idx);
 		if (!pudp)
-			continue;
+			return -1;
 		if (pud_huge(*pudp)) {
 			ptep = (pte_t *)pudp;
 			goto update_the_pte;
 		}
 		pmdp = pmd_alloc(&init_mm, pudp, idx);
 		if (!pmdp)
-			continue;
+			return -1;
 		if (pmd_huge(*pmdp)) {
 			ptep = pmdp_ptep(pmdp);
 			goto update_the_pte;
 		}
 		ptep = pte_alloc_kernel(pmdp, idx);
 		if (!ptep)
-			continue;
+			return -1;
 update_the_pte:
-		radix__pte_update(&init_mm, idx, ptep, clear, 0, 0);
+		radix__pte_update(&init_mm, idx, ptep, clear, set, 0);
 	}
 
 	radix__flush_tlb_kernel_range(start, end);
+	return 0;
 }
+EXPORT_SYMBOL(radix__change_memory_range);
 
+#ifdef CONFIG_STRICT_KERNEL_RWX
 void radix__mark_rodata_ro(void)
 {
 	unsigned long start, end;
@@ -163,12 +165,12 @@ void radix__mark_rodata_ro(void)
 	start = (unsigned long)_stext;
 	end = (unsigned long)__init_begin;
 
-	radix__change_memory_range(start, end, _PAGE_WRITE);
+	radix__change_memory_range(start, end, 0, _PAGE_WRITE);
 
 	start = (unsigned long)__start_interrupts - PHYSICAL_START;
 	end = (unsigned long)__end_interrupts - PHYSICAL_START;
 
-	radix__change_memory_range(start, end, _PAGE_WRITE);
+	radix__change_memory_range(start, end, 0, _PAGE_WRITE);
 }
 
 
@@ -177,7 +179,7 @@ void radix__mark_initmem_nx(void)
 	unsigned long start = (unsigned long)__init_begin;
 	unsigned long end = (unsigned long)__init_end;
 
-	radix__change_memory_range(start, end, _PAGE_EXEC);
+	radix__change_memory_range(start, end, 0, _PAGE_EXEC);
 }
 
 #endif /* CONFIG_STRICT_KERNEL_RWX */
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 0736e94..3ee4c7d 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -514,3 +514,71 @@ void mark_initmem_nx(void)
 		hash__mark_initmem_nx();
 }
 #endif
+
+#ifdef CONFIG_ARCH_HAS_SET_MEMORY
+/*
+ * Some of these bits are taken from arm64/mm/page_attr.c
+ */
+static int change_memory_common(unsigned long addr, int numpages,
+				unsigned long set, unsigned long clear)
+{
+	unsigned long start = addr;
+	unsigned long size = PAGE_SIZE*numpages;
+	unsigned long end = start + size;
+	struct vm_struct *area;
+
+	if (!PAGE_ALIGNED(addr)) {
+		start &= PAGE_MASK;
+		end = start + size;
+		WARN_ON_ONCE(1);
+	}
+
+	/*
+	 * So check whether the [addr, addr + size) interval is entirely
+	 * covered by precisely one VM area that has the VM_ALLOC flag set.
+	 */
+	area = find_vm_area((void *)addr);
+	if (!area ||
+	    end > (unsigned long)area->addr + area->size ||
+	    !(area->flags & VM_ALLOC))
+		return -EINVAL;
+
+	if (!numpages)
+		return 0;
+
+	if (radix_enabled())
+		return radix__change_memory_range(start, start + size,
+							set, clear);
+	else
+		return hash__change_memory_range(start, start + size,
+							set, clear);
+}
+
+int set_memory_ro(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					0, _PAGE_WRITE);
+}
+EXPORT_SYMBOL(set_memory_ro);
+
+int set_memory_rw(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					_PAGE_WRITE, 0);
+}
+EXPORT_SYMBOL(set_memory_rw);
+
+int set_memory_nx(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					0, _PAGE_EXEC);
+}
+EXPORT_SYMBOL(set_memory_nx);
+
+int set_memory_x(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+					_PAGE_EXEC, 0);
+}
+EXPORT_SYMBOL(set_memory_x);
+#endif
-- 
2.9.4


* [PATCH v1 2/3] Enable ARCH_HAS_SET_MEMORY
  2017-08-01 11:25 [PATCH v1 0/3] Implement set_memory_xx for ppc64 book3s Balbir Singh
  2017-08-01 11:25 ` [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines Balbir Singh
@ 2017-08-01 11:25 ` Balbir Singh
  2017-08-01 11:25 ` [PATCH v1 3/3] arch/powerpc/net/bpf: Basic EBPF support Balbir Singh
  2 siblings, 0 replies; 11+ messages in thread
From: Balbir Singh @ 2017-08-01 11:25 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: naveen.n.rao, Balbir Singh

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index b5b8ba8..7be710d 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -165,6 +165,7 @@ config PPC
 	select HAVE_ARCH_MMAP_RND_COMPAT_BITS	if COMPAT
 	select HAVE_ARCH_SECCOMP_FILTER
 	select HAVE_ARCH_TRACEHOOK
+	select ARCH_HAS_SET_MEMORY		if (PPC_BOOK3S_64)
 	select ARCH_HAS_STRICT_KERNEL_RWX	if (PPC_BOOK3S_64 && !HIBERNATION)
 	select ARCH_OPTIONAL_KERNEL_RWX		if ARCH_HAS_STRICT_KERNEL_RWX
 	select HAVE_CBPF_JIT			if !PPC64
-- 
2.9.4


* [PATCH v1 3/3] arch/powerpc/net/bpf: Basic EBPF support
  2017-08-01 11:25 [PATCH v1 0/3] Implement set_memory_xx for ppc64 book3s Balbir Singh
  2017-08-01 11:25 ` [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines Balbir Singh
  2017-08-01 11:25 ` [PATCH v1 2/3] Enable ARCH_HAS_SET_MEMORY Balbir Singh
@ 2017-08-01 11:25 ` Balbir Singh
  2017-08-02 14:22   ` Naveen N. Rao
  2 siblings, 1 reply; 11+ messages in thread
From: Balbir Singh @ 2017-08-01 11:25 UTC (permalink / raw)
  To: linuxppc-dev, mpe; +Cc: naveen.n.rao, Balbir Singh

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/net/bpf_jit_comp64.c | 13 +------------
 1 file changed, 1 insertion(+), 12 deletions(-)

diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index 861c5af..d81110e 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -1054,6 +1054,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 	fp->jited = 1;
 	fp->jited_len = alloclen;
 
+	bpf_jit_binary_lock_ro(bpf_hdr);
 	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
 
 out:
@@ -1064,15 +1065,3 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
 
 	return fp;
 }
-
-/* Overriding bpf_jit_free() as we don't set images read-only. */
-void bpf_jit_free(struct bpf_prog *fp)
-{
-	unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
-	struct bpf_binary_header *bpf_hdr = (void *)addr;
-
-	if (fp->jited)
-		bpf_jit_binary_free(bpf_hdr);
-
-	bpf_prog_unlock_free(fp);
-}
-- 
2.9.4


* Re: [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines
  2017-08-01 11:25 ` [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines Balbir Singh
@ 2017-08-01 19:08   ` christophe leroy
  2017-08-02  3:07     ` Balbir Singh
  2017-08-02 10:09   ` Aneesh Kumar K.V
  1 sibling, 1 reply; 11+ messages in thread
From: christophe leroy @ 2017-08-01 19:08 UTC (permalink / raw)
  To: Balbir Singh, linuxppc-dev, mpe; +Cc: naveen.n.rao



On 01/08/2017 at 13:25, Balbir Singh wrote:
> Add support for the set_memory_xx routines. With the STRICT_KERNEL_RWX
> feature we gained the ability to change the page permissions
> for pte ranges. This patch adds support for both radix and hash
> so that we can change their permissions via set/clear masks.
>
> A new helper is required for hash (the existing
> hash__change_memory_range() is renamed to
> hash__change_boot_memory_range() as it deals with bolted PTEs).
>
> The new hash__change_memory_range() works with vmalloc'ed PAGE_SIZE
> requests for permission changes. It does not invoke updatepp;
> instead it changes the software PTE and invalidates the PTE.
>
> For radix, radix__change_memory_range() is set up to do the right
> thing for vmalloc'd addresses. It takes a new parameter to decide
> what attributes to set.
>
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> ---
>  arch/powerpc/include/asm/book3s/64/hash.h  |  6 +++
>  arch/powerpc/include/asm/book3s/64/radix.h |  6 +++
>  arch/powerpc/include/asm/set_memory.h      | 34 +++++++++++++++
>  arch/powerpc/mm/pgtable-hash64.c           | 51 ++++++++++++++++++++--
>  arch/powerpc/mm/pgtable-radix.c            | 26 ++++++------
>  arch/powerpc/mm/pgtable_64.c               | 68 ++++++++++++++++++++++++++++++
>  6 files changed, 175 insertions(+), 16 deletions(-)
>  create mode 100644 arch/powerpc/include/asm/set_memory.h
>

[...]

> diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
> index 0736e94..3ee4c7d 100644
> --- a/arch/powerpc/mm/pgtable_64.c
> +++ b/arch/powerpc/mm/pgtable_64.c
> @@ -514,3 +514,71 @@ void mark_initmem_nx(void)
>  		hash__mark_initmem_nx();
>  }
>  #endif
> +
> +#ifdef CONFIG_ARCH_HAS_SET_MEMORY
> +/*
> + * Some of these bits are taken from arm64/mm/page_attr.c
> + */
> +static int change_memory_common(unsigned long addr, int numpages,
> +				unsigned long set, unsigned long clear)
> +{
> +	unsigned long start = addr;
> +	unsigned long size = PAGE_SIZE*numpages;
> +	unsigned long end = start + size;
> +	struct vm_struct *area;
> +
> +	if (!PAGE_ALIGNED(addr)) {
> +		start &= PAGE_MASK;
> +		end = start + size;
> +		WARN_ON_ONCE(1);
> +	}

Why not just set start = addr & PAGE_MASK, then just do 
WARN_ON_ONCE(start != addr), instead of that if ()
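
i.e. something like (untested):

	start = addr & PAGE_MASK;
	end = start + size;
	WARN_ON_ONCE(start != addr);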

> +
> +	/*
> +	 * So check whether the [addr, addr + size) interval is entirely
> +	 * covered by precisely one VM area that has the VM_ALLOC flag set.
> +	 */
> +	area = find_vm_area((void *)addr);
> +	if (!area ||
> +	    end > (unsigned long)area->addr + area->size ||
> +	    !(area->flags & VM_ALLOC))
> +		return -EINVAL;
> +
> +	if (!numpages)
> +		return 0;

Shouldn't that be tested earlier?

> +
> +	if (radix_enabled())
> +		return radix__change_memory_range(start, start + size,
> +							set, clear);
> +	else
> +		return hash__change_memory_range(start, start + size,
> +							set, clear);
> +}

The following functions should go in a place common to PPC32 and PPC64, 
otherwise they will have to be duplicated when implementing for PPC32.
Maybe the above function should also go in a common place, only the last 
part should remain in a PPC64 dedicated part. It could be called 
change_memory_range(), something like

int change_memory_range(unsigned long start, unsigned long end,
			unsigned long set, unsigned long clear)
{
	if (radix_enabled())
		return radix__change_memory_range(start, end,
						  set, clear);
	return hash__change_memory_range(start, end, set, clear);
}

Then change_memory_range() could also be implemented for PPC32 later.

> +
> +int set_memory_ro(unsigned long addr, int numpages)
> +{
> +	return change_memory_common(addr, numpages,
> +					0, _PAGE_WRITE);
> +}
> +EXPORT_SYMBOL(set_memory_ro);

Take care that _PAGE_WRITE has value 0 when _PAGE_RO instead of _PAGE_RW 
is defined (eg for the 8xx).

It would be better to use accessors like pte_wrprotect() and pte_mkwrite()
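
For instance, once the walk has reached the pte, something like
(untested, with "ro" telling whether we are write-protecting):

	pte_t pte = *ptep;

	pte = ro ? pte_wrprotect(pte) : pte_mkwrite(pte);
	set_pte_at(&init_mm, addr, ptep, pte);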

> +
> +int set_memory_rw(unsigned long addr, int numpages)
> +{
> +	return change_memory_common(addr, numpages,
> +					_PAGE_WRITE, 0);
> +}
> +EXPORT_SYMBOL(set_memory_rw);
> +
> +int set_memory_nx(unsigned long addr, int numpages)
> +{
> +	return change_memory_common(addr, numpages,
> +					0, _PAGE_EXEC);
> +}
> +EXPORT_SYMBOL(set_memory_nx);
> +
> +int set_memory_x(unsigned long addr, int numpages)
> +{
> +	return change_memory_common(addr, numpages,
> +					_PAGE_EXEC, 0);
> +}
> +EXPORT_SYMBOL(set_memory_x);
> +#endif
>

Christophe



* Re: [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines
  2017-08-01 19:08   ` christophe leroy
@ 2017-08-02  3:07     ` Balbir Singh
  0 siblings, 0 replies; 11+ messages in thread
From: Balbir Singh @ 2017-08-02  3:07 UTC (permalink / raw)
  To: christophe leroy; +Cc: linuxppc-dev, mpe, naveen.n.rao

On Tue, 1 Aug 2017 21:08:49 +0200
christophe leroy <christophe.leroy@c-s.fr> wrote:

> On 01/08/2017 at 13:25, Balbir Singh wrote:
> > Add support for the set_memory_xx routines. With the STRICT_KERNEL_RWX
> > feature we gained the ability to change the page permissions
> > for pte ranges. This patch adds support for both radix and hash
> > so that we can change their permissions via set/clear masks.
> >
> > A new helper is required for hash (the existing
> > hash__change_memory_range() is renamed to
> > hash__change_boot_memory_range() as it deals with bolted PTEs).
> >
> > The new hash__change_memory_range() works with vmalloc'ed PAGE_SIZE
> > requests for permission changes. It does not invoke updatepp;
> > instead it changes the software PTE and invalidates the PTE.
> >
> > For radix, radix__change_memory_range() is set up to do the right
> > thing for vmalloc'd addresses. It takes a new parameter to decide
> > what attributes to set.
> >
> > Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> > ---
> >  arch/powerpc/include/asm/book3s/64/hash.h  |  6 +++
> >  arch/powerpc/include/asm/book3s/64/radix.h |  6 +++
> >  arch/powerpc/include/asm/set_memory.h      | 34 +++++++++++++++
> >  arch/powerpc/mm/pgtable-hash64.c           | 51 ++++++++++++++++++++--
> >  arch/powerpc/mm/pgtable-radix.c            | 26 ++++++------
> >  arch/powerpc/mm/pgtable_64.c               | 68 ++++++++++++++++++++++++++++++
> >  6 files changed, 175 insertions(+), 16 deletions(-)
> >  create mode 100644 arch/powerpc/include/asm/set_memory.h
> >
>
> [...]
>
> > diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
> > index 0736e94..3ee4c7d 100644
> > --- a/arch/powerpc/mm/pgtable_64.c
> > +++ b/arch/powerpc/mm/pgtable_64.c
> > @@ -514,3 +514,71 @@ void mark_initmem_nx(void)
> >  		hash__mark_initmem_nx();
> >  }
> >  #endif
> > +
> > +#ifdef CONFIG_ARCH_HAS_SET_MEMORY
> > +/*
> > + * Some of these bits are taken from arm64/mm/page_attr.c
> > + */
> > +static int change_memory_common(unsigned long addr, int numpages,
> > +				unsigned long set, unsigned long clear)
> > +{
> > +	unsigned long start = addr;
> > +	unsigned long size = PAGE_SIZE*numpages;
> > +	unsigned long end = start + size;
> > +	struct vm_struct *area;
> > +
> > +	if (!PAGE_ALIGNED(addr)) {
> > +		start &= PAGE_MASK;
> > +		end = start + size;
> > +		WARN_ON_ONCE(1);
> > +	}
>
> Why not just set start = addr & PAGE_MASK, then just do
> WARN_ON_ONCE(start != addr), instead of that if ()

The code has been taken from arch/arm64/mm/page_attr.c. I did
not change any bits, but we could make changes.

>
> > +
> > +	/*
> > +	 * So check whether the [addr, addr + size) interval is entirely
> > +	 * covered by precisely one VM area that has the VM_ALLOC flag set.
> > +	 */
> > +	area = find_vm_area((void *)addr);
> > +	if (!area ||
> > +	    end > (unsigned long)area->addr + area->size ||
> > +	    !(area->flags & VM_ALLOC))
> > +		return -EINVAL;
> > +
> > +	if (!numpages)
> > +		return 0;
>
> Shouldn't that be tested earlier?
>

Same as above

> > +
> > +	if (radix_enabled())
> > +		return radix__change_memory_range(start, start + size,
> > +							set, clear);
> > +	else
> > +		return hash__change_memory_range(start, start + size,
> > +							set, clear);
> > +}
>
> The following functions should go in a place common to PPC32 and PPC64,
> otherwise they will have to be duplicated when implementing for PPC32.
> Maybe the above function should also go in a common place, only the last
> part should remain in a PPC64 dedicated part. It could be called
> change_memory_range(), something like
>
> int change_memory_range(unsigned long start, unsigned long end,
> 			unsigned long set, unsigned long clear)
> {
> 	if (radix_enabled())
> 		return radix__change_memory_range(start, end,
> 						  set, clear);
> 	return hash__change_memory_range(start, end, set, clear);
> }
>
> Then change_memory_range() could also be implemented for PPC32 later.

I was hoping that when we implement support for PPC32, we
could refactor the code then and move it to arch/powerpc/mm/page_attr.c
if required. What do you think?

>
> > +
> > +int set_memory_ro(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_common(addr, numpages,
> > +					0, _PAGE_WRITE);
> > +}
> > +EXPORT_SYMBOL(set_memory_ro);
>
> Take care that _PAGE_WRITE has value 0 when _PAGE_RO instead of _PAGE_RW
> is defined (eg for the 8xx).
>
> It would be better to use accessors like pte_wrprotect() and pte_mkwrite()
>

Sure, we can definitely refactor this for PPC32. pte_wrprotect()
and pte_mkwrite() would require us to make the changes when we've
walked down to the pte and then invoke different functions based
on the flag; I kind of like the addr and permission abstraction.

> > +
> > +int set_memory_rw(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_common(addr, numpages,
> > +					_PAGE_WRITE, 0);
> > +}
> > +EXPORT_SYMBOL(set_memory_rw);
> > +
> > +int set_memory_nx(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_common(addr, numpages,
> > +					0, _PAGE_EXEC);
> > +}
> > +EXPORT_SYMBOL(set_memory_nx);
> > +
> > +int set_memory_x(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_common(addr, numpages,
> > +					_PAGE_EXEC, 0);
> > +}
> > +EXPORT_SYMBOL(set_memory_x);
> > +#endif
> > =20
>=20
Thanks for the review
Balbir


* Re: [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines
  2017-08-01 11:25 ` [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines Balbir Singh
  2017-08-01 19:08   ` christophe leroy
@ 2017-08-02 10:09   ` Aneesh Kumar K.V
  2017-08-02 10:33     ` Balbir Singh
  1 sibling, 1 reply; 11+ messages in thread
From: Aneesh Kumar K.V @ 2017-08-02 10:09 UTC (permalink / raw)
  To: Balbir Singh, linuxppc-dev, mpe; +Cc: naveen.n.rao

Balbir Singh <bsingharora@gmail.com> writes:

> Add support for the set_memory_xx routines. With the STRICT_KERNEL_RWX
> feature we gained the ability to change the page permissions
> for pte ranges. This patch adds support for both radix and hash
> so that we can change their permissions via set/clear masks.
>
> A new helper is required for hash (the existing
> hash__change_memory_range() is renamed to
> hash__change_boot_memory_range() as it deals with bolted PTEs).
>
> The new hash__change_memory_range() works with vmalloc'ed PAGE_SIZE
> requests for permission changes. It does not invoke updatepp;
> instead it changes the software PTE and invalidates the PTE.
>
> For radix, radix__change_memory_range() is set up to do the right
> thing for vmalloc'd addresses. It takes a new parameter to decide
> what attributes to set.
>
....

> +int hash__change_memory_range(unsigned long start, unsigned long end,
> +				unsigned long set, unsigned long clear)
> +{
> +	unsigned long idx;
> +	pgd_t *pgdp;
> +	pud_t *pudp;
> +	pmd_t *pmdp;
> +	pte_t *ptep;
> +
> +	start = ALIGN_DOWN(start, PAGE_SIZE);
> +	end = PAGE_ALIGN(end); // aligns up
> +
> +	/*
> +	 * Update the software PTE and flush the entry.
> +	 * This should cause a new fault with the right
> +	 * things setup in the hash page table
> +	 */
> +	pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
> +		 start, end, set, clear);
> +
> +	for (idx = start; idx < end; idx += PAGE_SIZE) {


> +		pgdp = pgd_offset_k(idx);
> +		pudp = pud_alloc(&init_mm, pgdp, idx);
> +		if (!pudp)
> +			return -1;
> +		pmdp = pmd_alloc(&init_mm, pudp, idx);
> +		if (!pmdp)
> +			return -1;
> +		ptep = pte_alloc_kernel(pmdp, idx);
> +		if (!ptep)
> +			return -1;
> +		hash__pte_update(&init_mm, idx, ptep, clear, set, 0);
> +		hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
> +	}

You can use find_linux_pte_or_hugepte, or with my recent patch series,
find_init_mm_pte()?

-aneesh


* Re: [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines
  2017-08-02 10:09   ` Aneesh Kumar K.V
@ 2017-08-02 10:33     ` Balbir Singh
  2017-08-04  3:38       ` Aneesh Kumar K.V
  0 siblings, 1 reply; 11+ messages in thread
From: Balbir Singh @ 2017-08-02 10:33 UTC (permalink / raw)
  To: Aneesh Kumar K.V
  Cc: open list:LINUX FOR POWERPC (32-BIT AND 64-BIT),
	Michael Ellerman, Naveen N. Rao

On Wed, Aug 2, 2017 at 8:09 PM, Aneesh Kumar K.V
<aneesh.kumar@linux.vnet.ibm.com> wrote:
> Balbir Singh <bsingharora@gmail.com> writes:
>
>> Add support for the set_memory_xx routines. With the STRICT_KERNEL_RWX
>> feature we gained the ability to change the page permissions
>> for pte ranges. This patch adds support for both radix and hash
>> so that we can change their permissions via set/clear masks.
>>
>> A new helper is required for hash (the existing
>> hash__change_memory_range() is renamed to
>> hash__change_boot_memory_range() as it deals with bolted PTEs).
>>
>> The new hash__change_memory_range() works with vmalloc'ed PAGE_SIZE
>> requests for permission changes. It does not invoke updatepp;
>> instead it changes the software PTE and invalidates the PTE.
>>
>> For radix, radix__change_memory_range() is set up to do the right
>> thing for vmalloc'd addresses. It takes a new parameter to decide
>> what attributes to set.
>>
> ....
>
>> +int hash__change_memory_range(unsigned long start, unsigned long end,
>> +                             unsigned long set, unsigned long clear)
>> +{
>> +     unsigned long idx;
>> +     pgd_t *pgdp;
>> +     pud_t *pudp;
>> +     pmd_t *pmdp;
>> +     pte_t *ptep;
>> +
>> +     start = ALIGN_DOWN(start, PAGE_SIZE);
>> +     end = PAGE_ALIGN(end); // aligns up
>> +
>> +     /*
>> +      * Update the software PTE and flush the entry.
>> +      * This should cause a new fault with the right
>> +      * things setup in the hash page table
>> +      */
>> +     pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
>> +              start, end, set, clear);
>> +
>> +     for (idx = start; idx < end; idx += PAGE_SIZE) {
>
>
>> +             pgdp = pgd_offset_k(idx);
>> +             pudp = pud_alloc(&init_mm, pgdp, idx);
>> +             if (!pudp)
>> +                     return -1;
>> +             pmdp = pmd_alloc(&init_mm, pudp, idx);
>> +             if (!pmdp)
>> +                     return -1;
>> +             ptep = pte_alloc_kernel(pmdp, idx);
>> +             if (!ptep)
>> +                     return -1;
>> +             hash__pte_update(&init_mm, idx, ptep, clear, set, 0);

I think this does the needful; if H_PAGE_HASHPTE is set, the flush
will happen.

>> +             hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
>> +     }
>
> You can use find_linux_pte_or_hugepte, or with my recent patch series,
> find_init_mm_pte()?
>

for pte_mkwrite and pte_wrprotect?

Balbir Singh.


* Re: [PATCH v1 3/3] arch/powerpc/net/bpf: Basic EBPF support
  2017-08-01 11:25 ` [PATCH v1 3/3] arch/powerpc/net/bpf: Basic EBPF support Balbir Singh
@ 2017-08-02 14:22   ` Naveen N. Rao
  0 siblings, 0 replies; 11+ messages in thread
From: Naveen N. Rao @ 2017-08-02 14:22 UTC (permalink / raw)
  To: Balbir Singh; +Cc: linuxppc-dev, mpe

> arch/powerpc/net/bpf: Basic EBPF support

Perhaps:
powerpc/bpf: Set JIT memory read-only

On 2017/08/01 09:25PM, Balbir Singh wrote:
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> ---
>  arch/powerpc/net/bpf_jit_comp64.c | 13 +------------
>  1 file changed, 1 insertion(+), 12 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 861c5af..d81110e 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -1054,6 +1054,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
>  	fp->jited = 1;
>  	fp->jited_len = alloclen;
> 
> +	bpf_jit_binary_lock_ro(bpf_hdr);
>  	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));

Should we call bpf_flush_icache() _before_ locking the memory?
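
i.e. (sketch):

	bpf_flush_icache(bpf_hdr, (u8 *)bpf_hdr + (bpf_hdr->pages * PAGE_SIZE));
	bpf_jit_binary_lock_ro(bpf_hdr);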

Thanks,
Naveen

> 
>  out:
> @@ -1064,15 +1065,3 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
> 
>  	return fp;
>  }
> -
> -/* Overriding bpf_jit_free() as we don't set images read-only. */
> -void bpf_jit_free(struct bpf_prog *fp)
> -{
> -	unsigned long addr = (unsigned long)fp->bpf_func & PAGE_MASK;
> -	struct bpf_binary_header *bpf_hdr = (void *)addr;
> -
> -	if (fp->jited)
> -		bpf_jit_binary_free(bpf_hdr);
> -
> -	bpf_prog_unlock_free(fp);
> -}
> -- 
> 2.9.4
> 


* Re: [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines
  2017-08-02 10:33     ` Balbir Singh
@ 2017-08-04  3:38       ` Aneesh Kumar K.V
  2017-08-04  3:53         ` Balbir Singh
  0 siblings, 1 reply; 11+ messages in thread
From: Aneesh Kumar K.V @ 2017-08-04  3:38 UTC (permalink / raw)
  To: Balbir Singh
  Cc: open list:LINUX FOR POWERPC (32-BIT AND 64-BIT),
	Michael Ellerman, Naveen N. Rao

Balbir Singh <bsingharora@gmail.com> writes:

> On Wed, Aug 2, 2017 at 8:09 PM, Aneesh Kumar K.V
> <aneesh.kumar@linux.vnet.ibm.com> wrote:
>> Balbir Singh <bsingharora@gmail.com> writes:
>>
>>> Add support for the set_memory_xx routines. With the STRICT_KERNEL_RWX
>>> feature we gained the ability to change the page permissions
>>> for pte ranges. This patch adds support for both radix and hash
>>> so that we can change their permissions via set/clear masks.
>>>
>>> A new helper is required for hash (the existing
>>> hash__change_memory_range() is renamed to
>>> hash__change_boot_memory_range() as it deals with bolted PTEs).
>>>
>>> The new hash__change_memory_range() works with vmalloc'ed PAGE_SIZE
>>> requests for permission changes. It does not invoke updatepp;
>>> instead it changes the software PTE and invalidates the PTE.
>>>
>>> For radix, radix__change_memory_range() is set up to do the right
>>> thing for vmalloc'd addresses. It takes a new parameter to decide
>>> what attributes to set.
>>>
>> ....
>>
>>> +int hash__change_memory_range(unsigned long start, unsigned long end,
>>> +                             unsigned long set, unsigned long clear)
>>> +{
>>> +     unsigned long idx;
>>> +     pgd_t *pgdp;
>>> +     pud_t *pudp;
>>> +     pmd_t *pmdp;
>>> +     pte_t *ptep;
>>> +
>>> +     start = ALIGN_DOWN(start, PAGE_SIZE);
>>> +     end = PAGE_ALIGN(end); // aligns up
>>> +
>>> +     /*
>>> +      * Update the software PTE and flush the entry.
>>> +      * This should cause a new fault with the right
>>> +      * things setup in the hash page table
>>> +      */
>>> +     pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
>>> +              start, end, set, clear);
>>> +
>>> +     for (idx = start; idx < end; idx += PAGE_SIZE) {
>>
>>
>>> +             pgdp = pgd_offset_k(idx);
>>> +             pudp = pud_alloc(&init_mm, pgdp, idx);
>>> +             if (!pudp)
>>> +                     return -1;
>>> +             pmdp = pmd_alloc(&init_mm, pudp, idx);
>>> +             if (!pmdp)
>>> +                     return -1;
>>> +             ptep = pte_alloc_kernel(pmdp, idx);
>>> +             if (!ptep)
>>> +                     return -1;
>>> +             hash__pte_update(&init_mm, idx, ptep, clear, set, 0);
>
> I think this does the needful; if H_PAGE_HASHPTE is set, the flush
> will happen.
>
>>> +             hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
>>> +     }
>>
>> You can use find_linux_pte_or_hugepte, or with my recent patch series,
>> find_init_mm_pte()?
>>
>
> for pte_mkwrite and pte_wrprotect?

For walking the page table. I am not sure you really want to allocate
page tables in that function. If you do, then what will be the initial
value of the PTE? We are requesting to set and clear bits in an
existing PTE entry, right? If you find a none page table entry you
should handle it via a fault?
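
Something like (sketch, assuming the find_init_mm_pte() style helper
from that series):

	unsigned int hshift;
	pte_t *ptep = find_init_mm_pte(idx, &hshift);

	if (!ptep || pte_none(*ptep))
		return -EINVAL;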


-aneesh


* Re: [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines
  2017-08-04  3:38       ` Aneesh Kumar K.V
@ 2017-08-04  3:53         ` Balbir Singh
  0 siblings, 0 replies; 11+ messages in thread
From: Balbir Singh @ 2017-08-04  3:53 UTC (permalink / raw)
  To: Aneesh Kumar K.V
  Cc: open list:LINUX FOR POWERPC (32-BIT AND 64-BIT),
	Michael Ellerman, Naveen N. Rao

On Fri, Aug 4, 2017 at 1:38 PM, Aneesh Kumar K.V
<aneesh.kumar@linux.vnet.ibm.com> wrote:
> Balbir Singh <bsingharora@gmail.com> writes:
>
>> On Wed, Aug 2, 2017 at 8:09 PM, Aneesh Kumar K.V
>> <aneesh.kumar@linux.vnet.ibm.com> wrote:
>>> Balbir Singh <bsingharora@gmail.com> writes:
>>>
>>>> Add support for the set_memory_xx routines. With the STRICT_KERNEL_RWX
>>>> feature we gained the ability to change the page permissions
>>>> for pte ranges. This patch adds support for both radix and hash
>>>> so that we can change their permissions via set/clear masks.
>>>>
>>>> A new helper is required for hash (the existing
>>>> hash__change_memory_range() is renamed to
>>>> hash__change_boot_memory_range() as it deals with bolted PTEs).
>>>>
>>>> The new hash__change_memory_range() works with vmalloc'ed PAGE_SIZE
>>>> requests for permission changes. It does not invoke updatepp;
>>>> instead it changes the software PTE and invalidates the PTE.
>>>>
>>>> For radix, radix__change_memory_range() is set up to do the right
>>>> thing for vmalloc'd addresses. It takes a new parameter to decide
>>>> what attributes to set.
>>>>
>>> ....
>>>
>>>> +int hash__change_memory_range(unsigned long start, unsigned long end,
>>>> +                             unsigned long set, unsigned long clear)
>>>> +{
>>>> +     unsigned long idx;
>>>> +     pgd_t *pgdp;
>>>> +     pud_t *pudp;
>>>> +     pmd_t *pmdp;
>>>> +     pte_t *ptep;
>>>> +
>>>> +     start = ALIGN_DOWN(start, PAGE_SIZE);
>>>> +     end = PAGE_ALIGN(end); // aligns up
>>>> +
>>>> +     /*
>>>> +      * Update the software PTE and flush the entry.
>>>> +      * This should cause a new fault with the right
>>>> +      * things setup in the hash page table
>>>> +      */
>>>> +     pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
>>>> +              start, end, set, clear);
>>>> +
>>>> +     for (idx = start; idx < end; idx += PAGE_SIZE) {
>>>
>>>
>>>> +             pgdp = pgd_offset_k(idx);
>>>> +             pudp = pud_alloc(&init_mm, pgdp, idx);
>>>> +             if (!pudp)
>>>> +                     return -1;
>>>> +             pmdp = pmd_alloc(&init_mm, pudp, idx);
>>>> +             if (!pmdp)
>>>> +                     return -1;
>>>> +             ptep = pte_alloc_kernel(pmdp, idx);
>>>> +             if (!ptep)
>>>> +                     return -1;
>>>> +             hash__pte_update(&init_mm, idx, ptep, clear, set, 0);
>>
>> I think this does the needful; if H_PAGE_HASHPTE is set, the flush
>> will happen.
>>
>>>> +             hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
>>>> +     }
>>>
>>> You can use find_linux_pte_or_hugepte, or with my recent patch series,
>>> find_init_mm_pte()?
>>>
>>
>> for pte_mkwrite and pte_wrprotect?
>
> For walking the page table. I am not sure you really want to allocate
> page tables in that function. If you do, then what will be the initial
> value of the PTE? We are requesting to set and clear bits in an
> existing PTE entry, right? If you find a none page table entry you
> should handle it via a fault?
>

Fair enough, I've been lazy with that check, I'll fix it in v2

Balbir Singh.


end of thread

Thread overview: 11+ messages
2017-08-01 11:25 [PATCH v1 0/3] Implement set_memory_xx for ppc64 book3s Balbir Singh
2017-08-01 11:25 ` [PATCH v1 1/3] arch/powerpc/set_memory: Implement set_memory_xx routines Balbir Singh
2017-08-01 19:08   ` christophe leroy
2017-08-02  3:07     ` Balbir Singh
2017-08-02 10:09   ` Aneesh Kumar K.V
2017-08-02 10:33     ` Balbir Singh
2017-08-04  3:38       ` Aneesh Kumar K.V
2017-08-04  3:53         ` Balbir Singh
2017-08-01 11:25 ` [PATCH v1 2/3] Enable ARCH_HAS_SET_MEMORY Balbir Singh
2017-08-01 11:25 ` [PATCH v1 3/3] arch/powerpc/net/bpf: Basic EBPF support Balbir Singh
2017-08-02 14:22   ` Naveen N. Rao
