* [PATCH v2 1/2] powerpc/mm: Move vm_get_page_prot() out of book3s64 code
@ 2022-08-08 13:01 Russell Currey
2022-08-08 13:01 ` [PATCH v2 2/2] powerpc/mm: Support execute-only memory on the Radix MMU Russell Currey
2022-08-08 14:32 ` [PATCH v2 1/2] powerpc/mm: Move vm_get_page_prot() out of book3s64 code Christophe Leroy
0 siblings, 2 replies; 7+ messages in thread
From: Russell Currey @ 2022-08-08 13:01 UTC (permalink / raw)
To: linuxppc-dev
Cc: ajd, anshuman.khandual, npiggin, aneesh.kumar, Russell Currey
protection_map is about to be __ro_after_init instead of const, so move
the only non-local function that consumes it to the same file so it can
at least be static.
Signed-off-by: Russell Currey <ruscur@russell.cc>
---
v2: new
arch/powerpc/mm/book3s64/pgtable.c | 16 ----------------
arch/powerpc/mm/pgtable.c | 21 +++++++++++++++++++--
2 files changed, 19 insertions(+), 18 deletions(-)
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 7b9966402b25..e2a4ea5eb960 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -550,19 +550,3 @@ unsigned long memremap_compat_align(void)
}
EXPORT_SYMBOL_GPL(memremap_compat_align);
#endif
-
-pgprot_t vm_get_page_prot(unsigned long vm_flags)
-{
- unsigned long prot = pgprot_val(protection_map[vm_flags &
- (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
-
- if (vm_flags & VM_SAO)
- prot |= _PAGE_SAO;
-
-#ifdef CONFIG_PPC_MEM_KEYS
- prot |= vmflag_to_pte_pkey_bits(vm_flags);
-#endif
-
- return __pgprot(prot);
-}
-EXPORT_SYMBOL(vm_get_page_prot);
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index cb2dcdb18f8e..0b2bbde5fb65 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -27,6 +27,7 @@
#include <asm/tlb.h>
#include <asm/hugetlb.h>
#include <asm/pte-walk.h>
+#include <asm/pkeys.h>
#ifdef CONFIG_PPC64
#define PGD_ALIGN (sizeof(pgd_t) * MAX_PTRS_PER_PGD)
@@ -493,6 +494,22 @@ const pgprot_t protection_map[16] = {
[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
};
-#ifndef CONFIG_PPC_BOOK3S_64
-DECLARE_VM_GET_PAGE_PROT
+#ifdef CONFIG_PPC_BOOK3S_64
+pgprot_t vm_get_page_prot(unsigned long vm_flags)
+{
+ unsigned long prot = pgprot_val(protection_map[vm_flags &
+ (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
+
+ if (vm_flags & VM_SAO)
+ prot |= _PAGE_SAO;
+
+#ifdef CONFIG_PPC_MEM_KEYS
+ prot |= vmflag_to_pte_pkey_bits(vm_flags);
#endif
+
+ return __pgprot(prot);
+}
+EXPORT_SYMBOL(vm_get_page_prot);
+#else
+DECLARE_VM_GET_PAGE_PROT
+#endif /* CONFIG_PPC_BOOK3S_64 */
--
2.37.1
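The function being moved is just a table lookup plus feature-bit OR. As a rough userspace model of that composition (all constants below are illustrative stand-ins, not the kernel's real VM_*/_PAGE_* values, and the partial table elides most entries):

```c
#include <assert.h>

/* Illustrative stand-ins for the kernel's vm_flags bits */
#define VM_READ   0x1UL
#define VM_WRITE  0x2UL
#define VM_EXEC   0x4UL
#define VM_SHARED 0x8UL
#define VM_SAO    0x10UL	/* strong access ordering, book3s64 only */

#define PAGE_SAO_BIT 0x100UL	/* stand-in for _PAGE_SAO */

/* protection_map analogue: 16 entries indexed by the low 4 vm_flags bits */
static const unsigned long prot_map[16] = {
	[0]                  = 0x0, /* PAGE_NONE analogue */
	[VM_READ]            = 0x1, /* PAGE_READONLY analogue */
	[VM_READ | VM_WRITE] = 0x3, /* PAGE_COPY analogue */
	[VM_READ | VM_EXEC]  = 0x5, /* PAGE_READONLY_X analogue */
	/* remaining entries elided; zero here, real pgprot values in the kernel */
};

unsigned long model_vm_get_page_prot(unsigned long vm_flags)
{
	/* Base protection comes from the table, indexed by RWX/shared bits */
	unsigned long prot = prot_map[vm_flags &
			(VM_READ | VM_WRITE | VM_EXEC | VM_SHARED)];

	/* Feature bits are then OR'd on top, as the kernel does for SAO/pkeys */
	if (vm_flags & VM_SAO)
		prot |= PAGE_SAO_BIT;

	return prot;
}
```

The real function additionally ORs in pkey bits under CONFIG_PPC_MEM_KEYS, which this sketch omits.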
* [PATCH v2 2/2] powerpc/mm: Support execute-only memory on the Radix MMU
2022-08-08 13:01 [PATCH v2 1/2] powerpc/mm: Move vm_get_page_prot() out of book3s64 code Russell Currey
@ 2022-08-08 13:01 ` Russell Currey
2022-08-08 13:24 ` Aneesh Kumar K V
2022-08-08 14:32 ` [PATCH v2 1/2] powerpc/mm: Move vm_get_page_prot() out of book3s64 code Christophe Leroy
1 sibling, 1 reply; 7+ messages in thread
From: Russell Currey @ 2022-08-08 13:01 UTC (permalink / raw)
To: linuxppc-dev
Cc: ajd, anshuman.khandual, npiggin, aneesh.kumar, Russell Currey
The Hash MMU already supports XOM (i.e. mmap with PROT_EXEC only)
through the execute-only pkey. A PROT_EXEC-only mapping will actually
map to RX, and then the pkey will be applied on top of it.
Radix doesn't have pkeys, but it does have execute permissions built-in
to the MMU, so all we have to do to support XOM is expose it.
That's not possible with protection_map being const, so make it RO after
init instead.
Signed-off-by: Russell Currey <ruscur@russell.cc>
---
v2: Make protection_map __ro_after_init and set it in an initcall
(v1 didn't work, I tested before rebasing on Anshuman's patches)
basic test: https://raw.githubusercontent.com/ruscur/junkcode/main/mmap_test.c
arch/powerpc/include/asm/book3s/64/radix.h | 3 +++
arch/powerpc/include/asm/pgtable.h | 1 -
arch/powerpc/mm/fault.c | 10 ++++++++++
arch/powerpc/mm/pgtable.c | 16 +++++++++++++++-
4 files changed, 28 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index 686001eda936..bf316b773d73 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -19,6 +19,9 @@
#include <asm/cpu_has_feature.h>
#endif
+/* Execute-only page protections, Hash can use RX + execute-only pkey */
+#define PAGE_EXECONLY __pgprot(_PAGE_BASE | _PAGE_EXEC)
+
/* An empty PTE can still have a R or C writeback */
#define RADIX_PTE_NONE_MASK (_PAGE_DIRTY | _PAGE_ACCESSED)
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 33f4bf8d22b0..3cbb6de20f9d 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -60,7 +60,6 @@ extern void paging_init(void);
void poking_init(void);
extern unsigned long ioremap_bot;
-extern const pgprot_t protection_map[16];
/*
* kern_addr_valid is intended to indicate whether an address is a valid
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 014005428687..887c0cc45ca6 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -270,6 +270,16 @@ static bool access_error(bool is_write, bool is_exec, struct vm_area_struct *vma
return false;
}
+ if (unlikely(!(vma->vm_flags & VM_READ))) {
+ /*
+ * If we're on Radix, then this could be a read attempt on
+ * execute-only memory. On other MMUs, an "exec-only" page
+ * will be given RX flags, so this might be redundant.
+ */
+ if (radix_enabled())
+ return true;
+ }
+
if (unlikely(!vma_is_accessible(vma)))
return true;
/*
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 0b2bbde5fb65..6e1a6a999c3c 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -475,7 +475,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
EXPORT_SYMBOL_GPL(__find_linux_pte);
/* Note due to the way vm flags are laid out, the bits are XWR */
-const pgprot_t protection_map[16] = {
+static pgprot_t protection_map[16] __ro_after_init = {
[VM_NONE] = PAGE_NONE,
[VM_READ] = PAGE_READONLY,
[VM_WRITE] = PAGE_COPY,
@@ -494,6 +494,20 @@ const pgprot_t protection_map[16] = {
[VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
};
+#ifdef CONFIG_PPC_RADIX_MMU
+static int __init radix_update_protection_map(void)
+{
+ if (early_radix_enabled()) {
+ /* Radix directly supports execute-only page protections */
+ protection_map[VM_EXEC] = PAGE_EXECONLY;
+ protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;
+ }
+
+ return 0;
+}
+arch_initcall(radix_update_protection_map);
+#endif /* CONFIG_PPC_RADIX_MMU */
+
#ifdef CONFIG_PPC_BOOK3S_64
pgprot_t vm_get_page_prot(unsigned long vm_flags)
{
--
2.37.1
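Beyond the linked mmap_test.c, a minimal sketch of the mapping this patch enables looks like the following (the helper name is mine, not part of the series):

```c
#define _GNU_SOURCE
#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/*
 * Create a one-page anonymous mapping with PROT_EXEC only.
 * On Radix with this series the page is genuinely execute-only
 * (loads fault); on Hash it becomes RX guarded by the exec-only
 * pkey; on MMUs with neither, reads may simply succeed.
 */
void *map_exec_only(size_t *len_out)
{
	size_t len = (size_t)sysconf(_SC_PAGESIZE);
	void *p = mmap(NULL, len, PROT_EXEC,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p != MAP_FAILED && len_out)
		*len_out = len;
	return p;
}
```

After a successful call, a load such as `*(volatile char *)p` on Radix takes the new path added to access_error() in fault.c and the process receives SIGSEGV, while instruction fetch from the page is still permitted.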
* Re: [PATCH v2 2/2] powerpc/mm: Support execute-only memory on the Radix MMU
2022-08-08 13:01 ` [PATCH v2 2/2] powerpc/mm: Support execute-only memory on the Radix MMU Russell Currey
@ 2022-08-08 13:24 ` Aneesh Kumar K V
2022-08-09 0:56 ` Russell Currey
0 siblings, 1 reply; 7+ messages in thread
From: Aneesh Kumar K V @ 2022-08-08 13:24 UTC (permalink / raw)
To: Russell Currey, linuxppc-dev; +Cc: ajd, npiggin, anshuman.khandual
On 8/8/22 6:31 PM, Russell Currey wrote:
> The Hash MMU already supports XOM (i.e. mmap with PROT_EXEC only)
> through the execute-only pkey. A PROT_EXEC-only mapping will actually
> map to RX, and then the pkey will be applied on top of it.
>
> Radix doesn't have pkeys, but it does have execute permissions built-in
> to the MMU, so all we have to do to support XOM is expose it.
>
> That's not possible with protection_map being const, so make it RO after
> init instead.
>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
> v2: Make protection_map __ro_after_init and set it in an initcall
> (v1 didn't work, I tested before rebasing on Anshuman's patches)
>
> basic test: https://raw.githubusercontent.com/ruscur/junkcode/main/mmap_test.c
>
> arch/powerpc/include/asm/book3s/64/radix.h | 3 +++
> arch/powerpc/include/asm/pgtable.h | 1 -
> arch/powerpc/mm/fault.c | 10 ++++++++++
> arch/powerpc/mm/pgtable.c | 16 +++++++++++++++-
> 4 files changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
> index 686001eda936..bf316b773d73 100644
> --- a/arch/powerpc/include/asm/book3s/64/radix.h
> +++ b/arch/powerpc/include/asm/book3s/64/radix.h
> @@ -19,6 +19,9 @@
> #include <asm/cpu_has_feature.h>
> #endif
>
> +/* Execute-only page protections, Hash can use RX + execute-only pkey */
> +#define PAGE_EXECONLY __pgprot(_PAGE_BASE | _PAGE_EXEC)
> +
> /* An empty PTE can still have a R or C writeback */
> #define RADIX_PTE_NONE_MASK (_PAGE_DIRTY | _PAGE_ACCESSED)
>
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 33f4bf8d22b0..3cbb6de20f9d 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -60,7 +60,6 @@ extern void paging_init(void);
> void poking_init(void);
>
> extern unsigned long ioremap_bot;
> -extern const pgprot_t protection_map[16];
>
> /*
> * kern_addr_valid is intended to indicate whether an address is a valid
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 014005428687..887c0cc45ca6 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -270,6 +270,16 @@ static bool access_error(bool is_write, bool is_exec, struct vm_area_struct *vma
> return false;
> }
>
> + if (unlikely(!(vma->vm_flags & VM_READ))) {
> + /*
> + * If we're on Radix, then this could be a read attempt on
> + * execute-only memory. On other MMUs, an "exec-only" page
> + * will be given RX flags, so this might be redundant.
> + */
> + if (radix_enabled())
> + return true;
> + }
> +
> if (unlikely(!vma_is_accessible(vma)))
> return true;
> /*
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index 0b2bbde5fb65..6e1a6a999c3c 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -475,7 +475,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
> EXPORT_SYMBOL_GPL(__find_linux_pte);
>
> /* Note due to the way vm flags are laid out, the bits are XWR */
> -const pgprot_t protection_map[16] = {
> +static pgprot_t protection_map[16] __ro_after_init = {
> [VM_NONE] = PAGE_NONE,
> [VM_READ] = PAGE_READONLY,
> [VM_WRITE] = PAGE_COPY,
> @@ -494,6 +494,20 @@ const pgprot_t protection_map[16] = {
> [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
> };
>
> +#ifdef CONFIG_PPC_RADIX_MMU
> +static int __init radix_update_protection_map(void)
> +{
> + if (early_radix_enabled()) {
> + /* Radix directly supports execute-only page protections */
> + protection_map[VM_EXEC] = PAGE_EXECONLY;
> + protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;
> + }
> +
> + return 0;
> +}
> +arch_initcall(radix_update_protection_map);
Instead of this can we do this in vm_get_page_prot() ?
/* EXEC only shared or non shared mapping ? */
if (radix_enabled() && ((vm_flags & (VM_READ | VM_WRITE | VM_EXEC)) == VM_EXEC))
prot = PAGE_EXECONLY;
> +#endif /* CONFIG_PPC_RADIX_MMU */
> +
> #ifdef CONFIG_PPC_BOOK3S_64
> pgprot_t vm_get_page_prot(unsigned long vm_flags)
> {
* Re: [PATCH v2 1/2] powerpc/mm: Move vm_get_page_prot() out of book3s64 code
2022-08-08 13:01 [PATCH v2 1/2] powerpc/mm: Move vm_get_page_prot() out of book3s64 code Russell Currey
2022-08-08 13:01 ` [PATCH v2 2/2] powerpc/mm: Support execute-only memory on the Radix MMU Russell Currey
@ 2022-08-08 14:32 ` Christophe Leroy
2022-08-09 0:55 ` Russell Currey
1 sibling, 1 reply; 7+ messages in thread
From: Christophe Leroy @ 2022-08-08 14:32 UTC (permalink / raw)
To: Russell Currey, linuxppc-dev
Cc: aneesh.kumar, ajd, npiggin, anshuman.khandual
On 08/08/2022 at 15:01, Russell Currey wrote:
> protection_map is about to be __ro_after_init instead of const, so move
> the only non-local function that consumes it to the same file so it can
> at least be static.
What's the advantage of doing that? Why does it need to be static?
Christophe
>
> Signed-off-by: Russell Currey <ruscur@russell.cc>
> ---
> v2: new
>
> arch/powerpc/mm/book3s64/pgtable.c | 16 ----------------
> arch/powerpc/mm/pgtable.c | 21 +++++++++++++++++++--
> 2 files changed, 19 insertions(+), 18 deletions(-)
>
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 7b9966402b25..e2a4ea5eb960 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -550,19 +550,3 @@ unsigned long memremap_compat_align(void)
> }
> EXPORT_SYMBOL_GPL(memremap_compat_align);
> #endif
> -
> -pgprot_t vm_get_page_prot(unsigned long vm_flags)
> -{
> - unsigned long prot = pgprot_val(protection_map[vm_flags &
> - (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
> -
> - if (vm_flags & VM_SAO)
> - prot |= _PAGE_SAO;
> -
> -#ifdef CONFIG_PPC_MEM_KEYS
> - prot |= vmflag_to_pte_pkey_bits(vm_flags);
> -#endif
> -
> - return __pgprot(prot);
> -}
> -EXPORT_SYMBOL(vm_get_page_prot);
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index cb2dcdb18f8e..0b2bbde5fb65 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -27,6 +27,7 @@
> #include <asm/tlb.h>
> #include <asm/hugetlb.h>
> #include <asm/pte-walk.h>
> +#include <asm/pkeys.h>
>
> #ifdef CONFIG_PPC64
> #define PGD_ALIGN (sizeof(pgd_t) * MAX_PTRS_PER_PGD)
> @@ -493,6 +494,22 @@ const pgprot_t protection_map[16] = {
> [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
> };
>
> -#ifndef CONFIG_PPC_BOOK3S_64
> -DECLARE_VM_GET_PAGE_PROT
> +#ifdef CONFIG_PPC_BOOK3S_64
> +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> +{
> + unsigned long prot = pgprot_val(protection_map[vm_flags &
> + (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
> +
> + if (vm_flags & VM_SAO)
> + prot |= _PAGE_SAO;
> +
> +#ifdef CONFIG_PPC_MEM_KEYS
> + prot |= vmflag_to_pte_pkey_bits(vm_flags);
> #endif
> +
> + return __pgprot(prot);
> +}
> +EXPORT_SYMBOL(vm_get_page_prot);
> +#else
> +DECLARE_VM_GET_PAGE_PROT
> +#endif /* CONFIG_PPC_BOOK3S_64 */
* Re: [PATCH v2 1/2] powerpc/mm: Move vm_get_page_prot() out of book3s64 code
2022-08-08 14:32 ` [PATCH v2 1/2] powerpc/mm: Move vm_get_page_prot() out of book3s64 code Christophe Leroy
@ 2022-08-09 0:55 ` Russell Currey
2022-08-09 5:35 ` Christophe Leroy
0 siblings, 1 reply; 7+ messages in thread
From: Russell Currey @ 2022-08-09 0:55 UTC (permalink / raw)
To: Christophe Leroy, linuxppc-dev
Cc: aneesh.kumar, ajd, npiggin, anshuman.khandual
On Mon, 2022-08-08 at 14:32 +0000, Christophe Leroy wrote:
>
>
> On 08/08/2022 at 15:01, Russell Currey wrote:
> > protection_map is about to be __ro_after_init instead of const, so move
> > the only non-local function that consumes it to the same file so it can
> > at least be static.
>
> What's the advantage of doing that? Why does it need to be static?
>
> Christophe
It doesn't need to be, I didn't like having it exposed unnecessarily.
Aneesh's suggestion lets it stay const so I can drop this patch anyway.
- Russell
>
> >
> > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > ---
> > v2: new
> >
> > arch/powerpc/mm/book3s64/pgtable.c | 16 ----------------
> > arch/powerpc/mm/pgtable.c | 21 +++++++++++++++++++--
> > 2 files changed, 19 insertions(+), 18 deletions(-)
> >
> > diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> > index 7b9966402b25..e2a4ea5eb960 100644
> > --- a/arch/powerpc/mm/book3s64/pgtable.c
> > +++ b/arch/powerpc/mm/book3s64/pgtable.c
> > @@ -550,19 +550,3 @@ unsigned long memremap_compat_align(void)
> > }
> > EXPORT_SYMBOL_GPL(memremap_compat_align);
> > #endif
> > -
> > -pgprot_t vm_get_page_prot(unsigned long vm_flags)
> > -{
> > - unsigned long prot = pgprot_val(protection_map[vm_flags &
> > - (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
> > -
> > - if (vm_flags & VM_SAO)
> > - prot |= _PAGE_SAO;
> > -
> > -#ifdef CONFIG_PPC_MEM_KEYS
> > - prot |= vmflag_to_pte_pkey_bits(vm_flags);
> > -#endif
> > -
> > - return __pgprot(prot);
> > -}
> > -EXPORT_SYMBOL(vm_get_page_prot);
> > diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> > index cb2dcdb18f8e..0b2bbde5fb65 100644
> > --- a/arch/powerpc/mm/pgtable.c
> > +++ b/arch/powerpc/mm/pgtable.c
> > @@ -27,6 +27,7 @@
> > #include <asm/tlb.h>
> > #include <asm/hugetlb.h>
> > #include <asm/pte-walk.h>
> > +#include <asm/pkeys.h>
> >
> > #ifdef CONFIG_PPC64
> > #define PGD_ALIGN (sizeof(pgd_t) * MAX_PTRS_PER_PGD)
> > @@ -493,6 +494,22 @@ const pgprot_t protection_map[16] = {
> > [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
> > };
> >
> > -#ifndef CONFIG_PPC_BOOK3S_64
> > -DECLARE_VM_GET_PAGE_PROT
> > +#ifdef CONFIG_PPC_BOOK3S_64
> > +pgprot_t vm_get_page_prot(unsigned long vm_flags)
> > +{
> > + unsigned long prot = pgprot_val(protection_map[vm_flags &
> > + (VM_READ|VM_WRITE|VM_EXEC|VM_SHARED)]);
> > +
> > + if (vm_flags & VM_SAO)
> > + prot |= _PAGE_SAO;
> > +
> > +#ifdef CONFIG_PPC_MEM_KEYS
> > + prot |= vmflag_to_pte_pkey_bits(vm_flags);
> > #endif
> > +
> > + return __pgprot(prot);
> > +}
> > +EXPORT_SYMBOL(vm_get_page_prot);
> > +#else
> > +DECLARE_VM_GET_PAGE_PROT
> > +#endif /* CONFIG_PPC_BOOK3S_64 */
* Re: [PATCH v2 2/2] powerpc/mm: Support execute-only memory on the Radix MMU
2022-08-08 13:24 ` Aneesh Kumar K V
@ 2022-08-09 0:56 ` Russell Currey
0 siblings, 0 replies; 7+ messages in thread
From: Russell Currey @ 2022-08-09 0:56 UTC (permalink / raw)
To: Aneesh Kumar K V, linuxppc-dev; +Cc: ajd, npiggin, anshuman.khandual
On Mon, 2022-08-08 at 18:54 +0530, Aneesh Kumar K V wrote:
> On 8/8/22 6:31 PM, Russell Currey wrote:
> > The Hash MMU already supports XOM (i.e. mmap with PROT_EXEC only)
> > through the execute-only pkey. A PROT_EXEC-only mapping will actually
> > map to RX, and then the pkey will be applied on top of it.
> >
> > Radix doesn't have pkeys, but it does have execute permissions built-in
> > to the MMU, so all we have to do to support XOM is expose it.
> >
> > That's not possible with protection_map being const, so make it RO after
> > init instead.
> >
> > Signed-off-by: Russell Currey <ruscur@russell.cc>
> > ---
> > v2: Make protection_map __ro_after_init and set it in an initcall
> > (v1 didn't work, I tested before rebasing on Anshuman's patches)
> >
> > basic test: https://raw.githubusercontent.com/ruscur/junkcode/main/mmap_test.c
> >
> > arch/powerpc/include/asm/book3s/64/radix.h | 3 +++
> > arch/powerpc/include/asm/pgtable.h | 1 -
> > arch/powerpc/mm/fault.c | 10 ++++++++++
> > arch/powerpc/mm/pgtable.c | 16 +++++++++++++++-
> > 4 files changed, 28 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
> > index 686001eda936..bf316b773d73 100644
> > --- a/arch/powerpc/include/asm/book3s/64/radix.h
> > +++ b/arch/powerpc/include/asm/book3s/64/radix.h
> > @@ -19,6 +19,9 @@
> > #include <asm/cpu_has_feature.h>
> > #endif
> >
> > +/* Execute-only page protections, Hash can use RX + execute-only pkey */
> > +#define PAGE_EXECONLY __pgprot(_PAGE_BASE | _PAGE_EXEC)
> > +
> > /* An empty PTE can still have a R or C writeback */
> > #define RADIX_PTE_NONE_MASK (_PAGE_DIRTY | _PAGE_ACCESSED)
> >
> > diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> > index 33f4bf8d22b0..3cbb6de20f9d 100644
> > --- a/arch/powerpc/include/asm/pgtable.h
> > +++ b/arch/powerpc/include/asm/pgtable.h
> > @@ -60,7 +60,6 @@ extern void paging_init(void);
> > void poking_init(void);
> >
> > extern unsigned long ioremap_bot;
> > -extern const pgprot_t protection_map[16];
> >
> > /*
> > * kern_addr_valid is intended to indicate whether an address is a valid
> > diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> > index 014005428687..887c0cc45ca6 100644
> > --- a/arch/powerpc/mm/fault.c
> > +++ b/arch/powerpc/mm/fault.c
> > @@ -270,6 +270,16 @@ static bool access_error(bool is_write, bool is_exec, struct vm_area_struct *vma
> > return false;
> > }
> >
> > + if (unlikely(!(vma->vm_flags & VM_READ))) {
> > + /*
> > + * If we're on Radix, then this could be a read attempt on
> > + * execute-only memory. On other MMUs, an "exec-only" page
> > + * will be given RX flags, so this might be redundant.
> > + */
> > + if (radix_enabled())
> > + return true;
> > + }
> > +
> > if (unlikely(!vma_is_accessible(vma)))
> > return true;
> > /*
> > diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> > index 0b2bbde5fb65..6e1a6a999c3c 100644
> > --- a/arch/powerpc/mm/pgtable.c
> > +++ b/arch/powerpc/mm/pgtable.c
> > @@ -475,7 +475,7 @@ pte_t *__find_linux_pte(pgd_t *pgdir, unsigned long ea,
> > EXPORT_SYMBOL_GPL(__find_linux_pte);
> >
> > /* Note due to the way vm flags are laid out, the bits are XWR */
> > -const pgprot_t protection_map[16] = {
> > +static pgprot_t protection_map[16] __ro_after_init = {
> > [VM_NONE] = PAGE_NONE,
> > [VM_READ] = PAGE_READONLY,
> > [VM_WRITE] = PAGE_COPY,
> > @@ -494,6 +494,20 @@ const pgprot_t protection_map[16] = {
> > [VM_SHARED | VM_EXEC | VM_WRITE | VM_READ] = PAGE_SHARED_X
> > };
> >
> > +#ifdef CONFIG_PPC_RADIX_MMU
> > +static int __init radix_update_protection_map(void)
> > +{
> > + if (early_radix_enabled()) {
> > + /* Radix directly supports execute-only page protections */
> > + protection_map[VM_EXEC] = PAGE_EXECONLY;
> > + protection_map[VM_EXEC | VM_SHARED] = PAGE_EXECONLY;
> > + }
> > +
> > + return 0;
> > +}
> > +arch_initcall(radix_update_protection_map);
>
> Instead of this can we do this in vm_get_page_prot() ?
>
> /* EXEC only shared or non shared mapping ? */
> if (radix_enabled() && ((vm_flags & (VM_READ | VM_WRITE | VM_EXEC)) == VM_EXEC))
> prot = PAGE_EXECONLY;
That is a lot simpler, thanks.
- Russell
>
>
> > +#endif /* CONFIG_PPC_RADIX_MMU */
> > +
> > #ifdef CONFIG_PPC_BOOK3S_64
> > pgprot_t vm_get_page_prot(unsigned long vm_flags)
> > {
>
* Re: [PATCH v2 1/2] powerpc/mm: Move vm_get_page_prot() out of book3s64 code
2022-08-09 0:55 ` Russell Currey
@ 2022-08-09 5:35 ` Christophe Leroy
0 siblings, 0 replies; 7+ messages in thread
From: Christophe Leroy @ 2022-08-09 5:35 UTC (permalink / raw)
To: Russell Currey, linuxppc-dev
Cc: aneesh.kumar, ajd, npiggin, anshuman.khandual
On 09/08/2022 at 02:55, Russell Currey wrote:
> On Mon, 2022-08-08 at 14:32 +0000, Christophe Leroy wrote:
>>
>>
>> On 08/08/2022 at 15:01, Russell Currey wrote:
>>> protection_map is about to be __ro_after_init instead of const, so move
>>> the only non-local function that consumes it to the same file so it can
>>> at least be static.
>>
>> What's the advantage of doing that? Why does it need to be static?
>>
>> Christophe
>
> It doesn't need to be, I didn't like having it exposed unnecessarily.
> Aneesh's suggestion lets it stay const so I can drop this patch anyway.
Yes, I think Aneesh's approach is better as it keeps book3s/64-specific
stuff in a dedicated file.
Also, as you probably saw from the robots, including asm/pkeys.h in a
non-book3s64 file was a problem, due to the following in that file:
#ifdef CONFIG_PPC_BOOK3S_64
#include <asm/book3s/64/pkeys.h>
#else
#error "Not supported"
#endif
Christophe