* [PATCH v2 0/3] arm64: use hierarchical XN permissions for all page tables
@ 2021-03-08 18:15 Ard Biesheuvel
  2021-03-08 18:15 ` [PATCH v2 1/3] arm64: mm: add missing P4D definitions and use them consistently Ard Biesheuvel
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2021-03-08 18:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, maz, catalin.marinas, will, mark.rutland,
	anshuman.khandual, qperret, kernel-team

This series tweaks the page table population code to set the UXNTable and
PXNTable bits as appropriate when page tables are being allocated and linked
into a page table hierarchy. On table entries that cover the linear region,
both PXNTable and UXNTable are set; on all other table entries, either the
UXNTable or the PXNTable attribute is set, depending on whether the hierarchy
in question is used by the kernel or by user space.
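
To illustrate, here is a rough sketch of the resulting table descriptors, using
the PUD level as an example and the macro names introduced in patches #2 and #3
(next_table_phys is just a placeholder for the physical address of the next
level table):

	pudval_t pa = next_table_phys;

	/* linear region: deny execution at EL1 and EL0 below this table */
	pudval_t linear = pa | PUD_TYPE_TABLE | PUD_TABLE_PXN | PUD_TABLE_UXN;

	/* kernel (vmalloc region) mappings: deny EL0 execution only */
	pudval_t kernel = pa | PUD_TYPE_TABLE | PUD_TABLE_UXN;

	/* user space mappings: deny EL1 execution only */
	pudval_t user = pa | PUD_TYPE_TABLE | PUD_TABLE_PXN;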

Doing so removes the ability of lower-level mappings to grant executable
permissions, which are never needed by code that works as intended. And given
that swapper's PGD level is mapped r/o and can only be updated via the fixmap
API, these restrictions on kernel mappings cannot be trivially reverted by
poking writable memory.

Note that newer cores may permit hierarchical permission checks to be disabled
(FEAT_HPDS), so that the bits can be repurposed as software bits. However, we
currently do not make use of that feature, nor do we intend to, given that
software bits in table descriptors are not in short supply anyway.

This does not address a known exploit or vulnerability, but it applies the
principle of least privilege in a way that does not result in any space
or runtime overhead.

Changes since v1:
- clean up some more occurrences of P?D_xxx mismatches (#1)
- split the PXN and UXN macro definitions so we can apply them independently
- add patch #3 to apply PXNTable xor UXNTable permissions to all user and
  kernel mappings, respectively

Cc: maz@kernel.org
Cc: catalin.marinas@arm.com
Cc: will@kernel.org
Cc: mark.rutland@arm.com
Cc: anshuman.khandual@arm.com
Cc: qperret@google.com
Cc: kernel-team@android.com

Ard Biesheuvel (3):
  arm64: mm: add missing P4D definitions and use them consistently
  arm64: mm: use XN table mapping attributes for the linear region
  arm64: mm: use XN table mapping attributes for user/kernel mappings

 arch/arm64/include/asm/pgalloc.h       | 19 +++++++-----
 arch/arm64/include/asm/pgtable-hwdef.h | 15 ++++++++++
 arch/arm64/mm/kasan_init.c             |  4 +--
 arch/arm64/mm/mmu.c                    | 31 ++++++++++++++------
 4 files changed, 51 insertions(+), 18 deletions(-)

-- 
2.30.1



* [PATCH v2 1/3] arm64: mm: add missing P4D definitions and use them consistently
  2021-03-08 18:15 [PATCH v2 0/3] arm64: use hierarchical XN permissions for all page tables Ard Biesheuvel
@ 2021-03-08 18:15 ` Ard Biesheuvel
  2021-03-09  4:56   ` Anshuman Khandual
  2021-03-08 18:15 ` [PATCH v2 2/3] arm64: mm: use XN table mapping attributes for the linear region Ard Biesheuvel
  2021-03-08 18:15 ` [PATCH v2 3/3] arm64: mm: use XN table mapping attributes for user/kernel mappings Ard Biesheuvel
  2 siblings, 1 reply; 10+ messages in thread
From: Ard Biesheuvel @ 2021-03-08 18:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, maz, catalin.marinas, will, mark.rutland,
	anshuman.khandual, qperret, kernel-team

Even though level 0, 1 and 2 descriptors share the same attribute
encodings, let's be a bit more consistent about using the right one at
the right level. So add new macros for level 0/P4D definitions, and
clean up some inconsistencies involving these macros.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/include/asm/pgalloc.h       | 4 ++--
 arch/arm64/include/asm/pgtable-hwdef.h | 9 +++++++++
 arch/arm64/mm/kasan_init.c             | 4 ++--
 arch/arm64/mm/mmu.c                    | 6 +++---
 4 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 3c6a7f5988b1..27cc643d0509 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -27,7 +27,7 @@ static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
 
 static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
 {
-	__pud_populate(pudp, __pa(pmdp), PMD_TYPE_TABLE);
+	__pud_populate(pudp, __pa(pmdp), PUD_TYPE_TABLE);
 }
 #else
 static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
@@ -45,7 +45,7 @@ static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
 
 static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
 {
-	__p4d_populate(p4dp, __pa(pudp), PUD_TYPE_TABLE);
+	__p4d_populate(p4dp, __pa(pudp), P4D_TYPE_TABLE);
 }
 #else
 static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 42442a0ae2ab..e64e77a345b2 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -94,6 +94,15 @@
 /*
  * Hardware page table definitions.
  *
+ * Level 0 descriptor (P4D).
+ */
+#define P4D_TYPE_TABLE		(_AT(p4dval_t, 3) << 0)
+#define P4D_TABLE_BIT		(_AT(p4dval_t, 1) << 1)
+#define P4D_TYPE_MASK		(_AT(p4dval_t, 3) << 0)
+#define P4D_TYPE_SECT		(_AT(p4dval_t, 1) << 0)
+#define P4D_SECT_RDONLY		(_AT(p4dval_t, 1) << 7)		/* AP[2] */
+
+/*
  * Level 1 descriptor (PUD).
  */
 #define PUD_TYPE_TABLE		(_AT(pudval_t, 3) << 0)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index d8e66c78440e..9fe40cbbd8c0 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -79,7 +79,7 @@ static pmd_t *__init kasan_pmd_offset(pud_t *pudp, unsigned long addr, int node,
 		phys_addr_t pmd_phys = early ?
 				__pa_symbol(kasan_early_shadow_pmd)
 					: kasan_alloc_zeroed_page(node);
-		__pud_populate(pudp, pmd_phys, PMD_TYPE_TABLE);
+		__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
 	}
 
 	return early ? pmd_offset_kimg(pudp, addr) : pmd_offset(pudp, addr);
@@ -92,7 +92,7 @@ static pud_t *__init kasan_pud_offset(p4d_t *p4dp, unsigned long addr, int node,
 		phys_addr_t pud_phys = early ?
 				__pa_symbol(kasan_early_shadow_pud)
 					: kasan_alloc_zeroed_page(node);
-		__p4d_populate(p4dp, pud_phys, PMD_TYPE_TABLE);
+		__p4d_populate(p4dp, pud_phys, P4D_TYPE_TABLE);
 	}
 
 	return early ? pud_offset_kimg(p4dp, addr) : pud_offset(p4dp, addr);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 3802cfbdd20d..029091474042 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -309,7 +309,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 		phys_addr_t pud_phys;
 		BUG_ON(!pgtable_alloc);
 		pud_phys = pgtable_alloc(PUD_SHIFT);
-		__p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
+		__p4d_populate(p4dp, pud_phys, P4D_TYPE_TABLE);
 		p4d = READ_ONCE(*p4dp);
 	}
 	BUG_ON(p4d_bad(p4d));
@@ -1209,11 +1209,11 @@ void __init early_fixmap_init(void)
 		pudp = pud_offset_kimg(p4dp, addr);
 	} else {
 		if (p4d_none(p4d))
-			__p4d_populate(p4dp, __pa_symbol(bm_pud), PUD_TYPE_TABLE);
+			__p4d_populate(p4dp, __pa_symbol(bm_pud), P4D_TYPE_TABLE);
 		pudp = fixmap_pud(addr);
 	}
 	if (pud_none(READ_ONCE(*pudp)))
-		__pud_populate(pudp, __pa_symbol(bm_pmd), PMD_TYPE_TABLE);
+		__pud_populate(pudp, __pa_symbol(bm_pmd), PUD_TYPE_TABLE);
 	pmdp = fixmap_pmd(addr);
 	__pmd_populate(pmdp, __pa_symbol(bm_pte), PMD_TYPE_TABLE);
 
-- 
2.30.1



* [PATCH v2 2/3] arm64: mm: use XN table mapping attributes for the linear region
  2021-03-08 18:15 [PATCH v2 0/3] arm64: use hierarchical XN permissions for all page tables Ard Biesheuvel
  2021-03-08 18:15 ` [PATCH v2 1/3] arm64: mm: add missing P4D definitions and use them consistently Ard Biesheuvel
@ 2021-03-08 18:15 ` Ard Biesheuvel
  2021-03-09  5:09   ` Anshuman Khandual
  2021-03-09  5:52   ` Anshuman Khandual
  2021-03-08 18:15 ` [PATCH v2 3/3] arm64: mm: use XN table mapping attributes for user/kernel mappings Ard Biesheuvel
  2 siblings, 2 replies; 10+ messages in thread
From: Ard Biesheuvel @ 2021-03-08 18:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, maz, catalin.marinas, will, mark.rutland,
	anshuman.khandual, qperret, kernel-team

The way the arm64 kernel virtual address space is constructed guarantees
that swapper PGD entries are never shared between the linear region on
the one hand, and the vmalloc region on the other, which is where all
kernel text, module text and BPF text mappings reside.

This means that mappings in the linear region (which never require
executable permissions) never share any table entries at any level with
mappings that do require executable permissions, and so we can set the
table-level PXN attributes for all table entries that are created while
setting up mappings in the linear region. Since swapper's PGD level page
table is mapped r/o itself, this adds another layer of robustness to the
way the kernel manages its own page tables. While at it, set the UXN
attribute as well for all kernel mappings created at boot.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
---
 arch/arm64/include/asm/pgtable-hwdef.h |  6 +++++
 arch/arm64/mm/mmu.c                    | 27 +++++++++++++++-----
 2 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index e64e77a345b2..b82575a33f8b 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -101,6 +101,8 @@
 #define P4D_TYPE_MASK		(_AT(p4dval_t, 3) << 0)
 #define P4D_TYPE_SECT		(_AT(p4dval_t, 1) << 0)
 #define P4D_SECT_RDONLY		(_AT(p4dval_t, 1) << 7)		/* AP[2] */
+#define P4D_TABLE_PXN		(_AT(p4dval_t, 1) << 59)
+#define P4D_TABLE_UXN		(_AT(p4dval_t, 1) << 60)
 
 /*
  * Level 1 descriptor (PUD).
@@ -110,6 +112,8 @@
 #define PUD_TYPE_MASK		(_AT(pudval_t, 3) << 0)
 #define PUD_TYPE_SECT		(_AT(pudval_t, 1) << 0)
 #define PUD_SECT_RDONLY		(_AT(pudval_t, 1) << 7)		/* AP[2] */
+#define PUD_TABLE_PXN		(_AT(pudval_t, 1) << 59)
+#define PUD_TABLE_UXN		(_AT(pudval_t, 1) << 60)
 
 /*
  * Level 2 descriptor (PMD).
@@ -131,6 +135,8 @@
 #define PMD_SECT_CONT		(_AT(pmdval_t, 1) << 52)
 #define PMD_SECT_PXN		(_AT(pmdval_t, 1) << 53)
 #define PMD_SECT_UXN		(_AT(pmdval_t, 1) << 54)
+#define PMD_TABLE_PXN		(_AT(pmdval_t, 1) << 59)
+#define PMD_TABLE_UXN		(_AT(pmdval_t, 1) << 60)
 
 /*
  * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 029091474042..9de59fce0450 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -39,6 +39,7 @@
 
 #define NO_BLOCK_MAPPINGS	BIT(0)
 #define NO_CONT_MAPPINGS	BIT(1)
+#define NO_EXEC_MAPPINGS	BIT(2)
 
 u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
@@ -185,10 +186,14 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 
 	BUG_ON(pmd_sect(pmd));
 	if (pmd_none(pmd)) {
+		pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
 		phys_addr_t pte_phys;
+
+		if (flags & NO_EXEC_MAPPINGS)
+			pmdval |= PMD_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pte_phys = pgtable_alloc(PAGE_SHIFT);
-		__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
+		__pmd_populate(pmdp, pte_phys, pmdval);
 		pmd = READ_ONCE(*pmdp);
 	}
 	BUG_ON(pmd_bad(pmd));
@@ -259,10 +264,14 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 	 */
 	BUG_ON(pud_sect(pud));
 	if (pud_none(pud)) {
+		pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN;
 		phys_addr_t pmd_phys;
+
+		if (flags & NO_EXEC_MAPPINGS)
+			pudval |= PUD_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pmd_phys = pgtable_alloc(PMD_SHIFT);
-		__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
+		__pud_populate(pudp, pmd_phys, pudval);
 		pud = READ_ONCE(*pudp);
 	}
 	BUG_ON(pud_bad(pud));
@@ -306,10 +315,14 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 	p4d_t p4d = READ_ONCE(*p4dp);
 
 	if (p4d_none(p4d)) {
+		p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN;
 		phys_addr_t pud_phys;
+
+		if (flags & NO_EXEC_MAPPINGS)
+			p4dval |= P4D_TABLE_PXN;
 		BUG_ON(!pgtable_alloc);
 		pud_phys = pgtable_alloc(PUD_SHIFT);
-		__p4d_populate(p4dp, pud_phys, P4D_TYPE_TABLE);
+		__p4d_populate(p4dp, pud_phys, p4dval);
 		p4d = READ_ONCE(*p4dp);
 	}
 	BUG_ON(p4d_bad(p4d));
@@ -489,11 +502,11 @@ static void __init map_mem(pgd_t *pgdp)
 	phys_addr_t kernel_start = __pa_symbol(_stext);
 	phys_addr_t kernel_end = __pa_symbol(__init_begin);
 	phys_addr_t start, end;
-	int flags = 0;
+	int flags = NO_EXEC_MAPPINGS;
 	u64 i;
 
 	if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
-		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	/*
 	 * Take care not to create a writable alias for the
@@ -1462,7 +1475,7 @@ struct range arch_get_mappable_range(void)
 int arch_add_memory(int nid, u64 start, u64 size,
 		    struct mhp_params *params)
 {
-	int ret, flags = 0;
+	int ret, flags = NO_EXEC_MAPPINGS;
 
 	VM_BUG_ON(!mhp_range_allowed(start, size, true));
 
@@ -1472,7 +1485,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	 */
 	if (rodata_full || debug_pagealloc_enabled() ||
 	    IS_ENABLED(CONFIG_KFENCE))
-		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
+		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
 			     size, params->pgprot, __pgd_pgtable_alloc,
-- 
2.30.1



* [PATCH v2 3/3] arm64: mm: use XN table mapping attributes for user/kernel mappings
  2021-03-08 18:15 [PATCH v2 0/3] arm64: use hierarchical XN permissions for all page tables Ard Biesheuvel
  2021-03-08 18:15 ` [PATCH v2 1/3] arm64: mm: add missing P4D definitions and use them consistently Ard Biesheuvel
  2021-03-08 18:15 ` [PATCH v2 2/3] arm64: mm: use XN table mapping attributes for the linear region Ard Biesheuvel
@ 2021-03-08 18:15 ` Ard Biesheuvel
  2021-03-09  5:40   ` Anshuman Khandual
  2 siblings, 1 reply; 10+ messages in thread
From: Ard Biesheuvel @ 2021-03-08 18:15 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Ard Biesheuvel, maz, catalin.marinas, will, mark.rutland,
	anshuman.khandual, qperret, kernel-team

As the kernel and user space page tables are strictly mutually exclusive
when it comes to executable permissions, we can set the UXN table attribute
on all table entries that are created while creating kernel mappings in the
swapper page tables, and the PXN table attribute on all table entries that
are created while creating user space mappings in user space page tables.

While at it, get rid of a redundant comment.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/pgalloc.h | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 27cc643d0509..31fbab3d6f99 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -27,7 +27,10 @@ static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
 
 static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
 {
-	__pud_populate(pudp, __pa(pmdp), PUD_TYPE_TABLE);
+	pudval_t pudval = PUD_TYPE_TABLE;
+
+	pudval |= (mm == &init_mm) ? PUD_TABLE_UXN : PUD_TABLE_PXN;
+	__pud_populate(pudp, __pa(pmdp), pudval);
 }
 #else
 static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
@@ -45,7 +48,10 @@ static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
 
 static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
 {
-	__p4d_populate(p4dp, __pa(pudp), P4D_TYPE_TABLE);
+	p4dval_t p4dval = P4D_TYPE_TABLE;
+
+	p4dval |= (mm == &init_mm) ? P4D_TABLE_UXN : P4D_TABLE_PXN;
+	__p4d_populate(p4dp, __pa(pudp), p4dval);
 }
 #else
 static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
@@ -70,16 +76,15 @@ static inline void __pmd_populate(pmd_t *pmdp, phys_addr_t ptep,
 static inline void
 pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep)
 {
-	/*
-	 * The pmd must be loaded with the physical address of the PTE table
-	 */
-	__pmd_populate(pmdp, __pa(ptep), PMD_TYPE_TABLE);
+	VM_BUG_ON(mm != &init_mm);
+	__pmd_populate(pmdp, __pa(ptep), PMD_TYPE_TABLE | PMD_TABLE_UXN);
 }
 
 static inline void
 pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep)
 {
-	__pmd_populate(pmdp, page_to_phys(ptep), PMD_TYPE_TABLE);
+	VM_BUG_ON(mm == &init_mm);
+	__pmd_populate(pmdp, page_to_phys(ptep), PMD_TYPE_TABLE | PMD_TABLE_PXN);
 }
 #define pmd_pgtable(pmd) pmd_page(pmd)
 
-- 
2.30.1



* Re: [PATCH v2 1/3] arm64: mm: add missing P4D definitions and use them consistently
  2021-03-08 18:15 ` [PATCH v2 1/3] arm64: mm: add missing P4D definitions and use them consistently Ard Biesheuvel
@ 2021-03-09  4:56   ` Anshuman Khandual
  0 siblings, 0 replies; 10+ messages in thread
From: Anshuman Khandual @ 2021-03-09  4:56 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel
  Cc: maz, catalin.marinas, will, mark.rutland, qperret, kernel-team



On 3/8/21 11:45 PM, Ard Biesheuvel wrote:
> Even though level 0, 1 and 2 descriptors share the same attribute
> encodings, let's be a bit more consistent about using the right one at
> the right level. So add new macros for level 0/P4D definitions, and
> clean up some inconsistencies involving these macros.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> Acked-by: Mark Rutland <mark.rutland@arm.com>

LGTM

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

> ---
>  arch/arm64/include/asm/pgalloc.h       | 4 ++--
>  arch/arm64/include/asm/pgtable-hwdef.h | 9 +++++++++
>  arch/arm64/mm/kasan_init.c             | 4 ++--
>  arch/arm64/mm/mmu.c                    | 6 +++---
>  4 files changed, 16 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
> index 3c6a7f5988b1..27cc643d0509 100644
> --- a/arch/arm64/include/asm/pgalloc.h
> +++ b/arch/arm64/include/asm/pgalloc.h
> @@ -27,7 +27,7 @@ static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
>  
>  static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
>  {
> -	__pud_populate(pudp, __pa(pmdp), PMD_TYPE_TABLE);
> +	__pud_populate(pudp, __pa(pmdp), PUD_TYPE_TABLE);
>  }
>  #else
>  static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
> @@ -45,7 +45,7 @@ static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
>  
>  static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
>  {
> -	__p4d_populate(p4dp, __pa(pudp), PUD_TYPE_TABLE);
> +	__p4d_populate(p4dp, __pa(pudp), P4D_TYPE_TABLE);
>  }
>  #else
>  static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index 42442a0ae2ab..e64e77a345b2 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -94,6 +94,15 @@
>  /*
>   * Hardware page table definitions.
>   *
> + * Level 0 descriptor (P4D).
> + */
> +#define P4D_TYPE_TABLE		(_AT(p4dval_t, 3) << 0)
> +#define P4D_TABLE_BIT		(_AT(p4dval_t, 1) << 1)
> +#define P4D_TYPE_MASK		(_AT(p4dval_t, 3) << 0)
> +#define P4D_TYPE_SECT		(_AT(p4dval_t, 1) << 0)
> +#define P4D_SECT_RDONLY		(_AT(p4dval_t, 1) << 7)		/* AP[2] */
> +
> +/*
>   * Level 1 descriptor (PUD).
>   */
>  #define PUD_TYPE_TABLE		(_AT(pudval_t, 3) << 0)
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index d8e66c78440e..9fe40cbbd8c0 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -79,7 +79,7 @@ static pmd_t *__init kasan_pmd_offset(pud_t *pudp, unsigned long addr, int node,
>  		phys_addr_t pmd_phys = early ?
>  				__pa_symbol(kasan_early_shadow_pmd)
>  					: kasan_alloc_zeroed_page(node);
> -		__pud_populate(pudp, pmd_phys, PMD_TYPE_TABLE);
> +		__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
>  	}
>  
>  	return early ? pmd_offset_kimg(pudp, addr) : pmd_offset(pudp, addr);
> @@ -92,7 +92,7 @@ static pud_t *__init kasan_pud_offset(p4d_t *p4dp, unsigned long addr, int node,
>  		phys_addr_t pud_phys = early ?
>  				__pa_symbol(kasan_early_shadow_pud)
>  					: kasan_alloc_zeroed_page(node);
> -		__p4d_populate(p4dp, pud_phys, PMD_TYPE_TABLE);
> +		__p4d_populate(p4dp, pud_phys, P4D_TYPE_TABLE);
>  	}
>  
>  	return early ? pud_offset_kimg(p4dp, addr) : pud_offset(p4dp, addr);
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 3802cfbdd20d..029091474042 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -309,7 +309,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
>  		phys_addr_t pud_phys;
>  		BUG_ON(!pgtable_alloc);
>  		pud_phys = pgtable_alloc(PUD_SHIFT);
> -		__p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
> +		__p4d_populate(p4dp, pud_phys, P4D_TYPE_TABLE);
>  		p4d = READ_ONCE(*p4dp);
>  	}
>  	BUG_ON(p4d_bad(p4d));
> @@ -1209,11 +1209,11 @@ void __init early_fixmap_init(void)
>  		pudp = pud_offset_kimg(p4dp, addr);
>  	} else {
>  		if (p4d_none(p4d))
> -			__p4d_populate(p4dp, __pa_symbol(bm_pud), PUD_TYPE_TABLE);
> +			__p4d_populate(p4dp, __pa_symbol(bm_pud), P4D_TYPE_TABLE);
>  		pudp = fixmap_pud(addr);
>  	}
>  	if (pud_none(READ_ONCE(*pudp)))
> -		__pud_populate(pudp, __pa_symbol(bm_pmd), PMD_TYPE_TABLE);
> +		__pud_populate(pudp, __pa_symbol(bm_pmd), PUD_TYPE_TABLE);
>  	pmdp = fixmap_pmd(addr);
>  	__pmd_populate(pmdp, __pa_symbol(bm_pte), PMD_TYPE_TABLE);
>  
> 


* Re: [PATCH v2 2/3] arm64: mm: use XN table mapping attributes for the linear region
  2021-03-08 18:15 ` [PATCH v2 2/3] arm64: mm: use XN table mapping attributes for the linear region Ard Biesheuvel
@ 2021-03-09  5:09   ` Anshuman Khandual
  2021-03-09 12:36     ` Ard Biesheuvel
  2021-03-09  5:52   ` Anshuman Khandual
  1 sibling, 1 reply; 10+ messages in thread
From: Anshuman Khandual @ 2021-03-09  5:09 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel
  Cc: maz, catalin.marinas, will, mark.rutland, qperret, kernel-team



On 3/8/21 11:45 PM, Ard Biesheuvel wrote:
> The way the arm64 kernel virtual address space is constructed guarantees
> that swapper PGD entries are never shared between the linear region on
> the one hand, and the vmalloc region on the other, which is where all
> kernel text, module text and BPF text mappings reside.

While here, wondering if it would be better to validate this assumption
via a BUILD_BUG_ON() or something similar ?

> 
> This means that mappings in the linear region (which never require
> executable permissions) never share any table entries at any level with
> mappings that do require executable permissions, and so we can set the
> table-level PXN attributes for all table entries that are created while
> setting up mappings in the linear region. Since swapper's PGD level page
> table is mapped r/o itself, this adds another layer of robustness to the
> way the kernel manages its own page tables. While at it, set the UXN
> attribute as well for all kernel mappings created at boot.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> Acked-by: Mark Rutland <mark.rutland@arm.com>
> ---
>  arch/arm64/include/asm/pgtable-hwdef.h |  6 +++++
>  arch/arm64/mm/mmu.c                    | 27 +++++++++++++++-----
>  2 files changed, 26 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> index e64e77a345b2..b82575a33f8b 100644
> --- a/arch/arm64/include/asm/pgtable-hwdef.h
> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> @@ -101,6 +101,8 @@
>  #define P4D_TYPE_MASK		(_AT(p4dval_t, 3) << 0)
>  #define P4D_TYPE_SECT		(_AT(p4dval_t, 1) << 0)
>  #define P4D_SECT_RDONLY		(_AT(p4dval_t, 1) << 7)		/* AP[2] */
> +#define P4D_TABLE_PXN		(_AT(p4dval_t, 1) << 59)
> +#define P4D_TABLE_UXN		(_AT(p4dval_t, 1) << 60)
>  
>  /*
>   * Level 1 descriptor (PUD).
> @@ -110,6 +112,8 @@
>  #define PUD_TYPE_MASK		(_AT(pudval_t, 3) << 0)
>  #define PUD_TYPE_SECT		(_AT(pudval_t, 1) << 0)
>  #define PUD_SECT_RDONLY		(_AT(pudval_t, 1) << 7)		/* AP[2] */
> +#define PUD_TABLE_PXN		(_AT(pudval_t, 1) << 59)
> +#define PUD_TABLE_UXN		(_AT(pudval_t, 1) << 60)
>  
>  /*
>   * Level 2 descriptor (PMD).
> @@ -131,6 +135,8 @@
>  #define PMD_SECT_CONT		(_AT(pmdval_t, 1) << 52)
>  #define PMD_SECT_PXN		(_AT(pmdval_t, 1) << 53)
>  #define PMD_SECT_UXN		(_AT(pmdval_t, 1) << 54)
> +#define PMD_TABLE_PXN		(_AT(pmdval_t, 1) << 59)
> +#define PMD_TABLE_UXN		(_AT(pmdval_t, 1) << 60)
>  
>  /*
>   * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index 029091474042..9de59fce0450 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -39,6 +39,7 @@
>  
>  #define NO_BLOCK_MAPPINGS	BIT(0)
>  #define NO_CONT_MAPPINGS	BIT(1)
> +#define NO_EXEC_MAPPINGS	BIT(2)

NO_EXEC_MAPPINGS flag basically selects PXX_TABLE_PXN because PXX_TABLE_UXN
is now a default, which is already included as a protection flag. Should not
this be renamed as NO_KERNEL_EXEC_MAPPINGS matching what PXX_TABLE_PXN is
going to achieve.

>  
>  u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
>  u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
> @@ -185,10 +186,14 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
>  
>  	BUG_ON(pmd_sect(pmd));
>  	if (pmd_none(pmd)) {
> +		pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
>  		phys_addr_t pte_phys;
> +
> +		if (flags & NO_EXEC_MAPPINGS)
> +			pmdval |= PMD_TABLE_PXN;
>  		BUG_ON(!pgtable_alloc);
>  		pte_phys = pgtable_alloc(PAGE_SHIFT);
> -		__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
> +		__pmd_populate(pmdp, pte_phys, pmdval);
>  		pmd = READ_ONCE(*pmdp);
>  	}
>  	BUG_ON(pmd_bad(pmd));
> @@ -259,10 +264,14 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
>  	 */
>  	BUG_ON(pud_sect(pud));
>  	if (pud_none(pud)) {
> +		pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN;
>  		phys_addr_t pmd_phys;
> +
> +		if (flags & NO_EXEC_MAPPINGS)
> +			pudval |= PUD_TABLE_PXN;
>  		BUG_ON(!pgtable_alloc);
>  		pmd_phys = pgtable_alloc(PMD_SHIFT);
> -		__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
> +		__pud_populate(pudp, pmd_phys, pudval);
>  		pud = READ_ONCE(*pudp);
>  	}
>  	BUG_ON(pud_bad(pud));
> @@ -306,10 +315,14 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
>  	p4d_t p4d = READ_ONCE(*p4dp);
>  
>  	if (p4d_none(p4d)) {
> +		p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN;
>  		phys_addr_t pud_phys;
> +
> +		if (flags & NO_EXEC_MAPPINGS)
> +			p4dval |= P4D_TABLE_PXN;
>  		BUG_ON(!pgtable_alloc);
>  		pud_phys = pgtable_alloc(PUD_SHIFT);
> -		__p4d_populate(p4dp, pud_phys, P4D_TYPE_TABLE);
> +		__p4d_populate(p4dp, pud_phys, p4dval);
>  		p4d = READ_ONCE(*p4dp);
>  	}
>  	BUG_ON(p4d_bad(p4d));
> @@ -489,11 +502,11 @@ static void __init map_mem(pgd_t *pgdp)
>  	phys_addr_t kernel_start = __pa_symbol(_stext);
>  	phys_addr_t kernel_end = __pa_symbol(__init_begin);
>  	phys_addr_t start, end;
> -	int flags = 0;
> +	int flags = NO_EXEC_MAPPINGS;
>  	u64 i;
>  
>  	if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
> -		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> +		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>  
>  	/*
>  	 * Take care not to create a writable alias for the
> @@ -1462,7 +1475,7 @@ struct range arch_get_mappable_range(void)
>  int arch_add_memory(int nid, u64 start, u64 size,
>  		    struct mhp_params *params)
>  {
> -	int ret, flags = 0;
> +	int ret, flags = NO_EXEC_MAPPINGS;
>  
>  	VM_BUG_ON(!mhp_range_allowed(start, size, true));
>  
> @@ -1472,7 +1485,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>  	 */
>  	if (rodata_full || debug_pagealloc_enabled() ||
>  	    IS_ENABLED(CONFIG_KFENCE))
> -		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> +		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>  
>  	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>  			     size, params->pgprot, __pgd_pgtable_alloc,
> 


* Re: [PATCH v2 3/3] arm64: mm: use XN table mapping attributes for user/kernel mappings
  2021-03-08 18:15 ` [PATCH v2 3/3] arm64: mm: use XN table mapping attributes for user/kernel mappings Ard Biesheuvel
@ 2021-03-09  5:40   ` Anshuman Khandual
  0 siblings, 0 replies; 10+ messages in thread
From: Anshuman Khandual @ 2021-03-09  5:40 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel
  Cc: maz, catalin.marinas, will, mark.rutland, qperret, kernel-team



On 3/8/21 11:45 PM, Ard Biesheuvel wrote:
> As the kernel and user space page tables are strictly mutually exclusive
> when it comes to executable permissions, we can set the UXN table attribute
> on all table entries that are created while creating kernel mappings in the
> swapper page tables, and the PXN table attribute on all table entries that
> are created while creating user space mappings in user space page tables.
> 
> While at it, get rid of a redundant comment.
> 
> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>

> ---
>  arch/arm64/include/asm/pgalloc.h | 19 ++++++++++++-------
>  1 file changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
> index 27cc643d0509..31fbab3d6f99 100644
> --- a/arch/arm64/include/asm/pgalloc.h
> +++ b/arch/arm64/include/asm/pgalloc.h
> @@ -27,7 +27,10 @@ static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
>  
>  static inline void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmdp)
>  {
> -	__pud_populate(pudp, __pa(pmdp), PUD_TYPE_TABLE);
> +	pudval_t pudval = PUD_TYPE_TABLE;
> +
> +	pudval |= (mm == &init_mm) ? PUD_TABLE_UXN : PUD_TABLE_PXN;
> +	__pud_populate(pudp, __pa(pmdp), pudval);
>  }
>  #else
>  static inline void __pud_populate(pud_t *pudp, phys_addr_t pmdp, pudval_t prot)
> @@ -45,7 +48,10 @@ static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
>  
>  static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4dp, pud_t *pudp)
>  {
> -	__p4d_populate(p4dp, __pa(pudp), P4D_TYPE_TABLE);
> +	p4dval_t p4dval = P4D_TYPE_TABLE;
> +
> +	p4dval |= (mm == &init_mm) ? P4D_TABLE_UXN : P4D_TABLE_PXN;
> +	__p4d_populate(p4dp, __pa(pudp), p4dval);
>  }
>  #else
>  static inline void __p4d_populate(p4d_t *p4dp, phys_addr_t pudp, p4dval_t prot)
> @@ -70,16 +76,15 @@ static inline void __pmd_populate(pmd_t *pmdp, phys_addr_t ptep,
>  static inline void
>  pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp, pte_t *ptep)
>  {
> -	/*
> -	 * The pmd must be loaded with the physical address of the PTE table
> -	 */
> -	__pmd_populate(pmdp, __pa(ptep), PMD_TYPE_TABLE);
> +	VM_BUG_ON(mm != &init_mm);
> +	__pmd_populate(pmdp, __pa(ptep), PMD_TYPE_TABLE | PMD_TABLE_UXN);
>  }
>  
>  static inline void
>  pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep)
>  {
> -	__pmd_populate(pmdp, page_to_phys(ptep), PMD_TYPE_TABLE);
> +	VM_BUG_ON(mm == &init_mm);
> +	__pmd_populate(pmdp, page_to_phys(ptep), PMD_TYPE_TABLE | PMD_TABLE_PXN);
>  }
>  #define pmd_pgtable(pmd) pmd_page(pmd)
>  
> 


* Re: [PATCH v2 2/3] arm64: mm: use XN table mapping attributes for the linear region
  2021-03-08 18:15 ` [PATCH v2 2/3] arm64: mm: use XN table mapping attributes for the linear region Ard Biesheuvel
  2021-03-09  5:09   ` Anshuman Khandual
@ 2021-03-09  5:52   ` Anshuman Khandual
  1 sibling, 0 replies; 10+ messages in thread
From: Anshuman Khandual @ 2021-03-09  5:52 UTC (permalink / raw)
  To: Ard Biesheuvel, linux-arm-kernel
  Cc: maz, catalin.marinas, will, mark.rutland, qperret, kernel-team


On 3/8/21 11:45 PM, Ard Biesheuvel wrote:
> The way the arm64 kernel virtual address space is constructed guarantees
> that swapper PGD entries are never shared between the linear region on
> the one hand, and the vmalloc region on the other, which is where all
> kernel text, module text and BPF text mappings reside.
> 
> This means that mappings in the linear region (which never require
> executable permissions) never share any table entries at any level with
> mappings that do require executable permissions, and so we can set the
> table-level PXN attributes for all table entries that are created while
> setting up mappings in the linear region. Since swapper's PGD level page
> table is mapped r/o itself, this adds another layer of robustness to the
> way the kernel manages its own page tables. While at it, set the UXN
> attribute as well for all kernel mappings created at boot.

What happens when FEAT_HPDS is implemented and also being enabled ? Would
there be any adverse effect here, or at least break the assumption that the
linear mapping page table entries are safer than before ? Hence, it might
still be better to check FEAT_HPDS feature enablement here, even if it is
not being used right now.


* Re: [PATCH v2 2/3] arm64: mm: use XN table mapping attributes for the linear region
  2021-03-09  5:09   ` Anshuman Khandual
@ 2021-03-09 12:36     ` Ard Biesheuvel
  2021-03-10  6:48       ` Anshuman Khandual
  0 siblings, 1 reply; 10+ messages in thread
From: Ard Biesheuvel @ 2021-03-09 12:36 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Linux ARM, Marc Zyngier, Catalin Marinas, Will Deacon,
	Mark Rutland, Quentin Perret, Android Kernel Team

On Tue, 9 Mar 2021 at 06:08, Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
>
>
> On 3/8/21 11:45 PM, Ard Biesheuvel wrote:
> > The way the arm64 kernel virtual address space is constructed guarantees
> > that swapper PGD entries are never shared between the linear region on
> > the one hand, and the vmalloc region on the other, which is where all
> > kernel text, module text and BPF text mappings reside.
>
> While here, wondering if it would be better to validate this assumption
> via a BUILD_BUG_ON() or something similar ?
>

Sure. Something like

BUILD_BUG_ON(pgd_index(_PAGE_END(VA_BITS_MIN) - 1) ==
pgd_index(_PAGE_END(VA_BITS_MIN)));

perhaps?

> >
> > This means that mappings in the linear region (which never require
> > executable permissions) never share any table entries at any level with
> > mappings that do require executable permissions, and so we can set the
> > table-level PXN attributes for all table entries that are created while
> > setting up mappings in the linear region. Since swapper's PGD level page
> > table is mapped r/o itself, this adds another layer of robustness to the
> > way the kernel manages its own page tables. While at it, set the UXN
> > attribute as well for all kernel mappings created at boot.
> >
> What happens when FEAT_HPDS is implemented and also being enabled ? Would
> > there be any adverse effect here, or at least break the assumption that the
> > linear mapping page table entries are safer than before ? Hence, it might
> > still be better to check FEAT_HPDS feature enablement here, even if it is
> not being used right now.

I would assume that enabling HPDS is only done to free up those bits
for some other purpose, and until that time, I don't think we need to
worry about this, given that the values are just going to be ignored
otherwise.

> > Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
> > Acked-by: Mark Rutland <mark.rutland@arm.com>
> > ---
> >  arch/arm64/include/asm/pgtable-hwdef.h |  6 +++++
> >  arch/arm64/mm/mmu.c                    | 27 +++++++++++++++-----
> >  2 files changed, 26 insertions(+), 7 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
> > index e64e77a345b2..b82575a33f8b 100644
> > --- a/arch/arm64/include/asm/pgtable-hwdef.h
> > +++ b/arch/arm64/include/asm/pgtable-hwdef.h
> > @@ -101,6 +101,8 @@
> >  #define P4D_TYPE_MASK                (_AT(p4dval_t, 3) << 0)
> >  #define P4D_TYPE_SECT                (_AT(p4dval_t, 1) << 0)
> >  #define P4D_SECT_RDONLY              (_AT(p4dval_t, 1) << 7)         /* AP[2] */
> > +#define P4D_TABLE_PXN                (_AT(p4dval_t, 1) << 59)
> > +#define P4D_TABLE_UXN                (_AT(p4dval_t, 1) << 60)
> >
> >  /*
> >   * Level 1 descriptor (PUD).
> > @@ -110,6 +112,8 @@
> >  #define PUD_TYPE_MASK                (_AT(pudval_t, 3) << 0)
> >  #define PUD_TYPE_SECT                (_AT(pudval_t, 1) << 0)
> >  #define PUD_SECT_RDONLY              (_AT(pudval_t, 1) << 7)         /* AP[2] */
> > +#define PUD_TABLE_PXN                (_AT(pudval_t, 1) << 59)
> > +#define PUD_TABLE_UXN                (_AT(pudval_t, 1) << 60)
> >
> >  /*
> >   * Level 2 descriptor (PMD).
> > @@ -131,6 +135,8 @@
> >  #define PMD_SECT_CONT                (_AT(pmdval_t, 1) << 52)
> >  #define PMD_SECT_PXN         (_AT(pmdval_t, 1) << 53)
> >  #define PMD_SECT_UXN         (_AT(pmdval_t, 1) << 54)
> > +#define PMD_TABLE_PXN                (_AT(pmdval_t, 1) << 59)
> > +#define PMD_TABLE_UXN                (_AT(pmdval_t, 1) << 60)
> >
> >  /*
> >   * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index 029091474042..9de59fce0450 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -39,6 +39,7 @@
> >
> >  #define NO_BLOCK_MAPPINGS    BIT(0)
> >  #define NO_CONT_MAPPINGS     BIT(1)
> > +#define NO_EXEC_MAPPINGS     BIT(2)
>
> NO_EXEC_MAPPINGS flag basically selects PXX_TABLE_PXN because PXX_TABLE_UXN
> is now a default, which is already included as a protection flag. Should not
> this be renamed as NO_KERNEL_EXEC_MAPPINGS matching what PXX_TABLE_PXN is
> going to achieve.
>

I don't think renaming this flag is going to make things more clear tbh.
> >
> >  u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
> >  u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
> > @@ -185,10 +186,14 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
> >
> >       BUG_ON(pmd_sect(pmd));
> >       if (pmd_none(pmd)) {
> > +             pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
> >               phys_addr_t pte_phys;
> > +
> > +             if (flags & NO_EXEC_MAPPINGS)
> > +                     pmdval |= PMD_TABLE_PXN;
> >               BUG_ON(!pgtable_alloc);
> >               pte_phys = pgtable_alloc(PAGE_SHIFT);
> > -             __pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
> > +             __pmd_populate(pmdp, pte_phys, pmdval);
> >               pmd = READ_ONCE(*pmdp);
> >       }
> >       BUG_ON(pmd_bad(pmd));
> > @@ -259,10 +264,14 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
> >        */
> >       BUG_ON(pud_sect(pud));
> >       if (pud_none(pud)) {
> > +             pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN;
> >               phys_addr_t pmd_phys;
> > +
> > +             if (flags & NO_EXEC_MAPPINGS)
> > +                     pudval |= PUD_TABLE_PXN;
> >               BUG_ON(!pgtable_alloc);
> >               pmd_phys = pgtable_alloc(PMD_SHIFT);
> > -             __pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
> > +             __pud_populate(pudp, pmd_phys, pudval);
> >               pud = READ_ONCE(*pudp);
> >       }
> >       BUG_ON(pud_bad(pud));
> > @@ -306,10 +315,14 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
> >       p4d_t p4d = READ_ONCE(*p4dp);
> >
> >       if (p4d_none(p4d)) {
> > +             p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN;
> >               phys_addr_t pud_phys;
> > +
> > +             if (flags & NO_EXEC_MAPPINGS)
> > +                     p4dval |= P4D_TABLE_PXN;
> >               BUG_ON(!pgtable_alloc);
> >               pud_phys = pgtable_alloc(PUD_SHIFT);
> > -             __p4d_populate(p4dp, pud_phys, P4D_TYPE_TABLE);
> > +             __p4d_populate(p4dp, pud_phys, p4dval);
> >               p4d = READ_ONCE(*p4dp);
> >       }
> >       BUG_ON(p4d_bad(p4d));
> > @@ -489,11 +502,11 @@ static void __init map_mem(pgd_t *pgdp)
> >       phys_addr_t kernel_start = __pa_symbol(_stext);
> >       phys_addr_t kernel_end = __pa_symbol(__init_begin);
> >       phys_addr_t start, end;
> > -     int flags = 0;
> > +     int flags = NO_EXEC_MAPPINGS;
> >       u64 i;
> >
> >       if (rodata_full || crash_mem_map || debug_pagealloc_enabled())
> > -             flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> > +             flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> >
> >       /*
> >        * Take care not to create a writable alias for the
> > @@ -1462,7 +1475,7 @@ struct range arch_get_mappable_range(void)
> >  int arch_add_memory(int nid, u64 start, u64 size,
> >                   struct mhp_params *params)
> >  {
> > -     int ret, flags = 0;
> > +     int ret, flags = NO_EXEC_MAPPINGS;
> >
> >       VM_BUG_ON(!mhp_range_allowed(start, size, true));
> >
> > @@ -1472,7 +1485,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
> >        */
> >       if (rodata_full || debug_pagealloc_enabled() ||
> >           IS_ENABLED(CONFIG_KFENCE))
> > -             flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> > +             flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
> >
> >       __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
> >                            size, params->pgprot, __pgd_pgtable_alloc,
> >


* Re: [PATCH v2 2/3] arm64: mm: use XN table mapping attributes for the linear region
  2021-03-09 12:36     ` Ard Biesheuvel
@ 2021-03-10  6:48       ` Anshuman Khandual
  0 siblings, 0 replies; 10+ messages in thread
From: Anshuman Khandual @ 2021-03-10  6:48 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Linux ARM, Marc Zyngier, Catalin Marinas, Will Deacon,
	Mark Rutland, Quentin Perret, Android Kernel Team



On 3/9/21 6:06 PM, Ard Biesheuvel wrote:
> On Tue, 9 Mar 2021 at 06:08, Anshuman Khandual
> <anshuman.khandual@arm.com> wrote:
>>
>>
>>
>> On 3/8/21 11:45 PM, Ard Biesheuvel wrote:
>>> The way the arm64 kernel virtual address space is constructed guarantees
>>> that swapper PGD entries are never shared between the linear region on
>>> the one hand, and the vmalloc region on the other, which is where all
>>> kernel text, module text and BPF text mappings reside.
>>
>> While here, wondering if it would be better to validate this assumption
>> via a BUILD_BUG_ON() or something similar ?
>>
> 
> Sure. Something like
> 
> BUILD_BUG_ON(pgd_index(_PAGE_END(VA_BITS_MIN) - 1) ==
> pgd_index(_PAGE_END(VA_BITS_MIN)));
> 
> perhaps?

Yes, looks good. But probably with a comment explaining how this clear
separation in the kernel page tables enables the use of the hierarchical
permission structure.

> 
>>>
>>> This means that mappings in the linear region (which never require
>>> executable permissions) never share any table entries at any level with
>>> mappings that do require executable permissions, and so we can set the
>>> table-level PXN attributes for all table entries that are created while
>>> setting up mappings in the linear region. Since swapper's PGD level page
>>> table is mapped r/o itself, this adds another layer of robustness to the
>>> way the kernel manages its own page tables. While at it, set the UXN
>>> attribute as well for all kernel mappings created at boot.
>>>
>> What happens when FEAT_HPDS is implemented and also being enabled ? Would
>> there be any adverse effect here, or at least break the assumption that the
>> linear mapping page table entries are safer than before ? Hence, it might
>> still be better to check FEAT_HPDS feature enablement here, even if it is
>> not being used right now.
> 
> I would assume that enabling HPDS is only done to free up those bits
> for some other purpose, and until that time, I don't think we need to
> worry about this, given that the values are just going to be ignored
> otherwise.

Right, it is definitely safe because the values are going to be ignored
otherwise. But would it not be better to have a small comment explaining
this possible interaction with HPDS ?

> 
>>> Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
>>> Acked-by: Mark Rutland <mark.rutland@arm.com>
>>> ---
>>>  arch/arm64/include/asm/pgtable-hwdef.h |  6 +++++
>>>  arch/arm64/mm/mmu.c                    | 27 +++++++++++++++-----
>>>  2 files changed, 26 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
>>> index e64e77a345b2..b82575a33f8b 100644
>>> --- a/arch/arm64/include/asm/pgtable-hwdef.h
>>> +++ b/arch/arm64/include/asm/pgtable-hwdef.h
>>> @@ -101,6 +101,8 @@
>>>  #define P4D_TYPE_MASK                (_AT(p4dval_t, 3) << 0)
>>>  #define P4D_TYPE_SECT                (_AT(p4dval_t, 1) << 0)
>>>  #define P4D_SECT_RDONLY              (_AT(p4dval_t, 1) << 7)         /* AP[2] */
>>> +#define P4D_TABLE_PXN                (_AT(p4dval_t, 1) << 59)
>>> +#define P4D_TABLE_UXN                (_AT(p4dval_t, 1) << 60)
>>>
>>>  /*
>>>   * Level 1 descriptor (PUD).
>>> @@ -110,6 +112,8 @@
>>>  #define PUD_TYPE_MASK                (_AT(pudval_t, 3) << 0)
>>>  #define PUD_TYPE_SECT                (_AT(pudval_t, 1) << 0)
>>>  #define PUD_SECT_RDONLY              (_AT(pudval_t, 1) << 7)         /* AP[2] */
>>> +#define PUD_TABLE_PXN                (_AT(pudval_t, 1) << 59)
>>> +#define PUD_TABLE_UXN                (_AT(pudval_t, 1) << 60)
>>>
>>>  /*
>>>   * Level 2 descriptor (PMD).
>>> @@ -131,6 +135,8 @@
>>>  #define PMD_SECT_CONT                (_AT(pmdval_t, 1) << 52)
>>>  #define PMD_SECT_PXN         (_AT(pmdval_t, 1) << 53)
>>>  #define PMD_SECT_UXN         (_AT(pmdval_t, 1) << 54)
>>> +#define PMD_TABLE_PXN                (_AT(pmdval_t, 1) << 59)
>>> +#define PMD_TABLE_UXN                (_AT(pmdval_t, 1) << 60)
>>>
>>>  /*
>>>   * AttrIndx[2:0] encoding (mapping attributes defined in the MAIR* registers).
>>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>>> index 029091474042..9de59fce0450 100644
>>> --- a/arch/arm64/mm/mmu.c
>>> +++ b/arch/arm64/mm/mmu.c
>>> @@ -39,6 +39,7 @@
>>>
>>>  #define NO_BLOCK_MAPPINGS    BIT(0)
>>>  #define NO_CONT_MAPPINGS     BIT(1)
>>> +#define NO_EXEC_MAPPINGS     BIT(2)
>>
>> NO_EXEC_MAPPINGS flag basically selects PXX_TABLE_PXN because PXX_TABLE_UXN
>> is now a default, which is already included as a protection flag. Should not
>> this be renamed as NO_KERNEL_EXEC_MAPPINGS matching what PXX_TABLE_PXN is
>> going to achieve.
>>
> 
> I don't think renaming this flag is going to make things more clear tbh.

Alright, no problem.

