* [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in
@ 2024-03-13 12:56 Pingfan Liu
2024-03-13 12:56 ` [PATCH 01/10] arm64: mm: Split out routines for code reuse Pingfan Liu
` (9 more replies)
0 siblings, 10 replies; 13+ messages in thread
From: Pingfan Liu @ 2024-03-13 12:56 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
Hi everybody, I have given this another try. Last time I tried,
Catalin raised concerns about the instrumentation, and Ard doubted the
approach due to alignment issues with the MMU off.
Back then, the alignment issue looked unsolvable and I gave up. Looking
at it again now, I think it is partially resolvable (for details,
please see the commit log in [PATCH 08/10] arm64: mm: Enforce
memory alignment in mmu_head).
Overall, at this very early stage, using C routines faces three
challenges:
PIC
instrumentation
alignment
[2/10] resolves the instrumentation issue.
[3/10] makes mmu_head self-contained, preventing the outside
PIC/instrumentation/alignment issues from seeping in, and checks that
the resulting code is position-independent.
[PATCH 08/10] explains the alignment issue; in theory it can be checked
and resolved, and in that patch it is partially resolved.
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
---
Pingfan Liu (10):
arm64: mm: Split out routines for code reuse
arm64: mm: Introduce mmu_head routines without instrumentation
arm64: mm: Use if-condition to truncate external dependency
arm64: head: Enable __create_pgd_mapping() to handle pgtable's paddr
arm64: mm: Force early mapping aligned on SWAPPER_BLOCK_SIZE
arm64: mm: Handle scope beyond the capacity of kernel pgtable in
mmu_head_create_pgd_mapping()
arm64: mm: Introduce head_pool routines to enable pgtable allocation
arm64: mm: Enforce memory alignment in mmu_head
arm64: head: Use __create_pgd_mapping_locked() to serve the creation
of pgtable
arm64: head: Clean up unneeded routines
arch/arm64/include/asm/kernel-pgtable.h | 1 +
arch/arm64/include/asm/mmu.h | 4 +
arch/arm64/include/asm/pgtable.h | 11 +-
arch/arm64/kernel/head.S | 314 +++++++-----------------
arch/arm64/mm/Makefile | 22 +-
arch/arm64/mm/mmu.c | 289 +---------------------
arch/arm64/mm/mmu_head.c | 134 ++++++++++
arch/arm64/mm/mmu_inc.c | 292 ++++++++++++++++++++++
8 files changed, 558 insertions(+), 509 deletions(-)
create mode 100644 arch/arm64/mm/mmu_head.c
create mode 100644 arch/arm64/mm/mmu_inc.c
--
2.41.0
* [PATCH 01/10] arm64: mm: Split out routines for code reuse
2024-03-13 12:56 [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Pingfan Liu
@ 2024-03-13 12:56 ` Pingfan Liu
2024-03-13 12:57 ` [PATCH 02/10] arm64: mm: Introduce mmu_head routines without instrumentation Pingfan Liu
` (8 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2024-03-13 12:56 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
The split-out routines get a dedicated file scope, so each including
translation unit receives its own copy and they do not interfere with
each other.
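In other words, mmu_inc.c is a code template rather than a header:
every helper in it is 'static', and each .c file that includes it
compiles a private copy. A sketch of the intended layout:

/* mmu_inc.c -- deliberately no header guard; #included by exactly one
 * enclosing .c file per object, so all helpers keep internal linkage
 * within that translation unit */
static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
		     phys_addr_t phys, pgprot_t prot) { /* ... */ }

/* mmu.c (this patch) */
#include "mmu_inc.c"	/* mmu.o gets its own copy of the helpers */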
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/mm/mmu.c | 253 +--------------------------------------
arch/arm64/mm/mmu_inc.c | 255 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 256 insertions(+), 252 deletions(-)
create mode 100644 arch/arm64/mm/mmu_inc.c
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 15f6347d23b6..870be374f458 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -169,230 +169,7 @@ bool pgattr_change_is_safe(u64 old, u64 new)
return ((old ^ new) & ~mask) == 0;
}
-static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
- phys_addr_t phys, pgprot_t prot)
-{
- pte_t *ptep;
-
- ptep = pte_set_fixmap_offset(pmdp, addr);
- do {
- pte_t old_pte = READ_ONCE(*ptep);
-
- set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
-
- /*
- * After the PTE entry has been populated once, we
- * only allow updates to the permission attributes.
- */
- BUG_ON(!pgattr_change_is_safe(pte_val(old_pte),
- READ_ONCE(pte_val(*ptep))));
-
- phys += PAGE_SIZE;
- } while (ptep++, addr += PAGE_SIZE, addr != end);
-
- pte_clear_fixmap();
-}
-
-static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
- unsigned long end, phys_addr_t phys,
- pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
- int flags)
-{
- unsigned long next;
- pmd_t pmd = READ_ONCE(*pmdp);
-
- BUG_ON(pmd_sect(pmd));
- if (pmd_none(pmd)) {
- pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
- phys_addr_t pte_phys;
-
- if (flags & NO_EXEC_MAPPINGS)
- pmdval |= PMD_TABLE_PXN;
- BUG_ON(!pgtable_alloc);
- pte_phys = pgtable_alloc(PAGE_SHIFT);
- __pmd_populate(pmdp, pte_phys, pmdval);
- pmd = READ_ONCE(*pmdp);
- }
- BUG_ON(pmd_bad(pmd));
-
- do {
- pgprot_t __prot = prot;
-
- next = pte_cont_addr_end(addr, end);
-
- /* use a contiguous mapping if the range is suitably aligned */
- if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
- (flags & NO_CONT_MAPPINGS) == 0)
- __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
-
- init_pte(pmdp, addr, next, phys, __prot);
-
- phys += next - addr;
- } while (addr = next, addr != end);
-}
-
-static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
- phys_addr_t phys, pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int), int flags)
-{
- unsigned long next;
- pmd_t *pmdp;
-
- pmdp = pmd_set_fixmap_offset(pudp, addr);
- do {
- pmd_t old_pmd = READ_ONCE(*pmdp);
-
- next = pmd_addr_end(addr, end);
-
- /* try section mapping first */
- if (((addr | next | phys) & ~PMD_MASK) == 0 &&
- (flags & NO_BLOCK_MAPPINGS) == 0) {
- pmd_set_huge(pmdp, phys, prot);
-
- /*
- * After the PMD entry has been populated once, we
- * only allow updates to the permission attributes.
- */
- BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
- READ_ONCE(pmd_val(*pmdp))));
- } else {
- alloc_init_cont_pte(pmdp, addr, next, phys, prot,
- pgtable_alloc, flags);
-
- BUG_ON(pmd_val(old_pmd) != 0 &&
- pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
- }
- phys += next - addr;
- } while (pmdp++, addr = next, addr != end);
-
- pmd_clear_fixmap();
-}
-
-static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
- unsigned long end, phys_addr_t phys,
- pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int), int flags)
-{
- unsigned long next;
- pud_t pud = READ_ONCE(*pudp);
-
- /*
- * Check for initial section mappings in the pgd/pud.
- */
- BUG_ON(pud_sect(pud));
- if (pud_none(pud)) {
- pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN;
- phys_addr_t pmd_phys;
-
- if (flags & NO_EXEC_MAPPINGS)
- pudval |= PUD_TABLE_PXN;
- BUG_ON(!pgtable_alloc);
- pmd_phys = pgtable_alloc(PMD_SHIFT);
- __pud_populate(pudp, pmd_phys, pudval);
- pud = READ_ONCE(*pudp);
- }
- BUG_ON(pud_bad(pud));
-
- do {
- pgprot_t __prot = prot;
-
- next = pmd_cont_addr_end(addr, end);
-
- /* use a contiguous mapping if the range is suitably aligned */
- if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
- (flags & NO_CONT_MAPPINGS) == 0)
- __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
-
- init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
-
- phys += next - addr;
- } while (addr = next, addr != end);
-}
-
-static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
- phys_addr_t phys, pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
- int flags)
-{
- unsigned long next;
- pud_t *pudp;
- p4d_t *p4dp = p4d_offset(pgdp, addr);
- p4d_t p4d = READ_ONCE(*p4dp);
-
- if (p4d_none(p4d)) {
- p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN;
- phys_addr_t pud_phys;
-
- if (flags & NO_EXEC_MAPPINGS)
- p4dval |= P4D_TABLE_PXN;
- BUG_ON(!pgtable_alloc);
- pud_phys = pgtable_alloc(PUD_SHIFT);
- __p4d_populate(p4dp, pud_phys, p4dval);
- p4d = READ_ONCE(*p4dp);
- }
- BUG_ON(p4d_bad(p4d));
-
- pudp = pud_set_fixmap_offset(p4dp, addr);
- do {
- pud_t old_pud = READ_ONCE(*pudp);
-
- next = pud_addr_end(addr, end);
-
- /*
- * For 4K granule only, attempt to put down a 1GB block
- */
- if (pud_sect_supported() &&
- ((addr | next | phys) & ~PUD_MASK) == 0 &&
- (flags & NO_BLOCK_MAPPINGS) == 0) {
- pud_set_huge(pudp, phys, prot);
-
- /*
- * After the PUD entry has been populated once, we
- * only allow updates to the permission attributes.
- */
- BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
- READ_ONCE(pud_val(*pudp))));
- } else {
- alloc_init_cont_pmd(pudp, addr, next, phys, prot,
- pgtable_alloc, flags);
-
- BUG_ON(pud_val(old_pud) != 0 &&
- pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
- }
- phys += next - addr;
- } while (pudp++, addr = next, addr != end);
-
- pud_clear_fixmap();
-}
-
-static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
- unsigned long virt, phys_addr_t size,
- pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
- int flags)
-{
- unsigned long addr, end, next;
- pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
-
- /*
- * If the virtual and physical address don't have the same offset
- * within a page, we cannot map the region as the caller expects.
- */
- if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
- return;
-
- phys &= PAGE_MASK;
- addr = virt & PAGE_MASK;
- end = PAGE_ALIGN(virt + size);
-
- do {
- next = pgd_addr_end(addr, end);
- alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
- flags);
- phys += next - addr;
- } while (pgdp++, addr = next, addr != end);
-}
+#include "mmu_inc.c"
static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
unsigned long virt, phys_addr_t size,
@@ -1168,34 +945,6 @@ void vmemmap_free(unsigned long start, unsigned long end,
}
#endif /* CONFIG_MEMORY_HOTPLUG */
-int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
-{
- pud_t new_pud = pfn_pud(__phys_to_pfn(phys), mk_pud_sect_prot(prot));
-
- /* Only allow permission changes for now */
- if (!pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)),
- pud_val(new_pud)))
- return 0;
-
- VM_BUG_ON(phys & ~PUD_MASK);
- set_pud(pudp, new_pud);
- return 1;
-}
-
-int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
-{
- pmd_t new_pmd = pfn_pmd(__phys_to_pfn(phys), mk_pmd_sect_prot(prot));
-
- /* Only allow permission changes for now */
- if (!pgattr_change_is_safe(READ_ONCE(pmd_val(*pmdp)),
- pmd_val(new_pmd)))
- return 0;
-
- VM_BUG_ON(phys & ~PMD_MASK);
- set_pmd(pmdp, new_pmd);
- return 1;
-}
-
int pud_clear_huge(pud_t *pudp)
{
if (!pud_sect(READ_ONCE(*pudp)))
diff --git a/arch/arm64/mm/mmu_inc.c b/arch/arm64/mm/mmu_inc.c
new file mode 100644
index 000000000000..dcd97eea0726
--- /dev/null
+++ b/arch/arm64/mm/mmu_inc.c
@@ -0,0 +1,255 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
+{
+ pud_t new_pud = pfn_pud(__phys_to_pfn(phys), mk_pud_sect_prot(prot));
+
+ /* Only allow permission changes for now */
+ if (!pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)),
+ pud_val(new_pud)))
+ return 0;
+
+ VM_BUG_ON(phys & ~PUD_MASK);
+ set_pud(pudp, new_pud);
+ return 1;
+}
+
+int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
+{
+ pmd_t new_pmd = pfn_pmd(__phys_to_pfn(phys), mk_pmd_sect_prot(prot));
+
+ /* Only allow permission changes for now */
+ if (!pgattr_change_is_safe(READ_ONCE(pmd_val(*pmdp)),
+ pmd_val(new_pmd)))
+ return 0;
+
+ VM_BUG_ON(phys & ~PMD_MASK);
+ set_pmd(pmdp, new_pmd);
+ return 1;
+}
+
+static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ phys_addr_t phys, pgprot_t prot)
+{
+ pte_t *ptep;
+
+ ptep = pte_set_fixmap_offset(pmdp, addr);
+ do {
+ pte_t old_pte = READ_ONCE(*ptep);
+
+ set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
+
+ /*
+ * After the PTE entry has been populated once, we
+ * only allow updates to the permission attributes.
+ */
+ BUG_ON(!pgattr_change_is_safe(pte_val(old_pte),
+ READ_ONCE(pte_val(*ptep))));
+
+ phys += PAGE_SIZE;
+ } while (ptep++, addr += PAGE_SIZE, addr != end);
+
+ pte_clear_fixmap();
+}
+
+static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
+ unsigned long end, phys_addr_t phys,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags)
+{
+ unsigned long next;
+ pmd_t pmd = READ_ONCE(*pmdp);
+
+ BUG_ON(pmd_sect(pmd));
+ if (pmd_none(pmd)) {
+ pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
+ phys_addr_t pte_phys;
+
+ if (flags & NO_EXEC_MAPPINGS)
+ pmdval |= PMD_TABLE_PXN;
+ BUG_ON(!pgtable_alloc);
+ pte_phys = pgtable_alloc(PAGE_SHIFT);
+ __pmd_populate(pmdp, pte_phys, pmdval);
+ pmd = READ_ONCE(*pmdp);
+ }
+ BUG_ON(pmd_bad(pmd));
+
+ do {
+ pgprot_t __prot = prot;
+
+ next = pte_cont_addr_end(addr, end);
+
+ /* use a contiguous mapping if the range is suitably aligned */
+ if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
+ (flags & NO_CONT_MAPPINGS) == 0)
+ __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ init_pte(pmdp, addr, next, phys, __prot);
+
+ phys += next - addr;
+ } while (addr = next, addr != end);
+}
+
+static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
+ phys_addr_t phys, pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int), int flags)
+{
+ unsigned long next;
+ pmd_t *pmdp;
+
+ pmdp = pmd_set_fixmap_offset(pudp, addr);
+ do {
+ pmd_t old_pmd = READ_ONCE(*pmdp);
+
+ next = pmd_addr_end(addr, end);
+
+ /* try section mapping first */
+ if (((addr | next | phys) & ~PMD_MASK) == 0 &&
+ (flags & NO_BLOCK_MAPPINGS) == 0) {
+ pmd_set_huge(pmdp, phys, prot);
+
+ /*
+ * After the PMD entry has been populated once, we
+ * only allow updates to the permission attributes.
+ */
+ BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
+ READ_ONCE(pmd_val(*pmdp))));
+ } else {
+ alloc_init_cont_pte(pmdp, addr, next, phys, prot,
+ pgtable_alloc, flags);
+
+ BUG_ON(pmd_val(old_pmd) != 0 &&
+ pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
+ }
+ phys += next - addr;
+ } while (pmdp++, addr = next, addr != end);
+
+ pmd_clear_fixmap();
+}
+
+static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
+ unsigned long end, phys_addr_t phys,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int), int flags)
+{
+ unsigned long next;
+ pud_t pud = READ_ONCE(*pudp);
+
+ /*
+ * Check for initial section mappings in the pgd/pud.
+ */
+ BUG_ON(pud_sect(pud));
+ if (pud_none(pud)) {
+ pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN;
+ phys_addr_t pmd_phys;
+
+ if (flags & NO_EXEC_MAPPINGS)
+ pudval |= PUD_TABLE_PXN;
+ BUG_ON(!pgtable_alloc);
+ pmd_phys = pgtable_alloc(PMD_SHIFT);
+ __pud_populate(pudp, pmd_phys, pudval);
+ pud = READ_ONCE(*pudp);
+ }
+ BUG_ON(pud_bad(pud));
+
+ do {
+ pgprot_t __prot = prot;
+
+ next = pmd_cont_addr_end(addr, end);
+
+ /* use a contiguous mapping if the range is suitably aligned */
+ if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
+ (flags & NO_CONT_MAPPINGS) == 0)
+ __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
+
+ phys += next - addr;
+ } while (addr = next, addr != end);
+}
+
+static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+ phys_addr_t phys, pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags)
+{
+ unsigned long next;
+ pud_t *pudp;
+ p4d_t *p4dp = p4d_offset(pgdp, addr);
+ p4d_t p4d = READ_ONCE(*p4dp);
+
+ if (p4d_none(p4d)) {
+ p4dval_t p4dval = P4D_TYPE_TABLE | P4D_TABLE_UXN;
+ phys_addr_t pud_phys;
+
+ if (flags & NO_EXEC_MAPPINGS)
+ p4dval |= P4D_TABLE_PXN;
+ BUG_ON(!pgtable_alloc);
+ pud_phys = pgtable_alloc(PUD_SHIFT);
+ __p4d_populate(p4dp, pud_phys, p4dval);
+ p4d = READ_ONCE(*p4dp);
+ }
+ BUG_ON(p4d_bad(p4d));
+
+ pudp = pud_set_fixmap_offset(p4dp, addr);
+ do {
+ pud_t old_pud = READ_ONCE(*pudp);
+
+ next = pud_addr_end(addr, end);
+
+ /*
+ * For 4K granule only, attempt to put down a 1GB block
+ */
+ if (pud_sect_supported() &&
+ ((addr | next | phys) & ~PUD_MASK) == 0 &&
+ (flags & NO_BLOCK_MAPPINGS) == 0) {
+ pud_set_huge(pudp, phys, prot);
+
+ /*
+ * After the PUD entry has been populated once, we
+ * only allow updates to the permission attributes.
+ */
+ BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
+ READ_ONCE(pud_val(*pudp))));
+ } else {
+ alloc_init_cont_pmd(pudp, addr, next, phys, prot,
+ pgtable_alloc, flags);
+
+ BUG_ON(pud_val(old_pud) != 0 &&
+ pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
+ }
+ phys += next - addr;
+ } while (pudp++, addr = next, addr != end);
+
+ pud_clear_fixmap();
+}
+
+static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
+ unsigned long virt, phys_addr_t size,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags)
+{
+ unsigned long addr, end, next;
+ pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
+
+ /*
+ * If the virtual and physical address don't have the same offset
+ * within a page, we cannot map the region as the caller expects.
+ */
+ if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
+ return;
+
+ phys &= PAGE_MASK;
+ addr = virt & PAGE_MASK;
+ end = PAGE_ALIGN(virt + size);
+
+ do {
+ next = pgd_addr_end(addr, end);
+ alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
+ flags);
+ phys += next - addr;
+ } while (pgdp++, addr = next, addr != end);
+}
+
--
2.41.0
* [PATCH 02/10] arm64: mm: Introduce mmu_head routines without instrumentation
2024-03-13 12:56 [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Pingfan Liu
2024-03-13 12:56 ` [PATCH 01/10] arm64: mm: Split out routines for code reuse Pingfan Liu
@ 2024-03-13 12:57 ` Pingfan Liu
2024-03-13 12:57 ` [PATCH 03/10] arm64: mm: Use if-condition to truncate external dependency Pingfan Liu
` (7 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2024-03-13 12:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
During the early boot stage, instrumentation cannot be handled.
Introduce a macro, INSTRUMENT_OPTION, to switch 'noinstr' on or off for
these routines.
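Roughly, the early-boot user defines the knob as a section attribute
while the regular build leaves it empty. Per the __noinstr_section()
definition in include/linux/compiler_types.h, a helper such as
init_pte() then expands approximately to the following (a sketch, not
the exact attribute list):

/* mmu_head.c (added later in the series) defines: */
#define INSTRUMENT_OPTION __noinstr_section(".init.text.noinstr")

/* so that */
static void INSTRUMENT_OPTION init_pte(pmd_t *pmdp, ...);
/* becomes, approximately, */
static void noinline notrace __section(".init.text.noinstr")
init_pte(pmd_t *pmdp, ...);	/* plus the __no_kcsan/__no_sanitize_* family */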
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/mm/Makefile | 2 +-
arch/arm64/mm/mmu.c | 50 +++++++++--------------------
arch/arm64/mm/mmu_head.c | 19 +++++++++++
arch/arm64/mm/mmu_inc.c | 68 +++++++++++++++++++++++++++++++---------
4 files changed, 87 insertions(+), 52 deletions(-)
create mode 100644 arch/arm64/mm/mmu_head.c
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index dbd1bc95967d..0d92fb24a398 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -2,7 +2,7 @@
obj-y := dma-mapping.o extable.o fault.o init.o \
cache.o copypage.o flush.o \
ioremap.o mmap.o pgd.o mmu.o \
- context.o proc.o pageattr.o fixmap.o
+ context.o proc.o pageattr.o fixmap.o mmu_head.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
obj-$(CONFIG_PTDUMP_DEBUGFS) += ptdump_debugfs.o
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 870be374f458..80e49faaf066 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -131,46 +131,14 @@ static phys_addr_t __init early_pgtable_alloc(int shift)
return phys;
}
+#define INSTRUMENT_OPTION
+#include "mmu_inc.c"
+
bool pgattr_change_is_safe(u64 old, u64 new)
{
- /*
- * The following mapping attributes may be updated in live
- * kernel mappings without the need for break-before-make.
- */
- pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG;
-
- /* creating or taking down mappings is always safe */
- if (!pte_valid(__pte(old)) || !pte_valid(__pte(new)))
- return true;
-
- /* A live entry's pfn should not change */
- if (pte_pfn(__pte(old)) != pte_pfn(__pte(new)))
- return false;
-
- /* live contiguous mappings may not be manipulated at all */
- if ((old | new) & PTE_CONT)
- return false;
-
- /* Transitioning from Non-Global to Global is unsafe */
- if (old & ~new & PTE_NG)
- return false;
-
- /*
- * Changing the memory type between Normal and Normal-Tagged is safe
- * since Tagged is considered a permission attribute from the
- * mismatched attribute aliases perspective.
- */
- if (((old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
- (old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)) &&
- ((new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
- (new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)))
- mask |= PTE_ATTRINDX_MASK;
-
- return ((old ^ new) & ~mask) == 0;
+ return __pgattr_change_is_safe(old, new);
}
-#include "mmu_inc.c"
-
static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
unsigned long virt, phys_addr_t size,
pgprot_t prot,
@@ -945,6 +913,16 @@ void vmemmap_free(unsigned long start, unsigned long end,
}
#endif /* CONFIG_MEMORY_HOTPLUG */
+int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
+{
+ return __pud_set_huge(pudp, phys, prot);
+}
+
+int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
+{
+ return __pmd_set_huge(pmdp, phys, prot);
+}
+
int pud_clear_huge(pud_t *pudp)
{
if (!pud_sect(READ_ONCE(*pudp)))
diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c
new file mode 100644
index 000000000000..4d65b7368db3
--- /dev/null
+++ b/arch/arm64/mm/mmu_head.c
@@ -0,0 +1,19 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <asm/barrier.h>
+#include <asm/kernel-pgtable.h>
+#include <asm/pgalloc.h>
+
+#define INSTRUMENT_OPTION __noinstr_section(".init.text.noinstr")
+#include "mmu_inc.c"
+
+void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+ unsigned long virt, phys_addr_t size,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags)
+{
+ __create_pgd_mapping_locked(pgdir, phys, virt, size, prot, pgtable_alloc, flags);
+}
diff --git a/arch/arm64/mm/mmu_inc.c b/arch/arm64/mm/mmu_inc.c
index dcd97eea0726..2535927d30ec 100644
--- a/arch/arm64/mm/mmu_inc.c
+++ b/arch/arm64/mm/mmu_inc.c
@@ -1,11 +1,49 @@
// SPDX-License-Identifier: GPL-2.0-only
-int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
+static bool INSTRUMENT_OPTION __pgattr_change_is_safe(u64 old, u64 new)
+{
+ /*
+ * The following mapping attributes may be updated in live
+ * kernel mappings without the need for break-before-make.
+ */
+ pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG;
+
+ /* creating or taking down mappings is always safe */
+ if (!pte_valid(__pte(old)) || !pte_valid(__pte(new)))
+ return true;
+
+ /* A live entry's pfn should not change */
+ if (pte_pfn(__pte(old)) != pte_pfn(__pte(new)))
+ return false;
+
+ /* live contiguous mappings may not be manipulated at all */
+ if ((old | new) & PTE_CONT)
+ return false;
+
+ /* Transitioning from Non-Global to Global is unsafe */
+ if (old & ~new & PTE_NG)
+ return false;
+
+ /*
+ * Changing the memory type between Normal and Normal-Tagged is safe
+ * since Tagged is considered a permission attribute from the
+ * mismatched attribute aliases perspective.
+ */
+ if (((old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
+ (old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)) &&
+ ((new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
+ (new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)))
+ mask |= PTE_ATTRINDX_MASK;
+
+ return ((old ^ new) & ~mask) == 0;
+}
+
+static int INSTRUMENT_OPTION __pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
{
pud_t new_pud = pfn_pud(__phys_to_pfn(phys), mk_pud_sect_prot(prot));
/* Only allow permission changes for now */
- if (!pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)),
+ if (!__pgattr_change_is_safe(READ_ONCE(pud_val(*pudp)),
pud_val(new_pud)))
return 0;
@@ -14,12 +52,12 @@ int pud_set_huge(pud_t *pudp, phys_addr_t phys, pgprot_t prot)
return 1;
}
-int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
+static int INSTRUMENT_OPTION __pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
{
pmd_t new_pmd = pfn_pmd(__phys_to_pfn(phys), mk_pmd_sect_prot(prot));
/* Only allow permission changes for now */
- if (!pgattr_change_is_safe(READ_ONCE(pmd_val(*pmdp)),
+ if (!__pgattr_change_is_safe(READ_ONCE(pmd_val(*pmdp)),
pmd_val(new_pmd)))
return 0;
@@ -28,7 +66,7 @@ int pmd_set_huge(pmd_t *pmdp, phys_addr_t phys, pgprot_t prot)
return 1;
}
-static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
+static void INSTRUMENT_OPTION init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
phys_addr_t phys, pgprot_t prot)
{
pte_t *ptep;
@@ -43,7 +81,7 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
* After the PTE entry has been populated once, we
* only allow updates to the permission attributes.
*/
- BUG_ON(!pgattr_change_is_safe(pte_val(old_pte),
+ BUG_ON(!__pgattr_change_is_safe(pte_val(old_pte),
READ_ONCE(pte_val(*ptep))));
phys += PAGE_SIZE;
@@ -52,7 +90,7 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
pte_clear_fixmap();
}
-static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
+static void INSTRUMENT_OPTION alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
unsigned long end, phys_addr_t phys,
pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int),
@@ -91,7 +129,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
} while (addr = next, addr != end);
}
-static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
+static void INSTRUMENT_OPTION init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
phys_addr_t phys, pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int), int flags)
{
@@ -107,13 +145,13 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
/* try section mapping first */
if (((addr | next | phys) & ~PMD_MASK) == 0 &&
(flags & NO_BLOCK_MAPPINGS) == 0) {
- pmd_set_huge(pmdp, phys, prot);
+ __pmd_set_huge(pmdp, phys, prot);
/*
* After the PMD entry has been populated once, we
* only allow updates to the permission attributes.
*/
- BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
+ BUG_ON(!__pgattr_change_is_safe(pmd_val(old_pmd),
READ_ONCE(pmd_val(*pmdp))));
} else {
alloc_init_cont_pte(pmdp, addr, next, phys, prot,
@@ -128,7 +166,7 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
pmd_clear_fixmap();
}
-static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
+static void INSTRUMENT_OPTION alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
unsigned long end, phys_addr_t phys,
pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int), int flags)
@@ -169,7 +207,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
} while (addr = next, addr != end);
}
-static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+static void INSTRUMENT_OPTION alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
phys_addr_t phys, pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int),
int flags)
@@ -204,13 +242,13 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
if (pud_sect_supported() &&
((addr | next | phys) & ~PUD_MASK) == 0 &&
(flags & NO_BLOCK_MAPPINGS) == 0) {
- pud_set_huge(pudp, phys, prot);
+ __pud_set_huge(pudp, phys, prot);
/*
* After the PUD entry has been populated once, we
* only allow updates to the permission attributes.
*/
- BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
+ BUG_ON(!__pgattr_change_is_safe(pud_val(old_pud),
READ_ONCE(pud_val(*pudp))));
} else {
alloc_init_cont_pmd(pudp, addr, next, phys, prot,
@@ -225,7 +263,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
pud_clear_fixmap();
}
-static void __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
+static void INSTRUMENT_OPTION __create_pgd_mapping_locked(pgd_t *pgdir, phys_addr_t phys,
unsigned long virt, phys_addr_t size,
pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int),
--
2.41.0
* [PATCH 03/10] arm64: mm: Use if-condition to truncate external dependency
2024-03-13 12:56 [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Pingfan Liu
2024-03-13 12:56 ` [PATCH 01/10] arm64: mm: Split out routines for code reuse Pingfan Liu
2024-03-13 12:57 ` [PATCH 02/10] arm64: mm: Introduce mmu_head routines without instrumentation Pingfan Liu
@ 2024-03-13 12:57 ` Pingfan Liu
2024-03-13 12:57 ` [PATCH 04/10] arm64: head: Enable __create_pgd_mapping() to handle pgtable's paddr Pingfan Liu
` (6 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2024-03-13 12:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
An outside callee can present several challenging issues at the early
boot stage: position-dependent code, instrumentation, alignment, and
sub-components that are not ready yet.
To mitigate these dependencies, leverage compile-time optimization to
truncate the reliance, ensuring that mmu_head is self-contained.
Additionally, run checks against relocations and external dependencies
in the Makefile to further enhance robustness.
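The if-condition trick relies on constant folding: with KERNEL_READY
defined to false before the shared body is included, the compiler
eliminates the dead branch and, with it, the reference to the external
symbol, so nothing undefined is left for the Makefile check to trip
over. A minimal sketch, mirroring the in_swapper_pgdir() hunk below:

/* mmu_head.c */
#define KERNEL_READY false
/* ... */
static inline bool in_swapper_pgdir(void *addr)
{
	if (KERNEL_READY)	/* constant false: branch is eliminated */
		return ((unsigned long)addr & PAGE_MASK) ==
		       ((unsigned long)swapper_pg_dir & PAGE_MASK);
	return false;		/* no reference to swapper_pg_dir survives */
}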
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/include/asm/pgtable.h | 11 +++++++++--
arch/arm64/mm/Makefile | 19 ++++++++++++++++++
arch/arm64/mm/mmu_head.c | 3 +++
arch/arm64/mm/mmu_inc.c | 33 ++++++++++++++++----------------
4 files changed, 47 insertions(+), 19 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 79ce70fbb751..f43a93d78454 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -625,10 +625,17 @@ extern pgd_t reserved_pg_dir[PTRS_PER_PGD];
extern void set_swapper_pgd(pgd_t *pgdp, pgd_t pgd);
+#ifndef KERNEL_READY
+#define KERNEL_READY true
+#endif
static inline bool in_swapper_pgdir(void *addr)
{
- return ((unsigned long)addr & PAGE_MASK) ==
- ((unsigned long)swapper_pg_dir & PAGE_MASK);
+ /* Compile-time optimization screens out the calls to set_swapper_pgd() */
+ if (KERNEL_READY)
+ return ((unsigned long)addr & PAGE_MASK) ==
+ ((unsigned long)swapper_pg_dir & PAGE_MASK);
+ else
+ return false;
}
static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 0d92fb24a398..89d496ca970b 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -14,3 +14,22 @@ KASAN_SANITIZE_physaddr.o += n
obj-$(CONFIG_KASAN) += kasan_init.o
KASAN_SANITIZE_kasan_init.o := n
+
+$(obj)/mmu_head_tmp.o: $(src)/mmu_head.c FORCE
+ $(call if_changed_rule,cc_o_c)
+OBJCOPYFLAGS_mmu_head.o := $(OBJCOPYFLAGS)
+$(obj)/mmu_head.o: $(obj)/mmu_head_tmp.o FORCE
+ $(call if_changed,stubcopy)
+
+quiet_cmd_stubcopy = STUBCPY $@
+ cmd_stubcopy = \
+ $(STRIP) --strip-debug -o $@ $<; \
+ if $(OBJDUMP) -r $@ | grep R_AARCH64_ABS; then \
+ echo "$@: absolute symbol references not allowed in mmu_head.o" >&2; \
+ /bin/false; \
+ fi; \
+ if nm -u $@ | grep "U"; then \
+ echo "$@: external dependency incur uncertainty of alignment and not-PIC" >&2; \
+ /bin/false; \
+ fi; \
+ $(OBJCOPY) $(OBJCOPYFLAGS) $< $@
diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c
index 4d65b7368db3..ccdd0f079c49 100644
--- a/arch/arm64/mm/mmu_head.c
+++ b/arch/arm64/mm/mmu_head.c
@@ -1,5 +1,8 @@
// SPDX-License-Identifier: GPL-2.0-only
+
+#define KERNEL_READY false
+
#include <linux/kernel.h>
#include <linux/errno.h>
#include <asm/barrier.h>
diff --git a/arch/arm64/mm/mmu_inc.c b/arch/arm64/mm/mmu_inc.c
index 2535927d30ec..196987c120bf 100644
--- a/arch/arm64/mm/mmu_inc.c
+++ b/arch/arm64/mm/mmu_inc.c
@@ -81,7 +81,7 @@ static void INSTRUMENT_OPTION init_pte(pmd_t *pmdp, unsigned long addr, unsigned
* After the PTE entry has been populated once, we
* only allow updates to the permission attributes.
*/
- BUG_ON(!__pgattr_change_is_safe(pte_val(old_pte),
+ BUG_ON(KERNEL_READY && !__pgattr_change_is_safe(pte_val(old_pte),
READ_ONCE(pte_val(*ptep))));
phys += PAGE_SIZE;
@@ -99,19 +99,19 @@ static void INSTRUMENT_OPTION alloc_init_cont_pte(pmd_t *pmdp, unsigned long add
unsigned long next;
pmd_t pmd = READ_ONCE(*pmdp);
- BUG_ON(pmd_sect(pmd));
+ BUG_ON(KERNEL_READY && pmd_sect(pmd));
if (pmd_none(pmd)) {
pmdval_t pmdval = PMD_TYPE_TABLE | PMD_TABLE_UXN;
phys_addr_t pte_phys;
if (flags & NO_EXEC_MAPPINGS)
pmdval |= PMD_TABLE_PXN;
- BUG_ON(!pgtable_alloc);
+ BUG_ON(KERNEL_READY && !pgtable_alloc);
pte_phys = pgtable_alloc(PAGE_SHIFT);
__pmd_populate(pmdp, pte_phys, pmdval);
pmd = READ_ONCE(*pmdp);
}
- BUG_ON(pmd_bad(pmd));
+ BUG_ON(KERNEL_READY && pmd_bad(pmd));
do {
pgprot_t __prot = prot;
@@ -151,14 +151,13 @@ static void INSTRUMENT_OPTION init_pmd(pud_t *pudp, unsigned long addr, unsigned
* After the PMD entry has been populated once, we
* only allow updates to the permission attributes.
*/
- BUG_ON(!__pgattr_change_is_safe(pmd_val(old_pmd),
+ BUG_ON(KERNEL_READY && !__pgattr_change_is_safe(pmd_val(old_pmd),
READ_ONCE(pmd_val(*pmdp))));
} else {
alloc_init_cont_pte(pmdp, addr, next, phys, prot,
pgtable_alloc, flags);
-
- BUG_ON(pmd_val(old_pmd) != 0 &&
- pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
+ BUG_ON(KERNEL_READY && pmd_val(old_pmd) != 0 &&
+ pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
}
phys += next - addr;
} while (pmdp++, addr = next, addr != end);
@@ -177,19 +176,19 @@ static void INSTRUMENT_OPTION alloc_init_cont_pmd(pud_t *pudp, unsigned long add
/*
* Check for initial section mappings in the pgd/pud.
*/
- BUG_ON(pud_sect(pud));
+ BUG_ON(KERNEL_READY && pud_sect(pud));
if (pud_none(pud)) {
pudval_t pudval = PUD_TYPE_TABLE | PUD_TABLE_UXN;
phys_addr_t pmd_phys;
if (flags & NO_EXEC_MAPPINGS)
pudval |= PUD_TABLE_PXN;
- BUG_ON(!pgtable_alloc);
+ BUG_ON(KERNEL_READY && !pgtable_alloc);
pmd_phys = pgtable_alloc(PMD_SHIFT);
__pud_populate(pudp, pmd_phys, pudval);
pud = READ_ONCE(*pudp);
}
- BUG_ON(pud_bad(pud));
+ BUG_ON(KERNEL_READY && pud_bad(pud));
do {
pgprot_t __prot = prot;
@@ -223,12 +222,12 @@ static void INSTRUMENT_OPTION alloc_init_pud(pgd_t *pgdp, unsigned long addr, un
if (flags & NO_EXEC_MAPPINGS)
p4dval |= P4D_TABLE_PXN;
- BUG_ON(!pgtable_alloc);
+ BUG_ON(KERNEL_READY && !pgtable_alloc);
pud_phys = pgtable_alloc(PUD_SHIFT);
__p4d_populate(p4dp, pud_phys, p4dval);
p4d = READ_ONCE(*p4dp);
}
- BUG_ON(p4d_bad(p4d));
+ BUG_ON(KERNEL_READY && p4d_bad(p4d));
pudp = pud_set_fixmap_offset(p4dp, addr);
do {
@@ -248,14 +247,14 @@ static void INSTRUMENT_OPTION alloc_init_pud(pgd_t *pgdp, unsigned long addr, un
* After the PUD entry has been populated once, we
* only allow updates to the permission attributes.
*/
- BUG_ON(!__pgattr_change_is_safe(pud_val(old_pud),
+ BUG_ON(KERNEL_READY && !__pgattr_change_is_safe(pud_val(old_pud),
READ_ONCE(pud_val(*pudp))));
} else {
alloc_init_cont_pmd(pudp, addr, next, phys, prot,
pgtable_alloc, flags);
- BUG_ON(pud_val(old_pud) != 0 &&
- pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
+ BUG_ON(KERNEL_READY && pud_val(old_pud) != 0 &&
+ pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
}
phys += next - addr;
} while (pudp++, addr = next, addr != end);
@@ -276,7 +275,7 @@ static void INSTRUMENT_OPTION __create_pgd_mapping_locked(pgd_t *pgdir, phys_add
* If the virtual and physical address don't have the same offset
* within a page, we cannot map the region as the caller expects.
*/
- if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
+ if (KERNEL_READY && WARN_ON((phys ^ virt) & ~PAGE_MASK))
return;
phys &= PAGE_MASK;
--
2.41.0
* [PATCH 04/10] arm64: head: Enable __create_pgd_mapping() to handle pgtable's paddr
2024-03-13 12:56 [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Pingfan Liu
` (2 preceding siblings ...)
2024-03-13 12:57 ` [PATCH 03/10] arm64: mm: Use if-condition to truncate external dependency Pingfan Liu
@ 2024-03-13 12:57 ` Pingfan Liu
2024-03-13 12:57 ` [PATCH 05/10] arm64: mm: Force early mapping aligned on SWAPPER_BLOCK_SIZE Pingfan Liu
` (5 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2024-03-13 12:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
With the MMU off, or under the identity mapping, both page tables
(init_idmap_pg_dir and init_pg_dir) can be accessed through their
physical addresses (virtual address equals physical address).
This patch introduces routines that avoid going through the fixmap to
access the page tables.
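Since VA == PA here, a table entry's output address can be dereferenced
directly, so the fixmap indirection collapses. For example, the
pud-level accessor below reduces to a sketch like:

/* next-level table reached through the physical address stored in the
 * p4d entry -- valid only with the MMU off or under the identity map */
pud_t *pudp = (pud_t *)__p4d_to_phys(*p4dp) + pud_index(addr);
/* ... and the matching pud_clear_fixmap() becomes a no-op */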
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/mm/mmu_head.c | 42 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c
index ccdd0f079c49..562d036dc30a 100644
--- a/arch/arm64/mm/mmu_head.c
+++ b/arch/arm64/mm/mmu_head.c
@@ -10,6 +10,48 @@
#include <asm/pgalloc.h>
#define INSTRUMENT_OPTION __noinstr_section(".init.text.noinstr")
+
+#undef pud_set_fixmap_offset
+#undef pud_clear_fixmap
+#undef pmd_set_fixmap_offset
+#undef pmd_clear_fixmap
+#undef pte_set_fixmap_offset
+#undef pte_clear_fixmap
+
+/* This group is used to access intermediate levels with the MMU off or under the identity map */
+#define pud_set_fixmap_offset(p4dp, addr) \
+({ \
+ pud_t *pudp; \
+ if (CONFIG_PGTABLE_LEVELS > 3) \
+ pudp = (pud_t *)__p4d_to_phys(*p4dp) + pud_index(addr); \
+ else \
+ pudp = (pud_t *)p4dp; \
+ pudp; \
+})
+
+#define pud_clear_fixmap()
+
+#define pmd_set_fixmap_offset(pudp, addr) \
+({ \
+ pmd_t *pmdp; \
+ if (CONFIG_PGTABLE_LEVELS > 2) \
+ pmdp = (pmd_t *)__pud_to_phys(*pudp) + pmd_index(addr); \
+ else \
+ pmdp = (pmd_t *)pudp; \
+ pmdp; \
+})
+
+#define pmd_clear_fixmap()
+
+#define pte_set_fixmap_offset(pmdp, addr) \
+({ \
+ pte_t *ptep; \
+ ptep = (pte_t *)__pmd_to_phys(*pmdp) + pte_index(addr); \
+ ptep; \
+})
+
+#define pte_clear_fixmap()
+
#include "mmu_inc.c"
void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
--
2.41.0
* [PATCH 05/10] arm64: mm: Force early mapping aligned on SWAPPER_BLOCK_SIZE
2024-03-13 12:56 [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Pingfan Liu
` (3 preceding siblings ...)
2024-03-13 12:57 ` [PATCH 04/10] arm64: head: Enable __create_pgd_mapping() to handle pgtable's paddr Pingfan Liu
@ 2024-03-13 12:57 ` Pingfan Liu
2024-03-13 12:57 ` [PATCH 06/10] arm64: mm: Handle scope beyond the capacity of kernel pgtable in mmu_head_create_pgd_mapping() Pingfan Liu
` (4 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2024-03-13 12:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
At this very early stage, the page table size is limited, which makes
block mapping appealing.
Force the input parameters to be aligned on SWAPPER_BLOCK_SIZE so that
__create_pgd_mapping_locked() can use the block-mapping scheme.
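A worked example, assuming 4K pages so that SWAPPER_BLOCK_SIZE is the
2 MiB section size (the addresses are illustrative):

/* phys = 0x40210000, size = 0x100000 (1 MiB) */
phys = ALIGN_DOWN(0x40210000, SZ_2M);		/* 0x40200000 */
end  = ALIGN(0x40210000 + 0x100000, SZ_2M);	/* 0x40400000 */
size = end - phys;				/* 0x200000: one 2 MiB block */
/* both ends now sit on section boundaries, so the range can be covered
 * by a single pmd-level block mapping */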
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/mm/mmu_head.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c
index 562d036dc30a..e00f6f2c7bec 100644
--- a/arch/arm64/mm/mmu_head.c
+++ b/arch/arm64/mm/mmu_head.c
@@ -60,5 +60,11 @@ void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phy
phys_addr_t (*pgtable_alloc)(int),
int flags)
{
+ phys_addr_t end = phys + size;
+
+ phys = ALIGN_DOWN(phys, SWAPPER_BLOCK_SIZE);
+ virt = ALIGN_DOWN(virt, SWAPPER_BLOCK_SIZE);
+ end = ALIGN(end, SWAPPER_BLOCK_SIZE);
+ size = end - phys;
__create_pgd_mapping_locked(pgdir, phys, virt, size, prot, pgtable_alloc, flags);
}
--
2.41.0
* [PATCH 06/10] arm64: mm: Handle scope beyond the capacity of kernel pgtable in mmu_head_create_pgd_mapping()
2024-03-13 12:56 [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Pingfan Liu
` (4 preceding siblings ...)
2024-03-13 12:57 ` [PATCH 05/10] arm64: mm: Force early mapping aligned on SWAPPER_BLOCK_SIZE Pingfan Liu
@ 2024-03-13 12:57 ` Pingfan Liu
2024-03-13 12:57 ` [PATCH 07/10] arm64: mm: Introduce head_pool routines to enable pgtable allocation Pingfan Liu
` (3 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2024-03-13 12:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
This patch serves the same purpose as commit fa2a8445b1d3 ("arm64:
allow ID map to be extended to 52 bits").
Since applying the same treatment to init_pg_dir is harmless, there is
no need to distinguish between init_idmap_pg_dir and init_pg_dir.
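For concreteness, take 4K pages with VA_BITS == 39, so
CONFIG_PGTABLE_LEVELS == 3 and PGDIR_SHIFT == 30. Given that
ARM64_HW_PGTABLE_LEVEL_SHIFT(n) is (PAGE_SHIFT - 3) * (4 - n) + 3, the
extra top-level index in the hunk below is computed with:

shift = ARM64_HW_PGTABLE_LEVEL_SHIFT(3 - CONFIG_PGTABLE_LEVELS);
	/* = ARM64_HW_PGTABLE_LEVEL_SHIFT(0) = (12 - 3) * 4 + 3 = 39 */
/* i.e. one translation level above the 30-bit PGDIR_SHIFT, matching
 * the "one more level than PGDIR_SHIFT" comment in the patch */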
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/mm/mmu_head.c | 18 ++++++++++++++++++
1 file changed, 18 insertions(+)
diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c
index e00f6f2c7bec..2df91e62ddb0 100644
--- a/arch/arm64/mm/mmu_head.c
+++ b/arch/arm64/mm/mmu_head.c
@@ -66,5 +66,23 @@ void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phy
virt = ALIGN_DOWN(virt, SWAPPER_BLOCK_SIZE);
end = ALIGN(end, SWAPPER_BLOCK_SIZE);
size = end - phys;
+ /*
+ * In case the kernel routines support a small VA range while the boot image
+ * is placed beyond its scope, blindly extend the pgtable by one level
+ */
+ if ((IS_ENABLED(CONFIG_ARM64_16K_PAGES) && IS_ENABLED(CONFIG_ARM64_VA_BITS_36)) ||
+ (IS_ENABLED(CONFIG_ARM64_64K_PAGES) && IS_ENABLED(CONFIG_ARM64_VA_BITS_42)) ||
+ (IS_ENABLED(CONFIG_ARM64_4K_PAGES) && IS_ENABLED(CONFIG_ARM64_VA_BITS_39))) {
+ unsigned long pgd_paddr;
+ pgd_t *pgd;
+ pgd_t pgd_val;
+
+ pgd_paddr = headpool_pgtable_alloc(0);
+ pgd_val = __pgd(pgd_paddr | P4D_TYPE_TABLE);
+ /* The shift should be one more level than PGDIR_SHIFT */
+ pgd = pgdir + (virt >> ARM64_HW_PGTABLE_LEVEL_SHIFT(3 - CONFIG_PGTABLE_LEVELS));
+ set_pgd(pgd, pgd_val);
+ pgdir = pgd;
+ }
__create_pgd_mapping_locked(pgdir, phys, virt, size, prot, pgtable_alloc, flags);
}
--
2.41.0
* [PATCH 07/10] arm64: mm: Introduce head_pool routines to enable pgtable allocation
2024-03-13 12:56 [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Pingfan Liu
` (5 preceding siblings ...)
2024-03-13 12:57 ` [PATCH 06/10] arm64: mm: Handle scope beyond the capacity of kernel pgtable in mmu_head_create_pgd_mapping() Pingfan Liu
@ 2024-03-13 12:57 ` Pingfan Liu
2024-03-13 12:57 ` [PATCH 09/10] arm64: head: Use __create_pgd_mapping_locked() to serve the creation of pgtable Pingfan Liu
` (2 subsequent siblings)
9 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2024-03-13 12:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
__create_pgd_mapping_locked() needs its pgtable_alloc parameter to
allocate memory for page tables.
During early boot, that memory has to come from the init_idmap_pg_dir
or init_pg_dir area. This patch introduces routines to allocate pages
from those pools.
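Note the adrp/add inline assembly in the routines below: it yields the
PC-relative (hence pre-relocation, physical) address of head_pool,
whereas a plain &head_pool could be emitted as an absolute link-time
virtual address, unusable with the MMU off. The intended call sequence
is roughly the following sketch (the real callers live in head.S; the
*_pa names stand for the physical addresses the assembly computes with
adrp):

/* carve the pool out of the statically reserved page-table area */
headpool_init(init_idmap_pg_dir_pa,
	      init_idmap_pg_end_pa - init_idmap_pg_dir_pa);

/* each call hands out the next page of the pool */
phys_addr_t pgd_pa = headpool_pgtable_alloc(0);	/* page 0: the pgd itself */
/* subsequent tables are pulled through the pgtable_alloc callback
 * passed to mmu_head_create_pgd_mapping() */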
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/mm/mmu_head.c | 42 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
diff --git a/arch/arm64/mm/mmu_head.c b/arch/arm64/mm/mmu_head.c
index 2df91e62ddb0..801ebffe4209 100644
--- a/arch/arm64/mm/mmu_head.c
+++ b/arch/arm64/mm/mmu_head.c
@@ -54,6 +54,8 @@
#include "mmu_inc.c"
+phys_addr_t headpool_pgtable_alloc(int unused_shift);
+
void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
unsigned long virt, phys_addr_t size,
pgprot_t prot,
@@ -86,3 +88,43 @@ void INSTRUMENT_OPTION mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phy
}
__create_pgd_mapping_locked(pgdir, phys, virt, size, prot, pgtable_alloc, flags);
}
+
+struct headpool {
+ phys_addr_t start;
+ unsigned long size;
+ unsigned long next_idx;
+} __aligned(8);
+
+struct headpool head_pool __initdata;
+
+void INSTRUMENT_OPTION headpool_init(phys_addr_t start, unsigned long size)
+{
+ struct headpool *pool;
+
+ asm volatile(
+ "adrp %0, head_pool;"
+ "add %0, %0, #:lo12:head_pool;"
+ : "=r" (pool)
+ :
+ :
+ );
+ pool->start = start;
+ pool->size = size;
+ pool->next_idx = 0;
+}
+
+phys_addr_t INSTRUMENT_OPTION headpool_pgtable_alloc(int unused_shift)
+{
+ struct headpool *pool;
+ unsigned long idx;
+
+ asm volatile(
+ "adrp %0, head_pool;"
+ "add %0, %0, #:lo12:head_pool;"
+ : "=r" (pool)
+ :
+ :
+ );
+ idx = pool->next_idx++;
+ return pool->start + (idx << PAGE_SHIFT);
+}
--
2.41.0
* [PATCH 09/10] arm64: head: Use __create_pgd_mapping_locked() to serve the creation of pgtable
2024-03-13 12:56 [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Pingfan Liu
` (6 preceding siblings ...)
2024-03-13 12:57 ` [PATCH 07/10] arm64: mm: Introduce head_pool routines to enable pgtable allocation Pingfan Liu
@ 2024-03-13 12:57 ` Pingfan Liu
2024-03-13 12:57 ` [PATCH 10/10] arm64: head: Clean up unneeded routines Pingfan Liu
2024-03-13 13:05 ` [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Ard Biesheuvel
9 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2024-03-13 12:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
The init_stack serves as the stack for the C routines.
For the idmap, the mapping consists of five sections (sketched in C
below):
the kernel text section
init_pg_dir, which create_kernel_mapping() needs to access
__initdata, which contains data accessed by create_kernel_mapping()
init_stack, which serves as the stack
the FDT
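The assembly sequence in create_idmap() below then boils down to this
C-like sketch, where map(phys, virt, size, flags) abbreviates
mmu_head_create_pgd_mapping(pgd, phys, virt, size, flags,
headpool_pgtable_alloc, 0) (the real code runs with the MMU off and
stays in assembly):

headpool_init(init_idmap_pg_dir, init_idmap_pg_end - init_idmap_pg_dir);
pgd = headpool_pgtable_alloc(0);	/* page 0: init_idmap_pg_dir itself */
map(_text, _text, _etext - _text, SWAPPER_RX_MMUFLAGS);	/* kernel text */
map(init_pg_dir, init_pg_dir, init_pg_end - init_pg_dir, SWAPPER_RW_MMUFLAGS);
map(init_stack, init_stack, THREAD_SIZE, SWAPPER_RW_MMUFLAGS);
map(__initdata_begin, __initdata_begin,
    __initdata_end - __initdata_begin, SWAPPER_RW_MMUFLAGS);
map(fdt_phys, _end + SWAPPER_BLOCK_SIZE,	/* FDT remapped after the image */
    MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, SWAPPER_RW_MMUFLAGS);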
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/include/asm/kernel-pgtable.h | 1 +
arch/arm64/include/asm/mmu.h | 4 +
arch/arm64/kernel/head.S | 171 +++++++++++++-----------
arch/arm64/mm/mmu.c | 4 -
4 files changed, 96 insertions(+), 84 deletions(-)
diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 85d26143faa5..796bf3d8c181 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -91,6 +91,7 @@
#else
#define INIT_IDMAP_DIR_SIZE (INIT_IDMAP_DIR_PAGES * PAGE_SIZE)
#endif
+//
#define INIT_IDMAP_DIR_PAGES EARLY_PAGES(KIMAGE_VADDR, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE, 1)
/* Initial memory map size */
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 2fcf51231d6e..b817b694d1ba 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -12,6 +12,10 @@
#define USER_ASID_FLAG (UL(1) << USER_ASID_BIT)
#define TTBR_ASID_MASK (UL(0xffff) << 48)
+#define NO_BLOCK_MAPPINGS BIT(0)
+#define NO_CONT_MAPPINGS BIT(1)
+#define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
+
#ifndef __ASSEMBLY__
#include <linux/refcount.h>
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 7b236994f0e1..e2fa6b95f809 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -27,6 +27,7 @@
#include <asm/kernel-pgtable.h>
#include <asm/kvm_arm.h>
#include <asm/memory.h>
+#include <asm/mmu.h>
#include <asm/pgtable-hwdef.h>
#include <asm/page.h>
#include <asm/scs.h>
@@ -332,79 +333,69 @@ SYM_FUNC_START_LOCAL(remap_region)
SYM_FUNC_END(remap_region)
SYM_FUNC_START_LOCAL(create_idmap)
- mov x28, lr
- /*
- * The ID map carries a 1:1 mapping of the physical address range
- * covered by the loaded image, which could be anywhere in DRAM. This
- * means that the required size of the VA (== PA) space is decided at
- * boot time, and could be more than the configured size of the VA
- * space for ordinary kernel and user space mappings.
- *
- * There are three cases to consider here:
- * - 39 <= VA_BITS < 48, and the ID map needs up to 48 VA bits to cover
- * the placement of the image. In this case, we configure one extra
- * level of translation on the fly for the ID map only. (This case
- * also covers 42-bit VA/52-bit PA on 64k pages).
- *
- * - VA_BITS == 48, and the ID map needs more than 48 VA bits. This can
- * only happen when using 64k pages, in which case we need to extend
- * the root level table rather than add a level. Note that we can
- * treat this case as 'always extended' as long as we take care not
- * to program an unsupported T0SZ value into the TCR register.
- *
- * - Combinations that would require two additional levels of
- * translation are not supported, e.g., VA_BITS==36 on 16k pages, or
- * VA_BITS==39/4k pages with 5-level paging, where the input address
- * requires more than 47 or 48 bits, respectively.
- */
-#if (VA_BITS < 48)
-#define IDMAP_PGD_ORDER (VA_BITS - PGDIR_SHIFT)
-#define EXTRA_SHIFT (PGDIR_SHIFT + PAGE_SHIFT - 3)
+ adr_l x0, init_stack
+ add sp, x0, #THREAD_SIZE
+ sub sp, sp, #16
+ stp lr, x0, [sp, #0] // x0 is a filler, stored only to keep the stack 16-byte aligned
- /*
- * If VA_BITS < 48, we have to configure an additional table level.
- * First, we have to verify our assumption that the current value of
- * VA_BITS was chosen such that all translation levels are fully
- * utilised, and that lowering T0SZ will always result in an additional
- * translation level to be configured.
- */
-#if VA_BITS != EXTRA_SHIFT
-#error "Mismatch between VA_BITS and page size/number of translation levels"
-#endif
-#else
-#define IDMAP_PGD_ORDER (PHYS_MASK_SHIFT - PGDIR_SHIFT)
-#define EXTRA_SHIFT
- /*
- * If VA_BITS == 48, we don't have to configure an additional
- * translation level, but the top-level table has more entries.
- */
-#endif
adrp x0, init_idmap_pg_dir
- adrp x3, _text
- adrp x6, _end + MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE
- mov_q x7, SWAPPER_RX_MMUFLAGS
-
- map_memory x0, x1, x3, x6, x7, x3, IDMAP_PGD_ORDER, x10, x11, x12, x13, x14, EXTRA_SHIFT
-
- /* Remap the kernel page tables r/w in the ID map */
- adrp x1, _text
- adrp x2, init_pg_dir
- adrp x3, init_pg_end
- bic x4, x2, #SWAPPER_BLOCK_SIZE - 1
- mov_q x5, SWAPPER_RW_MMUFLAGS
- mov x6, #SWAPPER_BLOCK_SHIFT
- bl remap_region
-
- /* Remap the FDT after the kernel image */
- adrp x1, _text
- adrp x22, _end + SWAPPER_BLOCK_SIZE
- bic x2, x22, #SWAPPER_BLOCK_SIZE - 1
+ adrp x1, init_idmap_pg_end
+ sub x1, x1, x0
+ bl headpool_init
+ mov x0, #0
+ bl headpool_pgtable_alloc // return x0, containing init_idmap_pg_dir
+ mov x27, x0 // bake in case of flush
+
+ adr_l x1, _text // phys
+ mov x2, x1 // virt for idmap
+ adr_l x3, _etext - 1
+ sub x3, x3, x1 // size
+ ldr x4, =SWAPPER_RX_MMUFLAGS
+ adr_l x5, headpool_pgtable_alloc
+ mov x6, #0
+ bl mmu_head_create_pgd_mapping
+
+ mov x0, x27 // pgd
+ adr_l x1, init_pg_dir // phys
+ mov x2, x1 // virt for idmap
+ adr_l x3, init_pg_end
+ sub x3, x3, x1
+ ldr x4, =SWAPPER_RW_MMUFLAGS
+ adr_l x5, headpool_pgtable_alloc
+ mov x6, #0
+ bl mmu_head_create_pgd_mapping
+
+ mov x0, x27 // pgd
+ adr_l x1, init_stack // kernel mapping needs write permission to use this stack
+ mov x2, x1 // virt for idmap
+ ldr x3, =THREAD_SIZE
+ ldr x4, =SWAPPER_RW_MMUFLAGS
+ adr_l x5, headpool_pgtable_alloc
+ mov x6, #0
+ bl mmu_head_create_pgd_mapping
+
+ mov x0, x27 // pgd
+ adr_l x1, __initdata_begin // kernel mapping needs write permission to it
+ mov x2, x1 // virt for idmap
+ adr_l x3, __initdata_end
+ sub x3, x3, x1
+ ldr x4, =SWAPPER_RW_MMUFLAGS
+ adr_l x5, headpool_pgtable_alloc
+ mov x6, #0
+ bl mmu_head_create_pgd_mapping
+
+
+ mov x0, x27 // pgd
+ mov x1, x21 // FDT phys
+ adr_l x2, _end + SWAPPER_BLOCK_SIZE // virt
+ mov x3, #(MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE) // size
+ ldr x4, =SWAPPER_RW_MMUFLAGS
+ adr_l x5, headpool_pgtable_alloc
+ mov x6, #0
+ bl mmu_head_create_pgd_mapping
+
+ adr_l x22, _end + SWAPPER_BLOCK_SIZE
bfi x22, x21, #0, #SWAPPER_BLOCK_SHIFT // remapped FDT address
- add x3, x2, #MAX_FDT_SIZE + SWAPPER_BLOCK_SIZE
- bic x4, x21, #SWAPPER_BLOCK_SIZE - 1
- mov_q x5, SWAPPER_RW_MMUFLAGS
- mov x6, #SWAPPER_BLOCK_SHIFT
- bl remap_region
/*
* Since the page tables have been populated with non-cacheable
@@ -417,22 +408,42 @@ SYM_FUNC_START_LOCAL(create_idmap)
adrp x0, init_idmap_pg_dir
adrp x1, init_idmap_pg_end
bl dcache_inval_poc
-0: ret x28
+ ldp lr, x0, [sp], #16
+0: ret
SYM_FUNC_END(create_idmap)
SYM_FUNC_START_LOCAL(create_kernel_mapping)
+ sub sp, sp, #80
+ stp x0, x1, [sp, #0]
+ stp x2, x3, [sp, #16]
+ stp x4, x5, [sp, #32]
+ stp x6, x7, [sp, #48]
+ stp lr, xzr, [sp, #64]
+
adrp x0, init_pg_dir
- mov_q x5, KIMAGE_VADDR // compile time __va(_text)
+ adrp x1, init_pg_end
+ sub x1, x1, x0
+ bl headpool_init
+ mov x0, #0
+ bl headpool_pgtable_alloc // return x0, containing init_pg_dir
+
+ adrp x1, _text // runtime __pa(_text)
+ mov_q x2, KIMAGE_VADDR // compile time __va(_text)
#ifdef CONFIG_RELOCATABLE
- add x5, x5, x23 // add KASLR displacement
+ add x2, x2, x23 // add KASLR displacement
#endif
- adrp x6, _end // runtime __pa(_end)
- adrp x3, _text // runtime __pa(_text)
- sub x6, x6, x3 // _end - _text
- add x6, x6, x5 // runtime __va(_end)
- mov_q x7, SWAPPER_RW_MMUFLAGS
-
- map_memory x0, x1, x5, x6, x7, x3, (VA_BITS - PGDIR_SHIFT), x10, x11, x12, x13, x14
+ adrp x3, _end // runtime __pa(_end)
+ sub x3, x3, x1 // _end - _text
+ ldr x4, =SWAPPER_RW_MMUFLAGS
+ adr_l x5, headpool_pgtable_alloc
+ mov x6, #0
+ bl mmu_head_create_pgd_mapping
+
+ ldp lr, xzr, [sp, #64]
+ ldp x6, x7, [sp, #48]
+ ldp x4, x5, [sp, #32]
+ ldp x2, x3, [sp, #16]
+ ldp x0, x1, [sp], #80
dsb ishst // sync with page table walker
ret
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 80e49faaf066..e9748c7017dd 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -41,10 +41,6 @@
#include <asm/pgalloc.h>
#include <asm/kfence.h>
-#define NO_BLOCK_MAPPINGS BIT(0)
-#define NO_CONT_MAPPINGS BIT(1)
-#define NO_EXEC_MAPPINGS BIT(2) /* assumes FEAT_HPDS is not used */
-
int idmap_t0sz __ro_after_init;
#if VA_BITS > 48
--
2.41.0
* [PATCH 10/10] arm64: head: Clean up unneeded routines
2024-03-13 12:56 [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Pingfan Liu
` (7 preceding siblings ...)
2024-03-13 12:57 ` [PATCH 09/10] arm64: head: Use __create_pgd_mapping_locked() to serve the creation of pgtable Pingfan Liu
@ 2024-03-13 12:57 ` Pingfan Liu
2024-03-13 13:05 ` [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Ard Biesheuvel
9 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2024-03-13 12:57 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Ard Biesheuvel, Catalin Marinas, Will Deacon, Mark Rutland
The boot-time page tables are now created through
__create_pgd_mapping_locked(), so the assembly macros populate_entries,
compute_indices and map_memory, together with remap_region, have no
remaining users in head.S. Remove them.
Signed-off-by: Pingfan Liu <piliu@redhat.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/kernel/head.S | 143 ---------------------------------------
1 file changed, 143 deletions(-)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index e2fa6b95f809..c38d169129ac 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -189,149 +189,6 @@ SYM_FUNC_START_LOCAL(clear_page_tables)
b __pi_memset // tail call
SYM_FUNC_END(clear_page_tables)
-/*
- * Macro to populate page table entries, these entries can be pointers to the next level
- * or last level entries pointing to physical memory.
- *
- * tbl: page table address
- * rtbl: pointer to page table or physical memory
- * index: start index to write
- * eindex: end index to write - [index, eindex] written to
- * flags: flags for pagetable entry to or in
- * inc: increment to rtbl between each entry
- * tmp1: temporary variable
- *
- * Preserves: tbl, eindex, flags, inc
- * Corrupts: index, tmp1
- * Returns: rtbl
- */
- .macro populate_entries, tbl, rtbl, index, eindex, flags, inc, tmp1
-.Lpe\@: phys_to_pte \tmp1, \rtbl
- orr \tmp1, \tmp1, \flags // tmp1 = table entry
- str \tmp1, [\tbl, \index, lsl #3]
- add \rtbl, \rtbl, \inc // rtbl = pa next level
- add \index, \index, #1
- cmp \index, \eindex
- b.ls .Lpe\@
- .endm
-
-/*
- * Compute indices of table entries from virtual address range. If multiple entries
- * were needed in the previous page table level then the next page table level is assumed
- * to be composed of multiple pages. (This effectively scales the end index).
- *
- * vstart: virtual address of start of range
- * vend: virtual address of end of range - we map [vstart, vend]
- * shift: shift used to transform virtual address into index
- * order: #imm 2log(number of entries in page table)
- * istart: index in table corresponding to vstart
- * iend: index in table corresponding to vend
- * count: On entry: how many extra entries were required in previous level, scales
- * our end index.
- * On exit: returns how many extra entries required for next page table level
- *
- * Preserves: vstart, vend
- * Returns: istart, iend, count
- */
- .macro compute_indices, vstart, vend, shift, order, istart, iend, count
- ubfx \istart, \vstart, \shift, \order
- ubfx \iend, \vend, \shift, \order
- add \iend, \iend, \count, lsl \order
- sub \count, \iend, \istart
- .endm
-
-/*
- * Map memory for specified virtual address range. Each level of page table needed supports
- * multiple entries. If a level requires n entries the next page table level is assumed to be
- * formed from n pages.
- *
- * tbl: location of page table
- * rtbl: address to be used for first level page table entry (typically tbl + PAGE_SIZE)
- * vstart: virtual address of start of range
- * vend: virtual address of end of range - we map [vstart, vend - 1]
- * flags: flags to use to map last level entries
- * phys: physical address corresponding to vstart - physical memory is contiguous
- * order: #imm 2log(number of entries in PGD table)
- *
- * If extra_shift is set, an extra level will be populated if the end address does
- * not fit in 'extra_shift' bits. This assumes vend is in the TTBR0 range.
- *
- * Temporaries: istart, iend, tmp, count, sv - these need to be different registers
- * Preserves: vstart, flags
- * Corrupts: tbl, rtbl, vend, istart, iend, tmp, count, sv
- */
- .macro map_memory, tbl, rtbl, vstart, vend, flags, phys, order, istart, iend, tmp, count, sv, extra_shift
- sub \vend, \vend, #1
- add \rtbl, \tbl, #PAGE_SIZE
- mov \count, #0
-
- .ifnb \extra_shift
- tst \vend, #~((1 << (\extra_shift)) - 1)
- b.eq .L_\@
- compute_indices \vstart, \vend, #\extra_shift, #(PAGE_SHIFT - 3), \istart, \iend, \count
- mov \sv, \rtbl
- populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
- mov \tbl, \sv
- .endif
-.L_\@:
- compute_indices \vstart, \vend, #PGDIR_SHIFT, #\order, \istart, \iend, \count
- mov \sv, \rtbl
- populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
- mov \tbl, \sv
-
-#if SWAPPER_PGTABLE_LEVELS > 3
- compute_indices \vstart, \vend, #PUD_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
- mov \sv, \rtbl
- populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
- mov \tbl, \sv
-#endif
-
-#if SWAPPER_PGTABLE_LEVELS > 2
- compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
- mov \sv, \rtbl
- populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
- mov \tbl, \sv
-#endif
-
- compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #(PAGE_SHIFT - 3), \istart, \iend, \count
- bic \rtbl, \phys, #SWAPPER_BLOCK_SIZE - 1
- populate_entries \tbl, \rtbl, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp
- .endm
-
-/*
- * Remap a subregion created with the map_memory macro with modified attributes
- * or output address. The entire remapped region must have been covered in the
- * invocation of map_memory.
- *
- * x0: last level table address (returned in first argument to map_memory)
- * x1: start VA of the existing mapping
- * x2: start VA of the region to update
- * x3: end VA of the region to update (exclusive)
- * x4: start PA associated with the region to update
- * x5: attributes to set on the updated region
- * x6: order of the last level mappings
- */
-SYM_FUNC_START_LOCAL(remap_region)
- sub x3, x3, #1 // make end inclusive
-
- // Get the index offset for the start of the last level table
- lsr x1, x1, x6
- bfi x1, xzr, #0, #PAGE_SHIFT - 3
-
- // Derive the start and end indexes into the last level table
- // associated with the provided region
- lsr x2, x2, x6
- lsr x3, x3, x6
- sub x2, x2, x1
- sub x3, x3, x1
-
- mov x1, #1
- lsl x6, x1, x6 // block size at this level
-
- populate_entries x0, x4, x2, x3, x5, x6, x7
- ret
-SYM_FUNC_END(remap_region)
-
SYM_FUNC_START_LOCAL(create_idmap)
adr_l x0, init_stack
add sp, x0, #THREAD_SIZE
--
2.41.0
* Re: [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in
2024-03-13 12:56 [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Pingfan Liu
` (8 preceding siblings ...)
2024-03-13 12:57 ` [PATCH 10/10] arm64: head: Clean up unneeded routines Pingfan Liu
@ 2024-03-13 13:05 ` Ard Biesheuvel
2024-03-14 2:54 ` Pingfan Liu
9 siblings, 1 reply; 13+ messages in thread
From: Ard Biesheuvel @ 2024-03-13 13:05 UTC (permalink / raw)
To: Pingfan Liu; +Cc: linux-arm-kernel, Catalin Marinas, Will Deacon, Mark Rutland
Hello Pingfan,
On Wed, 13 Mar 2024 at 13:57, Pingfan Liu <piliu@redhat.com> wrote:
>
> Hi everybody, I tried this stuff again.
Tried what again? Frankly, I have no idea what the purpose of this
patch series is, and this is v1.
Could you please explain?
Also, the early arm64 startup code is changing substantially - please refer to
https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=for-next/stage1-lpa2
for details.
> Last time when I tried this,
> Catalin raised concerns about the instrumentation, and Ard doubted this
> approach due to the alignment issue with the MMU off.
>
> Last time, the alignment issue looked unsolvable and I gave up. But
> now, when I look at it, I think it is partially resolvable. (for
> detail, please see the commit log in [PATCH 08/10] arm64: mm: Enforce
> memory alignment in mmu_head)
>
> Overall, at this very early stage, the use of C routines faces three
> challenges:
> PIC
> instrumentation
> alignment
>
> [2/10] resolves the instrumentation issue
>
> [3/10] makes mmu_head self-contained and prevents the outside
> PIC/instrumentation/alignment issues from seeping in, and checks that the code is PIC.
>
> [PATCH 08/10] explains the alignment issue; in theory, it can be
> checked and resolved. In this patch, it is partially resolved.
>
>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> To: linux-arm-kernel@lists.infradead.org
> ---
>
> Pingfan Liu (10):
> arm64: mm: Split out routines for code reuse
> arm64: mm: Introduce mmu_head routines without instrumentation
> arm64: mm: Use if-conditon to truncate external dependency
> arm64: head: Enable __create_pgd_mapping() to handle pgtable's paddr
> arm64: mm: Force early mapping aligned on SWAPPER_BLOCK_SIZE
> arm64: mm: Handle scope beyond the capacity of kernel pgtable in
> mmu_head_create_pgd_mapping()
> arm64: mm: Introduce head_pool routines to enable pgtabl allocation
> arm64: mm: Enforce memory alignment in mmu_head
> arm64: head: Use __create_pgd_mapping_locked() to serve the creation
> of pgtable
> arm64: head: Clean up unneeded routines
>
> arch/arm64/include/asm/kernel-pgtable.h | 1 +
> arch/arm64/include/asm/mmu.h | 4 +
> arch/arm64/include/asm/pgtable.h | 11 +-
> arch/arm64/kernel/head.S | 314 +++++++-----------------
> arch/arm64/mm/Makefile | 22 +-
> arch/arm64/mm/mmu.c | 289 +---------------------
> arch/arm64/mm/mmu_head.c | 134 ++++++++++
> arch/arm64/mm/mmu_inc.c | 292 ++++++++++++++++++++++
> 8 files changed, 558 insertions(+), 509 deletions(-)
> create mode 100644 arch/arm64/mm/mmu_head.c
> create mode 100644 arch/arm64/mm/mmu_inc.c
>
> --
> 2.41.0
>
* Re: [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in
2024-03-13 13:05 ` [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in Ard Biesheuvel
@ 2024-03-14 2:54 ` Pingfan Liu
2024-03-14 17:25 ` Catalin Marinas
0 siblings, 1 reply; 13+ messages in thread
From: Pingfan Liu @ 2024-03-14 2:54 UTC (permalink / raw)
To: Ard Biesheuvel
Cc: linux-arm-kernel, Catalin Marinas, Will Deacon, Mark Rutland
On Wed, Mar 13, 2024 at 9:05 PM Ard Biesheuvel <ardb@kernel.org> wrote:
>
> Hello Pingfan,
>
> On Wed, 13 Mar 2024 at 13:57, Pingfan Liu <piliu@redhat.com> wrote:
> >
> > Hi everybody, I tried this stuff again.
>
> Tried what again? Frankly, I have no idea what the purpose of this
> patch series is, and this is v1.
>
Sorry, I should have pasted the original link for the history:
https://lore.kernel.org/all/20210531084540.78546-1-kernelfans@gmail.com/
> Could you please explain?
>
It is about calling the C routine __create_pgd_mapping() while the MMU
is still off.
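
For reference, here is the C-level shape of that interface as I read it
from the head.S call sites in this series. This is only a sketch, under the
assumption that the prototype mirrors mainline's
__create_pgd_mapping_locked(); it is not quoted from the patches, and the
placeholder names in the example call are illustrative:

    /*
     * Sketch only: prototype inferred from the call sites above, where the
     * assembly loads x0=pgd, x1=phys, x2=virt, x3=size, x4=prot,
     * x5=allocator, x6=flags before "bl mmu_head_create_pgd_mapping".
     */
    void mmu_head_create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
                                     unsigned long virt, phys_addr_t size,
                                     pgprot_t prot,
                                     phys_addr_t (*pgtable_alloc)(int),
                                     int flags);

    /*
     * Roughly what create_kernel_mapping() does in assembly; text_pa and
     * end_pa are placeholders for the runtime __pa(_text)/__pa(_end)
     * values held in registers there.
     */
    mmu_head_create_pgd_mapping(init_pg_dir, text_pa, KIMAGE_VADDR,
                                end_pa - text_pa,
                                __pgprot(SWAPPER_RW_MMUFLAGS),
                                headpool_pgtable_alloc, 0);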
> Also, the early arm64 startup code is changing substantially - please refer to
>
> https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=for-next/stage1-lpa2
>
> for details.
>
Oh, it seems that most of the ideas in my series have been
implemented. I will dive into it for more detail.
Thank you very much.
Regards,
Pingfan
> > Last time when I tried this,
> > Catalin raised concerns about the instrumentation, and Ard doubted this
> > approach due to the alignment issue with the MMU off.
> >
> > Last time, the alignment issue looked unsolvable and I gave up. But
> > now, when I look at it, I think it is partially resolvable. (for
> > detail, please see the commit log in [PATCH 08/10] arm64: mm: Enforce
> > memory alignment in mmu_head)
> >
> > Overall, at this very early stage, the use of C routines faces three
> > challenges:
> > PIC
> > instrumentation
> > alignment
> >
> > [2/10] resolves the instrumentation issue
> >
> > [3/10] makes mmu_head self-contained and prevents the outside
> > PIC/instrumentation/alignment issues from seeping in, and checks that the code is PIC.
> >
> > [PATCH 08/10] explains the alignment issue; in theory, it can be
> > checked and resolved. In this patch, it is partially resolved.
> >
> >
> > Cc: Ard Biesheuvel <ardb@kernel.org>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > To: linux-arm-kernel@lists.infradead.org
> > ---
> >
> > Pingfan Liu (10):
> > arm64: mm: Split out routines for code reuse
> > arm64: mm: Introduce mmu_head routines without instrumentation
> > arm64: mm: Use if-conditon to truncate external dependency
> > arm64: head: Enable __create_pgd_mapping() to handle pgtable's paddr
> > arm64: mm: Force early mapping aligned on SWAPPER_BLOCK_SIZE
> > arm64: mm: Handle scope beyond the capacity of kernel pgtable in
> > mmu_head_create_pgd_mapping()
> > arm64: mm: Introduce head_pool routines to enable pgtabl allocation
> > arm64: mm: Enforce memory alignment in mmu_head
> > arm64: head: Use __create_pgd_mapping_locked() to serve the creation
> > of pgtable
> > arm64: head: Clean up unneeded routines
> >
> > arch/arm64/include/asm/kernel-pgtable.h | 1 +
> > arch/arm64/include/asm/mmu.h | 4 +
> > arch/arm64/include/asm/pgtable.h | 11 +-
> > arch/arm64/kernel/head.S | 314 +++++++-----------------
> > arch/arm64/mm/Makefile | 22 +-
> > arch/arm64/mm/mmu.c | 289 +---------------------
> > arch/arm64/mm/mmu_head.c | 134 ++++++++++
> > arch/arm64/mm/mmu_inc.c | 292 ++++++++++++++++++++++
> > 8 files changed, 558 insertions(+), 509 deletions(-)
> > create mode 100644 arch/arm64/mm/mmu_head.c
> > create mode 100644 arch/arm64/mm/mmu_inc.c
> >
> > --
> > 2.41.0
> >
>
* Re: [PATCH 00/10] arm64: mm: Use __create_pgd_mapping_locked() in
2024-03-14 2:54 ` Pingfan Liu
@ 2024-03-14 17:25 ` Catalin Marinas
0 siblings, 0 replies; 13+ messages in thread
From: Catalin Marinas @ 2024-03-14 17:25 UTC (permalink / raw)
To: Pingfan Liu; +Cc: Ard Biesheuvel, linux-arm-kernel, Will Deacon, Mark Rutland
On Thu, Mar 14, 2024 at 10:54:12AM +0800, Pingfan Liu wrote:
> On Wed, Mar 13, 2024 at 9:05 PM Ard Biesheuvel <ardb@kernel.org> wrote:
> > Also, the early arm64 startup code is changing substantially - please refer to
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=for-next/stage1-lpa2
> >
> > for details.
>
> Oh, it seems that most of the ideas in my series have been
> implemented. I will dive it for more detail.
Yeah, better to post after 6.9-rc1, when this stuff lands.
--
Catalin