* [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes
@ 2021-04-25 14:12 Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 01/10] arm64/mm: split out __create_pgd_mapping() routines Pingfan Liu
` (9 more replies)
0 siblings, 10 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:12 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
This series aims to share the pgtable manipulation code between head.S
and trans_pgd_idmap_page().
The core idea is that, by redefining CONFIG_PGTABLE_LEVELS, two sets of
pgtable manipulation code are generated: one, as today, for
swapper_pg_dir; the other for the idmap. A dedicated create_idmap() API
is introduced on top.
The series can be grouped into two parts:
[1-5/10] ports trans_pgd_idmap_page() and introduces the create_idmap()
API
[6-10/10] replaces the pgtable manipulation asm in head.S with calls to
__create_pgd_mapping()
This series boots successfully with the following configurations on a
Cavium ThunderX 88XX CPU:
PAGE_SIZE VA PA PGTABLE_LEVEL
4K 48 48 4
4K 39 48 3
16K 48 48 4
16K 47 48 3
64K 52 52 3
64K 42 52 2
History
RFC:
https://lore.kernel.org/linux-arm-kernel/20210410095654.24102-1-kernelfans@gmail.com/
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
Pingfan Liu (10):
arm64/mm: split out __create_pgd_mapping() routines
arm64/mm: change __create_pgd_mapping() to accept nr_entries param and
introduce create_idmap()
arm64/mm: change __create_pgd_mapping() to accept extra parameter for
allocator
arm64/mm: enable __create_pgd_mapping() to run across different
pgtable
arm64/mm: port trans_pgd_idmap_page() onto create_idmap()
arm64/mm: introduce pgtable allocator for idmap_pg_dir and init_pg_dir
arm64/pgtable-prot.h: reorganize to cope with asm
arm64/mmu_include.c: disable WARN_ON() and BUG_ON() when booting.
arm64/mm: make __create_pgd_mapping() coped with pgtable's paddr
arm64/head: convert idmap_pg_dir and init_pg_dir to
__create_pgd_mapping()
arch/arm64/Kconfig | 4 +
arch/arm64/include/asm/pgalloc.h | 29 +++
arch/arm64/include/asm/pgtable-prot.h | 34 ++-
arch/arm64/kernel/head.S | 196 ++++----------
arch/arm64/mm/Makefile | 2 +
arch/arm64/mm/idmap_mmu.c | 39 +++
arch/arm64/mm/mmu.c | 362 ++++++--------------------
arch/arm64/mm/mmu_include.c | 320 +++++++++++++++++++++++
arch/arm64/mm/trans_pgd.c | 62 +++--
9 files changed, 579 insertions(+), 469 deletions(-)
create mode 100644 arch/arm64/mm/idmap_mmu.c
create mode 100644 arch/arm64/mm/mmu_include.c
--
2.29.2
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCHv2 01/10] arm64/mm: split out __create_pgd_mapping() routines
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
@ 2021-04-25 14:12 ` Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 02/10] arm64/mm: change __create_pgd_mapping() to accept nr_entries param and introduce create_idmap() Pingfan Liu
` (8 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:12 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
If VA_BITS < 48, an additional page-table level is needed to allow the ID
mapping to cover system RAM. At present this is done by the
create_table_entry macro in head.S. On second thought, the same expansion
can be achieved more easily by redefining CONFIG_PGTABLE_LEVELS.
Split out the __create_pgd_mapping() routines, so that two sets of
pgtable manipulation code can be generated under two different values of
CONFIG_PGTABLE_LEVELS. Later, the set generated under
'CONFIG_PGTABLE_LEVELS + 1' can be used for the idmap.
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
RFC -> v2:
handle the ARM64_16K_PAGES && ARM64_VA_BITS_36 configuration,
and move the redefinition of CONFIG_PGTABLE_LEVELS to the top of
idmap_mmu.c
---
arch/arm64/Kconfig | 4 +
arch/arm64/mm/Makefile | 2 +
arch/arm64/mm/idmap_mmu.c | 34 +++++
arch/arm64/mm/mmu.c | 263 +----------------------------------
arch/arm64/mm/mmu_include.c | 270 ++++++++++++++++++++++++++++++++++++
5 files changed, 312 insertions(+), 261 deletions(-)
create mode 100644 arch/arm64/mm/idmap_mmu.c
create mode 100644 arch/arm64/mm/mmu_include.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e4e1b6550115..79755ade5d27 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -327,6 +327,10 @@ config PGTABLE_LEVELS
default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
+config IDMAP_PGTABLE_EXPAND
+ def_bool y
+ depends on (ARM64_16K_PAGES && ARM64_VA_BITS_36) || (ARM64_64K_PAGES && ARM64_VA_BITS_42) || (ARM64_4K_PAGES && ARM64_VA_BITS_39) || (ARM64_16K_PAGES && ARM64_VA_BITS_47)
+
config ARCH_SUPPORTS_UPROBES
def_bool y
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index f188c9092696..f9283cb9a201 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -3,6 +3,8 @@ obj-y := dma-mapping.o extable.o fault.o init.o \
cache.o copypage.o flush.o \
ioremap.o mmap.o pgd.o mmu.o \
context.o proc.o pageattr.o
+
+obj-$(CONFIG_IDMAP_PGTABLE_EXPAND) += idmap_mmu.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
obj-$(CONFIG_PTDUMP_CORE) += ptdump.o
obj-$(CONFIG_PTDUMP_DEBUGFS) += ptdump_debugfs.o
diff --git a/arch/arm64/mm/idmap_mmu.c b/arch/arm64/mm/idmap_mmu.c
new file mode 100644
index 000000000000..42a27dd5cc9f
--- /dev/null
+++ b/arch/arm64/mm/idmap_mmu.c
@@ -0,0 +1,34 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#ifdef CONFIG_IDMAP_PGTABLE_EXPAND
+
+#if CONFIG_PGTABLE_LEVELS == 2
+#define EXTEND_LEVEL 3
+#elif CONFIG_PGTABLE_LEVELS == 3
+#define EXTEND_LEVEL 4
+#endif
+
+#undef CONFIG_PGTABLE_LEVELS
+#define CONFIG_PGTABLE_LEVELS EXTEND_LEVEL
+
+#include <linux/errno.h>
+
+#include <asm/barrier.h>
+#include <asm/fixmap.h>
+#include <asm/kernel-pgtable.h>
+#include <asm/sections.h>
+#include <asm/pgalloc.h>
+
+#include "./mmu_include.c"
+
+void __create_pgd_mapping_extend(pgd_t *pgdir, phys_addr_t phys,
+ unsigned long virt, phys_addr_t size,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags)
+{
+ __create_pgd_mapping(pgdir, phys, virt, size, prot, pgtable_alloc, flags);
+}
+#endif
+
+
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5d9550fdb9cf..56e4f25e8d6d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -37,9 +37,6 @@
#include <asm/tlbflush.h>
#include <asm/pgalloc.h>
-#define NO_BLOCK_MAPPINGS BIT(0)
-#define NO_CONT_MAPPINGS BIT(1)
-
u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
@@ -116,264 +113,6 @@ static phys_addr_t __init early_pgtable_alloc(int shift)
return phys;
}
-static bool pgattr_change_is_safe(u64 old, u64 new)
-{
- /*
- * The following mapping attributes may be updated in live
- * kernel mappings without the need for break-before-make.
- */
- pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG;
-
- /* creating or taking down mappings is always safe */
- if (old == 0 || new == 0)
- return true;
-
- /* live contiguous mappings may not be manipulated at all */
- if ((old | new) & PTE_CONT)
- return false;
-
- /* Transitioning from Non-Global to Global is unsafe */
- if (old & ~new & PTE_NG)
- return false;
-
- /*
- * Changing the memory type between Normal and Normal-Tagged is safe
- * since Tagged is considered a permission attribute from the
- * mismatched attribute aliases perspective.
- */
- if (((old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
- (old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)) &&
- ((new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
- (new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)))
- mask |= PTE_ATTRINDX_MASK;
-
- return ((old ^ new) & ~mask) == 0;
-}
-
-static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
- phys_addr_t phys, pgprot_t prot)
-{
- pte_t *ptep;
-
- ptep = pte_set_fixmap_offset(pmdp, addr);
- do {
- pte_t old_pte = READ_ONCE(*ptep);
-
- set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
-
- /*
- * After the PTE entry has been populated once, we
- * only allow updates to the permission attributes.
- */
- BUG_ON(!pgattr_change_is_safe(pte_val(old_pte),
- READ_ONCE(pte_val(*ptep))));
-
- phys += PAGE_SIZE;
- } while (ptep++, addr += PAGE_SIZE, addr != end);
-
- pte_clear_fixmap();
-}
-
-static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
- unsigned long end, phys_addr_t phys,
- pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
- int flags)
-{
- unsigned long next;
- pmd_t pmd = READ_ONCE(*pmdp);
-
- BUG_ON(pmd_sect(pmd));
- if (pmd_none(pmd)) {
- phys_addr_t pte_phys;
- BUG_ON(!pgtable_alloc);
- pte_phys = pgtable_alloc(PAGE_SHIFT);
- __pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
- pmd = READ_ONCE(*pmdp);
- }
- BUG_ON(pmd_bad(pmd));
-
- do {
- pgprot_t __prot = prot;
-
- next = pte_cont_addr_end(addr, end);
-
- /* use a contiguous mapping if the range is suitably aligned */
- if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
- (flags & NO_CONT_MAPPINGS) == 0)
- __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
-
- init_pte(pmdp, addr, next, phys, __prot);
-
- phys += next - addr;
- } while (addr = next, addr != end);
-}
-
-static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
- phys_addr_t phys, pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int), int flags)
-{
- unsigned long next;
- pmd_t *pmdp;
-
- pmdp = pmd_set_fixmap_offset(pudp, addr);
- do {
- pmd_t old_pmd = READ_ONCE(*pmdp);
-
- next = pmd_addr_end(addr, end);
-
- /* try section mapping first */
- if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
- (flags & NO_BLOCK_MAPPINGS) == 0) {
- pmd_set_huge(pmdp, phys, prot);
-
- /*
- * After the PMD entry has been populated once, we
- * only allow updates to the permission attributes.
- */
- BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
- READ_ONCE(pmd_val(*pmdp))));
- } else {
- alloc_init_cont_pte(pmdp, addr, next, phys, prot,
- pgtable_alloc, flags);
-
- BUG_ON(pmd_val(old_pmd) != 0 &&
- pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
- }
- phys += next - addr;
- } while (pmdp++, addr = next, addr != end);
-
- pmd_clear_fixmap();
-}
-
-static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
- unsigned long end, phys_addr_t phys,
- pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int), int flags)
-{
- unsigned long next;
- pud_t pud = READ_ONCE(*pudp);
-
- /*
- * Check for initial section mappings in the pgd/pud.
- */
- BUG_ON(pud_sect(pud));
- if (pud_none(pud)) {
- phys_addr_t pmd_phys;
- BUG_ON(!pgtable_alloc);
- pmd_phys = pgtable_alloc(PMD_SHIFT);
- __pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
- pud = READ_ONCE(*pudp);
- }
- BUG_ON(pud_bad(pud));
-
- do {
- pgprot_t __prot = prot;
-
- next = pmd_cont_addr_end(addr, end);
-
- /* use a contiguous mapping if the range is suitably aligned */
- if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
- (flags & NO_CONT_MAPPINGS) == 0)
- __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
-
- init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
-
- phys += next - addr;
- } while (addr = next, addr != end);
-}
-
-static inline bool use_1G_block(unsigned long addr, unsigned long next,
- unsigned long phys)
-{
- if (PAGE_SHIFT != 12)
- return false;
-
- if (((addr | next | phys) & ~PUD_MASK) != 0)
- return false;
-
- return true;
-}
-
-static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
- phys_addr_t phys, pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
- int flags)
-{
- unsigned long next;
- pud_t *pudp;
- p4d_t *p4dp = p4d_offset(pgdp, addr);
- p4d_t p4d = READ_ONCE(*p4dp);
-
- if (p4d_none(p4d)) {
- phys_addr_t pud_phys;
- BUG_ON(!pgtable_alloc);
- pud_phys = pgtable_alloc(PUD_SHIFT);
- __p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
- p4d = READ_ONCE(*p4dp);
- }
- BUG_ON(p4d_bad(p4d));
-
- pudp = pud_set_fixmap_offset(p4dp, addr);
- do {
- pud_t old_pud = READ_ONCE(*pudp);
-
- next = pud_addr_end(addr, end);
-
- /*
- * For 4K granule only, attempt to put down a 1GB block
- */
- if (use_1G_block(addr, next, phys) &&
- (flags & NO_BLOCK_MAPPINGS) == 0) {
- pud_set_huge(pudp, phys, prot);
-
- /*
- * After the PUD entry has been populated once, we
- * only allow updates to the permission attributes.
- */
- BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
- READ_ONCE(pud_val(*pudp))));
- } else {
- alloc_init_cont_pmd(pudp, addr, next, phys, prot,
- pgtable_alloc, flags);
-
- BUG_ON(pud_val(old_pud) != 0 &&
- pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
- }
- phys += next - addr;
- } while (pudp++, addr = next, addr != end);
-
- pud_clear_fixmap();
-}
-
-static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
- unsigned long virt, phys_addr_t size,
- pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
- int flags)
-{
- unsigned long addr, end, next;
- pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
-
- /*
- * If the virtual and physical address don't have the same offset
- * within a page, we cannot map the region as the caller expects.
- */
- if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
- return;
-
- phys &= PAGE_MASK;
- addr = virt & PAGE_MASK;
- end = PAGE_ALIGN(virt + size);
-
- do {
- next = pgd_addr_end(addr, end);
- alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
- flags);
- phys += next - addr;
- } while (pgdp++, addr = next, addr != end);
-}
-
static phys_addr_t __pgd_pgtable_alloc(int shift)
{
void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
@@ -404,6 +143,8 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
return pa;
}
+#include "./mmu_include.c"
+
/*
* This function can only be used to modify existing table entries,
* without allocating new levels of table. Note that this permits the
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
new file mode 100644
index 000000000000..95ff35a3c6cb
--- /dev/null
+++ b/arch/arm64/mm/mmu_include.c
@@ -0,0 +1,270 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * This file is shared between mmu.c and idmap_mmu.c.
+ * These functions must be position-independent, because they are called
+ * with the MMU both disabled and enabled.
+ */
+
+#define NO_BLOCK_MAPPINGS BIT(0)
+#define NO_CONT_MAPPINGS BIT(1)
+
+static bool pgattr_change_is_safe(u64 old, u64 new)
+{
+ /*
+ * The following mapping attributes may be updated in live
+ * kernel mappings without the need for break-before-make.
+ */
+ pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG;
+
+ /* creating or taking down mappings is always safe */
+ if (old == 0 || new == 0)
+ return true;
+
+ /* live contiguous mappings may not be manipulated at all */
+ if ((old | new) & PTE_CONT)
+ return false;
+
+ /* Transitioning from Non-Global to Global is unsafe */
+ if (old & ~new & PTE_NG)
+ return false;
+
+ /*
+ * Changing the memory type between Normal and Normal-Tagged is safe
+ * since Tagged is considered a permission attribute from the
+ * mismatched attribute aliases perspective.
+ */
+ if (((old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
+ (old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)) &&
+ ((new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
+ (new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)))
+ mask |= PTE_ATTRINDX_MASK;
+
+ return ((old ^ new) & ~mask) == 0;
+}
+
+static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ phys_addr_t phys, pgprot_t prot)
+{
+ pte_t *ptep;
+
+ ptep = pte_set_fixmap_offset(pmdp, addr);
+ do {
+ pte_t old_pte = READ_ONCE(*ptep);
+
+ set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
+
+ /*
+ * After the PTE entry has been populated once, we
+ * only allow updates to the permission attributes.
+ */
+ BUG_ON(!pgattr_change_is_safe(pte_val(old_pte),
+ READ_ONCE(pte_val(*ptep))));
+
+ phys += PAGE_SIZE;
+ } while (ptep++, addr += PAGE_SIZE, addr != end);
+
+ pte_clear_fixmap();
+}
+
+static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
+ unsigned long end, phys_addr_t phys,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags)
+{
+ unsigned long next;
+ pmd_t pmd = READ_ONCE(*pmdp);
+
+ BUG_ON(pmd_sect(pmd));
+ if (pmd_none(pmd)) {
+ phys_addr_t pte_phys;
+
+ BUG_ON(!pgtable_alloc);
+ pte_phys = pgtable_alloc(PAGE_SHIFT);
+ __pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
+ pmd = READ_ONCE(*pmdp);
+ }
+ BUG_ON(pmd_bad(pmd));
+
+ do {
+ pgprot_t __prot = prot;
+
+ next = pte_cont_addr_end(addr, end);
+
+ /* use a contiguous mapping if the range is suitably aligned */
+ if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
+ (flags & NO_CONT_MAPPINGS) == 0)
+ __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ init_pte(pmdp, addr, next, phys, __prot);
+
+ phys += next - addr;
+ } while (addr = next, addr != end);
+}
+
+static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
+ phys_addr_t phys, pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int), int flags)
+{
+ unsigned long next;
+ pmd_t *pmdp;
+
+ pmdp = pmd_set_fixmap_offset(pudp, addr);
+ do {
+ pmd_t old_pmd = READ_ONCE(*pmdp);
+
+ next = pmd_addr_end(addr, end);
+
+ /* try section mapping first */
+ if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
+ (flags & NO_BLOCK_MAPPINGS) == 0) {
+ pmd_set_huge(pmdp, phys, prot);
+
+ /*
+ * After the PMD entry has been populated once, we
+ * only allow updates to the permission attributes.
+ */
+ BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
+ READ_ONCE(pmd_val(*pmdp))));
+ } else {
+ alloc_init_cont_pte(pmdp, addr, next, phys, prot,
+ pgtable_alloc, flags);
+
+ BUG_ON(pmd_val(old_pmd) != 0 &&
+ pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
+ }
+ phys += next - addr;
+ } while (pmdp++, addr = next, addr != end);
+
+ pmd_clear_fixmap();
+}
+
+static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
+ unsigned long end, phys_addr_t phys,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int), int flags)
+{
+ unsigned long next;
+ pud_t pud = READ_ONCE(*pudp);
+
+ /*
+ * Check for initial section mappings in the pgd/pud.
+ */
+ BUG_ON(pud_sect(pud));
+ if (pud_none(pud)) {
+ phys_addr_t pmd_phys;
+
+ BUG_ON(!pgtable_alloc);
+ pmd_phys = pgtable_alloc(PMD_SHIFT);
+ __pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
+ pud = READ_ONCE(*pudp);
+ }
+ BUG_ON(pud_bad(pud));
+
+ do {
+ pgprot_t __prot = prot;
+
+ next = pmd_cont_addr_end(addr, end);
+
+ /* use a contiguous mapping if the range is suitably aligned */
+ if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
+ (flags & NO_CONT_MAPPINGS) == 0)
+ __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+ init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
+
+ phys += next - addr;
+ } while (addr = next, addr != end);
+}
+
+static inline bool use_1G_block(unsigned long addr, unsigned long next,
+ unsigned long phys)
+{
+ if (PAGE_SHIFT != 12)
+ return false;
+
+ if (((addr | next | phys) & ~PUD_MASK) != 0)
+ return false;
+
+ return true;
+}
+
+static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+ phys_addr_t phys, pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags)
+{
+ unsigned long next;
+ pud_t *pudp;
+ p4d_t *p4dp = p4d_offset(pgdp, addr);
+ p4d_t p4d = READ_ONCE(*p4dp);
+
+ if (p4d_none(p4d)) {
+ phys_addr_t pud_phys;
+
+ BUG_ON(!pgtable_alloc);
+ pud_phys = pgtable_alloc(PUD_SHIFT);
+ __p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
+ p4d = READ_ONCE(*p4dp);
+ }
+ BUG_ON(p4d_bad(p4d));
+
+ pudp = pud_set_fixmap_offset(p4dp, addr);
+ do {
+ pud_t old_pud = READ_ONCE(*pudp);
+
+ next = pud_addr_end(addr, end);
+
+ /*
+ * For 4K granule only, attempt to put down a 1GB block
+ */
+ if (use_1G_block(addr, next, phys) &&
+ (flags & NO_BLOCK_MAPPINGS) == 0) {
+ pud_set_huge(pudp, phys, prot);
+
+ /*
+ * After the PUD entry has been populated once, we
+ * only allow updates to the permission attributes.
+ */
+ BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
+ READ_ONCE(pud_val(*pudp))));
+ } else {
+ alloc_init_cont_pmd(pudp, addr, next, phys, prot,
+ pgtable_alloc, flags);
+
+ BUG_ON(pud_val(old_pud) != 0 &&
+ pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
+ }
+ phys += next - addr;
+ } while (pudp++, addr = next, addr != end);
+
+ pud_clear_fixmap();
+}
+
+static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+ unsigned long virt, phys_addr_t size,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags)
+{
+ unsigned long addr, end, next;
+ pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
+
+ /*
+ * If the virtual and physical address don't have the same offset
+ * within a page, we cannot map the region as the caller expects.
+ */
+ if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
+ return;
+
+ phys &= PAGE_MASK;
+ addr = virt & PAGE_MASK;
+ end = PAGE_ALIGN(virt + size);
+
+ do {
+ next = pgd_addr_end(addr, end);
+ alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
+ flags);
+ phys += next - addr;
+ } while (pgdp++, addr = next, addr != end);
+}
--
2.29.2
* [PATCHv2 02/10] arm64/mm: change __create_pgd_mapping() to accept nr_entries param and introduce create_idmap()
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 01/10] arm64/mm: split out __create_pgd_mapping() routines Pingfan Liu
@ 2021-04-25 14:12 ` Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 03/10] arm64/mm: change __create_pgd_mapping() to accept extra parameter for allocator Pingfan Liu
` (7 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:12 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
Since idmap_ptrs_per_pgd may exceed PTRS_PER_PGD, the prototype of
__create_pgd_mapping() needs to change to cope with that when creating
the idmap.
With this adaptation in place, a create_idmap() API can be introduced to
create the idmap conveniently for every CONFIG_PGTABLE_LEVELS
configuration.
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/include/asm/pgalloc.h | 7 ++++++
arch/arm64/kernel/head.S | 3 +++
arch/arm64/mm/idmap_mmu.c | 16 ++++++++-----
arch/arm64/mm/mmu.c | 41 ++++++++++++++++++++++++++------
arch/arm64/mm/mmu_include.c | 9 +++++--
5 files changed, 61 insertions(+), 15 deletions(-)
diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 3c6a7f5988b1..555792921af0 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -83,4 +83,11 @@ pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep)
}
#define pmd_pgtable(pmd) pmd_page(pmd)
+extern void __create_pgd_mapping_extend(pgd_t *pgdir,
+ unsigned int entries_cnt, phys_addr_t phys,
+ unsigned long virt, phys_addr_t size,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags);
+
#endif
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 840bda1869e9..e19649dbbafb 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -341,6 +341,9 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
#if VA_BITS != EXTRA_SHIFT
#error "Mismatch between VA_BITS and page size/number of translation levels"
#endif
+ adr_l x4, idmap_extend_pgtable
+ mov x5, #1
+ str x5, [x4] //require expanded pagetable
mov x4, EXTRA_PTRS
create_table_entry x0, x3, EXTRA_SHIFT, x4, x5, x6
diff --git a/arch/arm64/mm/idmap_mmu.c b/arch/arm64/mm/idmap_mmu.c
index 42a27dd5cc9f..bff1bffee321 100644
--- a/arch/arm64/mm/idmap_mmu.c
+++ b/arch/arm64/mm/idmap_mmu.c
@@ -21,13 +21,17 @@
#include "./mmu_include.c"
-void __create_pgd_mapping_extend(pgd_t *pgdir, phys_addr_t phys,
- unsigned long virt, phys_addr_t size,
- pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
- int flags)
+void __create_pgd_mapping_extend(pgd_t *pgdir,
+ unsigned int entries_cnt,
+ phys_addr_t phys,
+ unsigned long virt,
+ phys_addr_t size,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags)
{
- __create_pgd_mapping(pgdir, phys, virt, size, prot, pgtable_alloc, flags);
+ __create_pgd_mapping(pgdir, entries_cnt, phys, virt, size, prot,
+ pgtable_alloc, flags);
}
#endif
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 56e4f25e8d6d..70a5a7b032dc 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -145,6 +145,33 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
#include "./mmu_include.c"
+int idmap_extend_pgtable;
+
+/*
+ * Locking: callers must provide their own serialization; none is taken here.
+ * TODO: tear down the idmap (no requirement at present).
+ */
+void create_idmap(pgd_t *pgdir, phys_addr_t phys,
+ phys_addr_t size,
+ pgprot_t prot,
+ phys_addr_t (*pgtable_alloc)(int),
+ int flags)
+{
+ u64 ptrs_per_pgd = idmap_ptrs_per_pgd;
+
+#ifdef CONFIG_IDMAP_PGTABLE_EXPAND
+ if (idmap_extend_pgtable)
+ __create_pgd_mapping_extend(pgdir, ptrs_per_pgd,
+ phys, phys, size, prot, pgtable_alloc, flags);
+ else
+ __create_pgd_mapping(pgdir, ptrs_per_pgd,
+ phys, phys, size, prot, pgtable_alloc, flags);
+#else
+ __create_pgd_mapping(pgdir, ptrs_per_pgd,
+ phys, phys, size, prot, pgtable_alloc, flags);
+#endif
+}
+
/*
* This function can only be used to modify existing table entries,
* without allocating new levels of table. Note that this permits the
@@ -158,7 +185,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
&phys, virt);
return;
}
- __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
+ __create_pgd_mapping(init_mm.pgd, PTRS_PER_PGD, phys, virt, size, prot, NULL,
NO_CONT_MAPPINGS);
}
@@ -173,7 +200,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
if (page_mappings_only)
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
- __create_pgd_mapping(mm->pgd, phys, virt, size, prot,
+ __create_pgd_mapping(mm->pgd, PTRS_PER_PGD, phys, virt, size, prot,
pgd_pgtable_alloc, flags);
}
@@ -186,7 +213,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
return;
}
- __create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
+ __create_pgd_mapping(init_mm.pgd, PTRS_PER_PGD, phys, virt, size, prot, NULL,
NO_CONT_MAPPINGS);
/* flush the TLBs after updating live kernel mappings */
@@ -196,7 +223,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
phys_addr_t end, pgprot_t prot, int flags)
{
- __create_pgd_mapping(pgdp, start, __phys_to_virt(start), end - start,
+ __create_pgd_mapping(pgdp, PTRS_PER_PGD, start, __phys_to_virt(start), end - start,
prot, early_pgtable_alloc, flags);
}
@@ -297,7 +324,7 @@ static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
BUG_ON(!PAGE_ALIGNED(pa_start));
BUG_ON(!PAGE_ALIGNED(size));
- __create_pgd_mapping(pgdp, pa_start, (unsigned long)va_start, size, prot,
+ __create_pgd_mapping(pgdp, PTRS_PER_PGD, pa_start, (unsigned long)va_start, size, prot,
early_pgtable_alloc, flags);
if (!(vm_flags & VM_NO_GUARD))
@@ -341,7 +368,7 @@ static int __init map_entry_trampoline(void)
/* Map only the text into the trampoline page table */
memset(tramp_pg_dir, 0, PGD_SIZE);
- __create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
+ __create_pgd_mapping(tramp_pg_dir, PTRS_PER_PGD, pa_start, TRAMP_VALIAS, PAGE_SIZE,
prot, __pgd_pgtable_alloc, 0);
/* Map both the text and data into the kernel page table */
@@ -1233,7 +1260,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
IS_ENABLED(CONFIG_KFENCE))
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
- __create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
+ __create_pgd_mapping(swapper_pg_dir, PTRS_PER_PGD, start, __phys_to_virt(start),
size, params->pgprot, __pgd_pgtable_alloc,
flags);
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
index 95ff35a3c6cb..be51689d1133 100644
--- a/arch/arm64/mm/mmu_include.c
+++ b/arch/arm64/mm/mmu_include.c
@@ -241,14 +241,19 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
pud_clear_fixmap();
}
-static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+static void __create_pgd_mapping(pgd_t *pgdir, unsigned int entries_cnt, phys_addr_t phys,
unsigned long virt, phys_addr_t size,
pgprot_t prot,
phys_addr_t (*pgtable_alloc)(int),
int flags)
{
unsigned long addr, end, next;
- pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
+ pgd_t *pgdp;
+
+ if (likely(entries_cnt == PTRS_PER_PGD))
+ pgdp = pgd_offset_pgd(pgdir, virt);
+ else
+ pgdp = pgdir + ((virt >> PGDIR_SHIFT) & (entries_cnt - 1));
/*
* If the virtual and physical address don't have the same offset
--
2.29.2
* [PATCHv2 03/10] arm64/mm: change __create_pgd_mapping() to accept extra parameter for allocator
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 01/10] arm64/mm: split out __create_pgd_mapping() routines Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 02/10] arm64/mm: change __create_pgd_mapping() to accept nr_entries param and introduce create_idmap() Pingfan Liu
@ 2021-04-25 14:12 ` Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 04/10] arm64/mm: enable __create_pgd_mapping() to run across different pgtable Pingfan Liu
` (6 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:12 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
The current pgtable allocators only accept the pgtable level shift as a
parameter; each allocator function determines its memory pool by
itself.
But an upcoming allocator needs an extra parameter to locate the local
pool that backs its allocations. Prepare the prototype for that change
by passing an opaque info pointer alongside the shift.
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/include/asm/pgalloc.h | 5 +++-
arch/arm64/mm/idmap_mmu.c | 5 ++--
arch/arm64/mm/mmu.c | 31 +++++++++++++------------
arch/arm64/mm/mmu_include.c | 39 +++++++++++++++++++-------------
4 files changed, 46 insertions(+), 34 deletions(-)
diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 555792921af0..42f602528b90 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -83,11 +83,14 @@ pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep)
}
#define pmd_pgtable(pmd) pmd_page(pmd)
+typedef phys_addr_t (*pgtable_alloc)(unsigned long shift, void *data);
+
extern void __create_pgd_mapping_extend(pgd_t *pgdir,
unsigned int entries_cnt, phys_addr_t phys,
unsigned long virt, phys_addr_t size,
pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
+ pgtable_alloc allocator,
+ void *info,
int flags);
#endif
diff --git a/arch/arm64/mm/idmap_mmu.c b/arch/arm64/mm/idmap_mmu.c
index bff1bffee321..4477cc2704a7 100644
--- a/arch/arm64/mm/idmap_mmu.c
+++ b/arch/arm64/mm/idmap_mmu.c
@@ -27,11 +27,12 @@ void __create_pgd_mapping_extend(pgd_t *pgdir,
unsigned long virt,
phys_addr_t size,
pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
+ pgtable_alloc allocator,
+ void *info,
int flags)
{
__create_pgd_mapping(pgdir, entries_cnt, phys, virt, size, prot,
- pgtable_alloc, flags);
+ allocator, info, flags);
}
#endif
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 70a5a7b032dc..520738c43874 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -86,7 +86,7 @@ pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
}
EXPORT_SYMBOL(phys_mem_access_prot);
-static phys_addr_t __init early_pgtable_alloc(int shift)
+static phys_addr_t __init early_pgtable_alloc(unsigned long unused_a, void *unused_b)
{
phys_addr_t phys;
void *ptr;
@@ -113,7 +113,7 @@ static phys_addr_t __init early_pgtable_alloc(int shift)
return phys;
}
-static phys_addr_t __pgd_pgtable_alloc(int shift)
+static phys_addr_t __pgd_pgtable_alloc(unsigned long unused_a, void *unused_b)
{
void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
BUG_ON(!ptr);
@@ -123,9 +123,9 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
return __pa(ptr);
}
-static phys_addr_t pgd_pgtable_alloc(int shift)
+static phys_addr_t pgd_pgtable_alloc(unsigned long shift, void *unused)
{
- phys_addr_t pa = __pgd_pgtable_alloc(shift);
+ phys_addr_t pa = __pgd_pgtable_alloc(shift, unused);
/*
* Call proper page table ctor in case later we need to
@@ -154,7 +154,8 @@ int idmap_extend_pgtable;
void create_idmap(pgd_t *pgdir, phys_addr_t phys,
phys_addr_t size,
pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
+ pgtable_alloc allocator,
+ void *info,
int flags)
{
u64 ptrs_per_pgd = idmap_ptrs_per_pgd;
@@ -162,13 +163,13 @@ void create_idmap(pgd_t *pgdir, phys_addr_t phys,
#ifdef CONFIG_IDMAP_PGTABLE_EXPAND
if (idmap_extend_pgtable)
__create_pgd_mapping_extend(pgdir, ptrs_per_pgd,
- phys, phys, size, prot, pgtable_alloc, flags);
+ phys, phys, size, prot, allocator, info, flags);
else
__create_pgd_mapping(pgdir, ptrs_per_pgd,
- phys, phys, size, prot, pgtable_alloc, flags);
+ phys, phys, size, prot, allocator, info, flags);
#else
__create_pgd_mapping(pgdir, ptrs_per_pgd,
- phys, phys, size, prot, pgtable_alloc, flags);
+ phys, phys, size, prot, allocator, info, flags);
#endif
}
@@ -186,7 +187,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
return;
}
__create_pgd_mapping(init_mm.pgd, PTRS_PER_PGD, phys, virt, size, prot, NULL,
- NO_CONT_MAPPINGS);
+ NULL, NO_CONT_MAPPINGS);
}
void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
@@ -201,7 +202,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(mm->pgd, PTRS_PER_PGD, phys, virt, size, prot,
- pgd_pgtable_alloc, flags);
+ pgd_pgtable_alloc, NULL, flags);
}
static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
@@ -214,7 +215,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
}
__create_pgd_mapping(init_mm.pgd, PTRS_PER_PGD, phys, virt, size, prot, NULL,
- NO_CONT_MAPPINGS);
+ NULL, NO_CONT_MAPPINGS);
/* flush the TLBs after updating live kernel mappings */
flush_tlb_kernel_range(virt, virt + size);
@@ -224,7 +225,7 @@ static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
phys_addr_t end, pgprot_t prot, int flags)
{
__create_pgd_mapping(pgdp, PTRS_PER_PGD, start, __phys_to_virt(start), end - start,
- prot, early_pgtable_alloc, flags);
+ prot, early_pgtable_alloc, NULL, flags);
}
void __init mark_linear_text_alias_ro(void)
@@ -325,7 +326,7 @@ static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
BUG_ON(!PAGE_ALIGNED(size));
__create_pgd_mapping(pgdp, PTRS_PER_PGD, pa_start, (unsigned long)va_start, size, prot,
- early_pgtable_alloc, flags);
+ early_pgtable_alloc, NULL, flags);
if (!(vm_flags & VM_NO_GUARD))
size += PAGE_SIZE;
@@ -369,7 +370,7 @@ static int __init map_entry_trampoline(void)
/* Map only the text into the trampoline page table */
memset(tramp_pg_dir, 0, PGD_SIZE);
__create_pgd_mapping(tramp_pg_dir, PTRS_PER_PGD, pa_start, TRAMP_VALIAS, PAGE_SIZE,
- prot, __pgd_pgtable_alloc, 0);
+ prot, __pgd_pgtable_alloc, NULL, 0);
/* Map both the text and data into the kernel page table */
__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
@@ -1261,7 +1262,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
__create_pgd_mapping(swapper_pg_dir, PTRS_PER_PGD, start, __phys_to_virt(start),
- size, params->pgprot, __pgd_pgtable_alloc,
+ size, params->pgprot, __pgd_pgtable_alloc, NULL,
flags);
memblock_clear_nomap(start, size);
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
index be51689d1133..732e603fe3fc 100644
--- a/arch/arm64/mm/mmu_include.c
+++ b/arch/arm64/mm/mmu_include.c
@@ -69,7 +69,8 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
unsigned long end, phys_addr_t phys,
pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
+ pgtable_alloc allocator,
+ void *info,
int flags)
{
unsigned long next;
@@ -79,8 +80,8 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
if (pmd_none(pmd)) {
phys_addr_t pte_phys;
- BUG_ON(!pgtable_alloc);
- pte_phys = pgtable_alloc(PAGE_SHIFT);
+ BUG_ON(!allocator);
+ pte_phys = allocator(PAGE_SHIFT, info);
__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
pmd = READ_ONCE(*pmdp);
}
@@ -104,7 +105,9 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
phys_addr_t phys, pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int), int flags)
+ pgtable_alloc allocator,
+ void *info,
+ int flags)
{
unsigned long next;
pmd_t *pmdp;
@@ -128,7 +131,7 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
READ_ONCE(pmd_val(*pmdp))));
} else {
alloc_init_cont_pte(pmdp, addr, next, phys, prot,
- pgtable_alloc, flags);
+ allocator, info, flags);
BUG_ON(pmd_val(old_pmd) != 0 &&
pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
@@ -142,7 +145,9 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
unsigned long end, phys_addr_t phys,
pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int), int flags)
+ pgtable_alloc allocator,
+ void *info,
+ int flags)
{
unsigned long next;
pud_t pud = READ_ONCE(*pudp);
@@ -154,8 +159,8 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
if (pud_none(pud)) {
phys_addr_t pmd_phys;
- BUG_ON(!pgtable_alloc);
- pmd_phys = pgtable_alloc(PMD_SHIFT);
+ BUG_ON(!allocator);
+ pmd_phys = allocator(PMD_SHIFT, info);
__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
pud = READ_ONCE(*pudp);
}
@@ -171,7 +176,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
(flags & NO_CONT_MAPPINGS) == 0)
__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
- init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
+ init_pmd(pudp, addr, next, phys, __prot, allocator, info, flags);
phys += next - addr;
} while (addr = next, addr != end);
@@ -191,7 +196,8 @@ static inline bool use_1G_block(unsigned long addr, unsigned long next,
static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
phys_addr_t phys, pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
+ pgtable_alloc allocator,
+ void *info,
int flags)
{
unsigned long next;
@@ -202,8 +208,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
if (p4d_none(p4d)) {
phys_addr_t pud_phys;
- BUG_ON(!pgtable_alloc);
- pud_phys = pgtable_alloc(PUD_SHIFT);
+ BUG_ON(!allocator);
+ pud_phys = allocator(PUD_SHIFT, info);
__p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
p4d = READ_ONCE(*p4dp);
}
@@ -230,7 +236,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
READ_ONCE(pud_val(*pudp))));
} else {
alloc_init_cont_pmd(pudp, addr, next, phys, prot,
- pgtable_alloc, flags);
+ allocator, info, flags);
BUG_ON(pud_val(old_pud) != 0 &&
pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
@@ -244,7 +250,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
static void __create_pgd_mapping(pgd_t *pgdir, unsigned int entries_cnt, phys_addr_t phys,
unsigned long virt, phys_addr_t size,
pgprot_t prot,
- phys_addr_t (*pgtable_alloc)(int),
+ pgtable_alloc allocator,
+ void *info,
int flags)
{
unsigned long addr, end, next;
@@ -268,8 +275,8 @@ static void __create_pgd_mapping(pgd_t *pgdir, unsigned int entries_cnt, phys_ad
do {
next = pgd_addr_end(addr, end);
- alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
- flags);
+ alloc_init_pud(pgdp, addr, next, phys, prot, allocator,
+ info, flags);
phys += next - addr;
} while (pgdp++, addr = next, addr != end);
}
--
2.29.2
* [PATCHv2 04/10] arm64/mm: enable __create_pgd_mapping() to run across different pgtable
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
` (2 preceding siblings ...)
2021-04-25 14:12 ` [PATCHv2 03/10] arm64/mm: change __create_pgd_mapping() to accept extra parameter for allocator Pingfan Liu
@ 2021-04-25 14:12 ` Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 05/10] arm64/mm: port trans_pgd_idmap_page() onto create_idmap() Pingfan Liu
` (5 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:12 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
__create_pgd_mapping() is planned to be used not only during the boot
process but also after the system is fully up, where several callers may
run __create_pgd_mapping() concurrently.
In that case the PUD/PMD/PTE fixmap slots must not be used; instead, the
table is accessed through the virtual address owned by each pgtable
page.
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
rfc -> v2:
change allocator return type back to phys_addr_t
---
arch/arm64/mm/mmu_include.c | 29 +++++++++++++++++++++--------
1 file changed, 21 insertions(+), 8 deletions(-)
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
index 732e603fe3fc..ac8850fe6ce2 100644
--- a/arch/arm64/mm/mmu_include.c
+++ b/arch/arm64/mm/mmu_include.c
@@ -7,6 +7,7 @@
#define NO_BLOCK_MAPPINGS BIT(0)
#define NO_CONT_MAPPINGS BIT(1)
+#define NO_FIXMAP BIT(2)
static bool pgattr_change_is_safe(u64 old, u64 new)
{
@@ -43,11 +44,14 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
}
static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
- phys_addr_t phys, pgprot_t prot)
+ phys_addr_t phys, pgprot_t prot, int flags)
{
pte_t *ptep;
- ptep = pte_set_fixmap_offset(pmdp, addr);
+ if (likely(!(flags & NO_FIXMAP)))
+ ptep = pte_set_fixmap_offset(pmdp, addr);
+ else
+ ptep = pte_offset_kernel(pmdp, addr);
do {
pte_t old_pte = READ_ONCE(*ptep);
@@ -63,7 +67,8 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
phys += PAGE_SIZE;
} while (ptep++, addr += PAGE_SIZE, addr != end);
- pte_clear_fixmap();
+ if (likely(!(flags & NO_FIXMAP)))
+ pte_clear_fixmap();
}
static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
@@ -97,7 +102,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
(flags & NO_CONT_MAPPINGS) == 0)
__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
- init_pte(pmdp, addr, next, phys, __prot);
+ init_pte(pmdp, addr, next, phys, __prot, flags);
phys += next - addr;
} while (addr = next, addr != end);
@@ -112,7 +117,10 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
unsigned long next;
pmd_t *pmdp;
- pmdp = pmd_set_fixmap_offset(pudp, addr);
+ if (likely(!(flags & NO_FIXMAP)))
+ pmdp = pmd_set_fixmap_offset(pudp, addr);
+ else
+ pmdp = pmd_offset(pudp, addr);
do {
pmd_t old_pmd = READ_ONCE(*pmdp);
@@ -139,7 +147,8 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
phys += next - addr;
} while (pmdp++, addr = next, addr != end);
- pmd_clear_fixmap();
+ if (likely(!(flags & NO_FIXMAP)))
+ pmd_clear_fixmap();
}
static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
@@ -215,7 +224,10 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
}
BUG_ON(p4d_bad(p4d));
- pudp = pud_set_fixmap_offset(p4dp, addr);
+ if (likely(!(flags & NO_FIXMAP)))
+ pudp = pud_set_fixmap_offset(p4dp, addr);
+ else
+ pudp = pud_offset(p4dp, addr);
do {
pud_t old_pud = READ_ONCE(*pudp);
@@ -244,7 +256,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
phys += next - addr;
} while (pudp++, addr = next, addr != end);
- pud_clear_fixmap();
+ if (likely(!(flags & NO_FIXMAP)))
+ pud_clear_fixmap();
}
static void __create_pgd_mapping(pgd_t *pgdir, unsigned int entries_cnt, phys_addr_t phys,
--
2.29.2
* [PATCHv2 05/10] arm64/mm: port trans_pgd_idmap_page() onto create_idmap()
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
` (3 preceding siblings ...)
2021-04-25 14:12 ` [PATCHv2 04/10] arm64/mm: enable __create_pgd_mapping() to run across different pgtable Pingfan Liu
@ 2021-04-25 14:12 ` Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 06/10] arm64/mm: introduce pgtable allocator for idmap_pg_dir and init_pg_dir Pingfan Liu
` (4 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:12 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
At present, trans_pgd_idmap_page() has its own logic to set up the
idmap. To share the common code, port it onto create_idmap().
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/include/asm/pgalloc.h | 13 +++++++
arch/arm64/mm/mmu_include.c | 4 ---
arch/arm64/mm/trans_pgd.c | 62 ++++++++++++++++----------------
3 files changed, 43 insertions(+), 36 deletions(-)
diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 42f602528b90..8e6638b4d1dd 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -93,4 +93,17 @@ extern void __create_pgd_mapping_extend(pgd_t *pgdir,
void *info,
int flags);
+extern int idmap_extend_pgtable;
+
+extern void create_idmap(pgd_t *pgdir, phys_addr_t phys,
+ phys_addr_t size,
+ pgprot_t prot,
+ pgtable_alloc allocator,
+ void *info,
+ int flags);
+
+#define NO_BLOCK_MAPPINGS BIT(0)
+#define NO_CONT_MAPPINGS BIT(1)
+#define NO_FIXMAP BIT(2)
+
#endif
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
index ac8850fe6ce2..98c560197ea9 100644
--- a/arch/arm64/mm/mmu_include.c
+++ b/arch/arm64/mm/mmu_include.c
@@ -5,10 +5,6 @@
* and MMU-enable
*/
-#define NO_BLOCK_MAPPINGS BIT(0)
-#define NO_CONT_MAPPINGS BIT(1)
-#define NO_FIXMAP BIT(2)
-
static bool pgattr_change_is_safe(u64 old, u64 new)
{
/*
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 527f0a39c3da..9f4512ab8659 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -274,6 +274,14 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
return 0;
}
+static phys_addr_t allocator_trans_alloc(unsigned long unused, void *info)
+{
+ unsigned long *p;
+
+ p = trans_alloc(info);
+ return virt_to_phys(p);
+}
+
/*
* The page we want to idmap may be outside the range covered by VA_BITS that
* can be built using the kernel's p?d_populate() helpers. As a one off, for a
@@ -287,38 +295,28 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0,
unsigned long *t0sz, void *page)
{
- phys_addr_t dst_addr = virt_to_phys(page);
- unsigned long pfn = __phys_to_pfn(dst_addr);
- int max_msb = (dst_addr & GENMASK(52, 48)) ? 51 : 47;
- int bits_mapped = PAGE_SHIFT - 4;
- unsigned long level_mask, prev_level_entry, *levels[4];
- int this_level, index, level_lsb, level_msb;
-
- dst_addr &= PAGE_MASK;
- prev_level_entry = pte_val(pfn_pte(pfn, PAGE_KERNEL_EXEC));
-
- for (this_level = 3; this_level >= 0; this_level--) {
- levels[this_level] = trans_alloc(info);
- if (!levels[this_level])
- return -ENOMEM;
-
- level_lsb = ARM64_HW_PGTABLE_LEVEL_SHIFT(this_level);
- level_msb = min(level_lsb + bits_mapped, max_msb);
- level_mask = GENMASK_ULL(level_msb, level_lsb);
-
- index = (dst_addr & level_mask) >> level_lsb;
- *(levels[this_level] + index) = prev_level_entry;
-
- pfn = virt_to_pfn(levels[this_level]);
- prev_level_entry = pte_val(pfn_pte(pfn,
- __pgprot(PMD_TYPE_TABLE)));
-
- if (level_msb == max_msb)
- break;
- }
-
- *trans_ttbr0 = phys_to_ttbr(__pfn_to_phys(pfn));
- *t0sz = TCR_T0SZ(max_msb + 1);
+ pgd_t *pgdir = trans_alloc(info);
+ unsigned long base, step, level, va_bits;
+ int flags = NO_FIXMAP;
+
+#ifdef CONFIG_ARM64_64K_PAGES
+ base = 16;
+ step = 13;
+#elif defined(CONFIG_ARM64_4K_PAGES)
+ base = 12;
+ step = 9;
+#elif defined(CONFIG_ARM64_16K_PAGES)
+ base = 14;
+ step = 11;
+#endif
+ create_idmap(pgdir, virt_to_phys(page), PAGE_SIZE, PAGE_KERNEL_EXEC,
+ allocator_trans_alloc, info, flags);
+
+ *trans_ttbr0 = phys_to_ttbr(__virt_to_phys(pgdir));
+ level = CONFIG_PGTABLE_LEVELS + (idmap_extend_pgtable ? 1 : 0);
+ va_bits = base + step * level;
+ va_bits = min(va_bits, vabits_actual);
+ *t0sz = 64 - va_bits;
return 0;
}
--
2.29.2
* [PATCHv2 06/10] arm64/mm: introduce pgtable allocator for idmap_pg_dir and init_pg_dir
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
` (4 preceding siblings ...)
2021-04-25 14:12 ` [PATCHv2 05/10] arm64/mm: port trans_pgd_idmap_page() onto create_idmap() Pingfan Liu
@ 2021-04-25 14:13 ` Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 07/10] arm64/pgtable-prot.h: reorganize to cope with asm Pingfan Liu
` (3 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:13 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
From the point of view of __create_pgd_mapping(), both idmap_pg_dir and
init_pg_dir can be treated as memory pools. Introduce an allocator
working on them, so that __create_pgd_mapping() can create mappings.
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/mm/mmu.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 520738c43874..fa1d1d4fee8f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -86,6 +86,28 @@ pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
}
EXPORT_SYMBOL(phys_mem_access_prot);
+struct mempool {
+ phys_addr_t start;
+ unsigned long size;
+ unsigned long next_idx;
+};
+
+struct mempool cur_pool;
+
+void set_cur_mempool(phys_addr_t start, unsigned long size)
+{
+ cur_pool.start = start;
+ cur_pool.size = size;
+ cur_pool.next_idx = 0;
+}
+
+phys_addr_t __init head_pgtable_alloc(unsigned long unused_a, void *unused_b)
+{
+ unsigned long idx = cur_pool.next_idx++;
+
+ return cur_pool.start + (idx << PAGE_SHIFT);
+}
+
static phys_addr_t __init early_pgtable_alloc(unsigned long unused_a, void *unused_b)
{
phys_addr_t phys;
--
2.29.2
* [PATCHv2 07/10] arm64/pgtable-prot.h: reorganize to cope with asm
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
` (5 preceding siblings ...)
2021-04-25 14:13 ` [PATCHv2 06/10] arm64/mm: introduce pgtable allocator for idmap_pg_dir and init_pg_dir Pingfan Liu
@ 2021-04-25 14:13 ` Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 08/10] arm64/mmu_include.c: disable WARN_ON() and BUG_ON() when booting Pingfan Liu
` (2 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:13 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
This patch is one of the preparations for calling __create_pgd_mapping()
from head.S.
In order to refer to PAGE_KERNEL_EXEC from head.S, reorganize this file:
move the needed definitions out of the C-only section and provide asm
fallbacks under #ifdef __ASSEMBLY__.
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/include/asm/pgtable-prot.h | 34 +++++++++++++++++----------
1 file changed, 21 insertions(+), 13 deletions(-)
diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 9a65fb528110..424fc5e6fd69 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -33,9 +33,6 @@
extern bool arm64_use_ng_mappings;
-#define _PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define _PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
-
#define PTE_MAYBE_NG (arm64_use_ng_mappings ? PTE_NG : 0)
#define PMD_MAYBE_NG (arm64_use_ng_mappings ? PMD_SECT_NG : 0)
@@ -49,6 +46,26 @@ extern bool arm64_use_ng_mappings;
#define PTE_MAYBE_GP 0
#endif
+#define PAGE_S2_MEMATTR(attr) \
+ ({ \
+ u64 __val; \
+ if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) \
+ __val = PTE_S2_MEMATTR(MT_S2_FWB_ ## attr); \
+ else \
+ __val = PTE_S2_MEMATTR(MT_S2_ ## attr); \
+ __val; \
+ })
+
+#endif /* __ASSEMBLY__ */
+
+#ifdef __ASSEMBLY__
+#define PTE_MAYBE_NG 0
+#define __pgprot(x) (x)
+#endif
+
+#define _PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define _PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+
#define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG)
#define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
@@ -71,15 +88,7 @@ extern bool arm64_use_ng_mappings;
#define PAGE_KERNEL_EXEC __pgprot(PROT_NORMAL & ~PTE_PXN)
#define PAGE_KERNEL_EXEC_CONT __pgprot((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
-#define PAGE_S2_MEMATTR(attr) \
- ({ \
- u64 __val; \
- if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB)) \
- __val = PTE_S2_MEMATTR(MT_S2_FWB_ ## attr); \
- else \
- __val = PTE_S2_MEMATTR(MT_S2_ ## attr); \
- __val; \
- })
+
#define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
/* shared+writable pages are clean by default, hence PTE_RDONLY|PTE_WRITE */
@@ -106,6 +115,5 @@ extern bool arm64_use_ng_mappings;
#define __S110 PAGE_SHARED_EXEC
#define __S111 PAGE_SHARED_EXEC
-#endif /* __ASSEMBLY__ */
#endif /* __ASM_PGTABLE_PROT_H */
--
2.29.2
* [PATCHv2 08/10] arm64/mmu_include.c: disable WARN_ON() and BUG_ON() when booting.
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
` (6 preceding siblings ...)
2021-04-25 14:13 ` [PATCHv2 07/10] arm64/pgtable-prot.h: reorganize to cope with asm Pingfan Liu
@ 2021-04-25 14:13 ` Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 09/10] arm64/mm: make __create_pgd_mapping() coped with pgtable's paddr Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 10/10] arm64/head: convert idmap_pg_dir and init_pg_dir to __create_pgd_mapping() Pingfan Liu
9 siblings, 0 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:13 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
This patch is another preparation for calling
__create_pgd_mapping() from head.S.
When called from head.S, printk is not yet ready to work. Hence define
SAFE_BUG_ON()/SAFE_WARN_ON(), which wrap BUG_ON()/WARN_ON() and skip
them when called this early.
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/include/asm/pgalloc.h | 1 +
arch/arm64/mm/mmu_include.c | 36 +++++++++++++++++++-------------
2 files changed, 23 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 8e6638b4d1dd..c3875af99432 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -105,5 +105,6 @@ extern void create_idmap(pgd_t *pgdir, phys_addr_t phys,
#define NO_BLOCK_MAPPINGS BIT(0)
#define NO_CONT_MAPPINGS BIT(1)
#define NO_FIXMAP BIT(2)
+#define BOOT_HEAD BIT(3)
#endif
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
index 98c560197ea9..746cb2b502a3 100644
--- a/arch/arm64/mm/mmu_include.c
+++ b/arch/arm64/mm/mmu_include.c
@@ -5,6 +5,14 @@
* and MMU-enable
*/
+/*
+ * printk is not ready at the very early boot stage, so this pair of
+ * macros should be used instead of plain BUG_ON()/WARN_ON().
+ */
+#define SAFE_BUG_ON(x, y) do { if (likely(!((x) & BOOT_HEAD))) BUG_ON(y); } while (0)
+#define SAFE_WARN_ON(x, y) \
+ ({ ((x) & BOOT_HEAD) ? false : WARN_ON(y); })
+
static bool pgattr_change_is_safe(u64 old, u64 new)
{
/*
@@ -57,7 +65,7 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
* After the PTE entry has been populated once, we
* only allow updates to the permission attributes.
*/
- BUG_ON(!pgattr_change_is_safe(pte_val(old_pte),
+ SAFE_BUG_ON(flags, !pgattr_change_is_safe(pte_val(old_pte),
READ_ONCE(pte_val(*ptep))));
phys += PAGE_SIZE;
@@ -77,16 +85,16 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
unsigned long next;
pmd_t pmd = READ_ONCE(*pmdp);
- BUG_ON(pmd_sect(pmd));
+ SAFE_BUG_ON(flags, pmd_sect(pmd));
if (pmd_none(pmd)) {
phys_addr_t pte_phys;
- BUG_ON(!allocator);
+ SAFE_BUG_ON(flags, !allocator);
pte_phys = allocator(PAGE_SHIFT, info);
__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
pmd = READ_ONCE(*pmdp);
}
- BUG_ON(pmd_bad(pmd));
+ SAFE_BUG_ON(flags, pmd_bad(pmd));
do {
pgprot_t __prot = prot;
@@ -131,13 +139,13 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
* After the PMD entry has been populated once, we
* only allow updates to the permission attributes.
*/
- BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
+ SAFE_BUG_ON(flags, !pgattr_change_is_safe(pmd_val(old_pmd),
READ_ONCE(pmd_val(*pmdp))));
} else {
alloc_init_cont_pte(pmdp, addr, next, phys, prot,
allocator, info, flags);
- BUG_ON(pmd_val(old_pmd) != 0 &&
+ SAFE_BUG_ON(flags, pmd_val(old_pmd) != 0 &&
pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
}
phys += next - addr;
@@ -160,16 +168,16 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
/*
* Check for initial section mappings in the pgd/pud.
*/
- BUG_ON(pud_sect(pud));
+ SAFE_BUG_ON(flags, pud_sect(pud));
if (pud_none(pud)) {
phys_addr_t pmd_phys;
- BUG_ON(!allocator);
+ SAFE_BUG_ON(flags, !allocator);
pmd_phys = allocator(PMD_SHIFT, info);
__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
pud = READ_ONCE(*pudp);
}
- BUG_ON(pud_bad(pud));
+ SAFE_BUG_ON(flags, pud_bad(pud));
do {
pgprot_t __prot = prot;
@@ -213,12 +221,12 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
if (p4d_none(p4d)) {
phys_addr_t pud_phys;
- BUG_ON(!allocator);
+ SAFE_BUG_ON(flags, !allocator);
pud_phys = allocator(PUD_SHIFT, info);
__p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
p4d = READ_ONCE(*p4dp);
}
- BUG_ON(p4d_bad(p4d));
+ SAFE_BUG_ON(flags, p4d_bad(p4d));
if (likely(!(flags & NO_FIXMAP)))
pudp = pud_set_fixmap_offset(p4dp, addr);
@@ -240,13 +248,13 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
* After the PUD entry has been populated once, we
* only allow updates to the permission attributes.
*/
- BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
+ SAFE_BUG_ON(flags, !pgattr_change_is_safe(pud_val(old_pud),
READ_ONCE(pud_val(*pudp))));
} else {
alloc_init_cont_pmd(pudp, addr, next, phys, prot,
allocator, info, flags);
- BUG_ON(pud_val(old_pud) != 0 &&
+ SAFE_BUG_ON(flags, pud_val(old_pud) != 0 &&
pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
}
phys += next - addr;
@@ -275,7 +283,7 @@ static void __create_pgd_mapping(pgd_t *pgdir, unsigned int entries_cnt, phys_ad
* If the virtual and physical address don't have the same offset
* within a page, we cannot map the region as the caller expects.
*/
- if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
+ if (SAFE_WARN_ON(flags, (phys ^ virt) & ~PAGE_MASK))
return;
phys &= PAGE_MASK;
--
2.29.2
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* [PATCHv2 09/10] arm64/mm: make __create_pgd_mapping() cope with pgtable's paddr
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
` (7 preceding siblings ...)
2021-04-25 14:13 ` [PATCHv2 08/10] arm64/mmu_include.c: disable WARN_ON() and BUG_ON() when booting Pingfan Liu
@ 2021-04-25 14:13 ` Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 10/10] arm64/head: convert idmap_pg_dir and init_pg_dir to __create_pgd_mapping() Pingfan Liu
9 siblings, 0 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:13 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
This patch is the last preparation for calling __create_pgd_mapping()
from head.S.
With the MMU off, pud_t */pmd_t */pte_t * pointers hold physical
addresses. While building the page tables, they must be handled
carefully so that __va() is never applied to them.
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
arch/arm64/mm/mmu_include.c | 29 +++++++++++++++++++++++++----
1 file changed, 25 insertions(+), 4 deletions(-)
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
index 746cb2b502a3..c4ea00bae4df 100644
--- a/arch/arm64/mm/mmu_include.c
+++ b/arch/arm64/mm/mmu_include.c
@@ -54,6 +54,9 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
if (likely(!(flags & NO_FIXMAP)))
ptep = pte_set_fixmap_offset(pmdp, addr);
+ else if (flags & BOOT_HEAD)
+ /* for head.S, there is no __va() */
+ ptep = (pte_t *)__pmd_to_phys(*pmdp) + pte_index(addr);
else
ptep = pte_offset_kernel(pmdp, addr);
do {
@@ -121,10 +124,19 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
unsigned long next;
pmd_t *pmdp;
- if (likely(!(flags & NO_FIXMAP)))
+ if (likely(!(flags & NO_FIXMAP))) {
pmdp = pmd_set_fixmap_offset(pudp, addr);
- else
+ } else if (flags & BOOT_HEAD) {
+#if CONFIG_PGTABLE_LEVELS > 2
+ /* for head.S, there is no __va() */
+ pmdp = (pmd_t *)__pud_to_phys(*pudp) + pmd_index(addr);
+#else
+ pmdp = (pmd_t *)pudp;
+#endif
+ } else {
pmdp = pmd_offset(pudp, addr);
+ }
+
do {
pmd_t old_pmd = READ_ONCE(*pmdp);
@@ -228,10 +240,19 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
}
SAFE_BUG_ON(flags, p4d_bad(p4d));
- if (likely(!(flags & NO_FIXMAP)))
+ if (likely(!(flags & NO_FIXMAP))) {
pudp = pud_set_fixmap_offset(p4dp, addr);
- else
+ } else if (flags & BOOT_HEAD) {
+#if CONFIG_PGTABLE_LEVELS > 3
+ /* for head.S, there is no __va() */
+ pudp = (pud_t *)__p4d_to_phys(*p4dp) + pud_index(addr);
+#else
+ pudp = (pud_t *)p4dp;
+#endif
+ } else {
pudp = pud_offset(p4dp, addr);
+ }
+
do {
pud_t old_pud = READ_ONCE(*pudp);
--
2.29.2
* [PATCHv2 10/10] arm64/head: convert idmap_pg_dir and init_pg_dir to __create_pgd_mapping()
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
` (8 preceding siblings ...)
2021-04-25 14:13 ` [PATCHv2 09/10] arm64/mm: make __create_pgd_mapping() cope with pgtable's paddr Pingfan Liu
@ 2021-04-25 14:13 ` Pingfan Liu
9 siblings, 0 replies; 11+ messages in thread
From: Pingfan Liu @ 2021-04-25 14:13 UTC (permalink / raw)
To: linux-arm-kernel
Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
Logan Gunthorpe, Mark Brown
Now everything is ready for calling __create_pgd_mapping() from head.S.
Switch to these C routines and remove the asm counterparts.
This patch has been successfully tested with the following configurations:
PAGE_SIZE VA PA PGTABLE_LEVEL
4K 48 48 4
4K 39 48 3
16K 48 48 4
16K 47 48 3
64K 52 52 3
64K 42 52 2
Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
RFC -> V2:
correct the asm calling convention.
---
arch/arm64/include/asm/pgalloc.h | 5 +
arch/arm64/kernel/head.S | 193 ++++++++-----------------------
arch/arm64/mm/mmu.c | 13 +++
3 files changed, 66 insertions(+), 145 deletions(-)
diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index c3875af99432..b1182b656b00 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -8,6 +8,9 @@
#ifndef __ASM_PGALLOC_H
#define __ASM_PGALLOC_H
+#include <vdso/bits.h>
+
+#ifndef __ASSEMBLY__
#include <asm/pgtable-hwdef.h>
#include <asm/processor.h>
#include <asm/cacheflush.h>
@@ -102,6 +105,8 @@ extern void create_idmap(pgd_t *pgdir, phys_addr_t phys,
void *info,
int flags);
+#endif
+
#define NO_BLOCK_MAPPINGS BIT(0)
#define NO_CONT_MAPPINGS BIT(1)
#define NO_FIXMAP BIT(2)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index e19649dbbafb..ddb9601d61c2 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -28,6 +28,8 @@
#include <asm/memory.h>
#include <asm/pgtable-hwdef.h>
#include <asm/page.h>
+#include <asm/pgtable-prot.h>
+#include <asm/pgalloc.h>
#include <asm/scs.h>
#include <asm/smp.h>
#include <asm/sysreg.h>
@@ -92,6 +94,8 @@ SYM_CODE_START(primary_entry)
bl init_kernel_el // w0=cpu_boot_mode
adrp x23, __PHYS_OFFSET
and x23, x23, MIN_KIMG_ALIGN - 1 // KASLR offset, defaults to 0
+ adrp x4, init_thread_union
+ add sp, x4, #THREAD_SIZE
bl set_cpu_boot_mode_flag
bl __create_page_tables
/*
@@ -121,135 +125,6 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
b __inval_dcache_area // tail call
SYM_CODE_END(preserve_boot_args)
-/*
- * Macro to create a table entry to the next page.
- *
- * tbl: page table address
- * virt: virtual address
- * shift: #imm page table shift
- * ptrs: #imm pointers per table page
- *
- * Preserves: virt
- * Corrupts: ptrs, tmp1, tmp2
- * Returns: tbl -> next level table page address
- */
- .macro create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2
- add \tmp1, \tbl, #PAGE_SIZE
- phys_to_pte \tmp2, \tmp1
- orr \tmp2, \tmp2, #PMD_TYPE_TABLE // address of next table and entry type
- lsr \tmp1, \virt, #\shift
- sub \ptrs, \ptrs, #1
- and \tmp1, \tmp1, \ptrs // table index
- str \tmp2, [\tbl, \tmp1, lsl #3]
- add \tbl, \tbl, #PAGE_SIZE // next level table page
- .endm
-
-/*
- * Macro to populate page table entries, these entries can be pointers to the next level
- * or last level entries pointing to physical memory.
- *
- * tbl: page table address
- * rtbl: pointer to page table or physical memory
- * index: start index to write
- * eindex: end index to write - [index, eindex] written to
- * flags: flags for pagetable entry to or in
- * inc: increment to rtbl between each entry
- * tmp1: temporary variable
- *
- * Preserves: tbl, eindex, flags, inc
- * Corrupts: index, tmp1
- * Returns: rtbl
- */
- .macro populate_entries, tbl, rtbl, index, eindex, flags, inc, tmp1
-.Lpe\@: phys_to_pte \tmp1, \rtbl
- orr \tmp1, \tmp1, \flags // tmp1 = table entry
- str \tmp1, [\tbl, \index, lsl #3]
- add \rtbl, \rtbl, \inc // rtbl = pa next level
- add \index, \index, #1
- cmp \index, \eindex
- b.ls .Lpe\@
- .endm
-
-/*
- * Compute indices of table entries from virtual address range. If multiple entries
- * were needed in the previous page table level then the next page table level is assumed
- * to be composed of multiple pages. (This effectively scales the end index).
- *
- * vstart: virtual address of start of range
- * vend: virtual address of end of range
- * shift: shift used to transform virtual address into index
- * ptrs: number of entries in page table
- * istart: index in table corresponding to vstart
- * iend: index in table corresponding to vend
- * count: On entry: how many extra entries were required in previous level, scales
- * our end index.
- * On exit: returns how many extra entries required for next page table level
- *
- * Preserves: vstart, vend, shift, ptrs
- * Returns: istart, iend, count
- */
- .macro compute_indices, vstart, vend, shift, ptrs, istart, iend, count
- lsr \iend, \vend, \shift
- mov \istart, \ptrs
- sub \istart, \istart, #1
- and \iend, \iend, \istart // iend = (vend >> shift) & (ptrs - 1)
- mov \istart, \ptrs
- mul \istart, \istart, \count
- add \iend, \iend, \istart // iend += (count - 1) * ptrs
- // our entries span multiple tables
-
- lsr \istart, \vstart, \shift
- mov \count, \ptrs
- sub \count, \count, #1
- and \istart, \istart, \count
-
- sub \count, \iend, \istart
- .endm
-
-/*
- * Map memory for specified virtual address range. Each level of page table needed supports
- * multiple entries. If a level requires n entries the next page table level is assumed to be
- * formed from n pages.
- *
- * tbl: location of page table
- * rtbl: address to be used for first level page table entry (typically tbl + PAGE_SIZE)
- * vstart: start address to map
- * vend: end address to map - we map [vstart, vend]
- * flags: flags to use to map last level entries
- * phys: physical address corresponding to vstart - physical memory is contiguous
- * pgds: the number of pgd entries
- *
- * Temporaries: istart, iend, tmp, count, sv - these need to be different registers
- * Preserves: vstart, vend, flags
- * Corrupts: tbl, rtbl, istart, iend, tmp, count, sv
- */
- .macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv
- add \rtbl, \tbl, #PAGE_SIZE
- mov \sv, \rtbl
- mov \count, #0
- compute_indices \vstart, \vend, #PGDIR_SHIFT, \pgds, \istart, \iend, \count
- populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
- mov \tbl, \sv
- mov \sv, \rtbl
-
-#if SWAPPER_PGTABLE_LEVELS > 3
- compute_indices \vstart, \vend, #PUD_SHIFT, #PTRS_PER_PUD, \istart, \iend, \count
- populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
- mov \tbl, \sv
- mov \sv, \rtbl
-#endif
-
-#if SWAPPER_PGTABLE_LEVELS > 2
- compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #PTRS_PER_PMD, \istart, \iend, \count
- populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
- mov \tbl, \sv
-#endif
-
- compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #PTRS_PER_PTE, \istart, \iend, \count
- bic \count, \phys, #SWAPPER_BLOCK_SIZE - 1
- populate_entries \tbl, \count, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp
- .endm
-
/*
* Setup the initial page tables. We only setup the barest amount which is
* required to get the kernel running. The following sections are required:
@@ -345,8 +220,6 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
mov x5, #1
str x5, [x4] //require expanded pagetable
- mov x4, EXTRA_PTRS
- create_table_entry x0, x3, EXTRA_SHIFT, x4, x5, x6
#else
/*
* If VA_BITS == 48, we don't have to configure an additional
@@ -356,25 +229,55 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
str_l x4, idmap_ptrs_per_pgd, x5
#endif
1:
- ldr_l x4, idmap_ptrs_per_pgd
- mov x5, x3 // __pa(__idmap_text_start)
- adr_l x6, __idmap_text_end // __pa(__idmap_text_end)
-
- map_memory x0, x1, x3, x6, x7, x3, x4, x10, x11, x12, x13, x14
+ stp x0, x1, [sp, #-64]!
+ stp x2, x3, [sp, #48]
+ stp x4, x5, [sp, #32]
+ stp x6, x7, [sp, #16]
+
+ adrp x0, idmap_pg_dir
+ adrp x1, idmap_pg_end
+ sub x1, x1, x0
+ bl set_cur_mempool
+	mov	x0, #0		// shift, unused by head_pgtable_alloc
+	mov	x1, #0		// info
+ bl head_pgtable_alloc // x0 contains idmap_pg_dir
+
+ adrp x1, __idmap_text_start
+ adr_l x2, __idmap_text_end
+ sub x2, x2, x1
+ ldr x3, =PAGE_KERNEL_EXEC
+ adr_l x4, head_pgtable_alloc
+ mov x5, #0
+ mov x6, #(NO_FIXMAP | BOOT_HEAD)
+ bl create_idmap
/*
* Map the kernel image (starting with PHYS_OFFSET).
*/
adrp x0, init_pg_dir
- mov_q x5, KIMAGE_VADDR // compile time __va(_text)
- add x5, x5, x23 // add KASLR displacement
- mov x4, PTRS_PER_PGD
- adrp x6, _end // runtime __pa(_end)
- adrp x3, _text // runtime __pa(_text)
- sub x6, x6, x3 // _end - _text
- add x6, x6, x5 // runtime __va(_end)
-
- map_memory x0, x1, x5, x6, x7, x3, x4, x10, x11, x12, x13, x14
+ adrp x1, init_pg_end
+ sub x1, x1, x0
+ bl set_cur_mempool
+	mov	x0, #0		// shift, unused by head_pgtable_alloc
+	mov	x1, #0		// info
+ bl head_pgtable_alloc // x0 is init_pg_dir
+
+ adrp x1, _text // runtime __pa(_text)
+ mov_q x2, KIMAGE_VADDR // compile time __va(_text)
+ add x2, x2, x23 // add KASLR displacement
+ adrp x3, _end // runtime __pa(_end)
+ sub x3, x3, x1 // _end - _text
+
+ ldr x4, =PAGE_KERNEL_EXEC
+ adr_l x5, head_pgtable_alloc
+ mov x6, #0
+ mov x7, #(NO_FIXMAP | BOOT_HEAD)
+
+ bl create_init_pgd_mapping
+ ldp x6, x7, [sp, #16]
+ ldp x4, x5, [sp, #32]
+ ldp x2, x3, [sp, #48]
+ ldp x0, x1, [sp], #64
/*
* Since the page tables have been populated with non-cacheable
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index fa1d1d4fee8f..1ae72a3f2d27 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -167,6 +167,19 @@ static phys_addr_t pgd_pgtable_alloc(unsigned long shift, void *unused)
#include "./mmu_include.c"
+void create_init_pgd_mapping(pgd_t *pgdir,
+ phys_addr_t phys,
+ unsigned long virt,
+ phys_addr_t size,
+ pgprot_t prot,
+ pgtable_alloc allocator,
+ void* info,
+ int flags)
+{
+ __create_pgd_mapping(pgdir, PTRS_PER_PGD, phys, virt, size,
+ prot, allocator, info, flags);
+}
+
int idmap_extend_pgtable;
/*
--
2.29.2
end of thread, other threads:[~2021-04-25 14:17 UTC | newest]
Thread overview: 11+ messages
2021-04-25 14:12 [PATCHv2 00/10] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 01/10] arm64/mm: split out __create_pgd_mapping() routines Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 02/10] arm64/mm: change __create_pgd_mapping() to accept nr_entries param and introduce create_idmap() Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 03/10] arm64/mm: change __create_pgd_mapping() to accept extra parameter for allocator Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 04/10] arm64/mm: enable __create_pgd_mapping() to run across different pgtable Pingfan Liu
2021-04-25 14:12 ` [PATCHv2 05/10] arm64/mm: port trans_pgd_idmap_page() onto create_idmap() Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 06/10] arm64/mm: introduce pgtable allocator for idmap_pg_dir and init_pg_dir Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 07/10] arm64/pgtable-prot.h: reorganize to cope with asm Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 08/10] arm64/mmu_include.c: disable WARN_ON() and BUG_ON() when booting Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 09/10] arm64/mm: make __create_pgd_mapping() cope with pgtable's paddr Pingfan Liu
2021-04-25 14:13 ` [PATCHv2 10/10] arm64/head: convert idmap_pg_dir and init_pg_dir to __create_pgd_mapping() Pingfan Liu