linux-arm-kernel.lists.infradead.org archive mirror
* [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes
@ 2021-04-10  9:56 Pingfan Liu
  2021-04-10  9:56 ` [RFC 1/8] arm64/mm: split out __create_pgd_mapping() routines Pingfan Liu
                   ` (8 more replies)
  0 siblings, 9 replies; 13+ messages in thread
From: Pingfan Liu @ 2021-04-10  9:56 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
	Logan Gunthorpe, Mark Brown

Hi everyone,

Sorry for bringing up this RFC in a hurry: I noticed the "arm64: MMU
enabled kexec relocation" work too late, and it has now advanced to
"[PATCH v13 00/18] arm64: MMU enabled kexec relocation".

I think that work could be based on this series.

I raised this concern when reviewing "[PATCH v12 00/17] arm64: MMU
enabled kexec relocation"
  https://linuxlists.cc/l/1/linux-kernel/t/3923858/(patch_v12_00_17)_arm64:_mmu_enabled_kexec_relocation#post3948651
  (it seems that lore.kernel.org has not archived my reply),
  where I wrote:
    Then the process may be neat (I hope so):
    -1. set up the identity map in machine_kexec_post_load(), instead of
    copying the linear map.
    -2. also pass this temporary identity map to arm64_relocate_new_kernel()
    -3. in arm64_relocate_new_kernel(), just load the identity map and
    re-enable the MMU. After copying, just turn the MMU off.

In a short offline discussion, Pavel pointed me to
  https://lore.kernel.org/linux-arm-kernel/CA+CK2bC2KwWufE1DWa4szn_hQ1dbjDVHgYUu7=J4O_kvKXTrHg@mail.gmail.com/#t,
which prevented him from using the idmap to implement his series.


After digging into the code, I found that, by extending the page table
by one more level, the __create_pgd_mapping() routines can be reused for
idmap_pg_dir and init_pg_dir. They can also be reused for
trans_pgd_idmap_page(). That is what this series does.

As for "[PATCHv13 00/18] arm64: MMU enabled kexec relocation", here is
my two cents:
  -1. a call to create_idmap() API in machine_kexec_post_load(), to map
src + dst + arm64_relocate_new_kernel().
  -2. turn on MMU in arm64_relocate_new_kernel(), after done, turn off.
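
A minimal sketch of what I mean (the call site is hypothetical; the
allocator, 'info' cookie and relocation-code size are illustrative, not
part of this series):

    /* in machine_kexec_post_load(), sketch only */
    for (i = 0; i < kimage->nr_segments; i++)
    	create_idmap(idmap, kimage->segment[i].mem,
    		     kimage->segment[i].memsz, PAGE_KERNEL,
    		     allocator, info, NO_FIXMAP);

    /* also map the relocation code itself, executable */
    create_idmap(idmap, __pa(arm64_relocate_new_kernel), reloc_size,
    		 PAGE_KERNEL_EXEC, allocator, info, NO_FIXMAP);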

Sorry again for the hurry. It compiles, but it is far from good.

Thanks,

Pingfan

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org

Pingfan Liu (8):
  arm64/mm: split out __create_pgd_mapping() routines
  arm64/mm: change __create_pgd_mapping() prototype to accept nr_entries
    and introduce create_idmap()
  arm64/mm: change __create_pgd_mapping() prototype to accept extra info
    for allocator
  arm64/mm: enable __create_pgd_mapping() to run across different
    pgtable
  arm64/mm: make trans_pgd_idmap_page() use create_idmap()
  arm64/mm: introduce pgtable allocator for head
  arm64/pgtable-prot.h: reorganize to cope with asm
  arm64/head: convert idmap_pg_dir and init_pg_dir to
    __create_pgd_mapping()

 arch/arm64/Kconfig                    |   4 +
 arch/arm64/include/asm/pgalloc.h      |  28 ++
 arch/arm64/include/asm/pgtable-prot.h |  34 ++-
 arch/arm64/kernel/head.S              | 190 ++++----------
 arch/arm64/mm/Makefile                |   2 +
 arch/arm64/mm/idmap_mmu.c             |  46 ++++
 arch/arm64/mm/mmu.c                   | 358 ++++++--------------------
 arch/arm64/mm/mmu_include.c           | 284 ++++++++++++++++++++
 arch/arm64/mm/trans_pgd.c             |  59 ++---
 9 files changed, 535 insertions(+), 470 deletions(-)
 create mode 100644 arch/arm64/mm/idmap_mmu.c
 create mode 100644 arch/arm64/mm/mmu_include.c

-- 
2.29.2



* [RFC 1/8] arm64/mm: split out __create_pgd_mapping() routines
  2021-04-10  9:56 [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
@ 2021-04-10  9:56 ` Pingfan Liu
  2021-04-14 13:19   ` Pingfan Liu
  2021-04-10  9:56 ` [RFC 2/8] arm64/mm: change __create_pgd_mapping() prototype to accept nr_entries and introduce create_idmap() Pingfan Liu
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 13+ messages in thread
From: Pingfan Liu @ 2021-04-10  9:56 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
	Logan Gunthorpe, Mark Brown

Split out the __create_pgd_mapping() routines so that they can be
compiled twice, generating one set of operations for
CONFIG_PGTABLE_LEVELS and another for CONFIG_PGTABLE_LEVELS + 1.

The set generated with 'CONFIG_PGTABLE_LEVELS + 1' can later be used
for the idmap when VA_BITS is too small to cover system RAM that is
located sufficiently high in the physical address space.

The idmap can then be created by calling __create_pgd_mapping()
directly.
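
For illustration, a worked example (one of the configurations this
applies to): with 4K pages and VA_BITS=39, CONFIG_PGTABLE_LEVELS is 3
and an idmap rooted at a normal pgd can only reach 1 << 39 = 512GiB of
physical address space. Building the same routines again with
CONFIG_PGTABLE_LEVELS + 1 = 4 extends the reach to 1 << 48:

  reach = 1 << (PAGE_SHIFT + (PAGE_SHIFT - 3) * levels)
        = 1 << (12 + 9 * 3) = 1 << 39	(3 levels)
        = 1 << (12 + 9 * 4) = 1 << 48	(4 levels)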

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/Kconfig          |   4 +
 arch/arm64/mm/Makefile      |   2 +
 arch/arm64/mm/idmap_mmu.c   |  45 ++++++
 arch/arm64/mm/mmu.c         | 263 +-----------------------------------
 arch/arm64/mm/mmu_include.c | 262 +++++++++++++++++++++++++++++++++++
 5 files changed, 315 insertions(+), 261 deletions(-)
 create mode 100644 arch/arm64/mm/idmap_mmu.c
 create mode 100644 arch/arm64/mm/mmu_include.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index e4e1b6550115..989fc501a1b4 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -327,6 +327,10 @@ config PGTABLE_LEVELS
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
 
+config IDMAP_PGTABLE_EXPAND
+	def_bool y
+	depends on (ARM64_4K_PAGES && ARM64_VA_BITS_39) || (ARM64_64K_PAGES && ARM64_VA_BITS_42)
+
 config ARCH_SUPPORTS_UPROBES
 	def_bool y
 
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index f188c9092696..f9283cb9a201 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -3,6 +3,8 @@ obj-y				:= dma-mapping.o extable.o fault.o init.o \
 				   cache.o copypage.o flush.o \
 				   ioremap.o mmap.o pgd.o mmu.o \
 				   context.o proc.o pageattr.o
+
+obj-$(CONFIG_IDMAP_PGTABLE_EXPAND)	+= idmap_mmu.o
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 obj-$(CONFIG_PTDUMP_CORE)	+= ptdump.o
 obj-$(CONFIG_PTDUMP_DEBUGFS)	+= ptdump_debugfs.o
diff --git a/arch/arm64/mm/idmap_mmu.c b/arch/arm64/mm/idmap_mmu.c
new file mode 100644
index 000000000000..7e9a4f4017d3
--- /dev/null
+++ b/arch/arm64/mm/idmap_mmu.c
@@ -0,0 +1,45 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+
+#include <asm/barrier.h>
+#include <asm/cputype.h>
+#include <asm/fixmap.h>
+#include <asm/kasan.h>
+#include <asm/kernel-pgtable.h>
+#include <asm/sections.h>
+#include <asm/setup.h>
+#include <linux/sizes.h>
+#include <asm/tlb.h>
+#include <asm/mmu_context.h>
+#include <asm/ptdump.h>
+#include <asm/tlbflush.h>
+#include <asm/pgalloc.h>
+
+#ifdef CONFIG_IDMAP_PGTABLE_EXPAND
+
+#if CONFIG_PGTABLE_LEVELS == 2
+#define EXTEND_LEVEL 3
+#elif CONFIG_PGTABLE_LEVELS == 3
+#define EXTEND_LEVEL 4
+#endif
+
+#undef CONFIG_PGTABLE_LEVELS
+#define CONFIG_PGTABLE_LEVELS EXTEND_LEVEL
+
+
+#include "./mmu_include.c"
+
+void __create_pgd_mapping_extend(pgd_t *pgdir, unsigned int entries_cnt, phys_addr_t phys,
+				 unsigned long virt, phys_addr_t size,
+				 pgprot_t prot,
+				 phys_addr_t (*pgtable_alloc)(int),
+				 int flags)
+{
+	__create_pgd_mapping(pgdir, entries_cnt, phys, virt, size, prot, pgtable_alloc, flags);
+}
+#endif
+
+
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 5d9550fdb9cf..56e4f25e8d6d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -37,9 +37,6 @@
 #include <asm/tlbflush.h>
 #include <asm/pgalloc.h>
 
-#define NO_BLOCK_MAPPINGS	BIT(0)
-#define NO_CONT_MAPPINGS	BIT(1)
-
 u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 
@@ -116,264 +113,6 @@ static phys_addr_t __init early_pgtable_alloc(int shift)
 	return phys;
 }
 
-static bool pgattr_change_is_safe(u64 old, u64 new)
-{
-	/*
-	 * The following mapping attributes may be updated in live
-	 * kernel mappings without the need for break-before-make.
-	 */
-	pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG;
-
-	/* creating or taking down mappings is always safe */
-	if (old == 0 || new == 0)
-		return true;
-
-	/* live contiguous mappings may not be manipulated at all */
-	if ((old | new) & PTE_CONT)
-		return false;
-
-	/* Transitioning from Non-Global to Global is unsafe */
-	if (old & ~new & PTE_NG)
-		return false;
-
-	/*
-	 * Changing the memory type between Normal and Normal-Tagged is safe
-	 * since Tagged is considered a permission attribute from the
-	 * mismatched attribute aliases perspective.
-	 */
-	if (((old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
-	     (old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)) &&
-	    ((new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
-	     (new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)))
-		mask |= PTE_ATTRINDX_MASK;
-
-	return ((old ^ new) & ~mask) == 0;
-}
-
-static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
-		     phys_addr_t phys, pgprot_t prot)
-{
-	pte_t *ptep;
-
-	ptep = pte_set_fixmap_offset(pmdp, addr);
-	do {
-		pte_t old_pte = READ_ONCE(*ptep);
-
-		set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
-
-		/*
-		 * After the PTE entry has been populated once, we
-		 * only allow updates to the permission attributes.
-		 */
-		BUG_ON(!pgattr_change_is_safe(pte_val(old_pte),
-					      READ_ONCE(pte_val(*ptep))));
-
-		phys += PAGE_SIZE;
-	} while (ptep++, addr += PAGE_SIZE, addr != end);
-
-	pte_clear_fixmap();
-}
-
-static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
-				unsigned long end, phys_addr_t phys,
-				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(int),
-				int flags)
-{
-	unsigned long next;
-	pmd_t pmd = READ_ONCE(*pmdp);
-
-	BUG_ON(pmd_sect(pmd));
-	if (pmd_none(pmd)) {
-		phys_addr_t pte_phys;
-		BUG_ON(!pgtable_alloc);
-		pte_phys = pgtable_alloc(PAGE_SHIFT);
-		__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
-		pmd = READ_ONCE(*pmdp);
-	}
-	BUG_ON(pmd_bad(pmd));
-
-	do {
-		pgprot_t __prot = prot;
-
-		next = pte_cont_addr_end(addr, end);
-
-		/* use a contiguous mapping if the range is suitably aligned */
-		if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
-		    (flags & NO_CONT_MAPPINGS) == 0)
-			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
-
-		init_pte(pmdp, addr, next, phys, __prot);
-
-		phys += next - addr;
-	} while (addr = next, addr != end);
-}
-
-static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
-		     phys_addr_t phys, pgprot_t prot,
-		     phys_addr_t (*pgtable_alloc)(int), int flags)
-{
-	unsigned long next;
-	pmd_t *pmdp;
-
-	pmdp = pmd_set_fixmap_offset(pudp, addr);
-	do {
-		pmd_t old_pmd = READ_ONCE(*pmdp);
-
-		next = pmd_addr_end(addr, end);
-
-		/* try section mapping first */
-		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
-			pmd_set_huge(pmdp, phys, prot);
-
-			/*
-			 * After the PMD entry has been populated once, we
-			 * only allow updates to the permission attributes.
-			 */
-			BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
-						      READ_ONCE(pmd_val(*pmdp))));
-		} else {
-			alloc_init_cont_pte(pmdp, addr, next, phys, prot,
-					    pgtable_alloc, flags);
-
-			BUG_ON(pmd_val(old_pmd) != 0 &&
-			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
-		}
-		phys += next - addr;
-	} while (pmdp++, addr = next, addr != end);
-
-	pmd_clear_fixmap();
-}
-
-static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
-				unsigned long end, phys_addr_t phys,
-				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(int), int flags)
-{
-	unsigned long next;
-	pud_t pud = READ_ONCE(*pudp);
-
-	/*
-	 * Check for initial section mappings in the pgd/pud.
-	 */
-	BUG_ON(pud_sect(pud));
-	if (pud_none(pud)) {
-		phys_addr_t pmd_phys;
-		BUG_ON(!pgtable_alloc);
-		pmd_phys = pgtable_alloc(PMD_SHIFT);
-		__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
-		pud = READ_ONCE(*pudp);
-	}
-	BUG_ON(pud_bad(pud));
-
-	do {
-		pgprot_t __prot = prot;
-
-		next = pmd_cont_addr_end(addr, end);
-
-		/* use a contiguous mapping if the range is suitably aligned */
-		if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
-		    (flags & NO_CONT_MAPPINGS) == 0)
-			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
-
-		init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
-
-		phys += next - addr;
-	} while (addr = next, addr != end);
-}
-
-static inline bool use_1G_block(unsigned long addr, unsigned long next,
-			unsigned long phys)
-{
-	if (PAGE_SHIFT != 12)
-		return false;
-
-	if (((addr | next | phys) & ~PUD_MASK) != 0)
-		return false;
-
-	return true;
-}
-
-static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
-			   phys_addr_t phys, pgprot_t prot,
-			   phys_addr_t (*pgtable_alloc)(int),
-			   int flags)
-{
-	unsigned long next;
-	pud_t *pudp;
-	p4d_t *p4dp = p4d_offset(pgdp, addr);
-	p4d_t p4d = READ_ONCE(*p4dp);
-
-	if (p4d_none(p4d)) {
-		phys_addr_t pud_phys;
-		BUG_ON(!pgtable_alloc);
-		pud_phys = pgtable_alloc(PUD_SHIFT);
-		__p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
-		p4d = READ_ONCE(*p4dp);
-	}
-	BUG_ON(p4d_bad(p4d));
-
-	pudp = pud_set_fixmap_offset(p4dp, addr);
-	do {
-		pud_t old_pud = READ_ONCE(*pudp);
-
-		next = pud_addr_end(addr, end);
-
-		/*
-		 * For 4K granule only, attempt to put down a 1GB block
-		 */
-		if (use_1G_block(addr, next, phys) &&
-		    (flags & NO_BLOCK_MAPPINGS) == 0) {
-			pud_set_huge(pudp, phys, prot);
-
-			/*
-			 * After the PUD entry has been populated once, we
-			 * only allow updates to the permission attributes.
-			 */
-			BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
-						      READ_ONCE(pud_val(*pudp))));
-		} else {
-			alloc_init_cont_pmd(pudp, addr, next, phys, prot,
-					    pgtable_alloc, flags);
-
-			BUG_ON(pud_val(old_pud) != 0 &&
-			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
-		}
-		phys += next - addr;
-	} while (pudp++, addr = next, addr != end);
-
-	pud_clear_fixmap();
-}
-
-static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
-				 unsigned long virt, phys_addr_t size,
-				 pgprot_t prot,
-				 phys_addr_t (*pgtable_alloc)(int),
-				 int flags)
-{
-	unsigned long addr, end, next;
-	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
-
-	/*
-	 * If the virtual and physical address don't have the same offset
-	 * within a page, we cannot map the region as the caller expects.
-	 */
-	if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
-		return;
-
-	phys &= PAGE_MASK;
-	addr = virt & PAGE_MASK;
-	end = PAGE_ALIGN(virt + size);
-
-	do {
-		next = pgd_addr_end(addr, end);
-		alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
-			       flags);
-		phys += next - addr;
-	} while (pgdp++, addr = next, addr != end);
-}
-
 static phys_addr_t __pgd_pgtable_alloc(int shift)
 {
 	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
@@ -404,6 +143,8 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 	return pa;
 }
 
+#include "./mmu_include.c"
+
 /*
  * This function can only be used to modify existing table entries,
  * without allocating new levels of table. Note that this permits the
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
new file mode 100644
index 000000000000..e9ebdffe860b
--- /dev/null
+++ b/arch/arm64/mm/mmu_include.c
@@ -0,0 +1,262 @@
+// SPDX-License-Identifier: GPL-2.0-only
+
+#define NO_BLOCK_MAPPINGS	BIT(0)
+#define NO_CONT_MAPPINGS	BIT(1)
+
+static bool pgattr_change_is_safe(u64 old, u64 new)
+{
+	/*
+	 * The following mapping attributes may be updated in live
+	 * kernel mappings without the need for break-before-make.
+	 */
+	pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG;
+
+	/* creating or taking down mappings is always safe */
+	if (old == 0 || new == 0)
+		return true;
+
+	/* live contiguous mappings may not be manipulated at all */
+	if ((old | new) & PTE_CONT)
+		return false;
+
+	/* Transitioning from Non-Global to Global is unsafe */
+	if (old & ~new & PTE_NG)
+		return false;
+
+	/*
+	 * Changing the memory type between Normal and Normal-Tagged is safe
+	 * since Tagged is considered a permission attribute from the
+	 * mismatched attribute aliases perspective.
+	 */
+	if (((old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
+	     (old & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)) &&
+	    ((new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL) ||
+	     (new & PTE_ATTRINDX_MASK) == PTE_ATTRINDX(MT_NORMAL_TAGGED)))
+		mask |= PTE_ATTRINDX_MASK;
+
+	return ((old ^ new) & ~mask) == 0;
+}
+
+static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
+		     phys_addr_t phys, pgprot_t prot)
+{
+	pte_t *ptep;
+
+	ptep = pte_set_fixmap_offset(pmdp, addr);
+	do {
+		pte_t old_pte = READ_ONCE(*ptep);
+
+		set_pte(ptep, pfn_pte(__phys_to_pfn(phys), prot));
+
+		/*
+		 * After the PTE entry has been populated once, we
+		 * only allow updates to the permission attributes.
+		 */
+		BUG_ON(!pgattr_change_is_safe(pte_val(old_pte),
+					      READ_ONCE(pte_val(*ptep))));
+
+		phys += PAGE_SIZE;
+	} while (ptep++, addr += PAGE_SIZE, addr != end);
+
+	pte_clear_fixmap();
+}
+
+static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
+				unsigned long end, phys_addr_t phys,
+				pgprot_t prot,
+				phys_addr_t (*pgtable_alloc)(int),
+				int flags)
+{
+	unsigned long next;
+	pmd_t pmd = READ_ONCE(*pmdp);
+
+	BUG_ON(pmd_sect(pmd));
+	if (pmd_none(pmd)) {
+		phys_addr_t pte_phys;
+		BUG_ON(!pgtable_alloc);
+		pte_phys = pgtable_alloc(PAGE_SHIFT);
+		__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
+		pmd = READ_ONCE(*pmdp);
+	}
+	BUG_ON(pmd_bad(pmd));
+
+	do {
+		pgprot_t __prot = prot;
+
+		next = pte_cont_addr_end(addr, end);
+
+		/* use a contiguous mapping if the range is suitably aligned */
+		if ((((addr | next | phys) & ~CONT_PTE_MASK) == 0) &&
+		    (flags & NO_CONT_MAPPINGS) == 0)
+			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+		init_pte(pmdp, addr, next, phys, __prot);
+
+		phys += next - addr;
+	} while (addr = next, addr != end);
+}
+
+static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
+		     phys_addr_t phys, pgprot_t prot,
+		     phys_addr_t (*pgtable_alloc)(int), int flags)
+{
+	unsigned long next;
+	pmd_t *pmdp;
+
+	pmdp = pmd_set_fixmap_offset(pudp, addr);
+	do {
+		pmd_t old_pmd = READ_ONCE(*pmdp);
+
+		next = pmd_addr_end(addr, end);
+
+		/* try section mapping first */
+		if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
+		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+			pmd_set_huge(pmdp, phys, prot);
+
+			/*
+			 * After the PMD entry has been populated once, we
+			 * only allow updates to the permission attributes.
+			 */
+			BUG_ON(!pgattr_change_is_safe(pmd_val(old_pmd),
+						      READ_ONCE(pmd_val(*pmdp))));
+		} else {
+			alloc_init_cont_pte(pmdp, addr, next, phys, prot,
+					    pgtable_alloc, flags);
+
+			BUG_ON(pmd_val(old_pmd) != 0 &&
+			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
+		}
+		phys += next - addr;
+	} while (pmdp++, addr = next, addr != end);
+
+	pmd_clear_fixmap();
+}
+
+static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
+				unsigned long end, phys_addr_t phys,
+				pgprot_t prot,
+				phys_addr_t (*pgtable_alloc)(int), int flags)
+{
+	unsigned long next;
+	pud_t pud = READ_ONCE(*pudp);
+
+	/*
+	 * Check for initial section mappings in the pgd/pud.
+	 */
+	BUG_ON(pud_sect(pud));
+	if (pud_none(pud)) {
+		phys_addr_t pmd_phys;
+		BUG_ON(!pgtable_alloc);
+		pmd_phys = pgtable_alloc(PMD_SHIFT);
+		__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
+		pud = READ_ONCE(*pudp);
+	}
+	BUG_ON(pud_bad(pud));
+
+	do {
+		pgprot_t __prot = prot;
+
+		next = pmd_cont_addr_end(addr, end);
+
+		/* use a contiguous mapping if the range is suitably aligned */
+		if ((((addr | next | phys) & ~CONT_PMD_MASK) == 0) &&
+		    (flags & NO_CONT_MAPPINGS) == 0)
+			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+
+		init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
+
+		phys += next - addr;
+	} while (addr = next, addr != end);
+}
+
+static inline bool use_1G_block(unsigned long addr, unsigned long next,
+			unsigned long phys)
+{
+	if (PAGE_SHIFT != 12)
+		return false;
+
+	if (((addr | next | phys) & ~PUD_MASK) != 0)
+		return false;
+
+	return true;
+}
+
+static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
+			   phys_addr_t phys, pgprot_t prot,
+			   phys_addr_t (*pgtable_alloc)(int),
+			   int flags)
+{
+	unsigned long next;
+	pud_t *pudp;
+	p4d_t *p4dp = p4d_offset(pgdp, addr);
+	p4d_t p4d = READ_ONCE(*p4dp);
+
+	if (p4d_none(p4d)) {
+		phys_addr_t pud_phys;
+		BUG_ON(!pgtable_alloc);
+		pud_phys = pgtable_alloc(PUD_SHIFT);
+		__p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
+		p4d = READ_ONCE(*p4dp);
+	}
+	BUG_ON(p4d_bad(p4d));
+
+	pudp = pud_set_fixmap_offset(p4dp, addr);
+	do {
+		pud_t old_pud = READ_ONCE(*pudp);
+
+		next = pud_addr_end(addr, end);
+
+		/*
+		 * For 4K granule only, attempt to put down a 1GB block
+		 */
+		if (use_1G_block(addr, next, phys) &&
+		    (flags & NO_BLOCK_MAPPINGS) == 0) {
+			pud_set_huge(pudp, phys, prot);
+
+			/*
+			 * After the PUD entry has been populated once, we
+			 * only allow updates to the permission attributes.
+			 */
+			BUG_ON(!pgattr_change_is_safe(pud_val(old_pud),
+						      READ_ONCE(pud_val(*pudp))));
+		} else {
+			alloc_init_cont_pmd(pudp, addr, next, phys, prot,
+					    pgtable_alloc, flags);
+
+			BUG_ON(pud_val(old_pud) != 0 &&
+			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
+		}
+		phys += next - addr;
+	} while (pudp++, addr = next, addr != end);
+
+	pud_clear_fixmap();
+}
+
+static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+				 unsigned long virt, phys_addr_t size,
+				 pgprot_t prot,
+				 phys_addr_t (*pgtable_alloc)(int),
+				 int flags)
+{
+	unsigned long addr, end, next;
+	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
+
+	/*
+	 * If the virtual and physical address don't have the same offset
+	 * within a page, we cannot map the region as the caller expects.
+	 */
+	if (WARN_ON((phys ^ virt) & ~PAGE_MASK))
+		return;
+
+	phys &= PAGE_MASK;
+	addr = virt & PAGE_MASK;
+	end = PAGE_ALIGN(virt + size);
+
+	do {
+		next = pgd_addr_end(addr, end);
+		alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
+			       flags);
+		phys += next - addr;
+	} while (pgdp++, addr = next, addr != end);
+}
-- 
2.29.2



* [RFC 2/8] arm64/mm: change __create_pgd_mapping() prototype to accept nr_entries and introduce create_idmap()
  2021-04-10  9:56 [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
  2021-04-10  9:56 ` [RFC 1/8] arm64/mm: split out __create_pgd_mapping() routines Pingfan Liu
@ 2021-04-10  9:56 ` Pingfan Liu
  2021-04-10  9:56 ` [RFC 3/8] arm64/mm: change __create_pgd_mapping() prototype to accept extra info for allocator Pingfan Liu
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2021-04-10  9:56 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
	Logan Gunthorpe, Mark Brown

As the idmap may have more pgd entries than PTRS_PER_PGD, change the
prototype of __create_pgd_mapping() to take the number of pgd entries
explicitly, and introduce create_idmap() on top of it.
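
A hypothetical call site (sketch only; the mapped range and flags are
illustrative):

    /* build a 1:1 map of [phys, phys + size) in an idmap pgd */
    create_idmap(idmap_pg_dir, phys, size, PAGE_KERNEL_EXEC,
    		 early_pgtable_alloc, NO_CONT_MAPPINGS);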

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/pgalloc.h |  7 ++++++
 arch/arm64/kernel/head.S         |  3 +++
 arch/arm64/mm/mmu.c              | 41 ++++++++++++++++++++++++++------
 arch/arm64/mm/mmu_include.c      | 10 ++++++--
 4 files changed, 52 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 3c6a7f5988b1..555792921af0 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -83,4 +83,11 @@ pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep)
 }
 #define pmd_pgtable(pmd) pmd_page(pmd)
 
+extern void __create_pgd_mapping_extend(pgd_t *pgdir,
+		unsigned int entries_cnt, phys_addr_t phys,
+		unsigned long virt, phys_addr_t size,
+		pgprot_t prot,
+		phys_addr_t (*pgtable_alloc)(int),
+		int flags);
+
 #endif
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 840bda1869e9..e19649dbbafb 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -341,6 +341,9 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 #if VA_BITS != EXTRA_SHIFT
 #error "Mismatch between VA_BITS and page size/number of translation levels"
 #endif
+	adr_l	x4, idmap_extend_pgtable
+	mov	w5, #1
+	str	w5, [x4]		// request an expanded pagetable (int, so use w5)
 
 	mov	x4, EXTRA_PTRS
 	create_table_entry x0, x3, EXTRA_SHIFT, x4, x5, x6
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 56e4f25e8d6d..30afd6ed275f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -145,6 +145,33 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 
 #include "./mmu_include.c"
 
+int idmap_extend_pgtable;
+
+/*
+ * TODO: tear down parts of the idmap
+ * TODO: lock protection for concurrent population
+ */
+void create_idmap(pgd_t *pgdir, phys_addr_t phys,
+		phys_addr_t size,
+		pgprot_t prot,
+		phys_addr_t (*pgtable_alloc)(int),
+		int flags)
+{
+	u64 ptrs_per_pgd = idmap_ptrs_per_pgd;
+
+#ifdef CONFIG_IDMAP_PGTABLE_EXPAND
+	if (idmap_extend_pgtable)
+		__create_pgd_mapping_extend(pgdir, ptrs_per_pgd,
+				phys, phys, size, prot, pgtable_alloc, flags);
+	else
+		__create_pgd_mapping(pgdir, ptrs_per_pgd,
+				phys, phys, size, prot, pgtable_alloc, flags);
+#else
+	__create_pgd_mapping(pgdir, ptrs_per_pgd,
+			phys, phys, size, prot, pgtable_alloc, flags);
+#endif
+}
+
 /*
  * This function can only be used to modify existing table entries,
  * without allocating new levels of table. Note that this permits the
@@ -158,7 +185,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 			&phys, virt);
 		return;
 	}
-	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
+	__create_pgd_mapping(init_mm.pgd, PTRS_PER_PGD, phys, virt, size, prot, NULL,
 			     NO_CONT_MAPPINGS);
 }
 
@@ -173,7 +200,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 	if (page_mappings_only)
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
-	__create_pgd_mapping(mm->pgd, phys, virt, size, prot,
+	__create_pgd_mapping(mm->pgd, PTRS_PER_PGD, phys, virt, size, prot,
 			     pgd_pgtable_alloc, flags);
 }
 
@@ -186,7 +213,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 		return;
 	}
 
-	__create_pgd_mapping(init_mm.pgd, phys, virt, size, prot, NULL,
+	__create_pgd_mapping(init_mm.pgd, PTRS_PER_PGD, phys, virt, size, prot, NULL,
 			     NO_CONT_MAPPINGS);
 
 	/* flush the TLBs after updating live kernel mappings */
@@ -196,7 +223,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
 				  phys_addr_t end, pgprot_t prot, int flags)
 {
-	__create_pgd_mapping(pgdp, start, __phys_to_virt(start), end - start,
+	__create_pgd_mapping(pgdp, PTRS_PER_PGD, start, __phys_to_virt(start), end - start,
 			     prot, early_pgtable_alloc, flags);
 }
 
@@ -297,7 +324,7 @@ static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
 	BUG_ON(!PAGE_ALIGNED(pa_start));
 	BUG_ON(!PAGE_ALIGNED(size));
 
-	__create_pgd_mapping(pgdp, pa_start, (unsigned long)va_start, size, prot,
+	__create_pgd_mapping(pgdp, PTRS_PER_PGD, pa_start, (unsigned long)va_start, size, prot,
 			     early_pgtable_alloc, flags);
 
 	if (!(vm_flags & VM_NO_GUARD))
@@ -341,7 +368,7 @@ static int __init map_entry_trampoline(void)
 
 	/* Map only the text into the trampoline page table */
 	memset(tramp_pg_dir, 0, PGD_SIZE);
-	__create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
+	__create_pgd_mapping(tramp_pg_dir, PTRS_PER_PGD, pa_start, TRAMP_VALIAS, PAGE_SIZE,
 			     prot, __pgd_pgtable_alloc, 0);
 
 	/* Map both the text and data into the kernel page table */
@@ -1233,7 +1260,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 	    IS_ENABLED(CONFIG_KFENCE))
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
-	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
+	__create_pgd_mapping(swapper_pg_dir, PTRS_PER_PGD, start, __phys_to_virt(start),
 			     size, params->pgprot, __pgd_pgtable_alloc,
 			     flags);
 
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
index e9ebdffe860b..1cf5af7e2aeb 100644
--- a/arch/arm64/mm/mmu_include.c
+++ b/arch/arm64/mm/mmu_include.c
@@ -233,14 +233,20 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 	pud_clear_fixmap();
 }
 
-static void __create_pgd_mapping(pgd_t *pgdir, phys_addr_t phys,
+static void __create_pgd_mapping(pgd_t *pgdir, unsigned int entries_cnt, phys_addr_t phys,
 				 unsigned long virt, phys_addr_t size,
 				 pgprot_t prot,
 				 phys_addr_t (*pgtable_alloc)(int),
 				 int flags)
 {
 	unsigned long addr, end, next;
-	pgd_t *pgdp = pgd_offset_pgd(pgdir, virt);
+	pgd_t *pgdp;
+
+	if (likely(entries_cnt == PTRS_PER_PGD))
+		pgdp = pgd_offset_pgd(pgdir, virt);
+	else {
+		pgdp = pgdir + ((virt >> PGDIR_SHIFT) & (entries_cnt - 1));
+	}
 
 	/*
 	 * If the virtual and physical address don't have the same offset
-- 
2.29.2



* [RFC 3/8] arm64/mm: change __create_pgd_mapping() prototype to accept extra info for allocator
  2021-04-10  9:56 [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
  2021-04-10  9:56 ` [RFC 1/8] arm64/mm: split out __create_pgd_mapping() routines Pingfan Liu
  2021-04-10  9:56 ` [RFC 2/8] arm64/mm: change __create_pgd_mapping() prototype to accept nr_entries and introduce create_idmap() Pingfan Liu
@ 2021-04-10  9:56 ` Pingfan Liu
  2021-04-10  9:56 ` [RFC 4/8] arm64/mm: enable __create_pgd_mapping() to run across different pgtable Pingfan Liu
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2021-04-10  9:56 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
	Logan Gunthorpe, Mark Brown

An upcoming allocator needs extra information to locate its memory pool,
so pass an opaque 'info' cookie through __create_pgd_mapping() down to
the allocator.
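
For example, a pool-backed allocator could look roughly like this
(sketch only; struct pgd_pool is an illustrative type, not part of this
series):

    static phys_addr_t pool_pgtable_alloc(unsigned long shift, void *data)
    {
    	struct pgd_pool *pool = data;

    	/* hand out the next free page from the caller's pool */
    	return pool->base + (pool->next_idx++ << PAGE_SHIFT);
    }

and be passed as 'allocator' with the pool as 'info'.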

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/pgalloc.h |  5 +++-
 arch/arm64/mm/idmap_mmu.c        |  5 ++--
 arch/arm64/mm/mmu.c              | 31 +++++++++++++------------
 arch/arm64/mm/mmu_include.c      | 39 +++++++++++++++++++-------------
 4 files changed, 46 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 555792921af0..42f602528b90 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -83,11 +83,14 @@ pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep)
 }
 #define pmd_pgtable(pmd) pmd_page(pmd)
 
+typedef phys_addr_t (*pgtable_alloc)(unsigned long shift, void *data);
+
 extern void __create_pgd_mapping_extend(pgd_t *pgdir,
 		unsigned int entries_cnt, phys_addr_t phys,
 		unsigned long virt, phys_addr_t size,
 		pgprot_t prot,
-		phys_addr_t (*pgtable_alloc)(int),
+		pgtable_alloc allocator,
+		void *info,
 		int flags);
 
 #endif
diff --git a/arch/arm64/mm/idmap_mmu.c b/arch/arm64/mm/idmap_mmu.c
index 7e9a4f4017d3..9d9fb77ce0e9 100644
--- a/arch/arm64/mm/idmap_mmu.c
+++ b/arch/arm64/mm/idmap_mmu.c
@@ -35,10 +35,11 @@
 void __create_pgd_mapping_extend(pgd_t *pgdir, unsigned int entries_cnt, phys_addr_t phys,
 				 unsigned long virt, phys_addr_t size,
 				 pgprot_t prot,
-				 phys_addr_t (*pgtable_alloc)(int),
+				 pgtable_alloc allocator,
+				 void *info,
 				 int flags)
 {
-	__create_pgd_mapping(pgdir, entries_cnt, phys, virt, size, prot, pgtable_alloc, flags);
+	__create_pgd_mapping(pgdir, entries_cnt, phys, virt, size, prot, allocator, info, flags);
 }
 #endif
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 30afd6ed275f..0f183aaf98c9 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -86,7 +86,7 @@ pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 }
 EXPORT_SYMBOL(phys_mem_access_prot);
 
-static phys_addr_t __init early_pgtable_alloc(int shift)
+static phys_addr_t __init early_pgtable_alloc(unsigned long unused_a, void *unused_b)
 {
 	phys_addr_t phys;
 	void *ptr;
@@ -113,7 +113,7 @@ static phys_addr_t __init early_pgtable_alloc(int shift)
 	return phys;
 }
 
-static phys_addr_t __pgd_pgtable_alloc(int shift)
+static phys_addr_t __pgd_pgtable_alloc(unsigned long unused_a, void *unused_b)
 {
 	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
 	BUG_ON(!ptr);
@@ -123,9 +123,9 @@ static phys_addr_t __pgd_pgtable_alloc(int shift)
 	return __pa(ptr);
 }
 
-static phys_addr_t pgd_pgtable_alloc(int shift)
+static phys_addr_t pgd_pgtable_alloc(unsigned long shift, void *unused)
 {
-	phys_addr_t pa = __pgd_pgtable_alloc(shift);
+	phys_addr_t pa = __pgd_pgtable_alloc(shift, unused);
 
 	/*
 	 * Call proper page table ctor in case later we need to
@@ -154,7 +154,8 @@ int idmap_extend_pgtable;
 void create_idmap(pgd_t *pgdir, phys_addr_t phys,
 		phys_addr_t size,
 		pgprot_t prot,
-		phys_addr_t (*pgtable_alloc)(int),
+		pgtable_alloc allocator,
+		void *info,
 		int flags)
 {
 	u64 ptrs_per_pgd = idmap_ptrs_per_pgd;
@@ -162,13 +163,13 @@ void create_idmap(pgd_t *pgdir, phys_addr_t phys,
 #ifdef CONFIG_IDMAP_PGTABLE_EXPAND
 	if (idmap_extend_pgtable)
 		__create_pgd_mapping_extend(pgdir, ptrs_per_pgd,
-				phys, phys, size, prot, pgtable_alloc, flags);
+				phys, phys, size, prot, allocator, info, flags);
 	else
 		__create_pgd_mapping(pgdir, ptrs_per_pgd,
-				phys, phys, size, prot, pgtable_alloc, flags);
+				phys, phys, size, prot, allocator, info, flags);
 #else
 	__create_pgd_mapping(pgdir, ptrs_per_pgd,
-			phys, phys, size, prot, pgtable_alloc, flags);
+				phys, phys, size, prot, allocator, info, flags);
 #endif
 }
 
@@ -186,7 +187,7 @@ static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 		return;
 	}
 	__create_pgd_mapping(init_mm.pgd, PTRS_PER_PGD, phys, virt, size, prot, NULL,
-			     NO_CONT_MAPPINGS);
+			     NULL, NO_CONT_MAPPINGS);
 }
 
 void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
@@ -201,7 +202,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(mm->pgd, PTRS_PER_PGD, phys, virt, size, prot,
-			     pgd_pgtable_alloc, flags);
+			     pgd_pgtable_alloc, NULL, flags);
 }
 
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
@@ -214,7 +215,7 @@ static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 	}
 
 	__create_pgd_mapping(init_mm.pgd, PTRS_PER_PGD, phys, virt, size, prot, NULL,
-			     NO_CONT_MAPPINGS);
+			     NULL, NO_CONT_MAPPINGS);
 
 	/* flush the TLBs after updating live kernel mappings */
 	flush_tlb_kernel_range(virt, virt + size);
@@ -224,7 +225,7 @@ static void __init __map_memblock(pgd_t *pgdp, phys_addr_t start,
 				  phys_addr_t end, pgprot_t prot, int flags)
 {
 	__create_pgd_mapping(pgdp, PTRS_PER_PGD, start, __phys_to_virt(start), end - start,
-			     prot, early_pgtable_alloc, flags);
+			     prot, early_pgtable_alloc, NULL, flags);
 }
 
 void __init mark_linear_text_alias_ro(void)
@@ -325,7 +326,7 @@ static void __init map_kernel_segment(pgd_t *pgdp, void *va_start, void *va_end,
 	BUG_ON(!PAGE_ALIGNED(size));
 
 	__create_pgd_mapping(pgdp, PTRS_PER_PGD, pa_start, (unsigned long)va_start, size, prot,
-			     early_pgtable_alloc, flags);
+			     early_pgtable_alloc, NULL, flags);
 
 	if (!(vm_flags & VM_NO_GUARD))
 		size += PAGE_SIZE;
@@ -369,7 +370,7 @@ static int __init map_entry_trampoline(void)
 	/* Map only the text into the trampoline page table */
 	memset(tramp_pg_dir, 0, PGD_SIZE);
 	__create_pgd_mapping(tramp_pg_dir, PTRS_PER_PGD, pa_start, TRAMP_VALIAS, PAGE_SIZE,
-			     prot, __pgd_pgtable_alloc, 0);
+			     prot, __pgd_pgtable_alloc, NULL, 0);
 
 	/* Map both the text and data into the kernel page table */
 	__set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
@@ -1261,7 +1262,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
 		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
 
 	__create_pgd_mapping(swapper_pg_dir, PTRS_PER_PGD, start, __phys_to_virt(start),
-			     size, params->pgprot, __pgd_pgtable_alloc,
+			     size, params->pgprot, __pgd_pgtable_alloc, NULL,
 			     flags);
 
 	memblock_clear_nomap(start, size);
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
index 1cf5af7e2aeb..371afc7d4502 100644
--- a/arch/arm64/mm/mmu_include.c
+++ b/arch/arm64/mm/mmu_include.c
@@ -64,7 +64,8 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
 static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 				unsigned long end, phys_addr_t phys,
 				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(int),
+				pgtable_alloc allocator,
+				void *info,
 				int flags)
 {
 	unsigned long next;
@@ -73,8 +74,8 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 	BUG_ON(pmd_sect(pmd));
 	if (pmd_none(pmd)) {
 		phys_addr_t pte_phys;
-		BUG_ON(!pgtable_alloc);
-		pte_phys = pgtable_alloc(PAGE_SHIFT);
+		BUG_ON(!allocator);
+		pte_phys = allocator(PAGE_SHIFT, info);
 		__pmd_populate(pmdp, pte_phys, PMD_TYPE_TABLE);
 		pmd = READ_ONCE(*pmdp);
 	}
@@ -98,7 +99,9 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 
 static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
 		     phys_addr_t phys, pgprot_t prot,
-		     phys_addr_t (*pgtable_alloc)(int), int flags)
+		     pgtable_alloc allocator,
+		     void *info,
+		     int flags)
 {
 	unsigned long next;
 	pmd_t *pmdp;
@@ -122,7 +125,7 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
 						      READ_ONCE(pmd_val(*pmdp))));
 		} else {
 			alloc_init_cont_pte(pmdp, addr, next, phys, prot,
-					    pgtable_alloc, flags);
+					    allocator, info, flags);
 
 			BUG_ON(pmd_val(old_pmd) != 0 &&
 			       pmd_val(old_pmd) != READ_ONCE(pmd_val(*pmdp)));
@@ -136,7 +139,9 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
 static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 				unsigned long end, phys_addr_t phys,
 				pgprot_t prot,
-				phys_addr_t (*pgtable_alloc)(int), int flags)
+				pgtable_alloc allocator,
+				void *info,
+				int flags)
 {
 	unsigned long next;
 	pud_t pud = READ_ONCE(*pudp);
@@ -147,8 +152,8 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 	BUG_ON(pud_sect(pud));
 	if (pud_none(pud)) {
 		phys_addr_t pmd_phys;
-		BUG_ON(!pgtable_alloc);
-		pmd_phys = pgtable_alloc(PMD_SHIFT);
+		BUG_ON(!allocator);
+		pmd_phys = allocator(PMD_SHIFT, info);
 		__pud_populate(pudp, pmd_phys, PUD_TYPE_TABLE);
 		pud = READ_ONCE(*pudp);
 	}
@@ -164,7 +169,7 @@ static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
 		    (flags & NO_CONT_MAPPINGS) == 0)
 			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
 
-		init_pmd(pudp, addr, next, phys, __prot, pgtable_alloc, flags);
+		init_pmd(pudp, addr, next, phys, __prot, allocator, info, flags);
 
 		phys += next - addr;
 	} while (addr = next, addr != end);
@@ -184,7 +189,8 @@ static inline bool use_1G_block(unsigned long addr, unsigned long next,
 
 static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 			   phys_addr_t phys, pgprot_t prot,
-			   phys_addr_t (*pgtable_alloc)(int),
+			   pgtable_alloc allocator,
+			   void *info,
 			   int flags)
 {
 	unsigned long next;
@@ -194,8 +200,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 
 	if (p4d_none(p4d)) {
 		phys_addr_t pud_phys;
-		BUG_ON(!pgtable_alloc);
-		pud_phys = pgtable_alloc(PUD_SHIFT);
+		BUG_ON(!allocator);
+		pud_phys = allocator(PUD_SHIFT, info);
 		__p4d_populate(p4dp, pud_phys, PUD_TYPE_TABLE);
 		p4d = READ_ONCE(*p4dp);
 	}
@@ -222,7 +228,7 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 						      READ_ONCE(pud_val(*pudp))));
 		} else {
 			alloc_init_cont_pmd(pudp, addr, next, phys, prot,
-					    pgtable_alloc, flags);
+					    allocator, info, flags);
 
 			BUG_ON(pud_val(old_pud) != 0 &&
 			       pud_val(old_pud) != READ_ONCE(pud_val(*pudp)));
@@ -236,7 +242,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 static void __create_pgd_mapping(pgd_t *pgdir, unsigned int entries_cnt, phys_addr_t phys,
 				 unsigned long virt, phys_addr_t size,
 				 pgprot_t prot,
-				 phys_addr_t (*pgtable_alloc)(int),
+				 pgtable_alloc allocator,
+				 void *info,
 				 int flags)
 {
 	unsigned long addr, end, next;
@@ -261,8 +268,8 @@ static void __create_pgd_mapping(pgd_t *pgdir, unsigned int entries_cnt, phys_ad
 
 	do {
 		next = pgd_addr_end(addr, end);
-		alloc_init_pud(pgdp, addr, next, phys, prot, pgtable_alloc,
-			       flags);
+		alloc_init_pud(pgdp, addr, next, phys, prot, allocator,
+			       info, flags);
 		phys += next - addr;
 	} while (pgdp++, addr = next, addr != end);
 }
-- 
2.29.2



* [RFC 4/8] arm64/mm: enable __create_pgd_mapping() to run across different pgtable
  2021-04-10  9:56 [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
                   ` (2 preceding siblings ...)
  2021-04-10  9:56 ` [RFC 3/8] arm64/mm: change __create_pgd_mapping() prototype to accept extra info for allocator Pingfan Liu
@ 2021-04-10  9:56 ` Pingfan Liu
  2021-04-10  9:56 ` [RFC 5/8] arm64/mm: make trans_pgd_idmap_page() use create_idmap() Pingfan Liu
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2021-04-10  9:56 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
	Logan Gunthorpe, Mark Brown

The PUD/PMD/PTE fixmap slots cannot be used on different page tables
concurrently, so introduce a NO_FIXMAP flag that lets the walkers access
the target page table directly.

Also change the allocator's return type from phys_addr_t to unsigned
long, since an allocator may return a virtual address directly.
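
For instance (illustrative caller, not part of this patch): when the
target page table lives in memory that is already writable through the
linear map, the fixmap can be skipped:

    create_idmap(pgdir, phys, size, PAGE_KERNEL_EXEC,
    		 allocator, info, NO_BLOCK_MAPPINGS | NO_FIXMAP);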

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/pgalloc.h |  6 +++++-
 arch/arm64/mm/mmu.c              |  6 +++---
 arch/arm64/mm/mmu_include.c      | 31 ++++++++++++++++++++-----------
 3 files changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 42f602528b90..6e9f1e218300 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -83,7 +83,7 @@ pmd_populate(struct mm_struct *mm, pmd_t *pmdp, pgtable_t ptep)
 }
 #define pmd_pgtable(pmd) pmd_page(pmd)
 
-typedef phys_addr_t (*pgtable_alloc)(unsigned long shift, void *data);
+typedef unsigned long (*pgtable_alloc)(unsigned long shift, void *data);
 
 extern void __create_pgd_mapping_extend(pgd_t *pgdir,
 		unsigned int entries_cnt, phys_addr_t phys,
@@ -93,4 +93,8 @@ extern void __create_pgd_mapping_extend(pgd_t *pgdir,
 		void *info,
 		int flags);
 
+#define NO_BLOCK_MAPPINGS	BIT(0)
+#define NO_CONT_MAPPINGS	BIT(1)
+#define NO_FIXMAP	BIT(2)
+
 #endif
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 0f183aaf98c9..628752c3cfd0 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -86,7 +86,7 @@ pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 }
 EXPORT_SYMBOL(phys_mem_access_prot);
 
-static phys_addr_t __init early_pgtable_alloc(unsigned long unused_a, void *unused_b)
+static unsigned long __init early_pgtable_alloc(unsigned long unused_a, void *unused_b)
 {
 	phys_addr_t phys;
 	void *ptr;
@@ -113,7 +113,7 @@ static phys_addr_t __init early_pgtable_alloc(unsigned long unused_a, void *unus
 	return phys;
 }
 
-static phys_addr_t __pgd_pgtable_alloc(unsigned long unused_a, void *unused_b)
+static unsigned long __pgd_pgtable_alloc(unsigned long unused_a, void *unused_b)
 {
 	void *ptr = (void *)__get_free_page(GFP_PGTABLE_KERNEL);
 	BUG_ON(!ptr);
@@ -123,7 +123,7 @@ static phys_addr_t __pgd_pgtable_alloc(unsigned long unused_a, void *unused_b)
 	return __pa(ptr);
 }
 
-static phys_addr_t pgd_pgtable_alloc(unsigned long shift, void *unused)
+static unsigned long pgd_pgtable_alloc(unsigned long shift, void *unused)
 {
 	phys_addr_t pa = __pgd_pgtable_alloc(shift, unused);
 
diff --git a/arch/arm64/mm/mmu_include.c b/arch/arm64/mm/mmu_include.c
index 371afc7d4502..adad0f93cd53 100644
--- a/arch/arm64/mm/mmu_include.c
+++ b/arch/arm64/mm/mmu_include.c
@@ -1,8 +1,5 @@
 // SPDX-License-Identifier: GPL-2.0-only
 
-#define NO_BLOCK_MAPPINGS	BIT(0)
-#define NO_CONT_MAPPINGS	BIT(1)
-
 static bool pgattr_change_is_safe(u64 old, u64 new)
 {
 	/*
@@ -38,11 +35,14 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
 }
 
 static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
-		     phys_addr_t phys, pgprot_t prot)
+		     phys_addr_t phys, pgprot_t prot, int flags)
 {
 	pte_t *ptep;
 
-	ptep = pte_set_fixmap_offset(pmdp, addr);
+	if (likely(!(flags & NO_FIXMAP)))
+		ptep = pte_set_fixmap_offset(pmdp, addr);
+	else
+		ptep = pte_offset_kernel(pmdp, addr);
 	do {
 		pte_t old_pte = READ_ONCE(*ptep);
 
@@ -58,7 +58,8 @@ static void init_pte(pmd_t *pmdp, unsigned long addr, unsigned long end,
 		phys += PAGE_SIZE;
 	} while (ptep++, addr += PAGE_SIZE, addr != end);
 
-	pte_clear_fixmap();
+	if (likely(!(flags & NO_FIXMAP)))
+		pte_clear_fixmap();
 }
 
 static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
@@ -91,7 +92,7 @@ static void alloc_init_cont_pte(pmd_t *pmdp, unsigned long addr,
 		    (flags & NO_CONT_MAPPINGS) == 0)
 			__prot = __pgprot(pgprot_val(prot) | PTE_CONT);
 
-		init_pte(pmdp, addr, next, phys, __prot);
+		init_pte(pmdp, addr, next, phys, __prot, flags);
 
 		phys += next - addr;
 	} while (addr = next, addr != end);
@@ -106,7 +107,10 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
 	unsigned long next;
 	pmd_t *pmdp;
 
-	pmdp = pmd_set_fixmap_offset(pudp, addr);
+	if (likely(!(flags & NO_FIXMAP)))
+		pmdp = pmd_set_fixmap_offset(pudp, addr);
+	else
+		pmdp = pmd_offset(pudp, addr);
 	do {
 		pmd_t old_pmd = READ_ONCE(*pmdp);
 
@@ -133,7 +137,8 @@ static void init_pmd(pud_t *pudp, unsigned long addr, unsigned long end,
 		phys += next - addr;
 	} while (pmdp++, addr = next, addr != end);
 
-	pmd_clear_fixmap();
+	if (likely(!(flags & NO_FIXMAP)))
+		pmd_clear_fixmap();
 }
 
 static void alloc_init_cont_pmd(pud_t *pudp, unsigned long addr,
@@ -207,7 +212,10 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 	}
 	BUG_ON(p4d_bad(p4d));
 
-	pudp = pud_set_fixmap_offset(p4dp, addr);
+	if (likely(!(flags & NO_FIXMAP)))
+		pudp = pud_set_fixmap_offset(p4dp, addr);
+	else
+		pudp = pud_offset(p4dp, addr);
 	do {
 		pud_t old_pud = READ_ONCE(*pudp);
 
@@ -236,7 +244,8 @@ static void alloc_init_pud(pgd_t *pgdp, unsigned long addr, unsigned long end,
 		phys += next - addr;
 	} while (pudp++, addr = next, addr != end);
 
-	pud_clear_fixmap();
+	if (likely(!(flags & NO_FIXMAP)))
+		pud_clear_fixmap();
 }
 
 static void __create_pgd_mapping(pgd_t *pgdir, unsigned int entries_cnt, phys_addr_t phys,
-- 
2.29.2



* [RFC 5/8] arm64/mm: make trans_pgd_idmap_page() use create_idmap()
  2021-04-10  9:56 [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
                   ` (3 preceding siblings ...)
  2021-04-10  9:56 ` [RFC 4/8] arm64/mm: enable __create_pgd_mapping() to run across different pgtable Pingfan Liu
@ 2021-04-10  9:56 ` Pingfan Liu
  2021-04-10  9:56 ` [RFC 6/8] arm64/mm: introduce pgtable allocator for head Pingfan Liu
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2021-04-10  9:56 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
	Logan Gunthorpe, Mark Brown

At present, trans_pgd_idmap_page() has its own logic to set up the
idmap. To simplify the code, it can reuse create_idmap().
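
As a worked example of the new t0sz computation (assuming 4K pages):
base = 12 and step = 9, so with CONFIG_PGTABLE_LEVELS = 3 and
idmap_extend_pgtable set, level = 4 and va_bits = 12 + 9 * 4 = 48,
giving *t0sz = 64 - 48 = 16 (subject to the vabits_actual cap).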

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/pgalloc.h |  9 +++++
 arch/arm64/mm/trans_pgd.c        | 59 +++++++++++++++-----------------
 2 files changed, 36 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 6e9f1e218300..f848a0300228 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -93,6 +93,15 @@ extern void __create_pgd_mapping_extend(pgd_t *pgdir,
 		void *info,
 		int flags);
 
+extern int idmap_extend_pgtable;
+
+extern void create_idmap(pgd_t *pgdir, phys_addr_t phys,
+		phys_addr_t size,
+		pgprot_t prot,
+		pgtable_alloc allocator,
+		void *info,
+		int flags);
+
 #define NO_BLOCK_MAPPINGS	BIT(0)
 #define NO_CONT_MAPPINGS	BIT(1)
 #define NO_FIXMAP	BIT(2)
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 527f0a39c3da..004ccbadd647 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -274,6 +274,14 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
 	return 0;
 }
 
+static unsigned long allocator_trans_alloc(unsigned long unused, void *info)
+{
+	unsigned long *p;
+
+	p = trans_alloc(info);
+	return (unsigned long)p;
+}
+
 /*
  * The page we want to idmap may be outside the range covered by VA_BITS that
  * can be built using the kernel's p?d_populate() helpers. As a one off, for a
@@ -287,38 +295,25 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
 int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0,
 			 unsigned long *t0sz, void *page)
 {
-	phys_addr_t dst_addr = virt_to_phys(page);
-	unsigned long pfn = __phys_to_pfn(dst_addr);
-	int max_msb = (dst_addr & GENMASK(52, 48)) ? 51 : 47;
-	int bits_mapped = PAGE_SHIFT - 4;
-	unsigned long level_mask, prev_level_entry, *levels[4];
-	int this_level, index, level_lsb, level_msb;
-
-	dst_addr &= PAGE_MASK;
-	prev_level_entry = pte_val(pfn_pte(pfn, PAGE_KERNEL_EXEC));
-
-	for (this_level = 3; this_level >= 0; this_level--) {
-		levels[this_level] = trans_alloc(info);
-		if (!levels[this_level])
-			return -ENOMEM;
-
-		level_lsb = ARM64_HW_PGTABLE_LEVEL_SHIFT(this_level);
-		level_msb = min(level_lsb + bits_mapped, max_msb);
-		level_mask = GENMASK_ULL(level_msb, level_lsb);
-
-		index = (dst_addr & level_mask) >> level_lsb;
-		*(levels[this_level] + index) = prev_level_entry;
-
-		pfn = virt_to_pfn(levels[this_level]);
-		prev_level_entry = pte_val(pfn_pte(pfn,
-						   __pgprot(PMD_TYPE_TABLE)));
-
-		if (level_msb == max_msb)
-			break;
-	}
-
-	*trans_ttbr0 = phys_to_ttbr(__pfn_to_phys(pfn));
-	*t0sz = TCR_T0SZ(max_msb + 1);
+	pgd_t *pgdir = trans_alloc(info);
+	int flags = NO_FIXMAP;
+	unsigned long base, step, level, va_bits;
+
+#ifdef CONFIG_ARM64_64K_PAGES
+	base = 16;
+	step = 13;
+#elif defined(CONFIG_ARM64_4K_PAGES)
+	base = 12;
+	step = 9;
+#endif
+	create_idmap(pgdir, virt_to_phys(page), PAGE_SIZE, PAGE_KERNEL_EXEC,
+			allocator_trans_alloc, info, flags);
+
+	*trans_ttbr0 = phys_to_ttbr(__virt_to_phys(pgdir));
+	level = CONFIG_PGTABLE_LEVELS + (idmap_extend_pgtable ? 1 : 0);
+	va_bits = base + step * level;
+	va_bits = min(va_bits, vabits_actual);
+	*t0sz = 64 - va_bits;
 
 	return 0;
 }
-- 
2.29.2



* [RFC 6/8] arm64/mm: introduce pgtable allocator for head
  2021-04-10  9:56 [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
                   ` (4 preceding siblings ...)
  2021-04-10  9:56 ` [RFC 5/8] arm64/mm: make trans_pgd_idmap_page() use create_idmap() Pingfan Liu
@ 2021-04-10  9:56 ` Pingfan Liu
  2021-04-10  9:56 ` [RFC 7/8] arm64/pgtable-prot.h: reorganize to cope with asm Pingfan Liu
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2021-04-10  9:56 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
	Logan Gunthorpe, Mark Brown

From the point of view of __create_pgd_mapping(), both idmap_pg_dir and
init_pg_dir can be treated as memory pools. Introduce an allocator that
works on them, so that __create_pgd_mapping() can create mappings backed
by these statically allocated page tables.

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/mm/mmu.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 628752c3cfd0..b546e47543e2 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -86,6 +86,28 @@ pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 }
 EXPORT_SYMBOL(phys_mem_access_prot);
 
+struct mempool {
+	unsigned long start;
+	unsigned long size;
+	unsigned long next_idx;
+};
+
+struct mempool cur_pool;
+
+void set_cur_mempool(unsigned long start, unsigned long size)
+{
+	cur_pool.start = start;
+	cur_pool.size = size;
+	cur_pool.next_idx = 0;
+}
+
+unsigned long __init head_pgtable_alloc(unsigned long unused_a, void *unused_b)
+{
+	unsigned long idx = cur_pool.next_idx++;
+
+	return cur_pool.start + (idx << PAGE_SHIFT);
+}
+
 static unsigned long __init early_pgtable_alloc(unsigned long unused_a, void *unused_b)
 {
 	phys_addr_t phys;
-- 
2.29.2

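One thing worth noting about this allocator: as posted,
head_pgtable_alloc() never compares next_idx against cur_pool.size, so
exhausting idmap_pg_dir or init_pg_dir would silently hand out memory
past the end of the pool. A bounds-checked sketch is below; the BUG_ON
failure policy is an assumption, not part of the patch (code running
this early would likely need a gentler failure mode):

unsigned long __init head_pgtable_alloc(unsigned long unused_a, void *unused_b)
{
	unsigned long idx = cur_pool.next_idx++;

	/* cur_pool.size is in bytes; never allocate past the pool end. */
	BUG_ON(((idx + 1) << PAGE_SHIFT) > cur_pool.size);

	return cur_pool.start + (idx << PAGE_SHIFT);
}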

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [RFC 7/8] arm64/pgtable-prot.h: reorganize to cope with asm
  2021-04-10  9:56 [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
                   ` (5 preceding siblings ...)
  2021-04-10  9:56 ` [RFC 6/8] arm64/mm: introduce pgtable allocator for head Pingfan Liu
@ 2021-04-10  9:56 ` Pingfan Liu
  2021-04-10  9:56 ` [RFC 8/8] arm64/head: convert idmap_pg_dir and init_pg_dir to __create_pgd_mapping() Pingfan Liu
  2021-04-14 14:05 ` [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pavel Tatashin
  8 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2021-04-10  9:56 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
	Logan Gunthorpe, Mark Brown

In order to refer to PAGE_KERNEL_EXEC from head.S, reorganize this file
so that the definitions assembly needs are visible outside the
#ifndef __ASSEMBLY__ section.

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/pgtable-prot.h | 34 +++++++++++++++++----------
 1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
index 9a65fb528110..424fc5e6fd69 100644
--- a/arch/arm64/include/asm/pgtable-prot.h
+++ b/arch/arm64/include/asm/pgtable-prot.h
@@ -33,9 +33,6 @@
 
 extern bool arm64_use_ng_mappings;
 
-#define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
-#define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
-
 #define PTE_MAYBE_NG		(arm64_use_ng_mappings ? PTE_NG : 0)
 #define PMD_MAYBE_NG		(arm64_use_ng_mappings ? PMD_SECT_NG : 0)
 
@@ -49,6 +46,26 @@ extern bool arm64_use_ng_mappings;
 #define PTE_MAYBE_GP		0
 #endif
 
+#define PAGE_S2_MEMATTR(attr)						\
+	({								\
+		u64 __val;						\
+		if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))		\
+			__val = PTE_S2_MEMATTR(MT_S2_FWB_ ## attr);	\
+		else							\
+			__val = PTE_S2_MEMATTR(MT_S2_ ## attr);		\
+		__val;							\
+	 })
+
+#endif /* __ASSEMBLY__ */
+
+#ifdef __ASSEMBLY__
+#define PTE_MAYBE_NG	0
+#define __pgprot(x)	(x)
+#endif
+
+#define _PROT_DEFAULT		(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+#define _PROT_SECT_DEFAULT	(PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
+
 #define PROT_DEFAULT		(_PROT_DEFAULT | PTE_MAYBE_NG)
 #define PROT_SECT_DEFAULT	(_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
 
@@ -71,15 +88,7 @@ extern bool arm64_use_ng_mappings;
 #define PAGE_KERNEL_EXEC	__pgprot(PROT_NORMAL & ~PTE_PXN)
 #define PAGE_KERNEL_EXEC_CONT	__pgprot((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
 
-#define PAGE_S2_MEMATTR(attr)						\
-	({								\
-		u64 __val;						\
-		if (cpus_have_const_cap(ARM64_HAS_STAGE2_FWB))		\
-			__val = PTE_S2_MEMATTR(MT_S2_FWB_ ## attr);	\
-		else							\
-			__val = PTE_S2_MEMATTR(MT_S2_ ## attr);		\
-		__val;							\
-	 })
+
 
 #define PAGE_NONE		__pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
 /* shared+writable pages are clean by default, hence PTE_RDONLY|PTE_WRITE */
@@ -106,6 +115,5 @@ extern bool arm64_use_ng_mappings;
 #define __S110  PAGE_SHARED_EXEC
 #define __S111  PAGE_SHARED_EXEC
 
-#endif /* __ASSEMBLY__ */
 
 #endif /* __ASM_PGTABLE_PROT_H */
-- 
2.29.2

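To see why the reshuffle helps: with __ASSEMBLY__ defined, PTE_MAYBE_NG
collapses to 0 and __pgprot(x) to (x), so the prot macros reduce to
plain integer expressions that an assembler literal pool can hold (e.g.
the "ldr x3, =PAGE_KERNEL_EXEC" in the next patch). A host-side
illustration of the mechanism, using made-up bit values rather than the
kernel's real definitions:

#include <stdio.h>

/* Illustrative bit values only -- not the arm64 definitions. */
#define PTE_TYPE_PAGE	(3UL << 0)
#define PTE_AF		(1UL << 10)
#define PTE_SHARED	(3UL << 8)

/* The fallbacks this patch provides for the __ASSEMBLY__ case: */
#define PTE_MAYBE_NG	0
#define __pgprot(x)	(x)

#define _PROT_DEFAULT	(PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
#define PROT_DEFAULT	__pgprot(_PROT_DEFAULT | PTE_MAYBE_NG)

int main(void)
{
	/* A plain constant expression, evaluable by cpp or an assembler. */
	printf("PROT_DEFAULT = %#lx\n", (unsigned long)PROT_DEFAULT);
	return 0;
}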

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [RFC 8/8] arm64/head: convert idmap_pg_dir and init_pg_dir to __create_pgd_mapping()
  2021-04-10  9:56 [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
                   ` (6 preceding siblings ...)
  2021-04-10  9:56 ` [RFC 7/8] arm64/pgtable-prot.h: reorganize to cope with asm Pingfan Liu
@ 2021-04-10  9:56 ` Pingfan Liu
  2021-04-19 14:10   ` Pingfan Liu
  2021-04-14 14:05 ` [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pavel Tatashin
  8 siblings, 1 reply; 13+ messages in thread
From: Pingfan Liu @ 2021-04-10  9:56 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: Pingfan Liu, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Pavel Tatashin, Anshuman Khandual, Atish Patra, Mike Rapoport,
	Logan Gunthorpe, Mark Brown

Now that __create_pgd_mapping() can operate at either page-table depth
and allocate from a static pool, drop the hand-coded create_table_entry/
populate_entries/compute_indices/map_memory macros from head.S and build
idmap_pg_dir and init_pg_dir through the shared C routines instead.

Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Kristina Martsenko <kristina.martsenko@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Steven Price <steven.price@arm.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Atish Patra <atish.patra@wdc.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Mark Brown <broonie@kernel.org>
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/include/asm/pgalloc.h |   5 +
 arch/arm64/kernel/head.S         | 187 +++++++------------------------
 arch/arm64/mm/mmu.c              |   9 ++
 3 files changed, 55 insertions(+), 146 deletions(-)

diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index f848a0300228..128d784d78d4 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -8,6 +8,9 @@
 #ifndef __ASM_PGALLOC_H
 #define __ASM_PGALLOC_H
 
+#include <vdso/bits.h>
+
+#ifndef __ASSEMBLY__
 #include <asm/pgtable-hwdef.h>
 #include <asm/processor.h>
 #include <asm/cacheflush.h>
@@ -102,6 +105,8 @@ extern void create_idmap(pgd_t *pgdir, phys_addr_t phys,
 		void *info,
 		int flags);
 
+#endif
+
 #define NO_BLOCK_MAPPINGS	BIT(0)
 #define NO_CONT_MAPPINGS	BIT(1)
 #define NO_FIXMAP	BIT(2)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index e19649dbbafb..3b0c9359ab70 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -28,6 +28,8 @@
 #include <asm/memory.h>
 #include <asm/pgtable-hwdef.h>
 #include <asm/page.h>
+#include <asm/pgtable-prot.h>
+#include <asm/pgalloc.h>
 #include <asm/scs.h>
 #include <asm/smp.h>
 #include <asm/sysreg.h>
@@ -93,6 +95,8 @@ SYM_CODE_START(primary_entry)
 	adrp	x23, __PHYS_OFFSET
 	and	x23, x23, MIN_KIMG_ALIGN - 1	// KASLR offset, defaults to 0
 	bl	set_cpu_boot_mode_flag
+	adrp	x4, init_thread_union
+	add	sp, x4, #THREAD_SIZE
 	bl	__create_page_tables
 	/*
 	 * The following calls CPU setup code, see arch/arm64/mm/proc.S for
@@ -121,135 +125,6 @@ SYM_CODE_START_LOCAL(preserve_boot_args)
 	b	__inval_dcache_area		// tail call
 SYM_CODE_END(preserve_boot_args)
 
-/*
- * Macro to create a table entry to the next page.
- *
- *	tbl:	page table address
- *	virt:	virtual address
- *	shift:	#imm page table shift
- *	ptrs:	#imm pointers per table page
- *
- * Preserves:	virt
- * Corrupts:	ptrs, tmp1, tmp2
- * Returns:	tbl -> next level table page address
- */
-	.macro	create_table_entry, tbl, virt, shift, ptrs, tmp1, tmp2
-	add	\tmp1, \tbl, #PAGE_SIZE
-	phys_to_pte \tmp2, \tmp1
-	orr	\tmp2, \tmp2, #PMD_TYPE_TABLE	// address of next table and entry type
-	lsr	\tmp1, \virt, #\shift
-	sub	\ptrs, \ptrs, #1
-	and	\tmp1, \tmp1, \ptrs		// table index
-	str	\tmp2, [\tbl, \tmp1, lsl #3]
-	add	\tbl, \tbl, #PAGE_SIZE		// next level table page
-	.endm
-
-/*
- * Macro to populate page table entries, these entries can be pointers to the next level
- * or last level entries pointing to physical memory.
- *
- *	tbl:	page table address
- *	rtbl:	pointer to page table or physical memory
- *	index:	start index to write
- *	eindex:	end index to write - [index, eindex] written to
- *	flags:	flags for pagetable entry to or in
- *	inc:	increment to rtbl between each entry
- *	tmp1:	temporary variable
- *
- * Preserves:	tbl, eindex, flags, inc
- * Corrupts:	index, tmp1
- * Returns:	rtbl
- */
-	.macro populate_entries, tbl, rtbl, index, eindex, flags, inc, tmp1
-.Lpe\@:	phys_to_pte \tmp1, \rtbl
-	orr	\tmp1, \tmp1, \flags	// tmp1 = table entry
-	str	\tmp1, [\tbl, \index, lsl #3]
-	add	\rtbl, \rtbl, \inc	// rtbl = pa next level
-	add	\index, \index, #1
-	cmp	\index, \eindex
-	b.ls	.Lpe\@
-	.endm
-
-/*
- * Compute indices of table entries from virtual address range. If multiple entries
- * were needed in the previous page table level then the next page table level is assumed
- * to be composed of multiple pages. (This effectively scales the end index).
- *
- *	vstart:	virtual address of start of range
- *	vend:	virtual address of end of range
- *	shift:	shift used to transform virtual address into index
- *	ptrs:	number of entries in page table
- *	istart:	index in table corresponding to vstart
- *	iend:	index in table corresponding to vend
- *	count:	On entry: how many extra entries were required in previous level, scales
- *			  our end index.
- *		On exit: returns how many extra entries required for next page table level
- *
- * Preserves:	vstart, vend, shift, ptrs
- * Returns:	istart, iend, count
- */
-	.macro compute_indices, vstart, vend, shift, ptrs, istart, iend, count
-	lsr	\iend, \vend, \shift
-	mov	\istart, \ptrs
-	sub	\istart, \istart, #1
-	and	\iend, \iend, \istart	// iend = (vend >> shift) & (ptrs - 1)
-	mov	\istart, \ptrs
-	mul	\istart, \istart, \count
-	add	\iend, \iend, \istart	// iend += (count - 1) * ptrs
-					// our entries span multiple tables
-
-	lsr	\istart, \vstart, \shift
-	mov	\count, \ptrs
-	sub	\count, \count, #1
-	and	\istart, \istart, \count
-
-	sub	\count, \iend, \istart
-	.endm
-
-/*
- * Map memory for specified virtual address range. Each level of page table needed supports
- * multiple entries. If a level requires n entries the next page table level is assumed to be
- * formed from n pages.
- *
- *	tbl:	location of page table
- *	rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
- *	vstart:	start address to map
- *	vend:	end address to map - we map [vstart, vend]
- *	flags:	flags to use to map last level entries
- *	phys:	physical address corresponding to vstart - physical memory is contiguous
- *	pgds:	the number of pgd entries
- *
- * Temporaries:	istart, iend, tmp, count, sv - these need to be different registers
- * Preserves:	vstart, vend, flags
- * Corrupts:	tbl, rtbl, istart, iend, tmp, count, sv
- */
-	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv
-	add \rtbl, \tbl, #PAGE_SIZE
-	mov \sv, \rtbl
-	mov \count, #0
-	compute_indices \vstart, \vend, #PGDIR_SHIFT, \pgds, \istart, \iend, \count
-	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
-	mov \tbl, \sv
-	mov \sv, \rtbl
-
-#if SWAPPER_PGTABLE_LEVELS > 3
-	compute_indices \vstart, \vend, #PUD_SHIFT, #PTRS_PER_PUD, \istart, \iend, \count
-	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
-	mov \tbl, \sv
-	mov \sv, \rtbl
-#endif
-
-#if SWAPPER_PGTABLE_LEVELS > 2
-	compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #PTRS_PER_PMD, \istart, \iend, \count
-	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
-	mov \tbl, \sv
-#endif
-
-	compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #PTRS_PER_PTE, \istart, \iend, \count
-	bic \count, \phys, #SWAPPER_BLOCK_SIZE - 1
-	populate_entries \tbl, \count, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp
-	.endm
-
 /*
  * Setup the initial page tables. We only setup the barest amount which is
  * required to get the kernel running. The following sections are required:
@@ -344,9 +219,6 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	adr_l	x4, idmap_extend_pgtable
 	mov	x5, #1
 	str	x5, [x4]                //require expanded pagetable
-
-	mov	x4, EXTRA_PTRS
-	create_table_entry x0, x3, EXTRA_SHIFT, x4, x5, x6
 #else
 	/*
 	 * If VA_BITS == 48, we don't have to configure an additional
@@ -356,25 +228,50 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	str_l	x4, idmap_ptrs_per_pgd, x5
 #endif
 1:
-	ldr_l	x4, idmap_ptrs_per_pgd
-	mov	x5, x3				// __pa(__idmap_text_start)
-	adr_l	x6, __idmap_text_end		// __pa(__idmap_text_end)
+	stp	x0, x1, [sp, #-64]!
+	stp	x2, x3, [sp, #48]
+	stp	x4, x5, [sp, #32]
+	stp	x6, x7, [sp, #16]
 
-	map_memory x0, x1, x3, x6, x7, x3, x4, x10, x11, x12, x13, x14
+	adrp	x0, idmap_pg_dir
+	adrp	x1, idmap_pg_end
+	sub	x1, x1, x0
+	bl	set_cur_mempool
+
+	adrp	x1, __idmap_text_start
+	adr_l	x2, __idmap_text_end
+	sub	x2, x2, x1
+	ldr	x3, =PAGE_KERNEL_EXEC
+	adr_l	x4, head_pgtable_alloc
+	mov	x5, #0
+	mov	x6, #NO_FIXMAP
+	bl	create_idmap
 
 	/*
 	 * Map the kernel image (starting with PHYS_OFFSET).
 	 */
 	adrp	x0, init_pg_dir
-	mov_q	x5, KIMAGE_VADDR		// compile time __va(_text)
-	add	x5, x5, x23			// add KASLR displacement
-	mov	x4, PTRS_PER_PGD
-	adrp	x6, _end			// runtime __pa(_end)
-	adrp	x3, _text			// runtime __pa(_text)
-	sub	x6, x6, x3			// _end - _text
-	add	x6, x6, x5			// runtime __va(_end)
+	adrp	x1, init_pg_end
+	sub	x1, x1, x0
+	bl	set_cur_mempool
 
-	map_memory x0, x1, x5, x6, x7, x3, x4, x10, x11, x12, x13, x14
+	mov	x1, PTRS_PER_PGD
+	adrp	x3, _text			// runtime __pa(_text)
+	mov_q	x4, KIMAGE_VADDR		// compile time __va(_text)
+	add	x4, x4, x23			// add KASLR displacement
+	adrp	x5, _end			// runtime __pa(_end)
+	sub	x5, x5, x3			// _end - _text
+
+	ldr	x3, =PAGE_KERNEL_EXEC
+	adr_l	x4, head_pgtable_alloc
+	mov	x5, #0
+	mov	x6, #NO_FIXMAP
+
+	bl	create_init_pgd_mapping
+	ldp	x6, x7, [sp, #16]
+	ldp	x4, x5, [sp, #32]
+	ldp	x2, x3, [sp, #48]
+	ldp	x0, x1, [sp], #64
 
 	/*
 	 * Since the page tables have been populated with non-cacheable
@@ -402,8 +299,6 @@ SYM_FUNC_END(__create_page_tables)
  *   x0 = __PHYS_OFFSET
  */
 SYM_FUNC_START_LOCAL(__primary_switched)
-	adrp	x4, init_thread_union
-	add	sp, x4, #THREAD_SIZE
 	adr_l	x5, init_task
 	msr	sp_el0, x5			// Save thread_info
 
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b546e47543e2..b886332a7c3f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -167,6 +167,15 @@ static unsigned long pgd_pgtable_alloc(unsigned long shift, void *unused)
 
 #include "./mmu_include.c"
 
+void create_init_pgd_mapping(pgd_t *pgdir, unsigned int entries_cnt,
+		phys_addr_t phys, unsigned long virt, phys_addr_t size,
+		pgprot_t prot, pgtable_alloc allocator,
+		void *info, int flags)
+{
+	__create_pgd_mapping(pgdir, entries_cnt, phys, virt, size,
+		prot, allocator, info, flags);
+}
+
 int idmap_extend_pgtable;
 
 /* 
-- 
2.29.2

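A pitfall when head.S starts calling C helpers like this: under AAPCS64
only the first eight integer arguments travel in x0-x7, and
create_init_pgd_mapping() as posted takes nine parameters, so the ninth
would have to be spilled to the stack by the assembly caller (the
follow-up to this patch reworks the convention). One way to stay within
eight register arguments is to fold entries_cnt into the wrapper -- a
hypothetical sketch, not part of the series:

/* Hypothetical eight-argument wrapper; the name is illustrative. */
void create_init_pgd_mapping8(pgd_t *pgdir, phys_addr_t phys,
			      unsigned long virt, phys_addr_t size,
			      pgprot_t prot, pgtable_alloc allocator,
			      void *info, int flags)
{
	/* init_pg_dir always has PTRS_PER_PGD entries, so fix it here. */
	__create_pgd_mapping(pgdir, PTRS_PER_PGD, phys, virt, size,
			     prot, allocator, info, flags);
}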

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [RFC 1/8] arm64/mm: split out __create_pgd_mapping() routines
  2021-04-10  9:56 ` [RFC 1/8] arm64/mm: split out __create_pgd_mapping() routines Pingfan Liu
@ 2021-04-14 13:19   ` Pingfan Liu
  0 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2021-04-14 13:19 UTC (permalink / raw)
  To: Linux ARM
  Cc: Catalin Marinas, Will Deacon, Marc Zyngier, Kristina Martsenko,
	James Morse, Steven Price, Jonathan Cameron, Pavel Tatashin,
	Anshuman Khandual, Atish Patra, Mike Rapoport, Logan Gunthorpe,
	Mark Brown

On Sat, Apr 10, 2021 at 5:57 PM Pingfan Liu <kernelfans@gmail.com> wrote:
>
> Split out the routines for __create_pgd_mapping(), in order to use it
> to generate two sets of operations for CONFIG_PGTABLE_LEVELS and
> CONFIG_PGTABLE_LEVELS + 1
>
> Later the one generated with 'CONFIG_PGTABLE_LEVELS + 1' can be used for
> idmap if VA_BITS is too small to cover system RAM, which is located
> sufficiently high in the physical address space.
>
> Later, idmap can be created by __create_pgd_mapping() directly.
>
> Signed-off-by: Pingfan Liu <kernelfans@gmail.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Kristina Martsenko <kristina.martsenko@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Atish Patra <atish.patra@wdc.com>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Logan Gunthorpe <logang@deltatee.com>
> Cc: Mark Brown <broonie@kernel.org>
> To: linux-arm-kernel@lists.infradead.org
> ---
>  arch/arm64/Kconfig          |   4 +
>  arch/arm64/mm/Makefile      |   2 +
>  arch/arm64/mm/idmap_mmu.c   |  45 ++++++
>  arch/arm64/mm/mmu.c         | 263 +-----------------------------------
>  arch/arm64/mm/mmu_include.c | 262 +++++++++++++++++++++++++++++++++++
>  5 files changed, 315 insertions(+), 261 deletions(-)
>  create mode 100644 arch/arm64/mm/idmap_mmu.c
>  create mode 100644 arch/arm64/mm/mmu_include.c
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index e4e1b6550115..989fc501a1b4 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -327,6 +327,10 @@ config PGTABLE_LEVELS
>         default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
>         default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
>
> +config IDMAP_PGTABLE_EXPAND
> +       def_bool y
> +       depends on (ARM64_4K_PAGES && ARM64_VA_BITS_39) || (ARM64_64K_PAGES && ARM64_VA_BITS_42)
> +
>  config ARCH_SUPPORTS_UPROBES
>         def_bool y
>
> diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
> index f188c9092696..f9283cb9a201 100644
> --- a/arch/arm64/mm/Makefile
> +++ b/arch/arm64/mm/Makefile
> @@ -3,6 +3,8 @@ obj-y                           := dma-mapping.o extable.o fault.o init.o \
>                                    cache.o copypage.o flush.o \
>                                    ioremap.o mmap.o pgd.o mmu.o \
>                                    context.o proc.o pageattr.o
> +
> +obj-$(CONFIG_IDMAP_PGTABLE_EXPAND)     += idmap_mmu.o
>  obj-$(CONFIG_HUGETLB_PAGE)     += hugetlbpage.o
>  obj-$(CONFIG_PTDUMP_CORE)      += ptdump.o
>  obj-$(CONFIG_PTDUMP_DEBUGFS)   += ptdump_debugfs.o
> diff --git a/arch/arm64/mm/idmap_mmu.c b/arch/arm64/mm/idmap_mmu.c
> new file mode 100644
> index 000000000000..7e9a4f4017d3
> --- /dev/null
> +++ b/arch/arm64/mm/idmap_mmu.c
> @@ -0,0 +1,45 @@
> +// SPDX-License-Identifier: GPL-2.0-only
> +
> +#include <linux/kernel.h>
> +#include <linux/errno.h>
> +#include <linux/mm.h>
> +
> +#include <asm/barrier.h>
> +#include <asm/cputype.h>
> +#include <asm/fixmap.h>
> +#include <asm/kasan.h>
> +#include <asm/kernel-pgtable.h>
> +#include <asm/sections.h>
> +#include <asm/setup.h>
> +#include <linux/sizes.h>
> +#include <asm/tlb.h>
> +#include <asm/mmu_context.h>
> +#include <asm/ptdump.h>
> +#include <asm/tlbflush.h>
> +#include <asm/pgalloc.h>
> +
> +#if CONFIG_IDMAP_PGTABLE_EXPAND
> +
> +#if CONFIG_PGTABLE_LEVELS == 2
> +#define EXTEND_LEVEL 3
> +#elif CONFIG_PGTABLE_LEVELS == 3
> +#define EXTEND_LEVEL 4
> +#endif
> +
> +#undef CONFIG_PGTABLE_LEVELS
> +#define CONFIG_PGTABLE_LEVELS EXTEND_LEVEL

In order to take effect, the redefinition of CONFIG_PGTABLE_LEVELS
should be moved to the head of this file, before the #include
directives: the headers above already test CONFIG_PGTABLE_LEVELS, so
redefining it after they have been processed leaves the included code
built at the original depth.

I will fix it in V2.

Thanks,
Pingfan

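For readers following along, the preprocessor only applies a
redefinition to code expanded after it, which is why the ordering
matters. The intended shape of idmap_mmu.c is roughly as follows (a
sketch following the series; the header list and details are
illustrative):

/* idmap_mmu.c: build the shared routines one page-table level deeper. */
#if CONFIG_PGTABLE_LEVELS == 2
#define EXTEND_LEVEL 3
#elif CONFIG_PGTABLE_LEVELS == 3
#define EXTEND_LEVEL 4
#endif

#undef CONFIG_PGTABLE_LEVELS
#define CONFIG_PGTABLE_LEVELS EXTEND_LEVEL

/* Only now pull in the headers, so they see the expanded depth. */
#include <asm/pgalloc.h>
#include <asm/pgtable-hwdef.h>

#include "mmu_include.c"	/* shared __create_pgd_mapping() body */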
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes
  2021-04-10  9:56 [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
                   ` (7 preceding siblings ...)
  2021-04-10  9:56 ` [RFC 8/8] arm64/head: convert idmap_pg_dir and init_pg_dir to __create_pgd_mapping() Pingfan Liu
@ 2021-04-14 14:05 ` Pavel Tatashin
  2021-04-15  2:14   ` Pingfan Liu
  8 siblings, 1 reply; 13+ messages in thread
From: Pavel Tatashin @ 2021-04-14 14:05 UTC (permalink / raw)
  To: Pingfan Liu
  Cc: Linux ARM, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Anshuman Khandual, Atish Patra, Mike Rapoport, Logan Gunthorpe,
	Mark Brown, Eric W. Biederman

On Sat, Apr 10, 2021 at 5:57 AM Pingfan Liu <kernelfans@gmail.com> wrote:
>
> Hi everyone,
>
> Sorry to bring up this RFC in a hurry, since I paid attention to "arm64:
> MMU enabled kexec relocation" too late and now it has advanced to "[PATCH
> v13 00/18] arm64: MMU enabled kexec relocation".
>
> And I think maybe that work can be based on my series.
>
> I have raised my concern when reviewing "[PATCH v12 00/17] arm64: MMU
> enabled kexec relocation"
>   https://linuxlists.cc/l/1/linux-kernel/t/3923858/(patch_v12_00_17)_arm64:_mmu_enabled_kexec_relocation#post3948651
>   (It seems that lore.kernel.org has not archived my reply)
>   Where I wrote:
>     Then the processes may be neat (I hope so):
>     -1. set up identity map in machine_kexec_post_load(), instead of
>     copying linear map.
>     -2. Also past this temporary identity map to arm64_relocate_new_kernel()
>     -3. in arm64_relocate_new_kernel(), just load identity map and
>     re-enable MMU. After copying, just turn off MMU.


Hi Pingfan,

The MMU enabled kexec code has been in development for a while, and
has gone through several iterations:

1. Simply reserve memory (similar to the crash kernel) so no relocation is
needed. The approach was only ~50 LOC, but since this was an
ARM64-specific problem I was asked to fix it in ARM64, not in generic code.
specific problem I was asked to fix it in ARM64, not in generic code.
2. The second approach was to use an idmap (as you are proposing now),
but James Morse explained to me that there are ARM systems whose
starting physical addresses are so high that an idmap cannot cover all
of physical memory. So, I cannot assume that I can idmap any page in
the PA space.
3. The third approach was to unify some of the page table management
code with hibernation's (trans_pgd) and use contiguous VA maps, so that
the relocation function could be as simple as possible. However, both
Eric Biederman and James Morse asked me to change it to a linear map
instead: to be in line with other arches, and also for easier
debugging.
4. The fourth approach is the current one: I am using a linear map,
and a lot of patches for this project have already landed in the
mainline. The last set of changes does not add any new LOC: "18 files
changed, 315 insertions(+), 330 deletions(-)", as all the preliminary
work has landed upstream.

What is the benefit of going back to approach 2, when the current
approach has already been agreed with James and Eric, and does not add
new complexity, as the net LOC change is negative?

Thank you,
Pavel


>
> In a short discuss off-line, Pavel pointed to me
>   https://lore.kernel.org/linux-arm-kernel/CA+CK2bC2KwWufE1DWa4szn_hQ1dbjDVHgYUu7=J4O_kvKXTrHg@mail.gmail.com/#t,
> which prevent him from using idmap to implement his series.
>
>
> After digging into the code, I find that if extending one more pgtable level,
> the __create_pgd_mapping() routines can be re-used for idmap_pg_dir and
> init_pg_dir. Besides, it can be re-used for trans_pgd_idmap_page().
> That is what this series do.
>
> As for "[PATCHv13 00/18] arm64: MMU enabled kexec relocation", here is
> my two cents:
>   -1. a call to create_idmap() API in machine_kexec_post_load(), to map
> src + dst + arm64_relocate_new_kernel().
>   -2. turn on MMU in arm64_relocate_new_kernel(), after done, turn off.
>
> Sorry again for a hurry. It can be compiled, but far from good.
>
> Thanks,
>
> Pingfan
>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Marc Zyngier <maz@kernel.org>
> Cc: Kristina Martsenko <kristina.martsenko@arm.com>
> Cc: James Morse <james.morse@arm.com>
> Cc: Steven Price <steven.price@arm.com>
> Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
> Cc: Anshuman Khandual <anshuman.khandual@arm.com>
> Cc: Atish Patra <atish.patra@wdc.com>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Logan Gunthorpe <logang@deltatee.com>
> Cc: Mark Brown <broonie@kernel.org>
> To: linux-arm-kernel@lists.infradead.org
>
> Pingfan Liu (8):
>   arm64/mm: split out __create_pgd_mapping() routines
>   arm64/mm: change __create_pgd_mapping() prototype to accept nr_entries
>     and introduce create_idmap()
>   arm64/mm: change __create_pgd_mapping() prototype to accept extra info
>     for allocator
>   arm64/mm: enable __create_pgd_mapping() to run across different
>     pgtable
>   arm64/mm: make trans_pgd_idmap_page() use create_idmap()
>   arm64/mm: introduce pgtable allocator for head
>   arm64/pgtable-prot.h: reorganize to cope with asm
>   arm64/head: convert idmap_pg_dir and init_pg_dir to
>     __create_pgd_mapping()
>
>  arch/arm64/Kconfig                    |   4 +
>  arch/arm64/include/asm/pgalloc.h      |  28 ++
>  arch/arm64/include/asm/pgtable-prot.h |  34 ++-
>  arch/arm64/kernel/head.S              | 190 ++++----------
>  arch/arm64/mm/Makefile                |   2 +
>  arch/arm64/mm/idmap_mmu.c             |  46 ++++
>  arch/arm64/mm/mmu.c                   | 358 ++++++--------------------
>  arch/arm64/mm/mmu_include.c           | 284 ++++++++++++++++++++
>  arch/arm64/mm/trans_pgd.c             |  59 ++---
>  9 files changed, 535 insertions(+), 470 deletions(-)
>  create mode 100644 arch/arm64/mm/idmap_mmu.c
>  create mode 100644 arch/arm64/mm/mmu_include.c
>
> --
> 2.29.2
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes
  2021-04-14 14:05 ` [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pavel Tatashin
@ 2021-04-15  2:14   ` Pingfan Liu
  0 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2021-04-15  2:14 UTC (permalink / raw)
  To: Pavel Tatashin
  Cc: Linux ARM, Catalin Marinas, Will Deacon, Marc Zyngier,
	Kristina Martsenko, James Morse, Steven Price, Jonathan Cameron,
	Anshuman Khandual, Atish Patra, Mike Rapoport, Logan Gunthorpe,
	Mark Brown, Eric W. Biederman

Hi Pavel,

First of all, sorry for stirring up this topic so late.

On Wed, Apr 14, 2021 at 10:06 PM Pavel Tatashin
<pasha.tatashin@soleen.com> wrote:
>
[...]
> Hi Pingfan,
>
> The MMU enabled kexec code has been in development for a while, and
> has gone through several iterations:
>

Yes, I have gone through all of the iterations after reviewing v12.

> 1. simply reserve memory (similar to crash kernel) so no relocation is
> needed. The approach was only ~50 LOC, but since this was an ARM64
> specific problem I was asked to fix it in ARM64, not in generic code.
> 2. The second approach was to use idmap (as you are proposing now),
> but James Morse explained to me that there are arm systems that have
> very high starting physical addresses that they cannot cover all
> physical memory via idamp. So, I cannot assume that I can idmap any
> page in PA.

I think the exact blocking factor here is that the routines at hand
cannot set up such an idmap. But the current routines gain that
capability once enhanced, and it turns out to be easy to achieve by
redefining CONFIG_PGTABLE_LEVELS.

> 3. The third approach was to unify some of page table management code
> with hibernations (trans_pgd), and use contiguous VA maps, so the
> relocation function can be as simply as possible. However, both, Eric
> Biederman and James Morse asked me to change it to a linear map
> instead: to be inline with other arches, and also for easier
> debugging.
> 4. The fourth approach is the current one, I am using a linear map,
> and a lot of patches for this project have already landed into the
> mainline. The last set of changes does not add any new LOC: "18 files
> changed, 315 insertions(+), 330 deletions(-)", as all the preliminary
> work has landed upstream.
>
> What is the benefit of going back to approach 2, when the current
> approach has already been agreed with James and Eric, and does not add
> new complexity, as the net LOC change is negative?
>

It took me some time to understand the stub handling, but James is an
expert in this field and sure about it. In contrast, an idmap is
similar to a linear map while being free of the KVM stub handling; as a
result, the code scarcely needs to change relative to the MMU-disabled
version.

Anyway, the primary target of my series is to share the common code
with [5/8] and [8/8].

Thanks,
Pingfan

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [RFC 8/8] arm64/head: convert idmap_pg_dir and init_pg_dir to __create_pgd_mapping()
  2021-04-10  9:56 ` [RFC 8/8] arm64/head: convert idmap_pg_dir and init_pg_dir to __create_pgd_mapping() Pingfan Liu
@ 2021-04-19 14:10   ` Pingfan Liu
  0 siblings, 0 replies; 13+ messages in thread
From: Pingfan Liu @ 2021-04-19 14:10 UTC (permalink / raw)
  To: Linux ARM
  Cc: Catalin Marinas, Will Deacon, Marc Zyngier, Kristina Martsenko,
	James Morse, Steven Price, Jonathan Cameron, Pavel Tatashin,
	Anshuman Khandual, Atish Patra, Mike Rapoport, Logan Gunthorpe,
	Mark Brown

On Sat, Apr 10, 2021 at 5:57 PM Pingfan Liu <kernelfans@gmail.com> wrote:
>
[...]
>         /*
>          * Map the kernel image (starting with PHYS_OFFSET).
>          */
>         adrp    x0, init_pg_dir
> -       mov_q   x5, KIMAGE_VADDR                // compile time __va(_text)
> -       add     x5, x5, x23                     // add KASLR displacement
> -       mov     x4, PTRS_PER_PGD
> -       adrp    x6, _end                        // runtime __pa(_end)
> -       adrp    x3, _text                       // runtime __pa(_text)
> -       sub     x6, x6, x3                      // _end - _text
> -       add     x6, x6, x5                      // runtime __va(_end)
> +       adrp    x1, init_pg_end
> +       sub     x1, x1, x0
> +       bl      set_cur_mempool
>
> -       map_memory x0, x1, x5, x6, x7, x3, x4, x10, x11, x12, x13, x14
> +       mov     x1, PTRS_PER_PGD
> +       adrp    x3, _text                       // runtime __pa(_text)
> +       mov_q   x4, KIMAGE_VADDR                // compile time __va(_text)
> +       add     x4, x4, x23                     // add KASLR displacement
> +       adrp    x5, _end                        // runtime __pa(_end)
> +       sub     x5, x5, x3                      // _end - _text
> +
> +       ldr     x3, =PAGE_KERNEL_EXEC
> +       adr_l   x4, head_pgtable_alloc
> +       mov     x5, #0
> +       mov     x6, #NO_FIXMAP
> +
> +       bl      create_init_pgd_mapping

This calling convention is wrong; it should be changed as follows (it
will be updated in v2):
        adrp    x0, init_pg_dir
        adrp    x1, init_pg_end
        sub     x1, x1, x0
        bl      set_cur_mempool
        mov     x0, #0
        mov     x1, #0
        bl      head_pgtable_alloc              // x0 is init_pg_dir

        adrp    x1, _text                       // runtime __pa(_text)
        mov_q   x2, KIMAGE_VADDR                // compile time __va(_text)
        add     x2, x2, x23                     // add KASLR displacement
        adrp    x3, _end                        // runtime __pa(_end)
        sub     x3, x3, x1                      // _end - _text

        ldr     x4, =PAGE_KERNEL_EXEC
        adr_l   x5, head_pgtable_alloc
        mov     x6, #0
        mov     x7, #(NO_FIXMAP | NO_PRINTK | BOOT_HEAD)

        bl      create_init_pgd_mapping

Thanks,
Pingfan

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2021-04-19 14:13 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-10  9:56 [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pingfan Liu
2021-04-10  9:56 ` [RFC 1/8] arm64/mm: split out __create_pgd_mapping() routines Pingfan Liu
2021-04-14 13:19   ` Pingfan Liu
2021-04-10  9:56 ` [RFC 2/8] arm64/mm: change __create_pgd_mapping() prototype to accept nr_entries and introduce create_idmap() Pingfan Liu
2021-04-10  9:56 ` [RFC 3/8] arm64/mm: change __create_pgd_mapping() prototype to accept extra info for allocator Pingfan Liu
2021-04-10  9:56 ` [RFC 4/8] arm64/mm: enable __create_pgd_mapping() to run across different pgtable Pingfan Liu
2021-04-10  9:56 ` [RFC 5/8] arm64/mm: make trans_pgd_idmap_page() use create_idmap() Pingfan Liu
2021-04-10  9:56 ` [RFC 6/8] arm64/mm: introduce pgtable allocator for head Pingfan Liu
2021-04-10  9:56 ` [RFC 7/8] arm64/pgtable-prot.h: reorganize to cope with asm Pingfan Liu
2021-04-10  9:56 ` [RFC 8/8] arm64/head: convert idmap_pg_dir and init_pg_dir to __create_pgd_mapping() Pingfan Liu
2021-04-19 14:10   ` Pingfan Liu
2021-04-14 14:05 ` [RFC 0/8] use __create_pgd_mapping() to implement idmap and unify codes Pavel Tatashin
2021-04-15  2:14   ` Pingfan Liu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).