* [PATCH 0/6] arm64/head: Cleanups for __create_page_tables()
@ 2022-05-18  3:17 Anshuman Khandual
  2022-05-18  3:17 ` [PATCH 1/6] arm64: don't override idmap t0sz Anshuman Khandual
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Anshuman Khandual @ 2022-05-18  3:17 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: mark.rutland, catalin.marinas, will, Anshuman Khandual

This cleanup series is a precursor to carving out idmap_pg_dir creation
from the overall __create_page_tables(). The series is derived from original
work by Mark Rutland.

https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/pgtable/idmap

This series applies to v5.18-rc4.

Mark Rutland (6):
  arm64: don't override idmap t0sz
  arm64: head: remove __PHYS_OFFSET
  arm64: head: clarify `populate_entries`
  arm64: head: clarify `compute_indices`
  arm64: head: clarify `map_memory`
  arm64: head: clarify commentary for __create_page_tables

 arch/arm64/kernel/head.S | 102 +++++++++++++++++++++------------------
 arch/arm64/mm/proc.S     |   3 +-
 2 files changed, 55 insertions(+), 50 deletions(-)

-- 
2.20.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* [PATCH 1/6] arm64: don't override idmap t0sz
  2022-05-18  3:17 [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Anshuman Khandual
@ 2022-05-18  3:17 ` Anshuman Khandual
  2022-05-18  6:41   ` Ard Biesheuvel
  2022-05-18  3:17 ` [PATCH 2/6] arm64: head: remove __PHYS_OFFSET Anshuman Khandual
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Anshuman Khandual @ 2022-05-18  3:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, will, Ard Biesheuvel, Anshuman Khandual

From: Mark Rutland <mark.rutland@arm.com>

When the kernel is built with CONFIG_ARM64_VA_BITS_52, __cpu_setup will
override `idmap_t0sz` and program TCR_EL1.T0SZ based on
`vabits_actual`. This is inconsistent with cpu_set_idmap_tcr_t0sz(),
which will use `idmap_t0sz`, but happens to work because
CONFIG_ARM64_VA_BITS_52 requires 64K pages, where 48-bit and 52-bit VAs
use the same number of page table levels and TTBR0 addresses grow
upwards from the base of the PGD table (for which the entire page is
zeroed).

When switching away from the idmap, cpu_set_default_tcr_t0sz() will use
`vabits_actual`, and so the T0SZ value used for the idmap does not have
to match the T0SZ used during regular kernel/userspace execution.

This patch ensures we *always* use `idmap_t0sz` as the TCR_EL1.T0SZ
value used while the idmap is active.
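
As an aside for illustration (not part of the patch): the TCR_ELx.TnSZ
fields encode a translation region as 64 minus the number of VA bits, which
is exactly what the `sub x9, xzr, x9; add x9, x9, #64` sequence in the hunk
below computes. A minimal sketch:

```python
# TCR_ELx.TnSZ encodes a translation region as (64 - VA bits); the
# assembly computes this as (-va_bits) + 64. Illustrative only, not
# kernel code.
def tcr_tsz(va_bits):
    return 64 - va_bits

# 48-bit VAs -> TnSZ = 16; 52-bit VAs -> TnSZ = 12
```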

Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/mm/proc.S | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 50bbed947bec..c1f76bf3276c 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -468,9 +468,8 @@ SYM_FUNC_START(__cpu_setup)
 	sub		x9, xzr, x9
 	add		x9, x9, #64
 	tcr_set_t1sz	tcr, x9
-#else
-	ldr_l		x9, idmap_t0sz
 #endif
+	ldr_l		x9, idmap_t0sz
 	tcr_set_t0sz	tcr, x9
 
 	/*
-- 
2.20.1



* [PATCH 2/6] arm64: head: remove __PHYS_OFFSET
  2022-05-18  3:17 [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Anshuman Khandual
  2022-05-18  3:17 ` [PATCH 1/6] arm64: don't override idmap t0sz Anshuman Khandual
@ 2022-05-18  3:17 ` Anshuman Khandual
  2022-05-18  6:45   ` Ard Biesheuvel
  2022-05-18  3:17 ` [PATCH 3/6] arm64: head: clarify `populate_entries` Anshuman Khandual
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Anshuman Khandual @ 2022-05-18  3:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, will, Ard Biesheuvel, Anshuman Khandual

From: Mark Rutland <mark.rutland@arm.com>

It's very easy to confuse __PHYS_OFFSET and PHYS_OFFSET. To clarify
things, let's remove __PHYS_OFFSET and use KERNEL_START directly, with
comments to show that we're using a physical address, as we do for other
objects.

At the same time, update the comment regarding the kernel entry address
to mention __pa(KERNEL_START) rather than __pa(PAGE_OFFSET).

There should be no functional change as a result of this patch.

Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/kernel/head.S | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 6a98f1a38c29..aaad76680495 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -37,8 +37,6 @@
 
 #include "efi-header.S"
 
-#define __PHYS_OFFSET	KERNEL_START
-
 #if (PAGE_OFFSET & 0x1fffff) != 0
 #error PAGE_OFFSET must be at least 2MB aligned
 #endif
@@ -52,7 +50,7 @@
  *   x0 = physical address to the FDT blob.
  *
  * This code is mostly position independent so you call this at
- * __pa(PAGE_OFFSET).
+ * __pa(KERNEL_START).
  *
  * Note that the callee-saved registers are used for storing variables
  * that are useful before the MMU is enabled. The allocations are described
@@ -91,7 +89,7 @@
 SYM_CODE_START(primary_entry)
 	bl	preserve_boot_args
 	bl	init_kernel_el			// w0=cpu_boot_mode
-	adrp	x23, __PHYS_OFFSET
+	adrp	x23, KERNEL_START		// __pa(KERNEL_START)
 	and	x23, x23, MIN_KIMG_ALIGN - 1	// KASLR offset, defaults to 0
 	bl	set_cpu_boot_mode_flag
 	bl	__create_page_tables
@@ -420,7 +418,7 @@ SYM_FUNC_END(__create_page_tables)
 /*
  * The following fragment of code is executed with the MMU enabled.
  *
- *   x0 = __PHYS_OFFSET
+ *   x0 = __pa(KERNEL_START)
  */
 SYM_FUNC_START_LOCAL(__primary_switched)
 	adr_l	x4, init_task
@@ -870,7 +868,7 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 	bl	__relocate_kernel
 #ifdef CONFIG_RANDOMIZE_BASE
 	ldr	x8, =__primary_switched
-	adrp	x0, __PHYS_OFFSET
+	adrp	x0, KERNEL_START		// __pa(KERNEL_START)
 	blr	x8
 
 	/*
@@ -893,6 +891,6 @@ SYM_FUNC_START_LOCAL(__primary_switch)
 #endif
 #endif
 	ldr	x8, =__primary_switched
-	adrp	x0, __PHYS_OFFSET
+	adrp	x0, KERNEL_START		// __pa(KERNEL_START)
 	br	x8
 SYM_FUNC_END(__primary_switch)
-- 
2.20.1



* [PATCH 3/6] arm64: head: clarify `populate_entries`
  2022-05-18  3:17 [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Anshuman Khandual
  2022-05-18  3:17 ` [PATCH 1/6] arm64: don't override idmap t0sz Anshuman Khandual
  2022-05-18  3:17 ` [PATCH 2/6] arm64: head: remove __PHYS_OFFSET Anshuman Khandual
@ 2022-05-18  3:17 ` Anshuman Khandual
  2022-05-18  3:17 ` [PATCH 4/6] arm64: head: clarify `compute_indices` Anshuman Khandual
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Anshuman Khandual @ 2022-05-18  3:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, will, Ard Biesheuvel, Anshuman Khandual

From: Mark Rutland <mark.rutland@arm.com>

For a few reasons, `populate_entries` can be harder than necessary to
understand. This patch improves the commentary and naming to make it
easier to follow:

* Commentary is updated to explicitly describe the span of adjacent pages
  which `populate_entries` operates on, and what the entries correspond to
  at each level.

* As `rtbl` is not always a table, it is renamed to `phys`, as it always
  represents a physical address.

* `index` and `eindex` are renamed to `istart` and `iend` respectively,
  to match the naming used in `compute_indices` where these values are
  generated.

* As "to or in" can be difficult to read, the commentary for `flags` is
  reworded in terms of "bits to set".

There should be no functional change as a result of this patch.
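
As an illustrative aside (not part of the patch), the macro's contract can
be modelled in Python, treating `phys_to_pte` as the identity:

```python
# Model of `populate_entries`: write (phys | flags) into each slot in the
# inclusive index range [istart, iend], advancing phys by inc per entry.
# Loosely mirrors the macro's Preserves/Corrupts/Returns contract;
# phys_to_pte is modelled as the identity for simplicity.
def populate_entries(istart, iend, phys, flags, inc):
    entries = {}
    index = istart
    while index <= iend:          # `b.ls` makes the loop inclusive of iend
        entries[index] = phys | flags
        phys += inc
        index += 1
    return entries, phys          # final phys is "returned", like the macro
```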

Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/kernel/head.S | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index aaad76680495..b5d7dacbbb2c 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -144,28 +144,30 @@ SYM_CODE_END(preserve_boot_args)
 	.endm
 
 /*
- * Macro to populate page table entries, these entries can be pointers to the next level
- * or last level entries pointing to physical memory.
+ * Populate a span of adjacent page tables with entries. For non-leaf levels,
+ * each entry points to a table in a span of adjacent page tables at the next
+ * level. For the leaf level these entries point to a span of physical memory
+ * being mapped.
  *
- *	tbl:	page table address
- *	rtbl:	pointer to page table or physical memory
- *	index:	start index to write
- *	eindex:	end index to write - [index, eindex] written to
- *	flags:	flags for pagetable entry to or in
- *	inc:	increment to rtbl between each entry
+ *	tbl:	physical address of the first table in this span
+ *	phys:	physical address of memory or next-level table span
+ *	istart:	index of the first entry to write
+ *	iend:	index of the last entry to write - [istart, iend] written to
+ *	flags:	bits to set in each page table entry
+ *	inc:	increment to phys between each entry
  *	tmp1:	temporary variable
  *
- * Preserves:	tbl, eindex, flags, inc
- * Corrupts:	index, tmp1
- * Returns:	rtbl
+ * Preserves:	tbl, iend, flags, inc
+ * Corrupts:	istart, tmp1
+ * Returns:	phys
  */
-	.macro populate_entries, tbl, rtbl, index, eindex, flags, inc, tmp1
-.Lpe\@:	phys_to_pte \tmp1, \rtbl
+	.macro populate_entries, tbl, phys, istart, iend, flags, inc, tmp1
+.Lpe\@:	phys_to_pte \tmp1, \phys
 	orr	\tmp1, \tmp1, \flags	// tmp1 = table entry
-	str	\tmp1, [\tbl, \index, lsl #3]
-	add	\rtbl, \rtbl, \inc	// rtbl = pa next level
-	add	\index, \index, #1
-	cmp	\index, \eindex
+	str	\tmp1, [\tbl, \istart, lsl #3]
+	add	\phys, \phys, \inc
+	add	\istart, \istart, #1
+	cmp	\istart, \iend
 	b.ls	.Lpe\@
 	.endm
 
-- 
2.20.1



* [PATCH 4/6] arm64: head: clarify `compute_indices`
  2022-05-18  3:17 [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Anshuman Khandual
                   ` (2 preceding siblings ...)
  2022-05-18  3:17 ` [PATCH 3/6] arm64: head: clarify `populate_entries` Anshuman Khandual
@ 2022-05-18  3:17 ` Anshuman Khandual
  2022-05-18  6:47   ` Ard Biesheuvel
  2022-05-18  3:17 ` [PATCH 5/6] arm64: head: clarify `map_memory` Anshuman Khandual
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 14+ messages in thread
From: Anshuman Khandual @ 2022-05-18  3:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, will, Ard Biesheuvel, Anshuman Khandual

From: Mark Rutland <mark.rutland@arm.com>

The logic in the `compute_indices` macro can be difficult to follow, as
it transiently uses output operands for unrelated temporary values.

Let's make this clearer by using a `tmp` parameter, and splitting the
logic into commented blocks. By folding a MUL and ADD into a single MADD
we avoid the need for a second temporary.

As `ptrs` is sometimes a register and sometimes an immediate, we cannot
simplify this much further at present. If it were always a register, we
could remove redundant MOVs, and if it were always an immediate we could
use `(\ptrs - 1)` as an immediate for the ANDs when extracting index
bits (or replace the LSR; SUB; AND sequence with a single UBFX).

There should be no functional change as a result of this patch.
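
The arithmetic the macro performs can be sketched as follows (an aside for
illustration, not part of the patch); the MADD simply folds
`iend += count * ptrs` into a single instruction:

```python
# Model of `compute_indices`: extract the table indices covering
# [vstart, vend] at a given level, biasing iend by count * ptrs because
# the entries may span multiple adjacent tables. Illustrative only.
def compute_indices(vstart, vend, shift, ptrs, count):
    iend = (vend >> shift) & (ptrs - 1)
    iend += count * ptrs              # folded into MADD in the assembly
    istart = (vstart >> shift) & (ptrs - 1)
    count = iend - istart             # entry count for the next level
    return istart, iend, count
```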

Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/kernel/head.S | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b5d7dacbbb2c..01739f5ec3de 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -189,20 +189,23 @@ SYM_CODE_END(preserve_boot_args)
  * Preserves:	vstart, vend, shift, ptrs
  * Returns:	istart, iend, count
  */
-	.macro compute_indices, vstart, vend, shift, ptrs, istart, iend, count
+	.macro compute_indices, vstart, vend, shift, ptrs, istart, iend, count, tmp
+	// iend = (vend >> shift) & (ptrs - 1)
 	lsr	\iend, \vend, \shift
-	mov	\istart, \ptrs
-	sub	\istart, \istart, #1
-	and	\iend, \iend, \istart	// iend = (vend >> shift) & (ptrs - 1)
-	mov	\istart, \ptrs
-	mul	\istart, \istart, \count
-	add	\iend, \iend, \istart	// iend += count * ptrs
-					// our entries span multiple tables
+	mov	\tmp, \ptrs
+	sub	\tmp, \tmp, #1
+	and	\iend, \iend, \tmp
 
+	// iend += count * ptrs
+	// our entries span multiple tables
+	mov	\tmp, \ptrs
+	madd	\iend, \count, \tmp, \iend
+
+	// istart = (vstart >> shift) & (ptrs - 1)
 	lsr	\istart, \vstart, \shift
-	mov	\count, \ptrs
-	sub	\count, \count, #1
-	and	\istart, \istart, \count
+	mov	\tmp, \ptrs
+	sub	\tmp, \tmp, #1
+	and	\istart, \istart, \tmp
 
 	sub	\count, \iend, \istart
 	.endm
@@ -229,25 +232,25 @@ SYM_CODE_END(preserve_boot_args)
 	add \rtbl, \tbl, #PAGE_SIZE
 	mov \sv, \rtbl
 	mov \count, #0
-	compute_indices \vstart, \vend, #PGDIR_SHIFT, \pgds, \istart, \iend, \count
+	compute_indices \vstart, \vend, #PGDIR_SHIFT, \pgds, \istart, \iend, \count, \tmp
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv
 	mov \sv, \rtbl
 
 #if SWAPPER_PGTABLE_LEVELS > 3
-	compute_indices \vstart, \vend, #PUD_SHIFT, #PTRS_PER_PUD, \istart, \iend, \count
+	compute_indices \vstart, \vend, #PUD_SHIFT, #PTRS_PER_PUD, \istart, \iend, \count, \tmp
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv
 	mov \sv, \rtbl
 #endif
 
 #if SWAPPER_PGTABLE_LEVELS > 2
-	compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #PTRS_PER_PMD, \istart, \iend, \count
+	compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #PTRS_PER_PMD, \istart, \iend, \count, \tmp
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv
 #endif
 
-	compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #PTRS_PER_PTE, \istart, \iend, \count
+	compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #PTRS_PER_PTE, \istart, \iend, \count, \tmp
 	bic \count, \phys, #SWAPPER_BLOCK_SIZE - 1
 	populate_entries \tbl, \count, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp
 	.endm
-- 
2.20.1



* [PATCH 5/6] arm64: head: clarify `map_memory`
  2022-05-18  3:17 [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Anshuman Khandual
                   ` (3 preceding siblings ...)
  2022-05-18  3:17 ` [PATCH 4/6] arm64: head: clarify `compute_indices` Anshuman Khandual
@ 2022-05-18  3:17 ` Anshuman Khandual
  2022-05-18  3:17 ` [PATCH 6/6] arm64: head: clarify commentary for __create_page_tables Anshuman Khandual
  2022-05-18  6:52 ` [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Ard Biesheuvel
  6 siblings, 0 replies; 14+ messages in thread
From: Anshuman Khandual @ 2022-05-18  3:17 UTC (permalink / raw)
  To: linux-arm-kernel; +Cc: mark.rutland, catalin.marinas, will, Anshuman Khandual

From: Mark Rutland <mark.rutland@arm.com>

In the `map_memory` macro we repurpose the `count` temporary register to
hold the physical address `phys` aligned downwards to
SWAPPER_BLOCK_SIZE. Due to the subtle usage of `count` elsewhere, this
is a little confusing, and is also unnecessary as we can safely corrupt
`phys`, which is not used after `map_memory` completes.

This patch makes `map_memory` manipulate `phys` in-place, and updates
the documentation to mention that it corrupts `phys`.
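
For illustration (not part of the patch): the BIC is a standard in-place
align-down. SWAPPER_BLOCK_SIZE is a power of two, so clearing its low bits
rounds `phys` down to a block boundary:

```python
# `bic \phys, \phys, #SWAPPER_BLOCK_SIZE - 1` == phys & ~(size - 1).
# A 2MiB block size is assumed here purely for illustration; the real
# value depends on the page size configuration.
SWAPPER_BLOCK_SIZE = 2 * 1024 * 1024

def align_down(phys, size=SWAPPER_BLOCK_SIZE):
    assert size & (size - 1) == 0     # only valid for power-of-two sizes
    return phys & ~(size - 1)
```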

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/kernel/head.S | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 01739f5ec3de..107275e06212 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -225,7 +225,7 @@ SYM_CODE_END(preserve_boot_args)
  *
  * Temporaries:	istart, iend, tmp, count, sv - these need to be different registers
  * Preserves:	vstart, flags
- * Corrupts:	tbl, rtbl, vend, istart, iend, tmp, count, sv
+ * Corrupts:	tbl, rtbl, vend, phys, istart, iend, tmp, count, sv
  */
 	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv
 	sub \vend, \vend, #1
@@ -251,8 +251,8 @@ SYM_CODE_END(preserve_boot_args)
 #endif
 
 	compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #PTRS_PER_PTE, \istart, \iend, \count, \tmp
-	bic \count, \phys, #SWAPPER_BLOCK_SIZE - 1
-	populate_entries \tbl, \count, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp
+	bic \phys, \phys, #SWAPPER_BLOCK_SIZE - 1
+	populate_entries \tbl, \phys, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp
 	.endm
 
 /*
-- 
2.20.1



* [PATCH 6/6] arm64: head: clarify commentary for __create_page_tables
  2022-05-18  3:17 [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Anshuman Khandual
                   ` (4 preceding siblings ...)
  2022-05-18  3:17 ` [PATCH 5/6] arm64: head: clarify `map_memory` Anshuman Khandual
@ 2022-05-18  3:17 ` Anshuman Khandual
  2022-05-18  6:52 ` [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Ard Biesheuvel
  6 siblings, 0 replies; 14+ messages in thread
From: Anshuman Khandual @ 2022-05-18  3:17 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: mark.rutland, catalin.marinas, will, Ard Biesheuvel, Anshuman Khandual

From: Mark Rutland <mark.rutland@arm.com>

The comments in __create_page_tables have become stale and potentially
misleading over time. The kernel tables cover all of the kernel image
but none of the linear map (which is created separately later), and the
kernel mapping does not start at PHYS_OFFSET (which is the physical
start of the linear map).

Update the comments to be more precise.

There should be no functional change as a result of this patch.
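
The two mappings described by the updated comment can be modelled as two
translation functions (an illustrative aside with assumed example values,
not kernel code):

```python
# The idmap maps VA == PA; the kernel image map places _text at
# KIMAGE_VADDR plus the KASLR offset. All addresses below are assumed
# example values, not real kernel constants.
def idmap_va(pa):
    return pa                          # identity mapping in TTBR0

def kimage_va(pa, pa_text, kimage_vaddr, kaslr_offset=0):
    # VA of a physical address inside [_text, _end - 1], via TTBR1
    return kimage_vaddr + kaslr_offset + (pa - pa_text)
```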

Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
 arch/arm64/kernel/head.S | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 107275e06212..349ef0ed9aa9 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -256,11 +256,14 @@ SYM_CODE_END(preserve_boot_args)
 	.endm
 
 /*
- * Setup the initial page tables. We only setup the barest amount which is
- * required to get the kernel running. The following sections are required:
- *   - identity mapping to enable the MMU (low address, TTBR0)
- *   - first few MB of the kernel linear mapping to jump to once the MMU has
- *     been enabled
+ * Setup the initial page tables.
+ *
+ * The idmap page tables map the idmap page in TTBR0, with VA == PA. This
+ * covers the interval [__idmap_text_start, __idmap_text_end - 1].
+ *
+ * The initial kernel page tables map the kernel image in TTBR1, with _text
+ * mapped to VA (KIMAGE_VADDR + KASLR offset). This covers the interval
+ * [_text, _end - 1].
  */
 SYM_FUNC_START_LOCAL(__create_page_tables)
 	mov	x28, lr
@@ -363,7 +366,7 @@ SYM_FUNC_START_LOCAL(__create_page_tables)
 	map_memory x0, x1, x3, x6, x7, x3, x4, x10, x11, x12, x13, x14
 
 	/*
-	 * Map the kernel image (starting with PHYS_OFFSET).
+	 * Map the kernel image
 	 */
 	adrp	x0, init_pg_dir
 	mov_q	x5, KIMAGE_VADDR		// compile time __va(_text)
-- 
2.20.1



* Re: [PATCH 1/6] arm64: don't override idmap t0sz
  2022-05-18  3:17 ` [PATCH 1/6] arm64: don't override idmap t0sz Anshuman Khandual
@ 2022-05-18  6:41   ` Ard Biesheuvel
  0 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2022-05-18  6:41 UTC (permalink / raw)
  To: Anshuman Khandual; +Cc: Linux ARM, Mark Rutland, Catalin Marinas, Will Deacon

On Wed, 18 May 2022 at 05:17, Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
> From: Mark Rutland <mark.rutland@arm.com>
>
> When the kernel is built with CONFIG_ARM64_VA_BITS_52, __cpu_setup will
> override `idmap_t0sz` and program TCR_EL1.T0SZ based on
> `vabits_actual`. This is inconsistent with cpu_set_idmap_tcr_t0sz(),
> which will use `idmap_t0sz`, but happens to work because
> CONFIG_ARM64_VA_BITS_52 requires 64K pages, where 48-bit and 52-bit VAs
> use the same number of page table levels and TTBR0 addresses grow
> upwards from the base of the PGD table (for which the entire page is
> zeroed).
>
> When switching away from the idmap, cpu_set_default_tcr_t0sz() will use
> `vabits_actual`, and so the T0SZ value used for the idmap does not have
> to match the T0SZ used during regular kernel/userspace execution.
>
> This patch ensures we *always* use `idmap_t0sz` as the TCR_EL1.T0SZ
> value used while the idmap is active.
>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>

Acked-by: Ard Biesheuvel <ardb@kernel.org>

Note that this conflicts [trivially] with my series here:
https://lore.kernel.org/linux-arm-kernel/20220411094824.4176877-1-ardb@kernel.org/


> ---
>  arch/arm64/mm/proc.S | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
> index 50bbed947bec..c1f76bf3276c 100644
> --- a/arch/arm64/mm/proc.S
> +++ b/arch/arm64/mm/proc.S
> @@ -468,9 +468,8 @@ SYM_FUNC_START(__cpu_setup)
>         sub             x9, xzr, x9
>         add             x9, x9, #64
>         tcr_set_t1sz    tcr, x9
> -#else
> -       ldr_l           x9, idmap_t0sz
>  #endif
> +       ldr_l           x9, idmap_t0sz
>         tcr_set_t0sz    tcr, x9
>
>         /*
> --
> 2.20.1
>


* Re: [PATCH 2/6] arm64: head: remove __PHYS_OFFSET
  2022-05-18  3:17 ` [PATCH 2/6] arm64: head: remove __PHYS_OFFSET Anshuman Khandual
@ 2022-05-18  6:45   ` Ard Biesheuvel
  0 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2022-05-18  6:45 UTC (permalink / raw)
  To: Anshuman Khandual; +Cc: Linux ARM, Mark Rutland, Catalin Marinas, Will Deacon

On Wed, 18 May 2022 at 05:17, Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
> From: Mark Rutland <mark.rutland@arm.com>
>
> It's very easy to confuse __PHYS_OFFSET and PHYS_OFFSET. To clarify
> things, let's remove __PHYS_OFFSET and use KERNEL_START directly, with
> comments to show that we're using a physical address, as we do for other
> objects.
>
> At the same time, update the comment regarding the kernel entry address
> to mention __pa(KERNEL_START) rather than __pa(PAGE_OFFSET).
>
> There should be no functional change as a result of this patch.
>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>

Acked-by: Ard Biesheuvel <ardb@kernel.org>

Again, this conflicts with

https://lore.kernel.org/linux-arm-kernel/20220411094824.4176877-1-ardb@kernel.org/

but the conflict can be resolved in a straightforward manner.

> ---
>  arch/arm64/kernel/head.S | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 6a98f1a38c29..aaad76680495 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -37,8 +37,6 @@
>
>  #include "efi-header.S"
>
> -#define __PHYS_OFFSET  KERNEL_START
> -
>  #if (PAGE_OFFSET & 0x1fffff) != 0
>  #error PAGE_OFFSET must be at least 2MB aligned
>  #endif
> @@ -52,7 +50,7 @@
>   *   x0 = physical address to the FDT blob.
>   *
>   * This code is mostly position independent so you call this at
> - * __pa(PAGE_OFFSET).
> + * __pa(KERNEL_START).
>   *
>   * Note that the callee-saved registers are used for storing variables
>   * that are useful before the MMU is enabled. The allocations are described
> @@ -91,7 +89,7 @@
>  SYM_CODE_START(primary_entry)
>         bl      preserve_boot_args
>         bl      init_kernel_el                  // w0=cpu_boot_mode
> -       adrp    x23, __PHYS_OFFSET
> +       adrp    x23, KERNEL_START               // __pa(KERNEL_START)
>         and     x23, x23, MIN_KIMG_ALIGN - 1    // KASLR offset, defaults to 0
>         bl      set_cpu_boot_mode_flag
>         bl      __create_page_tables
> @@ -420,7 +418,7 @@ SYM_FUNC_END(__create_page_tables)
>  /*
>   * The following fragment of code is executed with the MMU enabled.
>   *
> - *   x0 = __PHYS_OFFSET
> + *   x0 = __pa(KERNEL_START)
>   */
>  SYM_FUNC_START_LOCAL(__primary_switched)
>         adr_l   x4, init_task
> @@ -870,7 +868,7 @@ SYM_FUNC_START_LOCAL(__primary_switch)
>         bl      __relocate_kernel
>  #ifdef CONFIG_RANDOMIZE_BASE
>         ldr     x8, =__primary_switched
> -       adrp    x0, __PHYS_OFFSET
> +       adrp    x0, KERNEL_START                // __pa(KERNEL_START)
>         blr     x8
>
>         /*
> @@ -893,6 +891,6 @@ SYM_FUNC_START_LOCAL(__primary_switch)
>  #endif
>  #endif
>         ldr     x8, =__primary_switched
> -       adrp    x0, __PHYS_OFFSET
> +       adrp    x0, KERNEL_START                // __pa(KERNEL_START)
>         br      x8
>  SYM_FUNC_END(__primary_switch)
> --
> 2.20.1
>


* Re: [PATCH 4/6] arm64: head: clarify `compute_indices`
  2022-05-18  3:17 ` [PATCH 4/6] arm64: head: clarify `compute_indices` Anshuman Khandual
@ 2022-05-18  6:47   ` Ard Biesheuvel
  0 siblings, 0 replies; 14+ messages in thread
From: Ard Biesheuvel @ 2022-05-18  6:47 UTC (permalink / raw)
  To: Anshuman Khandual; +Cc: Linux ARM, Mark Rutland, Catalin Marinas, Will Deacon

On Wed, 18 May 2022 at 05:17, Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
> From: Mark Rutland <mark.rutland@arm.com>
>
> The logic in the `compute_indices` macro can be difficult to follow, as
> it transiently uses output operands for unrelated temporary values.
>
> Let's make this clearer by using a `tmp` parameter, and splitting the
> logic into commented blocks. By folding a MUL and ADD into a single MADD
> we avoid the need for a second temporary.
>
> As `ptrs` is sometimes a register and sometimes an immediate, we cannot
> simplify this much further at present. If it were always a register, we
> could remove redundant MOVs, and if it were always an immediate we could
> use `(\ptrs - 1)` as an immediate for the ANDs when extracting index
> bits (or replace the LSR; SUB; AND sequence with a single UBFX).
>
> There should be no functional change as a result of this patch.
>
> Cc: Ard Biesheuvel <ardb@kernel.org>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Will Deacon <will@kernel.org>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>

I have a patch out that simplifies this more comprehensively (similar
to what the final paragraph alludes to)

https://lore.kernel.org/linux-arm-kernel/20220411094824.4176877-6-ardb@kernel.org/

> ---
>  arch/arm64/kernel/head.S | 33 ++++++++++++++++++---------------
>  1 file changed, 18 insertions(+), 15 deletions(-)
>
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index b5d7dacbbb2c..01739f5ec3de 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -189,20 +189,23 @@ SYM_CODE_END(preserve_boot_args)
>   * Preserves:  vstart, vend, shift, ptrs
>   * Returns:    istart, iend, count
>   */
> -       .macro compute_indices, vstart, vend, shift, ptrs, istart, iend, count
> +       .macro compute_indices, vstart, vend, shift, ptrs, istart, iend, count, tmp
> +       // iend = (vend >> shift) & (ptrs - 1)
>         lsr     \iend, \vend, \shift
> -       mov     \istart, \ptrs
> -       sub     \istart, \istart, #1
> -       and     \iend, \iend, \istart   // iend = (vend >> shift) & (ptrs - 1)
> -       mov     \istart, \ptrs
> -       mul     \istart, \istart, \count
> -       add     \iend, \iend, \istart   // iend += count * ptrs
> -                                       // our entries span multiple tables
> +       mov     \tmp, \ptrs
> +       sub     \tmp, \tmp, #1
> +       and     \iend, \iend, \tmp
>
> +       // iend += count * ptrs
> +       // our entries span multiple tables
> +       mov     \tmp, \ptrs
> +       madd    \iend, \count, \tmp, \iend
> +
> +       // istart = (vstart >> shift) & (ptrs - 1)
>         lsr     \istart, \vstart, \shift
> -       mov     \count, \ptrs
> -       sub     \count, \count, #1
> -       and     \istart, \istart, \count
> +       mov     \tmp, \ptrs
> +       sub     \tmp, \tmp, #1
> +       and     \istart, \istart, \tmp
>
>         sub     \count, \iend, \istart
>         .endm
> @@ -229,25 +232,25 @@ SYM_CODE_END(preserve_boot_args)
>         add \rtbl, \tbl, #PAGE_SIZE
>         mov \sv, \rtbl
>         mov \count, #0
> -       compute_indices \vstart, \vend, #PGDIR_SHIFT, \pgds, \istart, \iend, \count
> +       compute_indices \vstart, \vend, #PGDIR_SHIFT, \pgds, \istart, \iend, \count, \tmp
>         populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
>         mov \tbl, \sv
>         mov \sv, \rtbl
>
>  #if SWAPPER_PGTABLE_LEVELS > 3
> -       compute_indices \vstart, \vend, #PUD_SHIFT, #PTRS_PER_PUD, \istart, \iend, \count
> +       compute_indices \vstart, \vend, #PUD_SHIFT, #PTRS_PER_PUD, \istart, \iend, \count, \tmp
>         populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
>         mov \tbl, \sv
>         mov \sv, \rtbl
>  #endif
>
>  #if SWAPPER_PGTABLE_LEVELS > 2
> -       compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #PTRS_PER_PMD, \istart, \iend, \count
> +       compute_indices \vstart, \vend, #SWAPPER_TABLE_SHIFT, #PTRS_PER_PMD, \istart, \iend, \count, \tmp
>         populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
>         mov \tbl, \sv
>  #endif
>
> -       compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #PTRS_PER_PTE, \istart, \iend, \count
> +       compute_indices \vstart, \vend, #SWAPPER_BLOCK_SHIFT, #PTRS_PER_PTE, \istart, \iend, \count, \tmp
>         bic \count, \phys, #SWAPPER_BLOCK_SIZE - 1
>         populate_entries \tbl, \count, \istart, \iend, \flags, #SWAPPER_BLOCK_SIZE, \tmp
>         .endm
> --
> 2.20.1
>
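[For readers following the diff without an ARM assembler handy: the index arithmetic that the reworked `compute_indices` macro performs can be sketched in Python. This is an illustrative model only, not kernel code; the function name and example values below are assumptions made for the sketch.]

```python
# Model of the `compute_indices` arithmetic from the patch above.
def compute_indices(vstart, vend, shift, ptrs, count):
    """Return (istart, iend, count): the first and last entry indices
    that the VA range [vstart, vend] occupies at a page-table level
    whose entries each cover 1 << shift bytes, with `ptrs` entries per
    table. On entry, `count` is the number of extra tables already
    needed at the previous level; on return it is iend - istart."""
    # iend = (vend >> shift) & (ptrs - 1)
    iend = (vend >> shift) & (ptrs - 1)
    # iend += count * ptrs  -- our entries span multiple tables
    iend += count * ptrs
    # istart = (vstart >> shift) & (ptrs - 1)
    istart = (vstart >> shift) & (ptrs - 1)
    return istart, iend, iend - istart

# Example (assumed values): a 16 MiB range mapped with 2 MiB
# PMD-level blocks (shift = 21), 512 entries per table, count = 0.
print(compute_indices(0x40000000, 0x40FFFFFF, 21, 512, 0))  # -> (0, 7, 7)
```

[The `madd` in the patch folds the old `mul` + `add` pair into the single `iend += count * ptrs` step, which is why the scratch register `tmp` is enough to hold `ptrs`.]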


* Re: [PATCH 0/6] arm64/head: Cleanups for __create_page_tables()
  2022-05-18  3:17 [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Anshuman Khandual
                   ` (5 preceding siblings ...)
  2022-05-18  3:17 ` [PATCH 6/6] arm64: head: clarify commentary for __create_page_tables Anshuman Khandual
@ 2022-05-18  6:52 ` Ard Biesheuvel
  2022-05-18  9:35   ` Anshuman Khandual
  6 siblings, 1 reply; 14+ messages in thread
From: Ard Biesheuvel @ 2022-05-18  6:52 UTC (permalink / raw)
  To: Anshuman Khandual; +Cc: Linux ARM, Mark Rutland, Catalin Marinas, Will Deacon

On Wed, 18 May 2022 at 05:18, Anshuman Khandual
<anshuman.khandual@arm.com> wrote:
>
> This cleanup series is a precursor to carving idmap_pg_dir creation out of
> the overall __create_page_tables(). This series is derived from original
> work by Mark Rutland.
>
> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/pgtable/idmap
>
> This series applies on v5.18-rc4
>
> Mark Rutland (6):
>   arm64: don't override idmap t0sz
>   arm64: head: remove __PHYS_OFFSET
>   arm64: head: clarify `populate_entries`
>   arm64: head: clarify `compute_indices`
>   arm64: head: clarify `map_memory`
>   arm64: head: clarify commentary for __create_page_tables
>

Hello Anshuman,

I submitted a fairly sizable stack of head.S changes recently, much of
which overlaps with this series, and which already splits off ID map
creation from the creation of early swapper.

https://lore.kernel.org/linux-arm-kernel/20220411094824.4176877-1-ardb@kernel.org/

Let's align instead of working on this in parallel, shall we?

Kind regards,
Ard.


* Re: [PATCH 0/6] arm64/head: Cleanups for __create_page_tables()
  2022-05-18  6:52 ` [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Ard Biesheuvel
@ 2022-05-18  9:35   ` Anshuman Khandual
  2022-06-27 10:17     ` Will Deacon
  0 siblings, 1 reply; 14+ messages in thread
From: Anshuman Khandual @ 2022-05-18  9:35 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Linux ARM, Mark Rutland, Catalin Marinas, Will Deacon



On 5/18/22 12:22, Ard Biesheuvel wrote:
> On Wed, 18 May 2022 at 05:18, Anshuman Khandual
> <anshuman.khandual@arm.com> wrote:
>>
>> This cleanup series is a precursor to carving idmap_pg_dir creation out of
>> the overall __create_page_tables(). This series is derived from original
>> work by Mark Rutland.
>>
>> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/pgtable/idmap
>>
>> This series applies on v5.18-rc4
>>
>> Mark Rutland (6):
>>   arm64: don't override idmap t0sz
>>   arm64: head: remove __PHYS_OFFSET
>>   arm64: head: clarify `populate_entries`
>>   arm64: head: clarify `compute_indices`
>>   arm64: head: clarify `map_memory`
>>   arm64: head: clarify commentary for __create_page_tables
>>
> 
> Hello Anshuman,
> 
> I submitted a fairly sizable stack of head.S changes recently, much of
> which overlaps with this series, and which already splits off ID map
> creation from the creation of early swapper.
> 
> https://lore.kernel.org/linux-arm-kernel/20220411094824.4176877-1-ardb@kernel.org/
> 
> Let's align instead of working on this in parallel, shall we?

Hello Ard,

Sure. I will go through the series and align as required.

- Anshuman


* Re: [PATCH 0/6] arm64/head: Cleanups for __create_page_tables()
  2022-05-18  9:35   ` Anshuman Khandual
@ 2022-06-27 10:17     ` Will Deacon
  2022-06-28  4:47       ` Anshuman Khandual
  0 siblings, 1 reply; 14+ messages in thread
From: Will Deacon @ 2022-06-27 10:17 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: Ard Biesheuvel, Linux ARM, Mark Rutland, Catalin Marinas

Hi Anshuman,

On Wed, May 18, 2022 at 03:05:20PM +0530, Anshuman Khandual wrote:
> On 5/18/22 12:22, Ard Biesheuvel wrote:
> > On Wed, 18 May 2022 at 05:18, Anshuman Khandual
> > <anshuman.khandual@arm.com> wrote:
> >>
> >> This cleanup series is a precursor to carving idmap_pg_dir creation out of
> >> the overall __create_page_tables(). This series is derived from original
> >> work by Mark Rutland.
> >>
> >> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/pgtable/idmap
> >>
> >> This series applies on v5.18-rc4
> >>
> >> Mark Rutland (6):
> >>   arm64: don't override idmap t0sz
> >>   arm64: head: remove __PHYS_OFFSET
> >>   arm64: head: clarify `populate_entries`
> >>   arm64: head: clarify `compute_indices`
> >>   arm64: head: clarify `map_memory`
> >>   arm64: head: clarify commentary for __create_page_tables
> >>
> > 
> > Hello Anshuman,
> > 
> > I submitted a fairly sizable stack of head.S changes recently, much of
> > which overlaps with this series, and which already splits off ID map
> > creation from the creation of early swapper.
> > 
> > https://lore.kernel.org/linux-arm-kernel/20220411094824.4176877-1-ardb@kernel.org/
> > 
> > Let's align instead of working on this in parallel, shall we?
> 
> Hello Ard,
> 
> Sure. I will go through the series and align as required.

I've queued most of Ard's series now (for-next/boot), so please see if you
think any of the changes here are still relevant and post a new series based
on that.

Thanks,

Will


* Re: [PATCH 0/6] arm64/head: Cleanups for __create_page_tables()
  2022-06-27 10:17     ` Will Deacon
@ 2022-06-28  4:47       ` Anshuman Khandual
  0 siblings, 0 replies; 14+ messages in thread
From: Anshuman Khandual @ 2022-06-28  4:47 UTC (permalink / raw)
  To: Will Deacon; +Cc: Ard Biesheuvel, Linux ARM, Mark Rutland, Catalin Marinas



On 6/27/22 15:47, Will Deacon wrote:
> Hi Anshuman,
> 
> On Wed, May 18, 2022 at 03:05:20PM +0530, Anshuman Khandual wrote:
>> On 5/18/22 12:22, Ard Biesheuvel wrote:
>>> On Wed, 18 May 2022 at 05:18, Anshuman Khandual
>>> <anshuman.khandual@arm.com> wrote:
>>>>
>>>> This cleanup series is a precursor to carving idmap_pg_dir creation out of
>>>> the overall __create_page_tables(). This series is derived from original
>>>> work by Mark Rutland.
>>>>
>>>> https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=arm64/pgtable/idmap
>>>>
>>>> This series applies on v5.18-rc4
>>>>
>>>> Mark Rutland (6):
>>>>   arm64: don't override idmap t0sz
>>>>   arm64: head: remove __PHYS_OFFSET
>>>>   arm64: head: clarify `populate_entries`
>>>>   arm64: head: clarify `compute_indices`
>>>>   arm64: head: clarify `map_memory`
>>>>   arm64: head: clarify commentary for __create_page_tables
>>>>
>>>
>>> Hello Anshuman,
>>>
>>> I submitted a fairly sizable stack of head.S changes recently, much of
>>> which overlaps with this series, and which already splits off ID map
>>> creation from the creation of early swapper.
>>>
>>> https://lore.kernel.org/linux-arm-kernel/20220411094824.4176877-1-ardb@kernel.org/
>>>
>>> Let's align instead of working on this in parallel, shall we?
>>
>> Hello Ard,
>>
>> Sure. I will go through the series and align as required.
> 
> I've queued most of Ard's series now (for-next/boot), so please see if you
> think any of the changes here are still relevant and post a new series based
> on that.

I guess the second patch that drops __PHYS_OFFSET will be the only one still
remaining. I will send it to the list.


end of thread, other threads:[~2022-06-28  4:48 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-18  3:17 [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Anshuman Khandual
2022-05-18  3:17 ` [PATCH 1/6] arm64: don't override idmap t0sz Anshuman Khandual
2022-05-18  6:41   ` Ard Biesheuvel
2022-05-18  3:17 ` [PATCH 2/6] arm64: head: remove __PHYS_OFFSET Anshuman Khandual
2022-05-18  6:45   ` Ard Biesheuvel
2022-05-18  3:17 ` [PATCH 3/6] arm64: head: clarify `populate_entries` Anshuman Khandual
2022-05-18  3:17 ` [PATCH 4/6] arm64: head: clarify `compute_indices` Anshuman Khandual
2022-05-18  6:47   ` Ard Biesheuvel
2022-05-18  3:17 ` [PATCH 5/6] arm64: head: clarify `map_memory` Anshuman Khandual
2022-05-18  3:17 ` [PATCH 6/6] arm64: head: clarify commentary for __create_page_tables Anshuman Khandual
2022-05-18  6:52 ` [PATCH 0/6] arm64/head: Cleanups for __create_page_tables() Ard Biesheuvel
2022-05-18  9:35   ` Anshuman Khandual
2022-06-27 10:17     ` Will Deacon
2022-06-28  4:47       ` Anshuman Khandual
