* [PATCH v2] arm64: head: avoid over-mapping in map_memory
@ 2021-08-23 10:12 Mark Rutland
  2021-08-24 10:42 ` Will Deacon
  2021-08-24 16:18 ` Catalin Marinas
  0 siblings, 2 replies; 3+ messages in thread
From: Mark Rutland @ 2021-08-23 10:12 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: anshuman.khandual, ard.biesheuvel, catalin.marinas, mark.rutland,
	steve.capper, will

The `compute_indices` and `populate_entries` macros operate on inclusive
bounds, and thus the `map_memory` macro which uses them also operates
on inclusive bounds.

We pass `_end` and `_idmap_text_end` to `map_memory`, but these are
exclusive bounds, and if one of these is sufficiently aligned (as a
result of kernel configuration, physical placement, and KASLR), then:

* In `compute_indices`, the computed `iend` will be in the page/block *after*
  the final byte of the intended mapping.

* In `populate_entries`, an unnecessary entry will be created at the end
  of each level of table. At the leaf level, this entry will map up to
  SWAPPER_BLOCK_SIZE bytes of physical addresses that we did not intend
  to map (a concrete sketch follows this list).
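
To make the off-by-one concrete, here is a minimal userspace sketch of
the index arithmetic. This is a hypothetical example: the shift and the
addresses are assumptions chosen for illustration (SWAPPER_BLOCK_SHIFT
is 21 with 4KiB pages, i.e. 2MiB blocks), not values from a real boot.

  #include <stdio.h>

  #define SHIFT 21UL /* assumed: SWAPPER_BLOCK_SHIFT for 4KiB pages, 2MiB blocks */

  int main(void)
  {
          unsigned long vstart = 0x40000000UL;    /* illustrative start of range */
          unsigned long vend   = 0x40400000UL;    /* exclusive end, 2MiB-aligned */

          unsigned long istart = vstart >> SHIFT; /* index of the first block */
          unsigned long iend   = vend >> SHIFT;   /* lands one block past the end */

          /* populate_entries fills [istart, iend] inclusive */
          printf("entries: %lu\n", iend - istart + 1); /* 3: 6MiB for a 4MiB range */
          return 0;
  }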

As we may map up to SWAPPER_BLOCK_SIZE bytes more than intended, we may
violate the boot protocol and map physical addresses past the
2MiB-aligned end address we are permitted to map. As we map these with
Normal memory attributes, this may result in further problems depending
on what these physical addresses correspond to.

The final entry at each level may require an additional table at that
level. As EARLY_ENTRIES() likewise calculates an inclusive bound, we
allocate enough memory for this extra table, so the over-mapping does
not also overrun the early page table allocation.

Avoid the extraneous mapping by having map_memory convert the exclusive
end address to an inclusive end address by subtracting one, and do
likewise in EARLY_ENTRIES() when calculating the number of required
tables. The comments are updated to document clearly which boundaries
each macro operates on and, for consistency with the other macros, to
describe map_memory's `vstart` and `vend` as virtual addresses.
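
The fixed EARLY_ENTRIES() can be sanity-checked in isolation. A minimal
sketch, assuming EARLY_KASLR is 0 (its real value depends on
CONFIG_RANDOMIZE_BASE) and reusing the illustrative addresses above:

  #define EARLY_KASLR 0 /* assumed; KASLR reserves one extra entry per level */
  #define EARLY_ENTRIES(vstart, vend, shift) \
          ((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + EARLY_KASLR)

  /* a 4MiB range with a 2MiB-aligned exclusive end: exactly 2 block entries */
  _Static_assert(EARLY_ENTRIES(0x40000000UL, 0x40400000UL, 21) == 2, "aligned");
  /* an end one byte into the next block: the partial block needs an entry */
  _Static_assert(EARLY_ENTRIES(0x40000000UL, 0x40400001UL, 21) == 3, "unaligned");

With the old definition, the aligned case would have evaluated to 3,
reserving a table for (and mapping) a block past the intended end.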

Fixes: 0370b31e48454d8c ("arm64: Extend early page table code to allow for larger kernels")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/kernel-pgtable.h |  4 ++--
 arch/arm64/kernel/head.S                | 11 ++++++-----
 2 files changed, 8 insertions(+), 7 deletions(-)

Since v1 [1]:
* Subtract in map_memory macro
* Update EARLY_ENTRIES()
* Follow existing comment style

[1] https://lore.kernel.org/r/20210810152756.23903-1-mark.rutland@arm.com

Mark.

diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 3512184cfec17..96dc0f7da258d 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -65,8 +65,8 @@
 #define EARLY_KASLR	(0)
 #endif
 
-#define EARLY_ENTRIES(vstart, vend, shift) (((vend) >> (shift)) \
-					- ((vstart) >> (shift)) + 1 + EARLY_KASLR)
+#define EARLY_ENTRIES(vstart, vend, shift) \
+	((((vend) - 1) >> (shift)) - ((vstart) >> (shift)) + 1 + EARLY_KASLR)
 
 #define EARLY_PGDS(vstart, vend) (EARLY_ENTRIES(vstart, vend, PGDIR_SHIFT))
 
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c5c994a73a645..17962452e31de 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -177,7 +177,7 @@ SYM_CODE_END(preserve_boot_args)
  * to be composed of multiple pages. (This effectively scales the end index).
  *
  *	vstart:	virtual address of start of range
- *	vend:	virtual address of end of range
+ *	vend:	virtual address of end of range - we map [vstart, vend]
  *	shift:	shift used to transform virtual address into index
  *	ptrs:	number of entries in page table
  *	istart:	index in table corresponding to vstart
@@ -214,17 +214,18 @@ SYM_CODE_END(preserve_boot_args)
  *
  *	tbl:	location of page table
  *	rtbl:	address to be used for first level page table entry (typically tbl + PAGE_SIZE)
- *	vstart:	start address to map
- *	vend:	end address to map - we map [vstart, vend]
+ *	vstart:	virtual address of start of range
+ *	vend:	virtual address of end of range - we map [vstart, vend - 1]
  *	flags:	flags to use to map last level entries
  *	phys:	physical address corresponding to vstart - physical memory is contiguous
  *	pgds:	the number of pgd entries
  *
  * Temporaries:	istart, iend, tmp, count, sv - these need to be different registers
- * Preserves:	vstart, vend, flags
- * Corrupts:	tbl, rtbl, istart, iend, tmp, count, sv
+ * Preserves:	vstart, flags
+ * Corrupts:	tbl, rtbl, vend, istart, iend, tmp, count, sv
  */
 	.macro map_memory, tbl, rtbl, vstart, vend, flags, phys, pgds, istart, iend, tmp, count, sv
+	sub \vend, \vend, #1
 	add \rtbl, \tbl, #PAGE_SIZE
 	mov \sv, \rtbl
 	mov \count, #0
-- 
2.11.0



* Re: [PATCH v2] arm64: head: avoid over-mapping in map_memory
  2021-08-23 10:12 [PATCH v2] arm64: head: avoid over-mapping in map_memory Mark Rutland
@ 2021-08-24 10:42 ` Will Deacon
  2021-08-24 16:18 ` Catalin Marinas
  1 sibling, 0 replies; 3+ messages in thread
From: Will Deacon @ 2021-08-24 10:42 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-arm-kernel, anshuman.khandual, ard.biesheuvel,
	catalin.marinas, steve.capper

On Mon, Aug 23, 2021 at 11:12:53AM +0100, Mark Rutland wrote:
> The `compute_indices` and `populate_entries` macros operate on inclusive
> bounds, and thus the `map_memory` macro which uses them also operates
> on inclusive bounds.
> 
> We pass `_end` and `_idmap_text_end` to `map_memory`, but these are
> exclusive bounds, and if one of these is sufficiently aligned (as a
> result of kernel configuration, physical placement, and KASLR), then:
> 
> [...]

Acked-by: Will Deacon <will@kernel.org>

Will


* Re: [PATCH v2] arm64: head: avoid over-mapping in map_memory
  2021-08-23 10:12 [PATCH v2] arm64: head: avoid over-mapping in map_memory Mark Rutland
  2021-08-24 10:42 ` Will Deacon
@ 2021-08-24 16:18 ` Catalin Marinas
  1 sibling, 0 replies; 3+ messages in thread
From: Catalin Marinas @ 2021-08-24 16:18 UTC (permalink / raw)
  To: Mark Rutland, linux-arm-kernel
  Cc: Will Deacon, ard.biesheuvel, anshuman.khandual, steve.capper

On Mon, 23 Aug 2021 11:12:53 +0100, Mark Rutland wrote:
> The `compute_indices` and `populate_entries` macros operate on inclusive
> bounds, and thus the `map_memory` macro which uses them also operates
> on inclusive bounds.
> 
> We pass `_end` and `_idmap_text_end` to `map_memory`, but these are
> exclusive bounds, and if one of these is sufficiently aligned (as a
> result of kernel configuration, physical placement, and KASLR), then:
> 
> [...]

Applied to arm64 (for-next/misc), thanks!

[1/1] arm64: head: avoid over-mapping in map_memory
      https://git.kernel.org/arm64/c/90268574a3e8

-- 
Catalin


