* [PATCH 0/2] arm64: mm: optimize VA space organization for 52-bit
@ 2020-10-06 22:49 Ard Biesheuvel
  2020-10-06 22:49 ` [PATCH 1/2] arm64: mm: use single quantity to represent the PA to VA translation Ard Biesheuvel
  2020-10-06 22:49 ` [PATCH 2/2] arm64: mm: extend linear region for 52-bit VA configurations Ard Biesheuvel
  0 siblings, 2 replies; 4+ messages in thread
From: Ard Biesheuvel @ 2020-10-06 22:49 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, Anshuman Khandual, will, Ard Biesheuvel, Steve Capper

This series reorganizes the kernel VA space slightly so that 52-bit VA
configurations can use more virtual address space: the usable linear
address space almost doubles, from 2^51 to 2^52 - 2^47 bytes.
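(In byte terms: 2^51 is 2 PiB, whereas 2^52 - 2^47 is 4 PiB minus
128 TiB, or roughly 3.9 PiB; only the top 2^47 bytes of the 52-bit VA
space remain set aside for the vmalloc region.)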

Patch #1 merges physvirt_offset and memstart_addr: both represent a
translation between the physical address space and the linear region,
so there is no need to carry both. This also fixes a bug, as the two
values were not kept properly in sync when booting with KASLR enabled.

Patch #2 updates the definitions for the boundaries of the linear space,
so that 52-bit VA builds use all available space for the linear region.

Not tested yet on a 52-bit VA capable system.

Cc: Steve Capper <steve.capper@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>

Ard Biesheuvel (2):
  arm64: mm: use single quantity to represent the PA to VA translation
  arm64: mm: extend linear region for 52-bit VA configurations

 Documentation/arm64/kasan-offsets.sh |  3 +-
 arch/arm64/Kconfig                   | 20 ++++++------
 arch/arm64/include/asm/memory.h      | 13 ++++----
 arch/arm64/include/asm/pgtable.h     |  4 +--
 arch/arm64/mm/init.c                 | 32 +++++++-------------
 5 files changed, 30 insertions(+), 42 deletions(-)

-- 
2.17.1



* [PATCH 1/2] arm64: mm: use single quantity to represent the PA to VA translation
  2020-10-06 22:49 [PATCH 0/2] arm64: mm: optimize VA space organization for 52-bit Ard Biesheuvel
@ 2020-10-06 22:49 ` Ard Biesheuvel
  2020-10-06 22:49 ` [PATCH 2/2] arm64: mm: extend linear region for 52-bit VA configurations Ard Biesheuvel
  1 sibling, 0 replies; 4+ messages in thread
From: Ard Biesheuvel @ 2020-10-06 22:49 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, Anshuman Khandual, will, Ard Biesheuvel, Steve Capper

On arm64, the global variable memstart_addr represents the physical
address of PAGE_OFFSET, so physical-to-virtual translations (and vice
versa) used to come down to simple additions or subtractions involving
the values of PAGE_OFFSET and memstart_addr.
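
In other words (illustrative identities for linear map addresses only,
not the kernel macros verbatim):

    __virt_to_phys(va) == (va - PAGE_OFFSET) + memstart_addr
    __phys_to_virt(pa) == (pa - memstart_addr) + PAGE_OFFSET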

When support for 52-bit virtual addressing was introduced, we had to
deal with PAGE_OFFSET potentially lying outside of the VA range that
the hardware can actually cover (as a 52-bit VA capable build must
also be able to run on systems that are only 48-bit VA capable), and
for this reason, a second translation was introduced and recorded in
the global variable physvirt_offset.

However, if we go back to the original definition of memstart_addr,
i.e., the physical address of PAGE_OFFSET, it turns out that there is
no need for two separate translations: instead, we can simply subtract
the size of the unaddressable VA space from memstart_addr to make the
available physical memory appear in the 48-bit addressable VA region.
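
To make this concrete, here is a standalone sketch (plain user space C
with illustrative values; the names mirror their kernel counterparts,
but this is not kernel code) of a 52-bit VA kernel booting on hardware
that only supports 48-bit VAs, with DRAM starting at 2 GiB:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  #define _PAGE_OFFSET(va)  (-(UINT64_C(1) << (va)))
  #define PAGE_OFFSET       _PAGE_OFFSET(52)    /* 0xfff0000000000000 */

  int main(void)
  {
          int64_t memstart_addr = 0x80000000;   /* PA of start of DRAM */

          /* vabits_actual == 48: subtract the size of the unaddressable
           * VA space, so that DRAM ends up in the 48-bit addressable
           * part of the linear region (memstart_addr goes negative) */
          memstart_addr -= _PAGE_OFFSET(48) - _PAGE_OFFSET(52);

          /* __phys_to_virt(): (pa - PHYS_OFFSET) | PAGE_OFFSET */
          uint64_t va = (UINT64_C(0x80000000) - memstart_addr) | PAGE_OFFSET;

          /* prints 0xffff000000000000, i.e., _PAGE_OFFSET(48) */
          printf("PA 0x80000000 -> VA 0x%016" PRIx64 "\n", va);
          return 0;
  }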

This simplifies things, but also fixes a bug on KASLR builds, which
may update memstart_addr later on in arm64_memblock_init() but fail
to update vmemmap and physvirt_offset accordingly.

Fixes: 5383cc6efed13 ("arm64: mm: Introduce vabits_actual")
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 arch/arm64/include/asm/memory.h  |  5 ++--
 arch/arm64/include/asm/pgtable.h |  4 +--
 arch/arm64/mm/init.c             | 30 +++++++-------------
 3 files changed, 14 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index afa722504bfd..1ded73189874 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -164,7 +164,6 @@
 extern u64			vabits_actual;
 #define PAGE_END		(_PAGE_END(vabits_actual))
 
-extern s64			physvirt_offset;
 extern s64			memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET		({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
@@ -240,7 +239,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
  */
 #define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
 
-#define __lm_to_phys(addr)	(((addr) + physvirt_offset))
+#define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
 
 #define __virt_to_phys_nodebug(x) ({					\
@@ -258,7 +257,7 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
 #define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
 #endif /* CONFIG_DEBUG_VIRTUAL */
 
-#define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))
+#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
 
 /*
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d5d3fbe73953..88233d42d9c2 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -23,6 +23,8 @@
 #define VMALLOC_START		(MODULES_END)
 #define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
+#define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
+
 #define FIRST_USER_ADDRESS	0UL
 
 #ifndef __ASSEMBLY__
@@ -33,8 +35,6 @@
 #include <linux/mm_types.h>
 #include <linux/sched.h>
 
-extern struct page *vmemmap;
-
 extern void __pte_error(const char *file, int line, unsigned long val);
 extern void __pmd_error(const char *file, int line, unsigned long val);
 extern void __pud_error(const char *file, int line, unsigned long val);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 481d22c32a2e..324f0e0894f6 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -54,12 +54,6 @@
 s64 memstart_addr __ro_after_init = -1;
 EXPORT_SYMBOL(memstart_addr);
 
-s64 physvirt_offset __ro_after_init;
-EXPORT_SYMBOL(physvirt_offset);
-
-struct page *vmemmap __ro_after_init;
-EXPORT_SYMBOL(vmemmap);
-
 /*
  * We create both ZONE_DMA and ZONE_DMA32. ZONE_DMA covers the first 1G of
  * memory as some devices, namely the Raspberry Pi 4, have peripherals with
@@ -290,20 +284,6 @@ void __init arm64_memblock_init(void)
 	memstart_addr = round_down(memblock_start_of_DRAM(),
 				   ARM64_MEMSTART_ALIGN);
 
-	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
-
-	vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
-
-	/*
-	 * If we are running with a 52-bit kernel VA config on a system that
-	 * does not support it, we have to offset our vmemmap and physvirt_offset
-	 * s.t. we avoid the 52-bit portion of the direct linear map
-	 */
-	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52)) {
-		vmemmap += (_PAGE_OFFSET(48) - _PAGE_OFFSET(52)) >> PAGE_SHIFT;
-		physvirt_offset = PHYS_OFFSET - _PAGE_OFFSET(48);
-	}
-
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
@@ -318,6 +298,16 @@ void __init arm64_memblock_init(void)
 		memblock_remove(0, memstart_addr);
 	}
 
+	/*
+	 * If we are running with a 52-bit kernel VA config on a system that
+	 * does not support it, we have to place the available physical
+	 * memory in the 48-bit addressable part of the linear region, i.e.,
+	 * we have to move it upward. Since memstart_addr represents the
+	 * physical address of PAGE_OFFSET, we have to *subtract* from it.
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52))
+		memstart_addr -= _PAGE_OFFSET(48) - _PAGE_OFFSET(52);
+
 	/*
 	 * Apply the memory limit if it was set. Since the kernel may be loaded
 	 * high up in memory, add back the kernel region that must be accessible
-- 
2.17.1



* [PATCH 2/2] arm64: mm: extend linear region for 52-bit VA configurations
  2020-10-06 22:49 [PATCH 0/2] arm64: mm: optimize VA space organization for 52-bit Ard Biesheuvel
  2020-10-06 22:49 ` [PATCH 1/2] arm64: mm: use single quantity to represent the PA to VA translation Ard Biesheuvel
@ 2020-10-06 22:49 ` Ard Biesheuvel
  2020-10-07 18:00   ` Ard Biesheuvel
  1 sibling, 1 reply; 4+ messages in thread
From: Ard Biesheuvel @ 2020-10-06 22:49 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: catalin.marinas, Anshuman Khandual, will, Ard Biesheuvel, Steve Capper

For historical reasons, the arm64 kernel VA space is configured as two
equally sized halves, i.e., on a 48-bit VA build, the VA space is split
into a 47-bit vmalloc region and a 47-bit linear region.

When support for 52-bit virtual addressing was added, this equal split
was kept, resulting in a substantial waste of virtual address space in
the linear region:

                           48-bit VA                     52-bit VA
  0xffff_ffff_ffff_ffff +-------------+               +-------------+
                        |   vmalloc   |               |   vmalloc   |
  0xffff_8000_0000_0000 +-------------+ _PAGE_END(48) +-------------+
                        |   linear    |               :             :
  0xffff_0000_0000_0000 +-------------+               :             :
                        :             :               :             :
                        :             :               :             :
                        :             :               :             :
                        :             :               :  currently  :
                        :  unusable   :               :             :
                        :             :               :   unused    :
                        :     by      :               :             :
                        :             :               :             :
                        :  hardware   :               :             :
                        :             :               :             :
  0xfff8_0000_0000_0000 :             : _PAGE_END(52) +-------------+
                        :             :               |             |
                        :             :               |             |
                        :             :               |             |
                        :             :               |             |
                        :             :               |             |
                        :  unusable   :               |             |
                        :             :               |   linear    |
                        :     by      :               |             |
                        :             :               |   region    |
                        :  hardware   :               |             |
                        :             :               |             |
                        :             :               |             |
                        :             :               |             |
                        :             :               |             |
                        :             :               |             |
                        :             :               |             |
  0xfff0_0000_0000_0000 +-------------+  PAGE_OFFSET  +-------------+

As illustrated above, the 52-bit VA kernel uses 47 bits for the vmalloc
space (as before), to ensure that a single 64k granule kernel image can
support any 64k granule capable system, regardless of whether it supports
the 52-bit virtual addressing extension. However, because the VA space is
still split into equal halves, the linear region is only 2^51 bytes in
size, wasting almost half of the 52-bit VA space.

Let's fix this by abandoning the equal split, and simply assigning all
VA space outside of the vmalloc region to the linear region.
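
For concreteness, with the new definitions below, the linear region
size works out as follows on a 52-bit VA capable system in the
non-KASAN case (underscores added for readability):

  PAGE_END           == _PAGE_END(48)          == 0xffff_8000_0000_0000
  _PAGE_OFFSET(52)                             == 0xfff0_0000_0000_0000
  linear_region_size == PAGE_END - _PAGE_OFFSET(52)
                     == 0x000f_8000_0000_0000  == 2^52 - 2^47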

The KASAN shadow region is reconfigured so that it ends at the start of
the vmalloc region, and grows downwards. That way, the arrangement of
the vmalloc space (which contains kernel mappings, modules, the BPF
region, the vmemmap array, etc.) is identical between non-KASAN and
KASAN builds, which aids debugging.
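
Since generic KASAN maps an address x to shadow (x >> 3) +
KASAN_SHADOW_OFFSET, the new offset values can be sanity-checked with a
small standalone program (a sketch, using the 48/52-bit !KASAN_SW_TAGS
offset from the Kconfig hunk below; not kernel code):

  #include <assert.h>
  #include <stdint.h>

  #define KASAN_SHADOW_SCALE_SHIFT 3                  /* generic KASAN */
  #define KASAN_SHADOW_OFFSET      UINT64_C(0xdfff800000000000)
  #define _PAGE_OFFSET(va)         (-(UINT64_C(1) << (va)))
  #define _PAGE_END(va)            (-(UINT64_C(1) << ((va) - 1)))

  int main(void)
  {
          uint64_t shadow_end = (UINT64_C(1) << (64 - KASAN_SHADOW_SCALE_SHIFT))
                                + KASAN_SHADOW_OFFSET;

          /* the shadow region now ends exactly where the vmalloc region
           * begins, i.e., at _PAGE_END(48) == 0xffff800000000000 */
          assert(shadow_end == _PAGE_END(48));

          /* PAGE_END equals the shadow address of PAGE_OFFSET, so the
           * linear region runs right up to the start of the portion of
           * the shadow that is in use (vabits_actual == 48 shown) */
          uint64_t page_end = shadow_end
                  - (UINT64_C(1) << (48 - KASAN_SHADOW_SCALE_SHIFT));
          assert(page_end == (_PAGE_OFFSET(48) >> KASAN_SHADOW_SCALE_SHIFT)
                             + KASAN_SHADOW_OFFSET);
          return 0;
  }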

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
---
 Documentation/arm64/kasan-offsets.sh |  3 +--
 arch/arm64/Kconfig                   | 20 ++++++++++----------
 arch/arm64/include/asm/memory.h      |  8 ++++----
 arch/arm64/mm/init.c                 |  2 +-
 4 files changed, 16 insertions(+), 17 deletions(-)

diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
index 2b7a021db363..2dc5f9e18039 100644
--- a/Documentation/arm64/kasan-offsets.sh
+++ b/Documentation/arm64/kasan-offsets.sh
@@ -1,12 +1,11 @@
 #!/bin/sh
 
 # Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
-# start address at the mid-point of the kernel VA space
+# start address at the top of the linear region
 
 print_kasan_offset () {
 	printf "%02d\t" $1
 	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
-			+ (1 << ($1 - 32 - $2)) \
 			- (1 << (64 - 32 - $2)) ))
 }
 
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 6d232837cbee..896a46a71d23 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -321,16 +321,16 @@ config BROKEN_GAS_INST
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
-	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && !KASAN_SW_TAGS
-	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
-	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
-	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
-	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
-	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && KASAN_SW_TAGS
-	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
-	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
-	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
-	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+	default 0xdfff800000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffc00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
+	default 0xdffffe0000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
+	default 0xdfffffc000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
+	default 0xdffffff800000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
+	default 0xefff800000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefffc00000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
+	default 0xeffffe0000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+	default 0xefffffc000000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+	default 0xeffffff800000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
 	default 0xffffffffffffffff
 
 source "arch/arm64/Kconfig.platforms"
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 1ded73189874..a9bb750b3dac 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -44,7 +44,7 @@
 #define _PAGE_OFFSET(va)	(-(UL(1) << (va)))
 #define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
 #define KIMAGE_VADDR		(MODULES_END)
-#define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
+#define BPF_JIT_REGION_START	(_PAGE_END(VA_BITS_MIN))
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
 #define BPF_JIT_REGION_END	(BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
@@ -76,10 +76,11 @@
 #define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
 #define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
 					+ KASAN_SHADOW_OFFSET)
+#define PAGE_END		(KASAN_SHADOW_END - (1UL << (vabits_actual - KASAN_SHADOW_SCALE_SHIFT)))
 #define KASAN_THREAD_SHIFT	1
 #else
 #define KASAN_THREAD_SHIFT	0
-#define KASAN_SHADOW_END	(_PAGE_END(VA_BITS_MIN))
+#define PAGE_END		(_PAGE_END(VA_BITS_MIN))
 #endif /* CONFIG_KASAN */
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
@@ -162,7 +163,6 @@
 #include <asm/bug.h>
 
 extern u64			vabits_actual;
-#define PAGE_END		(_PAGE_END(vabits_actual))
 
 extern s64			memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
@@ -237,7 +237,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
  * space. Testing the top bit for the start of the region is a
  * sufficient check and avoids having to worry about the tag.
  */
-#define __is_lm_address(addr)	(!(((u64)addr) & BIT(vabits_actual - 1)))
+#define __is_lm_address(addr)	((untagged_addr(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 324f0e0894f6..9090779dd3cd 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -270,7 +270,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = BIT(vabits_actual - 1);
+	const s64 linear_region_size = PAGE_END - _PAGE_OFFSET(vabits_actual);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
-- 
2.17.1



* Re: [PATCH 2/2] arm64: mm: extend linear region for 52-bit VA configurations
  2020-10-06 22:49 ` [PATCH 2/2] arm64: mm: extend linear region for 52-bit VA configurations Ard Biesheuvel
@ 2020-10-07 18:00   ` Ard Biesheuvel
  0 siblings, 0 replies; 4+ messages in thread
From: Ard Biesheuvel @ 2020-10-07 18:00 UTC (permalink / raw)
  To: Linux ARM; +Cc: Catalin Marinas, Anshuman Khandual, Will Deacon, Steve Capper

On Wed, 7 Oct 2020 at 00:50, Ard Biesheuvel <ardb@kernel.org> wrote:
>
> [...]
>
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> [...]
> @@ -237,7 +237,7 @@ static inline const void *__tag_set(const void *addr, u8 tag)
>   * space. Testing the top bit for the start of the region is a
>   * sufficient check and avoids having to worry about the tag.
>   */
> -#define __is_lm_address(addr)  (!(((u64)addr) & BIT(vabits_actual - 1)))
> +#define __is_lm_address(addr)  ((untagged_addr(addr) & ~PAGE_OFFSET) < (PAGE_END - PAGE_OFFSET))

This shouldn't be using untagged_addr(), but just a (u64) cast for addr.



