* [PATCH V3 0/8] 52-bit kernel VAs for arm64
@ 2018-05-10 16:23 Steve Capper
  2018-05-10 16:23 ` [PATCH v3 1/8] arm/arm64: KVM: Formalise end of direct linear map Steve Capper
                   ` (8 more replies)
  0 siblings, 9 replies; 18+ messages in thread
From: Steve Capper @ 2018-05-10 16:23 UTC (permalink / raw)
  To: linux-arm-kernel

This patch series brings 52-bit kernel VA support to arm64; the larger
VA space is used at boot time if the hardware supports it. A new kernel
option, CONFIG_ARM64_VA_BITS_52, is available when configuring with a
64KB PAGE_SIZE (52-bit VAs, introduced by ARMv8.2-LVA, are only
available with a 64KB granule).

Switching between 48-bit and 52-bit does not involve any change to the
number of page table levels; only the number of PGDIR entries increases
when running with a 52-bit kernel VA.
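
As a rough arithmetic sketch (not part of the series, and assuming the
64KB-granule, 3-level configuration where PGDIR_SHIFT is 42):

    #!/bin/sh
    # Illustration only: top-level page table entry counts for 48-bit vs
    # 52-bit kernel VAs with a 64KB granule and 3 levels of table.
    pgdir_shift=42
    echo "48-bit: PTRS_PER_PGD = $(( 1 << (48 - pgdir_shift) ))"   #   64
    echo "52-bit: PTRS_PER_PGD = $(( 1 << (52 - pgdir_shift) ))"   # 1024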

In order to allow the kernel to switch between VA spaces at boot time, we
need to re-arrange the current kernel VA space. In particular, the KASAN
end address needs to be valid for both 48-bit and 52-bit VA spaces, meaning
we need to flip the kernel VA space s.t. the KASAN end address is high and
the direct linear mapping is low.

This patch series applies to 4.17-rc4.

Changes in V3:
 * VA_BITS is now kept constant (meaning the maximum VA space size),
 * VA_BITS_MIN refers to the minimum size of the VA space, whilst
   VA_BITS_ACTUAL refers to the runtime size,
 * Code to ensure PLT veneers can address the full 52-bit space added
   (code from Ard, I've put it into a patch).

Changes in V2:
 * Only the kernel VA space is flipped; the order of modules, kernel
   image etc. is now retained,
 * 4.15-rc4 is used as a base as it includes a fix from V1 that has
   already been merged,
 * The HASLR patch series is used as a base, meaning HYP VA fixes are
   no longer required.


Ard Biesheuvel (1):
  arm64: module-plts: Extend veneer to address 52-bit VAs

Steve Capper (7):
  arm/arm64: KVM: Formalise end of direct linear map
  arm64: mm: Flip kernel VA space
  arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's
  arm64: dump: Make kernel page table dumper dynamic again
  arm64: mm: Make VA space size variable
  arm64: mm: Add 48/52-bit kernel VA support

 Documentation/arm64/kasan-offsets.sh   | 20 ++++++++++++
 arch/arm/include/asm/memory.h          |  1 +
 arch/arm64/Kconfig                     | 22 +++++++++++++
 arch/arm64/Makefile                    |  9 ------
 arch/arm64/include/asm/efi.h           |  4 +--
 arch/arm64/include/asm/kasan.h         | 11 +++----
 arch/arm64/include/asm/memory.h        | 30 ++++++++++++------
 arch/arm64/include/asm/mmu_context.h   |  2 +-
 arch/arm64/include/asm/module.h        | 13 +++++++-
 arch/arm64/include/asm/pgtable-hwdef.h |  1 +
 arch/arm64/include/asm/pgtable.h       |  4 +--
 arch/arm64/include/asm/processor.h     |  2 +-
 arch/arm64/kernel/head.S               |  6 ++--
 arch/arm64/kernel/kaslr.c              |  6 ++--
 arch/arm64/kernel/machine_kexec.c      |  2 +-
 arch/arm64/kernel/module-plts.c        | 12 +++++++
 arch/arm64/kvm/va_layout.c             | 14 ++++----
 arch/arm64/mm/dump.c                   | 58 +++++++++++++++++++++++++++-------
 arch/arm64/mm/fault.c                  |  4 +--
 arch/arm64/mm/init.c                   | 14 ++++----
 arch/arm64/mm/kasan_init.c             |  9 +++---
 arch/arm64/mm/mmu.c                    | 17 ++++++----
 arch/arm64/mm/proc.S                   | 41 ++++++++++++++++++++++++
 virt/kvm/arm/mmu.c                     |  4 +--
 24 files changed, 225 insertions(+), 81 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

-- 
2.11.0

* [PATCH v3 1/8] arm/arm64: KVM: Formalise end of direct linear map
  2018-05-10 16:23 [PATCH V3 0/8] 52-bit kernel VAs for arm64 Steve Capper
@ 2018-05-10 16:23 ` Steve Capper
  2018-05-10 17:11   ` Marc Zyngier
  2018-05-10 16:23 ` [PATCH v3 2/8] arm64: mm: Flip kernel VA space Steve Capper
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Steve Capper @ 2018-05-10 16:23 UTC (permalink / raw)
  To: linux-arm-kernel

We assume that the direct linear map ends at ~0 in the KVM HYP map
intersection checking code. This assumption will become invalid later on
for arm64 when the address space of the kernel is re-arranged.

This patch introduces a new constant, PAGE_OFFSET_END, for both arm and
arm64 and defines it to be ~0UL.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm/include/asm/memory.h   | 1 +
 arch/arm64/include/asm/memory.h | 1 +
 virt/kvm/arm/mmu.c              | 4 ++--
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index ed8fd0d19a3e..45c211fd50da 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -24,6 +24,7 @@
 
 /* PAGE_OFFSET - the virtual address of the start of the kernel image */
 #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
+#define PAGE_OFFSET_END		(~0UL)
 
 #ifdef CONFIG_MMU
 
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 49d99214f43c..c5617cbbf1ff 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -61,6 +61,7 @@
 	(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << (VA_BITS - 1)) + 1)
+#define PAGE_OFFSET_END		(~0UL)
 #define KIMAGE_VADDR		(MODULES_END)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(VA_START + KASAN_SHADOW_SIZE)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 7f6a944db23d..22af347d65f1 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1927,10 +1927,10 @@ int kvm_mmu_init(void)
 	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
 	kvm_debug("HYP VA range: %lx:%lx\n",
 		  kern_hyp_va(PAGE_OFFSET),
-		  kern_hyp_va((unsigned long)high_memory - 1));
+		  kern_hyp_va(PAGE_OFFSET_END));
 
 	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
-	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
+	    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&
 	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
 		/*
 		 * The idmap page is intersecting with the VA space,
-- 
2.11.0

* [PATCH v3 2/8] arm64: mm: Flip kernel VA space
  2018-05-10 16:23 [PATCH V3 0/8] 52-bit kernel VAs for arm64 Steve Capper
  2018-05-10 16:23 ` [PATCH v3 1/8] arm/arm64: KVM: Formalise end of direct linear map Steve Capper
@ 2018-05-10 16:23 ` Steve Capper
  2018-05-10 16:23 ` [PATCH v3 3/8] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Steve Capper @ 2018-05-10 16:23 UTC (permalink / raw)
  To: linux-arm-kernel

Put the direct linear map in the lower addresses of the kernel VA range
and everything else in the higher ranges.

This allows us to make room for an inline KASAN shadow that operates
under both 48-bit and 52-bit kernel VA sizes. For example, with a 52-bit
VA, if KASAN_SHADOW_END < 0xFFF8000000000000 (i.e. it sits in the lower
half of the kernel VA range), it would lie below the start of the
minimum 48-bit kernel VA space at 0xFFFF000000000000 and so be unusable
for a 48-bit configuration.
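
As a rough illustration (not part of the patch; the values follow the
PAGE_OFFSET and VA_START definitions introduced here, using the same
32-bit arithmetic trick as the Makefile, so only the top 32 bits are
computed and the low 32 bits are all zero):

    #!/bin/sh
    # Flipped layout constants for a 48-bit VA space and for the 52-bit VA
    # space added later in the series.
    for va in 48 52; do
        printf 'VA_BITS=%d: PAGE_OFFSET = 0x%08x00000000, VA_START = 0x%08x00000000\n' \
            "$va" \
            $(( 0xffffffff - (1 << (va - 32)) + 1 )) \
            $(( 0xffffffff - (1 << (va - 1 - 32)) + 1 ))
    done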

We need to adjust:
 *) KASAN shadow region placement logic,
 *) KASAN_SHADOW_OFFSET computation logic,
 *) virt_to_phys, phys_to_virt checks,
 *) page table dumper.

These are all small changes that need to take place atomically, so they
are bundled into this commit.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Makefile              |  2 +-
 arch/arm64/include/asm/memory.h  | 10 +++++-----
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/arm64/mm/dump.c             |  8 ++++----
 arch/arm64/mm/init.c             |  9 +--------
 arch/arm64/mm/kasan_init.c       |  4 ++--
 arch/arm64/mm/mmu.c              |  4 ++--
 7 files changed, 16 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 87f7d2f9f17c..2785f34aa790 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -101,7 +101,7 @@ endif
 # in 32-bit arithmetic
 KASAN_SHADOW_SCALE_SHIFT := 3
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
+	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
 	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
 	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
 
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c5617cbbf1ff..f0478617db32 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -58,15 +58,15 @@
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << (VA_BITS - 1)) + 1)
-#define PAGE_OFFSET_END		(~0UL)
+#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
+	(UL(1) << VA_BITS) + 1)
+#define PAGE_OFFSET_END		(VA_START)
 #define KIMAGE_VADDR		(MODULES_END)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(VA_START + KASAN_SHADOW_SIZE)
 #define MODULES_VSIZE		(SZ_128M)
-#define VMEMMAP_START		(PAGE_OFFSET - VMEMMAP_SIZE)
+#define VMEMMAP_START		(-VMEMMAP_SIZE)
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
@@ -218,7 +218,7 @@ static inline unsigned long kaslr_offset(void)
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 7c4c8f318ba9..31e26f3ab078 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -31,7 +31,7 @@
  *	and fixed mappings
  */
 #define VMALLOC_START		(MODULES_END)
-#define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
+#define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
 
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 65dfc8571bf8..76e8857b2baf 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -30,6 +30,8 @@
 #include <asm/ptdump.h>
 
 static const struct addr_marker address_markers[] = {
+	{ PAGE_OFFSET,			"Linear Mapping start" },
+	{ VA_START,			"Linear Mapping end" },
 #ifdef CONFIG_KASAN
 	{ KASAN_SHADOW_START,		"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
@@ -43,10 +45,8 @@ static const struct addr_marker address_markers[] = {
 	{ PCI_IO_START,			"PCI I/O start" },
 	{ PCI_IO_END,			"PCI I/O end" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-	{ VMEMMAP_START,		"vmemmap start" },
-	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
+	{ VMEMMAP_START,		"vmemmap" },
 #endif
-	{ PAGE_OFFSET,			"Linear Mapping" },
 	{ -1,				NULL },
 };
 
@@ -381,7 +381,7 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= VA_START,
+	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 9f3c47acf8ff..efb7e860f99f 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -361,7 +361,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+	const s64 linear_region_size = BIT(VA_BITS - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -370,13 +370,6 @@ void __init arm64_memblock_init(void)
 	memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
 
 	/*
-	 * Ensure that the linear region takes up exactly half of the kernel
-	 * virtual address space. This way, we can distinguish a linear address
-	 * from a kernel/module/vmalloc address by testing a single bit.
-	 */
-	BUILD_BUG_ON(linear_region_size != BIT(VA_BITS - 1));
-
-	/*
 	 * Select a suitable value for the base of physical memory.
 	 */
 	memstart_addr = round_down(memblock_start_of_DRAM(),
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 12145874c02b..7571e3e6e0f0 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -206,10 +206,10 @@ void __init kasan_init(void)
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
 			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
-	kasan_populate_zero_shadow((void *)KASAN_SHADOW_START,
+	kasan_populate_zero_shadow(kasan_mem_to_shadow((void *) VA_START),
 				   (void *)mod_shadow_start);
 	kasan_populate_zero_shadow((void *)kimg_shadow_end,
-				   kasan_mem_to_shadow((void *)PAGE_OFFSET));
+				   (void *)KASAN_SHADOW_END);
 
 	if (kimg_shadow_start > mod_shadow_end)
 		kasan_populate_zero_shadow((void *)mod_shadow_end,
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 2dbb2c9f1ec1..12ce780fcef4 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -368,7 +368,7 @@ static phys_addr_t pgd_pgtable_alloc(void)
 static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 				  phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not creating mapping for %pa@0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
@@ -395,7 +395,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not updating mapping for %pa@0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
-- 
2.11.0

* [PATCH v3 3/8] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  2018-05-10 16:23 [PATCH V3 0/8] 52-bit kernel VAs for arm64 Steve Capper
  2018-05-10 16:23 ` [PATCH v3 1/8] arm/arm64: KVM: Formalise end of direct linear map Steve Capper
  2018-05-10 16:23 ` [PATCH v3 2/8] arm64: mm: Flip kernel VA space Steve Capper
@ 2018-05-10 16:23 ` Steve Capper
  2018-05-10 16:23 ` [PATCH v3 4/8] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's Steve Capper
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Steve Capper @ 2018-05-10 16:23 UTC (permalink / raw)
  To: linux-arm-kernel

KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
line argument and affects the codegen of the inline address sanitiser.

Essentially, for an example memory access:
    *ptr1 = val;
The compiler will insert logic similar to the below:
    shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET);
    if (somethingWrong(shadowValue))
        flagAnError();

This code sequence is inserted into many places, thus
KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
text.

If we want to run a single kernel binary with multiple address spaces,
then we need to do this with KASAN_SHADOW_OFFSET fixed.

Thankfully, due to the way KASAN_SHADOW_OFFSET is used to provide
shadow addresses, we know that the end of the shadow region is constant
w.r.t. VA space size:
    KASAN_SHADOW_END = (~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

This means that if we increase the size of the VA space, the start of
the KASAN region expands into lower addresses whilst the end of the
KASAN region is fixed.
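
As a rough check (not part of the patch; the offset is the 48-bit value
from the Kconfig hunk below, which later in the series is reused
unchanged for 52-bit):

    #!/bin/sh
    # With KASAN_SHADOW_SCALE_SHIFT = 3 and KASAN_SHADOW_OFFSET =
    # 0xdfffa00000000000, the shadow end stays fixed while the shadow start
    # moves down as the VA space grows (32-bit arithmetic, low 32 bits zero).
    offset_hi=0xdfffa000
    end_hi=$(( (1 << (64 - 3 - 32)) + offset_hi ))
    start48_hi=$(( end_hi - (1 << (48 - 3 - 32)) ))
    start52_hi=$(( end_hi - (1 << (52 - 3 - 32)) ))
    printf 'KASAN_SHADOW_END            = 0x%08x00000000\n' $end_hi
    printf 'KASAN_SHADOW_START (48-bit) = 0x%08x00000000\n' $start48_hi
    printf 'KASAN_SHADOW_START (52-bit) = 0x%08x00000000\n' $start52_hi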

Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
build scripts, with the VA size used as a parameter. (There are build
time checks in the C code too to ensure that the expected values are
being derived.) It is sufficient, and indeed a simplification, to remove
the build scripts (and build time checks) entirely and instead provide
KASAN_SHADOW_OFFSET values directly.

This patch removes the logic to compute KASAN_SHADOW_OFFSET in the
arm64 Makefile, and instead we adopt the approach used by x86 to supply
offset values via Kconfig. To help debug/develop future VA space
changes, the Makefile logic has been preserved in a script file in the
arm64 Documentation folder.

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

Changed in V3: rebased to include KASAN_SHADOW_SCALE_SHIFT, wording
tidied up.
---
 Documentation/arm64/kasan-offsets.sh | 20 ++++++++++++++++++++
 arch/arm64/Kconfig                   | 10 ++++++++++
 arch/arm64/Makefile                  |  9 ---------
 arch/arm64/include/asm/kasan.h       | 11 ++++-------
 arch/arm64/include/asm/memory.h      |  7 +++++--
 arch/arm64/mm/kasan_init.c           |  2 --
 6 files changed, 39 insertions(+), 20 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
new file mode 100644
index 000000000000..329353f8489d
--- /dev/null
+++ b/Documentation/arm64/kasan-offsets.sh
@@ -0,0 +1,20 @@
+#!/bin/sh
+
+# Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
+# start address at the mid-point of the kernel VA space
+
+KASAN_SHADOW_SCALE_SHIFT=3
+
+print_kasan_offset () {
+	printf "%02d\t" $1
+	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+			+ (1 << ($1 - 32 - KASAN_SHADOW_SCALE_SHIFT)) \
+			- (1 << (64 - 32 - KASAN_SHADOW_SCALE_SHIFT)) ))
+}
+
+echo KASAN_SHADOW_SCALE_SHIFT = $KASAN_SHADOW_SCALE_SHIFT
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48
+print_kasan_offset 42
+print_kasan_offset 39
+print_kasan_offset 36
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eb2cf4938f6d..4d2bc91d4017 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -279,6 +279,16 @@ config ARCH_PROC_KCORE_TEXT
 config MULTI_IRQ_HANDLER
 	def_bool y
 
+config KASAN_SHADOW_OFFSET
+	hex
+	depends on KASAN
+	default 0xdfffa00000000000 if ARM64_VA_BITS_48
+	default 0xdfffd00000000000 if ARM64_VA_BITS_47
+	default 0xdffffe8000000000 if ARM64_VA_BITS_42
+	default 0xdfffffd000000000 if ARM64_VA_BITS_39
+	default 0xdffffffa00000000 if ARM64_VA_BITS_36
+	default 0xffffffffffffffff
+
 source "init/Kconfig"
 
 source "kernel/Kconfig.freezer"
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2785f34aa790..08ccbfaec3c5 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -96,15 +96,6 @@ else
 TEXT_OFFSET := 0x00080000
 endif
 
-# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
-#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
-# in 32-bit arithmetic
-KASAN_SHADOW_SCALE_SHIFT := 3
-KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
-	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
-	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
-
 export	TEXT_OFFSET GZFLAGS
 
 core-y		+= arch/arm64/kernel/ arch/arm64/mm/
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 8758bb008436..ea397897ae4a 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -14,11 +14,8 @@
  * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
  * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses,
  * where N = (1 << KASAN_SHADOW_SCALE_SHIFT).
- */
-#define KASAN_SHADOW_START      (VA_START)
-#define KASAN_SHADOW_END        (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
-
-/*
+ *
+ * KASAN_SHADOW_OFFSET:
  * This value is used to map an address to the corresponding shadow
  * address by the following formula:
  *     shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
@@ -29,8 +26,8 @@
  *      KASAN_SHADOW_OFFSET = KASAN_SHADOW_END -
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
-#define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1ULL << \
-					(64 - KASAN_SHADOW_SCALE_SHIFT)))
+#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index f0478617db32..aa26958e5034 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -64,7 +64,7 @@
 #define PAGE_OFFSET_END		(VA_START)
 #define KIMAGE_VADDR		(MODULES_END)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
-#define MODULES_VADDR		(VA_START + KASAN_SHADOW_SIZE)
+#define MODULES_VADDR		(KASAN_SHADOW_END)
 #define MODULES_VSIZE		(SZ_128M)
 #define VMEMMAP_START		(-VMEMMAP_SIZE)
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
@@ -83,9 +83,12 @@
 #define KASAN_SHADOW_SCALE_SHIFT 3
 #define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
 #define KASAN_THREAD_SHIFT	1
+#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
+					+ KASAN_SHADOW_OFFSET)
 #else
-#define KASAN_SHADOW_SIZE	(0)
 #define KASAN_THREAD_SHIFT	0
+#define KASAN_SHADOW_END	(VA_START)
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 7571e3e6e0f0..221ddead81ac 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -135,8 +135,6 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
-		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
-- 
2.11.0

* [PATCH v3 4/8] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's
  2018-05-10 16:23 [PATCH V3 0/8] 52-bit kernel VAs for arm64 Steve Capper
                   ` (2 preceding siblings ...)
  2018-05-10 16:23 ` [PATCH v3 3/8] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
@ 2018-05-10 16:23 ` Steve Capper
  2018-05-10 16:23 ` [PATCH v3 5/8] arm64: dump: Make kernel page table dumper dynamic again Steve Capper
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Steve Capper @ 2018-05-10 16:23 UTC (permalink / raw)
  To: linux-arm-kernel

In order to prepare for a variable VA_BITS we need to account for a
variable-sized VMEMMAP, which in turn means the position of the fixed
map is no longer a compile-time constant.

Thus, we need to replace the BUILD_BUG_ON's that check the fixed map
position with BUG_ON's.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/mm/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 12ce780fcef4..197f4110ae2c 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -811,7 +811,7 @@ void __init early_fixmap_init(void)
 	 * The boot-ioremap range spans multiple pmds, for which
 	 * we are not prepared:
 	 */
-	BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
+	BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
 		     != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
 
 	if ((pmdp != fixmap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)))
@@ -879,9 +879,9 @@ void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
 	 * On 4k pages, we'll use section mappings for the FDT so we only
 	 * have to be in the same PUD.
 	 */
-	BUILD_BUG_ON(dt_virt_base % SZ_2M);
+	BUG_ON(dt_virt_base % SZ_2M);
 
-	BUILD_BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
+	BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
 		     __fix_to_virt(FIX_BTMAP_BEGIN) >> SWAPPER_TABLE_SHIFT);
 
 	offset = dt_phys % SWAPPER_BLOCK_SIZE;
-- 
2.11.0

* [PATCH v3 5/8] arm64: dump: Make kernel page table dumper dynamic again
  2018-05-10 16:23 [PATCH V3 0/8] 52-bit kernel VAs for arm64 Steve Capper
                   ` (3 preceding siblings ...)
  2018-05-10 16:23 ` [PATCH v3 4/8] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's Steve Capper
@ 2018-05-10 16:23 ` Steve Capper
  2018-05-10 16:23 ` [PATCH v3 6/8] arm64: module-plts: Extend veneer to address 52-bit VAs Steve Capper
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 18+ messages in thread
From: Steve Capper @ 2018-05-10 16:23 UTC (permalink / raw)
  To: linux-arm-kernel

The kernel page table dumper assumes that the placement of VA regions is
constant and determined at compile time. As we are about to introduce
variable VA logic, we need to be able to determine certain regions at
boot time.

This patch adds logic to the kernel page table dumper s.t. these regions
can be computed at boot time.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/mm/dump.c | 58 ++++++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 47 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 76e8857b2baf..c1030b3ca3c7 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -29,23 +29,45 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/ptdump.h>
 
-static const struct addr_marker address_markers[] = {
-	{ PAGE_OFFSET,			"Linear Mapping start" },
-	{ VA_START,			"Linear Mapping end" },
+
+enum address_markers_idx {
+	PAGE_OFFSET_NR = 0,
+	VA_START_NR,
+#ifdef CONFIG_KASAN
+	KASAN_START_NR,
+	KASAN_END_NR,
+#endif
+	MODULES_START_NR,
+	MODULES_END_NR,
+	VMALLOC_START_NR,
+	VMALLOC_END_NR,
+	FIXADDR_START_NR,
+	FIXADDR_END_NR,
+	PCI_START_NR,
+	PCI_END_NR,
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+	VMEMMAP_START_NR,
+#endif
+	END_NR
+};
+
+static struct addr_marker address_markers[] = {
+	{ 0 /* PAGE_OFFSET */,		"Linear Mapping start" },
+	{ 0 /* VA_START */,		"Linear Mapping end" },
 #ifdef CONFIG_KASAN
-	{ KASAN_SHADOW_START,		"Kasan shadow start" },
+	{ 0 /* KASAN_SHADOW_START */,	"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
 #endif
 	{ MODULES_VADDR,		"Modules start" },
 	{ MODULES_END,			"Modules end" },
 	{ VMALLOC_START,		"vmalloc() Area" },
-	{ VMALLOC_END,			"vmalloc() End" },
-	{ FIXADDR_START,		"Fixmap start" },
-	{ FIXADDR_TOP,			"Fixmap end" },
-	{ PCI_IO_START,			"PCI I/O start" },
-	{ PCI_IO_END,			"PCI I/O end" },
+	{ 0 /* VMALLOC_END */,		"vmalloc() End" },
+	{ 0 /* FIXADDR_START */,	"Fixmap start" },
+	{ 0 /* FIXADDR_TOP */,		"Fixmap end" },
+	{ 0 /* PCI_IO_START */,		"PCI I/O start" },
+	{ 0 /* PCI_IO_END */,		"PCI I/O end" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-	{ VMEMMAP_START,		"vmemmap" },
+	{ 0 /* VMEMMAP_START */,	"vmemmap" },
 #endif
 	{ -1,				NULL },
 };
@@ -381,7 +403,6 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)
@@ -407,6 +428,21 @@ void ptdump_check_wx(void)
 static int ptdump_init(void)
 {
 	ptdump_initialize();
+	kernel_ptdump_info.base_addr = PAGE_OFFSET;
+	address_markers[PAGE_OFFSET_NR].start_address = PAGE_OFFSET;
+	address_markers[VA_START_NR].start_address = VA_START;
+#ifdef CONFIG_KASAN
+	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
+#endif
+	address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
+	address_markers[FIXADDR_START_NR].start_address = FIXADDR_START;
+	address_markers[FIXADDR_END_NR].start_address = FIXADDR_TOP;
+	address_markers[PCI_START_NR].start_address = PCI_IO_START;
+	address_markers[PCI_END_NR].start_address = PCI_IO_END;
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+	address_markers[VMEMMAP_START_NR].start_address = VMEMMAP_START;
+#endif
+
 	return ptdump_debugfs_register(&kernel_ptdump_info,
 					"kernel_page_tables");
 }
-- 
2.11.0

* [PATCH v3 6/8] arm64: module-plts: Extend veneer to address 52-bit VAs
  2018-05-10 16:23 [PATCH V3 0/8] 52-bit kernel VAs for arm64 Steve Capper
                   ` (4 preceding siblings ...)
  2018-05-10 16:23 ` [PATCH v3 5/8] arm64: dump: Make kernel page table dumper dynamic again Steve Capper
@ 2018-05-10 16:23 ` Steve Capper
  2018-05-10 22:01   ` Ard Biesheuvel
  2018-05-10 16:23 ` [PATCH v3 7/8] arm64: mm: Make VA space size variable Steve Capper
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 18+ messages in thread
From: Steve Capper @ 2018-05-10 16:23 UTC (permalink / raw)
  To: linux-arm-kernel

From: Ard Biesheuvel <ard.biesheuvel@linaro.org>

In preparation for 52-bit VA support in the Linux kernel, we extend the
PLT veneer to support 52-bit addresses via an extra movk instruction.

[Steve: code from Ard off-list, changed the #ifdef logic to inequality]
Signed-off-by: Steve Capper <steve.capper@arm.com>

---

New in V3 of the series.

I'm not sure if this is strictly necessary as the VAs of the module
space will fit within 48 bits of addressing even when a 52-bit VA space
is enabled. However, this may act to future-proof the 52-bit VA support
should any future adjustments be made to the VA space.
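
As a rough illustration of why the extra movk is needed (the target
address below is made up; the field values follow the masks used in
get_plt_entry(), split into 32-bit halves to keep the shell arithmetic
simple):

    #!/bin/sh
    # Example target: 0xfff8000012345678 (a 52-bit kernel VA whose bits
    # [63:48] are not 0xffff, so the movn/movk/movk sequence alone cannot
    # materialise it and a fourth move, movk ..., lsl #48, is required).
    val_hi=0xfff80000
    val_lo=0x12345678
    printf 'movn          field: 0x%04x\n' $(( ~val_lo & 0xffff ))
    printf 'movk, lsl #16 field: 0x%04x\n' $(( (val_lo >> 16) & 0xffff ))
    printf 'movk, lsl #32 field: 0x%04x\n' $(( val_hi & 0xffff ))
    printf 'movk, lsl #48 field: 0x%04x\n' $(( (val_hi >> 16) & 0xffff ))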
---
 arch/arm64/include/asm/module.h | 13 ++++++++++++-
 arch/arm64/kernel/module-plts.c | 12 ++++++++++++
 2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
index 97d0ef12e2ff..30b8ca95d19a 100644
--- a/arch/arm64/include/asm/module.h
+++ b/arch/arm64/include/asm/module.h
@@ -59,6 +59,9 @@ struct plt_entry {
 	__le32	mov0;	/* movn	x16, #0x....			*/
 	__le32	mov1;	/* movk	x16, #0x...., lsl #16		*/
 	__le32	mov2;	/* movk	x16, #0x...., lsl #32		*/
+#if CONFIG_ARM64_VA_BITS > 48
+	__le32  mov3;   /* movk x16, #0x...., lsl #48		*/
+#endif
 	__le32	br;	/* br	x16				*/
 };
 
@@ -71,7 +74,8 @@ static inline struct plt_entry get_plt_entry(u64 val)
 	 * +--------+------------+--------+-----------+-------------+---------+
 	 *
 	 * Rd     := 0x10 (x16)
-	 * hw     := 0b00 (no shift), 0b01 (lsl #16), 0b10 (lsl #32)
+	 * hw     := 0b00 (no shift), 0b01 (lsl #16), 0b10 (lsl #32),
+	 *           0b11 (lsl #48)
 	 * opc    := 0b11 (MOVK), 0b00 (MOVN), 0b10 (MOVZ)
 	 * sf     := 1 (64-bit variant)
 	 */
@@ -79,6 +83,9 @@ static inline struct plt_entry get_plt_entry(u64 val)
 		cpu_to_le32(0x92800010 | (((~val      ) & 0xffff)) << 5),
 		cpu_to_le32(0xf2a00010 | ((( val >> 16) & 0xffff)) << 5),
 		cpu_to_le32(0xf2c00010 | ((( val >> 32) & 0xffff)) << 5),
+#if CONFIG_ARM64_VA_BITS > 48
+		cpu_to_le32(0xf2e00010 | ((( val >> 48) & 0xffff)) << 5),
+#endif
 		cpu_to_le32(0xd61f0200)
 	};
 }
@@ -86,6 +93,10 @@ static inline struct plt_entry get_plt_entry(u64 val)
 static inline bool plt_entries_equal(const struct plt_entry *a,
 				     const struct plt_entry *b)
 {
+#if CONFIG_ARM64_VA_BITS > 48
+	if (a->mov3 != b->mov3)
+		return false;
+#endif
 	return a->mov0 == b->mov0 &&
 	       a->mov1 == b->mov1 &&
 	       a->mov2 == b->mov2;
diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
index f0690c2ca3e0..4d5617e09943 100644
--- a/arch/arm64/kernel/module-plts.c
+++ b/arch/arm64/kernel/module-plts.c
@@ -50,6 +50,9 @@ u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
 	struct plt_entry *plt = (struct plt_entry *)pltsec->plt->sh_addr;
 	int i = pltsec->plt_num_entries++;
 	u32 mov0, mov1, mov2, br;
+#if CONFIG_ARM64_VA_BITS > 48
+	u32 mov3;
+#endif
 	int rd;
 
 	if (WARN_ON(pltsec->plt_num_entries > pltsec->plt_max_entries))
@@ -69,6 +72,12 @@ u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
 	mov2 = aarch64_insn_gen_movewide(rd, (u16)(val >> 32), 32,
 					 AARCH64_INSN_VARIANT_64BIT,
 					 AARCH64_INSN_MOVEWIDE_KEEP);
+#if CONFIG_ARM64_VA_BITS > 48
+	mov3 = aarch64_insn_gen_movewide(rd, (u16)(val >> 48), 48,
+					 AARCH64_INSN_VARIANT_64BIT,
+					 AARCH64_INSN_MOVEWIDE_KEEP);
+#endif
+
 	br = aarch64_insn_gen_branch_imm((u64)&plt[i].br, (u64)loc + 4,
 					 AARCH64_INSN_BRANCH_NOLINK);
 
@@ -76,6 +85,9 @@ u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
 			cpu_to_le32(mov0),
 			cpu_to_le32(mov1),
 			cpu_to_le32(mov2),
+#if CONFIG_ARM64_VA_BITS > 48
+			cpu_to_le32(mov3),
+#endif
 			cpu_to_le32(br)
 		};
 
-- 
2.11.0

* [PATCH v3 7/8] arm64: mm: Make VA space size variable
  2018-05-10 16:23 [PATCH V3 0/8] 52-bit kernel VAs for arm64 Steve Capper
                   ` (5 preceding siblings ...)
  2018-05-10 16:23 ` [PATCH v3 6/8] arm64: module-plts: Extend veneer to address 52-bit VAs Steve Capper
@ 2018-05-10 16:23 ` Steve Capper
  2018-05-10 16:23 ` [PATCH v3 8/8] arm64: mm: Add 48/52-bit kernel VA support Steve Capper
  2018-09-07  6:25 ` [PATCH V3 0/8] 52-bit kernel VAs for arm64 Jon Masters
  8 siblings, 0 replies; 18+ messages in thread
From: Steve Capper @ 2018-05-10 16:23 UTC (permalink / raw)
  To: linux-arm-kernel

In order to allow the kernel to select different virtual address sizes
on boot we need to "de-constify" the VA space size. This patch
introduces vabits_actual, a variable which is set at very early boot.

To facilitate this change, the meanings of a few constants change (a
worked example follows):

VA_BITS - The maximum size of the VA space (compile time constant),
VA_BITS_MIN - The minimum size of the VA space (compile time constant),
VA_BITS_ACTUAL - The actual size of the VA space (determined at boot).
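
As a worked example (values taken from the Kconfig changes in this patch
and the next one), with CONFIG_ARM64_VA_BITS_52=y:

    VA_BITS        = 52   (compile time maximum)
    VA_BITS_MIN    = 48   (compile time minimum)
    VA_BITS_ACTUAL = 52 on ARMv8.2-LVA hardware, otherwise 48 (set at boot)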

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

Changed in V3: I have been asked to try to keep VA_BITS itself constant,
and this is my attempt at doing so.

I am happy to extend this patch or go back to making VA_BITS variable,
depending upon feedback.
---
 arch/arm64/Kconfig                     |  4 ++++
 arch/arm64/include/asm/efi.h           |  4 ++--
 arch/arm64/include/asm/kasan.h         |  2 +-
 arch/arm64/include/asm/memory.h        | 22 ++++++++++++++--------
 arch/arm64/include/asm/mmu_context.h   |  2 +-
 arch/arm64/include/asm/pgtable-hwdef.h |  1 +
 arch/arm64/include/asm/pgtable.h       |  2 +-
 arch/arm64/include/asm/processor.h     |  2 +-
 arch/arm64/kernel/head.S               |  6 ++++--
 arch/arm64/kernel/kaslr.c              |  6 +++---
 arch/arm64/kernel/machine_kexec.c      |  2 +-
 arch/arm64/kvm/va_layout.c             | 14 +++++++-------
 arch/arm64/mm/fault.c                  |  4 ++--
 arch/arm64/mm/init.c                   |  7 ++++++-
 arch/arm64/mm/kasan_init.c             |  3 ++-
 arch/arm64/mm/mmu.c                    |  7 +++++--
 arch/arm64/mm/proc.S                   | 28 ++++++++++++++++++++++++++++
 17 files changed, 83 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4d2bc91d4017..f68eeab08904 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -680,6 +680,10 @@ config ARM64_VA_BITS
 	default 47 if ARM64_VA_BITS_47
 	default 48 if ARM64_VA_BITS_48
 
+config ARM64_VA_BITS_MIN
+	int
+	default ARM64_VA_BITS
+
 choice
 	prompt "Physical address space size"
 	default ARM64_PA_BITS_48
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 192d791f1103..dc2e47c53dff 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -68,7 +68,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
- * which is a 1 GB aligned region of size '1UL << (VA_BITS - 1)' that is
+ * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
  * guaranteed to cover the kernel Image.
  *
  * Since the EFI stub is part of the kernel Image, we can relax the
@@ -79,7 +79,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 static inline unsigned long efi_get_max_initrd_addr(unsigned long dram_base,
 						    unsigned long image_addr)
 {
-	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS - 1));
+	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS_MIN - 1));
 }
 
 #define efi_call_early(f, ...)		sys_table_arg->boottime->f(__VA_ARGS__)
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index ea397897ae4a..59ff3ba9bb90 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -27,7 +27,7 @@
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
 #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
-#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS)
+#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS_ACTUAL)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index aa26958e5034..2d96501b8712 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -57,10 +57,6 @@
  * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << (VA_BITS - 1)) + 1)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET_END		(VA_START)
 #define KIMAGE_VADDR		(MODULES_END)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
@@ -70,6 +66,9 @@
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
+#define VA_BITS_MIN		(CONFIG_ARM64_VA_BITS_MIN)
+#define _VA_START(va)		(UL(0xffffffffffffffff) - \
+				(UL(1) << ((va) - 1)) + 1)
 
 #define KERNEL_START      _text
 #define KERNEL_END        _end
@@ -88,7 +87,7 @@
 					+ KASAN_SHADOW_OFFSET)
 #else
 #define KASAN_THREAD_SHIFT	0
-#define KASAN_SHADOW_END	(VA_START)
+#define KASAN_SHADOW_END	(_VA_START(VA_BITS_MIN))
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
@@ -174,10 +173,17 @@
 #endif
 
 #ifndef __ASSEMBLY__
+extern u64			vabits_actual;
+#define VA_BITS_ACTUAL		({vabits_actual;})
+#define VA_START		(_VA_START(VA_BITS_ACTUAL))
+#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
+					(UL(1) << VA_BITS_ACTUAL) + 1)
+#define PAGE_OFFSET_END		(VA_START)
 
 #include <linux/bitops.h>
 #include <linux/mmdebug.h>
 
+extern s64			physvirt_offset;
 extern s64			memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET		({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
@@ -221,9 +227,9 @@ static inline unsigned long kaslr_offset(void)
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS_ACTUAL - 1)))
 
-#define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+#define __lm_to_phys(addr)	(((addr) + physvirt_offset))
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
 
 #define __virt_to_phys_nodebug(x) ({					\
@@ -242,7 +248,7 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
 #define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
 #endif
 
-#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+#define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 39ec0b8a689e..75d8d7a48a3c 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -101,7 +101,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 	isb();
 }
 
-#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS))
+#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS_ACTUAL))
 #define cpu_set_idmap_tcr_t0sz()	__cpu_set_tcr_t0sz(idmap_t0sz)
 
 /*
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index fd208eac9f2a..20f8740c505a 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -81,6 +81,7 @@
 #define PGDIR_SIZE		(_AC(1, UL) << PGDIR_SHIFT)
 #define PGDIR_MASK		(~(PGDIR_SIZE-1))
 #define PTRS_PER_PGD		(1 << (VA_BITS - PGDIR_SHIFT))
+#define PTRS_PER_PGD_ACTUAL	(1 << (VA_BITS_ACTUAL - PGDIR_SHIFT))
 
 /*
  * Section address mask and size definitions.
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 31e26f3ab078..337ea50f2440 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -583,7 +583,7 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
 #define pgd_ERROR(pgd)		__pgd_error(__FILE__, __LINE__, pgd_val(pgd))
 
 /* to find an entry in a page-table-directory */
-#define pgd_index(addr)		(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
+#define pgd_index(addr)		(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD_ACTUAL - 1))
 
 #define pgd_offset_raw(pgd, addr)	((pgd) + pgd_index(addr))
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 767598932549..244d23d1d911 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -19,7 +19,7 @@
 #ifndef __ASM_PROCESSOR_H
 #define __ASM_PROCESSOR_H
 
-#define TASK_SIZE_64		(UL(1) << VA_BITS)
+#define TASK_SIZE_64		(UL(1) << VA_BITS_MIN)
 
 #define KERNEL_DS	UL(-1)
 #define USER_DS		(TASK_SIZE_64 - 1)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b0853069702f..50abb6617a8a 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -119,6 +119,7 @@ ENTRY(stext)
 	adrp	x23, __PHYS_OFFSET
 	and	x23, x23, MIN_KIMG_ALIGN - 1	// KASLR offset, defaults to 0
 	bl	set_cpu_boot_mode_flag
+	bl	__setup_va_constants
 	bl	__create_page_tables
 	/*
 	 * The following calls CPU setup code, see arch/arm64/mm/proc.S for
@@ -253,6 +254,7 @@ ENDPROC(preserve_boot_args)
 	add \rtbl, \tbl, #PAGE_SIZE
 	mov \sv, \rtbl
 	mov \count, #0
+
 	compute_indices \vstart, \vend, #PGDIR_SHIFT, \pgds, \istart, \iend, \count
 	populate_entries \tbl, \rtbl, \istart, \iend, #PMD_TYPE_TABLE, #PAGE_SIZE, \tmp
 	mov \tbl, \sv
@@ -338,7 +340,7 @@ __create_page_tables:
 	dmb	sy
 	dc	ivac, x6		// Invalidate potentially stale cache line
 
-#if (VA_BITS < 48)
+#if (VA_BITS_MIN < 48)
 #define EXTRA_SHIFT	(PGDIR_SHIFT + PAGE_SHIFT - 3)
 #define EXTRA_PTRS	(1 << (PHYS_MASK_SHIFT - EXTRA_SHIFT))
 
@@ -376,7 +378,7 @@ __create_page_tables:
 	adrp	x0, swapper_pg_dir
 	mov_q	x5, KIMAGE_VADDR + TEXT_OFFSET	// compile time __va(_text)
 	add	x5, x5, x23			// add KASLR displacement
-	mov	x4, PTRS_PER_PGD
+	ldr_l	x4, ptrs_per_pgd
 	adrp	x6, _end			// runtime __pa(_end)
 	adrp	x3, _text			// runtime __pa(_text)
 	sub	x6, x6, x3			// _end - _text
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index f0e6ab8abe9c..bb6ab342f80b 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -117,15 +117,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS - 2), and stay clear of
+	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
 	 * the lower and upper quarters to avoid colliding with other
 	 * allocations.
 	 * Even if we could randomize at page granularity for 16k and 64k pages,
 	 * let's always round to 2 MB so we don't interfere with the ability to
 	 * map using contiguous PTEs
 	 */
-	mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
-	offset = BIT(VA_BITS - 3) + (seed & mask);
+	mask = ((1UL << (VA_BITS_MIN - 2)) - 1) & ~(SZ_2M - 1);
+	offset = BIT(VA_BITS_MIN - 3) + (seed & mask);
 
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index f76ea92dff91..732ef5dd1e4c 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -355,7 +355,7 @@ void crash_free_reserved_phys_range(unsigned long begin, unsigned long end)
 
 void arch_crash_save_vmcoreinfo(void)
 {
-	VMCOREINFO_NUMBER(VA_BITS);
+	VMCOREINFO_NUMBER(VA_BITS_ACTUAL);
 	/* Please note VMCOREINFO_NUMBER() uses "%d", not "%x" */
 	vmcoreinfo_append_str("NUMBER(kimage_voffset)=0x%llx\n",
 						kimage_voffset);
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index c712a7376bc1..c9a1debb45bd 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -40,25 +40,25 @@ static void compute_layout(void)
 	int kva_msb;
 
 	/* Where is my RAM region? */
-	hyp_va_msb  = idmap_addr & BIT(VA_BITS - 1);
-	hyp_va_msb ^= BIT(VA_BITS - 1);
+	hyp_va_msb  = idmap_addr & BIT(VA_BITS_ACTUAL - 1);
+	hyp_va_msb ^= BIT(VA_BITS_ACTUAL - 1);
 
 	kva_msb = fls64((u64)phys_to_virt(memblock_start_of_DRAM()) ^
 			(u64)(high_memory - 1));
 
-	if (kva_msb == (VA_BITS - 1)) {
+	if (kva_msb == (VA_BITS_ACTUAL - 1)) {
 		/*
 		 * No space in the address, let's compute the mask so
-		 * that it covers (VA_BITS - 1) bits, and the region
+		 * that it covers (VA_BITS_ACTUAL - 1) bits, and the region
 		 * bit. The tag stays set to zero.
 		 */
-		va_mask  = BIT(VA_BITS - 1) - 1;
+		va_mask  = BIT(VA_BITS_ACTUAL - 1) - 1;
 		va_mask |= hyp_va_msb;
 	} else {
 		/*
 		 * We do have some free bits to insert a random tag.
 		 * Hyp VAs are now created from kernel linear map VAs
-		 * using the following formula (with V == VA_BITS):
+		 * using the following formula (with V == VA_BITS_ACTUAL):
 		 *
 		 *  63 ... V |     V-1    | V-2 .. tag_lsb | tag_lsb - 1 .. 0
 		 *  ---------------------------------------------------------
@@ -66,7 +66,7 @@ static void compute_layout(void)
 		 */
 		tag_lsb = kva_msb;
 		va_mask = GENMASK_ULL(tag_lsb - 1, 0);
-		tag_val = get_random_long() & GENMASK_ULL(VA_BITS - 2, tag_lsb);
+		tag_val = get_random_long() & GENMASK_ULL(VA_BITS_ACTUAL - 2, tag_lsb);
 		tag_val |= hyp_va_msb;
 		tag_val >>= tag_lsb;
 	}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 4165485e8b6e..7990f25d2031 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -151,9 +151,9 @@ void show_pte(unsigned long addr)
 		return;
 	}
 
-	pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp = %p\n",
+	pr_alert("%s pgtable: %luk pages, %llu-bit VAs, pgdp = %p\n",
 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
-		 VA_BITS, mm->pgd);
+		 VA_BITS_ACTUAL, mm->pgd);
 	pgdp = pgd_offset(mm, addr);
 	pgd = READ_ONCE(*pgdp);
 	pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd));
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index efb7e860f99f..ff005ea5c8e8 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -62,6 +62,9 @@
 s64 memstart_addr __ro_after_init = -1;
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
+s64 physvirt_offset __ro_after_init = -1;
+EXPORT_SYMBOL(physvirt_offset);
+
 #ifdef CONFIG_BLK_DEV_INITRD
 static int __init early_initrd(char *p)
 {
@@ -361,7 +364,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = BIT(VA_BITS - 1);
+	const s64 linear_region_size = BIT(VA_BITS_ACTUAL - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -375,6 +378,8 @@ void __init arm64_memblock_init(void)
 	memstart_addr = round_down(memblock_start_of_DRAM(),
 				   ARM64_MEMSTART_ALIGN);
 
+	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 221ddead81ac..452b36fbf2b0 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -135,7 +135,8 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
 			   true);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 197f4110ae2c..e5834492dbcc 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -49,8 +49,11 @@
 #define NO_BLOCK_MAPPINGS	BIT(0)
 #define NO_CONT_MAPPINGS	BIT(1)
 
-u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
+u64 idmap_t0sz __ro_after_init;
+u64 ptrs_per_pgd __ro_after_init;
+u64 vabits_actual __ro_after_init;
+EXPORT_SYMBOL(vabits_actual);
 
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
@@ -668,7 +671,7 @@ int kern_addr_valid(unsigned long addr)
 	pmd_t *pmdp, pmd;
 	pte_t *ptep, pte;
 
-	if ((((long)addr) >> VA_BITS) != -1UL)
+	if ((((long)addr) >> VA_BITS_ACTUAL) != -1UL)
 		return 0;
 
 	pgdp = pgd_offset_k(addr);
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 5f9a73a4452c..5f8d9e452190 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -437,9 +437,18 @@ ENTRY(__cpu_setup)
 	 * Set/prepare TCR and TTBR. We use 512GB (39-bit) address range for
 	 * both user and kernel.
 	 */
+	ldr	x10, =TCR_TxSZ(VA_BITS_MIN) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
+			TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
+			TCR_TBI0 | TCR_A1
+#if (CONFIG_ARM64_VA_BITS != CONFIG_ARM64_VA_BITS_MIN)
+	ldr_l	x9, vabits_actual
+	cmp	x9, #VA_BITS
+	b.ne	1f
 	ldr	x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
 			TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
 			TCR_TBI0 | TCR_A1
+1:
+#endif
 	tcr_set_idmap_t0sz	x10, x9
 
 	/*
@@ -461,3 +470,22 @@ ENTRY(__cpu_setup)
 	msr	tcr_el1, x10
 	ret					// return to head.S
 ENDPROC(__cpu_setup)
+
+ENTRY(__setup_va_constants)
+	mov	x0, #VA_BITS_MIN
+	mov	x1, TCR_T0SZ(VA_BITS_MIN)
+	mov	x2, #1 << (VA_BITS_MIN - PGDIR_SHIFT)
+	str_l	x0, vabits_actual, x5
+	str_l	x1, idmap_t0sz, x5
+	str_l	x2, ptrs_per_pgd, x5
+
+	adr_l	x0, vabits_actual
+	adr_l	x1, idmap_t0sz
+	adr_l	x2, ptrs_per_pgd
+	dmb	sy
+	dc	ivac, x0	// Invalidate potentially stale cache
+	dc	ivac, x1
+	dc	ivac, x2
+
+	ret
+ENDPROC(__setup_va_constants)
-- 
2.11.0

* [PATCH v3 8/8] arm64: mm: Add 48/52-bit kernel VA support
  2018-05-10 16:23 [PATCH V3 0/8] 52-bit kernel VAs for arm64 Steve Capper
                   ` (6 preceding siblings ...)
  2018-05-10 16:23 ` [PATCH v3 7/8] arm64: mm: Make VA space size variable Steve Capper
@ 2018-05-10 16:23 ` Steve Capper
  2018-09-07  6:25 ` [PATCH V3 0/8] 52-bit kernel VAs for arm64 Jon Masters
  8 siblings, 0 replies; 18+ messages in thread
From: Steve Capper @ 2018-05-10 16:23 UTC (permalink / raw)
  To: linux-arm-kernel

Add the option to use 52-bit VA support upon availability at boot. We
use the same KASAN_SHADOW_OFFSET for both 48-bit and 52-bit VA spaces,
as in both cases the start and end of the KASAN shadow region are PGD
aligned.

From ID_AA64MMFR2, we check the LVA field at very early boot and set
the VA size, PTRS_PER_PGD and TCR.T[01]SZ values, which then influence
how the rest of the memory system behaves.
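
As a rough arithmetic note (not part of the patch; on arm64 the
TCR.T[01]SZ fields encode 64 minus the VA size in bits):

    #!/bin/sh
    # T0SZ values for the two VA sizes a 52-bit-capable kernel may select
    # at boot.
    echo "52-bit VAs: T0SZ = $(( 64 - 52 ))"   # 12
    echo "48-bit VAs: T0SZ = $(( 64 - 48 ))"   # 16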

Note that userspace addresses will still be capped at 48 bits. More
patches are needed to deal with the scenario where the user provides a
MAP_FIXED hint and a high address to mmap.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Kconfig   | 10 +++++++++-
 arch/arm64/mm/proc.S | 13 +++++++++++++
 2 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f68eeab08904..6fe0c2976f0d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -266,6 +266,7 @@ config PGTABLE_LEVELS
 	default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
 	default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
 	default 3 if ARM64_64K_PAGES && ARM64_VA_BITS_48
+	default 3 if ARM64_64K_PAGES && ARM64_VA_BITS_52
 	default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
@@ -282,6 +283,7 @@ config MULTI_IRQ_HANDLER
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
+	default 0xdfffa00000000000 if ARM64_VA_BITS_52
 	default 0xdfffa00000000000 if ARM64_VA_BITS_48
 	default 0xdfffd00000000000 if ARM64_VA_BITS_47
 	default 0xdffffe8000000000 if ARM64_VA_BITS_42
@@ -670,6 +672,10 @@ config ARM64_VA_BITS_47
 config ARM64_VA_BITS_48
 	bool "48-bit"
 
+config ARM64_VA_BITS_52
+	bool "52-bit (ARMv8.2) (48 if not in hardware)"
+	depends on ARM64_64K_PAGES
+
 endchoice
 
 config ARM64_VA_BITS
@@ -679,10 +685,12 @@ config ARM64_VA_BITS
 	default 42 if ARM64_VA_BITS_42
 	default 47 if ARM64_VA_BITS_47
 	default 48 if ARM64_VA_BITS_48
+	default 52 if ARM64_VA_BITS_52
 
 config ARM64_VA_BITS_MIN
 	int
-	default ARM64_VA_BITS
+	default ARM64_VA_BITS if !ARM64_VA_BITS_52
+	default 48 if ARM64_VA_BITS_52
 
 choice
 	prompt "Physical address space size"
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 5f8d9e452190..031604502776 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -472,9 +472,22 @@ ENTRY(__cpu_setup)
 ENDPROC(__cpu_setup)
 
 ENTRY(__setup_va_constants)
+#ifdef CONFIG_ARM64_VA_BITS_52
+	mrs_s	x5, SYS_ID_AA64MMFR2_EL1
+	and	x5, x5, #0xf << ID_AA64MMFR2_LVA_SHIFT
+	cmp	x5, #1 << ID_AA64MMFR2_LVA_SHIFT
+	b.ne	1f
+	mov	x0, #VA_BITS
+	mov	x1, TCR_T0SZ(VA_BITS)
+	mov	x2, #1 << (VA_BITS - PGDIR_SHIFT)
+	b	2f
+#endif
+
+1:
 	mov	x0, #VA_BITS_MIN
 	mov	x1, TCR_T0SZ(VA_BITS_MIN)
 	mov	x2, #1 << (VA_BITS_MIN - PGDIR_SHIFT)
+2:
 	str_l	x0, vabits_actual, x5
 	str_l	x1, idmap_t0sz, x5
 	str_l	x2, ptrs_per_pgd, x5
-- 
2.11.0

* [PATCH v3 1/8] arm/arm64: KVM: Formalise end of direct linear map
  2018-05-10 16:23 ` [PATCH v3 1/8] arm/arm64: KVM: Formalise end of direct linear map Steve Capper
@ 2018-05-10 17:11   ` Marc Zyngier
  2018-05-11  9:46     ` Steve Capper
  0 siblings, 1 reply; 18+ messages in thread
From: Marc Zyngier @ 2018-05-10 17:11 UTC (permalink / raw)
  To: linux-arm-kernel

[+Christoffer]

Hi Steve,

On 10/05/18 17:23, Steve Capper wrote:
> We assume that the direct linear map ends at ~0 in the KVM HYP map
> intersection checking code. This assumption will become invalid later on
> for arm64 when the address space of the kernel is re-arranged.
> 
> This patch introduces a new constant PAGE_OFFSET_END for both arm and
> arm64 and defines it to be ~0UL
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>
> ---
>  arch/arm/include/asm/memory.h   | 1 +
>  arch/arm64/include/asm/memory.h | 1 +
>  virt/kvm/arm/mmu.c              | 4 ++--
>  3 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> index ed8fd0d19a3e..45c211fd50da 100644
> --- a/arch/arm/include/asm/memory.h
> +++ b/arch/arm/include/asm/memory.h
> @@ -24,6 +24,7 @@
>  
>  /* PAGE_OFFSET - the virtual address of the start of the kernel image */
>  #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
> +#define PAGE_OFFSET_END		(~0UL)
>  
>  #ifdef CONFIG_MMU
>  
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 49d99214f43c..c5617cbbf1ff 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -61,6 +61,7 @@
>  	(UL(1) << VA_BITS) + 1)
>  #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
>  	(UL(1) << (VA_BITS - 1)) + 1)
> +#define PAGE_OFFSET_END		(~0UL)
>  #define KIMAGE_VADDR		(MODULES_END)
>  #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
>  #define MODULES_VADDR		(VA_START + KASAN_SHADOW_SIZE)
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 7f6a944db23d..22af347d65f1 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -1927,10 +1927,10 @@ int kvm_mmu_init(void)
>  	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
>  	kvm_debug("HYP VA range: %lx:%lx\n",
>  		  kern_hyp_va(PAGE_OFFSET),
> -		  kern_hyp_va((unsigned long)high_memory - 1));
> +		  kern_hyp_va(PAGE_OFFSET_END));
>  
>  	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
> -	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
> +	    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&

This doesn't feel right to me now that we have the HYP randomization
code merged. The way kern_hyp_va works now is only valid for addresses
between VA(memblock_start_of_DRAM()) and high_memory.

I fear that you could trigger the failing condition below as you
evaluate the idmap address against something that is now not a HYP VA.

>  	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
>  		/*
>  		 * The idmap page is intersecting with the VA space,
> 
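
A minimal sketch of the point being made here, with the upper bound kept at
high_memory, the highest linear-map address kern_hyp_va() can legitimately
translate; this is a simplified model, not the kvm_mmu_init() code itself:

#include <stdbool.h>

/*
 * linear_start/linear_end stand for kern_hyp_va(PAGE_OFFSET) and
 * kern_hyp_va((unsigned long)high_memory - 1). Feeding something outside the
 * linear map, such as PAGE_OFFSET_END, through kern_hyp_va() would make this
 * test meaningless, which is the failure mode described above.
 */
static bool hyp_idmap_intersects_linear_range(unsigned long hyp_idmap_start,
					      unsigned long linear_start,
					      unsigned long linear_end)
{
	return hyp_idmap_start >= linear_start &&
	       hyp_idmap_start <  linear_end;
}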

I'd appreciate if you could keep me cc'd on this series.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v3 6/8] arm64: module-plts: Extend veneer to address 52-bit VAs
  2018-05-10 16:23 ` [PATCH v3 6/8] arm64: module-plts: Extend veneer to address 52-bit VAs Steve Capper
@ 2018-05-10 22:01   ` Ard Biesheuvel
  2018-05-11 10:11     ` Steve Capper
  0 siblings, 1 reply; 18+ messages in thread
From: Ard Biesheuvel @ 2018-05-10 22:01 UTC (permalink / raw)
  To: linux-arm-kernel

On 10 May 2018 at 18:23, Steve Capper <steve.capper@arm.com> wrote:
> From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>
> In preparation for 52-bit VA support in the Linux kernel, we extend the
> plts veneer to support 52-bit addresses via an extra movk instruction.
>
> [Steve: code from Ard off-list, changed the #ifdef logic to inequality]
> Signed-off-by: Steve Capper <steve.capper@arm.com>
>
> ---
>
> New in V3 of the series.
>
> I'm not sure if this is strictly necessary as the VAs of the module
> space will fit within 48-bits of addressing even when a 52-bit VA space
> is enabled.

What about the kernel text itself? Is that also guaranteed to have
bits [51:48] of its VAs equal 0xf, even under randomization?

If so, I agree we don't need the patch.
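
For context, a hedged user-space model of what the existing three-instruction
veneer can reach (my own approximation of the quoted movn/movk sequence, not
kernel code): movn leaves bits [63:48] all-ones, so only addresses whose top
16 bits are 0xffff round-trip, and anything else would need the extra movk.

#include <assert.h>
#include <stdint.h>

/* Models: movn x16, #~val; movk x16, #(val>>16), lsl #16; movk x16, #(val>>32), lsl #32 */
static uint64_t veneer48_target(uint64_t val)
{
	uint64_t x16 = ~(uint64_t)(uint16_t)~val;	/* movn: bits [63:16] become 1s */

	x16 = (x16 & ~0xffff0000ULL)     | (((val >> 16) & 0xffff) << 16); /* movk lsl #16 */
	x16 = (x16 & ~0xffff00000000ULL) | (((val >> 32) & 0xffff) << 32); /* movk lsl #32 */
	return x16;
}

int main(void)
{
	uint64_t module_va = 0xffff900012345678ULL;	/* made-up VA in the 48-bit kernel range */

	/* Round-trips because bits [63:48] of the target are all ones. */
	assert(veneer48_target(module_va) == module_va);

	/* A VA with bits [51:48] != 0xf (only possible with 52-bit VAs) does not. */
	assert(veneer48_target(0xfff1000000000000ULL) != 0xfff1000000000000ULL);
	return 0;
}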

> However, this may act to future-proof the 52-bit VA support
> should any future adjustments be made to the VA space.
> ---
>  arch/arm64/include/asm/module.h | 13 ++++++++++++-
>  arch/arm64/kernel/module-plts.c | 12 ++++++++++++
>  2 files changed, 24 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/include/asm/module.h b/arch/arm64/include/asm/module.h
> index 97d0ef12e2ff..30b8ca95d19a 100644
> --- a/arch/arm64/include/asm/module.h
> +++ b/arch/arm64/include/asm/module.h
> @@ -59,6 +59,9 @@ struct plt_entry {
>         __le32  mov0;   /* movn x16, #0x....                    */
>         __le32  mov1;   /* movk x16, #0x...., lsl #16           */
>         __le32  mov2;   /* movk x16, #0x...., lsl #32           */
> +#if CONFIG_ARM64_VA_BITS > 48
> +       __le32  mov3;   /* movk x16, #0x...., lsl #48           */
> +#endif
>         __le32  br;     /* br   x16                             */
>  };
>
> @@ -71,7 +74,8 @@ static inline struct plt_entry get_plt_entry(u64 val)
>          * +--------+------------+--------+-----------+-------------+---------+
>          *
>          * Rd     := 0x10 (x16)
> -        * hw     := 0b00 (no shift), 0b01 (lsl #16), 0b10 (lsl #32)
> +        * hw     := 0b00 (no shift), 0b01 (lsl #16), 0b10 (lsl #32),
> +        *           0b11 (lsl #48)
>          * opc    := 0b11 (MOVK), 0b00 (MOVN), 0b10 (MOVZ)
>          * sf     := 1 (64-bit variant)
>          */
> @@ -79,6 +83,9 @@ static inline struct plt_entry get_plt_entry(u64 val)
>                 cpu_to_le32(0x92800010 | (((~val      ) & 0xffff)) << 5),
>                 cpu_to_le32(0xf2a00010 | ((( val >> 16) & 0xffff)) << 5),
>                 cpu_to_le32(0xf2c00010 | ((( val >> 32) & 0xffff)) << 5),
> +#if CONFIG_ARM64_VA_BITS > 48
> +               cpu_to_le32(0xf2e00010 | ((( val >> 48) & 0xffff)) << 5),
> +#endif
>                 cpu_to_le32(0xd61f0200)
>         };
>  }
> @@ -86,6 +93,10 @@ static inline struct plt_entry get_plt_entry(u64 val)
>  static inline bool plt_entries_equal(const struct plt_entry *a,
>                                      const struct plt_entry *b)
>  {
> +#if CONFIG_ARM64_VA_BITS > 48
> +       if (a->mov3 != b->mov3)
> +               return false;
> +#endif
>         return a->mov0 == b->mov0 &&
>                a->mov1 == b->mov1 &&
>                a->mov2 == b->mov2;
> diff --git a/arch/arm64/kernel/module-plts.c b/arch/arm64/kernel/module-plts.c
> index f0690c2ca3e0..4d5617e09943 100644
> --- a/arch/arm64/kernel/module-plts.c
> +++ b/arch/arm64/kernel/module-plts.c
> @@ -50,6 +50,9 @@ u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
>         struct plt_entry *plt = (struct plt_entry *)pltsec->plt->sh_addr;
>         int i = pltsec->plt_num_entries++;
>         u32 mov0, mov1, mov2, br;
> +#if CONFIG_ARM64_VA_BITS > 48
> +       u32 mov3;
> +#endif
>         int rd;
>
>         if (WARN_ON(pltsec->plt_num_entries > pltsec->plt_max_entries))
> @@ -69,6 +72,12 @@ u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
>         mov2 = aarch64_insn_gen_movewide(rd, (u16)(val >> 32), 32,
>                                          AARCH64_INSN_VARIANT_64BIT,
>                                          AARCH64_INSN_MOVEWIDE_KEEP);
> +#if CONFIG_ARM64_VA_BITS > 48
> +       mov3 = aarch64_insn_gen_movewide(rd, (u16)(val >> 48), 48,
> +                                        AARCH64_INSN_VARIANT_64BIT,
> +                                        AARCH64_INSN_MOVEWIDE_KEEP);
> +#endif
> +
>         br = aarch64_insn_gen_branch_imm((u64)&plt[i].br, (u64)loc + 4,
>                                          AARCH64_INSN_BRANCH_NOLINK);
>
> @@ -76,6 +85,9 @@ u64 module_emit_veneer_for_adrp(struct module *mod, void *loc, u64 val)
>                         cpu_to_le32(mov0),
>                         cpu_to_le32(mov1),
>                         cpu_to_le32(mov2),
> +#if CONFIG_ARM64_VA_BITS > 48
> +                       cpu_to_le32(mov3),
> +#endif
>                         cpu_to_le32(br)
>                 };
>
> --
> 2.11.0
>

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v3 1/8] arm/arm64: KVM: Formalise end of direct linear map
  2018-05-10 17:11   ` Marc Zyngier
@ 2018-05-11  9:46     ` Steve Capper
  2018-05-11 10:00       ` Steve Capper
  0 siblings, 1 reply; 18+ messages in thread
From: Steve Capper @ 2018-05-11  9:46 UTC (permalink / raw)
  To: linux-arm-kernel

On Thu, May 10, 2018 at 06:11:35PM +0100, Marc Zyngier wrote:
> [+Christoffer]
>
> Hi Steve,

Hi Marc,

>
> On 10/05/18 17:23, Steve Capper wrote:
> > We assume that the direct linear map ends at ~0 in the KVM HYP map
> > intersection checking code. This assumption will become invalid later on
> > for arm64 when the address space of the kernel is re-arranged.
> >
> > This patch introduces a new constant PAGE_OFFSET_END for both arm and
> > arm64 and defines it to be ~0UL
> >
> > Signed-off-by: Steve Capper <steve.capper@arm.com>
> > ---
> >  arch/arm/include/asm/memory.h   | 1 +
> >  arch/arm64/include/asm/memory.h | 1 +
> >  virt/kvm/arm/mmu.c              | 4 ++--
> >  3 files changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> > index ed8fd0d19a3e..45c211fd50da 100644
> > --- a/arch/arm/include/asm/memory.h
> > +++ b/arch/arm/include/asm/memory.h
> > @@ -24,6 +24,7 @@
> >
> >  /* PAGE_OFFSET - the virtual address of the start of the kernel image */
> >  #define PAGE_OFFSET                UL(CONFIG_PAGE_OFFSET)
> > +#define PAGE_OFFSET_END            (~0UL)
> >
> >  #ifdef CONFIG_MMU
> >
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 49d99214f43c..c5617cbbf1ff 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -61,6 +61,7 @@
> >     (UL(1) << VA_BITS) + 1)
> >  #define PAGE_OFFSET                (UL(0xffffffffffffffff) - \
> >     (UL(1) << (VA_BITS - 1)) + 1)
> > +#define PAGE_OFFSET_END            (~0UL)
> >  #define KIMAGE_VADDR               (MODULES_END)
> >  #define MODULES_END                (MODULES_VADDR + MODULES_VSIZE)
> >  #define MODULES_VADDR              (VA_START + KASAN_SHADOW_SIZE)
> > diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> > index 7f6a944db23d..22af347d65f1 100644
> > --- a/virt/kvm/arm/mmu.c
> > +++ b/virt/kvm/arm/mmu.c
> > @@ -1927,10 +1927,10 @@ int kvm_mmu_init(void)
> >     kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
> >     kvm_debug("HYP VA range: %lx:%lx\n",
> >               kern_hyp_va(PAGE_OFFSET),
> > -             kern_hyp_va((unsigned long)high_memory - 1));
> > +             kern_hyp_va(PAGE_OFFSET_END));
> >
> >     if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
> > -       hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
> > +       hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&
>
> This doesn't feel right to me now that we have the HYP randomization
> code merged. The way kern_hyp_va works now is only valid for addresses
> between VA(memblock_start_of_DRAM()) and high_memory.
>
> I fear that you could trigger the failing condition below as you
> evaluate the idmap address against something that is now not a HYP VA.
>
> >         hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
> >             /*
> >              * The idmap page is intersecting with the VA space,
> >

Thanks! Yes, this patch is completely spurious; apologies, I think I
made a mistake rebasing my V2 series on top of HASLR (originally I replaced
~0LL with PAGE_OFFSET_END in V1). I will drop this patch from the next version
of the series.

>
> I'd appreciate if you could keep me cc'd on this series.

Apologies, I'll be much more careful with git send-email.

Cheers,
--
Steve
IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v3 1/8] arm/arm64: KVM: Formalise end of direct linear map
  2018-05-11  9:46     ` Steve Capper
@ 2018-05-11 10:00       ` Steve Capper
  0 siblings, 0 replies; 18+ messages in thread
From: Steve Capper @ 2018-05-11 10:00 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, May 11, 2018 at 10:46:28AM +0100, Steve Capper wrote:
> On Thu, May 10, 2018 at 06:11:35PM +0100, Marc Zyngier wrote:
> > [+Christoffer]
> >
> > Hi Steve,
> 
> Hi Marc,
> 
> >
> > On 10/05/18 17:23, Steve Capper wrote:

[...]

> >
> > I'd appreciate if you could keep me cc'd on this series.
> 
> Apologies, I'll be much more careful with git send-email.
> 
> Cheers,
> --
> Steve
> IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.
>

I will also be more careful with my email client; please ignore this
disclaimer.

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v3 6/8] arm64: module-plts: Extend veneer to address 52-bit VAs
  2018-05-10 22:01   ` Ard Biesheuvel
@ 2018-05-11 10:11     ` Steve Capper
  2018-05-14 10:31       ` Ard Biesheuvel
  0 siblings, 1 reply; 18+ messages in thread
From: Steve Capper @ 2018-05-11 10:11 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, May 11, 2018 at 12:01:05AM +0200, Ard Biesheuvel wrote:
> On 10 May 2018 at 18:23, Steve Capper <steve.capper@arm.com> wrote:
> > From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> >
> > In preparation for 52-bit VA support in the Linux kernel, we extend the
> > plts veneer to support 52-bit addresses via an extra movk instruction.
> >
> > [Steve: code from Ard off-list, changed the #ifdef logic to inequality]
> > Signed-off-by: Steve Capper <steve.capper@arm.com>
> >
> > ---
> >
> > New in V3 of the series.
> >
> > I'm not sure if this is strictly necessary as the VAs of the module
> > space will fit within 48-bits of addressing even when a 52-bit VA space
> > is enabled.
> 
> What about the kernel text itself? Is that also guaranteed to have
> bits [51:48] of its VAs equal 0xf, even under randomization?
> 
> If so, I agree we don't need the patch.
> 

Hi Ard,
The kernel modules and text are guaranteed to have addresses greater
than or equal to KASAN_SHADOW_END (same for both 48, 52-bit VAs) or
_VA_START(VA_BITS_MIN) (same for both 48, 52-bit VAs). Also, IIUC, the
KASLR displacement is always non-negative?

So I think we're safe in that modules and kernel text will be 48-bit
addressable in 52-bit configurations.

I'll have a think about a BUILD_BUG to capture any change to the above.
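
For what it's worth, a rough sketch of what such a BUILD_BUG could assert,
written here as a user-space static_assert; the _VA_START() definition mirrors
the one used in this series, and the exact expression is my guess rather than
the final code:

#include <assert.h>

#define _VA_START(va_bits)	(~0ULL - (1ULL << (va_bits)) + 1)
#define VA_BITS_MIN		48

/*
 * Everything at or above _VA_START(VA_BITS_MIN) has bits [63:48] == 0xffff,
 * so modules and kernel text placed there stay reachable by a 48-bit
 * (movn + 2x movk) veneer even in a 52-bit VA configuration.
 */
static_assert(((_VA_START(VA_BITS_MIN) >> 48) & 0xffff) == 0xffff,
	      "module/kernel text VAs must remain 48-bit addressable");

int main(void) { return 0; }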

Cheers,
-- 
Steve

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v3 6/8] arm64: module-plts: Extend veneer to address 52-bit VAs
  2018-05-11 10:11     ` Steve Capper
@ 2018-05-14 10:31       ` Ard Biesheuvel
  0 siblings, 0 replies; 18+ messages in thread
From: Ard Biesheuvel @ 2018-05-14 10:31 UTC (permalink / raw)
  To: linux-arm-kernel

On 11 May 2018 at 12:11, Steve Capper <steve.capper@arm.com> wrote:
> On Fri, May 11, 2018 at 12:01:05AM +0200, Ard Biesheuvel wrote:
>> On 10 May 2018 at 18:23, Steve Capper <steve.capper@arm.com> wrote:
>> > From: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> >
>> > In preparation for 52-bit VA support in the Linux kernel, we extend the
>> > plts veneer to support 52-bit addresses via an extra movk instruction.
>> >
>> > [Steve: code from Ard off-list, changed the #ifdef logic to inequality]
>> > Signed-off-by: Steve Capper <steve.capper@arm.com>
>> >
>> > ---
>> >
>> > New in V3 of the series.
>> >
>> > I'm not sure if this is strictly necessary as the VAs of the module
>> > space will fit within 48-bits of addressing even when a 52-bit VA space
>> > is enabled.
>>
>> What about the kernel text itself? Is that also guaranteed to have
>> bits [51:48] of its VAs equal 0xf, even under randomization?
>>
>> If so, I agree we don't need the patch.
>>
>
> Hi Ard,
> The kernel modules and text are guaranteed to have addresses greater
> than or equal to KASAN_SHADOW_END (same for both 48, 52-bit VAs) or
> _VA_START(VA_BITS_MIN) (same for both 48, 52-bit VAs). Also, IIUC, the
> KASLR displacement is always non-negative?
>

Correct.

> So I think we're safe in that modules and kernel text will be 48-bit
> addressable in 52-bit configurations.
>
> I'll have a think about a BUILD_BUG to capture any change to the above.
>

Yes please

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH V3 0/8] 52-bit kernel VAs for arm64
  2018-05-10 16:23 [PATCH V3 0/8] 52-bit kernel VAs for arm64 Steve Capper
                   ` (7 preceding siblings ...)
  2018-05-10 16:23 ` [PATCH v3 8/8] arm64: mm: Add 48/52-bit kernel VA support Steve Capper
@ 2018-09-07  6:25 ` Jon Masters
  2018-09-07 14:13   ` Steve Capper
  8 siblings, 1 reply; 18+ messages in thread
From: Jon Masters @ 2018-09-07  6:25 UTC (permalink / raw)
  To: linux-arm-kernel

On 05/10/2018 12:23 PM, Steve Capper wrote:

> This patch series brings 52-bit kernel VA support to arm64; if supported
> at boot time. A new kernel option CONFIG_ARM64_VA_BITS_52 is available
> when configured with a 64KB PAGE_SIZE (as on ARMv8.2-LPA, 52-bit VAs are
> only allowed when running with a 64KB granule).

What's the plan with this series?

Jon.

-- 
Computer Architect | Sent from my Fedora powered laptop

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH V3 0/8] 52-bit kernel VAs for arm64
  2018-09-07  6:25 ` [PATCH V3 0/8] 52-bit kernel VAs for arm64 Jon Masters
@ 2018-09-07 14:13   ` Steve Capper
  2018-09-07 19:45     ` Jon Masters
  0 siblings, 1 reply; 18+ messages in thread
From: Steve Capper @ 2018-09-07 14:13 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Sep 07, 2018 at 02:25:49AM -0400, Jon Masters wrote:
> On 05/10/2018 12:23 PM, Steve Capper wrote:
> 
> > This patch series brings 52-bit kernel VA support to arm64; if supported
> > at boot time. A new kernel option CONFIG_ARM64_VA_BITS_52 is available
> > when configured with a 64KB PAGE_SIZE (as on ARMv8.2-LPA, 52-bit VAs are
> > only allowed when running with a 64KB granule).
> 
> What's the plan with this series?
>

Hi Jon,
The series is quite heavyweight in that it affects quite a few subtle
areas of the kernel.

Catalin can better comment, but my understanding is that the plan is to
hold off expanding the kernel VA space until strictly necessary.

Cheers,
-- 
Steve

^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH V3 0/8] 52-bit kernel VAs for arm64
  2018-09-07 14:13   ` Steve Capper
@ 2018-09-07 19:45     ` Jon Masters
  0 siblings, 0 replies; 18+ messages in thread
From: Jon Masters @ 2018-09-07 19:45 UTC (permalink / raw)
  To: linux-arm-kernel

On 09/07/2018 10:13 AM, Steve Capper wrote:
> On Fri, Sep 07, 2018 at 02:25:49AM -0400, Jon Masters wrote:
>> On 05/10/2018 12:23 PM, Steve Capper wrote:
>>
>>> This patch series brings 52-bit kernel VA support to arm64; if supported
>>> at boot time. A new kernel option CONFIG_ARM64_VA_BITS_52 is available
>>> when configured with a 64KB PAGE_SIZE (as on ARMv8.2-LPA, 52-bit VAs are
>>> only allowed when running with a 64KB granule).
>>
>> What's the plan with this series?

> The series is quite heavyweight in that it affects quite a few subtle
> areas of the kernel.

Sure. I've looked through the KASAN changes and the memory map changes.
But this is all the more reason I want to see it upstream due to various
downstream dependencies that would like to receive this support.

> Catalin can better comment, but my understanding is that the plan is to
> hold off expanding the kernel VA space until strictly necessary.

Unless or until this happens, what's the situation for machines with
populated memory beyond 48 bits of physical address space? There's no way to
set up the linear map, so we can't use that memory, correct? I think that's a
problem.
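
Back-of-the-envelope numbers behind that concern (my own arithmetic rather
than anything from the thread), assuming the linear map gets half of the
kernel VA space:

#include <stdio.h>

int main(void)
{
	/* Linear map reach is roughly 2^(VA_BITS - 1) bytes in this layout. */
	unsigned long long reach48 = 1ULL << (48 - 1);
	unsigned long long reach52 = 1ULL << (52 - 1);

	printf("48-bit VAs: linear map reach = %llu TiB\n", reach48 >> 40); /*  128 TiB */
	printf("52-bit VAs: linear map reach = %llu TiB\n", reach52 >> 40); /* 2048 TiB */
	return 0;
}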

Jon.

-- 
Computer Architect | Sent from my Fedora powered laptop

^ permalink raw reply	[flat|nested] 18+ messages in thread

end of thread, other threads:[~2018-09-07 19:45 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-10 16:23 [PATCH V3 0/8] 52-bit kernel VAs for arm64 Steve Capper
2018-05-10 16:23 ` [PATCH v3 1/8] arm/arm64: KVM: Formalise end of direct linear map Steve Capper
2018-05-10 17:11   ` Marc Zyngier
2018-05-11  9:46     ` Steve Capper
2018-05-11 10:00       ` Steve Capper
2018-05-10 16:23 ` [PATCH v3 2/8] arm64: mm: Flip kernel VA space Steve Capper
2018-05-10 16:23 ` [PATCH v3 3/8] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
2018-05-10 16:23 ` [PATCH v3 4/8] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's Steve Capper
2018-05-10 16:23 ` [PATCH v3 5/8] arm64: dump: Make kernel page table dumper dynamic again Steve Capper
2018-05-10 16:23 ` [PATCH v3 6/8] arm64: module-plts: Extend veneer to address 52-bit VAs Steve Capper
2018-05-10 22:01   ` Ard Biesheuvel
2018-05-11 10:11     ` Steve Capper
2018-05-14 10:31       ` Ard Biesheuvel
2018-05-10 16:23 ` [PATCH v3 7/8] arm64: mm: Make VA space size variable Steve Capper
2018-05-10 16:23 ` [PATCH v3 8/8] arm64: mm: Add 48/52-bit kernel VA support Steve Capper
2018-09-07  6:25 ` [PATCH V3 0/8] 52-bit kernel VAs for arm64 Jon Masters
2018-09-07 14:13   ` Steve Capper
2018-09-07 19:45     ` Jon Masters
