Linux-ARM-Kernel Archive on lore.kernel.org
* [PATCH v3 00/10] 52-bit kernel + user VAs
@ 2019-06-12 17:26 Steve Capper
  2019-06-12 17:26 ` [PATCH v3 01/10] arm64: mm: Flip kernel VA space Steve Capper
                   ` (10 more replies)
  0 siblings, 11 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

This patch series adds support for 52-bit kernel VAs using some of the
machinery already introduced by the 52-bit userspace VA code in 5.0.

As 52-bit virtual address support is an optional hardware feature,
software support for 52-bit kernel VAs needs to be deduced at early boot
time. If HW support is not available, the kernel falls back to 48-bit.

A significant proportion of this series focuses on "de-constifying"
VA_BITS related constants.

In order to allow for a KASAN shadow that changes size at boot time, one
must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
start address. Also, it is highly desirable to maintain the same
function addresses in the kernel .text between VA sizes. Both of these
requirements mean we must flip the kernel address space halves such that
the direct linear map occupies the lower addresses.
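
For reference, the flipped layouts work out as follows (addresses
derived from the definitions in this series; shown for illustration):

    48-bit: PAGE_OFFSET = 0xffff000000000000, VA_START = 0xffff800000000000
    52-bit: PAGE_OFFSET = 0xfff0000000000000, VA_START = 0xfff8000000000000

Everything above VA_START stays anchored to the top of the address
space, which is what keeps kernel .text addresses identical across VA
sizes.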

In V3 of this series, the 52-bit user/48-bit kernel option is removed
and we are left with a single 52-bit VA option instead. The offset_ttbr1
conditional logic has been re-worked to directly read a system register
rather than rely on the alternative framework (I couldn't actually see a
hotpath calling offset_ttbr1 and some parts of the early boot relied on
offset_ttbr1 before the alternatives framework was called). Also some
spurious de-constifying changes have been removed.

In V2 of this series (apologies for the long delay from V1), the major
change is that PAGE_OFFSET is retained as a constant. This allows for
much faster virt_to_page computations. This is achieved by expanding the
size of the VMEMMAP region to accommodate a disjoint 52-bit/48-bit
direct linear map. This has been found to work well in my testing, but I
would appreciate feedback on whether this approach needs changing. To aid with
git bisect, this logic is broken down into a few smaller patches.

I am happy to add an extra set of patches to this series to document the
52-bit logic and export the relevant vmcoreinfo information (is
something like "vmcoreinfo_append_str(VA_BITS_ACTUAL)" enough?) or
post a separate series in the future containing this information.

Cheers,
--
Steve

Steve Capper (10):
  arm64: mm: Flip kernel VA space
  arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  arm64: dump: De-constify VA_START and KASAN_SHADOW_START
  arm64: mm: Introduce VA_BITS_MIN
  arm64: mm: Introduce VA_BITS_ACTUAL
  arm64: mm: Logic to make offset_ttbr1 conditional
  arm64: mm: Separate out vmemmap
  arm64: mm: Modify calculation of VMEMMAP_SIZE
  arm64: mm: Tweak PAGE_OFFSET logic
  arm64: mm: Introduce 52-bit Kernel VAs

 Documentation/arm64/kasan-offsets.sh   | 27 ++++++++++++++++
 arch/arm64/Kconfig                     | 36 +++++++++++++++++----
 arch/arm64/Makefile                    |  8 -----
 arch/arm64/include/asm/assembler.h     | 17 ++++++++--
 arch/arm64/include/asm/efi.h           |  4 +--
 arch/arm64/include/asm/kasan.h         | 11 +++----
 arch/arm64/include/asm/memory.h        | 45 ++++++++++++++++++--------
 arch/arm64/include/asm/mmu_context.h   |  4 +--
 arch/arm64/include/asm/pgtable-hwdef.h |  2 +-
 arch/arm64/include/asm/pgtable.h       |  6 ++--
 arch/arm64/include/asm/processor.h     |  2 +-
 arch/arm64/kernel/head.S               | 13 +++++---
 arch/arm64/kernel/hibernate-asm.S      |  8 ++---
 arch/arm64/kernel/hibernate.c          |  2 +-
 arch/arm64/kernel/kaslr.c              |  6 ++--
 arch/arm64/kvm/va_layout.c             | 14 ++++----
 arch/arm64/mm/dump.c                   | 25 ++++++++++----
 arch/arm64/mm/fault.c                  |  4 +--
 arch/arm64/mm/init.c                   | 29 ++++++++++++-----
 arch/arm64/mm/kasan_init.c             | 11 +++----
 arch/arm64/mm/mmu.c                    |  7 ++--
 arch/arm64/mm/proc.S                   |  9 +++---
 22 files changed, 196 insertions(+), 94 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

-- 
2.20.1



* [PATCH v3 01/10] arm64: mm: Flip kernel VA space
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
@ 2019-06-12 17:26 ` Steve Capper
  2019-06-14 12:17   ` Anshuman Khandual
  2019-06-14 13:00   ` Anshuman Khandual
  2019-06-12 17:26 ` [PATCH v3 02/10] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
                   ` (9 subsequent siblings)
  10 siblings, 2 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

Put the direct linear map in the lower addresses of the kernel VA range
and everything else in the higher ranges.

This allows us to make room for an inline KASAN shadow that works
under both 48-bit and 52-bit kernel VA sizes. For example, with a
52-bit VA, if KASAN_SHADOW_END were below 0xFFF8000000000000 (i.e. in
the lower half of the kernel VA range), it would also lie below
0xFFFF000000000000, the start of the minimum 48-bit kernel VA space.

We need to adjust:
 *) KASAN shadow region placement logic,
 *) KASAN_SHADOW_OFFSET computation logic,
 *) virt_to_phys, phys_to_virt checks,
 *) page table dumper.

These are all small changes that need to take place atomically, so they
are bundled into this commit.
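
As an illustration of the resulting top-bit test (a worked example for
a 48-bit VA; the addresses are made up, not part of the patch):

    /* bit (VA_BITS - 1) is clear for linear map addresses: */
    0xffff0000deadbeef & BIT(47) == 0   /* linear map */
    0xffff8000deadbeef & BIT(47) != 0   /* vmalloc/modules/etc. */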

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Makefile              | 2 +-
 arch/arm64/include/asm/memory.h  | 8 ++++----
 arch/arm64/include/asm/pgtable.h | 2 +-
 arch/arm64/kernel/hibernate.c    | 2 +-
 arch/arm64/mm/dump.c             | 8 ++++----
 arch/arm64/mm/init.c             | 9 +--------
 arch/arm64/mm/kasan_init.c       | 6 +++---
 arch/arm64/mm/mmu.c              | 4 ++--
 8 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index b025304bde46..2dad2ae6b181 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -115,7 +115,7 @@ KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
+	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
 	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
 	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
 
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 8ffcf5a512bb..5cd2eb8cb424 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -49,9 +49,9 @@
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << (VA_BITS - 1)) + 1)
+#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
+	(UL(1) << VA_BITS) + 1)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
@@ -59,7 +59,7 @@
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(BPF_JIT_REGION_END)
 #define MODULES_VSIZE		(SZ_128M)
-#define VMEMMAP_START		(PAGE_OFFSET - VMEMMAP_SIZE)
+#define VMEMMAP_START		(-VMEMMAP_SIZE)
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
@@ -238,7 +238,7 @@ extern u64			vabits_user;
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 2c41b04708fe..d0ab784304e9 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -32,7 +32,7 @@
  *	and fixed mappings
  */
 #define VMALLOC_START		(MODULES_END)
-#define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
+#define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
 
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 9859e1178e6b..6ffcc32f35dd 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -497,7 +497,7 @@ int swsusp_arch_resume(void)
 		rc = -ENOMEM;
 		goto out;
 	}
-	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
+	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, VA_START);
 	if (rc)
 		goto out;
 
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 14fe23cd5932..ee4e5bea8944 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -30,6 +30,8 @@
 #include <asm/ptdump.h>
 
 static const struct addr_marker address_markers[] = {
+	{ PAGE_OFFSET,			"Linear Mapping start" },
+	{ VA_START,			"Linear Mapping end" },
 #ifdef CONFIG_KASAN
 	{ KASAN_SHADOW_START,		"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
@@ -43,10 +45,8 @@ static const struct addr_marker address_markers[] = {
 	{ PCI_IO_START,			"PCI I/O start" },
 	{ PCI_IO_END,			"PCI I/O end" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-	{ VMEMMAP_START,		"vmemmap start" },
-	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
+	{ VMEMMAP_START,		"vmemmap" },
 #endif
-	{ PAGE_OFFSET,			"Linear mapping" },
 	{ -1,				NULL },
 };
 
@@ -380,7 +380,7 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= VA_START,
+	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d2adffb81b5d..574ed1d4be19 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -311,7 +311,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+	const s64 linear_region_size = BIT(VA_BITS - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -319,13 +319,6 @@ void __init arm64_memblock_init(void)
 	/* Remove memory above our supported physical address size */
 	memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
 
-	/*
-	 * Ensure that the linear region takes up exactly half of the kernel
-	 * virtual address space. This way, we can distinguish a linear address
-	 * from a kernel/module/vmalloc address by testing a single bit.
-	 */
-	BUILD_BUG_ON(linear_region_size != BIT(VA_BITS - 1));
-
 	/*
 	 * Select a suitable value for the base of physical memory.
 	 */
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 296de39ddee5..8066621052db 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -229,10 +229,10 @@ void __init kasan_init(void)
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
 			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
-	kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
-				    (void *)mod_shadow_start);
+	kasan_populate_early_shadow(kasan_mem_to_shadow((void *) VA_START),
+				   (void *)mod_shadow_start);
 	kasan_populate_early_shadow((void *)kimg_shadow_end,
-				    kasan_mem_to_shadow((void *)PAGE_OFFSET));
+				   (void *)KASAN_SHADOW_END);
 
 	if (kimg_shadow_start > mod_shadow_end)
 		kasan_populate_early_shadow((void *)mod_shadow_end,
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a1bfc4413982..16063ff10c6d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -409,7 +409,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 				  phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
@@ -436,7 +436,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
-- 
2.20.1



* [PATCH v3 02/10] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
  2019-06-12 17:26 ` [PATCH v3 01/10] arm64: mm: Flip kernel VA space Steve Capper
@ 2019-06-12 17:26 ` Steve Capper
  2019-06-12 17:26 ` [PATCH v3 03/10] arm64: dump: De-constify VA_START and KASAN_SHADOW_START Steve Capper
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
line argument and affects the codegen of the inline address sanitiser.

Essentially, for an example memory access:
    *ptr1 = val;
The compiler will insert logic similar to the below:
    shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET);
    if (somethingWrong(shadowValue))
        flagAnError();

This code sequence is inserted into many places, thus
KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
text.

If we want to run a single kernel binary with multiple address spaces,
then we need to do this with KASAN_SHADOW_OFFSET fixed.

Thankfully, due to the way the KASAN_SHADOW_OFFSET is used to provide
shadow addresses we know that the end of the shadow region is constant
w.r.t. VA space size:
    KASAN_SHADOW_END = (~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

This means that if we increase the size of the VA space, the start of
the KASAN region expands into lower addresses whilst the end of the
KASAN region is fixed.
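
As a worked example for generic KASAN (KASAN_SHADOW_SCALE_SHIFT = 3),
using the 48-bit offset value introduced below:

    KASAN_SHADOW_OFFSET    = 0xdfffa00000000000
    KASAN_SHADOW_END       = (1 << 61) + OFFSET = 0xffffa00000000000
    KASAN_SHADOW_START(48) = END - (1 << 45)    = 0xffff800000000000
    KASAN_SHADOW_START(52) = END - (1 << 49)    = 0xfffda00000000000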

Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
build scripts with the VA size used as a parameter. (There are build
time checks in the C code too to ensure that expected values are being
derived.) It is sufficient, and indeed a simplification, to remove
the build scripts (and build-time checks) entirely and instead provide
the KASAN_SHADOW_OFFSET values directly.

This patch removes the logic to compute the KASAN_SHADOW_OFFSET in the
arm64 Makefile, and instead we adopt the approach used by x86 to supply
offset values in Kconfig. To help debug/develop future VA space changes,
the Makefile logic has been preserved in a script file in the arm64
Documentation folder.
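
For example, running the script should reproduce the generic KASAN
defaults added to Kconfig below:

    $ sh Documentation/arm64/kasan-offsets.sh
    KASAN_SHADOW_SCALE_SHIFT = 3
    VABITS	KASAN_SHADOW_OFFSET
    48	0xdfffa00000000000
    47	0xdfffd00000000000
    ...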

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 Documentation/arm64/kasan-offsets.sh | 27 +++++++++++++++++++++++++++
 arch/arm64/Kconfig                   | 15 +++++++++++++++
 arch/arm64/Makefile                  |  8 --------
 arch/arm64/include/asm/kasan.h       | 11 ++++-------
 arch/arm64/include/asm/memory.h      | 12 +++++++++---
 arch/arm64/mm/kasan_init.c           |  2 --
 6 files changed, 55 insertions(+), 20 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
new file mode 100644
index 000000000000..2b7a021db363
--- /dev/null
+++ b/Documentation/arm64/kasan-offsets.sh
@@ -0,0 +1,27 @@
+#!/bin/sh
+
+# Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
+# start address at the mid-point of the kernel VA space
+
+print_kasan_offset () {
+	printf "%02d\t" $1
+	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+			+ (1 << ($1 - 32 - $2)) \
+			- (1 << (64 - 32 - $2)) ))
+}
+
+echo KASAN_SHADOW_SCALE_SHIFT = 3
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 3
+print_kasan_offset 47 3
+print_kasan_offset 42 3
+print_kasan_offset 39 3
+print_kasan_offset 36 3
+echo
+echo KASAN_SHADOW_SCALE_SHIFT = 4
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 4
+print_kasan_offset 47 4
+print_kasan_offset 42 4
+print_kasan_offset 39 4
+print_kasan_offset 36 4
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 697ea0510729..89edcb211990 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -292,6 +292,21 @@ config ARCH_SUPPORTS_UPROBES
 config ARCH_PROC_KCORE_TEXT
 	def_bool y
 
+config KASAN_SHADOW_OFFSET
+	hex
+	depends on KASAN
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
+	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
+	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
+	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
+	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+	default 0xffffffffffffffff
+
 source "arch/arm64/Kconfig.platforms"
 
 menu "Kernel Features"
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2dad2ae6b181..764201c76d77 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -111,14 +111,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 
-# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
-#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
-# in 32-bit arithmetic
-KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
-	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
-	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
-
 export	TEXT_OFFSET GZFLAGS
 
 core-y		+= arch/arm64/kernel/ arch/arm64/mm/
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b52aacd2c526..10d2add842da 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -18,11 +18,8 @@
  * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
  * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses,
  * where N = (1 << KASAN_SHADOW_SCALE_SHIFT).
- */
-#define KASAN_SHADOW_START      (VA_START)
-#define KASAN_SHADOW_END        (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
-
-/*
+ *
+ * KASAN_SHADOW_OFFSET:
  * This value is used to map an address to the corresponding shadow
  * address by the following formula:
  *     shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
@@ -33,8 +30,8 @@
  *      KASAN_SHADOW_OFFSET = KASAN_SHADOW_END -
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
-#define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1ULL << \
-					(64 - KASAN_SHADOW_SCALE_SHIFT)))
+#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 5cd2eb8cb424..13b9cfbbb319 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -53,7 +53,7 @@
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << VA_BITS) + 1)
 #define KIMAGE_VADDR		(MODULES_END)
-#define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
+#define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
 #define BPF_JIT_REGION_END	(BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
@@ -79,11 +79,17 @@
  * significantly, so double the (minimum) stack size when they are in use.
  */
 #ifdef CONFIG_KASAN
-#define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
+#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
+					+ KASAN_SHADOW_OFFSET)
+#ifdef CONFIG_KASAN_EXTRA
+#define KASAN_THREAD_SHIFT	2
+#else
 #define KASAN_THREAD_SHIFT	1
+#endif
 #else
-#define KASAN_SHADOW_SIZE	(0)
 #define KASAN_THREAD_SHIFT	0
+#define KASAN_SHADOW_END	(VA_START)
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 8066621052db..0d3027be7bf8 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -158,8 +158,6 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
-		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
-- 
2.20.1



* [PATCH v3 03/10] arm64: dump: De-constify VA_START and KASAN_SHADOW_START
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
  2019-06-12 17:26 ` [PATCH v3 01/10] arm64: mm: Flip kernel VA space Steve Capper
  2019-06-12 17:26 ` [PATCH v3 02/10] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
@ 2019-06-12 17:26 ` Steve Capper
  2019-06-12 17:26 ` [PATCH v3 04/10] arm64: mm: Introduce VA_BITS_MIN Steve Capper
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

The kernel page table dumper assumes that the placement of VA regions is
constant and determined at compile time. As we are about to introduce
variable VA logic, we need to be able to determine certain regions at
boot time.

Specifically the VA_START and KASAN_SHADOW_START will depend on whether
or not the system is booted with 52-bit kernel VAs.

This patch adds logic to the kernel page table dumper such that these regions
can be computed at boot time.

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

Changed in V3 - simplified the scope of de-constifying to just VA_START
and KASAN_SHADOW_START.
---
 arch/arm64/mm/dump.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index ee4e5bea8944..52c725ff49a2 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -29,11 +29,20 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/ptdump.h>
 
-static const struct addr_marker address_markers[] = {
+
+enum address_markers_idx {
+	PAGE_OFFSET_NR = 0,
+	VA_START_NR,
+#ifdef CONFIG_KASAN
+	KASAN_START_NR,
+#endif
+};
+
+static struct addr_marker address_markers[] = {
 	{ PAGE_OFFSET,			"Linear Mapping start" },
-	{ VA_START,			"Linear Mapping end" },
+	{ 0 /* VA_START */,		"Linear Mapping end" },
 #ifdef CONFIG_KASAN
-	{ KASAN_SHADOW_START,		"Kasan shadow start" },
+	{ 0 /* KASAN_SHADOW_START */,	"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
 #endif
 	{ MODULES_VADDR,		"Modules start" },
@@ -405,6 +414,10 @@ void ptdump_check_wx(void)
 
 static int ptdump_init(void)
 {
+	address_markers[VA_START_NR].start_address = VA_START;
+#ifdef CONFIG_KASAN
+	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
+#endif
 	ptdump_initialize();
 	ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables");
 	return 0;
-- 
2.20.1



* [PATCH v3 04/10] arm64: mm: Introduce VA_BITS_MIN
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
                   ` (2 preceding siblings ...)
  2019-06-12 17:26 ` [PATCH v3 03/10] arm64: dump: De-constify VA_START and KASAN_SHADOW_START Steve Capper
@ 2019-06-12 17:26 ` Steve Capper
  2019-06-12 17:26 ` [PATCH v3 05/10] arm64: mm: Introduce VA_BITS_ACTUAL Steve Capper
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

In order to support 52-bit kernel addresses detectable at boot time, the
kernel needs to know the most conservative VA_BITS value it may have to
fall back to when hardware support for 52-bit VAs is absent.

A new compile time constant VA_BITS_MIN is introduced in this patch and
it is employed in the KASAN end address, KASLR, and EFI stub.

For arm64, if 52-bit VA support is unavailable, the fallback is to 48 bits.

In other words: VA_BITS_MIN = min(48, VA_BITS)
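
For example (the 52-bit case is wired up at the end of this series):

    CONFIG_ARM64_VA_BITS=48  =>  VA_BITS_MIN = 48
    CONFIG_ARM64_VA_BITS=39  =>  VA_BITS_MIN = 39
    CONFIG_ARM64_VA_BITS=52  =>  VA_BITS_MIN = 48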

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Kconfig                 | 4 ++++
 arch/arm64/include/asm/efi.h       | 4 ++--
 arch/arm64/include/asm/memory.h    | 5 ++++-
 arch/arm64/include/asm/processor.h | 2 +-
 arch/arm64/kernel/head.S           | 2 +-
 arch/arm64/kernel/kaslr.c          | 6 +++---
 arch/arm64/mm/kasan_init.c         | 3 ++-
 7 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 89edcb211990..4421e5409bb8 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -792,6 +792,10 @@ config ARM64_VA_BITS
 	default 47 if ARM64_VA_BITS_47
 	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
 
+config ARM64_VA_BITS_MIN
+	int
+	default ARM64_VA_BITS
+
 choice
 	prompt "Physical address space size"
 	default ARM64_PA_BITS_48
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index c9e9a6978e73..a87e78f3f58d 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -79,7 +79,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
- * which is a 1 GB aligned region of size '1UL << (VA_BITS - 1)' that is
+ * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
  * guaranteed to cover the kernel Image.
  *
  * Since the EFI stub is part of the kernel Image, we can relax the
@@ -90,7 +90,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 static inline unsigned long efi_get_max_initrd_addr(unsigned long dram_base,
 						    unsigned long image_addr)
 {
-	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS - 1));
+	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS_MIN - 1));
 }
 
 #define efi_call_early(f, ...)		sys_table_arg->boottime->f(__VA_ARGS__)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 13b9cfbbb319..4814c8e77daf 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -63,6 +63,9 @@
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
+#define VA_BITS_MIN		(CONFIG_ARM64_VA_BITS_MIN)
+#define _VA_START(va)		(UL(0xffffffffffffffff) - \
+				(UL(1) << ((va) - 1)) + 1)
 
 #define KERNEL_START      _text
 #define KERNEL_END        _end
@@ -89,7 +92,7 @@
 #endif
 #else
 #define KASAN_THREAD_SHIFT	0
-#define KASAN_SHADOW_END	(VA_START)
+#define KASAN_SHADOW_END	(_VA_START(VA_BITS_MIN))
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index fcd0e691b1ea..307cd2173813 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -53,7 +53,7 @@
  * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
  */
 
-#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
+#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS_MIN)
 #define TASK_SIZE_64		(UL(1) << vabits_user)
 
 #ifdef CONFIG_COMPAT
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index fcae3f85c6cd..ab68c3fe7a19 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -325,7 +325,7 @@ __create_page_tables:
 	mov	x5, #52
 	cbnz	x6, 1f
 #endif
-	mov	x5, #VA_BITS
+	mov	x5, #VA_BITS_MIN
 1:
 	adr_l	x6, vabits_user
 	str	x5, [x6]
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 06941c1fe418..c341580fc10a 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -119,15 +119,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS - 2), and stay clear of
+	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
 	 * the lower and upper quarters to avoid colliding with other
 	 * allocations.
 	 * Even if we could randomize at page granularity for 16k and 64k pages,
 	 * let's always round to 2 MB so we don't interfere with the ability to
 	 * map using contiguous PTEs
 	 */
-	mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
-	offset = BIT(VA_BITS - 3) + (seed & mask);
+	mask = ((1UL << (VA_BITS_MIN - 2)) - 1) & ~(SZ_2M - 1);
+	offset = BIT(VA_BITS_MIN - 3) + (seed & mask);
 
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 0d3027be7bf8..0e0b69af0aaa 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -158,7 +158,8 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
 			   true);
-- 
2.20.1



* [PATCH v3 05/10] arm64: mm: Introduce VA_BITS_ACTUAL
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
                   ` (3 preceding siblings ...)
  2019-06-12 17:26 ` [PATCH v3 04/10] arm64: mm: Introduce VA_BITS_MIN Steve Capper
@ 2019-06-12 17:26 ` Steve Capper
  2019-06-12 17:26 ` [PATCH v3 06/10] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

In order to support 52-bit kernel addresses detectable at boot time, one
needs to know the actual VA_BITS detected. A new variable VA_BITS_ACTUAL
is introduced in this commit and employed for the KVM hypervisor layout,
KASAN, fault handling and phys-to/from-virt translation where there
would normally be compile-time constants.

In order to maintain performance in phys_to_virt, another variable
physvirt_offset is introduced.
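
The translations then reduce to a single addition or subtraction; a
sketch of what the memory.h hunks below amount to:

    /* physvirt_offset = PHYS_OFFSET - PAGE_OFFSET, set once at boot */
    __lm_to_phys(addr) == (addr) + physvirt_offset
    __phys_to_virt(x)  == (x) - physvirt_offset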

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/kasan.h       |  2 +-
 arch/arm64/include/asm/memory.h      | 12 +++++++-----
 arch/arm64/include/asm/mmu_context.h |  2 +-
 arch/arm64/kernel/head.S             |  5 +++++
 arch/arm64/kvm/va_layout.c           | 14 +++++++-------
 arch/arm64/mm/fault.c                |  4 ++--
 arch/arm64/mm/init.c                 |  7 ++++++-
 arch/arm64/mm/mmu.c                  |  3 +++
 8 files changed, 32 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 10d2add842da..ff991dc86ae1 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -31,7 +31,7 @@
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
 #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
-#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS)
+#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS_ACTUAL)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 4814c8e77daf..1c37b2620bb4 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -48,8 +48,6 @@
  * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << (VA_BITS - 1)) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << VA_BITS) + 1)
 #define KIMAGE_VADDR		(MODULES_END)
@@ -177,10 +175,14 @@
 #endif
 
 #ifndef __ASSEMBLY__
+extern u64			vabits_actual;
+#define VA_BITS_ACTUAL		({vabits_actual;})
+#define VA_START		(_VA_START(VA_BITS_ACTUAL))
 
 #include <linux/bitops.h>
 #include <linux/mmdebug.h>
 
+extern s64			physvirt_offset;
 extern s64			memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET		({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
@@ -247,9 +249,9 @@ extern u64			vabits_user;
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS_ACTUAL - 1)))
 
-#define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+#define __lm_to_phys(addr)	(((addr) + physvirt_offset))
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
 
 #define __virt_to_phys_nodebug(x) ({					\
@@ -268,7 +270,7 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
 #define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
 #endif
 
-#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+#define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 2da3e478fd8f..133ecb65b602 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -106,7 +106,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 	isb();
 }
 
-#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS))
+#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS_ACTUAL))
 #define cpu_set_idmap_tcr_t0sz()	__cpu_set_tcr_t0sz(idmap_t0sz)
 
 /*
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index ab68c3fe7a19..b3335e639b6d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -332,6 +332,11 @@ __create_page_tables:
 	dmb	sy
 	dc	ivac, x6		// Invalidate potentially stale cache line
 
+	adr_l	x6, vabits_actual
+	str	x5, [x6]
+	dmb	sy
+	dc	ivac, x6		// Invalidate potentially stale cache line
+
 	/*
 	 * VA_BITS may be too small to allow for an ID mapping to be created
 	 * that covers system RAM if that is located sufficiently high in the
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index c712a7376bc1..c9a1debb45bd 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -40,25 +40,25 @@ static void compute_layout(void)
 	int kva_msb;
 
 	/* Where is my RAM region? */
-	hyp_va_msb  = idmap_addr & BIT(VA_BITS - 1);
-	hyp_va_msb ^= BIT(VA_BITS - 1);
+	hyp_va_msb  = idmap_addr & BIT(VA_BITS_ACTUAL - 1);
+	hyp_va_msb ^= BIT(VA_BITS_ACTUAL - 1);
 
 	kva_msb = fls64((u64)phys_to_virt(memblock_start_of_DRAM()) ^
 			(u64)(high_memory - 1));
 
-	if (kva_msb == (VA_BITS - 1)) {
+	if (kva_msb == (VA_BITS_ACTUAL - 1)) {
 		/*
 		 * No space in the address, let's compute the mask so
-		 * that it covers (VA_BITS - 1) bits, and the region
+		 * that it covers (VA_BITS_ACTUAL - 1) bits, and the region
 		 * bit. The tag stays set to zero.
 		 */
-		va_mask  = BIT(VA_BITS - 1) - 1;
+		va_mask  = BIT(VA_BITS_ACTUAL - 1) - 1;
 		va_mask |= hyp_va_msb;
 	} else {
 		/*
 		 * We do have some free bits to insert a random tag.
 		 * Hyp VAs are now created from kernel linear map VAs
-		 * using the following formula (with V == VA_BITS):
+		 * using the following formula (with V == VA_BITS_ACTUAL):
 		 *
 		 *  63 ... V |     V-1    | V-2 .. tag_lsb | tag_lsb - 1 .. 0
 		 *  ---------------------------------------------------------
@@ -66,7 +66,7 @@ static void compute_layout(void)
 		 */
 		tag_lsb = kva_msb;
 		va_mask = GENMASK_ULL(tag_lsb - 1, 0);
-		tag_val = get_random_long() & GENMASK_ULL(VA_BITS - 2, tag_lsb);
+		tag_val = get_random_long() & GENMASK_ULL(VA_BITS_ACTUAL - 2, tag_lsb);
 		tag_val |= hyp_va_msb;
 		tag_val >>= tag_lsb;
 	}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index a30818ed9c60..a17aa822a81c 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -171,9 +171,9 @@ static void show_pte(unsigned long addr)
 		return;
 	}
 
-	pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp=%016lx\n",
+	pr_alert("%s pgtable: %luk pages, %llu-bit VAs, pgdp=%016lx\n",
 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
-		 mm == &init_mm ? VA_BITS : (int)vabits_user,
+		 mm == &init_mm ? VA_BITS_ACTUAL : (int)vabits_user,
 		 (unsigned long)virt_to_phys(mm->pgd));
 	pgdp = pgd_offset(mm, addr);
 	pgd = READ_ONCE(*pgdp);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 574ed1d4be19..d89341df2d0e 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -61,6 +61,9 @@
 s64 memstart_addr __ro_after_init = -1;
 EXPORT_SYMBOL(memstart_addr);
 
+s64 physvirt_offset __ro_after_init;
+EXPORT_SYMBOL(physvirt_offset);
+
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
@@ -311,7 +314,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = BIT(VA_BITS - 1);
+	const s64 linear_region_size = BIT(VA_BITS_ACTUAL - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -325,6 +328,8 @@ void __init arm64_memblock_init(void)
 	memstart_addr = round_down(memblock_start_of_DRAM(),
 				   ARM64_MEMSTART_ALIGN);
 
+	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 16063ff10c6d..dcbb6593dd90 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -54,6 +54,9 @@ u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 u64 vabits_user __ro_after_init;
 EXPORT_SYMBOL(vabits_user);
 
+u64 __section(".mmuoff.data.write") vabits_actual;
+EXPORT_SYMBOL(vabits_actual);
+
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
 
-- 
2.20.1



* [PATCH v3 06/10] arm64: mm: Logic to make offset_ttbr1 conditional
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
                   ` (4 preceding siblings ...)
  2019-06-12 17:26 ` [PATCH v3 05/10] arm64: mm: Introduce VA_BITS_ACTUAL Steve Capper
@ 2019-06-12 17:26 ` Steve Capper
  2019-06-12 17:26 ` [PATCH v3 07/10] arm64: mm: Separate out vmemmap Steve Capper
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

When running with a 52-bit userspace VA and a 48-bit kernel VA we offset
ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to
be used for both userspace and kernel.

Moving to a 52-bit kernel VA, we no longer require this offset to
ttbr1_el1 when running on a system with HW support for 52-bit VAs.

This patch introduces conditional logic to offset_ttbr1 to query
SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW
support for 52-bit VAs then the ttbr1 offset is skipped.

We choose to read a system register rather than vabits_actual because
offset_ttbr1 can be called in places where the kernel data is not
actually mapped.

Calls to offset_ttbr1 appear to be made from rarely called code paths so
this extra logic is not expected to adversely affect performance.
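
In C terms the new macro behaves roughly like the sketch below (for
illustration only; the authoritative logic is the assembler macro in
this patch):

    u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

    /* only apply the ttbr1 offset if HW lacks 52-bit VA support */
    if (!(mmfr2 & (0xfUL << ID_AA64MMFR2_LVA_SHIFT)))
        ttbr |= TTBR1_BADDR_4852_OFFSET;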

Signed-off-by: Steve Capper <steve.capper@arm.com>
---

Changed in V3: moved away from the alternatives framework, as
offset_ttbr1 can be called in places before the alternatives framework
has been initialised.
---
 arch/arm64/include/asm/assembler.h | 12 ++++++++++--
 arch/arm64/kernel/head.S           |  2 +-
 arch/arm64/kernel/hibernate-asm.S  |  8 ++++----
 arch/arm64/mm/proc.S               |  6 +++---
 4 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 92b6b7cf67dd..5b6e82eb2588 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -545,9 +545,17 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * In future this may be nop'ed out when dealing with 52-bit kernel VAs.
  * 	ttbr: Value of ttbr to set, modified.
  */
-	.macro	offset_ttbr1, ttbr
+	.macro	offset_ttbr1, ttbr, tmp
 #ifdef CONFIG_ARM64_USER_VA_BITS_52
 	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
+#endif
+
+#ifdef CONFIG_ARM64_VA_BITS_52
+	mrs_s	\tmp, SYS_ID_AA64MMFR2_EL1
+	and	\tmp, \tmp, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
+	cbnz	\tmp, .Lskipoffs_\@
+	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
+.Lskipoffs_\@ :
 #endif
 	.endm
 
@@ -557,7 +565,7 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * to be nop'ed out when dealing with 52-bit kernel VAs.
  */
 	.macro	restore_ttbr1, ttbr
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_VA_BITS_52)
 	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
 #endif
 	.endm
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b3335e639b6d..5b8e38503ce1 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -788,7 +788,7 @@ ENTRY(__enable_mmu)
 	phys_to_ttbr x1, x1
 	phys_to_ttbr x2, x2
 	msr	ttbr0_el1, x2			// load TTBR0
-	offset_ttbr1 x1
+	offset_ttbr1 x1, x3
 	msr	ttbr1_el1, x1			// load TTBR1
 	isb
 	msr	sctlr_el1, x0
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index fe36d85c60bd..f2dd592761fa 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -33,14 +33,14 @@
  * Even switching to our copied tables will cause a changed output address at
  * each stage of the walk.
  */
-.macro break_before_make_ttbr_switch zero_page, page_table, tmp
+.macro break_before_make_ttbr_switch zero_page, page_table, tmp, tmp2
 	phys_to_ttbr \tmp, \zero_page
 	msr	ttbr1_el1, \tmp
 	isb
 	tlbi	vmalle1
 	dsb	nsh
 	phys_to_ttbr \tmp, \page_table
-	offset_ttbr1 \tmp
+	offset_ttbr1 \tmp, \tmp2
 	msr	ttbr1_el1, \tmp
 	isb
 .endm
@@ -81,7 +81,7 @@ ENTRY(swsusp_arch_suspend_exit)
 	 * We execute from ttbr0, change ttbr1 to our copied linear map tables
 	 * with a break-before-make via the zero page
 	 */
-	break_before_make_ttbr_switch	x5, x0, x6
+	break_before_make_ttbr_switch	x5, x0, x6, x8
 
 	mov	x21, x1
 	mov	x30, x2
@@ -112,7 +112,7 @@ ENTRY(swsusp_arch_suspend_exit)
 	dsb	ish		/* wait for PoU cleaning to finish */
 
 	/* switch to the restored kernels page tables */
-	break_before_make_ttbr_switch	x25, x21, x6
+	break_before_make_ttbr_switch	x25, x21, x6, x8
 
 	ic	ialluis
 	dsb	ish
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index fdd626d34274..9f64283e0f89 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -179,7 +179,7 @@ ENDPROC(cpu_do_switch_mm)
 .macro	__idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
 	adrp	\tmp1, empty_zero_page
 	phys_to_ttbr \tmp2, \tmp1
-	offset_ttbr1 \tmp2
+	offset_ttbr1 \tmp2, \tmp1
 	msr	ttbr1_el1, \tmp2
 	isb
 	tlbi	vmalle1
@@ -198,7 +198,7 @@ ENTRY(idmap_cpu_replace_ttbr1)
 
 	__idmap_cpu_set_reserved_ttbr1 x1, x3
 
-	offset_ttbr1 x0
+	offset_ttbr1 x0, x3
 	msr	ttbr1_el1, x0
 	isb
 
@@ -373,7 +373,7 @@ __idmap_kpti_secondary:
 	cbnz	w18, 1b
 
 	/* All done, act like nothing happened */
-	offset_ttbr1 swapper_ttb
+	offset_ttbr1 swapper_ttb, x18
 	msr	ttbr1_el1, swapper_ttb
 	isb
 	ret
-- 
2.20.1



* [PATCH v3 07/10] arm64: mm: Separate out vmemmap
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
                   ` (5 preceding siblings ...)
  2019-06-12 17:26 ` [PATCH v3 06/10] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
@ 2019-06-12 17:26 ` Steve Capper
  2019-06-12 17:26 ` [PATCH v3 08/10] arm64: mm: Modify calculation of VMEMMAP_SIZE Steve Capper
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

vmemmap is a preprocessor definition that depends on a variable,
memstart_addr. In a later patch we will need to expand the size of
the VMEMMAP region and optionally modify vmemmap depending upon
whether or not hardware support is available for 52-bit virtual
addresses.

This patch changes vmemmap to be a variable. As the old definition
depended on a variable load, this should not affect performance
noticeably.
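
Note that with CONFIG_SPARSEMEM_VMEMMAP the generic helpers already
consume vmemmap as an ordinary symbol, e.g. from
include/asm-generic/memory_model.h:

    #define __pfn_to_page(pfn)	(vmemmap + (pfn))
    #define __page_to_pfn(page)	(unsigned long)((page) - vmemmap)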

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 4 ++--
 arch/arm64/mm/init.c             | 5 +++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d0ab784304e9..60c52c1da61a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -34,8 +34,6 @@
 #define VMALLOC_START		(MODULES_END)
 #define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
-#define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
-
 #define FIRST_USER_ADDRESS	0UL
 
 #ifndef __ASSEMBLY__
@@ -46,6 +44,8 @@
 #include <linux/mm_types.h>
 #include <linux/sched.h>
 
+extern struct page *vmemmap;
+
 extern void __pte_error(const char *file, int line, unsigned long val);
 extern void __pmd_error(const char *file, int line, unsigned long val);
 extern void __pud_error(const char *file, int line, unsigned long val);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d89341df2d0e..6844365c0a51 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -64,6 +64,9 @@ EXPORT_SYMBOL(memstart_addr);
 s64 physvirt_offset __ro_after_init;
 EXPORT_SYMBOL(physvirt_offset);
 
+struct page *vmemmap __ro_after_init;
+EXPORT_SYMBOL(vmemmap);
+
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
@@ -330,6 +333,8 @@ void __init arm64_memblock_init(void)
 
 	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
 
+	vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
-- 
2.20.1



* [PATCH v3 08/10] arm64: mm: Modify calculation of VMEMMAP_SIZE
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
                   ` (6 preceding siblings ...)
  2019-06-12 17:26 ` [PATCH v3 07/10] arm64: mm: Separate out vmemmap Steve Capper
@ 2019-06-12 17:26 ` Steve Capper
  2019-06-12 17:26 ` [PATCH v3 09/10] arm64: mm: Tweak PAGE_OFFSET logic Steve Capper
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

In a later patch we will need to have a slightly larger VMEMMAP region
to accommodate boot time selection between 48/52-bit kernel VAs.

This patch modifies the formula for computing VMEMMAP_SIZE to depend
explicitly on the PAGE_OFFSET and start of kernel addressable memory.
(This allows for a slightly larger direct linear map in future).
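
Expressed as a derivation (one struct page per page of the largest
possible linear region):

    VMEMMAP_SIZE = (region size / PAGE_SIZE) * sizeof(struct page)
                 = (_VA_START(VA_BITS_MIN) - PAGE_OFFSET)
                       >> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT)

where the region runs from the (possibly 52-bit) PAGE_OFFSET up to the
48-bit VA_START.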

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/memory.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 1c37b2620bb4..d0fe2fa9169d 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -37,8 +37,15 @@
 /*
  * VMEMMAP_SIZE - allows the whole linear region to be covered by
  *                a struct page array
+ *
+ * If we are configured with a 52-bit kernel VA then our VMEMMAP_SIZE
+ * needs to cover the memory region from the beginning of the 52-bit
+ * PAGE_OFFSET all the way to VA_START for 48-bit. This allows us to
+ * keep a constant PAGE_OFFSET and "fall back" to using the higher end
+ * of the VMEMMAP where 52-bit support is not available in hardware.
  */
-#define VMEMMAP_SIZE (UL(1) << (VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT))
+#define VMEMMAP_SIZE ((_VA_START(VA_BITS_MIN) - PAGE_OFFSET) \
+			>> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT))
 
 /*
  * PAGE_OFFSET - the virtual address of the start of the linear map (top
-- 
2.20.1



* [PATCH v3 09/10] arm64: mm: Tweak PAGE_OFFSET logic
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
                   ` (7 preceding siblings ...)
  2019-06-12 17:26 ` [PATCH v3 08/10] arm64: mm: Modify calculation of VMEMMAP_SIZE Steve Capper
@ 2019-06-12 17:26 ` Steve Capper
  2019-06-12 17:26 ` [PATCH v3 10/10] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
  2019-06-26 11:08 ` [PATCH v3 00/10] 52-bit kernel + user VAs Catalin Marinas
  10 siblings, 0 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

This patch introduces _PAGE_OFFSET(va) to allow for computing the
largest possible direct linear map for use with 48/52-bit selectable
kernel VAs.
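
For example:

    _PAGE_OFFSET(48) = 0xffff000000000000
    _PAGE_OFFSET(52) = 0xfff0000000000000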

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/memory.h | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d0fe2fa9169d..d3932463822c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -55,8 +55,9 @@
  * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
+#define _PAGE_OFFSET(va)	(UL(0xffffffffffffffff) - \
+					(UL(1) << (va)) + 1)
+#define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
-- 
2.20.1



* [PATCH v3 10/10] arm64: mm: Introduce 52-bit Kernel VAs
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
                   ` (8 preceding siblings ...)
  2019-06-12 17:26 ` [PATCH v3 09/10] arm64: mm: Tweak PAGE_OFFSET logic Steve Capper
@ 2019-06-12 17:26 ` Steve Capper
  2019-06-26 11:08 ` [PATCH v3 00/10] 52-bit kernel + user VAs Catalin Marinas
  10 siblings, 0 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-12 17:26 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

Most of the machinery is now in place to enable 52-bit kernel VAs that
are detectable at boot time.

This patch adds a Kconfig option for 52-bit user and kernel addresses
and plumbs in the requisite CONFIG_ macros, as well as setting TCR.T1SZ,
physvirt_offset and vmemmap at early boot.

To simplify things this patch also removes the 52-bit user/48-bit kernel
kconfig option.
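
At boot, the fallback amounts to logic of roughly the following shape
in arm64_memblock_init() (a sketch of the mm/init.c change in the
diffstat; treat the details as illustrative):

    if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52)) {
        /* HW lacks 52-bit VAs: avoid the 52-bit-only part of the map */
        vmemmap += (_PAGE_OFFSET(48) - _PAGE_OFFSET(52)) >> PAGE_SHIFT;
        physvirt_offset = PHYS_OFFSET - _PAGE_OFFSET(48);
    }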

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Kconfig                     | 21 ++++++++++++---------
 arch/arm64/include/asm/assembler.h     | 13 ++++++++-----
 arch/arm64/include/asm/memory.h        |  2 +-
 arch/arm64/include/asm/mmu_context.h   |  2 +-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 +-
 arch/arm64/kernel/head.S               |  4 ++--
 arch/arm64/mm/init.c                   | 10 ++++++++++
 arch/arm64/mm/proc.S                   |  3 ++-
 8 files changed, 37 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4421e5409bb8..557cacaa38cd 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -281,7 +281,7 @@ config PGTABLE_LEVELS
 	int
 	default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
 	default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
-	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52)
+	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_VA_BITS_52)
 	default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
@@ -295,12 +295,12 @@ config ARCH_PROC_KCORE_TEXT
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
-	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && !KASAN_SW_TAGS
 	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
 	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
 	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
 	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
-	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && KASAN_SW_TAGS
 	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
 	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
 	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
@@ -754,13 +754,14 @@ config ARM64_VA_BITS_47
 config ARM64_VA_BITS_48
 	bool "48-bit"
 
-config ARM64_USER_VA_BITS_52
-	bool "52-bit (user)"
+config ARM64_VA_BITS_52
+	bool "52-bit"
 	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
 	help
 	  Enable 52-bit virtual addressing for userspace when explicitly
-	  requested via a hint to mmap(). The kernel will continue to
-	  use 48-bit virtual addresses for its own mappings.
+	  requested via a hint to mmap(). The kernel will also use 52-bit
+	  virtual addresses for its own mappings (provided HW support for
+	  this feature is available, otherwise it reverts to 48-bit).
 
 	  NOTE: Enabling 52-bit virtual addressing in conjunction with
 	  ARMv8.3 Pointer Authentication will result in the PAC being
@@ -773,7 +774,7 @@ endchoice
 
 config ARM64_FORCE_52BIT
 	bool "Force 52-bit virtual addresses for userspace"
-	depends on ARM64_USER_VA_BITS_52 && EXPERT
+	depends on ARM64_VA_BITS_52 && EXPERT
 	help
 	  For systems with 52-bit userspace VAs enabled, the kernel will attempt
 	  to maintain compatibility with older software by providing 48-bit VAs
@@ -790,10 +791,12 @@ config ARM64_VA_BITS
 	default 39 if ARM64_VA_BITS_39
 	default 42 if ARM64_VA_BITS_42
 	default 47 if ARM64_VA_BITS_47
-	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
+	default 48 if ARM64_VA_BITS_48
+	default 52 if ARM64_VA_BITS_52
 
 config ARM64_VA_BITS_MIN
 	int
+	default 48 if ARM64_VA_BITS_52
 	default ARM64_VA_BITS
 
 choice
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 5b6e82eb2588..1db2db714397 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -356,6 +356,13 @@ alternative_endif
 	bfi	\valreg, \t0sz, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
 	.endm
 
+/*
+ * tcr_set_t1sz - update TCR.T1SZ
+ */
+	.macro	tcr_set_t1sz, valreg, t1sz
+	bfi	\valreg, \t1sz, #TCR_T1SZ_OFFSET, #TCR_TxSZ_WIDTH
+	.endm
+
 /*
  * tcr_compute_pa_size - set TCR.(I)PS to the highest supported
  * ID_AA64MMFR0_EL1.PARange value
@@ -546,10 +553,6 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * 	ttbr: Value of ttbr to set, modified.
  */
 	.macro	offset_ttbr1, ttbr, tmp
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
-	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
-#endif
-
 #ifdef CONFIG_ARM64_VA_BITS_52
 	mrs_s	\tmp, SYS_ID_AA64MMFR2_EL1
 	and	\tmp, \tmp, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
@@ -565,7 +568,7 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * to be nop'ed out when dealing with 52-bit kernel VAs.
  */
 	.macro	restore_ttbr1, ttbr
-#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_VA_BITS_52)
+#ifdef CONFIG_ARM64_VA_BITS_52
 	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
 #endif
 	.endm
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d3932463822c..37166ac3a5c1 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -76,7 +76,7 @@
 #define KERNEL_START      _text
 #define KERNEL_END        _end
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 #define MAX_USER_VA_BITS	52
 #else
 #define MAX_USER_VA_BITS	VA_BITS
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 133ecb65b602..6252b1e7c5bd 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -74,7 +74,7 @@ extern u64 idmap_ptrs_per_pgd;
 
 static inline bool __cpu_uses_extended_idmap(void)
 {
-	if (IS_ENABLED(CONFIG_ARM64_USER_VA_BITS_52))
+	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52))
 		return false;
 
 	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index a69259cc1f16..bbf0556a9126 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -316,7 +316,7 @@
 #define TTBR_BADDR_MASK_52	(((UL(1) << 46) - 1) << 2)
 #endif
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 /* Must be at least 64-byte aligned to prevent corruption of the TTBR */
 #define TTBR1_BADDR_4852_OFFSET	(((UL(1) << (52 - PGDIR_SHIFT)) - \
 				 (UL(1) << (48 - PGDIR_SHIFT))) * 8)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 5b8e38503ce1..1a03ab998225 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -319,7 +319,7 @@ __create_page_tables:
 	adrp	x0, idmap_pg_dir
 	adrp	x3, __idmap_text_start		// __pa(__idmap_text_start)
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 	mrs_s	x6, SYS_ID_AA64MMFR2_EL1
 	and	x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
 	mov	x5, #52
@@ -805,7 +805,7 @@ ENTRY(__enable_mmu)
 ENDPROC(__enable_mmu)
 
 ENTRY(__cpu_secondary_check52bitva)
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 	ldr_l	x0, vabits_user
 	cmp	x0, #52
 	b.ne	2f
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 6844365c0a51..9bc00970e54e 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -335,6 +335,16 @@ void __init arm64_memblock_init(void)
 
 	vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
 
+	/*
+	 * If we are running with a 52-bit kernel VA config on a system that
+	 * does not support it, we have to offset our vmemmap and physvirt_offset
+	 * s.t. we avoid the 52-bit portion of the direct linear map
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (VA_BITS_ACTUAL != 52)) {
+		vmemmap += (_PAGE_OFFSET(48) - _PAGE_OFFSET(52)) >> PAGE_SHIFT;
+		physvirt_offset = PHYS_OFFSET - _PAGE_OFFSET(48);
+	}
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 9f64283e0f89..cf12d05e51e6 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -449,10 +449,11 @@ ENTRY(__cpu_setup)
 			TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS
 	tcr_clear_errata_bits x10, x9, x5
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 	ldr_l		x9, vabits_user
 	sub		x9, xzr, x9
 	add		x9, x9, #64
+	tcr_set_t1sz	x10, x9
 #else
 	ldr_l		x9, idmap_t0sz
 #endif
-- 
2.20.1
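
To put concrete numbers on the init.c fallback above (a standalone
sketch, reusing the _PAGE_OFFSET() definition from the earlier memory.h
change and assuming 64K pages, i.e. PAGE_SHIFT = 16, which the 52-bit
option requires): when a 52-bit kernel boots on hardware without 52-bit
VA support, vmemmap is advanced past the struct pages that would have
covered the unusable [_PAGE_OFFSET(52), _PAGE_OFFSET(48)) region:

	#include <stdio.h>

	#define _PAGE_OFFSET(va)	(0xffffffffffffffffUL - (1UL << (va)) + 1)
	#define PAGE_SHIFT		16	/* 64K pages */

	int main(void)
	{
		/* number of pages in the 52-bit-only part of the linear map */
		unsigned long skip = (_PAGE_OFFSET(48) - _PAGE_OFFSET(52))
					>> PAGE_SHIFT;
		printf("vmemmap skip = 0x%lx pages\n", skip);	/* 0xf00000000 */
		return 0;
	}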


^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 01/10] arm64: mm: Flip kernel VA space
  2019-06-12 17:26 ` [PATCH v3 01/10] arm64: mm: Flip kernel VA space Steve Capper
@ 2019-06-14 12:17   ` Anshuman Khandual
  2019-06-17 16:09     ` Steve Capper
  2019-06-14 13:00   ` Anshuman Khandual
  1 sibling, 1 reply; 17+ messages in thread
From: Anshuman Khandual @ 2019-06-14 12:17 UTC (permalink / raw)
  To: Steve Capper, linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

Hello Steve,

On 06/12/2019 10:56 PM, Steve Capper wrote:
> Put the direct linear map in the lower addresses of the kernel VA range
> and everything else in the higher ranges.

The reason for this flip needs to be made clearer in the commit message.

> 
> This allows us to make room for an inline KASAN shadow that operates
> under both 48 and 52 bit kernel VA sizes. For example with a 52-bit VA,
> if KASAN_SHADOW_END < 0xFFF8000000000000 (it is in the lower addresses
> of the kernel VA range), this will be below the start of the minimum
> 48-bit kernel VA address of 0xFFFF000000000000.

Though this is true, it does not convey why the flip is required. As you
mentioned previously, KASAN_SHADOW_END is fixed because it needs to
support the highest possible VA (~0UL) in both the 48-bit and 52-bit cases.

Hence KASAN_SHADOW_START will have to be variable in order to accommodate
either a 48-bit or a 52-bit effective virtual address space. Given that, I
am not sure what the above example based on KASAN_SHADOW_START is trying
to convey.

KASAN_SHADOW_END cannot be in the (52-bit space minus 48-bit space)
exclusion VA range, as that would not work for the 48-bit case. But
KASAN_SHADOW_START can very well be in that space for the 52-bit case.

The current definition

#define KASAN_SHADOW_START      (VA_START)
#define KASAN_SHADOW_END        (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)

This won't work in the new scheme because VA_START is different for 48
bits and 52 bits, which would make KASAN_SHADOW_END variable as well.
Hence we need to change this arrangement.
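
(To make the fixed-end arrangement concrete — a sketch under the usual
arm64 assumption that the shadow covers 1/8th of the VA space, i.e.
KASAN_SHADOW_SCALE_SHIFT = 3 for generic KASAN — the end stays put and
the start "grows" downwards with the boot-time VA size:)

	/* illustrative only: shadow spans 1 << (va_bits - scale) bytes */
	#define KASAN_SHADOW_SIZE(va_bits) \
		(1UL << ((va_bits) - KASAN_SHADOW_SCALE_SHIFT))
	#define KASAN_SHADOW_START(va_bits) \
		(KASAN_SHADOW_END - KASAN_SHADOW_SIZE(va_bits))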

What the commit message does not establish is that there are no other
alternate arrangements where KASAN_SHADOW_END remains fixed (and also
accessible in the 48-bit scheme), KASAN_SHADOW_START is variable to
accommodate both the 48-bit and 52-bit cases, and flipping the kernel VA
space is the only viable option.

Ahh I see this in the cover letter.

" In order to allow for a KASAN shadow that changes size at boot time, one
must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
start address. Also, it is highly desirable to maintain the same
function addresses in the kernel .text between VA sizes. Both of these
requirements necessitate us to flip the kernel address space halves s.t.
the direct linear map occupies the lower addresses."

This is better, though (please add it to the commit message).

Were there no alternate arrangements to achieve the above two objectives
without flipping the VA space? The reasoning in the commit message is not
convincing enough, even with the above cover letter extract.

> 
> We need to adjust:
>  *) KASAN shadow region placement logic,
>  *) KASAN_SHADOW_OFFSET computation logic,
>  *) virt_to_phys, phys_to_virt checks,
>  *) page table dumper.
> 
> These are all small changes, that need to take place atomically, so they
> are bundled into this commit.

It would be great to add before-patch and after-patch views of the
kernel virtual address space, listing the different sections, to make
apparent what has changed in the layout and how.
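
(For the 48-bit case, the two definitions in the patch work out as
follows — a quick worked example of the flip:

	Before:
		VA_START    = 0xffff000000000000   bottom of the kernel VA space
		PAGE_OFFSET = 0xffff800000000000   linear map in the upper half
	After:
		PAGE_OFFSET = 0xffff000000000000   linear map in the lower half
		VA_START    = 0xffff800000000000   everything else above it)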

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 01/10] arm64: mm: Flip kernel VA space
  2019-06-12 17:26 ` [PATCH v3 01/10] arm64: mm: Flip kernel VA space Steve Capper
  2019-06-14 12:17   ` Anshuman Khandual
@ 2019-06-14 13:00   ` Anshuman Khandual
  2019-06-17 16:08     ` Steve Capper
  1 sibling, 1 reply; 17+ messages in thread
From: Anshuman Khandual @ 2019-06-14 13:00 UTC (permalink / raw)
  To: Steve Capper, linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

On 06/12/2019 10:56 PM, Steve Capper wrote:
> Put the direct linear map in the lower addresses of the kernel VA range
> and everything else in the higher ranges.
> 
> This allows us to make room for an inline KASAN shadow that operates
> under both 48 and 52 bit kernel VA sizes. For example with a 52-bit VA,
> if KASAN_SHADOW_END < 0xFFF8000000000000 (it is in the lower addresses
> of the kernel VA range), this will be below the start of the minimum
> 48-bit kernel VA address of 0xFFFF000000000000.
> 
> We need to adjust:
>  *) KASAN shadow region placement logic,
>  *) KASAN_SHADOW_OFFSET computation logic,
>  *) virt_to_phys, phys_to_virt checks,
>  *) page table dumper.
> 
> These are all small changes, that need to take place atomically, so they
> are bundled into this commit.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>
> ---
>  arch/arm64/Makefile              | 2 +-
>  arch/arm64/include/asm/memory.h  | 8 ++++----
>  arch/arm64/include/asm/pgtable.h | 2 +-
>  arch/arm64/kernel/hibernate.c    | 2 +-
>  arch/arm64/mm/dump.c             | 8 ++++----
>  arch/arm64/mm/init.c             | 9 +--------
>  arch/arm64/mm/kasan_init.c       | 6 +++---
>  arch/arm64/mm/mmu.c              | 4 ++--
>  8 files changed, 17 insertions(+), 24 deletions(-)
> 
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index b025304bde46..2dad2ae6b181 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -115,7 +115,7 @@ KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
>  #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
>  # in 32-bit arithmetic
>  KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
> -	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
> +	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
>  	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
>  	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
>  
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 8ffcf5a512bb..5cd2eb8cb424 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -49,9 +49,9 @@
>   */
>  #define VA_BITS			(CONFIG_ARM64_VA_BITS)
>  #define VA_START		(UL(0xffffffffffffffff) - \
> -	(UL(1) << VA_BITS) + 1)
> -#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
>  	(UL(1) << (VA_BITS - 1)) + 1)
> +#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
> +	(UL(1) << VA_BITS) + 1)

PAGE_OFFSET and VA_START swapped their positions.

There are many places with UL(0xffffffffffffffff). Time to define
it as a constant? Something like [KERNEL|TTBR1]_MAX_VADDR.

>  #define KIMAGE_VADDR		(MODULES_END)
>  #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
>  #define BPF_JIT_REGION_SIZE	(SZ_128M)
> @@ -59,7 +59,7 @@
>  #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
>  #define MODULES_VADDR		(BPF_JIT_REGION_END)
>  #define MODULES_VSIZE		(SZ_128M)
> -#define VMEMMAP_START		(PAGE_OFFSET - VMEMMAP_SIZE)
> +#define VMEMMAP_START		(-VMEMMAP_SIZE)
>  #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
>  #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
>  #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
> @@ -238,7 +238,7 @@ extern u64			vabits_user;
>   * space. Testing the top bit for the start of the region is a
>   * sufficient check.
>   */
> -#define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))
> +#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))

Should it be (!!((addr) & BIT(VA_BITS - 2))) instead, for a positive
validation of addresses in the lower half?

>  
>  #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
>  #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 2c41b04708fe..d0ab784304e9 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -32,7 +32,7 @@
>   *	and fixed mappings
>   */
>  #define VMALLOC_START		(MODULES_END)
> -#define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
> +#define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)

(-VMEMMAP_SIZE) and (- PUD_SIZE - VMEMMAP_SIZE - SZ_64K) depend on
implicit sign inversion. IMHO it might be better to add
[KERNEL|TTBR1]_MAX_VADDR to the equation.

>  
>  #define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
>  
> diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> index 9859e1178e6b..6ffcc32f35dd 100644
> --- a/arch/arm64/kernel/hibernate.c
> +++ b/arch/arm64/kernel/hibernate.c
> @@ -497,7 +497,7 @@ int swsusp_arch_resume(void)
>  		rc = -ENOMEM;
>  		goto out;
>  	}
> -	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
> +	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, VA_START);
>  	if (rc)
>  		goto out;
>  
> diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
> index 14fe23cd5932..ee4e5bea8944 100644
> --- a/arch/arm64/mm/dump.c
> +++ b/arch/arm64/mm/dump.c
> @@ -30,6 +30,8 @@
>  #include <asm/ptdump.h>
>  
>  static const struct addr_marker address_markers[] = {
> +	{ PAGE_OFFSET,			"Linear Mapping start" },
> +	{ VA_START,			"Linear Mapping end" },
>  #ifdef CONFIG_KASAN
>  	{ KASAN_SHADOW_START,		"Kasan shadow start" },
>  	{ KASAN_SHADOW_END,		"Kasan shadow end" },
> @@ -43,10 +45,8 @@ static const struct addr_marker address_markers[] = {
>  	{ PCI_IO_START,			"PCI I/O start" },
>  	{ PCI_IO_END,			"PCI I/O end" },
>  #ifdef CONFIG_SPARSEMEM_VMEMMAP
> -	{ VMEMMAP_START,		"vmemmap start" },
> -	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
> +	{ VMEMMAP_START,		"vmemmap" },

Vmemmap end got dropped?

>  #endif
> -	{ PAGE_OFFSET,			"Linear mapping" },
>  	{ -1,				NULL },
>  };
>  
> @@ -380,7 +380,7 @@ static void ptdump_initialize(void)
>  static struct ptdump_info kernel_ptdump_info = {
>  	.mm		= &init_mm,
>  	.markers	= address_markers,
> -	.base_addr	= VA_START,
> +	.base_addr	= PAGE_OFFSET,
>  };
>  
>  void ptdump_check_wx(void)
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index d2adffb81b5d..574ed1d4be19 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -311,7 +311,7 @@ static void __init fdt_enforce_memory_region(void)
>  
>  void __init arm64_memblock_init(void)
>  {
> -	const s64 linear_region_size = -(s64)PAGE_OFFSET;
> +	const s64 linear_region_size = BIT(VA_BITS - 1);
>  
>  	/* Handle linux,usable-memory-range property */
>  	fdt_enforce_memory_region();
> @@ -319,13 +319,6 @@ void __init arm64_memblock_init(void)
>  	/* Remove memory above our supported physical address size */
>  	memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
>  
> -	/*
> -	 * Ensure that the linear region takes up exactly half of the kernel
> -	 * virtual address space. This way, we can distinguish a linear address
> -	 * from a kernel/module/vmalloc address by testing a single bit.
> -	 */
> -	BUILD_BUG_ON(linear_region_size != BIT(VA_BITS - 1));
> -
>  	/*
>  	 * Select a suitable value for the base of physical memory.
>  	 */
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 296de39ddee5..8066621052db 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -229,10 +229,10 @@ void __init kasan_init(void)
>  	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
>  			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
>  
> -	kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
> -				    (void *)mod_shadow_start);
> +	kasan_populate_early_shadow(kasan_mem_to_shadow((void *) VA_START),
> +				   (void *)mod_shadow_start);
>  	kasan_populate_early_shadow((void *)kimg_shadow_end,
> -				    kasan_mem_to_shadow((void *)PAGE_OFFSET));
> +				   (void *)KASAN_SHADOW_END);
>  
>  	if (kimg_shadow_start > mod_shadow_end)
>  		kasan_populate_early_shadow((void *)mod_shadow_end,
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index a1bfc4413982..16063ff10c6d 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -409,7 +409,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
>  static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
>  				  phys_addr_t size, pgprot_t prot)
>  {
> -	if (virt < VMALLOC_START) {
> +	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
>  		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
>  			&phys, virt);
>  		return;
> @@ -436,7 +436,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
>  static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
>  				phys_addr_t size, pgprot_t prot)
>  {
> -	if (virt < VMALLOC_START) {
> +	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
>  		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
>  			&phys, virt);
>  		return;
> 

It seems like adding (virt >= VA_START) is a semantics change here. In
the previous scheme, (virt < VMALLOC_START) included undefined addresses
below VA_START as well, which will be omitted here. Should we not add
(virt < PAGE_OFFSET) to check for those?

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 01/10] arm64: mm: Flip kernel VA space
  2019-06-14 13:00   ` Anshuman Khandual
@ 2019-06-17 16:08     ` Steve Capper
  0 siblings, 0 replies; 17+ messages in thread
From: Steve Capper @ 2019-06-17 16:08 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: crecklin, ard.biesheuvel, Marc Zyngier, Catalin Marinas,
	bhsharma, Will Deacon, nd, linux-arm-kernel

Hi Anshuman,

On Fri, Jun 14, 2019 at 06:30:21PM +0530, Anshuman Khandual wrote:
> On 06/12/2019 10:56 PM, Steve Capper wrote:
> > Put the direct linear map in the lower addresses of the kernel VA range
> > and everything else in the higher ranges.
> > 

[...]

> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 8ffcf5a512bb..5cd2eb8cb424 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -49,9 +49,9 @@
> >   */
> >  #define VA_BITS			(CONFIG_ARM64_VA_BITS)
> >  #define VA_START		(UL(0xffffffffffffffff) - \
> > -	(UL(1) << VA_BITS) + 1)
> > -#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
> >  	(UL(1) << (VA_BITS - 1)) + 1)
> > +#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
> > +	(UL(1) << VA_BITS) + 1)
> 
> PAGE_OFFSET and VA_START swapped their positions.
>

In the file I think they haven't moved; would you like them to be
swapped? (Their values have changed.)

> There are many places with UL(0xffffffffffffffff). Time to define
> it as a constant? Something like [KERNEL|TTBR1]_MAX_VADDR.
>

Could do; I am also tempted to do something like UL(~0) for the sake of
brevity.

> >  #define KIMAGE_VADDR		(MODULES_END)
> >  #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
> >  #define BPF_JIT_REGION_SIZE	(SZ_128M)
> > @@ -59,7 +59,7 @@
> >  #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
> >  #define MODULES_VADDR		(BPF_JIT_REGION_END)
> >  #define MODULES_VSIZE		(SZ_128M)
> > -#define VMEMMAP_START		(PAGE_OFFSET - VMEMMAP_SIZE)
> > +#define VMEMMAP_START		(-VMEMMAP_SIZE)
> >  #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
> >  #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
> >  #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
> > @@ -238,7 +238,7 @@ extern u64			vabits_user;
> >   * space. Testing the top bit for the start of the region is a
> >   * sufficient check.
> >   */
> > -#define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))
> > +#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
> 
> Should it be (!!((addr) & BIT(VA_BITS - 2))) instead, for a positive
> validation of addresses in the lower half?
> 

I don't think so; we basically want to test which half of the address
space our address is in, so we test BIT(VA_BITS - 1). Testing
BIT(VA_BITS - 2) would instead tell us which half of one of the halves
an address is in.
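
(To spell that out with VA_BITS = 48 — a standalone check, mirroring the
macro with BIT() expanded:)

	#include <assert.h>

	#define VA_BITS	48
	#define __is_lm_address(addr)	(!((addr) & (1UL << (VA_BITS - 1))))

	int main(void)
	{
		/* bit 47 clear: lower half, i.e. the linear map */
		assert(__is_lm_address(0xffff000000000000UL));
		/* bit 47 set: upper half, vmalloc/modules/etc. */
		assert(!__is_lm_address(0xffff800000000000UL));
		return 0;
	}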

> >  
> >  #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
> >  #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 2c41b04708fe..d0ab784304e9 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -32,7 +32,7 @@
> >   *	and fixed mappings
> >   */
> >  #define VMALLOC_START		(MODULES_END)
> > -#define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
> > +#define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
> 
> (-VMEMMAP_SIZE) and (- PUD_SIZE - VMEMMAP_SIZE - SZ_64K) depend on
> implicit sign inversion. IMHO it might be better to add
> [KERNEL|TTBR1]_MAX_VADDR to the equation.
>

Hmmm, okay, I see what you mean. I will play with this.

> >  
> >  #define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
> >  
> > diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
> > index 9859e1178e6b..6ffcc32f35dd 100644
> > --- a/arch/arm64/kernel/hibernate.c
> > +++ b/arch/arm64/kernel/hibernate.c
> > @@ -497,7 +497,7 @@ int swsusp_arch_resume(void)
> >  		rc = -ENOMEM;
> >  		goto out;
> >  	}
> > -	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
> > +	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, VA_START);
> >  	if (rc)
> >  		goto out;
> >  
> > diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
> > index 14fe23cd5932..ee4e5bea8944 100644
> > --- a/arch/arm64/mm/dump.c
> > +++ b/arch/arm64/mm/dump.c
> > @@ -30,6 +30,8 @@
> >  #include <asm/ptdump.h>
> >  
> >  static const struct addr_marker address_markers[] = {
> > +	{ PAGE_OFFSET,			"Linear Mapping start" },
> > +	{ VA_START,			"Linear Mapping end" },
> >  #ifdef CONFIG_KASAN
> >  	{ KASAN_SHADOW_START,		"Kasan shadow start" },
> >  	{ KASAN_SHADOW_END,		"Kasan shadow end" },
> > @@ -43,10 +45,8 @@ static const struct addr_marker address_markers[] = {
> >  	{ PCI_IO_START,			"PCI I/O start" },
> >  	{ PCI_IO_END,			"PCI I/O end" },
> >  #ifdef CONFIG_SPARSEMEM_VMEMMAP
> > -	{ VMEMMAP_START,		"vmemmap start" },
> > -	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
> > +	{ VMEMMAP_START,		"vmemmap" },
> 
> Vmemmap end got dropped?
> 

Yeah, IIRC this was because the marker never got printed; I will re-test
this.

> >  #endif
> > -	{ PAGE_OFFSET,			"Linear mapping" },
> >  	{ -1,				NULL },
> >  };
> >  
> > @@ -380,7 +380,7 @@ static void ptdump_initialize(void)
> >  static struct ptdump_info kernel_ptdump_info = {
> >  	.mm		= &init_mm,
> >  	.markers	= address_markers,
> > -	.base_addr	= VA_START,
> > +	.base_addr	= PAGE_OFFSET,
> >  };
> >  
> >  void ptdump_check_wx(void)
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index d2adffb81b5d..574ed1d4be19 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -311,7 +311,7 @@ static void __init fdt_enforce_memory_region(void)
> >  
> >  void __init arm64_memblock_init(void)
> >  {
> > -	const s64 linear_region_size = -(s64)PAGE_OFFSET;
> > +	const s64 linear_region_size = BIT(VA_BITS - 1);
> >  
> >  	/* Handle linux,usable-memory-range property */
> >  	fdt_enforce_memory_region();
> > @@ -319,13 +319,6 @@ void __init arm64_memblock_init(void)
> >  	/* Remove memory above our supported physical address size */
> >  	memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
> >  
> > -	/*
> > -	 * Ensure that the linear region takes up exactly half of the kernel
> > -	 * virtual address space. This way, we can distinguish a linear address
> > -	 * from a kernel/module/vmalloc address by testing a single bit.
> > -	 */
> > -	BUILD_BUG_ON(linear_region_size != BIT(VA_BITS - 1));
> > -
> >  	/*
> >  	 * Select a suitable value for the base of physical memory.
> >  	 */
> > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > index 296de39ddee5..8066621052db 100644
> > --- a/arch/arm64/mm/kasan_init.c
> > +++ b/arch/arm64/mm/kasan_init.c
> > @@ -229,10 +229,10 @@ void __init kasan_init(void)
> >  	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
> >  			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
> >  
> > -	kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
> > -				    (void *)mod_shadow_start);
> > +	kasan_populate_early_shadow(kasan_mem_to_shadow((void *) VA_START),
> > +				   (void *)mod_shadow_start);
> >  	kasan_populate_early_shadow((void *)kimg_shadow_end,
> > -				    kasan_mem_to_shadow((void *)PAGE_OFFSET));
> > +				   (void *)KASAN_SHADOW_END);
> >  
> >  	if (kimg_shadow_start > mod_shadow_end)
> >  		kasan_populate_early_shadow((void *)mod_shadow_end,
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index a1bfc4413982..16063ff10c6d 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -409,7 +409,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
> >  static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
> >  				  phys_addr_t size, pgprot_t prot)
> >  {
> > -	if (virt < VMALLOC_START) {
> > +	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
> >  		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
> >  			&phys, virt);
> >  		return;
> > @@ -436,7 +436,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
> >  static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
> >  				phys_addr_t size, pgprot_t prot)
> >  {
> > -	if (virt < VMALLOC_START) {
> > +	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
> >  		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
> >  			&phys, virt);
> >  		return;
> > 
> 
> It seems like adding (virt >= VA_START) is a semantics change here. In
> the previous scheme, (virt < VMALLOC_START) included undefined addresses
> below VA_START as well, which will be omitted here. Should we not add
> (virt < PAGE_OFFSET) to check for those?

Not quite: the original code would not warn if we passed an address in
the direct linear mapping. The conditional logic has been modified to
reflect the fact that the direct linear mapping has moved.

If the original behaviour was a bug then I'm happy to fix it :-).

Cheers,
-- 
Steve

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 01/10] arm64: mm: Flip kernel VA space
  2019-06-14 12:17   ` Anshuman Khandual
@ 2019-06-17 16:09     ` Steve Capper
  2019-06-26 10:56       ` Catalin Marinas
  0 siblings, 1 reply; 17+ messages in thread
From: Steve Capper @ 2019-06-17 16:09 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: crecklin, ard.biesheuvel, Marc Zyngier, Catalin Marinas,
	bhsharma, Will Deacon, nd, linux-arm-kernel

Hi Anshuman,

Thanks for taking a look at this.

On Fri, Jun 14, 2019 at 05:47:55PM +0530, Anshuman Khandual wrote:
> Hello Steve,
> 
> On 06/12/2019 10:56 PM, Steve Capper wrote:
> > Put the direct linear map in the lower addresses of the kernel VA range
> > and everything else in the higher ranges.
> 
> The reason for this flip needs to be made clearer in the commit message.
> 

Agreed, that's what I failed to do below ;-).

> > 
> > This allows us to make room for an inline KASAN shadow that operates
> > under both 48 and 52 bit kernel VA sizes. For example with a 52-bit VA,
> > if KASAN_SHADOW_END < 0xFFF8000000000000 (it is in the lower addresses
> > of the kernel VA range), this will be below the start of the minimum
> > 48-bit kernel VA address of 0xFFFF000000000000.
> 
> Though this is true, it does not convey why the flip is required. As you
> mentioned previously, KASAN_SHADOW_END is fixed because it needs to
> support the highest possible VA (~0UL) in both the 48-bit and 52-bit cases.
> 
> Hence KASAN_SHADOW_START will have to be variable in order to accommodate
> either a 48-bit or a 52-bit effective virtual address space. Given that, I
> am not sure what the above example based on KASAN_SHADOW_START is trying
> to convey.
> 
> KASAN_SHADOW_END cannot be in the (52-bit space minus 48-bit space)
> exclusion VA range, as that would not work for the 48-bit case. But
> KASAN_SHADOW_START can very well be in that space for the 52-bit case.
> 
> The current definition
> 
> #define KASAN_SHADOW_START      (VA_START)
> #define KASAN_SHADOW_END        (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
> 
> This won't work in the new scheme because VA_START is different for 48
> bits and 52 bits, which would make KASAN_SHADOW_END variable as well.
> Hence we need to change this arrangement.

Yes, in a future patch? VA_BITS is still constant here.
> 
> What the commit message does not establish is that there are no other
> alternate arrangements where KASAN_SHADOW_END remains fixed (and also
> accessible in the 48-bit scheme), KASAN_SHADOW_START is variable to
> accommodate both the 48-bit and 52-bit cases, and flipping the kernel VA
> space is the only viable option.
> 
> Ahh I see this in the cover letter.
> 
> " In order to allow for a KASAN shadow that changes size at boot time, one
> must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
> start address. Also, it is highly desirable to maintain the same
> function addresses in the kernel .text between VA sizes. Both of these
> requirements necessitate us to flip the kernel address space halves s.t.
> the direct linear map occupies the lower addresses."
> 
> This is better, though (please add it to the commit message).
> 

Sure :-)

> Were there no alternate arrangements to achieve the above two objectives
> without flipping the VA space? The reasoning in the commit message is not
> convincing enough, even with the above cover letter extract.
> 
> > 
> > We need to adjust:
> >  *) KASAN shadow region placement logic,
> >  *) KASAN_SHADOW_OFFSET computation logic,
> >  *) virt_to_phys, phys_to_virt checks,
> >  *) page table dumper.
> > 
> > These are all small changes, that need to take place atomically, so they
> > are bundled into this commit.
> 
> It would be great to add before-patch and after-patch views of the
> kernel virtual address space, listing the different sections, to make
> apparent what has changed in the layout and how.

Hmm, pondering this. I could introduce a documentation file in a new
first patch, then modify it as the series changes?

Catalin, would that work for you?

Cheers,
-- 
Steve

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 01/10] arm64: mm: Flip kernel VA space
  2019-06-17 16:09     ` Steve Capper
@ 2019-06-26 10:56       ` Catalin Marinas
  0 siblings, 0 replies; 17+ messages in thread
From: Catalin Marinas @ 2019-06-26 10:56 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, Marc Zyngier, bhsharma,
	Anshuman Khandual, Will Deacon, nd, linux-arm-kernel

On Mon, Jun 17, 2019 at 05:09:11PM +0100, Steve Capper wrote:
> On Fri, Jun 14, 2019 at 05:47:55PM +0530, Anshuman Khandual wrote:
> > On 06/12/2019 10:56 PM, Steve Capper wrote:
> > > We need to adjust:
> > >  *) KASAN shadow region placement logic,
> > >  *) KASAN_SHADOW_OFFSET computation logic,
> > >  *) virt_to_phys, phys_to_virt checks,
> > >  *) page table dumper.
> > > 
> > > These are all small changes, that need to take place atomically, so they
> > > are bundled into this commit.
> > 
> > It would be great to add before-patch and after-patch views of the
> > kernel virtual address space, listing the different sections, to make
> > apparent what has changed in the layout and how.
> 
> Hmm, pondering this. I can introduce a documentation document in a new
> first patch then modify it as the series changes?
> 
> Catalin, would that work for you?

If you want to. Otherwise just a patch documenting the final layout
would work for me (possibly mentioning the commit which removed the
original document as a reference).

-- 
Catalin

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v3 00/10] 52-bit kernel + user VAs
  2019-06-12 17:26 [PATCH v3 00/10] 52-bit kernel + user VAs Steve Capper
                   ` (9 preceding siblings ...)
  2019-06-12 17:26 ` [PATCH v3 10/10] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
@ 2019-06-26 11:08 ` Catalin Marinas
  10 siblings, 0 replies; 17+ messages in thread
From: Catalin Marinas @ 2019-06-26 11:08 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, marc.zyngier, bhsharma, will.deacon,
	linux-arm-kernel

On Wed, Jun 12, 2019 at 06:26:48PM +0100, Steve Capper wrote:
> Steve Capper (10):
>   arm64: mm: Flip kernel VA space
>   arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
>   arm64: dump: De-constify VA_START and KASAN_SHADOW_START
>   arm64: mm: Introduce VA_BITS_MIN
>   arm64: mm: Introduce VA_BITS_ACTUAL
>   arm64: mm: Logic to make offset_ttbr1 conditional
>   arm64: mm: Separate out vmemmap
>   arm64: mm: Modify calculation of VMEMMAP_SIZE
>   arm64: mm: Tweak PAGE_OFFSET logic
>   arm64: mm: Introduce 52-bit Kernel VAs

The patches look fine to me now but it would be good if Ard had a look
as well.

I'll do some testing with my config combinations script (which takes a
while).

-- 
Catalin

^ permalink raw reply	[flat|nested] 17+ messages in thread
