* [PATCH V4 00/11] 52-bit kernel + user VAs
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

This patch series adds support for 52-bit kernel VAs using some of the
machinery already introduced by the 52-bit userspace VA code in 5.0.

As 52-bit virtual address support is an optional hardware feature,
software support for 52-bit kernel VAs needs to be determined at early
boot time. If HW support is not available, the kernel falls back to 48-bit.

A significant proportion of this series focuses on "de-constifying"
VA_BITS-related constants.

In order to allow for a KASAN shadow that changes size at boot time, one
must fix KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
start address downwards. Also, it is highly desirable to maintain the
same function addresses in the kernel .text between VA sizes. Both of
these requirements mean that we must flip the kernel address space
halves such that the direct linear map occupies the lower addresses.
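
For a 48-bit VA configuration, the flip amounts to swapping the two
bases (the values below follow from the definitions in patch 2 of this
series):

    Before:  VA_START    = 0xffff000000000000  (kernel region, lower half)
             PAGE_OFFSET = 0xffff800000000000  (linear map,    upper half)
    After:   PAGE_OFFSET = 0xffff000000000000  (linear map,    lower half)
             VA_START    = 0xffff800000000000  (kernel region, upper half)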

In V4 of this series, an extra documentation patch is added to explain
both the layout of the memory and the implementation of 52-bit support.
Also added is a guard region after VMEMMAP to avoid ambiguity with
IS_ERR-style pointers. Finally, the bit-masking optimisations for
VMEMMAP and PAGE_OFFSET are replaced with addition/subtraction in a new
first patch for the series.

In V3 of this series, the 52-bit user/48-bit kernel option is removed
and we are left with a single 52-bit VA option instead. The offset_ttbr1
conditional logic has been re-worked to directly read a system register
rather than rely on the alternatives framework (I couldn't actually see a
hot path calling offset_ttbr1, and some parts of early boot relied on
offset_ttbr1 before the alternatives framework had been applied). Also,
some spurious de-constifying changes have been removed.

In V2 of this series (apologies for the long delay from V1), the major
change is that PAGE_OFFSET is retained as a constant. This allows for
much faster virt_to_page computations. This is achieved by expanding the
size of the VMEMMAP region to accommodate a disjoint 52-bit/48-bit
direct linear map. This has been found to work well in my testing, but I
would appreciate any feedback should it need changing. To aid with
git bisect, this logic is broken down into a few smaller patches.
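
Concretely, a constant PAGE_OFFSET keeps virt_to_page() as pure
arithmetic; a sketch of the form it takes after patch 1 (KASAN tag
handling omitted):

    #define virt_to_page(vaddr)	((struct page *)(		\
    	((u64)(vaddr) - PAGE_OFFSET) / PAGE_SIZE *		\
    	sizeof(struct page) + VMEMMAP_START))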


Steve Capper (11):
  arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and
    VMEMMAP_START
  arm64: mm: Flip kernel VA space
  arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  arm64: dump: De-constify VA_START and KASAN_SHADOW_START
  arm64: mm: Introduce VA_BITS_MIN
  arm64: mm: Introduce VA_BITS_ACTUAL
  arm64: mm: Logic to make offset_ttbr1 conditional
  arm64: mm: Separate out vmemmap
  arm64: mm: Modify calculation of VMEMMAP_SIZE
  arm64: mm: Introduce 52-bit Kernel VAs
  docs: arm64: Add layout and 52-bit info to memory document

 Documentation/arm64/kasan-offsets.sh   |  27 ++++
 Documentation/arm64/memory.rst         | 177 ++++++++++++++++++++++---
 arch/arm64/Kconfig                     |  36 ++++-
 arch/arm64/Makefile                    |   8 --
 arch/arm64/include/asm/assembler.h     |  17 ++-
 arch/arm64/include/asm/efi.h           |   4 +-
 arch/arm64/include/asm/kasan.h         |  11 +-
 arch/arm64/include/asm/memory.h        |  56 +++++---
 arch/arm64/include/asm/mmu_context.h   |   4 +-
 arch/arm64/include/asm/pgtable-hwdef.h |   2 +-
 arch/arm64/include/asm/pgtable.h       |   6 +-
 arch/arm64/include/asm/processor.h     |   2 +-
 arch/arm64/kernel/head.S               |  13 +-
 arch/arm64/kernel/hibernate-asm.S      |   8 +-
 arch/arm64/kernel/hibernate.c          |   2 +-
 arch/arm64/kernel/kaslr.c              |   6 +-
 arch/arm64/kvm/va_layout.c             |  14 +-
 arch/arm64/mm/dump.c                   |  24 +++-
 arch/arm64/mm/fault.c                  |   4 +-
 arch/arm64/mm/init.c                   |  29 ++--
 arch/arm64/mm/kasan_init.c             |  11 +-
 arch/arm64/mm/mmu.c                    |   7 +-
 arch/arm64/mm/proc.S                   |   9 +-
 23 files changed, 361 insertions(+), 116 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

-- 
2.20.1


* [PATCH V4 01/11] arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

Currently there are assumptions about the alignment of VMEMMAP_START
and PAGE_OFFSET that won't be valid after this series is applied.

These assumptions are in the form of bitwise operators being used
instead of addition and subtraction when calculating addresses.

This patch replaces these bitwise operators with addition/subtraction.
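
The bitwise forms are only equivalent to addition/subtraction while the
base is aligned to a power of two no smaller than the offset; for
instance:

    /* OR acts as ADD only while the offset fits below the base's alignment */
    0xffff800000000000UL | 0x1234UL   ==  0xffff800000000000UL + 0x1234UL
    /* but once a base has low bits set (e.g. after a guard region is
       carved out below VMEMMAP_START), OR silently drops the carry: */
    0xffffffffffe00000UL | 0x200000UL ==  0xffffffffffe00000UL  /* != base + off */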

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

New in V4
---
 arch/arm64/include/asm/memory.h | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b7ba75809751..d3a951dc9878 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -295,21 +295,20 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
 #define _virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 #else
-#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
-#define __page_to_voff(kaddr)	(((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
+#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
+#define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
 
 #define page_to_virt(page)	({					\
 	unsigned long __addr =						\
-		((__page_to_voff(page)) | PAGE_OFFSET);			\
+		((__page_to_voff(page)) + PAGE_OFFSET);			\
 	unsigned long __addr_tag =					\
 		 __tag_set(__addr, page_kasan_tag(page));		\
 	((void *)__addr_tag);						\
 })
 
-#define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
+#define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
 
-#define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
-					   + PHYS_OFFSET) >> PAGE_SHIFT)
+#define _virt_addr_valid(kaddr)	pfn_valid(__virt_to_phys((u64)(kaddr)) >> PAGE_SHIFT)
 #endif
 #endif
 
-- 
2.20.1


* [PATCH V4 02/11] arm64: mm: Flip kernel VA space
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

In order to allow for a KASAN shadow that changes size at boot time, one
must fix KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
start address downwards. Also, it is highly desirable to maintain the
same function addresses in the kernel .text between VA sizes. Both of
these requirements mean that we must flip the kernel address space
halves such that the direct linear map occupies the lower addresses.

This patch puts the direct linear map in the lower addresses of the
kernel VA range and everything else in the higher ranges.

We need to adjust:
 *) KASAN shadow region placement logic,
 *) KASAN_SHADOW_OFFSET computation logic,
 *) virt_to_phys, phys_to_virt checks,
 *) page table dumper.

These are all small changes that need to take place atomically, so they
are bundled into this commit.

As part of the re-arrangement, a 2MB guard region (to preserve
alignment for the fixmap) is added after the vmemmap; otherwise, the
vmemmap could intersect with IS_ERR pointers.
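
For reference, the error-pointer helpers treat the top 4 KiB of the
address space as encoded errnos (paraphrasing include/linux/err.h):

    #define MAX_ERRNO	4095
    #define IS_ERR_VALUE(x)	((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)

Without the guard, a struct page address at the very top of the vmemmap
region could satisfy IS_ERR_VALUE().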

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

Changed in V4 - we add a guard region after vmemmap to avoid ambiguity
with error pointers
---
 arch/arm64/Makefile              | 2 +-
 arch/arm64/include/asm/memory.h  | 8 ++++----
 arch/arm64/include/asm/pgtable.h | 2 +-
 arch/arm64/kernel/hibernate.c    | 2 +-
 arch/arm64/mm/dump.c             | 7 ++++---
 arch/arm64/mm/init.c             | 9 +--------
 arch/arm64/mm/kasan_init.c       | 6 +++---
 arch/arm64/mm/mmu.c              | 4 ++--
 8 files changed, 17 insertions(+), 23 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index bb1f1dbb34e8..b2400f9c1213 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -130,7 +130,7 @@ KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
+	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
 	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
 	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
 
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d3a951dc9878..98a87f0f40d5 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -38,9 +38,9 @@
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << (VA_BITS - 1)) + 1)
+#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
+	(UL(1) << VA_BITS) + 1)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
@@ -48,7 +48,7 @@
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(BPF_JIT_REGION_END)
 #define MODULES_VSIZE		(SZ_128M)
-#define VMEMMAP_START		(PAGE_OFFSET - VMEMMAP_SIZE)
+#define VMEMMAP_START		(-VMEMMAP_SIZE - SZ_2M)
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
@@ -227,7 +227,7 @@ extern u64			vabits_user;
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 3f5461f7b560..d274ea9a5f86 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -21,7 +21,7 @@
  *	and fixed mappings
  */
 #define VMALLOC_START		(MODULES_END)
-#define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
+#define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
 
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 9341fcc6e809..e130db05d932 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -496,7 +496,7 @@ int swsusp_arch_resume(void)
 		rc = -ENOMEM;
 		goto out;
 	}
-	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
+	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, VA_START);
 	if (rc)
 		goto out;
 
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 82b3a7fdb4a6..6f0b9f8ddf55 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -26,6 +26,8 @@
 #include <asm/ptdump.h>
 
 static const struct addr_marker address_markers[] = {
+	{ PAGE_OFFSET,			"Linear Mapping start" },
+	{ VA_START,			"Linear Mapping end" },
 #ifdef CONFIG_KASAN
 	{ KASAN_SHADOW_START,		"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
@@ -40,9 +42,8 @@ static const struct addr_marker address_markers[] = {
 	{ PCI_IO_END,			"PCI I/O end" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 	{ VMEMMAP_START,		"vmemmap start" },
-	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
+	{ -1,				"vmemmap end" },
 #endif
-	{ PAGE_OFFSET,			"Linear mapping" },
 	{ -1,				NULL },
 };
 
@@ -376,7 +377,7 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= VA_START,
+	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f3c795278def..62927ed02229 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -301,7 +301,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+	const s64 linear_region_size = BIT(VA_BITS - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -309,13 +309,6 @@ void __init arm64_memblock_init(void)
 	/* Remove memory above our supported physical address size */
 	memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
 
-	/*
-	 * Ensure that the linear region takes up exactly half of the kernel
-	 * virtual address space. This way, we can distinguish a linear address
-	 * from a kernel/module/vmalloc address by testing a single bit.
-	 */
-	BUILD_BUG_ON(linear_region_size != BIT(VA_BITS - 1));
-
 	/*
 	 * Select a suitable value for the base of physical memory.
 	 */
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 6cf97b904ebb..05edfe9b02e4 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -225,10 +225,10 @@ void __init kasan_init(void)
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
 			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
-	kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
-				    (void *)mod_shadow_start);
+	kasan_populate_early_shadow(kasan_mem_to_shadow((void *) VA_START),
+				   (void *)mod_shadow_start);
 	kasan_populate_early_shadow((void *)kimg_shadow_end,
-				    kasan_mem_to_shadow((void *)PAGE_OFFSET));
+				   (void *)KASAN_SHADOW_END);
 
 	if (kimg_shadow_start > mod_shadow_end)
 		kasan_populate_early_shadow((void *)mod_shadow_end,
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 750a69dde39b..1d4247f9a496 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -398,7 +398,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 				  phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
@@ -425,7 +425,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
-- 
2.20.1


* [PATCH V4 03/11] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
line argument and affects the codegen of the inline address sanitiser.

Essentially, for an example memory access:
    *ptr1 = val;
The compiler will insert logic similar to the below:
    shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET);
    if (somethingWrong(shadowValue))
        flagAnError();

This code sequence is inserted into many places, thus
KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
text.

If we want to run a single kernel binary with multiple address spaces,
then we need to do this with KASAN_SHADOW_OFFSET fixed.

Thankfully, due to the way KASAN_SHADOW_OFFSET is used to provide
shadow addresses, we know that the end of the shadow region is constant
w.r.t. VA space size:
    KASAN_SHADOW_END = (~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

This means that if we increase the size of the VA space, the start of
the KASAN region expands into lower addresses whilst the end of the
KASAN region is fixed.
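
Concretely, with KASAN_SHADOW_SCALE_SHIFT == 3 (generic KASAN), the
definitions introduced below give:

    KASAN_SHADOW_END   == (1UL << 61) + KASAN_SHADOW_OFFSET          /* fixed */
    KASAN_SHADOW_START == KASAN_SHADOW_END - (1UL << (va_bits - 3))  /* grows down */

so moving from 48-bit to 52-bit VAs only moves the start to a lower
address.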

Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
build scripts with the VA size used as a parameter (there are also build
time checks in the C code to ensure that the expected values are being
derived). It is sufficient, and indeed a simplification, to remove the
build scripts (and build time checks) entirely and instead provide the
KASAN_SHADOW_OFFSET values directly in Kconfig.

This patch removes the logic to compute KASAN_SHADOW_OFFSET in the
arm64 Makefile, and instead we adopt the approach used by x86 to supply
offset values in Kconfig. To help debug/develop future VA space changes,
the Makefile logic has been preserved in a script file in the arm64
Documentation folder.
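
For example, running the script reproduces the Kconfig defaults added
below (first rows of output shown):

    $ sh Documentation/arm64/kasan-offsets.sh
    KASAN_SHADOW_SCALE_SHIFT = 3
    VABITS	KASAN_SHADOW_OFFSET
    48	0xdfffa00000000000
    47	0xdfffd00000000000
    ...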

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 Documentation/arm64/kasan-offsets.sh | 27 +++++++++++++++++++++++++++
 arch/arm64/Kconfig                   | 15 +++++++++++++++
 arch/arm64/Makefile                  |  8 --------
 arch/arm64/include/asm/kasan.h       | 11 ++++-------
 arch/arm64/include/asm/memory.h      | 12 +++++++++---
 arch/arm64/mm/kasan_init.c           |  2 --
 6 files changed, 55 insertions(+), 20 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
new file mode 100644
index 000000000000..2b7a021db363
--- /dev/null
+++ b/Documentation/arm64/kasan-offsets.sh
@@ -0,0 +1,27 @@
+#!/bin/sh
+
+# Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
+# start address at the mid-point of the kernel VA space
+
+print_kasan_offset () {
+	printf "%02d\t" $1
+	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+			+ (1 << ($1 - 32 - $2)) \
+			- (1 << (64 - 32 - $2)) ))
+}
+
+echo KASAN_SHADOW_SCALE_SHIFT = 3
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 3
+print_kasan_offset 47 3
+print_kasan_offset 42 3
+print_kasan_offset 39 3
+print_kasan_offset 36 3
+echo
+echo KASAN_SHADOW_SCALE_SHIFT = 4
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 4
+print_kasan_offset 47 4
+print_kasan_offset 42 4
+print_kasan_offset 39 4
+print_kasan_offset 36 4
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3adcec05b1f6..f7f23e47c28f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -297,6 +297,21 @@ config ARCH_SUPPORTS_UPROBES
 config ARCH_PROC_KCORE_TEXT
 	def_bool y
 
+config KASAN_SHADOW_OFFSET
+	hex
+	depends on KASAN
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
+	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
+	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
+	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
+	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+	default 0xffffffffffffffff
+
 source "arch/arm64/Kconfig.platforms"
 
 menu "Kernel Features"
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index b2400f9c1213..2b7db0d41498 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -126,14 +126,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 
-# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
-#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
-# in 32-bit arithmetic
-KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
-	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
-	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
-
 export	TEXT_OFFSET GZFLAGS
 
 core-y		+= arch/arm64/kernel/ arch/arm64/mm/
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b52aacd2c526..10d2add842da 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -18,11 +18,8 @@
  * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
  * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses,
  * where N = (1 << KASAN_SHADOW_SCALE_SHIFT).
- */
-#define KASAN_SHADOW_START      (VA_START)
-#define KASAN_SHADOW_END        (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
-
-/*
+ *
+ * KASAN_SHADOW_OFFSET:
  * This value is used to map an address to the corresponding shadow
  * address by the following formula:
  *     shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
@@ -33,8 +30,8 @@
  *      KASAN_SHADOW_OFFSET = KASAN_SHADOW_END -
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
-#define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1ULL << \
-					(64 - KASAN_SHADOW_SCALE_SHIFT)))
+#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 98a87f0f40d5..8b0f1599b2d1 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -42,7 +42,7 @@
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << VA_BITS) + 1)
 #define KIMAGE_VADDR		(MODULES_END)
-#define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
+#define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
 #define BPF_JIT_REGION_END	(BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
@@ -68,11 +68,17 @@
  * significantly, so double the (minimum) stack size when they are in use.
  */
 #ifdef CONFIG_KASAN
-#define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
+#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
+					+ KASAN_SHADOW_OFFSET)
+#ifdef CONFIG_KASAN_EXTRA
+#define KASAN_THREAD_SHIFT	2
+#else
 #define KASAN_THREAD_SHIFT	1
+#endif
 #else
-#define KASAN_SHADOW_SIZE	(0)
 #define KASAN_THREAD_SHIFT	0
+#define KASAN_SHADOW_END	(VA_START)
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 05edfe9b02e4..9e68e3d12956 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -154,8 +154,6 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
-		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
-- 
2.20.1


* [PATCH V4 04/11] arm64: dump: De-constify VA_START and KASAN_SHADOW_START
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

The kernel page table dumper assumes that the placement of VA regions is
constant and determined at compile time. As we are about to introduce
variable VA logic, we need to be able to determine certain regions at
boot time.

Specifically the VA_START and KASAN_SHADOW_START will depend on whether
or not the system is booted with 52-bit kernel VAs.

This patch adds logic to the kernel page table dumper such that these
regions can be computed at boot time.

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

Changed in V3 - simplified the scope of de-constifying to just VA_START
and KASAN_SHADOW_START.
---
 arch/arm64/mm/dump.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 6f0b9f8ddf55..afe7e1460557 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -25,11 +25,20 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/ptdump.h>
 
-static const struct addr_marker address_markers[] = {
+
+enum address_markers_idx {
+	PAGE_OFFSET_NR = 0,
+	VA_START_NR,
+#ifdef CONFIG_KASAN
+	KASAN_START_NR,
+#endif
+};
+
+static struct addr_marker address_markers[] = {
 	{ PAGE_OFFSET,			"Linear Mapping start" },
-	{ VA_START,			"Linear Mapping end" },
+	{ 0 /* VA_START */,		"Linear Mapping end" },
 #ifdef CONFIG_KASAN
-	{ KASAN_SHADOW_START,		"Kasan shadow start" },
+	{ 0 /* KASAN_SHADOW_START */,	"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
 #endif
 	{ MODULES_VADDR,		"Modules start" },
@@ -402,6 +411,10 @@ void ptdump_check_wx(void)
 
 static int ptdump_init(void)
 {
+	address_markers[VA_START_NR].start_address = VA_START;
+#ifdef CONFIG_KASAN
+	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
+#endif
 	ptdump_initialize();
 	ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables");
 	return 0;
-- 
2.20.1


* [PATCH V4 05/11] arm64: mm: Introduce VA_BITS_MIN
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

In order to support 52-bit kernel addresses detectable at boot time, the
kernel needs to know the most conservative VA_BITS value it may have to
fall back to should hardware support be lacking.

A new compile time constant VA_BITS_MIN is introduced in this patch and
it is employed in the KASAN end address, KASLR, and EFI stub.

For Arm, if 52-bit VA support is unavailable, the fallback is to 48 bits.

In other words: VA_BITS_MIN = min(48, VA_BITS)
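
A preprocessor sketch of that relationship (in this patch the Kconfig
option ARM64_VA_BITS_MIN simply mirrors ARM64_VA_BITS; the 52-bit case
is wired up later in the series):

    #ifdef CONFIG_ARM64_VA_BITS_52
    #define VA_BITS_MIN	(48)
    #else
    #define VA_BITS_MIN	(VA_BITS)
    #endif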

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Kconfig                 | 4 ++++
 arch/arm64/include/asm/efi.h       | 4 ++--
 arch/arm64/include/asm/memory.h    | 5 ++++-
 arch/arm64/include/asm/processor.h | 2 +-
 arch/arm64/kernel/head.S           | 2 +-
 arch/arm64/kernel/kaslr.c          | 6 +++---
 arch/arm64/mm/kasan_init.c         | 3 ++-
 7 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f7f23e47c28f..0206804b0868 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -797,6 +797,10 @@ config ARM64_VA_BITS
 	default 47 if ARM64_VA_BITS_47
 	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
 
+config ARM64_VA_BITS_MIN
+	int
+	default ARM64_VA_BITS
+
 choice
 	prompt "Physical address space size"
 	default ARM64_PA_BITS_48
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 8e79ce9c3f5c..f6dbc0149dae 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -79,7 +79,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
- * which is a 1 GB aligned region of size '1UL << (VA_BITS - 1)' that is
+ * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
  * guaranteed to cover the kernel Image.
  *
  * Since the EFI stub is part of the kernel Image, we can relax the
@@ -90,7 +90,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 static inline unsigned long efi_get_max_initrd_addr(unsigned long dram_base,
 						    unsigned long image_addr)
 {
-	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS - 1));
+	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS_MIN - 1));
 }
 
 #define efi_call_early(f, ...)		sys_table_arg->boottime->f(__VA_ARGS__)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 8b0f1599b2d1..a8a91a573bff 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -52,6 +52,9 @@
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
+#define VA_BITS_MIN		(CONFIG_ARM64_VA_BITS_MIN)
+#define _VA_START(va)		(UL(0xffffffffffffffff) - \
+				(UL(1) << ((va) - 1)) + 1)
 
 #define KERNEL_START      _text
 #define KERNEL_END        _end
@@ -78,7 +81,7 @@
 #endif
 #else
 #define KASAN_THREAD_SHIFT	0
-#define KASAN_SHADOW_END	(VA_START)
+#define KASAN_SHADOW_END	(_VA_START(VA_BITS_MIN))
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 844e2964b0f5..0e1f2770192a 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -42,7 +42,7 @@
  * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
  */
 
-#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
+#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS_MIN)
 #define TASK_SIZE_64		(UL(1) << vabits_user)
 
 #ifdef CONFIG_COMPAT
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 2cdacd1c141b..ac58c69993ec 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -314,7 +314,7 @@ __create_page_tables:
 	mov	x5, #52
 	cbnz	x6, 1f
 #endif
-	mov	x5, #VA_BITS
+	mov	x5, #VA_BITS_MIN
 1:
 	adr_l	x6, vabits_user
 	str	x5, [x6]
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 708051655ad9..5a59f7567f9c 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -116,15 +116,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS - 2), and stay clear of
+	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
 	 * the lower and upper quarters to avoid colliding with other
 	 * allocations.
 	 * Even if we could randomize at page granularity for 16k and 64k pages,
 	 * let's always round to 2 MB so we don't interfere with the ability to
 	 * map using contiguous PTEs
 	 */
-	mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
-	offset = BIT(VA_BITS - 3) + (seed & mask);
+	mask = ((1UL << (VA_BITS_MIN - 2)) - 1) & ~(SZ_2M - 1);
+	offset = BIT(VA_BITS_MIN - 3) + (seed & mask);
 
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 9e68e3d12956..881d545d252a 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -154,7 +154,8 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
 			   true);
-- 
2.20.1


* [PATCH V4 06/11] arm64: mm: Introduce VA_BITS_ACTUAL
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

In order to support 52-bit kernel addresses detectable at boot time, one
needs to know the actual VA_BITS detected. A new variable, VA_BITS_ACTUAL,
is introduced in this commit and employed for the KVM hypervisor layout,
KASAN, fault handling and phys-to/from-virt translation, in places where
there would normally be compile-time constants.

In order to maintain performance in phys_to_virt, another variable
physvirt_offset is introduced.
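
With physvirt_offset set once at boot, the linear map translations
reduce to a single add/subtract; a sketch of the resulting arithmetic
(see the diff below):

    physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;	/* arm64_memblock_init() */
    virt = phys - physvirt_offset;			/* __phys_to_virt()       */
    phys = virt + physvirt_offset;			/* __lm_to_phys()         */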

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/kasan.h       |  2 +-
 arch/arm64/include/asm/memory.h      | 12 +++++++-----
 arch/arm64/include/asm/mmu_context.h |  2 +-
 arch/arm64/kernel/head.S             |  5 +++++
 arch/arm64/kvm/va_layout.c           | 14 +++++++-------
 arch/arm64/mm/fault.c                |  4 ++--
 arch/arm64/mm/init.c                 |  7 ++++++-
 arch/arm64/mm/mmu.c                  |  3 +++
 8 files changed, 32 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 10d2add842da..ff991dc86ae1 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -31,7 +31,7 @@
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
 #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
-#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS)
+#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS_ACTUAL)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index a8a91a573bff..93341f4fe840 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -37,8 +37,6 @@
  * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << (VA_BITS - 1)) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << VA_BITS) + 1)
 #define KIMAGE_VADDR		(MODULES_END)
@@ -166,10 +164,14 @@
 #endif
 
 #ifndef __ASSEMBLY__
+extern u64			vabits_actual;
+#define VA_BITS_ACTUAL		({vabits_actual;})
+#define VA_START		(_VA_START(VA_BITS_ACTUAL))
 
 #include <linux/bitops.h>
 #include <linux/mmdebug.h>
 
+extern s64			physvirt_offset;
 extern s64			memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET		({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
@@ -236,9 +238,9 @@ extern u64			vabits_user;
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS_ACTUAL - 1)))
 
-#define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+#define __lm_to_phys(addr)	(((addr) + physvirt_offset))
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
 
 #define __virt_to_phys_nodebug(x) ({					\
@@ -257,7 +259,7 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
 #define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
 #endif
 
-#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+#define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 7ed0adb187a8..890ccaf02264 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -95,7 +95,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 	isb();
 }
 
-#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS))
+#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS_ACTUAL))
 #define cpu_set_idmap_tcr_t0sz()	__cpu_set_tcr_t0sz(idmap_t0sz)
 
 /*
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index ac58c69993ec..6dc7349868d9 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -321,6 +321,11 @@ __create_page_tables:
 	dmb	sy
 	dc	ivac, x6		// Invalidate potentially stale cache line
 
+	adr_l	x6, vabits_actual
+	str	x5, [x6]
+	dmb	sy
+	dc	ivac, x6		// Invalidate potentially stale cache line
+
 	/*
 	 * VA_BITS may be too small to allow for an ID mapping to be created
 	 * that covers system RAM if that is located sufficiently high in the
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index acd8084f1f2c..aaf1a3d43959 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -29,25 +29,25 @@ static void compute_layout(void)
 	int kva_msb;
 
 	/* Where is my RAM region? */
-	hyp_va_msb  = idmap_addr & BIT(VA_BITS - 1);
-	hyp_va_msb ^= BIT(VA_BITS - 1);
+	hyp_va_msb  = idmap_addr & BIT(VA_BITS_ACTUAL - 1);
+	hyp_va_msb ^= BIT(VA_BITS_ACTUAL - 1);
 
 	kva_msb = fls64((u64)phys_to_virt(memblock_start_of_DRAM()) ^
 			(u64)(high_memory - 1));
 
-	if (kva_msb == (VA_BITS - 1)) {
+	if (kva_msb == (VA_BITS_ACTUAL - 1)) {
 		/*
 		 * No space in the address, let's compute the mask so
-		 * that it covers (VA_BITS - 1) bits, and the region
+		 * that it covers (VA_BITS_ACTUAL - 1) bits, and the region
 		 * bit. The tag stays set to zero.
 		 */
-		va_mask  = BIT(VA_BITS - 1) - 1;
+		va_mask  = BIT(VA_BITS_ACTUAL - 1) - 1;
 		va_mask |= hyp_va_msb;
 	} else {
 		/*
 		 * We do have some free bits to insert a random tag.
 		 * Hyp VAs are now created from kernel linear map VAs
-		 * using the following formula (with V == VA_BITS):
+		 * using the following formula (with V == VA_BITS_ACTUAL):
 		 *
 		 *  63 ... V |     V-1    | V-2 .. tag_lsb | tag_lsb - 1 .. 0
 		 *  ---------------------------------------------------------
@@ -55,7 +55,7 @@ static void compute_layout(void)
 		 */
 		tag_lsb = kva_msb;
 		va_mask = GENMASK_ULL(tag_lsb - 1, 0);
-		tag_val = get_random_long() & GENMASK_ULL(VA_BITS - 2, tag_lsb);
+		tag_val = get_random_long() & GENMASK_ULL(VA_BITS_ACTUAL - 2, tag_lsb);
 		tag_val |= hyp_va_msb;
 		tag_val >>= tag_lsb;
 	}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 9568c116ac7f..751617613f0c 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -138,9 +138,9 @@ static void show_pte(unsigned long addr)
 		return;
 	}
 
-	pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp=%016lx\n",
+	pr_alert("%s pgtable: %luk pages, %llu-bit VAs, pgdp=%016lx\n",
 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
-		 mm == &init_mm ? VA_BITS : (int)vabits_user,
+		 mm == &init_mm ? VA_BITS_ACTUAL : (int)vabits_user,
 		 (unsigned long)virt_to_phys(mm->pgd));
 	pgdp = pgd_offset(mm, addr);
 	pgd = READ_ONCE(*pgdp);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 62927ed02229..189177672567 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -50,6 +50,9 @@
 s64 memstart_addr __ro_after_init = -1;
 EXPORT_SYMBOL(memstart_addr);
 
+s64 physvirt_offset __ro_after_init;
+EXPORT_SYMBOL(physvirt_offset);
+
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
@@ -301,7 +304,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = BIT(VA_BITS - 1);
+	const s64 linear_region_size = BIT(VA_BITS_ACTUAL - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -315,6 +318,8 @@ void __init arm64_memblock_init(void)
 	memstart_addr = round_down(memblock_start_of_DRAM(),
 				   ARM64_MEMSTART_ALIGN);
 
+	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 1d4247f9a496..07b30e6d17f8 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -43,6 +43,9 @@ u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 u64 vabits_user __ro_after_init;
 EXPORT_SYMBOL(vabits_user);
 
+u64 __section(".mmuoff.data.write") vabits_actual;
+EXPORT_SYMBOL(vabits_actual);
+
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
 
-- 
2.20.1


* [PATCH V4 07/11] arm64: mm: Logic to make offset_ttbr1 conditional
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

When running with a 52-bit userspace VA and a 48-bit kernel VA we offset
ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to
be used for both userspace and kernel.

Moving on to a 52-bit kernel VA, we no longer require this offset to
ttbr1_el1 when running on a system with HW support for 52-bit VAs.

This patch introduces conditional logic to offset_ttbr1 to query
SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW
support for 52-bit VAs then the ttbr1 offset is skipped.
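
Roughly the C equivalent of the new check (the real macro is assembly,
see assembler.h below):

    u64 mmfr2 = read_sysreg_s(SYS_ID_AA64MMFR2_EL1);

    if (!(mmfr2 & (0xfUL << ID_AA64MMFR2_LVA_SHIFT)))
    	ttbr |= TTBR1_BADDR_4852_OFFSET;	/* no 52-bit HW: keep the offset */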

We choose to read a system register rather than vabits_actual because
offset_ttbr1 can be called in places where the kernel data is not
actually mapped.

Calls to offset_ttbr1 appear to be made from rarely called code paths so
this extra logic is not expected to adversely affect performance.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---

Changed in V3, move away from alternative framework as offset_ttbr1 can
be called in places before the alternative framework has been
initialised.
---
 arch/arm64/include/asm/assembler.h | 12 ++++++++++--
 arch/arm64/kernel/head.S           |  2 +-
 arch/arm64/kernel/hibernate-asm.S  |  8 ++++----
 arch/arm64/mm/proc.S               |  6 +++---
 4 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index e3a15c751b13..ede368bafa2c 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -538,9 +538,17 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * In future this may be nop'ed out when dealing with 52-bit kernel VAs.
  * 	ttbr: Value of ttbr to set, modified.
  */
-	.macro	offset_ttbr1, ttbr
+	.macro	offset_ttbr1, ttbr, tmp
 #ifdef CONFIG_ARM64_USER_VA_BITS_52
 	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
+#endif
+
+#ifdef CONFIG_ARM64_VA_BITS_52
+	mrs_s	\tmp, SYS_ID_AA64MMFR2_EL1
+	and	\tmp, \tmp, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
+	cbnz	\tmp, .Lskipoffs_\@
+	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
+.Lskipoffs_\@ :
 #endif
 	.endm
 
@@ -550,7 +558,7 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * to be nop'ed out when dealing with 52-bit kernel VAs.
  */
 	.macro	restore_ttbr1, ttbr
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_VA_BITS_52)
 	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
 #endif
 	.endm
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 6dc7349868d9..a96dc4386c7c 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -777,7 +777,7 @@ ENTRY(__enable_mmu)
 	phys_to_ttbr x1, x1
 	phys_to_ttbr x2, x2
 	msr	ttbr0_el1, x2			// load TTBR0
-	offset_ttbr1 x1
+	offset_ttbr1 x1, x3
 	msr	ttbr1_el1, x1			// load TTBR1
 	isb
 	msr	sctlr_el1, x0
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index 2f4a2ce7264b..38bcd4d4e43b 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -22,14 +22,14 @@
  * Even switching to our copied tables will cause a changed output address at
  * each stage of the walk.
  */
-.macro break_before_make_ttbr_switch zero_page, page_table, tmp
+.macro break_before_make_ttbr_switch zero_page, page_table, tmp, tmp2
 	phys_to_ttbr \tmp, \zero_page
 	msr	ttbr1_el1, \tmp
 	isb
 	tlbi	vmalle1
 	dsb	nsh
 	phys_to_ttbr \tmp, \page_table
-	offset_ttbr1 \tmp
+	offset_ttbr1 \tmp, \tmp2
 	msr	ttbr1_el1, \tmp
 	isb
 .endm
@@ -70,7 +70,7 @@ ENTRY(swsusp_arch_suspend_exit)
 	 * We execute from ttbr0, change ttbr1 to our copied linear map tables
 	 * with a break-before-make via the zero page
 	 */
-	break_before_make_ttbr_switch	x5, x0, x6
+	break_before_make_ttbr_switch	x5, x0, x6, x8
 
 	mov	x21, x1
 	mov	x30, x2
@@ -101,7 +101,7 @@ ENTRY(swsusp_arch_suspend_exit)
 	dsb	ish		/* wait for PoU cleaning to finish */
 
 	/* switch to the restored kernels page tables */
-	break_before_make_ttbr_switch	x25, x21, x6
+	break_before_make_ttbr_switch	x25, x21, x6, x8
 
 	ic	ialluis
 	dsb	ish
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 7dbf2be470f6..8d289ff7584d 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -168,7 +168,7 @@ ENDPROC(cpu_do_switch_mm)
 .macro	__idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
 	adrp	\tmp1, empty_zero_page
 	phys_to_ttbr \tmp2, \tmp1
-	offset_ttbr1 \tmp2
+	offset_ttbr1 \tmp2, \tmp1
 	msr	ttbr1_el1, \tmp2
 	isb
 	tlbi	vmalle1
@@ -187,7 +187,7 @@ ENTRY(idmap_cpu_replace_ttbr1)
 
 	__idmap_cpu_set_reserved_ttbr1 x1, x3
 
-	offset_ttbr1 x0
+	offset_ttbr1 x0, x3
 	msr	ttbr1_el1, x0
 	isb
 
@@ -362,7 +362,7 @@ __idmap_kpti_secondary:
 	cbnz	w18, 1b
 
 	/* All done, act like nothing happened */
-	offset_ttbr1 swapper_ttb
+	offset_ttbr1 swapper_ttb, x18
 	msr	ttbr1_el1, swapper_ttb
 	isb
 	ret
-- 
2.20.1


* [PATCH V4 08/11] arm64: mm: Separate out vmemmap
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

vmemmap is a preprocessor definition that depends on a variable,
memstart_addr. In a later patch we will need to expand the size of
the VMEMMAP region and optionally modify vmemmap depending upon
whether or not hardware support is available for 52-bit virtual
addresses.

This patch changes vmemmap to be a variable. As the old definition
depended on a variable load, this should not affect performance
noticeably.
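
Making vmemmap a variable keeps the generic sparsemem-vmemmap helpers
unchanged, as they only index off it (from
include/asm-generic/memory_model.h):

    #define __pfn_to_page(pfn)	(vmemmap + (pfn))
    #define __page_to_pfn(page)	(unsigned long)((page) - vmemmap)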

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 4 ++--
 arch/arm64/mm/init.c             | 5 +++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d274ea9a5f86..0eedf8664ecc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -23,8 +23,6 @@
 #define VMALLOC_START		(MODULES_END)
 #define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
-#define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
-
 #define FIRST_USER_ADDRESS	0UL
 
 #ifndef __ASSEMBLY__
@@ -35,6 +33,8 @@
 #include <linux/mm_types.h>
 #include <linux/sched.h>
 
+extern struct page *vmemmap;
+
 extern void __pte_error(const char *file, int line, unsigned long val);
 extern void __pmd_error(const char *file, int line, unsigned long val);
 extern void __pud_error(const char *file, int line, unsigned long val);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 189177672567..310e63b0dd22 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -53,6 +53,9 @@ EXPORT_SYMBOL(memstart_addr);
 s64 physvirt_offset __ro_after_init;
 EXPORT_SYMBOL(physvirt_offset);
 
+struct page *vmemmap __ro_after_init;
+EXPORT_SYMBOL(vmemmap);
+
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
@@ -320,6 +323,8 @@ void __init arm64_memblock_init(void)
 
 	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
 
+	vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
-- 
2.20.1


* [PATCH V4 09/11] arm64: mm: Modify calculation of VMEMMAP_SIZE
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

In a later patch we will need to have a slightly larger VMEMMAP region
to accommodate boot time selection between 48/52-bit kernel VAs.

This patch modifies the formula for computing VMEMMAP_SIZE to depend
explicitly on the PAGE_OFFSET and start of kernel addressable memory.
(This allows for a slightly larger direct linear map in future).
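
As a worked example, assuming 64K pages (PAGE_SHIFT == 16) and
STRUCT_PAGE_MAX_SHIFT == 6 (i.e. sizeof(struct page) <= 64 bytes), a
52-bit configuration gives:

    VMEMMAP_SIZE = (_VA_START(48) - PAGE_OFFSET) >> (16 - 6)
                 = (0xffff800000000000 - 0xfff0000000000000) >> 10
                 = (2^52 - 2^47) >> 10	/* ~3.9 TiB of vmemmap */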

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/memory.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 93341f4fe840..aa7186006ee5 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -26,8 +26,15 @@
 /*
  * VMEMMAP_SIZE - allows the whole linear region to be covered by
  *                a struct page array
+ *
+ * If we are configured with a 52-bit kernel VA then our VMEMMAP_SIZE
+ * needs to cover the memory region from the beginning of the 52-bit
+ * PAGE_OFFSET all the way to VA_START for 48-bit. This allows us to
+ * keep a constant PAGE_OFFSET and "fallback" to using the higher end
+ * of the VMEMMAP where 52-bit support is not available in hardware.
  */
-#define VMEMMAP_SIZE (UL(1) << (VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT))
+#define VMEMMAP_SIZE ((_VA_START(VA_BITS_MIN) - PAGE_OFFSET) \
+			>> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT))
 
 /*
  * PAGE_OFFSET - the virtual address of the start of the linear map (top
-- 
2.20.1


* [PATCH V4 10/11] arm64: mm: Introduce 52-bit Kernel VAs
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

Most of the machinery is now in place to enable 52-bit kernel VAs that
are detectable at boot time.

This patch adds a Kconfig option for 52-bit user and kernel addresses
and plumbs in the requisite CONFIG_ macros as well as sets TCR.T1SZ,
physvirt_offset and vmemmap at early boot.
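
For reference, T1SZ follows the usual TxSZ encoding of 64 minus the VA
size in use. A C sketch of what the proc.S hunk below computes from the
boot-time detected value in vabits_user (before inserting it into the
TCR via tcr_set_t1sz) would be:

	u64 t1sz = 64 - vabits_user;	/* 12 for 52-bit VAs, 16 for 48-bit */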

To simplify things this patch also removes the 52-bit user/48-bit kernel
kconfig option.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Kconfig                     | 21 ++++++++++++---------
 arch/arm64/include/asm/assembler.h     | 13 ++++++++-----
 arch/arm64/include/asm/memory.h        |  7 ++++---
 arch/arm64/include/asm/mmu_context.h   |  2 +-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 +-
 arch/arm64/kernel/head.S               |  4 ++--
 arch/arm64/mm/init.c                   | 10 ++++++++++
 arch/arm64/mm/proc.S                   |  3 ++-
 8 files changed, 40 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0206804b0868..7e80e9eeaef4 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -286,7 +286,7 @@ config PGTABLE_LEVELS
 	int
 	default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
 	default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
-	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52)
+	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_VA_BITS_52)
 	default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
@@ -300,12 +300,12 @@ config ARCH_PROC_KCORE_TEXT
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
-	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && !KASAN_SW_TAGS
 	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
 	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
 	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
 	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
-	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && KASAN_SW_TAGS
 	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
 	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
 	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
@@ -759,13 +759,14 @@ config ARM64_VA_BITS_47
 config ARM64_VA_BITS_48
 	bool "48-bit"
 
-config ARM64_USER_VA_BITS_52
-	bool "52-bit (user)"
+config ARM64_VA_BITS_52
+	bool "52-bit"
 	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
 	help
 	  Enable 52-bit virtual addressing for userspace when explicitly
-	  requested via a hint to mmap(). The kernel will continue to
-	  use 48-bit virtual addresses for its own mappings.
+	  requested via a hint to mmap(). The kernel will also use 52-bit
+	  virtual addresses for its own mappings (provided HW support for
+	  this feature is available, otherwise it reverts to 48-bit).
 
 	  NOTE: Enabling 52-bit virtual addressing in conjunction with
 	  ARMv8.3 Pointer Authentication will result in the PAC being
@@ -778,7 +779,7 @@ endchoice
 
 config ARM64_FORCE_52BIT
 	bool "Force 52-bit virtual addresses for userspace"
-	depends on ARM64_USER_VA_BITS_52 && EXPERT
+	depends on ARM64_VA_BITS_52 && EXPERT
 	help
 	  For systems with 52-bit userspace VAs enabled, the kernel will attempt
 	  to maintain compatibility with older software by providing 48-bit VAs
@@ -795,10 +796,12 @@ config ARM64_VA_BITS
 	default 39 if ARM64_VA_BITS_39
 	default 42 if ARM64_VA_BITS_42
 	default 47 if ARM64_VA_BITS_47
-	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
+	default 48 if ARM64_VA_BITS_48
+	default 52 if ARM64_VA_BITS_52
 
 config ARM64_VA_BITS_MIN
 	int
+	default 48 if ARM64_VA_BITS_52
 	default ARM64_VA_BITS
 
 choice
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index ede368bafa2c..c066fc4976cd 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -349,6 +349,13 @@ alternative_endif
 	bfi	\valreg, \t0sz, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
 	.endm
 
+/*
+ * tcr_set_t1sz - update TCR.T1SZ
+ */
+	.macro	tcr_set_t1sz, valreg, t1sz
+	bfi	\valreg, \t1sz, #TCR_T1SZ_OFFSET, #TCR_TxSZ_WIDTH
+	.endm
+
 /*
  * tcr_compute_pa_size - set TCR.(I)PS to the highest supported
  * ID_AA64MMFR0_EL1.PARange value
@@ -539,10 +546,6 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * 	ttbr: Value of ttbr to set, modified.
  */
 	.macro	offset_ttbr1, ttbr, tmp
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
-	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
-#endif
-
 #ifdef CONFIG_ARM64_VA_BITS_52
 	mrs_s	\tmp, SYS_ID_AA64MMFR2_EL1
 	and	\tmp, \tmp, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
@@ -558,7 +561,7 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * to be nop'ed out when dealing with 52-bit kernel VAs.
  */
 	.macro	restore_ttbr1, ttbr
-#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_VA_BITS_52)
+#ifdef CONFIG_ARM64_VA_BITS_52
 	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
 #endif
 	.endm
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index aa7186006ee5..473c9abfc35c 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -44,8 +44,9 @@
  * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
+#define _PAGE_OFFSET(va)	(UL(0xffffffffffffffff) - \
+					(UL(1) << (va)) + 1)
+#define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
@@ -64,7 +65,7 @@
 #define KERNEL_START      _text
 #define KERNEL_END        _end
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 #define MAX_USER_VA_BITS	52
 #else
 #define MAX_USER_VA_BITS	VA_BITS
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 890ccaf02264..5a185fde1e00 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -63,7 +63,7 @@ extern u64 idmap_ptrs_per_pgd;
 
 static inline bool __cpu_uses_extended_idmap(void)
 {
-	if (IS_ENABLED(CONFIG_ARM64_USER_VA_BITS_52))
+	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52))
 		return false;
 
 	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index db92950bb1a0..3df60f97da1f 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -304,7 +304,7 @@
 #define TTBR_BADDR_MASK_52	(((UL(1) << 46) - 1) << 2)
 #endif
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 /* Must be at least 64-byte aligned to prevent corruption of the TTBR */
 #define TTBR1_BADDR_4852_OFFSET	(((UL(1) << (52 - PGDIR_SHIFT)) - \
 				 (UL(1) << (48 - PGDIR_SHIFT))) * 8)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index a96dc4386c7c..c8446f8c81f5 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -308,7 +308,7 @@ __create_page_tables:
 	adrp	x0, idmap_pg_dir
 	adrp	x3, __idmap_text_start		// __pa(__idmap_text_start)
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 	mrs_s	x6, SYS_ID_AA64MMFR2_EL1
 	and	x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
 	mov	x5, #52
@@ -794,7 +794,7 @@ ENTRY(__enable_mmu)
 ENDPROC(__enable_mmu)
 
 ENTRY(__cpu_secondary_check52bitva)
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 	ldr_l	x0, vabits_user
 	cmp	x0, #52
 	b.ne	2f
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 310e63b0dd22..6c1f29cbcb22 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -325,6 +325,16 @@ void __init arm64_memblock_init(void)
 
 	vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
 
+	/*
+	 * If we are running with a 52-bit kernel VA config on a system that
+	 * does not support it, we have to offset our vmemmap and physvirt_offset
+	 * s.t. we avoid the 52-bit portion of the direct linear map
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (VA_BITS_ACTUAL != 52)) {
+		vmemmap += (_PAGE_OFFSET(48) - _PAGE_OFFSET(52)) >> PAGE_SHIFT;
+		physvirt_offset = PHYS_OFFSET - _PAGE_OFFSET(48);
+	}
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8d289ff7584d..8b021c5c0884 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -438,10 +438,11 @@ ENTRY(__cpu_setup)
 			TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS
 	tcr_clear_errata_bits x10, x9, x5
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 	ldr_l		x9, vabits_user
 	sub		x9, xzr, x9
 	add		x9, x9, #64
+	tcr_set_t1sz	x10, x9
 #else
 	ldr_l		x9, idmap_t0sz
 #endif
-- 
2.20.1



* [PATCH V4 11/11] docs: arm64: Add layout and 52-bit info to memory document
  2019-07-29 16:21 [PATCH V4 00/11] 52-bit kernel + user VAs Steve Capper
                   ` (9 preceding siblings ...)
  2019-07-29 16:21 ` [PATCH V4 10/11] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
@ 2019-07-29 16:21 ` Steve Capper
  2019-08-06 15:27   ` Catalin Marinas
  10 siblings, 1 reply; 35+ messages in thread
From: Steve Capper @ 2019-07-29 16:21 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

As the kernel no longer prints out the memory layout on boot, this patch
adds this information back to the memory document.

Also, as the 52-bit support introduces some subtle changes to the arm64
memory layout, the rationale behind these changes is also added to the
memory document.

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

New in V4
---
 Documentation/arm64/memory.rst | 177 +++++++++++++++++++++++++++++----
 1 file changed, 160 insertions(+), 17 deletions(-)

diff --git a/Documentation/arm64/memory.rst b/Documentation/arm64/memory.rst
index 464b880fc4b7..79a5461e93c2 100644
--- a/Documentation/arm64/memory.rst
+++ b/Documentation/arm64/memory.rst
@@ -14,6 +14,10 @@ with the 4KB page configuration, allowing 39-bit (512GB) or 48-bit
 64KB pages, only 2 levels of translation tables, allowing 42-bit (4TB)
 virtual address, are used but the memory layout is the same.
 
+ARMv8.2 adds optional support for Large Virtual Address space. This is
+only available when running with a 64KB page size and expands the
+number of descriptors in the first level of translation.
+
 User addresses have bits 63:48 set to 0 while the kernel addresses have
 the same bits set to 1. TTBRx selection is given by bit 63 of the
 virtual address. The swapper_pg_dir contains only kernel (global)
@@ -22,40 +26,119 @@ The swapper_pg_dir address is written to TTBR1 and never written to
 TTBR0.
 
 
-AArch64 Linux memory layout with 4KB pages + 3 levels::
+AArch64 Linux memory layout with 4KB pages + 3 levels (39-bit)::
 
   Start			End			Size		Use
   -----------------------------------------------------------------------
   0000000000000000	0000007fffffffff	 512GB		user
-  ffffff8000000000	ffffffffffffffff	 512GB		kernel
-
-
-AArch64 Linux memory layout with 4KB pages + 4 levels::
+  ffffff8000000000	ffffffbfffffffff	 256GB		kernel logical memory map
+  ffffffc000000000	ffffffcfffffffff	  64GB		kasan shadow region
+  ffffffd000000000	ffffffd007ffffff	 128MB		bpf jit region
+  ffffffd008000000	ffffffd00fffffff	 128MB		modules
+  ffffffd010000000	fffffffebffeffff	~186GB		vmalloc
+  fffffffebfff0000	fffffffefe5f8fff	~998MB		[guard region]
+  fffffffefe5f9000	fffffffefe9fffff	4124KB		fixed mappings
+  fffffffefea00000	fffffffefebfffff	   2MB		[guard region]
+  fffffffefec00000	fffffffeffbfffff	  16MB		PCI I/O space
+  fffffffeffc00000	fffffffeffdfffff	   2MB		[guard region]
+  fffffffeffe00000	ffffffffffdfffff	   4GB		vmemmap
+  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
+
+
+AArch64 Linux memory layout with 4KB pages + 4 levels (48-bit)::
 
   Start			End			Size		Use
   -----------------------------------------------------------------------
   0000000000000000	0000ffffffffffff	 256TB		user
-  ffff000000000000	ffffffffffffffff	 256TB		kernel
-
-
-AArch64 Linux memory layout with 64KB pages + 2 levels::
+  ffff000000000000	ffff7fffffffffff	 128TB		kernel logical memory map
+  ffff800000000000	ffff9fffffffffff	  32TB		kasan shadow region
+  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
+  ffffa00008000000	ffffa0000fffffff	 128MB		modules
+  ffffa00010000000	fffffdffbffeffff	 ~93TB		vmalloc
+  fffffdffbfff0000	fffffdfffe5f8fff	~998MB		[guard region]
+  fffffdfffe5f9000	fffffdfffe9fffff	4124KB		fixed mappings
+  fffffdfffea00000	fffffdfffebfffff	   2MB		[guard region]
+  fffffdfffec00000	fffffdffffbfffff	  16MB		PCI I/O space
+  fffffdffffc00000	fffffdffffdfffff	   2MB		[guard region]
+  fffffdffffe00000	ffffffffffdfffff	   2TB		vmemmap
+  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
+
+
+AArch64 Linux memory layout with 64KB pages + 2 levels (42-bit)::
 
   Start			End			Size		Use
   -----------------------------------------------------------------------
   0000000000000000	000003ffffffffff	   4TB		user
-  fffffc0000000000	ffffffffffffffff	   4TB		kernel
-
-
-AArch64 Linux memory layout with 64KB pages + 3 levels::
+  fffffc0000000000	fffffdffffffffff	   2TB		kernel logical memory map
+  fffffe0000000000	fffffe7fffffffff	 512GB		kasan shadow region
+  fffffe8000000000	fffffe8007ffffff	 128MB		bpf jit region
+  fffffe8008000000	fffffe800fffffff	 128MB		modules
+  fffffe8010000000	ffffffff5ffeffff	  ~1TB		vmalloc
+  ffffffff5fff0000	ffffffff7e58ffff	~485MB		[guard region]
+  ffffffff7e590000	ffffffff7e9fffff	4544KB		fixed mappings
+  ffffffff7ea00000	ffffffff7ebfffff	   2MB		[guard region]
+  ffffffff7ec00000	ffffffff7fbfffff	  16MB		PCI I/O space
+  ffffffff7fc00000	ffffffff7fdfffff	   2MB		[guard region]
+  ffffffff7fe00000	ffffffffffdfffff	   2GB		vmemmap
+  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
+
+
+AArch64 Linux memory layout with 64KB pages + 3 levels (48-bit)::
 
   Start			End			Size		Use
   -----------------------------------------------------------------------
   0000000000000000	0000ffffffffffff	 256TB		user
-  ffff000000000000	ffffffffffffffff	 256TB		kernel
+  ffff000000000000	ffff7fffffffffff	 128TB		kernel logical memory map
+  ffff800000000000	ffff9fffffffffff	  32TB		kasan shadow region
+  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
+  ffffa00008000000	ffffa0000fffffff	 128MB		modules
+  ffffa00010000000	fffffbdffffeffff	 ~91TB		vmalloc
+  fffffbdfffff0000	ffffffdffe58ffff	  ~3TB		[guard region]
+  ffffffdffe590000	ffffffdffe9fffff	4544KB		fixed mappings
+  ffffffdffea00000	ffffffdffebfffff	   2MB		[guard region]
+  ffffffdffec00000	ffffffdfffbfffff	  16MB		PCI I/O space
+  ffffffdfffc00000	ffffffdfffdfffff	   2MB		[guard region]
+  ffffffdfffe00000	ffffffffffdfffff	 128GB		vmemmap
+  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
+
+
+AArch64 Linux memory layout with 64KB pages + 3 levels (52-bit w/o HW support)::
 
+  Start			End			Size		Use
+  -----------------------------------------------------------------------
+  0000000000000000	0000ffffffffffff	 256TB		user
+  ffff000000000000	ffff7fffffffffff	 128TB		kernel logical memory map
+  ffff800000000000	ffff9fffffffffff	  32TB		kasan shadow region
+  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
+  ffffa00008000000	ffffa0000fffffff	 128MB		modules
+  ffffa00010000000	fffff81ffffeffff	 ~88TB		vmalloc
+  fffff81fffff0000	fffffc1ffe58ffff	  ~3TB		[guard region]
+  fffffc1ffe590000	fffffc1ffe9fffff	4544KB		fixed mappings
+  fffffc1ffea00000	fffffc1ffebfffff	   2MB		[guard region]
+  fffffc1ffec00000	fffffc1fffbfffff	  16MB		PCI I/O space
+  fffffc1fffc00000	fffffc1fffdfffff	   2MB		[guard region]
+  fffffc1fffe00000	ffffffffffdfffff	3968GB		vmemmap
+  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
+
+
+AArch64 Linux memory layout with 64KB pages + 3 levels (52-bit with HW support)::
 
-For details of the virtual kernel memory layout please see the kernel
-booting log.
+  Start			End			Size		Use
+  -----------------------------------------------------------------------
+  0000000000000000	000fffffffffffff	   4PB		user
+  fff0000000000000	fff7ffffffffffff	   2PB		kernel logical memory map
+  fff8000000000000	fffd9fffffffffff	1440TB		[gap]
+  fffda00000000000	ffff9fffffffffff	 512TB		kasan shadow region
+  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
+  ffffa00008000000	ffffa0000fffffff	 128MB		modules
+  ffffa00010000000	fffff81ffffeffff	 ~88TB		vmalloc
+  fffff81fffff0000	fffffc1ffe58ffff	  ~3TB		[guard region]
+  fffffc1ffe590000	fffffc1ffe9fffff	4544KB		fixed mappings
+  fffffc1ffea00000	fffffc1ffebfffff	   2MB		[guard region]
+  fffffc1ffec00000	fffffc1fffbfffff	  16MB		PCI I/O space
+  fffffc1fffc00000	fffffc1fffdfffff	   2MB		[guard region]
+  fffffc1fffe00000	ffffffffffdfffff	3968GB		vmemmap
+  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
 
 
 Translation table lookup with 4KB pages::
@@ -83,7 +166,8 @@ Translation table lookup with 64KB pages::
    |                 |    |               |            [15:0]  in-page offset
    |                 |    |               +----------> [28:16] L3 index
    |                 |    +--------------------------> [41:29] L2 index
-   |                 +-------------------------------> [47:42] L1 index
+   |                 +-------------------------------> [47:42] L1 index (48-bit)
+   |                                                   [51:42] L1 index (52-bit)
    +-------------------------------------------------> [63] TTBR0/1
 
 
@@ -96,3 +180,62 @@ ARM64_HARDEN_EL2_VECTORS is selected for particular CPUs.
 
 When using KVM with the Virtualization Host Extensions, no additional
 mappings are created, since the host kernel runs directly in EL2.
+
+52-bit VA support in the kernel
+-------------------------------
+If the ARMv8.2-LVA optional feature is present, and we are running
+with a 64KB page size; then it is possible to use 52-bits of address
+space for both userspace and kernel addresses. However, any kernel
+binary that supports 52-bit must also be able to fall back to 48-bit
+at early boot time if the hardware feature is not present.
+
+This fallback mechanism necessitates the kernel .text to be in the
+higher addresses s.t. they are invariant to 48/52-bti VAs. Due to
+the kasan shadow being a fraction of the entire kernel VA space,
+the end of the kasan shadow must also be in the higher half of the
+kernel VA space for both 48/52-bit. (Switching from 48-bit to 52-bit,
+the end of the kasan shadow is invariant and dependent on ~0UL,
+whilst the start address will "grow" towards the lower addresses).
+
+In order to optimise phys_to_virt and virt_to_phys, the PAGE_OFFSET
+is kept constant at 0xFFF0000000000000 (corresponding to 52-bit);
+this obviates the need for an extra variable read. The physvirt
+offset and vmemmap offsets are computed at early boot to enable
+this logic.
+
+As a single binary will need to support both 48-bit and 52-bit VA
+spaces, the VMEMMAP must be sized large enough for 52-bit VAs and
+also must be sized large enough to accommodate a fixed PAGE_OFFSET.
+
+Most code in the kernel should not need to consider the VA_BITS; for
+code that does need to know the VA size, the variables are
+defined as follows:
+
+VA_BITS		constant	the *maximum* VA space size
+
+VA_BITS_MIN	constant	the *minimum* VA space size
+
+VA_BITS_ACTUAL	variable	the *actual* VA space size
+
+
+Maximum and minimum sizes can be useful to ensure that buffers are
+sized large enough or that addresses are positioned close enough for
+the "worst" case.
+
+52-bit userspace VAs
+--------------------
+To maintain compatibility with software that relies on the ARMv8.0
+VA space maximum size of 48-bits, the kernel will, by default,
+return virtual addresses to userspace from a 48-bit range.
+
+Software can "opt-in" to receiving VAs from a 52-bit space by
+specifying an mmap hint parameter that is larger than 48 bits.
+For example:
+    maybe_high_address = mmap(~0UL, size, prot, flags,...);
+
+It is also possible to build a debug kernel that returns addresses
+from a 52-bit space by enabling the following kernel config options:
+   CONFIG_EXPERT=y && CONFIG_ARM64_FORCE_52BIT=y
+
+Note that this option is only intended for debugging applications
+and should not be used in production.
-- 
2.20.1



* Re: [PATCH V4 01/11] arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START
  2019-07-29 16:21 ` [PATCH V4 01/11] arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START Steve Capper
@ 2019-08-05 11:07   ` Catalin Marinas
  0 siblings, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 11:07 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:07PM +0100, Steve Capper wrote:
> Currently there are assumptions about the alignment of VMEMMAP_START
> and PAGE_OFFSET that won't be valid after this series is applied.
> 
> These assumptions are in the form of bitwise operators being used
> instead of addition and subtraction when calculating addresses.
> 
> This patch replaces these bitwise operators with addition/subtraction.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH V4 02/11] arm64: mm: Flip kernel VA space
  2019-07-29 16:21 ` [PATCH V4 02/11] arm64: mm: Flip kernel VA space Steve Capper
@ 2019-08-05 11:29   ` Catalin Marinas
  2019-08-05 11:50     ` Steve Capper
  0 siblings, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 11:29 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:08PM +0100, Steve Capper wrote:
> diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
> index 82b3a7fdb4a6..6f0b9f8ddf55 100644
> --- a/arch/arm64/mm/dump.c
> +++ b/arch/arm64/mm/dump.c
> @@ -26,6 +26,8 @@
>  #include <asm/ptdump.h>
>  
>  static const struct addr_marker address_markers[] = {
> +	{ PAGE_OFFSET,			"Linear Mapping start" },
> +	{ VA_START,			"Linear Mapping end" },
>  #ifdef CONFIG_KASAN
>  	{ KASAN_SHADOW_START,		"Kasan shadow start" },
>  	{ KASAN_SHADOW_END,		"Kasan shadow end" },
> @@ -40,9 +42,8 @@ static const struct addr_marker address_markers[] = {
>  	{ PCI_IO_END,			"PCI I/O end" },
>  #ifdef CONFIG_SPARSEMEM_VMEMMAP
>  	{ VMEMMAP_START,		"vmemmap start" },
> -	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
> +	{ -1,				"vmemmap end" },

Why not keep the original vmemmap end here? We even leave a 2MB gap.

-- 
Catalin


* Re: [PATCH V4 02/11] arm64: mm: Flip kernel VA space
  2019-08-05 11:29   ` Catalin Marinas
@ 2019-08-05 11:50     ` Steve Capper
  0 siblings, 0 replies; 35+ messages in thread
From: Steve Capper @ 2019-08-05 11:50 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, nd, will, linux-arm-kernel

Hi Catalin,

On Mon, Aug 05, 2019 at 12:29:51PM +0100, Catalin Marinas wrote:
> On Mon, Jul 29, 2019 at 05:21:08PM +0100, Steve Capper wrote:
> > diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
> > index 82b3a7fdb4a6..6f0b9f8ddf55 100644
> > --- a/arch/arm64/mm/dump.c
> > +++ b/arch/arm64/mm/dump.c
> > @@ -26,6 +26,8 @@
> >  #include <asm/ptdump.h>
> >  
> >  static const struct addr_marker address_markers[] = {
> > +	{ PAGE_OFFSET,			"Linear Mapping start" },
> > +	{ VA_START,			"Linear Mapping end" },
> >  #ifdef CONFIG_KASAN
> >  	{ KASAN_SHADOW_START,		"Kasan shadow start" },
> >  	{ KASAN_SHADOW_END,		"Kasan shadow end" },
> > @@ -40,9 +42,8 @@ static const struct addr_marker address_markers[] = {
> >  	{ PCI_IO_END,			"PCI I/O end" },
> >  #ifdef CONFIG_SPARSEMEM_VMEMMAP
> >  	{ VMEMMAP_START,		"vmemmap start" },
> > -	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
> > +	{ -1,				"vmemmap end" },
> 
> Why not keep the original vmemmap end here? We even leave a 2MB gap.

Because I overlooked this when I added the 2MB gap :-), apologies, we
should keep the original vmemmap end.

Cheers,
-- 
Steve


* Re: [PATCH V4 03/11] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  2019-07-29 16:21 ` [PATCH V4 03/11] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
@ 2019-08-05 16:37   ` Catalin Marinas
  2019-08-06  9:05     ` Steve Capper
  0 siblings, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 16:37 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:09PM +0100, Steve Capper wrote:
> diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
> new file mode 100644
> index 000000000000..2b7a021db363
> --- /dev/null
> +++ b/Documentation/arm64/kasan-offsets.sh
> @@ -0,0 +1,27 @@
> +#!/bin/sh
> +
> +# Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
> +# start address at the mid-point of the kernel VA space
> +
> +print_kasan_offset () {
> +	printf "%02d\t" $1
> +	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
> +			+ (1 << ($1 - 32 - $2)) \
> +			- (1 << (64 - 32 - $2)) ))
> +}
> +
> +echo KASAN_SHADOW_SCALE_SHIFT = 3
> +printf "VABITS\tKASAN_SHADOW_OFFSET\n"
> +print_kasan_offset 48 3
> +print_kasan_offset 47 3
> +print_kasan_offset 42 3
> +print_kasan_offset 39 3
> +print_kasan_offset 36 3
> +echo
> +echo KASAN_SHADOW_SCALE_SHIFT = 4
> +printf "VABITS\tKASAN_SHADOW_OFFSET\n"
> +print_kasan_offset 48 4
> +print_kasan_offset 47 4
> +print_kasan_offset 42 4
> +print_kasan_offset 39 4
> +print_kasan_offset 36 4

Even better if this generated the Kconfig entry directly ;). Anyway,
it's fine as it is.


> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 05edfe9b02e4..9e68e3d12956 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -154,8 +154,6 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
>  /* The early shadow maps everything to a single page of zeroes */
>  asmlinkage void __init kasan_early_init(void)
>  {
> -	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
> -		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));

Can we not still keep a BUILD_BUG_ON() for KASAN_SHADOW_OFFSET around,
even if it does the same calculation as the script?
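
I.e. retain something equivalent to the check being deleted:

	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));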

-- 
Catalin


* Re: [PATCH V4 04/11] arm64: dump: De-constify VA_START and KASAN_SHADOW_START
  2019-07-29 16:21 ` [PATCH V4 04/11] arm64: dump: De-constify VA_START and KASAN_SHADOW_START Steve Capper
@ 2019-08-05 16:38   ` Catalin Marinas
  0 siblings, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 16:38 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:10PM +0100, Steve Capper wrote:
> The kernel page table dumper assumes that the placement of VA regions is
> constant and determined at compile time. As we are about to introduce
> variable VA logic, we need to be able to determine certain regions at
> boot time.
> 
> Specifically the VA_START and KASAN_SHADOW_START will depend on whether
> or not the system is booted with 52-bit kernel VAs.
> 
> This patch adds logic to the kernel page table dumper s.t. these regions
> can be computed at boot time.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH V4 07/11] arm64: mm: Logic to make offset_ttbr1 conditional
  2019-07-29 16:21 ` [PATCH V4 07/11] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
@ 2019-08-05 17:06   ` Catalin Marinas
  0 siblings, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 17:06 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:13PM +0100, Steve Capper wrote:
> When running with a 52-bit userspace VA and a 48-bit kernel VA we offset
> ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to
> be used for both userspace and kernel.
> 
> Moving on to a 52-bit kernel VA we no longer require this offset to
> ttbr1_el1 should we be running on a system with HW support for 52-bit
> VAs.
> 
> This patch introduces conditional logic to offset_ttbr1 to query
> SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW
> support for 52-bit VAs then the ttbr1 offset is skipped.
> 
> We choose to read a system register rather than vabits_actual because
> offset_ttbr1 can be called in places where the kernel data is not
> actually mapped.
> 
> Calls to offset_ttbr1 appear to be made from rarely called code paths so
> this extra logic is not expected to adversely affect performance.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH V4 08/11] arm64: mm: Separate out vmemmap
  2019-07-29 16:21 ` [PATCH V4 08/11] arm64: mm: Separate out vmemmap Steve Capper
@ 2019-08-05 17:07   ` Catalin Marinas
  0 siblings, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 17:07 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:14PM +0100, Steve Capper wrote:
> vmemmap is a preprocessor definition that depends on a variable,
> memstart_addr. In a later patch we will need to expand the size of
> the VMEMMAP region and optionally modify vmemmap depending upon
> whether or not hardware support is available for 52-bit virtual
> addresses.
> 
> This patch changes vmemmap to be a variable. As the old definition
> depended on a variable load, this should not affect performance
> noticeably.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH V4 09/11] arm64: mm: Modify calculation of VMEMMAP_SIZE
  2019-07-29 16:21 ` [PATCH V4 09/11] arm64: mm: Modify calculation of VMEMMAP_SIZE Steve Capper
@ 2019-08-05 17:10   ` Catalin Marinas
  0 siblings, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 17:10 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:15PM +0100, Steve Capper wrote:
> In a later patch we will need to have a slightly larger VMEMMAP region
> to accommodate boot time selection between 48/52-bit kernel VAs.
> 
> This patch modifies the formula for computing VMEMMAP_SIZE to depend
> explicitly on the PAGE_OFFSET and start of kernel addressable memory.
> (This allows for a slightly larger direct linear map in future).
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH V4 05/11] arm64: mm: Introduce VA_BITS_MIN
  2019-07-29 16:21 ` [PATCH V4 05/11] arm64: mm: Introduce VA_BITS_MIN Steve Capper
@ 2019-08-05 17:17   ` Catalin Marinas
  2019-08-05 17:20   ` Catalin Marinas
  1 sibling, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 17:17 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:11PM +0100, Steve Capper wrote:
> In order to support 52-bit kernel addresses detectable at boot time, the
> kernel needs to know the most conservative VA_BITS possible should it
> need to fall back to this quantity due to lack of hardware support.
> 
> A new compile time constant VA_BITS_MIN is introduced in this patch and
> it is employed in the KASAN end address, KASLR, and EFI stub.
> 
> For Arm, if 52-bit VA support is unavailable the fallback is to 48-bits.
> 
> In other words: VA_BITS_MIN = min (48, VA_BITS)
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH V4 05/11] arm64: mm: Introduce VA_BITS_MIN
  2019-07-29 16:21 ` [PATCH V4 05/11] arm64: mm: Introduce VA_BITS_MIN Steve Capper
  2019-08-05 17:17   ` Catalin Marinas
@ 2019-08-05 17:20   ` Catalin Marinas
  2019-08-06  9:11     ` Steve Capper
  1 sibling, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 17:20 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:11PM +0100, Steve Capper wrote:
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index f7f23e47c28f..0206804b0868 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -797,6 +797,10 @@ config ARM64_VA_BITS
>  	default 47 if ARM64_VA_BITS_47
>  	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
>  
> +config ARM64_VA_BITS_MIN
> +	int
> +	default ARM64_VA_BITS
> +
>  choice
>  	prompt "Physical address space size"
>  	default ARM64_PA_BITS_48
[...]
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 8b0f1599b2d1..a8a91a573bff 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -52,6 +52,9 @@
>  #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
>  #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
>  #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
> +#define VA_BITS_MIN		(CONFIG_ARM64_VA_BITS_MIN)

Thinking about it, do we actually need a Kconfig option for VA_BITS_MIN?
Can we not just generate it here based on VA_BITS as min(48, VA_BITS)?
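
E.g. something like this (untested) in memory.h:

	#if VA_BITS > 48
	#define VA_BITS_MIN	(48)
	#else
	#define VA_BITS_MIN	(VA_BITS)
	#endif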

-- 
Catalin


* Re: [PATCH V4 06/11] arm64: mm: Introduce VA_BITS_ACTUAL
  2019-07-29 16:21 ` [PATCH V4 06/11] arm64: mm: Introduce VA_BITS_ACTUAL Steve Capper
@ 2019-08-05 17:26   ` Catalin Marinas
  2019-08-06 11:32     ` Steve Capper
  0 siblings, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 17:26 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:12PM +0100, Steve Capper wrote:
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index a8a91a573bff..93341f4fe840 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -37,8 +37,6 @@
>   * VA_START - the first kernel virtual address.
>   */
>  #define VA_BITS			(CONFIG_ARM64_VA_BITS)
> -#define VA_START		(UL(0xffffffffffffffff) - \
> -	(UL(1) << (VA_BITS - 1)) + 1)
>  #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
>  	(UL(1) << VA_BITS) + 1)
>  #define KIMAGE_VADDR		(MODULES_END)
> @@ -166,10 +164,14 @@
>  #endif
>  
>  #ifndef __ASSEMBLY__
> +extern u64			vabits_actual;
> +#define VA_BITS_ACTUAL		({vabits_actual;})

Why not use the variable vabits_actual directly instead of defining a
macro?

> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index ac58c69993ec..6dc7349868d9 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -321,6 +321,11 @@ __create_page_tables:
>  	dmb	sy
>  	dc	ivac, x6		// Invalidate potentially stale cache line
>  
> +	adr_l	x6, vabits_actual
> +	str	x5, [x6]
> +	dmb	sy
> +	dc	ivac, x6		// Invalidate potentially stale cache line

Can we not replace vabits_user with vabits_actual and have a single
write? Maybe not in this patch but once the series is applied, they are
practically the same. It could be an additional patch (or define a
vabits_user macro as vabits_actual).
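
I.e., as a stopgap, something as simple as:

	#define vabits_user	vabits_actual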

-- 
Catalin


* Re: [PATCH V4 10/11] arm64: mm: Introduce 52-bit Kernel VAs
  2019-07-29 16:21 ` [PATCH V4 10/11] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
@ 2019-08-05 17:27   ` Catalin Marinas
  2019-08-06 14:55   ` Catalin Marinas
  1 sibling, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2019-08-05 17:27 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:16PM +0100, Steve Capper wrote:
> Most of the machinery is now in place to enable 52-bit kernel VAs that
> are detectable at boot time.
> 
> This patch adds a Kconfig option for 52-bit user and kernel addresses
> and plumbs in the requisite CONFIG_ macros as well as sets TCR.T1SZ,
> physvirt_offset and vmemmap at early boot.
> 
> To simplify things this patch also removes the 52-bit user/48-bit kernel
> kconfig option.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH V4 03/11] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  2019-08-05 16:37   ` Catalin Marinas
@ 2019-08-06  9:05     ` Steve Capper
  0 siblings, 0 replies; 35+ messages in thread
From: Steve Capper @ 2019-08-06  9:05 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, nd, will, linux-arm-kernel

On Mon, Aug 05, 2019 at 05:37:21PM +0100, Catalin Marinas wrote:
> On Mon, Jul 29, 2019 at 05:21:09PM +0100, Steve Capper wrote:
> > diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
> > new file mode 100644
> > index 000000000000..2b7a021db363
> > --- /dev/null
> > +++ b/Documentation/arm64/kasan-offsets.sh
> > @@ -0,0 +1,27 @@
> > +#!/bin/sh
> > +
> > +# Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
> > +# start address at the mid-point of the kernel VA space
> > +
> > +print_kasan_offset () {
> > +	printf "%02d\t" $1
> > +	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
> > +			+ (1 << ($1 - 32 - $2)) \
> > +			- (1 << (64 - 32 - $2)) ))
> > +}
> > +
> > +echo KASAN_SHADOW_SCALE_SHIFT = 3
> > +printf "VABITS\tKASAN_SHADOW_OFFSET\n"
> > +print_kasan_offset 48 3
> > +print_kasan_offset 47 3
> > +print_kasan_offset 42 3
> > +print_kasan_offset 39 3
> > +print_kasan_offset 36 3
> > +echo
> > +echo KASAN_SHADOW_SCALE_SHIFT = 4
> > +printf "VABITS\tKASAN_SHADOW_OFFSET\n"
> > +print_kasan_offset 48 4
> > +print_kasan_offset 47 4
> > +print_kasan_offset 42 4
> > +print_kasan_offset 39 4
> > +print_kasan_offset 36 4
> 
> Even better if this generated the Kconfig entry directly ;). Anyway,
> it's fine as it is.

:-)

> 
> 
> > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > index 05edfe9b02e4..9e68e3d12956 100644
> > --- a/arch/arm64/mm/kasan_init.c
> > +++ b/arch/arm64/mm/kasan_init.c
> > @@ -154,8 +154,6 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
> >  /* The early shadow maps everything to a single page of zeroes */
> >  asmlinkage void __init kasan_early_init(void)
> >  {
> > -	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
> > -		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
> 
> Can we not still keep a BUILD_BUG_ON() for KASAN_SHADOW_OFFSET around,
> even if it does the same calculation as the script?
>

Yeah sure, I'll retain this. The only reason I removed it was because I
thought that it was redundant.

Cheers,
-- 
Steve


* Re: [PATCH V4 05/11] arm64: mm: Introduce VA_BITS_MIN
  2019-08-05 17:20   ` Catalin Marinas
@ 2019-08-06  9:11     ` Steve Capper
  0 siblings, 0 replies; 35+ messages in thread
From: Steve Capper @ 2019-08-06  9:11 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, nd, will, linux-arm-kernel

On Mon, Aug 05, 2019 at 06:20:01PM +0100, Catalin Marinas wrote:
> On Mon, Jul 29, 2019 at 05:21:11PM +0100, Steve Capper wrote:
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index f7f23e47c28f..0206804b0868 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -797,6 +797,10 @@ config ARM64_VA_BITS
> >  	default 47 if ARM64_VA_BITS_47
> >  	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
> >  
> > +config ARM64_VA_BITS_MIN
> > +	int
> > +	default ARM64_VA_BITS
> > +
> >  choice
> >  	prompt "Physical address space size"
> >  	default ARM64_PA_BITS_48
> [...]
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 8b0f1599b2d1..a8a91a573bff 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -52,6 +52,9 @@
> >  #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
> >  #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
> >  #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
> > +#define VA_BITS_MIN		(CONFIG_ARM64_VA_BITS_MIN)
> 
> Thinking about it, do we actually need a Kconfig option for VA_BITS_MIN?
> Can we not just generated it here based on VA_BITS as min(48, VA_BITS)?
>

Thanks Catalin,
I'll get rid of the Kconfig option.

Cheers,
-- 
Steve


* Re: [PATCH V4 06/11] arm64: mm: Introduce VA_BITS_ACTUAL
  2019-08-05 17:26   ` Catalin Marinas
@ 2019-08-06 11:32     ` Steve Capper
  2019-08-06 14:48       ` Catalin Marinas
  0 siblings, 1 reply; 35+ messages in thread
From: Steve Capper @ 2019-08-06 11:32 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, nd, will, linux-arm-kernel

Hi Catalin,

On Mon, Aug 05, 2019 at 06:26:43PM +0100, Catalin Marinas wrote:
> On Mon, Jul 29, 2019 at 05:21:12PM +0100, Steve Capper wrote:
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index a8a91a573bff..93341f4fe840 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -37,8 +37,6 @@
> >   * VA_START - the first kernel virtual address.
> >   */
> >  #define VA_BITS			(CONFIG_ARM64_VA_BITS)
> > -#define VA_START		(UL(0xffffffffffffffff) - \
> > -	(UL(1) << (VA_BITS - 1)) + 1)
> >  #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
> >  	(UL(1) << VA_BITS) + 1)
> >  #define KIMAGE_VADDR		(MODULES_END)
> > @@ -166,10 +164,14 @@
> >  #endif
> >  
> >  #ifndef __ASSEMBLY__
> > +extern u64			vabits_actual;
> > +#define VA_BITS_ACTUAL		({vabits_actual;})
> 
> Why not use the variable vabits_actual directly instead of defining a
> macro?
> 

I thought that it would look better to have an uppercase name for the
actual VA bits to match the existing code style for VA_BITS.

I can just rename vabits_actual => VA_BITS_ACTUAL and get rid of the
macro?

> > diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> > index ac58c69993ec..6dc7349868d9 100644
> > --- a/arch/arm64/kernel/head.S
> > +++ b/arch/arm64/kernel/head.S
> > @@ -321,6 +321,11 @@ __create_page_tables:
> >  	dmb	sy
> >  	dc	ivac, x6		// Invalidate potentially stale cache line
> >  
> > +	adr_l	x6, vabits_actual
> > +	str	x5, [x6]
> > +	dmb	sy
> > +	dc	ivac, x6		// Invalidate potentially stale cache line
> 
> Can we not replace vabits_user with vabits_actual and have a single
> write? Maybe not in this patch but once the series is applied, they are
> practically the same. It could be an additional patch (or define a
> vabits_user macro as vabits_actual).
> 

Thanks, I think it may be better to consolidate these in an extra patch (just
before the documentation patch). I'll add this to the series.

Cheers,
-- 
Steve


* Re: [PATCH V4 06/11] arm64: mm: Introduce VA_BITS_ACTUAL
  2019-08-06 11:32     ` Steve Capper
@ 2019-08-06 14:48       ` Catalin Marinas
  2019-08-07 13:27         ` Steve Capper
  0 siblings, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2019-08-06 14:48 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, nd, will, linux-arm-kernel

On Tue, Aug 06, 2019 at 11:32:04AM +0000, Steve Capper wrote:
> On Mon, Aug 05, 2019 at 06:26:43PM +0100, Catalin Marinas wrote:
> > On Mon, Jul 29, 2019 at 05:21:12PM +0100, Steve Capper wrote:
> > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > index a8a91a573bff..93341f4fe840 100644
> > > --- a/arch/arm64/include/asm/memory.h
> > > +++ b/arch/arm64/include/asm/memory.h
> > > @@ -37,8 +37,6 @@
> > >   * VA_START - the first kernel virtual address.
> > >   */
> > >  #define VA_BITS			(CONFIG_ARM64_VA_BITS)
> > > -#define VA_START		(UL(0xffffffffffffffff) - \
> > > -	(UL(1) << (VA_BITS - 1)) + 1)
> > >  #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
> > >  	(UL(1) << VA_BITS) + 1)
> > >  #define KIMAGE_VADDR		(MODULES_END)
> > > @@ -166,10 +164,14 @@
> > >  #endif
> > >  
> > >  #ifndef __ASSEMBLY__
> > > +extern u64			vabits_actual;
> > > +#define VA_BITS_ACTUAL		({vabits_actual;})
> > 
> > Why not use the variable vabits_actual directly instead of defining a
> > macro?
> 
> I thought that it would look better to have an uppercase name for the
> actual VA bits to match the existing code style for VA_BITS.
> 
> I can just rename vabits_actual => VA_BITS_ACTUAL and get rid of the
> macro?

By tradition we use uppercase for macros and lowercase for variables. So
I'd definitely keep the variable lowercase.

If you prefer to keep the macro as well, fine by me, I don't think we
should bikeshed here.

-- 
Catalin


* Re: [PATCH V4 10/11] arm64: mm: Introduce 52-bit Kernel VAs
  2019-07-29 16:21 ` [PATCH V4 10/11] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
  2019-08-05 17:27   ` Catalin Marinas
@ 2019-08-06 14:55   ` Catalin Marinas
  2019-08-06 14:58     ` Catalin Marinas
  1 sibling, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2019-08-06 14:55 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:16PM +0100, Steve Capper wrote:
> @@ -759,13 +759,14 @@ config ARM64_VA_BITS_47
>  config ARM64_VA_BITS_48
>  	bool "48-bit"
>  
> -config ARM64_USER_VA_BITS_52
> -	bool "52-bit (user)"
> +config ARM64_VA_BITS_52
> +	bool "52-bit"

I think we should change defconfig as well to make this the default. We
tend to make defconfig cover all the architecture features we support
and people can disable them if they get in the way (performance).

-- 
Catalin


* Re: [PATCH V4 10/11] arm64: mm: Introduce 52-bit Kernel VAs
  2019-08-06 14:55   ` Catalin Marinas
@ 2019-08-06 14:58     ` Catalin Marinas
  0 siblings, 0 replies; 35+ messages in thread
From: Catalin Marinas @ 2019-08-06 14:58 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Tue, Aug 06, 2019 at 03:55:45PM +0100, Catalin Marinas wrote:
> On Mon, Jul 29, 2019 at 05:21:16PM +0100, Steve Capper wrote:
> > @@ -759,13 +759,14 @@ config ARM64_VA_BITS_47
> >  config ARM64_VA_BITS_48
> >  	bool "48-bit"
> >  
> > -config ARM64_USER_VA_BITS_52
> > -	bool "52-bit (user)"
> > +config ARM64_VA_BITS_52
> > +	bool "52-bit"
> 
> I think we should change defconfig as well to make this the default. We
> tend to make defconfig cover all the architecture features we support
> and people can disable them if they get in the way (performance).

Ignore this. It only works with 64K pages and our defconfig is 4K.

-- 
Catalin


* Re: [PATCH V4 11/11] docs: arm64: Add layout and 52-bit info to memory document
  2019-07-29 16:21 ` [PATCH V4 11/11] docs: arm64: Add layout and 52-bit info to memory document Steve Capper
@ 2019-08-06 15:27   ` Catalin Marinas
  2019-08-07 13:29     ` Steve Capper
  0 siblings, 1 reply; 35+ messages in thread
From: Catalin Marinas @ 2019-08-06 15:27 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Mon, Jul 29, 2019 at 05:21:17PM +0100, Steve Capper wrote:
> +AArch64 Linux memory layout with 4KB pages + 4 levels (48-bit)::
>  
>    Start			End			Size		Use
>    -----------------------------------------------------------------------
>    0000000000000000	0000ffffffffffff	 256TB		user
> -  ffff000000000000	ffffffffffffffff	 256TB		kernel
> -
> -
> -AArch64 Linux memory layout with 64KB pages + 2 levels::
> +  ffff000000000000	ffff7fffffffffff	 128TB		kernel logical memory map
> +  ffff800000000000	ffff9fffffffffff	  32TB		kasan shadow region
> +  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
> +  ffffa00008000000	ffffa0000fffffff	 128MB		modules
> +  ffffa00010000000	fffffdffbffeffff	 ~93TB		vmalloc
> +  fffffdffbfff0000	fffffdfffe5f8fff	~998MB		[guard region]
> +  fffffdfffe5f9000	fffffdfffe9fffff	4124KB		fixed mappings
> +  fffffdfffea00000	fffffdfffebfffff	   2MB		[guard region]
> +  fffffdfffec00000	fffffdffffbfffff	  16MB		PCI I/O space
> +  fffffdffffc00000	fffffdffffdfffff	   2MB		[guard region]
> +  fffffdffffe00000	ffffffffffdfffff	   2TB		vmemmap
> +  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
[...]
> +AArch64 Linux memory layout with 64KB pages + 3 levels (52-bit with HW support)::
>  
> -For details of the virtual kernel memory layout please see the kernel
> -booting log.
> +  Start			End			Size		Use
> +  -----------------------------------------------------------------------
> +  0000000000000000	000fffffffffffff	   4PB		user
> +  fff0000000000000	fff7ffffffffffff	   2PB		kernel logical memory map
> +  fff8000000000000	fffd9fffffffffff	1440TB		[gap]
> +  fffda00000000000	ffff9fffffffffff	 512TB		kasan shadow region
> +  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
> +  ffffa00008000000	ffffa0000fffffff	 128MB		modules
> +  ffffa00010000000	fffff81ffffeffff	 ~88TB		vmalloc
> +  fffff81fffff0000	fffffc1ffe58ffff	  ~3TB		[guard region]
> +  fffffc1ffe590000	fffffc1ffe9fffff	4544KB		fixed mappings
> +  fffffc1ffea00000	fffffc1ffebfffff	   2MB		[guard region]
> +  fffffc1ffec00000	fffffc1fffbfffff	  16MB		PCI I/O space
> +  fffffc1fffc00000	fffffc1fffdfffff	   2MB		[guard region]
> +  fffffc1fffe00000	ffffffffffdfffff	3968GB		vmemmap
> +  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]

Since we risk getting these out of sync, I'd rather only maintain two
entries: defconfig (4K pages, 48-bit VA) and the largest (64K pages,
52-bit with HW support).


> +52-bit VA support in the kernel
> +-------------------------------
> +If the ARMv8.2-LVA optional feature is present, and we are running
> +with a 64KB page size; then it is possible to use 52-bits of address
> +space for both userspace and kernel addresses. However, any kernel
> +binary that supports 52-bit must also be able to fall back to 48-bit
> +at early boot time if the hardware feature is not present.
> +
> +This fallback mechanism necessitates the kernel .text to be in the
> +higher addresses s.t. they are invariant to 48/52-bti VAs. Due to

The 's.t.' abbreviation always confused me. Could you please change it
to "so that" in the documentation? (I'm not too bothered about the
commit logs).

Also fix s/bti/bit/.

Otherwise:

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* Re: [PATCH V4 06/11] arm64: mm: Introduce VA_BITS_ACTUAL
  2019-08-06 14:48       ` Catalin Marinas
@ 2019-08-07 13:27         ` Steve Capper
  0 siblings, 0 replies; 35+ messages in thread
From: Steve Capper @ 2019-08-07 13:27 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, nd, will, linux-arm-kernel

On Tue, Aug 06, 2019 at 03:48:33PM +0100, Catalin Marinas wrote:
> On Tue, Aug 06, 2019 at 11:32:04AM +0000, Steve Capper wrote:
> > On Mon, Aug 05, 2019 at 06:26:43PM +0100, Catalin Marinas wrote:
> > > On Mon, Jul 29, 2019 at 05:21:12PM +0100, Steve Capper wrote:
> > > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > > index a8a91a573bff..93341f4fe840 100644
> > > > --- a/arch/arm64/include/asm/memory.h
> > > > +++ b/arch/arm64/include/asm/memory.h
> > > > @@ -37,8 +37,6 @@
> > > >   * VA_START - the first kernel virtual address.
> > > >   */
> > > >  #define VA_BITS			(CONFIG_ARM64_VA_BITS)
> > > > -#define VA_START		(UL(0xffffffffffffffff) - \
> > > > -	(UL(1) << (VA_BITS - 1)) + 1)
> > > >  #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
> > > >  	(UL(1) << VA_BITS) + 1)
> > > >  #define KIMAGE_VADDR		(MODULES_END)
> > > > @@ -166,10 +164,14 @@
> > > >  #endif
> > > >  
> > > >  #ifndef __ASSEMBLY__
> > > > +extern u64			vabits_actual;
> > > > +#define VA_BITS_ACTUAL		({vabits_actual;})
> > > 
> > > Why not use the variable vabits_actual directly instead of defining a
> > > macro?
> > 
> > I thought that it would look better to have an uppercase name for the
> > actual VA bits to match the existing code style for VA_BITS.
> > 
> > I can just rename vabits_actual => VA_BITS_ACTUAL and get rid of the
> > macro?
> 
> By tradition we use uppercase for macros and lowercase for variables. So
> I'd definitely keep the variable lowercase.
> 
> If you prefer to keep the macro as well, fine by me, I don't think we
> should bikeshed here.

Having thought about it, I prefer the lower-case recommendation as it's
a variable, so I have made this change. :-)
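
FWIW, the shape it now takes is roughly the following -- an
illustrative C sketch only, since the real detection happens in
assembly in head.S before the MMU is up, and detect_va_bits() is a
made-up helper name:

  /* Default to the 48-bit minimum; grown at early boot if the
   * hardware implements ARMv8.2-LVA. */
  u64 vabits_actual = VA_BITS_MIN;

  static void __init detect_va_bits(void)
  {
          u64 mmfr2 = read_sysreg(id_aa64mmfr2_el1);

          /* ID_AA64MMFR2_EL1.VARange (bits [19:16]) == 1 means the
           * CPU supports 52-bit VAs with 64KB pages. */
          if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) &&
              ((mmfr2 >> 16) & 0xf) == 1)
                  vabits_actual = 52;
  }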

Cheers,
-- 
Steve


* Re: [PATCH V4 11/11] docs: arm64: Add layout and 52-bit info to memory document
  2019-08-06 15:27   ` Catalin Marinas
@ 2019-08-07 13:29     ` Steve Capper
  2019-08-07 14:55       ` Will Deacon
  0 siblings, 1 reply; 35+ messages in thread
From: Steve Capper @ 2019-08-07 13:29 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, nd, will, linux-arm-kernel

On Tue, Aug 06, 2019 at 04:27:34PM +0100, Catalin Marinas wrote:
> On Mon, Jul 29, 2019 at 05:21:17PM +0100, Steve Capper wrote:
> > +AArch64 Linux memory layout with 4KB pages + 4 levels (48-bit)::
> >  
> >    Start			End			Size		Use
> >    -----------------------------------------------------------------------
> >    0000000000000000	0000ffffffffffff	 256TB		user
> > -  ffff000000000000	ffffffffffffffff	 256TB		kernel
> > -
> > -
> > -AArch64 Linux memory layout with 64KB pages + 2 levels::
> > +  ffff000000000000	ffff7fffffffffff	 128TB		kernel logical memory map
> > +  ffff800000000000	ffff9fffffffffff	  32TB		kasan shadow region
> > +  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
> > +  ffffa00008000000	ffffa0000fffffff	 128MB		modules
> > +  ffffa00010000000	fffffdffbffeffff	 ~93TB		vmalloc
> > +  fffffdffbfff0000	fffffdfffe5f8fff	~998MB		[guard region]
> > +  fffffdfffe5f9000	fffffdfffe9fffff	4124KB		fixed mappings
> > +  fffffdfffea00000	fffffdfffebfffff	   2MB		[guard region]
> > +  fffffdfffec00000	fffffdffffbfffff	  16MB		PCI I/O space
> > +  fffffdffffc00000	fffffdffffdfffff	   2MB		[guard region]
> > +  fffffdffffe00000	ffffffffffdfffff	   2TB		vmemmap
> > +  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
> [...]
> > +AArch64 Linux memory layout with 64KB pages + 3 levels (52-bit with HW support)::
> >  
> > -For details of the virtual kernel memory layout please see the kernel
> > -booting log.
> > +  Start			End			Size		Use
> > +  -----------------------------------------------------------------------
> > +  0000000000000000	000fffffffffffff	   4PB		user
> > +  fff0000000000000	fff7ffffffffffff	   2PB		kernel logical memory map
> > +  fff8000000000000	fffd9fffffffffff	1440TB		[gap]
> > +  fffda00000000000	ffff9fffffffffff	 512TB		kasan shadow region
> > +  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
> > +  ffffa00008000000	ffffa0000fffffff	 128MB		modules
> > +  ffffa00010000000	fffff81ffffeffff	 ~88TB		vmalloc
> > +  fffff81fffff0000	fffffc1ffe58ffff	  ~3TB		[guard region]
> > +  fffffc1ffe590000	fffffc1ffe9fffff	4544KB		fixed mappings
> > +  fffffc1ffea00000	fffffc1ffebfffff	   2MB		[guard region]
> > +  fffffc1ffec00000	fffffc1fffbfffff	  16MB		PCI I/O space
> > +  fffffc1fffc00000	fffffc1fffdfffff	   2MB		[guard region]
> > +  fffffc1fffe00000	ffffffffffdfffff	3968GB		vmemmap
> > +  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
> 
> Since we risk getting these out of sync, I'd rather only maintain two
> entries: defconfig (4K pages, 48-bit VA) and the largest (64K pages,
> 52-bit with HW support).
> 

Sure thing, I've cut down the number of tables to two.

> 
> > +52-bit VA support in the kernel
> > +-------------------------------
> > +If the ARMv8.2-LVA optional feature is present, and we are running
> > +with a 64KB page size, then it is possible to use 52 bits of address
> > +space for both userspace and kernel addresses. However, any kernel
> > +binary that supports 52-bit must also be able to fall back to 48-bit
> > +at early boot time if the hardware feature is not present.
> > +
> > +This fallback mechanism necessitates the kernel .text to be in the
> > +higher addresses s.t. they are invariant to 48/52-bti VAs. Due to
> 
> The 's.t.' abbreviation always confused me. Could you please change it
> to "so that" in the documentation? (I'm not too bothered about the
> commit logs).

Thanks, I've expanded the abbreviation.
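
To expand on that paragraph a little: the reason the .text can stay put
is that any address whose bits above the VA size are all ones is a
valid TTBR1 (kernel) address for both a 48-bit and a 52-bit
configuration, so the kernel image needs no relocation when we fall
back. A quick check (userspace, illustrative only):

  #include <stdio.h>
  #include <stdint.h>

  /* True if addr lies in the TTBR1 (kernel) range of a va_bits-wide
   * VA space, i.e. bits [63:va_bits] are all ones. */
  static int kernel_va(uint64_t addr, int va_bits)
  {
          return (addr >> va_bits) == (UINT64_MAX >> va_bits);
  }

  int main(void)
  {
          /* Start of the vmalloc region from the tables above. */
          uint64_t addr = 0xffffa00010000000ULL;

          printf("48-bit: %d, 52-bit: %d\n",
                 kernel_va(addr, 48), kernel_va(addr, 52));
          return 0;
  }

Both print 1; by contrast an address in the 52-bit linear map, e.g.
fff0000000000000, is only reachable with 52 VA bits.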

> 
> Also fix s/bti/bit/.

And fixed the typo.

> 
> Otherwise:
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
>

Many thanks for going through this series, Catalin. Would you like me
to post a V5?

Cheers,
-- 
Steve


* Re: [PATCH V4 11/11] docs: arm64: Add layout and 52-bit info to memory document
  2019-08-07 13:29     ` Steve Capper
@ 2019-08-07 14:55       ` Will Deacon
  2019-08-07 15:57         ` Steve Capper
  0 siblings, 1 reply; 35+ messages in thread
From: Will Deacon @ 2019-08-07 14:55 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, Catalin Marinas, bhsharma, maz, nd,
	linux-arm-kernel

On Wed, Aug 07, 2019 at 01:29:38PM +0000, Steve Capper wrote:
> Many thanks for going through this series Catalin. Would you like me to post
> a V5 of the series?

/me does best Catalin impression...

"Yes, please."

Uncanny, eh?

Will


* Re: [PATCH V4 11/11] docs: arm64: Add layout and 52-bit info to memory document
  2019-08-07 14:55       ` Will Deacon
@ 2019-08-07 15:57         ` Steve Capper
  0 siblings, 0 replies; 35+ messages in thread
From: Steve Capper @ 2019-08-07 15:57 UTC (permalink / raw)
  To: Will Deacon
  Cc: crecklin, ard.biesheuvel, Catalin Marinas, bhsharma, maz, nd,
	linux-arm-kernel

On Wed, Aug 07, 2019 at 03:55:40PM +0100, Will Deacon wrote:
> On Wed, Aug 07, 2019 at 01:29:38PM +0000, Steve Capper wrote:
> > Many thanks for going through this series Catalin. Would you like me to post
> > a V5 of the series?
> 
> /me does best Catalin impression...
> 
> "Yes, please."
> 
> Uncanny, eh?

Well, I'm convinced! I've just sent out a V5.

Cheers Will ;-).

-- 
Steve


end of thread (latest message: 2019-08-07 15:59 UTC)

Thread overview: 35+ messages
2019-07-29 16:21 [PATCH V4 00/11] 52-bit kernel + user VAs Steve Capper
2019-07-29 16:21 ` [PATCH V4 01/11] arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START Steve Capper
2019-08-05 11:07   ` Catalin Marinas
2019-07-29 16:21 ` [PATCH V4 02/11] arm64: mm: Flip kernel VA space Steve Capper
2019-08-05 11:29   ` Catalin Marinas
2019-08-05 11:50     ` Steve Capper
2019-07-29 16:21 ` [PATCH V4 03/11] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
2019-08-05 16:37   ` Catalin Marinas
2019-08-06  9:05     ` Steve Capper
2019-07-29 16:21 ` [PATCH V4 04/11] arm64: dump: De-constify VA_START and KASAN_SHADOW_START Steve Capper
2019-08-05 16:38   ` Catalin Marinas
2019-07-29 16:21 ` [PATCH V4 05/11] arm64: mm: Introduce VA_BITS_MIN Steve Capper
2019-08-05 17:17   ` Catalin Marinas
2019-08-05 17:20   ` Catalin Marinas
2019-08-06  9:11     ` Steve Capper
2019-07-29 16:21 ` [PATCH V4 06/11] arm64: mm: Introduce VA_BITS_ACTUAL Steve Capper
2019-08-05 17:26   ` Catalin Marinas
2019-08-06 11:32     ` Steve Capper
2019-08-06 14:48       ` Catalin Marinas
2019-08-07 13:27         ` Steve Capper
2019-07-29 16:21 ` [PATCH V4 07/11] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
2019-08-05 17:06   ` Catalin Marinas
2019-07-29 16:21 ` [PATCH V4 08/11] arm64: mm: Separate out vmemmap Steve Capper
2019-08-05 17:07   ` Catalin Marinas
2019-07-29 16:21 ` [PATCH V4 09/11] arm64: mm: Modify calculation of VMEMMAP_SIZE Steve Capper
2019-08-05 17:10   ` Catalin Marinas
2019-07-29 16:21 ` [PATCH V4 10/11] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
2019-08-05 17:27   ` Catalin Marinas
2019-08-06 14:55   ` Catalin Marinas
2019-08-06 14:58     ` Catalin Marinas
2019-07-29 16:21 ` [PATCH V4 11/11] docs: arm64: Add layout and 52-bit info to memory document Steve Capper
2019-08-06 15:27   ` Catalin Marinas
2019-08-07 13:29     ` Steve Capper
2019-08-07 14:55       ` Will Deacon
2019-08-07 15:57         ` Steve Capper
