* [PATCH v2 00/12] 52-bit kernel + user VAs
@ 2019-05-28 16:10 Steve Capper
  2019-05-28 16:10 ` [PATCH v2 01/12] arm/arm64: KVM: Formalise end of direct linear map Steve Capper
                   ` (13 more replies)
  0 siblings, 14 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

This patch series adds support for 52-bit kernel VAs using some of the
machinery already introduced by the 52-bit userspace VA code in 5.0.

As 52-bit virtual address support is an optional hardware feature,
software support for 52-bit kernel VAs needs to be deduced at early boot
time. If HW support is not available, the kernel falls back to 48-bit.

A significant proportion of this series focuses on "de-constifying"
VA_BITS related constants.

In order to allow for a KASAN shadow that changes size at boot time, one
must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
start address. Also, it is highly desirable to maintain the same
function addresses in the kernel .text between VA sizes. Both of these
requirements necessitate flipping the kernel address space halves such
that the direct linear map occupies the lower addresses.
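
For illustration (this sketch is mine, not part of the patches), the
flipped split works out as follows; the helpers mirror the _VA_START()
and _PAGE_OFFSET() definitions introduced later in the series:

    /* lowest kernel VA and end of the direct linear map, per VA size */
    #define _PAGE_OFFSET(va)  (UL(0xffffffffffffffff) - (UL(1) << (va)) + 1)
    #define _VA_START(va)     (UL(0xffffffffffffffff) - (UL(1) << ((va) - 1)) + 1)

    /* 48-bit: linear map 0xffff000000000000..0xffff7fffffffffff,
     *         everything else from 0xffff800000000000 upwards.
     * 52-bit: linear map 0xfff0000000000000..0xfff7ffffffffffff,
     *         everything else from 0xfff8000000000000 upwards.
     */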

In V2 of this series (apologies for the long delay from V1), the major
change is that PAGE_OFFSET is retained as a constant. This allows for
much faster virt_to_page computations. This is achieved by expanding the
size of the VMEMMAP region to accommodate a disjoint 52-bit/48-bit
direct linear map. This has been found to work well in my testing, but I
would appreciate any feedback on whether it needs changing. To aid with
git bisect, this logic is broken down into a few smaller patches.
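
As a rough illustration of the virt_to_page() win (my sketch; the macro
shapes follow the existing arm64 memory.h):

    /* With PAGE_OFFSET constant, the linear-map offset is obtained with a
     * mask against an immediate -- no load of a boot-time variable:
     *
     *   page = (struct page *)(((vaddr & ~PAGE_OFFSET) / PAGE_SIZE)
     *                          * sizeof(struct page) + VMEMMAP_START);
     *
     * The enlarged VMEMMAP region is what lets this same expression work
     * whether the linear map actually starts at the 52-bit PAGE_OFFSET or
     * at the 48-bit fallback address.
     */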

As far as I'm aware, there are two outstanding issues with this series
that need to be resolved:
 1) Is the code patching for ttbr1_offset safe? I need to analyse this
    a little more,
 2) How can this memory map be advertised to kdump tools/documentation?
    I was planning on getting the kernel VA structure agreed on, then I
    would add the relevant exports/documentation.

Cheers,
-- 
Steve


Steve Capper (12):
  arm/arm64: KVM: Formalise end of direct linear map
  arm64: mm: Flip kernel VA space
  arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's
  arm64: dump: Make kernel page table dumper dynamic again
  arm64: mm: Introduce VA_BITS_MIN
  arm64: mm: Introduce VA_BITS_ACTUAL
  arm64: mm: Logic to make offset_ttbr1 conditional
  arm64: mm: Separate out vmemmap
  arm64: mm: Modify calculation of VMEMMAP_SIZE
  arm64: mm: Tweak PAGE_OFFSET logic
  arm64: mm: Introduce 52-bit Kernel VAs

 Documentation/arm64/kasan-offsets.sh   | 27 ++++++++++++
 arch/arm/include/asm/memory.h          |  1 +
 arch/arm64/Kconfig                     | 46 +++++++++++++++++++-
 arch/arm64/Makefile                    |  8 ----
 arch/arm64/include/asm/assembler.h     | 17 +++++++-
 arch/arm64/include/asm/cpucaps.h       |  3 +-
 arch/arm64/include/asm/efi.h           |  4 +-
 arch/arm64/include/asm/kasan.h         | 11 ++---
 arch/arm64/include/asm/memory.h        | 53 +++++++++++++++--------
 arch/arm64/include/asm/mmu_context.h   |  4 +-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 +-
 arch/arm64/include/asm/pgtable.h       |  6 +--
 arch/arm64/include/asm/processor.h     |  2 +-
 arch/arm64/kernel/cpufeature.c         | 18 ++++++++
 arch/arm64/kernel/head.S               | 25 +++++++++--
 arch/arm64/kernel/hibernate-asm.S      |  1 +
 arch/arm64/kernel/hibernate.c          |  2 +-
 arch/arm64/kernel/kaslr.c              |  6 +--
 arch/arm64/kvm/va_layout.c             | 14 +++----
 arch/arm64/mm/dump.c                   | 58 +++++++++++++++++++++-----
 arch/arm64/mm/fault.c                  |  4 +-
 arch/arm64/mm/init.c                   | 29 +++++++++----
 arch/arm64/mm/kasan_init.c             | 11 +++--
 arch/arm64/mm/mmu.c                    | 13 +++---
 arch/arm64/mm/proc.S                   |  6 ++-
 virt/kvm/arm/mmu.c                     |  4 +-
 26 files changed, 280 insertions(+), 95 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

-- 
2.20.1



* [PATCH v2 01/12] arm/arm64: KVM: Formalise end of direct linear map
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-05-28 16:27   ` Marc Zyngier
  2019-05-28 16:10 ` [PATCH v2 02/12] arm64: mm: Flip kernel VA space Steve Capper
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

We assume that the direct linear map ends at ~0 in the KVM HYP map
intersection checking code. This assumption will become invalid later on
for arm64 when the address space of the kernel is re-arranged.

This patch introduces a new constant PAGE_OFFSET_END for both arm and
arm64 and defines it to be ~0UL.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm/include/asm/memory.h   | 1 +
 arch/arm64/include/asm/memory.h | 1 +
 virt/kvm/arm/mmu.c              | 4 ++--
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index ed8fd0d19a3e..45c211fd50da 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -24,6 +24,7 @@
 
 /* PAGE_OFFSET - the virtual address of the start of the kernel image */
 #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
+#define PAGE_OFFSET_END		(~0UL)
 
 #ifdef CONFIG_MMU
 
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 8ffcf5a512bb..9fd387a63b9b 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -52,6 +52,7 @@
 	(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << (VA_BITS - 1)) + 1)
+#define PAGE_OFFSET_END		(~0UL)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index 74b6582eaa3c..e1a777275b37 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -2202,10 +2202,10 @@ int kvm_mmu_init(void)
 	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
 	kvm_debug("HYP VA range: %lx:%lx\n",
 		  kern_hyp_va(PAGE_OFFSET),
-		  kern_hyp_va((unsigned long)high_memory - 1));
+		  kern_hyp_va(PAGE_OFFSET_END));
 
 	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
-	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
+	    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&
 	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
 		/*
 		 * The idmap page is intersecting with the VA space,
-- 
2.20.1



* [PATCH v2 02/12] arm64: mm: Flip kernel VA space
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
  2019-05-28 16:10 ` [PATCH v2 01/12] arm/arm64: KVM: Formalise end of direct linear map Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-05-28 16:10 ` [PATCH v2 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

Put the direct linear map in the lower addresses of the kernel VA range
and everything else in the higher ranges.

This allows us to make room for an inline KASAN shadow that operates
under both 48-bit and 52-bit kernel VA sizes. For example, with a 52-bit
VA, if KASAN_SHADOW_END < 0xFFF8000000000000 (i.e. it lies in the lower
addresses of the kernel VA range), it will also be below the start of
the minimum 48-bit kernel VA space at 0xFFFF000000000000.

We need to adjust:
 *) KASAN shadow region placement logic,
 *) KASAN_SHADOW_OFFSET computation logic,
 *) virt_to_phys, phys_to_virt checks,
 *) page table dumper.

These are all small changes that need to take place atomically, so they
are bundled into this commit.
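
As a quick illustration (mine, for reviewers) of the flipped linear-map
test on a 48-bit kernel VA configuration:

    /* bit (VA_BITS - 1) is now clear for linear map addresses:
     *   __is_lm_address(0xffff000012345678) -> true  (linear map)
     *   __is_lm_address(0xffff900000000000) -> false (not in the linear map)
     */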

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Makefile              |  2 +-
 arch/arm64/include/asm/memory.h  | 10 +++++-----
 arch/arm64/include/asm/pgtable.h |  2 +-
 arch/arm64/kernel/hibernate.c    |  2 +-
 arch/arm64/mm/dump.c             |  8 ++++----
 arch/arm64/mm/init.c             |  9 +--------
 arch/arm64/mm/kasan_init.c       |  6 +++---
 arch/arm64/mm/mmu.c              |  4 ++--
 8 files changed, 18 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index b025304bde46..2dad2ae6b181 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -115,7 +115,7 @@ KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
+	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
 	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
 	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
 
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 9fd387a63b9b..d19bf9965d34 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -49,10 +49,10 @@
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << (VA_BITS - 1)) + 1)
-#define PAGE_OFFSET_END		(~0UL)
+#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
+	(UL(1) << VA_BITS) + 1)
+#define PAGE_OFFSET_END		(VA_START)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
@@ -60,7 +60,7 @@
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(BPF_JIT_REGION_END)
 #define MODULES_VSIZE		(SZ_128M)
-#define VMEMMAP_START		(PAGE_OFFSET - VMEMMAP_SIZE)
+#define VMEMMAP_START		(-VMEMMAP_SIZE)
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
@@ -239,7 +239,7 @@ extern u64			vabits_user;
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 2c41b04708fe..d0ab784304e9 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -32,7 +32,7 @@
  *	and fixed mappings
  */
 #define VMALLOC_START		(MODULES_END)
-#define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
+#define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
 
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 9859e1178e6b..002653b214c4 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -497,7 +497,7 @@ int swsusp_arch_resume(void)
 		rc = -ENOMEM;
 		goto out;
 	}
-	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
+	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, PAGE_OFFSET_END);
 	if (rc)
 		goto out;
 
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 14fe23cd5932..ee4e5bea8944 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -30,6 +30,8 @@
 #include <asm/ptdump.h>
 
 static const struct addr_marker address_markers[] = {
+	{ PAGE_OFFSET,			"Linear Mapping start" },
+	{ VA_START,			"Linear Mapping end" },
 #ifdef CONFIG_KASAN
 	{ KASAN_SHADOW_START,		"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
@@ -43,10 +45,8 @@ static const struct addr_marker address_markers[] = {
 	{ PCI_IO_START,			"PCI I/O start" },
 	{ PCI_IO_END,			"PCI I/O end" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-	{ VMEMMAP_START,		"vmemmap start" },
-	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
+	{ VMEMMAP_START,		"vmemmap" },
 #endif
-	{ PAGE_OFFSET,			"Linear mapping" },
 	{ -1,				NULL },
 };
 
@@ -380,7 +380,7 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= VA_START,
+	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d2adffb81b5d..574ed1d4be19 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -311,7 +311,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+	const s64 linear_region_size = BIT(VA_BITS - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -319,13 +319,6 @@ void __init arm64_memblock_init(void)
 	/* Remove memory above our supported physical address size */
 	memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
 
-	/*
-	 * Ensure that the linear region takes up exactly half of the kernel
-	 * virtual address space. This way, we can distinguish a linear address
-	 * from a kernel/module/vmalloc address by testing a single bit.
-	 */
-	BUILD_BUG_ON(linear_region_size != BIT(VA_BITS - 1));
-
 	/*
 	 * Select a suitable value for the base of physical memory.
 	 */
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 296de39ddee5..8066621052db 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -229,10 +229,10 @@ void __init kasan_init(void)
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
 			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
-	kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
-				    (void *)mod_shadow_start);
+	kasan_populate_early_shadow(kasan_mem_to_shadow((void *) VA_START),
+				   (void *)mod_shadow_start);
 	kasan_populate_early_shadow((void *)kimg_shadow_end,
-				    kasan_mem_to_shadow((void *)PAGE_OFFSET));
+				   (void *)KASAN_SHADOW_END);
 
 	if (kimg_shadow_start > mod_shadow_end)
 		kasan_populate_early_shadow((void *)mod_shadow_end,
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index a170c6369a68..b0401b2ec4da 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -409,7 +409,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 				  phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
@@ -436,7 +436,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
-- 
2.20.1



* [PATCH v2 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
  2019-05-28 16:10 ` [PATCH v2 01/12] arm/arm64: KVM: Formalise end of direct linear map Steve Capper
  2019-05-28 16:10 ` [PATCH v2 02/12] arm64: mm: Flip kernel VA space Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-05-28 16:10 ` [PATCH v2 04/12] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's Steve Capper
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
line argument and affects the codegen of the inline address sanitiser.

Essentially, for an example memory access:
    *ptr1 = val;
The compiler will insert logic similar to the below:
    shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)
    if (somethingWrong(shadowValue))
        flagAnError();

This code sequence is inserted into many places, thus
KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
text.

If we want to run a single kernel binary with multiple address spaces,
then we need to do this with KASAN_SHADOW_OFFSET fixed.

Thankfully, due to the way the KASAN_SHADOW_OFFSET is used to provide
shadow addresses we know that the end of the shadow region is constant
w.r.t. VA space size:
    KASAN_SHADOW_END = (~0 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

This means that if we increase the size of the VA space, the start of
the KASAN region expands into lower addresses whilst the end of the
KASAN region is fixed.
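
As a worked example (my numbers, using the generic KASAN offset this
patch selects for the 48-bit / 52-bit user VA configurations, with
KASAN_SHADOW_SCALE_SHIFT = 3):

    /* KASAN_SHADOW_END depends only on the offset, not on the VA size:
     *   KASAN_SHADOW_END = (1UL << (64 - 3)) + 0xdfffa00000000000
     *                    = 0xffffa00000000000
     * whilst the start "grows" downwards with the VA size:
     *   _KASAN_SHADOW_START(48) = END - (1UL << (48 - 3)) = 0xffff800000000000
     *   _KASAN_SHADOW_START(52) = END - (1UL << (52 - 3)) = 0xfffda00000000000
     */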

Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
build scripts, with the VA size used as a parameter. (There are also
build-time checks in the C code to ensure that the expected values are
being derived.) It is sufficient, and indeed a simplification, to remove
the build scripts (and build-time checks) entirely and instead provide
the KASAN_SHADOW_OFFSET values directly.

This patch removes the logic to compute the KASAN_SHADOW_OFFSET in the
arm64 Makefile, and instead we adopt the approach used by x86 to supply
offset values in Kconfig. To help debug/develop future VA space changes,
the Makefile logic has been preserved in a script file in the arm64
Documentation folder.

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

Changed in V3, rebase to include KASAN_SHADOW_SCALE_SHIFT, wording
tidied up.
---
 Documentation/arm64/kasan-offsets.sh | 27 +++++++++++++++++++++++++++
 arch/arm64/Kconfig                   | 15 +++++++++++++++
 arch/arm64/Makefile                  |  8 --------
 arch/arm64/include/asm/kasan.h       | 11 ++++-------
 arch/arm64/include/asm/memory.h      | 12 +++++++++---
 arch/arm64/mm/kasan_init.c           |  2 --
 6 files changed, 55 insertions(+), 20 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
new file mode 100644
index 000000000000..2b7a021db363
--- /dev/null
+++ b/Documentation/arm64/kasan-offsets.sh
@@ -0,0 +1,27 @@
+#!/bin/sh
+
+# Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
+# start address at the mid-point of the kernel VA space
+
+print_kasan_offset () {
+	printf "%02d\t" $1
+	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+			+ (1 << ($1 - 32 - $2)) \
+			- (1 << (64 - 32 - $2)) ))
+}
+
+echo KASAN_SHADOW_SCALE_SHIFT = 3
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 3
+print_kasan_offset 47 3
+print_kasan_offset 42 3
+print_kasan_offset 39 3
+print_kasan_offset 36 3
+echo
+echo KASAN_SHADOW_SCALE_SHIFT = 4
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 4
+print_kasan_offset 47 4
+print_kasan_offset 42 4
+print_kasan_offset 39 4
+print_kasan_offset 36 4
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4780eb7af842..d30046c7de7f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -291,6 +291,21 @@ config ARCH_SUPPORTS_UPROBES
 config ARCH_PROC_KCORE_TEXT
 	def_bool y
 
+config KASAN_SHADOW_OFFSET
+	hex
+	depends on KASAN
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
+	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
+	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
+	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
+	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+	default 0xffffffffffffffff
+
 source "arch/arm64/Kconfig.platforms"
 
 menu "Kernel Features"
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index 2dad2ae6b181..764201c76d77 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -111,14 +111,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 
-# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
-#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
-# in 32-bit arithmetic
-KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
-	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
-	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
-
 export	TEXT_OFFSET GZFLAGS
 
 core-y		+= arch/arm64/kernel/ arch/arm64/mm/
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b52aacd2c526..10d2add842da 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -18,11 +18,8 @@
  * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
  * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses,
  * where N = (1 << KASAN_SHADOW_SCALE_SHIFT).
- */
-#define KASAN_SHADOW_START      (VA_START)
-#define KASAN_SHADOW_END        (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
-
-/*
+ *
+ * KASAN_SHADOW_OFFSET:
  * This value is used to map an address to the corresponding shadow
  * address by the following formula:
  *     shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
@@ -33,8 +30,8 @@
  *      KASAN_SHADOW_OFFSET = KASAN_SHADOW_END -
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
-#define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1ULL << \
-					(64 - KASAN_SHADOW_SCALE_SHIFT)))
+#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d19bf9965d34..fbd841f986ff 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -54,7 +54,7 @@
 	(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET_END		(VA_START)
 #define KIMAGE_VADDR		(MODULES_END)
-#define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
+#define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
 #define BPF_JIT_REGION_END	(BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
@@ -80,11 +80,17 @@
  * significantly, so double the (minimum) stack size when they are in use.
  */
 #ifdef CONFIG_KASAN
-#define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
+#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
+					+ KASAN_SHADOW_OFFSET)
+#ifdef CONFIG_KASAN_EXTRA
+#define KASAN_THREAD_SHIFT	2
+#else
 #define KASAN_THREAD_SHIFT	1
+#endif
 #else
-#define KASAN_SHADOW_SIZE	(0)
 #define KASAN_THREAD_SHIFT	0
+#define KASAN_SHADOW_END	(VA_START)
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 8066621052db..0d3027be7bf8 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -158,8 +158,6 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
-		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
-- 
2.20.1



* [PATCH v2 04/12] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (2 preceding siblings ...)
  2019-05-28 16:10 ` [PATCH v2 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-05-28 17:07   ` Ard Biesheuvel
  2019-05-28 16:10 ` [PATCH v2 05/12] arm64: dump: Make kernel page table dumper dynamic again Steve Capper
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

In order to prepare for a variable VA_BITS we need to account for a
variable size VMEMMAP, which in turn means the position of the fixed map
is no longer a compile-time constant.

Thus, we need to replace the BUILD_BUG_ON's that check the fixed map
position with BUG_ON's.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/mm/mmu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b0401b2ec4da..1b88d9d81954 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -846,7 +846,7 @@ void __init early_fixmap_init(void)
 	 * The boot-ioremap range spans multiple pmds, for which
 	 * we are not prepared:
 	 */
-	BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
+	BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
 		     != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
 
 	if ((pmdp != fixmap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)))
@@ -914,9 +914,9 @@ void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
 	 * On 4k pages, we'll use section mappings for the FDT so we only
 	 * have to be in the same PUD.
 	 */
-	BUILD_BUG_ON(dt_virt_base % SZ_2M);
+	BUG_ON(dt_virt_base % SZ_2M);
 
-	BUILD_BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
+	BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
 		     __fix_to_virt(FIX_BTMAP_BEGIN) >> SWAPPER_TABLE_SHIFT);
 
 	offset = dt_phys % SWAPPER_BLOCK_SIZE;
-- 
2.20.1



* [PATCH v2 05/12] arm64: dump: Make kernel page table dumper dynamic again
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (3 preceding siblings ...)
  2019-05-28 16:10 ` [PATCH v2 04/12] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-05-28 16:10 ` [PATCH v2 06/12] arm64: mm: Introduce VA_BITS_MIN Steve Capper
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

The kernel page table dumper assumes that the placement of VA regions is
constant and determined at compile time. As we are about to introduce
variable VA logic, we need to be able to determine certain regions at
boot time.

This patch adds logic to the kernel page table dumper so that these regions
can be computed at boot time.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/mm/dump.c | 58 +++++++++++++++++++++++++++++++++++---------
 1 file changed, 47 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index ee4e5bea8944..f5fd6d6557fc 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -29,23 +29,45 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/ptdump.h>
 
-static const struct addr_marker address_markers[] = {
-	{ PAGE_OFFSET,			"Linear Mapping start" },
-	{ VA_START,			"Linear Mapping end" },
+
+enum address_markers_idx {
+	PAGE_OFFSET_NR = 0,
+	VA_START_NR,
+#ifdef CONFIG_KASAN
+	KASAN_START_NR,
+	KASAN_END_NR,
+#endif
+	MODULES_START_NR,
+	MODULES_END_NR,
+	VMALLOC_START_NR,
+	VMALLOC_END_NR,
+	FIXADDR_START_NR,
+	FIXADDR_END_NR,
+	PCI_START_NR,
+	PCI_END_NR,
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+	VMEMMAP_START_NR,
+#endif
+	END_NR
+};
+
+static struct addr_marker address_markers[] = {
+	{ 0 /* PAGE_OFFSET */,		"Linear Mapping start" },
+	{ 0 /* VA_START */,		"Linear Mapping end" },
 #ifdef CONFIG_KASAN
-	{ KASAN_SHADOW_START,		"Kasan shadow start" },
+	{ 0 /* KASAN_SHADOW_START */,	"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
 #endif
 	{ MODULES_VADDR,		"Modules start" },
 	{ MODULES_END,			"Modules end" },
 	{ VMALLOC_START,		"vmalloc() area" },
-	{ VMALLOC_END,			"vmalloc() end" },
-	{ FIXADDR_START,		"Fixmap start" },
-	{ FIXADDR_TOP,			"Fixmap end" },
-	{ PCI_IO_START,			"PCI I/O start" },
-	{ PCI_IO_END,			"PCI I/O end" },
+	{ 0 /* VMALLOC_END */,		"vmalloc() end" },
+	{ 0 /* FIXADDR_START */,	"Fixmap start" },
+	{ 0 /* FIXADDR_TOP */,		"Fixmap end" },
+	{ 0 /* PCI_IO_START */,		"PCI I/O start" },
+	{ 0 /* PCI_IO_END */,		"PCI I/O end" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
-	{ VMEMMAP_START,		"vmemmap" },
+	{ 0 /* VMEMMAP_START */,	"vmemmap" },
 #endif
 	{ -1,				NULL },
 };
@@ -380,7 +402,6 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)
@@ -405,6 +426,21 @@ void ptdump_check_wx(void)
 
 static int ptdump_init(void)
 {
+	kernel_ptdump_info.base_addr = PAGE_OFFSET;
+	address_markers[PAGE_OFFSET_NR].start_address = PAGE_OFFSET;
+	address_markers[VA_START_NR].start_address = VA_START;
+#ifdef CONFIG_KASAN
+	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
+#endif
+	address_markers[VMALLOC_END_NR].start_address = VMALLOC_END;
+	address_markers[FIXADDR_START_NR].start_address = FIXADDR_START;
+	address_markers[FIXADDR_END_NR].start_address = FIXADDR_TOP;
+	address_markers[PCI_START_NR].start_address = PCI_IO_START;
+	address_markers[PCI_END_NR].start_address = PCI_IO_END;
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+	address_markers[VMEMMAP_START_NR].start_address = VMEMMAP_START;
+#endif
+
 	ptdump_initialize();
 	ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables");
 	return 0;
-- 
2.20.1



* [PATCH v2 06/12] arm64: mm: Introduce VA_BITS_MIN
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (4 preceding siblings ...)
  2019-05-28 16:10 ` [PATCH v2 05/12] arm64: dump: Make kernel page table dumper dynamic again Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-05-28 16:10 ` [PATCH v2 07/12] arm64: mm: Introduce VA_BITS_ACTUAL Steve Capper
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

In order to support 52-bit kernel addresses detectable at boot time, the
kernel needs to know the most conservative VA_BITS possible should it
need to fall back to this quantity due to lack of hardware support.

A new compile-time constant VA_BITS_MIN is introduced in this patch and
it is employed in the KASAN end address, KASLR, and EFI stub.

For Arm, if 52-bit VA support is unavailable the fallback is to 48 bits.

In other words: VA_BITS_MIN = min(48, VA_BITS)

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Kconfig                 | 4 ++++
 arch/arm64/include/asm/efi.h       | 4 ++--
 arch/arm64/include/asm/memory.h    | 5 ++++-
 arch/arm64/include/asm/processor.h | 2 +-
 arch/arm64/kernel/head.S           | 2 +-
 arch/arm64/kernel/kaslr.c          | 6 +++---
 arch/arm64/mm/kasan_init.c         | 3 ++-
 7 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index d30046c7de7f..0cedcb4a0a01 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -774,6 +774,10 @@ config ARM64_VA_BITS
 	default 47 if ARM64_VA_BITS_47
 	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
 
+config ARM64_VA_BITS_MIN
+	int
+	default ARM64_VA_BITS
+
 choice
 	prompt "Physical address space size"
 	default ARM64_PA_BITS_48
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index c9e9a6978e73..a87e78f3f58d 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -79,7 +79,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
- * which is a 1 GB aligned region of size '1UL << (VA_BITS - 1)' that is
+ * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
  * guaranteed to cover the kernel Image.
  *
  * Since the EFI stub is part of the kernel Image, we can relax the
@@ -90,7 +90,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 static inline unsigned long efi_get_max_initrd_addr(unsigned long dram_base,
 						    unsigned long image_addr)
 {
-	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS - 1));
+	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS_MIN - 1));
 }
 
 #define efi_call_early(f, ...)		sys_table_arg->boottime->f(__VA_ARGS__)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index fbd841f986ff..c62df8f9ef86 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -64,6 +64,9 @@
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
+#define VA_BITS_MIN		(CONFIG_ARM64_VA_BITS_MIN)
+#define _VA_START(va)		(UL(0xffffffffffffffff) - \
+				(UL(1) << ((va) - 1)) + 1)
 
 #define KERNEL_START      _text
 #define KERNEL_END        _end
@@ -90,7 +93,7 @@
 #endif
 #else
 #define KASAN_THREAD_SHIFT	0
-#define KASAN_SHADOW_END	(VA_START)
+#define KASAN_SHADOW_END	(_VA_START(VA_BITS_MIN))
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index fcd0e691b1ea..307cd2173813 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -53,7 +53,7 @@
  * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
  */
 
-#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
+#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS_MIN)
 #define TASK_SIZE_64		(UL(1) << vabits_user)
 
 #ifdef CONFIG_COMPAT
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index fcae3f85c6cd..ab68c3fe7a19 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -325,7 +325,7 @@ __create_page_tables:
 	mov	x5, #52
 	cbnz	x6, 1f
 #endif
-	mov	x5, #VA_BITS
+	mov	x5, #VA_BITS_MIN
 1:
 	adr_l	x6, vabits_user
 	str	x5, [x6]
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index b09b6f75f759..6f0075f983c7 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -119,15 +119,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS - 2), and stay clear of
+	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
 	 * the lower and upper quarters to avoid colliding with other
 	 * allocations.
 	 * Even if we could randomize at page granularity for 16k and 64k pages,
 	 * let's always round to 2 MB so we don't interfere with the ability to
 	 * map using contiguous PTEs
 	 */
-	mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
-	offset = BIT(VA_BITS - 3) + (seed & mask);
+	mask = ((1UL << (VA_BITS_MIN - 2)) - 1) & ~(SZ_2M - 1);
+	offset = BIT(VA_BITS_MIN - 3) + (seed & mask);
 
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 0d3027be7bf8..0e0b69af0aaa 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -158,7 +158,8 @@ static void __init kasan_pgd_populate(unsigned long addr, unsigned long end,
 /* The early shadow maps everything to a single page of zeroes */
 asmlinkage void __init kasan_early_init(void)
 {
-	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
 			   true);
-- 
2.20.1



* [PATCH v2 07/12] arm64: mm: Introduce VA_BITS_ACTUAL
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (5 preceding siblings ...)
  2019-05-28 16:10 ` [PATCH v2 06/12] arm64: mm: Introduce VA_BITS_MIN Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-05-28 16:10 ` [PATCH v2 08/12] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

In order to support 52-bit kernel addresses detectable at boot time, one
needs to know the actual VA_BITS detected. A new variable VA_BITS_ACTUAL
is introduced in this commit and employed for the KVM hypervisor layout,
KASAN, fault handling and phys-to/from-virt translation, where there
would normally be compile-time constants.

In order to maintain performance in phys_to_virt, another variable
physvirt_offset is introduced.
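
To illustrate the phys/virt translation (example values are mine; a
PHYS_OFFSET of 0x80000000 is just a typical DRAM base):

    /* physvirt_offset = PHYS_OFFSET - PAGE_OFFSET
     *                 = 0x80000000 - 0xffff000000000000   (48-bit layout)
     *                 = 0x0001000080000000                (mod 2^64)
     *
     * __phys_to_virt(0x80000000) = 0x80000000 - physvirt_offset
     *                            = 0xffff000000000000
     * __lm_to_phys(0xffff000000000000) = addr + physvirt_offset
     *                                  = 0x80000000
     */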

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/kasan.h       |  2 +-
 arch/arm64/include/asm/memory.h      | 12 +++++++-----
 arch/arm64/include/asm/mmu_context.h |  2 +-
 arch/arm64/kernel/head.S             |  5 +++++
 arch/arm64/kvm/va_layout.c           | 14 +++++++-------
 arch/arm64/mm/fault.c                |  4 ++--
 arch/arm64/mm/init.c                 |  7 ++++++-
 arch/arm64/mm/mmu.c                  |  3 +++
 8 files changed, 32 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 10d2add842da..ff991dc86ae1 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -31,7 +31,7 @@
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
 #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
-#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS)
+#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS_ACTUAL)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c62df8f9ef86..c02373035533 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -48,8 +48,6 @@
  * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << (VA_BITS - 1)) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << VA_BITS) + 1)
 #define PAGE_OFFSET_END		(VA_START)
@@ -178,10 +176,14 @@
 #endif
 
 #ifndef __ASSEMBLY__
+extern u64			vabits_actual;
+#define VA_BITS_ACTUAL		({vabits_actual;})
+#define VA_START		(_VA_START(VA_BITS_ACTUAL))
 
 #include <linux/bitops.h>
 #include <linux/mmdebug.h>
 
+extern s64			physvirt_offset;
 extern s64			memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET		({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
@@ -248,9 +250,9 @@ extern u64			vabits_user;
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS_ACTUAL - 1)))
 
-#define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+#define __lm_to_phys(addr)	(((addr) + physvirt_offset))
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
 
 #define __virt_to_phys_nodebug(x) ({					\
@@ -269,7 +271,7 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
 #define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
 #endif
 
-#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+#define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 2da3e478fd8f..133ecb65b602 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -106,7 +106,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 	isb();
 }
 
-#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS))
+#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS_ACTUAL))
 #define cpu_set_idmap_tcr_t0sz()	__cpu_set_tcr_t0sz(idmap_t0sz)
 
 /*
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index ab68c3fe7a19..b3335e639b6d 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -332,6 +332,11 @@ __create_page_tables:
 	dmb	sy
 	dc	ivac, x6		// Invalidate potentially stale cache line
 
+	adr_l	x6, vabits_actual
+	str	x5, [x6]
+	dmb	sy
+	dc	ivac, x6		// Invalidate potentially stale cache line
+
 	/*
 	 * VA_BITS may be too small to allow for an ID mapping to be created
 	 * that covers system RAM if that is located sufficiently high in the
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index c712a7376bc1..c9a1debb45bd 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -40,25 +40,25 @@ static void compute_layout(void)
 	int kva_msb;
 
 	/* Where is my RAM region? */
-	hyp_va_msb  = idmap_addr & BIT(VA_BITS - 1);
-	hyp_va_msb ^= BIT(VA_BITS - 1);
+	hyp_va_msb  = idmap_addr & BIT(VA_BITS_ACTUAL - 1);
+	hyp_va_msb ^= BIT(VA_BITS_ACTUAL - 1);
 
 	kva_msb = fls64((u64)phys_to_virt(memblock_start_of_DRAM()) ^
 			(u64)(high_memory - 1));
 
-	if (kva_msb == (VA_BITS - 1)) {
+	if (kva_msb == (VA_BITS_ACTUAL - 1)) {
 		/*
 		 * No space in the address, let's compute the mask so
-		 * that it covers (VA_BITS - 1) bits, and the region
+		 * that it covers (VA_BITS_ACTUAL - 1) bits, and the region
 		 * bit. The tag stays set to zero.
 		 */
-		va_mask  = BIT(VA_BITS - 1) - 1;
+		va_mask  = BIT(VA_BITS_ACTUAL - 1) - 1;
 		va_mask |= hyp_va_msb;
 	} else {
 		/*
 		 * We do have some free bits to insert a random tag.
 		 * Hyp VAs are now created from kernel linear map VAs
-		 * using the following formula (with V == VA_BITS):
+		 * using the following formula (with V == VA_BITS_ACTUAL):
 		 *
 		 *  63 ... V |     V-1    | V-2 .. tag_lsb | tag_lsb - 1 .. 0
 		 *  ---------------------------------------------------------
@@ -66,7 +66,7 @@ static void compute_layout(void)
 		 */
 		tag_lsb = kva_msb;
 		va_mask = GENMASK_ULL(tag_lsb - 1, 0);
-		tag_val = get_random_long() & GENMASK_ULL(VA_BITS - 2, tag_lsb);
+		tag_val = get_random_long() & GENMASK_ULL(VA_BITS_ACTUAL - 2, tag_lsb);
 		tag_val |= hyp_va_msb;
 		tag_val >>= tag_lsb;
 	}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 0cb0e09995e1..6c672600ea8a 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -171,9 +171,9 @@ static void show_pte(unsigned long addr)
 		return;
 	}
 
-	pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp = %p\n",
+	pr_alert("%s pgtable: %luk pages, %llu-bit VAs, pgdp = %p\n",
 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
-		 mm == &init_mm ? VA_BITS : (int) vabits_user, mm->pgd);
+		 mm == &init_mm ? VA_BITS_ACTUAL : (int) vabits_user, mm->pgd);
 	pgdp = pgd_offset(mm, addr);
 	pgd = READ_ONCE(*pgdp);
 	pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd));
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 574ed1d4be19..d89341df2d0e 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -61,6 +61,9 @@
 s64 memstart_addr __ro_after_init = -1;
 EXPORT_SYMBOL(memstart_addr);
 
+s64 physvirt_offset __ro_after_init;
+EXPORT_SYMBOL(physvirt_offset);
+
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
@@ -311,7 +314,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = BIT(VA_BITS - 1);
+	const s64 linear_region_size = BIT(VA_BITS_ACTUAL - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -325,6 +328,8 @@ void __init arm64_memblock_init(void)
 	memstart_addr = round_down(memblock_start_of_DRAM(),
 				   ARM64_MEMSTART_ALIGN);
 
+	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 1b88d9d81954..57d4b72219c9 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -54,6 +54,9 @@ u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 u64 vabits_user __ro_after_init;
 EXPORT_SYMBOL(vabits_user);
 
+u64 __section(".mmuoff.data.write") vabits_actual;
+EXPORT_SYMBOL(vabits_actual);
+
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
 
-- 
2.20.1



* [PATCH v2 08/12] arm64: mm: Logic to make offset_ttbr1 conditional
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (6 preceding siblings ...)
  2019-05-28 16:10 ` [PATCH v2 07/12] arm64: mm: Introduce VA_BITS_ACTUAL Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-06-10 14:18   ` Catalin Marinas
  2019-05-28 16:10 ` [PATCH v2 09/12] arm64: mm: Separate out vmemmap Steve Capper
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

When running with a 52-bit userspace VA and a 48-bit kernel VA we offset
ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to
be used for both userspace and kernel.

Moving on to a 52-bit kernel VA we no longer require this offset to
ttbr1_el1 should we be running on a system with HW support for 52-bit
VAs.

This patch introduces alternative logic to offset_ttbr1 and expands out
the very early case in head.S. We need to use the alternative framework
as offset_ttbr1 is used in places in the kernel where it is not possible
to safely address kernel constants via adrp (such as the kpti paths); thus
code patching is the safer route.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/assembler.h | 10 +++++++++-
 arch/arm64/include/asm/cpucaps.h   |  3 ++-
 arch/arm64/kernel/cpufeature.c     | 18 ++++++++++++++++++
 arch/arm64/kernel/head.S           | 14 +++++++++++++-
 arch/arm64/kernel/hibernate-asm.S  |  1 +
 5 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 039fbd822ec6..a42c392ed1e1 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -548,6 +548,14 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
 	.macro	offset_ttbr1, ttbr
 #ifdef CONFIG_ARM64_USER_VA_BITS_52
 	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
+#endif
+
+#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
+alternative_if_not ARM64_HAS_52BIT_VA
+	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
+alternative_else
+	nop
+alternative_endif
 #endif
 	.endm
 
@@ -557,7 +565,7 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * to be nop'ed out when dealing with 52-bit kernel VAs.
  */
 	.macro	restore_ttbr1, ttbr
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_KERNEL_VA_BITS_52)
 	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
 #endif
 	.endm
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index defdc67d9ab4..b317b8761744 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -62,7 +62,8 @@
 #define ARM64_HAS_GENERIC_AUTH_IMP_DEF		41
 #define ARM64_HAS_IRQ_PRIO_MASKING		42
 #define ARM64_HAS_DCPODP			43
+#define ARM64_HAS_52BIT_VA			44
 
-#define ARM64_NCAPS				44
+#define ARM64_NCAPS				45
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index ca27e08e3d8a..f9d8a5c8d8ce 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -957,6 +957,16 @@ has_useable_cnp(const struct arm64_cpu_capabilities *entry, int scope)
 	return has_cpuid_feature(entry, scope);
 }
 
+#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
+extern u64 vabits_actual;
+static bool __maybe_unused
+has_52bit_kernel_va(const struct arm64_cpu_capabilities *entry, int scope)
+{
+	return vabits_actual == 52;
+}
+
+#endif /* CONFIG_ARM64_USER_KERNEL_VA_BITS_52 */
+
 static bool __meltdown_safe = true;
 static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
 
@@ -1558,6 +1568,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.min_field_value = 1,
 	},
 #endif
+#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
+	{
+		.desc = "52-bit kernel VA",
+		.capability = ARM64_HAS_52BIT_VA,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.matches = has_52bit_kernel_va,
+	},
+#endif /* CONFIG_ARM64_USER_KERNEL_VA_BITS_52 */
 	{},
 };
 
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index b3335e639b6d..8bc1b533a912 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -788,7 +788,19 @@ ENTRY(__enable_mmu)
 	phys_to_ttbr x1, x1
 	phys_to_ttbr x2, x2
 	msr	ttbr0_el1, x2			// load TTBR0
-	offset_ttbr1 x1
+
+#if defined(CONFIG_ARM64_USER_VA_BITS_52)
+	orr     x1, x1, #TTBR1_BADDR_4852_OFFSET
+#endif
+
+#if defined(CONFIG_ARM64_USER_KERNEL_VA_BITS_52)
+	ldr_l	x3, vabits_actual
+	cmp	x3, #52
+	b.eq	1f
+	orr     x1, x1, #TTBR1_BADDR_4852_OFFSET
+1:
+#endif
+
 	msr	ttbr1_el1, x1			// load TTBR1
 	isb
 	msr	sctlr_el1, x0
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index fe36d85c60bd..d32725a2b77f 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -19,6 +19,7 @@
 #include <linux/linkage.h>
 #include <linux/errno.h>
 
+#include <asm/alternative.h>
 #include <asm/asm-offsets.h>
 #include <asm/assembler.h>
 #include <asm/cputype.h>
-- 
2.20.1



* [PATCH v2 09/12] arm64: mm: Separate out vmemmap
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (7 preceding siblings ...)
  2019-05-28 16:10 ` [PATCH v2 08/12] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-05-28 16:10 ` [PATCH v2 10/12] arm64: mm: Modify calculation of VMEMMAP_SIZE Steve Capper
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

vmemmap is a preprocessor definition that depends on a variable,
memstart_addr. In a later patch we will need to expand the size of
the VMEMMAP region and optionally modify vmemmap depending upon
whether or not hardware support is available for 52-bit virtual
addresses.

This patch changes vmemmap to be a variable. As the old definition
depended on a variable load, this should not affect performance
noticeably.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 4 ++--
 arch/arm64/mm/init.c             | 5 +++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d0ab784304e9..60c52c1da61a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -34,8 +34,6 @@
 #define VMALLOC_START		(MODULES_END)
 #define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
-#define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
-
 #define FIRST_USER_ADDRESS	0UL
 
 #ifndef __ASSEMBLY__
@@ -46,6 +44,8 @@
 #include <linux/mm_types.h>
 #include <linux/sched.h>
 
+extern struct page *vmemmap;
+
 extern void __pte_error(const char *file, int line, unsigned long val);
 extern void __pmd_error(const char *file, int line, unsigned long val);
 extern void __pud_error(const char *file, int line, unsigned long val);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index d89341df2d0e..6844365c0a51 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -64,6 +64,9 @@ EXPORT_SYMBOL(memstart_addr);
 s64 physvirt_offset __ro_after_init;
 EXPORT_SYMBOL(physvirt_offset);
 
+struct page *vmemmap __ro_after_init;
+EXPORT_SYMBOL(vmemmap);
+
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
@@ -330,6 +333,8 @@ void __init arm64_memblock_init(void)
 
 	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
 
+	vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
-- 
2.20.1



* [PATCH v2 10/12] arm64: mm: Modify calculation of VMEMMAP_SIZE
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (8 preceding siblings ...)
  2019-05-28 16:10 ` [PATCH v2 09/12] arm64: mm: Separate out vmemmap Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-05-28 16:10 ` [PATCH v2 11/12] arm64: mm: Tweak PAGE_OFFSET logic Steve Capper
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

In a later patch we will need to have a slightly larger VMEMMAP region
to accommodate boot time selection between 48/52-bit kernel VAs.

This patch modifies the formula for computing VMEMMAP_SIZE to depend
explicitly on PAGE_OFFSET and the start of kernel addressable memory.
(This allows for a slightly larger direct linear map in future).

Also, this patch removes the use of bitmasking in the computation of
__page_to_voff, as this can cause problems if we decide to move regions
around in future.
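
As a rough worked example of the new formula (assuming the flipped layout
where _VA_START(va) = 0xffffffffffffffff - (1UL << (va - 1)) + 1, 64K
pages so PAGE_SHIFT = 16, and STRUCT_PAGE_MAX_SHIFT = 6 for a 64-byte
struct page):

    PAGE_OFFSET     = _PAGE_OFFSET(52)      = 0xfff0000000000000
    _VA_START(48)                           = 0xffff800000000000
    region to cover = 2^52 - 2^47 bytes     = 0x000f800000000000
    VMEMMAP_SIZE    = region >> (16 - 6)    = 0x000003e000000000 (~3.9 TiB)

i.e. the vmemmap region is now sized for the full 52-bit linear map even
when the kernel falls back to 48-bit VAs at runtime.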

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/memory.h | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c02373035533..e9af8aa36612 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -37,8 +37,15 @@
 /*
  * VMEMMAP_SIZE - allows the whole linear region to be covered by
  *                a struct page array
+ *
+ * If we are configured with a 52-bit kernel VA then our VMEMMAP_SIZE
+ * needs to cover the memory region from the beginning of the 52-bit
+ * PAGE_OFFSET all the way to VA_START for 48-bit. This allows us to
+ * keep a constant PAGE_OFFSET and "fallback" to using the higher end
+ * of the VMEMMAP where 52-bit support is not available in hardware.
  */
-#define VMEMMAP_SIZE (UL(1) << (VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT))
+#define VMEMMAP_SIZE ((_VA_START(VA_BITS_MIN) - PAGE_OFFSET) \
+			>> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT))
 
 /*
  * PAGE_OFFSET - the virtual address of the start of the linear map (top
@@ -319,7 +326,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define _virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 #else
 #define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
-#define __page_to_voff(kaddr)	(((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
+#define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
 
 #define page_to_virt(page)	({					\
 	unsigned long __addr =						\
-- 
2.20.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 11/12] arm64: mm: Tweak PAGE_OFFSET logic
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (9 preceding siblings ...)
  2019-05-28 16:10 ` [PATCH v2 10/12] arm64: mm: Modify calculation of VMEMMAP_SIZE Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-05-28 16:10 ` [PATCH v2 12/12] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

This patch introduces _PAGE_OFFSET(va) to allow for computing the
largest possible direct linear map for use with 48/52-bit selectable
kernel VAs.

Also, this patch removes the PAGE_OFFSET bit masking logic in
__virt_to_pgoff, as this optimisation can paint us into a corner for
permissible PAGE_OFFSET values in future.

Lastly, this patch modifies the definition of _virt_addr_valid to use
__virt_to_phys rather than a bitmasked PAGE_OFFSET formula.
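
For illustration, the values that fall out of the new macro are:

    _PAGE_OFFSET(48) = 0xffffffffffffffff - (1UL << 48) + 1 = 0xffff000000000000
    _PAGE_OFFSET(52) = 0xffffffffffffffff - (1UL << 52) + 1 = 0xfff0000000000000

so a 52-bit configured kernel keeps PAGE_OFFSET = _PAGE_OFFSET(52) as a
compile-time constant, and the 48-bit portion of the linear map simply
starts 2^52 - 2^48 bytes above it.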

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/memory.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index e9af8aa36612..c817ee71e201 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -55,8 +55,9 @@
  * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
+#define _PAGE_OFFSET(va)	(UL(0xffffffffffffffff) - \
+					(UL(1) << (va)) + 1)
+#define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
 #define PAGE_OFFSET_END		(VA_START)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
@@ -325,7 +326,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
 #define _virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 #else
-#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
+#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
 #define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
 
 #define page_to_virt(page)	({					\
@@ -338,8 +339,7 @@ static inline void *phys_to_virt(phys_addr_t x)
 
 #define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
 
-#define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
-					   + PHYS_OFFSET) >> PAGE_SHIFT)
+#define _virt_addr_valid(kaddr)	pfn_valid(__virt_to_phys((u64)(kaddr)) >> PAGE_SHIFT)
 #endif
 #endif
 
-- 
2.20.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [PATCH v2 12/12] arm64: mm: Introduce 52-bit Kernel VAs
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (10 preceding siblings ...)
  2019-05-28 16:10 ` [PATCH v2 11/12] arm64: mm: Tweak PAGE_OFFSET logic Steve Capper
@ 2019-05-28 16:10 ` Steve Capper
  2019-06-05 15:34   ` Catalin Marinas
  2019-06-07 13:53 ` [PATCH v2 00/12] 52-bit kernel + user VAs Anshuman Khandual
  2019-06-10 10:40   ` Bhupesh Sharma
  13 siblings, 1 reply; 31+ messages in thread
From: Steve Capper @ 2019-05-28 16:10 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

Most of the machinery is now in place to enable 52-bit kernel VAs that
are detectable at boot time.

This patch adds a Kconfig option for 52-bit user and kernel addresses
and plumbs in the requisite CONFIG_ macros as well as sets TCR.T1SZ,
physvirt_offset and vmemmap at early boot.
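
As a C-level sketch of the T1SZ handling (assuming the architectural
encoding of TCR_ELx.TxSZ as 64 minus the VA size in bits; the real code
does this in assembly via the new tcr_set_t1sz macro, and "vabits" below
stands for the boot-time vabits_user value):

    u64 t1sz = 64 - vabits;    /* 12 when 52-bit VAs are in use, 16 otherwise */
    tcr &= ~GENMASK(TCR_T1SZ_OFFSET + TCR_TxSZ_WIDTH - 1, TCR_T1SZ_OFFSET);
    tcr |= t1sz << TCR_T1SZ_OFFSET;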

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Kconfig                     | 31 ++++++++++++++++++++++----
 arch/arm64/include/asm/assembler.h     |  9 +++++++-
 arch/arm64/include/asm/memory.h        |  2 +-
 arch/arm64/include/asm/mmu_context.h   |  2 +-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 +-
 arch/arm64/kernel/head.S               |  4 ++--
 arch/arm64/mm/init.c                   | 10 +++++++++
 arch/arm64/mm/proc.S                   |  6 ++++-
 8 files changed, 55 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 0cedcb4a0a01..d22432c2caca 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -276,11 +276,14 @@ config KERNEL_MODE_NEON
 config FIX_EARLYCON_MEM
 	def_bool y
 
+config HAS_VA_BITS_52
+	def_bool n
+
 config PGTABLE_LEVELS
 	int
 	default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
 	default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
-	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52)
+	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || HAS_VA_BITS_52)
 	default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
@@ -294,12 +297,12 @@ config ARCH_PROC_KCORE_TEXT
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
-	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || HAS_VA_BITS_52) && !KASAN_SW_TAGS
 	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
 	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
 	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
 	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
-	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || HAS_VA_BITS_52) && KASAN_SW_TAGS
 	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
 	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
 	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
@@ -739,6 +742,7 @@ config ARM64_VA_BITS_48
 config ARM64_USER_VA_BITS_52
 	bool "52-bit (user)"
 	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
+	select HAS_VA_BITS_52
 	help
 	  Enable 52-bit virtual addressing for userspace when explicitly
 	  requested via a hint to mmap(). The kernel will continue to
@@ -751,11 +755,28 @@ config ARM64_USER_VA_BITS_52
 
 	  If unsure, select 48-bit virtual addressing instead.
 
+config ARM64_USER_KERNEL_VA_BITS_52
+	bool "52-bit (user & kernel)"
+	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
+	select HAS_VA_BITS_52
+	help
+	  Enable 52-bit virtual addressing for userspace when explicitly
+	  requested via a hint to mmap(). The kernel will also use 52-bit
+	  virtual addresses for its own mappings (provided HW support for
+	  this feature is available, otherwise it reverts to 48-bit).
+
+	  NOTE: Enabling 52-bit virtual addressing in conjunction with
+	  ARMv8.3 Pointer Authentication will result in the PAC being
+	  reduced from 7 bits to 3 bits, which may have a significant
+	  impact on its susceptibility to brute-force attacks.
+
+	  If unsure, select 48-bit virtual addressing instead.
+
 endchoice
 
 config ARM64_FORCE_52BIT
 	bool "Force 52-bit virtual addresses for userspace"
-	depends on ARM64_USER_VA_BITS_52 && EXPERT
+	depends on HAS_VA_BITS_52 && EXPERT
 	help
 	  For systems with 52-bit userspace VAs enabled, the kernel will attempt
 	  to maintain compatibility with older software by providing 48-bit VAs
@@ -773,9 +794,11 @@ config ARM64_VA_BITS
 	default 42 if ARM64_VA_BITS_42
 	default 47 if ARM64_VA_BITS_47
 	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
+	default 52 if ARM64_USER_KERNEL_VA_BITS_52
 
 config ARM64_VA_BITS_MIN
 	int
+	default 48 if ARM64_USER_KERNEL_VA_BITS_52
 	default ARM64_VA_BITS
 
 choice
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index a42c392ed1e1..1f8d7b891ec9 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -356,6 +356,13 @@ alternative_endif
 	bfi	\valreg, \t0sz, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
 	.endm
 
+/*
+ * tcr_set_t1sz - update TCR.T1SZ
+ */
+	.macro	tcr_set_t1sz, valreg, t1sz
+	bfi	\valreg, \t1sz, #TCR_T1SZ_OFFSET, #TCR_TxSZ_WIDTH
+	.endm
+
 /*
  * tcr_compute_pa_size - set TCR.(I)PS to the highest supported
  * ID_AA64MMFR0_EL1.PARange value
@@ -565,7 +572,7 @@ alternative_endif
  * to be nop'ed out when dealing with 52-bit kernel VAs.
  */
 	.macro	restore_ttbr1, ttbr
-#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_KERNEL_VA_BITS_52)
+#ifdef CONFIG_HAS_VA_BITS_52
 	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
 #endif
 	.endm
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c817ee71e201..8e3c12a3414d 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -77,7 +77,7 @@
 #define KERNEL_START      _text
 #define KERNEL_END        _end
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_HAS_VA_BITS_52
 #define MAX_USER_VA_BITS	52
 #else
 #define MAX_USER_VA_BITS	VA_BITS
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 133ecb65b602..1ae26b357e1e 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -74,7 +74,7 @@ extern u64 idmap_ptrs_per_pgd;
 
 static inline bool __cpu_uses_extended_idmap(void)
 {
-	if (IS_ENABLED(CONFIG_ARM64_USER_VA_BITS_52))
+	if (IS_ENABLED(CONFIG_HAS_VA_BITS_52))
 		return false;
 
 	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index a69259cc1f16..73dc4a8c0df3 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -316,7 +316,7 @@
 #define TTBR_BADDR_MASK_52	(((UL(1) << 46) - 1) << 2)
 #endif
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_HAS_VA_BITS_52
 /* Must be at least 64-byte aligned to prevent corruption of the TTBR */
 #define TTBR1_BADDR_4852_OFFSET	(((UL(1) << (52 - PGDIR_SHIFT)) - \
 				 (UL(1) << (48 - PGDIR_SHIFT))) * 8)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 8bc1b533a912..31c43abf30aa 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -319,7 +319,7 @@ __create_page_tables:
 	adrp	x0, idmap_pg_dir
 	adrp	x3, __idmap_text_start		// __pa(__idmap_text_start)
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_HAS_VA_BITS_52
 	mrs_s	x6, SYS_ID_AA64MMFR2_EL1
 	and	x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
 	mov	x5, #52
@@ -817,7 +817,7 @@ ENTRY(__enable_mmu)
 ENDPROC(__enable_mmu)
 
 ENTRY(__cpu_secondary_check52bitva)
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_HAS_VA_BITS_52
 	ldr_l	x0, vabits_user
 	cmp	x0, #52
 	b.ne	2f
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 6844365c0a51..c0c6d2dc39b2 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -335,6 +335,16 @@ void __init arm64_memblock_init(void)
 
 	vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
 
+	/*
+	 * If we are running with a 52-bit kernel VA config on a system that
+	 * does not support it, we have to offset our vmemmap and physvirt_offset
+	 * s.t. we avoid the 52-bit portion of the direct linear map
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_USER_KERNEL_VA_BITS_52) && (VA_BITS_ACTUAL != 52)) {
+		vmemmap += (_PAGE_OFFSET(48) - _PAGE_OFFSET(52)) >> PAGE_SHIFT;
+		physvirt_offset = PHYS_OFFSET - _PAGE_OFFSET(48);
+	}
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index fdd626d34274..62554d92e903 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -449,7 +449,7 @@ ENTRY(__cpu_setup)
 			TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS
 	tcr_clear_errata_bits x10, x9, x5
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_HAS_VA_BITS_52
 	ldr_l		x9, vabits_user
 	sub		x9, xzr, x9
 	add		x9, x9, #64
@@ -458,6 +458,10 @@ ENTRY(__cpu_setup)
 #endif
 	tcr_set_t0sz	x10, x9
 
+#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
+	tcr_set_t1sz	x10, x9
+#endif
+
 	/*
 	 * Set the IPS bits in TCR_EL1.
 	 */
-- 
2.20.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 01/12] arm/arm64: KVM: Formalise end of direct linear map
  2019-05-28 16:10 ` [PATCH v2 01/12] arm/arm64: KVM: Formalise end of direct linear map Steve Capper
@ 2019-05-28 16:27   ` Marc Zyngier
  2019-05-28 17:01     ` Steve Capper
  0 siblings, 1 reply; 31+ messages in thread
From: Marc Zyngier @ 2019-05-28 16:27 UTC (permalink / raw)
  To: Steve Capper, linux-arm-kernel
  Cc: catalin.marinas, bhsharma, crecklin, will.deacon, ard.biesheuvel

Hi Steve,

On 28/05/2019 17:10, Steve Capper wrote:
> We assume that the direct linear map ends at ~0 in the KVM HYP map

Do we? This has stopped being the case since ed57cac83e05f ("arm64: KVM:
Introduce EL2 VA randomisation").

> intersection checking code. This assumption will become invalid later on
> for arm64 when the address space of the kernel is re-arranged.
> 
> This patch introduces a new constant PAGE_OFFSET_END for both arm and
> arm64 and defines it to be ~0UL
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>
> ---
>  arch/arm/include/asm/memory.h   | 1 +
>  arch/arm64/include/asm/memory.h | 1 +
>  virt/kvm/arm/mmu.c              | 4 ++--
>  3 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> index ed8fd0d19a3e..45c211fd50da 100644
> --- a/arch/arm/include/asm/memory.h
> +++ b/arch/arm/include/asm/memory.h
> @@ -24,6 +24,7 @@
>  
>  /* PAGE_OFFSET - the virtual address of the start of the kernel image */
>  #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
> +#define PAGE_OFFSET_END		(~0UL)
>  
>  #ifdef CONFIG_MMU
>  
> diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> index 8ffcf5a512bb..9fd387a63b9b 100644
> --- a/arch/arm64/include/asm/memory.h
> +++ b/arch/arm64/include/asm/memory.h
> @@ -52,6 +52,7 @@
>  	(UL(1) << VA_BITS) + 1)
>  #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
>  	(UL(1) << (VA_BITS - 1)) + 1)
> +#define PAGE_OFFSET_END		(~0UL)
>  #define KIMAGE_VADDR		(MODULES_END)
>  #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
>  #define BPF_JIT_REGION_SIZE	(SZ_128M)
> diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> index 74b6582eaa3c..e1a777275b37 100644
> --- a/virt/kvm/arm/mmu.c
> +++ b/virt/kvm/arm/mmu.c
> @@ -2202,10 +2202,10 @@ int kvm_mmu_init(void)
>  	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
>  	kvm_debug("HYP VA range: %lx:%lx\n",
>  		  kern_hyp_va(PAGE_OFFSET),
> -		  kern_hyp_va((unsigned long)high_memory - 1));
> +		  kern_hyp_va(PAGE_OFFSET_END));
>  
>  	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
> -	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
> +	    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&
>  	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
>  		/*
>  		 * The idmap page is intersecting with the VA space,
> 

This definitely looks like a move in the wrong direction (reverting part
of the above commit). Is it that this is just an old patch that should
have been dropped? Or am I completely missing the point?

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 01/12] arm/arm64: KVM: Formalise end of direct linear map
  2019-05-28 16:27   ` Marc Zyngier
@ 2019-05-28 17:01     ` Steve Capper
  2019-05-29  9:26       ` Steve Capper
  0 siblings, 1 reply; 31+ messages in thread
From: Steve Capper @ 2019-05-28 17:01 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: crecklin, ard.biesheuvel, Catalin Marinas, bhsharma, Will Deacon,
	nd, linux-arm-kernel

On Tue, May 28, 2019 at 05:27:17PM +0100, Marc Zyngier wrote:
> Hi Steve,

Hi Marc,

> 
> On 28/05/2019 17:10, Steve Capper wrote:
> > We assume that the direct linear map ends at ~0 in the KVM HYP map
> 
> Do we? This has stopped being the case since ed57cac83e05f ("arm64: KVM:
> Introduce EL2 VA randomisation").
> 
> > intersection checking code. This assumption will become invalid later on
> > for arm64 when the address space of the kernel is re-arranged.
> > 
> > This patch introduces a new constant PAGE_OFFSET_END for both arm and
> > arm64 and defines it to be ~0UL
> > 
> > Signed-off-by: Steve Capper <steve.capper@arm.com>
> > ---
> >  arch/arm/include/asm/memory.h   | 1 +
> >  arch/arm64/include/asm/memory.h | 1 +
> >  virt/kvm/arm/mmu.c              | 4 ++--
> >  3 files changed, 4 insertions(+), 2 deletions(-)
> > 
> > diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> > index ed8fd0d19a3e..45c211fd50da 100644
> > --- a/arch/arm/include/asm/memory.h
> > +++ b/arch/arm/include/asm/memory.h
> > @@ -24,6 +24,7 @@
> >  
> >  /* PAGE_OFFSET - the virtual address of the start of the kernel image */
> >  #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
> > +#define PAGE_OFFSET_END		(~0UL)
> >  
> >  #ifdef CONFIG_MMU
> >  
> > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > index 8ffcf5a512bb..9fd387a63b9b 100644
> > --- a/arch/arm64/include/asm/memory.h
> > +++ b/arch/arm64/include/asm/memory.h
> > @@ -52,6 +52,7 @@
> >  	(UL(1) << VA_BITS) + 1)
> >  #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
> >  	(UL(1) << (VA_BITS - 1)) + 1)
> > +#define PAGE_OFFSET_END		(~0UL)
> >  #define KIMAGE_VADDR		(MODULES_END)
> >  #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
> >  #define BPF_JIT_REGION_SIZE	(SZ_128M)
> > diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> > index 74b6582eaa3c..e1a777275b37 100644
> > --- a/virt/kvm/arm/mmu.c
> > +++ b/virt/kvm/arm/mmu.c
> > @@ -2202,10 +2202,10 @@ int kvm_mmu_init(void)
> >  	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
> >  	kvm_debug("HYP VA range: %lx:%lx\n",
> >  		  kern_hyp_va(PAGE_OFFSET),
> > -		  kern_hyp_va((unsigned long)high_memory - 1));
> > +		  kern_hyp_va(PAGE_OFFSET_END));
> >  
> >  	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
> > -	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
> > +	    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&
> >  	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
> >  		/*
> >  		 * The idmap page is intersecting with the VA space,
> > 
> 
> This definitely looks like a move in the wrong direction (reverting part
> of the above commit). Is it that this is just an old patch that should
> have been dropped? Or am I completely missing the point?

I suspect this is a case of me rebasing my series... poorly.
I'll re-examine the logic here and either update the patch or the commit
log to make it clearer.

Cheers,
-- 
Steve

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 04/12] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's
  2019-05-28 16:10 ` [PATCH v2 04/12] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's Steve Capper
@ 2019-05-28 17:07   ` Ard Biesheuvel
  2019-05-28 17:11     ` Ard Biesheuvel
  0 siblings, 1 reply; 31+ messages in thread
From: Ard Biesheuvel @ 2019-05-28 17:07 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, Marc Zyngier, Catalin Marinas, Bhupesh Sharma,
	Will Deacon, linux-arm-kernel

On Tue, 28 May 2019 at 18:10, Steve Capper <steve.capper@arm.com> wrote:
>
> In order to prepare for a variable VA_BITS we need to account for a
> variable size VMEMMAP which in turn means the position of the fixed map
> is variable at compile time.
>
> Thus, we need to replace the BUILD_BUG_ON's that check the fixed map
> position with BUG_ON's.
>
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Do we still need this patch now that VMEMMAP_SIZE is a compile time
constant again? Or am I missing something?

> ---
>  arch/arm64/mm/mmu.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> index b0401b2ec4da..1b88d9d81954 100644
> --- a/arch/arm64/mm/mmu.c
> +++ b/arch/arm64/mm/mmu.c
> @@ -846,7 +846,7 @@ void __init early_fixmap_init(void)
>          * The boot-ioremap range spans multiple pmds, for which
>          * we are not prepared:
>          */
> -       BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
> +       BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
>                      != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
>
>         if ((pmdp != fixmap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)))
> @@ -914,9 +914,9 @@ void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
>          * On 4k pages, we'll use section mappings for the FDT so we only
>          * have to be in the same PUD.
>          */
> -       BUILD_BUG_ON(dt_virt_base % SZ_2M);
> +       BUG_ON(dt_virt_base % SZ_2M);
>
> -       BUILD_BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
> +       BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
>                      __fix_to_virt(FIX_BTMAP_BEGIN) >> SWAPPER_TABLE_SHIFT);
>
>         offset = dt_phys % SWAPPER_BLOCK_SIZE;
> --
> 2.20.1
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 04/12] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's
  2019-05-28 17:07   ` Ard Biesheuvel
@ 2019-05-28 17:11     ` Ard Biesheuvel
  2019-05-29  9:28       ` Steve Capper
  0 siblings, 1 reply; 31+ messages in thread
From: Ard Biesheuvel @ 2019-05-28 17:11 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, Marc Zyngier, Catalin Marinas, Bhupesh Sharma,
	Will Deacon, linux-arm-kernel

On Tue, 28 May 2019 at 19:07, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>
> On Tue, 28 May 2019 at 18:10, Steve Capper <steve.capper@arm.com> wrote:
> >
> > In order to prepare for a variable VA_BITS we need to account for a
> > variable size VMEMMAP which in turn means the position of the fixed map
> > is variable at compile time.
> >
> > Thus, we need to replace the BUILD_BUG_ON's that check the fixed map
> > position with BUG_ON's.
> >
> > Signed-off-by: Steve Capper <steve.capper@arm.com>
>
> Do we still need this patch now that VMEMMAP_SIZE is a compile time
> constant again? Or am I missing something?
>

Never mind. You are moving the start of the linear region and the
start of the vmemmap region.

I wonder if this is really necessary, though. On a 48-bit VA only
system, the linear region could still start at 0xfff0_.... as long as
we don't put any physical memory there (and we already move around our
physical memory inside the linear region when KASLR is enabled)


> > ---
> >  arch/arm64/mm/mmu.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
> > index b0401b2ec4da..1b88d9d81954 100644
> > --- a/arch/arm64/mm/mmu.c
> > +++ b/arch/arm64/mm/mmu.c
> > @@ -846,7 +846,7 @@ void __init early_fixmap_init(void)
> >          * The boot-ioremap range spans multiple pmds, for which
> >          * we are not prepared:
> >          */
> > -       BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
> > +       BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
> >                      != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
> >
> >         if ((pmdp != fixmap_pmd(fix_to_virt(FIX_BTMAP_BEGIN)))
> > @@ -914,9 +914,9 @@ void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size, pgprot_t prot)
> >          * On 4k pages, we'll use section mappings for the FDT so we only
> >          * have to be in the same PUD.
> >          */
> > -       BUILD_BUG_ON(dt_virt_base % SZ_2M);
> > +       BUG_ON(dt_virt_base % SZ_2M);
> >
> > -       BUILD_BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
> > +       BUG_ON(__fix_to_virt(FIX_FDT_END) >> SWAPPER_TABLE_SHIFT !=
> >                      __fix_to_virt(FIX_BTMAP_BEGIN) >> SWAPPER_TABLE_SHIFT);
> >
> >         offset = dt_phys % SWAPPER_BLOCK_SIZE;
> > --
> > 2.20.1
> >

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 01/12] arm/arm64: KVM: Formalise end of direct linear map
  2019-05-28 17:01     ` Steve Capper
@ 2019-05-29  9:26       ` Steve Capper
  0 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-29  9:26 UTC (permalink / raw)
  To: Marc Zyngier
  Cc: crecklin, ard.biesheuvel, Catalin Marinas, bhsharma, Will Deacon,
	nd, linux-arm-kernel

On Tue, May 28, 2019 at 05:01:29PM +0000, Steve Capper wrote:
> On Tue, May 28, 2019 at 05:27:17PM +0100, Marc Zyngier wrote:
> > Hi Steve,
> 
> Hi Marc,
> 
> > 
> > On 28/05/2019 17:10, Steve Capper wrote:
> > > We assume that the direct linear map ends at ~0 in the KVM HYP map
> > 
> > Do we? This has stopped being the case since ed57cac83e05f ("arm64: KVM:
> > Introduce EL2 VA randomisation").
> > 
> > > intersection checking code. This assumption will become invalid later on
> > > for arm64 when the address space of the kernel is re-arranged.
> > > 
> > > This patch introduces a new constant PAGE_OFFSET_END for both arm and
> > > arm64 and defines it to be ~0UL
> > > 
> > > Signed-off-by: Steve Capper <steve.capper@arm.com>
> > > ---
> > >  arch/arm/include/asm/memory.h   | 1 +
> > >  arch/arm64/include/asm/memory.h | 1 +
> > >  virt/kvm/arm/mmu.c              | 4 ++--
> > >  3 files changed, 4 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
> > > index ed8fd0d19a3e..45c211fd50da 100644
> > > --- a/arch/arm/include/asm/memory.h
> > > +++ b/arch/arm/include/asm/memory.h
> > > @@ -24,6 +24,7 @@
> > >  
> > >  /* PAGE_OFFSET - the virtual address of the start of the kernel image */
> > >  #define PAGE_OFFSET		UL(CONFIG_PAGE_OFFSET)
> > > +#define PAGE_OFFSET_END		(~0UL)
> > >  
> > >  #ifdef CONFIG_MMU
> > >  
> > > diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
> > > index 8ffcf5a512bb..9fd387a63b9b 100644
> > > --- a/arch/arm64/include/asm/memory.h
> > > +++ b/arch/arm64/include/asm/memory.h
> > > @@ -52,6 +52,7 @@
> > >  	(UL(1) << VA_BITS) + 1)
> > >  #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
> > >  	(UL(1) << (VA_BITS - 1)) + 1)
> > > +#define PAGE_OFFSET_END		(~0UL)
> > >  #define KIMAGE_VADDR		(MODULES_END)
> > >  #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
> > >  #define BPF_JIT_REGION_SIZE	(SZ_128M)
> > > diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
> > > index 74b6582eaa3c..e1a777275b37 100644
> > > --- a/virt/kvm/arm/mmu.c
> > > +++ b/virt/kvm/arm/mmu.c
> > > @@ -2202,10 +2202,10 @@ int kvm_mmu_init(void)
> > >  	kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
> > >  	kvm_debug("HYP VA range: %lx:%lx\n",
> > >  		  kern_hyp_va(PAGE_OFFSET),
> > > -		  kern_hyp_va((unsigned long)high_memory - 1));
> > > +		  kern_hyp_va(PAGE_OFFSET_END));
> > >  
> > >  	if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
> > > -	    hyp_idmap_start <  kern_hyp_va((unsigned long)high_memory - 1) &&
> > > +	    hyp_idmap_start <  kern_hyp_va(PAGE_OFFSET_END) &&
> > >  	    hyp_idmap_start != (unsigned long)__hyp_idmap_text_start) {
> > >  		/*
> > >  		 * The idmap page is intersecting with the VA space,
> > > 
> > 
> > This definitely looks like a move in the wrong direction (reverting part
> > of the above commit). Is it that this is just an old patch that should
> > have been dropped? Or am I completely missing the point?
> 
> I suspect this is a case of me rebasing my series... poorly.
> I'll re-examine the logic here and either update the patch or the commit
> log to make it clearer.
>

Hi Marc,
Thanks, this was indeed an overzealous rebase. I've removed this patch from the
series and kvmtool is happy booting guests (when rebased to v5.2-rc2).

Cheers,
-- 
Steve

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 04/12] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's
  2019-05-28 17:11     ` Ard Biesheuvel
@ 2019-05-29  9:28       ` Steve Capper
  0 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-05-29  9:28 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: crecklin, Marc Zyngier, Catalin Marinas, Bhupesh Sharma,
	Will Deacon, nd, linux-arm-kernel

On Tue, May 28, 2019 at 07:11:22PM +0200, Ard Biesheuvel wrote:
> On Tue, 28 May 2019 at 19:07, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> >
> > On Tue, 28 May 2019 at 18:10, Steve Capper <steve.capper@arm.com> wrote:
> > >
> > > In order to prepare for a variable VA_BITS we need to account for a
> > > variable size VMEMMAP which in turn means the position of the fixed map
> > > is variable at compile time.
> > >
> > > Thus, we need to replace the BUILD_BUG_ON's that check the fixed map
> > > position with BUG_ON's.
> > >
> > > Signed-off-by: Steve Capper <steve.capper@arm.com>
> >
> > Do we still need this patch now that VMEMMAP_SIZE is a compile time
> > constant again? Or am I missing something?
> >
> 
> Never mind. You are moving the start of the linear region and the
> start of the vmemmap region.
> 
> I wonder if this is really necessry, though. On a 48-bit VA only
> system, the linear region could still start at 0xfff0_.... as long as
> we don't put any physical memory there (and we already move around our
> physical memory inside the linear region when KASLR is enabled)
> 
>

Thanks Ard,
This patch is actually superfluous, it was needed in earlier versions of
the series; but VMEMMAP is fixed at compile time.

I have removed this patch from the series. I will also see if I can
simplify the de-constifying patches and bitmask replacement logic.

Cheers,
-- 
Steve

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 12/12] arm64: mm: Introduce 52-bit Kernel VAs
  2019-05-28 16:10 ` [PATCH v2 12/12] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
@ 2019-06-05 15:34   ` Catalin Marinas
  2019-06-07 10:34     ` Steve Capper
  0 siblings, 1 reply; 31+ messages in thread
From: Catalin Marinas @ 2019-06-05 15:34 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, marc.zyngier, bhsharma, will.deacon,
	linux-arm-kernel

Hi Steve,

On Tue, May 28, 2019 at 05:10:26PM +0100, Steve Capper wrote:
>  config ARM64_USER_VA_BITS_52
>  	bool "52-bit (user)"
>  	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
> +	select HAS_VA_BITS_52
>  	help
>  	  Enable 52-bit virtual addressing for userspace when explicitly
>  	  requested via a hint to mmap(). The kernel will continue to
> @@ -751,11 +755,28 @@ config ARM64_USER_VA_BITS_52
>  
>  	  If unsure, select 48-bit virtual addressing instead.
>  
> +config ARM64_USER_KERNEL_VA_BITS_52
> +	bool "52-bit (user & kernel)"
> +	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
> +	select HAS_VA_BITS_52
> +	help
> +	  Enable 52-bit virtual addressing for userspace when explicitly
> +	  requested via a hint to mmap(). The kernel will also use 52-bit
> +	  virtual addresses for its own mappings (provided HW support for
> +	  this feature is available, otherwise it reverts to 48-bit).
> +
> +	  NOTE: Enabling 52-bit virtual addressing in conjunction with
> +	  ARMv8.3 Pointer Authentication will result in the PAC being
> +	  reduced from 7 bits to 3 bits, which may have a significant
> +	  impact on its susceptibility to brute-force attacks.
> +
> +	  If unsure, select 48-bit virtual addressing instead.

With the latest version of this series, it doesn't look like we will get a
performance hit for enabling 52-bit VA for the kernel (well, subject to
some testing). If that's the case, is it still worth having separate
user and user+kernel 52-bit VA Kconfig entries? It also gets pretty
confusing as I think you can now select both or just the latter for the
same feature.

BTW, do you plan to repost following comments? At a quick look, the
rest of the patches seem fine to me.

Thanks.

-- 
Catalin

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 12/12] arm64: mm: Introduce 52-bit Kernel VAs
  2019-06-05 15:34   ` Catalin Marinas
@ 2019-06-07 10:34     ` Steve Capper
  0 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-06-07 10:34 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: crecklin, ard.biesheuvel, Marc Zyngier, bhsharma, Will Deacon,
	nd, linux-arm-kernel

On Wed, Jun 05, 2019 at 04:34:41PM +0100, Catalin Marinas wrote:
> Hi Steve,

Hi Catalin,

> 
> On Tue, May 28, 2019 at 05:10:26PM +0100, Steve Capper wrote:
> >  config ARM64_USER_VA_BITS_52
> >  	bool "52-bit (user)"
> >  	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
> > +	select HAS_VA_BITS_52
> >  	help
> >  	  Enable 52-bit virtual addressing for userspace when explicitly
> >  	  requested via a hint to mmap(). The kernel will continue to
> > @@ -751,11 +755,28 @@ config ARM64_USER_VA_BITS_52
> >  
> >  	  If unsure, select 48-bit virtual addressing instead.
> >  
> > +config ARM64_USER_KERNEL_VA_BITS_52
> > +	bool "52-bit (user & kernel)"
> > +	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
> > +	select HAS_VA_BITS_52
> > +	help
> > +	  Enable 52-bit virtual addressing for userspace when explicitly
> > +	  requested via a hint to mmap(). The kernel will also use 52-bit
> > +	  virtual addresses for its own mappings (provided HW support for
> > +	  this feature is available, otherwise it reverts to 48-bit).
> > +
> > +	  NOTE: Enabling 52-bit virtual addressing in conjunction with
> > +	  ARMv8.3 Pointer Authentication will result in the PAC being
> > +	  reduced from 7 bits to 3 bits, which may have a significant
> > +	  impact on its susceptibility to brute-force attacks.
> > +
> > +	  If unsure, select 48-bit virtual addressing instead.
> 
> With the latest version of this series, it doesn't look like we will get a
> performance hit for enabling 52-bit VA for the kernel (well, subject to
> some testing). If that's the case, is it still worth having separate
> user and user+kernel 52-bit VA Kconfig entries? It also gets pretty
> confusing as I think you can now select both or just the latter for the
> same feature.

Yeah, I was wondering if it would be better to just have a single
52-bit VA configuration entry. I think the performance should be fine.

> 
> BTW, do you plan to repost following comments? At a quick look, the
> rest of the patches seem fine to me.
> 

Thanks :-)

I'll repost another version of the series with a few simplifications
that Ard and Marc mentioned along with single 52-bit VA configuration
option.

Cheers,
-- 
Steve

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 00/12] 52-bit kernel + user VAs
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (11 preceding siblings ...)
  2019-05-28 16:10 ` [PATCH v2 12/12] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
@ 2019-06-07 13:53 ` Anshuman Khandual
  2019-06-07 14:24   ` Steve Capper
  2019-06-10 10:40   ` Bhupesh Sharma
  13 siblings, 1 reply; 31+ messages in thread
From: Anshuman Khandual @ 2019-06-07 13:53 UTC (permalink / raw)
  To: Steve Capper, linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	bhsharma, will.deacon

Hello Steve,

On 05/28/2019 09:40 PM, Steve Capper wrote:
> This patch series adds support for 52-bit kernel VAs using some of the
> machinery already introduced by the 52-bit userspace VA code in 5.0.
> 
> As 52-bit virtual address support is an optional hardware feature,
> software support for 52-bit kernel VAs needs to be deduced at early boot
> time. If HW support is not available, the kernel falls back to 48-bit.

Just to summarize.

If the kernel is configured for 52 bits then it just sets up the infrastructure
for a 52-bit kernel VA space.

Then at boot:

a. Detects HW feature	   -> Use 52 bits VA on 52 bits infra
b. Does not detect feature -> Use 48 bits VA on 52 bits infra (adjusted)

> A significant proportion of this series focuses on "de-constifying"
> VA_BITS related constants.

I assume this is required for the situation (b) because of adjustments
at boot time which will be required after detecting that 52 bit is not
supported in the HW.
  
> 
> In order to allow for a KASAN shadow that changes size at boot time, one

Ditto as above ?

> must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
> start address. Also, it is highly desirable to maintain the same

Is there any particular reason why KASAN_SHADOW_START cannot be fixed and
KASAN_SHADOW_END "grow" instead ? Is it because we are trying to make start
address (which will be closer to VA_START) for all required sections variable ?

> function addresses in the kernel .text between VA sizes. Both of these

The kernel .text range should remain the same, as the kernel is already loaded
in memory and executing at boot while the effective VA_BITS is being fixed up
after detecting (or not) the 52-bit HW feature.

> requirements necessitate us to flip the kernel address space halves s.t.
> the direct linear map occupies the lower addresses.

Still trying to understand all the reasons for this VA space flip here.

The current kernel 48 bit VA range is split into two halves

1. Higher half	- [UL(~0) ...... PAGE_OFFSET] for linear mapping
2. Lower half	- [PAGE_OFFSET ... VA_START]  for everything else

The split in the middle is based on VA_BITS. When that becomes variable then
boot time computed lower half sections like kernel text, fixed mapping etc
become problematic as they are already running or being used and cannot be
relocated. This is caused by the fact the 48 bits to 52 bits adjustment can
only happen on the VA_START end as the other end UL(~0) is fixed. Hence move
those non-relocatable/fixed sections to the higher half so they don't get impacted
from the 48-52 bits adjustments. Linear mapping (so would PAGE_OFFSET) on the
other hand will have to grow/shrink (or not) during 48-52 bits adjustment.
Hence it can be aligned with the VA_START end instead. Is that correct, or am I
missing something?

> 
> In V2 of this series (apologies for the long delay from V1), the major
> change is that PAGE_OFFSET is retained as a constant. This allows for
> much faster virt_to_page computations. This is achieved by expanding the

virt_to_page(), __va() and __pa() need to be based on just linear offset
calculations, else there will be a performance impact.
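
That is, roughly (not the exact arm64 macros, just the expected shape of
the fast paths):

    phys = virt + physvirt_offset;            /* __pa() on a linear map address */
    virt = phys - physvirt_offset;            /* __va() */
    page = vmemmap + (phys >> PAGE_SHIFT);    /* pfn_to_page() */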

> size of the VMEMMAP region to accommodate a disjoint 52-bit/48-bit
> direct linear map. This has been found to work well in my testing, but I

I assume it means that we create linear mapping for the entire 52 bit VA
space but back it up with vmemmap struct page mapping only for the actual
bits (48 or 52) in use.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 00/12] 52-bit kernel + user VAs
  2019-06-07 13:53 ` [PATCH v2 00/12] 52-bit kernel + user VAs Anshuman Khandual
@ 2019-06-07 14:24   ` Steve Capper
  0 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-06-07 14:24 UTC (permalink / raw)
  To: Anshuman Khandual
  Cc: crecklin, ard.biesheuvel, Marc Zyngier, Catalin Marinas,
	bhsharma, Will Deacon, nd, linux-arm-kernel

On Fri, Jun 07, 2019 at 07:23:59PM +0530, Anshuman Khandual wrote:
> Hello Steve,

Hi Anshuman,

> 
> On 05/28/2019 09:40 PM, Steve Capper wrote:
> > This patch series adds support for 52-bit kernel VAs using some of the
> > machinery already introduced by the 52-bit userspace VA code in 5.0.
> > 
> > As 52-bit virtual address support is an optional hardware feature,
> > software support for 52-bit kernel VAs needs to be deduced at early boot
> > time. If HW support is not available, the kernel falls back to 48-bit.
> 
> Just to summarize.
> 
> If the kernel is configured for 52 bits then it just sets up the infrastructure
> for a 52-bit kernel VA space.
> 
> Then at boot:
> 
> a. Detects HW feature	   -> Use 52 bits VA on 52 bits infra
> b. Does not detect feature -> Use 48 bits VA on 52 bits infra (adjusted)
> 
> > A significant proportion of this series focuses on "de-constifying"
> > VA_BITS related constants.
> 
> I assume this is required for the situation (b) because of adjustments
> at boot time which will be required after detecting that 52 bit is not
> supported in the HW.
>   
> > 
> > In order to allow for a KASAN shadow that changes size at boot time, one
> 
> Ditto as above ?
> 
> > must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
> > start address. Also, it is highly desirable to maintain the same
> 
> Is there any particular reason why KASAN_SHADOW_START cannot be fixed and
> KASAN_SHADOW_END "grow" instead ? Is it because we are trying to make start
> address (which will be closer to VA_START) for all required sections variable ?
> 

KASAN has a mode of operation whereby the shadow offset computation:
shadowPtr = (ptr >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET 

is inlined into the executable with a constant scale and offset. As we
are dealing with TTBR1 style addresses (i.e. prefixed by 0xfff...) this
effectively means that the KASAN shadow end address becomes fixed (the
highest ptr is always ~0UL which is invariant to VA space size
changes).
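
For example (assuming the generic KASAN scale shift of 3):

    shadow(~0UL) = (~0UL >> 3) + KASAN_SHADOW_OFFSET
                 = 0x1fffffffffffffff + KASAN_SHADOW_OFFSET

which depends only on KASAN_SHADOW_OFFSET and not on VA_BITS, so the end
of the shadow stays put while its start has to move when VA_BITS changes.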

The only way that I am aware of fixing the start address is to somehow
patch the KASAN_SHADOW_OFFSET, or prohibit the KASAN inline mode (which
would then hurt performance).

> > function addresses in the kernel .text between VA sizes. Both of these
> 
> The kernel .text range should remain the same, as the kernel is already loaded
> in memory and executing at boot while the effective VA_BITS is being fixed up
> after detecting (or not) the 52-bit HW feature.
> 
> > requirements necessitate us to flip the kernel address space halves s.t.
> > the direct linear map occupies the lower addresses.
> 
> Still trying to understand all the reasons for this VA space flip here.
> 
> The current kernel 48 bit VA range is split into two halves
> 
> 1. Higher half	- [UL(~0) ...... PAGE_OFFSET] for linear mapping
> 2. Lower half	- [PAGE_OFFSET ... VA_START]  for everything else
> 
> The split in the middle is based on VA_BITS. When that becomes variable then
> boot time computed lower half sections like kernel text, fixed mapping etc
> become problematic as they are already running or being used and cannot be
> relocated. This is caused by the fact the 48 bits to 52 bits adjustment can
> only happen on the VA_START end as the other end UL(~0) is fixed. Hence move
> those non-relocatable/fixed sections to the higher half so they don't get impacted
> from the 48-52 bits adjustments. Linear mapping (so would PAGE_OFFSET) on the
> other hand will have to grow/shrink (or not) during 48-52 bits adjustment.
> Hence it can be aligned with the VA_START end instead. Is that correct, or am I
> missing something?
> 

Agreed with the .text addresses. For PAGE_OFFSET we don't strictly need
it to point to the start of the linear map if we grow the vmemmap and
adjust the (already variable) vmemmap offset (along with physvirt_offset).
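
Concretely, the adjustment in the final patch does (the hex value below
is just 2^52 - 2^48):

    vmemmap         += (_PAGE_OFFSET(48) - _PAGE_OFFSET(52)) >> PAGE_SHIFT;
                       /* = 0x000f000000000000 >> PAGE_SHIFT */
    physvirt_offset  = PHYS_OFFSET - _PAGE_OFFSET(48);

so the __pa()/__va()/virt_to_page() fast paths keep the same single-offset
form whichever VA size we end up running with.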

Also we need to flip the VA space to fit KASAN in as it will grow from
the start.

> > 
> > In V2 of this series (apologies for the long delay from V1), the major
> > change is that PAGE_OFFSET is retained as a constant. This allows for
> > much faster virt_to_page computations. This is achieved by expanding the
> 
> virt_to_page(), __va() and __pa() need to be based on just linear offset
> calculations, else there will be a performance impact.
> 

IIUC I've maintained equal perf for these, but if I've missed something
please shout :-).

> > size of the VMEMMAP region to accommodate a disjoint 52-bit/48-bit
> > direct linear map. This has been found to work well in my testing, but I
> 
> I assume it means that we create linear mapping for the entire 52 bit VA
> space but back it up with vmemmap struct page mapping only for the actual
> bits (48 or 52) in use.
>

That is my understanding too.

A big thank you for looking at this!

Cheers,
-- 
Steve

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 00/12] 52-bit kernel + user VAs
  2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
@ 2019-06-10 10:40   ` Bhupesh Sharma
  2019-05-28 16:10 ` [PATCH v2 02/12] arm64: mm: Flip kernel VA space Steve Capper
                     ` (12 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Bhupesh Sharma @ 2019-06-10 10:40 UTC (permalink / raw)
  To: Steve Capper, linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, marc.zyngier, catalin.marinas,
	will.deacon, kexec

Hi Steve,

Thanks for the v2. I still did not get much time to go through this in
depth and have a go with it on the LVA-supporting prototype platforms or
older CPUs (which don't support the ARMv8.2 LVA/LPA extensions) that I
have. Maybe I will give this a quick check on them in a day or two.

On 05/28/2019 09:40 PM, Steve Capper wrote:
> This patch series adds support for 52-bit kernel VAs using some of the
> machinery already introduced by the 52-bit userspace VA code in 5.0.
> 
> As 52-bit virtual address support is an optional hardware feature,
> software support for 52-bit kernel VAs needs to be deduced at early boot
> time. If HW support is not available, the kernel falls back to 48-bit.
> 
> A significant proportion of this series focuses on "de-constifying"
> VA_BITS related constants.
> 
> In order to allow for a KASAN shadow that changes size at boot time, one
> must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
> start address. Also, it is highly desirable to maintain the same
> function addresses in the kernel .text between VA sizes. Both of these
> requirements necessitate us to flip the kernel address space halves s.t.
> the direct linear map occupies the lower addresses.
> 
> In V2 of this series (apologies for the long delay from V1), the major
> change is that PAGE_OFFSET is retained as a constant. This allows for
> much faster virt_to_page computations. This is achieved by expanding the
> size of the VMEMMAP region to accommodate a disjoint 52-bit/48-bit
> direct linear map. This has been found to work well in my testing, but I
> would appreciate any feedback on this if it needs changing. To aid with
> git bisect, this logic is broken down into a few smaller patches.
> 
> As far as I'm aware, there are two outstanding issues with this series
> that need to be resolved:
>   1) Is the code patching for ttbr1_offset safe? I need to analyse this
>      a little more,
>   2) How can this memory map be advertised to kdump tools/documentation?
>      I was planning on getting the kernel VA structure agreed on, then I
>      would add the relevant exports/documentation.


Indeed, in the absence of corresponding changes to the Documentation section,
it is hard to visualize the changes being made in the memory map.

Also I would suggest that we note in the patchset itself (maybe in the git
log) that kdump tools (or even crash for that matter) will be broken
by this patchset - to prevent kernel bugs being reported.

BTW, James and I are already discussing more coherent methods (see [0]) 
to manage this exporting of information to user-land (so that we can
save ourselves from having to export new variables in the vmcoreinfo
in case we have similar changes to the virtual/physical address spaces 
in future).

I will work on and send a patchset addressing the same shortly.

[0]. http://lists.infradead.org/pipermail/kexec/2019-June/023105.html

Thanks,
Bhupesh


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 00/12] 52-bit kernel + user VAs
  2019-06-10 10:40   ` Bhupesh Sharma
@ 2019-06-10 10:54     ` Catalin Marinas
  -1 siblings, 0 replies; 31+ messages in thread
From: Catalin Marinas @ 2019-06-10 10:54 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: crecklin, ard.biesheuvel, marc.zyngier, kexec, Steve Capper,
	will.deacon, linux-arm-kernel

On Mon, Jun 10, 2019 at 04:10:50PM +0530, Bhupesh Sharma wrote:
> On 05/28/2019 09:40 PM, Steve Capper wrote:
> >   2) How can this memory map be advertised to kdump tools/documentation?
> >      I was planning on getting the kernel VA structure agreed on, then I
> >      would add the relevant exports/documentation.
> 
> Indeed, in the absence of corresponding changes to the Documentation
> section, it is hard to visualize the changes being made in the memory
> map.

We used to have some better documentation in the arm64 memory.txt until
commit 08375198b010 ("arm64: Determine the vmalloc/vmemmap space at
build time based on VA_BITS") which removed it in favour of what the
kernel was printing. Subsequently, the kernel VA layout printing was
also removed. It would be nice to bring back the memory.txt, even if it
is for a single configuration as per defconfig.

-- 
Catalin

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 00/12] 52-bit kernel + user VAs
  2019-06-10 10:54     ` Catalin Marinas
@ 2019-06-10 11:15       ` Bhupesh Sharma
  -1 siblings, 0 replies; 31+ messages in thread
From: Bhupesh Sharma @ 2019-06-10 11:15 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Christoph von Recklinghausen, Steve Capper, Marc Zyngier, kexec,
	Ard Biesheuvel, Will Deacon, linux-arm-kernel

Hi Catalin,

On Mon, Jun 10, 2019 at 4:24 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Mon, Jun 10, 2019 at 04:10:50PM +0530, Bhupesh Sharma wrote:
> > On 05/28/2019 09:40 PM, Steve Capper wrote:
> > >   2) How can this memory map be advertised to kdump tools/documentation?
> > >      I was planning on getting the kernel VA structure agreed on, then I
> > >      would add the relevant exports/documentation.
> >
> > Indeed, in the absence of corresponding changes to the Documentation
> > section, it is hard to visualize the changes being made in the memory
> > map.
>
> We used to have some better documentation in the arm64 memory.txt until
> commit 08375198b010 ("arm64: Determine the vmalloc/vmemmap space at
> build time based on VA_BITS") which removed it in favour of what the
> kernel was printing. Subsequently, the kernel VA layout printing was
> also removed. It would be nice to bring back the memory.txt, even if it
> is for a single configuration as per defconfig.

Indeed, that's what I suggested during the v1 review as well. See
<https://www.spinics.net/lists/arm-kernel/msg718096.html> for details.

Also, we may want to have a doc dedicated to the 52-bit address space
details on arm64, similar to what we currently have for x86 (see [1a]
and [1b]).

[1a]. https://github.com/torvalds/linux/blob/master/Documentation/x86/x86_64/5level-paging.txt
[1b]. https://github.com/torvalds/linux/blob/master/Documentation/x86/x86_64/mm.txt

Thanks,
Bhupesh

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 08/12] arm64: mm: Logic to make offset_ttbr1 conditional
  2019-05-28 16:10 ` [PATCH v2 08/12] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
@ 2019-06-10 14:18   ` Catalin Marinas
  2019-06-12 10:58     ` Steve Capper
  0 siblings, 1 reply; 31+ messages in thread
From: Catalin Marinas @ 2019-06-10 14:18 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, marc.zyngier, bhsharma, will.deacon,
	linux-arm-kernel

Hi Steve,

On Tue, May 28, 2019 at 05:10:22PM +0100, Steve Capper wrote:
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 039fbd822ec6..a42c392ed1e1 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -548,6 +548,14 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
>  	.macro	offset_ttbr1, ttbr
>  #ifdef CONFIG_ARM64_USER_VA_BITS_52
>  	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
> +#endif
> +
> +#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
> +alternative_if_not ARM64_HAS_52BIT_VA
> +	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
> +alternative_else
> +	nop
> +alternative_endif
>  #endif
>  	.endm

As a nitpick, you could write alternative_else_nop_endif instead of the
last three lines.
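
That is, roughly (untested, assuming the existing
alternative_else_nop_endif helper, which emits a NOP in the else
branch):

#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
alternative_if_not ARM64_HAS_52BIT_VA
	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
alternative_else_nop_endif
#endif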

Anyway, we use offset_ttbr1 in a few early cases via
idmap_cpu_replace_ttbr1, where the alternatives framework hasn't yet had
a chance to patch the instructions. I suggest you open-code the feature
check here; I don't think we use this on any fast path.

-- 
Catalin

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v2 08/12] arm64: mm: Logic to make offset_ttbr1 conditional
  2019-06-10 14:18   ` Catalin Marinas
@ 2019-06-12 10:58     ` Steve Capper
  0 siblings, 0 replies; 31+ messages in thread
From: Steve Capper @ 2019-06-12 10:58 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: crecklin, ard.biesheuvel, Marc Zyngier, bhsharma, Will Deacon,
	nd, linux-arm-kernel

On Mon, Jun 10, 2019 at 03:18:13PM +0100, Catalin Marinas wrote:
> Hi Steve,
> 

Hi Catalin,

> On Tue, May 28, 2019 at 05:10:22PM +0100, Steve Capper wrote:
> > diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> > index 039fbd822ec6..a42c392ed1e1 100644
> > --- a/arch/arm64/include/asm/assembler.h
> > +++ b/arch/arm64/include/asm/assembler.h
> > @@ -548,6 +548,14 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
> >  	.macro	offset_ttbr1, ttbr
> >  #ifdef CONFIG_ARM64_USER_VA_BITS_52
> >  	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
> > +#endif
> > +
> > +#ifdef CONFIG_ARM64_USER_KERNEL_VA_BITS_52
> > +alternative_if_not ARM64_HAS_52BIT_VA
> > +	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
> > +alternative_else
> > +	nop
> > +alternative_endif
> >  #endif
> >  	.endm
> 
> As a nitpick, you could write alternative_else_nop_endif instead of the
> last three lines.
> 

Thanks, yeah, that makes it easier to read.

> Anyway, we use offset_ttbr1 in a few early cases via
> idmap_cpu_replace_ttbr1, where the alternatives framework hasn't yet had
> a chance to patch the instructions. I suggest you open-code the feature
> check here; I don't think we use this on any fast path.
> 

Apologies for not spotting that. Okay, I'll query vabits_actual
inside idmap_cpu_replace_ttbr1.
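
Something along these lines, perhaps (a rough, untested sketch rather
than the final patch; register choice and whether vabits_actual is
safely reachable from the idmap at this point still need checking):

	adr_l	x3, vabits_actual
	ldr	x3, [x3]
	cmp	x3, #52
	b.eq	1f		// running with 52-bit VAs, no offset needed
	orr	x0, x0, #TTBR1_BADDR_4852_OFFSET // 48-bit fallback, offset TTBR1
1:
	msr	ttbr1_el1, x0	// x0 holds the new TTBR1 value, as before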

Cheers,
-- 
Steve

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2019-06-12 10:59 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-28 16:10 [PATCH v2 00/12] 52-bit kernel + user VAs Steve Capper
2019-05-28 16:10 ` [PATCH v2 01/12] arm/arm64: KVM: Formalise end of direct linear map Steve Capper
2019-05-28 16:27   ` Marc Zyngier
2019-05-28 17:01     ` Steve Capper
2019-05-29  9:26       ` Steve Capper
2019-05-28 16:10 ` [PATCH v2 02/12] arm64: mm: Flip kernel VA space Steve Capper
2019-05-28 16:10 ` [PATCH v2 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
2019-05-28 16:10 ` [PATCH v2 04/12] arm64: mm: Replace fixed map BUILD_BUG_ON's with BUG_ON's Steve Capper
2019-05-28 17:07   ` Ard Biesheuvel
2019-05-28 17:11     ` Ard Biesheuvel
2019-05-29  9:28       ` Steve Capper
2019-05-28 16:10 ` [PATCH v2 05/12] arm64: dump: Make kernel page table dumper dynamic again Steve Capper
2019-05-28 16:10 ` [PATCH v2 06/12] arm64: mm: Introduce VA_BITS_MIN Steve Capper
2019-05-28 16:10 ` [PATCH v2 07/12] arm64: mm: Introduce VA_BITS_ACTUAL Steve Capper
2019-05-28 16:10 ` [PATCH v2 08/12] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
2019-06-10 14:18   ` Catalin Marinas
2019-06-12 10:58     ` Steve Capper
2019-05-28 16:10 ` [PATCH v2 09/12] arm64: mm: Separate out vmemmap Steve Capper
2019-05-28 16:10 ` [PATCH v2 10/12] arm64: mm: Modify calculation of VMEMMAP_SIZE Steve Capper
2019-05-28 16:10 ` [PATCH v2 11/12] arm64: mm: Tweak PAGE_OFFSET logic Steve Capper
2019-05-28 16:10 ` [PATCH v2 12/12] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
2019-06-05 15:34   ` Catalin Marinas
2019-06-07 10:34     ` Steve Capper
2019-06-07 13:53 ` [PATCH v2 00/12] 52-bit kernel + user VAs Anshuman Khandual
2019-06-07 14:24   ` Steve Capper
2019-06-10 10:40 ` Bhupesh Sharma
2019-06-10 10:40   ` Bhupesh Sharma
2019-06-10 10:54   ` Catalin Marinas
2019-06-10 10:54     ` Catalin Marinas
2019-06-10 11:15     ` Bhupesh Sharma
2019-06-10 11:15       ` Bhupesh Sharma

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.