* [PATCH V5 00/12] 52-bit kernel + user VAs
@ 2019-08-07 15:55 Steve Capper
  2019-08-07 15:55 ` [PATCH V5 01/12] arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START Steve Capper
                   ` (12 more replies)
  0 siblings, 13 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

This patch series adds support for 52-bit kernel VAs using some of the
machinery already introduced by the 52-bit userspace VA code in 5.0.

As 52-bit virtual address support is an optional hardware feature,
software support for 52-bit kernel VAs needs to be deduced at early boot
time. If HW support is not available, the kernel falls back to 48-bit.

A significant proportion of this series focuses on "de-constifying"
VA_BITS related constants.

In order to allow for a KASAN shadow that changes size at boot time, one
must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
start address. Also, it is highly desirable to maintain the same
function addresses in the kernel .text between VA sizes. Both of these
requirements mean we must flip the kernel address space halves such that
the direct linear map occupies the lower addresses.

In V5 of this series, an extra patch removes the now-redundant
vabits_user variable.

In V4 of this series, an extra documentation patch is added to explain
both the layout of the memory and the implementation of 52-bit support.
Also added is a guard region after VMEMMAP to avoid ambiguity with
IS_ERR style pointers. Finally the bitmask optimisations for VMEMMAP and
PAGE_OFFSET are replaced with addition/subtraction in a new first patch
for the series.

In V3 of this series, the 52-bit user/48-bit kernel option is removed
and we are left with a single 52-bit VA option instead. The offset_ttbr1
conditional logic has been re-worked to directly read a system register
rather than rely on the alternatives framework (I couldn't actually see
a hotpath calling offset_ttbr1, and some parts of early boot relied on
offset_ttbr1 before the alternatives framework was initialised). Also some
spurious de-constifying changes have been removed.

In V2 of this series (apologies for the long delay from V1), the major
change is that PAGE_OFFSET is retained as a constant. This allows for
much faster virt_to_page computations. This is achieved by expanding the
size of the VMEMMAP region to accommodate a disjoint 52-bit/48-bit
direct linear map. This has been found to work well in my testing, but I
would appreciate any feedback on this if it needs changing. To aid with
git bisect, this logic is broken down into a few smaller patches.
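
As a rough illustration of the benefit (a sketch, not the exact kernel
macros; vmemmap_base here stands in for the real vmemmap array):

    /* with PAGE_OFFSET a compile-time constant, virt_to_page is pure
     * shift/add arithmetic with no extra memory load */
    #define virt_to_page_sketch(kaddr) \
        (&vmemmap_base[((unsigned long)(kaddr) - PAGE_OFFSET) >> PAGE_SHIFT])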

Steve Capper (12):
  arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and
    VMEMMAP_START
  arm64: mm: Flip kernel VA space
  arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  arm64: dump: De-constify VA_START and KASAN_SHADOW_START
  arm64: mm: Introduce VA_BITS_MIN
  arm64: mm: Introduce vabits_actual
  arm64: mm: Logic to make offset_ttbr1 conditional
  arm64: mm: Separate out vmemmap
  arm64: mm: Modify calculation of VMEMMAP_SIZE
  arm64: mm: Introduce 52-bit Kernel VAs
  arm64: mm: Remove vabits_user
  docs: arm64: Add layout and 52-bit info to memory document

 Documentation/arm64/kasan-offsets.sh   |  27 ++++++
 Documentation/arm64/memory.rst         | 123 +++++++++++++++++++------
 arch/arm64/Kconfig                     |  31 +++++--
 arch/arm64/Makefile                    |   8 --
 arch/arm64/include/asm/assembler.h     |  17 +++-
 arch/arm64/include/asm/efi.h           |   4 +-
 arch/arm64/include/asm/kasan.h         |  11 +--
 arch/arm64/include/asm/memory.h        |  58 +++++++-----
 arch/arm64/include/asm/mmu_context.h   |   4 +-
 arch/arm64/include/asm/pgtable-hwdef.h |   2 +-
 arch/arm64/include/asm/pgtable.h       |   6 +-
 arch/arm64/include/asm/pointer_auth.h  |   2 +-
 arch/arm64/include/asm/processor.h     |   4 +-
 arch/arm64/kernel/head.S               |  12 +--
 arch/arm64/kernel/hibernate-asm.S      |   8 +-
 arch/arm64/kernel/hibernate.c          |   2 +-
 arch/arm64/kernel/kaslr.c              |   6 +-
 arch/arm64/kvm/va_layout.c             |  14 +--
 arch/arm64/mm/dump.c                   |  22 ++++-
 arch/arm64/mm/fault.c                  |   5 +-
 arch/arm64/mm/init.c                   |  29 ++++--
 arch/arm64/mm/kasan_init.c             |   9 +-
 arch/arm64/mm/mmu.c                    |   9 +-
 arch/arm64/mm/proc.S                   |  11 ++-
 24 files changed, 289 insertions(+), 135 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

-- 
2.20.1



* [PATCH V5 01/12] arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 15:55 ` [PATCH V5 02/12] arm64: mm: Flip kernel VA space Steve Capper
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

Currently there are assumptions about the alignment of VMEMMAP_START
and PAGE_OFFSET that won't be valid after this series is applied.

These assumptions are in the form of bitwise operators being used
instead of addition and subtraction when calculating addresses.

This patch replaces these bitwise operators with addition/subtraction.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

---

New in V4
---
 arch/arm64/include/asm/memory.h | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index b7ba75809751..d3a951dc9878 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -295,21 +295,20 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define virt_to_page(kaddr)	pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
 #define _virt_addr_valid(kaddr)	pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
 #else
-#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
-#define __page_to_voff(kaddr)	(((u64)(kaddr) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
+#define __virt_to_pgoff(kaddr)	(((u64)(kaddr) - PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
+#define __page_to_voff(kaddr)	(((u64)(kaddr) - VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
 
 #define page_to_virt(page)	({					\
 	unsigned long __addr =						\
-		((__page_to_voff(page)) | PAGE_OFFSET);			\
+		((__page_to_voff(page)) + PAGE_OFFSET);			\
 	unsigned long __addr_tag =					\
 		 __tag_set(__addr, page_kasan_tag(page));		\
 	((void *)__addr_tag);						\
 })
 
-#define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
+#define virt_to_page(vaddr)	((struct page *)((__virt_to_pgoff(vaddr)) + VMEMMAP_START))
 
-#define _virt_addr_valid(kaddr)	pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
-					   + PHYS_OFFSET) >> PAGE_SHIFT)
+#define _virt_addr_valid(kaddr)	pfn_valid(__virt_to_phys((u64)(kaddr)) >> PAGE_SHIFT)
 #endif
 #endif
 
-- 
2.20.1



* [PATCH V5 02/12] arm64: mm: Flip kernel VA space
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
  2019-08-07 15:55 ` [PATCH V5 01/12] arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 16:12   ` Catalin Marinas
  2019-08-07 15:55 ` [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
                   ` (10 subsequent siblings)
  12 siblings, 1 reply; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

In order to allow for a KASAN shadow that changes size at boot time, one
must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
start address. Also, it is highly desirable to maintain the same
function addresses in the kernel .text between VA sizes. Both of these
requirements mean we must flip the kernel address space halves such that
the direct linear map occupies the lower addresses.

This patch puts the direct linear map in the lower addresses of the
kernel VA range and everything else in the higher ranges.

We need to adjust:
 *) KASAN shadow region placement logic,
 *) KASAN_SHADOW_OFFSET computation logic,
 *) virt_to_phys, phys_to_virt checks,
 *) page table dumper.

These are all small changes that need to take place atomically, so they
are bundled into this commit.

As part of the re-arrangement, a 2MB guard region (sized to preserve
alignment for the fixed map) is added after the vmemmap. Otherwise the
vmemmap could intersect with IS_ERR-style pointers.
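
For reference, the ambiguity stems from the generic IS_ERR_VALUE
definition, which treats the top MAX_ERRNO (4095) bytes of the address
space as error codes:

    #define IS_ERR_VALUE(x) \
        unlikely((unsigned long)(void *)(x) >= (unsigned long)-MAX_ERRNO)

Without a guard, a valid struct page pointer at the very end of the
vmemmap could therefore be mistaken for an error value.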

Signed-off-by: Steve Capper <steve.capper@arm.com>

---
Changed in V5 - simplify the kernel page table dumper patch as we have
2MB gap at the end of the kernel virtual address space.

Changed in V4 - we add a guard region after vmemmap to avoid ambiguity
with error pointers
---
 arch/arm64/Makefile              | 2 +-
 arch/arm64/include/asm/memory.h  | 8 ++++----
 arch/arm64/include/asm/pgtable.h | 2 +-
 arch/arm64/kernel/hibernate.c    | 2 +-
 arch/arm64/mm/dump.c             | 5 +++--
 arch/arm64/mm/init.c             | 9 +--------
 arch/arm64/mm/kasan_init.c       | 6 +++---
 arch/arm64/mm/mmu.c              | 4 ++--
 8 files changed, 16 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index bb1f1dbb34e8..b2400f9c1213 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -130,7 +130,7 @@ KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 #				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
 # in 32-bit arithmetic
 KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 32))) \
+	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
 	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
 	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
 
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index d3a951dc9878..98a87f0f40d5 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -38,9 +38,9 @@
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << (VA_BITS - 1)) + 1)
+#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
+	(UL(1) << VA_BITS) + 1)
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
@@ -48,7 +48,7 @@
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
 #define MODULES_VADDR		(BPF_JIT_REGION_END)
 #define MODULES_VSIZE		(SZ_128M)
-#define VMEMMAP_START		(PAGE_OFFSET - VMEMMAP_SIZE)
+#define VMEMMAP_START		(-VMEMMAP_SIZE - SZ_2M)
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
@@ -227,7 +227,7 @@ extern u64			vabits_user;
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
 
 #define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 3f5461f7b560..d274ea9a5f86 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -21,7 +21,7 @@
  *	and fixed mappings
  */
 #define VMALLOC_START		(MODULES_END)
-#define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
+#define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
 
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 9341fcc6e809..e130db05d932 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -496,7 +496,7 @@ int swsusp_arch_resume(void)
 		rc = -ENOMEM;
 		goto out;
 	}
-	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
+	rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, VA_START);
 	if (rc)
 		goto out;
 
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 82b3a7fdb4a6..beec87488e97 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -26,6 +26,8 @@
 #include <asm/ptdump.h>
 
 static const struct addr_marker address_markers[] = {
+	{ PAGE_OFFSET,			"Linear Mapping start" },
+	{ VA_START,			"Linear Mapping end" },
 #ifdef CONFIG_KASAN
 	{ KASAN_SHADOW_START,		"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
@@ -42,7 +44,6 @@ static const struct addr_marker address_markers[] = {
 	{ VMEMMAP_START,		"vmemmap start" },
 	{ VMEMMAP_START + VMEMMAP_SIZE,	"vmemmap end" },
 #endif
-	{ PAGE_OFFSET,			"Linear mapping" },
 	{ -1,				NULL },
 };
 
@@ -376,7 +377,7 @@ static void ptdump_initialize(void)
 static struct ptdump_info kernel_ptdump_info = {
 	.mm		= &init_mm,
 	.markers	= address_markers,
-	.base_addr	= VA_START,
+	.base_addr	= PAGE_OFFSET,
 };
 
 void ptdump_check_wx(void)
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f3c795278def..62927ed02229 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -301,7 +301,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+	const s64 linear_region_size = BIT(VA_BITS - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -309,13 +309,6 @@ void __init arm64_memblock_init(void)
 	/* Remove memory above our supported physical address size */
 	memblock_remove(1ULL << PHYS_MASK_SHIFT, ULLONG_MAX);
 
-	/*
-	 * Ensure that the linear region takes up exactly half of the kernel
-	 * virtual address space. This way, we can distinguish a linear address
-	 * from a kernel/module/vmalloc address by testing a single bit.
-	 */
-	BUILD_BUG_ON(linear_region_size != BIT(VA_BITS - 1));
-
 	/*
 	 * Select a suitable value for the base of physical memory.
 	 */
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 6cf97b904ebb..05edfe9b02e4 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -225,10 +225,10 @@ void __init kasan_init(void)
 	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
 			   early_pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
-	kasan_populate_early_shadow((void *)KASAN_SHADOW_START,
-				    (void *)mod_shadow_start);
+	kasan_populate_early_shadow(kasan_mem_to_shadow((void *) VA_START),
+				   (void *)mod_shadow_start);
 	kasan_populate_early_shadow((void *)kimg_shadow_end,
-				    kasan_mem_to_shadow((void *)PAGE_OFFSET));
+				   (void *)KASAN_SHADOW_END);
 
 	if (kimg_shadow_start > mod_shadow_end)
 		kasan_populate_early_shadow((void *)mod_shadow_end,
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 750a69dde39b..1d4247f9a496 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -398,7 +398,7 @@ static phys_addr_t pgd_pgtable_alloc(int shift)
 static void __init create_mapping_noalloc(phys_addr_t phys, unsigned long virt,
 				  phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not creating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
@@ -425,7 +425,7 @@ void __init create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
 static void update_mapping_prot(phys_addr_t phys, unsigned long virt,
 				phys_addr_t size, pgprot_t prot)
 {
-	if (virt < VMALLOC_START) {
+	if ((virt >= VA_START) && (virt < VMALLOC_START)) {
 		pr_warn("BUG: not updating mapping for %pa at 0x%016lx - outside kernel range\n",
 			&phys, virt);
 		return;
-- 
2.20.1



* [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
  2019-08-07 15:55 ` [PATCH V5 01/12] arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START Steve Capper
  2019-08-07 15:55 ` [PATCH V5 02/12] arm64: mm: Flip kernel VA space Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 16:12   ` Catalin Marinas
  2019-08-14 15:20   ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET) Mark Rutland
  2019-08-07 15:55 ` [PATCH V5 04/12] arm64: dump: De-constify VA_START and KASAN_SHADOW_START Steve Capper
                   ` (9 subsequent siblings)
  12 siblings, 2 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
line argument and affects the codegen of the inline address sanitiser.

Essentially, for an example memory access:
    *ptr1 = val;
The compiler will insert logic similar to the below:
    shadowValue = *((ptr1 >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET)
    if (somethingWrong(shadowValue))
        flagAnError();

This code sequence is inserted into many places, thus
KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
text.

If we want to run a single kernel binary with multiple address spaces,
then we need to do this with KASAN_SHADOW_OFFSET fixed.

Thankfully, due to the way the KASAN_SHADOW_OFFSET is used to provide
shadow addresses, we know that the end of the shadow region is constant
w.r.t. VA space size:
    KASAN_SHADOW_END = (~0UL >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET

This means that if we increase the size of the VA space, the start of
the KASAN region expands into lower addresses whilst the end of the
KASAN region is fixed.
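
As a worked example, taking the generic KASAN offset for 48/52-bit VAs
from the Kconfig change below (KASAN_SHADOW_SCALE_SHIFT = 3):

    KASAN_SHADOW_END            = (1UL << 61) + 0xdfffa00000000000
                                = 0xffffa00000000000
    KASAN_SHADOW_START (48-bit) = END - (1UL << (48 - 3)) = 0xffff800000000000
    KASAN_SHADOW_START (52-bit) = END - (1UL << (52 - 3)) = 0xfffda00000000000

i.e. only the start of the shadow moves with VA size; the end (and hence
the offset baked into the kernel text) stays put.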

Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
build scripts with the VA size used as a parameter. (There are build
time checks in the C code too, to ensure that expected values are being
derived.) It is sufficient, and indeed a simplification, to remove
the build scripts (and build-time checks) entirely and instead provide
the KASAN_SHADOW_OFFSET values directly.

This patch removes the logic to compute the KASAN_SHADOW_OFFSET in the
arm64 Makefile, and instead we adopt the approach used by x86 to supply
offset values in Kconfig. To help debug/develop future VA space changes,
the Makefile logic has been preserved in a script file in the arm64
Documentation folder.

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

Changed in V5 - preserved a BUILD_BUG_ON that was removed before, and
removed spurious KASAN_EXTRA logic
---
 Documentation/arm64/kasan-offsets.sh | 27 +++++++++++++++++++++++++++
 arch/arm64/Kconfig                   | 15 +++++++++++++++
 arch/arm64/Makefile                  |  8 --------
 arch/arm64/include/asm/kasan.h       | 11 ++++-------
 arch/arm64/include/asm/memory.h      |  8 +++++---
 5 files changed, 51 insertions(+), 18 deletions(-)
 create mode 100644 Documentation/arm64/kasan-offsets.sh

diff --git a/Documentation/arm64/kasan-offsets.sh b/Documentation/arm64/kasan-offsets.sh
new file mode 100644
index 000000000000..2b7a021db363
--- /dev/null
+++ b/Documentation/arm64/kasan-offsets.sh
@@ -0,0 +1,27 @@
+#!/bin/sh
+
+# Print out the KASAN_SHADOW_OFFSETS required to place the KASAN SHADOW
+# start address at the mid-point of the kernel VA space
+
+print_kasan_offset () {
+	printf "%02d\t" $1
+	printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << ($1 - 1 - 32))) \
+			+ (1 << ($1 - 32 - $2)) \
+			- (1 << (64 - 32 - $2)) ))
+}
+
+echo KASAN_SHADOW_SCALE_SHIFT = 3
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 3
+print_kasan_offset 47 3
+print_kasan_offset 42 3
+print_kasan_offset 39 3
+print_kasan_offset 36 3
+echo
+echo KASAN_SHADOW_SCALE_SHIFT = 4
+printf "VABITS\tKASAN_SHADOW_OFFSET\n"
+print_kasan_offset 48 4
+print_kasan_offset 47 4
+print_kasan_offset 42 4
+print_kasan_offset 39 4
+print_kasan_offset 36 4
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3adcec05b1f6..f7f23e47c28f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -297,6 +297,21 @@ config ARCH_SUPPORTS_UPROBES
 config ARCH_PROC_KCORE_TEXT
 	def_bool y
 
+config KASAN_SHADOW_OFFSET
+	hex
+	depends on KASAN
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
+	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
+	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
+	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
+	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
+	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
+	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
+	default 0xffffffffffffffff
+
 source "arch/arm64/Kconfig.platforms"
 
 menu "Kernel Features"
diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index b2400f9c1213..2b7db0d41498 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -126,14 +126,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 
-# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
-#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
-# in 32-bit arithmetic
-KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
-	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
-	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
-	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
-
 export	TEXT_OFFSET GZFLAGS
 
 core-y		+= arch/arm64/kernel/ arch/arm64/mm/
diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index b52aacd2c526..10d2add842da 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -18,11 +18,8 @@
  * KASAN_SHADOW_START: beginning of the kernel virtual addresses.
  * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/N of kernel virtual addresses,
  * where N = (1 << KASAN_SHADOW_SCALE_SHIFT).
- */
-#define KASAN_SHADOW_START      (VA_START)
-#define KASAN_SHADOW_END        (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
-
-/*
+ *
+ * KASAN_SHADOW_OFFSET:
  * This value is used to map an address to the corresponding shadow
  * address by the following formula:
  *     shadow_addr = (address >> KASAN_SHADOW_SCALE_SHIFT) + KASAN_SHADOW_OFFSET
@@ -33,8 +30,8 @@
  *      KASAN_SHADOW_OFFSET = KASAN_SHADOW_END -
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
-#define KASAN_SHADOW_OFFSET     (KASAN_SHADOW_END - (1ULL << \
-					(64 - KASAN_SHADOW_SCALE_SHIFT)))
+#define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
+#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 98a87f0f40d5..0530f283abc9 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -42,7 +42,7 @@
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << VA_BITS) + 1)
 #define KIMAGE_VADDR		(MODULES_END)
-#define BPF_JIT_REGION_START	(VA_START + KASAN_SHADOW_SIZE)
+#define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
 #define BPF_JIT_REGION_END	(BPF_JIT_REGION_START + BPF_JIT_REGION_SIZE)
 #define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
@@ -68,11 +68,13 @@
  * significantly, so double the (minimum) stack size when they are in use.
  */
 #ifdef CONFIG_KASAN
-#define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
+#define KASAN_SHADOW_OFFSET	_AC(CONFIG_KASAN_SHADOW_OFFSET, UL)
+#define KASAN_SHADOW_END	((UL(1) << (64 - KASAN_SHADOW_SCALE_SHIFT)) \
+					+ KASAN_SHADOW_OFFSET)
 #define KASAN_THREAD_SHIFT	1
 #else
-#define KASAN_SHADOW_SIZE	(0)
 #define KASAN_THREAD_SHIFT	0
+#define KASAN_SHADOW_END	(VA_START)
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
-- 
2.20.1



* [PATCH V5 04/12] arm64: dump: De-constify VA_START and KASAN_SHADOW_START
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (2 preceding siblings ...)
  2019-08-07 15:55 ` [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 15:55 ` [PATCH V5 05/12] arm64: mm: Introduce VA_BITS_MIN Steve Capper
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

The kernel page table dumper assumes that the placement of VA regions is
constant and determined at compile time. As we are about to introduce
variable VA logic, we need to be able to determine certain regions at
boot time.

Specifically the VA_START and KASAN_SHADOW_START will depend on whether
or not the system is booted with 52-bit kernel VAs.

This patch adds logic to the kernel page table dumper such that these
regions can be computed at boot time.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

---

Changed in V3 - simplified the scope of de-constifying to just VA_START
and KASAN_SHADOW_START.
---
 arch/arm64/mm/dump.c | 19 ++++++++++++++++---
 1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index beec87488e97..6ec75305828e 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -25,11 +25,20 @@
 #include <asm/pgtable-hwdef.h>
 #include <asm/ptdump.h>
 
-static const struct addr_marker address_markers[] = {
+
+enum address_markers_idx {
+	PAGE_OFFSET_NR = 0,
+	VA_START_NR,
+#ifdef CONFIG_KASAN
+	KASAN_START_NR,
+#endif
+};
+
+static struct addr_marker address_markers[] = {
 	{ PAGE_OFFSET,			"Linear Mapping start" },
-	{ VA_START,			"Linear Mapping end" },
+	{ 0 /* VA_START */,		"Linear Mapping end" },
 #ifdef CONFIG_KASAN
-	{ KASAN_SHADOW_START,		"Kasan shadow start" },
+	{ 0 /* KASAN_SHADOW_START */,	"Kasan shadow start" },
 	{ KASAN_SHADOW_END,		"Kasan shadow end" },
 #endif
 	{ MODULES_VADDR,		"Modules start" },
@@ -402,6 +411,10 @@ void ptdump_check_wx(void)
 
 static int ptdump_init(void)
 {
+	address_markers[VA_START_NR].start_address = VA_START;
+#ifdef CONFIG_KASAN
+	address_markers[KASAN_START_NR].start_address = KASAN_SHADOW_START;
+#endif
 	ptdump_initialize();
 	ptdump_debugfs_register(&kernel_ptdump_info, "kernel_page_tables");
 	return 0;
-- 
2.20.1



* [PATCH V5 05/12] arm64: mm: Introduce VA_BITS_MIN
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (3 preceding siblings ...)
  2019-08-07 15:55 ` [PATCH V5 04/12] arm64: dump: De-constify VA_START and KASAN_SHADOW_START Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 16:14   ` Catalin Marinas
  2019-08-07 15:55 ` [PATCH V5 06/12] arm64: mm: Introduce vabits_actual Steve Capper
                   ` (7 subsequent siblings)
  12 siblings, 1 reply; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

In order to support 52-bit kernel addresses detectable at boot time, the
kernel needs to know the most conservative VA_BITS possible, should it
need to fall back to this value due to lack of hardware support.

A new compile time constant VA_BITS_MIN is introduced in this patch and
it is employed in the KASAN end address, KASLR, and EFI stub.

For Arm, if 52-bit VA support is unavailable, the fallback is to 48 bits.

In other words: VA_BITS_MIN = min(48, VA_BITS)

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/efi.h       | 4 ++--
 arch/arm64/include/asm/memory.h    | 9 ++++++++-
 arch/arm64/include/asm/processor.h | 2 +-
 arch/arm64/kernel/head.S           | 2 +-
 arch/arm64/kernel/kaslr.c          | 6 +++---
 arch/arm64/mm/kasan_init.c         | 3 ++-
 6 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 8e79ce9c3f5c..f6dbc0149dae 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -79,7 +79,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 
 /*
  * On arm64, we have to ensure that the initrd ends up in the linear region,
- * which is a 1 GB aligned region of size '1UL << (VA_BITS - 1)' that is
+ * which is a 1 GB aligned region of size '1UL << (VA_BITS_MIN - 1)' that is
  * guaranteed to cover the kernel Image.
  *
  * Since the EFI stub is part of the kernel Image, we can relax the
@@ -90,7 +90,7 @@ static inline unsigned long efi_get_max_fdt_addr(unsigned long dram_base)
 static inline unsigned long efi_get_max_initrd_addr(unsigned long dram_base,
 						    unsigned long image_addr)
 {
-	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS - 1));
+	return (image_addr & ~(SZ_1G - 1UL)) + (1UL << (VA_BITS_MIN - 1));
 }
 
 #define efi_call_early(f, ...)		sys_table_arg->boottime->f(__VA_ARGS__)
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 0530f283abc9..99e13ac0e9b4 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -52,6 +52,13 @@
 #define PCI_IO_END		(VMEMMAP_START - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
+#if VA_BITS > 48
+#define VA_BITS_MIN		(48)
+#else
+#define VA_BITS_MIN		(VA_BITS)
+#endif
+#define _VA_START(va)		(UL(0xffffffffffffffff) - \
+				(UL(1) << ((va) - 1)) + 1)
 
 #define KERNEL_START      _text
 #define KERNEL_END        _end
@@ -74,7 +81,7 @@
 #define KASAN_THREAD_SHIFT	1
 #else
 #define KASAN_THREAD_SHIFT	0
-#define KASAN_SHADOW_END	(VA_START)
+#define KASAN_SHADOW_END	(_VA_START(VA_BITS_MIN))
 #endif
 
 #define MIN_THREAD_SHIFT	(14 + KASAN_THREAD_SHIFT)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 844e2964b0f5..0e1f2770192a 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -42,7 +42,7 @@
  * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
  */
 
-#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
+#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS_MIN)
 #define TASK_SIZE_64		(UL(1) << vabits_user)
 
 #ifdef CONFIG_COMPAT
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 2cdacd1c141b..ac58c69993ec 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -314,7 +314,7 @@ __create_page_tables:
 	mov	x5, #52
 	cbnz	x6, 1f
 #endif
-	mov	x5, #VA_BITS
+	mov	x5, #VA_BITS_MIN
 1:
 	adr_l	x6, vabits_user
 	str	x5, [x6]
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 708051655ad9..5a59f7567f9c 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -116,15 +116,15 @@ u64 __init kaslr_early_init(u64 dt_phys)
 	/*
 	 * OK, so we are proceeding with KASLR enabled. Calculate a suitable
 	 * kernel image offset from the seed. Let's place the kernel in the
-	 * middle half of the VMALLOC area (VA_BITS - 2), and stay clear of
+	 * middle half of the VMALLOC area (VA_BITS_MIN - 2), and stay clear of
 	 * the lower and upper quarters to avoid colliding with other
 	 * allocations.
 	 * Even if we could randomize at page granularity for 16k and 64k pages,
 	 * let's always round to 2 MB so we don't interfere with the ability to
 	 * map using contiguous PTEs
 	 */
-	mask = ((1UL << (VA_BITS - 2)) - 1) & ~(SZ_2M - 1);
-	offset = BIT(VA_BITS - 3) + (seed & mask);
+	mask = ((1UL << (VA_BITS_MIN - 2)) - 1) & ~(SZ_2M - 1);
+	offset = BIT(VA_BITS_MIN - 3) + (seed & mask);
 
 	/* use the top 16 bits to randomize the linear region */
 	memstart_offset_seed = seed >> 48;
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 05edfe9b02e4..725222271474 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -156,7 +156,8 @@ asmlinkage void __init kasan_early_init(void)
 {
 	BUILD_BUG_ON(KASAN_SHADOW_OFFSET !=
 		KASAN_SHADOW_END - (1UL << (64 - KASAN_SHADOW_SCALE_SHIFT)));
-	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_START, PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS), PGDIR_SIZE));
+	BUILD_BUG_ON(!IS_ALIGNED(_KASAN_SHADOW_START(VA_BITS_MIN), PGDIR_SIZE));
 	BUILD_BUG_ON(!IS_ALIGNED(KASAN_SHADOW_END, PGDIR_SIZE));
 	kasan_pgd_populate(KASAN_SHADOW_START, KASAN_SHADOW_END, NUMA_NO_NODE,
 			   true);
-- 
2.20.1



* [PATCH V5 06/12] arm64: mm: Introduce vabits_actual
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (4 preceding siblings ...)
  2019-08-07 15:55 ` [PATCH V5 05/12] arm64: mm: Introduce VA_BITS_MIN Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 16:16   ` Catalin Marinas
  2019-08-07 15:55 ` [PATCH V5 07/12] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

In order to support 52-bit kernel addresses detectable at boot time, one
needs to know the actual VA_BITS detected. A new variable vabits_actual
is introduced in this commit and employed for the KVM hypervisor layout,
KASAN, fault handling and phys-to/from-virt translation where there
would normally be compile time constants.

In order to maintain performance in phys_to_virt, another variable
physvirt_offset is introduced.
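
For clarity, the translation logic after this patch boils down to
(mirroring the hunks below):

    /* set once in arm64_memblock_init() */
    physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;

    #define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))
    #define __lm_to_phys(addr)	(((addr) + physvirt_offset))

i.e. a single variable load plus an add/subtract on each translation.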

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

Changed in V5, got rid of VA_BITS_ACTUAL macro
---
 arch/arm64/include/asm/kasan.h       |  2 +-
 arch/arm64/include/asm/memory.h      | 11 ++++++-----
 arch/arm64/include/asm/mmu_context.h |  2 +-
 arch/arm64/kernel/head.S             |  5 +++++
 arch/arm64/kvm/va_layout.c           | 14 +++++++-------
 arch/arm64/mm/fault.c                |  4 ++--
 arch/arm64/mm/init.c                 |  7 ++++++-
 arch/arm64/mm/mmu.c                  |  3 +++
 8 files changed, 31 insertions(+), 17 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index 10d2add842da..b0dc4abc3589 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -31,7 +31,7 @@
  *				(1ULL << (64 - KASAN_SHADOW_SCALE_SHIFT))
  */
 #define _KASAN_SHADOW_START(va)	(KASAN_SHADOW_END - (1UL << ((va) - KASAN_SHADOW_SCALE_SHIFT)))
-#define KASAN_SHADOW_START      _KASAN_SHADOW_START(VA_BITS)
+#define KASAN_SHADOW_START      _KASAN_SHADOW_START(vabits_actual)
 
 void kasan_init(void);
 void kasan_copy_shadow(pgd_t *pgdir);
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 99e13ac0e9b4..91ba2cef095a 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -37,8 +37,6 @@
  * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define VA_START		(UL(0xffffffffffffffff) - \
-	(UL(1) << (VA_BITS - 1)) + 1)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
 	(UL(1) << VA_BITS) + 1)
 #define KIMAGE_VADDR		(MODULES_END)
@@ -166,10 +164,13 @@
 #endif
 
 #ifndef __ASSEMBLY__
+extern u64			vabits_actual;
+#define VA_START		(_VA_START(vabits_actual))
 
 #include <linux/bitops.h>
 #include <linux/mmdebug.h>
 
+extern s64			physvirt_offset;
 extern s64			memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
 #define PHYS_OFFSET		({ VM_BUG_ON(memstart_addr & 1); memstart_addr; })
@@ -236,9 +237,9 @@ extern u64			vabits_user;
  * space. Testing the top bit for the start of the region is a
  * sufficient check.
  */
-#define __is_lm_address(addr)	(!((addr) & BIT(VA_BITS - 1)))
+#define __is_lm_address(addr)	(!((addr) & BIT(vabits_actual - 1)))
 
-#define __lm_to_phys(addr)	(((addr) & ~PAGE_OFFSET) + PHYS_OFFSET)
+#define __lm_to_phys(addr)	(((addr) + physvirt_offset))
 #define __kimg_to_phys(addr)	((addr) - kimage_voffset)
 
 #define __virt_to_phys_nodebug(x) ({					\
@@ -257,7 +258,7 @@ extern phys_addr_t __phys_addr_symbol(unsigned long x);
 #define __phys_addr_symbol(x)	__pa_symbol_nodebug(x)
 #endif
 
-#define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET) | PAGE_OFFSET)
+#define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))
 #define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 7ed0adb187a8..670003a55d28 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -95,7 +95,7 @@ static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 	isb();
 }
 
-#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(VA_BITS))
+#define cpu_set_default_tcr_t0sz()	__cpu_set_tcr_t0sz(TCR_T0SZ(vabits_actual))
 #define cpu_set_idmap_tcr_t0sz()	__cpu_set_tcr_t0sz(idmap_t0sz)
 
 /*
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index ac58c69993ec..6dc7349868d9 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -321,6 +321,11 @@ __create_page_tables:
 	dmb	sy
 	dc	ivac, x6		// Invalidate potentially stale cache line
 
+	adr_l	x6, vabits_actual
+	str	x5, [x6]
+	dmb	sy
+	dc	ivac, x6		// Invalidate potentially stale cache line
+
 	/*
 	 * VA_BITS may be too small to allow for an ID mapping to be created
 	 * that covers system RAM if that is located sufficiently high in the
diff --git a/arch/arm64/kvm/va_layout.c b/arch/arm64/kvm/va_layout.c
index acd8084f1f2c..2cf7d4b606c3 100644
--- a/arch/arm64/kvm/va_layout.c
+++ b/arch/arm64/kvm/va_layout.c
@@ -29,25 +29,25 @@ static void compute_layout(void)
 	int kva_msb;
 
 	/* Where is my RAM region? */
-	hyp_va_msb  = idmap_addr & BIT(VA_BITS - 1);
-	hyp_va_msb ^= BIT(VA_BITS - 1);
+	hyp_va_msb  = idmap_addr & BIT(vabits_actual - 1);
+	hyp_va_msb ^= BIT(vabits_actual - 1);
 
 	kva_msb = fls64((u64)phys_to_virt(memblock_start_of_DRAM()) ^
 			(u64)(high_memory - 1));
 
-	if (kva_msb == (VA_BITS - 1)) {
+	if (kva_msb == (vabits_actual - 1)) {
 		/*
 		 * No space in the address, let's compute the mask so
-		 * that it covers (VA_BITS - 1) bits, and the region
+		 * that it covers (vabits_actual - 1) bits, and the region
 		 * bit. The tag stays set to zero.
 		 */
-		va_mask  = BIT(VA_BITS - 1) - 1;
+		va_mask  = BIT(vabits_actual - 1) - 1;
 		va_mask |= hyp_va_msb;
 	} else {
 		/*
 		 * We do have some free bits to insert a random tag.
 		 * Hyp VAs are now created from kernel linear map VAs
-		 * using the following formula (with V == VA_BITS):
+		 * using the following formula (with V == vabits_actual):
 		 *
 		 *  63 ... V |     V-1    | V-2 .. tag_lsb | tag_lsb - 1 .. 0
 		 *  ---------------------------------------------------------
@@ -55,7 +55,7 @@ static void compute_layout(void)
 		 */
 		tag_lsb = kva_msb;
 		va_mask = GENMASK_ULL(tag_lsb - 1, 0);
-		tag_val = get_random_long() & GENMASK_ULL(VA_BITS - 2, tag_lsb);
+		tag_val = get_random_long() & GENMASK_ULL(vabits_actual - 2, tag_lsb);
 		tag_val |= hyp_va_msb;
 		tag_val >>= tag_lsb;
 	}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 9568c116ac7f..86fc1aff3462 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -138,9 +138,9 @@ static void show_pte(unsigned long addr)
 		return;
 	}
 
-	pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp=%016lx\n",
+	pr_alert("%s pgtable: %luk pages, %llu-bit VAs, pgdp=%016lx\n",
 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
-		 mm == &init_mm ? VA_BITS : (int)vabits_user,
+		 mm == &init_mm ? vabits_actual : (int)vabits_user,
 		 (unsigned long)virt_to_phys(mm->pgd));
 	pgdp = pgd_offset(mm, addr);
 	pgd = READ_ONCE(*pgdp);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 62927ed02229..e752f46d430e 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -50,6 +50,9 @@
 s64 memstart_addr __ro_after_init = -1;
 EXPORT_SYMBOL(memstart_addr);
 
+s64 physvirt_offset __ro_after_init;
+EXPORT_SYMBOL(physvirt_offset);
+
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
@@ -301,7 +304,7 @@ static void __init fdt_enforce_memory_region(void)
 
 void __init arm64_memblock_init(void)
 {
-	const s64 linear_region_size = BIT(VA_BITS - 1);
+	const s64 linear_region_size = BIT(vabits_actual - 1);
 
 	/* Handle linux,usable-memory-range property */
 	fdt_enforce_memory_region();
@@ -315,6 +318,8 @@ void __init arm64_memblock_init(void)
 	memstart_addr = round_down(memblock_start_of_DRAM(),
 				   ARM64_MEMSTART_ALIGN);
 
+	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 1d4247f9a496..07b30e6d17f8 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -43,6 +43,9 @@ u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 u64 vabits_user __ro_after_init;
 EXPORT_SYMBOL(vabits_user);
 
+u64 __section(".mmuoff.data.write") vabits_actual;
+EXPORT_SYMBOL(vabits_actual);
+
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
 
-- 
2.20.1



* [PATCH V5 07/12] arm64: mm: Logic to make offset_ttbr1 conditional
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (5 preceding siblings ...)
  2019-08-07 15:55 ` [PATCH V5 06/12] arm64: mm: Introduce vabits_actual Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 15:55 ` [PATCH V5 08/12] arm64: mm: Separate out vmemmap Steve Capper
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

When running with a 52-bit userspace VA and a 48-bit kernel VA, we offset
ttbr1_el1 to allow the kernel pagetables with a 52-bit PTRS_PER_PGD to
be used for both userspace and kernel.

Moving on to a 52-bit kernel VA, we no longer require this offset to
ttbr1_el1 when running on a system with HW support for 52-bit VAs.

This patch introduces conditional logic to offset_ttbr1 to query
SYS_ID_AA64MMFR2_EL1 whenever 52-bit VAs are selected. If there is HW
support for 52-bit VAs then the ttbr1 offset is skipped.

We choose to read a system register rather than vabits_actual because
offset_ttbr1 can be called in places where the kernel data is not
actually mapped.

Calls to offset_ttbr1 appear to be made from rarely called code paths so
this extra logic is not expected to adversely affect performance.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

---

Changed in V3, move away from alternative framework as offset_ttbr1 can
be called in places before the alternative framework has been
initialised.
---
 arch/arm64/include/asm/assembler.h | 12 ++++++++++--
 arch/arm64/kernel/head.S           |  2 +-
 arch/arm64/kernel/hibernate-asm.S  |  8 ++++----
 arch/arm64/mm/proc.S               |  6 +++---
 4 files changed, 18 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index e3a15c751b13..ede368bafa2c 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -538,9 +538,17 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * In future this may be nop'ed out when dealing with 52-bit kernel VAs.
  * 	ttbr: Value of ttbr to set, modified.
  */
-	.macro	offset_ttbr1, ttbr
+	.macro	offset_ttbr1, ttbr, tmp
 #ifdef CONFIG_ARM64_USER_VA_BITS_52
 	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
+#endif
+
+#ifdef CONFIG_ARM64_VA_BITS_52
+	mrs_s	\tmp, SYS_ID_AA64MMFR2_EL1
+	and	\tmp, \tmp, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
+	cbnz	\tmp, .Lskipoffs_\@
+	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
+.Lskipoffs_\@ :
 #endif
 	.endm
 
@@ -550,7 +558,7 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * to be nop'ed out when dealing with 52-bit kernel VAs.
  */
 	.macro	restore_ttbr1, ttbr
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_VA_BITS_52)
 	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
 #endif
 	.endm
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 6dc7349868d9..a96dc4386c7c 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -777,7 +777,7 @@ ENTRY(__enable_mmu)
 	phys_to_ttbr x1, x1
 	phys_to_ttbr x2, x2
 	msr	ttbr0_el1, x2			// load TTBR0
-	offset_ttbr1 x1
+	offset_ttbr1 x1, x3
 	msr	ttbr1_el1, x1			// load TTBR1
 	isb
 	msr	sctlr_el1, x0
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
index 2f4a2ce7264b..38bcd4d4e43b 100644
--- a/arch/arm64/kernel/hibernate-asm.S
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -22,14 +22,14 @@
  * Even switching to our copied tables will cause a changed output address at
  * each stage of the walk.
  */
-.macro break_before_make_ttbr_switch zero_page, page_table, tmp
+.macro break_before_make_ttbr_switch zero_page, page_table, tmp, tmp2
 	phys_to_ttbr \tmp, \zero_page
 	msr	ttbr1_el1, \tmp
 	isb
 	tlbi	vmalle1
 	dsb	nsh
 	phys_to_ttbr \tmp, \page_table
-	offset_ttbr1 \tmp
+	offset_ttbr1 \tmp, \tmp2
 	msr	ttbr1_el1, \tmp
 	isb
 .endm
@@ -70,7 +70,7 @@ ENTRY(swsusp_arch_suspend_exit)
 	 * We execute from ttbr0, change ttbr1 to our copied linear map tables
 	 * with a break-before-make via the zero page
 	 */
-	break_before_make_ttbr_switch	x5, x0, x6
+	break_before_make_ttbr_switch	x5, x0, x6, x8
 
 	mov	x21, x1
 	mov	x30, x2
@@ -101,7 +101,7 @@ ENTRY(swsusp_arch_suspend_exit)
 	dsb	ish		/* wait for PoU cleaning to finish */
 
 	/* switch to the restored kernels page tables */
-	break_before_make_ttbr_switch	x25, x21, x6
+	break_before_make_ttbr_switch	x25, x21, x6, x8
 
 	ic	ialluis
 	dsb	ish
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 7dbf2be470f6..8d289ff7584d 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -168,7 +168,7 @@ ENDPROC(cpu_do_switch_mm)
 .macro	__idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
 	adrp	\tmp1, empty_zero_page
 	phys_to_ttbr \tmp2, \tmp1
-	offset_ttbr1 \tmp2
+	offset_ttbr1 \tmp2, \tmp1
 	msr	ttbr1_el1, \tmp2
 	isb
 	tlbi	vmalle1
@@ -187,7 +187,7 @@ ENTRY(idmap_cpu_replace_ttbr1)
 
 	__idmap_cpu_set_reserved_ttbr1 x1, x3
 
-	offset_ttbr1 x0
+	offset_ttbr1 x0, x3
 	msr	ttbr1_el1, x0
 	isb
 
@@ -362,7 +362,7 @@ __idmap_kpti_secondary:
 	cbnz	w18, 1b
 
 	/* All done, act like nothing happened */
-	offset_ttbr1 swapper_ttb
+	offset_ttbr1 swapper_ttb, x18
 	msr	ttbr1_el1, swapper_ttb
 	isb
 	ret
-- 
2.20.1



* [PATCH V5 08/12] arm64: mm: Separate out vmemmap
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (6 preceding siblings ...)
  2019-08-07 15:55 ` [PATCH V5 07/12] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 15:55 ` [PATCH V5 09/12] arm64: mm: Modify calculation of VMEMMAP_SIZE Steve Capper
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

vmemmap is a preprocessor definition that depends on a variable,
memstart_addr. In a later patch we will need to expand the size of
the VMEMMAP region and optionally modify vmemmap depending upon
whether or not hardware support is available for 52-bit virtual
addresses.

This patch changes vmemmap to be a variable. As the old definition
depended on a variable load, this should not affect performance
noticeably.
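
For reference, the variable is assigned once at boot (see the init.c
hunk below), biased by the first PFN so that indexing works directly:

    vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
    /* hence &vmemmap[pfn] is the struct page for physical frame pfn */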

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 4 ++--
 arch/arm64/mm/init.c             | 5 +++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d274ea9a5f86..0eedf8664ecc 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -23,8 +23,6 @@
 #define VMALLOC_START		(MODULES_END)
 #define VMALLOC_END		(- PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
-#define vmemmap			((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
-
 #define FIRST_USER_ADDRESS	0UL
 
 #ifndef __ASSEMBLY__
@@ -35,6 +33,8 @@
 #include <linux/mm_types.h>
 #include <linux/sched.h>
 
+extern struct page *vmemmap;
+
 extern void __pte_error(const char *file, int line, unsigned long val);
 extern void __pmd_error(const char *file, int line, unsigned long val);
 extern void __pud_error(const char *file, int line, unsigned long val);
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index e752f46d430e..2940221e5519 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -53,6 +53,9 @@ EXPORT_SYMBOL(memstart_addr);
 s64 physvirt_offset __ro_after_init;
 EXPORT_SYMBOL(physvirt_offset);
 
+struct page *vmemmap __ro_after_init;
+EXPORT_SYMBOL(vmemmap);
+
 phys_addr_t arm64_dma_phys_limit __ro_after_init;
 
 #ifdef CONFIG_KEXEC_CORE
@@ -320,6 +323,8 @@ void __init arm64_memblock_init(void)
 
 	physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;
 
+	vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
-- 
2.20.1



* [PATCH V5 09/12] arm64: mm: Modify calculation of VMEMMAP_SIZE
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (7 preceding siblings ...)
  2019-08-07 15:55 ` [PATCH V5 08/12] arm64: mm: Separate out vmemmap Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 15:55 ` [PATCH V5 10/12] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

In a later patch we will need to have a slightly larger VMEMMAP region
to accommodate boot time selection between 48/52-bit kernel VAs.

This patch modifies the formula for computing VMEMMAP_SIZE to depend
explicitly on the PAGE_OFFSET and start of kernel addressable memory.
(This allows for a slightly larger direct linear map in future).
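
As a worked example, with 64K pages (PAGE_SHIFT = 16), VA_BITS = 52 and
VA_BITS_MIN = 48, and assuming a 64-byte struct page
(STRUCT_PAGE_MAX_SHIFT = 6):

    VMEMMAP_SIZE = (_VA_START(48) - PAGE_OFFSET) >> (16 - 6)
                 = (0xffff800000000000 - 0xfff0000000000000) >> 10
                 = 0x3e000000000		(~4TB of VA reserved)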

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/include/asm/memory.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 91ba2cef095a..2d5c57d13572 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -26,8 +26,15 @@
 /*
  * VMEMMAP_SIZE - allows the whole linear region to be covered by
  *                a struct page array
+ *
+ * If we are configured with a 52-bit kernel VA then our VMEMMAP_SIZE
+ * needs to cover the memory region from the beginning of the 52-bit
+ * PAGE_OFFSET all the way to VA_START for 48-bit. This allows us to
+ * keep a constant PAGE_OFFSET and "fallback" to using the higher end
+ * of the VMEMMAP where 52-bit support is not available in hardware.
  */
-#define VMEMMAP_SIZE (UL(1) << (VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT))
+#define VMEMMAP_SIZE ((_VA_START(VA_BITS_MIN) - PAGE_OFFSET) \
+			>> (PAGE_SHIFT - STRUCT_PAGE_MAX_SHIFT))
 
 /*
  * PAGE_OFFSET - the virtual address of the start of the linear map (top
-- 
2.20.1



* [PATCH V5 10/12] arm64: mm: Introduce 52-bit Kernel VAs
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (8 preceding siblings ...)
  2019-08-07 15:55 ` [PATCH V5 09/12] arm64: mm: Modify calculation of VMEMMAP_SIZE Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 15:55 ` [PATCH V5 11/12] arm64: mm: Remove vabits_user Steve Capper
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

Most of the machinery is now in place to enable 52-bit kernel VAs that
are detectable at boot time.

This patch adds a Kconfig option for 52-bit user and kernel addresses
and plumbs in the requisite CONFIG_ macros as well as sets TCR.T1SZ,
physvirt_offset and vmemmap at early boot.

To simplify things this patch also removes the 52-bit user/48-bit kernel
kconfig option.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 arch/arm64/Kconfig                     | 20 +++++++++++---------
 arch/arm64/include/asm/assembler.h     | 13 ++++++++-----
 arch/arm64/include/asm/memory.h        |  7 ++++---
 arch/arm64/include/asm/mmu_context.h   |  2 +-
 arch/arm64/include/asm/pgtable-hwdef.h |  2 +-
 arch/arm64/kernel/head.S               |  4 ++--
 arch/arm64/mm/init.c                   | 10 ++++++++++
 arch/arm64/mm/proc.S                   |  3 ++-
 8 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f7f23e47c28f..f5f7cb75a698 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -286,7 +286,7 @@ config PGTABLE_LEVELS
 	int
 	default 2 if ARM64_16K_PAGES && ARM64_VA_BITS_36
 	default 2 if ARM64_64K_PAGES && ARM64_VA_BITS_42
-	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52)
+	default 3 if ARM64_64K_PAGES && (ARM64_VA_BITS_48 || ARM64_VA_BITS_52)
 	default 3 if ARM64_4K_PAGES && ARM64_VA_BITS_39
 	default 3 if ARM64_16K_PAGES && ARM64_VA_BITS_47
 	default 4 if !ARM64_64K_PAGES && ARM64_VA_BITS_48
@@ -300,12 +300,12 @@ config ARCH_PROC_KCORE_TEXT
 config KASAN_SHADOW_OFFSET
 	hex
 	depends on KASAN
-	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
+	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && !KASAN_SW_TAGS
 	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
 	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
 	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
 	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
-	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
+	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_VA_BITS_52) && KASAN_SW_TAGS
 	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
 	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
 	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
@@ -759,13 +759,14 @@ config ARM64_VA_BITS_47
 config ARM64_VA_BITS_48
 	bool "48-bit"
 
-config ARM64_USER_VA_BITS_52
-	bool "52-bit (user)"
+config ARM64_VA_BITS_52
+	bool "52-bit"
 	depends on ARM64_64K_PAGES && (ARM64_PAN || !ARM64_SW_TTBR0_PAN)
 	help
 	  Enable 52-bit virtual addressing for userspace when explicitly
-	  requested via a hint to mmap(). The kernel will continue to
-	  use 48-bit virtual addresses for its own mappings.
+	  requested via a hint to mmap(). The kernel will also use 52-bit
+	  virtual addresses for its own mappings (provided HW support for
+	  this feature is available, otherwise it reverts to 48-bit).
 
 	  NOTE: Enabling 52-bit virtual addressing in conjunction with
 	  ARMv8.3 Pointer Authentication will result in the PAC being
@@ -778,7 +779,7 @@ endchoice
 
 config ARM64_FORCE_52BIT
 	bool "Force 52-bit virtual addresses for userspace"
-	depends on ARM64_USER_VA_BITS_52 && EXPERT
+	depends on ARM64_VA_BITS_52 && EXPERT
 	help
 	  For systems with 52-bit userspace VAs enabled, the kernel will attempt
 	  to maintain compatibility with older software by providing 48-bit VAs
@@ -795,7 +796,8 @@ config ARM64_VA_BITS
 	default 39 if ARM64_VA_BITS_39
 	default 42 if ARM64_VA_BITS_42
 	default 47 if ARM64_VA_BITS_47
-	default 48 if ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52
+	default 48 if ARM64_VA_BITS_48
+	default 52 if ARM64_VA_BITS_52
 
 choice
 	prompt "Physical address space size"
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index ede368bafa2c..c066fc4976cd 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -349,6 +349,13 @@ alternative_endif
 	bfi	\valreg, \t0sz, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
 	.endm
 
+/*
+ * tcr_set_t1sz - update TCR.T1SZ
+ */
+	.macro	tcr_set_t1sz, valreg, t1sz
+	bfi	\valreg, \t1sz, #TCR_T1SZ_OFFSET, #TCR_TxSZ_WIDTH
+	.endm
+
 /*
  * tcr_compute_pa_size - set TCR.(I)PS to the highest supported
  * ID_AA64MMFR0_EL1.PARange value
@@ -539,10 +546,6 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * 	ttbr: Value of ttbr to set, modified.
  */
 	.macro	offset_ttbr1, ttbr, tmp
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
-	orr	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
-#endif
-
 #ifdef CONFIG_ARM64_VA_BITS_52
 	mrs_s	\tmp, SYS_ID_AA64MMFR2_EL1
 	and	\tmp, \tmp, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
@@ -558,7 +561,7 @@ USER(\label, ic	ivau, \tmp2)			// invalidate I line PoU
  * to be nop'ed out when dealing with 52-bit kernel VAs.
  */
 	.macro	restore_ttbr1, ttbr
-#if defined(CONFIG_ARM64_USER_VA_BITS_52) || defined(CONFIG_ARM64_VA_BITS_52)
+#ifdef CONFIG_ARM64_VA_BITS_52
 	bic	\ttbr, \ttbr, #TTBR1_BADDR_4852_OFFSET
 #endif
 	.endm
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 2d5c57d13572..3b5d1327035e 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -44,8 +44,9 @@
  * VA_START - the first kernel virtual address.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
-#define PAGE_OFFSET		(UL(0xffffffffffffffff) - \
-	(UL(1) << VA_BITS) + 1)
+#define _PAGE_OFFSET(va)	(UL(0xffffffffffffffff) - \
+					(UL(1) << (va)) + 1)
+#define PAGE_OFFSET		(_PAGE_OFFSET(VA_BITS))
 #define KIMAGE_VADDR		(MODULES_END)
 #define BPF_JIT_REGION_START	(KASAN_SHADOW_END)
 #define BPF_JIT_REGION_SIZE	(SZ_128M)
@@ -68,7 +69,7 @@
 #define KERNEL_START      _text
 #define KERNEL_END        _end
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 #define MAX_USER_VA_BITS	52
 #else
 #define MAX_USER_VA_BITS	VA_BITS
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 670003a55d28..3827ff4040a3 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -63,7 +63,7 @@ extern u64 idmap_ptrs_per_pgd;
 
 static inline bool __cpu_uses_extended_idmap(void)
 {
-	if (IS_ENABLED(CONFIG_ARM64_USER_VA_BITS_52))
+	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52))
 		return false;
 
 	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index db92950bb1a0..3df60f97da1f 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -304,7 +304,7 @@
 #define TTBR_BADDR_MASK_52	(((UL(1) << 46) - 1) << 2)
 #endif
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 /* Must be at least 64-byte aligned to prevent corruption of the TTBR */
 #define TTBR1_BADDR_4852_OFFSET	(((UL(1) << (52 - PGDIR_SHIFT)) - \
 				 (UL(1) << (48 - PGDIR_SHIFT))) * 8)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index a96dc4386c7c..c8446f8c81f5 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -308,7 +308,7 @@ __create_page_tables:
 	adrp	x0, idmap_pg_dir
 	adrp	x3, __idmap_text_start		// __pa(__idmap_text_start)
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 	mrs_s	x6, SYS_ID_AA64MMFR2_EL1
 	and	x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
 	mov	x5, #52
@@ -794,7 +794,7 @@ ENTRY(__enable_mmu)
 ENDPROC(__enable_mmu)
 
 ENTRY(__cpu_secondary_check52bitva)
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 	ldr_l	x0, vabits_user
 	cmp	x0, #52
 	b.ne	2f
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 2940221e5519..531c497c5758 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -325,6 +325,16 @@ void __init arm64_memblock_init(void)
 
 	vmemmap = ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT));
 
+	/*
+	 * If we are running with a 52-bit kernel VA config on a system that
+	 * does not support it, we have to offset our vmemmap and physvirt_offset
+	 * s.t. we avoid the 52-bit portion of the direct linear map
+	 */
+	if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52)) {
+		vmemmap += (_PAGE_OFFSET(48) - _PAGE_OFFSET(52)) >> PAGE_SHIFT;
+		physvirt_offset = PHYS_OFFSET - _PAGE_OFFSET(48);
+	}
+
 	/*
 	 * Remove the memory that we will not be able to cover with the
 	 * linear mapping. Take care not to clip the kernel which may be
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8d289ff7584d..8b021c5c0884 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -438,10 +438,11 @@ ENTRY(__cpu_setup)
 			TCR_TBI0 | TCR_A1 | TCR_KASAN_FLAGS
 	tcr_clear_errata_bits x10, x9, x5
 
-#ifdef CONFIG_ARM64_USER_VA_BITS_52
+#ifdef CONFIG_ARM64_VA_BITS_52
 	ldr_l		x9, vabits_user
 	sub		x9, xzr, x9
 	add		x9, x9, #64
+	tcr_set_t1sz	x10, x9
 #else
 	ldr_l		x9, idmap_t0sz
 #endif
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH V5 11/12] arm64: mm: Remove vabits_user
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (9 preceding siblings ...)
  2019-08-07 15:55 ` [PATCH V5 10/12] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-07 16:17   ` Catalin Marinas
  2019-08-07 15:55 ` [PATCH V5 12/12] docs: arm64: Add layout and 52-bit info to memory document Steve Capper
  2019-08-09 16:47 ` [PATCH V5 00/12] 52-bit kernel + user VAs Will Deacon
  12 siblings, 1 reply; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

Previous patches have enabled 52-bit kernel + user VAs, and there is no
longer any scenario where the user VA size differs from the kernel VA
size.

This patch removes the now-redundant vabits_user variable and replaces
its usage with vabits_actual where appropriate.
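
The substitution is mechanical; for example (illustrative only):

    /* before */ #define TASK_SIZE_64    (UL(1) << vabits_user)
    /* after  */ #define TASK_SIZE_64    (UL(1) << vabits_actual)

Both variables always hold the same value once 52-bit kernel VAs are
enabled, so only vabits_actual needs to survive.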

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

New in V5
---
 arch/arm64/include/asm/memory.h       | 3 ---
 arch/arm64/include/asm/pointer_auth.h | 2 +-
 arch/arm64/include/asm/processor.h    | 2 +-
 arch/arm64/kernel/head.S              | 7 +------
 arch/arm64/mm/fault.c                 | 3 +--
 arch/arm64/mm/mmu.c                   | 2 --
 arch/arm64/mm/proc.S                  | 2 +-
 7 files changed, 5 insertions(+), 16 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 3b5d1327035e..56e79da139c2 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -194,9 +194,6 @@ static inline unsigned long kaslr_offset(void)
 	return kimage_vaddr - KIMAGE_VADDR;
 }
 
-/* the actual size of a user virtual address */
-extern u64			vabits_user;
-
 /*
  * Allow all memory at the discovery stage. We will clip it later.
  */
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index d328540cb85e..7a24bad1a58b 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -69,7 +69,7 @@ extern int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg);
  * The EL0 pointer bits used by a pointer authentication code.
  * This is dependent on TBI0 being enabled, or bits 63:56 would also apply.
  */
-#define ptrauth_user_pac_mask()	GENMASK(54, vabits_user)
+#define ptrauth_user_pac_mask()	GENMASK(54, vabits_actual)
 
 /* Only valid for EL0 TTBR0 instruction pointers */
 static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 0e1f2770192a..e4c93945e477 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -43,7 +43,7 @@
  */
 
 #define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS_MIN)
-#define TASK_SIZE_64		(UL(1) << vabits_user)
+#define TASK_SIZE_64		(UL(1) << vabits_actual)
 
 #ifdef CONFIG_COMPAT
 #if defined(CONFIG_ARM64_64K_PAGES) && defined(CONFIG_KUSER_HELPERS)
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index c8446f8c81f5..949b001a73bb 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -316,11 +316,6 @@ __create_page_tables:
 #endif
 	mov	x5, #VA_BITS_MIN
 1:
-	adr_l	x6, vabits_user
-	str	x5, [x6]
-	dmb	sy
-	dc	ivac, x6		// Invalidate potentially stale cache line
-
 	adr_l	x6, vabits_actual
 	str	x5, [x6]
 	dmb	sy
@@ -795,7 +790,7 @@ ENDPROC(__enable_mmu)
 
 ENTRY(__cpu_secondary_check52bitva)
 #ifdef CONFIG_ARM64_VA_BITS_52
-	ldr_l	x0, vabits_user
+	ldr_l	x0, vabits_actual
 	cmp	x0, #52
 	b.ne	2f
 
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 86fc1aff3462..3ef0a9f64240 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -140,8 +140,7 @@ static void show_pte(unsigned long addr)
 
 	pr_alert("%s pgtable: %luk pages, %llu-bit VAs, pgdp=%016lx\n",
 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
-		 mm == &init_mm ? vabits_actual : (int)vabits_user,
-		 (unsigned long)virt_to_phys(mm->pgd));
+		 vabits_actual, (unsigned long)virt_to_phys(mm->pgd));
 	pgdp = pgd_offset(mm, addr);
 	pgd = READ_ONCE(*pgdp);
 	pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd));
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 07b30e6d17f8..0c8f7e55f859 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -40,8 +40,6 @@
 
 u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
-u64 vabits_user __ro_after_init;
-EXPORT_SYMBOL(vabits_user);
 
 u64 __section(".mmuoff.data.write") vabits_actual;
 EXPORT_SYMBOL(vabits_actual);
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 8b021c5c0884..391f9cabfe60 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -439,7 +439,7 @@ ENTRY(__cpu_setup)
 	tcr_clear_errata_bits x10, x9, x5
 
 #ifdef CONFIG_ARM64_VA_BITS_52
-	ldr_l		x9, vabits_user
+	ldr_l		x9, vabits_actual
 	sub		x9, xzr, x9
 	add		x9, x9, #64
 	tcr_set_t1sz	x10, x9
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH V5 12/12] docs: arm64: Add layout and 52-bit info to memory document
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (10 preceding siblings ...)
  2019-08-07 15:55 ` [PATCH V5 11/12] arm64: mm: Remove vabits_user Steve Capper
@ 2019-08-07 15:55 ` Steve Capper
  2019-08-09 16:47 ` [PATCH V5 00/12] 52-bit kernel + user VAs Will Deacon
  12 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-07 15:55 UTC (permalink / raw)
  To: linux-arm-kernel
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, will

As the kernel no longer prints out the memory layout on boot, this patch
adds this information back to the memory document.

Also, as the 52-bit support introduces some subtle changes to the arm64
memory layout, the rationale behind these changes is also added to the
memory document.

Signed-off-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

---

V5: tables reduced to 2, typos fixed.

New in V4
---
 Documentation/arm64/memory.rst | 123 +++++++++++++++++++++++++--------
 1 file changed, 95 insertions(+), 28 deletions(-)

diff --git a/Documentation/arm64/memory.rst b/Documentation/arm64/memory.rst
index 464b880fc4b7..b040909e45f8 100644
--- a/Documentation/arm64/memory.rst
+++ b/Documentation/arm64/memory.rst
@@ -14,6 +14,10 @@ with the 4KB page configuration, allowing 39-bit (512GB) or 48-bit
 64KB pages, only 2 levels of translation tables, allowing 42-bit (4TB)
 virtual address, are used but the memory layout is the same.
 
+ARMv8.2 adds optional support for Large Virtual Address space. This is
+only available when running with a 64KB page size and expands the
+number of descriptors in the first level of translation.
+
 User addresses have bits 63:48 set to 0 while the kernel addresses have
 the same bits set to 1. TTBRx selection is given by bit 63 of the
 virtual address. The swapper_pg_dir contains only kernel (global)
@@ -22,40 +26,43 @@ The swapper_pg_dir address is written to TTBR1 and never written to
 TTBR0.
 
 
-AArch64 Linux memory layout with 4KB pages + 3 levels::
-
-  Start			End			Size		Use
-  -----------------------------------------------------------------------
-  0000000000000000	0000007fffffffff	 512GB		user
-  ffffff8000000000	ffffffffffffffff	 512GB		kernel
-
-
-AArch64 Linux memory layout with 4KB pages + 4 levels::
+AArch64 Linux memory layout with 4KB pages + 4 levels (48-bit)::
 
   Start			End			Size		Use
   -----------------------------------------------------------------------
   0000000000000000	0000ffffffffffff	 256TB		user
-  ffff000000000000	ffffffffffffffff	 256TB		kernel
-
-
-AArch64 Linux memory layout with 64KB pages + 2 levels::
+  ffff000000000000	ffff7fffffffffff	 128TB		kernel logical memory map
+  ffff800000000000	ffff9fffffffffff	  32TB		kasan shadow region
+  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
+  ffffa00008000000	ffffa0000fffffff	 128MB		modules
+  ffffa00010000000	fffffdffbffeffff	 ~93TB		vmalloc
+  fffffdffbfff0000	fffffdfffe5f8fff	~998MB		[guard region]
+  fffffdfffe5f9000	fffffdfffe9fffff	4124KB		fixed mappings
+  fffffdfffea00000	fffffdfffebfffff	   2MB		[guard region]
+  fffffdfffec00000	fffffdffffbfffff	  16MB		PCI I/O space
+  fffffdffffc00000	fffffdffffdfffff	   2MB		[guard region]
+  fffffdffffe00000	ffffffffffdfffff	   2TB		vmemmap
+  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
+
+
+AArch64 Linux memory layout with 64KB pages + 3 levels (52-bit with HW support)::
 
   Start			End			Size		Use
   -----------------------------------------------------------------------
-  0000000000000000	000003ffffffffff	   4TB		user
-  fffffc0000000000	ffffffffffffffff	   4TB		kernel
-
-
-AArch64 Linux memory layout with 64KB pages + 3 levels::
-
-  Start			End			Size		Use
-  -----------------------------------------------------------------------
-  0000000000000000	0000ffffffffffff	 256TB		user
-  ffff000000000000	ffffffffffffffff	 256TB		kernel
-
-
-For details of the virtual kernel memory layout please see the kernel
-booting log.
+  0000000000000000	000fffffffffffff	   4PB		user
+  fff0000000000000	fff7ffffffffffff	   2PB		kernel logical memory map
+  fff8000000000000	fffd9fffffffffff	1440TB		[gap]
+  fffda00000000000	ffff9fffffffffff	 512TB		kasan shadow region
+  ffffa00000000000	ffffa00007ffffff	 128MB		bpf jit region
+  ffffa00008000000	ffffa0000fffffff	 128MB		modules
+  ffffa00010000000	fffff81ffffeffff	 ~88TB		vmalloc
+  fffff81fffff0000	fffffc1ffe58ffff	  ~3TB		[guard region]
+  fffffc1ffe590000	fffffc1ffe9fffff	4544KB		fixed mappings
+  fffffc1ffea00000	fffffc1ffebfffff	   2MB		[guard region]
+  fffffc1ffec00000	fffffc1fffbfffff	  16MB		PCI I/O space
+  fffffc1fffc00000	fffffc1fffdfffff	   2MB		[guard region]
+  fffffc1fffe00000	ffffffffffdfffff	3968GB		vmemmap
+  ffffffffffe00000	ffffffffffffffff	   2MB		[guard region]
 
 
 Translation table lookup with 4KB pages::
@@ -83,7 +90,8 @@ Translation table lookup with 64KB pages::
    |                 |    |               |            [15:0]  in-page offset
    |                 |    |               +----------> [28:16] L3 index
    |                 |    +--------------------------> [41:29] L2 index
-   |                 +-------------------------------> [47:42] L1 index
+   |                 +-------------------------------> [47:42] L1 index (48-bit)
+   |                                                   [51:42] L1 index (52-bit)
    +-------------------------------------------------> [63] TTBR0/1
 
 
@@ -96,3 +104,62 @@ ARM64_HARDEN_EL2_VECTORS is selected for particular CPUs.
 
 When using KVM with the Virtualization Host Extensions, no additional
 mappings are created, since the host kernel runs directly in EL2.
+
+52-bit VA support in the kernel
+-------------------------------
+If the ARMv8.2-LVA optional feature is present, and we are running
+with a 64KB page size; then it is possible to use 52-bits of address
+space for both userspace and kernel addresses. However, any kernel
+binary that supports 52-bit must also be able to fall back to 48-bit
+at early boot time if the hardware feature is not present.
+
+This fallback mechanism requires the kernel .text to be in the
+higher addresses such that it is invariant to 48/52-bit VAs. Due
+to the kasan shadow being a fraction of the entire kernel VA space,
+the end of the kasan shadow must also be in the higher half of the
+kernel VA space for both 48/52-bit. (Switching from 48-bit to 52-bit,
+the end of the kasan shadow is invariant and dependent on ~0UL,
+whilst the start address will "grow" towards the lower addresses).
+
+In order to optimise phys_to_virt and virt_to_phys, the PAGE_OFFSET
+is kept constant at 0xFFF0000000000000 (corresponding to 52-bit);
+this obviates the need for an extra variable read. The physvirt
+offset and vmemmap offsets are computed at early boot to enable
+this logic.
+
+As a single binary will need to support both 48-bit and 52-bit VA
+spaces, the VMEMMAP must be sized large enough for 52-bit VAs and
+also must be sized large enough to accommodate a fixed PAGE_OFFSET.
+
+Most code in the kernel should not need to consider VA_BITS; for
+code that does need to know the VA size, the variables are
+defined as follows:
+
+VA_BITS		constant	the *maximum* VA space size
+
+VA_BITS_MIN	constant	the *minimum* VA space size
+
+vabits_actual	variable	the *actual* VA space size
+
+
+Maximum and minimum sizes can be useful to ensure that buffers are
+sized large enough or that addresses are positioned close enough for
+the "worst" case.
+
+52-bit userspace VAs
+--------------------
+To maintain compatibility with software that relies on the ARMv8.0
+VA space maximum size of 48-bits, the kernel will, by default,
+return virtual addresses to userspace from a 48-bit range.
+
+Software can "opt-in" to receiving VAs from a 52-bit space by
+specifying an mmap hint parameter that is larger than 48-bit.
+For example:
+    maybe_high_address = mmap(~0UL, size, prot, flags,...);
+
+It is also possible to build a debug kernel that returns addresses
+from a 52-bit space by enabling the following kernel config options:
+   CONFIG_EXPERT=y && CONFIG_ARM64_FORCE_52BIT=y
+
+Note that this option is only intended for debugging applications
+and should not be used in production.
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 02/12] arm64: mm: Flip kernel VA space
  2019-08-07 15:55 ` [PATCH V5 02/12] arm64: mm: Flip kernel VA space Steve Capper
@ 2019-08-07 16:12   ` Catalin Marinas
  0 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2019-08-07 16:12 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Wed, Aug 07, 2019 at 04:55:14PM +0100, Steve Capper wrote:
> In order to allow for a KASAN shadow that changes size at boot time, one
> must fix the KASAN_SHADOW_END for both 48 & 52-bit VAs and "grow" the
> start address. Also, it is highly desirable to maintain the same
> function addresses in the kernel .text between VA sizes. Both of these
> requirements necessitate us to flip the kernel address space halves s.t.
> the direct linear map occupies the lower addresses.
> 
> This patch puts the direct linear map in the lower addresses of the
> kernel VA range and everything else in the higher ranges.
> 
> We need to adjust:
>  *) KASAN shadow region placement logic,
>  *) KASAN_SHADOW_OFFSET computation logic,
>  *) virt_to_phys, phys_to_virt checks,
>  *) page table dumper.
> 
> These are all small changes, that need to take place atomically, so they
> are bundled into this commit.
> 
> As part of the re-arrangement, a guard region of 2MB (to preserve
> alignment for fixed map) is added after the vmemmap. Otherwise the
> vmemmap could intersect with IS_ERR pointers.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET
  2019-08-07 15:55 ` [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
@ 2019-08-07 16:12   ` Catalin Marinas
  2019-08-14 15:20   ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET) Mark Rutland
  1 sibling, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2019-08-07 16:12 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Wed, Aug 07, 2019 at 04:55:15PM +0100, Steve Capper wrote:
> KASAN_SHADOW_OFFSET is a constant that is supplied to gcc as a command
> line argument and affects the codegen of the inline address sanitiser.
> 
> Essentially, for an example memory access:
>     *ptr1 = val;
> The compiler will insert logic similar to the below:
>     shadowValue = *(ptr1 >> KASAN_SHADOW_SCALE_SHIFT + KASAN_SHADOW_OFFSET)
>     if (somethingWrong(shadowValue))
>         flagAnError();
> 
> This code sequence is inserted into many places, thus
> KASAN_SHADOW_OFFSET is essentially baked into many places in the kernel
> text.
> 
> If we want to run a single kernel binary with multiple address spaces,
> then we need to do this with KASAN_SHADOW_OFFSET fixed.
> 
> Thankfully, due to the way the KASAN_SHADOW_OFFSET is used to provide
> shadow addresses we know that the end of the shadow region is constant
> w.r.t. VA space size:
>     KASAN_SHADOW_END = ~0 >> KASAN_SHADOW_SCALE_SHIFT + KASAN_SHADOW_OFFSET
> 
> This means that if we increase the size of the VA space, the start of
> the KASAN region expands into lower addresses whilst the end of the
> KASAN region is fixed.
> 
> Currently the arm64 code computes KASAN_SHADOW_OFFSET at build time via
> build scripts with the VA size used as a parameter. (There are build
> time checks in the C code too to ensure that expected values are being
> derived). It is sufficient, and indeed is a simplification, to remove
> the build scripts (and build time checks) entirely and instead provide
> KASAN_SHADOW_OFFSET values.
> 
> This patch removes the logic to compute the KASAN_SHADOW_OFFSET in the
> arm64 Makefile, and instead we adopt the approach used by x86 to supply
> offset values in kConfig. To help debug/develop future VA space changes,
> the Makefile logic has been preserved in a script file in the arm64
> Documentation folder.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
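
For reference, a sketch of the mapping these offsets satisfy (generic
KASAN as described above; a sketch, not an exact quote of the kernel
macros):

    shadow(addr)       = (addr >> KASAN_SHADOW_SCALE_SHIFT)
                          + KASAN_SHADOW_OFFSET
    KASAN_SHADOW_END   = shadow(~0UL)         /* fixed per offset */
    KASAN_SHADOW_START = shadow(PAGE_OFFSET)  /* grows down with VA_BITS */

So fixing KASAN_SHADOW_OFFSET in Kconfig pins the end of the shadow
while letting its start track the boot-time VA size.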

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 05/12] arm64: mm: Introduce VA_BITS_MIN
  2019-08-07 15:55 ` [PATCH V5 05/12] arm64: mm: Introduce VA_BITS_MIN Steve Capper
@ 2019-08-07 16:14   ` Catalin Marinas
  0 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2019-08-07 16:14 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Wed, Aug 07, 2019 at 04:55:17PM +0100, Steve Capper wrote:
> In order to support 52-bit kernel addresses detectable at boot time, the
> kernel needs to know the most conservative VA_BITS possible should it
> need to fall back to this quantity due to lack of hardware support.
> 
> A new compile time constant VA_BITS_MIN is introduced in this patch and
> it is employed in the KASAN end address, KASLR, and EFI stub.
> 
> For Arm, if 52-bit VA support is unavailable the fallback is to 48-bits.
> 
> In other words: VA_BITS_MIN = min (48, VA_BITS)
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
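
In header terms this is something like the following (a sketch; see
the patch for the exact spelling):

    #if VA_BITS > 48
    #define VA_BITS_MIN	(48)
    #else
    #define VA_BITS_MIN	(VA_BITS)
    #endif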

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 06/12] arm64: mm: Introduce vabits_actual
  2019-08-07 15:55 ` [PATCH V5 06/12] arm64: mm: Introduce vabits_actual Steve Capper
@ 2019-08-07 16:16   ` Catalin Marinas
  0 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2019-08-07 16:16 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Wed, Aug 07, 2019 at 04:55:18PM +0100, Steve Capper wrote:
> In order to support 52-bit kernel addresses detectable at boot time, one
> needs to know the actual VA_BITS detected. A new variable vabits_actual
> is introduced in this commit and employed for the KVM hypervisor layout,
> KASAN, fault handling and phys-to/from-virt translation where there
> would normally be compile time constants.
> 
> In order to maintain performance in phys_to_virt, another variable
> physvirt_offset is introduced.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
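
The fast path this preserves looks roughly like (a sketch; names as
used in the series):

    /* set once at early boot: */
    physvirt_offset = PHYS_OFFSET - PAGE_OFFSET;

    /* ...so the translation remains a single subtraction: */
    #define __phys_to_virt(x)	((unsigned long)((x) - physvirt_offset))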

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 11/12] arm64: mm: Remove vabits_user
  2019-08-07 15:55 ` [PATCH V5 11/12] arm64: mm: Remove vabits_user Steve Capper
@ 2019-08-07 16:17   ` Catalin Marinas
  0 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2019-08-07 16:17 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, maz, bhsharma, will, linux-arm-kernel

On Wed, Aug 07, 2019 at 04:55:23PM +0100, Steve Capper wrote:
> Previous patches have enabled 52-bit kernel + user VAs and there is no
> longer any scenario where user VA != kernel VA size.
> 
> This patch removes the, now redundant, vabits_user variable and replaces
> usage with vabits_actual where appropriate.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 00/12] 52-bit kernel + user VAs
  2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
                   ` (11 preceding siblings ...)
  2019-08-07 15:55 ` [PATCH V5 12/12] docs: arm64: Add layout and 52-bit info to memory document Steve Capper
@ 2019-08-09 16:47 ` Will Deacon
  2019-08-13 11:23   ` Steve Capper
  2019-08-13 12:43   ` Geert Uytterhoeven
  12 siblings, 2 replies; 38+ messages in thread
From: Will Deacon @ 2019-08-09 16:47 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma, maz,
	linux-arm-kernel

On Wed, Aug 07, 2019 at 04:55:12PM +0100, Steve Capper wrote:
> This patch series adds support for 52-bit kernel VAs using some of the
> machinery already introduced by the 52-bit userspace VA code in 5.0.

Cheers, I've pushed this out on a for-next/52-bit-kva branch with one
small patch on top and Catalin's tags added.

Will

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 00/12] 52-bit kernel + user VAs
  2019-08-09 16:47 ` [PATCH V5 00/12] 52-bit kernel + user VAs Will Deacon
@ 2019-08-13 11:23   ` Steve Capper
  2019-08-13 11:59     ` Will Deacon
  2019-08-13 12:43   ` Geert Uytterhoeven
  1 sibling, 1 reply; 38+ messages in thread
From: Steve Capper @ 2019-08-13 11:23 UTC (permalink / raw)
  To: Will Deacon
  Cc: crecklin, ard.biesheuvel, Catalin Marinas, bhsharma, maz, nd,
	linux-arm-kernel

On Fri, Aug 09, 2019 at 05:47:17PM +0100, Will Deacon wrote:
> On Wed, Aug 07, 2019 at 04:55:12PM +0100, Steve Capper wrote:
> > This patch series adds support for 52-bit kernel VAs using some of the
> > machinery already introduced by the 52-bit userspace VA code in 5.0.
> 
> Cheers, I've pushed this out on a for-next/52-bit-kva branch with one
> small patch on top and Catalin's tags added.
> 

Many thanks Will!

Cheers,
-- 
Steve

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 00/12] 52-bit kernel + user VAs
  2019-08-13 11:23   ` Steve Capper
@ 2019-08-13 11:59     ` Will Deacon
  0 siblings, 0 replies; 38+ messages in thread
From: Will Deacon @ 2019-08-13 11:59 UTC (permalink / raw)
  To: Steve Capper
  Cc: crecklin, ard.biesheuvel, Catalin Marinas, bhsharma, maz, nd,
	linux-arm-kernel

On Tue, Aug 13, 2019 at 11:23:50AM +0000, Steve Capper wrote:
> On Fri, Aug 09, 2019 at 05:47:17PM +0100, Will Deacon wrote:
> > On Wed, Aug 07, 2019 at 04:55:12PM +0100, Steve Capper wrote:
> > > This patch series adds support for 52-bit kernel VAs using some of the
> > > machinery already introduced by the 52-bit userspace VA code in 5.0.
> > 
> > Cheers, I've pushed this out on a for-next/52-bit-kva branch with one
> > small patch on top and Catalin's tags added.
> > 
> 
> Many thanks Will!

Save your thanks for when I've fixed the bugs ;)

Will

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 00/12] 52-bit kernel + user VAs
  2019-08-09 16:47 ` [PATCH V5 00/12] 52-bit kernel + user VAs Will Deacon
  2019-08-13 11:23   ` Steve Capper
@ 2019-08-13 12:43   ` Geert Uytterhoeven
  2019-08-13 13:10     ` Will Deacon
  1 sibling, 1 reply; 38+ messages in thread
From: Geert Uytterhoeven @ 2019-08-13 12:43 UTC (permalink / raw)
  To: Will Deacon, Steve Capper
  Cc: crecklin, Ard Biesheuvel, Catalin Marinas, bhsharma,
	Linux-Renesas, maz, Linux ARM

Hi Will, Steve,

On Fri, Aug 9, 2019 at 6:47 PM Will Deacon <will@kernel.org> wrote:
> On Wed, Aug 07, 2019 at 04:55:12PM +0100, Steve Capper wrote:
> > This patch series adds support for 52-bit kernel VAs using some of the
> > machinery already introduced by the 52-bit userspace VA code in 5.0.
>
> Cheers, I've pushed this out on a for-next/52-bit-kva branch with one
> small patch on top and Catalin's tags added.

As of commit 14c127c957c1c607 ("arm64: mm: Flip kernel VA space"), the
kernel log is spammed with

    virt_to_phys used for non-linear address: (____ptrval____)
(__func__.6603+0x14d681/0x17fb3d)
    WARNING: CPU: 0 PID: 264 at arch/arm64/mm/physaddr.c:15
__virt_to_phys+0x28/0x58
    Modules linked in:
    CPU: 0 PID: 264 Comm: mdev Not tainted
5.3.0-rc3-rcar3-initrd-00002-g14c127c957c1c607 #38
    Hardware name: Renesas Ebisu-4D board based on r8a77990 (DT)
    pstate: 60000005 (nZCv daif -PAN -UAO)
    pc : __virt_to_phys+0x28/0x58
    lr : __virt_to_phys+0x28/0x58
    sp : ffffffc011953c80
    x29: ffffffc011953c80 x28: ffffff8078790140
    x27: 0000000000000000 x26: 0000000000000000
    x25: ffffffc010a539b9 x24: ffffffc010a86000
    x23: ffffffc010a539ba x22: 0000000000000001
    x21: 0000000000202038 x20: 0000000000000001
    x19: ffffffc010a539b9 x18: 000000000000000a
    x17: 0000000000000000 x16: 0000000000000000
    x15: 00000000000ca51d x14: 0720072007200720
    x13: 0720072007200720 x12: 0720072007200720
    x11: 0720072007200720 x10: 0720072007200720
    x9 : 0720072007200720 x8 : 0000000000000001
    x7 : 0000000000000007 x6 : ffffff8079824f00
    x5 : 0000000000000140 x4 : 0000000000000000
    x3 : 0000000000000000 x2 : 00000000ffffffff
    x1 : 0713abbc9281cf00 x0 : 0000000000000000
    Call trace:
     __virt_to_phys+0x28/0x58
     __check_object_size+0xd0/0x1e0
     filldir64+0x1d8/0x2b0
     kernfs_fop_readdir+0x64/0x200
     iterate_dir+0x68/0x144
     ksys_getdents64+0x88/0x154
     __arm64_sys_getdents64+0x18/0x24
     el0_svc_common.constprop.0+0x84/0xe8
     el0_svc_compat_handler+0x18/0x20
     el0_svc_compat+0x8/0x10
    ---[ end trace 6980a45f636e18be ]---

as soon as userspace starts.

As this commit cannot be reverted easily, I had to revert the full branch with
"git revert -m 1 6ce0dc725177e9856c9a67f2e2cabb3f7a3d90d7".

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 00/12] 52-bit kernel + user VAs
  2019-08-13 12:43   ` Geert Uytterhoeven
@ 2019-08-13 13:10     ` Will Deacon
  2019-08-13 13:36       ` Geert Uytterhoeven
  0 siblings, 1 reply; 38+ messages in thread
From: Will Deacon @ 2019-08-13 13:10 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: crecklin, Ard Biesheuvel, Catalin Marinas, bhsharma,
	Steve Capper, Linux-Renesas, maz, Linux ARM

Hi Geert,

On Tue, Aug 13, 2019 at 02:43:23PM +0200, Geert Uytterhoeven wrote:
> On Fri, Aug 9, 2019 at 6:47 PM Will Deacon <will@kernel.org> wrote:
> > On Wed, Aug 07, 2019 at 04:55:12PM +0100, Steve Capper wrote:
> > > This patch series adds support for 52-bit kernel VAs using some of the
> > > machinery already introduced by the 52-bit userspace VA code in 5.0.
> >
> > Cheers, I've pushed this out on a for-next/52-bit-kva branch with one
> > small patch on top and Catalin's tags added.
> 
> As of commit 14c127c957c1c607 ("arm64: mm: Flip kernel VA space"), the
> kernel log is spammed with
> 
>     virt_to_phys used for non-linear address: (____ptrval____)
> (__func__.6603+0x14d681/0x17fb3d)
>     WARNING: CPU: 0 PID: 264 at arch/arm64/mm/physaddr.c:15
> __virt_to_phys+0x28/0x58
>     Modules linked in:
>     CPU: 0 PID: 264 Comm: mdev Not tainted
> 5.3.0-rc3-rcar3-initrd-00002-g14c127c957c1c607 #38
>     Hardware name: Renesas Ebisu-4D board based on r8a77990 (DT)
>     pstate: 60000005 (nZCv daif -PAN -UAO)
>     pc : __virt_to_phys+0x28/0x58
>     lr : __virt_to_phys+0x28/0x58
>     sp : ffffffc011953c80
>     x29: ffffffc011953c80 x28: ffffff8078790140
>     x27: 0000000000000000 x26: 0000000000000000
>     x25: ffffffc010a539b9 x24: ffffffc010a86000
>     x23: ffffffc010a539ba x22: 0000000000000001
>     x21: 0000000000202038 x20: 0000000000000001
>     x19: ffffffc010a539b9 x18: 000000000000000a
>     x17: 0000000000000000 x16: 0000000000000000
>     x15: 00000000000ca51d x14: 0720072007200720
>     x13: 0720072007200720 x12: 0720072007200720
>     x11: 0720072007200720 x10: 0720072007200720
>     x9 : 0720072007200720 x8 : 0000000000000001
>     x7 : 0000000000000007 x6 : ffffff8079824f00
>     x5 : 0000000000000140 x4 : 0000000000000000
>     x3 : 0000000000000000 x2 : 00000000ffffffff
>     x1 : 0713abbc9281cf00 x0 : 0000000000000000
>     Call trace:
>      __virt_to_phys+0x28/0x58
>      __check_object_size+0xd0/0x1e0
>      filldir64+0x1d8/0x2b0
>      kernfs_fop_readdir+0x64/0x200
>      iterate_dir+0x68/0x144
>      ksys_getdents64+0x88/0x154
>      __arm64_sys_getdents64+0x18/0x24
>      el0_svc_common.constprop.0+0x84/0xe8
>      el0_svc_compat_handler+0x18/0x20
>      el0_svc_compat+0x8/0x10
>     ---[ end trace 6980a45f636e18be ]---
> 
> as soon as userspace starts.

Can you try the hack I posted here, please?

https://lkml.org/lkml/2019/8/13/555

Also, what .config are you using?

Will

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 00/12] 52-bit kernel + user VAs
  2019-08-13 13:10     ` Will Deacon
@ 2019-08-13 13:36       ` Geert Uytterhoeven
  2019-08-14  8:04         ` Bhupesh Sharma
  0 siblings, 1 reply; 38+ messages in thread
From: Geert Uytterhoeven @ 2019-08-13 13:36 UTC (permalink / raw)
  To: Will Deacon
  Cc: crecklin, Ard Biesheuvel, Catalin Marinas, bhsharma,
	Steve Capper, Linux-Renesas, maz, Linux ARM

Hi Will,

On Tue, Aug 13, 2019 at 3:10 PM Will Deacon <will@kernel.org> wrote:
> On Tue, Aug 13, 2019 at 02:43:23PM +0200, Geert Uytterhoeven wrote:
> > On Fri, Aug 9, 2019 at 6:47 PM Will Deacon <will@kernel.org> wrote:
> > > On Wed, Aug 07, 2019 at 04:55:12PM +0100, Steve Capper wrote:
> > > > This patch series adds support for 52-bit kernel VAs using some of the
> > > > machinery already introduced by the 52-bit userspace VA code in 5.0.
> > >
> > > Cheers, I've pushed this out on a for-next/52-bit-kva branch with one
> > > small patch on top and Catalin's tags added.
> >
> > As of commit 14c127c957c1c607 ("arm64: mm: Flip kernel VA space"), the
> > kernel log is spammed with
> >
> >     virt_to_phys used for non-linear address: (____ptrval____)
> > (__func__.6603+0x14d681/0x17fb3d)
> >     WARNING: CPU: 0 PID: 264 at arch/arm64/mm/physaddr.c:15
> > __virt_to_phys+0x28/0x58
> >     Modules linked in:
> >     CPU: 0 PID: 264 Comm: mdev Not tainted
> > 5.3.0-rc3-rcar3-initrd-00002-g14c127c957c1c607 #38
> >     Hardware name: Renesas Ebisu-4D board based on r8a77990 (DT)
> >     pstate: 60000005 (nZCv daif -PAN -UAO)
> >     pc : __virt_to_phys+0x28/0x58
> >     lr : __virt_to_phys+0x28/0x58
> >     sp : ffffffc011953c80
> >     x29: ffffffc011953c80 x28: ffffff8078790140
> >     x27: 0000000000000000 x26: 0000000000000000
> >     x25: ffffffc010a539b9 x24: ffffffc010a86000
> >     x23: ffffffc010a539ba x22: 0000000000000001
> >     x21: 0000000000202038 x20: 0000000000000001
> >     x19: ffffffc010a539b9 x18: 000000000000000a
> >     x17: 0000000000000000 x16: 0000000000000000
> >     x15: 00000000000ca51d x14: 0720072007200720
> >     x13: 0720072007200720 x12: 0720072007200720
> >     x11: 0720072007200720 x10: 0720072007200720
> >     x9 : 0720072007200720 x8 : 0000000000000001
> >     x7 : 0000000000000007 x6 : ffffff8079824f00
> >     x5 : 0000000000000140 x4 : 0000000000000000
> >     x3 : 0000000000000000 x2 : 00000000ffffffff
> >     x1 : 0713abbc9281cf00 x0 : 0000000000000000
> >     Call trace:
> >      __virt_to_phys+0x28/0x58
> >      __check_object_size+0xd0/0x1e0
> >      filldir64+0x1d8/0x2b0
> >      kernfs_fop_readdir+0x64/0x200
> >      iterate_dir+0x68/0x144
> >      ksys_getdents64+0x88/0x154
> >      __arm64_sys_getdents64+0x18/0x24
> >      el0_svc_common.constprop.0+0x84/0xe8
> >      el0_svc_compat_handler+0x18/0x20
> >      el0_svc_compat+0x8/0x10
> >     ---[ end trace 6980a45f636e18be ]---
> >
> > as soon as userspace starts.
>
> Can you try the hack I posted here, please?
>
> https://lkml.org/lkml/2019/8/13/555

Thanks, that seems to do the trick!

Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>

> Also, what .config are you using?

Attached.

Probably CONFIG_DEBUG_VIRTUAL=y is what you're missing.


Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

[-- Attachment #2: ebisu-config.gz --]
[-- Type: application/gzip, Size: 27834 bytes --]

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 00/12] 52-bit kernel + user VAs
  2019-08-13 13:36       ` Geert Uytterhoeven
@ 2019-08-14  8:04         ` Bhupesh Sharma
  2019-08-14  8:21           ` Will Deacon
  0 siblings, 1 reply; 38+ messages in thread
From: Bhupesh Sharma @ 2019-08-14  8:04 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Christoph von Recklinghausen, Ard Biesheuvel, Catalin Marinas,
	Steve Capper, Linux-Renesas, maz, Will Deacon, Linux ARM

Hi Will, Steve,

I still see the following issue on 48-bit hardware (i.e. _non_
ARMv8.2 hardware) with branch 'for-next/52-bit-kva' with commit
d2d73d2fef421ca0d4 as the HEAD:

[   41.318745] Freeing initrd memory: 25856K
[   41.333312] hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7
counters available
[   41.341818] kvm [1]: IPA Size Limit: 44bits
[   41.346131] kvm [1]: GICv3: no GICV resource entry
[   41.350908] kvm [1]: disabling GICv2 emulation
[   41.355358] kvm [1]: GIC system register CPU interface enabled
[   41.363504] kvm [1]: vgic interrupt IRQ1
[   41.370029] kvm [1]: VHE mode initialized successfully
[   41.380484] Unable to handle kernel paging request at virtual
address ffffffffffe432c8
[   41.388401] Mem abort info:
[   41.391182]   ESR = 0x96000006
[   41.394227]   Exception class = DABT (current EL), IL = 32 bits
[   41.400133]   SET = 0, FnV = 0
[   41.403176]   EA = 0, S1PTW = 0
[   41.406303] Data abort info:
[   41.409170]   ISV = 0, ISS = 0x00000006
[   41.412994]   CM = 0, WnR = 0
[   41.415949] swapper pgtable: 64k pages, 48-bit VAs, pgdp=0000000081230000
[   41.422726] [ffffffffffe432c8] pgd=0000000081890003,
pud=0000000081890003, pmd=0000000000000000
[   41.431413] Internal error: Oops: 96000006 [#1] SMP
[   41.436278] Modules linked in:
[   41.439321] CPU: 2 PID: 1357 Comm: modprobe Not tainted 5.3.0-rc3+ #1
[   41.445748] Hardware name: To be filled by O.E.M. Saber/Saber, BIOS
0ACKL025 01/18/2019
[   41.453738] pstate: 80400009 (Nzcv daif +PAN -UAO)
[   41.458520] pc : __check_object_size+0xc8/0x1f8
[   41.463037] lr : __check_object_size+0xac/0x1f8
[   41.467553] sp : ffff800031b2fcf0
[   41.470854] x29: ffff800031b2fcf0 x28: ffff009f51c1c440
[   41.476153] x27: 0000000000000000 x26: 0000000000002d29
[   41.481451] x25: ffff009f51c1c440 x24: 0000000000000018
[   41.486749] x23: 0000000000000004 x22: ffff800010cb1a19
[   41.492046] x21: 0000000000000001 x20: 0000000000000001
[   41.497344] x19: ffff800010cb1a18 x18: 0000000000000000
[   41.502641] x17: 0000000000000000 x16: 0000000000000000
[   41.507939] x15: 0000000000000000 x14: 0000000000000000
[   41.513236] x13: 0000000000000000 x12: 0000000000000000
[   41.518533] x11: 0000000000000000 x10: 0000000000000000
[   41.523831] x9 : 0000000000000000 x8 : 0000000000000000
[   41.529129] x7 : 000000003fcf0000 x6 : 0000000000000018
[   41.534426] x5 : ffff800011d22840 x4 : ffff800011d22828
[   41.539723] x3 : 0000000000000002 x2 : ffffffffffe432c0
[   41.545021] x1 : 00000000c0000000 x0 : ffffffdfffe00000
[   41.550319] Call trace:
[   41.552753]  __check_object_size+0xc8/0x1f8
[   41.556923]  filldir64+0x1e0/0x2d8
[   41.560312]  dcache_readdir+0x60/0x180
[   41.564048]  iterate_dir+0x14c/0x1a0
[   41.567609]  ksys_getdents64+0xa0/0x170
[   41.571431]  __arm64_sys_getdents64+0x28/0x38
[   41.575777]  el0_svc_handler+0xb0/0x180
[   41.579601]  el0_svc+0x8/0xc
[   41.582472] Code: b26babe0 d350fc42 f2dffbe0 8b021802 (f9400440)
[   41.588639] ---[ end trace 1e1de241f266e888 ]---
[   41.593243] Kernel panic - not syncing: Fatal exception
[   41.598477] SMP: stopping secondary CPUs
[   41.602431] Kernel Offset: disabled
[   41.605907] CPU features: 0x0002,22000c38
[   41.609902] Memory Limit: none
[   41.612967] ---[ end Kernel panic - not syncing: Fatal exception ]---

- git bisect points to 14c127c957c1c6070 as the offending patch.

- Here is a brief snippet of my .config flags enabling 48-bit VA and 52-bit PA:

CONFIG_ARM64_64K_PAGES=y
CONFIG_ARM64_VA_BITS_48=y
CONFIG_ARM64_VA_BITS=48
CONFIG_ARM64_PA_BITS_52=y
CONFIG_ARM64_PA_BITS=52

- Any idea if this is the same issue as Geert observed? Or should I
debug it further to determine the offending code in the patch pointed
to by git bisect?

Thanks,
Bhupesh

On Tue, Aug 13, 2019 at 7:06 PM Geert Uytterhoeven <geert@linux-m68k.org> wrote:
>
> Hi Will,
>
> On Tue, Aug 13, 2019 at 3:10 PM Will Deacon <will@kernel.org> wrote:
> > On Tue, Aug 13, 2019 at 02:43:23PM +0200, Geert Uytterhoeven wrote:
> > > On Fri, Aug 9, 2019 at 6:47 PM Will Deacon <will@kernel.org> wrote:
> > > > On Wed, Aug 07, 2019 at 04:55:12PM +0100, Steve Capper wrote:
> > > > > This patch series adds support for 52-bit kernel VAs using some of the
> > > > > machinery already introduced by the 52-bit userspace VA code in 5.0.
> > > >
> > > > Cheers, I've pushed this out on a for-next/52-bit-kva branch with one
> > > > small patch on top and Catalin's tags added.
> > >
> > > As of commit 14c127c957c1c607 ("arm64: mm: Flip kernel VA space"), the
> > > kernel log is spammed with
> > >
> > >     virt_to_phys used for non-linear address: (____ptrval____)
> > > (__func__.6603+0x14d681/0x17fb3d)
> > >     WARNING: CPU: 0 PID: 264 at arch/arm64/mm/physaddr.c:15
> > > __virt_to_phys+0x28/0x58
> > >     Modules linked in:
> > >     CPU: 0 PID: 264 Comm: mdev Not tainted
> > > 5.3.0-rc3-rcar3-initrd-00002-g14c127c957c1c607 #38
> > >     Hardware name: Renesas Ebisu-4D board based on r8a77990 (DT)
> > >     pstate: 60000005 (nZCv daif -PAN -UAO)
> > >     pc : __virt_to_phys+0x28/0x58
> > >     lr : __virt_to_phys+0x28/0x58
> > >     sp : ffffffc011953c80
> > >     x29: ffffffc011953c80 x28: ffffff8078790140
> > >     x27: 0000000000000000 x26: 0000000000000000
> > >     x25: ffffffc010a539b9 x24: ffffffc010a86000
> > >     x23: ffffffc010a539ba x22: 0000000000000001
> > >     x21: 0000000000202038 x20: 0000000000000001
> > >     x19: ffffffc010a539b9 x18: 000000000000000a
> > >     x17: 0000000000000000 x16: 0000000000000000
> > >     x15: 00000000000ca51d x14: 0720072007200720
> > >     x13: 0720072007200720 x12: 0720072007200720
> > >     x11: 0720072007200720 x10: 0720072007200720
> > >     x9 : 0720072007200720 x8 : 0000000000000001
> > >     x7 : 0000000000000007 x6 : ffffff8079824f00
> > >     x5 : 0000000000000140 x4 : 0000000000000000
> > >     x3 : 0000000000000000 x2 : 00000000ffffffff
> > >     x1 : 0713abbc9281cf00 x0 : 0000000000000000
> > >     Call trace:
> > >      __virt_to_phys+0x28/0x58
> > >      __check_object_size+0xd0/0x1e0
> > >      filldir64+0x1d8/0x2b0
> > >      kernfs_fop_readdir+0x64/0x200
> > >      iterate_dir+0x68/0x144
> > >      ksys_getdents64+0x88/0x154
> > >      __arm64_sys_getdents64+0x18/0x24
> > >      el0_svc_common.constprop.0+0x84/0xe8
> > >      el0_svc_compat_handler+0x18/0x20
> > >      el0_svc_compat+0x8/0x10
> > >     ---[ end trace 6980a45f636e18be ]---
> > >
> > > as soon as userspace starts.
> >
> > Can you try the hack I posted here, please?
> >
> > https://lkml.org/lkml/2019/8/13/555
>
> Thanks, that seems to do the trick!
>
> Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
>
> > Also, what .config are you using?
>
> Attached.
>
> Probably CONFIG_DEBUG_VIRTUAL=y is what you're missing.
>
>
> Gr{oetje,eeting}s,
>
>                         Geert
>
> --
> Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org
>
> In personal conversations with technical people, I call myself a hacker. But
> when I'm talking to journalists I just say "programmer" or something like that.
>                                 -- Linus Torvalds

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 00/12] 52-bit kernel + user VAs
  2019-08-14  8:04         ` Bhupesh Sharma
@ 2019-08-14  8:21           ` Will Deacon
  2019-08-14 11:59             ` Bhupesh Sharma
  0 siblings, 1 reply; 38+ messages in thread
From: Will Deacon @ 2019-08-14  8:21 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: Christoph von Recklinghausen, Ard Biesheuvel, Catalin Marinas,
	Steve Capper, Linux-Renesas, Geert Uytterhoeven, maz, Linux ARM

On Wed, Aug 14, 2019 at 01:34:49PM +0530, Bhupesh Sharma wrote:
> I still see the following issue on a 48-bit hardware (i.e. _non_
> ARMv8.2 hardware) with branch 'for-next/52-bit-kva' with commit
> d2d73d2fef421ca0d4 as the HEAD:

Have you tried the patches I posted here:

http://lists.infradead.org/pipermail/linux-arm-kernel/2019-August/673315.html

?

Whilst they're being reviewed, I've dropped the 52-bit branch from
linux-next (for-next/core) so that people don't keep running into this.

Will

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 00/12] 52-bit kernel + user VAs
  2019-08-14  8:21           ` Will Deacon
@ 2019-08-14 11:59             ` Bhupesh Sharma
  2019-08-14 12:24               ` Will Deacon
  0 siblings, 1 reply; 38+ messages in thread
From: Bhupesh Sharma @ 2019-08-14 11:59 UTC (permalink / raw)
  To: Will Deacon
  Cc: Christoph von Recklinghausen, Ard Biesheuvel, Catalin Marinas,
	Steve Capper, Linux-Renesas, Geert Uytterhoeven, maz, Linux ARM

On Wed, Aug 14, 2019 at 1:51 PM Will Deacon <will@kernel.org> wrote:
>
> On Wed, Aug 14, 2019 at 01:34:49PM +0530, Bhupesh Sharma wrote:
> > I still see the following issue on a 48-bit hardware (i.e. _non_
> > ARMv8.2 hardware) with branch 'for-next/52-bit-kva' with commit
> > d2d73d2fef421ca0d4 as the HEAD:
>
> Have you tried the patches I posted here:
>
> http://lists.infradead.org/pipermail/linux-arm-kernel/2019-August/673315.html
>
> ?
>
> Whilst they're being reviewed, I've dropped the 52-bit branch from
> linux-next (for-next/core) so that people don't keep running into this.

Thanks, will try the above and get back with my results.

However, just to make sure that the 52-bit changes are tested properly
before landing in linux-next - as we had issues with the 52-bit user
space VA + PA changes in the past (which broke userspace) - I was
wondering if we can have a dedicated branch with the v5 patches
from Steve + fixes, so that they can be easily tested and issues (if
any) reported with an easy reference.

Or, if such a branch already exists, kindly share the pointer to the
same as well.

Thanks,
Bhupesh

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V5 00/12] 52-bit kernel + user VAs
  2019-08-14 11:59             ` Bhupesh Sharma
@ 2019-08-14 12:24               ` Will Deacon
  0 siblings, 0 replies; 38+ messages in thread
From: Will Deacon @ 2019-08-14 12:24 UTC (permalink / raw)
  To: Bhupesh Sharma
  Cc: mark.rutland, Christoph von Recklinghausen, Ard Biesheuvel,
	Catalin Marinas, Steve Capper, Linux-Renesas, Geert Uytterhoeven,
	maz, Linux ARM

[+Mark]

On Wed, Aug 14, 2019 at 05:29:09PM +0530, Bhupesh Sharma wrote:
> On Wed, Aug 14, 2019 at 1:51 PM Will Deacon <will@kernel.org> wrote:
> >
> > On Wed, Aug 14, 2019 at 01:34:49PM +0530, Bhupesh Sharma wrote:
> > > I still see the following issue on a 48-bit hardware (i.e. _non_
> > > ARMv8.2 hardware) with branch 'for-next/52-bit-kva' with commit
> > > d2d73d2fef421ca0d4 as the HEAD:
> >
> > Have you tried the patches I posted here:
> >
> > http://lists.infradead.org/pipermail/linux-arm-kernel/2019-August/673315.html
> >
> > ?
> >
> > Whilst they're being reviewed, I've dropped the 52-bit branch from
> > linux-next (for-next/core) so that people don't keep running into this.
> 
> Thanks, I will try the above and get back with my results.
> 
> However, just to make sure that the 52-bit changes are tested properly
> before landing in linux-next - as we had issues with the 52-bit
> userspace VA + PA changes in the past (which broke userspace) - I was
> wondering if we could have a dedicated branch carrying the v5 patches
> from Steve plus the fixes, so that they can be easily tested and any
> issues reported against a common reference.
> 
> Or, if such a branch already exists, kindly share a pointer to it.

I've pushed the current round of fixes on top of:

https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=for-next/52-bit-kva

Mark has spotted a couple of other issues, but they shouldn't hold up your
testing (although I'm going to hold off putting this back into -next until
we've got them resolved).

Mark -- please use the branch above as a basis for any additional fixes.
HEAD should be d0b3c32ed922.
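
(For reference, fetching that base should look something like the
following - just a sketch, with a locally chosen branch name:)

  git fetch https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-next/52-bit-kva
  git checkout -b 52-bit-kva FETCH_HEAD
  git rev-parse --short=12 HEAD   # expect d0b3c32ed922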

Thanks,

Will


* [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET)
  2019-08-07 15:55 ` [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
  2019-08-07 16:12   ` Catalin Marinas
@ 2019-08-14 15:20   ` Mark Rutland
  2019-08-14 15:57     ` Will Deacon
  2019-08-14 16:07     ` Steve Capper
  1 sibling, 2 replies; 38+ messages in thread
From: Mark Rutland @ 2019-08-14 15:20 UTC (permalink / raw)
  To: Steve Capper, will
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma, maz,
	linux-arm-kernel

Hi Steve,

On Wed, Aug 07, 2019 at 04:55:15PM +0100, Steve Capper wrote:
> +config KASAN_SHADOW_OFFSET
> +	hex
> +	depends on KASAN
> +	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
> +	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
> +	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
> +	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
> +	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
> +	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
> +	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
> +	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
> +	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
> +	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
> +	default 0xffffffffffffffff
> +
>  source "arch/arm64/Kconfig.platforms"
>  
>  menu "Kernel Features"
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index b2400f9c1213..2b7db0d41498 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -126,14 +126,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
>  KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
>  KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
>  
> -# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
> -#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
> -# in 32-bit arithmetic
> -KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
> -	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
> -	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
> -	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
> -
>  export	TEXT_OFFSET GZFLAGS
>  
>  core-y		+= arch/arm64/kernel/ arch/arm64/mm/

I've just spotted this breaks build using CONFIG_KASAN_SW_TAGS &&
CONFIG_KASAN_INLINE, as scripts/Makefile.kasan only propagates
CONFIG_KASAN_SHADOW_OFFSET into KASAN_SHADOW_OFFSET when
CONFIG_KASAN_GENERIC is selected, but consumes KASAN_SHADOW_OFFSET
regardless.
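
Concretely, the relevant shape of scripts/Makefile.kasan is roughly the
following (a simplified sketch, not the verbatim file):

  ifdef CONFIG_KASAN_GENERIC
  # the offset is only derived from Kconfig under CONFIG_KASAN_GENERIC...
  KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
  endif

  ifdef CONFIG_KASAN_SW_TAGS
  ifdef CONFIG_KASAN_INLINE
  # ...but it is consumed here regardless, so for SW_TAGS && INLINE it
  # now expands to the empty string, hence the clang splat below.
  instrumentation_flags := -mllvm -hwasan-mapping-offset=$(KASAN_SHADOW_OFFSET)
  endif
  endif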

I think that's by accident rather than by design, but to
minimize/localize the fixup, how about the below? I can send a cleanup
patch for scripts/Makefile.kasan later.

Build and boot tested with CONFIG_KASAN_{SW_TAGS,GENERIC} and
VA_BITS_52 (on a 48-bit VA system).
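
As a sanity check, the Kconfig constants above also reproduce what the
removed Makefile arithmetic used to compute; e.g. assuming VA_BITS=48
and KASAN_SHADOW_SCALE_SHIFT=3 (generic KASAN):

  $ printf "0x%08x00000000\n" $(( (0xffffffff & (-1 << (48 - 1 - 32))) \
        + (1 << (48 - 32 - 3)) - (1 << (64 - 32 - 3)) ))
  0xdfffa00000000000

... which matches the ARM64_VA_BITS_48 && !KASAN_SW_TAGS default.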

Thanks,
Mark.

---->8----
From b1a6f2dd5aa30d874c4bd97a20ea1330607da624 Mon Sep 17 00:00:00 2001
From: Mark Rutland <mark.rutland@arm.com>
Date: Wed, 14 Aug 2019 15:51:14 +0100
Subject: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE

Since commit:

  6bd1d0be0e97936d ("arm64: kasan: Switch to using KASAN_SHADOW_OFFSET")

... attempting to build with CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE
results in a splat:

| [mark@lakrids:~/src/linux]% usellvm 8.0.1 usekorg 8.1.0  make ARCH=arm64 CROSS_COMPILE=aarch64-linux- CC=clang
| scripts/kconfig/conf  --syncconfig Kconfig
|   CC      scripts/mod/empty.o
| clang (LLVM option parsing): for the -hwasan-mapping-offset option: '' value invalid for uint argument!
| scripts/Makefile.build:273: recipe for target 'scripts/mod/empty.o' failed
| make[1]: *** [scripts/mod/empty.o] Error 1
| Makefile:1123: recipe for target 'prepare0' failed
| make: *** [prepare0] Error 2

... since Makefile.kasan only consumes CONFIG_KASAN_SHADOW_OFFSET when
CONFIG_KASAN_GENERIC is selected, and for CONFIG_KASAN_SW_TAGS it consumes
KASAN_SHADOW_OFFSET (without a CONFIG_ prefix).

For the moment, let's always propagate CONFIG_KASAN_SHADOW_OFFSET into
KASAN_SHADOW_OFFSET via the arm64 Makefile. We can clean up the generic kasan
Makefile later down the line.

Fixes: 6bd1d0be0e97936d ("arm64: kasan: Switch to using KASAN_SHADOW_OFFSET")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm64/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index a8d2a241ac58..a0c733f93b5b 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -126,6 +126,8 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
 
+KASAN_SHADOW_OFFSET := $(CONFIG_KASAN_SHADOW_OFFSET)
+
 export	TEXT_OFFSET GZFLAGS
 
 core-y		+= arch/arm64/kernel/ arch/arm64/mm/
-- 
2.11.0



* Re: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET)
  2019-08-14 15:20   ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET) Mark Rutland
@ 2019-08-14 15:57     ` Will Deacon
  2019-08-14 16:03       ` Mark Rutland
  2019-08-14 16:07     ` Steve Capper
  1 sibling, 1 reply; 38+ messages in thread
From: Will Deacon @ 2019-08-14 15:57 UTC (permalink / raw)
  To: Mark Rutland
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, linux-arm-kernel

On Wed, Aug 14, 2019 at 04:20:18PM +0100, Mark Rutland wrote:
> On Wed, Aug 07, 2019 at 04:55:15PM +0100, Steve Capper wrote:
> > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> > index b2400f9c1213..2b7db0d41498 100644
> > --- a/arch/arm64/Makefile
> > +++ b/arch/arm64/Makefile
> > @@ -126,14 +126,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> >  KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> >  KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> >  
> > -# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
> > -#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
> > -# in 32-bit arithmetic
> > -KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
> > -	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
> > -	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
> > -	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
> > -
> >  export	TEXT_OFFSET GZFLAGS
> >  
> >  core-y		+= arch/arm64/kernel/ arch/arm64/mm/
> 
> I've just spotted this breaks build using CONFIG_KASAN_SW_TAGS &&
> CONFIG_KASAN_INLINE, as scripts/Makefile.kasan only propagates
> CONFIG_KASAN_SHADOW_OFFSET into KASAN_SHADOW_OFFSET when
> CONFIG_KASAN_GENERIC is selected, but consumes KASAN_SHADOW_OFFSET
> regardless.
> 
> I think that's by accident rather than by design, but to
> minimize/localize the fixup, how about the below? I can send a cleanup
> patch for scripts/Makefile.kasan later.

How much work is that? I've dropped this stuff from -next for now, so we
have time to fix it properly as long as it's not going to take weeks.

> ---->8----
> From b1a6f2dd5aa30d874c4bd97a20ea1330607da624 Mon Sep 17 00:00:00 2001
> From: Mark Rutland <mark.rutland@arm.com>
> Date: Wed, 14 Aug 2019 15:51:14 +0100
> Subject: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE
> 
> Since commit:
> 
>   6bd1d0be0e97936d ("arm64: kasan: Switch to using KASAN_SHADOW_OFFSET")
> 
> ... attempting to build with CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE
> results in a splat:
> 
> | [mark@lakrids:~/src/linux]% usellvm 8.0.1 usekorg 8.1.0  make ARCH=arm64 CROSS_COMPILE=aarch64-linux- CC=clang
> | scripts/kconfig/conf  --syncconfig Kconfig
> |   CC      scripts/mod/empty.o
> | clang (LLVM option parsing): for the -hwasan-mapping-offset option: '' value invalid for uint argument!
> | scripts/Makefile.build:273: recipe for target 'scripts/mod/empty.o' failed
> | make[1]: *** [scripts/mod/empty.o] Error 1
> | Makefile:1123: recipe for target 'prepare0' failed
> | make: *** [prepare0] Error 2
> 
> ... since Makefile.kasan only consumes CONFIG_KASAN_SHADOW_OFFSET when
> CONFIG_KASAN_GENERIC is selected, and for CONFIG_KASAN_SW_TAGS it consumes
> KASAN_SHADOW_OFFSET (without a CONFIG_ prefix).
> 
> For the moment, let's always propagate CONFIG_KASAN_SHADOW_OFFSET into
> KASAN_SHADOW_OFFSET via the arm64 Makefile. We can clean up the generic kasan
> Makefile later down the line.
> 
> Fixes: 6bd1d0be0e97936d ("arm64: kasan: Switch to using KASAN_SHADOW_OFFSET")
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Steve Capper <steve.capper@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm64/Makefile | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> index a8d2a241ac58..a0c733f93b5b 100644
> --- a/arch/arm64/Makefile
> +++ b/arch/arm64/Makefile
> @@ -126,6 +126,8 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
>  KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
>  KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
>  
> +KASAN_SHADOW_OFFSET := $(CONFIG_KASAN_SHADOW_OFFSET)

This needs a comment explaining what it's doing and that it's a dirty,
temporary hack.
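
Something along these lines, perhaps (just a sketch of the kind of
comment I mean, not necessarily the final wording):

  # KASAN_SHADOW_OFFSET is consumed by scripts/Makefile.kasan for both
  # generic and SW_TAGS KASAN, but Makefile.kasan only derives it from
  # Kconfig for CONFIG_KASAN_GENERIC. Propagate it here for now; this
  # is a dirty, temporary hack until Makefile.kasan is cleaned up.
  KASAN_SHADOW_OFFSET := $(CONFIG_KASAN_SHADOW_OFFSET)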

Will


* Re: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET)
  2019-08-14 15:57     ` Will Deacon
@ 2019-08-14 16:03       ` Mark Rutland
  2019-08-14 17:53         ` Steve Capper
  2019-08-15 12:09         ` Will Deacon
  0 siblings, 2 replies; 38+ messages in thread
From: Mark Rutland @ 2019-08-14 16:03 UTC (permalink / raw)
  To: Will Deacon, Andrey Ryabinin
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, maz, linux-arm-kernel

On Wed, Aug 14, 2019 at 04:57:11PM +0100, Will Deacon wrote:
> On Wed, Aug 14, 2019 at 04:20:18PM +0100, Mark Rutland wrote:
> > On Wed, Aug 07, 2019 at 04:55:15PM +0100, Steve Capper wrote:
> > > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> > > index b2400f9c1213..2b7db0d41498 100644
> > > --- a/arch/arm64/Makefile
> > > +++ b/arch/arm64/Makefile
> > > @@ -126,14 +126,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > >  KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > >  KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > >  
> > > -# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
> > > -#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
> > > -# in 32-bit arithmetic
> > > -KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
> > > -	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
> > > -	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
> > > -	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
> > > -
> > >  export	TEXT_OFFSET GZFLAGS
> > >  
> > >  core-y		+= arch/arm64/kernel/ arch/arm64/mm/
> > 
> > I've just spotted this breaks build using CONFIG_KASAN_SW_TAGS &&
> > CONFIG_KASAN_INLINE, as scripts/Makefile.kasan only propagates
> > CONFIG_KASAN_SHADOW_OFFSET into KASAN_SHADOW_OFFSET when
> > CONFIG_KASAN_GENERIC is selected, but consumes KASAN_SHADOW_OFFSET
> > regardless.
> > 
> > I think that's by accident rather than by design, but to
> > minimize/localize the fixup, how about the below? I can send a cleanup
> > patch for scripts/Makefile.kasan later.
> 
> How much work is that? I've dropped this stuff from -next for now, so we
> have time to fix it properly as long as it's not going to take weeks.

I wrote it first, so no effort; patch below.

Andrey, would you be happy with this?

Thanks,
Mark.

---->8----
From ecdf60051a850f817d98f84ae9011afa2311b8f1 Mon Sep 17 00:00:00 2001
From: Mark Rutland <mark.rutland@arm.com>
Date: Wed, 14 Aug 2019 15:31:57 +0100
Subject: [PATCH] kasan/arm64: fix CONFIG_KASAN_SW_TAGS && KASAN_INLINE

The generic Makefile.kasan propagates CONFIG_KASAN_SHADOW_OFFSET into
KASAN_SHADOW_OFFSET, but only does so for CONFIG_KASAN_GENERIC.

Since commit:

  6bd1d0be0e97936d ("arm64: kasan: Switch to using KASAN_SHADOW_OFFSET")

... arm64 defines CONFIG_KASAN_SHADOW_OFFSET in Kconfig rather than
defining KASAN_SHADOW_OFFSET in a Makefile. Thus, if
CONFIG_KASAN_SW_TAGS && KASAN_INLINE are selected, we get build time
splats due to KASAN_SHADOW_OFFSET not being set:

| [mark@lakrids:~/src/linux]% usellvm 8.0.1 usekorg 8.1.0  make ARCH=arm64 CROSS_COMPILE=aarch64-linux- CC=clang
| scripts/kconfig/conf  --syncconfig Kconfig
|   CC      scripts/mod/empty.o
| clang (LLVM option parsing): for the -hwasan-mapping-offset option: '' value invalid for uint argument!
| scripts/Makefile.build:273: recipe for target 'scripts/mod/empty.o' failed
| make[1]: *** [scripts/mod/empty.o] Error 1
| Makefile:1123: recipe for target 'prepare0' failed
| make: *** [prepare0] Error 2

Let's fix this by always propagating CONFIG_KASAN_SHADOW_OFFSET into
KASAN_SHADOW_OFFSET if CONFIG_KASAN is selected, moving the existing
common definition of CFLAGS_KASAN_NOSANITIZE to the top of
Makefile.kasan.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Steve Capper <steve.capper@arm.com>
Cc: Will Deacon <will@kernel.org>
---
 scripts/Makefile.kasan | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
index 6410bd22fe38..03757cc60e06 100644
--- a/scripts/Makefile.kasan
+++ b/scripts/Makefile.kasan
@@ -1,4 +1,9 @@
 # SPDX-License-Identifier: GPL-2.0
+ifdef CONFIG_KASAN
+CFLAGS_KASAN_NOSANITIZE := -fno-builtin
+KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
+endif
+
 ifdef CONFIG_KASAN_GENERIC
 
 ifdef CONFIG_KASAN_INLINE
@@ -7,8 +12,6 @@ else
 	call_threshold := 0
 endif
 
-KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
-
 CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address
 
 cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1)))
@@ -45,7 +48,3 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress \
 		$(instrumentation_flags)
 
 endif # CONFIG_KASAN_SW_TAGS
-
-ifdef CONFIG_KASAN
-CFLAGS_KASAN_NOSANITIZE := -fno-builtin
-endif
-- 
2.11.0



* Re: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET)
  2019-08-14 15:20   ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET) Mark Rutland
  2019-08-14 15:57     ` Will Deacon
@ 2019-08-14 16:07     ` Steve Capper
  2019-08-14 16:14       ` Steve Capper
  1 sibling, 1 reply; 38+ messages in thread
From: Steve Capper @ 2019-08-14 16:07 UTC (permalink / raw)
  To: Mark Rutland
  Cc: crecklin, ard.biesheuvel, Catalin Marinas, bhsharma, maz, nd,
	will, linux-arm-kernel

On Wed, Aug 14, 2019 at 04:20:18PM +0100, Mark Rutland wrote:
> Hi Steve,
>

Hi Mark,

> On Wed, Aug 07, 2019 at 04:55:15PM +0100, Steve Capper wrote:
> > +config KASAN_SHADOW_OFFSET
> > +	hex
> > +	depends on KASAN
> > +	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
> > +	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
> > +	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
> > +	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
> > +	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
> > +	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
> > +	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
> > +	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
> > +	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
> > +	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
> > +	default 0xffffffffffffffff
> > +
> >  source "arch/arm64/Kconfig.platforms"
> >  
> >  menu "Kernel Features"
> > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> > index b2400f9c1213..2b7db0d41498 100644
> > --- a/arch/arm64/Makefile
> > +++ b/arch/arm64/Makefile
> > @@ -126,14 +126,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> >  KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> >  KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> >  
> > -# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
> > -#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
> > -# in 32-bit arithmetic
> > -KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
> > -	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
> > -	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
> > -	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
> > -
> >  export	TEXT_OFFSET GZFLAGS
> >  
> >  core-y		+= arch/arm64/kernel/ arch/arm64/mm/
> 
> I've just spotted this breaks build using CONFIG_KASAN_SW_TAGS &&
> CONFIG_KASAN_INLINE, as scripts/Makefile.kasan only propagates
> CONFIG_KASAN_SHADOW_OFFSET into KASAN_SHADOW_OFFSET when
> CONFIG_KASAN_GENERIC is selected, but consumes KASAN_SHADOW_OFFSET
> regardless.
> 
> I think that's by accident rather than by design, but to
> minimize/localize the fixup, how about the below? I can send a cleanup
> patch for scripts/Makefile.kasan later.
> 
> Build and boot tested with CONFIG_KASAN_{SW_TAGS,GENERIC} and
> VA_BITS_52 (on a 48-bit VA system).
> 

I've tested this with VA_BITS_52 (booted with 52-bit) with inline
SW_TAGS and generic KASAN.

FWIW:
Tested-by: Steve Capper <steve.capper@arm.com>
Reviewed-by: Steve Capper <steve.capper@arm.com>

Agreed on taking this small fix now and the bigger Makefile.kasan fix later.

Cheers,
-- 
Steve


* Re: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET)
  2019-08-14 16:07     ` Steve Capper
@ 2019-08-14 16:14       ` Steve Capper
  0 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-14 16:14 UTC (permalink / raw)
  To: Mark Rutland
  Cc: crecklin, ard.biesheuvel, Catalin Marinas, bhsharma, maz, nd,
	will, linux-arm-kernel

On Wed, Aug 14, 2019 at 05:07:15PM +0100, Steve Capper wrote:
> On Wed, Aug 14, 2019 at 04:20:18PM +0100, Mark Rutland wrote:
> > Hi Steve,
> >
> 
> Hi Mark,
> 
> > On Wed, Aug 07, 2019 at 04:55:15PM +0100, Steve Capper wrote:
> > > +config KASAN_SHADOW_OFFSET
> > > +	hex
> > > +	depends on KASAN
> > > +	default 0xdfffa00000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && !KASAN_SW_TAGS
> > > +	default 0xdfffd00000000000 if ARM64_VA_BITS_47 && !KASAN_SW_TAGS
> > > +	default 0xdffffe8000000000 if ARM64_VA_BITS_42 && !KASAN_SW_TAGS
> > > +	default 0xdfffffd000000000 if ARM64_VA_BITS_39 && !KASAN_SW_TAGS
> > > +	default 0xdffffffa00000000 if ARM64_VA_BITS_36 && !KASAN_SW_TAGS
> > > +	default 0xefff900000000000 if (ARM64_VA_BITS_48 || ARM64_USER_VA_BITS_52) && KASAN_SW_TAGS
> > > +	default 0xefffc80000000000 if ARM64_VA_BITS_47 && KASAN_SW_TAGS
> > > +	default 0xeffffe4000000000 if ARM64_VA_BITS_42 && KASAN_SW_TAGS
> > > +	default 0xefffffc800000000 if ARM64_VA_BITS_39 && KASAN_SW_TAGS
> > > +	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
> > > +	default 0xffffffffffffffff
> > > +
> > >  source "arch/arm64/Kconfig.platforms"
> > >  
> > >  menu "Kernel Features"
> > > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> > > index b2400f9c1213..2b7db0d41498 100644
> > > --- a/arch/arm64/Makefile
> > > +++ b/arch/arm64/Makefile
> > > @@ -126,14 +126,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > >  KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > >  KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > >  
> > > -# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
> > > -#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
> > > -# in 32-bit arithmetic
> > > -KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
> > > -	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
> > > -	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
> > > -	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
> > > -
> > >  export	TEXT_OFFSET GZFLAGS
> > >  
> > >  core-y		+= arch/arm64/kernel/ arch/arm64/mm/
> > 
> > I've just spotted this breaks build using CONFIG_KASAN_SW_TAGS &&
> > CONFIG_KASAN_INLINE, as scripts/Makefile.kasan only propagates
> > CONFIG_KASAN_SHADOW_OFFSET into KASAN_SHADOW_OFFSET when
> > CONFIG_KASAN_GENERIC is selected, but consumes KASAN_SHADOW_OFFSET
> > regardless.
> > 
> > I think that's by accident rather than by design, but to
> > minimize/localize the fixup, how about the below? I can send a cleanup
> > patch for scripts/Makefile.kasan later.
> > 
> > Build and boot tested with CONFIG_KASAN_{SW_TAGS,GENERIC} and
> > VA_BITS_52 (on a 48-bit VA system).
> > 
> 
> I've tested this with VA_BITS_52 (booted with 52-bit) with inline
> SW_TAGS and generic KASAN.
> 
> FWIW:
> Tested-by: Steve Capper <steve.capper@arm.com>
> Reviewed-by: Steve Capper <steve.capper@arm.com>
> 
> Agreed for this small fix now and a bigger fix in Makefile.kasan later.
>

Apologies for the noise; I didn't notice the thread had progressed while
I was testing. I will test the improved patch :-).

Cheers,
-- 
Steve


* Re: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET)
  2019-08-14 16:03       ` Mark Rutland
@ 2019-08-14 17:53         ` Steve Capper
  2019-08-15 12:09         ` Will Deacon
  1 sibling, 0 replies; 38+ messages in thread
From: Steve Capper @ 2019-08-14 17:53 UTC (permalink / raw)
  To: Mark Rutland
  Cc: crecklin, ard.biesheuvel, Catalin Marinas, bhsharma, maz,
	Andrey Ryabinin, nd, Will Deacon, linux-arm-kernel

On Wed, Aug 14, 2019 at 05:03:24PM +0100, Mark Rutland wrote:
> On Wed, Aug 14, 2019 at 04:57:11PM +0100, Will Deacon wrote:
> > On Wed, Aug 14, 2019 at 04:20:18PM +0100, Mark Rutland wrote:
> > > On Wed, Aug 07, 2019 at 04:55:15PM +0100, Steve Capper wrote:
> > > > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> > > > index b2400f9c1213..2b7db0d41498 100644
> > > > --- a/arch/arm64/Makefile
> > > > +++ b/arch/arm64/Makefile
> > > > @@ -126,14 +126,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > > >  KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > > >  KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > > >  
> > > > -# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
> > > > -#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
> > > > -# in 32-bit arithmetic
> > > > -KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
> > > > -	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
> > > > -	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
> > > > -	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
> > > > -
> > > >  export	TEXT_OFFSET GZFLAGS
> > > >  
> > > >  core-y		+= arch/arm64/kernel/ arch/arm64/mm/
> > > 
> > > I've just spotted this breaks build using CONFIG_KASAN_SW_TAGS &&
> > > CONFIG_KASAN_INLINE, as scripts/Makefile.kasan only propagates
> > > CONFIG_KASAN_SHADOW_OFFSET into KASAN_SHADOW_OFFSET when
> > > CONFIG_KASAN_GENERIC is selected, but consumes KASAN_SHADOW_OFFSET
> > > regardless.
> > > 
> > > I think that's by accident rather than by design, but to
> > > minimize/localize the fixup, how about the below? I can send a cleanup
> > > patch for scripts/Makefile.kasan later.
> > 
> > How much work is that? I've dropped this stuff from -next for now, so we
> > have time to fix it properly as long as it's not going to take weeks.
> 
> I wrote it first, so no effort; patch below.
> 
> Andrey, would you be happy with this?
> 
> Thanks,
> Mark.

FWIW, this one worked well for me too (52-bit VA at runtime, SW_TAGS and
GENERIC, both inlined).
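
In case it helps anyone else reproduce, one quick way to check that the
offset is actually reaching the compiler now (a sketch; the value
printed depends on your VA_BITS/KASAN config, e.g. 0xefff900000000000
for 48/52-bit VAs with SW_TAGS):

  $ make ARCH=arm64 CROSS_COMPILE=aarch64-linux- CC=clang V=1 \
        scripts/mod/empty.o 2>&1 | grep -o 'hwasan-mapping-offset=[^ ]*'
  hwasan-mapping-offset=0xefff900000000000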

Tested-by: Steve Capper <steve.capper@arm.com>

Cheers,
-- 
Steve


* Re: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET)
  2019-08-14 16:03       ` Mark Rutland
  2019-08-14 17:53         ` Steve Capper
@ 2019-08-15 12:09         ` Will Deacon
  2019-08-15 12:21           ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE Andrey Ryabinin
  2019-08-20  6:02           ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET) Bhupesh Sharma
  1 sibling, 2 replies; 38+ messages in thread
From: Will Deacon @ 2019-08-15 12:09 UTC (permalink / raw)
  To: Mark Rutland
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, kasan-dev, glider, dvyukov, maz, Andrey Ryabinin,
	linux-arm-kernel

[+more kasan people and the kasan-dev list]

On Wed, Aug 14, 2019 at 05:03:24PM +0100, Mark Rutland wrote:
> On Wed, Aug 14, 2019 at 04:57:11PM +0100, Will Deacon wrote:
> > On Wed, Aug 14, 2019 at 04:20:18PM +0100, Mark Rutland wrote:
> > > On Wed, Aug 07, 2019 at 04:55:15PM +0100, Steve Capper wrote:
> > > > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> > > > index b2400f9c1213..2b7db0d41498 100644
> > > > --- a/arch/arm64/Makefile
> > > > +++ b/arch/arm64/Makefile
> > > > @@ -126,14 +126,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > > >  KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > > >  KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > > >  
> > > > -# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
> > > > -#				 - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
> > > > -# in 32-bit arithmetic
> > > > -KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
> > > > -	(0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
> > > > -	+ (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
> > > > -	- (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
> > > > -
> > > >  export	TEXT_OFFSET GZFLAGS
> > > >  
> > > >  core-y		+= arch/arm64/kernel/ arch/arm64/mm/
> > > 
> > > I've just spotted this breaks build using CONFIG_KASAN_SW_TAGS &&
> > > CONFIG_KASAN_INLINE, as scripts/Makefile.kasan only propagates
> > > CONFIG_KASAN_SHADOW_OFFSET into KASAN_SHADOW_OFFSET when
> > > CONFIG_KASAN_GENERIC is selected, but consumes KASAN_SHADOW_OFFSET
> > > regardless.
> > > 
> > > I think that's by accident rather than by design, but to
> > > minimize/localize the fixup, how about the below? I can send a cleanup
> > > patch for scripts/Makefile.kasan later.
> > 
> > How much work is that? I've dropped this stuff from -next for now, so we
> > have time to fix it properly as long as it's not going to take weeks.
> 
> I wrote it first, so no effort; patch below.

The patch looks fine to me, but I'd like an Ack from one of the KASAN
folks before I queue this via the arm64 tree (where support for 52-bit
virtual addressing in the kernel [1] depends on this being fixed).

Patch is quoted below. Please can somebody take a look?

Thanks,

Will

[1] https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=for-next/52-bit-kva

> From ecdf60051a850f817d98f84ae9011afa2311b8f1 Mon Sep 17 00:00:00 2001
> From: Mark Rutland <mark.rutland@arm.com>
> Date: Wed, 14 Aug 2019 15:31:57 +0100
> Subject: [PATCH] kasan/arm64: fix CONFIG_KASAN_SW_TAGS && KASAN_INLINE
> 
> The generic Makefile.kasan propagates CONFIG_KASAN_SHADOW_OFFSET into
> KASAN_SHADOW_OFFSET, but only does so for CONFIG_KASAN_GENERIC.
> 
> Since commit:
> 
>   6bd1d0be0e97936d ("arm64: kasan: Switch to using KASAN_SHADOW_OFFSET")
> 
> ... arm64 defines CONFIG_KASAN_SHADOW_OFFSET in Kconfig rather than
> defining KASAN_SHADOW_OFFSET in a Makefile. Thus, if
> CONFIG_KASAN_SW_TAGS && KASAN_INLINE are selected, we get build time
> splats due to KASAN_SHADOW_OFFSET not being set:
> 
> | [mark@lakrids:~/src/linux]% usellvm 8.0.1 usekorg 8.1.0  make ARCH=arm64 CROSS_COMPILE=aarch64-linux- CC=clang
> | scripts/kconfig/conf  --syncconfig Kconfig
> |   CC      scripts/mod/empty.o
> | clang (LLVM option parsing): for the -hwasan-mapping-offset option: '' value invalid for uint argument!
> | scripts/Makefile.build:273: recipe for target 'scripts/mod/empty.o' failed
> | make[1]: *** [scripts/mod/empty.o] Error 1
> | Makefile:1123: recipe for target 'prepare0' failed
> | make: *** [prepare0] Error 2
> 
> Let's fix this by always propagating CONFIG_KASAN_SHADOW_OFFSET into
> KASAN_SHADOW_OFFSET if CONFIG_KASAN is selected, moving the existing
> common definition of CFLAGS_KASAN_NOSANITIZE to the top of
> Makefile.kasan.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Steve Capper <steve.capper@arm.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  scripts/Makefile.kasan | 11 +++++------
>  1 file changed, 5 insertions(+), 6 deletions(-)
> 
> diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
> index 6410bd22fe38..03757cc60e06 100644
> --- a/scripts/Makefile.kasan
> +++ b/scripts/Makefile.kasan
> @@ -1,4 +1,9 @@
>  # SPDX-License-Identifier: GPL-2.0
> +ifdef CONFIG_KASAN
> +CFLAGS_KASAN_NOSANITIZE := -fno-builtin
> +KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
> +endif
> +
>  ifdef CONFIG_KASAN_GENERIC
>  
>  ifdef CONFIG_KASAN_INLINE
> @@ -7,8 +12,6 @@ else
>  	call_threshold := 0
>  endif
>  
> -KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
> -
>  CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address
>  
>  cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1)))
> @@ -45,7 +48,3 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress \
>  		$(instrumentation_flags)
>  
>  endif # CONFIG_KASAN_SW_TAGS
> -
> -ifdef CONFIG_KASAN
> -CFLAGS_KASAN_NOSANITIZE := -fno-builtin
> -endif
> -- 
> 2.11.0
> 


* Re: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE
  2019-08-15 12:09         ` Will Deacon
@ 2019-08-15 12:21           ` Andrey Ryabinin
  2019-08-15 12:22             ` Will Deacon
  2019-08-20  6:02           ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET) Bhupesh Sharma
  1 sibling, 1 reply; 38+ messages in thread
From: Andrey Ryabinin @ 2019-08-15 12:21 UTC (permalink / raw)
  To: Will Deacon, Mark Rutland
  Cc: crecklin, ard.biesheuvel, catalin.marinas, bhsharma,
	Steve Capper, kasan-dev, glider, dvyukov, maz, linux-arm-kernel

On 8/15/19 3:09 PM, Will Deacon wrote:

> On Wed, Aug 14, 2019 at 05:03:24PM +0100, Mark Rutland wrote:
>> From ecdf60051a850f817d98f84ae9011afa2311b8f1 Mon Sep 17 00:00:00 2001
>> From: Mark Rutland <mark.rutland@arm.com>
>> Date: Wed, 14 Aug 2019 15:31:57 +0100
>> Subject: [PATCH] kasan/arm64: fix CONFIG_KASAN_SW_TAGS && KASAN_INLINE
>>
>> The generic Makefile.kasan propagates CONFIG_KASAN_SHADOW_OFFSET into
>> KASAN_SHADOW_OFFSET, but only does so for CONFIG_KASAN_GENERIC.
>>
>> Since commit:
>>
>>   6bd1d0be0e97936d ("arm64: kasan: Switch to using KASAN_SHADOW_OFFSET")
>>
>> ... arm64 defines CONFIG_KASAN_SHADOW_OFFSET in Kconfig rather than
>> defining KASAN_SHADOW_OFFSET in a Makefile. Thus, if
>> CONFIG_KASAN_SW_TAGS && KASAN_INLINE are selected, we get build time
>> splats due to KASAN_SHADOW_OFFSET not being set:
>>
>> | [mark@lakrids:~/src/linux]% usellvm 8.0.1 usekorg 8.1.0  make ARCH=arm64 CROSS_COMPILE=aarch64-linux- CC=clang
>> | scripts/kconfig/conf  --syncconfig Kconfig
>> |   CC      scripts/mod/empty.o
>> | clang (LLVM option parsing): for the -hwasan-mapping-offset option: '' value invalid for uint argument!
>> | scripts/Makefile.build:273: recipe for target 'scripts/mod/empty.o' failed
>> | make[1]: *** [scripts/mod/empty.o] Error 1
>> | Makefile:1123: recipe for target 'prepare0' failed
>> | make: *** [prepare0] Error 2
>>
>> Let's fix this by always propagating CONFIG_KASAN_SHADOW_OFFSET into
>> KASAN_SHADOW_OFFSET if CONFIG_KASAN is selected, moving the existing
>> common definition of CFLAGS_KASAN_NOSANITIZE to the top of
>> Makefile.kasan.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
>> Cc: Catalin Marinas <catalin.marinas@arm.com>
>> Cc: Steve Capper <steve.capper@arm.com>
>> Cc: Will Deacon <will@kernel.org>
>> ---


Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>


* Re: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE
  2019-08-15 12:21           ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE Andrey Ryabinin
@ 2019-08-15 12:22             ` Will Deacon
  0 siblings, 0 replies; 38+ messages in thread
From: Will Deacon @ 2019-08-15 12:22 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Mark Rutland, crecklin, ard.biesheuvel, catalin.marinas,
	bhsharma, Steve Capper, kasan-dev, glider, dvyukov, maz,
	linux-arm-kernel

On Thu, Aug 15, 2019 at 03:21:48PM +0300, Andrey Ryabinin wrote:
> On 8/15/19 3:09 PM, Will Deacon wrote:
> 
> > On Wed, Aug 14, 2019 at 05:03:24PM +0100, Mark Rutland wrote:
> >> From ecdf60051a850f817d98f84ae9011afa2311b8f1 Mon Sep 17 00:00:00 2001
> >> From: Mark Rutland <mark.rutland@arm.com>
> >> Date: Wed, 14 Aug 2019 15:31:57 +0100
> >> Subject: [PATCH] kasan/arm64: fix CONFIG_KASAN_SW_TAGS && KASAN_INLINE
> >>
> >> The generic Makefile.kasan propagates CONFIG_KASAN_SHADOW_OFFSET into
> >> KASAN_SHADOW_OFFSET, but only does so for CONFIG_KASAN_GENERIC.
> >>
> >> Since commit:
> >>
> >>   6bd1d0be0e97936d ("arm64: kasan: Switch to using KASAN_SHADOW_OFFSET")
> >>
> >> ... arm64 defines CONFIG_KASAN_SHADOW_OFFSET in Kconfig rather than
> >> defining KASAN_SHADOW_OFFSET in a Makefile. Thus, if
> >> CONFIG_KASAN_SW_TAGS && KASAN_INLINE are selected, we get build time
> >> splats due to KASAN_SHADOW_OFFSET not being set:
> >>
> >> | [mark@lakrids:~/src/linux]% usellvm 8.0.1 usekorg 8.1.0  make ARCH=arm64 CROSS_COMPILE=aarch64-linux- CC=clang
> >> | scripts/kconfig/conf  --syncconfig Kconfig
> >> |   CC      scripts/mod/empty.o
> >> | clang (LLVM option parsing): for the -hwasan-mapping-offset option: '' value invalid for uint argument!
> >> | scripts/Makefile.build:273: recipe for target 'scripts/mod/empty.o' failed
> >> | make[1]: *** [scripts/mod/empty.o] Error 1
> >> | Makefile:1123: recipe for target 'prepare0' failed
> >> | make: *** [prepare0] Error 2
> >>
> >> Let's fix this by always propagating CONFIG_KASAN_SHADOW_OFFSET into
> >> KASAN_SHADOW_OFFSET if CONFIG_KASAN is selected, moving the existing
> >> common definition of CFLAGS_KASAN_NOSANITIZE to the top of
> >> Makefile.kasan.
> >>
> >> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> >> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> >> Cc: Catalin Marinas <catalin.marinas@arm.com>
> >> Cc: Steve Capper <steve.capper@arm.com>
> >> Cc: Will Deacon <will@kernel.org>
> >> ---
> 
> 
> Acked-by: Andrey Ryabinin <aryabinin@virtuozzo.com>

Thanks, Andrey!

Will


* Re: [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET)
  2019-08-15 12:09         ` Will Deacon
  2019-08-15 12:21           ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE Andrey Ryabinin
@ 2019-08-20  6:02           ` Bhupesh Sharma
  1 sibling, 0 replies; 38+ messages in thread
From: Bhupesh Sharma @ 2019-08-20  6:02 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Christoph von Recklinghausen, Ard Biesheuvel,
	Catalin Marinas, Steve Capper, kasan-dev, glider, Dmitry Vyukov,
	maz, Andrey Ryabinin, linux-arm-kernel

On Thu, Aug 15, 2019 at 5:39 PM Will Deacon <will@kernel.org> wrote:
>
> [+more kasan people and the kasan-dev list]
>
> On Wed, Aug 14, 2019 at 05:03:24PM +0100, Mark Rutland wrote:
> > On Wed, Aug 14, 2019 at 04:57:11PM +0100, Will Deacon wrote:
> > > On Wed, Aug 14, 2019 at 04:20:18PM +0100, Mark Rutland wrote:
> > > > On Wed, Aug 07, 2019 at 04:55:15PM +0100, Steve Capper wrote:
> > > > > diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
> > > > > index b2400f9c1213..2b7db0d41498 100644
> > > > > --- a/arch/arm64/Makefile
> > > > > +++ b/arch/arm64/Makefile
> > > > > @@ -126,14 +126,6 @@ KBUILD_CFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > > > >  KBUILD_CPPFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > > > >  KBUILD_AFLAGS += -DKASAN_SHADOW_SCALE_SHIFT=$(KASAN_SHADOW_SCALE_SHIFT)
> > > > >
> > > > > -# KASAN_SHADOW_OFFSET = VA_START + (1 << (VA_BITS - KASAN_SHADOW_SCALE_SHIFT))
> > > > > -#                               - (1 << (64 - KASAN_SHADOW_SCALE_SHIFT))
> > > > > -# in 32-bit arithmetic
> > > > > -KASAN_SHADOW_OFFSET := $(shell printf "0x%08x00000000\n" $$(( \
> > > > > -       (0xffffffff & (-1 << ($(CONFIG_ARM64_VA_BITS) - 1 - 32))) \
> > > > > -       + (1 << ($(CONFIG_ARM64_VA_BITS) - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) \
> > > > > -       - (1 << (64 - 32 - $(KASAN_SHADOW_SCALE_SHIFT))) )) )
> > > > > -
> > > > >  export TEXT_OFFSET GZFLAGS
> > > > >
> > > > >  core-y         += arch/arm64/kernel/ arch/arm64/mm/
> > > >
> > > > I've just spotted this breaks build using CONFIG_KASAN_SW_TAGS &&
> > > > CONFIG_KASAN_INLINE, as scripts/Makefile.kasan only propagates
> > > > CONFIG_KASAN_SHADOW_OFFSET into KASAN_SHADOW_OFFSET when
> > > > CONFIG_KASAN_GENERIC is selected, but consumes KASAN_SHADOW_OFFSET
> > > > regardless.
> > > >
> > > > I think that's by accident rather than by design, but to
> > > > minimize/localize the fixup, how about the below? I can send a cleanup
> > > > patch for scripts/Makefile.kasan later.
> > >
> > > How much work is that? I've dropped this stuff from -next for now, so we
> > > have time to fix it properly as long as it's not going to take weeks.
> >
> > I wrote it first, so no effort; patch below.
>
> The patch looks fine to me, but I'd like an Ack from one of the KASAN
> folks before I queue this via the arm64 tree (where support for 52-bit
> virtual addressing in the kernel [1] depends on this being fixed).
>
> Patch is quoted below. Please can somebody take a look?

I tested this on my HPE and APM arm64 hardware boxes, and the issue I
reported via <http://lists.infradead.org/pipermail/linux-arm-kernel/2019-August/673424.html>
seems fixed, so:

Tested-by: Bhupesh Sharma <bhsharma@redhat.com>

Thanks,
Bhupesh

> [1] https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git/log/?h=for-next/52-bit-kva
>
> > From ecdf60051a850f817d98f84ae9011afa2311b8f1 Mon Sep 17 00:00:00 2001
> > From: Mark Rutland <mark.rutland@arm.com>
> > Date: Wed, 14 Aug 2019 15:31:57 +0100
> > Subject: [PATCH] kasan/arm64: fix CONFIG_KASAN_SW_TAGS && KASAN_INLINE
> >
> > The generic Makefile.kasan propagates CONFIG_KASAN_SHADOW_OFFSET into
> > KASAN_SHADOW_OFFSET, but only does so for CONFIG_KASAN_GENERIC.
> >
> > Since commit:
> >
> >   6bd1d0be0e97936d ("arm64: kasan: Switch to using KASAN_SHADOW_OFFSET")
> >
> > ... arm64 defines CONFIG_KASAN_SHADOW_OFFSET in Kconfig rather than
> > defining KASAN_SHADOW_OFFSET in a Makefile. Thus, if
> > CONFIG_KASAN_SW_TAGS && KASAN_INLINE are selected, we get build time
> > splats due to KASAN_SHADOW_OFFSET not being set:
> >
> > | [mark@lakrids:~/src/linux]% usellvm 8.0.1 usekorg 8.1.0  make ARCH=arm64 CROSS_COMPILE=aarch64-linux- CC=clang
> > | scripts/kconfig/conf  --syncconfig Kconfig
> > |   CC      scripts/mod/empty.o
> > | clang (LLVM option parsing): for the -hwasan-mapping-offset option: '' value invalid for uint argument!
> > | scripts/Makefile.build:273: recipe for target 'scripts/mod/empty.o' failed
> > | make[1]: *** [scripts/mod/empty.o] Error 1
> > | Makefile:1123: recipe for target 'prepare0' failed
> > | make: *** [prepare0] Error 2
> >
> > Let's fix this by always propagating CONFIG_KASAN_SHADOW_OFFSET into
> > KASAN_SHADOW_OFFSET if CONFIG_KASAN is selected, moving the existing
> > common definition of +CFLAGS_KASAN_NOSANITIZE to the top of
> > Makefile.kasan.
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Steve Capper <steve.capper@arm.com>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >  scripts/Makefile.kasan | 11 +++++------
> >  1 file changed, 5 insertions(+), 6 deletions(-)
> >
> > diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
> > index 6410bd22fe38..03757cc60e06 100644
> > --- a/scripts/Makefile.kasan
> > +++ b/scripts/Makefile.kasan
> > @@ -1,4 +1,9 @@
> >  # SPDX-License-Identifier: GPL-2.0
> > +ifdef CONFIG_KASAN
> > +CFLAGS_KASAN_NOSANITIZE := -fno-builtin
> > +KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
> > +endif
> > +
> >  ifdef CONFIG_KASAN_GENERIC
> >
> >  ifdef CONFIG_KASAN_INLINE
> > @@ -7,8 +12,6 @@ else
> >       call_threshold := 0
> >  endif
> >
> > -KASAN_SHADOW_OFFSET ?= $(CONFIG_KASAN_SHADOW_OFFSET)
> > -
> >  CFLAGS_KASAN_MINIMAL := -fsanitize=kernel-address
> >
> >  cc-param = $(call cc-option, -mllvm -$(1), $(call cc-option, --param $(1)))
> > @@ -45,7 +48,3 @@ CFLAGS_KASAN := -fsanitize=kernel-hwaddress \
> >               $(instrumentation_flags)
> >
> >  endif # CONFIG_KASAN_SW_TAGS
> > -
> > -ifdef CONFIG_KASAN
> > -CFLAGS_KASAN_NOSANITIZE := -fno-builtin
> > -endif
> > --
> > 2.11.0
> >


end of thread, other threads:[~2019-08-20  6:02 UTC | newest]

Thread overview: 38+ messages
2019-08-07 15:55 [PATCH V5 00/12] 52-bit kernel + user VAs Steve Capper
2019-08-07 15:55 ` [PATCH V5 01/12] arm64: mm: Remove bit-masking optimisations for PAGE_OFFSET and VMEMMAP_START Steve Capper
2019-08-07 15:55 ` [PATCH V5 02/12] arm64: mm: Flip kernel VA space Steve Capper
2019-08-07 16:12   ` Catalin Marinas
2019-08-07 15:55 ` [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET Steve Capper
2019-08-07 16:12   ` Catalin Marinas
2019-08-14 15:20   ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET) Mark Rutland
2019-08-14 15:57     ` Will Deacon
2019-08-14 16:03       ` Mark Rutland
2019-08-14 17:53         ` Steve Capper
2019-08-15 12:09         ` Will Deacon
2019-08-15 12:21           ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE Andrey Ryabinin
2019-08-15 12:22             ` Will Deacon
2019-08-20  6:02           ` [PATCH] arm64: fix CONFIG_KASAN_SW_TAGS && CONFIG_KASAN_INLINE (was: Re: [PATCH V5 03/12] arm64: kasan: Switch to using KASAN_SHADOW_OFFSET) Bhupesh Sharma
2019-08-14 16:07     ` Steve Capper
2019-08-14 16:14       ` Steve Capper
2019-08-07 15:55 ` [PATCH V5 04/12] arm64: dump: De-constify VA_START and KASAN_SHADOW_START Steve Capper
2019-08-07 15:55 ` [PATCH V5 05/12] arm64: mm: Introduce VA_BITS_MIN Steve Capper
2019-08-07 16:14   ` Catalin Marinas
2019-08-07 15:55 ` [PATCH V5 06/12] arm64: mm: Introduce vabits_actual Steve Capper
2019-08-07 16:16   ` Catalin Marinas
2019-08-07 15:55 ` [PATCH V5 07/12] arm64: mm: Logic to make offset_ttbr1 conditional Steve Capper
2019-08-07 15:55 ` [PATCH V5 08/12] arm64: mm: Separate out vmemmap Steve Capper
2019-08-07 15:55 ` [PATCH V5 09/12] arm64: mm: Modify calculation of VMEMMAP_SIZE Steve Capper
2019-08-07 15:55 ` [PATCH V5 10/12] arm64: mm: Introduce 52-bit Kernel VAs Steve Capper
2019-08-07 15:55 ` [PATCH V5 11/12] arm64: mm: Remove vabits_user Steve Capper
2019-08-07 16:17   ` Catalin Marinas
2019-08-07 15:55 ` [PATCH V5 12/12] docs: arm64: Add layout and 52-bit info to memory document Steve Capper
2019-08-09 16:47 ` [PATCH V5 00/12] 52-bit kernel + user VAs Will Deacon
2019-08-13 11:23   ` Steve Capper
2019-08-13 11:59     ` Will Deacon
2019-08-13 12:43   ` Geert Uytterhoeven
2019-08-13 13:10     ` Will Deacon
2019-08-13 13:36       ` Geert Uytterhoeven
2019-08-14  8:04         ` Bhupesh Sharma
2019-08-14  8:21           ` Will Deacon
2019-08-14 11:59             ` Bhupesh Sharma
2019-08-14 12:24               ` Will Deacon
