* [PATCH V3 0/5] 52-bit userspace VAs
@ 2018-11-14 13:39 ` Steve Capper
  0 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-14 13:39 UTC (permalink / raw)
  To: linux-mm, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, ard.biesheuvel, jcm, Steve Capper

This patch series brings support for 52-bit userspace VAs to systems that
have ARMv8.2-LVA and are running with a 48-bit VA_BITS and a 64KB
PAGE_SIZE.

If no hardware support is present, the kernel runs with a 48-bit VA space
for userspace.

Userspace can exploit this feature by providing an address hint to mmap
where addr[51:48] != 0. Otherwise, all VA mappings behave in the same way
as on a 48-bit VA system (this maintains compatibility with software that
assumes the maximum VA size on arm64 is 48 bits).
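
For example, a userspace program could request a high mapping roughly as
follows (a minimal sketch; the hint value is arbitrary, any address with
bits [51:48] set will do, and the kernel may still place the mapping
elsewhere within the 52-bit range):

  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
          /* Hint with addr[51:48] != 0: ask for a VA above the 48-bit limit. */
          void *hint = (void *)(1UL << 50);
          void *p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (p == MAP_FAILED)
                  return 1;

          /* On 52-bit capable hardware and kernel this can land above 2^48;
           * otherwise the mapping stays within the 48-bit window.
           */
          printf("mapped at %p\n", p);
          munmap(p, 4096);
          return 0;
  }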

This patch series applies to 4.20-rc1.

Testing was carried out on a model, booting via Trusted Firmware and UEFI.

Changes in V3: COMPAT fixes added (and tested with 32-bit userspace code),
and an extra patch added to allow forcing all userspace allocations to come
from 52 bits (for debugging and testing).

The major change from V2 of the series is that mm/mmap.c is altered in the
first patch of the series (rather than being copied over to arch/arm64).


Steve Capper (5):
  mm: mmap: Allow for "high" userspace addresses
  arm64: mm: Introduce DEFAULT_MAP_WINDOW
  arm64: mm: Define arch_get_mmap_end, arch_get_mmap_base
  arm64: mm: introduce 52-bit userspace support
  arm64: mm: Allow forcing all userspace addresses to 52-bit

 arch/arm64/Kconfig                      | 18 ++++++++++++++++++
 arch/arm64/include/asm/assembler.h      |  7 +++----
 arch/arm64/include/asm/elf.h            |  4 ++++
 arch/arm64/include/asm/mmu_context.h    |  3 +++
 arch/arm64/include/asm/pgalloc.h        |  4 ++++
 arch/arm64/include/asm/pgtable.h        | 16 +++++++++++++---
 arch/arm64/include/asm/processor.h      | 33 ++++++++++++++++++++++++++++-----
 arch/arm64/kernel/head.S                | 13 +++++++++++++
 arch/arm64/mm/fault.c                   |  2 +-
 arch/arm64/mm/init.c                    |  2 +-
 arch/arm64/mm/mmu.c                     |  1 +
 arch/arm64/mm/proc.S                    | 10 +++++++++-
 drivers/firmware/efi/arm-runtime.c      |  2 +-
 drivers/firmware/efi/libstub/arm-stub.c |  2 +-
 mm/mmap.c                               | 25 ++++++++++++++++++-------
 15 files changed, 118 insertions(+), 24 deletions(-)

-- 
2.11.0

^ permalink raw reply	[flat|nested] 38+ messages in thread

* [PATCH V3 1/5] mm: mmap: Allow for "high" userspace addresses
  2018-11-14 13:39 ` Steve Capper
@ 2018-11-14 13:39   ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-14 13:39 UTC (permalink / raw)
  To: linux-mm, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, ard.biesheuvel, jcm, Steve Capper

This patch adds support for "high" userspace addresses that are
optionally supported on the system and have to be requested via a hint
mechanism ("high" addr parameter to mmap).

Architectures such as powerpc and x86 achieve this by making changes to
their architectural versions of arch_get_unmapped_* functions. However,
on arm64 we use the generic versions of these functions.

Rather than duplicate the generic arch_get_unmapped_* implementations
for arm64, this patch instead introduces two architectural helper macros
and applies them to arch_get_unmapped_*:
 arch_get_mmap_end(addr) - get mmap upper limit depending on addr hint
 arch_get_mmap_base(addr, base) - get mmap_base depending on addr hint

If these macros are not defined in architectural code then they default
to (TASK_SIZE) and (base), so they should not introduce any behavioural
changes to architectures that do not define them.
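
As an illustration only (arm64 wires these up in a later patch in this
series), an architecture opting in would provide its own definitions in
its headers, along these lines:

  /* Hypothetical override in an architecture's asm/processor.h: only
   * open up the extended range when userspace passed a "high" hint.
   */
  #define arch_get_mmap_end(addr) \
          ((addr) > DEFAULT_MAP_WINDOW ? TASK_SIZE : DEFAULT_MAP_WINDOW)

  #define arch_get_mmap_base(addr, base) \
          ((addr) > DEFAULT_MAP_WINDOW ? \
           (base) + TASK_SIZE - DEFAULT_MAP_WINDOW : (base))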

Signed-off-by: Steve Capper <steve.capper@arm.com>

---
Changed in V3: commit log cleaned up, and an explanation given for why the
core code is changed rather than duplicating it in arch code.
---
 mm/mmap.c | 25 ++++++++++++++++++-------
 1 file changed, 18 insertions(+), 7 deletions(-)

diff --git a/mm/mmap.c b/mm/mmap.c
index 6c04292e16a7..7bb64381e77c 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2066,6 +2066,15 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
 	return gap_end;
 }
 
+
+#ifndef arch_get_mmap_end
+#define arch_get_mmap_end(addr)	(TASK_SIZE)
+#endif
+
+#ifndef arch_get_mmap_base
+#define arch_get_mmap_base(addr, base) (base)
+#endif
+
 /* Get an address range which is currently unmapped.
  * For shmat() with addr=0.
  *
@@ -2085,8 +2094,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct mm_struct *mm = current->mm;
 	struct vm_area_struct *vma, *prev;
 	struct vm_unmapped_area_info info;
+	const unsigned long mmap_end = arch_get_mmap_end(addr);
 
-	if (len > TASK_SIZE - mmap_min_addr)
+	if (len > mmap_end - mmap_min_addr)
 		return -ENOMEM;
 
 	if (flags & MAP_FIXED)
@@ -2095,7 +2105,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	if (addr) {
 		addr = PAGE_ALIGN(addr);
 		vma = find_vma_prev(mm, addr, &prev);
-		if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+		if (mmap_end - len >= addr && addr >= mmap_min_addr &&
 		    (!vma || addr + len <= vm_start_gap(vma)) &&
 		    (!prev || addr >= vm_end_gap(prev)))
 			return addr;
@@ -2104,7 +2114,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	info.flags = 0;
 	info.length = len;
 	info.low_limit = mm->mmap_base;
-	info.high_limit = TASK_SIZE;
+	info.high_limit = mmap_end;
 	info.align_mask = 0;
 	return vm_unmapped_area(&info);
 }
@@ -2124,9 +2134,10 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	struct mm_struct *mm = current->mm;
 	unsigned long addr = addr0;
 	struct vm_unmapped_area_info info;
+	const unsigned long mmap_end = arch_get_mmap_end(addr);
 
 	/* requested length too big for entire address space */
-	if (len > TASK_SIZE - mmap_min_addr)
+	if (len > mmap_end - mmap_min_addr)
 		return -ENOMEM;
 
 	if (flags & MAP_FIXED)
@@ -2136,7 +2147,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	if (addr) {
 		addr = PAGE_ALIGN(addr);
 		vma = find_vma_prev(mm, addr, &prev);
-		if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+		if (mmap_end - len >= addr && addr >= mmap_min_addr &&
 				(!vma || addr + len <= vm_start_gap(vma)) &&
 				(!prev || addr >= vm_end_gap(prev)))
 			return addr;
@@ -2145,7 +2156,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
 	info.low_limit = max(PAGE_SIZE, mmap_min_addr);
-	info.high_limit = mm->mmap_base;
+	info.high_limit = arch_get_mmap_base(addr, mm->mmap_base);
 	info.align_mask = 0;
 	addr = vm_unmapped_area(&info);
 
@@ -2159,7 +2170,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		VM_BUG_ON(addr != -ENOMEM);
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
-		info.high_limit = TASK_SIZE;
+		info.high_limit = mmap_end;
 		addr = vm_unmapped_area(&info);
 	}
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH V3 2/5] arm64: mm: Introduce DEFAULT_MAP_WINDOW
  2018-11-14 13:39 ` Steve Capper
@ 2018-11-14 13:39   ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-14 13:39 UTC (permalink / raw)
  To: linux-mm, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, ard.biesheuvel, jcm, Steve Capper

We wish to introduce a 52-bit virtual address space for userspace but
maintain compatibility with software that assumes the maximum VA space
size is 48 bits.

In order to achieve this, on 52-bit VA systems, we make mmap behave as
if it were running on a 48-bit VA system (unless userspace explicitly
requests a VA where addr[51:48] != 0).

On a system running a 52-bit userspace we need TASK_SIZE to represent
the 52-bit limit as it is used in various places to distinguish between
kernelspace and userspace addresses.

Thus we need a new limit for mmap, stack, ELF loader and EFI (which uses
TTBR0) to represent the non-extended VA space.

This patch introduces DEFAULT_MAP_WINDOW and DEFAULT_MAP_WINDOW_64 and
switches the appropriate logic to use these instead of TASK_SIZE.
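
Concretely, for the configuration targeted by this series (VA_BITS == 48
with 64KB pages), the two limits work out roughly as follows once 52-bit
support is enabled later in the series (illustrative values only):

  /* DEFAULT_MAP_WINDOW_64 = 1UL << 48 = 0x0001000000000000
   * TASK_SIZE_64          = 1UL << 52 = 0x0010000000000000
   *
   * mmap, the stack, the ELF loader and EFI stay below
   * DEFAULT_MAP_WINDOW_64 unless a high hint is supplied, while
   * TASK_SIZE still marks the user/kernel address boundary.
   */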

Signed-off-by: Steve Capper <steve.capper@arm.com>

---

Changed in V3: corrections to allow COMPAT 32-bit EL0 mode to work
---
 arch/arm64/include/asm/elf.h            |  2 +-
 arch/arm64/include/asm/processor.h      | 10 ++++++++--
 arch/arm64/mm/init.c                    |  2 +-
 drivers/firmware/efi/arm-runtime.c      |  2 +-
 drivers/firmware/efi/libstub/arm-stub.c |  2 +-
 5 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 433b9554c6a1..bc9bd9e77d9d 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -117,7 +117,7 @@
  * 64-bit, this is above 4GB to leave the entire 32-bit address
  * space open for things that want to use the area for 32-bit pointers.
  */
-#define ELF_ET_DYN_BASE		(2 * TASK_SIZE_64 / 3)
+#define ELF_ET_DYN_BASE		(2 * DEFAULT_MAP_WINDOW_64 / 3)
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 3e2091708b8e..da41a2655b69 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -25,6 +25,9 @@
 #define USER_DS		(TASK_SIZE_64 - 1)
 
 #ifndef __ASSEMBLY__
+
+#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
+
 #ifdef __KERNEL__
 
 #include <linux/build_bug.h>
@@ -51,13 +54,16 @@
 				TASK_SIZE_32 : TASK_SIZE_64)
 #define TASK_SIZE_OF(tsk)	(test_tsk_thread_flag(tsk, TIF_32BIT) ? \
 				TASK_SIZE_32 : TASK_SIZE_64)
+#define DEFAULT_MAP_WINDOW	(test_thread_flag(TIF_32BIT) ? \
+				TASK_SIZE_32 : DEFAULT_MAP_WINDOW_64)
 #else
 #define TASK_SIZE		TASK_SIZE_64
+#define DEFAULT_MAP_WINDOW	DEFAULT_MAP_WINDOW_64
 #endif /* CONFIG_COMPAT */
 
-#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(TASK_SIZE / 4))
+#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(DEFAULT_MAP_WINDOW / 4))
+#define STACK_TOP_MAX		DEFAULT_MAP_WINDOW_64
 
-#define STACK_TOP_MAX		TASK_SIZE_64
 #ifdef CONFIG_COMPAT
 #define AARCH32_VECTORS_BASE	0xffff0000
 #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 9d9582cac6c4..e5a1dc0beef9 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -609,7 +609,7 @@ void __init mem_init(void)
 	 * detected at build time already.
 	 */
 #ifdef CONFIG_COMPAT
-	BUILD_BUG_ON(TASK_SIZE_32			> TASK_SIZE_64);
+	BUILD_BUG_ON(TASK_SIZE_32			> DEFAULT_MAP_WINDOW_64);
 #endif
 
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
index 922cfb813109..952cec5b611a 100644
--- a/drivers/firmware/efi/arm-runtime.c
+++ b/drivers/firmware/efi/arm-runtime.c
@@ -38,7 +38,7 @@ static struct ptdump_info efi_ptdump_info = {
 	.mm		= &efi_mm,
 	.markers	= (struct addr_marker[]){
 		{ 0,		"UEFI runtime start" },
-		{ TASK_SIZE_64,	"UEFI runtime end" }
+		{ DEFAULT_MAP_WINDOW_64, "UEFI runtime end" }
 	},
 	.base_addr	= 0,
 };
diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
index 30ac0c975f8a..d1ec7136e3e1 100644
--- a/drivers/firmware/efi/libstub/arm-stub.c
+++ b/drivers/firmware/efi/libstub/arm-stub.c
@@ -33,7 +33,7 @@
 #define EFI_RT_VIRTUAL_SIZE	SZ_512M
 
 #ifdef CONFIG_ARM64
-# define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE_64
+# define EFI_RT_VIRTUAL_LIMIT	DEFAULT_MAP_WINDOW_64
 #else
 # define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE
 #endif
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH V3 3/5] arm64: mm: Define arch_get_mmap_end, arch_get_mmap_base
  2018-11-14 13:39 ` Steve Capper
@ 2018-11-14 13:39   ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-14 13:39 UTC (permalink / raw)
  To: linux-mm, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, ard.biesheuvel, jcm, Steve Capper

Now that we have DEFAULT_MAP_WINDOW defined, we can define the
arch_get_mmap_end and arch_get_mmap_base helpers to allow for high
addresses in mmap.
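
Their effect, sketched for VA_BITS == 48 on 52-bit-capable hardware
(illustrative values, not taken from the patch):

  /* hint = 2^46 (addr[51:48] == 0):
   *     arch_get_mmap_end(hint)        -> DEFAULT_MAP_WINDOW  (2^48)
   *     arch_get_mmap_base(hint, base) -> base
   *
   * hint = 2^50 (addr[51:48] != 0):
   *     arch_get_mmap_end(hint)        -> TASK_SIZE           (2^52)
   *     arch_get_mmap_base(hint, base) -> base + 2^52 - 2^48
   */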

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/include/asm/processor.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index da41a2655b69..bbe602cb8fd3 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -72,6 +72,13 @@
 #define STACK_TOP		STACK_TOP_MAX
 #endif /* CONFIG_COMPAT */
 
+#define arch_get_mmap_end(addr) ((addr > DEFAULT_MAP_WINDOW) ? TASK_SIZE :\
+				DEFAULT_MAP_WINDOW)
+
+#define arch_get_mmap_base(addr, base) ((addr > DEFAULT_MAP_WINDOW) ? \
+					base + TASK_SIZE - DEFAULT_MAP_WINDOW :\
+					base)
+
 extern phys_addr_t arm64_dma_phys_limit;
 #define ARCH_LOW_ADDRESS_LIMIT	(arm64_dma_phys_limit - 1)
 
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH V3 4/5] arm64: mm: introduce 52-bit userspace support
  2018-11-14 13:39 ` Steve Capper
@ 2018-11-14 13:39   ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-14 13:39 UTC (permalink / raw)
  To: linux-mm, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, ard.biesheuvel, jcm, Steve Capper

On arm64 there is optional support for a 52-bit virtual address space.
To exploit this, one has to be running with a 64KB page size on hardware
that supports it.

For an arm64 kernel supporting a 48-bit VA with a 64KB page size,
a few changes are needed to support a 52-bit userspace:
 * TCR_EL1.T0SZ needs to be 12 instead of 16,
 * pgd_offset needs to work with a different PTRS_PER_PGD,
 * PGD_SIZE needs to be increased,
 * TASK_SIZE needs to reflect the new size.

This patch implements the above when the support for 52-bit VAs is
detected at early boot time.

On arm64, userspace address translation is controlled by TTBR0_EL1. As
well as userspace, TTBR0_EL1 controls:
 * The identity mapping,
 * EFI runtime code.

It is possible to run a kernel with an identity mapping that has a
larger VA size than userspace (and for this case __cpu_set_tcr_t0sz()
would set TCR_EL1.T0SZ as appropriate). However, when the conditions for
52-bit userspace are met, it is possible to keep TCR_EL1.T0SZ fixed at
12. Thus in this patch, the TCR_EL1.T0SZ size-changing logic is
disabled.
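
For reference, T0SZ encodes the TTBR0 region size as 64 minus the number
of VA bits; the arithmetic below just illustrates the values mentioned
above:

  /* 48-bit userspace: T0SZ = 64 - 48 = 16
   * 52-bit userspace: T0SZ = 64 - 52 = 12
   *
   * The proc.S hunk computes this at boot as 64 - vabits_user.
   */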

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
 arch/arm64/Kconfig                   |  4 ++++
 arch/arm64/include/asm/assembler.h   |  7 +++----
 arch/arm64/include/asm/mmu_context.h |  3 +++
 arch/arm64/include/asm/pgalloc.h     |  4 ++++
 arch/arm64/include/asm/pgtable.h     | 16 +++++++++++++---
 arch/arm64/include/asm/processor.h   | 13 ++++++++-----
 arch/arm64/kernel/head.S             | 13 +++++++++++++
 arch/arm64/mm/fault.c                |  2 +-
 arch/arm64/mm/mmu.c                  |  1 +
 arch/arm64/mm/proc.S                 | 10 +++++++++-
 10 files changed, 59 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 787d7850e064..eab02d24f5d1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -709,6 +709,10 @@ config ARM64_PA_BITS_52
 
 endchoice
 
+config ARM64_52BIT_VA
+	def_bool y
+	depends on ARM64_VA_BITS_48 && ARM64_64K_PAGES
+
 config ARM64_PA_BITS
 	int
 	default 48 if ARM64_PA_BITS_48
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 6142402c2eb4..02ce922a37a7 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -342,11 +342,10 @@ alternative_endif
 	.endm
 
 /*
- * tcr_set_idmap_t0sz - update TCR.T0SZ so that we can load the ID map
+ * tcr_set_t0sz - update TCR.T0SZ so that we can load the ID map
  */
-	.macro	tcr_set_idmap_t0sz, valreg, tmpreg
-	ldr_l	\tmpreg, idmap_t0sz
-	bfi	\valreg, \tmpreg, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
+	.macro	tcr_set_t0sz, valreg, t0sz
+	bfi	\valreg, \t0sz, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
 	.endm
 
 /*
diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
index 1e58bf58c22b..b125fafc611b 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -72,6 +72,9 @@ extern u64 idmap_ptrs_per_pgd;
 
 static inline bool __cpu_uses_extended_idmap(void)
 {
+	if (IS_ENABLED(CONFIG_ARM64_52BIT_VA))
+		return false;
+
 	return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
 }
 
diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
index 2e05bcd944c8..56c3ccabeffe 100644
--- a/arch/arm64/include/asm/pgalloc.h
+++ b/arch/arm64/include/asm/pgalloc.h
@@ -27,7 +27,11 @@
 #define check_pgt_cache()		do { } while (0)
 
 #define PGALLOC_GFP	(GFP_KERNEL | __GFP_ZERO)
+#ifdef CONFIG_ARM64_52BIT_VA
+#define PGD_SIZE	((1 << (52 - PGDIR_SHIFT)) * sizeof(pgd_t))
+#else
 #define PGD_SIZE	(PTRS_PER_PGD * sizeof(pgd_t))
+#endif
 
 #if CONFIG_PGTABLE_LEVELS > 2
 
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 50b1ef8584c0..19736520b724 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -616,11 +616,21 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
 #define pgd_ERROR(pgd)		__pgd_error(__FILE__, __LINE__, pgd_val(pgd))
 
 /* to find an entry in a page-table-directory */
-#define pgd_index(addr)		(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
+#define pgd_index(addr, ptrs)		(((addr) >> PGDIR_SHIFT) & ((ptrs) - 1))
+#define _pgd_offset_raw(pgd, addr, ptrs) ((pgd) + pgd_index(addr, ptrs))
+#define pgd_offset_raw(pgd, addr)	(_pgd_offset_raw(pgd, addr, PTRS_PER_PGD))
 
-#define pgd_offset_raw(pgd, addr)	((pgd) + pgd_index(addr))
+static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
+{
+	pgd_t *ret;
+
+	if (IS_ENABLED(CONFIG_ARM64_52BIT_VA) && (mm != &init_mm))
+		ret = _pgd_offset_raw(mm->pgd, addr, 1ULL << (vabits_user - PGDIR_SHIFT));
+	else
+		ret = pgd_offset_raw(mm->pgd, addr);
 
-#define pgd_offset(mm, addr)	(pgd_offset_raw((mm)->pgd, (addr)))
+	return ret;
+}
 
 /* to find an entry in a kernel page-table-directory */
 #define pgd_offset_k(addr)	pgd_offset(&init_mm, addr)
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index bbe602cb8fd3..403c3c106d24 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -19,13 +19,16 @@
 #ifndef __ASM_PROCESSOR_H
 #define __ASM_PROCESSOR_H
 
-#define TASK_SIZE_64		(UL(1) << VA_BITS)
-
-#define KERNEL_DS	UL(-1)
-#define USER_DS		(TASK_SIZE_64 - 1)
-
+#define KERNEL_DS		UL(-1)
+#ifdef CONFIG_ARM64_52BIT_VA
+#define USER_DS			((UL(1) << 52) - 1)
+#else
+#define USER_DS			((UL(1) << VA_BITS) - 1)
+#endif /* CONFIG_ARM64_52BIT_VA */
 #ifndef __ASSEMBLY__
 
+extern u64 vabits_user;
+#define TASK_SIZE_64		(UL(1) << vabits_user)
 #define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
 
 #ifdef __KERNEL__
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 4471f570a295..b9a2d9a9419a 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -318,6 +318,19 @@ __create_page_tables:
 	adrp	x0, idmap_pg_dir
 	adrp	x3, __idmap_text_start		// __pa(__idmap_text_start)
 
+#ifdef CONFIG_ARM64_52BIT_VA
+	mrs_s	x6, SYS_ID_AA64MMFR2_EL1
+	and	x6, x6, #(0xf << ID_AA64MMFR2_LVA_SHIFT)
+	mov	x5, #52
+	cbnz	x6, 1f
+#endif
+	mov	x5, #VA_BITS
+1:
+	adr_l	x6, vabits_user
+	str	x5, [x6]
+	dmb	sy
+	dc	ivac, x6		// Invalidate potentially stale cache line
+
 	/*
 	 * VA_BITS may be too small to allow for an ID mapping to be created
 	 * that covers system RAM if that is located sufficiently high in the
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 7d9571f4ae3d..5fe6d2e40e9b 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -160,7 +160,7 @@ void show_pte(unsigned long addr)
 
 	pr_alert("%s pgtable: %luk pages, %u-bit VAs, pgdp = %p\n",
 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
-		 VA_BITS, mm->pgd);
+		 mm == &init_mm ? VA_BITS : (int) vabits_user, mm->pgd);
 	pgdp = pgd_offset(mm, addr);
 	pgd = READ_ONCE(*pgdp);
 	pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd));
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 394b8d554def..f8fc393143ea 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -52,6 +52,7 @@
 
 u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
+u64 vabits_user __ro_after_init;
 
 u64 kimage_voffset __ro_after_init;
 EXPORT_SYMBOL(kimage_voffset);
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 2c75b0b903ae..03454e1f92f2 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -446,7 +446,15 @@ ENTRY(__cpu_setup)
 	ldr	x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
 			TCR_TG_FLAGS | TCR_KASLR_FLAGS | TCR_ASID16 | \
 			TCR_TBI0 | TCR_A1
-	tcr_set_idmap_t0sz	x10, x9
+
+#ifdef CONFIG_ARM64_52BIT_VA
+	ldr_l 		x9, vabits_user
+	sub		x9, xzr, x9
+	add		x9, x9, #64
+#else
+	ldr_l		x9, idmap_t0sz
+#endif
+	tcr_set_t0sz	x10, x9
 
 	/*
 	 * Set the IPS bits in TCR_EL1.
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* [PATCH V3 5/5] arm64: mm: Allow forcing all userspace addresses to 52-bit
  2018-11-14 13:39 ` Steve Capper
@ 2018-11-14 13:39   ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-14 13:39 UTC (permalink / raw)
  To: linux-mm, linux-arm-kernel
  Cc: catalin.marinas, will.deacon, ard.biesheuvel, jcm, Steve Capper

On arm64, 52-bit VAs are provided to userspace only when a hint is
supplied to mmap. This helps maintain compatibility with software that
expects at most 48-bit VAs to be returned.

In order to help identify software that has 48-bit VA assumptions, this
patch allows one to compile a kernel where 52-bit VAs are returned by
default on HW that supports it.
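
A common example of such an assumption is software that packs metadata
into the (normally unused) top bits of a pointer; a hypothetical sketch
of the pattern this option is intended to flush out:

  /* Tags stored in pointer bits [63:48] collide with real 52-bit VAs,
   * which use bits [51:48]; untagging then truncates the address.
   */
  #define TAG_SHIFT	48

  static inline void *tag_ptr(void *p, unsigned long tag)
  {
          return (void *)((unsigned long)p | (tag << TAG_SHIFT));
  }

  static inline void *untag_ptr(void *p)
  {
          /* Masks off bits [63:48] -- loses bits [51:48] of a 52-bit VA. */
          return (void *)((unsigned long)p & ((1UL << TAG_SHIFT) - 1));
  }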

This feature is intended to be for development systems only.

Signed-off-by: Steve Capper <steve.capper@arm.com>
---
Patch added in V3 to allow for testing/preparation for 52-bit support.
---
 arch/arm64/Kconfig                 | 14 ++++++++++++++
 arch/arm64/include/asm/elf.h       |  4 ++++
 arch/arm64/include/asm/processor.h |  9 ++++++++-
 3 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index eab02d24f5d1..17d363e40c4d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1165,6 +1165,20 @@ config ARM64_CNP
 	  at runtime, and does not affect PEs that do not implement
 	  this feature.
 
+config ARM64_FORCE_52BIT
+	bool "Force 52-bit virtual addresses for userspace"
+	default n
+	depends on ARM64_52BIT_VA && EXPERT
+	help
+	  For systems with 52-bit userspace VAs enabled, the kernel will attempt
+	  to maintain compatibility with older software by providing 48-bit VAs
+	  unless a hint is supplied to mmap.
+
+	  This configuration option disables the 48-bit compatibility logic, and
+	  forces all userspace addresses to be 52-bit on HW that supports it. One
+	  should only enable this configuration option for stress testing userspace
+	  memory management code. If unsure say N here.
+
 endmenu
 
 config ARM64_SVE
diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index bc9bd9e77d9d..6adc1a90e7e6 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -117,7 +117,11 @@
  * 64-bit, this is above 4GB to leave the entire 32-bit address
  * space open for things that want to use the area for 32-bit pointers.
  */
+#ifdef CONFIG_ARM64_FORCE_52BIT
+#define ELF_ET_DYN_BASE		(2 * TASK_SIZE_64 / 3)
+#else
 #define ELF_ET_DYN_BASE		(2 * DEFAULT_MAP_WINDOW_64 / 3)
+#endif /* CONFIG_ARM64_FORCE_52BIT */
 
 #ifndef __ASSEMBLY__
 
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index 403c3c106d24..1415de41d836 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -64,8 +64,13 @@ extern u64 vabits_user;
 #define DEFAULT_MAP_WINDOW	DEFAULT_MAP_WINDOW_64
 #endif /* CONFIG_COMPAT */
 
-#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(DEFAULT_MAP_WINDOW / 4))
+#ifdef CONFIG_ARM64_FORCE_52BIT
+#define STACK_TOP_MAX		TASK_SIZE_64
+#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(TASK_SIZE / 4))
+#else
 #define STACK_TOP_MAX		DEFAULT_MAP_WINDOW_64
+#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(DEFAULT_MAP_WINDOW / 4))
+#endif /* CONFIG_ARM64_FORCE_52BIT */
 
 #ifdef CONFIG_COMPAT
 #define AARCH32_VECTORS_BASE	0xffff0000
@@ -75,12 +80,14 @@ extern u64 vabits_user;
 #define STACK_TOP		STACK_TOP_MAX
 #endif /* CONFIG_COMPAT */
 
+#ifndef CONFIG_ARM64_FORCE_52BIT
 #define arch_get_mmap_end(addr) ((addr > DEFAULT_MAP_WINDOW) ? TASK_SIZE :\
 				DEFAULT_MAP_WINDOW)
 
 #define arch_get_mmap_base(addr, base) ((addr > DEFAULT_MAP_WINDOW) ? \
 					base + TASK_SIZE - DEFAULT_MAP_WINDOW :\
 					base)
+#endif /* CONFIG_ARM64_FORCE_52BIT */
 
 extern phys_addr_t arm64_dma_phys_limit;
 #define ARCH_LOW_ADDRESS_LIMIT	(arm64_dma_phys_limit - 1)
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 38+ messages in thread

* Re: [PATCH V3 1/5] mm: mmap: Allow for "high" userspace addresses
  2018-11-14 13:39   ` Steve Capper
@ 2018-11-23 18:17     ` Catalin Marinas
  -1 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2018-11-23 18:17 UTC (permalink / raw)
  To: Steve Capper; +Cc: linux-mm, linux-arm-kernel, will.deacon, jcm, ard.biesheuvel

On Wed, Nov 14, 2018 at 01:39:16PM +0000, Steve Capper wrote:
> This patch adds support for "high" userspace addresses that are
> optionally supported on the system and have to be requested via a hint
> mechanism ("high" addr parameter to mmap).
> 
> Architectures such as powerpc and x86 achieve this by making changes to
> their architectural versions of arch_get_unmapped_* functions. However,
> on arm64 we use the generic versions of these functions.
> 
> Rather than duplicate the generic arch_get_unmapped_* implementations
> for arm64, this patch instead introduces two architectural helper macros
> and applies them to arch_get_unmapped_*:
>  arch_get_mmap_end(addr) - get mmap upper limit depending on addr hint
>  arch_get_mmap_base(addr, base) - get mmap_base depending on addr hint
> 
> If these macros are not defined in architectural code then they default
> to (TASK_SIZE) and (base) so should not introduce any behavioural
> changes to architectures that do not define them.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V3 5/5] arm64: mm: Allow forcing all userspace addresses to 52-bit
  2018-11-14 13:39   ` Steve Capper
@ 2018-11-23 18:22     ` Catalin Marinas
  -1 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2018-11-23 18:22 UTC (permalink / raw)
  To: Steve Capper; +Cc: linux-mm, linux-arm-kernel, will.deacon, jcm, ard.biesheuvel

On Wed, Nov 14, 2018 at 01:39:20PM +0000, Steve Capper wrote:
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index eab02d24f5d1..17d363e40c4d 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1165,6 +1165,20 @@ config ARM64_CNP
>  	  at runtime, and does not affect PEs that do not implement
>  	  this feature.
>  
> +config ARM64_FORCE_52BIT
> +	bool "Force 52-bit virtual addresses for userspace"
> +	default n

No need for "default n"

> +	depends on ARM64_52BIT_VA && EXPERT

As long as it's for debug only and depends on EXPERT, it's fine by me.

-- 
Catalin

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V3 4/5] arm64: mm: introduce 52-bit userspace support
  2018-11-14 13:39   ` Steve Capper
@ 2018-11-23 18:35     ` Catalin Marinas
  -1 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2018-11-23 18:35 UTC (permalink / raw)
  To: Steve Capper; +Cc: linux-mm, linux-arm-kernel, will.deacon, jcm, ard.biesheuvel

On Wed, Nov 14, 2018 at 01:39:19PM +0000, Steve Capper wrote:
> diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
> index 2e05bcd944c8..56c3ccabeffe 100644
> --- a/arch/arm64/include/asm/pgalloc.h
> +++ b/arch/arm64/include/asm/pgalloc.h
> @@ -27,7 +27,11 @@
>  #define check_pgt_cache()		do { } while (0)
>  
>  #define PGALLOC_GFP	(GFP_KERNEL | __GFP_ZERO)
> +#ifdef CONFIG_ARM64_52BIT_VA
> +#define PGD_SIZE	((1 << (52 - PGDIR_SHIFT)) * sizeof(pgd_t))
> +#else
>  #define PGD_SIZE	(PTRS_PER_PGD * sizeof(pgd_t))
> +#endif

This introduces a mismatch between PTRS_PER_PGD and PGD_SIZE. While it
happens not to corrupt any memory (we allocate a full page for pgdirs),
the compiler complains about the memset() in map_entry_trampoline()
since tramp_pg_dir[] is smaller.
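
For the 64KB-page, VA_BITS == 48 configuration this applies to, the
numbers work out roughly as follows (illustrative arithmetic):

  /* With 64KB pages and VA_BITS == 48 (3 levels, PGDIR_SHIFT == 42):
   *
   *   PTRS_PER_PGD = 1 << (48 - 42)                   =   64 entries (512 bytes)
   *   PGD_SIZE     = (1 << (52 - 42)) * sizeof(pgd_t) = 8192 bytes
   *
   * so a memset() of PGD_SIZE over a PTRS_PER_PGD-sized array looks like
   * an overrun to the compiler.
   */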

-- 
Catalin

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V3 1/5] mm: mmap: Allow for "high" userspace addresses
  2018-11-23 18:17     ` Catalin Marinas
@ 2018-11-26 12:11       ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-26 12:11 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-mm, linux-arm-kernel, Will Deacon, jcm, ard.biesheuvel, nd

On Fri, Nov 23, 2018 at 06:17:44PM +0000, Catalin Marinas wrote:
> On Wed, Nov 14, 2018 at 01:39:16PM +0000, Steve Capper wrote:
> > This patch adds support for "high" userspace addresses that are
> > optionally supported on the system and have to be requested via a hint
> > mechanism ("high" addr parameter to mmap).
> > 
> > Architectures such as powerpc and x86 achieve this by making changes to
> > their architectural versions of arch_get_unmapped_* functions. However,
> > on arm64 we use the generic versions of these functions.
> > 
> > Rather than duplicate the generic arch_get_unmapped_* implementations
> > for arm64, this patch instead introduces two architectural helper macros
> > and applies them to arch_get_unmapped_*:
> >  arch_get_mmap_end(addr) - get mmap upper limit depending on addr hint
> >  arch_get_mmap_base(addr, base) - get mmap_base depending on addr hint
> > 
> > If these macros are not defined in architectural code then they default
> > to (TASK_SIZE) and (base) so should not introduce any behavioural
> > changes to architectures that do not define them.
> > 
> > Signed-off-by: Steve Capper <steve.capper@arm.com>
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks!

^ permalink raw reply	[flat|nested] 38+ messages in thread
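
The defaults described in the commit message above would take roughly this
shape in the generic code (a sketch of the idea, not a quote of the
mm/mmap.c hunk):

/*
 * Sketch of the generic fallbacks: architectures that do not provide these
 * macros keep the old limits, so their behaviour is unchanged.
 */
#ifndef arch_get_mmap_end
#define arch_get_mmap_end(addr)		(TASK_SIZE)
#endif

#ifndef arch_get_mmap_base
#define arch_get_mmap_base(addr, base)	(base)
#endif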

* Re: [PATCH V3 5/5] arm64: mm: Allow forcing all userspace addresses to 52-bit
  2018-11-23 18:22     ` Catalin Marinas
@ 2018-11-26 12:11       ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-26 12:11 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-mm, linux-arm-kernel, Will Deacon, jcm, ard.biesheuvel, nd

On Fri, Nov 23, 2018 at 06:22:34PM +0000, Catalin Marinas wrote:
> On Wed, Nov 14, 2018 at 01:39:20PM +0000, Steve Capper wrote:
> > diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> > index eab02d24f5d1..17d363e40c4d 100644
> > --- a/arch/arm64/Kconfig
> > +++ b/arch/arm64/Kconfig
> > @@ -1165,6 +1165,20 @@ config ARM64_CNP
> >  	  at runtime, and does not affect PEs that do not implement
> >  	  this feature.
> >  
> > +config ARM64_FORCE_52BIT
> > +	bool "Force 52-bit virtual addresses for userspace"
> > +	default n
> 
> No need for "default n"
> 
> > +	depends on ARM64_52BIT_VA && EXPERT
> 
> As long as it's for debug only and depends on EXPERT, it's fine by me.

Okay, I'll remove this default n.

Cheers,
-- 
Steve

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V3 4/5] arm64: mm: introduce 52-bit userspace support
  2018-11-23 18:35     ` Catalin Marinas
@ 2018-11-26 12:13       ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-26 12:13 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-mm, linux-arm-kernel, Will Deacon, jcm, ard.biesheuvel, nd

On Fri, Nov 23, 2018 at 06:35:16PM +0000, Catalin Marinas wrote:
> On Wed, Nov 14, 2018 at 01:39:19PM +0000, Steve Capper wrote:
> > diff --git a/arch/arm64/include/asm/pgalloc.h b/arch/arm64/include/asm/pgalloc.h
> > index 2e05bcd944c8..56c3ccabeffe 100644
> > --- a/arch/arm64/include/asm/pgalloc.h
> > +++ b/arch/arm64/include/asm/pgalloc.h
> > @@ -27,7 +27,11 @@
> >  #define check_pgt_cache()		do { } while (0)
> >  
> >  #define PGALLOC_GFP	(GFP_KERNEL | __GFP_ZERO)
> > +#ifdef CONFIG_ARM64_52BIT_VA
> > +#define PGD_SIZE	((1 << (52 - PGDIR_SHIFT)) * sizeof(pgd_t))
> > +#else
> >  #define PGD_SIZE	(PTRS_PER_PGD * sizeof(pgd_t))
> > +#endif
> 
> This introduces a mismatch between PTRS_PER_PGD and PGD_SIZE. While it
> happens not to corrupt any memory (we allocate a full page for pgdirs),
> the compiler complains about the memset() in map_entry_trampoline()
> since tramp_pg_dir[] is smaller.

Thanks Catalin,
I think the way forward may be to remove the sizes from the
declarations for tramp_pg_dir and friends as they are specified to be
PAGE_SIZE by the linker script anyway.

I think this should be in a separate patch preceding this one; I'll get
something ready.

(I'll also upgrade my build system :-) )

Cheers,
-- 
Steve

^ permalink raw reply	[flat|nested] 38+ messages in thread
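
The clean-up Steve outlines would presumably amount to dropping the array
bound from the extern declarations, along these lines (a sketch; the exact
set of declarations touched may differ):

/* Before: the declared bound no longer matches what gets memset(). */
extern pgd_t tramp_pg_dir[PTRS_PER_PGD];

/* After: unsized extern; the linker script already reserves PAGE_SIZE. */
extern pgd_t tramp_pg_dir[];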

* Re: [PATCH V3 2/5] arm64: mm: Introduce DEFAULT_MAP_WINDOW
  2018-11-14 13:39   ` Steve Capper
@ 2018-11-27 17:09     ` Catalin Marinas
  -1 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2018-11-27 17:09 UTC (permalink / raw)
  To: Steve Capper; +Cc: linux-mm, linux-arm-kernel, will.deacon, jcm, ard.biesheuvel

Hi Steve,

On Wed, Nov 14, 2018 at 01:39:17PM +0000, Steve Capper wrote:
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 3e2091708b8e..da41a2655b69 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -25,6 +25,9 @@
>  #define USER_DS		(TASK_SIZE_64 - 1)
>  
>  #ifndef __ASSEMBLY__
> +
> +#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
> +
>  #ifdef __KERNEL__

That's a strange place to place DEFAULT_MAP_WINDOW_64. Did you have any
#include dependency issues? If yes, we could look at cleaning them up,
maybe moving these definitions into a separate file.

(also, if you do a clean-up I don't think we need __KERNEL__ anymore)

>  
>  #include <linux/build_bug.h>
> @@ -51,13 +54,16 @@
>  				TASK_SIZE_32 : TASK_SIZE_64)
>  #define TASK_SIZE_OF(tsk)	(test_tsk_thread_flag(tsk, TIF_32BIT) ? \
>  				TASK_SIZE_32 : TASK_SIZE_64)
> +#define DEFAULT_MAP_WINDOW	(test_thread_flag(TIF_32BIT) ? \
> +				TASK_SIZE_32 : DEFAULT_MAP_WINDOW_64)
>  #else
>  #define TASK_SIZE		TASK_SIZE_64
> +#define DEFAULT_MAP_WINDOW	DEFAULT_MAP_WINDOW_64
>  #endif /* CONFIG_COMPAT */
>  
> -#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(TASK_SIZE / 4))
> +#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(DEFAULT_MAP_WINDOW / 4))
> +#define STACK_TOP_MAX		DEFAULT_MAP_WINDOW_64
>  
> -#define STACK_TOP_MAX		TASK_SIZE_64
>  #ifdef CONFIG_COMPAT
>  #define AARCH32_VECTORS_BASE	0xffff0000
>  #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 9d9582cac6c4..e5a1dc0beef9 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -609,7 +609,7 @@ void __init mem_init(void)
>  	 * detected at build time already.
>  	 */
>  #ifdef CONFIG_COMPAT
> -	BUILD_BUG_ON(TASK_SIZE_32			> TASK_SIZE_64);
> +	BUILD_BUG_ON(TASK_SIZE_32			> DEFAULT_MAP_WINDOW_64);
>  #endif

Since you are at this, can you please remove the useless white space (I
guess it was there before when we had more BUILD_BUG_ONs).

> diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
> index 30ac0c975f8a..d1ec7136e3e1 100644
> --- a/drivers/firmware/efi/libstub/arm-stub.c
> +++ b/drivers/firmware/efi/libstub/arm-stub.c
> @@ -33,7 +33,7 @@
>  #define EFI_RT_VIRTUAL_SIZE	SZ_512M
>  
>  #ifdef CONFIG_ARM64
> -# define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE_64
> +# define EFI_RT_VIRTUAL_LIMIT	DEFAULT_MAP_WINDOW_64
>  #else
>  # define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE
>  #endif

Just curious, would anything happen if we leave this to TASK_SIZE_64?

-- 
Catalin

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V3 3/5] arm64: mm: Define arch_get_mmap_end, arch_get_mmap_base
  2018-11-14 13:39   ` Steve Capper
@ 2018-11-27 17:10     ` Catalin Marinas
  -1 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2018-11-27 17:10 UTC (permalink / raw)
  To: Steve Capper; +Cc: linux-mm, linux-arm-kernel, will.deacon, jcm, ard.biesheuvel

On Wed, Nov 14, 2018 at 01:39:18PM +0000, Steve Capper wrote:
> > Now that we have DEFAULT_MAP_WINDOW defined, we can add arch_get_mmap_end
> and arch_get_mmap_base helpers to allow for high addresses in mmap.
> 
> Signed-off-by: Steve Capper <steve.capper@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

^ permalink raw reply	[flat|nested] 38+ messages in thread
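
For readers following along, the arm64 overrides reviewed here plausibly
key off the mmap address hint in the way discussed elsewhere in the thread
(a sketch based on that discussion, not the literal patch):

/*
 * Sketch: only an address hint above the default 48-bit window unlocks the
 * full TASK_SIZE; all other requests stay below DEFAULT_MAP_WINDOW.
 */
#define arch_get_mmap_end(addr) \
	(((addr) > DEFAULT_MAP_WINDOW) ? TASK_SIZE : DEFAULT_MAP_WINDOW)

#define arch_get_mmap_base(addr, base) \
	(((addr) > DEFAULT_MAP_WINDOW) ? \
	 (base) + TASK_SIZE - DEFAULT_MAP_WINDOW : (base))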

* Re: [PATCH V3 2/5] arm64: mm: Introduce DEFAULT_MAP_WINDOW
  2018-11-27 17:09     ` Catalin Marinas
@ 2018-11-27 17:15       ` Ard Biesheuvel
  -1 siblings, 0 replies; 38+ messages in thread
From: Ard Biesheuvel @ 2018-11-27 17:15 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Steve Capper, Linux-MM, linux-arm-kernel, Will Deacon, Jon Masters

On Tue, 27 Nov 2018 at 18:09, Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> Hi Steve,
>
> On Wed, Nov 14, 2018 at 01:39:17PM +0000, Steve Capper wrote:
> > diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> > index 3e2091708b8e..da41a2655b69 100644
> > --- a/arch/arm64/include/asm/processor.h
> > +++ b/arch/arm64/include/asm/processor.h
> > @@ -25,6 +25,9 @@
> >  #define USER_DS              (TASK_SIZE_64 - 1)
> >
> >  #ifndef __ASSEMBLY__
> > +
> > +#define DEFAULT_MAP_WINDOW_64        (UL(1) << VA_BITS)
> > +
> >  #ifdef __KERNEL__
>
> That's a strange place to place DEFAULT_MAP_WINDOW_64. Did you have any
> #include dependency issues? If yes, we could look at cleaning them up,
> maybe moving these definitions into a separate file.
>
> (also, if you do a clean-up I don't think we need __KERNEL__ anymore)
>
> >
> >  #include <linux/build_bug.h>
> > @@ -51,13 +54,16 @@
> >                               TASK_SIZE_32 : TASK_SIZE_64)
> >  #define TASK_SIZE_OF(tsk)    (test_tsk_thread_flag(tsk, TIF_32BIT) ? \
> >                               TASK_SIZE_32 : TASK_SIZE_64)
> > +#define DEFAULT_MAP_WINDOW   (test_thread_flag(TIF_32BIT) ? \
> > +                             TASK_SIZE_32 : DEFAULT_MAP_WINDOW_64)
> >  #else
> >  #define TASK_SIZE            TASK_SIZE_64
> > +#define DEFAULT_MAP_WINDOW   DEFAULT_MAP_WINDOW_64
> >  #endif /* CONFIG_COMPAT */
> >
> > -#define TASK_UNMAPPED_BASE   (PAGE_ALIGN(TASK_SIZE / 4))
> > +#define TASK_UNMAPPED_BASE   (PAGE_ALIGN(DEFAULT_MAP_WINDOW / 4))
> > +#define STACK_TOP_MAX                DEFAULT_MAP_WINDOW_64
> >
> > -#define STACK_TOP_MAX                TASK_SIZE_64
> >  #ifdef CONFIG_COMPAT
> >  #define AARCH32_VECTORS_BASE 0xffff0000
> >  #define STACK_TOP            (test_thread_flag(TIF_32BIT) ? \
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index 9d9582cac6c4..e5a1dc0beef9 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -609,7 +609,7 @@ void __init mem_init(void)
> >        * detected at build time already.
> >        */
> >  #ifdef CONFIG_COMPAT
> > -     BUILD_BUG_ON(TASK_SIZE_32                       > TASK_SIZE_64);
> > +     BUILD_BUG_ON(TASK_SIZE_32                       > DEFAULT_MAP_WINDOW_64);
> >  #endif
>
> Since you are at this, can you please remove the useless white space (I
> guess it was there before when we had more BUILD_BUG_ONs).
>
> > diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
> > index 30ac0c975f8a..d1ec7136e3e1 100644
> > --- a/drivers/firmware/efi/libstub/arm-stub.c
> > +++ b/drivers/firmware/efi/libstub/arm-stub.c
> > @@ -33,7 +33,7 @@
> >  #define EFI_RT_VIRTUAL_SIZE  SZ_512M
> >
> >  #ifdef CONFIG_ARM64
> > -# define EFI_RT_VIRTUAL_LIMIT        TASK_SIZE_64
> > +# define EFI_RT_VIRTUAL_LIMIT        DEFAULT_MAP_WINDOW_64
> >  #else
> >  # define EFI_RT_VIRTUAL_LIMIT        TASK_SIZE
> >  #endif
>
> Just curious, would anything happen if we leave this to TASK_SIZE_64?
>

Not really. The kernel virtual mappings of the EFI runtime services
regions are randomized based on this value, so they may end up way
up in memory, but EFI doesn't really care about that.

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V3 2/5] arm64: mm: Introduce DEFAULT_MAP_WINDOW
  2018-11-27 17:09     ` Catalin Marinas
@ 2018-11-28 16:31       ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-28 16:31 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-mm, linux-arm-kernel, Will Deacon, jcm, ard.biesheuvel, nd

On Tue, Nov 27, 2018 at 05:09:32PM +0000, Catalin Marinas wrote:
> Hi Steve,

Hi Catalin,

> 
> On Wed, Nov 14, 2018 at 01:39:17PM +0000, Steve Capper wrote:
> > diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> > index 3e2091708b8e..da41a2655b69 100644
> > --- a/arch/arm64/include/asm/processor.h
> > +++ b/arch/arm64/include/asm/processor.h
> > @@ -25,6 +25,9 @@
> >  #define USER_DS		(TASK_SIZE_64 - 1)
> >  
> >  #ifndef __ASSEMBLY__
> > +
> > +#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)
> > +
> >  #ifdef __KERNEL__
> 
> That's a strange place to place DEFAULT_MAP_WINDOW_64. Did you have any
> #include dependency issues? If yes, we could look at cleaning them up,
> maybe moving these definitions into a separate file.
> 
> (also, if you do a clean-up I don't think we need __KERNEL__ anymore)
> 

Okay, I will investigate cleaning this up.

> >  
> >  #include <linux/build_bug.h>
> > @@ -51,13 +54,16 @@
> >  				TASK_SIZE_32 : TASK_SIZE_64)
> >  #define TASK_SIZE_OF(tsk)	(test_tsk_thread_flag(tsk, TIF_32BIT) ? \
> >  				TASK_SIZE_32 : TASK_SIZE_64)
> > +#define DEFAULT_MAP_WINDOW	(test_thread_flag(TIF_32BIT) ? \
> > +				TASK_SIZE_32 : DEFAULT_MAP_WINDOW_64)
> >  #else
> >  #define TASK_SIZE		TASK_SIZE_64
> > +#define DEFAULT_MAP_WINDOW	DEFAULT_MAP_WINDOW_64
> >  #endif /* CONFIG_COMPAT */
> >  
> > -#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(TASK_SIZE / 4))
> > +#define TASK_UNMAPPED_BASE	(PAGE_ALIGN(DEFAULT_MAP_WINDOW / 4))
> > +#define STACK_TOP_MAX		DEFAULT_MAP_WINDOW_64
> >  
> > -#define STACK_TOP_MAX		TASK_SIZE_64
> >  #ifdef CONFIG_COMPAT
> >  #define AARCH32_VECTORS_BASE	0xffff0000
> >  #define STACK_TOP		(test_thread_flag(TIF_32BIT) ? \
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index 9d9582cac6c4..e5a1dc0beef9 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -609,7 +609,7 @@ void __init mem_init(void)
> >  	 * detected at build time already.
> >  	 */
> >  #ifdef CONFIG_COMPAT
> > -	BUILD_BUG_ON(TASK_SIZE_32			> TASK_SIZE_64);
> > +	BUILD_BUG_ON(TASK_SIZE_32			> DEFAULT_MAP_WINDOW_64);
> >  #endif
> 
> Since you are at this, can you please remove the useless white space (I
> guess it was there before when we had more BUILD_BUG_ONs).
> 

Sure thing.

> > diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
> > index 30ac0c975f8a..d1ec7136e3e1 100644
> > --- a/drivers/firmware/efi/libstub/arm-stub.c
> > +++ b/drivers/firmware/efi/libstub/arm-stub.c
> > @@ -33,7 +33,7 @@
> >  #define EFI_RT_VIRTUAL_SIZE	SZ_512M
> >  
> >  #ifdef CONFIG_ARM64
> > -# define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE_64
> > +# define EFI_RT_VIRTUAL_LIMIT	DEFAULT_MAP_WINDOW_64
> >  #else
> >  # define EFI_RT_VIRTUAL_LIMIT	TASK_SIZE
> >  #endif
> 
> Just curious, would anything happen if we leave this to TASK_SIZE_64?
> 

Then it doesn't compile :-). TASK_SIZE_64 is a variable that is outside
the EFI stub's knowledge (and indeed is initialised after the stub has
already executed).

Cheers,
-- 
Steve

^ permalink raw reply	[flat|nested] 38+ messages in thread
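
Steve's point is that with 52-bit support TASK_SIZE_64 stops being a
compile-time constant; presumably it comes to depend on a boot-time
variable, roughly as below (an assumed shape for illustration, not a quote
from patch 4/5):

/*
 * Assumed shape of the definitions: vabits_user is only set during early
 * kernel boot, well after the EFI stub has run, so the stub has to size
 * EFI_RT_VIRTUAL_LIMIT from the compile-time DEFAULT_MAP_WINDOW_64 instead.
 */
extern u64 vabits_user;				/* 48 or 52, detected at boot */
#define TASK_SIZE_64		(UL(1) << vabits_user)
#define DEFAULT_MAP_WINDOW_64	(UL(1) << VA_BITS)	/* still a build-time constant */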

* Re: [PATCH V3 3/5] arm64: mm: Define arch_get_mmap_end, arch_get_mmap_base
  2018-11-27 17:10     ` Catalin Marinas
@ 2018-11-28 16:31       ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-11-28 16:31 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-mm, linux-arm-kernel, Will Deacon, jcm, ard.biesheuvel, nd

On Tue, Nov 27, 2018 at 05:10:18PM +0000, Catalin Marinas wrote:
> On Wed, Nov 14, 2018 at 01:39:18PM +0000, Steve Capper wrote:
> > Now that we have DEFAULT_MAP_WINDOW defined, we can add arch_get_mmap_end
> > and arch_get_mmap_base helpers to allow for high addresses in mmap.
> > 
> > Signed-off-by: Steve Capper <steve.capper@arm.com>
> 
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks!

^ permalink raw reply	[flat|nested] 38+ messages in thread

* Re: [PATCH V3 4/5] arm64: mm: introduce 52-bit userspace support
  2018-11-14 13:39   ` Steve Capper
@ 2018-11-30 17:59     ` Catalin Marinas
  -1 siblings, 0 replies; 38+ messages in thread
From: Catalin Marinas @ 2018-11-30 17:59 UTC (permalink / raw)
  To: Steve Capper; +Cc: linux-mm, linux-arm-kernel, Will Deacon, jcm, ard.biesheuvel

On Wed, Nov 14, 2018 at 01:39:19PM +0000, Steve Capper wrote:
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 50b1ef8584c0..19736520b724 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -616,11 +616,21 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
>  #define pgd_ERROR(pgd)		__pgd_error(__FILE__, __LINE__, pgd_val(pgd))
>  
>  /* to find an entry in a page-table-directory */
> -#define pgd_index(addr)		(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
> +#define pgd_index(addr, ptrs)		(((addr) >> PGDIR_SHIFT) & ((ptrs) - 1))
> +#define _pgd_offset_raw(pgd, addr, ptrs) ((pgd) + pgd_index(addr, ptrs))
> +#define pgd_offset_raw(pgd, addr)	(_pgd_offset_raw(pgd, addr, PTRS_PER_PGD))
>  
> -#define pgd_offset_raw(pgd, addr)	((pgd) + pgd_index(addr))
> +static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
> +{
> +	pgd_t *ret;
> +
> +	if (IS_ENABLED(CONFIG_ARM64_52BIT_VA) && (mm != &init_mm))
> +		ret = _pgd_offset_raw(mm->pgd, addr, 1ULL << (vabits_user - PGDIR_SHIFT));

I think we can make this a constant since the additional 4 bits of the
user address should be 0 on a 48-bit VA. Once we get the 52-bit kernel
VA supported, we can probably revert back to a single macro.

Another option is to change  PTRS_PER_PGD etc. to cover the whole
52-bit, including the swapper_pg_dir, but with offsetting the TTBR1_EL1
setting to keep the 48-bit kernel VA (for the time being).

--
Catalin

^ permalink raw reply	[flat|nested] 38+ messages in thread
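
Catalin's first suggestion, using a constant pointer count because
addr[51:48] is zero for 48-bit tasks and the extra slots simply go unused,
might look something like this (the PTRS_PER_PGD_USER name is made up here
purely for illustration):

#ifdef CONFIG_ARM64_52BIT_VA
/* Hypothetical constant: enough pgd slots to index a full 52-bit user VA. */
#define PTRS_PER_PGD_USER	(1UL << (52 - PGDIR_SHIFT))
#else
#define PTRS_PER_PGD_USER	PTRS_PER_PGD
#endif

static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
{
	if (mm != &init_mm)
		return _pgd_offset_raw(mm->pgd, addr, PTRS_PER_PGD_USER);
	return pgd_offset_raw(mm->pgd, addr);
}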

* Re: [PATCH V3 4/5] arm64: mm: introduce 52-bit userspace support
  2018-11-30 17:59     ` Catalin Marinas
@ 2018-12-04 17:41       ` Steve Capper
  -1 siblings, 0 replies; 38+ messages in thread
From: Steve Capper @ 2018-12-04 17:41 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-mm, linux-arm-kernel, Will Deacon, jcm, ard.biesheuvel, nd

On Fri, Nov 30, 2018 at 05:59:59PM +0000, Catalin Marinas wrote:
> On Wed, Nov 14, 2018 at 01:39:19PM +0000, Steve Capper wrote:
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 50b1ef8584c0..19736520b724 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -616,11 +616,21 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
> >  #define pgd_ERROR(pgd)		__pgd_error(__FILE__, __LINE__, pgd_val(pgd))
> >  
> >  /* to find an entry in a page-table-directory */
> > -#define pgd_index(addr)		(((addr) >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1))
> > +#define pgd_index(addr, ptrs)		(((addr) >> PGDIR_SHIFT) & ((ptrs) - 1))
> > +#define _pgd_offset_raw(pgd, addr, ptrs) ((pgd) + pgd_index(addr, ptrs))
> > +#define pgd_offset_raw(pgd, addr)	(_pgd_offset_raw(pgd, addr, PTRS_PER_PGD))
> >  
> > -#define pgd_offset_raw(pgd, addr)	((pgd) + pgd_index(addr))
> > +static inline pgd_t *pgd_offset(const struct mm_struct *mm, unsigned long addr)
> > +{
> > +	pgd_t *ret;
> > +
> > +	if (IS_ENABLED(CONFIG_ARM64_52BIT_VA) && (mm != &init_mm))
> > +		ret = _pgd_offset_raw(mm->pgd, addr, 1ULL << (vabits_user - PGDIR_SHIFT));
> 
> I think we can make this a constant since the additional 4 bits of the
> user address should be 0 on a 48-bit VA. Once we get the 52-bit kernel
> VA supported, we can probably revert back to a single macro.

Yeah, I see what you mean.

> 
> Another option is to change  PTRS_PER_PGD etc. to cover the whole
> 52-bit, including the swapper_pg_dir, but with offsetting the TTBR1_EL1
> setting to keep the 48-bit kernel VA (for the time being).
> 

I've got a 52-bit PTRS_PER_PGD working now. I will clean things up, run
more tests and then post.

Cheers,
-- 
Steve

^ permalink raw reply	[flat|nested] 38+ messages in thread
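
The direction Steve reports as working, sizing PTRS_PER_PGD for the full
52-bit range while the kernel itself stays at a 48-bit VA, is roughly the
idea below (a sketch under that assumption; the MAX_USER_VA_BITS name is
invented here, and the TTBR1_EL1 offsetting itself happens in assembly and
is not shown):

/*
 * Sketch: size the pgd for the largest supported user VA so that
 * pgd_index() and PGD_SIZE agree again, and give swapper_pg_dir and
 * friends the same geometry.  TTBR1_EL1 then needs to point at an offset
 * into swapper_pg_dir so that 48-bit kernel addresses still resolve to
 * the right entries.
 */
#ifdef CONFIG_ARM64_52BIT_VA
#define MAX_USER_VA_BITS	52
#else
#define MAX_USER_VA_BITS	VA_BITS
#endif

#define PTRS_PER_PGD		(1 << (MAX_USER_VA_BITS - PGDIR_SHIFT))
#define PGD_SIZE		(PTRS_PER_PGD * sizeof(pgd_t))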

end of thread, other threads:[~2018-12-04 17:41 UTC | newest]

Thread overview: 38+ messages
2018-11-14 13:39 [PATCH V3 0/5] 52-bit userspace VAs Steve Capper
2018-11-14 13:39 ` Steve Capper
2018-11-14 13:39 ` [PATCH V3 1/5] mm: mmap: Allow for "high" userspace addresses Steve Capper
2018-11-14 13:39   ` Steve Capper
2018-11-23 18:17   ` Catalin Marinas
2018-11-23 18:17     ` Catalin Marinas
2018-11-26 12:11     ` Steve Capper
2018-11-26 12:11       ` Steve Capper
2018-11-14 13:39 ` [PATCH V3 2/5] arm64: mm: Introduce DEFAULT_MAP_WINDOW Steve Capper
2018-11-14 13:39   ` Steve Capper
2018-11-27 17:09   ` Catalin Marinas
2018-11-27 17:09     ` Catalin Marinas
2018-11-27 17:15     ` Ard Biesheuvel
2018-11-27 17:15       ` Ard Biesheuvel
2018-11-28 16:31     ` Steve Capper
2018-11-28 16:31       ` Steve Capper
2018-11-14 13:39 ` [PATCH V3 3/5] arm64: mm: Define arch_get_mmap_end, arch_get_mmap_base Steve Capper
2018-11-14 13:39   ` Steve Capper
2018-11-27 17:10   ` Catalin Marinas
2018-11-27 17:10     ` Catalin Marinas
2018-11-28 16:31     ` Steve Capper
2018-11-28 16:31       ` Steve Capper
2018-11-14 13:39 ` [PATCH V3 4/5] arm64: mm: introduce 52-bit userspace support Steve Capper
2018-11-14 13:39   ` Steve Capper
2018-11-23 18:35   ` Catalin Marinas
2018-11-23 18:35     ` Catalin Marinas
2018-11-26 12:13     ` Steve Capper
2018-11-26 12:13       ` Steve Capper
2018-11-30 17:59   ` Catalin Marinas
2018-11-30 17:59     ` Catalin Marinas
2018-12-04 17:41     ` Steve Capper
2018-12-04 17:41       ` Steve Capper
2018-11-14 13:39 ` [PATCH V3 5/5] arm64: mm: Allow forcing all userspace addresses to 52-bit Steve Capper
2018-11-14 13:39   ` Steve Capper
2018-11-23 18:22   ` Catalin Marinas
2018-11-23 18:22     ` Catalin Marinas
2018-11-26 12:11     ` Steve Capper
2018-11-26 12:11       ` Steve Capper
