* [PATCHv5, REBASED 0/9] x86: 5-level paging enabling for v4.12, Part 4
From: Kirill A. Shutemov @ 2017-05-15 12:12 UTC
  To: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov

Here's the rebased version of the fourth and last batch of patches that
bring initial 5-level paging support.

Please review and consider applying.

Kirill A. Shutemov (9):
  x86/asm: Fix comment in return_from_SYSCALL_64
  x86/boot/64: Rewrite startup_64 in C
  x86/boot/64: Rename init_level4_pgt and early_level4_pgt
  x86/boot/64: Add support of additional page table level during early
    boot
  x86/mm: Add sync_global_pgds() for configuration with 5-level paging
  x86/mm: Make kernel_physical_mapping_init() support 5-level paging
  x86/mm: Add support for 5-level paging for KASLR
  x86: Enable 5-level paging support
  x86/mm: Allow to have userspace mappings above 47-bits

 arch/x86/Kconfig                            |   5 +
 arch/x86/boot/compressed/head_64.S          |  23 ++++-
 arch/x86/entry/entry_64.S                   |   3 +-
 arch/x86/include/asm/elf.h                  |   4 +-
 arch/x86/include/asm/mpx.h                  |   9 ++
 arch/x86/include/asm/pgtable.h              |   2 +-
 arch/x86/include/asm/pgtable_64.h           |   6 +-
 arch/x86/include/asm/processor.h            |  11 ++-
 arch/x86/include/uapi/asm/processor-flags.h |   2 +
 arch/x86/kernel/espfix_64.c                 |   2 +-
 arch/x86/kernel/head64.c                    | 143 +++++++++++++++++++++++++---
 arch/x86/kernel/head_64.S                   | 134 ++++++--------------------
 arch/x86/kernel/machine_kexec_64.c          |   2 +-
 arch/x86/kernel/sys_x86_64.c                |  30 +++++-
 arch/x86/mm/dump_pagetables.c               |   2 +-
 arch/x86/mm/hugetlbpage.c                   |  27 +++++-
 arch/x86/mm/init_64.c                       | 108 +++++++++++++++++++--
 arch/x86/mm/kasan_init_64.c                 |  12 +--
 arch/x86/mm/kaslr.c                         |  81 ++++++++++++----
 arch/x86/mm/mmap.c                          |   6 +-
 arch/x86/mm/mpx.c                           |  33 ++++++-
 arch/x86/realmode/init.c                    |   2 +-
 arch/x86/xen/Kconfig                        |   1 +
 arch/x86/xen/mmu_pv.c                       |  18 ++--
 arch/x86/xen/xen-pvh.S                      |   2 +-
 25 files changed, 480 insertions(+), 188 deletions(-)

-- 
2.11.0

* [PATCHv5, REBASED 1/9] x86/asm: Fix comment in return_from_SYSCALL_64
From: Kirill A. Shutemov @ 2017-05-15 12:12 UTC
  To: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov

On x86-64, __VIRTUAL_MASK_SHIFT now depends on the paging mode.

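For illustration, the shl/sar pair below is equivalent to this C sketch of
canonical-address sign extension (vmask_shift stands in for
__VIRTUAL_MASK_SHIFT: 47 with 4-level paging, 56 with 5-level paging):

	static unsigned long canonicalize(unsigned long addr, int vmask_shift)
	{
		int shift = 64 - (vmask_shift + 1);

		/* Relies on arithmetic right shift of signed values,
		 * just as the assembly does. */
		return (unsigned long)((long)(addr << shift) >> shift);
	}
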
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/entry/entry_64.S | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 607d72c4a485..edec30584eb8 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -266,7 +266,8 @@ return_from_SYSCALL_64:
 	 * If width of "canonical tail" ever becomes variable, this will need
 	 * to be updated to remain correct on both old and new CPUs.
 	 *
-	 * Change top 16 bits to be the sign-extension of 47th bit
+	 * Change top bits to match most significant bit (47th or 56th bit
+	 * depending on paging mode) in the address.
 	 */
 	shl	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
 	sar	$(64 - (__VIRTUAL_MASK_SHIFT+1)), %rcx
-- 
2.11.0

* [PATCHv5, REBASED 2/9] x86/boot/64: Rewrite startup_64 in C
From: Kirill A. Shutemov @ 2017-05-15 12:12 UTC
  To: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov

This patch rewrites most of the startup_64 logic in C.

This is preparation for enabling 5-level paging.

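Note that this C code runs before relocations are applied, so every global
has to be accessed through fixup_pointer(), which rebases a link-time
address onto the physical address the kernel was actually loaded at.
A sketch of the pattern, mirroring the phys_base fixup below:

	unsigned long *p = fixup_pointer(&phys_base, physaddr);
	*p += load_delta;	/* touches the loaded copy of phys_base */
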
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/kernel/head64.c  | 85 +++++++++++++++++++++++++++++++++++++++++-
 arch/x86/kernel/head_64.S | 95 ++---------------------------------------------
 2 files changed, 87 insertions(+), 93 deletions(-)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 43b7002f44fb..b59c550b1d3a 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -35,9 +35,92 @@
  */
 extern pgd_t early_level4_pgt[PTRS_PER_PGD];
 extern pmd_t early_dynamic_pgts[EARLY_DYNAMIC_PAGE_TABLES][PTRS_PER_PMD];
-static unsigned int __initdata next_early_pgt = 2;
+static unsigned int __initdata next_early_pgt;
 pmdval_t early_pmd_flags = __PAGE_KERNEL_LARGE & ~(_PAGE_GLOBAL | _PAGE_NX);
 
+static void __init *fixup_pointer(void *ptr, unsigned long physaddr)
+{
+	return ptr - (void *)_text + (void *)physaddr;
+}
+
+void __init __startup_64(unsigned long physaddr)
+{
+	unsigned long load_delta, *p;
+	pgdval_t *pgd;
+	pudval_t *pud;
+	pmdval_t *pmd, pmd_entry;
+	int i;
+
+	/* Is the address too large? */
+	if (physaddr >> MAX_PHYSMEM_BITS)
+		for (;;);
+
+	/*
+	 * Compute the delta between the address I am compiled to run at
+	 * and the address I am actually running at.
+	 */
+	load_delta = physaddr - (unsigned long)(_text - __START_KERNEL_map);
+
+	/* Is the address not 2M aligned? */
+	if (load_delta & ~PMD_PAGE_MASK)
+		for (;;);
+
+	/* Fixup the physical addresses in the page table */
+
+	pgd = fixup_pointer(&early_level4_pgt, physaddr);
+	pgd[pgd_index(__START_KERNEL_map)] += load_delta;
+
+	pud = fixup_pointer(&level3_kernel_pgt, physaddr);
+	pud[510] += load_delta;
+	pud[511] += load_delta;
+
+	pmd = fixup_pointer(level2_fixmap_pgt, physaddr);
+	pmd[506] += load_delta;
+
+	/*
+	 * Set up the identity mapping for the switchover.  These
+	 * entries should *NOT* have the global bit set!  This also
+	 * creates a bunch of nonsense entries but that is fine --
+	 * it avoids problems around wraparound.
+	 */
+
+	pud = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
+	pmd = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
+
+	i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
+	pgd[i + 0] = (pgdval_t)pud + _KERNPG_TABLE;
+	pgd[i + 1] = (pgdval_t)pud + _KERNPG_TABLE;
+
+	i = (physaddr >> PUD_SHIFT) % PTRS_PER_PUD;
+	pud[i + 0] = (pudval_t)pmd + _KERNPG_TABLE;
+	pud[i + 1] = (pudval_t)pmd + _KERNPG_TABLE;
+
+	pmd_entry = __PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL;
+	pmd_entry +=  physaddr;
+
+	for (i = 0; i < DIV_ROUND_UP(_end - _text, PMD_SIZE); i++) {
+		int idx = i + (physaddr >> PMD_SHIFT) % PTRS_PER_PMD;
+		pmd[idx] = pmd_entry + i * PMD_SIZE;
+	}
+
+	/*
+	 * Fixup the kernel text+data virtual addresses. Note that
+	 * we might write invalid pmds, when the kernel is relocated
+	 * cleanup_highmap() fixes this up along with the mappings
+	 * beyond _end.
+	 */
+
+	pmd = fixup_pointer(level2_kernel_pgt, physaddr);
+	for (i = 0; i < PTRS_PER_PMD; i++) {
+		if (pmd[i] & _PAGE_PRESENT)
+			pmd[i] += load_delta;
+	}
+
+	/* Fixup phys_base */
+	p = fixup_pointer(&phys_base, physaddr);
+	*p += load_delta;
+}
+
 /* Wipe all early page tables except for the kernel symbol map */
 static void __init reset_early_page_tables(void)
 {
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index ac9d327d2e42..1432d530fa35 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -72,100 +72,11 @@ startup_64:
 	/* Sanitize CPU configuration */
 	call verify_cpu
 
-	/*
-	 * Compute the delta between the address I am compiled to run at and the
-	 * address I am actually running at.
-	 */
-	leaq	_text(%rip), %rbp
-	subq	$_text - __START_KERNEL_map, %rbp
-
-	/* Is the address not 2M aligned? */
-	testl	$~PMD_PAGE_MASK, %ebp
-	jnz	bad_address
-
-	/*
-	 * Is the address too large?
-	 */
-	leaq	_text(%rip), %rax
-	shrq	$MAX_PHYSMEM_BITS, %rax
-	jnz	bad_address
-
-	/*
-	 * Fixup the physical addresses in the page table
-	 */
-	addq	%rbp, early_level4_pgt + (L4_START_KERNEL*8)(%rip)
-
-	addq	%rbp, level3_kernel_pgt + (510*8)(%rip)
-	addq	%rbp, level3_kernel_pgt + (511*8)(%rip)
-
-	addq	%rbp, level2_fixmap_pgt + (506*8)(%rip)
-
-	/*
-	 * Set up the identity mapping for the switchover.  These
-	 * entries should *NOT* have the global bit set!  This also
-	 * creates a bunch of nonsense entries but that is fine --
-	 * it avoids problems around wraparound.
-	 */
 	leaq	_text(%rip), %rdi
-	leaq	early_level4_pgt(%rip), %rbx
-
-	movq	%rdi, %rax
-	shrq	$PGDIR_SHIFT, %rax
-
-	leaq	(PAGE_SIZE + _KERNPG_TABLE)(%rbx), %rdx
-	movq	%rdx, 0(%rbx,%rax,8)
-	movq	%rdx, 8(%rbx,%rax,8)
-
-	addq	$PAGE_SIZE, %rdx
-	movq	%rdi, %rax
-	shrq	$PUD_SHIFT, %rax
-	andl	$(PTRS_PER_PUD-1), %eax
-	movq	%rdx, PAGE_SIZE(%rbx,%rax,8)
-	incl	%eax
-	andl	$(PTRS_PER_PUD-1), %eax
-	movq	%rdx, PAGE_SIZE(%rbx,%rax,8)
-
-	addq	$PAGE_SIZE * 2, %rbx
-	movq	%rdi, %rax
-	shrq	$PMD_SHIFT, %rdi
-	addq	$(__PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL), %rax
-	leaq	(_end - 1)(%rip), %rcx
-	shrq	$PMD_SHIFT, %rcx
-	subq	%rdi, %rcx
-	incl	%ecx
+	pushq	%rsi
+	call	__startup_64
+	popq	%rsi
 
-1:
-	andq	$(PTRS_PER_PMD - 1), %rdi
-	movq	%rax, (%rbx,%rdi,8)
-	incq	%rdi
-	addq	$PMD_SIZE, %rax
-	decl	%ecx
-	jnz	1b
-
-	test %rbp, %rbp
-	jz .Lskip_fixup
-
-	/*
-	 * Fixup the kernel text+data virtual addresses. Note that
-	 * we might write invalid pmds, when the kernel is relocated
-	 * cleanup_highmap() fixes this up along with the mappings
-	 * beyond _end.
-	 */
-	leaq	level2_kernel_pgt(%rip), %rdi
-	leaq	PAGE_SIZE(%rdi), %r8
-	/* See if it is a valid page table entry */
-1:	testb	$_PAGE_PRESENT, 0(%rdi)
-	jz	2f
-	addq	%rbp, 0(%rdi)
-	/* Go to the next page */
-2:	addq	$8, %rdi
-	cmp	%r8, %rdi
-	jne	1b
-
-	/* Fixup phys_base */
-	addq	%rbp, phys_base(%rip)
-
-.Lskip_fixup:
 	movq	$(early_level4_pgt - __START_KERNEL_map), %rax
 	jmp 1f
 ENTRY(secondary_startup_64)
-- 
2.11.0

* [PATCHv5, REBASED 3/9] x86/boot/64: Rename init_level4_pgt and early_level4_pgt
From: Kirill A. Shutemov @ 2017-05-15 12:12 UTC
  To: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov

With CONFIG_X86_5LEVEL=y, level 4 is no longer the top level of the page
tables.

Let's give these variables more generic names: init_top_pgt and
early_top_pgt.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/include/asm/pgtable.h     |  2 +-
 arch/x86/include/asm/pgtable_64.h  |  4 ++--
 arch/x86/kernel/espfix_64.c        |  2 +-
 arch/x86/kernel/head64.c           | 18 +++++++++---------
 arch/x86/kernel/head_64.S          | 14 +++++++-------
 arch/x86/kernel/machine_kexec_64.c |  2 +-
 arch/x86/mm/dump_pagetables.c      |  2 +-
 arch/x86/mm/kasan_init_64.c        | 12 ++++++------
 arch/x86/realmode/init.c           |  2 +-
 arch/x86/xen/mmu_pv.c              | 18 +++++++++---------
 arch/x86/xen/xen-pvh.S             |  2 +-
 11 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index f5af95a0c6b8..f59c5ec823f4 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -917,7 +917,7 @@ extern pgd_t trampoline_pgd_entry;
 static inline void __meminit init_trampoline_default(void)
 {
 	/* Default trampoline pgd value */
-	trampoline_pgd_entry = init_level4_pgt[pgd_index(__PAGE_OFFSET)];
+	trampoline_pgd_entry = init_top_pgt[pgd_index(__PAGE_OFFSET)];
 }
 # ifdef CONFIG_RANDOMIZE_MEMORY
 void __meminit init_trampoline(void);
diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index 9991224f6238..c6098092205d 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -20,9 +20,9 @@ extern pmd_t level2_kernel_pgt[512];
 extern pmd_t level2_fixmap_pgt[512];
 extern pmd_t level2_ident_pgt[512];
 extern pte_t level1_fixmap_pgt[512];
-extern pgd_t init_level4_pgt[];
+extern pgd_t init_top_pgt[];
 
-#define swapper_pg_dir init_level4_pgt
+#define swapper_pg_dir init_top_pgt
 
 extern void paging_init(void);
 
diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c
index 8e598a1ad986..6b91e2eb8d3f 100644
--- a/arch/x86/kernel/espfix_64.c
+++ b/arch/x86/kernel/espfix_64.c
@@ -125,7 +125,7 @@ void __init init_espfix_bsp(void)
 	p4d_t *p4d;
 
 	/* Install the espfix pud into the kernel page directory */
-	pgd = &init_level4_pgt[pgd_index(ESPFIX_BASE_ADDR)];
+	pgd = &init_top_pgt[pgd_index(ESPFIX_BASE_ADDR)];
 	p4d = p4d_alloc(&init_mm, pgd, ESPFIX_BASE_ADDR);
 	p4d_populate(&init_mm, p4d, espfix_pud_page);
 
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index b59c550b1d3a..f8a2f34fa15d 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -33,7 +33,7 @@
 /*
  * Manage page tables very early on.
  */
-extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern pgd_t early_top_pgt[PTRS_PER_PGD];
 extern pmd_t early_dynamic_pgts[EARLY_DYNAMIC_PAGE_TABLES][PTRS_PER_PMD];
 static unsigned int __initdata next_early_pgt;
 pmdval_t early_pmd_flags = __PAGE_KERNEL_LARGE & ~(_PAGE_GLOBAL | _PAGE_NX);
@@ -67,7 +67,7 @@ void __init __startup_64(unsigned long physaddr)
 
 	/* Fixup the physical addresses in the page table */
 
-	pgd = fixup_pointer(&early_level4_pgt, physaddr);
+	pgd = fixup_pointer(&early_top_pgt, physaddr);
 	pgd[pgd_index(__START_KERNEL_map)] += load_delta;
 
 	pud = fixup_pointer(&level3_kernel_pgt, physaddr);
@@ -124,9 +124,9 @@ void __init __startup_64(unsigned long physaddr)
 /* Wipe all early page tables except for the kernel symbol map */
 static void __init reset_early_page_tables(void)
 {
-	memset(early_level4_pgt, 0, sizeof(pgd_t)*(PTRS_PER_PGD-1));
+	memset(early_top_pgt, 0, sizeof(pgd_t)*(PTRS_PER_PGD-1));
 	next_early_pgt = 0;
-	write_cr3(__pa_nodebug(early_level4_pgt));
+	write_cr3(__pa_nodebug(early_top_pgt));
 }
 
 /* Create a new PMD entry */
@@ -138,11 +138,11 @@ int __init early_make_pgtable(unsigned long address)
 	pmdval_t pmd, *pmd_p;
 
 	/* Invalid address or early pgt is done ?  */
-	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_level4_pgt))
+	if (physaddr >= MAXMEM || read_cr3() != __pa_nodebug(early_top_pgt))
 		return -1;
 
 again:
-	pgd_p = &early_level4_pgt[pgd_index(address)].pgd;
+	pgd_p = &early_top_pgt[pgd_index(address)].pgd;
 	pgd = *pgd_p;
 
 	/*
@@ -239,7 +239,7 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 
 	clear_bss();
 
-	clear_page(init_level4_pgt);
+	clear_page(init_top_pgt);
 
 	kasan_early_init();
 
@@ -254,8 +254,8 @@ asmlinkage __visible void __init x86_64_start_kernel(char * real_mode_data)
 	 */
 	load_ucode_bsp();
 
-	/* set init_level4_pgt kernel high mapping*/
-	init_level4_pgt[511] = early_level4_pgt[511];
+	/* set init_top_pgt kernel high mapping*/
+	init_top_pgt[511] = early_top_pgt[511];
 
 	x86_64_start_reservations(real_mode_data);
 }
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 1432d530fa35..0ae0bad4d4d5 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -77,7 +77,7 @@ startup_64:
 	call	__startup_64
 	popq	%rsi
 
-	movq	$(early_level4_pgt - __START_KERNEL_map), %rax
+	movq	$(early_top_pgt - __START_KERNEL_map), %rax
 	jmp 1f
 ENTRY(secondary_startup_64)
 	/*
@@ -97,7 +97,7 @@ ENTRY(secondary_startup_64)
 	/* Sanitize CPU configuration */
 	call verify_cpu
 
-	movq	$(init_level4_pgt - __START_KERNEL_map), %rax
+	movq	$(init_top_pgt - __START_KERNEL_map), %rax
 1:
 
 	/* Enable PAE mode and PGE */
@@ -328,7 +328,7 @@ GLOBAL(name)
 	.endr
 
 	__INITDATA
-NEXT_PAGE(early_level4_pgt)
+NEXT_PAGE(early_top_pgt)
 	.fill	511,8,0
 	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
 
@@ -338,14 +338,14 @@ NEXT_PAGE(early_dynamic_pgts)
 	.data
 
 #ifndef CONFIG_XEN
-NEXT_PAGE(init_level4_pgt)
+NEXT_PAGE(init_top_pgt)
 	.fill	512,8,0
 #else
-NEXT_PAGE(init_level4_pgt)
+NEXT_PAGE(init_top_pgt)
 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
-	.org    init_level4_pgt + L4_PAGE_OFFSET*8, 0
+	.org    init_top_pgt + L4_PAGE_OFFSET*8, 0
 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
-	.org    init_level4_pgt + L4_START_KERNEL*8, 0
+	.org    init_top_pgt + L4_START_KERNEL*8, 0
 	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
 	.quad   level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
 
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 6f5ca4ebe6e5..cb0a30473c23 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -347,7 +347,7 @@ void machine_kexec(struct kimage *image)
 void arch_crash_save_vmcoreinfo(void)
 {
 	VMCOREINFO_NUMBER(phys_base);
-	VMCOREINFO_SYMBOL(init_level4_pgt);
+	VMCOREINFO_SYMBOL(init_top_pgt);
 
 #ifdef CONFIG_NUMA
 	VMCOREINFO_SYMBOL(node_data);
diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index bce6990b1d81..0470826d2bdc 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -431,7 +431,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m, pgd_t *pgd,
 				       bool checkwx)
 {
 #ifdef CONFIG_X86_64
-	pgd_t *start = (pgd_t *) &init_level4_pgt;
+	pgd_t *start = (pgd_t *) &init_top_pgt;
 #else
 	pgd_t *start = swapper_pg_dir;
 #endif
diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index 0c7d8129bed6..88215ac16b24 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -12,7 +12,7 @@
 #include <asm/tlbflush.h>
 #include <asm/sections.h>
 
-extern pgd_t early_level4_pgt[PTRS_PER_PGD];
+extern pgd_t early_top_pgt[PTRS_PER_PGD];
 extern struct range pfn_mapped[E820_MAX_ENTRIES];
 
 static int __init map_range(struct range *range)
@@ -109,8 +109,8 @@ void __init kasan_early_init(void)
 	for (i = 0; CONFIG_PGTABLE_LEVELS >= 5 && i < PTRS_PER_P4D; i++)
 		kasan_zero_p4d[i] = __p4d(p4d_val);
 
-	kasan_map_early_shadow(early_level4_pgt);
-	kasan_map_early_shadow(init_level4_pgt);
+	kasan_map_early_shadow(early_top_pgt);
+	kasan_map_early_shadow(init_top_pgt);
 }
 
 void __init kasan_init(void)
@@ -121,8 +121,8 @@ void __init kasan_init(void)
 	register_die_notifier(&kasan_die_notifier);
 #endif
 
-	memcpy(early_level4_pgt, init_level4_pgt, sizeof(early_level4_pgt));
-	load_cr3(early_level4_pgt);
+	memcpy(early_top_pgt, init_top_pgt, sizeof(early_top_pgt));
+	load_cr3(early_top_pgt);
 	__flush_tlb_all();
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
@@ -148,7 +148,7 @@ void __init kasan_init(void)
 	kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
 			(void *)KASAN_SHADOW_END);
 
-	load_cr3(init_level4_pgt);
+	load_cr3(init_top_pgt);
 	__flush_tlb_all();
 
 	/*
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index a163a90af4aa..cd4be19c36dc 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -102,7 +102,7 @@ static void __init setup_real_mode(void)
 
 	trampoline_pgd = (u64 *) __va(real_mode_header->trampoline_pgd);
 	trampoline_pgd[0] = trampoline_pgd_entry.pgd;
-	trampoline_pgd[511] = init_level4_pgt[511].pgd;
+	trampoline_pgd[511] = init_top_pgt[511].pgd;
 #endif
 }
 
diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
index 7397d8b8459d..049d3719d704 100644
--- a/arch/x86/xen/mmu_pv.c
+++ b/arch/x86/xen/mmu_pv.c
@@ -1479,8 +1479,8 @@ static void xen_write_cr3(unsigned long cr3)
  * At the start of the day - when Xen launches a guest, it has already
  * built pagetables for the guest. We diligently look over them
  * in xen_setup_kernel_pagetable and graft as appropriate them in the
- * init_level4_pgt and its friends. Then when we are happy we load
- * the new init_level4_pgt - and continue on.
+ * init_top_pgt and its friends. Then when we are happy we load
+ * the new init_top_pgt - and continue on.
  *
  * The generic code starts (start_kernel) and 'init_mem_mapping' sets
  * up the rest of the pagetables. When it has completed it loads the cr3.
@@ -1923,13 +1923,13 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	pt_end = pt_base + xen_start_info->nr_pt_frames;
 
 	/* Zap identity mapping */
-	init_level4_pgt[0] = __pgd(0);
+	init_top_pgt[0] = __pgd(0);
 
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
 		/* Pre-constructed entries are in pfn, so convert to mfn */
 		/* L4[272] -> level3_ident_pgt
 		 * L4[511] -> level3_kernel_pgt */
-		convert_pfn_mfn(init_level4_pgt);
+		convert_pfn_mfn(init_top_pgt);
 
 		/* L3_i[0] -> level2_ident_pgt */
 		convert_pfn_mfn(level3_ident_pgt);
@@ -1960,11 +1960,11 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 	/* Copy the initial P->M table mappings if necessary. */
 	i = pgd_index(xen_start_info->mfn_list);
 	if (i && i < pgd_index(__START_KERNEL_map))
-		init_level4_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i];
+		init_top_pgt[i] = ((pgd_t *)xen_start_info->pt_base)[i];
 
 	if (!xen_feature(XENFEAT_auto_translated_physmap)) {
 		/* Make pagetable pieces RO */
-		set_page_prot(init_level4_pgt, PAGE_KERNEL_RO);
+		set_page_prot(init_top_pgt, PAGE_KERNEL_RO);
 		set_page_prot(level3_ident_pgt, PAGE_KERNEL_RO);
 		set_page_prot(level3_kernel_pgt, PAGE_KERNEL_RO);
 		set_page_prot(level3_user_vsyscall, PAGE_KERNEL_RO);
@@ -1975,7 +1975,7 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 
 		/* Pin down new L4 */
 		pin_pagetable_pfn(MMUEXT_PIN_L4_TABLE,
-				  PFN_DOWN(__pa_symbol(init_level4_pgt)));
+				  PFN_DOWN(__pa_symbol(init_top_pgt)));
 
 		/* Unpin Xen-provided one */
 		pin_pagetable_pfn(MMUEXT_UNPIN_TABLE, PFN_DOWN(__pa(pgd)));
@@ -1986,10 +1986,10 @@ void __init xen_setup_kernel_pagetable(pgd_t *pgd, unsigned long max_pfn)
 		 * pgd.
 		 */
 		xen_mc_batch();
-		__xen_write_cr3(true, __pa(init_level4_pgt));
+		__xen_write_cr3(true, __pa(init_top_pgt));
 		xen_mc_issue(PARAVIRT_LAZY_CPU);
 	} else
-		native_write_cr3(__pa(init_level4_pgt));
+		native_write_cr3(__pa(init_top_pgt));
 
 	/* We can't that easily rip out L3 and L2, as the Xen pagetables are
 	 * set out this way: [L4], [L1], [L2], [L3], [L1], [L1] ...  for
diff --git a/arch/x86/xen/xen-pvh.S b/arch/x86/xen/xen-pvh.S
index 5e246716d58f..e1a5fbeae08d 100644
--- a/arch/x86/xen/xen-pvh.S
+++ b/arch/x86/xen/xen-pvh.S
@@ -87,7 +87,7 @@ ENTRY(pvh_start_xen)
 	wrmsr
 
 	/* Enable pre-constructed page tables. */
-	mov $_pa(init_level4_pgt), %eax
+	mov $_pa(init_top_pgt), %eax
 	mov %eax, %cr3
 	mov $(X86_CR0_PG | X86_CR0_PE), %eax
 	mov %eax, %cr0
-- 
2.11.0

* [PATCHv5, REBASED 4/9] x86/boot/64: Add support of additional page table level during early boot
From: Kirill A. Shutemov @ 2017-05-15 12:12 UTC
  To: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov

This patch adds support for 5-level paging during early boot.
It generalizes boot for 4- and 5-level paging on 64-bit systems, with a
compile-time switch between them.

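For reference, with 4 KiB pages each level translates 9 bits of the virtual
address; the extra level added here consumes bits 48-56 (a sketch using the
standard x86-64 shifts):

	pgd_index = (addr >> 48) & 511;	/* top level, 5-level paging only */
	p4d_index = (addr >> 39) & 511;	/* top level when p4d is folded */
	pud_index = (addr >> 30) & 511;
	pmd_index = (addr >> 21) & 511;
	pte_index = (addr >> 12) & 511;
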
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/boot/compressed/head_64.S          | 23 +++++++++++---
 arch/x86/include/asm/pgtable_64.h           |  2 ++
 arch/x86/include/uapi/asm/processor-flags.h |  2 ++
 arch/x86/kernel/head64.c                    | 48 +++++++++++++++++++++++++----
 arch/x86/kernel/head_64.S                   | 29 +++++++++++++----
 5 files changed, 88 insertions(+), 16 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index d2ae1f821e0c..3ed26769810b 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -122,9 +122,12 @@ ENTRY(startup_32)
 	addl	%ebp, gdt+2(%ebp)
 	lgdt	gdt(%ebp)
 
-	/* Enable PAE mode */
+	/* Enable PAE and LA57 mode */
 	movl	%cr4, %eax
 	orl	$X86_CR4_PAE, %eax
+#ifdef CONFIG_X86_5LEVEL
+	orl	$X86_CR4_LA57, %eax
+#endif
 	movl	%eax, %cr4
 
  /*
@@ -136,13 +139,24 @@ ENTRY(startup_32)
 	movl	$(BOOT_INIT_PGT_SIZE/4), %ecx
 	rep	stosl
 
+	xorl	%edx, %edx
+
+	/* Build Top Level */
+	leal	pgtable(%ebx,%edx,1), %edi
+	leal	0x1007 (%edi), %eax
+	movl	%eax, 0(%edi)
+
+#ifdef CONFIG_X86_5LEVEL
 	/* Build Level 4 */
-	leal	pgtable + 0(%ebx), %edi
+	addl	$0x1000, %edx
+	leal	pgtable(%ebx,%edx), %edi
 	leal	0x1007 (%edi), %eax
 	movl	%eax, 0(%edi)
+#endif
 
 	/* Build Level 3 */
-	leal	pgtable + 0x1000(%ebx), %edi
+	addl	$0x1000, %edx
+	leal	pgtable(%ebx,%edx), %edi
 	leal	0x1007(%edi), %eax
 	movl	$4, %ecx
 1:	movl	%eax, 0x00(%edi)
@@ -152,7 +166,8 @@ ENTRY(startup_32)
 	jnz	1b
 
 	/* Build Level 2 */
-	leal	pgtable + 0x2000(%ebx), %edi
+	addl	$0x1000, %edx
+	leal	pgtable(%ebx,%edx), %edi
 	movl	$0x00000183, %eax
 	movl	$2048, %ecx
 1:	movl	%eax, 0(%edi)
diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
index c6098092205d..c9e41f1599dd 100644
--- a/arch/x86/include/asm/pgtable_64.h
+++ b/arch/x86/include/asm/pgtable_64.h
@@ -14,6 +14,8 @@
 #include <linux/bitops.h>
 #include <linux/threads.h>
 
+extern p4d_t level4_kernel_pgt[512];
+extern p4d_t level4_ident_pgt[512];
 extern pud_t level3_kernel_pgt[512];
 extern pud_t level3_ident_pgt[512];
 extern pmd_t level2_kernel_pgt[512];
diff --git a/arch/x86/include/uapi/asm/processor-flags.h b/arch/x86/include/uapi/asm/processor-flags.h
index 567de50a4c2a..185f3d10c194 100644
--- a/arch/x86/include/uapi/asm/processor-flags.h
+++ b/arch/x86/include/uapi/asm/processor-flags.h
@@ -104,6 +104,8 @@
 #define X86_CR4_OSFXSR		_BITUL(X86_CR4_OSFXSR_BIT)
 #define X86_CR4_OSXMMEXCPT_BIT	10 /* enable unmasked SSE exceptions */
 #define X86_CR4_OSXMMEXCPT	_BITUL(X86_CR4_OSXMMEXCPT_BIT)
+#define X86_CR4_LA57_BIT	12 /* enable 5-level page tables */
+#define X86_CR4_LA57		_BITUL(X86_CR4_LA57_BIT)
 #define X86_CR4_VMXE_BIT	13 /* enable VMX virtualization */
 #define X86_CR4_VMXE		_BITUL(X86_CR4_VMXE_BIT)
 #define X86_CR4_SMXE_BIT	14 /* enable safer mode (TXT) */
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index f8a2f34fa15d..9403633f4c7c 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -47,6 +47,7 @@ void __init __startup_64(unsigned long physaddr)
 {
 	unsigned long load_delta, *p;
 	pgdval_t *pgd;
+	p4dval_t *p4d;
 	pudval_t *pud;
 	pmdval_t *pmd, pmd_entry;
 	int i;
@@ -70,6 +71,11 @@ void __init __startup_64(unsigned long physaddr)
 	pgd = fixup_pointer(&early_top_pgt, physaddr);
 	pgd[pgd_index(__START_KERNEL_map)] += load_delta;
 
+	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
+		p4d = fixup_pointer(&level4_kernel_pgt, physaddr);
+		p4d[511] += load_delta;
+	}
+
 	pud = fixup_pointer(&level3_kernel_pgt, physaddr);
 	pud[510] += load_delta;
 	pud[511] += load_delta;
@@ -87,9 +93,21 @@ void __init __startup_64(unsigned long physaddr)
 	pud = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
 	pmd = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
 
-	i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
-	pgd[i + 0] = (pgdval_t)pud + _KERNPG_TABLE;
-	pgd[i + 1] = (pgdval_t)pud + _KERNPG_TABLE;
+	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
+		p4d = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
+
+		i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
+		pgd[i + 0] = (pgdval_t)p4d + _KERNPG_TABLE;
+		pgd[i + 1] = (pgdval_t)p4d + _KERNPG_TABLE;
+
+		i = (physaddr >> P4D_SHIFT) % PTRS_PER_P4D;
+		p4d[i + 0] = (pgdval_t)pud + _KERNPG_TABLE;
+		p4d[i + 1] = (pgdval_t)pud + _KERNPG_TABLE;
+	} else {
+		i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
+		pgd[i + 0] = (pgdval_t)pud + _KERNPG_TABLE;
+		pgd[i + 1] = (pgdval_t)pud + _KERNPG_TABLE;
+	}
 
 	i = (physaddr >> PUD_SHIFT) % PTRS_PER_PUD;
 	pud[i + 0] = (pudval_t)pmd + _KERNPG_TABLE;
@@ -134,6 +152,7 @@ int __init early_make_pgtable(unsigned long address)
 {
 	unsigned long physaddr = address - __PAGE_OFFSET;
 	pgdval_t pgd, *pgd_p;
+	p4dval_t p4d, *p4d_p;
 	pudval_t pud, *pud_p;
 	pmdval_t pmd, *pmd_p;
 
@@ -150,8 +169,25 @@ int __init early_make_pgtable(unsigned long address)
 	 * critical -- __PAGE_OFFSET would point us back into the dynamic
 	 * range and we might end up looping forever...
 	 */
-	if (pgd)
-		pud_p = (pudval_t *)((pgd & PTE_PFN_MASK) + __START_KERNEL_map - phys_base);
+	if (!IS_ENABLED(CONFIG_X86_5LEVEL))
+		p4d_p = pgd_p;
+	else if (pgd)
+		p4d_p = (p4dval_t *)((pgd & PTE_PFN_MASK) + __START_KERNEL_map - phys_base);
+	else {
+		if (next_early_pgt >= EARLY_DYNAMIC_PAGE_TABLES) {
+			reset_early_page_tables();
+			goto again;
+		}
+
+		p4d_p = (p4dval_t *)early_dynamic_pgts[next_early_pgt++];
+		memset(p4d_p, 0, sizeof(*p4d_p) * PTRS_PER_P4D);
+		*pgd_p = (pgdval_t)p4d_p - __START_KERNEL_map + phys_base + _KERNPG_TABLE;
+	}
+	p4d_p += p4d_index(address);
+	p4d = *p4d_p;
+
+	if (p4d)
+		pud_p = (pudval_t *)((p4d & PTE_PFN_MASK) + __START_KERNEL_map - phys_base);
 	else {
 		if (next_early_pgt >= EARLY_DYNAMIC_PAGE_TABLES) {
 			reset_early_page_tables();
@@ -160,7 +196,7 @@ int __init early_make_pgtable(unsigned long address)
 
 		pud_p = (pudval_t *)early_dynamic_pgts[next_early_pgt++];
 		memset(pud_p, 0, sizeof(*pud_p) * PTRS_PER_PUD);
-		*pgd_p = (pgdval_t)pud_p - __START_KERNEL_map + phys_base + _KERNPG_TABLE;
+		*p4d_p = (p4dval_t)pud_p - __START_KERNEL_map + phys_base + _KERNPG_TABLE;
 	}
 	pud_p += pud_index(address);
 	pud = *pud_p;
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 0ae0bad4d4d5..7b527fa47536 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -37,10 +37,14 @@
  *
  */
 
+#define p4d_index(x)	(((x) >> P4D_SHIFT) & (PTRS_PER_P4D-1))
 #define pud_index(x)	(((x) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
 
-L4_PAGE_OFFSET = pgd_index(__PAGE_OFFSET_BASE)
-L4_START_KERNEL = pgd_index(__START_KERNEL_map)
+PGD_PAGE_OFFSET = pgd_index(__PAGE_OFFSET_BASE)
+PGD_START_KERNEL = pgd_index(__START_KERNEL_map)
+#ifdef CONFIG_X86_5LEVEL
+L4_START_KERNEL = p4d_index(__START_KERNEL_map)
+#endif
 L3_START_KERNEL = pud_index(__START_KERNEL_map)
 
 	.text
@@ -100,11 +104,14 @@ ENTRY(secondary_startup_64)
 	movq	$(init_top_pgt - __START_KERNEL_map), %rax
 1:
 
-	/* Enable PAE mode and PGE */
+	/* Enable PAE mode, PGE and LA57 */
 	movl	$(X86_CR4_PAE | X86_CR4_PGE), %ecx
+#ifdef CONFIG_X86_5LEVEL
+	orl	$X86_CR4_LA57, %ecx
+#endif
 	movq	%rcx, %cr4
 
-	/* Setup early boot stage 4 level pagetables. */
+	/* Setup early boot stage 4-/5-level pagetables. */
 	addq	phys_base(%rip), %rax
 	movq	%rax, %cr3
 
@@ -330,7 +337,11 @@ GLOBAL(name)
 	__INITDATA
 NEXT_PAGE(early_top_pgt)
 	.fill	511,8,0
+#ifdef CONFIG_X86_5LEVEL
+	.quad	level4_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+#else
 	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+#endif
 
 NEXT_PAGE(early_dynamic_pgts)
 	.fill	512*EARLY_DYNAMIC_PAGE_TABLES,8,0
@@ -343,9 +354,9 @@ NEXT_PAGE(init_top_pgt)
 #else
 NEXT_PAGE(init_top_pgt)
 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
-	.org    init_top_pgt + L4_PAGE_OFFSET*8, 0
+	.org    init_top_pgt + PGD_PAGE_OFFSET*8, 0
 	.quad   level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE
-	.org    init_top_pgt + L4_START_KERNEL*8, 0
+	.org    init_top_pgt + PGD_START_KERNEL*8, 0
 	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
 	.quad   level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
 
@@ -359,6 +370,12 @@ NEXT_PAGE(level2_ident_pgt)
 	PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
 #endif
 
+#ifdef CONFIG_X86_5LEVEL
+NEXT_PAGE(level4_kernel_pgt)
+	.fill	511,8,0
+	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE
+#endif
+
 NEXT_PAGE(level3_kernel_pgt)
 	.fill	L3_START_KERNEL,8,0
 	/* (2^48-(2*1024*1024*1024)-((2^39)*511))/(2^30) = 510 */
-- 
2.11.0

* [PATCHv5, REBASED 5/9] x86/mm: Add sync_global_pgds() for configuration with 5-level paging
From: Kirill A. Shutemov @ 2017-05-15 12:12 UTC
  To: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov

This basically restores a slightly modified version of the original
sync_global_pgds() that we had before the folded p4d level was introduced.

The only modification is protection against 'addr' overflow.

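The overflow can actually happen here: a sketch of the case the check
guards against, assuming 'addr' has reached the topmost PGD slot:

	addr = ALIGN(addr + 1, PGDIR_SIZE);	/* wraps past 2^64 to 0 */
	/* 0 <= end, so the loop condition still holds; only the
	 * 'if (addr < start) break;' guard terminates the walk */
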
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/mm/init_64.c | 39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 95651dc58e09..ce410c05d68d 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -92,6 +92,44 @@ __setup("noexec32=", nonx32_setup);
  * When memory was added make sure all the processes MM have
  * suitable PGD entries in the local PGD level page.
  */
+#ifdef CONFIG_X86_5LEVEL
+void sync_global_pgds(unsigned long start, unsigned long end)
+{
+	unsigned long addr;
+
+	for (addr = start; addr <= end; addr = ALIGN(addr + 1, PGDIR_SIZE)) {
+		const pgd_t *pgd_ref = pgd_offset_k(addr);
+		struct page *page;
+
+		/* Check for overflow */
+		if (addr < start)
+			break;
+
+		if (pgd_none(*pgd_ref))
+			continue;
+
+		spin_lock(&pgd_lock);
+		list_for_each_entry(page, &pgd_list, lru) {
+			pgd_t *pgd;
+			spinlock_t *pgt_lock;
+
+			pgd = (pgd_t *)page_address(page) + pgd_index(addr);
+			/* the pgt_lock only for Xen */
+			pgt_lock = &pgd_page_get_mm(page)->page_table_lock;
+			spin_lock(pgt_lock);
+
+			if (!pgd_none(*pgd_ref) && !pgd_none(*pgd))
+				BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
+
+			if (pgd_none(*pgd))
+				set_pgd(pgd, *pgd_ref);
+
+			spin_unlock(pgt_lock);
+		}
+		spin_unlock(&pgd_lock);
+	}
+}
+#else
 void sync_global_pgds(unsigned long start, unsigned long end)
 {
 	unsigned long addr;
@@ -135,6 +173,7 @@ void sync_global_pgds(unsigned long start, unsigned long end)
 		spin_unlock(&pgd_lock);
 	}
 }
+#endif
 
 /*
  * NOTE: This function is marked __ref because it calls __init function
-- 
2.11.0

* [PATCHv5, REBASED 6/9] x86/mm: Make kernel_physical_mapping_init() support 5-level paging
From: Kirill A. Shutemov @ 2017-05-15 12:12 UTC
  To: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov

Populate the additional page table level if CONFIG_X86_5LEVEL is enabled.

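After this change the mapping code walks one more level; the call chain
becomes (phys_p4d_init() degenerates to a direct phys_pud_init() call when
CONFIG_X86_5LEVEL is disabled):

	kernel_physical_mapping_init()
	  -> phys_p4d_init()		/* new in this patch */
	       -> phys_pud_init()
	            -> phys_pmd_init()
	                 -> phys_pte_init()
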
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/mm/init_64.c | 69 ++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 60 insertions(+), 9 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index ce410c05d68d..124f1a77c181 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -624,6 +624,57 @@ phys_pud_init(pud_t *pud_page, unsigned long paddr, unsigned long paddr_end,
 	return paddr_last;
 }
 
+static unsigned long __meminit
+phys_p4d_init(p4d_t *p4d_page, unsigned long paddr, unsigned long paddr_end,
+	      unsigned long page_size_mask)
+{
+	unsigned long paddr_next, paddr_last = paddr_end;
+	unsigned long vaddr = (unsigned long)__va(paddr);
+	int i = p4d_index(vaddr);
+
+	if (!IS_ENABLED(CONFIG_X86_5LEVEL))
+		return phys_pud_init((pud_t *) p4d_page, paddr, paddr_end, page_size_mask);
+
+	for (; i < PTRS_PER_P4D; i++, paddr = paddr_next) {
+		p4d_t *p4d;
+		pud_t *pud;
+
+		vaddr = (unsigned long)__va(paddr);
+		p4d = p4d_page + p4d_index(vaddr);
+		paddr_next = (paddr & P4D_MASK) + P4D_SIZE;
+
+		if (paddr >= paddr_end) {
+			if (!after_bootmem &&
+			    !e820__mapped_any(paddr & P4D_MASK, paddr_next,
+					     E820_TYPE_RAM) &&
+			    !e820__mapped_any(paddr & P4D_MASK, paddr_next,
+					     E820_TYPE_RESERVED_KERN))
+				set_p4d(p4d, __p4d(0));
+			continue;
+		}
+
+		if (!p4d_none(*p4d)) {
+			pud = pud_offset(p4d, 0);
+			paddr_last = phys_pud_init(pud, paddr,
+					paddr_end,
+					page_size_mask);
+			__flush_tlb_all();
+			continue;
+		}
+
+		pud = alloc_low_page();
+		paddr_last = phys_pud_init(pud, paddr, paddr_end,
+					   page_size_mask);
+
+		spin_lock(&init_mm.page_table_lock);
+		p4d_populate(&init_mm, p4d, pud);
+		spin_unlock(&init_mm.page_table_lock);
+	}
+	__flush_tlb_all();
+
+	return paddr_last;
+}
+
 /*
  * Create page table mapping for the physical memory for specific physical
  * addresses. The virtual and physical addresses have to be aligned on PMD level
@@ -645,26 +696,26 @@ kernel_physical_mapping_init(unsigned long paddr_start,
 	for (; vaddr < vaddr_end; vaddr = vaddr_next) {
 		pgd_t *pgd = pgd_offset_k(vaddr);
 		p4d_t *p4d;
-		pud_t *pud;
 
 		vaddr_next = (vaddr & PGDIR_MASK) + PGDIR_SIZE;
 
-		BUILD_BUG_ON(pgd_none(*pgd));
-		p4d = p4d_offset(pgd, vaddr);
-		if (p4d_val(*p4d)) {
-			pud = (pud_t *)p4d_page_vaddr(*p4d);
-			paddr_last = phys_pud_init(pud, __pa(vaddr),
+		if (pgd_val(*pgd)) {
+			p4d = (p4d_t *)pgd_page_vaddr(*pgd);
+			paddr_last = phys_p4d_init(p4d, __pa(vaddr),
 						   __pa(vaddr_end),
 						   page_size_mask);
 			continue;
 		}
 
-		pud = alloc_low_page();
-		paddr_last = phys_pud_init(pud, __pa(vaddr), __pa(vaddr_end),
+		p4d = alloc_low_page();
+		paddr_last = phys_p4d_init(p4d, __pa(vaddr), __pa(vaddr_end),
 					   page_size_mask);
 
 		spin_lock(&init_mm.page_table_lock);
-		p4d_populate(&init_mm, p4d, pud);
+		if (IS_ENABLED(CONFIG_X86_5LEVEL))
+			pgd_populate(&init_mm, pgd, p4d);
+		else
+			p4d_populate(&init_mm, p4d_offset(pgd, vaddr), (pud_t *) p4d);
 		spin_unlock(&init_mm.page_table_lock);
 		pgd_changed = true;
 	}
-- 
2.11.0

* [PATCHv5, REBASED 7/9] x86/mm: Add support for 5-level paging for KASLR
From: Kirill A. Shutemov @ 2017-05-15 12:12 UTC
  To: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov

With 5-level paging, randomization happens at the P4D level instead of the
PUD level.

The maximum amount of supported physical memory is also bumped to 52 bits
for 5-level paging.

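In concrete numbers (assuming the standard x86-64 constants, with
__PHYSICAL_MASK_SHIFT of 46 and 52 respectively), that means:

	4-level: region alignment = PUD_SIZE (1 GiB),
		 direct mapping maximum = 64 TiB
	5-level: region alignment = P4D_SIZE (512 GiB),
		 direct mapping maximum = 1 << (52 - TB_SHIFT) = 4096 TiB
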
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/mm/kaslr.c | 81 ++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 62 insertions(+), 19 deletions(-)

diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
index aed206475aa7..af599167fe3c 100644
--- a/arch/x86/mm/kaslr.c
+++ b/arch/x86/mm/kaslr.c
@@ -6,12 +6,12 @@
  *
  * Entropy is generated using the KASLR early boot functions now shared in
  * the lib directory (originally written by Kees Cook). Randomization is
- * done on PGD & PUD page table levels to increase possible addresses. The
- * physical memory mapping code was adapted to support PUD level virtual
- * addresses. This implementation on the best configuration provides 30,000
- * possible virtual addresses in average for each memory region. An additional
- * low memory page is used to ensure each CPU can start with a PGD aligned
- * virtual address (for realmode).
+ * done on PGD & P4D/PUD page table levels to increase possible addresses.
+ * The physical memory mapping code was adapted to support P4D/PUD level
+ * virtual addresses. This implementation on the best configuration provides
+ * 30,000 possible virtual addresses in average for each memory region.
+ * An additional low memory page is used to ensure each CPU can start with
+ * a PGD aligned virtual address (for realmode).
  *
  * The order of each memory region is not changed. The feature looks at
  * the available space for the regions based on different configuration
@@ -70,7 +70,7 @@ static __initdata struct kaslr_memory_region {
 	unsigned long *base;
 	unsigned long size_tb;
 } kaslr_regions[] = {
-	{ &page_offset_base, 64/* Maximum */ },
+	{ &page_offset_base, 1 << (__PHYSICAL_MASK_SHIFT - TB_SHIFT) /* Maximum */ },
 	{ &vmalloc_base, VMALLOC_SIZE_TB },
 	{ &vmemmap_base, 1 },
 };
@@ -142,7 +142,10 @@ void __init kernel_randomize_memory(void)
 		 */
 		entropy = remain_entropy / (ARRAY_SIZE(kaslr_regions) - i);
 		prandom_bytes_state(&rand_state, &rand, sizeof(rand));
-		entropy = (rand % (entropy + 1)) & PUD_MASK;
+		if (IS_ENABLED(CONFIG_X86_5LEVEL))
+			entropy = (rand % (entropy + 1)) & P4D_MASK;
+		else
+			entropy = (rand % (entropy + 1)) & PUD_MASK;
 		vaddr += entropy;
 		*kaslr_regions[i].base = vaddr;
 
@@ -151,27 +154,21 @@ void __init kernel_randomize_memory(void)
 		 * randomization alignment.
 		 */
 		vaddr += get_padding(&kaslr_regions[i]);
-		vaddr = round_up(vaddr + 1, PUD_SIZE);
+		if (IS_ENABLED(CONFIG_X86_5LEVEL))
+			vaddr = round_up(vaddr + 1, P4D_SIZE);
+		else
+			vaddr = round_up(vaddr + 1, PUD_SIZE);
 		remain_entropy -= entropy;
 	}
 }
 
-/*
- * Create PGD aligned trampoline table to allow real mode initialization
- * of additional CPUs. Consume only 1 low memory page.
- */
-void __meminit init_trampoline(void)
+static void __meminit init_trampoline_pud(void)
 {
 	unsigned long paddr, paddr_next;
 	pgd_t *pgd;
 	pud_t *pud_page, *pud_page_tramp;
 	int i;
 
-	if (!kaslr_memory_enabled()) {
-		init_trampoline_default();
-		return;
-	}
-
 	pud_page_tramp = alloc_low_page();
 
 	paddr = 0;
@@ -192,3 +189,49 @@ void __meminit init_trampoline(void)
 	set_pgd(&trampoline_pgd_entry,
 		__pgd(_KERNPG_TABLE | __pa(pud_page_tramp)));
 }
+
+static void __meminit init_trampoline_p4d(void)
+{
+	unsigned long paddr, paddr_next;
+	pgd_t *pgd;
+	p4d_t *p4d_page, *p4d_page_tramp;
+	int i;
+
+	p4d_page_tramp = alloc_low_page();
+
+	paddr = 0;
+	pgd = pgd_offset_k((unsigned long)__va(paddr));
+	p4d_page = (p4d_t *) pgd_page_vaddr(*pgd);
+
+	for (i = p4d_index(paddr); i < PTRS_PER_P4D; i++, paddr = paddr_next) {
+		p4d_t *p4d, *p4d_tramp;
+		unsigned long vaddr = (unsigned long)__va(paddr);
+
+		p4d_tramp = p4d_page_tramp + p4d_index(paddr);
+		p4d = p4d_page + p4d_index(vaddr);
+		paddr_next = (paddr & P4D_MASK) + P4D_SIZE;
+
+		*p4d_tramp = *p4d;
+	}
+
+	set_pgd(&trampoline_pgd_entry,
+		__pgd(_KERNPG_TABLE | __pa(p4d_page_tramp)));
+}
+
+/*
+ * Create PGD aligned trampoline table to allow real mode initialization
+ * of additional CPUs. Consume only 1 low memory page.
+ */
+void __meminit init_trampoline(void)
+{
+
+	if (!kaslr_memory_enabled()) {
+		init_trampoline_default();
+		return;
+	}
+
+	if (IS_ENABLED(CONFIG_X86_5LEVEL))
+		init_trampoline_p4d();
+	else
+		init_trampoline_pud();
+}
-- 
2.11.0

* [PATCHv5, REBASED 8/9] x86: Enable 5-level paging support
From: Kirill A. Shutemov @ 2017-05-15 12:12 UTC
  To: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov

Most things are now in place, so we can enable support for 5-level paging.

Enabling Xen with 5-level paging requires more work. For now, the patch
makes XEN depend on !X86_5LEVEL.

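A resulting configuration would look something like this (hypothetical
.config fragment):

	CONFIG_X86_64=y
	CONFIG_X86_5LEVEL=y
	CONFIG_PGTABLE_LEVELS=5
	# CONFIG_XEN is not set (blocked by the new dependency)
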
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/Kconfig     | 5 +++++
 arch/x86/xen/Kconfig | 1 +
 2 files changed, 6 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index cd18994a9555..11bd0498f64c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -318,6 +318,7 @@ config FIX_EARLYCON_MEM
 
 config PGTABLE_LEVELS
 	int
+	default 5 if X86_5LEVEL
 	default 4 if X86_64
 	default 3 if X86_PAE
 	default 2
@@ -1390,6 +1391,10 @@ config X86_PAE
 	  has the cost of more pagetable lookup overhead, and also
 	  consumes more pagetable space per process.
 
+config X86_5LEVEL
+	bool "Enable 5-level page tables support"
+	depends on X86_64
+
 config ARCH_PHYS_ADDR_T_64BIT
 	def_bool y
 	depends on X86_64 || X86_PAE
diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 027987638e98..12205e6dfa59 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -5,6 +5,7 @@
 config XEN
 	bool "Xen guest support"
 	depends on PARAVIRT
+	depends on !X86_5LEVEL
 	select PARAVIRT_CLOCK
 	depends on X86_64 || (X86_32 && X86_PAE)
 	depends on X86_LOCAL_APIC && X86_TSC
-- 
2.11.0

* [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
From: Kirill A. Shutemov @ 2017-05-15 12:12 UTC
  To: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov, linux-api

On x86, 5-level paging enables a 56-bit userspace virtual address space.
Not all user space is ready to handle such wide addresses. It's known that
at least some JIT compilers use the higher bits in pointers to encode their
own information. This collides with valid pointers under 5-level paging and
leads to crashes.

To mitigate this, we are not going to allocate virtual address space above
47 bits by default.

But userspace can ask for an allocation from the full address space by
specifying a hint address (with or without MAP_FIXED) above 47 bits.

If the hint address is above 47 bits but MAP_FIXED is not specified, we
first try to find an unmapped area at the specified address. If it's
already occupied, we look for an unmapped area in the *full* address space,
rather than just the 47-bit window.

A high hint address only affects the allocation in question, not any
future mmap()s.

Specifying a high hint address on an older kernel or on a machine without
5-level paging support is safe. The hint will be ignored, and the kernel
will fall back to allocating from the 47-bit address space.

This approach makes it easy for an application's memory allocator to
become aware of the large address space without manually tracking
allocated virtual address space.

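Seen from userspace, the opt-in looks like this (a sketch; any hint above
1UL << 47 works, the value below is arbitrary):

	#include <sys/mman.h>

	void *hint = (void *)(1UL << 52);	/* arbitrary high hint */
	void *p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	/* 'p' may now land above 47 bits; on kernels without 5-level
	 * paging the hint is ignored and 'p' stays below the boundary */
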
One important case we need to handle here is the interaction with MPX.
MPX (without the MAWA extension) cannot handle addresses above 47 bits, so
we need to make sure that MPX cannot be enabled if we already have a VMA
above the boundary, and forbid creating such VMAs once MPX is enabled.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Dmitry Safonov <dsafonov@virtuozzo.com>
Cc: linux-api@vger.kernel.org
---
 arch/x86/include/asm/elf.h       |  4 ++--
 arch/x86/include/asm/mpx.h       |  9 +++++++++
 arch/x86/include/asm/processor.h | 11 ++++++++---
 arch/x86/kernel/sys_x86_64.c     | 30 ++++++++++++++++++++++++++----
 arch/x86/mm/hugetlbpage.c        | 27 +++++++++++++++++++++++----
 arch/x86/mm/mmap.c               |  6 +++---
 arch/x86/mm/mpx.c                | 33 ++++++++++++++++++++++++++++++++-
 7 files changed, 103 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index e8ab9a46bc68..7a30513a4046 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -250,7 +250,7 @@ extern int force_personality32;
    the loader.  We need to make sure that it is out of the way of the program
    that it will "exec", and that there is sufficient room for the brk.  */
 
-#define ELF_ET_DYN_BASE		(TASK_SIZE / 3 * 2)
+#define ELF_ET_DYN_BASE		(TASK_SIZE_LOW / 3 * 2)
 
 /* This yields a mask that user programs can use to figure out what
    instruction set this CPU supports.  This could be done in user space,
@@ -304,7 +304,7 @@ static inline int mmap_is_ia32(void)
 }
 
 extern unsigned long tasksize_32bit(void);
-extern unsigned long tasksize_64bit(void);
+extern unsigned long tasksize_64bit(int full_addr_space);
 extern unsigned long get_mmap_base(int is_legacy);
 
 #ifdef CONFIG_X86_32
diff --git a/arch/x86/include/asm/mpx.h b/arch/x86/include/asm/mpx.h
index a0d662be4c5b..7d7404756bb4 100644
--- a/arch/x86/include/asm/mpx.h
+++ b/arch/x86/include/asm/mpx.h
@@ -73,6 +73,9 @@ static inline void mpx_mm_init(struct mm_struct *mm)
 }
 void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
 		      unsigned long start, unsigned long end);
+
+unsigned long mpx_unmapped_area_check(unsigned long addr, unsigned long len,
+		unsigned long flags);
 #else
 static inline siginfo_t *mpx_generate_siginfo(struct pt_regs *regs)
 {
@@ -94,6 +97,12 @@ static inline void mpx_notify_unmap(struct mm_struct *mm,
 				    unsigned long start, unsigned long end)
 {
 }
+
+static inline unsigned long mpx_unmapped_area_check(unsigned long addr,
+		unsigned long len, unsigned long flags)
+{
+	return addr;
+}
 #endif /* CONFIG_X86_INTEL_MPX */
 
 #endif /* _ASM_X86_MPX_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 3cada998a402..aaed58b03ddb 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -795,6 +795,7 @@ static inline void spin_lock_prefetch(const void *x)
 #define IA32_PAGE_OFFSET	PAGE_OFFSET
 #define TASK_SIZE		PAGE_OFFSET
 #define TASK_SIZE_MAX		TASK_SIZE
+#define DEFAULT_MAP_WINDOW	TASK_SIZE
 #define STACK_TOP		TASK_SIZE
 #define STACK_TOP_MAX		STACK_TOP
 
@@ -834,7 +835,9 @@ static inline void spin_lock_prefetch(const void *x)
  * particular problem by preventing anything from being mapped
  * at the maximum canonical address.
  */
-#define TASK_SIZE_MAX	((1UL << 47) - PAGE_SIZE)
+#define TASK_SIZE_MAX	((1UL << __VIRTUAL_MASK_SHIFT) - PAGE_SIZE)
+
+#define DEFAULT_MAP_WINDOW	((1UL << 47) - PAGE_SIZE)
 
 /* This decides where the kernel will search for a free chunk of vm
  * space during mmap's.
@@ -842,12 +845,14 @@ static inline void spin_lock_prefetch(const void *x)
 #define IA32_PAGE_OFFSET	((current->personality & ADDR_LIMIT_3GB) ? \
 					0xc0000000 : 0xFFFFe000)
 
+#define TASK_SIZE_LOW		(test_thread_flag(TIF_ADDR32) ? \
+					IA32_PAGE_OFFSET : DEFAULT_MAP_WINDOW)
 #define TASK_SIZE		(test_thread_flag(TIF_ADDR32) ? \
 					IA32_PAGE_OFFSET : TASK_SIZE_MAX)
 #define TASK_SIZE_OF(child)	((test_tsk_thread_flag(child, TIF_ADDR32)) ? \
 					IA32_PAGE_OFFSET : TASK_SIZE_MAX)
 
-#define STACK_TOP		TASK_SIZE
+#define STACK_TOP		TASK_SIZE_LOW
 #define STACK_TOP_MAX		TASK_SIZE_MAX
 
 #define INIT_THREAD  {						\
@@ -870,7 +875,7 @@ extern void start_thread(struct pt_regs *regs, unsigned long new_ip,
  * space during mmap's.
  */
 #define __TASK_UNMAPPED_BASE(task_size)	(PAGE_ALIGN(task_size / 3))
-#define TASK_UNMAPPED_BASE		__TASK_UNMAPPED_BASE(TASK_SIZE)
+#define TASK_UNMAPPED_BASE		__TASK_UNMAPPED_BASE(TASK_SIZE_LOW)
 
 #define KSTK_EIP(task)		(task_pt_regs(task)->ip)
 
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 207b8f2582c7..74d1587b181d 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -21,6 +21,7 @@
 #include <asm/compat.h>
 #include <asm/ia32.h>
 #include <asm/syscalls.h>
+#include <asm/mpx.h>
 
 /*
  * Align a virtual address to avoid aliasing in the I$ on AMD F15h.
@@ -100,8 +101,8 @@ SYSCALL_DEFINE6(mmap, unsigned long, addr, unsigned long, len,
 	return error;
 }
 
-static void find_start_end(unsigned long flags, unsigned long *begin,
-			   unsigned long *end)
+static void find_start_end(unsigned long addr, unsigned long flags,
+		unsigned long *begin, unsigned long *end)
 {
 	if (!in_compat_syscall() && (flags & MAP_32BIT)) {
 		/* This is usually used needed to map code in small
@@ -120,7 +121,10 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
 	}
 
 	*begin	= get_mmap_base(1);
-	*end	= in_compat_syscall() ? tasksize_32bit() : tasksize_64bit();
+	if (in_compat_syscall())
+		*end = tasksize_32bit();
+	else
+		*end = tasksize_64bit(addr > DEFAULT_MAP_WINDOW);
 }
 
 unsigned long
@@ -132,10 +136,14 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
 	struct vm_unmapped_area_info info;
 	unsigned long begin, end;
 
+	addr = mpx_unmapped_area_check(addr, len, flags);
+	if (IS_ERR_VALUE(addr))
+		return addr;
+
 	if (flags & MAP_FIXED)
 		return addr;
 
-	find_start_end(flags, &begin, &end);
+	find_start_end(addr, flags, &begin, &end);
 
 	if (len > end)
 		return -ENOMEM;
@@ -171,6 +179,10 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	unsigned long addr = addr0;
 	struct vm_unmapped_area_info info;
 
+	addr = mpx_unmapped_area_check(addr, len, flags);
+	if (IS_ERR_VALUE(addr))
+		return addr;
+
 	/* requested length too big for entire address space */
 	if (len > TASK_SIZE)
 		return -ENOMEM;
@@ -195,6 +207,16 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	info.length = len;
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = get_mmap_base(0);
+
+	/*
+	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
+	 * in the full address space.
+	 *
+	 * !in_compat_syscall() check to avoid high addresses for x32.
+	 */
+	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
+		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
+
 	info.align_mask = 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
 	if (filp) {
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 302f43fd9c28..730f00250acb 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -18,6 +18,7 @@
 #include <asm/tlbflush.h>
 #include <asm/pgalloc.h>
 #include <asm/elf.h>
+#include <asm/mpx.h>
 
 #if 0	/* This is just for testing */
 struct page *
@@ -85,25 +86,38 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *file,
 	info.flags = 0;
 	info.length = len;
 	info.low_limit = get_mmap_base(1);
+
+	/*
+	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
+	 * in the full address space.
+	 */
 	info.high_limit = in_compat_syscall() ?
-		tasksize_32bit() : tasksize_64bit();
+		tasksize_32bit() : tasksize_64bit(addr > DEFAULT_MAP_WINDOW);
+
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
 	info.align_offset = 0;
 	return vm_unmapped_area(&info);
 }
 
 static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
-		unsigned long addr0, unsigned long len,
+		unsigned long addr, unsigned long len,
 		unsigned long pgoff, unsigned long flags)
 {
 	struct hstate *h = hstate_file(file);
 	struct vm_unmapped_area_info info;
-	unsigned long addr;
 
 	info.flags = VM_UNMAPPED_AREA_TOPDOWN;
 	info.length = len;
 	info.low_limit = PAGE_SIZE;
 	info.high_limit = get_mmap_base(0);
+
+	/*
+	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
+	 * in the full address space.
+	 */
+	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
+		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
+
 	info.align_mask = PAGE_MASK & ~huge_page_mask(h);
 	info.align_offset = 0;
 	addr = vm_unmapped_area(&info);
@@ -118,7 +132,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 		VM_BUG_ON(addr != -ENOMEM);
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
-		info.high_limit = TASK_SIZE;
+		info.high_limit = TASK_SIZE_LOW;
 		addr = vm_unmapped_area(&info);
 	}
 
@@ -135,6 +149,11 @@ hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 
 	if (len & ~huge_page_mask(h))
 		return -EINVAL;
+
+	addr = mpx_unmapped_area_check(addr, len, flags);
+	if (IS_ERR_VALUE(addr))
+		return addr;
+
 	if (len > TASK_SIZE)
 		return -ENOMEM;
 
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index 19ad095b41df..199050249d60 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -42,9 +42,9 @@ unsigned long tasksize_32bit(void)
 	return IA32_PAGE_OFFSET;
 }
 
-unsigned long tasksize_64bit(void)
+unsigned long tasksize_64bit(int full_addr_space)
 {
-	return TASK_SIZE_MAX;
+	return full_addr_space ? TASK_SIZE_MAX : DEFAULT_MAP_WINDOW;
 }
 
 static unsigned long stack_maxrandom_size(unsigned long task_size)
@@ -140,7 +140,7 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
 		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
 
 	arch_pick_mmap_base(&mm->mmap_base, &mm->mmap_legacy_base,
-			arch_rnd(mmap64_rnd_bits), tasksize_64bit());
+			arch_rnd(mmap64_rnd_bits), tasksize_64bit(0));
 
 #ifdef CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES
 	/*
diff --git a/arch/x86/mm/mpx.c b/arch/x86/mm/mpx.c
index 1c34b767c84c..8c8da27e8549 100644
--- a/arch/x86/mm/mpx.c
+++ b/arch/x86/mm/mpx.c
@@ -355,10 +355,19 @@ int mpx_enable_management(void)
 	 */
 	bd_base = mpx_get_bounds_dir();
 	down_write(&mm->mmap_sem);
+
+	/* MPX doesn't support addresses above 47-bits yet. */
+	if (find_vma(mm, DEFAULT_MAP_WINDOW)) {
+		pr_warn_once("%s (%d): MPX cannot handle addresses "
+				"above 47-bits. Disabling.",
+				current->comm, current->pid);
+		ret = -ENXIO;
+		goto out;
+	}
 	mm->context.bd_addr = bd_base;
 	if (mm->context.bd_addr == MPX_INVALID_BOUNDS_DIR)
 		ret = -ENXIO;
-
+out:
 	up_write(&mm->mmap_sem);
 	return ret;
 }
@@ -1030,3 +1039,25 @@ void mpx_notify_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (ret)
 		force_sig(SIGSEGV, current);
 }
+
+/* MPX cannot handle addresses above 47-bits yet. */
+unsigned long mpx_unmapped_area_check(unsigned long addr, unsigned long len,
+		unsigned long flags)
+{
+	if (!kernel_managing_mpx_tables(current->mm))
+		return addr;
+	if (addr + len <= DEFAULT_MAP_WINDOW)
+		return addr;
+	if (flags & MAP_FIXED)
+		return -ENOMEM;
+
+	/*
+	 * Requested len is larger than the whole area we're allowed to map in.
+	 * Resetting the hint address wouldn't do much good -- fail early.
+	 */
+	if (len > DEFAULT_MAP_WINDOW)
+		return -ENOMEM;
+
+	/* Look for an unmapped area within DEFAULT_MAP_WINDOW */
+	return 0;
+}
-- 
2.11.0

^ permalink raw reply related	[flat|nested] 24+ messages in thread
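
As an illustration of the semantics the patch above implements, here is a
minimal userspace sketch (hypothetical example, not part of the series): a
plain mmap() stays inside the 47-bit DEFAULT_MAP_WINDOW, while a hint
address above the window, without MAP_FIXED, opts the mapping in to the
full address space.

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 2UL << 20;	/* 2 MiB */

		/* No hint: placed below the 47-bit window, as before. */
		void *low = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		/* Hint above the window: on a 5-level kernel the mapping
		 * may now land anywhere up to TASK_SIZE_MAX. */
		void *high = mmap((void *)(1UL << 48), len,
				  PROT_READ | PROT_WRITE,
				  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		printf("low = %p, high = %p\n", low, high);
		return 0;
	}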

* Re: [PATCHv5, REBASED 8/9] x86: Enable 5-level paging support
  2017-05-15 12:12 ` [PATCHv5, REBASED 8/9] x86: Enable 5-level paging support Kirill A. Shutemov
@ 2017-05-15 12:31   ` Juergen Gross
  2017-05-15 14:11     ` Kirill A. Shutemov
  0 siblings, 1 reply; 24+ messages in thread
From: Juergen Gross @ 2017-05-15 12:31 UTC (permalink / raw)
  To: Kirill A. Shutemov, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin
  Cc: Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel

On 15/05/17 14:12, Kirill A. Shutemov wrote:
> Most things are in place, so we can enable support for 5-level paging.
> 
> Enabling XEN with 5-level paging requires more work. The patch makes XEN
> dependent on !X86_5LEVEL.
> 
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>  arch/x86/Kconfig     | 5 +++++
>  arch/x86/xen/Kconfig | 1 +
>  2 files changed, 6 insertions(+)
> 
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index cd18994a9555..11bd0498f64c 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -318,6 +318,7 @@ config FIX_EARLYCON_MEM
>  
>  config PGTABLE_LEVELS
>  	int
> +	default 5 if X86_5LEVEL
>  	default 4 if X86_64
>  	default 3 if X86_PAE
>  	default 2
> @@ -1390,6 +1391,10 @@ config X86_PAE
>  	  has the cost of more pagetable lookup overhead, and also
>  	  consumes more pagetable space per process.
>  
> +config X86_5LEVEL
> +	bool "Enable 5-level page tables support"
> +	depends on X86_64
> +
>  config ARCH_PHYS_ADDR_T_64BIT
>  	def_bool y
>  	depends on X86_64 || X86_PAE
> diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
> index 027987638e98..12205e6dfa59 100644
> --- a/arch/x86/xen/Kconfig
> +++ b/arch/x86/xen/Kconfig
> @@ -5,6 +5,7 @@
>  config XEN
>  	bool "Xen guest support"
>  	depends on PARAVIRT
> +	depends on !X86_5LEVEL

I'd rather put this under "config XEN_PV".


Juergen

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv5, REBASED 8/9] x86: Enable 5-level paging support
  2017-05-15 12:31   ` Juergen Gross
@ 2017-05-15 14:11     ` Kirill A. Shutemov
  2017-05-15 14:13       ` Juergen Gross
  0 siblings, 1 reply; 24+ messages in thread
From: Kirill A. Shutemov @ 2017-05-15 14:11 UTC (permalink / raw)
  To: Juergen Gross
  Cc: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Andi Kleen,
	Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel

On Mon, May 15, 2017 at 02:31:00PM +0200, Juergen Gross wrote:
> > diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
> > index 027987638e98..12205e6dfa59 100644
> > --- a/arch/x86/xen/Kconfig
> > +++ b/arch/x86/xen/Kconfig
> > @@ -5,6 +5,7 @@
> >  config XEN
> >  	bool "Xen guest support"
> >  	depends on PARAVIRT
> > +	depends on !X86_5LEVEL
> 
> I'd rather put this under "config XEN_PV".

Makes sense.

----------------------8<----------------------------

From 422a980c748a5b84a013258eb7c00d61edc34492 Mon Sep 17 00:00:00 2001
From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Date: Sat, 5 Nov 2016 03:24:03 +0300
Subject: [PATCHv6 8/9] x86: Enable 5-level paging support

Most things are in place, so we can enable support for 5-level paging.

The patch makes XEN_PV dependent on !X86_5LEVEL. XEN_PV is not ready to
work with 5-level paging.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 arch/x86/Kconfig     | 5 +++++
 arch/x86/xen/Kconfig | 1 +
 2 files changed, 6 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index cd18994a9555..11bd0498f64c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -318,6 +318,7 @@ config FIX_EARLYCON_MEM
 
 config PGTABLE_LEVELS
 	int
+	default 5 if X86_5LEVEL
 	default 4 if X86_64
 	default 3 if X86_PAE
 	default 2
@@ -1390,6 +1391,10 @@ config X86_PAE
 	  has the cost of more pagetable lookup overhead, and also
 	  consumes more pagetable space per process.
 
+config X86_5LEVEL
+	bool "Enable 5-level page tables support"
+	depends on X86_64
+
 config ARCH_PHYS_ADDR_T_64BIT
 	def_bool y
 	depends on X86_64 || X86_PAE
diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 027987638e98..1be9667bd476 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -17,6 +17,7 @@ config XEN_PV
 	bool "Xen PV guest support"
 	default y
 	depends on XEN
+	depends on !X86_5LEVEL
 	select XEN_HAVE_PVMMU
 	select XEN_HAVE_VPMU
 	help
-- 
 Kirill A. Shutemov

^ permalink raw reply related	[flat|nested] 24+ messages in thread
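
For reference, a sketch of the resulting configuration; the option values
below are inferred from the Kconfig changes above, not taken from a real
build:

	CONFIG_X86_64=y
	CONFIG_X86_5LEVEL=y
	# PGTABLE_LEVELS is derived from X86_5LEVEL:
	CONFIG_PGTABLE_LEVELS=5
	# XEN_PV now depends on !X86_5LEVEL:
	# CONFIG_XEN_PV is not set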

* Re: [PATCHv5, REBASED 8/9] x86: Enable 5-level paging support
  2017-05-15 14:11     ` Kirill A. Shutemov
@ 2017-05-15 14:13       ` Juergen Gross
  0 siblings, 0 replies; 24+ messages in thread
From: Juergen Gross @ 2017-05-15 14:13 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Andi Kleen,
	Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel

On 15/05/17 16:11, Kirill A. Shutemov wrote:
> On Mon, May 15, 2017 at 02:31:00PM +0200, Juergen Gross wrote:
>>> diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
>>> index 027987638e98..12205e6dfa59 100644
>>> --- a/arch/x86/xen/Kconfig
>>> +++ b/arch/x86/xen/Kconfig
>>> @@ -5,6 +5,7 @@
>>>  config XEN
>>>  	bool "Xen guest support"
>>>  	depends on PARAVIRT
>>> +	depends on !X86_5LEVEL
>>
>> I'd rather put this under "config XEN_PV".
> 
> Makes sense.
> 
> ----------------------8<----------------------------
> 
> From 422a980c748a5b84a013258eb7c00d61edc34492 Mon Sep 17 00:00:00 2001
> From: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
> Date: Sat, 5 Nov 2016 03:24:03 +0300
> Subject: [PATCHv6 8/9] x86: Enable 5-level paging support
> 
> Most things are in place, so we can enable support for 5-level paging.
> 
> The patch makes XEN_PV dependent on !X86_5LEVEL. XEN_PV is not ready to
> work with 5-level paging.
> 
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>

Xen part:
Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-15 12:12 ` [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits Kirill A. Shutemov
@ 2017-05-15 14:49   ` kbuild test robot
  2017-05-15 19:48     ` Kirill A. Shutemov
  2017-05-18 11:43   ` Michal Hocko
  1 sibling, 1 reply; 24+ messages in thread
From: kbuild test robot @ 2017-05-15 14:49 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: kbuild-all, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, Kirill A. Shutemov, linux-api

[-- Attachment #1: Type: text/plain, Size: 5255 bytes --]

Hi Kirill,

[auto build test ERROR on linus/master]
[also build test ERROR on v4.12-rc1 next-20170515]
[cannot apply to tip/x86/core xen-tip/linux-next]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Kirill-A-Shutemov/x86-5-level-paging-enabling-for-v4-12-Part-4/20170515-202736
config: i386-defconfig (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All error/warnings (new ones prefixed by >>):

   In file included from include/linux/cache.h:4:0,
                    from include/linux/printk.h:8,
                    from include/linux/kernel.h:13,
                    from mm/mmap.c:11:
   mm/mmap.c: In function 'arch_get_unmapped_area_topdown':
   arch/x86/include/asm/processor.h:878:50: error: 'TASK_SIZE_LOW' undeclared (first use in this function)
    #define TASK_UNMAPPED_BASE  __TASK_UNMAPPED_BASE(TASK_SIZE_LOW)
                                                     ^
   include/uapi/linux/kernel.h:10:41: note: in definition of macro '__ALIGN_KERNEL_MASK'
    #define __ALIGN_KERNEL_MASK(x, mask) (((x) + (mask)) & ~(mask))
                                            ^
   include/linux/kernel.h:49:22: note: in expansion of macro '__ALIGN_KERNEL'
    #define ALIGN(x, a)  __ALIGN_KERNEL((x), (a))
                         ^~~~~~~~~~~~~~
   include/linux/mm.h:132:26: note: in expansion of macro 'ALIGN'
    #define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE)
                             ^~~~~
   arch/x86/include/asm/processor.h:877:42: note: in expansion of macro 'PAGE_ALIGN'
    #define __TASK_UNMAPPED_BASE(task_size) (PAGE_ALIGN(task_size / 3))
                                             ^~~~~~~~~~
   arch/x86/include/asm/processor.h:878:29: note: in expansion of macro '__TASK_UNMAPPED_BASE'
    #define TASK_UNMAPPED_BASE  __TASK_UNMAPPED_BASE(TASK_SIZE_LOW)
                                ^~~~~~~~~~~~~~~~~~~~
>> mm/mmap.c:2043:20: note: in expansion of macro 'TASK_UNMAPPED_BASE'
      info.low_limit = TASK_UNMAPPED_BASE;
                       ^~~~~~~~~~~~~~~~~~
   arch/x86/include/asm/processor.h:878:50: note: each undeclared identifier is reported only once for each function it appears in
    #define TASK_UNMAPPED_BASE  __TASK_UNMAPPED_BASE(TASK_SIZE_LOW)
                                                     ^
   include/uapi/linux/kernel.h:10:41: note: in definition of macro '__ALIGN_KERNEL_MASK'
    #define __ALIGN_KERNEL_MASK(x, mask) (((x) + (mask)) & ~(mask))
                                            ^
   include/linux/kernel.h:49:22: note: in expansion of macro '__ALIGN_KERNEL'
    #define ALIGN(x, a)  __ALIGN_KERNEL((x), (a))
                         ^~~~~~~~~~~~~~
   include/linux/mm.h:132:26: note: in expansion of macro 'ALIGN'
    #define PAGE_ALIGN(addr) ALIGN(addr, PAGE_SIZE)
                             ^~~~~
   arch/x86/include/asm/processor.h:877:42: note: in expansion of macro 'PAGE_ALIGN'
    #define __TASK_UNMAPPED_BASE(task_size) (PAGE_ALIGN(task_size / 3))
                                             ^~~~~~~~~~
   arch/x86/include/asm/processor.h:878:29: note: in expansion of macro '__TASK_UNMAPPED_BASE'
    #define TASK_UNMAPPED_BASE  __TASK_UNMAPPED_BASE(TASK_SIZE_LOW)
                                ^~~~~~~~~~~~~~~~~~~~
>> mm/mmap.c:2043:20: note: in expansion of macro 'TASK_UNMAPPED_BASE'
      info.low_limit = TASK_UNMAPPED_BASE;
                       ^~~~~~~~~~~~~~~~~~
--
   In file included from include/linux/elf.h:4:0,
                    from include/linux/module.h:15,
                    from fs/binfmt_elf.c:12:
   fs/binfmt_elf.c: In function 'load_elf_binary':
>> arch/x86/include/asm/elf.h:253:27: error: 'TASK_SIZE_LOW' undeclared (first use in this function)
    #define ELF_ET_DYN_BASE  (TASK_SIZE_LOW / 3 * 2)
                              ^
>> fs/binfmt_elf.c:937:16: note: in expansion of macro 'ELF_ET_DYN_BASE'
       load_bias = ELF_ET_DYN_BASE - vaddr;
                   ^~~~~~~~~~~~~~~
   arch/x86/include/asm/elf.h:253:27: note: each undeclared identifier is reported only once for each function it appears in
    #define ELF_ET_DYN_BASE  (TASK_SIZE_LOW / 3 * 2)
                              ^
>> fs/binfmt_elf.c:937:16: note: in expansion of macro 'ELF_ET_DYN_BASE'
       load_bias = ELF_ET_DYN_BASE - vaddr;
                   ^~~~~~~~~~~~~~~

vim +/TASK_SIZE_LOW +253 arch/x86/include/asm/elf.h

   247	
   248	/* This is the location that an ET_DYN program is loaded if exec'ed.  Typical
   249	   use of this is to invoke "./ld.so someprog" to test out a new version of
   250	   the loader.  We need to make sure that it is out of the way of the program
   251	   that it will "exec", and that there is sufficient room for the brk.  */
   252	
 > 253	#define ELF_ET_DYN_BASE		(TASK_SIZE_LOW / 3 * 2)
   254	
   255	/* This yields a mask that user programs can use to figure out what
   256	   instruction set this CPU supports.  This could be done in user space,

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 26192 bytes --]

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-15 14:49   ` kbuild test robot
@ 2017-05-15 19:48     ` Kirill A. Shutemov
  0 siblings, 0 replies; 24+ messages in thread
From: Kirill A. Shutemov @ 2017-05-15 19:48 UTC (permalink / raw)
  To: kbuild test robot
  Cc: kbuild-all, x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin,
	Andi Kleen, Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, linux-api

On Mon, May 15, 2017 at 10:49:43PM +0800, kbuild test robot wrote:
> Hi Kirill,
> 
> [auto build test ERROR on linus/master]
> [also build test ERROR on v4.12-rc1 next-20170515]
> [cannot apply to tip/x86/core xen-tip/linux-next]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> 
> url:    https://github.com/0day-ci/linux/commits/Kirill-A-Shutemov/x86-5-level-paging-enabling-for-v4-12-Part-4/20170515-202736
> config: i386-defconfig (attached as .config)
> compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
> reproduce:
>         # save the attached .config to linux build tree
>         make ARCH=i386 
> 
> All error/warnings (new ones prefixed by >>):
> 
>    In file included from include/linux/cache.h:4:0,
>                     from include/linux/printk.h:8,
>                     from include/linux/kernel.h:13,
>                     from mm/mmap.c:11:
>    mm/mmap.c: In function 'arch_get_unmapped_area_topdown':
>    arch/x86/include/asm/processor.h:878:50: error: 'TASK_SIZE_LOW' undeclared (first use in this function)
>     #define TASK_UNMAPPED_BASE  __TASK_UNMAPPED_BASE(TASK_SIZE_LOW)

Thanks. Fixup is below.

Let me know if I need to send the full patch:

diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index aaed58b03ddb..65663de9287b 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -794,6 +794,7 @@ static inline void spin_lock_prefetch(const void *x)
  */
 #define IA32_PAGE_OFFSET	PAGE_OFFSET
 #define TASK_SIZE		PAGE_OFFSET
+#define TASK_SIZE_LOW		TASK_SIZE
 #define TASK_SIZE_MAX		TASK_SIZE
 #define DEFAULT_MAP_WINDOW	TASK_SIZE
 #define STACK_TOP		TASK_SIZE
-- 
 Kirill A. Shutemov

^ permalink raw reply related	[flat|nested] 24+ messages in thread
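
With the fixup applied, the 32-bit definitions all collapse to the same
value, so generic users such as TASK_UNMAPPED_BASE and ELF_ET_DYN_BASE
resolve again. The resulting block (reconstructed from the diff above)
reads:

	/* 32-bit: a single flat limit, no separate low window */
	#define IA32_PAGE_OFFSET	PAGE_OFFSET
	#define TASK_SIZE		PAGE_OFFSET
	#define TASK_SIZE_LOW		TASK_SIZE
	#define TASK_SIZE_MAX		TASK_SIZE
	#define DEFAULT_MAP_WINDOW	TASK_SIZE
	#define STACK_TOP		TASK_SIZE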

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-15 12:12 ` [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits Kirill A. Shutemov
  2017-05-15 14:49   ` kbuild test robot
@ 2017-05-18 11:43   ` Michal Hocko
  2017-05-18 15:19     ` Kirill A. Shutemov
  1 sibling, 1 reply; 24+ messages in thread
From: Michal Hocko @ 2017-05-18 11:43 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: x86, Thomas Gleixner, Ingo Molnar, H. Peter Anvin, Andi Kleen,
	Dave Hansen, Andy Lutomirski, Dan Williams, linux-mm,
	linux-kernel, linux-api

On Mon 15-05-17 15:12:18, Kirill A. Shutemov wrote:
[...]
> @@ -195,6 +207,16 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
>  	info.length = len;
>  	info.low_limit = PAGE_SIZE;
>  	info.high_limit = get_mmap_base(0);
> +
> +	/*
> +	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
> +	 * in the full address space.
> +	 *
> +	 * !in_compat_syscall() check to avoid high addresses for x32.
> +	 */
> +	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
> +		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
> +
>  	info.align_mask = 0;
>  	info.align_offset = pgoff << PAGE_SHIFT;
>  	if (filp) {

I have two questions/concerns here. The above assumes that any address above
1<<47 will use the _whole_ address space. Is this what we want? What
if somebody does mmap(1<<52, ...) because he wants to (ab)use 53+ bits
for some other purpose? Shouldn't we cap the high_limit by the given
address?

Another thing would be that 
	/* requesting a specific address */
	if (addr) {
		addr = PAGE_ALIGN(addr);
		vma = find_vma(mm, addr);
		if (TASK_SIZE - len >= addr &&
				(!vma || addr + len <= vma->vm_start))
			return addr;
	}

would fail for mmap(-1UL, ...), which is good because we do want to
fall back to vm_unmapped_area and get a randomized address, which is
ensured by your info.high_limit += ...; but that wouldn't work for
mmap(1<<N, ...) where N>47. So the first such mapping won't be
randomized while others will be. This is quite unexpected, I would say.
So it should be documented at least or maybe we want to skip the above
shortcut for addr > DEFAULT_MAP_WINDOW altogether.

The patch looks sensible other than that.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-18 11:43   ` Michal Hocko
@ 2017-05-18 15:19     ` Kirill A. Shutemov
  2017-05-18 15:27       ` Michal Hocko
  0 siblings, 1 reply; 24+ messages in thread
From: Kirill A. Shutemov @ 2017-05-18 15:19 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Kirill A. Shutemov, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Andi Kleen, Dave Hansen, Andy Lutomirski,
	Dan Williams, linux-mm, linux-kernel, linux-api

On Thu, May 18, 2017 at 01:43:59PM +0200, Michal Hocko wrote:
> On Mon 15-05-17 15:12:18, Kirill A. Shutemov wrote:
> [...]
> > @@ -195,6 +207,16 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
> >  	info.length = len;
> >  	info.low_limit = PAGE_SIZE;
> >  	info.high_limit = get_mmap_base(0);
> > +
> > +	/*
> > +	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
> > +	 * in the full address space.
> > +	 *
> > +	 * !in_compat_syscall() check to avoid high addresses for x32.
> > +	 */
> > +	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
> > +		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
> > +
> >  	info.align_mask = 0;
> >  	info.align_offset = pgoff << PAGE_SHIFT;
> >  	if (filp) {
> 
> I have two questions/concerns here. The above assumes that any address above
> 1<<47 will use the _whole_ address space. Is this what we want?

Yes, I believe so.

> What if somebody does mmap(1<<52, ...) because he wants to (ab)use 53+
> bits for some other purpose? Shouldn't we cap the high_limit by the
> given address?

This would screw the existing semantics of the hint address -- "map here
if free, please".

> Another thing would be that 
> 	/* requesting a specific address */
> 	if (addr) {
> 		addr = PAGE_ALIGN(addr);
> 		vma = find_vma(mm, addr);
> 		if (TASK_SIZE - len >= addr &&
> 				(!vma || addr + len <= vma->vm_start))
> 			return addr;
> 	}
> 
> would fail for mmap(-1UL, ...) which is good because we do want to
> fallback to vm_unmapped_area and have randomized address which is
> ensured by your info.high_limit += ... but that wouldn't work for
> mmap(1<<N, ...) where N>47. So the first such mapping won't be
> randomized while others will be. This is quite unexpected I would say.
> So it should be documented at least or maybe we want to skip the above
> shortcut for addr > DEFAULT_MAP_WINDOW altogether.

Again, you're missing the existing semantics of the hint address. You may
have a reason to set the hint address above 47 bits besides getting
access to the full address space.

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 24+ messages in thread
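
For context, the long-standing behaviour being referred to can be
sketched as follows (hypothetical helper, illustrative only): without
MAP_FIXED, the kernel returns the hinted address when that range is free
and silently picks another spot otherwise.

	#define _GNU_SOURCE
	#include <stddef.h>
	#include <sys/mman.h>

	/* "Map here if free, please" */
	static void *map_at_hint(void *hint, size_t len)
	{
		void *p = mmap(hint, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		/* p == hint: the requested range was free.
		 * p != hint: range busy; the kernel chose another address. */
		return p == MAP_FAILED ? NULL : p;
	}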

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-18 15:19     ` Kirill A. Shutemov
@ 2017-05-18 15:27       ` Michal Hocko
  2017-05-18 15:41         ` Kirill A. Shutemov
  0 siblings, 1 reply; 24+ messages in thread
From: Michal Hocko @ 2017-05-18 15:27 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Kirill A. Shutemov, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Andi Kleen, Dave Hansen, Andy Lutomirski,
	Dan Williams, linux-mm, linux-kernel, linux-api

On Thu 18-05-17 18:19:52, Kirill A. Shutemov wrote:
> On Thu, May 18, 2017 at 01:43:59PM +0200, Michal Hocko wrote:
> > On Mon 15-05-17 15:12:18, Kirill A. Shutemov wrote:
> > [...]
> > > @@ -195,6 +207,16 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
> > >  	info.length = len;
> > >  	info.low_limit = PAGE_SIZE;
> > >  	info.high_limit = get_mmap_base(0);
> > > +
> > > +	/*
> > > +	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
> > > +	 * in the full address space.
> > > +	 *
> > > +	 * !in_compat_syscall() check to avoid high addresses for x32.
> > > +	 */
> > > +	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
> > > +		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
> > > +
> > >  	info.align_mask = 0;
> > >  	info.align_offset = pgoff << PAGE_SHIFT;
> > >  	if (filp) {
> > 
> > I have two questions/concerns here. The above assumes that any address above
> > 1<<47 will use the _whole_ address space. Is this what we want?
> 
> Yes, I believe so.
> 
> > What if somebody does mmap(1<<52, ...) because he wants to (ab)use 53+
> > bits for some other purpose? Shouldn't we cap the high_limit by the
> > given address?
> 
> This would screw the existing semantics of the hint address -- "map here
> if free, please".

Well, the given address is just a _hint_. We are still allowed to map to a
different place. And it is not specified whether the resulting mapping
is above or below that address. So I do not think it would screw the
existing semantic. Or do I miss something?

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-18 15:27       ` Michal Hocko
@ 2017-05-18 15:41         ` Kirill A. Shutemov
  2017-05-18 15:50           ` Michal Hocko
  0 siblings, 1 reply; 24+ messages in thread
From: Kirill A. Shutemov @ 2017-05-18 15:41 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Kirill A. Shutemov, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Andi Kleen, Dave Hansen, Andy Lutomirski,
	Dan Williams, linux-mm, linux-kernel, linux-api

On Thu, May 18, 2017 at 05:27:36PM +0200, Michal Hocko wrote:
> On Thu 18-05-17 18:19:52, Kirill A. Shutemov wrote:
> > On Thu, May 18, 2017 at 01:43:59PM +0200, Michal Hocko wrote:
> > > On Mon 15-05-17 15:12:18, Kirill A. Shutemov wrote:
> > > [...]
> > > > @@ -195,6 +207,16 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
> > > >  	info.length = len;
> > > >  	info.low_limit = PAGE_SIZE;
> > > >  	info.high_limit = get_mmap_base(0);
> > > > +
> > > > +	/*
> > > > +	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
> > > > +	 * in the full address space.
> > > > +	 *
> > > > +	 * !in_compat_syscall() check to avoid high addresses for x32.
> > > > +	 */
> > > > +	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
> > > > +		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
> > > > +
> > > >  	info.align_mask = 0;
> > > >  	info.align_offset = pgoff << PAGE_SHIFT;
> > > >  	if (filp) {
> > > 
> > > I have two questions/concerns here. The above assumes that any address above
> > > 1<<47 will use the _whole_ address space. Is this what we want?
> > 
> > Yes, I believe so.
> > 
> > > What if somebody does mmap(1<<52, ...) because he wants to (ab)use 53+
> > > bits for some other purpose? Shouldn't we cap the high_limit by the
> > > given address?
> > 
> > This would screw the existing semantics of the hint address -- "map here
> > if free, please".
> 
> Well, the given address is just _hint_. We are still allowed to map to a
> different place. And it is not specified whether the resulting mapping
> is above or below that address. So I do not think it would screw the
> existing semantic. Or do I miss something?

You are right that this behaviour is not fixed by any standard or written
down in documentation, but it has been the de-facto policy of Linux
mmap(2) from the beginning.

And we need to be very careful when messing with this.

I believe that qemu linux-user to some extent relies on this behaviour to
do 32-bit allocations on a 64-bit machine.

https://github.com/qemu/qemu/blob/master/linux-user/mmap.c#L256

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 24+ messages in thread
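
A simplified sketch of the pattern in question (hypothetical code,
loosely modelled on the linked linux-user logic, not copied from it): the
emulator probes candidate addresses below 4GB with hints and accepts one
only when the kernel honours the hint.

	#define _GNU_SOURCE
	#include <stddef.h>
	#include <sys/mman.h>

	/* Find a free block below 4GB for a 32-bit guest. */
	static void *find_low_block(size_t len)
	{
		unsigned long cand;

		for (cand = 0x10000; cand + len <= (1UL << 32); cand += len) {
			void *p = mmap((void *)cand, len, PROT_NONE,
				       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
			if (p == MAP_FAILED)
				return NULL;
			if ((unsigned long)p == cand)
				return p;	/* hint honoured */
			munmap(p, len);		/* range busy, try the next one */
		}
		return NULL;
	}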

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-18 15:41         ` Kirill A. Shutemov
@ 2017-05-18 15:50           ` Michal Hocko
  2017-05-18 15:59             ` Michal Hocko
  0 siblings, 1 reply; 24+ messages in thread
From: Michal Hocko @ 2017-05-18 15:50 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Kirill A. Shutemov, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Andi Kleen, Dave Hansen, Andy Lutomirski,
	Dan Williams, linux-mm, linux-kernel, linux-api

On Thu 18-05-17 18:41:35, Kirill A. Shutemov wrote:
> On Thu, May 18, 2017 at 05:27:36PM +0200, Michal Hocko wrote:
> > On Thu 18-05-17 18:19:52, Kirill A. Shutemov wrote:
> > > On Thu, May 18, 2017 at 01:43:59PM +0200, Michal Hocko wrote:
> > > > On Mon 15-05-17 15:12:18, Kirill A. Shutemov wrote:
> > > > [...]
> > > > > @@ -195,6 +207,16 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
> > > > >  	info.length = len;
> > > > >  	info.low_limit = PAGE_SIZE;
> > > > >  	info.high_limit = get_mmap_base(0);
> > > > > +
> > > > > +	/*
> > > > > +	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
> > > > > +	 * in the full address space.
> > > > > +	 *
> > > > > +	 * !in_compat_syscall() check to avoid high addresses for x32.
> > > > > +	 */
> > > > > +	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
> > > > > +		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
> > > > > +
> > > > >  	info.align_mask = 0;
> > > > >  	info.align_offset = pgoff << PAGE_SHIFT;
> > > > >  	if (filp) {
> > > > 
> > > > I have two questions/concerns here. The above assumes that any address above
> > > > 1<<47 will use the _whole_ address space. Is this what we want?
> > > 
> > > Yes, I believe so.
> > > 
> > > > What if somebody does mmap(1<<52, ...) because he wants to (ab)use 53+
> > > > bits for some other purpose? Shouldn't we cap the high_limit by the
> > > > given address?
> > > 
> > > This would screw the existing semantics of the hint address -- "map here
> > > if free, please".
> > 
> > Well, the given address is just a _hint_. We are still allowed to map to a
> > different place. And it is not specified whether the resulting mapping
> > is above or below that address. So I do not think it would screw the
> > existing semantic. Or do I miss something?
> 
> You are right that this behaviour is not fixed by any standard or written
> down in documentation, but it has been the de-facto policy of Linux
> mmap(2) from the beginning.
> 
> And we need to be very careful when messing with this.

I am sorry but I still do not understand. You are already touching this
semantic: mmap(-1UL, ...) already returns a basically arbitrary
address. All I am asking for is that mmap doesn't return a higher address
than the given one when the address is > 1<<47. We do not have any such
users currently, so it wouldn't be a change in behavior, while it would
allow differently sized address spaces naturally.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-18 15:50           ` Michal Hocko
@ 2017-05-18 15:59             ` Michal Hocko
  2017-05-18 16:22               ` Kirill A. Shutemov
  0 siblings, 1 reply; 24+ messages in thread
From: Michal Hocko @ 2017-05-18 15:59 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Kirill A. Shutemov, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Andi Kleen, Dave Hansen, Andy Lutomirski,
	Dan Williams, linux-mm, linux-kernel, linux-api

On Thu 18-05-17 17:50:03, Michal Hocko wrote:
> On Thu 18-05-17 18:41:35, Kirill A. Shutemov wrote:
> > On Thu, May 18, 2017 at 05:27:36PM +0200, Michal Hocko wrote:
> > > On Thu 18-05-17 18:19:52, Kirill A. Shutemov wrote:
> > > > On Thu, May 18, 2017 at 01:43:59PM +0200, Michal Hocko wrote:
> > > > > On Mon 15-05-17 15:12:18, Kirill A. Shutemov wrote:
> > > > > [...]
> > > > > > @@ -195,6 +207,16 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
> > > > > >  	info.length = len;
> > > > > >  	info.low_limit = PAGE_SIZE;
> > > > > >  	info.high_limit = get_mmap_base(0);
> > > > > > +
> > > > > > +	/*
> > > > > > +	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
> > > > > > +	 * in the full address space.
> > > > > > +	 *
> > > > > > +	 * !in_compat_syscall() check to avoid high addresses for x32.
> > > > > > +	 */
> > > > > > +	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
> > > > > > +		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
> > > > > > +
> > > > > >  	info.align_mask = 0;
> > > > > >  	info.align_offset = pgoff << PAGE_SHIFT;
> > > > > >  	if (filp) {
> > > > > 
> > > > > I have two questions/concerns here. The above assumes that any address above
> > > > > 1<<47 will use the _whole_ address space. Is this what we want?
> > > > 
> > > > Yes, I believe so.
> > > > 
> > > > > What if somebody does mmap(1<<52, ...) because he wants to (ab)use 53+
> > > > > bits for some other purpose? Shouldn't we cap the high_limit by the
> > > > > given address?
> > > > 
> > > > This would screw the existing semantics of the hint address -- "map here
> > > > if free, please".
> > > 
> > > Well, the given address is just a _hint_. We are still allowed to map to a
> > > different place. And it is not specified whether the resulting mapping
> > > is above or below that address. So I do not think it would screw the
> > > existing semantic. Or do I miss something?
> > 
> > You are right that this behaviour is not fixed by any standard or written
> > down in documentation, but it has been the de-facto policy of Linux
> > mmap(2) from the beginning.
> > 
> > And we need to be very careful when messing with this.
> 
> I am sorry but I still do not understand. You are already touching this
> semantic: mmap(-1UL, ...) already returns a basically arbitrary
> address. All I am asking for is that mmap doesn't return a higher address
> than the given one when the address is > 1<<47. We do not have any such
> users currently, so it wouldn't be a change in behavior, while it would
> allow differently sized address spaces naturally.

I basically mean something like the following
---
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 74d1587b181d..d6f66ff02d0a 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -195,7 +195,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 		goto bottomup;
 
 	/* requesting a specific address */
-	if (addr) {
+	if (addr && addr <= DEFAULT_MAP_WINDOW) {
 		addr = PAGE_ALIGN(addr);
 		vma = find_vma(mm, addr);
 		if (TASK_SIZE - len >= addr &&
@@ -215,7 +215,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	 * !in_compat_syscall() check to avoid high addresses for x32.
 	 */
 	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
-		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
+		info.high_limit += min(TASK_SIZE_MAX, addr) - DEFAULT_MAP_WINDOW;
 
 	info.align_mask = 0;
 	info.align_offset = pgoff << PAGE_SHIFT;
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-18 15:59             ` Michal Hocko
@ 2017-05-18 16:22               ` Kirill A. Shutemov
  2017-05-18 17:13                 ` Michal Hocko
  0 siblings, 1 reply; 24+ messages in thread
From: Kirill A. Shutemov @ 2017-05-18 16:22 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Kirill A. Shutemov, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Andi Kleen, Dave Hansen, Andy Lutomirski,
	Dan Williams, linux-mm, linux-kernel, linux-api

On Thu, May 18, 2017 at 05:59:14PM +0200, Michal Hocko wrote:
> On Thu 18-05-17 17:50:03, Michal Hocko wrote:
> > On Thu 18-05-17 18:41:35, Kirill A. Shutemov wrote:
> > > On Thu, May 18, 2017 at 05:27:36PM +0200, Michal Hocko wrote:
> > > > On Thu 18-05-17 18:19:52, Kirill A. Shutemov wrote:
> > > > > On Thu, May 18, 2017 at 01:43:59PM +0200, Michal Hocko wrote:
> > > > > > On Mon 15-05-17 15:12:18, Kirill A. Shutemov wrote:
> > > > > > [...]
> > > > > > > @@ -195,6 +207,16 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
> > > > > > >  	info.length = len;
> > > > > > >  	info.low_limit = PAGE_SIZE;
> > > > > > >  	info.high_limit = get_mmap_base(0);
> > > > > > > +
> > > > > > > +	/*
> > > > > > > +	 * If hint address is above DEFAULT_MAP_WINDOW, look for unmapped area
> > > > > > > +	 * in the full address space.
> > > > > > > +	 *
> > > > > > > +	 * !in_compat_syscall() check to avoid high addresses for x32.
> > > > > > > +	 */
> > > > > > > +	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
> > > > > > > +		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
> > > > > > > +
> > > > > > >  	info.align_mask = 0;
> > > > > > >  	info.align_offset = pgoff << PAGE_SHIFT;
> > > > > > >  	if (filp) {
> > > > > > 
> > > > > > I have two questions/concerns here. The above assumes that any address above
> > > > > > 1<<47 will use the _whole_ address space. Is this what we want?
> > > > > 
> > > > > Yes, I believe so.
> > > > > 
> > > > > > What if somebody does mmap(1<<52, ...) because he wants to (ab)use 53+
> > > > > > bits for some other purpose? Shouldn't we cap the high_limit by the
> > > > > > given address?
> > > > > 
> > > > > This would screw the existing semantics of the hint address -- "map here
> > > > > if free, please".
> > > > 
> > > > Well, the given address is just a _hint_. We are still allowed to map to a
> > > > different place. And it is not specified whether the resulting mapping
> > > > is above or below that address. So I do not think it would screw the
> > > > existing semantic. Or do I miss something?
> > > 
> > > You are right that this behaviour is not fixed by any standard or written
> > > down in documentation, but it has been the de-facto policy of Linux
> > > mmap(2) from the beginning.
> > > 
> > > And we need to be very careful when messing with this.
> > 
> > I am sorry but I still do not understand. You are already touching this
> > semantic: mmap(-1UL, ...) already returns a basically arbitrary
> > address. All I am asking for is that mmap doesn't return a higher address
> > than the given one when the address is > 1<<47. We do not have any such
> > users currently, so it wouldn't be a change in behavior, while it would
> > allow differently sized address spaces naturally.
> 
> I basically mean something like the following
> ---
> diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
> index 74d1587b181d..d6f66ff02d0a 100644
> --- a/arch/x86/kernel/sys_x86_64.c
> +++ b/arch/x86/kernel/sys_x86_64.c
> @@ -195,7 +195,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
>  		goto bottomup;
>  
>  	/* requesting a specific address */
> -	if (addr) {
> +	if (addr && addr <= DEFAULT_MAP_WINDOW) {
>  		addr = PAGE_ALIGN(addr);
>  		vma = find_vma(mm, addr);
>  		if (TASK_SIZE - len >= addr &&
> @@ -215,7 +215,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
>  	 * !in_compat_syscall() check to avoid high addresses for x32.
>  	 */
>  	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
> -		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
> +		info.high_limit += min(TASK_SIZE_MAX, addr) - DEFAULT_MAP_WINDOW;
>  
>  	info.align_mask = 0;
>  	info.align_offset = pgoff << PAGE_SHIFT;

You are trying to stretch the interface too far. With the patch you
propose we get totally different behaviour wrt the hint address depending
on whether it is below or above 47 bits:

 * <= 47-bits: allocate VM [addr; addr + len - 1], if free;
 * > 47-bits: allocate VM anywhere under addr;

Sorry, no. That's ugly.

If you feel that we need to guarantee that bits above a certain limit are
unused, introduce a new interface. We have enough logic encoded in the
hint address already.

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 24+ messages in thread
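
Spelled out, the objection is that the proposed cap would give the hint
two different meanings depending on its value. A sketch of the outcomes
under the two schemes (illustrative, derived from the discussion above,
not measured behaviour):

	#define _GNU_SOURCE
	#include <stddef.h>
	#include <sys/mman.h>

	static void *anon(void *hint, size_t len)
	{
		return mmap(hint, len, PROT_READ | PROT_WRITE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	}

	/*
	 * Original patch, one rule for any hint:
	 *   anon((void *)(1UL << 46), len): at the hint if free, else
	 *     somewhere in the default 47-bit window;
	 *   anon((void *)(1UL << 52), len): at the hint if free, else
	 *     anywhere up to TASK_SIZE_MAX.
	 *
	 * Proposed cap, where the rule flips above 47 bits:
	 *   anon((void *)(1UL << 52), len): never placed at the hint;
	 *     the kernel searches for a slot below the hinted address.
	 */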

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-18 16:22               ` Kirill A. Shutemov
@ 2017-05-18 17:13                 ` Michal Hocko
  2017-05-18 17:51                   ` Michal Hocko
  0 siblings, 1 reply; 24+ messages in thread
From: Michal Hocko @ 2017-05-18 17:13 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Kirill A. Shutemov, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Andi Kleen, Dave Hansen, Andy Lutomirski,
	Dan Williams, linux-mm, linux-kernel, linux-api

On Thu 18-05-17 19:22:55, Kirill A. Shutemov wrote:
> On Thu, May 18, 2017 at 05:59:14PM +0200, Michal Hocko wrote:
[...]
> > I basically mean something like the following
> > ---
> > diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
> > index 74d1587b181d..d6f66ff02d0a 100644
> > --- a/arch/x86/kernel/sys_x86_64.c
> > +++ b/arch/x86/kernel/sys_x86_64.c
> > @@ -195,7 +195,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
> >  		goto bottomup;
> >  
> >  	/* requesting a specific address */
> > -	if (addr) {
> > +	if (addr && addr <= DEFAULT_MAP_WINDOW) {
> >  		addr = PAGE_ALIGN(addr);
> >  		vma = find_vma(mm, addr);
> >  		if (TASK_SIZE - len >= addr &&
> > @@ -215,7 +215,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
> >  	 * !in_compat_syscall() check to avoid high addresses for x32.
> >  	 */
> >  	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
> > -		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
> > +		info.high_limit += min(TASK_SIZE_MAX, addr) - DEFAULT_MAP_WINDOW;
> >  
> >  	info.align_mask = 0;
> >  	info.align_offset = pgoff << PAGE_SHIFT;
> 
> You are trying to stretch the interface too far. With the patch you
> propose we get totally different behaviour wrt the hint address depending
> on whether it is below or above 47 bits:
> 
>  * <= 47-bits: allocate VM [addr; addr + len - 1], if free;

Unless I am missing something fundamental here, this is not how it works.
We just map a different range if the requested one is not free (in the
absence of MAP_FIXED). And we do that in top->down direction, so this is
already how it works. And you _do_ rely on the same thing when allowing
larger than 47b, except you start from the top of the supported address
space. So how exactly is your new behavior any different and clearer?

Say you do
	mmap(1<<48, ...) # you will get 1<<48
	mmap(1<<48, ...) # you will get something below TASK_SIZE_MAX

>  * > 47-bits: allocate VM anywhere under addr;
> 
> Sorry, no. That's ugly.
> 
> If you feel that we need to guarantee that bits above a certain limit are
> unused, introduce a new interface. We have enough logic encoded in the
> hint address already.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits
  2017-05-18 17:13                 ` Michal Hocko
@ 2017-05-18 17:51                   ` Michal Hocko
  0 siblings, 0 replies; 24+ messages in thread
From: Michal Hocko @ 2017-05-18 17:51 UTC (permalink / raw)
  To: Kirill A. Shutemov
  Cc: Kirill A. Shutemov, x86, Thomas Gleixner, Ingo Molnar,
	H. Peter Anvin, Andi Kleen, Dave Hansen, Andy Lutomirski,
	Dan Williams, linux-mm, linux-kernel, linux-api

On Thu 18-05-17 19:13:30, Michal Hocko wrote:
> On Thu 18-05-17 19:22:55, Kirill A. Shutemov wrote:
> > On Thu, May 18, 2017 at 05:59:14PM +0200, Michal Hocko wrote:
> [...]
> > > I basically mean something like the following
> > > ---
> > > diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
> > > index 74d1587b181d..d6f66ff02d0a 100644
> > > --- a/arch/x86/kernel/sys_x86_64.c
> > > +++ b/arch/x86/kernel/sys_x86_64.c
> > > @@ -195,7 +195,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
> > >  		goto bottomup;
> > >  
> > >  	/* requesting a specific address */
> > > -	if (addr) {
> > > +	if (addr && addr <= DEFAULT_MAP_WINDOW) {
> > >  		addr = PAGE_ALIGN(addr);
> > >  		vma = find_vma(mm, addr);
> > >  		if (TASK_SIZE - len >= addr &&
> > > @@ -215,7 +215,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
> > >  	 * !in_compat_syscall() check to avoid high addresses for x32.
> > >  	 */
> > >  	if (addr > DEFAULT_MAP_WINDOW && !in_compat_syscall())
> > > -		info.high_limit += TASK_SIZE_MAX - DEFAULT_MAP_WINDOW;
> > > +		info.high_limit += min(TASK_SIZE_MAX, addr) - DEFAULT_MAP_WINDOW;
> > >  
> > >  	info.align_mask = 0;
> > >  	info.align_offset = pgoff << PAGE_SHIFT;
> > 
> > You are trying to stretch the interface too far. With the patch you
> > propose we get totally different behaviour wrt the hint address depending
> > on whether it is below or above 47 bits:
> > 
> >  * <= 47-bits: allocate VM [addr; addr + len - 1], if free;
> 
> Unless I am missing something fundamental here, this is not how it works.
> We just map a different range if the requested one is not free (in the
> absence of MAP_FIXED). And we do that in top->down direction, so this is
> already how it works. And you _do_ rely on the same thing when allowing
> larger than 47b, except you start from the top of the supported address
> space. So how exactly is your new behavior any different and clearer?

OK, I take that back because I am clearly wrong. We simply always start
from the top. Sorry about my confusion.

Feel free to add
Acked-by: Michal Hocko <mhocko@suse.com>
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2017-05-18 17:52 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-05-15 12:12 [PATCHv5, REBASED 0/9] x86: 5-level paging enabling for v4.12, Part 4 Kirill A. Shutemov
2017-05-15 12:12 ` [PATCHv5, REBASED 1/9] x86/asm: Fix comment in return_from_SYSCALL_64 Kirill A. Shutemov
2017-05-15 12:12 ` [PATCHv5, REBASED 2/9] x86/boot/64: Rewrite startup_64 in C Kirill A. Shutemov
2017-05-15 12:12 ` [PATCHv5, REBASED 3/9] x86/boot/64: Rename init_level4_pgt and early_level4_pgt Kirill A. Shutemov
2017-05-15 12:12 ` [PATCHv5, REBASED 4/9] x86/boot/64: Add support of additional page table level during early boot Kirill A. Shutemov
2017-05-15 12:12 ` [PATCHv5, REBASED 5/9] x86/mm: Add sync_global_pgds() for configuration with 5-level paging Kirill A. Shutemov
2017-05-15 12:12 ` [PATCHv5, REBASED 6/9] x86/mm: Make kernel_physical_mapping_init() support " Kirill A. Shutemov
2017-05-15 12:12 ` [PATCHv5, REBASED 7/9] x86/mm: Add support for 5-level paging for KASLR Kirill A. Shutemov
2017-05-15 12:12 ` [PATCHv5, REBASED 8/9] x86: Enable 5-level paging support Kirill A. Shutemov
2017-05-15 12:31   ` Juergen Gross
2017-05-15 14:11     ` Kirill A. Shutemov
2017-05-15 14:13       ` Juergen Gross
2017-05-15 12:12 ` [PATCHv5, REBASED 9/9] x86/mm: Allow to have userspace mappings above 47-bits Kirill A. Shutemov
2017-05-15 14:49   ` kbuild test robot
2017-05-15 19:48     ` Kirill A. Shutemov
2017-05-18 11:43   ` Michal Hocko
2017-05-18 15:19     ` Kirill A. Shutemov
2017-05-18 15:27       ` Michal Hocko
2017-05-18 15:41         ` Kirill A. Shutemov
2017-05-18 15:50           ` Michal Hocko
2017-05-18 15:59             ` Michal Hocko
2017-05-18 16:22               ` Kirill A. Shutemov
2017-05-18 17:13                 ` Michal Hocko
2017-05-18 17:51                   ` Michal Hocko
