* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
@ 2016-02-16 12:52 Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 01/11] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
                   ` (11 more replies)
  0 siblings, 12 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

At the request of Catalin, this series has been split off from my series
'arm64: implement support for KASLR v4' [1]. This sub-series deals with
moving the kernel out of the linear mapping into the vmalloc area. This
is a prerequisite for independent physical and virtual randomization of
the kernel image. On top of that, considering that these changes allow
the linear mapping of RAM to start at an arbitrary offset above PAGE_OFFSET,
they should be an improvement in themselves, since we can now choose
__pa(PAGE_OFFSET) such that RAM can be mapped using large block sizes.

For instance, on my Seattle A0 box, the kernel is loaded 16 MB into the
lowest GB of RAM, which means __pa(PAGE_OFFSET) is not 1 GB aligned, and
the entire 16 GB of RAM will be mapped using 2 MB blocks. (Similarly,
for 64 KB granule kernels, the entire 16 GB of RAM will be mapped using
pages since __pa(PAGE_OFFSET) is not 512 MB aligned). With these changes
 __pa(PAGE_OFFSET) will always be chosen such that it is aligned to a
quantity that allows efficient mapping.

Note that of the entire KASLR series, this sub-series is the most likely to
cause problems, and hence requires the most careful review and testing. This
is due to the fact that, with these changes, the invariant __va(__pa(x)) == x
no longer holds for kernel image addresses (it still holds for addresses that
were obtained from the linear mapping), and any code that is based on that
assumption needs to be updated.
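
To illustrate, here is a purely hypothetical sketch (not taken from any of
the patches; the function name and the WARN_ON are made up) of the kind of
assumption that breaks once the kernel image leaves the linear mapping:

  static void __init va_pa_roundtrip_example(void)
  {
  	void *sym   = _stext;			/* kernel image address      */
  	void *alias = __va(__pa(sym));		/* linear alias of same page */

  	/* this used to be a tautology; with this series it now fires */
  	WARN_ON(alias != sym);
  }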

The complete series can be found here:
https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-kaslr-v6

Changes since v5 [2]:
- fixed an initrd issue, where __va() was called by the generic early FDT code
  before memstart_addr is assigned (#2, #10)
- add patch #3 to fix a circular header dependency so we can use BUG_ON in
  asm/memory.h
- BUG() if __va() is called before memstart_addr is assigned (#11)
- fix a (transient) issue with GCC 4.9 or older in the KVM ksym ref patch  (#8)
  that affects bisectability (the actual issue surfaces after applying patch #9
  and disappears again with patch #11)
- fix a KASAN problem in the previous version of patch #9, where the shadow
  region of the module area was inadvertently populated with KASAN zero pages;
  this fixes at least one of the reported KASAN related issues with this series
- folded __maybe_unused change into patch #9
- folded in the change that memblock_add()s back the entire kernel image,
  instead of only the init and data segments, after clipping for mem=
- map the linear alias of [_stext, _etext] as read-only/non-executable so its
  contents are visible to subsystems like hibernate in the expected place (#9)
- folded in the patch that removed clip_memory_range() and reverted to the
  generic memblock_enforce_memory_limit() (#11)

Changes since v4:
- added Marc's ack to patch #6
- round the kasan zero shadow region around the kernel image to swapper block
  size (#7)
- ensure that we don't clip the kernel image when clipping RAM to the linear
  region size (#8)

Patch #1 allows the low mark of memblocks discovered from the FDT to be
overridden by the architecture.

Patch #2 allows the assignment of initrd_start and initrd_end in generic early
FDT code to be overridden by architecture code.

Patch #3 reverses the #include dependency between asm/bug.h and another
header file so that asm/bug.h can be included (and used) in asm/memory.h.

Patch #4 enables the generic huge-vmap feature for arm64. This should be an
improvement in itself, but the significance for this series is that it allows
unmap_kernel_range() to be called on the [__init_begin, __init_end) region,
which may be partially mapped using block mappings.

Patch #5 introduces KIMAGE_VADDR as a separate, preparatory step towards
decoupling the kernel placement from PAGE_OFFSET.

Patch #6 implements some translation table accessors that operate on statically
allocated translation tables before the linear mapping is up.

Patch #7 decouples the fixmap initialization from the linear mapping, by using
the accessors implemented by patch #6.

Patch #8 removes assumptions made by KVM regarding the placement of the kernel
image inside the linear mapping.

Patch #9 moves the kernel image from the base of the linear mapping to the base
of the vmalloc area. The modules area, which sits right below the kernel image,
is moved along and is put right before the start of the vmalloc area.

Patch #10 defers the __va translation of the initrd to after the assignment of
memstart_addr.

Patch #11 decouples PHYS_OFFSET from PAGE_OFFSET, which allows the linear
mapping to cover all discovered memory, regardless of where the kernel image is
located in it. This effectively allows the kernel to be loaded at any physical
address (provided that the correct alignment is used).

[1] http://thread.gmane.org/gmane.linux.kernel/2135931
[2] http://thread.gmane.org/gmane.linux.ports.arm.kernel/473894

Ard Biesheuvel (11):
  of/fdt: make memblock minimum physical address arch configurable
  of/fdt: factor out assignment of initrd_start/initrd_end
  arm64: prevent potential circular header dependencies in asm/bug.h
  arm64: add support for ioremap() block mappings
  arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
  arm64: pgtable: implement static [pte|pmd|pud]_offset variants
  arm64: decouple early fixmap init from linear mapping
  arm64: kvm: deal with kernel symbols outside of linear mapping
  arm64: move kernel image to base of vmalloc area
  arm64: defer __va translation of initrd_start and initrd_end
  arm64: allow kernel Image to be loaded anywhere in physical memory

 Documentation/arm64/booting.txt                      |  20 ++-
 Documentation/features/vm/huge-vmap/arch-support.txt |   2 +-
 arch/arm/include/asm/kvm_asm.h                       |   2 +
 arch/arm/kvm/arm.c                                   |   8 +-
 arch/arm64/Kconfig                                   |   1 +
 arch/arm64/include/asm/boot.h                        |   6 +
 arch/arm64/include/asm/bug.h                         |   2 +-
 arch/arm64/include/asm/debug-monitors.h              |   2 +-
 arch/arm64/include/asm/kasan.h                       |   2 +-
 arch/arm64/include/asm/kernel-pgtable.h              |  12 ++
 arch/arm64/include/asm/kvm_asm.h                     |   2 +
 arch/arm64/include/asm/kvm_host.h                    |   8 +-
 arch/arm64/include/asm/memory.h                      |  55 +++++--
 arch/arm64/include/asm/pgtable.h                     |  23 ++-
 arch/arm64/kernel/head.S                             |   8 +-
 arch/arm64/kernel/image.h                            |  13 +-
 arch/arm64/kernel/vmlinux.lds.S                      |   4 +-
 arch/arm64/kvm/hyp.S                                 |   6 +-
 arch/arm64/kvm/hyp/debug-sr.c                        |   1 +
 arch/arm64/mm/dump.c                                 |  12 +-
 arch/arm64/mm/init.c                                 |  99 ++++++++++--
 arch/arm64/mm/kasan_init.c                           |  27 +++-
 arch/arm64/mm/mmu.c                                  | 168 +++++++++++++++-----
 drivers/of/fdt.c                                     |  19 ++-
 24 files changed, 381 insertions(+), 121 deletions(-)

-- 
2.5.0


* [PATCH v6sub1 01/11] of/fdt: make memblock minimum physical address arch configurable
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 02/11] of/fdt: factor out assignment of initrd_start/initrd_end Ard Biesheuvel
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

By default, early_init_dt_add_memory_arch() ignores memory below
the base of the kernel image since it won't be addressable via the
linear mapping. However, this is no longer appropriate once we
decouple the kernel text mapping from the linear mapping, and an
architecture may then want to drop the low limit entirely. So allow
the minimum to be overridden by defining MIN_MEMBLOCK_ADDR.
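
For illustration only (the header and value below are hypothetical, not
taken from this patch), an architecture whose kernel mapping can reach all
of RAM could then do something like this in a header that drivers/of/fdt.c
ends up including:

  /* hypothetical arch override, e.g. in its asm/memory.h */
  #define MIN_MEMBLOCK_ADDR	0	/* accept memory starting at PA 0 */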

Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Rob Herring <robh@kernel.org>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/of/fdt.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 655f79db7899..1f98156f8996 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -976,13 +976,16 @@ int __init early_init_dt_scan_chosen(unsigned long node, const char *uname,
 }
 
 #ifdef CONFIG_HAVE_MEMBLOCK
+#ifndef MIN_MEMBLOCK_ADDR
+#define MIN_MEMBLOCK_ADDR	__pa(PAGE_OFFSET)
+#endif
 #ifndef MAX_MEMBLOCK_ADDR
 #define MAX_MEMBLOCK_ADDR	((phys_addr_t)~0)
 #endif
 
 void __init __weak early_init_dt_add_memory_arch(u64 base, u64 size)
 {
-	const u64 phys_offset = __pa(PAGE_OFFSET);
+	const u64 phys_offset = MIN_MEMBLOCK_ADDR;
 
 	if (!PAGE_ALIGNED(base)) {
 		if (size < PAGE_SIZE - (base & ~PAGE_MASK)) {
-- 
2.5.0


* [PATCH v6sub1 02/11] of/fdt: factor out assignment of initrd_start/initrd_end
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 01/11] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-16 17:28   ` Rob Herring
  2016-02-16 12:52 ` [PATCH v6sub1 03/11] arm64: prevent potential circular header dependencies in asm/bug.h Ard Biesheuvel
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

Since architectures may not yet have their linear mapping up and running
when the initrd address is discovered from the DT, factor out the
assignment of initrd_start and initrd_end, so that an architecture can
override it and use the translation it needs.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 drivers/of/fdt.c | 14 +++++++++++---
 1 file changed, 11 insertions(+), 3 deletions(-)

diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index 1f98156f8996..3e90bce70545 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -760,6 +760,16 @@ const void * __init of_flat_dt_match_machine(const void *default_match,
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
+#ifndef __early_init_dt_declare_initrd
+static void __early_init_dt_declare_initrd(unsigned long start,
+					   unsigned long end)
+{
+	initrd_start = (unsigned long)__va(start);
+	initrd_end = (unsigned long)__va(end);
+	initrd_below_start_ok = 1;
+}
+#endif
+
 /**
  * early_init_dt_check_for_initrd - Decode initrd location from flat tree
  * @node: reference to node containing initrd location ('chosen')
@@ -782,9 +792,7 @@ static void __init early_init_dt_check_for_initrd(unsigned long node)
 		return;
 	end = of_read_number(prop, len/4);
 
-	initrd_start = (unsigned long)__va(start);
-	initrd_end = (unsigned long)__va(end);
-	initrd_below_start_ok = 1;
+	__early_init_dt_declare_initrd(start, end);
 
 	pr_debug("initrd_start=0x%llx  initrd_end=0x%llx\n",
 		 (unsigned long long)start, (unsigned long long)end);
-- 
2.5.0


* [PATCH v6sub1 03/11] arm64: prevent potential circular header dependencies in asm/bug.h
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 01/11] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 02/11] of/fdt: factor out assignment of initrd_start/initrd_end Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 04/11] arm64: add support for ioremap() block mappings Ard Biesheuvel
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

Currently, using BUG_ON() in header files is cumbersome, due to the fact
that asm/bug.h transitively includes a lot of other header files, resulting
in the actual BUG_ON() invocation appearing before its definition in the
preprocessor input. So let's reverse the #include dependency between
asm/bug.h and asm/debug-monitors.h, by moving the definition of BUG_BRK_IMM
from the latter to the former. Also fix up one user of asm/debug-monitors.h
which relied on a transitive include.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/bug.h            | 2 +-
 arch/arm64/include/asm/debug-monitors.h | 2 +-
 arch/arm64/kvm/hyp/debug-sr.c           | 1 +
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/bug.h b/arch/arm64/include/asm/bug.h
index 4a748ce9ba1a..679d49221998 100644
--- a/arch/arm64/include/asm/bug.h
+++ b/arch/arm64/include/asm/bug.h
@@ -18,7 +18,7 @@
 #ifndef _ARCH_ARM64_ASM_BUG_H
 #define _ARCH_ARM64_ASM_BUG_H
 
-#include <asm/debug-monitors.h>
+#define BUG_BRK_IMM			0x800
 
 #ifdef CONFIG_GENERIC_BUG
 #define HAVE_ARCH_BUG
diff --git a/arch/arm64/include/asm/debug-monitors.h b/arch/arm64/include/asm/debug-monitors.h
index 279c85b5ec09..e893a1fca9c2 100644
--- a/arch/arm64/include/asm/debug-monitors.h
+++ b/arch/arm64/include/asm/debug-monitors.h
@@ -20,6 +20,7 @@
 
 #include <linux/errno.h>
 #include <linux/types.h>
+#include <asm/bug.h>
 #include <asm/esr.h>
 #include <asm/insn.h>
 #include <asm/ptrace.h>
@@ -57,7 +58,6 @@
 #define FAULT_BRK_IMM			0x100
 #define KGDB_DYN_DBG_BRK_IMM		0x400
 #define KGDB_COMPILED_DBG_BRK_IMM	0x401
-#define BUG_BRK_IMM			0x800
 
 /*
  * BRK instruction encoding
diff --git a/arch/arm64/kvm/hyp/debug-sr.c b/arch/arm64/kvm/hyp/debug-sr.c
index c9c1e97501a9..2f8bca8af295 100644
--- a/arch/arm64/kvm/hyp/debug-sr.c
+++ b/arch/arm64/kvm/hyp/debug-sr.c
@@ -18,6 +18,7 @@
 #include <linux/compiler.h>
 #include <linux/kvm_host.h>
 
+#include <asm/debug-monitors.h>
 #include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
 
-- 
2.5.0


* [PATCH v6sub1 04/11] arm64: add support for ioremap() block mappings
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
                   ` (2 preceding siblings ...)
  2016-02-16 12:52 ` [PATCH v6sub1 03/11] arm64: prevent potential circular header dependencies in asm/bug.h Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 05/11] arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region Ard Biesheuvel
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

This wires up the existing generic huge-vmap feature, which allows
ioremap() to use PMD or PUD sized block mappings. It also adds support
to the unmap path for dealing with block mappings, which will allow us
to unmap the __init region using unmap_kernel_range() in a subsequent
patch.
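
As a minimal usage sketch (the function, physical address and size below
are made up; nothing changes for callers of ioremap() itself), a naturally
aligned region of sufficient size can now be satisfied with block mappings
by the generic vmap code:

  /* hypothetical driver snippet: a 32 MB, 2 MB aligned MMIO window may now
   * be mapped with PMD level (2 MB) blocks rather than with 4 KB ptes */
  static void __iomem *example_map_mmio(void)
  {
  	return ioremap(0x40000000UL, SZ_32M);	/* made-up address and size */
  }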

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 Documentation/features/vm/huge-vmap/arch-support.txt |  2 +-
 arch/arm64/Kconfig                                   |  1 +
 arch/arm64/include/asm/memory.h                      |  6 +++
 arch/arm64/mm/mmu.c                                  | 41 ++++++++++++++++++++
 4 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/Documentation/features/vm/huge-vmap/arch-support.txt b/Documentation/features/vm/huge-vmap/arch-support.txt
index af6816bccb43..df1d1f3c9af2 100644
--- a/Documentation/features/vm/huge-vmap/arch-support.txt
+++ b/Documentation/features/vm/huge-vmap/arch-support.txt
@@ -9,7 +9,7 @@
     |       alpha: | TODO |
     |         arc: | TODO |
     |         arm: | TODO |
-    |       arm64: | TODO |
+    |       arm64: |  ok  |
     |       avr32: | TODO |
     |    blackfin: | TODO |
     |         c6x: | TODO |
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 8cc62289a63e..cd767fa3037a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -49,6 +49,7 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
 	select HAVE_ARCH_KGDB
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 853953cd1f08..c65aad7b13dc 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -100,6 +100,12 @@
 #define MT_S2_NORMAL		0xf
 #define MT_S2_DEVICE_nGnRE	0x1
 
+#ifdef CONFIG_ARM64_4K_PAGES
+#define IOREMAP_MAX_ORDER	(PUD_SHIFT)
+#else
+#define IOREMAP_MAX_ORDER	(PMD_SHIFT)
+#endif
+
 #ifndef __ASSEMBLY__
 
 extern phys_addr_t		memstart_addr;
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 7711554a94f4..73383019f212 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -714,3 +714,44 @@ void *__init fixmap_remap_fdt(phys_addr_t dt_phys)
 
 	return dt_virt;
 }
+
+int __init arch_ioremap_pud_supported(void)
+{
+	/* only 4k granule supports level 1 block mappings */
+	return IS_ENABLED(CONFIG_ARM64_4K_PAGES);
+}
+
+int __init arch_ioremap_pmd_supported(void)
+{
+	return 1;
+}
+
+int pud_set_huge(pud_t *pud, phys_addr_t phys, pgprot_t prot)
+{
+	BUG_ON(phys & ~PUD_MASK);
+	set_pud(pud, __pud(phys | PUD_TYPE_SECT | pgprot_val(mk_sect_prot(prot))));
+	return 1;
+}
+
+int pmd_set_huge(pmd_t *pmd, phys_addr_t phys, pgprot_t prot)
+{
+	BUG_ON(phys & ~PMD_MASK);
+	set_pmd(pmd, __pmd(phys | PMD_TYPE_SECT | pgprot_val(mk_sect_prot(prot))));
+	return 1;
+}
+
+int pud_clear_huge(pud_t *pud)
+{
+	if (!pud_sect(*pud))
+		return 0;
+	pud_clear(pud);
+	return 1;
+}
+
+int pmd_clear_huge(pmd_t *pmd)
+{
+	if (!pmd_sect(*pmd))
+		return 0;
+	pmd_clear(pmd);
+	return 1;
+}
-- 
2.5.0


* [PATCH v6sub1 05/11] arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
                   ` (3 preceding siblings ...)
  2016-02-16 12:52 ` [PATCH v6sub1 04/11] arm64: add support for ioremap() block mappings Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 06/11] arm64: pgtable: implement static [pte|pmd|pud]_offset variants Ard Biesheuvel
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

This introduces the preprocessor symbol KIMAGE_VADDR which will serve as
the symbolic virtual base of the kernel region, i.e., the kernel's virtual
offset will be KIMAGE_VADDR + TEXT_OFFSET. For now, we define it as being
equal to PAGE_OFFSET, but in the future, it will be moved below it once
we move the kernel virtual mapping out of the linear mapping.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/memory.h | 10 ++++++++--
 arch/arm64/kernel/head.S        |  2 +-
 arch/arm64/kernel/vmlinux.lds.S |  4 ++--
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index c65aad7b13dc..aebc739f5a11 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -51,7 +51,8 @@
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define VA_START		(UL(0xffffffffffffffff) << VA_BITS)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) << (VA_BITS - 1))
-#define MODULES_END		(PAGE_OFFSET)
+#define KIMAGE_VADDR		(PAGE_OFFSET)
+#define MODULES_END		(KIMAGE_VADDR)
 #define MODULES_VADDR		(MODULES_END - SZ_64M)
 #define PCI_IO_END		(MODULES_VADDR - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
@@ -75,8 +76,13 @@
  * private definitions which should NOT be used outside memory.h
  * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
  */
-#define __virt_to_phys(x)	(((phys_addr_t)(x) - PAGE_OFFSET + PHYS_OFFSET))
+#define __virt_to_phys(x) ({						\
+	phys_addr_t __x = (phys_addr_t)(x);				\
+	__x >= PAGE_OFFSET ? (__x - PAGE_OFFSET + PHYS_OFFSET) :	\
+			     (__x - KIMAGE_VADDR + PHYS_OFFSET); })
+
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
+#define __phys_to_kimg(x)	((unsigned long)((x) - PHYS_OFFSET + KIMAGE_VADDR))
 
 /*
  * Convert a page to/from a physical address
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 53b9f9f128c2..04d38a058b19 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -389,7 +389,7 @@ __create_page_tables:
 	 * Map the kernel image (starting with PHYS_OFFSET).
 	 */
 	mov	x0, x26				// swapper_pg_dir
-	mov	x5, #PAGE_OFFSET
+	ldr	x5, =KIMAGE_VADDR
 	create_pgd_entry x0, x5, x3, x6
 	ldr	x6, =KERNEL_END			// __va(KERNEL_END)
 	mov	x3, x24				// phys offset
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index b78a3c772294..282e3e64a17e 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -89,7 +89,7 @@ SECTIONS
 		*(.discard.*)
 	}
 
-	. = PAGE_OFFSET + TEXT_OFFSET;
+	. = KIMAGE_VADDR + TEXT_OFFSET;
 
 	.head.text : {
 		_text = .;
@@ -186,4 +186,4 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
 /*
  * If padding is applied before .head.text, virt<->phys conversions will fail.
  */
-ASSERT(_text == (PAGE_OFFSET + TEXT_OFFSET), "HEAD is misaligned")
+ASSERT(_text == (KIMAGE_VADDR + TEXT_OFFSET), "HEAD is misaligned")
-- 
2.5.0


* [PATCH v6sub1 06/11] arm64: pgtable: implement static [pte|pmd|pud]_offset variants
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
                   ` (4 preceding siblings ...)
  2016-02-16 12:52 ` [PATCH v6sub1 05/11] arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 07/11] arm64: decouple early fixmap init from linear mapping Ard Biesheuvel
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

The page table accessors pte_offset(), pud_offset() and pmd_offset()
rely on __va translations, so they can only be used after the linear
mapping has been installed. For the early fixmap and kasan init routines,
whose page tables are allocated statically in the kernel image, these
functions will return bogus values. So implement pte_offset_kimg(),
pmd_offset_kimg() and pud_offset_kimg(), which can be used instead
before any page tables have been allocated dynamically.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/pgtable.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 4229f75fd145..87355408d448 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -445,6 +445,9 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
 
 #define pmd_page(pmd)		pfn_to_page(__phys_to_pfn(pmd_val(pmd) & PHYS_MASK))
 
+/* use ONLY for statically allocated translation tables */
+#define pte_offset_kimg(dir,addr)	((pte_t *)__phys_to_kimg(pte_offset_phys((dir), (addr))))
+
 /*
  * Conversion functions: convert a page and protection to a page entry,
  * and a page entry and page directory to the page they refer to.
@@ -488,6 +491,9 @@ static inline phys_addr_t pud_page_paddr(pud_t pud)
 
 #define pud_page(pud)		pfn_to_page(__phys_to_pfn(pud_val(pud) & PHYS_MASK))
 
+/* use ONLY for statically allocated translation tables */
+#define pmd_offset_kimg(dir,addr)	((pmd_t *)__phys_to_kimg(pmd_offset_phys((dir), (addr))))
+
 #else
 
 #define pud_page_paddr(pud)	({ BUILD_BUG(); 0; })
@@ -497,6 +503,8 @@ static inline phys_addr_t pud_page_paddr(pud_t pud)
 #define pmd_set_fixmap_offset(pudp, addr)	((pmd_t *)pudp)
 #define pmd_clear_fixmap()
 
+#define pmd_offset_kimg(dir,addr)	((pmd_t *)dir)
+
 #endif	/* CONFIG_PGTABLE_LEVELS > 2 */
 
 #if CONFIG_PGTABLE_LEVELS > 3
@@ -535,6 +543,9 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
 
 #define pgd_page(pgd)		pfn_to_page(__phys_to_pfn(pgd_val(pgd) & PHYS_MASK))
 
+/* use ONLY for statically allocated translation tables */
+#define pud_offset_kimg(dir,addr)	((pud_t *)__phys_to_kimg(pud_offset_phys((dir), (addr))))
+
 #else
 
 #define pgd_page_paddr(pgd)	({ BUILD_BUG(); 0;})
@@ -544,6 +555,8 @@ static inline phys_addr_t pgd_page_paddr(pgd_t pgd)
 #define pud_set_fixmap_offset(pgdp, addr)	((pud_t *)pgdp)
 #define pud_clear_fixmap()
 
+#define pud_offset_kimg(dir,addr)	((pud_t *)dir)
+
 #endif  /* CONFIG_PGTABLE_LEVELS > 3 */
 
 #define pgd_ERROR(pgd)		__pgd_error(__FILE__, __LINE__, pgd_val(pgd))
-- 
2.5.0


* [PATCH v6sub1 07/11] arm64: decouple early fixmap init from linear mapping
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
                   ` (5 preceding siblings ...)
  2016-02-16 12:52 ` [PATCH v6sub1 06/11] arm64: pgtable: implement static [pte|pmd|pud]_offset variants Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 08/11] arm64: kvm: deal with kernel symbols outside of " Ard Biesheuvel
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

Since the early fixmap page tables are populated using pages that are
part of the static footprint of the kernel, they are covered by the
initial kernel mapping, and we can refer to them without using __va/__pa
translations, which are tied to the linear mapping.

Since the fixmap page tables are disjoint from the kernel mapping up
to the top level pgd entry, we can refer to bm_pte[] directly, and there
is no need to walk the page tables and perform __pa()/__va() translations
at each step.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/mm/mmu.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 73383019f212..b84915723ea0 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -583,7 +583,7 @@ static inline pud_t * fixmap_pud(unsigned long addr)
 
 	BUG_ON(pgd_none(*pgd) || pgd_bad(*pgd));
 
-	return pud_offset(pgd, addr);
+	return pud_offset_kimg(pgd, addr);
 }
 
 static inline pmd_t * fixmap_pmd(unsigned long addr)
@@ -592,16 +592,12 @@ static inline pmd_t * fixmap_pmd(unsigned long addr)
 
 	BUG_ON(pud_none(*pud) || pud_bad(*pud));
 
-	return pmd_offset(pud, addr);
+	return pmd_offset_kimg(pud, addr);
 }
 
 static inline pte_t * fixmap_pte(unsigned long addr)
 {
-	pmd_t *pmd = fixmap_pmd(addr);
-
-	BUG_ON(pmd_none(*pmd) || pmd_bad(*pmd));
-
-	return pte_offset_kernel(pmd, addr);
+	return &bm_pte[pte_index(addr)];
 }
 
 void __init early_fixmap_init(void)
@@ -613,14 +609,14 @@ void __init early_fixmap_init(void)
 
 	pgd = pgd_offset_k(addr);
 	pgd_populate(&init_mm, pgd, bm_pud);
-	pud = pud_offset(pgd, addr);
+	pud = fixmap_pud(addr);
 	pud_populate(&init_mm, pud, bm_pmd);
-	pmd = pmd_offset(pud, addr);
+	pmd = fixmap_pmd(addr);
 	pmd_populate_kernel(&init_mm, pmd, bm_pte);
 
 	/*
 	 * The boot-ioremap range spans multiple pmds, for which
-	 * we are not preparted:
+	 * we are not prepared:
 	 */
 	BUILD_BUG_ON((__fix_to_virt(FIX_BTMAP_BEGIN) >> PMD_SHIFT)
 		     != (__fix_to_virt(FIX_BTMAP_END) >> PMD_SHIFT));
-- 
2.5.0


* [PATCH v6sub1 08/11] arm64: kvm: deal with kernel symbols outside of linear mapping
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
                   ` (6 preceding siblings ...)
  2016-02-16 12:52 ` [PATCH v6sub1 07/11] arm64: decouple early fixmap init from linear mapping Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 09/11] arm64: move kernel image to base of vmalloc area Ard Biesheuvel
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

KVM on arm64 uses a fixed offset between the linear mapping at EL1 and
the HYP mapping at EL2. Before we can move the kernel virtual mapping
out of the linear mapping, we have to make sure that references to kernel
symbols that are accessed via the HYP mapping are translated to their
linear equivalent.

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm/include/asm/kvm_asm.h    |  2 ++
 arch/arm/kvm/arm.c                |  8 +++++---
 arch/arm64/include/asm/kvm_asm.h  | 17 +++++++++++++++++
 arch/arm64/include/asm/kvm_host.h |  8 +++++---
 arch/arm64/kvm/hyp.S              |  6 +++---
 5 files changed, 32 insertions(+), 9 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 194c91b610ff..c35c349da069 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -79,6 +79,8 @@
 #define rr_lo_hi(a1, a2) a1, a2
 #endif
 
+#define kvm_ksym_ref(kva)	(kva)
+
 #ifndef __ASSEMBLY__
 struct kvm;
 struct kvm_vcpu;
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dda1959f0dde..975da6cfbf59 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -982,7 +982,7 @@ static void cpu_init_hyp_mode(void *dummy)
 	pgd_ptr = kvm_mmu_get_httbr();
 	stack_page = __this_cpu_read(kvm_arm_hyp_stack_page);
 	hyp_stack_ptr = stack_page + PAGE_SIZE;
-	vector_ptr = (unsigned long)__kvm_hyp_vector;
+	vector_ptr = (unsigned long)kvm_ksym_ref(__kvm_hyp_vector);
 
 	__cpu_init_hyp_mode(boot_pgd_ptr, pgd_ptr, hyp_stack_ptr, vector_ptr);
 
@@ -1074,13 +1074,15 @@ static int init_hyp_mode(void)
 	/*
 	 * Map the Hyp-code called directly from the host
 	 */
-	err = create_hyp_mappings(__kvm_hyp_code_start, __kvm_hyp_code_end);
+	err = create_hyp_mappings(kvm_ksym_ref(__kvm_hyp_code_start),
+				  kvm_ksym_ref(__kvm_hyp_code_end));
 	if (err) {
 		kvm_err("Cannot map world-switch code\n");
 		goto out_free_mappings;
 	}
 
-	err = create_hyp_mappings(__start_rodata, __end_rodata);
+	err = create_hyp_mappings(kvm_ksym_ref(__start_rodata),
+				  kvm_ksym_ref(__end_rodata));
 	if (err) {
 		kvm_err("Cannot map rodata section\n");
 		goto out_free_mappings;
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 52b777b7d407..31b56008f412 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -26,7 +26,24 @@
 #define KVM_ARM64_DEBUG_DIRTY_SHIFT	0
 #define KVM_ARM64_DEBUG_DIRTY		(1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
 
+#define kvm_ksym_ref(sym)		((void *)&sym + kvm_ksym_shift)
+
 #ifndef __ASSEMBLY__
+#if __GNUC__ > 4
+#define kvm_ksym_shift			(PAGE_OFFSET - KIMAGE_VADDR)
+#else
+/*
+ * GCC versions 4.9 and older will fold the constant below into the addend of
+ * the reference to 'sym' above if kvm_ksym_shift is declared static or if the
+ * constant is used directly. However, since we use the small code model for
+ * the core kernel, the reference to 'sym' will be emitted as an adrp/add pair,
+ * with a +/- 4 GB range, resulting in linker relocation errors if the shift
+ * is sufficiently large. So prevent the compiler from folding the shift into
+ * the addend, by making the shift a variable with external linkage.
+ */
+__weak u64 kvm_ksym_shift = PAGE_OFFSET - KIMAGE_VADDR;
+#endif
+
 struct kvm;
 struct kvm_vcpu;
 
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 689d4c95e12f..e3d67ff8798b 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -307,7 +307,7 @@ static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
 struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
 
-u64 kvm_call_hyp(void *hypfn, ...);
+u64 __kvm_call_hyp(void *hypfn, ...);
 void force_vm_exit(const cpumask_t *mask);
 void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
 
@@ -328,8 +328,8 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
 	 * Call initialization code, and switch to the full blown
 	 * HYP code.
 	 */
-	kvm_call_hyp((void *)boot_pgd_ptr, pgd_ptr,
-		     hyp_stack_ptr, vector_ptr);
+	__kvm_call_hyp((void *)boot_pgd_ptr, pgd_ptr,
+		       hyp_stack_ptr, vector_ptr);
 }
 
 static inline void kvm_arch_hardware_disable(void) {}
@@ -343,4 +343,6 @@ void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_clear_debug(struct kvm_vcpu *vcpu);
 void kvm_arm_reset_debug_ptr(struct kvm_vcpu *vcpu);
 
+#define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 0ccdcbbef3c2..870578f84b1c 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -20,7 +20,7 @@
 #include <asm/assembler.h>
 
 /*
- * u64 kvm_call_hyp(void *hypfn, ...);
+ * u64 __kvm_call_hyp(void *hypfn, ...);
  *
  * This is not really a variadic function in the classic C-way and care must
  * be taken when calling this to ensure parameters are passed in registers
@@ -37,7 +37,7 @@
  * used to implement __hyp_get_vectors in the same way as in
  * arch/arm64/kernel/hyp_stub.S.
  */
-ENTRY(kvm_call_hyp)
+ENTRY(__kvm_call_hyp)
 	hvc	#0
 	ret
-ENDPROC(kvm_call_hyp)
+ENDPROC(__kvm_call_hyp)
-- 
2.5.0


* [PATCH v6sub1 09/11] arm64: move kernel image to base of vmalloc area
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
                   ` (7 preceding siblings ...)
  2016-02-16 12:52 ` [PATCH v6sub1 08/11] arm64: kvm: deal with kernel symbols outside of " Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 10/11] arm64: defer __va translation of initrd_start and initrd_end Ard Biesheuvel
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

This moves the module area to right before the vmalloc area, and moves
the kernel image to the base of the vmalloc area. This is an intermediate
step towards implementing KASLR, which allows the kernel image to be
located anywhere in the vmalloc area.

Since other subsystems such as hibernate may still need to refer to the
kernel text or data segments via their linear addresses, both are mapped
in the linear region as well. The linear alias of the text region is
mapped read-only/non-executable to prevent inadvertent modification or
execution.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/kasan.h   |   2 +-
 arch/arm64/include/asm/memory.h  |  21 ++--
 arch/arm64/include/asm/pgtable.h |  10 +-
 arch/arm64/mm/dump.c             |  12 +--
 arch/arm64/mm/init.c             |  23 ++--
 arch/arm64/mm/kasan_init.c       |  27 ++++-
 arch/arm64/mm/mmu.c              | 110 ++++++++++++++------
 7 files changed, 137 insertions(+), 68 deletions(-)

diff --git a/arch/arm64/include/asm/kasan.h b/arch/arm64/include/asm/kasan.h
index de0d21211c34..71ad0f93eb71 100644
--- a/arch/arm64/include/asm/kasan.h
+++ b/arch/arm64/include/asm/kasan.h
@@ -14,7 +14,7 @@
  * KASAN_SHADOW_END: KASAN_SHADOW_START + 1/8 of kernel virtual addresses.
  */
 #define KASAN_SHADOW_START      (VA_START)
-#define KASAN_SHADOW_END        (KASAN_SHADOW_START + (1UL << (VA_BITS - 3)))
+#define KASAN_SHADOW_END        (KASAN_SHADOW_START + KASAN_SHADOW_SIZE)
 
 /*
  * This value is used to map an address to the corresponding shadow
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index aebc739f5a11..4388651d1f0d 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -45,16 +45,15 @@
  * VA_START - the first kernel virtual address.
  * TASK_SIZE - the maximum size of a user space task.
  * TASK_UNMAPPED_BASE - the lower boundary of the mmap VM area.
- * The module space lives between the addresses given by TASK_SIZE
- * and PAGE_OFFSET - it must be within 128MB of the kernel text.
  */
 #define VA_BITS			(CONFIG_ARM64_VA_BITS)
 #define VA_START		(UL(0xffffffffffffffff) << VA_BITS)
 #define PAGE_OFFSET		(UL(0xffffffffffffffff) << (VA_BITS - 1))
-#define KIMAGE_VADDR		(PAGE_OFFSET)
-#define MODULES_END		(KIMAGE_VADDR)
-#define MODULES_VADDR		(MODULES_END - SZ_64M)
-#define PCI_IO_END		(MODULES_VADDR - SZ_2M)
+#define KIMAGE_VADDR		(MODULES_END)
+#define MODULES_END		(MODULES_VADDR + MODULES_VSIZE)
+#define MODULES_VADDR		(VA_START + KASAN_SHADOW_SIZE)
+#define MODULES_VSIZE		(SZ_64M)
+#define PCI_IO_END		(PAGE_OFFSET - SZ_2M)
 #define PCI_IO_START		(PCI_IO_END - PCI_IO_SIZE)
 #define FIXADDR_TOP		(PCI_IO_START - SZ_2M)
 #define TASK_SIZE_64		(UL(1) << VA_BITS)
@@ -72,6 +71,16 @@
 #define TASK_UNMAPPED_BASE	(PAGE_ALIGN(TASK_SIZE / 4))
 
 /*
+ * The size of the KASAN shadow region. This should be 1/8th of the
+ * size of the entire kernel virtual address space.
+ */
+#ifdef CONFIG_KASAN
+#define KASAN_SHADOW_SIZE	(UL(1) << (VA_BITS - 3))
+#else
+#define KASAN_SHADOW_SIZE	(0)
+#endif
+
+/*
  * Physical vs virtual RAM address space conversion.  These are
  * private definitions which should NOT be used outside memory.h
  * files.  Use virt_to_phys/phys_to_virt/__pa/__va instead.
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 87355408d448..a440f5a85d08 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -36,19 +36,13 @@
  *
  * VMEMAP_SIZE: allows the whole VA space to be covered by a struct page array
  *	(rounded up to PUD_SIZE).
- * VMALLOC_START: beginning of the kernel VA space
+ * VMALLOC_START: beginning of the kernel vmalloc space
  * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
  *	fixed mappings and modules
  */
 #define VMEMMAP_SIZE		ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
 
-#ifndef CONFIG_KASAN
-#define VMALLOC_START		(VA_START)
-#else
-#include <asm/kasan.h>
-#define VMALLOC_START		(KASAN_SHADOW_END + SZ_64K)
-#endif
-
+#define VMALLOC_START		(MODULES_END)
 #define VMALLOC_END		(PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
 
 #define vmemmap			((struct page *)(VMALLOC_END + SZ_64K))
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index 0adbebbc2803..e83ffb00560c 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -35,7 +35,9 @@ struct addr_marker {
 };
 
 enum address_markers_idx {
-	VMALLOC_START_NR = 0,
+	MODULES_START_NR = 0,
+	MODULES_END_NR,
+	VMALLOC_START_NR,
 	VMALLOC_END_NR,
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 	VMEMMAP_START_NR,
@@ -45,12 +47,12 @@ enum address_markers_idx {
 	FIXADDR_END_NR,
 	PCI_START_NR,
 	PCI_END_NR,
-	MODULES_START_NR,
-	MODULES_END_NR,
 	KERNEL_SPACE_NR,
 };
 
 static struct addr_marker address_markers[] = {
+	{ MODULES_VADDR,	"Modules start" },
+	{ MODULES_END,		"Modules end" },
 	{ VMALLOC_START,	"vmalloc() Area" },
 	{ VMALLOC_END,		"vmalloc() End" },
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
@@ -61,9 +63,7 @@ static struct addr_marker address_markers[] = {
 	{ FIXADDR_TOP,		"Fixmap end" },
 	{ PCI_IO_START,		"PCI I/O start" },
 	{ PCI_IO_END,		"PCI I/O end" },
-	{ MODULES_VADDR,	"Modules start" },
-	{ MODULES_END,		"Modules end" },
-	{ PAGE_OFFSET,		"Kernel Mapping" },
+	{ PAGE_OFFSET,		"Linear Mapping" },
 	{ -1,			NULL },
 };
 
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index f3b061e67bfe..1d627cd8121c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -36,6 +36,7 @@
 #include <linux/swiotlb.h>
 
 #include <asm/fixmap.h>
+#include <asm/kasan.h>
 #include <asm/memory.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
@@ -302,22 +303,26 @@ void __init mem_init(void)
 #ifdef CONFIG_KASAN
 		  "    kasan   : 0x%16lx - 0x%16lx   (%6ld GB)\n"
 #endif
+		  "    modules : 0x%16lx - 0x%16lx   (%6ld MB)\n"
 		  "    vmalloc : 0x%16lx - 0x%16lx   (%6ld GB)\n"
+		  "      .init : 0x%p" " - 0x%p" "   (%6ld KB)\n"
+		  "      .text : 0x%p" " - 0x%p" "   (%6ld KB)\n"
+		  "      .data : 0x%p" " - 0x%p" "   (%6ld KB)\n"
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 		  "    vmemmap : 0x%16lx - 0x%16lx   (%6ld GB maximum)\n"
 		  "              0x%16lx - 0x%16lx   (%6ld MB actual)\n"
 #endif
 		  "    fixed   : 0x%16lx - 0x%16lx   (%6ld KB)\n"
 		  "    PCI I/O : 0x%16lx - 0x%16lx   (%6ld MB)\n"
-		  "    modules : 0x%16lx - 0x%16lx   (%6ld MB)\n"
-		  "    memory  : 0x%16lx - 0x%16lx   (%6ld MB)\n"
-		  "      .init : 0x%p" " - 0x%p" "   (%6ld KB)\n"
-		  "      .text : 0x%p" " - 0x%p" "   (%6ld KB)\n"
-		  "      .data : 0x%p" " - 0x%p" "   (%6ld KB)\n",
+		  "    memory  : 0x%16lx - 0x%16lx   (%6ld MB)\n",
 #ifdef CONFIG_KASAN
 		  MLG(KASAN_SHADOW_START, KASAN_SHADOW_END),
 #endif
+		  MLM(MODULES_VADDR, MODULES_END),
 		  MLG(VMALLOC_START, VMALLOC_END),
+		  MLK_ROUNDUP(__init_begin, __init_end),
+		  MLK_ROUNDUP(_text, _etext),
+		  MLK_ROUNDUP(_sdata, _edata),
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 		  MLG((unsigned long)vmemmap,
 		      (unsigned long)vmemmap + VMEMMAP_SIZE),
@@ -326,11 +331,7 @@ void __init mem_init(void)
 #endif
 		  MLK(FIXADDR_START, FIXADDR_TOP),
 		  MLM(PCI_IO_START, PCI_IO_END),
-		  MLM(MODULES_VADDR, MODULES_END),
-		  MLM(PAGE_OFFSET, (unsigned long)high_memory),
-		  MLK_ROUNDUP(__init_begin, __init_end),
-		  MLK_ROUNDUP(_text, _etext),
-		  MLK_ROUNDUP(_sdata, _edata));
+		  MLM(PAGE_OFFSET, (unsigned long)high_memory));
 
 #undef MLK
 #undef MLM
@@ -358,8 +359,8 @@ void __init mem_init(void)
 
 void free_initmem(void)
 {
-	fixup_init();
 	free_initmem_default(0);
+	fixup_init();
 }
 
 #ifdef CONFIG_BLK_DEV_INITRD
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index cc569a38bc76..7f10cc91fa8a 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -17,9 +17,11 @@
 #include <linux/start_kernel.h>
 
 #include <asm/mmu_context.h>
+#include <asm/kernel-pgtable.h>
 #include <asm/page.h>
 #include <asm/pgalloc.h>
 #include <asm/pgtable.h>
+#include <asm/sections.h>
 #include <asm/tlbflush.h>
 
 static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
@@ -33,7 +35,7 @@ static void __init kasan_early_pte_populate(pmd_t *pmd, unsigned long addr,
 	if (pmd_none(*pmd))
 		pmd_populate_kernel(&init_mm, pmd, kasan_zero_pte);
 
-	pte = pte_offset_kernel(pmd, addr);
+	pte = pte_offset_kimg(pmd, addr);
 	do {
 		next = addr + PAGE_SIZE;
 		set_pte(pte, pfn_pte(virt_to_pfn(kasan_zero_page),
@@ -51,7 +53,7 @@ static void __init kasan_early_pmd_populate(pud_t *pud,
 	if (pud_none(*pud))
 		pud_populate(&init_mm, pud, kasan_zero_pmd);
 
-	pmd = pmd_offset(pud, addr);
+	pmd = pmd_offset_kimg(pud, addr);
 	do {
 		next = pmd_addr_end(addr, end);
 		kasan_early_pte_populate(pmd, addr, next);
@@ -68,7 +70,7 @@ static void __init kasan_early_pud_populate(pgd_t *pgd,
 	if (pgd_none(*pgd))
 		pgd_populate(&init_mm, pgd, kasan_zero_pud);
 
-	pud = pud_offset(pgd, addr);
+	pud = pud_offset_kimg(pgd, addr);
 	do {
 		next = pud_addr_end(addr, end);
 		kasan_early_pmd_populate(pud, addr, next);
@@ -126,9 +128,13 @@ static void __init clear_pgds(unsigned long start,
 
 void __init kasan_init(void)
 {
+	u64 kimg_shadow_start, kimg_shadow_end;
 	struct memblock_region *reg;
 	int i;
 
+	kimg_shadow_start = (u64)kasan_mem_to_shadow(_text);
+	kimg_shadow_end = (u64)kasan_mem_to_shadow(_end);
+
 	/*
 	 * We are going to perform proper setup of shadow memory.
 	 * At first we should unmap early shadow (clear_pgds() call bellow).
@@ -142,8 +148,23 @@ void __init kasan_init(void)
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
+	vmemmap_populate(kimg_shadow_start, kimg_shadow_end, NUMA_NO_NODE);
+
+	/*
+	 * vmemmap_populate() has populated the shadow region that covers the
+	 * kernel image with SWAPPER_BLOCK_SIZE mappings, so we have to round
+	 * the start and end addresses to SWAPPER_BLOCK_SIZE as well, to prevent
+	 * kasan_populate_zero_shadow() from replacing the PMD block mappings
+	 * with PMD table mappings at the edges of the shadow region for the
+	 * kernel image.
+	 */
+	if (ARM64_SWAPPER_USES_SECTION_MAPS)
+		kimg_shadow_end = round_up(kimg_shadow_end, SWAPPER_BLOCK_SIZE);
+
 	kasan_populate_zero_shadow((void *)KASAN_SHADOW_START,
 			kasan_mem_to_shadow((void *)MODULES_VADDR));
+	kasan_populate_zero_shadow((void *)kimg_shadow_end,
+			kasan_mem_to_shadow((void *)PAGE_OFFSET));
 
 	for_each_memblock(memory, reg) {
 		void *start = (void *)__phys_to_virt(reg->base);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index b84915723ea0..6eb8e49889d0 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -53,6 +53,10 @@ u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)] __page_aligned_bss;
 EXPORT_SYMBOL(empty_zero_page);
 
+static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
+static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss __maybe_unused;
+static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss __maybe_unused;
+
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 			      unsigned long size, pgprot_t vma_prot)
 {
@@ -347,16 +351,15 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
 
 static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
 {
-
 	unsigned long kernel_start = __pa(_stext);
-	unsigned long kernel_end = __pa(_end);
+	unsigned long kernel_end = __pa(_etext);
 
 	/*
-	 * The kernel itself is mapped at page granularity. Map all other
-	 * memory, making sure we don't overwrite the existing kernel mappings.
+	 * Take care not to create a writable alias for the
+	 * read-only text and rodata sections of the kernel image.
 	 */
 
-	/* No overlap with the kernel. */
+	/* No overlap with the kernel text */
 	if (end < kernel_start || start >= kernel_end) {
 		__create_pgd_mapping(pgd, start, __phys_to_virt(start),
 				     end - start, PAGE_KERNEL,
@@ -365,8 +368,8 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 	}
 
 	/*
-	 * This block overlaps the kernel mapping. Map the portion(s) which
-	 * don't overlap.
+	 * This block overlaps the kernel text mapping.
+	 * Map the portion(s) which don't overlap.
 	 */
 	if (start < kernel_start)
 		__create_pgd_mapping(pgd, start,
@@ -378,6 +381,16 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
 				     __phys_to_virt(kernel_end),
 				     end - kernel_end, PAGE_KERNEL,
 				     early_pgtable_alloc);
+
+	/*
+	 * Map the linear alias of the [_stext, _etext) interval as
+	 * read-only/non-executable. This makes the contents of the
+	 * region accessible to subsystems such as hibernate, but
+	 * protects it from inadvertent modification or execution.
+	 */
+	__create_pgd_mapping(pgd, kernel_start, __phys_to_virt(kernel_start),
+			     kernel_end - kernel_start, PAGE_KERNEL_RO,
+			     early_pgtable_alloc);
 }
 
 static void __init map_mem(pgd_t *pgd)
@@ -398,25 +411,28 @@ static void __init map_mem(pgd_t *pgd)
 	}
 }
 
-#ifdef CONFIG_DEBUG_RODATA
 void mark_rodata_ro(void)
 {
+	if (!IS_ENABLED(CONFIG_DEBUG_RODATA))
+		return;
+
 	create_mapping_late(__pa(_stext), (unsigned long)_stext,
 				(unsigned long)_etext - (unsigned long)_stext,
 				PAGE_KERNEL_ROX);
-
 }
-#endif
 
 void fixup_init(void)
 {
-	create_mapping_late(__pa(__init_begin), (unsigned long)__init_begin,
-			(unsigned long)__init_end - (unsigned long)__init_begin,
-			PAGE_KERNEL);
+	/*
+	 * Unmap the __init region but leave the VM area in place. This
+	 * prevents the region from being reused for kernel modules, which
+	 * is not supported by kallsyms.
+	 */
+	unmap_kernel_range((u64)__init_begin, (u64)(__init_end - __init_begin));
 }
 
 static void __init map_kernel_chunk(pgd_t *pgd, void *va_start, void *va_end,
-				    pgprot_t prot)
+				    pgprot_t prot, struct vm_struct *vma)
 {
 	phys_addr_t pa_start = __pa(va_start);
 	unsigned long size = va_end - va_start;
@@ -426,6 +442,14 @@ static void __init map_kernel_chunk(pgd_t *pgd, void *va_start, void *va_end,
 
 	__create_pgd_mapping(pgd, pa_start, (unsigned long)va_start, size, prot,
 			     early_pgtable_alloc);
+
+	vma->addr	= va_start;
+	vma->phys_addr	= pa_start;
+	vma->size	= size;
+	vma->flags	= VM_MAP;
+	vma->caller	= __builtin_return_address(0);
+
+	vm_area_add_early(vma);
 }
 
 /*
@@ -433,17 +457,35 @@ static void __init map_kernel_chunk(pgd_t *pgd, void *va_start, void *va_end,
  */
 static void __init map_kernel(pgd_t *pgd)
 {
+	static struct vm_struct vmlinux_text, vmlinux_init, vmlinux_data;
 
-	map_kernel_chunk(pgd, _stext, _etext, PAGE_KERNEL_EXEC);
-	map_kernel_chunk(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC);
-	map_kernel_chunk(pgd, _data, _end, PAGE_KERNEL);
+	map_kernel_chunk(pgd, _stext, _etext, PAGE_KERNEL_EXEC, &vmlinux_text);
+	map_kernel_chunk(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
+			 &vmlinux_init);
+	map_kernel_chunk(pgd, _data, _end, PAGE_KERNEL, &vmlinux_data);
 
-	/*
-	 * The fixmap falls in a separate pgd to the kernel, and doesn't live
-	 * in the carveout for the swapper_pg_dir. We can simply re-use the
-	 * existing dir for the fixmap.
-	 */
-	set_pgd(pgd_offset_raw(pgd, FIXADDR_START), *pgd_offset_k(FIXADDR_START));
+	if (!pgd_val(*pgd_offset_raw(pgd, FIXADDR_START))) {
+		/*
+		 * The fixmap falls in a separate pgd to the kernel, and doesn't
+		 * live in the carveout for the swapper_pg_dir. We can simply
+		 * re-use the existing dir for the fixmap.
+		 */
+		set_pgd(pgd_offset_raw(pgd, FIXADDR_START),
+			*pgd_offset_k(FIXADDR_START));
+	} else if (CONFIG_PGTABLE_LEVELS > 3) {
+		/*
+		 * The fixmap shares its top level pgd entry with the kernel
+		 * mapping. This can really only occur when we are running
+		 * with 16k/4 levels, so we can simply reuse the pud level
+		 * entry instead.
+		 */
+		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
+		set_pud(pud_set_fixmap_offset(pgd, FIXADDR_START),
+			__pud(__pa(bm_pmd) | PUD_TYPE_TABLE));
+		pud_clear_fixmap();
+	} else {
+		BUG();
+	}
 
 	kasan_copy_shadow(pgd);
 }
@@ -569,14 +611,6 @@ void vmemmap_free(unsigned long start, unsigned long end)
 }
 #endif	/* CONFIG_SPARSEMEM_VMEMMAP */
 
-static pte_t bm_pte[PTRS_PER_PTE] __page_aligned_bss;
-#if CONFIG_PGTABLE_LEVELS > 2
-static pmd_t bm_pmd[PTRS_PER_PMD] __page_aligned_bss;
-#endif
-#if CONFIG_PGTABLE_LEVELS > 3
-static pud_t bm_pud[PTRS_PER_PUD] __page_aligned_bss;
-#endif
-
 static inline pud_t * fixmap_pud(unsigned long addr)
 {
 	pgd_t *pgd = pgd_offset_k(addr);
@@ -608,8 +642,18 @@ void __init early_fixmap_init(void)
 	unsigned long addr = FIXADDR_START;
 
 	pgd = pgd_offset_k(addr);
-	pgd_populate(&init_mm, pgd, bm_pud);
-	pud = fixmap_pud(addr);
+	if (CONFIG_PGTABLE_LEVELS > 3 && !pgd_none(*pgd)) {
+		/*
+		 * We only end up here if the kernel mapping and the fixmap
+		 * share the top level pgd entry, which should only happen on
+		 * 16k/4 levels configurations.
+		 */
+		BUG_ON(!IS_ENABLED(CONFIG_ARM64_16K_PAGES));
+		pud = pud_offset_kimg(pgd, addr);
+	} else {
+		pgd_populate(&init_mm, pgd, bm_pud);
+		pud = fixmap_pud(addr);
+	}
 	pud_populate(&init_mm, pud, bm_pmd);
 	pmd = fixmap_pmd(addr);
 	pmd_populate_kernel(&init_mm, pmd, bm_pte);
-- 
2.5.0


* [PATCH v6sub1 10/11] arm64: defer __va translation of initrd_start and initrd_end
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
                   ` (8 preceding siblings ...)
  2016-02-16 12:52 ` [PATCH v6sub1 09/11] arm64: move kernel image to base of vmalloc area Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-16 12:52 ` [PATCH v6sub1 11/11] arm64: allow kernel Image to be loaded anywhere in physical memory Ard Biesheuvel
  2016-02-18 18:25 ` [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Catalin Marinas
  11 siblings, 0 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

Before deferring the assignment of memstart_addr in a subsequent patch, to
the moment where all memory has been discovered and possibly clipped based
on the size of the linear region and the presence of a mem= command line
parameter, we need to ensure that memstart_addr is not used to perform __va
translations before it is assigned.

One such use is in the generic early DT discovery of the initrd location,
which is recorded as a virtual address in the globals initrd_start and
initrd_end. So wire up the generic hook for declaring the initrd addresses,
implement it without __va() translations, and perform the translation only
after memstart_addr has been assigned.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 arch/arm64/include/asm/memory.h |  8 ++++++++
 arch/arm64/mm/init.c            | 13 +++++++++----
 2 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 4388651d1f0d..18b7e77c7495 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -121,6 +121,14 @@
 #define IOREMAP_MAX_ORDER	(PMD_SHIFT)
 #endif
 
+#ifdef CONFIG_BLK_DEV_INITRD
+#define __early_init_dt_declare_initrd(__start, __end)			\
+	do {								\
+		initrd_start = (__start);				\
+		initrd_end = (__end);					\
+	} while (0)
+#endif
+
 #ifndef __ASSEMBLY__
 
 extern phys_addr_t		memstart_addr;
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 1d627cd8121c..52d1fc465885 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -59,8 +59,8 @@ static int __init early_initrd(char *p)
 	if (*endp == ',') {
 		size = memparse(endp + 1, NULL);
 
-		initrd_start = (unsigned long)__va(start);
-		initrd_end = (unsigned long)__va(start + size);
+		initrd_start = start;
+		initrd_end = start + size;
 	}
 	return 0;
 }
@@ -168,8 +168,13 @@ void __init arm64_memblock_init(void)
 	 */
 	memblock_reserve(__pa(_text), _end - _text);
 #ifdef CONFIG_BLK_DEV_INITRD
-	if (initrd_start)
-		memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start);
+	if (initrd_start) {
+		memblock_reserve(initrd_start, initrd_end - initrd_start);
+
+		/* the generic initrd code expects virtual addresses */
+		initrd_start = __phys_to_virt(initrd_start);
+		initrd_end = __phys_to_virt(initrd_end);
+	}
 #endif
 
 	early_init_fdt_scan_reserved_mem();
-- 
2.5.0


* [PATCH v6sub1 11/11] arm64: allow kernel Image to be loaded anywhere in physical memory
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
                   ` (9 preceding siblings ...)
  2016-02-16 12:52 ` [PATCH v6sub1 10/11] arm64: defer __va translation of initrd_start and initrd_end Ard Biesheuvel
@ 2016-02-16 12:52 ` Ard Biesheuvel
  2016-02-18 18:25 ` [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Catalin Marinas
  11 siblings, 0 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-16 12:52 UTC (permalink / raw)
  To: linux-arm-kernel

This relaxes the kernel Image placement requirements, so that it
may be placed at any 2 MB aligned offset in physical memory.

This is accomplished by ignoring PHYS_OFFSET when installing
memblocks, and accounting for the apparent virtual offset of
the kernel Image. As a result, virtual address references
below PAGE_OFFSET are correctly mapped onto physical references
into the kernel Image regardless of where it sits in memory.

Special care needs to be taken for dealing with memory limits passed
via mem=, since the generic implementation clips memory top down, which
may clip the kernel image itself if it is loaded high up in memory. To
deal with this case, we simply add back the memory covering the kernel
image, which may result in more memory being retained than was passed
as a mem= parameter.

Since mem= should not be considered a production feature, a panic notifier
handler is installed that dumps the memory limit at panic time if one was
set.
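
For loader authors, the relaxed placement rule boils down to something like
the following (illustrative only, not part of this series; round_up and
SZ_2M are the usual kernel helpers, and the function name is made up):

/*
 * With bit 3 of the Image flags set, any 2 MB aligned base in usable RAM
 * is acceptable; the Image itself goes text_offset bytes above it, and at
 * least image_size bytes must be free from that point onwards.
 */
static u64 pick_image_addr(u64 free_ram_base, u64 text_offset)
{
	u64 base = round_up(free_ram_base, SZ_2M);

	return base + text_offset;
}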

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
 Documentation/arm64/booting.txt         | 20 ++++---
 arch/arm64/include/asm/boot.h           |  6 ++
 arch/arm64/include/asm/kernel-pgtable.h | 12 ++++
 arch/arm64/include/asm/kvm_asm.h        | 17 +-----
 arch/arm64/include/asm/memory.h         | 18 +++---
 arch/arm64/kernel/head.S                |  6 +-
 arch/arm64/kernel/image.h               | 13 ++--
 arch/arm64/mm/init.c                    | 63 +++++++++++++++++++-
 arch/arm64/mm/mmu.c                     |  3 +
 9 files changed, 119 insertions(+), 39 deletions(-)

diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 701d39d3171a..56d6d8b796db 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -109,7 +109,13 @@ Header notes:
 			1 - 4K
 			2 - 16K
 			3 - 64K
-  Bits 3-63:	Reserved.
+  Bit 3:	Kernel physical placement
+			0 - 2MB aligned base should be as close as possible
+			    to the base of DRAM, since memory below it is not
+			    accessible via the linear mapping
+			1 - 2MB aligned base may be anywhere in physical
+			    memory
+  Bits 4-63:	Reserved.
 
 - When image_size is zero, a bootloader should attempt to keep as much
   memory as possible free for use by the kernel immediately after the
@@ -117,14 +123,14 @@ Header notes:
   depending on selected features, and is effectively unbound.
 
 The Image must be placed text_offset bytes from a 2MB aligned base
-address near the start of usable system RAM and called there. Memory
-below that base address is currently unusable by Linux, and therefore it
-is strongly recommended that this location is the start of system RAM.
-The region between the 2 MB aligned base address and the start of the
-image has no special significance to the kernel, and may be used for
-other purposes.
+address anywhere in usable system RAM and called there. The region
+between the 2 MB aligned base address and the start of the image has no
+special significance to the kernel, and may be used for other purposes.
 At least image_size bytes from the start of the image must be free for
 use by the kernel.
+NOTE: versions prior to v4.6 cannot make use of memory below the
+physical offset of the Image so it is recommended that the Image be
+placed as close as possible to the start of system RAM.
 
 Any memory described to the kernel (even that below the start of the
 image) which is not marked as reserved from the kernel (e.g., with a
diff --git a/arch/arm64/include/asm/boot.h b/arch/arm64/include/asm/boot.h
index 81151b67b26b..ebf2481889c3 100644
--- a/arch/arm64/include/asm/boot.h
+++ b/arch/arm64/include/asm/boot.h
@@ -11,4 +11,10 @@
 #define MIN_FDT_ALIGN		8
 #define MAX_FDT_SIZE		SZ_2M
 
+/*
+ * arm64 requires the kernel image to be placed
+ * TEXT_OFFSET bytes beyond a 2 MB aligned base
+ */
+#define MIN_KIMG_ALIGN		SZ_2M
+
 #endif
diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index a459714ee29e..5c6375d8528b 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -79,5 +79,17 @@
 #define SWAPPER_MM_MMUFLAGS	(PTE_ATTRINDX(MT_NORMAL) | SWAPPER_PTE_FLAGS)
 #endif
 
+/*
+ * To make optimal use of block mappings when laying out the linear
+ * mapping, round down the base of physical memory to a size that can
+ * be mapped efficiently, i.e., either PUD_SIZE (4k granule) or PMD_SIZE
+ * (64k granule), or a multiple that can be mapped using contiguous bits
+ * in the page tables: 32 * PMD_SIZE (16k granule)
+ */
+#ifdef CONFIG_ARM64_64K_PAGES
+#define ARM64_MEMSTART_ALIGN	SZ_512M
+#else
+#define ARM64_MEMSTART_ALIGN	SZ_1G
+#endif
 
 #endif	/* __ASM_KERNEL_PGTABLE_H */
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 31b56008f412..054ac25e7c2e 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -26,24 +26,9 @@
 #define KVM_ARM64_DEBUG_DIRTY_SHIFT	0
 #define KVM_ARM64_DEBUG_DIRTY		(1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
 
-#define kvm_ksym_ref(sym)		((void *)&sym + kvm_ksym_shift)
+#define kvm_ksym_ref(sym)		phys_to_virt((u64)&sym - kimage_voffset)
 
 #ifndef __ASSEMBLY__
-#if __GNUC__ > 4
-#define kvm_ksym_shift			(PAGE_OFFSET - KIMAGE_VADDR)
-#else
-/*
- * GCC versions 4.9 and older will fold the constant below into the addend of
- * the reference to 'sym' above if kvm_ksym_shift is declared static or if the
- * constant is used directly. However, since we use the small code model for
- * the core kernel, the reference to 'sym' will be emitted as a adrp/add pair,
- * with a +/- 4 GB range, resulting in linker relocation errors if the shift
- * is sufficiently large. So prevent the compiler from folding the shift into
- * the addend, by making the shift a variable with external linkage.
- */
-__weak u64 kvm_ksym_shift = PAGE_OFFSET - KIMAGE_VADDR;
-#endif
-
 struct kvm;
 struct kvm_vcpu;
 
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 18b7e77c7495..3239e4d78e0d 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -24,6 +24,7 @@
 #include <linux/compiler.h>
 #include <linux/const.h>
 #include <linux/types.h>
+#include <asm/bug.h>
 #include <asm/sizes.h>
 
 /*
@@ -88,10 +89,10 @@
 #define __virt_to_phys(x) ({						\
 	phys_addr_t __x = (phys_addr_t)(x);				\
 	__x >= PAGE_OFFSET ? (__x - PAGE_OFFSET + PHYS_OFFSET) :	\
-			     (__x - KIMAGE_VADDR + PHYS_OFFSET); })
+			     (__x - kimage_voffset); })
 
 #define __phys_to_virt(x)	((unsigned long)((x) - PHYS_OFFSET + PAGE_OFFSET))
-#define __phys_to_kimg(x)	((unsigned long)((x) - PHYS_OFFSET + KIMAGE_VADDR))
+#define __phys_to_kimg(x)	((unsigned long)((x) + kimage_voffset))
 
 /*
  * Convert a page to/from a physical address
@@ -133,15 +134,16 @@
 
 extern phys_addr_t		memstart_addr;
 /* PHYS_OFFSET - the physical address of the start of memory. */
-#define PHYS_OFFSET		({ memstart_addr; })
+#define PHYS_OFFSET		({ BUG_ON(memstart_addr & 1); memstart_addr; })
+
+/* the offset between the kernel virtual and physical mappings */
+extern u64			kimage_voffset;
 
 /*
- * The maximum physical address that the linear direct mapping
- * of system RAM can cover. (PAGE_OFFSET can be interpreted as
- * a 2's complement signed quantity and negated to derive the
- * maximum size of the linear mapping.)
+ * Allow all memory at the discovery stage. We will clip it later.
  */
-#define MAX_MEMBLOCK_ADDR	({ memstart_addr - PAGE_OFFSET - 1; })
+#define MIN_MEMBLOCK_ADDR	0
+#define MAX_MEMBLOCK_ADDR	U64_MAX
 
 /*
  * PFNs are used to describe any physical page; this means
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 04d38a058b19..05b98289093e 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -428,7 +428,11 @@ __mmap_switched:
 	and	x4, x4, #~(THREAD_SIZE - 1)
 	msr	sp_el0, x4			// Save thread_info
 	str_l	x21, __fdt_pointer, x5		// Save FDT pointer
-	str_l	x24, memstart_addr, x6		// Save PHYS_OFFSET
+
+	ldr	x4, =KIMAGE_VADDR		// Save the offset between
+	sub	x4, x4, x24			// the kernel virtual and
+	str_l	x4, kimage_voffset, x5		// physical mappings
+
 	mov	x29, #0
 #ifdef CONFIG_KASAN
 	bl	kasan_early_init
diff --git a/arch/arm64/kernel/image.h b/arch/arm64/kernel/image.h
index 999633bd7294..c9c62cab25a4 100644
--- a/arch/arm64/kernel/image.h
+++ b/arch/arm64/kernel/image.h
@@ -42,15 +42,18 @@
 #endif
 
 #ifdef CONFIG_CPU_BIG_ENDIAN
-#define __HEAD_FLAG_BE	1
+#define __HEAD_FLAG_BE		1
 #else
-#define __HEAD_FLAG_BE	0
+#define __HEAD_FLAG_BE		0
 #endif
 
-#define __HEAD_FLAG_PAGE_SIZE ((PAGE_SHIFT - 10) / 2)
+#define __HEAD_FLAG_PAGE_SIZE	((PAGE_SHIFT - 10) / 2)
 
-#define __HEAD_FLAGS	((__HEAD_FLAG_BE << 0) |	\
-			 (__HEAD_FLAG_PAGE_SIZE << 1))
+#define __HEAD_FLAG_PHYS_BASE	1
+
+#define __HEAD_FLAGS		((__HEAD_FLAG_BE << 0) |	\
+				 (__HEAD_FLAG_PAGE_SIZE << 1) |	\
+				 (__HEAD_FLAG_PHYS_BASE << 3))
 
 /*
  * These will output as part of the Image header, which should be little-endian
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 52d1fc465885..c0ea54bd9995 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -35,8 +35,10 @@
 #include <linux/efi.h>
 #include <linux/swiotlb.h>
 
+#include <asm/boot.h>
 #include <asm/fixmap.h>
 #include <asm/kasan.h>
+#include <asm/kernel-pgtable.h>
 #include <asm/memory.h>
 #include <asm/sections.h>
 #include <asm/setup.h>
@@ -46,7 +48,13 @@
 
 #include "mm.h"
 
-phys_addr_t memstart_addr __read_mostly = 0;
+/*
+ * We need to be able to catch inadvertent references to memstart_addr
+ * that occur (potentially in generic code) before arm64_memblock_init()
+ * executes, which assigns it its actual value. So use a default value
+ * that cannot be mistaken for a real physical address.
+ */
+phys_addr_t memstart_addr __read_mostly = ~0ULL;
 phys_addr_t arm64_dma_phys_limit __read_mostly;
 
 #ifdef CONFIG_BLK_DEV_INITRD
@@ -160,7 +168,33 @@ early_param("mem", early_mem);
 
 void __init arm64_memblock_init(void)
 {
-	memblock_enforce_memory_limit(memory_limit);
+	const s64 linear_region_size = -(s64)PAGE_OFFSET;
+
+	/*
+	 * Select a suitable value for the base of physical memory.
+	 */
+	memstart_addr = round_down(memblock_start_of_DRAM(),
+				   ARM64_MEMSTART_ALIGN);
+
+	/*
+	 * Remove the memory that we will not be able to cover with the
+	 * linear mapping. Take care not to clip the kernel which may be
+	 * high in memory.
+	 */
+	memblock_remove(max(memstart_addr + linear_region_size, __pa(_end)),
+			ULLONG_MAX);
+	if (memblock_end_of_DRAM() > linear_region_size)
+		memblock_remove(0, memblock_end_of_DRAM() - linear_region_size);
+
+	/*
+	 * Apply the memory limit if it was set. Since the kernel may be loaded
+	 * high up in memory, add back the kernel region that must be accessible
+	 * via the linear mapping.
+	 */
+	if (memory_limit != (phys_addr_t)ULLONG_MAX) {
+		memblock_enforce_memory_limit(memory_limit);
+		memblock_add(__pa(_text), (u64)(_end - _text));
+	}
 
 	/*
 	 * Register the kernel text, kernel data, initrd, and initial
@@ -386,3 +420,28 @@ static int __init keepinitrd_setup(char *__unused)
 
 __setup("keepinitrd", keepinitrd_setup);
 #endif
+
+/*
+ * Dump out memory limit information on panic.
+ */
+static int dump_mem_limit(struct notifier_block *self, unsigned long v, void *p)
+{
+	if (memory_limit != (phys_addr_t)ULLONG_MAX) {
+		pr_emerg("Memory Limit: %llu MB\n", memory_limit >> 20);
+	} else {
+		pr_emerg("Memory Limit: none\n");
+	}
+	return 0;
+}
+
+static struct notifier_block mem_limit_notifier = {
+	.notifier_call = dump_mem_limit,
+};
+
+static int __init register_mem_limit_dumper(void)
+{
+	atomic_notifier_chain_register(&panic_notifier_list,
+				       &mem_limit_notifier);
+	return 0;
+}
+__initcall(register_mem_limit_dumper);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6eb8e49889d0..b83c6bf7f90d 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -46,6 +46,9 @@
 
 u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
 
+u64 kimage_voffset __read_mostly;
+EXPORT_SYMBOL(kimage_voffset);
+
 /*
  * Empty_zero_page is a special page that is used for zero-initialized data
  * and COW.
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 02/11] of/fdt: factor out assignment of initrd_start/initrd_end
  2016-02-16 12:52 ` [PATCH v6sub1 02/11] of/fdt: factor out assignment of initrd_start/initrd_end Ard Biesheuvel
@ 2016-02-16 17:28   ` Rob Herring
  0 siblings, 0 replies; 26+ messages in thread
From: Rob Herring @ 2016-02-16 17:28 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Feb 16, 2016 at 6:52 AM, Ard Biesheuvel
<ard.biesheuvel@linaro.org> wrote:
> Since architectures may not yet have their linear mapping up and running
> when the initrd address is discovered from the DT, factor out the
> assignment of initrd_start and initrd_end, so that an architecture can
> override it and use the translation it needs.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> ---
>  drivers/of/fdt.c | 14 +++++++++++---
>  1 file changed, 11 insertions(+), 3 deletions(-)

Acked-by: Rob Herring <robh@kernel.org>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
                   ` (10 preceding siblings ...)
  2016-02-16 12:52 ` [PATCH v6sub1 11/11] arm64: allow kernel Image to be loaded anywhere in physical memory Ard Biesheuvel
@ 2016-02-18 18:25 ` Catalin Marinas
  2016-02-18 18:27   ` Ard Biesheuvel
  11 siblings, 1 reply; 26+ messages in thread
From: Catalin Marinas @ 2016-02-18 18:25 UTC (permalink / raw)
  To: linux-arm-kernel

On Tue, Feb 16, 2016 at 01:52:31PM +0100, Ard Biesheuvel wrote:
> Ard Biesheuvel (11):
>   of/fdt: make memblock minimum physical address arch configurable
>   of/fdt: factor out assignment of initrd_start/initrd_end
>   arm64: prevent potential circular header dependencies in asm/bug.h
>   arm64: add support for ioremap() block mappings
>   arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
>   arm64: pgtable: implement static [pte|pmd|pud]_offset variants
>   arm64: decouple early fixmap init from linear mapping
>   arm64: kvm: deal with kernel symbols outside of linear mapping
>   arm64: move kernel image to base of vmalloc area
>   arm64: defer __va translation of initrd_start and initrd_end
>   arm64: allow kernel Image to be loaded anywhere in physical memory

I queued these patches (again) for 4.6. I'll wait a few days with the
rest of KASLR until these get a bit more coverage in -next.

Thanks.

-- 
Catalin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-18 18:25 ` [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Catalin Marinas
@ 2016-02-18 18:27   ` Ard Biesheuvel
  2016-02-18 19:38     ` Ard Biesheuvel
  0 siblings, 1 reply; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-18 18:27 UTC (permalink / raw)
  To: linux-arm-kernel

On 18 February 2016 at 19:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Tue, Feb 16, 2016 at 01:52:31PM +0100, Ard Biesheuvel wrote:
>> Ard Biesheuvel (11):
>>   of/fdt: make memblock minimum physical address arch configurable
>>   of/fdt: factor out assignment of initrd_start/initrd_end
>>   arm64: prevent potential circular header dependencies in asm/bug.h
>>   arm64: add support for ioremap() block mappings
>>   arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
>>   arm64: pgtable: implement static [pte|pmd|pud]_offset variants
>>   arm64: decouple early fixmap init from linear mapping
>>   arm64: kvm: deal with kernel symbols outside of linear mapping
>>   arm64: move kernel image to base of vmalloc area
>>   arm64: defer __va translation of initrd_start and initrd_end
>>   arm64: allow kernel Image to be loaded anywhere in physical memory
>
> I queued this patches (again) for 4.6. I'll wait a few days with the
> rest of KASLR until these get a bit more coverage in -next.
>
> Thanks.
>

Fingers crossed :-)

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-18 18:27   ` Ard Biesheuvel
@ 2016-02-18 19:38     ` Ard Biesheuvel
  2016-02-19  8:05       ` Ard Biesheuvel
  0 siblings, 1 reply; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-18 19:38 UTC (permalink / raw)
  To: linux-arm-kernel

On 18 February 2016 at 19:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 18 February 2016 at 19:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> On Tue, Feb 16, 2016 at 01:52:31PM +0100, Ard Biesheuvel wrote:
>>> Ard Biesheuvel (11):
>>>   of/fdt: make memblock minimum physical address arch configurable
>>>   of/fdt: factor out assignment of initrd_start/initrd_end
>>>   arm64: prevent potential circular header dependencies in asm/bug.h
>>>   arm64: add support for ioremap() block mappings
>>>   arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
>>>   arm64: pgtable: implement static [pte|pmd|pud]_offset variants
>>>   arm64: decouple early fixmap init from linear mapping
>>>   arm64: kvm: deal with kernel symbols outside of linear mapping
>>>   arm64: move kernel image to base of vmalloc area
>>>   arm64: defer __va translation of initrd_start and initrd_end
>>>   arm64: allow kernel Image to be loaded anywhere in physical memory
>>
>> I queued this patches (again) for 4.6. I'll wait a few days with the
>> rest of KASLR until these get a bit more coverage in -next.
>>

I rebased the remaining patches onto for-next/core, and pushed it here:
https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-kaslr-v6

I need to check if everything still works, and if it does, I will send
them out as v6sub2.
Note that I have included the arm64 extable patch plus its generic
dependency, and the kallsyms patches as well. We can decide later how
to proceed with those, but for now, I included them for completeness.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-18 19:38     ` Ard Biesheuvel
@ 2016-02-19  8:05       ` Ard Biesheuvel
  2016-02-19 14:25         ` Catalin Marinas
  0 siblings, 1 reply; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-19  8:05 UTC (permalink / raw)
  To: linux-arm-kernel

On 18 February 2016 at 20:38, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 18 February 2016 at 19:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> On 18 February 2016 at 19:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
>>> On Tue, Feb 16, 2016 at 01:52:31PM +0100, Ard Biesheuvel wrote:
>>>> Ard Biesheuvel (11):
>>>>   of/fdt: make memblock minimum physical address arch configurable
>>>>   of/fdt: factor out assignment of initrd_start/initrd_end
>>>>   arm64: prevent potential circular header dependencies in asm/bug.h
>>>>   arm64: add support for ioremap() block mappings
>>>>   arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
>>>>   arm64: pgtable: implement static [pte|pmd|pud]_offset variants
>>>>   arm64: decouple early fixmap init from linear mapping
>>>>   arm64: kvm: deal with kernel symbols outside of linear mapping
>>>>   arm64: move kernel image to base of vmalloc area
>>>>   arm64: defer __va translation of initrd_start and initrd_end
>>>>   arm64: allow kernel Image to be loaded anywhere in physical memory
>>>
>>> I queued this patches (again) for 4.6. I'll wait a few days with the
>>> rest of KASLR until these get a bit more coverage in -next.
>>>
>
> I rebased the remaining patches onto for-next/core, and pushed it here:
> https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-kaslr-v6
>
> I need to check if everything still works, and if it does, I will send
> them out as v6sub2
> Note that I have included the arm64 extable patch plus its generic
> dependency, and the kallsyms patches as well. We can decide later how
> to proceed with those, but for now, I included them for completeness.

OK, as it turns out, my arm64/extable patch conflicts with the UAO
patches that are now in for-next/core: not a textual conflict, but those
patches add additional absolute extable entries that need to be
updated to relative as well.

So it appears that akpm will need to drop that patch anyway, as he
won't be able to carry an updated version since he does not have the
UAO patches. That means it probably makes even more sense to take
those through the arm64 tree as well (minus the x86 one, which has a
conflict now as well). In fact, perhaps it makes sense to only take
the base patch and the arm64 patch, and I can send the remaining ones
to the various maintainers (or akpm) for v4.7

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-19  8:05       ` Ard Biesheuvel
@ 2016-02-19 14:25         ` Catalin Marinas
  2016-02-19 14:27           ` Ard Biesheuvel
  0 siblings, 1 reply; 26+ messages in thread
From: Catalin Marinas @ 2016-02-19 14:25 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 19, 2016 at 09:05:25AM +0100, Ard Biesheuvel wrote:
> On 18 February 2016 at 20:38, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> > On 18 February 2016 at 19:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> >> On 18 February 2016 at 19:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
> >>> On Tue, Feb 16, 2016 at 01:52:31PM +0100, Ard Biesheuvel wrote:
> >>>> Ard Biesheuvel (11):
> >>>>   of/fdt: make memblock minimum physical address arch configurable
> >>>>   of/fdt: factor out assignment of initrd_start/initrd_end
> >>>>   arm64: prevent potential circular header dependencies in asm/bug.h
> >>>>   arm64: add support for ioremap() block mappings
> >>>>   arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
> >>>>   arm64: pgtable: implement static [pte|pmd|pud]_offset variants
> >>>>   arm64: decouple early fixmap init from linear mapping
> >>>>   arm64: kvm: deal with kernel symbols outside of linear mapping
> >>>>   arm64: move kernel image to base of vmalloc area
> >>>>   arm64: defer __va translation of initrd_start and initrd_end
> >>>>   arm64: allow kernel Image to be loaded anywhere in physical memory
> >>>
> >>> I queued this patches (again) for 4.6. I'll wait a few days with the
> >>> rest of KASLR until these get a bit more coverage in -next.
> >>>
> >
> > I rebased the remaining patches onto for-next/core, and pushed it here:
> > https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-kaslr-v6
> >
> > I need to check if everything still works, and if it does, I will send
> > them out as v6sub2
> > Note that I have included the arm64 extable patch plus its generic
> > dependency, and the kallsyms patches as well. We can decide later how
> > to proceed with those, but for now, I included them for completeness.
> 
> OK, as it turns out, my arm64/extable patch conflicts with the UAO
> patches that are now in for-next/core, not textually, but those
> patches add additional absolute extable entries that need to be
> updated to relative as well.

I noticed this as well while testing KASLR.

> So it appears that akpm will need to drop that patch anyway, as he
> won't be able to carry an updated version since he does not have the
> UAO patches. That means it probably makes even more sense to take
> those through the arm64 tree as well (minus the x86 one, which has a
> conflict now as well). In fact, perhaps it makes sense to only take
> the base patch and the arm64 patch, and I can send the remaining ones
> to the various maintainers (or akpm) for v4.7

Or we make BUILDTIME_EXTABLE_SORT depend on !RANDOMIZE_BASE until we
sort out the extable patches.

-- 
Catalin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-19 14:25         ` Catalin Marinas
@ 2016-02-19 14:27           ` Ard Biesheuvel
  2016-02-19 14:29             ` Ard Biesheuvel
  0 siblings, 1 reply; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-19 14:27 UTC (permalink / raw)
  To: linux-arm-kernel

On 19 February 2016 at 15:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Fri, Feb 19, 2016 at 09:05:25AM +0100, Ard Biesheuvel wrote:
>> On 18 February 2016 at 20:38, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> > On 18 February 2016 at 19:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> >> On 18 February 2016 at 19:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> >>> On Tue, Feb 16, 2016 at 01:52:31PM +0100, Ard Biesheuvel wrote:
>> >>>> Ard Biesheuvel (11):
>> >>>>   of/fdt: make memblock minimum physical address arch configurable
>> >>>>   of/fdt: factor out assignment of initrd_start/initrd_end
>> >>>>   arm64: prevent potential circular header dependencies in asm/bug.h
>> >>>>   arm64: add support for ioremap() block mappings
>> >>>>   arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
>> >>>>   arm64: pgtable: implement static [pte|pmd|pud]_offset variants
>> >>>>   arm64: decouple early fixmap init from linear mapping
>> >>>>   arm64: kvm: deal with kernel symbols outside of linear mapping
>> >>>>   arm64: move kernel image to base of vmalloc area
>> >>>>   arm64: defer __va translation of initrd_start and initrd_end
>> >>>>   arm64: allow kernel Image to be loaded anywhere in physical memory
>> >>>
>> >>> I queued this patches (again) for 4.6. I'll wait a few days with the
>> >>> rest of KASLR until these get a bit more coverage in -next.
>> >>>
>> >
>> > I rebased the remaining patches onto for-next/core, and pushed it here:
>> > https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-kaslr-v6
>> >
>> > I need to check if everything still works, and if it does, I will send
>> > them out as v6sub2
>> > Note that I have included the arm64 extable patch plus its generic
>> > dependency, and the kallsyms patches as well. We can decide later how
>> > to proceed with those, but for now, I included them for completeness.
>>
>> OK, as it turns out, my arm64/extable patch conflicts with the UAO
>> patches that are now in for-next/core, not textually, but those
>> patches add additional absolute extable entries that need to be
>> updated to relative as well.
>
> I noticed this as well while testing KASLR.
>
>> So it appears that akpm will need to drop that patch anyway, as he
>> won't be able to carry an updated version since he does not have the
>> UAO patches. That means it probably makes even more sense to take
>> those through the arm64 tree as well (minus the x86 one, which has a
>> conflict now as well). In fact, perhaps it makes sense to only take
>> the base patch and the arm64 patch, and I can send the remaining ones
>> to the various maintainers (or akpm) for v4.7
>
> Or we make BUILDTIME_EXTABLE_SORT depend on !RANDOMIZE_BASE until we
> sort out the extable patches.
>

That would still result in breakage once the current version queued by
akpm hits mainline.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-19 14:27           ` Ard Biesheuvel
@ 2016-02-19 14:29             ` Ard Biesheuvel
  2016-02-19 14:37               ` Catalin Marinas
  0 siblings, 1 reply; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-19 14:29 UTC (permalink / raw)
  To: linux-arm-kernel

On 19 February 2016 at 15:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 19 February 2016 at 15:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> On Fri, Feb 19, 2016 at 09:05:25AM +0100, Ard Biesheuvel wrote:
>>> On 18 February 2016 at 20:38, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>>> > On 18 February 2016 at 19:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>>> >> On 18 February 2016 at 19:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
>>> >>> On Tue, Feb 16, 2016 at 01:52:31PM +0100, Ard Biesheuvel wrote:
>>> >>>> Ard Biesheuvel (11):
>>> >>>>   of/fdt: make memblock minimum physical address arch configurable
>>> >>>>   of/fdt: factor out assignment of initrd_start/initrd_end
>>> >>>>   arm64: prevent potential circular header dependencies in asm/bug.h
>>> >>>>   arm64: add support for ioremap() block mappings
>>> >>>>   arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region
>>> >>>>   arm64: pgtable: implement static [pte|pmd|pud]_offset variants
>>> >>>>   arm64: decouple early fixmap init from linear mapping
>>> >>>>   arm64: kvm: deal with kernel symbols outside of linear mapping
>>> >>>>   arm64: move kernel image to base of vmalloc area
>>> >>>>   arm64: defer __va translation of initrd_start and initrd_end
>>> >>>>   arm64: allow kernel Image to be loaded anywhere in physical memory
>>> >>>
>>> >>> I queued this patches (again) for 4.6. I'll wait a few days with the
>>> >>> rest of KASLR until these get a bit more coverage in -next.
>>> >>>
>>> >
>>> > I rebased the remaining patches onto for-next/core, and pushed it here:
>>> > https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-kaslr-v6
>>> >
>>> > I need to check if everything still works, and if it does, I will send
>>> > them out as v6sub2
>>> > Note that I have included the arm64 extable patch plus its generic
>>> > dependency, and the kallsyms patches as well. We can decide later how
>>> > to proceed with those, but for now, I included them for completeness.
>>>
>>> OK, as it turns out, my arm64/extable patch conflicts with the UAO
>>> patches that are now in for-next/core, not textually, but those
>>> patches add additional absolute extable entries that need to be
>>> updated to relative as well.
>>
>> I noticed this as well while testing KASLR.
>>
>>> So it appears that akpm will need to drop that patch anyway, as he
>>> won't be able to carry an updated version since he does not have the
>>> UAO patches. That means it probably makes even more sense to take
>>> those through the arm64 tree as well (minus the x86 one, which has a
>>> conflict now as well). In fact, perhaps it makes sense to only take
>>> the base patch and the arm64 patch, and I can send the remaining ones
>>> to the various maintainers (or akpm) for v4.7
>>
>> Or we make BUILDTIME_EXTABLE_SORT depend on !RANDOMIZE_BASE until we
>> sort out the extable patches.
>>
>
> That would still result in breakage once the current version queued by
> akpm hits mainline.

... or in other words, the breakage is already in -next. This is
completely unrelated to the sorting, btw, but due to the difference
between relative and absolute extable entries.
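
(For reference, the distinction is between entries that store absolute
virtual addresses and entries that store offsets relative to the entry
itself; the struct layouts below are a sketch of the two schemes, not the
exact arm64 or x86 definitions:)

/* absolute: link-time virtual addresses, which need fixing up when the
   kernel runs at a different virtual address than it was linked at */
struct abs_exentry {
	unsigned long insn;
	unsigned long fixup;
};

/* relative: offsets from the entry itself, position independent, so a
   randomised kernel needs no extable fixups at boot */
struct rel_exentry {
	int insn;
	int fixup;
};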

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-19 14:29             ` Ard Biesheuvel
@ 2016-02-19 14:37               ` Catalin Marinas
  2016-02-19 14:40                 ` Ard Biesheuvel
  0 siblings, 1 reply; 26+ messages in thread
From: Catalin Marinas @ 2016-02-19 14:37 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 19, 2016 at 03:29:13PM +0100, Ard Biesheuvel wrote:
> On 19 February 2016 at 15:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> > On 19 February 2016 at 15:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
> >> On Fri, Feb 19, 2016 at 09:05:25AM +0100, Ard Biesheuvel wrote:
> >>> So it appears that akpm will need to drop that patch anyway, as he
> >>> won't be able to carry an updated version since he does not have the
> >>> UAO patches. That means it probably makes even more sense to take
> >>> those through the arm64 tree as well (minus the x86 one, which has a
> >>> conflict now as well). In fact, perhaps it makes sense to only take
> >>> the base patch and the arm64 patch, and I can send the remaining ones
> >>> to the various maintainers (or akpm) for v4.7
> >>
> >> Or we make BUILDTIME_EXTABLE_SORT depend on !RANDOMIZE_BASE until we
> >> sort out the extable patches.
> >
> > That would still result in breakage once the current version queued by
> > akpm hits mainline.
> 
> ... or in other words, the breakage is already in -next. This is
> completely unrelated to the sorting, btw, but due to the difference
> between relative/absolute

Ah, I now realised that it was only working fine for me before merging
the EFI patches to actually do the base randomisation. Once we fully
randomise the load address, we must have relative extable.

Is your branch updated with the patches needed for arm64 (against
for-next/core)?

-- 
Catalin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-19 14:37               ` Catalin Marinas
@ 2016-02-19 14:40                 ` Ard Biesheuvel
  2016-02-19 14:57                   ` Catalin Marinas
  2016-02-19 17:34                   ` Catalin Marinas
  0 siblings, 2 replies; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-19 14:40 UTC (permalink / raw)
  To: linux-arm-kernel

On 19 February 2016 at 15:37, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Fri, Feb 19, 2016 at 03:29:13PM +0100, Ard Biesheuvel wrote:
>> On 19 February 2016 at 15:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> > On 19 February 2016 at 15:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> >> On Fri, Feb 19, 2016 at 09:05:25AM +0100, Ard Biesheuvel wrote:
>> >>> So it appears that akpm will need to drop that patch anyway, as he
>> >>> won't be able to carry an updated version since he does not have the
>> >>> UAO patches. That means it probably makes even more sense to take
>> >>> those through the arm64 tree as well (minus the x86 one, which has a
>> >>> conflict now as well). In fact, perhaps it makes sense to only take
>> >>> the base patch and the arm64 patch, and I can send the remaining ones
>> >>> to the various maintainers (or akpm) for v4.7
>> >>
>> >> Or we make BUILDTIME_EXTABLE_SORT depend on !RANDOMIZE_BASE until we
>> >> sort out the extable patches.
>> >
>> > That would still result in breakage once the current version queued by
>> > akpm hits mainline.
>>
>> ... or in other words, the breakage is already in -next. This is
>> completely unrelated to the sorting, btw, but due to the difference
>> between relative/absolute
>
> Ah, I now realised that it was only working fine for me before merging
> the EFI patches to actually do the base randomisation. Once we fully
> randomise the load address, we must have relative extable.
>
> Is your branch updated with the patches needed for arm64 (against
> for-next/core)?
>

Yes. I dropped the kallsyms patches, and included only the base and
arm64 extable patches, with the UAO issue fixed.

https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-kaslr-v6
git://git.linaro.org/people/ard.biesheuvel/linux-arm.git arm64-kaslr-v6

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-19 14:40                 ` Ard Biesheuvel
@ 2016-02-19 14:57                   ` Catalin Marinas
  2016-02-19 17:34                   ` Catalin Marinas
  1 sibling, 0 replies; 26+ messages in thread
From: Catalin Marinas @ 2016-02-19 14:57 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 19, 2016 at 03:40:32PM +0100, Ard Biesheuvel wrote:
> On 19 February 2016 at 15:37, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Fri, Feb 19, 2016 at 03:29:13PM +0100, Ard Biesheuvel wrote:
> >> On 19 February 2016 at 15:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> >> > On 19 February 2016 at 15:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
> >> >> On Fri, Feb 19, 2016 at 09:05:25AM +0100, Ard Biesheuvel wrote:
> >> >>> So it appears that akpm will need to drop that patch anyway, as he
> >> >>> won't be able to carry an updated version since he does not have the
> >> >>> UAO patches. That means it probably makes even more sense to take
> >> >>> those through the arm64 tree as well (minus the x86 one, which has a
> >> >>> conflict now as well). In fact, perhaps it makes sense to only take
> >> >>> the base patch and the arm64 patch, and I can send the remaining ones
> >> >>> to the various maintainers (or akpm) for v4.7
> >> >>
> >> >> Or we make BUILDTIME_EXTABLE_SORT depend on !RANDOMIZE_BASE until we
> >> >> sort out the extable patches.
> >> >
> >> > That would still result in breakage once the current version queued by
> >> > akpm hits mainline.
> >>
> >> ... or in other words, the breakage is already in -next. This is
> >> completely unrelated to the sorting, btw, but due to the difference
> >> between relative/absolute
> >
> > Ah, I now realised that it was only working fine for me before merging
> > the EFI patches to actually do the base randomisation. Once we fully
> > randomise the load address, we must have relative extable.
> >
> > Is your branch updated with the patches needed for arm64 (against
> > for-next/core)?
> 
> Yes. I dropped the kallsyms patches, and included only the base and
> arm64 extable patches, with the UAO issue fixed.
> 
> https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-kaslr-v6
> git://git.linaro.org/people/ard.biesheuvel/linux-arm.git arm64-kaslr-v6

Thanks, I'll give it a try.

-- 
Catalin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-19 14:40                 ` Ard Biesheuvel
  2016-02-19 14:57                   ` Catalin Marinas
@ 2016-02-19 17:34                   ` Catalin Marinas
  2016-02-19 17:38                     ` Ard Biesheuvel
  1 sibling, 1 reply; 26+ messages in thread
From: Catalin Marinas @ 2016-02-19 17:34 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 19, 2016 at 03:40:32PM +0100, Ard Biesheuvel wrote:
> On 19 February 2016 at 15:37, Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Fri, Feb 19, 2016 at 03:29:13PM +0100, Ard Biesheuvel wrote:
> >> On 19 February 2016 at 15:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> >> > On 19 February 2016 at 15:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
> >> >> On Fri, Feb 19, 2016 at 09:05:25AM +0100, Ard Biesheuvel wrote:
> >> >>> So it appears that akpm will need to drop that patch anyway, as he
> >> >>> won't be able to carry an updated version since he does not have the
> >> >>> UAO patches. That means it probably makes even more sense to take
> >> >>> those through the arm64 tree as well (minus the x86 one, which has a
> >> >>> conflict now as well). In fact, perhaps it makes sense to only take
> >> >>> the base patch and the arm64 patch, and I can send the remaining ones
> >> >>> to the various maintainers (or akpm) for v4.7
> >> >>
> >> >> Or we make BUILDTIME_EXTABLE_SORT depend on !RANDOMIZE_BASE until we
> >> >> sort out the extable patches.
> >> >
> >> > That would still result in breakage once the current version queued by
> >> > akpm hits mainline.
> >>
> >> ... or in other words, the breakage is already in -next. This is
> >> completely unrelated to the sorting, btw, but due to the difference
> >> between relative/absolute
> >
> > Ah, I now realised that it was only working fine for me before merging
> > the EFI patches to actually do the base randomisation. Once we fully
> > randomise the load address, we must have relative extable.
> >
> > Is your branch updated with the patches needed for arm64 (against
> > for-next/core)?
> 
> Yes. I dropped the kallsyms patches, and included only the base and
> arm64 extable patches, with the UAO issue fixed.
> 
> https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-kaslr-v6
> git://git.linaro.org/people/ard.biesheuvel/linux-arm.git arm64-kaslr-v6

I pushed these patches to the arm64 for-next/kaslr branch for now, rebased
against the latest for-next/core branch. There was one commit
(e9ee71275034 arm64: add support for module PLTs) which inadvertently
got some extra information in the log but I found it useful, so I kept
it. If nothing else falls apart, I'll push them into -next on Monday.

I noticed that we still have MODULES_VADDR around and used in a couple of
places (printing the kernel memory layout during init, debugfs
kernel_page_tables and KASAN). Shouldn't we use module_alloc_base
instead?

-- 
Catalin

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-19 17:34                   ` Catalin Marinas
@ 2016-02-19 17:38                     ` Ard Biesheuvel
  2016-02-19 17:43                       ` Catalin Marinas
  0 siblings, 1 reply; 26+ messages in thread
From: Ard Biesheuvel @ 2016-02-19 17:38 UTC (permalink / raw)
  To: linux-arm-kernel

On 19 February 2016 at 18:34, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Fri, Feb 19, 2016 at 03:40:32PM +0100, Ard Biesheuvel wrote:
>> On 19 February 2016 at 15:37, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> > On Fri, Feb 19, 2016 at 03:29:13PM +0100, Ard Biesheuvel wrote:
>> >> On 19 February 2016 at 15:27, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> >> > On 19 February 2016 at 15:25, Catalin Marinas <catalin.marinas@arm.com> wrote:
>> >> >> On Fri, Feb 19, 2016 at 09:05:25AM +0100, Ard Biesheuvel wrote:
>> >> >>> So it appears that akpm will need to drop that patch anyway, as he
>> >> >>> won't be able to carry an updated version since he does not have the
>> >> >>> UAO patches. That means it probably makes even more sense to take
>> >> >>> those through the arm64 tree as well (minus the x86 one, which has a
>> >> >>> conflict now as well). In fact, perhaps it makes sense to only take
>> >> >>> the base patch and the arm64 patch, and I can send the remaining ones
>> >> >>> to the various maintainers (or akpm) for v4.7
>> >> >>
>> >> >> Or we make BUILDTIME_EXTABLE_SORT depend on !RANDOMIZE_BASE until we
>> >> >> sort out the extable patches.
>> >> >
>> >> > That would still result in breakage once the current version queued by
>> >> > akpm hits mainline.
>> >>
>> >> ... or in other words, the breakage is already in -next. This is
>> >> completely unrelated to the sorting, btw, but due to the difference
>> >> between relative/absolute
>> >
>> > Ah, I now realised that it was only working fine for me before merging
>> > the EFI patches to actually do the base randomisation. Once we fully
>> > randomise the load address, we must have relative extable.
>> >
>> > Is your branch updated with the patches needed for arm64 (against
>> > for-next/core)?
>>
>> Yes. I dropped the kallsyms patches, and included only the base and
>> arm64 extable patches, with the UAO issue fixed.
>>
>> https://git.linaro.org/people/ard.biesheuvel/linux-arm.git/shortlog/refs/heads/arm64-kaslr-v6
>> git://git.linaro.org/people/ard.biesheuvel/linux-arm.git arm64-kaslr-v6
>
> I pushed these patches to the arm64 for-next/kaslr for now, rebased
> against the latest for-next/core branch. There was one commit
> (e9ee71275034 arm64: add support for module PLTs) which inadvertently
> got some extra information in the log but I found it useful, so I kept
> it. If nothing else falls, I'll push them into -next on Monday.
>

OK

> I noticed that we still have MODULES_VADDR around and used in couple of
> places (printing the kernel memory layout during init, debugfs
> kernel_page_tables and KASAN). Shouldn't we use module_alloc_base
> instead?
>

For KASAN, I updated the patches so that the modules are always
allocated in the original module region, to prevent issues with the
zero shadow that backs the vmalloc region.

For the other purposes, it is really a matter of taste. The reserved
region still exists, KASLR or not, so if you omit it, you will have a
hole in the memory map. For the page table dumper, I did not add a
special section for the kernel either.

I'm happy to propose a patch that changes that, once we (you) decide
what you want to see.

-- 
Ard.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v6sub1 00/11] arm64: split linear and kernel mappings
  2016-02-19 17:38                     ` Ard Biesheuvel
@ 2016-02-19 17:43                       ` Catalin Marinas
  0 siblings, 0 replies; 26+ messages in thread
From: Catalin Marinas @ 2016-02-19 17:43 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Feb 19, 2016 at 06:38:19PM +0100, Ard Biesheuvel wrote:
> > I noticed that we still have MODULES_VADDR around and used in couple of
> > places (printing the kernel memory layout during init, debugfs
> > kernel_page_tables and KASAN). Shouldn't we use module_alloc_base
> > instead?
> 
> For KASAN, I updated the patches so that the modules are always
> allocated in the original module region, to prevent issues with the
> zero shadow that backs the vmalloc region.

Ah, I forgot about this.

> For the other purposes, it is really a matter of taste. The reserved
> region still exists, KASLR or not, so if you omit it, you will have a
> hole in the memory map. For the page table dumper, i did not add a
> special section for the kernel either.
> 
> I'm happy to propose a patch that changes that, once we (you) decide
> what you want to see.

Leave it as it is for now.

-- 
Catalin

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2016-02-19 17:43 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-16 12:52 [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Ard Biesheuvel
2016-02-16 12:52 ` [PATCH v6sub1 01/11] of/fdt: make memblock minimum physical address arch configurable Ard Biesheuvel
2016-02-16 12:52 ` [PATCH v6sub1 02/11] of/fdt: factor out assignment of initrd_start/initrd_end Ard Biesheuvel
2016-02-16 17:28   ` Rob Herring
2016-02-16 12:52 ` [PATCH v6sub1 03/11] arm64: prevent potential circular header dependencies in asm/bug.h Ard Biesheuvel
2016-02-16 12:52 ` [PATCH v6sub1 04/11] arm64: add support for ioremap() block mappings Ard Biesheuvel
2016-02-16 12:52 ` [PATCH v6sub1 05/11] arm64: introduce KIMAGE_VADDR as the virtual base of the kernel region Ard Biesheuvel
2016-02-16 12:52 ` [PATCH v6sub1 06/11] arm64: pgtable: implement static [pte|pmd|pud]_offset variants Ard Biesheuvel
2016-02-16 12:52 ` [PATCH v6sub1 07/11] arm64: decouple early fixmap init from linear mapping Ard Biesheuvel
2016-02-16 12:52 ` [PATCH v6sub1 08/11] arm64: kvm: deal with kernel symbols outside of " Ard Biesheuvel
2016-02-16 12:52 ` [PATCH v6sub1 09/11] arm64: move kernel image to base of vmalloc area Ard Biesheuvel
2016-02-16 12:52 ` [PATCH v6sub1 10/11] arm64: defer __va translation of initrd_start and initrd_end Ard Biesheuvel
2016-02-16 12:52 ` [PATCH v6sub1 11/11] arm64: allow kernel Image to be loaded anywhere in physical memory Ard Biesheuvel
2016-02-18 18:25 ` [PATCH v6sub1 00/11] arm64: split linear and kernel mappings Catalin Marinas
2016-02-18 18:27   ` Ard Biesheuvel
2016-02-18 19:38     ` Ard Biesheuvel
2016-02-19  8:05       ` Ard Biesheuvel
2016-02-19 14:25         ` Catalin Marinas
2016-02-19 14:27           ` Ard Biesheuvel
2016-02-19 14:29             ` Ard Biesheuvel
2016-02-19 14:37               ` Catalin Marinas
2016-02-19 14:40                 ` Ard Biesheuvel
2016-02-19 14:57                   ` Catalin Marinas
2016-02-19 17:34                   ` Catalin Marinas
2016-02-19 17:38                     ` Ard Biesheuvel
2016-02-19 17:43                       ` Catalin Marinas
