linux-kernel.vger.kernel.org archive mirror
* [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
@ 2019-08-09 10:07 Jason Yan
  2019-08-09 10:07 ` [PATCH v6 01/12] powerpc: unify definition of M_IF_NEEDED Jason Yan
                   ` (13 more replies)
  0 siblings, 14 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

This series implements KASLR for powerpc/fsl_booke/32, as a security
feature that deters exploit attempts relying on knowledge of the location
of kernel internals.

Since CONFIG_RELOCATABLE is already supported, all we need to do is map
or copy the kernel to a proper place and relocate. Freescale Book-E parts
expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1 entries
are not suitable for mapping the kernel directly in a randomized region,
so we choose to copy the kernel to a proper place and restart to
relocate.

Entropy is derived from the banner and timer base, which will change every
build and boot. This is not particularly strong on its own, so additionally
the bootloader may pass entropy via the /chosen/kaslr-seed node in the
device tree.

We will use the first 512M of the low memory to randomize the kernel
image. The memory will be split into 64M zones. We will use the lower 8
bits of the entropy to decide the index of the 64M zone. Then we choose a
16K-aligned offset inside the 64M zone to put the kernel in.

    KERNELBASE

        |-->   64M   <--|
        |               |
        +---------------+    +----------------+---------------+
        |               |....|    |kernel|    |               |
        +---------------+    +----------------+---------------+
        |                         |
        |----->   offset    <-----|

                              kernstart_virt_addr

We also check whether the chosen location would overlap with areas such as
the dtb, the initrd or the crashkernel. If we cannot find a proper area,
kaslr will be disabled and we boot from the original kernel location.
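
To illustrate the zone/offset selection described above, here is a
simplified sketch in C (not the exact code; kaslr_choose_location() in
patch 07 additionally special-cases the first zone and retries lower
zones when an overlap is found):

    /*
     * Simplified sketch: pick a 16K-aligned offset inside one of the
     * 64M zones of the first 512M, using the mixed entropy 'random'.
     */
    static unsigned long choose_offset_sketch(unsigned long random,
                                              unsigned long linear_sz,
                                              unsigned long kernel_sz)
    {
            unsigned long index, offset;

            /* The lower 8 bits of the entropy pick one of the 64M zones. */
            index = (random & 0xFF) % (linear_sz / SZ_64M);

            /* A 16K-aligned offset for the kernel inside that zone. */
            offset = round_down(random % (SZ_64M - kernel_sz), SZ_16K);

            return index * SZ_64M + offset;
    }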

Changes since v5:
 - Rename M_IF_NEEDED to MAS2_M_IF_NEEDED
 - Define some global variable as __ro_after_init
 - Replace kimage_vaddr with kernstart_virt_addr
 - Depend on RELOCATABLE, not select it
 - Modify the comment block below the SPDX tag
 - Remove some useless headers in kaslr_booke.c and move the is_second_reloc
   declaration to mmu_decl.h
 - Remove DBG() and use pr_debug() and rewrite comment above get_boot_seed().
 - Add a patch to document the KASLR implementation.
 - Split a patch from patch #10 which exports kaslr offset in VMCOREINFO ELF notes.
 - Remove extra logic around finding nokaslr string in cmdline.
 - Make regions static global and __initdata

Changes since v4:
 - Add Reviewed-by tag from Christophe
 - Remove an unnecessary cast
 - Remove unnecessary parenthesis
 - Fix checkpatch warning

Changes since v3:
 - Add Reviewed-by and Tested-by tag from Diana
 - Change the comment in fsl_booke_entry_mapping.S to be consistent
   with the new code.

Changes since v2:
 - Remove unnecessary #ifdef
 - Use SZ_64M instead of 0x4000000
 - Call early_init_dt_scan_chosen() to init boot_command_line
 - Rename kaslr_second_init() to kaslr_late_init()

Changes since v1:
 - Remove some useless 'extern' keywords.
 - Replace EXPORT_SYMBOL with EXPORT_SYMBOL_GPL
 - Improve some assembly code
 - Use memzero_explicit instead of memset
 - Use boot_command_line and remove early_command_line
 - Do not print kaslr offset if kaslr is disabled

Jason Yan (12):
  powerpc: unify definition of M_IF_NEEDED
  powerpc: move memstart_addr and kernstart_addr to init-common.c
  powerpc: introduce kernstart_virt_addr to store the kernel base
  powerpc/fsl_booke/32: introduce create_tlb_entry() helper
  powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
  powerpc/fsl_booke/32: implement KASLR infrastructure
  powerpc/fsl_booke/32: randomize the kernel image offset
  powerpc/fsl_booke/kaslr: clear the original kernel if randomized
  powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
  powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
  powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes
  powerpc/fsl_booke/32: Document KASLR implementation

 Documentation/powerpc/kaslr-booke32.rst       |  42 ++
 arch/powerpc/Kconfig                          |  11 +
 arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
 arch/powerpc/include/asm/page.h               |   7 +
 arch/powerpc/kernel/Makefile                  |   1 +
 arch/powerpc/kernel/early_32.c                |   2 +-
 arch/powerpc/kernel/exceptions-64e.S          |  12 +-
 arch/powerpc/kernel/fsl_booke_entry_mapping.S |  27 +-
 arch/powerpc/kernel/head_fsl_booke.S          |  55 ++-
 arch/powerpc/kernel/kaslr_booke.c             | 393 ++++++++++++++++++
 arch/powerpc/kernel/machine_kexec.c           |   1 +
 arch/powerpc/kernel/misc_64.S                 |   7 +-
 arch/powerpc/kernel/setup-common.c            |  20 +
 arch/powerpc/mm/init-common.c                 |   7 +
 arch/powerpc/mm/init_32.c                     |   5 -
 arch/powerpc/mm/init_64.c                     |   5 -
 arch/powerpc/mm/mmu_decl.h                    |  11 +
 arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
 18 files changed, 572 insertions(+), 52 deletions(-)
 create mode 100644 Documentation/powerpc/kaslr-booke32.rst
 create mode 100644 arch/powerpc/kernel/kaslr_booke.c

-- 
2.17.2



* [PATCH v6 01/12] powerpc: unify definition of M_IF_NEEDED
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-09 10:07 ` [PATCH v6 02/12] powerpc: move memstart_addr and kernstart_addr to init-common.c Jason Yan
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

M_IF_NEEDED is defined in too many places. Move it to a common place and
rename it to MAS2_M_IF_NEEDED, which is more readable.

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
---
 arch/powerpc/include/asm/nohash/mmu-book3e.h  | 10 ++++++++++
 arch/powerpc/kernel/exceptions-64e.S          | 12 +-----------
 arch/powerpc/kernel/fsl_booke_entry_mapping.S | 14 ++------------
 arch/powerpc/kernel/misc_64.S                 |  7 +------
 4 files changed, 14 insertions(+), 29 deletions(-)

diff --git a/arch/powerpc/include/asm/nohash/mmu-book3e.h b/arch/powerpc/include/asm/nohash/mmu-book3e.h
index 4c9777d256fb..fa3efc2d310f 100644
--- a/arch/powerpc/include/asm/nohash/mmu-book3e.h
+++ b/arch/powerpc/include/asm/nohash/mmu-book3e.h
@@ -221,6 +221,16 @@
 #define TLBILX_T_CLASS2			6
 #define TLBILX_T_CLASS3			7
 
+/*
+ * The mapping only needs to be cache-coherent on SMP, except on
+ * Freescale e500mc derivatives where it's also needed for coherent DMA.
+ */
+#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
+#define MAS2_M_IF_NEEDED	MAS2_M
+#else
+#define MAS2_M_IF_NEEDED	0
+#endif
+
 #ifndef __ASSEMBLY__
 #include <asm/bug.h>
 
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 1cfb3da4a84a..c5bc09b5e281 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1342,16 +1342,6 @@ skpinv:	addi	r6,r6,1				/* Increment */
 	sync
 	isync
 
-/*
- * The mapping only needs to be cache-coherent on SMP, except on
- * Freescale e500mc derivatives where it's also needed for coherent DMA.
- */
-#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
-#define M_IF_NEEDED	MAS2_M
-#else
-#define M_IF_NEEDED	0
-#endif
-
 /* 6. Setup KERNELBASE mapping in TLB[0]
  *
  * r3 = MAS0 w/TLBSEL & ESEL for the entry we started in
@@ -1364,7 +1354,7 @@ skpinv:	addi	r6,r6,1				/* Increment */
 	ori	r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_1GB))@l
 	mtspr	SPRN_MAS1,r6
 
-	LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET | M_IF_NEEDED)
+	LOAD_REG_IMMEDIATE(r6, PAGE_OFFSET | MAS2_M_IF_NEEDED)
 	mtspr	SPRN_MAS2,r6
 
 	rlwinm	r5,r5,0,0,25
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index ea065282b303..f4d3eaae54a9 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -153,16 +153,6 @@ skpinv:	addi	r6,r6,1				/* Increment */
 	tlbivax 0,r9
 	TLBSYNC
 
-/*
- * The mapping only needs to be cache-coherent on SMP, except on
- * Freescale e500mc derivatives where it's also needed for coherent DMA.
- */
-#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
-#define M_IF_NEEDED	MAS2_M
-#else
-#define M_IF_NEEDED	0
-#endif
-
 #if defined(ENTRY_MAPPING_BOOT_SETUP)
 
 /* 6. Setup KERNELBASE mapping in TLB1[0] */
@@ -171,8 +161,8 @@ skpinv:	addi	r6,r6,1				/* Increment */
 	lis	r6,(MAS1_VALID|MAS1_IPROT)@h
 	ori	r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
 	mtspr	SPRN_MAS1,r6
-	lis	r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_NEEDED)@h
-	ori	r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_NEEDED)@l
+	lis	r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, MAS2_M_IF_NEEDED)@h
+	ori	r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, MAS2_M_IF_NEEDED)@l
 	mtspr	SPRN_MAS2,r6
 	mtspr	SPRN_MAS3,r8
 	tlbwe
diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index b55a7b4cb543..2062a299a22d 100644
--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -432,18 +432,13 @@ kexec_create_tlb:
 	rlwimi	r9,r10,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r9) */
 
 /* Set up a temp identity mapping v:0 to p:0 and return to it. */
-#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
-#define M_IF_NEEDED	MAS2_M
-#else
-#define M_IF_NEEDED	0
-#endif
 	mtspr	SPRN_MAS0,r9
 
 	lis	r9,(MAS1_VALID|MAS1_IPROT)@h
 	ori	r9,r9,(MAS1_TSIZE(BOOK3E_PAGESZ_1GB))@l
 	mtspr	SPRN_MAS1,r9
 
-	LOAD_REG_IMMEDIATE(r9, 0x0 | M_IF_NEEDED)
+	LOAD_REG_IMMEDIATE(r9, 0x0 | MAS2_M_IF_NEEDED)
 	mtspr	SPRN_MAS2,r9
 
 	LOAD_REG_IMMEDIATE(r9, 0x0 | MAS3_SR | MAS3_SW | MAS3_SX)
-- 
2.17.2



* [PATCH v6 02/12] powerpc: move memstart_addr and kernstart_addr to init-common.c
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
  2019-08-09 10:07 ` [PATCH v6 01/12] powerpc: unify definition of M_IF_NEEDED Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-09 10:07 ` [PATCH v6 03/12] powerpc: introduce kernstart_virt_addr to store the kernel base Jason Yan
                   ` (11 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

These two variables are both defined in init_32.c and init_64.c. Move
them to init-common.c and make them __ro_after_init.

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
---
 arch/powerpc/mm/init-common.c | 5 +++++
 arch/powerpc/mm/init_32.c     | 5 -----
 arch/powerpc/mm/init_64.c     | 5 -----
 3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index a84da92920f7..e223da482c0c 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -21,6 +21,11 @@
 #include <asm/pgtable.h>
 #include <asm/kup.h>
 
+phys_addr_t memstart_addr __ro_after_init = (phys_addr_t)~0ull;
+EXPORT_SYMBOL_GPL(memstart_addr);
+phys_addr_t kernstart_addr __ro_after_init;
+EXPORT_SYMBOL_GPL(kernstart_addr);
+
 static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
 static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
 
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index b04896a88d79..872df48ae41b 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -56,11 +56,6 @@
 phys_addr_t total_memory;
 phys_addr_t total_lowmem;
 
-phys_addr_t memstart_addr = (phys_addr_t)~0ull;
-EXPORT_SYMBOL(memstart_addr);
-phys_addr_t kernstart_addr;
-EXPORT_SYMBOL(kernstart_addr);
-
 #ifdef CONFIG_RELOCATABLE
 /* Used in __va()/__pa() */
 long long virt_phys_offset;
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index a44f6281ca3a..c836f1269ee7 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -63,11 +63,6 @@
 
 #include <mm/mmu_decl.h>
 
-phys_addr_t memstart_addr = ~0;
-EXPORT_SYMBOL_GPL(memstart_addr);
-phys_addr_t kernstart_addr;
-EXPORT_SYMBOL_GPL(kernstart_addr);
-
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 /*
  * Given an address within the vmemmap, determine the pfn of the page that
-- 
2.17.2



* [PATCH v6 03/12] powerpc: introduce kernstart_virt_addr to store the kernel base
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
  2019-08-09 10:07 ` [PATCH v6 01/12] powerpc: unify definition of M_IF_NEEDED Jason Yan
  2019-08-09 10:07 ` [PATCH v6 02/12] powerpc: move memstart_addr and kernstart_addr to init-common.c Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-09 10:07 ` [PATCH v6 04/12] powerpc/fsl_booke/32: introduce create_tlb_entry() helper Jason Yan
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

Now the kernel base is a fixed value - KERNELBASE. To support KASLR, we
need a variable to store the kernel base.

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
---
 arch/powerpc/include/asm/page.h | 2 ++
 arch/powerpc/mm/init-common.c   | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 0d52f57fca04..4d32d1b561d6 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -315,6 +315,8 @@ void arch_free_page(struct page *page, int order);
 
 struct vm_area_struct;
 
+extern unsigned long kernstart_virt_addr;
+
 #include <asm-generic/memory_model.h>
 #endif /* __ASSEMBLY__ */
 #include <asm/slice.h>
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index e223da482c0c..42ef7a6e6098 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -25,6 +25,8 @@ phys_addr_t memstart_addr __ro_after_init = (phys_addr_t)~0ull;
 EXPORT_SYMBOL_GPL(memstart_addr);
 phys_addr_t kernstart_addr __ro_after_init;
 EXPORT_SYMBOL_GPL(kernstart_addr);
+unsigned long kernstart_virt_addr __ro_after_init = KERNELBASE;
+EXPORT_SYMBOL_GPL(kernstart_virt_addr);
 
 static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
 static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
-- 
2.17.2



* [PATCH v6 04/12] powerpc/fsl_booke/32: introduce create_tlb_entry() helper
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (2 preceding siblings ...)
  2019-08-09 10:07 ` [PATCH v6 03/12] powerpc: introduce kernstart_virt_addr to store the kernel base Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-27 22:07   ` Scott Wood
  2019-08-09 10:07 ` [PATCH v6 05/12] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper Jason Yan
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

Add a new helper create_tlb_entry() to create a TLB entry from the given
virtual and physical address. This is a preparation for supporting booting
the kernel at a randomized address.
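
For illustration, the intended C-level usage (as in the later KASLR patch,
with the variable names and TLB1 entry number taken from there) is roughly:

	/* Map a 64M region covering the kernel's new location in TLB1[1] */
	tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
	tlb_phys = round_down(kernstart_addr, SZ_64M);
	create_tlb_entry(tlb_phys, tlb_virt, 1);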

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
---
 arch/powerpc/kernel/head_fsl_booke.S | 29 ++++++++++++++++++++++++++++
 arch/powerpc/mm/mmu_decl.h           |  1 +
 2 files changed, 30 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index adf0505dbe02..04d124fee17d 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1114,6 +1114,35 @@ __secondary_hold_acknowledge:
 	.long	-1
 #endif
 
+/*
+ * Create a 64M tlb by address and entry
+ * r3/r4 - physical address
+ * r5 - virtual address
+ * r6 - entry
+ */
+_GLOBAL(create_tlb_entry)
+	lis     r7,0x1000               /* Set MAS0(TLBSEL) = 1 */
+	rlwimi  r7,r6,16,4,15           /* Setup MAS0 = TLBSEL | ESEL(r6) */
+	mtspr   SPRN_MAS0,r7            /* Write MAS0 */
+
+	lis     r6,(MAS1_VALID|MAS1_IPROT)@h
+	ori     r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
+	mtspr   SPRN_MAS1,r6            /* Write MAS1 */
+
+	lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
+	ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
+	and     r6,r6,r5
+	ori	r6,r6,MAS2_M@l
+	mtspr   SPRN_MAS2,r6            /* Write MAS2(EPN) */
+
+	ori     r8,r4,(MAS3_SW|MAS3_SR|MAS3_SX)
+	mtspr   SPRN_MAS3,r8            /* Write MAS3(RPN) */
+
+	tlbwe                           /* Write TLB */
+	isync
+	sync
+	blr
+
 /*
  * Create a tlb entry with the same effective and physical address as
  * the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 32c1a191c28a..a09f89d3aa0f 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -142,6 +142,7 @@ extern unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
 extern void adjust_total_lowmem(void);
 extern int switch_to_as1(void);
 extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
+void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
 #endif
 extern void loadcam_entry(unsigned int index);
 extern void loadcam_multi(int first_idx, int num, int tmp_idx);
-- 
2.17.2



* [PATCH v6 05/12] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (3 preceding siblings ...)
  2019-08-09 10:07 ` [PATCH v6 04/12] powerpc/fsl_booke/32: introduce create_tlb_entry() helper Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-09 10:07 ` [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure Jason Yan
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

Add a new helper reloc_kernel_entry() to jump back to the start of the
new kernel. After we put the new kernel in a randomized place, we can use
this new helper to enter the kernel and begin relocating again.
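
For illustration, the later KASLR patch calls it roughly as follows after
copying the kernel to its new location:

	/* Pass the fdt along and jump to the relocated kernel entry */
	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);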

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
---
 arch/powerpc/kernel/head_fsl_booke.S | 13 +++++++++++++
 arch/powerpc/mm/mmu_decl.h           |  1 +
 2 files changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 04d124fee17d..2083382dd662 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1143,6 +1143,19 @@ _GLOBAL(create_tlb_entry)
 	sync
 	blr
 
+/*
+ * Return to the start of the relocated kernel and run again
+ * r3 - virtual address of fdt
+ * r4 - entry of the kernel
+ */
+_GLOBAL(reloc_kernel_entry)
+	mfmsr	r7
+	rlwinm	r7, r7, 0, ~(MSR_IS | MSR_DS)
+
+	mtspr	SPRN_SRR0,r4
+	mtspr	SPRN_SRR1,r7
+	rfi
+
 /*
  * Create a tlb entry with the same effective and physical address as
  * the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index a09f89d3aa0f..804da298beb3 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -143,6 +143,7 @@ extern void adjust_total_lowmem(void);
 extern int switch_to_as1(void);
 extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
 void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
+void reloc_kernel_entry(void *fdt, int addr);
 #endif
 extern void loadcam_entry(unsigned int index);
 extern void loadcam_multi(int first_idx, int num, int tmp_idx);
-- 
2.17.2



* [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (4 preceding siblings ...)
  2019-08-09 10:07 ` [PATCH v6 05/12] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-28  4:54   ` Scott Wood
  2019-08-09 10:07 ` [PATCH v6 07/12] powerpc/fsl_booke/32: randomize the kernel image offset Jason Yan
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

This patch adds support for booting the kernel from places other than
KERNELBASE. Since CONFIG_RELOCATABLE is already supported, all we need to
do is map or copy the kernel to a proper place and relocate. Freescale
Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1). The
TLB1 entries are not suitable for mapping the kernel directly in a
randomized region, so we choose to copy the kernel to a proper place and
restart to relocate.

The offset of the kernel is not randomized yet (a fixed 64M offset is
used). We will randomize it in the next patch.
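
As a usage note, on an FSL_BOOKE/FLATMEM/PPC32 configuration the feature
would be enabled with something like:

	CONFIG_RELOCATABLE=y
	CONFIG_RANDOMIZE_BASE=y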

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/Kconfig                          | 11 ++++
 arch/powerpc/kernel/Makefile                  |  1 +
 arch/powerpc/kernel/early_32.c                |  2 +-
 arch/powerpc/kernel/fsl_booke_entry_mapping.S | 17 +++--
 arch/powerpc/kernel/head_fsl_booke.S          | 13 +++-
 arch/powerpc/kernel/kaslr_booke.c             | 62 +++++++++++++++++++
 arch/powerpc/mm/mmu_decl.h                    |  7 +++
 arch/powerpc/mm/nohash/fsl_booke.c            |  7 ++-
 8 files changed, 105 insertions(+), 15 deletions(-)
 create mode 100644 arch/powerpc/kernel/kaslr_booke.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 77f6ebf97113..710c12ef7159 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -548,6 +548,17 @@ config RELOCATABLE
 	  setting can still be useful to bootwrappers that need to know the
 	  load address of the kernel (eg. u-boot/mkimage).
 
+config RANDOMIZE_BASE
+	bool "Randomize the address of the kernel image"
+	depends on (FSL_BOOKE && FLATMEM && PPC32)
+	depends on RELOCATABLE
+	help
+	  Randomizes the virtual address at which the kernel image is
+	  loaded, as a security feature that deters exploit attempts
+	  relying on knowledge of the location of kernel internals.
+
+	  If unsure, say N.
+
 config RELOCATABLE_TEST
 	bool "Test relocatable kernel"
 	depends on (PPC64 && RELOCATABLE)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index ea0c69236789..32f6c5b99307 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -106,6 +106,7 @@ extra-$(CONFIG_PPC_8xx)		:= head_8xx.o
 extra-y				+= vmlinux.lds
 
 obj-$(CONFIG_RELOCATABLE)	+= reloc_$(BITS).o
+obj-$(CONFIG_RANDOMIZE_BASE)	+= kaslr_booke.o
 
 obj-$(CONFIG_PPC32)		+= entry_32.o setup_32.o early_32.o
 obj-$(CONFIG_PPC64)		+= dma-iommu.o iommu.o
diff --git a/arch/powerpc/kernel/early_32.c b/arch/powerpc/kernel/early_32.c
index 3482118ffe76..0c5849fd936d 100644
--- a/arch/powerpc/kernel/early_32.c
+++ b/arch/powerpc/kernel/early_32.c
@@ -32,5 +32,5 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
 
 	apply_feature_fixups();
 
-	return KERNELBASE + offset;
+	return kernstart_virt_addr + offset;
 }
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index f4d3eaae54a9..641920d4f694 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -155,23 +155,22 @@ skpinv:	addi	r6,r6,1				/* Increment */
 
 #if defined(ENTRY_MAPPING_BOOT_SETUP)
 
-/* 6. Setup KERNELBASE mapping in TLB1[0] */
+/* 6. Setup kernstart_virt_addr mapping in TLB1[0] */
 	lis	r6,0x1000		/* Set MAS0(TLBSEL) = TLB1(1), ESEL = 0 */
 	mtspr	SPRN_MAS0,r6
 	lis	r6,(MAS1_VALID|MAS1_IPROT)@h
 	ori	r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
 	mtspr	SPRN_MAS1,r6
-	lis	r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, MAS2_M_IF_NEEDED)@h
-	ori	r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, MAS2_M_IF_NEEDED)@l
-	mtspr	SPRN_MAS2,r6
+	lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
+	ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
+	and     r6,r6,r20
+	ori	r6,r6,MAS2_M_IF_NEEDED@l
+	mtspr   SPRN_MAS2,r6
 	mtspr	SPRN_MAS3,r8
 	tlbwe
 
-/* 7. Jump to KERNELBASE mapping */
-	lis	r6,(KERNELBASE & ~0xfff)@h
-	ori	r6,r6,(KERNELBASE & ~0xfff)@l
-	rlwinm	r7,r25,0,0x03ffffff
-	add	r6,r7,r6
+/* 7. Jump to kernstart_virt_addr mapping */
+	mr	r6,r20
 
 #elif defined(ENTRY_MAPPING_KEXEC_SETUP)
 /*
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 2083382dd662..f7a5c5f03c72 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -155,6 +155,8 @@ _ENTRY(_start);
  */
 
 _ENTRY(__early_start)
+	LOAD_REG_ADDR_PIC(r20, kernstart_virt_addr)
+	lwz     r20,0(r20)
 
 #define ENTRY_MAPPING_BOOT_SETUP
 #include "fsl_booke_entry_mapping.S"
@@ -277,8 +279,8 @@ set_ivor:
 	ori	r6, r6, swapper_pg_dir@l
 	lis	r5, abatron_pteptrs@h
 	ori	r5, r5, abatron_pteptrs@l
-	lis	r4, KERNELBASE@h
-	ori	r4, r4, KERNELBASE@l
+	lis     r3, kernstart_virt_addr@ha
+	lwz     r4, kernstart_virt_addr@l(r3)
 	stw	r5, 0(r4)	/* Save abatron_pteptrs at a fixed location */
 	stw	r6, 0(r5)
 
@@ -1067,7 +1069,12 @@ __secondary_start:
 	mr	r5,r25		/* phys kernel start */
 	rlwinm	r5,r5,0,~0x3ffffff	/* aligned 64M */
 	subf	r4,r5,r4	/* memstart_addr - phys kernel start */
-	li	r5,0		/* no device tree */
+	lis	r7,KERNELBASE@h
+	ori	r7,r7,KERNELBASE@l
+	cmpw	r20,r7		/* if kernstart_virt_addr != KERNELBASE, randomized */
+	beq	2f
+	li	r4,0
+2:	li	r5,0		/* no device tree */
 	li	r6,0		/* not boot cpu */
 	bl	restore_to_as0
 
diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
new file mode 100644
index 000000000000..f8dc60534ac1
--- /dev/null
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0-only
+//
+// Copyright (C) 2019 Jason Yan <yanaijie@huawei.com>
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/stddef.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/memblock.h>
+#include <asm/pgalloc.h>
+#include <asm/prom.h>
+#include <mm/mmu_decl.h>
+
+static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size,
+						  unsigned long kernel_sz)
+{
+	/* return a fixed offset of 64M for now */
+	return SZ_64M;
+}
+
+/*
+ * To see if we need to relocate the kernel to a random offset
+ * void *dt_ptr - address of the device tree
+ * phys_addr_t size - size of the first memory block
+ */
+notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
+{
+	unsigned long tlb_virt;
+	phys_addr_t tlb_phys;
+	unsigned long offset;
+	unsigned long kernel_sz;
+
+	kernel_sz = (unsigned long)_end - KERNELBASE;
+
+	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
+
+	if (offset == 0)
+		return;
+
+	kernstart_virt_addr += offset;
+	kernstart_addr += offset;
+
+	is_second_reloc = 1;
+
+	if (offset >= SZ_64M) {
+		tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
+		tlb_phys = round_down(kernstart_addr, SZ_64M);
+
+		/* Create kernel map to relocate in */
+		create_tlb_entry(tlb_phys, tlb_virt, 1);
+	}
+
+	/* Copy the kernel to its new location and run */
+	memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);
+
+	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);
+}
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 804da298beb3..213997d69729 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -144,10 +144,17 @@ extern int switch_to_as1(void);
 extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
 void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
 void reloc_kernel_entry(void *fdt, int addr);
+extern int is_second_reloc;
 #endif
 extern void loadcam_entry(unsigned int index);
 extern void loadcam_multi(int first_idx, int num, int tmp_idx);
 
+#ifdef CONFIG_RANDOMIZE_BASE
+void kaslr_early_init(void *dt_ptr, phys_addr_t size);
+#else
+static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+#endif
+
 struct tlbcam {
 	u32	MAS0;
 	u32	MAS1;
diff --git a/arch/powerpc/mm/nohash/fsl_booke.c b/arch/powerpc/mm/nohash/fsl_booke.c
index 556e3cd52a35..2dc27cf88add 100644
--- a/arch/powerpc/mm/nohash/fsl_booke.c
+++ b/arch/powerpc/mm/nohash/fsl_booke.c
@@ -263,7 +263,8 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 int __initdata is_second_reloc;
 notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 {
-	unsigned long base = KERNELBASE;
+	unsigned long base = kernstart_virt_addr;
+	phys_addr_t size;
 
 	kernstart_addr = start;
 	if (is_second_reloc) {
@@ -291,7 +292,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 	start &= ~0x3ffffff;
 	base &= ~0x3ffffff;
 	virt_phys_offset = base - start;
-	early_get_first_memblock_info(__va(dt_ptr), NULL);
+	early_get_first_memblock_info(__va(dt_ptr), &size);
 	/*
 	 * We now get the memstart_addr, then we should check if this
 	 * address is the same as what the PAGE_OFFSET map to now. If
@@ -316,6 +317,8 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 		/* We should never reach here */
 		panic("Relocation error");
 	}
+
+	kaslr_early_init(__va(dt_ptr), size);
 }
 #endif
 #endif
-- 
2.17.2



* [PATCH v6 07/12] powerpc/fsl_booke/32: randomize the kernel image offset
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (5 preceding siblings ...)
  2019-08-09 10:07 ` [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-09 10:07 ` [PATCH v6 08/12] powerpc/fsl_booke/kaslr: clear the original kernel if randomized Jason Yan
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

Now that we have basic support for relocating the kernel to an
appropriate place, we can start to randomize the offset.

Entropy is derived from the banner and timer, which will change every
build and boot. This is not particularly strong on its own, so
additionally the bootloader may pass entropy via the /chosen/kaslr-seed
node in the device tree.

We will use the first 512M of the low memory to randomize the kernel
image. The memory will be split into 64M zones. We will use the lower 8
bits of the entropy to decide the index of the 64M zone. Then we choose
a 16K-aligned offset inside the 64M zone to put the kernel in.

We also check whether the chosen location would overlap with areas such
as the dtb, the initrd or the crashkernel. If we cannot find a proper
area, kaslr will be disabled and we boot from the original kernel
location.

Some pieces of code are derived from arch/x86/boot/compressed/kaslr.c or
arch/arm64/kernel/kaslr.c, such as rotate_xor(). Credit goes to Kees and
Ard.

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/kaslr_booke.c | 317 +++++++++++++++++++++++++++++-
 1 file changed, 315 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
index f8dc60534ac1..51a0b3749724 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -12,15 +12,326 @@
 #include <linux/init.h>
 #include <linux/delay.h>
 #include <linux/memblock.h>
+#include <linux/libfdt.h>
+#include <linux/crash_core.h>
 #include <asm/pgalloc.h>
 #include <asm/prom.h>
+#include <asm/kdump.h>
 #include <mm/mmu_decl.h>
+#include <generated/compile.h>
+#include <generated/utsrelease.h>
+
+struct regions {
+	unsigned long pa_start;
+	unsigned long pa_end;
+	unsigned long kernel_size;
+	unsigned long dtb_start;
+	unsigned long dtb_end;
+	unsigned long initrd_start;
+	unsigned long initrd_end;
+	unsigned long crash_start;
+	unsigned long crash_end;
+	int reserved_mem;
+	int reserved_mem_addr_cells;
+	int reserved_mem_size_cells;
+};
+
+/* Simplified build-specific string for starting entropy. */
+static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
+		LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
+
+struct regions __initdata regions;
+
+static __init void kaslr_get_cmdline(void *fdt)
+{
+	int node = fdt_path_offset(fdt, "/chosen");
+
+	early_init_dt_scan_chosen(node, "chosen", 1, boot_command_line);
+}
+
+static unsigned long __init rotate_xor(unsigned long hash, const void *area,
+				       size_t size)
+{
+	size_t i;
+	const unsigned long *ptr = area;
+
+	for (i = 0; i < size / sizeof(hash); i++) {
+		/* Rotate by odd number of bits and XOR. */
+		hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
+		hash ^= ptr[i];
+	}
+
+	return hash;
+}
+
+/* Attempt to create a simple starting entropy. This can make it different for
+ * every build but it is still not enough. Stronger entropy should
+ * be added to make it change for every boot.
+ */
+static unsigned long __init get_boot_seed(void *fdt)
+{
+	unsigned long hash = 0;
+
+	hash = rotate_xor(hash, build_str, sizeof(build_str));
+	hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));
+
+	return hash;
+}
+
+static __init u64 get_kaslr_seed(void *fdt)
+{
+	int node, len;
+	fdt64_t *prop;
+	u64 ret;
+
+	node = fdt_path_offset(fdt, "/chosen");
+	if (node < 0)
+		return 0;
+
+	prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len);
+	if (!prop || len != sizeof(u64))
+		return 0;
+
+	ret = fdt64_to_cpu(*prop);
+	*prop = 0;
+	return ret;
+}
+
+static __init bool regions_overlap(u32 s1, u32 e1, u32 s2, u32 e2)
+{
+	return e1 >= s2 && e2 >= s1;
+}
+
+static __init bool overlaps_reserved_region(const void *fdt, u32 start,
+					    u32 end)
+{
+	int subnode, len, i;
+	u64 base, size;
+
+	/* check for overlap with /memreserve/ entries */
+	for (i = 0; i < fdt_num_mem_rsv(fdt); i++) {
+		if (fdt_get_mem_rsv(fdt, i, &base, &size) < 0)
+			continue;
+		if (regions_overlap(start, end, base, base + size))
+			return true;
+	}
+
+	if (regions.reserved_mem < 0)
+		return false;
+
+	/* check for overlap with static reservations in /reserved-memory */
+	for (subnode = fdt_first_subnode(fdt, regions.reserved_mem);
+	     subnode >= 0;
+	     subnode = fdt_next_subnode(fdt, subnode)) {
+		const fdt32_t *reg;
+		u64 rsv_end;
+
+		len = 0;
+		reg = fdt_getprop(fdt, subnode, "reg", &len);
+		while (len >= (regions.reserved_mem_addr_cells +
+			       regions.reserved_mem_size_cells)) {
+			base = fdt32_to_cpu(reg[0]);
+			if (regions.reserved_mem_addr_cells == 2)
+				base = (base << 32) | fdt32_to_cpu(reg[1]);
+
+			reg += regions.reserved_mem_addr_cells;
+			len -= 4 * regions.reserved_mem_addr_cells;
+
+			size = fdt32_to_cpu(reg[0]);
+			if (regions.reserved_mem_size_cells == 2)
+				size = (size << 32) | fdt32_to_cpu(reg[1]);
+
+			reg += regions.reserved_mem_size_cells;
+			len -= 4 * regions.reserved_mem_size_cells;
+
+			if (base >= regions.pa_end)
+				continue;
+
+			rsv_end = min(base + size, (u64)U32_MAX);
+
+			if (regions_overlap(start, end, base, rsv_end))
+				return true;
+		}
+	}
+	return false;
+}
+
+static __init bool overlaps_region(const void *fdt, u32 start,
+				   u32 end)
+{
+	if (regions_overlap(start, end, regions.dtb_start,
+			    regions.dtb_end))
+		return true;
+
+	if (regions_overlap(start, end, regions.initrd_start,
+			    regions.initrd_end))
+		return true;
+
+	if (regions_overlap(start, end, regions.crash_start,
+			    regions.crash_end))
+		return true;
+
+	return overlaps_reserved_region(fdt, start, end);
+}
+
+static void __init get_crash_kernel(void *fdt, unsigned long size)
+{
+#ifdef CONFIG_CRASH_CORE
+	unsigned long long crash_size, crash_base;
+	int ret;
+
+	ret = parse_crashkernel(boot_command_line, size, &crash_size,
+				&crash_base);
+	if (ret != 0 || crash_size == 0)
+		return;
+	if (crash_base == 0)
+		crash_base = KDUMP_KERNELBASE;
+
+	regions.crash_start = (unsigned long)crash_base;
+	regions.crash_end = (unsigned long)(crash_base + crash_size);
+
+	pr_debug("crash_base=0x%llx crash_size=0x%llx\n", crash_base, crash_size);
+#endif
+}
+
+static void __init get_initrd_range(void *fdt)
+{
+	u64 start, end;
+	int node, len;
+	const __be32 *prop;
+
+	node = fdt_path_offset(fdt, "/chosen");
+	if (node < 0)
+		return;
+
+	prop = fdt_getprop(fdt, node, "linux,initrd-start", &len);
+	if (!prop)
+		return;
+	start = of_read_number(prop, len / 4);
+
+	prop = fdt_getprop(fdt, node, "linux,initrd-end", &len);
+	if (!prop)
+		return;
+	end = of_read_number(prop, len / 4);
+
+	regions.initrd_start = (unsigned long)start;
+	regions.initrd_end = (unsigned long)end;
+
+	pr_debug("initrd_start=0x%llx  initrd_end=0x%llx\n", start, end);
+}
+
+static __init unsigned long get_usable_offset(const void *fdt,
+					      unsigned long start)
+{
+	unsigned long pa;
+	unsigned long pa_end;
+
+	for (pa = start; pa > regions.pa_start; pa -= SZ_16K) {
+		pa_end = pa + regions.kernel_size;
+		if (overlaps_region(fdt, pa, pa_end))
+			continue;
+
+		return pa;
+	}
+	return 0;
+}
+
+static __init void get_cell_sizes(const void *fdt, int node, int *addr_cells,
+				  int *size_cells)
+{
+	const int *prop;
+	int len;
+
+	/*
+	 * Retrieve the #address-cells and #size-cells properties
+	 * from the 'node', or use the default if not provided.
+	 */
+	*addr_cells = *size_cells = 1;
+
+	prop = fdt_getprop(fdt, node, "#address-cells", &len);
+	if (len == 4)
+		*addr_cells = fdt32_to_cpu(*prop);
+	prop = fdt_getprop(fdt, node, "#size-cells", &len);
+	if (len == 4)
+		*size_cells = fdt32_to_cpu(*prop);
+}
 
 static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size,
 						  unsigned long kernel_sz)
 {
-	/* return a fixed offset of 64M for now */
-	return SZ_64M;
+	unsigned long offset, random;
+	unsigned long ram, linear_sz;
+	unsigned long kaslr_offset;
+	u64 seed;
+	unsigned long index;
+
+	random = get_boot_seed(dt_ptr);
+
+	seed = get_tb() << 32;
+	seed ^= get_tb();
+	random = rotate_xor(random, &seed, sizeof(seed));
+
+	/*
+	 * Retrieve (and wipe) the seed from the FDT
+	 */
+	seed = get_kaslr_seed(dt_ptr);
+	if (seed)
+		random = rotate_xor(random, &seed, sizeof(seed));
+
+	ram = min_t(phys_addr_t, __max_low_memory, size);
+	ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true);
+	linear_sz = min_t(unsigned long, ram, SZ_512M);
+
+	/* If the linear size is smaller than 64M, do not randomize */
+	if (linear_sz < SZ_64M)
+		return 0;
+
+	/* check for a reserved-memory node and record its cell sizes */
+	regions.reserved_mem = fdt_path_offset(dt_ptr, "/reserved-memory");
+	if (regions.reserved_mem >= 0)
+		get_cell_sizes(dt_ptr, regions.reserved_mem,
+			       &regions.reserved_mem_addr_cells,
+			       &regions.reserved_mem_size_cells);
+
+	regions.pa_start = 0;
+	regions.pa_end = linear_sz;
+	regions.dtb_start = __pa(dt_ptr);
+	regions.dtb_end = __pa(dt_ptr) + fdt_totalsize(dt_ptr);
+	regions.kernel_size = kernel_sz;
+
+	get_initrd_range(dt_ptr);
+	get_crash_kernel(dt_ptr, ram);
+
+	/*
+	 * Decide which 64M we want to start
+	 * Only use the low 8 bits of the random seed
+	 */
+	index = random & 0xFF;
+	index %= linear_sz / SZ_64M;
+
+	/* Decide offset inside 64M */
+	if (index == 0) {
+		offset = random % (SZ_64M - round_up(kernel_sz, SZ_16K) * 2);
+		offset += round_up(kernel_sz, SZ_16K);
+		offset = round_up(offset, SZ_16K);
+	} else {
+		offset = random % (SZ_64M - kernel_sz);
+		offset = round_down(offset, SZ_16K);
+	}
+
+	while (index >= 0) {
+		offset = offset + index * SZ_64M;
+		kaslr_offset = get_usable_offset(dt_ptr, offset);
+		if (kaslr_offset)
+			break;
+		index--;
+	}
+
+	/* Did not find any usable region? Give up randomize */
+	if (index < 0)
+		kaslr_offset = 0;
+
+	return kaslr_offset;
 }
 
 /*
@@ -37,6 +348,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
 
 	kernel_sz = (unsigned long)_end - KERNELBASE;
 
+	kaslr_get_cmdline(dt_ptr);
+
 	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
 
 	if (offset == 0)
-- 
2.17.2



* [PATCH v6 08/12] powerpc/fsl_booke/kaslr: clear the original kernel if randomized
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (6 preceding siblings ...)
  2019-08-09 10:07 ` [PATCH v6 07/12] powerpc/fsl_booke/32: randomize the kernel image offset Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-09 10:07 ` [PATCH v6 09/12] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter Jason Yan
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

The original kernel still exists in memory; clear it now.

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
---
 arch/powerpc/kernel/kaslr_booke.c  | 11 +++++++++++
 arch/powerpc/mm/mmu_decl.h         |  2 ++
 arch/powerpc/mm/nohash/fsl_booke.c |  1 +
 3 files changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
index 51a0b3749724..9a360b6124ed 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -373,3 +373,14 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
 
 	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);
 }
+
+void __init kaslr_late_init(void)
+{
+	/* If randomized, clear the original kernel */
+	if (kernstart_virt_addr != KERNELBASE) {
+		unsigned long kernel_sz;
+
+		kernel_sz = (unsigned long)_end - kernstart_virt_addr;
+		memzero_explicit((void *)KERNELBASE, kernel_sz);
+	}
+}
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 213997d69729..64b2ac8a5343 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -151,8 +151,10 @@ extern void loadcam_multi(int first_idx, int num, int tmp_idx);
 
 #ifdef CONFIG_RANDOMIZE_BASE
 void kaslr_early_init(void *dt_ptr, phys_addr_t size);
+void kaslr_late_init(void);
 #else
 static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+static inline void kaslr_late_init(void) {}
 #endif
 
 struct tlbcam {
diff --git a/arch/powerpc/mm/nohash/fsl_booke.c b/arch/powerpc/mm/nohash/fsl_booke.c
index 2dc27cf88add..b4eb06ceb189 100644
--- a/arch/powerpc/mm/nohash/fsl_booke.c
+++ b/arch/powerpc/mm/nohash/fsl_booke.c
@@ -269,6 +269,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 	kernstart_addr = start;
 	if (is_second_reloc) {
 		virt_phys_offset = PAGE_OFFSET - memstart_addr;
+		kaslr_late_init();
 		return;
 	}
 
-- 
2.17.2



* [PATCH v6 09/12] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (7 preceding siblings ...)
  2019-08-09 10:07 ` [PATCH v6 08/12] powerpc/fsl_booke/kaslr: clear the original kernel if randomized Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-09 10:07 ` [PATCH v6 10/12] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic Jason Yan
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

One may want to disable kaslr at boot time, so provide the cmdline
parameter 'nokaslr' to support this.
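
For example, with a u-boot style bootloader the parameter could be appended
to the kernel command line (illustrative only; the exact syntax depends on
the bootloader):

	=> setenv bootargs "${bootargs} nokaslr"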

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/kaslr_booke.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
index 9a360b6124ed..fd32ae10c218 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -334,6 +334,11 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
 	return kaslr_offset;
 }
 
+static inline __init bool kaslr_disabled(void)
+{
+	return strstr(boot_command_line, "nokaslr") != NULL;
+}
+
 /*
  * To see if we need to relocate the kernel to a random offset
  * void *dt_ptr - address of the device tree
@@ -349,6 +354,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
 	kernel_sz = (unsigned long)_end - KERNELBASE;
 
 	kaslr_get_cmdline(dt_ptr);
+	if (kaslr_disabled())
+		return;
 
 	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
 
-- 
2.17.2



* [PATCH v6 10/12] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (8 preceding siblings ...)
  2019-08-09 10:07 ` [PATCH v6 09/12] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-09 10:07 ` [PATCH v6 11/12] powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes Jason Yan
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

When kaslr is enabled, the kernel offset is different for every boot.
This makes it more difficult to debug the kernel. Dump out the kernel
offset on panic so that we can easily debug the kernel.

This code is derived from x86/arm64 which has similar functionality.
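
With this notifier registered, a panic on a randomized kernel prints a line
of the following form (the offset value here is only an example):

	Kernel Offset: 0x3c00000 from 0xc0000000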

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>
---
 arch/powerpc/include/asm/page.h    |  5 +++++
 arch/powerpc/kernel/setup-common.c | 20 ++++++++++++++++++++
 2 files changed, 25 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 4d32d1b561d6..b34b9cdd91f1 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -317,6 +317,11 @@ struct vm_area_struct;
 
 extern unsigned long kernstart_virt_addr;
 
+static inline unsigned long kaslr_offset(void)
+{
+	return kernstart_virt_addr - KERNELBASE;
+}
+
 #include <asm-generic/memory_model.h>
 #endif /* __ASSEMBLY__ */
 #include <asm/slice.h>
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 1f8db666468d..ba1a34ab218a 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -715,8 +715,28 @@ static struct notifier_block ppc_panic_block = {
 	.priority = INT_MIN /* may not return; must be done last */
 };
 
+/*
+ * Dump out kernel offset information on panic.
+ */
+static int dump_kernel_offset(struct notifier_block *self, unsigned long v,
+			      void *p)
+{
+	pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n",
+		 kaslr_offset(), KERNELBASE);
+
+	return 0;
+}
+
+static struct notifier_block kernel_offset_notifier = {
+	.notifier_call = dump_kernel_offset
+};
+
 void __init setup_panic(void)
 {
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0)
+		atomic_notifier_chain_register(&panic_notifier_list,
+					       &kernel_offset_notifier);
+
 	/* PPC64 always does a hard irq disable in its panic handler */
 	if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic)
 		return;
-- 
2.17.2



* [PATCH v6 11/12] powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (9 preceding siblings ...)
  2019-08-09 10:07 ` [PATCH v6 10/12] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic Jason Yan
@ 2019-08-09 10:07 ` Jason Yan
  2019-08-09 10:08 ` [PATCH v6 12/12] powerpc/fsl_booke/32: Document KASLR implementation Jason Yan
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:07 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

Like other architectures such as x86 and arm64, include the KASLR offset
in the VMCOREINFO ELF notes to assist in debugging. After this, we can
use the crash --kaslr option to parse a vmcore generated from a kaslr
kernel.

Note: The crash tool needs to support --kaslr too.
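
Illustrative usage, assuming a crash utility built with --kaslr support and
a vmcore whose VMCOREINFO reports KERNELOFFSET=3c00000 (example value):

	crash --kaslr=0x3c00000 vmlinux vmcore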

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/kernel/machine_kexec.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index c4ed328a7b96..078fe3d76feb 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -86,6 +86,7 @@ void arch_crash_save_vmcoreinfo(void)
 	VMCOREINFO_STRUCT_SIZE(mmu_psize_def);
 	VMCOREINFO_OFFSET(mmu_psize_def, shift);
 #endif
+	vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
 }
 
 /*
-- 
2.17.2



* [PATCH v6 12/12] powerpc/fsl_booke/32: Document KASLR implementation
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (10 preceding siblings ...)
  2019-08-09 10:07 ` [PATCH v6 11/12] powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes Jason Yan
@ 2019-08-09 10:08 ` Jason Yan
  2019-08-19  6:12 ` [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
  2019-08-28  4:05 ` Scott Wood
  13 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-09 10:08 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan

Add a document explaining how KASLR is implemented for fsl_booke32.

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 Documentation/powerpc/kaslr-booke32.rst | 42 +++++++++++++++++++++++++
 1 file changed, 42 insertions(+)
 create mode 100644 Documentation/powerpc/kaslr-booke32.rst

diff --git a/Documentation/powerpc/kaslr-booke32.rst b/Documentation/powerpc/kaslr-booke32.rst
new file mode 100644
index 000000000000..8b259fdfdf03
--- /dev/null
+++ b/Documentation/powerpc/kaslr-booke32.rst
@@ -0,0 +1,42 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+===========================
+KASLR for Freescale BookE32
+===========================
+
+KASLR stands for Kernel Address Space Layout Randomization.
+
+This document explains the implementation of KASLR for Freescale
+BookE32. KASLR is a security feature that deters exploit attempts
+relying on knowledge of the location of kernel internals.
+
+Since CONFIG_RELOCATABLE is already supported, all we need to do is
+map or copy the kernel to a proper place and relocate it. Freescale
+Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1).
+The TLB1 entries are not suitable for mapping the kernel directly in a
+randomized region, so we copy the kernel to a proper place and restart
+to relocate.
+
+Entropy is derived from the banner and timer base, which change every build
+and boot. This is not very strong on its own, so the bootloader may also
+pass entropy via the /chosen/kaslr-seed node in the device tree.
+
+We use the first 512M of low memory to randomize the kernel image. This
+memory is split into 64M zones. The lower 8 bits of the entropy select
+the index of the 64M zone, and a 16K-aligned offset inside that zone is
+then chosen to put the kernel in::
+
+    KERNELBASE
+
+        |-->   64M   <--|
+        |               |
+        +---------------+    +----------------+---------------+
+        |               |....|    |kernel|    |               |
+        +---------------+    +----------------+---------------+
+        |                         |
+        |----->   offset    <-----|
+
+                              kernstart_virt_addr
+
+To enable KASLR, set CONFIG_RANDOMIZE_BASE=y. If KASLR is enabled and you
+want to disable it at runtime, add "nokaslr" to the kernel cmdline.
-- 
2.17.2


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (11 preceding siblings ...)
  2019-08-09 10:08 ` [PATCH v6 12/12] powerpc/fsl_booke/32: Document KASLR implementation Jason Yan
@ 2019-08-19  6:12 ` Jason Yan
  2019-08-27  0:39   ` Jason Yan
  2019-08-28  4:05 ` Scott Wood
  13 siblings, 1 reply; 31+ messages in thread
From: Jason Yan @ 2019-08-19  6:12 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang

Hi Michael,

Is there anything more I should do to get this feature to meet the
requirements for mainline?

Thanks,
Jason

On 2019/8/9 18:07, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
> 
> Since CONFIG_RELOCATABLE has already supported, what we need to do is
> map or copy kernel to a proper place and relocate. Freescale Book-E
> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
> entries are not suitable to map the kernel directly in a randomized
> region, so we chose to copy the kernel to a proper place and restart to
> relocate.
> 
> Entropy is derived from the banner and timer base, which will change every
> build and boot. This not so much safe so additionally the bootloader may
> pass entropy via the /chosen/kaslr-seed node in device tree.
> 
> We will use the first 512M of the low memory to randomize the kernel
> image. The memory will be split in 64M zones. We will use the lower 8
> bit of the entropy to decide the index of the 64M zone. Then we chose a
> 16K aligned offset inside the 64M zone to put the kernel in.
> 
>      KERNELBASE
> 
>          |-->   64M   <--|
>          |               |
>          +---------------+    +----------------+---------------+
>          |               |....|    |kernel|    |               |
>          +---------------+    +----------------+---------------+
>          |                         |
>          |----->   offset    <-----|
> 
>                                kernstart_virt_addr
> 
> We also check if we will overlap with some areas like the dtb area, the
> initrd area or the crashkernel area. If we cannot find a proper area,
> kaslr will be disabled and boot from the original kernel.
> 
> Changes since v5:
>   - Rename M_IF_NEEDED to MAS2_M_IF_NEEDED
>   - Define some global variable as __ro_after_init
>   - Replace kimage_vaddr with kernstart_virt_addr
>   - Depend on RELOCATABLE, not select it
>   - Modify the comment block below the SPDX tag
>   - Remove some useless headers in kaslr_booke.c and move is_second_reloc
>     declarationto mmu_decl.h
>   - Remove DBG() and use pr_debug() and rewrite comment above get_boot_seed().
>   - Add a patch to document the KASLR implementation.
>   - Split a patch from patch #10 which exports kaslr offset in VMCOREINFO ELF notes.
>   - Remove extra logic around finding nokaslr string in cmdline.
>   - Make regions static global and __initdata
> 
> Changes since v4:
>   - Add Reviewed-by tag from Christophe
>   - Remove an unnecessary cast
>   - Remove unnecessary parenthesis
>   - Fix checkpatch warning
> 
> Changes since v3:
>   - Add Reviewed-by and Tested-by tag from Diana
>   - Change the comment in fsl_booke_entry_mapping.S to be consistent
>     with the new code.
> 
> Changes since v2:
>   - Remove unnecessary #ifdef
>   - Use SZ_64M instead of0x4000000
>   - Call early_init_dt_scan_chosen() to init boot_command_line
>   - Rename kaslr_second_init() to kaslr_late_init()
> 
> Changes since v1:
>   - Remove some useless 'extern' keyword.
>   - Replace EXPORT_SYMBOL with EXPORT_SYMBOL_GPL
>   - Improve some assembly code
>   - Use memzero_explicit instead of memset
>   - Use boot_command_line and remove early_command_line
>   - Do not print kaslr offset if kaslr is disabled
> 
> Jason Yan (12):
>    powerpc: unify definition of M_IF_NEEDED
>    powerpc: move memstart_addr and kernstart_addr to init-common.c
>    powerpc: introduce kernstart_virt_addr to store the kernel base
>    powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>    powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>    powerpc/fsl_booke/32: implement KASLR infrastructure
>    powerpc/fsl_booke/32: randomize the kernel image offset
>    powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>    powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>    powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
>    powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes
>    powerpc/fsl_booke/32: Document KASLR implementation
> 
>   Documentation/powerpc/kaslr-booke32.rst       |  42 ++
>   arch/powerpc/Kconfig                          |  11 +
>   arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
>   arch/powerpc/include/asm/page.h               |   7 +
>   arch/powerpc/kernel/Makefile                  |   1 +
>   arch/powerpc/kernel/early_32.c                |   2 +-
>   arch/powerpc/kernel/exceptions-64e.S          |  12 +-
>   arch/powerpc/kernel/fsl_booke_entry_mapping.S |  27 +-
>   arch/powerpc/kernel/head_fsl_booke.S          |  55 ++-
>   arch/powerpc/kernel/kaslr_booke.c             | 393 ++++++++++++++++++
>   arch/powerpc/kernel/machine_kexec.c           |   1 +
>   arch/powerpc/kernel/misc_64.S                 |   7 +-
>   arch/powerpc/kernel/setup-common.c            |  20 +
>   arch/powerpc/mm/init-common.c                 |   7 +
>   arch/powerpc/mm/init_32.c                     |   5 -
>   arch/powerpc/mm/init_64.c                     |   5 -
>   arch/powerpc/mm/mmu_decl.h                    |  11 +
>   arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
>   18 files changed, 572 insertions(+), 52 deletions(-)
>   create mode 100644 Documentation/powerpc/kaslr-booke32.rst
>   create mode 100644 arch/powerpc/kernel/kaslr_booke.c
> 


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-08-19  6:12 ` [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
@ 2019-08-27  0:39   ` Jason Yan
  2019-08-27  1:33     ` Michael Ellerman
  0 siblings, 1 reply; 31+ messages in thread
From: Jason Yan @ 2019-08-27  0:39 UTC (permalink / raw)
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang

A polite ping :)

What else should I do now?

Thanks

On 2019/8/19 14:12, Jason Yan wrote:
> Hi Michael,
> 
> Is there anything more I should do to get this feature meeting the 
> requirements of the mainline?
> 
> Thanks,
> Jason
> 
> On 2019/8/9 18:07, Jason Yan wrote:
>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>> feature that deters exploit attempts relying on knowledge of the location
>> of kernel internals.
>>
>> Since CONFIG_RELOCATABLE has already supported, what we need to do is
>> map or copy kernel to a proper place and relocate. Freescale Book-E
>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>> entries are not suitable to map the kernel directly in a randomized
>> region, so we chose to copy the kernel to a proper place and restart to
>> relocate.
>>
>> Entropy is derived from the banner and timer base, which will change 
>> every
>> build and boot. This not so much safe so additionally the bootloader may
>> pass entropy via the /chosen/kaslr-seed node in device tree.
>>
>> We will use the first 512M of the low memory to randomize the kernel
>> image. The memory will be split in 64M zones. We will use the lower 8
>> bit of the entropy to decide the index of the 64M zone. Then we chose a
>> 16K aligned offset inside the 64M zone to put the kernel in.
>>
>>      KERNELBASE
>>
>>          |-->   64M   <--|
>>          |               |
>>          +---------------+    +----------------+---------------+
>>          |               |....|    |kernel|    |               |
>>          +---------------+    +----------------+---------------+
>>          |                         |
>>          |----->   offset    <-----|
>>
>>                                kernstart_virt_addr
>>
>> We also check if we will overlap with some areas like the dtb area, the
>> initrd area or the crashkernel area. If we cannot find a proper area,
>> kaslr will be disabled and boot from the original kernel.
>>
>> Changes since v5:
>>   - Rename M_IF_NEEDED to MAS2_M_IF_NEEDED
>>   - Define some global variable as __ro_after_init
>>   - Replace kimage_vaddr with kernstart_virt_addr
>>   - Depend on RELOCATABLE, not select it
>>   - Modify the comment block below the SPDX tag
>>   - Remove some useless headers in kaslr_booke.c and move is_second_reloc
>>     declarationto mmu_decl.h
>>   - Remove DBG() and use pr_debug() and rewrite comment above 
>> get_boot_seed().
>>   - Add a patch to document the KASLR implementation.
>>   - Split a patch from patch #10 which exports kaslr offset in 
>> VMCOREINFO ELF notes.
>>   - Remove extra logic around finding nokaslr string in cmdline.
>>   - Make regions static global and __initdata
>>
>> Changes since v4:
>>   - Add Reviewed-by tag from Christophe
>>   - Remove an unnecessary cast
>>   - Remove unnecessary parenthesis
>>   - Fix checkpatch warning
>>
>> Changes since v3:
>>   - Add Reviewed-by and Tested-by tag from Diana
>>   - Change the comment in fsl_booke_entry_mapping.S to be consistent
>>     with the new code.
>>
>> Changes since v2:
>>   - Remove unnecessary #ifdef
>>   - Use SZ_64M instead of0x4000000
>>   - Call early_init_dt_scan_chosen() to init boot_command_line
>>   - Rename kaslr_second_init() to kaslr_late_init()
>>
>> Changes since v1:
>>   - Remove some useless 'extern' keyword.
>>   - Replace EXPORT_SYMBOL with EXPORT_SYMBOL_GPL
>>   - Improve some assembly code
>>   - Use memzero_explicit instead of memset
>>   - Use boot_command_line and remove early_command_line
>>   - Do not print kaslr offset if kaslr is disabled
>>
>> Jason Yan (12):
>>    powerpc: unify definition of M_IF_NEEDED
>>    powerpc: move memstart_addr and kernstart_addr to init-common.c
>>    powerpc: introduce kernstart_virt_addr to store the kernel base
>>    powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>>    powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>>    powerpc/fsl_booke/32: implement KASLR infrastructure
>>    powerpc/fsl_booke/32: randomize the kernel image offset
>>    powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>>    powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>>    powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
>>    powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes
>>    powerpc/fsl_booke/32: Document KASLR implementation
>>
>>   Documentation/powerpc/kaslr-booke32.rst       |  42 ++
>>   arch/powerpc/Kconfig                          |  11 +
>>   arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
>>   arch/powerpc/include/asm/page.h               |   7 +
>>   arch/powerpc/kernel/Makefile                  |   1 +
>>   arch/powerpc/kernel/early_32.c                |   2 +-
>>   arch/powerpc/kernel/exceptions-64e.S          |  12 +-
>>   arch/powerpc/kernel/fsl_booke_entry_mapping.S |  27 +-
>>   arch/powerpc/kernel/head_fsl_booke.S          |  55 ++-
>>   arch/powerpc/kernel/kaslr_booke.c             | 393 ++++++++++++++++++
>>   arch/powerpc/kernel/machine_kexec.c           |   1 +
>>   arch/powerpc/kernel/misc_64.S                 |   7 +-
>>   arch/powerpc/kernel/setup-common.c            |  20 +
>>   arch/powerpc/mm/init-common.c                 |   7 +
>>   arch/powerpc/mm/init_32.c                     |   5 -
>>   arch/powerpc/mm/init_64.c                     |   5 -
>>   arch/powerpc/mm/mmu_decl.h                    |  11 +
>>   arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
>>   18 files changed, 572 insertions(+), 52 deletions(-)
>>   create mode 100644 Documentation/powerpc/kaslr-booke32.rst
>>   create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>>


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-08-27  0:39   ` Jason Yan
@ 2019-08-27  1:33     ` Michael Ellerman
  2019-08-28  5:08       ` Scott Wood
  0 siblings, 1 reply; 31+ messages in thread
From: Michael Ellerman @ 2019-08-27  1:33 UTC (permalink / raw)
  To: Scott Wood
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan,
	linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening

Jason Yan <yanaijie@huawei.com> writes:
> A polite ping :)
>
> What else should I do now?

That's a good question.

Scott, are you still maintaining the FSL bits, and if so, any comments?
Or should I take this?

cheers

> On 2019/8/19 14:12, Jason Yan wrote:
>> Hi Michael,
>> 
>> Is there anything more I should do to get this feature meeting the 
>> requirements of the mainline?
>> 
>> Thanks,
>> Jason
>> 
>> On 2019/8/9 18:07, Jason Yan wrote:
>>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>>> feature that deters exploit attempts relying on knowledge of the location
>>> of kernel internals.
>>>
>>> Since CONFIG_RELOCATABLE has already supported, what we need to do is
>>> map or copy kernel to a proper place and relocate. Freescale Book-E
>>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>>> entries are not suitable to map the kernel directly in a randomized
>>> region, so we chose to copy the kernel to a proper place and restart to
>>> relocate.
>>>
>>> Entropy is derived from the banner and timer base, which will change 
>>> every
>>> build and boot. This not so much safe so additionally the bootloader may
>>> pass entropy via the /chosen/kaslr-seed node in device tree.
>>>
>>> We will use the first 512M of the low memory to randomize the kernel
>>> image. The memory will be split in 64M zones. We will use the lower 8
>>> bit of the entropy to decide the index of the 64M zone. Then we chose a
>>> 16K aligned offset inside the 64M zone to put the kernel in.
>>>
>>>      KERNELBASE
>>>
>>>          |-->   64M   <--|
>>>          |               |
>>>          +---------------+    +----------------+---------------+
>>>          |               |....|    |kernel|    |               |
>>>          +---------------+    +----------------+---------------+
>>>          |                         |
>>>          |----->   offset    <-----|
>>>
>>>                                kernstart_virt_addr
>>>
>>> We also check if we will overlap with some areas like the dtb area, the
>>> initrd area or the crashkernel area. If we cannot find a proper area,
>>> kaslr will be disabled and boot from the original kernel.
>>>
>>> Changes since v5:
>>>   - Rename M_IF_NEEDED to MAS2_M_IF_NEEDED
>>>   - Define some global variable as __ro_after_init
>>>   - Replace kimage_vaddr with kernstart_virt_addr
>>>   - Depend on RELOCATABLE, not select it
>>>   - Modify the comment block below the SPDX tag
>>>   - Remove some useless headers in kaslr_booke.c and move is_second_reloc
>>>     declarationto mmu_decl.h
>>>   - Remove DBG() and use pr_debug() and rewrite comment above 
>>> get_boot_seed().
>>>   - Add a patch to document the KASLR implementation.
>>>   - Split a patch from patch #10 which exports kaslr offset in 
>>> VMCOREINFO ELF notes.
>>>   - Remove extra logic around finding nokaslr string in cmdline.
>>>   - Make regions static global and __initdata
>>>
>>> Changes since v4:
>>>   - Add Reviewed-by tag from Christophe
>>>   - Remove an unnecessary cast
>>>   - Remove unnecessary parenthesis
>>>   - Fix checkpatch warning
>>>
>>> Changes since v3:
>>>   - Add Reviewed-by and Tested-by tag from Diana
>>>   - Change the comment in fsl_booke_entry_mapping.S to be consistent
>>>     with the new code.
>>>
>>> Changes since v2:
>>>   - Remove unnecessary #ifdef
>>>   - Use SZ_64M instead of0x4000000
>>>   - Call early_init_dt_scan_chosen() to init boot_command_line
>>>   - Rename kaslr_second_init() to kaslr_late_init()
>>>
>>> Changes since v1:
>>>   - Remove some useless 'extern' keyword.
>>>   - Replace EXPORT_SYMBOL with EXPORT_SYMBOL_GPL
>>>   - Improve some assembly code
>>>   - Use memzero_explicit instead of memset
>>>   - Use boot_command_line and remove early_command_line
>>>   - Do not print kaslr offset if kaslr is disabled
>>>
>>> Jason Yan (12):
>>>    powerpc: unify definition of M_IF_NEEDED
>>>    powerpc: move memstart_addr and kernstart_addr to init-common.c
>>>    powerpc: introduce kernstart_virt_addr to store the kernel base
>>>    powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>>>    powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>>>    powerpc/fsl_booke/32: implement KASLR infrastructure
>>>    powerpc/fsl_booke/32: randomize the kernel image offset
>>>    powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>>>    powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>>>    powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
>>>    powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes
>>>    powerpc/fsl_booke/32: Document KASLR implementation
>>>
>>>   Documentation/powerpc/kaslr-booke32.rst       |  42 ++
>>>   arch/powerpc/Kconfig                          |  11 +
>>>   arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
>>>   arch/powerpc/include/asm/page.h               |   7 +
>>>   arch/powerpc/kernel/Makefile                  |   1 +
>>>   arch/powerpc/kernel/early_32.c                |   2 +-
>>>   arch/powerpc/kernel/exceptions-64e.S          |  12 +-
>>>   arch/powerpc/kernel/fsl_booke_entry_mapping.S |  27 +-
>>>   arch/powerpc/kernel/head_fsl_booke.S          |  55 ++-
>>>   arch/powerpc/kernel/kaslr_booke.c             | 393 ++++++++++++++++++
>>>   arch/powerpc/kernel/machine_kexec.c           |   1 +
>>>   arch/powerpc/kernel/misc_64.S                 |   7 +-
>>>   arch/powerpc/kernel/setup-common.c            |  20 +
>>>   arch/powerpc/mm/init-common.c                 |   7 +
>>>   arch/powerpc/mm/init_32.c                     |   5 -
>>>   arch/powerpc/mm/init_64.c                     |   5 -
>>>   arch/powerpc/mm/mmu_decl.h                    |  11 +
>>>   arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
>>>   18 files changed, 572 insertions(+), 52 deletions(-)
>>>   create mode 100644 Documentation/powerpc/kaslr-booke32.rst
>>>   create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>>>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 04/12] powerpc/fsl_booke/32: introduce create_tlb_entry() helper
  2019-08-09 10:07 ` [PATCH v6 04/12] powerpc/fsl_booke/32: introduce create_tlb_entry() helper Jason Yan
@ 2019-08-27 22:07   ` Scott Wood
  2019-08-28  5:33     ` Jason Yan
  0 siblings, 1 reply; 31+ messages in thread
From: Scott Wood @ 2019-08-27 22:07 UTC (permalink / raw)
  To: Jason Yan
  Cc: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening, wangkefeng.wang,
	linux-kernel, jingxiangfeng, zhaohongjiang, thunder.leizhen,
	fanchengyang, yebin10

On Fri, Aug 09, 2019 at 06:07:52PM +0800, Jason Yan wrote:
> Add a new helper create_tlb_entry() to create a tlb entry by the virtual
> and physical address. This is a preparation to support boot kernel at a
> randomized address.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>
> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
> Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
> Tested-by: Diana Craciun <diana.craciun@nxp.com>
> ---
>  arch/powerpc/kernel/head_fsl_booke.S | 29 ++++++++++++++++++++++++++++
>  arch/powerpc/mm/mmu_decl.h           |  1 +
>  2 files changed, 30 insertions(+)
> 
> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
> index adf0505dbe02..04d124fee17d 100644
> --- a/arch/powerpc/kernel/head_fsl_booke.S
> +++ b/arch/powerpc/kernel/head_fsl_booke.S
> @@ -1114,6 +1114,35 @@ __secondary_hold_acknowledge:
>  	.long	-1
>  #endif
>  
> +/*
> + * Create a 64M tlb by address and entry
> + * r3/r4 - physical address
> + * r5 - virtual address
> + * r6 - entry
> + */
> +_GLOBAL(create_tlb_entry)

This function is broadly named but contains various assumptions about the
entry being created.  I'd just call it create_kaslr_tlb_entry.

> +	lis     r7,0x1000               /* Set MAS0(TLBSEL) = 1 */
> +	rlwimi  r7,r6,16,4,15           /* Setup MAS0 = TLBSEL | ESEL(r6) */
> +	mtspr   SPRN_MAS0,r7            /* Write MAS0 */
> +
> +	lis     r6,(MAS1_VALID|MAS1_IPROT)@h
> +	ori     r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
> +	mtspr   SPRN_MAS1,r6            /* Write MAS1 */
> +
> +	lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
> +	ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
> +	and     r6,r6,r5
> +	ori	r6,r6,MAS2_M@l
> +	mtspr   SPRN_MAS2,r6            /* Write MAS2(EPN) */
> +
> +	ori     r8,r4,(MAS3_SW|MAS3_SR|MAS3_SX)
> +	mtspr   SPRN_MAS3,r8            /* Write MAS3(RPN) */
> +
> +	tlbwe                           /* Write TLB */
> +	isync
> +	sync
> +	blr

Should set MAS7 under MMU_FTR_BIG_PHYS (or CONFIG_PHYS_64BIT if it's
too early for features) -- even if relocatable kernels over 4GiB aren't
supported (I don't remember if they work or not), MAS7 might be non-zero
on entry.  And the function claims to take a 64-bit phys addr as input...

MAS2_M should be MAS2_M_IF_NEEDED to match other kmem tlb entries.

-Scott

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (12 preceding siblings ...)
  2019-08-19  6:12 ` [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
@ 2019-08-28  4:05 ` Scott Wood
  2019-08-28  4:59   ` Scott Wood
                     ` (2 more replies)
  13 siblings, 3 replies; 31+ messages in thread
From: Scott Wood @ 2019-08-28  4:05 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, christophe.leroy,
	benh, paulus, npiggin, keescook, kernel-hardening
  Cc: wangkefeng.wang, linux-kernel, jingxiangfeng, zhaohongjiang,
	thunder.leizhen, fanchengyang, yebin10

On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
> 
> Since CONFIG_RELOCATABLE has already supported, what we need to do is
> map or copy kernel to a proper place and relocate.

Have you tested this with a kernel that was loaded at a non-zero address?  I
tried loading a kernel at 0x04000000 (by changing the address in the uImage,
and setting bootm_low to 04000000 in U-Boot), and it works without
CONFIG_RANDOMIZE_BASE and fails with it.

>  Freescale Book-E
> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
> entries are not suitable to map the kernel directly in a randomized
> region, so we chose to copy the kernel to a proper place and restart to
> relocate.
> 
> Entropy is derived from the banner and timer base, which will change every
> build and boot. This not so much safe so additionally the bootloader may
> pass entropy via the /chosen/kaslr-seed node in device tree.

How complicated would it be to directly access the HW RNG (if present) that
early in the boot?  It'd be nice if a U-Boot update weren't required (and
particularly concerning that KASLR would appear to work without a U-Boot
update, but without decent entropy).

-Scott



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure
  2019-08-09 10:07 ` [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure Jason Yan
@ 2019-08-28  4:54   ` Scott Wood
  2019-08-28  5:47     ` Christophe Leroy
  2019-08-28 11:03     ` Jason Yan
  0 siblings, 2 replies; 31+ messages in thread
From: Scott Wood @ 2019-08-28  4:54 UTC (permalink / raw)
  To: Jason Yan
  Cc: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening, wangkefeng.wang,
	linux-kernel, jingxiangfeng, zhaohongjiang, thunder.leizhen,
	fanchengyang, yebin10

On Fri, Aug 09, 2019 at 06:07:54PM +0800, Jason Yan wrote:
> This patch add support to boot kernel from places other than KERNELBASE.
> Since CONFIG_RELOCATABLE has already supported, what we need to do is
> map or copy kernel to a proper place and relocate. Freescale Book-E
> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
> entries are not suitable to map the kernel directly in a randomized
> region, so we chose to copy the kernel to a proper place and restart to
> relocate.
> 
> The offset of the kernel was not randomized yet(a fixed 64M is set). We
> will randomize it in the next patch.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>
> Tested-by: Diana Craciun <diana.craciun@nxp.com>
> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/Kconfig                          | 11 ++++
>  arch/powerpc/kernel/Makefile                  |  1 +
>  arch/powerpc/kernel/early_32.c                |  2 +-
>  arch/powerpc/kernel/fsl_booke_entry_mapping.S | 17 +++--
>  arch/powerpc/kernel/head_fsl_booke.S          | 13 +++-
>  arch/powerpc/kernel/kaslr_booke.c             | 62 +++++++++++++++++++
>  arch/powerpc/mm/mmu_decl.h                    |  7 +++
>  arch/powerpc/mm/nohash/fsl_booke.c            |  7 ++-
>  8 files changed, 105 insertions(+), 15 deletions(-)
>  create mode 100644 arch/powerpc/kernel/kaslr_booke.c
> 
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 77f6ebf97113..710c12ef7159 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -548,6 +548,17 @@ config RELOCATABLE
>  	  setting can still be useful to bootwrappers that need to know the
>  	  load address of the kernel (eg. u-boot/mkimage).
>  
> +config RANDOMIZE_BASE
> +	bool "Randomize the address of the kernel image"
> +	depends on (FSL_BOOKE && FLATMEM && PPC32)
> +	depends on RELOCATABLE
> +	help
> +	  Randomizes the virtual address at which the kernel image is
> +	  loaded, as a security feature that deters exploit attempts
> +	  relying on knowledge of the location of kernel internals.
> +
> +	  If unsure, say N.
> +

Why is N the safe default (other than concerns about code maturity,
though arm64 and mips don't seem to have updated this recommendation
after several years)?  On x86 this defaults to Y.

> diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> index f4d3eaae54a9..641920d4f694 100644
> --- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> +++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> @@ -155,23 +155,22 @@ skpinv:	addi	r6,r6,1				/* Increment */
>  
>  #if defined(ENTRY_MAPPING_BOOT_SETUP)
>  
> -/* 6. Setup KERNELBASE mapping in TLB1[0] */
> +/* 6. Setup kernstart_virt_addr mapping in TLB1[0] */
>  	lis	r6,0x1000		/* Set MAS0(TLBSEL) = TLB1(1), ESEL = 0 */
>  	mtspr	SPRN_MAS0,r6
>  	lis	r6,(MAS1_VALID|MAS1_IPROT)@h
>  	ori	r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
>  	mtspr	SPRN_MAS1,r6
> -	lis	r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, MAS2_M_IF_NEEDED)@h
> -	ori	r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, MAS2_M_IF_NEEDED)@l
> -	mtspr	SPRN_MAS2,r6
> +	lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
> +	ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
> +	and     r6,r6,r20
> +	ori	r6,r6,MAS2_M_IF_NEEDED@l
> +	mtspr   SPRN_MAS2,r6

Please use tabs rather than spaces between the mnemonic and the
arguments.

It looks like that was the last user of MAS2_VAL so let's remove it.

> diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
> new file mode 100644
> index 000000000000..f8dc60534ac1
> --- /dev/null
> +++ b/arch/powerpc/kernel/kaslr_booke.c

Shouldn't this go under arch/powerpc/mm/nohash?

> +/*
> + * To see if we need to relocate the kernel to a random offset
> + * void *dt_ptr - address of the device tree
> + * phys_addr_t size - size of the first memory block
> + */
> +notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
> +{
> +	unsigned long tlb_virt;
> +	phys_addr_t tlb_phys;
> +	unsigned long offset;
> +	unsigned long kernel_sz;
> +
> +	kernel_sz = (unsigned long)_end - KERNELBASE;

Why KERNELBASE and not kernstart_addr?

> +
> +	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
> +
> +	if (offset == 0)
> +		return;
> +
> +	kernstart_virt_addr += offset;
> +	kernstart_addr += offset;
> +
> +	is_second_reloc = 1;
> +
> +	if (offset >= SZ_64M) {
> +		tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
> +		tlb_phys = round_down(kernstart_addr, SZ_64M);

If kernstart_addr wasn't 64M-aligned before adding offset, then
"offset >= SZ_64M" is not necessarily going to detect when you've
crossed a mapping boundary.
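
Something along these lines would catch the crossing regardless of
alignment (sketch only, reusing the names from the patch):

	/* Compare the 64M-aligned base of the old and new kernel start. */
	if (round_down(kernstart_virt_addr + offset, SZ_64M) !=
	    round_down(kernstart_virt_addr, SZ_64M)) {
		/* crossed into another 64M mapping; create the new entry */
	}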

> +
> +		/* Create kernel map to relocate in */
> +		create_tlb_entry(tlb_phys, tlb_virt, 1);
> +	}
> +
> +	/* Copy the kernel to it's new location and run */
> +	memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);
> +
> +	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);
> +}

After copying, call flush_icache_range() on the destination.
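
Something like this after the copy (sketch only, using the names from the
patch):

	memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);
	flush_icache_range(kernstart_virt_addr,
			   kernstart_virt_addr + kernel_sz);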

> diff --git a/arch/powerpc/mm/nohash/fsl_booke.c b/arch/powerpc/mm/nohash/fsl_booke.c
> index 556e3cd52a35..2dc27cf88add 100644
> --- a/arch/powerpc/mm/nohash/fsl_booke.c
> +++ b/arch/powerpc/mm/nohash/fsl_booke.c
> @@ -263,7 +263,8 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
>  int __initdata is_second_reloc;
>  notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
>  {
> -	unsigned long base = KERNELBASE;
> +	unsigned long base = kernstart_virt_addr;
> +	phys_addr_t size;
>  
>  	kernstart_addr = start;
>  	if (is_second_reloc) {
> @@ -291,7 +292,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
>  	start &= ~0x3ffffff;
>  	base &= ~0x3ffffff;
>  	virt_phys_offset = base - start;
> -	early_get_first_memblock_info(__va(dt_ptr), NULL);
> +	early_get_first_memblock_info(__va(dt_ptr), &size);
>  	/*
>  	 * We now get the memstart_addr, then we should check if this
>  	 * address is the same as what the PAGE_OFFSET map to now. If
> @@ -316,6 +317,8 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
>  		/* We should never reach here */
>  		panic("Relocation error");
>  	}
> +
> +	kaslr_early_init(__va(dt_ptr), size);

Are you assuming that available memory starts at physical address zero? 
This isn't true of some partitioning scenarios, or in a kdump crash
kernel.

-Scott

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-08-28  4:05 ` Scott Wood
@ 2019-08-28  4:59   ` Scott Wood
  2019-08-29  2:41     ` Jason Yan
  2019-08-29  1:57   ` Jason Yan
  2019-09-10  5:34   ` Jason Yan
  2 siblings, 1 reply; 31+ messages in thread
From: Scott Wood @ 2019-08-28  4:59 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, christophe.leroy,
	benh, paulus, npiggin, keescook, kernel-hardening
  Cc: wangkefeng.wang, linux-kernel, jingxiangfeng, zhaohongjiang,
	thunder.leizhen, fanchengyang, yebin10

On Tue, 2019-08-27 at 23:05 -0500, Scott Wood wrote:
> On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
> >  Freescale Book-E
> > parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
> > entries are not suitable to map the kernel directly in a randomized
> > region, so we chose to copy the kernel to a proper place and restart to
> > relocate.
> > 
> > Entropy is derived from the banner and timer base, which will change every
> > build and boot. This not so much safe so additionally the bootloader may
> > pass entropy via the /chosen/kaslr-seed node in device tree.
> 
> How complicated would it be to directly access the HW RNG (if present) that
> early in the boot?  It'd be nice if a U-Boot update weren't required (and
> particularly concerning that KASLR would appear to work without a U-Boot
> update, but without decent entropy).

OK, I see that kaslr-seed is used on some other platforms, though arm64 aborts
KASLR if it doesn't get a seed.  I'm not sure if that's better than a loud
warning message (or if it was a conscious choice rather than just not having
an alternative implemented), but silently using poor entropy for something
like this seems bad.
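
Maybe something like this when no seed was supplied (sketch only;
"have_kaslr_seed" is a made-up flag for illustration):

	if (!have_kaslr_seed)
		pr_warn("KASLR: no /chosen/kaslr-seed supplied, using weak entropy\n");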

-Scott



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-08-27  1:33     ` Michael Ellerman
@ 2019-08-28  5:08       ` Scott Wood
  2019-08-28 13:01         ` Michael Ellerman
  0 siblings, 1 reply; 31+ messages in thread
From: Scott Wood @ 2019-08-28  5:08 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan,
	linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening

On Tue, 2019-08-27 at 11:33 +1000, Michael Ellerman wrote:
> Jason Yan <yanaijie@huawei.com> writes:
> > A polite ping :)
> > 
> > What else should I do now?
> 
> That's a good question.
> 
> Scott, are you still maintaining FSL bits, 

Sort of... now that it's become very low volume, it's easy to forget when
something does show up (or miss it if I'm not CCed).  It'd probably help if I
were to just ack patches instead of thinking "I'll do a pull request for this
later" when it's just one or two patches per cycle.

-Scott



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 04/12] powerpc/fsl_booke/32: introduce create_tlb_entry() helper
  2019-08-27 22:07   ` Scott Wood
@ 2019-08-28  5:33     ` Jason Yan
  0 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-28  5:33 UTC (permalink / raw)
  To: Scott Wood
  Cc: wangkefeng.wang, keescook, kernel-hardening, thunder.leizhen,
	linux-kernel, npiggin, jingxiangfeng, diana.craciun, paulus,
	zhaohongjiang, fanchengyang, linuxppc-dev, yebin10

Hi Scott,

Thanks for your reply.

On 2019/8/28 6:07, Scott Wood wrote:
> On Fri, Aug 09, 2019 at 06:07:52PM +0800, Jason Yan wrote:
>> Add a new helper create_tlb_entry() to create a tlb entry by the virtual
>> and physical address. This is a preparation to support boot kernel at a
>> randomized address.
>>
>> Signed-off-by: Jason Yan <yanaijie@huawei.com>
>> Cc: Diana Craciun <diana.craciun@nxp.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Nicholas Piggin <npiggin@gmail.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
>> Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
>> Tested-by: Diana Craciun <diana.craciun@nxp.com>
>> ---
>>   arch/powerpc/kernel/head_fsl_booke.S | 29 ++++++++++++++++++++++++++++
>>   arch/powerpc/mm/mmu_decl.h           |  1 +
>>   2 files changed, 30 insertions(+)
>>
>> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
>> index adf0505dbe02..04d124fee17d 100644
>> --- a/arch/powerpc/kernel/head_fsl_booke.S
>> +++ b/arch/powerpc/kernel/head_fsl_booke.S
>> @@ -1114,6 +1114,35 @@ __secondary_hold_acknowledge:
>>   	.long	-1
>>   #endif
>>   
>> +/*
>> + * Create a 64M tlb by address and entry
>> + * r3/r4 - physical address
>> + * r5 - virtual address
>> + * r6 - entry
>> + */
>> +_GLOBAL(create_tlb_entry)
> 
> This function is broadly named but contains various assumptions about the
> entry being created.  I'd just call it create_kaslr_tlb_entry.
> 

OK.

>> +	lis     r7,0x1000               /* Set MAS0(TLBSEL) = 1 */
>> +	rlwimi  r7,r6,16,4,15           /* Setup MAS0 = TLBSEL | ESEL(r6) */
>> +	mtspr   SPRN_MAS0,r7            /* Write MAS0 */
>> +
>> +	lis     r6,(MAS1_VALID|MAS1_IPROT)@h
>> +	ori     r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
>> +	mtspr   SPRN_MAS1,r6            /* Write MAS1 */
>> +
>> +	lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
>> +	ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
>> +	and     r6,r6,r5
>> +	ori	r6,r6,MAS2_M@l
>> +	mtspr   SPRN_MAS2,r6            /* Write MAS2(EPN) */
>> +
>> +	ori     r8,r4,(MAS3_SW|MAS3_SR|MAS3_SX)
>> +	mtspr   SPRN_MAS3,r8            /* Write MAS3(RPN) */
>> +
>> +	tlbwe                           /* Write TLB */
>> +	isync
>> +	sync
>> +	blr
> 
> Should set MAS7 under MMU_FTR_BIG_PHYS (or CONFIG_PHYS_64BIT if it's
> too early for features) -- even if relocatable kernels over 4GiB aren't
> supported (I don't remember if they work or not), MAS7 might be non-zero
> on entry.  And the function claims to take a 64-bit phys addr as input...
> 

Good catch. And I should consider a 32-bit phys addr as input too. I will
fix this in the next version. Thanks.

> MAS2_M should be MAS2_M_IF_NEEDED to match other kmem tlb entries.
> 

OK

> -Scott
> 
> .
> 


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure
  2019-08-28  4:54   ` Scott Wood
@ 2019-08-28  5:47     ` Christophe Leroy
  2019-08-29  6:26       ` Jason Yan
  2019-08-28 11:03     ` Jason Yan
  1 sibling, 1 reply; 31+ messages in thread
From: Christophe Leroy @ 2019-08-28  5:47 UTC (permalink / raw)
  To: Scott Wood, Jason Yan
  Cc: mpe, linuxppc-dev, diana.craciun, benh, paulus, npiggin,
	keescook, kernel-hardening, wangkefeng.wang, linux-kernel,
	jingxiangfeng, zhaohongjiang, thunder.leizhen, fanchengyang,
	yebin10



Le 28/08/2019 à 06:54, Scott Wood a écrit :
> On Fri, Aug 09, 2019 at 06:07:54PM +0800, Jason Yan wrote:
>> This patch add support to boot kernel from places other than KERNELBASE.
>> Since CONFIG_RELOCATABLE has already supported, what we need to do is
>> map or copy kernel to a proper place and relocate. Freescale Book-E
>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>> entries are not suitable to map the kernel directly in a randomized
>> region, so we chose to copy the kernel to a proper place and restart to
>> relocate.
>>
>> The offset of the kernel was not randomized yet(a fixed 64M is set). We
>> will randomize it in the next patch.
>>
>> Signed-off-by: Jason Yan <yanaijie@huawei.com>
>> Cc: Diana Craciun <diana.craciun@nxp.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Nicholas Piggin <npiggin@gmail.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> Tested-by: Diana Craciun <diana.craciun@nxp.com>
>> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
>> ---
>>   arch/powerpc/Kconfig                          | 11 ++++
>>   arch/powerpc/kernel/Makefile                  |  1 +
>>   arch/powerpc/kernel/early_32.c                |  2 +-
>>   arch/powerpc/kernel/fsl_booke_entry_mapping.S | 17 +++--
>>   arch/powerpc/kernel/head_fsl_booke.S          | 13 +++-
>>   arch/powerpc/kernel/kaslr_booke.c             | 62 +++++++++++++++++++
>>   arch/powerpc/mm/mmu_decl.h                    |  7 +++
>>   arch/powerpc/mm/nohash/fsl_booke.c            |  7 ++-
>>   8 files changed, 105 insertions(+), 15 deletions(-)
>>   create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>>

[...]

>> diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
>> new file mode 100644
>> index 000000000000..f8dc60534ac1
>> --- /dev/null
>> +++ b/arch/powerpc/kernel/kaslr_booke.c
> 
> Shouldn't this go under arch/powerpc/mm/nohash?
> 
>> +/*
>> + * To see if we need to relocate the kernel to a random offset
>> + * void *dt_ptr - address of the device tree
>> + * phys_addr_t size - size of the first memory block
>> + */
>> +notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
>> +{
>> +	unsigned long tlb_virt;
>> +	phys_addr_t tlb_phys;
>> +	unsigned long offset;
>> +	unsigned long kernel_sz;
>> +
>> +	kernel_sz = (unsigned long)_end - KERNELBASE;
> 
> Why KERNELBASE and not kernstart_addr?
> 
>> +
>> +	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
>> +
>> +	if (offset == 0)
>> +		return;
>> +
>> +	kernstart_virt_addr += offset;
>> +	kernstart_addr += offset;
>> +
>> +	is_second_reloc = 1;
>> +
>> +	if (offset >= SZ_64M) {
>> +		tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
>> +		tlb_phys = round_down(kernstart_addr, SZ_64M);
> 
> If kernstart_addr wasn't 64M-aligned before adding offset, then "offset
>> = SZ_64M" is not necessarily going to detect when you've crossed a
> mapping boundary.
> 
>> +
>> +		/* Create kernel map to relocate in */
>> +		create_tlb_entry(tlb_phys, tlb_virt, 1);
>> +	}
>> +
>> +	/* Copy the kernel to it's new location and run */
>> +	memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);
>> +
>> +	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);
>> +}
> 
> After copying, call flush_icache_range() on the destination.

Function copy_and_flush() does the copy and the flush. I think it should 
be used instead of memcpy() + flush_icache_range()

Christophe

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure
  2019-08-28  4:54   ` Scott Wood
  2019-08-28  5:47     ` Christophe Leroy
@ 2019-08-28 11:03     ` Jason Yan
  2019-08-28 16:44       ` Scott Wood
  1 sibling, 1 reply; 31+ messages in thread
From: Jason Yan @ 2019-08-28 11:03 UTC (permalink / raw)
  To: Scott Wood
  Cc: wangkefeng.wang, keescook, kernel-hardening, thunder.leizhen,
	linux-kernel, npiggin, jingxiangfeng, diana.craciun, paulus,
	zhaohongjiang, fanchengyang, linuxppc-dev, yebin10



On 2019/8/28 12:54, Scott Wood wrote:
> On Fri, Aug 09, 2019 at 06:07:54PM +0800, Jason Yan wrote:
>> This patch add support to boot kernel from places other than KERNELBASE.
>> Since CONFIG_RELOCATABLE has already supported, what we need to do is
>> map or copy kernel to a proper place and relocate. Freescale Book-E
>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>> entries are not suitable to map the kernel directly in a randomized
>> region, so we chose to copy the kernel to a proper place and restart to
>> relocate.
>>
>> The offset of the kernel was not randomized yet(a fixed 64M is set). We
>> will randomize it in the next patch.
>>
>> Signed-off-by: Jason Yan <yanaijie@huawei.com>
>> Cc: Diana Craciun <diana.craciun@nxp.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Nicholas Piggin <npiggin@gmail.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> Tested-by: Diana Craciun <diana.craciun@nxp.com>
>> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
>> ---
>>   arch/powerpc/Kconfig                          | 11 ++++
>>   arch/powerpc/kernel/Makefile                  |  1 +
>>   arch/powerpc/kernel/early_32.c                |  2 +-
>>   arch/powerpc/kernel/fsl_booke_entry_mapping.S | 17 +++--
>>   arch/powerpc/kernel/head_fsl_booke.S          | 13 +++-
>>   arch/powerpc/kernel/kaslr_booke.c             | 62 +++++++++++++++++++
>>   arch/powerpc/mm/mmu_decl.h                    |  7 +++
>>   arch/powerpc/mm/nohash/fsl_booke.c            |  7 ++-
>>   8 files changed, 105 insertions(+), 15 deletions(-)
>>   create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>>
>> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
>> index 77f6ebf97113..710c12ef7159 100644
>> --- a/arch/powerpc/Kconfig
>> +++ b/arch/powerpc/Kconfig
>> @@ -548,6 +548,17 @@ config RELOCATABLE
>>   	  setting can still be useful to bootwrappers that need to know the
>>   	  load address of the kernel (eg. u-boot/mkimage).
>>   
>> +config RANDOMIZE_BASE
>> +	bool "Randomize the address of the kernel image"
>> +	depends on (FSL_BOOKE && FLATMEM && PPC32)
>> +	depends on RELOCATABLE
>> +	help
>> +	  Randomizes the virtual address at which the kernel image is
>> +	  loaded, as a security feature that deters exploit attempts
>> +	  relying on knowledge of the location of kernel internals.
>> +
>> +	  If unsure, say N.
>> +
> 
> Why is N the safe default (other than concerns about code maturity,
> though arm64 and mips don't seem to have updated this recommendation
> after several years)?  On x86 this defaults to Y.
> 

Actually I would like to make this default to Y. At the beginning I was
just not sure whether people would like this feature or not, so I had to
be more careful.

>> diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
>> index f4d3eaae54a9..641920d4f694 100644
>> --- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
>> +++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
>> @@ -155,23 +155,22 @@ skpinv:	addi	r6,r6,1				/* Increment */
>>   
>>   #if defined(ENTRY_MAPPING_BOOT_SETUP)
>>   
>> -/* 6. Setup KERNELBASE mapping in TLB1[0] */
>> +/* 6. Setup kernstart_virt_addr mapping in TLB1[0] */
>>   	lis	r6,0x1000		/* Set MAS0(TLBSEL) = TLB1(1), ESEL = 0 */
>>   	mtspr	SPRN_MAS0,r6
>>   	lis	r6,(MAS1_VALID|MAS1_IPROT)@h
>>   	ori	r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
>>   	mtspr	SPRN_MAS1,r6
>> -	lis	r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, MAS2_M_IF_NEEDED)@h
>> -	ori	r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, MAS2_M_IF_NEEDED)@l
>> -	mtspr	SPRN_MAS2,r6
>> +	lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
>> +	ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
>> +	and     r6,r6,r20
>> +	ori	r6,r6,MAS2_M_IF_NEEDED@l
>> +	mtspr   SPRN_MAS2,r6
> 
> Please use tabs rather than spaces between the mnemonic and the
> arguments.
> 
> It looks like that was the last user of MAS2_VAL so let's remove it.
> 

OK.

>> diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
>> new file mode 100644
>> index 000000000000..f8dc60534ac1
>> --- /dev/null
>> +++ b/arch/powerpc/kernel/kaslr_booke.c
> 
> Shouldn't this go under arch/powerpc/mm/nohash?
> 
>> +/*
>> + * To see if we need to relocate the kernel to a random offset
>> + * void *dt_ptr - address of the device tree
>> + * phys_addr_t size - size of the first memory block
>> + */
>> +notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
>> +{
>> +	unsigned long tlb_virt;
>> +	phys_addr_t tlb_phys;
>> +	unsigned long offset;
>> +	unsigned long kernel_sz;
>> +
>> +	kernel_sz = (unsigned long)_end - KERNELBASE;
> 
> Why KERNELBASE and not kernstart_addr?
> 

Did you mean kernstart_virt_addr? It should be kernstart_virt_addr.

>> +
>> +	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
>> +
>> +	if (offset == 0)
>> +		return;
>> +
>> +	kernstart_virt_addr += offset;
>> +	kernstart_addr += offset;
>> +
>> +	is_second_reloc = 1;
>> +
>> +	if (offset >= SZ_64M) {
>> +		tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
>> +		tlb_phys = round_down(kernstart_addr, SZ_64M);
> 
> If kernstart_addr wasn't 64M-aligned before adding offset, then "offset
>> = SZ_64M" is not necessarily going to detect when you've crossed a
> mapping boundary.
> >> +
>> +		/* Create kernel map to relocate in */
>> +		create_tlb_entry(tlb_phys, tlb_virt, 1);
>> +	}
>> +
>> +	/* Copy the kernel to it's new location and run */
>> +	memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);
>> +
>> +	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);
>> +}
> 
> After copying, call flush_icache_range() on the destination.
> 

OK

>> diff --git a/arch/powerpc/mm/nohash/fsl_booke.c b/arch/powerpc/mm/nohash/fsl_booke.c
>> index 556e3cd52a35..2dc27cf88add 100644
>> --- a/arch/powerpc/mm/nohash/fsl_booke.c
>> +++ b/arch/powerpc/mm/nohash/fsl_booke.c
>> @@ -263,7 +263,8 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
>>   int __initdata is_second_reloc;
>>   notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
>>   {
>> -	unsigned long base = KERNELBASE;
>> +	unsigned long base = kernstart_virt_addr;
>> +	phys_addr_t size;
>>   
>>   	kernstart_addr = start;
>>   	if (is_second_reloc) {
>> @@ -291,7 +292,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
>>   	start &= ~0x3ffffff;
>>   	base &= ~0x3ffffff;
>>   	virt_phys_offset = base - start;
>> -	early_get_first_memblock_info(__va(dt_ptr), NULL);
>> +	early_get_first_memblock_info(__va(dt_ptr), &size);
>>   	/*
>>   	 * We now get the memstart_addr, then we should check if this
>>   	 * address is the same as what the PAGE_OFFSET map to now. If
>> @@ -316,6 +317,8 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
>>   		/* We should never reach here */
>>   		panic("Relocation error");
>>   	}
>> +
>> +	kaslr_early_init(__va(dt_ptr), size);
> 
> Are you assuming that available memory starts at physical address zero?
> This isn't true of some partitioning scenarios, or in a kdump crash
> kernel.
> 

I'm not assuming that, but I haven't tested that case yet. I will
reconsider, test these scenarios, and fix any bugs.

> -Scott
> 
> .
> 


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-08-28  5:08       ` Scott Wood
@ 2019-08-28 13:01         ` Michael Ellerman
  0 siblings, 0 replies; 31+ messages in thread
From: Michael Ellerman @ 2019-08-28 13:01 UTC (permalink / raw)
  To: Scott Wood
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, zhaohongjiang, Jason Yan,
	linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening

Scott Wood <oss@buserror.net> writes:
> On Tue, 2019-08-27 at 11:33 +1000, Michael Ellerman wrote:
>> Jason Yan <yanaijie@huawei.com> writes:
>> > A polite ping :)
>> > 
>> > What else should I do now?
>> 
>> That's a good question.
>> 
>> Scott, are you still maintaining FSL bits, 
>
> Sort of... now that it's become very low volume, it's easy to forget when
> something does show up (or miss it if I'm not CCed).  It'd probably help if I
> were to just ack patches instead of thinking "I'll do a pull request for this
> later" when it's just one or two patches per cycle.

Yep, understand. Just sending acks is totally fine if you don't have
enough for a pull request.

cheers

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure
  2019-08-28 11:03     ` Jason Yan
@ 2019-08-28 16:44       ` Scott Wood
  0 siblings, 0 replies; 31+ messages in thread
From: Scott Wood @ 2019-08-28 16:44 UTC (permalink / raw)
  To: Jason Yan
  Cc: wangkefeng.wang, keescook, kernel-hardening, thunder.leizhen,
	linux-kernel, npiggin, jingxiangfeng, diana.craciun, paulus,
	zhaohongjiang, fanchengyang, linuxppc-dev, yebin10

On Wed, 2019-08-28 at 19:03 +0800, Jason Yan wrote:
> 
> On 2019/8/28 12:54, Scott Wood wrote:
> > On Fri, Aug 09, 2019 at 06:07:54PM +0800, Jason Yan wrote:
> > > +/*
> > > + * To see if we need to relocate the kernel to a random offset
> > > + * void *dt_ptr - address of the device tree
> > > + * phys_addr_t size - size of the first memory block
> > > + */
> > > +notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
> > > +{
> > > +	unsigned long tlb_virt;
> > > +	phys_addr_t tlb_phys;
> > > +	unsigned long offset;
> > > +	unsigned long kernel_sz;
> > > +
> > > +	kernel_sz = (unsigned long)_end - KERNELBASE;
> > 
> > Why KERNELBASE and not kernstart_addr?
> > 
> 
> Did you mean kernstart_virt_addr? It should be kernstart_virt_addr.

Yes, kernstart_virt_addr.  KERNELBASE will be incorrect if the kernel was
loaded at a nonzero physical address without CONFIG_PHYSICAL_START being
adjusted to match.
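
i.e. something like (sketch):

	kernel_sz = (unsigned long)_end - kernstart_virt_addr;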

-Scott



^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-08-28  4:05 ` Scott Wood
  2019-08-28  4:59   ` Scott Wood
@ 2019-08-29  1:57   ` Jason Yan
  2019-09-10  5:34   ` Jason Yan
  2 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-29  1:57 UTC (permalink / raw)
  To: Scott Wood, mpe, linuxppc-dev, diana.craciun, christophe.leroy,
	benh, paulus, npiggin, keescook, kernel-hardening
  Cc: wangkefeng.wang, linux-kernel, jingxiangfeng, zhaohongjiang,
	thunder.leizhen, fanchengyang, yebin10



On 2019/8/28 12:05, Scott Wood wrote:
> On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>> feature that deters exploit attempts relying on knowledge of the location
>> of kernel internals.
>>
>> Since CONFIG_RELOCATABLE has already supported, what we need to do is
>> map or copy kernel to a proper place and relocate.
> 
> Have you tested this with a kernel that was loaded at a non-zero address?  I
> tried loading a kernel at 0x04000000 (by changing the address in the uImage,
> and setting bootm_low to 04000000 in U-Boot), and it works without
> CONFIG_RANDOMIZE and fails with.
> 

Not yet. I will test this kind of case in the next few days. Thank you
so much. If there are any other corner cases that need to be tested,
please let me know.

>>   Freescale Book-E
>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>> entries are not suitable to map the kernel directly in a randomized
>> region, so we chose to copy the kernel to a proper place and restart to
>> relocate.
>>
>> Entropy is derived from the banner and timer base, which will change every
>> build and boot. This not so much safe so additionally the bootloader may
>> pass entropy via the /chosen/kaslr-seed node in device tree.
> 
> How complicated would it be to directly access the HW RNG (if present) that
> early in the boot?  It'd be nice if a U-Boot update weren't required (and
> particularly concerning that KASLR would appear to work without a U-Boot
> update, but without decent entropy).
> 
> -Scott
> 


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-08-28  4:59   ` Scott Wood
@ 2019-08-29  2:41     ` Jason Yan
  0 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-29  2:41 UTC (permalink / raw)
  To: Scott Wood, mpe, linuxppc-dev, diana.craciun, christophe.leroy,
	benh, paulus, npiggin, keescook, kernel-hardening
  Cc: wangkefeng.wang, linux-kernel, jingxiangfeng, zhaohongjiang,
	thunder.leizhen, fanchengyang, yebin10



On 2019/8/28 12:59, Scott Wood wrote:
> On Tue, 2019-08-27 at 23:05 -0500, Scott Wood wrote:
>> On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
>>>   Freescale Book-E
>>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>>> entries are not suitable to map the kernel directly in a randomized
>>> region, so we chose to copy the kernel to a proper place and restart to
>>> relocate.
>>>
>>> Entropy is derived from the banner and timer base, which will change every
>>> build and boot. This not so much safe so additionally the bootloader may
>>> pass entropy via the /chosen/kaslr-seed node in device tree.
>>
>> How complicated would it be to directly access the HW RNG (if present) that
>> early in the boot?  It'd be nice if a U-Boot update weren't required (and
>> particularly concerning that KASLR would appear to work without a U-Boot
>> update, but without decent entropy).
> 
> OK, I see that kaslr-seed is used on some other platforms, though arm64 aborts
> KASLR if it doesn't get a seed.  I'm not sure if that's better than a loud
> warning message (or if it was a conscious choice rather than just not having
> an alternative implemented), but silently using poor entropy for something
> like this seems bad.
> 

Even with not-so-good entropy, it still raises the attacker's cost.
The same strategy exists on x86, where KASLR falls back to RDTSC when
X86_FEATURE_RDRAND is not supported. I agree that printing a warning
message would be better for reminding people in this situation.
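
As a rough illustration of the fallback-plus-warning pattern being
discussed, here is a hedged sketch. kaslr_get_seed() and
get_kaslr_seed_from_chosen() are hypothetical names, not the actual
code in this series, and it assumes pr_warn(), linux_banner and mftb()
are usable this early in boot:

	/* needs <linux/printk.h> for linux_banner/pr_warn() and
	 * <asm/reg.h> for mftb()
	 */
	static unsigned long __init kaslr_get_seed(const void *fdt)
	{
		const char *p = linux_banner;
		unsigned long seed = 0;

		/* Preferred: entropy handed over by the bootloader. */
		if (get_kaslr_seed_from_chosen(fdt, &seed))
			return seed;

		pr_warn("KASLR: no /chosen/kaslr-seed, using weak entropy\n");

		/* Weak fallback: build banner (changes every build)... */
		while (*p)
			seed = (seed << 5) + seed + *p++;
		/* ...mixed with the timebase (changes every boot). */
		seed ^= mftb();

		return seed;
	}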

> -Scott
> 


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure
  2019-08-28  5:47     ` Christophe Leroy
@ 2019-08-29  6:26       ` Jason Yan
  0 siblings, 0 replies; 31+ messages in thread
From: Jason Yan @ 2019-08-29  6:26 UTC (permalink / raw)
  To: Christophe Leroy, Scott Wood
  Cc: mpe, linuxppc-dev, diana.craciun, benh, paulus, npiggin,
	keescook, kernel-hardening, wangkefeng.wang, linux-kernel,
	jingxiangfeng, zhaohongjiang, thunder.leizhen, fanchengyang,
	yebin10



On 2019/8/28 13:47, Christophe Leroy wrote:
> 
> 
> Le 28/08/2019 à 06:54, Scott Wood a écrit :
>> On Fri, Aug 09, 2019 at 06:07:54PM +0800, Jason Yan wrote:
>>> This patch add support to boot kernel from places other than KERNELBASE.
>>> Since CONFIG_RELOCATABLE has already supported, what we need to do is
>>> map or copy kernel to a proper place and relocate. Freescale Book-E
>>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>>> entries are not suitable to map the kernel directly in a randomized
>>> region, so we chose to copy the kernel to a proper place and restart to
>>> relocate.
>>>
>>> The offset of the kernel was not randomized yet(a fixed 64M is set). We
>>> will randomize it in the next patch.
>>>
>>> Signed-off-by: Jason Yan <yanaijie@huawei.com>
>>> Cc: Diana Craciun <diana.craciun@nxp.com>
>>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>>> Cc: Paul Mackerras <paulus@samba.org>
>>> Cc: Nicholas Piggin <npiggin@gmail.com>
>>> Cc: Kees Cook <keescook@chromium.org>
>>> Tested-by: Diana Craciun <diana.craciun@nxp.com>
>>> Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>
>>> ---
>>>   arch/powerpc/Kconfig                          | 11 ++++
>>>   arch/powerpc/kernel/Makefile                  |  1 +
>>>   arch/powerpc/kernel/early_32.c                |  2 +-
>>>   arch/powerpc/kernel/fsl_booke_entry_mapping.S | 17 +++--
>>>   arch/powerpc/kernel/head_fsl_booke.S          | 13 +++-
>>>   arch/powerpc/kernel/kaslr_booke.c             | 62 +++++++++++++++++++
>>>   arch/powerpc/mm/mmu_decl.h                    |  7 +++
>>>   arch/powerpc/mm/nohash/fsl_booke.c            |  7 ++-
>>>   8 files changed, 105 insertions(+), 15 deletions(-)
>>>   create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>>>
> 
> [...]
> 
>>> diff --git a/arch/powerpc/kernel/kaslr_booke.c 
>>> b/arch/powerpc/kernel/kaslr_booke.c
>>> new file mode 100644
>>> index 000000000000..f8dc60534ac1
>>> --- /dev/null
>>> +++ b/arch/powerpc/kernel/kaslr_booke.c
>>
>> Shouldn't this go under arch/powerpc/mm/nohash?
>>
>>> +/*
>>> + * To see if we need to relocate the kernel to a random offset
>>> + * void *dt_ptr - address of the device tree
>>> + * phys_addr_t size - size of the first memory block
>>> + */
>>> +notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
>>> +{
>>> +    unsigned long tlb_virt;
>>> +    phys_addr_t tlb_phys;
>>> +    unsigned long offset;
>>> +    unsigned long kernel_sz;
>>> +
>>> +    kernel_sz = (unsigned long)_end - KERNELBASE;
>>
>> Why KERNELBASE and not kernstart_addr?
>>
>>> +
>>> +    offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
>>> +
>>> +    if (offset == 0)
>>> +        return;
>>> +
>>> +    kernstart_virt_addr += offset;
>>> +    kernstart_addr += offset;
>>> +
>>> +    is_second_reloc = 1;
>>> +
>>> +    if (offset >= SZ_64M) {
>>> +        tlb_virt = round_down(kernstart_virt_addr, SZ_64M);
>>> +        tlb_phys = round_down(kernstart_addr, SZ_64M);
>>
>> If kernstart_addr wasn't 64M-aligned before adding offset, then "offset
>>> = SZ_64M" is not necessarily going to detect when you've crossed a
>> mapping boundary.
>>
>>> +
>>> +        /* Create kernel map to relocate in */
>>> +        create_tlb_entry(tlb_phys, tlb_virt, 1);
>>> +    }
>>> +
>>> +    /* Copy the kernel to it's new location and run */
>>> +    memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);
>>> +
>>> +    reloc_kernel_entry(dt_ptr, kernstart_virt_addr);
>>> +}
>>
>> After copying, call flush_icache_range() on the destination.
> 
> Function copy_and_flush() does the copy and the flush. I think it should 
> be used instead of memcpy() + flush_icache_range()
> 

Hi Christophe,

Thanks for the suggestion. But copy_and_flush() is not currently
available to the fsl_booke code; maybe we should move that function to
misc.S?
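
For concreteness, a hedged sketch of the memcpy()-plus-flush variant
(assuming flush_icache_range() is usable at this point; switching to
copy_and_flush() as suggested above would replace both calls):

	/* Copy the kernel to its new location... */
	memcpy((void *)kernstart_virt_addr, (void *)KERNELBASE, kernel_sz);
	/*
	 * ...and make the copy coherent for instruction fetch before
	 * jumping into it.
	 */
	flush_icache_range(kernstart_virt_addr,
			   kernstart_virt_addr + kernel_sz);

	reloc_kernel_entry(dt_ptr, kernstart_virt_addr);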

> Christophe
> 


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-08-28  4:05 ` Scott Wood
  2019-08-28  4:59   ` Scott Wood
  2019-08-29  1:57   ` Jason Yan
@ 2019-09-10  5:34   ` Jason Yan
  2019-09-14 14:28     ` Scott Wood
  2 siblings, 1 reply; 31+ messages in thread
From: Jason Yan @ 2019-09-10  5:34 UTC (permalink / raw)
  To: Scott Wood, mpe, linuxppc-dev, diana.craciun, christophe.leroy,
	benh, paulus, npiggin, keescook, kernel-hardening
  Cc: wangkefeng.wang, linux-kernel, jingxiangfeng, zhaohongjiang,
	thunder.leizhen, fanchengyang, yebin10

Hi Scott,

On 2019/8/28 12:05, Scott Wood wrote:
> On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>> feature that deters exploit attempts relying on knowledge of the location
>> of kernel internals.
>>
>> Since CONFIG_RELOCATABLE has already supported, what we need to do is
>> map or copy kernel to a proper place and relocate.
> 
> Have you tested this with a kernel that was loaded at a non-zero address?  I
> tried loading a kernel at 0x04000000 (by changing the address in the uImage,
> and setting bootm_low to 04000000 in U-Boot), and it works without
> CONFIG_RANDOMIZE and fails with.
> 

How did you change the load address of the uImage: by changing the
kernel config CONFIG_PHYSICAL_START, or via the "-a/-e" parameters of
mkimage? I tried both, but it did not work either with or without
CONFIG_RANDOMIZE.


Thanks,
Jason

>>   Freescale Book-E
>> parts expect lowmem to be mapped by fixed TLB entries(TLB1). The TLB1
>> entries are not suitable to map the kernel directly in a randomized
>> region, so we chose to copy the kernel to a proper place and restart to
>> relocate.
>>
>> Entropy is derived from the banner and timer base, which will change every
>> build and boot. This not so much safe so additionally the bootloader may
>> pass entropy via the /chosen/kaslr-seed node in device tree.
> 
> How complicated would it be to directly access the HW RNG (if present) that
> early in the boot?  It'd be nice if a U-Boot update weren't required (and
> particularly concerning that KASLR would appear to work without a U-Boot
> update, but without decent entropy).
> 
> -Scott
> 


^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32
  2019-09-10  5:34   ` Jason Yan
@ 2019-09-14 14:28     ` Scott Wood
  0 siblings, 0 replies; 31+ messages in thread
From: Scott Wood @ 2019-09-14 14:28 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, christophe.leroy,
	benh, paulus, npiggin, keescook, kernel-hardening
  Cc: wangkefeng.wang, linux-kernel, jingxiangfeng, zhaohongjiang,
	thunder.leizhen, fanchengyang, yebin10

On Tue, 2019-09-10 at 13:34 +0800, Jason Yan wrote:
> Hi Scott,
> 
> On 2019/8/28 12:05, Scott Wood wrote:
> > On Fri, 2019-08-09 at 18:07 +0800, Jason Yan wrote:
> > > This series implements KASLR for powerpc/fsl_booke/32, as a security
> > > feature that deters exploit attempts relying on knowledge of the
> > > location
> > > of kernel internals.
> > > 
> > > Since CONFIG_RELOCATABLE has already supported, what we need to do is
> > > map or copy kernel to a proper place and relocate.
> > 
> > Have you tested this with a kernel that was loaded at a non-zero
> > address?  I
> > tried loading a kernel at 0x04000000 (by changing the address in the
> > uImage,
> > and setting bootm_low to 04000000 in U-Boot), and it works without
> > CONFIG_RANDOMIZE and fails with.
> > 
> 
> How did you change the load address of the uImage, by changing the
> kernel config CONFIG_PHYSICAL_START or the "-a/-e" parameter of mkimage?
> I tried both, but it did not work with or without CONFIG_RANDOMIZE.

With mkimage.  Did you set bootm_low in U-Boot as described above?  Was
CONFIG_RELOCATABLE set in the non-CONFIG_RANDOMIZE kernel?

-Scott



^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2019-09-14 14:35 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-09 10:07 [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
2019-08-09 10:07 ` [PATCH v6 01/12] powerpc: unify definition of M_IF_NEEDED Jason Yan
2019-08-09 10:07 ` [PATCH v6 02/12] powerpc: move memstart_addr and kernstart_addr to init-common.c Jason Yan
2019-08-09 10:07 ` [PATCH v6 03/12] powerpc: introduce kernstart_virt_addr to store the kernel base Jason Yan
2019-08-09 10:07 ` [PATCH v6 04/12] powerpc/fsl_booke/32: introduce create_tlb_entry() helper Jason Yan
2019-08-27 22:07   ` Scott Wood
2019-08-28  5:33     ` Jason Yan
2019-08-09 10:07 ` [PATCH v6 05/12] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper Jason Yan
2019-08-09 10:07 ` [PATCH v6 06/12] powerpc/fsl_booke/32: implement KASLR infrastructure Jason Yan
2019-08-28  4:54   ` Scott Wood
2019-08-28  5:47     ` Christophe Leroy
2019-08-29  6:26       ` Jason Yan
2019-08-28 11:03     ` Jason Yan
2019-08-28 16:44       ` Scott Wood
2019-08-09 10:07 ` [PATCH v6 07/12] powerpc/fsl_booke/32: randomize the kernel image offset Jason Yan
2019-08-09 10:07 ` [PATCH v6 08/12] powerpc/fsl_booke/kaslr: clear the original kernel if randomized Jason Yan
2019-08-09 10:07 ` [PATCH v6 09/12] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter Jason Yan
2019-08-09 10:07 ` [PATCH v6 10/12] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic Jason Yan
2019-08-09 10:07 ` [PATCH v6 11/12] powerpc/fsl_booke/kaslr: export offset in VMCOREINFO ELF notes Jason Yan
2019-08-09 10:08 ` [PATCH v6 12/12] powerpc/fsl_booke/32: Document KASLR implementation Jason Yan
2019-08-19  6:12 ` [PATCH v6 00/12] implement KASLR for powerpc/fsl_booke/32 Jason Yan
2019-08-27  0:39   ` Jason Yan
2019-08-27  1:33     ` Michael Ellerman
2019-08-28  5:08       ` Scott Wood
2019-08-28 13:01         ` Michael Ellerman
2019-08-28  4:05 ` Scott Wood
2019-08-28  4:59   ` Scott Wood
2019-08-29  2:41     ` Jason Yan
2019-08-29  1:57   ` Jason Yan
2019-09-10  5:34   ` Jason Yan
2019-09-14 14:28     ` Scott Wood
