* [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32
@ 2019-07-17  8:06 Jason Yan
  2019-07-17  8:06 ` [RFC PATCH 01/10] powerpc: unify definition of M_IF_NEEDED Jason Yan
                   ` (11 more replies)
  0 siblings, 12 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

This series implements KASLR for powerpc/fsl_booke/32, as a security
feature that deters exploit attempts relying on knowledge of the location
of kernel internals.

Since CONFIG_RELOCATABLE is already supported, all we need to do is
map or copy the kernel to a suitable place and relocate it. Freescale
Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1).
The TLB1 entries are not suitable for mapping the kernel directly in a
randomized region, so we chose to copy the kernel to a suitable place
and restart to relocate.

Entropy is derived from the banner and the timer base, which change
with every build and boot. This is not particularly secure on its own,
so additionally the bootloader may pass entropy via the
/chosen/kaslr-seed node in the device tree.

We use the first 512M of low memory to randomize the kernel image.
The memory is split into 64M zones. The lower 8 bits of the entropy
decide the index of the 64M zone, and a 16K-aligned offset inside that
zone is then chosen for the kernel.

    KERNELBASE

        |-->   64M   <--|
        |               |
        +---------------+    +----------------+---------------+
        |               |....|    |kernel|    |               |
        +---------------+    +----------------+---------------+
        |                         |
        |----->   offset    <-----|

                              kimage_vaddr

We also check for overlaps with areas such as the dtb, the initrd and
the crashkernel region. If no suitable area can be found, kaslr is
disabled and the kernel boots from its original location.
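
To make the arithmetic concrete, here is a minimal standalone sketch of
the zone/offset selection (the helper name is hypothetical and the
overlap handling is elided; the real logic is kaslr_choose_location()
in patch 7):

    #define SZ_16K	0x00004000UL
    #define SZ_64M	0x04000000UL

    /* Hypothetical: map a raw entropy word to a kernel offset inside
     * the first linear_sz bytes of lowmem (at most 512M).
     */
    static unsigned long pick_kaslr_offset(unsigned long entropy,
					   unsigned long linear_sz)
    {
	    unsigned long zone = (entropy & 0xFF) % (linear_sz / SZ_64M);
	    unsigned long off = (entropy % SZ_64M) & ~(SZ_16K - 1);

	    return zone * SZ_64M + off;
    }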

Jason Yan (10):
  powerpc: unify definition of M_IF_NEEDED
  powerpc: move memstart_addr and kernstart_addr to init-common.c
  powerpc: introduce kimage_vaddr to store the kernel base
  powerpc/fsl_booke/32: introduce create_tlb_entry() helper
  powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
  powerpc/fsl_booke/32: implement KASLR infrastructure
  powerpc/fsl_booke/32: randomize the kernel image offset
  powerpc/fsl_booke/kaslr: clear the original kernel if randomized
  powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
  powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

 arch/powerpc/Kconfig                          |  11 +
 arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
 arch/powerpc/include/asm/page.h               |   7 +
 arch/powerpc/kernel/Makefile                  |   1 +
 arch/powerpc/kernel/early_32.c                |   2 +-
 arch/powerpc/kernel/exceptions-64e.S          |  10 -
 arch/powerpc/kernel/fsl_booke_entry_mapping.S |  23 +-
 arch/powerpc/kernel/head_fsl_booke.S          |  61 ++-
 arch/powerpc/kernel/kaslr_booke.c             | 439 ++++++++++++++++++
 arch/powerpc/kernel/machine_kexec.c           |   1 +
 arch/powerpc/kernel/misc_64.S                 |   5 -
 arch/powerpc/kernel/setup-common.c            |  23 +
 arch/powerpc/mm/init-common.c                 |   7 +
 arch/powerpc/mm/init_32.c                     |   5 -
 arch/powerpc/mm/init_64.c                     |   5 -
 arch/powerpc/mm/mmu_decl.h                    |  10 +
 arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
 17 files changed, 580 insertions(+), 48 deletions(-)
 create mode 100644 arch/powerpc/kernel/kaslr_booke.c

-- 
2.17.2


* [RFC PATCH 01/10] powerpc: unify definition of M_IF_NEEDED
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
@ 2019-07-17  8:06 ` Jason Yan
  2019-07-29 10:59   ` Christophe Leroy
  2019-07-17  8:06 ` [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c Jason Yan
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

M_IF_NEEDED is defined identically in several places. Move it to a
common place.
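
For orientation, the unified macro feeds the WIMGE bits of MAS2. A
hedged C-level sketch of a consumer (MAS2_VAL and BOOK3E_PAGESZ_64M are
existing definitions; the variable is illustrative):

    /* Set the M (memory-coherence) bit only on parts that need it. */
    unsigned long mas2 = MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M,
				  M_IF_NEEDED);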

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/include/asm/nohash/mmu-book3e.h  | 10 ++++++++++
 arch/powerpc/kernel/exceptions-64e.S          | 10 ----------
 arch/powerpc/kernel/fsl_booke_entry_mapping.S | 10 ----------
 arch/powerpc/kernel/misc_64.S                 |  5 -----
 4 files changed, 10 insertions(+), 25 deletions(-)

diff --git a/arch/powerpc/include/asm/nohash/mmu-book3e.h b/arch/powerpc/include/asm/nohash/mmu-book3e.h
index 4c9777d256fb..0877362e48fa 100644
--- a/arch/powerpc/include/asm/nohash/mmu-book3e.h
+++ b/arch/powerpc/include/asm/nohash/mmu-book3e.h
@@ -221,6 +221,16 @@
 #define TLBILX_T_CLASS2			6
 #define TLBILX_T_CLASS3			7
 
+/*
+ * The mapping only needs to be cache-coherent on SMP, except on
+ * Freescale e500mc derivatives where it's also needed for coherent DMA.
+ */
+#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
+#define M_IF_NEEDED	MAS2_M
+#else
+#define M_IF_NEEDED	0
+#endif
+
 #ifndef __ASSEMBLY__
 #include <asm/bug.h>
 
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 1cfb3da4a84a..fd49ec07ce4a 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -1342,16 +1342,6 @@ skpinv:	addi	r6,r6,1				/* Increment */
 	sync
 	isync
 
-/*
- * The mapping only needs to be cache-coherent on SMP, except on
- * Freescale e500mc derivatives where it's also needed for coherent DMA.
- */
-#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
-#define M_IF_NEEDED	MAS2_M
-#else
-#define M_IF_NEEDED	0
-#endif
-
 /* 6. Setup KERNELBASE mapping in TLB[0]
  *
  * r3 = MAS0 w/TLBSEL & ESEL for the entry we started in
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index ea065282b303..de0980945510 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -153,16 +153,6 @@ skpinv:	addi	r6,r6,1				/* Increment */
 	tlbivax 0,r9
 	TLBSYNC
 
-/*
- * The mapping only needs to be cache-coherent on SMP, except on
- * Freescale e500mc derivatives where it's also needed for coherent DMA.
- */
-#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
-#define M_IF_NEEDED	MAS2_M
-#else
-#define M_IF_NEEDED	0
-#endif
-
 #if defined(ENTRY_MAPPING_BOOT_SETUP)
 
 /* 6. Setup KERNELBASE mapping in TLB1[0] */
diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index b55a7b4cb543..26074f92d4bc 100644
--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -432,11 +432,6 @@ kexec_create_tlb:
 	rlwimi	r9,r10,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r9) */
 
 /* Set up a temp identity mapping v:0 to p:0 and return to it. */
-#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
-#define M_IF_NEEDED	MAS2_M
-#else
-#define M_IF_NEEDED	0
-#endif
 	mtspr	SPRN_MAS0,r9
 
 	lis	r9,(MAS1_VALID|MAS1_IPROT)@h
-- 
2.17.2


* [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
  2019-07-17  8:06 ` [RFC PATCH 01/10] powerpc: unify definition of M_IF_NEEDED Jason Yan
@ 2019-07-17  8:06 ` Jason Yan
  2019-07-29 11:00   ` Christophe Leroy
  2019-07-29 14:31   ` Christoph Hellwig
  2019-07-17  8:06 ` [RFC PATCH 03/10] powerpc: introduce kimage_vaddr to store the kernel base Jason Yan
                   ` (9 subsequent siblings)
  11 siblings, 2 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

These two variables are defined in both init_32.c and init_64.c. Move
them to init-common.c.

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/mm/init-common.c | 5 +++++
 arch/powerpc/mm/init_32.c     | 5 -----
 arch/powerpc/mm/init_64.c     | 5 -----
 3 files changed, 5 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index a84da92920f7..9273c38009cb 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -21,6 +21,11 @@
 #include <asm/pgtable.h>
 #include <asm/kup.h>
 
+phys_addr_t memstart_addr = (phys_addr_t)~0ull;
+EXPORT_SYMBOL(memstart_addr);
+phys_addr_t kernstart_addr;
+EXPORT_SYMBOL(kernstart_addr);
+
 static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
 static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
 
diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
index b04896a88d79..872df48ae41b 100644
--- a/arch/powerpc/mm/init_32.c
+++ b/arch/powerpc/mm/init_32.c
@@ -56,11 +56,6 @@
 phys_addr_t total_memory;
 phys_addr_t total_lowmem;
 
-phys_addr_t memstart_addr = (phys_addr_t)~0ull;
-EXPORT_SYMBOL(memstart_addr);
-phys_addr_t kernstart_addr;
-EXPORT_SYMBOL(kernstart_addr);
-
 #ifdef CONFIG_RELOCATABLE
 /* Used in __va()/__pa() */
 long long virt_phys_offset;
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index a44f6281ca3a..c836f1269ee7 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -63,11 +63,6 @@
 
 #include <mm/mmu_decl.h>
 
-phys_addr_t memstart_addr = ~0;
-EXPORT_SYMBOL_GPL(memstart_addr);
-phys_addr_t kernstart_addr;
-EXPORT_SYMBOL_GPL(kernstart_addr);
-
 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 /*
  * Given an address within the vmemmap, determine the pfn of the page that
-- 
2.17.2


* [RFC PATCH 03/10] powerpc: introduce kimage_vaddr to store the kernel base
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
  2019-07-17  8:06 ` [RFC PATCH 01/10] powerpc: unify definition of M_IF_NEEDED Jason Yan
  2019-07-17  8:06 ` [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c Jason Yan
@ 2019-07-17  8:06 ` Jason Yan
  2019-07-29 11:00   ` Christophe Leroy
  2019-07-29 14:32   ` Christoph Hellwig
  2019-07-17  8:06 ` [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper Jason Yan
                   ` (8 subsequent siblings)
  11 siblings, 2 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

At present the kernel base is the fixed value KERNELBASE. To support
KASLR, we need a variable to store the actual kernel base.
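
For orientation, a sketch of how the variable is used once the series
lands (the offset update is lifted from patch 6):

    /* Boot-time default: the kernel still runs at KERNELBASE. */
    unsigned long kimage_vaddr = KERNELBASE;

    /* Patch 6, once a random offset has been chosen: */
    kimage_vaddr += offset;		/* new virtual base of the image */
    kernstart_addr += offset;	/* matching physical start */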

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/include/asm/page.h | 2 ++
 arch/powerpc/mm/init-common.c   | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 0d52f57fca04..60a68d3a54b1 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -315,6 +315,8 @@ void arch_free_page(struct page *page, int order);
 
 struct vm_area_struct;
 
+extern unsigned long kimage_vaddr;
+
 #include <asm-generic/memory_model.h>
 #endif /* __ASSEMBLY__ */
 #include <asm/slice.h>
diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
index 9273c38009cb..c7a98c73e5c1 100644
--- a/arch/powerpc/mm/init-common.c
+++ b/arch/powerpc/mm/init-common.c
@@ -25,6 +25,8 @@ phys_addr_t memstart_addr = (phys_addr_t)~0ull;
 EXPORT_SYMBOL(memstart_addr);
 phys_addr_t kernstart_addr;
 EXPORT_SYMBOL(kernstart_addr);
+unsigned long kimage_vaddr = KERNELBASE;
+EXPORT_SYMBOL(kimage_vaddr);
 
 static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
 static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
-- 
2.17.2


* [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (2 preceding siblings ...)
  2019-07-17  8:06 ` [RFC PATCH 03/10] powerpc: introduce kimage_vaddr to store the kernel base Jason Yan
@ 2019-07-17  8:06 ` Jason Yan
  2019-07-29 11:05   ` Christophe Leroy
  2019-07-17  8:06 ` [RFC PATCH 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper Jason Yan
                   ` (7 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

Add a new helper create_tlb_entry() to create a TLB entry from a given
virtual and physical address. This is a preparation for booting the
kernel at a randomized address.
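
For orientation, the prototype added to mmu_decl.h below, together with
its eventual caller from patch 6:

    void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);

    /* Patch 6 maps the 64M region around the randomized destination
     * into TLB1 entry 1 before copying the kernel there:
     */
    create_tlb_entry(round_down(kernstart_addr, SZ_64M),
		     round_down(kimage_vaddr, SZ_64M), 1);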

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/kernel/head_fsl_booke.S | 30 ++++++++++++++++++++++++++++
 arch/powerpc/mm/mmu_decl.h           |  1 +
 2 files changed, 31 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index adf0505dbe02..a57d44638031 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1114,6 +1114,36 @@ __secondary_hold_acknowledge:
 	.long	-1
 #endif
 
+/*
+ * Create a 64M tlb entry by address and entry index
+ * r3/r4 - physical address
+ * r5 - virtual address
+ * r6 - entry
+ */
+_GLOBAL(create_tlb_entry)
+	lis     r7,0x1000               /* Set MAS0(TLBSEL) = 1 */
+	rlwimi  r7,r6,16,4,15           /* Setup MAS0 = TLBSEL | ESEL(r6) */
+	mtspr   SPRN_MAS0,r7            /* Write MAS0 */
+
+	lis     r6,(MAS1_VALID|MAS1_IPROT)@h
+	ori     r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
+	mtspr   SPRN_MAS1,r6            /* Write MAS1 */
+
+	lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
+	ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
+	and     r6,r6,r5
+	ori	r6,r6,MAS2_M@l
+	mtspr   SPRN_MAS2,r6            /* Write MAS2(EPN) */
+
+	mr      r8,r4
+	ori     r8,r8,(MAS3_SW|MAS3_SR|MAS3_SX)
+	mtspr   SPRN_MAS3,r8            /* Write MAS3(RPN) */
+
+	tlbwe                           /* Write TLB */
+	isync
+	sync
+	blr
+
 /*
  * Create a tlb entry with the same effective and physical address as
  * the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 32c1a191c28a..d7737cf97cee 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -142,6 +142,7 @@ extern unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
 extern void adjust_total_lowmem(void);
 extern int switch_to_as1(void);
 extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
+extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
 #endif
 extern void loadcam_entry(unsigned int index);
 extern void loadcam_multi(int first_idx, int num, int tmp_idx);
-- 
2.17.2


* [RFC PATCH 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (3 preceding siblings ...)
  2019-07-17  8:06 ` [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper Jason Yan
@ 2019-07-17  8:06 ` Jason Yan
  2019-07-29 11:08   ` Christophe Leroy
  2019-07-17  8:06 ` [RFC PATCH 06/10] powerpc/fsl_booke/32: implement KASLR infrastructure Jason Yan
                   ` (6 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

Add a new helper reloc_kernel_entry() to jump back to the start of the
new kernel. After the kernel is copied to a randomized place, this
helper is used to enter it and start relocation again.
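
For orientation, the prototype added to mmu_decl.h below and its use in
patch 6 (the call does not return):

    void reloc_kernel_entry(void *fdt, int addr);

    /* Patch 6, after copying the image to its new location: */
    reloc_kernel_entry(dt_ptr, kimage_vaddr);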

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/kernel/head_fsl_booke.S | 16 ++++++++++++++++
 arch/powerpc/mm/mmu_decl.h           |  1 +
 2 files changed, 17 insertions(+)

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index a57d44638031..ce40f96dae20 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -1144,6 +1144,22 @@ _GLOBAL(create_tlb_entry)
 	sync
 	blr
 
+/*
+ * Return to the start of the relocated kernel and run again
+ * r3 - virtual address of fdt
+ * r4 - entry of the kernel
+ */
+_GLOBAL(reloc_kernel_entry)
+	mfmsr	r7
+	li	r8,(MSR_IS | MSR_DS)
+	andc	r7,r7,r8
+
+	mtspr	SPRN_SRR0,r4
+	mtspr	SPRN_SRR1,r7
+	isync
+	sync
+	rfi
+
 /*
  * Create a tlb entry with the same effective and physical address as
  * the tlb entry used by the current running code. But set the TS to 1.
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index d7737cf97cee..dae8e9177574 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -143,6 +143,7 @@ extern void adjust_total_lowmem(void);
 extern int switch_to_as1(void);
 extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
 extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
+extern void reloc_kernel_entry(void *fdt, int addr);
 #endif
 extern void loadcam_entry(unsigned int index);
 extern void loadcam_multi(int first_idx, int num, int tmp_idx);
-- 
2.17.2


* [RFC PATCH 06/10] powerpc/fsl_booke/32: implement KASLR infrastructure
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (4 preceding siblings ...)
  2019-07-17  8:06 ` [RFC PATCH 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper Jason Yan
@ 2019-07-17  8:06 ` Jason Yan
  2019-07-29 11:16   ` Christophe Leroy
  2019-07-17  8:06 ` [RFC PATCH 07/10] powerpc/fsl_booke/32: randomize the kernel image offset Jason Yan
                   ` (5 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

This patch adds support for booting the kernel from places other than
KERNELBASE. Since CONFIG_RELOCATABLE is already supported, all we need
to do is map or copy the kernel to a suitable place and relocate it.
Freescale Book-E parts expect lowmem to be mapped by fixed TLB
entries (TLB1). The TLB1 entries are not suitable for mapping the
kernel directly in a randomized region, so we chose to copy the kernel
to a suitable place and restart to relocate.

The kernel offset is not randomized yet (a fixed 64M is used); it will
be randomized in the next patch.
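
Condensed from the kaslr_early_init() added below, the whole flow is
(helpers from patches 4 and 5 assumed):

    kernel_sz = (unsigned long)_end - KERNELBASE;
    offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
    if (offset) {
	    kimage_vaddr += offset;
	    kernstart_addr += offset;
	    is_second_reloc = 1;	/* relocate_init() sees a restart */
	    if (offset >= SZ_64M)	/* destination not mapped yet */
		    create_tlb_entry(round_down(kernstart_addr, SZ_64M),
				     round_down(kimage_vaddr, SZ_64M), 1);
	    memcpy((void *)kimage_vaddr, (void *)KERNELBASE, kernel_sz);
	    reloc_kernel_entry(dt_ptr, kimage_vaddr);
    }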

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/Kconfig                          | 11 +++
 arch/powerpc/kernel/Makefile                  |  1 +
 arch/powerpc/kernel/early_32.c                |  2 +-
 arch/powerpc/kernel/fsl_booke_entry_mapping.S | 13 ++-
 arch/powerpc/kernel/head_fsl_booke.S          | 15 +++-
 arch/powerpc/kernel/kaslr_booke.c             | 83 +++++++++++++++++++
 arch/powerpc/mm/mmu_decl.h                    |  6 ++
 arch/powerpc/mm/nohash/fsl_booke.c            |  7 +-
 8 files changed, 125 insertions(+), 13 deletions(-)
 create mode 100644 arch/powerpc/kernel/kaslr_booke.c

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index f516796dd819..3742df54bdc8 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -547,6 +547,17 @@ config RELOCATABLE
 	  setting can still be useful to bootwrappers that need to know the
 	  load address of the kernel (eg. u-boot/mkimage).
 
+config RANDOMIZE_BASE
+	bool "Randomize the address of the kernel image"
+	depends on (FSL_BOOKE && FLATMEM && PPC32)
+	select RELOCATABLE
+	help
+	  Randomizes the virtual address at which the kernel image is
+	  loaded, as a security feature that deters exploit attempts
+	  relying on knowledge of the location of kernel internals.
+
+	  If unsure, say N.
+
 config RELOCATABLE_TEST
 	bool "Test relocatable kernel"
 	depends on (PPC64 && RELOCATABLE)
diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
index 56dfa7a2a6f2..cf87a0921db4 100644
--- a/arch/powerpc/kernel/Makefile
+++ b/arch/powerpc/kernel/Makefile
@@ -105,6 +105,7 @@ extra-$(CONFIG_PPC_8xx)		:= head_8xx.o
 extra-y				+= vmlinux.lds
 
 obj-$(CONFIG_RELOCATABLE)	+= reloc_$(BITS).o
+obj-$(CONFIG_RANDOMIZE_BASE)	+= kaslr_booke.o
 
 obj-$(CONFIG_PPC32)		+= entry_32.o setup_32.o early_32.o
 obj-$(CONFIG_PPC64)		+= dma-iommu.o iommu.o
diff --git a/arch/powerpc/kernel/early_32.c b/arch/powerpc/kernel/early_32.c
index 3482118ffe76..fe8347cdc07d 100644
--- a/arch/powerpc/kernel/early_32.c
+++ b/arch/powerpc/kernel/early_32.c
@@ -32,5 +32,5 @@ notrace unsigned long __init early_init(unsigned long dt_ptr)
 
 	apply_feature_fixups();
 
-	return KERNELBASE + offset;
+	return kimage_vaddr + offset;
 }
diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
index de0980945510..6d2967673ac7 100644
--- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
+++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
@@ -161,17 +161,16 @@ skpinv:	addi	r6,r6,1				/* Increment */
 	lis	r6,(MAS1_VALID|MAS1_IPROT)@h
 	ori	r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
 	mtspr	SPRN_MAS1,r6
-	lis	r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_NEEDED)@h
-	ori	r6,r6,MAS2_VAL(PAGE_OFFSET, BOOK3E_PAGESZ_64M, M_IF_NEEDED)@l
-	mtspr	SPRN_MAS2,r6
+	lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
+	ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
+	and     r6,r6,r20
+	ori	r6,r6,M_IF_NEEDED@l
+	mtspr   SPRN_MAS2,r6
 	mtspr	SPRN_MAS3,r8
 	tlbwe
 
 /* 7. Jump to KERNELBASE mapping */
-	lis	r6,(KERNELBASE & ~0xfff)@h
-	ori	r6,r6,(KERNELBASE & ~0xfff)@l
-	rlwinm	r7,r25,0,0x03ffffff
-	add	r6,r7,r6
+	mr	r6,r20
 
 #elif defined(ENTRY_MAPPING_KEXEC_SETUP)
 /*
diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index ce40f96dae20..d34933b0745a 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -155,6 +155,8 @@ _ENTRY(_start);
  */
 
 _ENTRY(__early_start)
+	LOAD_REG_ADDR_PIC(r20, kimage_vaddr)
+	lwz     r20,0(r20)
 
 #define ENTRY_MAPPING_BOOT_SETUP
 #include "fsl_booke_entry_mapping.S"
@@ -277,8 +279,8 @@ set_ivor:
 	ori	r6, r6, swapper_pg_dir@l
 	lis	r5, abatron_pteptrs@h
 	ori	r5, r5, abatron_pteptrs@l
-	lis	r4, KERNELBASE@h
-	ori	r4, r4, KERNELBASE@l
+	lis     r3, kimage_vaddr@ha
+	lwz     r4, kimage_vaddr@l(r3)
 	stw	r5, 0(r4)	/* Save abatron_pteptrs at a fixed location */
 	stw	r6, 0(r5)
 
@@ -1067,7 +1069,14 @@ __secondary_start:
 	mr	r5,r25		/* phys kernel start */
 	rlwinm	r5,r5,0,~0x3ffffff	/* aligned 64M */
 	subf	r4,r5,r4	/* memstart_addr - phys kernel start */
-	li	r5,0		/* no device tree */
+#ifdef CONFIG_RANDOMIZE_BASE
+	lis	r7,KERNELBASE@h
+	ori	r7,r7,KERNELBASE@l
+	cmpw	r20,r7		/* if kimage_vaddr != KERNELBASE, randomized */
+	beq	2f
+	li	r4,0
+#endif
+2:	li	r5,0		/* no device tree */
 	li	r6,0		/* not boot cpu */
 	bl	restore_to_as0
 
diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
new file mode 100644
index 000000000000..72d8e9432048
--- /dev/null
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -0,0 +1,83 @@
+/*
+ * Copyright (C) 2019 Jason Yan <yanaijie@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/stddef.h>
+#include <linux/vmalloc.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/highmem.h>
+#include <linux/memblock.h>
+#include <asm/pgalloc.h>
+#include <asm/prom.h>
+#include <asm/io.h>
+#include <asm/mmu_context.h>
+#include <asm/pgtable.h>
+#include <asm/mmu.h>
+#include <linux/uaccess.h>
+#include <asm/smp.h>
+#include <asm/machdep.h>
+#include <asm/setup.h>
+#include <asm/paca.h>
+#include <mm/mmu_decl.h>
+
+extern int is_second_reloc;
+
+static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size,
+					unsigned long kernel_sz)
+{
+	/* return a fixed offset of 64M for now */
+	return 0x4000000;
+}
+
+/*
+ * To see if we need to relocate the kernel to a random offset
+ * void *dt_ptr - address of the device tree
+ * phys_addr_t size - size of the first memory block
+ */
+notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
+{
+	unsigned long tlb_virt;
+	phys_addr_t tlb_phys;
+	unsigned long offset;
+	unsigned long kernel_sz;
+
+	kernel_sz = (unsigned long)_end - KERNELBASE;
+
+	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
+
+	if (offset == 0)
+		return;
+
+	kimage_vaddr += offset;
+	kernstart_addr += offset;
+
+	is_second_reloc = 1;
+
+	if (offset >= SZ_64M) {
+		tlb_virt = round_down(kimage_vaddr, SZ_64M);
+		tlb_phys = round_down(kernstart_addr, SZ_64M);
+
+		/* Create kernel map to relocate in */
+		create_tlb_entry(tlb_phys, tlb_virt, 1);
+	}
+
+	/* Copy the kernel to its new location and run */
+	memcpy((void *)kimage_vaddr, (void *)KERNELBASE, kernel_sz);
+
+	reloc_kernel_entry(dt_ptr, kimage_vaddr);
+}
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index dae8e9177574..754ae1e69f92 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -148,6 +148,12 @@ extern void reloc_kernel_entry(void *fdt, int addr);
 extern void loadcam_entry(unsigned int index);
 extern void loadcam_multi(int first_idx, int num, int tmp_idx);
 
+#ifdef CONFIG_RANDOMIZE_BASE
+extern void kaslr_early_init(void *dt_ptr, phys_addr_t size);
+#else
+static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+#endif
+
 struct tlbcam {
 	u32	MAS0;
 	u32	MAS1;
diff --git a/arch/powerpc/mm/nohash/fsl_booke.c b/arch/powerpc/mm/nohash/fsl_booke.c
index 556e3cd52a35..8d25a8dc965f 100644
--- a/arch/powerpc/mm/nohash/fsl_booke.c
+++ b/arch/powerpc/mm/nohash/fsl_booke.c
@@ -263,7 +263,8 @@ void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 int __initdata is_second_reloc;
 notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 {
-	unsigned long base = KERNELBASE;
+	unsigned long base = kimage_vaddr;
+	phys_addr_t size;
 
 	kernstart_addr = start;
 	if (is_second_reloc) {
@@ -291,7 +292,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 	start &= ~0x3ffffff;
 	base &= ~0x3ffffff;
 	virt_phys_offset = base - start;
-	early_get_first_memblock_info(__va(dt_ptr), NULL);
+	early_get_first_memblock_info(__va(dt_ptr), &size);
 	/*
 	 * We now get the memstart_addr, then we should check if this
 	 * address is the same as what the PAGE_OFFSET map to now. If
@@ -316,6 +317,8 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 		/* We should never reach here */
 		panic("Relocation error");
 	}
+
+	kaslr_early_init(__va(dt_ptr), size);
 }
 #endif
 #endif
-- 
2.17.2


* [RFC PATCH 07/10] powerpc/fsl_booke/32: randomize the kernel image offset
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (5 preceding siblings ...)
  2019-07-17  8:06 ` [RFC PATCH 06/10] powerpc/fsl_booke/32: implement KASLR infrastructure Jason Yan
@ 2019-07-17  8:06 ` Jason Yan
  2019-07-29 11:33   ` Christophe Leroy
  2019-07-17  8:06 ` [RFC PATCH 08/10] powerpc/fsl_booke/kaslr: clear the original kernel if randomized Jason Yan
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

Now that we have basic support for relocating the kernel to an
appropriate place, we can start to randomize the offset.

Entropy is derived from the banner and the timer base, which change
with every build and boot. This is not particularly secure on its own,
so additionally the bootloader may pass entropy via the
/chosen/kaslr-seed node in the device tree.

We use the first 512M of low memory to randomize the kernel image.
The memory is split into 64M zones. The lower 8 bits of the entropy
decide the index of the 64M zone, and a 16K-aligned offset inside that
zone is then chosen for the kernel.

    KERNELBASE

        |-->   64M   <--|
        |               |
        +---------------+    +----------------+---------------+
        |               |....|    |kernel|    |               |
        +---------------+    +----------------+---------------+
        |                         |
        |----->   offset    <-----|

                              kimage_vaddr

We also check for overlaps with areas such as the dtb, the initrd and
the crashkernel region. If no suitable area can be found, kaslr is
disabled and the kernel boots from its original location.
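
As a worked example (numbers hypothetical): with 512M of linear memory
there are eight 64M zones. If the mixed entropy word is 0x1234562A,
then index = (0x1234562A & 0xFF) % 8 = 42 % 8 = 2, so zone 2 is tried
first; since index != 0, the in-zone offset is 0x1234562A %
(SZ_64M - kernel_sz), rounded down to 16K. If that spot overlaps the
dtb, initrd or crashkernel, get_usable_offset() steps downwards in 16K
increments, and failing that the lower zones are tried in turn.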

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/kernel/kaslr_booke.c | 335 +++++++++++++++++++++++++++++-
 1 file changed, 333 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
index 72d8e9432048..90357f4bd313 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -22,6 +22,8 @@
 #include <linux/delay.h>
 #include <linux/highmem.h>
 #include <linux/memblock.h>
+#include <linux/libfdt.h>
+#include <linux/crash_core.h>
 #include <asm/pgalloc.h>
 #include <asm/prom.h>
 #include <asm/io.h>
@@ -33,15 +35,342 @@
 #include <asm/machdep.h>
 #include <asm/setup.h>
 #include <asm/paca.h>
+#include <asm/kdump.h>
 #include <mm/mmu_decl.h>
+#include <generated/compile.h>
+#include <generated/utsrelease.h>
+
+#ifdef DEBUG
+#define DBG(fmt...) printk(KERN_ERR fmt)
+#else
+#define DBG(fmt...)
+#endif
+
+struct regions {
+	unsigned long pa_start;
+	unsigned long pa_end;
+	unsigned long kernel_size;
+	unsigned long dtb_start;
+	unsigned long dtb_end;
+	unsigned long initrd_start;
+	unsigned long initrd_end;
+	unsigned long crash_start;
+	unsigned long crash_end;
+	int reserved_mem;
+	int reserved_mem_addr_cells;
+	int reserved_mem_size_cells;
+};
 
 extern int is_second_reloc;
 
+/* Simplified build-specific string for starting entropy. */
+static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
+		LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
+static char __initdata early_command_line[COMMAND_LINE_SIZE];
+
+static __init void kaslr_get_cmdline(void *fdt)
+{
+	const char *cmdline = CONFIG_CMDLINE;
+	if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) {
+		int node;
+		const u8 *prop;
+		node = fdt_path_offset(fdt, "/chosen");
+		if (node < 0)
+			goto out;
+
+		prop = fdt_getprop(fdt, node, "bootargs", NULL);
+		if (!prop)
+			goto out;
+		cmdline = prop;
+	}
+out:
+	strscpy(early_command_line, cmdline, COMMAND_LINE_SIZE);
+}
+
+static unsigned long __init rotate_xor(unsigned long hash, const void *area,
+				size_t size)
+{
+	size_t i;
+	unsigned long *ptr = (unsigned long *)area;
+
+	for (i = 0; i < size / sizeof(hash); i++) {
+		/* Rotate by odd number of bits and XOR. */
+		hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
+		hash ^= ptr[i];
+	}
+
+	return hash;
+}
+
+/* Attempt to create a simple but unpredictable starting entropy. */
+static unsigned long __init get_boot_seed(void *fdt)
+{
+	unsigned long hash = 0;
+
+	hash = rotate_xor(hash, build_str, sizeof(build_str));
+	hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));
+
+	return hash;
+}
+
+static __init u64 get_kaslr_seed(void *fdt)
+{
+	int node, len;
+	fdt64_t *prop;
+	u64 ret;
+
+	node = fdt_path_offset(fdt, "/chosen");
+	if (node < 0)
+		return 0;
+
+	prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len);
+	if (!prop || len != sizeof(u64))
+		return 0;
+
+	ret = fdt64_to_cpu(*prop);
+	*prop = 0;
+	return ret;
+}
+
+static __init bool regions_overlap(u32 s1, u32 e1, u32 s2, u32 e2)
+{
+	return e1 >= s2 && e2 >= s1;
+}
+
+static __init bool overlaps_reserved_region(const void *fdt, u32 start,
+				       u32 end, struct regions *regions)
+{
+	int subnode, len, i;
+	u64 base, size;
+
+	/* check for overlap with /memreserve/ entries */
+	for (i = 0; i < fdt_num_mem_rsv(fdt); i++) {
+		if (fdt_get_mem_rsv(fdt, i, &base, &size) < 0)
+			continue;
+		if (regions_overlap(start, end, base, base + size))
+			return true;
+	}
+
+	if (regions->reserved_mem < 0)
+		return false;
+
+	/* check for overlap with static reservations in /reserved-memory */
+	for (subnode = fdt_first_subnode(fdt, regions->reserved_mem);
+	     subnode >= 0;
+	     subnode = fdt_next_subnode(fdt, subnode)) {
+		const fdt32_t *reg;
+		u64 rsv_end;
+
+		len = 0;
+		reg = fdt_getprop(fdt, subnode, "reg", &len);
+		while (len >= (regions->reserved_mem_addr_cells +
+			       regions->reserved_mem_size_cells)) {
+			base = fdt32_to_cpu(reg[0]);
+			if (regions->reserved_mem_addr_cells == 2)
+				base = (base << 32) | fdt32_to_cpu(reg[1]);
+
+			reg += regions->reserved_mem_addr_cells;
+			len -= 4 * regions->reserved_mem_addr_cells;
+
+			size = fdt32_to_cpu(reg[0]);
+			if (regions->reserved_mem_size_cells == 2)
+				size = (size << 32) | fdt32_to_cpu(reg[1]);
+
+			reg += regions->reserved_mem_size_cells;
+			len -= 4 * regions->reserved_mem_size_cells;
+
+			if (base >= regions->pa_end)
+				continue;
+
+			rsv_end = min(base + size, (u64)U32_MAX);
+
+			if (regions_overlap(start, end, base, rsv_end))
+				return true;
+		}
+	}
+	return false;
+}
+
+static __init bool overlaps_region(const void *fdt, u32 start,
+				       u32 end, struct regions *regions)
+{
+	if (regions_overlap(start, end, regions->dtb_start,
+			      regions->dtb_end))
+		return true;
+
+	if (regions_overlap(start, end, regions->initrd_start,
+			      regions->initrd_end))
+		return true;
+
+	if (regions_overlap(start, end, regions->crash_start,
+			      regions->crash_end))
+		return true;
+
+	return overlaps_reserved_region(fdt, start, end, regions);
+}
+
+static void __init get_crash_kernel(void *fdt, unsigned long size,
+				struct regions *regions)
+{
+#ifdef CONFIG_KEXEC_CORE
+	unsigned long long crash_size, crash_base;
+	int ret;
+
+	ret = parse_crashkernel(early_command_line, size, &crash_size,
+			&crash_base);
+	if (ret != 0 || crash_size == 0)
+		return;
+	if (crash_base == 0)
+		crash_base = KDUMP_KERNELBASE;
+
+	regions->crash_start = (unsigned long)crash_base;
+	regions->crash_end = (unsigned long)(crash_base + crash_size);
+
+	DBG("crash_base=0x%llx crash_size=0x%llx\n", crash_base, crash_size);
+#endif
+}
+
+static void __init get_initrd_range(void *fdt, struct regions *regions)
+{
+	u64 start, end;
+	int node, len;
+	const __be32 *prop;
+
+	node = fdt_path_offset(fdt, "/chosen");
+	if (node < 0)
+		return;
+
+	prop = fdt_getprop(fdt, node, "linux,initrd-start", &len);
+	if (!prop)
+		return;
+	start = of_read_number(prop, len / 4);
+
+	prop = fdt_getprop(fdt, node, "linux,initrd-end", &len);
+	if (!prop)
+		return;
+	end = of_read_number(prop, len / 4);
+
+	regions->initrd_start = (unsigned long)start;
+	regions->initrd_end = (unsigned long)end;
+
+	DBG("initrd_start=0x%llx  initrd_end=0x%llx\n", start, end);
+}
+
+static __init unsigned long get_usable_offset(const void *fdt, struct regions *regions,
+				unsigned long start)
+{
+	unsigned long pa;
+	unsigned long pa_end;
+
+	for (pa = start; pa > regions->pa_start; pa -= SZ_16K) {
+		pa_end = pa + regions->kernel_size;
+		if (overlaps_region(fdt, pa, pa_end, regions))
+			continue;
+
+		return pa;
+	}
+	return 0;
+}
+
+static __init void get_cell_sizes(const void *fdt, int node, int *addr_cells,
+			   int *size_cells)
+{
+	const int *prop;
+	int len;
+
+	/*
+	 * Retrieve the #address-cells and #size-cells properties
+	 * from the 'node', or use the default if not provided.
+	 */
+	*addr_cells = *size_cells = 1;
+
+	prop = fdt_getprop(fdt, node, "#address-cells", &len);
+	if (len == 4)
+		*addr_cells = fdt32_to_cpu(*prop);
+	prop = fdt_getprop(fdt, node, "#size-cells", &len);
+	if (len == 4)
+		*size_cells = fdt32_to_cpu(*prop);
+}
+
 static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size,
 					unsigned long kernel_sz)
 {
-	/* return a fixed offset of 64M for now */
-	return 0x4000000;
+	unsigned long offset, random;
+	unsigned long ram, linear_sz;
+	unsigned long kaslr_offset;
+	u64 seed;
+	struct regions regions;
+	long index;
+
+	random = get_boot_seed(dt_ptr);
+
+	seed = get_tb() << 32;
+	seed ^= get_tb();
+	random = rotate_xor(random, &seed, sizeof(seed));
+
+	/*
+	 * Retrieve (and wipe) the seed from the FDT
+	 */
+	seed = get_kaslr_seed(dt_ptr);
+	if (seed)
+		random = rotate_xor(random, &seed, sizeof(seed));
+
+	ram = min((phys_addr_t)__max_low_memory, size);
+	ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true);
+	linear_sz = min(ram, (unsigned long)SZ_512M);
+
+	/* If the linear size is smaller than 64M, do not randomize */
+	if (linear_sz < SZ_64M)
+		return 0;
+
+	memset(&regions, 0, sizeof(regions));
+
+	/* check for a reserved-memory node and record its cell sizes */
+	regions.reserved_mem = fdt_path_offset(dt_ptr, "/reserved-memory");
+	if (regions.reserved_mem >= 0)
+		get_cell_sizes(dt_ptr, regions.reserved_mem,
+			       &regions.reserved_mem_addr_cells,
+			       &regions.reserved_mem_size_cells);
+
+	regions.pa_start = 0;
+	regions.pa_end = linear_sz;
+	regions.dtb_start = __pa(dt_ptr);
+	regions.dtb_end = __pa(dt_ptr) + fdt_totalsize(dt_ptr);
+	regions.kernel_size = kernel_sz;
+
+	get_initrd_range(dt_ptr, &regions);
+	get_crash_kernel(dt_ptr, ram, &regions);
+
+	/*
+	 * Decide which 64M zone to start with.
+	 * Only the low 8 bits of the random seed are used.
+	 */
+	index = random & 0xFF;
+	index %= linear_sz / SZ_64M;
+
+	/* Decide offset inside 64M */
+	if (index == 0) {
+		offset = random % (SZ_64M - round_up(kernel_sz, SZ_16K) * 2);
+		offset += round_up(kernel_sz, SZ_16K);
+		offset = round_up(offset, SZ_16K);
+	} else {
+		offset = random % (SZ_64M - kernel_sz);
+		offset = round_down(offset, SZ_16K);
+	}
+
+	while (index >= 0) {
+		kaslr_offset = get_usable_offset(dt_ptr, &regions,
+						 offset + index * SZ_64M);
+		if (kaslr_offset)
+			break;
+		index--;
+	}
+
+	/* Did not find any usable region? Give up on randomization */
+	if (index < 0)
+		kaslr_offset = 0;
+
+	return kaslr_offset;
 }
 
 /*
@@ -58,6 +387,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
 
 	kernel_sz = (unsigned long)_end - KERNELBASE;
 
+	kaslr_get_cmdline(dt_ptr);
+
 	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
 
 	if (offset == 0)
-- 
2.17.2


* [RFC PATCH 08/10] powerpc/fsl_booke/kaslr: clear the original kernel if randomized
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (6 preceding siblings ...)
  2019-07-17  8:06 ` [RFC PATCH 07/10] powerpc/fsl_booke/32: randomize the kernel image offset Jason Yan
@ 2019-07-17  8:06 ` Jason Yan
  2019-07-29 11:19   ` Christophe Leroy
  2019-07-17  8:06 ` [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter Jason Yan
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

The original kernel image still exists in memory; clear it now.

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/kernel/kaslr_booke.c  | 11 +++++++++++
 arch/powerpc/mm/mmu_decl.h         |  2 ++
 arch/powerpc/mm/nohash/fsl_booke.c |  1 +
 3 files changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
index 90357f4bd313..00339c05879f 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -412,3 +412,14 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
 
 	reloc_kernel_entry(dt_ptr, kimage_vaddr);
 }
+
+void __init kaslr_second_init(void)
+{
+	/* If randomized, clear the original kernel */
+	if (kimage_vaddr != KERNELBASE) {
+		unsigned long kernel_sz;
+
+		kernel_sz = (unsigned long)_end - kimage_vaddr;
+		memset((void *)KERNELBASE, 0, kernel_sz);
+	}
+}
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index 754ae1e69f92..9912ee598f9b 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -150,8 +150,10 @@ extern void loadcam_multi(int first_idx, int num, int tmp_idx);
 
 #ifdef CONFIG_RANDOMIZE_BASE
 extern void kaslr_early_init(void *dt_ptr, phys_addr_t size);
+extern void kaslr_second_init(void);
 #else
 static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
+static inline void kaslr_second_init(void) {}
 #endif
 
 struct tlbcam {
diff --git a/arch/powerpc/mm/nohash/fsl_booke.c b/arch/powerpc/mm/nohash/fsl_booke.c
index 8d25a8dc965f..fa5a87f5c08e 100644
--- a/arch/powerpc/mm/nohash/fsl_booke.c
+++ b/arch/powerpc/mm/nohash/fsl_booke.c
@@ -269,6 +269,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
 	kernstart_addr = start;
 	if (is_second_reloc) {
 		virt_phys_offset = PAGE_OFFSET - memstart_addr;
+		kaslr_second_init();
 		return;
 	}
 
-- 
2.17.2


* [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (7 preceding siblings ...)
  2019-07-17  8:06 ` [RFC PATCH 08/10] powerpc/fsl_booke/kaslr: clear the original kernel if randomized Jason Yan
@ 2019-07-17  8:06 ` Jason Yan
  2019-07-29 11:38   ` Christophe Leroy
  2019-07-17  8:06 ` [RFC PATCH 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic Jason Yan
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

One may want to disable kaslr at boot time, so provide a 'nokaslr'
command line parameter to support this.
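
For example, with a U-Boot style bootloader (syntax illustrative, the
environment variable names are hypothetical) the feature can be turned
off for a single boot:

    => setenv bootargs "nokaslr root=/dev/ram rw"
    => bootm ${loadaddr} ${rdaddr} ${fdtaddr}

The check below deliberately accepts "nokaslr" only at the start of the
command line or after a space, so e.g. "xnokaslr" does not disable it.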

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/kernel/kaslr_booke.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
index 00339c05879f..e65a5d9d2ff1 100644
--- a/arch/powerpc/kernel/kaslr_booke.c
+++ b/arch/powerpc/kernel/kaslr_booke.c
@@ -373,6 +373,18 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
 	return kaslr_offset;
 }
 
+static inline __init bool kaslr_disabled(void)
+{
+	char *str;
+
+	str = strstr(early_command_line, "nokaslr");
+	if ((str == early_command_line) ||
+	    (str > early_command_line && *(str - 1) == ' '))
+		return true;
+
+	return false;
+}
+
 /*
  * To see if we need to relocate the kernel to a random offset
  * void *dt_ptr - address of the device tree
@@ -388,6 +400,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
 	kernel_sz = (unsigned long)_end - KERNELBASE;
 
 	kaslr_get_cmdline(dt_ptr);
+	if (kaslr_disabled())
+		return;
 
 	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
 
-- 
2.17.2


* [RFC PATCH 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (8 preceding siblings ...)
  2019-07-17  8:06 ` [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter Jason Yan
@ 2019-07-17  8:06 ` Jason Yan
  2019-07-29 11:43   ` Christophe Leroy
  2019-07-25  7:16 ` [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
  2019-07-29 14:30 ` Diana Madalina Craciun
  11 siblings, 1 reply; 37+ messages in thread
From: Jason Yan @ 2019-07-17  8:06 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Jason Yan

When kaslr is enabled, the kernel offset is different on every boot,
which makes the kernel harder to debug. Dump the kernel offset on
panic so that the kernel can still be debugged easily.
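
As a usage note (addresses hypothetical): if the panic banner prints
"Kernel Offset: 0x2c00000 from 0xc0000000", a symbol at 0xc01a0000 in
vmlinux actually sits at 0xc01a0000 + 0x2c00000 = 0xc2da0000 in that
boot. The same value is exported to kdump via the KERNELOFFSET=
vmcoreinfo line added below, so post-mortem tools can apply the
relocation.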

Signed-off-by: Jason Yan <yanaijie@huawei.com>
Cc: Diana Craciun <diana.craciun@nxp.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Kees Cook <keescook@chromium.org>
---
 arch/powerpc/include/asm/page.h     |  5 +++++
 arch/powerpc/kernel/machine_kexec.c |  1 +
 arch/powerpc/kernel/setup-common.c  | 23 +++++++++++++++++++++++
 3 files changed, 29 insertions(+)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 60a68d3a54b1..cd3ac530e58d 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -317,6 +317,11 @@ struct vm_area_struct;
 
 extern unsigned long kimage_vaddr;
 
+static inline unsigned long kaslr_offset(void)
+{
+	return kimage_vaddr - KERNELBASE;
+}
+
 #include <asm-generic/memory_model.h>
 #endif /* __ASSEMBLY__ */
 #include <asm/slice.h>
diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index c4ed328a7b96..078fe3d76feb 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -86,6 +86,7 @@ void arch_crash_save_vmcoreinfo(void)
 	VMCOREINFO_STRUCT_SIZE(mmu_psize_def);
 	VMCOREINFO_OFFSET(mmu_psize_def, shift);
 #endif
+	vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
 }
 
 /*
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 1f8db666468d..49e540c0adeb 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -715,12 +715,35 @@ static struct notifier_block ppc_panic_block = {
 	.priority = INT_MIN /* may not return; must be done last */
 };
 
+/*
+ * Dump out kernel offset information on panic.
+ */
+static int dump_kernel_offset(struct notifier_block *self, unsigned long v,
+			      void *p)
+{
+	const unsigned long offset = kaslr_offset();
+
+	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && offset > 0)
+		pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n",
+			 offset, KERNELBASE);
+	else
+		pr_emerg("Kernel Offset: disabled\n");
+
+	return 0;
+}
+
+static struct notifier_block kernel_offset_notifier = {
+	.notifier_call = dump_kernel_offset
+};
+
 void __init setup_panic(void)
 {
 	/* PPC64 always does a hard irq disable in its panic handler */
 	if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic)
 		return;
 	atomic_notifier_chain_register(&panic_notifier_list, &ppc_panic_block);
+	atomic_notifier_chain_register(&panic_notifier_list,
+				       &kernel_offset_notifier);
 }
 
 #ifdef CONFIG_CHECK_CACHE_COHERENCY
-- 
2.17.2


* Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (9 preceding siblings ...)
  2019-07-17  8:06 ` [RFC PATCH 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic Jason Yan
@ 2019-07-25  7:16 ` Jason Yan
  2019-07-25 19:58   ` Kees Cook
  2019-07-26  7:04   ` Diana Madalina Craciun
  2019-07-29 14:30 ` Diana Madalina Craciun
  11 siblings, 2 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-25  7:16 UTC
  To: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang

Hi all, any comments?


On 2019/7/17 16:06, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
> 
> Since CONFIG_RELOCATABLE is already supported, all we need to do is
> map or copy the kernel to a suitable place and relocate it. Freescale
> Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1).
> The TLB1 entries are not suitable for mapping the kernel directly in a
> randomized region, so we chose to copy the kernel to a suitable place
> and restart to relocate.
> 
> Entropy is derived from the banner and the timer base, which change
> with every build and boot. This is not particularly secure on its own,
> so additionally the bootloader may pass entropy via the
> /chosen/kaslr-seed node in the device tree.
> 
> We use the first 512M of low memory to randomize the kernel image.
> The memory is split into 64M zones. The lower 8 bits of the entropy
> decide the index of the 64M zone, and a 16K-aligned offset inside that
> zone is then chosen for the kernel.
> 
>      KERNELBASE
> 
>          |-->   64M   <--|
>          |               |
>          +---------------+    +----------------+---------------+
>          |               |....|    |kernel|    |               |
>          +---------------+    +----------------+---------------+
>          |                         |
>          |----->   offset    <-----|
> 
>                                kimage_vaddr
> 
> We also check for overlaps with areas such as the dtb, the initrd and
> the crashkernel region. If no suitable area can be found, kaslr is
> disabled and the kernel boots from its original location.
> 
> Jason Yan (10):
>    powerpc: unify definition of M_IF_NEEDED
>    powerpc: move memstart_addr and kernstart_addr to init-common.c
>    powerpc: introduce kimage_vaddr to store the kernel base
>    powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>    powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>    powerpc/fsl_booke/32: implement KASLR infrastructure
>    powerpc/fsl_booke/32: randomize the kernel image offset
>    powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>    powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>    powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
> 
>   arch/powerpc/Kconfig                          |  11 +
>   arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
>   arch/powerpc/include/asm/page.h               |   7 +
>   arch/powerpc/kernel/Makefile                  |   1 +
>   arch/powerpc/kernel/early_32.c                |   2 +-
>   arch/powerpc/kernel/exceptions-64e.S          |  10 -
>   arch/powerpc/kernel/fsl_booke_entry_mapping.S |  23 +-
>   arch/powerpc/kernel/head_fsl_booke.S          |  61 ++-
>   arch/powerpc/kernel/kaslr_booke.c             | 439 ++++++++++++++++++
>   arch/powerpc/kernel/machine_kexec.c           |   1 +
>   arch/powerpc/kernel/misc_64.S                 |   5 -
>   arch/powerpc/kernel/setup-common.c            |  23 +
>   arch/powerpc/mm/init-common.c                 |   7 +
>   arch/powerpc/mm/init_32.c                     |   5 -
>   arch/powerpc/mm/init_64.c                     |   5 -
>   arch/powerpc/mm/mmu_decl.h                    |  10 +
>   arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
>   17 files changed, 580 insertions(+), 48 deletions(-)
>   create mode 100644 arch/powerpc/kernel/kaslr_booke.c
> 


* Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32
  2019-07-25  7:16 ` [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
@ 2019-07-25 19:58   ` Kees Cook
  2019-07-26  7:20     ` Jason Yan
  2019-07-26  7:04   ` Diana Madalina Craciun
  1 sibling, 1 reply; 37+ messages in thread
From: Kees Cook @ 2019-07-25 19:58 UTC
  To: Jason Yan
  Cc: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, kernel-hardening, linux-kernel, wangkefeng.wang,
	yebin10, thunder.leizhen, jingxiangfeng, fanchengyang

On Thu, Jul 25, 2019 at 03:16:28PM +0800, Jason Yan wrote:
> Hi all, any comments?

I'm a fan of it, but I don't know ppc internals well enough to sanely
review the code. :) Some comments below on design...

> 
> 
> On 2019/7/17 16:06, Jason Yan wrote:
> > This series implements KASLR for powerpc/fsl_booke/32, as a security
> > feature that deters exploit attempts relying on knowledge of the location
> > of kernel internals.
> > 
> > Since CONFIG_RELOCATABLE is already supported, what we need to do is
> > map or copy the kernel to a proper place and relocate. Freescale Book-E
> > parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
> > entries are not suitable to map the kernel directly in a randomized
> > region, so we chose to copy the kernel to a proper place and restart to
> > relocate.
> > 
> > Entropy is derived from the banner and timer base, which will change every
> > build and boot. This is not entirely safe, so additionally the bootloader
> > may pass entropy via the /chosen/kaslr-seed node in the device tree.

Good: adding kaslr-seed is a good step here. Are there any x86-like
RDRAND or RDTSC to use? (Or maybe timer base here is similar to x86
RDTSC here?)

> > 
> > We will use the first 512M of the low memory to randomize the kernel
> > image. The memory will be split into 64M zones. We will use the lower 8
> > bits of the entropy to decide the index of the 64M zone. Then we choose a
> > 16K-aligned offset inside the 64M zone to put the kernel in.

Does this 16K granularity have any page table performance impact? My
understanding was that x86 needed to have 2M granularity due to its page
table layouts.

Why the 64M zones instead of just 16K granularity across the entire low
512M?

> > 
> >      KERNELBASE
> > 
> >          |-->   64M   <--|
> >          |               |
> >          +---------------+    +----------------+---------------+
> >          |               |....|    |kernel|    |               |
> >          +---------------+    +----------------+---------------+
> >          |                         |
> >          |----->   offset    <-----|
> > 
> >                                kimage_vaddr
> > 
> > We also check if we will overlap with some areas like the dtb area, the
> > initrd area or the crashkernel area. If we cannot find a proper area,
> > kaslr will be disabled and the kernel will boot from its original location.
> > 
> > Jason Yan (10):
> >    powerpc: unify definition of M_IF_NEEDED
> >    powerpc: move memstart_addr and kernstart_addr to init-common.c
> >    powerpc: introduce kimage_vaddr to store the kernel base
> >    powerpc/fsl_booke/32: introduce create_tlb_entry() helper
> >    powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
> >    powerpc/fsl_booke/32: implement KASLR infrastructure
> >    powerpc/fsl_booke/32: randomize the kernel image offset
> >    powerpc/fsl_booke/kaslr: clear the original kernel if randomized
> >    powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
> >    powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

Is there anything planned for other fixed-location things, like x86's
CONFIG_RANDOMIZE_MEMORY?

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32
  2019-07-25  7:16 ` [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
  2019-07-25 19:58   ` Kees Cook
@ 2019-07-26  7:04   ` Diana Madalina Craciun
  2019-07-26  7:26     ` Jason Yan
  1 sibling, 1 reply; 37+ messages in thread
From: Diana Madalina Craciun @ 2019-07-26  7:04 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Laurentiu Tudor

Hi Jason,

I briefly tested yesterday on a P4080 board and did not see any
issues. I do not have much expertise on KASLR, but I will take a look
over the code.

Regards,
Diana

On 7/25/2019 10:16 AM, Jason Yan wrote:
> Hi all, any comments?
>
>
> On 2019/7/17 16:06, Jason Yan wrote:
>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>> feature that deters exploit attempts relying on knowledge of the location
>> of kernel internals.
>>
>> Since CONFIG_RELOCATABLE is already supported, what we need to do is
>> map or copy the kernel to a proper place and relocate. Freescale Book-E
>> parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
>> entries are not suitable to map the kernel directly in a randomized
>> region, so we chose to copy the kernel to a proper place and restart to
>> relocate.
>>
>> Entropy is derived from the banner and timer base, which will change every
>> build and boot. This is not entirely safe, so additionally the bootloader
>> may pass entropy via the /chosen/kaslr-seed node in the device tree.
>>
>> We will use the first 512M of the low memory to randomize the kernel
>> image. The memory will be split into 64M zones. We will use the lower 8
>> bits of the entropy to decide the index of the 64M zone. Then we choose a
>> 16K-aligned offset inside the 64M zone to put the kernel in.
>>
>>      KERNELBASE
>>
>>          |-->   64M   <--|
>>          |               |
>>          +---------------+    +----------------+---------------+
>>          |               |....|    |kernel|    |               |
>>          +---------------+    +----------------+---------------+
>>          |                         |
>>          |----->   offset    <-----|
>>
>>                                kimage_vaddr
>>
>> We also check if we will overlap with some areas like the dtb area, the
>> initrd area or the crashkernel area. If we cannot find a proper area,
>> kaslr will be disabled and the kernel will boot from its original location.
>>
>> Jason Yan (10):
>>    powerpc: unify definition of M_IF_NEEDED
>>    powerpc: move memstart_addr and kernstart_addr to init-common.c
>>    powerpc: introduce kimage_vaddr to store the kernel base
>>    powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>>    powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>>    powerpc/fsl_booke/32: implement KASLR infrastructure
>>    powerpc/fsl_booke/32: randomize the kernel image offset
>>    powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>>    powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>>    powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
>>
>>   arch/powerpc/Kconfig                          |  11 +
>>   arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
>>   arch/powerpc/include/asm/page.h               |   7 +
>>   arch/powerpc/kernel/Makefile                  |   1 +
>>   arch/powerpc/kernel/early_32.c                |   2 +-
>>   arch/powerpc/kernel/exceptions-64e.S          |  10 -
>>   arch/powerpc/kernel/fsl_booke_entry_mapping.S |  23 +-
>>   arch/powerpc/kernel/head_fsl_booke.S          |  61 ++-
>>   arch/powerpc/kernel/kaslr_booke.c             | 439 ++++++++++++++++++
>>   arch/powerpc/kernel/machine_kexec.c           |   1 +
>>   arch/powerpc/kernel/misc_64.S                 |   5 -
>>   arch/powerpc/kernel/setup-common.c            |  23 +
>>   arch/powerpc/mm/init-common.c                 |   7 +
>>   arch/powerpc/mm/init_32.c                     |   5 -
>>   arch/powerpc/mm/init_64.c                     |   5 -
>>   arch/powerpc/mm/mmu_decl.h                    |  10 +
>>   arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
>>   17 files changed, 580 insertions(+), 48 deletions(-)
>>   create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>>
>


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32
  2019-07-25 19:58   ` Kees Cook
@ 2019-07-26  7:20     ` Jason Yan
  2019-07-26 16:15       ` Kees Cook
  0 siblings, 1 reply; 37+ messages in thread
From: Jason Yan @ 2019-07-26  7:20 UTC (permalink / raw)
  To: Kees Cook
  Cc: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, kernel-hardening, linux-kernel, wangkefeng.wang,
	yebin10, thunder.leizhen, jingxiangfeng, fanchengyang



On 2019/7/26 3:58, Kees Cook wrote:
> On Thu, Jul 25, 2019 at 03:16:28PM +0800, Jason Yan wrote:
>> Hi all, any comments?
> 
> I'm a fan of it, but I don't know ppc internals well enough to sanely
> review the code. :) Some comments below on design...
> 

Hi Kees, Thanks for your comments.

>>
>>
>> On 2019/7/17 16:06, Jason Yan wrote:
>>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>>> feature that deters exploit attempts relying on knowledge of the location
>>> of kernel internals.
>>>
>>> Since CONFIG_RELOCATABLE is already supported, what we need to do is
>>> map or copy the kernel to a proper place and relocate. Freescale Book-E
>>> parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
>>> entries are not suitable to map the kernel directly in a randomized
>>> region, so we chose to copy the kernel to a proper place and restart to
>>> relocate.
>>>
>>> Entropy is derived from the banner and timer base, which will change every
>>> build and boot. This is not entirely safe, so additionally the bootloader
>>> may pass entropy via the /chosen/kaslr-seed node in the device tree.
> 
> Good: adding kaslr-seed is a good step here. Are there any x86-like
> RDRAND or RDTSC to use? (Or maybe timer base here is similar to x86
> RDTSC here?)
> 

Yes, time base is similar to RDTSC here.
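
For reference, patch 7 folds two reads of the time base into the running
entropy roughly like this (simplified sketch; rotate_xor() and
get_boot_seed() are the helpers added in that patch):

	unsigned long random = get_boot_seed(dt_ptr);
	u64 seed = get_tb() << 32;

	seed ^= get_tb();
	random = rotate_xor(random, &seed, sizeof(seed));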

>>>
>>> We will use the first 512M of the low memory to randomize the kernel
>>> image. The memory will be split into 64M zones. We will use the lower 8
>>> bits of the entropy to decide the index of the 64M zone. Then we choose a
>>> 16K-aligned offset inside the 64M zone to put the kernel in.
> 
> Does this 16K granularity have any page table performance impact? My
> understanding was that x86 needed to have 2M granularity due to its page
> table layouts.
> 

The fsl booke TLB1 covers the whole low memory. AFAIK, there is no page
table performance impact. But if anyone knows of any regressions,
please let me know.

> Why the 64M zones instead of just 16K granularity across the entire low
> 512M?
> 

The boot code only maps one 64M zone at early start. If the kernel
crosses two 64M zones, we need to map two 64M zones. Keeping the kernel
in one 64M zone saves a lot of complex code.
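
To make the zone/offset arithmetic concrete, here is a simplified sketch
of what kaslr_choose_location() in patch 7 ends up doing (the overlap
checks and the zone-0 special case are left out):

	/* the low 8 bits of the entropy pick one of the 64M zones */
	index = random & 0xFF;
	index %= linear_sz / SZ_64M;

	/* then a 16K-aligned offset inside that zone */
	offset = random % (SZ_64M - kernel_sz);
	offset = round_down(offset, SZ_16K);

	kaslr_offset = index * SZ_64M + offset;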

>>>
>>>       KERNELBASE
>>>
>>>           |-->   64M   <--|
>>>           |               |
>>>           +---------------+    +----------------+---------------+
>>>           |               |....|    |kernel|    |               |
>>>           +---------------+    +----------------+---------------+
>>>           |                         |
>>>           |----->   offset    <-----|
>>>
>>>                                 kimage_vaddr
>>>
>>> We also check if we will overlap with some areas like the dtb area, the
>>> initrd area or the crashkernel area. If we cannot find a proper area,
>>> kaslr will be disabled and the kernel will boot from its original location.
>>>
>>> Jason Yan (10):
>>>     powerpc: unify definition of M_IF_NEEDED
>>>     powerpc: move memstart_addr and kernstart_addr to init-common.c
>>>     powerpc: introduce kimage_vaddr to store the kernel base
>>>     powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>>>     powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>>>     powerpc/fsl_booke/32: implement KASLR infrastructure
>>>     powerpc/fsl_booke/32: randomize the kernel image offset
>>>     powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>>>     powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>>>     powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
> 
> Is there anything planned for other fixed-location things, like x86's
> CONFIG_RANDOMIZE_MEMORY?
> 

Yes, if this feature can be accepted, I will start to work on
powerpc64 KASLR and other things like CONFIG_RANDOMIZE_MEMORY.


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32
  2019-07-26  7:04   ` Diana Madalina Craciun
@ 2019-07-26  7:26     ` Jason Yan
  0 siblings, 0 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-26  7:26 UTC (permalink / raw)
  To: Diana Madalina Craciun, mpe, linuxppc-dev, christophe.leroy,
	benh, paulus, npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang, Laurentiu Tudor



On 2019/7/26 15:04, Diana Madalina Craciun wrote:
> Hi Jason,
> 
> I briefly tested yesterday on a P4080 board and did not see any
> issues. I do not have much expertise on KASLR, but I will take a look
> over the code.
> 

Hi Diana, thanks. Looking forward to your suggestions.

> Regards,
> Diana
> 
> On 7/25/2019 10:16 AM, Jason Yan wrote:
>> Hi all, any comments?
>>
>>
>> On 2019/7/17 16:06, Jason Yan wrote:
>>> This series implements KASLR for powerpc/fsl_booke/32, as a security
>>> feature that deters exploit attempts relying on knowledge of the location
>>> of kernel internals.
>>>
>>> Since CONFIG_RELOCATABLE is already supported, what we need to do is
>>> map or copy the kernel to a proper place and relocate. Freescale Book-E
>>> parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
>>> entries are not suitable to map the kernel directly in a randomized
>>> region, so we chose to copy the kernel to a proper place and restart to
>>> relocate.
>>>
>>> Entropy is derived from the banner and timer base, which will change every
>>> build and boot. This is not entirely safe, so additionally the bootloader
>>> may pass entropy via the /chosen/kaslr-seed node in the device tree.
>>>
>>> We will use the first 512M of the low memory to randomize the kernel
>>> image. The memory will be split into 64M zones. We will use the lower 8
>>> bits of the entropy to decide the index of the 64M zone. Then we choose a
>>> 16K-aligned offset inside the 64M zone to put the kernel in.
>>>
>>>       KERNELBASE
>>>
>>>           |-->   64M   <--|
>>>           |               |
>>>           +---------------+    +----------------+---------------+
>>>           |               |....|    |kernel|    |               |
>>>           +---------------+    +----------------+---------------+
>>>           |                         |
>>>           |----->   offset    <-----|
>>>
>>>                                 kimage_vaddr
>>>
>>> We also check if we will overlap with some areas like the dtb area, the
>>> initrd area or the crashkernel area. If we cannot find a proper area,
>>> kaslr will be disabled and the kernel will boot from its original location.
>>>
>>> Jason Yan (10):
>>>     powerpc: unify definition of M_IF_NEEDED
>>>     powerpc: move memstart_addr and kernstart_addr to init-common.c
>>>     powerpc: introduce kimage_vaddr to store the kernel base
>>>     powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>>>     powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>>>     powerpc/fsl_booke/32: implement KASLR infrastructure
>>>     powerpc/fsl_booke/32: randomize the kernel image offset
>>>     powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>>>     powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>>>     powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
>>>
>>>    arch/powerpc/Kconfig                          |  11 +
>>>    arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
>>>    arch/powerpc/include/asm/page.h               |   7 +
>>>    arch/powerpc/kernel/Makefile                  |   1 +
>>>    arch/powerpc/kernel/early_32.c                |   2 +-
>>>    arch/powerpc/kernel/exceptions-64e.S          |  10 -
>>>    arch/powerpc/kernel/fsl_booke_entry_mapping.S |  23 +-
>>>    arch/powerpc/kernel/head_fsl_booke.S          |  61 ++-
>>>    arch/powerpc/kernel/kaslr_booke.c             | 439 ++++++++++++++++++
>>>    arch/powerpc/kernel/machine_kexec.c           |   1 +
>>>    arch/powerpc/kernel/misc_64.S                 |   5 -
>>>    arch/powerpc/kernel/setup-common.c            |  23 +
>>>    arch/powerpc/mm/init-common.c                 |   7 +
>>>    arch/powerpc/mm/init_32.c                     |   5 -
>>>    arch/powerpc/mm/init_64.c                     |   5 -
>>>    arch/powerpc/mm/mmu_decl.h                    |  10 +
>>>    arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
>>>    17 files changed, 580 insertions(+), 48 deletions(-)
>>>    create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>>>
>>
> 
> 
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32
  2019-07-26  7:20     ` Jason Yan
@ 2019-07-26 16:15       ` Kees Cook
  0 siblings, 0 replies; 37+ messages in thread
From: Kees Cook @ 2019-07-26 16:15 UTC (permalink / raw)
  To: Jason Yan
  Cc: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, kernel-hardening, linux-kernel, wangkefeng.wang,
	yebin10, thunder.leizhen, jingxiangfeng, fanchengyang

On Fri, Jul 26, 2019 at 03:20:26PM +0800, Jason Yan wrote:
> The boot code only maps one 64M zone at early start. If the kernel crosses
> two 64M zones, we need to map two 64M zones. Keeping the kernel in one 64M
> zone saves a lot of complex code.

Ah-ha. Gotcha. Thanks for the clarification.

> Yes, if this feature can be accepted, I will start to work on powerpc64
> KASLR and other things like CONFIG_RANDOMIZE_MEMORY.

Awesome. :)

-- 
Kees Cook

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 01/10] powerpc: unify definition of M_IF_NEEDED
  2019-07-17  8:06 ` [RFC PATCH 01/10] powerpc: unify definition of M_IF_NEEDED Jason Yan
@ 2019-07-29 10:59   ` Christophe Leroy
  0 siblings, 0 replies; 37+ messages in thread
From: Christophe Leroy @ 2019-07-29 10:59 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 17/07/2019 at 10:06, Jason Yan wrote:
> M_IF_NEEDED is defined too many times. Move it to a common place.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>

Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>

> ---
>   arch/powerpc/include/asm/nohash/mmu-book3e.h  | 10 ++++++++++
>   arch/powerpc/kernel/exceptions-64e.S          | 10 ----------
>   arch/powerpc/kernel/fsl_booke_entry_mapping.S | 10 ----------
>   arch/powerpc/kernel/misc_64.S                 |  5 -----
>   4 files changed, 10 insertions(+), 25 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/nohash/mmu-book3e.h b/arch/powerpc/include/asm/nohash/mmu-book3e.h
> index 4c9777d256fb..0877362e48fa 100644
> --- a/arch/powerpc/include/asm/nohash/mmu-book3e.h
> +++ b/arch/powerpc/include/asm/nohash/mmu-book3e.h
> @@ -221,6 +221,16 @@
>   #define TLBILX_T_CLASS2			6
>   #define TLBILX_T_CLASS3			7
>   
> +/*
> + * The mapping only needs to be cache-coherent on SMP, except on
> + * Freescale e500mc derivatives where it's also needed for coherent DMA.
> + */
> +#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
> +#define M_IF_NEEDED	MAS2_M
> +#else
> +#define M_IF_NEEDED	0
> +#endif
> +
>   #ifndef __ASSEMBLY__
>   #include <asm/bug.h>
>   
> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
> index 1cfb3da4a84a..fd49ec07ce4a 100644
> --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -1342,16 +1342,6 @@ skpinv:	addi	r6,r6,1				/* Increment */
>   	sync
>   	isync
>   
> -/*
> - * The mapping only needs to be cache-coherent on SMP, except on
> - * Freescale e500mc derivatives where it's also needed for coherent DMA.
> - */
> -#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
> -#define M_IF_NEEDED	MAS2_M
> -#else
> -#define M_IF_NEEDED	0
> -#endif
> -
>   /* 6. Setup KERNELBASE mapping in TLB[0]
>    *
>    * r3 = MAS0 w/TLBSEL & ESEL for the entry we started in
> diff --git a/arch/powerpc/kernel/fsl_booke_entry_mapping.S b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> index ea065282b303..de0980945510 100644
> --- a/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> +++ b/arch/powerpc/kernel/fsl_booke_entry_mapping.S
> @@ -153,16 +153,6 @@ skpinv:	addi	r6,r6,1				/* Increment */
>   	tlbivax 0,r9
>   	TLBSYNC
>   
> -/*
> - * The mapping only needs to be cache-coherent on SMP, except on
> - * Freescale e500mc derivatives where it's also needed for coherent DMA.
> - */
> -#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
> -#define M_IF_NEEDED	MAS2_M
> -#else
> -#define M_IF_NEEDED	0
> -#endif
> -
>   #if defined(ENTRY_MAPPING_BOOT_SETUP)
>   
>   /* 6. Setup KERNELBASE mapping in TLB1[0] */
> diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
> index b55a7b4cb543..26074f92d4bc 100644
> --- a/arch/powerpc/kernel/misc_64.S
> +++ b/arch/powerpc/kernel/misc_64.S
> @@ -432,11 +432,6 @@ kexec_create_tlb:
>   	rlwimi	r9,r10,16,4,15		/* Setup MAS0 = TLBSEL | ESEL(r9) */
>   
>   /* Set up a temp identity mapping v:0 to p:0 and return to it. */
> -#if defined(CONFIG_SMP) || defined(CONFIG_PPC_E500MC)
> -#define M_IF_NEEDED	MAS2_M
> -#else
> -#define M_IF_NEEDED	0
> -#endif
>   	mtspr	SPRN_MAS0,r9
>   
>   	lis	r9,(MAS1_VALID|MAS1_IPROT)@h
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c
  2019-07-17  8:06 ` [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c Jason Yan
@ 2019-07-29 11:00   ` Christophe Leroy
  2019-07-29 14:31   ` Christoph Hellwig
  1 sibling, 0 replies; 37+ messages in thread
From: Christophe Leroy @ 2019-07-29 11:00 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 17/07/2019 at 10:06, Jason Yan wrote:
> These two variables are both defined in init_32.c and init_64.c. Move
> them to init-common.c.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>

Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>


> ---
>   arch/powerpc/mm/init-common.c | 5 +++++
>   arch/powerpc/mm/init_32.c     | 5 -----
>   arch/powerpc/mm/init_64.c     | 5 -----
>   3 files changed, 5 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
> index a84da92920f7..9273c38009cb 100644
> --- a/arch/powerpc/mm/init-common.c
> +++ b/arch/powerpc/mm/init-common.c
> @@ -21,6 +21,11 @@
>   #include <asm/pgtable.h>
>   #include <asm/kup.h>
>   
> +phys_addr_t memstart_addr = (phys_addr_t)~0ull;
> +EXPORT_SYMBOL(memstart_addr);
> +phys_addr_t kernstart_addr;
> +EXPORT_SYMBOL(kernstart_addr);
> +
>   static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
>   static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
>   
> diff --git a/arch/powerpc/mm/init_32.c b/arch/powerpc/mm/init_32.c
> index b04896a88d79..872df48ae41b 100644
> --- a/arch/powerpc/mm/init_32.c
> +++ b/arch/powerpc/mm/init_32.c
> @@ -56,11 +56,6 @@
>   phys_addr_t total_memory;
>   phys_addr_t total_lowmem;
>   
> -phys_addr_t memstart_addr = (phys_addr_t)~0ull;
> -EXPORT_SYMBOL(memstart_addr);
> -phys_addr_t kernstart_addr;
> -EXPORT_SYMBOL(kernstart_addr);
> -
>   #ifdef CONFIG_RELOCATABLE
>   /* Used in __va()/__pa() */
>   long long virt_phys_offset;
> diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
> index a44f6281ca3a..c836f1269ee7 100644
> --- a/arch/powerpc/mm/init_64.c
> +++ b/arch/powerpc/mm/init_64.c
> @@ -63,11 +63,6 @@
>   
>   #include <mm/mmu_decl.h>
>   
> -phys_addr_t memstart_addr = ~0;
> -EXPORT_SYMBOL_GPL(memstart_addr);
> -phys_addr_t kernstart_addr;
> -EXPORT_SYMBOL_GPL(kernstart_addr);
> -
>   #ifdef CONFIG_SPARSEMEM_VMEMMAP
>   /*
>    * Given an address within the vmemmap, determine the pfn of the page that
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 03/10] powerpc: introduce kimage_vaddr to store the kernel base
  2019-07-17  8:06 ` [RFC PATCH 03/10] powerpc: introduce kimage_vaddr to store the kernel base Jason Yan
@ 2019-07-29 11:00   ` Christophe Leroy
  2019-07-29 14:32   ` Christoph Hellwig
  1 sibling, 0 replies; 37+ messages in thread
From: Christophe Leroy @ 2019-07-29 11:00 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 17/07/2019 at 10:06, Jason Yan wrote:
> Now the kernel base is a fixed value - KERNELBASE. To support KASLR, we
> need a variable to store the kernel base.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>

Reviewed-by: Christophe Leroy <christophe.leroy@c-s.fr>


> ---
>   arch/powerpc/include/asm/page.h | 2 ++
>   arch/powerpc/mm/init-common.c   | 2 ++
>   2 files changed, 4 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 0d52f57fca04..60a68d3a54b1 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -315,6 +315,8 @@ void arch_free_page(struct page *page, int order);
>   
>   struct vm_area_struct;
>   
> +extern unsigned long kimage_vaddr;
> +
>   #include <asm-generic/memory_model.h>
>   #endif /* __ASSEMBLY__ */
>   #include <asm/slice.h>
> diff --git a/arch/powerpc/mm/init-common.c b/arch/powerpc/mm/init-common.c
> index 9273c38009cb..c7a98c73e5c1 100644
> --- a/arch/powerpc/mm/init-common.c
> +++ b/arch/powerpc/mm/init-common.c
> @@ -25,6 +25,8 @@ phys_addr_t memstart_addr = (phys_addr_t)~0ull;
>   EXPORT_SYMBOL(memstart_addr);
>   phys_addr_t kernstart_addr;
>   EXPORT_SYMBOL(kernstart_addr);
> +unsigned long kimage_vaddr = KERNELBASE;
> +EXPORT_SYMBOL(kimage_vaddr);
>   
>   static bool disable_kuep = !IS_ENABLED(CONFIG_PPC_KUEP);
>   static bool disable_kuap = !IS_ENABLED(CONFIG_PPC_KUAP);
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper
  2019-07-17  8:06 ` [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper Jason Yan
@ 2019-07-29 11:05   ` Christophe Leroy
  2019-07-29 13:26     ` Jason Yan
  0 siblings, 1 reply; 37+ messages in thread
From: Christophe Leroy @ 2019-07-29 11:05 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 17/07/2019 at 10:06, Jason Yan wrote:
> Add a new helper create_tlb_entry() to create a tlb entry from the virtual
> and physical address. This is a preparation for booting the kernel at a
> randomized address.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>
> ---
>   arch/powerpc/kernel/head_fsl_booke.S | 30 ++++++++++++++++++++++++++++
>   arch/powerpc/mm/mmu_decl.h           |  1 +
>   2 files changed, 31 insertions(+)
> 
> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
> index adf0505dbe02..a57d44638031 100644
> --- a/arch/powerpc/kernel/head_fsl_booke.S
> +++ b/arch/powerpc/kernel/head_fsl_booke.S
> @@ -1114,6 +1114,36 @@ __secondary_hold_acknowledge:
>   	.long	-1
>   #endif
>   
> +/*
> + * Create a 64M tlb by address and entry
> + * r3/r4 - physical address
> + * r5 - virtual address
> + * r6 - entry
> + */
> +_GLOBAL(create_tlb_entry)
> +	lis     r7,0x1000               /* Set MAS0(TLBSEL) = 1 */
> +	rlwimi  r7,r6,16,4,15           /* Setup MAS0 = TLBSEL | ESEL(r6) */
> +	mtspr   SPRN_MAS0,r7            /* Write MAS0 */
> +
> +	lis     r6,(MAS1_VALID|MAS1_IPROT)@h
> +	ori     r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
> +	mtspr   SPRN_MAS1,r6            /* Write MAS1 */
> +
> +	lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
> +	ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
> +	and     r6,r6,r5
> +	ori	r6,r6,MAS2_M@l
> +	mtspr   SPRN_MAS2,r6            /* Write MAS2(EPN) */
> +
> +	mr      r8,r4
> +	ori     r8,r8,(MAS3_SW|MAS3_SR|MAS3_SX)

Could drop the mr r8, r4 and do:

ori     r8,r4,(MAS3_SW|MAS3_SR|MAS3_SX)

> +	mtspr   SPRN_MAS3,r8            /* Write MAS3(RPN) */
> +
> +	tlbwe                           /* Write TLB */
> +	isync
> +	sync
> +	blr
> +
>   /*
>    * Create a tlb entry with the same effective and physical address as
>    * the tlb entry used by the current running code. But set the TS to 1.
> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
> index 32c1a191c28a..d7737cf97cee 100644
> --- a/arch/powerpc/mm/mmu_decl.h
> +++ b/arch/powerpc/mm/mmu_decl.h
> @@ -142,6 +142,7 @@ extern unsigned long calc_cam_sz(unsigned long ram, unsigned long virt,
>   extern void adjust_total_lowmem(void);
>   extern int switch_to_as1(void);
>   extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
> +extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);

Please please do not add new declarations with the useless 'extern' 
keyword. See checkpatch report: 
https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/8124//artifact/linux/checkpatch.log
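
I.e. just:

	void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);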

>   #endif
>   extern void loadcam_entry(unsigned int index);
>   extern void loadcam_multi(int first_idx, int num, int tmp_idx);
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
  2019-07-17  8:06 ` [RFC PATCH 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper Jason Yan
@ 2019-07-29 11:08   ` Christophe Leroy
  2019-07-29 13:35     ` Jason Yan
  0 siblings, 1 reply; 37+ messages in thread
From: Christophe Leroy @ 2019-07-29 11:08 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 17/07/2019 at 10:06, Jason Yan wrote:
> Add a new helper reloc_kernel_entry() to jump back to the start of the
> new kernel. After we put the new kernel in a randomized place, we can use
> this new helper to enter the kernel and begin to relocate again.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>
> ---
>   arch/powerpc/kernel/head_fsl_booke.S | 16 ++++++++++++++++
>   arch/powerpc/mm/mmu_decl.h           |  1 +
>   2 files changed, 17 insertions(+)
> 
> diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
> index a57d44638031..ce40f96dae20 100644
> --- a/arch/powerpc/kernel/head_fsl_booke.S
> +++ b/arch/powerpc/kernel/head_fsl_booke.S
> @@ -1144,6 +1144,22 @@ _GLOBAL(create_tlb_entry)
>   	sync
>   	blr
>   
> +/*
> + * Return to the start of the relocated kernel and run again
> + * r3 - virtual address of fdt
> + * r4 - entry of the kernel
> + */
> +_GLOBAL(reloc_kernel_entry)
> +	mfmsr	r7
> +	li	r8,(MSR_IS | MSR_DS)
> +	andc	r7,r7,r8

Instead of the li/andc, what about the following:

rlwinm r7, r7, 0, ~(MSR_IS | MSR_DS)

> +
> +	mtspr	SPRN_SRR0,r4
> +	mtspr	SPRN_SRR1,r7
> +	isync
> +	sync
> +	rfi

Are the isync/sync really necessary ? AFAIK, rfi is context synchronising.

> +
>   /*
>    * Create a tlb entry with the same effective and physical address as
>    * the tlb entry used by the current running code. But set the TS to 1.
> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
> index d7737cf97cee..dae8e9177574 100644
> --- a/arch/powerpc/mm/mmu_decl.h
> +++ b/arch/powerpc/mm/mmu_decl.h
> @@ -143,6 +143,7 @@ extern void adjust_total_lowmem(void);
>   extern int switch_to_as1(void);
>   extern void restore_to_as0(int esel, int offset, void *dt_ptr, int bootcpu);
>   extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, int entry);
> +extern void reloc_kernel_entry(void *fdt, int addr);

No new 'extern' please, see 
https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/8125//artifact/linux/checkpatch.log


>   #endif
>   extern void loadcam_entry(unsigned int index);
>   extern void loadcam_multi(int first_idx, int num, int tmp_idx);
> 

Christophe

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 06/10] powerpc/fsl_booke/32: implement KASLR infrastructure
  2019-07-17  8:06 ` [RFC PATCH 06/10] powerpc/fsl_booke/32: implement KASLR infrastructure Jason Yan
@ 2019-07-29 11:16   ` Christophe Leroy
  0 siblings, 0 replies; 37+ messages in thread
From: Christophe Leroy @ 2019-07-29 11:16 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 17/07/2019 at 10:06, Jason Yan wrote:
> This patch adds support to boot the kernel from places other than KERNELBASE.
> Since CONFIG_RELOCATABLE is already supported, what we need to do is
> map or copy the kernel to a proper place and relocate. Freescale Book-E
> parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
> entries are not suitable to map the kernel directly in a randomized
> region, so we chose to copy the kernel to a proper place and restart to
> relocate.
> 
> The offset of the kernel is not randomized yet (a fixed 64M is set). We
> will randomize it in the next patch.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>
> ---
>   arch/powerpc/Kconfig                          | 11 +++
>   arch/powerpc/kernel/Makefile                  |  1 +
>   arch/powerpc/kernel/early_32.c                |  2 +-
>   arch/powerpc/kernel/fsl_booke_entry_mapping.S | 13 ++-
>   arch/powerpc/kernel/head_fsl_booke.S          | 15 +++-
>   arch/powerpc/kernel/kaslr_booke.c             | 83 +++++++++++++++++++
>   arch/powerpc/mm/mmu_decl.h                    |  6 ++
>   arch/powerpc/mm/nohash/fsl_booke.c            |  7 +-
>   8 files changed, 125 insertions(+), 13 deletions(-)
>   create mode 100644 arch/powerpc/kernel/kaslr_booke.c
> 

[...]

> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
> index dae8e9177574..754ae1e69f92 100644
> --- a/arch/powerpc/mm/mmu_decl.h
> +++ b/arch/powerpc/mm/mmu_decl.h
> @@ -148,6 +148,12 @@ extern void reloc_kernel_entry(void *fdt, int addr);
>   extern void loadcam_entry(unsigned int index);
>   extern void loadcam_multi(int first_idx, int num, int tmp_idx);
>   
> +#ifdef CONFIG_RANDOMIZE_BASE
> +extern void kaslr_early_init(void *dt_ptr, phys_addr_t size);

No superfluous 'extern' keyword.

Christophe

> +#else
> +static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
> +#endif
> +
>   struct tlbcam {
>   	u32	MAS0;
>   	u32	MAS1;

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 08/10] powerpc/fsl_booke/kaslr: clear the original kernel if randomized
  2019-07-17  8:06 ` [RFC PATCH 08/10] powerpc/fsl_booke/kaslr: clear the original kernel if randomized Jason Yan
@ 2019-07-29 11:19   ` Christophe Leroy
  2019-07-29 13:43     ` Jason Yan
  0 siblings, 1 reply; 37+ messages in thread
From: Christophe Leroy @ 2019-07-29 11:19 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 17/07/2019 at 10:06, Jason Yan wrote:
> The original kernel still exists in memory; clear it now.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>
> ---
>   arch/powerpc/kernel/kaslr_booke.c  | 11 +++++++++++
>   arch/powerpc/mm/mmu_decl.h         |  2 ++
>   arch/powerpc/mm/nohash/fsl_booke.c |  1 +
>   3 files changed, 14 insertions(+)
> 
> diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
> index 90357f4bd313..00339c05879f 100644
> --- a/arch/powerpc/kernel/kaslr_booke.c
> +++ b/arch/powerpc/kernel/kaslr_booke.c
> @@ -412,3 +412,14 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
>   
>   	reloc_kernel_entry(dt_ptr, kimage_vaddr);
>   }
> +
> +void __init kaslr_second_init(void)
> +{
> +	/* If randomized, clear the original kernel */
> +	if (kimage_vaddr != KERNELBASE) {
> +		unsigned long kernel_sz;
> +
> +		kernel_sz = (unsigned long)_end - kimage_vaddr;
> +		memset((void *)KERNELBASE, 0, kernel_sz);

Why are we clearing ? Is that just to tidy up or is it of security 
importance ?

If so, maybe memzero_explicit() should be used instead ?
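
Something like (untested sketch):

	memzero_explicit((void *)KERNELBASE, kernel_sz);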

> +	}
> +}
> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
> index 754ae1e69f92..9912ee598f9b 100644
> --- a/arch/powerpc/mm/mmu_decl.h
> +++ b/arch/powerpc/mm/mmu_decl.h
> @@ -150,8 +150,10 @@ extern void loadcam_multi(int first_idx, int num, int tmp_idx);
>   
>   #ifdef CONFIG_RANDOMIZE_BASE
>   extern void kaslr_early_init(void *dt_ptr, phys_addr_t size);
> +extern void kaslr_second_init(void);

No new 'extern' please.

>   #else
>   static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
> +static inline void kaslr_second_init(void) {}
>   #endif
>   
>   struct tlbcam {
> diff --git a/arch/powerpc/mm/nohash/fsl_booke.c b/arch/powerpc/mm/nohash/fsl_booke.c
> index 8d25a8dc965f..fa5a87f5c08e 100644
> --- a/arch/powerpc/mm/nohash/fsl_booke.c
> +++ b/arch/powerpc/mm/nohash/fsl_booke.c
> @@ -269,6 +269,7 @@ notrace void __init relocate_init(u64 dt_ptr, phys_addr_t start)
>   	kernstart_addr = start;
>   	if (is_second_reloc) {
>   		virt_phys_offset = PAGE_OFFSET - memstart_addr;
> +		kaslr_second_init();
>   		return;
>   	}
>   
> 

Christophe

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 07/10] powerpc/fsl_booke/32: randomize the kernel image offset
  2019-07-17  8:06 ` [RFC PATCH 07/10] powerpc/fsl_booke/32: randomize the kernel image offset Jason Yan
@ 2019-07-29 11:33   ` Christophe Leroy
  2019-07-29 13:53     ` Jason Yan
  0 siblings, 1 reply; 37+ messages in thread
From: Christophe Leroy @ 2019-07-29 11:33 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 17/07/2019 at 10:06, Jason Yan wrote:
> After we have the basic support for relocating the kernel to an
> appropriate place, we can start to randomize the offset now.
> 
> Entropy is derived from the banner and timer, which will change every
> build and boot. This is not entirely safe, so additionally the bootloader
> may pass entropy via the /chosen/kaslr-seed node in the device tree.
> 
> We will use the first 512M of the low memory to randomize the kernel
> image. The memory will be split into 64M zones. We will use the lower 8
> bits of the entropy to decide the index of the 64M zone. Then we choose a
> 16K-aligned offset inside the 64M zone to put the kernel in.
> 
>      KERNELBASE
> 
>          |-->   64M   <--|
>          |               |
>          +---------------+    +----------------+---------------+
>          |               |....|    |kernel|    |               |
>          +---------------+    +----------------+---------------+
>          |                         |
>          |----->   offset    <-----|
> 
>                                kimage_vaddr
> 
> We also check if we will overlap with some areas like the dtb area, the
> initrd area or the crashkernel area. If we cannot find a proper area,
> kaslr will be disabled and the kernel will boot from its original location.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>
> ---
>   arch/powerpc/kernel/kaslr_booke.c | 335 +++++++++++++++++++++++++++++-
>   1 file changed, 333 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
> index 72d8e9432048..90357f4bd313 100644
> --- a/arch/powerpc/kernel/kaslr_booke.c
> +++ b/arch/powerpc/kernel/kaslr_booke.c
> @@ -22,6 +22,8 @@
>   #include <linux/delay.h>
>   #include <linux/highmem.h>
>   #include <linux/memblock.h>
> +#include <linux/libfdt.h>
> +#include <linux/crash_core.h>
>   #include <asm/pgalloc.h>
>   #include <asm/prom.h>
>   #include <asm/io.h>
> @@ -33,15 +35,342 @@
>   #include <asm/machdep.h>
>   #include <asm/setup.h>
>   #include <asm/paca.h>
> +#include <asm/kdump.h>
>   #include <mm/mmu_decl.h>
> +#include <generated/compile.h>
> +#include <generated/utsrelease.h>
> +
> +#ifdef DEBUG
> +#define DBG(fmt...) printk(KERN_ERR fmt)
> +#else
> +#define DBG(fmt...)
> +#endif
> +
> +struct regions {
> +	unsigned long pa_start;
> +	unsigned long pa_end;
> +	unsigned long kernel_size;
> +	unsigned long dtb_start;
> +	unsigned long dtb_end;
> +	unsigned long initrd_start;
> +	unsigned long initrd_end;
> +	unsigned long crash_start;
> +	unsigned long crash_end;
> +	int reserved_mem;
> +	int reserved_mem_addr_cells;
> +	int reserved_mem_size_cells;
> +};
>   
>   extern int is_second_reloc;
>   
> +/* Simplified build-specific string for starting entropy. */
> +static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
> +		LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
> +static char __initdata early_command_line[COMMAND_LINE_SIZE];
> +
> +static __init void kaslr_get_cmdline(void *fdt)
> +{
> +	const char *cmdline = CONFIG_CMDLINE;
> +	if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) {
> +		int node;
> +		const u8 *prop;
> +		node = fdt_path_offset(fdt, "/chosen");
> +		if (node < 0)
> +			goto out;
> +
> +		prop = fdt_getprop(fdt, node, "bootargs", NULL);
> +		if (!prop)
> +			goto out;
> +		cmdline = prop;
> +	}
> +out:
> +	strscpy(early_command_line, cmdline, COMMAND_LINE_SIZE);
> +}
> +

Can you explain why we need that and can't use the already existing 
cmdline stuff ?

Christophe

> +static unsigned long __init rotate_xor(unsigned long hash, const void *area,
> +				size_t size)
> +{
> +	size_t i;
> +	unsigned long *ptr = (unsigned long *)area;
> +
> +	for (i = 0; i < size / sizeof(hash); i++) {
> +		/* Rotate by odd number of bits and XOR. */
> +		hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
> +		hash ^= ptr[i];
> +	}
> +
> +	return hash;
> +}
> +
> +/* Attempt to create a simple but unpredictable starting entropy. */
> +static unsigned long __init get_boot_seed(void *fdt)
> +{
> +	unsigned long hash = 0;
> +
> +	hash = rotate_xor(hash, build_str, sizeof(build_str));
> +	hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));
> +
> +	return hash;
> +}
> +
> +static __init u64 get_kaslr_seed(void *fdt)
> +{
> +	int node, len;
> +	fdt64_t *prop;
> +	u64 ret;
> +
> +	node = fdt_path_offset(fdt, "/chosen");
> +	if (node < 0)
> +		return 0;
> +
> +	prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len);
> +	if (!prop || len != sizeof(u64))
> +		return 0;
> +
> +	ret = fdt64_to_cpu(*prop);
> +	*prop = 0;
> +	return ret;
> +}
> +
> +static __init bool regions_overlap(u32 s1, u32 e1, u32 s2, u32 e2)
> +{
> +	return e1 >= s2 && e2 >= s1;
> +}
> +
> +static __init bool overlaps_reserved_region(const void *fdt, u32 start,
> +				       u32 end, struct regions *regions)
> +{
> +	int subnode, len, i;
> +	u64 base, size;
> +
> +	/* check for overlap with /memreserve/ entries */
> +	for (i = 0; i < fdt_num_mem_rsv(fdt); i++) {
> +		if (fdt_get_mem_rsv(fdt, i, &base, &size) < 0)
> +			continue;
> +		if (regions_overlap(start, end, base, base + size))
> +			return true;
> +	}
> +
> +	if (regions->reserved_mem < 0)
> +		return false;
> +
> +	/* check for overlap with static reservations in /reserved-memory */
> +	for (subnode = fdt_first_subnode(fdt, regions->reserved_mem);
> +	     subnode >= 0;
> +	     subnode = fdt_next_subnode(fdt, subnode)) {
> +		const fdt32_t *reg;
> +		u64 rsv_end;
> +
> +		len = 0;
> +		reg = fdt_getprop(fdt, subnode, "reg", &len);
> +		while (len >= (regions->reserved_mem_addr_cells +
> +			       regions->reserved_mem_size_cells)) {
> +			base = fdt32_to_cpu(reg[0]);
> +			if (regions->reserved_mem_addr_cells == 2)
> +				base = (base << 32) | fdt32_to_cpu(reg[1]);
> +
> +			reg += regions->reserved_mem_addr_cells;
> +			len -= 4 * regions->reserved_mem_addr_cells;
> +
> +			size = fdt32_to_cpu(reg[0]);
> +			if (regions->reserved_mem_size_cells == 2)
> +				size = (size << 32) | fdt32_to_cpu(reg[1]);
> +
> +			reg += regions->reserved_mem_size_cells;
> +			len -= 4 * regions->reserved_mem_size_cells;
> +
> +			if (base >= regions->pa_end)
> +				continue;
> +
> +			rsv_end = min(base + size, (u64)U32_MAX);
> +
> +			if (regions_overlap(start, end, base, rsv_end))
> +				return true;
> +		}
> +	}
> +	return false;
> +}
> +
> +static __init bool overlaps_region(const void *fdt, u32 start,
> +				       u32 end, struct regions *regions)
> +{
> +	if (regions_overlap(start, end, regions->dtb_start,
> +			      regions->dtb_end))
> +		return true;
> +
> +	if (regions_overlap(start, end, regions->initrd_start,
> +			      regions->initrd_end))
> +		return true;
> +
> +	if (regions_overlap(start, end, regions->crash_start,
> +			      regions->crash_end))
> +		return true;
> +
> +	return overlaps_reserved_region(fdt, start, end, regions);
> +}
> +
> +static void __init get_crash_kernel(void *fdt, unsigned long size,
> +				struct regions *regions)
> +{
> +#ifdef CONFIG_KEXEC_CORE
> +	unsigned long long crash_size, crash_base;
> +	int ret;
> +
> +	ret = parse_crashkernel(early_command_line, size, &crash_size,
> +			&crash_base);
> +	if (ret != 0 || crash_size == 0)
> +		return;
> +	if (crash_base == 0)
> +		crash_base = KDUMP_KERNELBASE;
> +
> +	regions->crash_start = (unsigned long)crash_base;
> +	regions->crash_end = (unsigned long)(crash_base + crash_size);
> +
> +	DBG("crash_base=0x%llx crash_size=0x%llx\n", crash_base, crash_size);
> +#endif
> +}
> +
> +static void __init get_initrd_range(void *fdt, struct regions *regions)
> +{
> +	u64 start, end;
> +	int node, len;
> +	const __be32 *prop;
> +
> +	node = fdt_path_offset(fdt, "/chosen");
> +	if (node < 0)
> +		return;
> +
> +	prop = fdt_getprop(fdt, node, "linux,initrd-start", &len);
> +	if (!prop)
> +		return;
> +	start = of_read_number(prop, len / 4);
> +
> +	prop = fdt_getprop(fdt, node, "linux,initrd-end", &len);
> +	if (!prop)
> +		return;
> +	end = of_read_number(prop, len / 4);
> +
> +	regions->initrd_start = (unsigned long)start;
> +	regions->initrd_end = (unsigned long)end;
> +
> +	DBG("initrd_start=0x%llx  initrd_end=0x%llx\n", start, end);
> +}
> +
> +static __init unsigned long get_usable_offset(const void *fdt, struct regions *regions,
> +				unsigned long start)
> +{
> +	unsigned long pa;
> +	unsigned long pa_end;
> +
> +	for (pa = start; pa > regions->pa_start; pa -= SZ_16K) {
> +		pa_end = pa + regions->kernel_size;
> +		if (overlaps_region(fdt, pa, pa_end, regions))
> +			continue;
> +
> +		return pa;
> +	}
> +	return 0;
> +}
> +
> +static __init void get_cell_sizes(const void *fdt, int node, int *addr_cells,
> +			   int *size_cells)
> +{
> +	const int *prop;
> +	int len;
> +
> +	/*
> +	 * Retrieve the #address-cells and #size-cells properties
> +	 * from the 'node', or use the default if not provided.
> +	 */
> +	*addr_cells = *size_cells = 1;
> +
> +	prop = fdt_getprop(fdt, node, "#address-cells", &len);
> +	if (len == 4)
> +		*addr_cells = fdt32_to_cpu(*prop);
> +	prop = fdt_getprop(fdt, node, "#size-cells", &len);
> +	if (len == 4)
> +		*size_cells = fdt32_to_cpu(*prop);
> +}
> +
>   static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size,
>   					unsigned long kernel_sz)
>   {
> -	/* return a fixed offset of 64M for now */
> -	return 0x4000000;
> +	unsigned long offset, random;
> +	unsigned long ram, linear_sz;
> +	unsigned long kaslr_offset;
> +	u64 seed;
> +	struct regions regions;
> +	unsigned long index;
> +
> +	random = get_boot_seed(dt_ptr);
> +
> +	seed = get_tb() << 32;
> +	seed ^= get_tb();
> +	random = rotate_xor(random, &seed, sizeof(seed));
> +
> +	/*
> +	 * Retrieve (and wipe) the seed from the FDT
> +	 */
> +	seed = get_kaslr_seed(dt_ptr);
> +	if (seed)
> +		random = rotate_xor(random, &seed, sizeof(seed));
> +
> +	ram = min((phys_addr_t)__max_low_memory, size);
> +	ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true);
> +	linear_sz = min(ram, (unsigned long)SZ_512M);
> +
> +	/* If the linear size is smaller than 64M, do not randomize */
> +	if (linear_sz < SZ_64M)
> +		return 0;
> +
> +	memset(&regions, 0, sizeof(regions));
> +
> +	/* check for a reserved-memory node and record its cell sizes */
> +	regions.reserved_mem = fdt_path_offset(dt_ptr, "/reserved-memory");
> +	if (regions.reserved_mem >= 0)
> +		get_cell_sizes(dt_ptr, regions.reserved_mem,
> +			       &regions.reserved_mem_addr_cells,
> +			       &regions.reserved_mem_size_cells);
> +
> +	regions.pa_start = 0;
> +	regions.pa_end = linear_sz;
> +	regions.dtb_start = __pa(dt_ptr);
> +	regions.dtb_end = __pa(dt_ptr) + fdt_totalsize(dt_ptr);
> +	regions.kernel_size = kernel_sz;
> +
> +	get_initrd_range(dt_ptr, &regions);
> +	get_crash_kernel(dt_ptr, ram, &regions);
> +
> +	/*
> +	 * Decide which 64M we want to start
> +	 * Only use the low 8 bits of the random seed
> +	 */
> +	index = random & 0xFF;
> +	index %= linear_sz / SZ_64M;
> +
> +	/* Decide offset inside 64M */
> +	if (index == 0) {
> +		offset = random % (SZ_64M - round_up(kernel_sz, SZ_16K) * 2);
> +		offset += round_up(kernel_sz, SZ_16K);
> +		offset = round_up(offset, SZ_16K);
> +	} else {
> +		offset = random % (SZ_64M - kernel_sz);
> +		offset = round_down(offset, SZ_16K);
> +	}
> +
> +	while (index >= 0) {
> +		offset = offset + index * SZ_64M;
> +		kaslr_offset = get_usable_offset(dt_ptr, &regions, offset);
> +		if (kaslr_offset)
> +			break;
> +		index--;
> +	}
> +
> +	/* Did not find any usable region? Give up randomize */
> +	if (index < 0)
> +		kaslr_offset = 0;
> +
> +	return kaslr_offset;
>   }
>   
>   /*
> @@ -58,6 +387,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
>   
>   	kernel_sz = (unsigned long)_end - KERNELBASE;
>   
> +	kaslr_get_cmdline(dt_ptr);
> +
>   	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
>   
>   	if (offset == 0)
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
  2019-07-17  8:06 ` [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter Jason Yan
@ 2019-07-29 11:38   ` Christophe Leroy
  2019-07-29 14:04     ` Jason Yan
  0 siblings, 1 reply; 37+ messages in thread
From: Christophe Leroy @ 2019-07-29 11:38 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 17/07/2019 at 10:06, Jason Yan wrote:
> One may want to disable kaslr at boot time, so provide a cmdline parameter
> 'nokaslr' to support this.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>
> ---
>   arch/powerpc/kernel/kaslr_booke.c | 14 ++++++++++++++
>   1 file changed, 14 insertions(+)
> 
> diff --git a/arch/powerpc/kernel/kaslr_booke.c b/arch/powerpc/kernel/kaslr_booke.c
> index 00339c05879f..e65a5d9d2ff1 100644
> --- a/arch/powerpc/kernel/kaslr_booke.c
> +++ b/arch/powerpc/kernel/kaslr_booke.c
> @@ -373,6 +373,18 @@ static unsigned long __init kaslr_choose_location(void *dt_ptr, phys_addr_t size
>   	return kaslr_offset;
>   }
>   
> +static inline __init bool kaslr_disabled(void)
> +{
> +	char *str;
> +
> +	str = strstr(early_command_line, "nokaslr");

Why use early_command_line instead of boot_command_line ?


> +	if ((str == early_command_line) ||
> +	    (str > early_command_line && *(str - 1) == ' '))

Is that stuff really needed ?

Why not just:

return strstr(early_command_line, "nokaslr") != NULL;

> +		return true;
> +
> +	return false;
> +}


> +
>   /*
>    * To see if we need to relocate the kernel to a random offset
>    * void *dt_ptr - address of the device tree
> @@ -388,6 +400,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, phys_addr_t size)
>   	kernel_sz = (unsigned long)_end - KERNELBASE;
>   
>   	kaslr_get_cmdline(dt_ptr);
> +	if (kaslr_disabled())
> +		return;
>   
>   	offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
>   
> 

Christophe

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
  2019-07-17  8:06 ` [RFC PATCH 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic Jason Yan
@ 2019-07-29 11:43   ` Christophe Leroy
  2019-07-29 14:08     ` Jason Yan
  0 siblings, 1 reply; 37+ messages in thread
From: Christophe Leroy @ 2019-07-29 11:43 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 17/07/2019 at 10:06, Jason Yan wrote:
> When kaslr is enabled, the kernel offset is different for every boot.
> This makes it difficult to debug the kernel. Dump out the kernel
> offset on panic so that we can easily debug the kernel.
> 
> Signed-off-by: Jason Yan <yanaijie@huawei.com>
> Cc: Diana Craciun <diana.craciun@nxp.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Kees Cook <keescook@chromium.org>
> ---
>   arch/powerpc/include/asm/page.h     |  5 +++++
>   arch/powerpc/kernel/machine_kexec.c |  1 +
>   arch/powerpc/kernel/setup-common.c  | 23 +++++++++++++++++++++++
>   3 files changed, 29 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 60a68d3a54b1..cd3ac530e58d 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -317,6 +317,11 @@ struct vm_area_struct;
>   
>   extern unsigned long kimage_vaddr;
>   
> +static inline unsigned long kaslr_offset(void)
> +{
> +	return kimage_vaddr - KERNELBASE;
> +}
> +
>   #include <asm-generic/memory_model.h>
>   #endif /* __ASSEMBLY__ */
>   #include <asm/slice.h>
> diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
> index c4ed328a7b96..078fe3d76feb 100644
> --- a/arch/powerpc/kernel/machine_kexec.c
> +++ b/arch/powerpc/kernel/machine_kexec.c
> @@ -86,6 +86,7 @@ void arch_crash_save_vmcoreinfo(void)
>   	VMCOREINFO_STRUCT_SIZE(mmu_psize_def);
>   	VMCOREINFO_OFFSET(mmu_psize_def, shift);
>   #endif
> +	vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
>   }
>   
>   /*
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index 1f8db666468d..49e540c0adeb 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -715,12 +715,35 @@ static struct notifier_block ppc_panic_block = {
>   	.priority = INT_MIN /* may not return; must be done last */
>   };
>   
> +/*
> + * Dump out kernel offset information on panic.
> + */
> +static int dump_kernel_offset(struct notifier_block *self, unsigned long v,
> +			      void *p)
> +{
> +	const unsigned long offset = kaslr_offset();
> +
> +	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && offset > 0)
> +		pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n",
> +			 offset, KERNELBASE);
> +	else
> +		pr_emerg("Kernel Offset: disabled\n");

Do we really need that else branch?

Why not just make the below atomic_notifier_chain_register()
conditional on IS_ENABLED(CONFIG_RANDOMIZE_BASE) && offset > 0,
and not print anything otherwise?

Christophe

> +
> +	return 0;
> +}
> +
> +static struct notifier_block kernel_offset_notifier = {
> +	.notifier_call = dump_kernel_offset
> +};
> +
>   void __init setup_panic(void)
>   {
>   	/* PPC64 always does a hard irq disable in its panic handler */
>   	if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic)
>   		return;
>   	atomic_notifier_chain_register(&panic_notifier_list, &ppc_panic_block);
> +	atomic_notifier_chain_register(&panic_notifier_list,
> +				       &kernel_offset_notifier);
>   }
>   
>   #ifdef CONFIG_CHECK_CACHE_COHERENCY
> 

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper
  2019-07-29 11:05   ` Christophe Leroy
@ 2019-07-29 13:26     ` Jason Yan
  0 siblings, 0 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-29 13:26 UTC (permalink / raw)
  To: Christophe Leroy, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang


On 2019/7/29 19:05, Christophe Leroy wrote:
> 
> 
> On 17/07/2019 at 10:06, Jason Yan wrote:
>> Add a new helper create_tlb_entry() to create a TLB entry from the virtual
>> and physical address. This is a preparation for booting the kernel at a
>> randomized address.
>>
>> Signed-off-by: Jason Yan <yanaijie@huawei.com>
>> Cc: Diana Craciun <diana.craciun@nxp.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Nicholas Piggin <npiggin@gmail.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> ---
>>   arch/powerpc/kernel/head_fsl_booke.S | 30 ++++++++++++++++++++++++++++
>>   arch/powerpc/mm/mmu_decl.h           |  1 +
>>   2 files changed, 31 insertions(+)
>>
>> diff --git a/arch/powerpc/kernel/head_fsl_booke.S 
>> b/arch/powerpc/kernel/head_fsl_booke.S
>> index adf0505dbe02..a57d44638031 100644
>> --- a/arch/powerpc/kernel/head_fsl_booke.S
>> +++ b/arch/powerpc/kernel/head_fsl_booke.S
>> @@ -1114,6 +1114,36 @@ __secondary_hold_acknowledge:
>>       .long    -1
>>   #endif
>> +/*
>> + * Create a 64M tlb by address and entry
>> + * r3/r4 - physical address
>> + * r5 - virtual address
>> + * r6 - entry
>> + */
>> +_GLOBAL(create_tlb_entry)
>> +    lis     r7,0x1000               /* Set MAS0(TLBSEL) = 1 */
>> +    rlwimi  r7,r6,16,4,15           /* Setup MAS0 = TLBSEL | ESEL(r6) */
>> +    mtspr   SPRN_MAS0,r7            /* Write MAS0 */
>> +
>> +    lis     r6,(MAS1_VALID|MAS1_IPROT)@h
>> +    ori     r6,r6,(MAS1_TSIZE(BOOK3E_PAGESZ_64M))@l
>> +    mtspr   SPRN_MAS1,r6            /* Write MAS1 */
>> +
>> +    lis     r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@h
>> +    ori     r6,r6,MAS2_EPN_MASK(BOOK3E_PAGESZ_64M)@l
>> +    and     r6,r6,r5
>> +    ori    r6,r6,MAS2_M@l
>> +    mtspr   SPRN_MAS2,r6            /* Write MAS2(EPN) */
>> +
>> +    mr      r8,r4
>> +    ori     r8,r8,(MAS3_SW|MAS3_SR|MAS3_SX)
> 
> Could drop the mr r8, r4 and do:
> 
> ori     r8,r4,(MAS3_SW|MAS3_SR|MAS3_SX)
> 

OK, thanks for the suggestion.

>> +    mtspr   SPRN_MAS3,r8            /* Write MAS3(RPN) */
>> +
>> +    tlbwe                           /* Write TLB */
>> +    isync
>> +    sync
>> +    blr
>> +
>>   /*
>>    * Create a tlb entry with the same effective and physical address as
>>    * the tlb entry used by the current running code. But set the TS to 1.
>> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
>> index 32c1a191c28a..d7737cf97cee 100644
>> --- a/arch/powerpc/mm/mmu_decl.h
>> +++ b/arch/powerpc/mm/mmu_decl.h
>> @@ -142,6 +142,7 @@ extern unsigned long calc_cam_sz(unsigned long 
>> ram, unsigned long virt,
>>   extern void adjust_total_lowmem(void);
>>   extern int switch_to_as1(void);
>>   extern void restore_to_as0(int esel, int offset, void *dt_ptr, int 
>> bootcpu);
>> +extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, 
>> int entry);
> 
> Please please do not add new declarations with the useless 'extern' 
> keyword. See checkpatch report: 
> https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/8124//artifact/linux/checkpatch.log 
> 

Will drop all useless 'extern' in this and other patches, thanks.

> 
>>   #endif
>>   extern void loadcam_entry(unsigned int index);
>>   extern void loadcam_multi(int first_idx, int num, int tmp_idx);
>>
> 
> .
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
  2019-07-29 11:08   ` Christophe Leroy
@ 2019-07-29 13:35     ` Jason Yan
  0 siblings, 0 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-29 13:35 UTC (permalink / raw)
  To: Christophe Leroy, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang


On 2019/7/29 19:08, Christophe Leroy wrote:
> 
> 
> On 17/07/2019 at 10:06, Jason Yan wrote:
>> Add a new helper reloc_kernel_entry() to jump back to the start of the
>> new kernel. After we put the new kernel at a randomized location we can use
>> this new helper to enter the kernel and begin relocation again.
>>
>> Signed-off-by: Jason Yan <yanaijie@huawei.com>
>> Cc: Diana Craciun <diana.craciun@nxp.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Nicholas Piggin <npiggin@gmail.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> ---
>>   arch/powerpc/kernel/head_fsl_booke.S | 16 ++++++++++++++++
>>   arch/powerpc/mm/mmu_decl.h           |  1 +
>>   2 files changed, 17 insertions(+)
>>
>> diff --git a/arch/powerpc/kernel/head_fsl_booke.S 
>> b/arch/powerpc/kernel/head_fsl_booke.S
>> index a57d44638031..ce40f96dae20 100644
>> --- a/arch/powerpc/kernel/head_fsl_booke.S
>> +++ b/arch/powerpc/kernel/head_fsl_booke.S
>> @@ -1144,6 +1144,22 @@ _GLOBAL(create_tlb_entry)
>>       sync
>>       blr
>> +/*
>> + * Return to the start of the relocated kernel and run again
>> + * r3 - virtual address of fdt
>> + * r4 - entry of the kernel
>> + */
>> +_GLOBAL(reloc_kernel_entry)
>> +    mfmsr    r7
>> +    li    r8,(MSR_IS | MSR_DS)
>> +    andc    r7,r7,r8
> 
> Instead of the li/andc, what about the following:
> 
> rlwinm r7, r7, 0, ~(MSR_IS | MSR_DS)
> 

Good idea.

>> +
>> +    mtspr    SPRN_SRR0,r4
>> +    mtspr    SPRN_SRR1,r7
>> +    isync
>> +    sync
>> +    rfi
> 
> Are the isync/sync really necessary? AFAIK, rfi is context-synchronising.
> 

I've seen some code with sync before rfi, so I'm not sure. I will check
this and drop the isync/sync if they are indeed unnecessary.

Thanks.

>> +
>>   /*
>>    * Create a tlb entry with the same effective and physical address as
>>    * the tlb entry used by the current running code. But set the TS to 1.
>> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
>> index d7737cf97cee..dae8e9177574 100644
>> --- a/arch/powerpc/mm/mmu_decl.h
>> +++ b/arch/powerpc/mm/mmu_decl.h
>> @@ -143,6 +143,7 @@ extern void adjust_total_lowmem(void);
>>   extern int switch_to_as1(void);
>>   extern void restore_to_as0(int esel, int offset, void *dt_ptr, int 
>> bootcpu);
>>   extern void create_tlb_entry(phys_addr_t phys, unsigned long virt, 
>> int entry);
>> +extern void reloc_kernel_entry(void *fdt, int addr);
> 
> No new 'extern' please, see 
> https://openpower.xyz/job/snowpatch/job/snowpatch-linux-checkpatch/8125//artifact/linux/checkpatch.log 
> 
> 
> 
>>   #endif
>>   extern void loadcam_entry(unsigned int index);
>>   extern void loadcam_multi(int first_idx, int num, int tmp_idx);
>>
> 
> Christophe
> 
> .
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 08/10] powerpc/fsl_booke/kaslr: clear the original kernel if randomized
  2019-07-29 11:19   ` Christophe Leroy
@ 2019-07-29 13:43     ` Jason Yan
  0 siblings, 0 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-29 13:43 UTC (permalink / raw)
  To: Christophe Leroy, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang


On 2019/7/29 19:19, Christophe Leroy wrote:
> 
> 
> On 17/07/2019 at 10:06, Jason Yan wrote:
>> The original kernel image still exists in memory; clear it now.
>>
>> Signed-off-by: Jason Yan <yanaijie@huawei.com>
>> Cc: Diana Craciun <diana.craciun@nxp.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Nicholas Piggin <npiggin@gmail.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> ---
>>   arch/powerpc/kernel/kaslr_booke.c  | 11 +++++++++++
>>   arch/powerpc/mm/mmu_decl.h         |  2 ++
>>   arch/powerpc/mm/nohash/fsl_booke.c |  1 +
>>   3 files changed, 14 insertions(+)
>>
>> diff --git a/arch/powerpc/kernel/kaslr_booke.c 
>> b/arch/powerpc/kernel/kaslr_booke.c
>> index 90357f4bd313..00339c05879f 100644
>> --- a/arch/powerpc/kernel/kaslr_booke.c
>> +++ b/arch/powerpc/kernel/kaslr_booke.c
>> @@ -412,3 +412,14 @@ notrace void __init kaslr_early_init(void 
>> *dt_ptr, phys_addr_t size)
>>       reloc_kernel_entry(dt_ptr, kimage_vaddr);
>>   }
>> +
>> +void __init kaslr_second_init(void)
>> +{
>> +    /* If randomized, clear the original kernel */
>> +    if (kimage_vaddr != KERNELBASE) {
>> +        unsigned long kernel_sz;
>> +
>> +        kernel_sz = (unsigned long)_end - kimage_vaddr;
>> +        memset((void *)KERNELBASE, 0, kernel_sz);
> 
> Why are we clearing? Is that just to tidy up, or is it of security
> importance?
> 

If we leave it there, attackers can still find the kernel image very
easily. It's still dangerous, especially without
CONFIG_STRICT_KERNEL_RWX enabled.

> If so, maybe memzero_explicit() should be used instead?
> 

OK
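
Something like this, then (an untested sketch of the suggested change;
memzero_explicit() cannot be optimized away by the compiler, so the
stale copy of the kernel is guaranteed to be wiped):

void __init kaslr_second_init(void)
{
	/* If randomized, wipe the original copy of the kernel */
	if (kimage_vaddr != KERNELBASE) {
		unsigned long kernel_sz;

		kernel_sz = (unsigned long)_end - kimage_vaddr;
		memzero_explicit((void *)KERNELBASE, kernel_sz);
	}
}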

>> +    }
>> +}
>> diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
>> index 754ae1e69f92..9912ee598f9b 100644
>> --- a/arch/powerpc/mm/mmu_decl.h
>> +++ b/arch/powerpc/mm/mmu_decl.h
>> @@ -150,8 +150,10 @@ extern void loadcam_multi(int first_idx, int num, 
>> int tmp_idx);
>>   #ifdef CONFIG_RANDOMIZE_BASE
>>   extern void kaslr_early_init(void *dt_ptr, phys_addr_t size);
>> +extern void kaslr_second_init(void);
> 
> No new 'extern' please.
> 
>>   #else
>>   static inline void kaslr_early_init(void *dt_ptr, phys_addr_t size) {}
>> +static inline void kaslr_second_init(void) {}
>>   #endif
>>   struct tlbcam {
>> diff --git a/arch/powerpc/mm/nohash/fsl_booke.c 
>> b/arch/powerpc/mm/nohash/fsl_booke.c
>> index 8d25a8dc965f..fa5a87f5c08e 100644
>> --- a/arch/powerpc/mm/nohash/fsl_booke.c
>> +++ b/arch/powerpc/mm/nohash/fsl_booke.c
>> @@ -269,6 +269,7 @@ notrace void __init relocate_init(u64 dt_ptr, 
>> phys_addr_t start)
>>       kernstart_addr = start;
>>       if (is_second_reloc) {
>>           virt_phys_offset = PAGE_OFFSET - memstart_addr;
>> +        kaslr_second_init();
>>           return;
>>       }
>>
> 
> Christophe
> 
> .
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 07/10] powerpc/fsl_booke/32: randomize the kernel image offset
  2019-07-29 11:33   ` Christophe Leroy
@ 2019-07-29 13:53     ` Jason Yan
  0 siblings, 0 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-29 13:53 UTC (permalink / raw)
  To: Christophe Leroy, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang


On 2019/7/29 19:33, Christophe Leroy wrote:
> 
> 
> On 17/07/2019 at 10:06, Jason Yan wrote:
>> After we have the basic support for relocating the kernel to an
>> appropriate place, we can start to randomize the offset now.
>>
>> Entropy is derived from the banner and timer, which will change on every
>> build and boot. This is not so safe by itself, so additionally the
>> bootloader may pass entropy via the /chosen/kaslr-seed node in the device tree.
>>
>> We will use the first 512M of the low memory to randomize the kernel
>> image. The memory will be split into 64M zones. We will use the lower 8
>> bits of the entropy to decide the index of the 64M zone. Then we choose a
>> 16K-aligned offset inside the 64M zone to put the kernel in.
>>
>>      KERNELBASE
>>
>>          |-->   64M   <--|
>>          |               |
>>          +---------------+    +----------------+---------------+
>>          |               |....|    |kernel|    |               |
>>          +---------------+    +----------------+---------------+
>>          |                         |
>>          |----->   offset    <-----|
>>
>>                                kimage_vaddr
>>
>> We also check if we will overlap with some areas like the dtb area, the
>> initrd area or the crashkernel area. If we cannot find a proper area,
>> kaslr will be disabled and the kernel will boot from its original location.
>>
>> Signed-off-by: Jason Yan <yanaijie@huawei.com>
>> Cc: Diana Craciun <diana.craciun@nxp.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Nicholas Piggin <npiggin@gmail.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> ---
>>   arch/powerpc/kernel/kaslr_booke.c | 335 +++++++++++++++++++++++++++++-
>>   1 file changed, 333 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/powerpc/kernel/kaslr_booke.c 
>> b/arch/powerpc/kernel/kaslr_booke.c
>> index 72d8e9432048..90357f4bd313 100644
>> --- a/arch/powerpc/kernel/kaslr_booke.c
>> +++ b/arch/powerpc/kernel/kaslr_booke.c
>> @@ -22,6 +22,8 @@
>>   #include <linux/delay.h>
>>   #include <linux/highmem.h>
>>   #include <linux/memblock.h>
>> +#include <linux/libfdt.h>
>> +#include <linux/crash_core.h>
>>   #include <asm/pgalloc.h>
>>   #include <asm/prom.h>
>>   #include <asm/io.h>
>> @@ -33,15 +35,342 @@
>>   #include <asm/machdep.h>
>>   #include <asm/setup.h>
>>   #include <asm/paca.h>
>> +#include <asm/kdump.h>
>>   #include <mm/mmu_decl.h>
>> +#include <generated/compile.h>
>> +#include <generated/utsrelease.h>
>> +
>> +#ifdef DEBUG
>> +#define DBG(fmt...) printk(KERN_ERR fmt)
>> +#else
>> +#define DBG(fmt...)
>> +#endif
>> +
>> +struct regions {
>> +    unsigned long pa_start;
>> +    unsigned long pa_end;
>> +    unsigned long kernel_size;
>> +    unsigned long dtb_start;
>> +    unsigned long dtb_end;
>> +    unsigned long initrd_start;
>> +    unsigned long initrd_end;
>> +    unsigned long crash_start;
>> +    unsigned long crash_end;
>> +    int reserved_mem;
>> +    int reserved_mem_addr_cells;
>> +    int reserved_mem_size_cells;
>> +};
>>   extern int is_second_reloc;
>> +/* Simplified build-specific string for starting entropy. */
>> +static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
>> +        LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
>> +static char __initdata early_command_line[COMMAND_LINE_SIZE];
>> +
>> +static __init void kaslr_get_cmdline(void *fdt)
>> +{
>> +    const char *cmdline = CONFIG_CMDLINE;
>> +    if (!IS_ENABLED(CONFIG_CMDLINE_FORCE)) {
>> +        int node;
>> +        const u8 *prop;
>> +        node = fdt_path_offset(fdt, "/chosen");
>> +        if (node < 0)
>> +            goto out;
>> +
>> +        prop = fdt_getprop(fdt, node, "bootargs", NULL);
>> +        if (!prop)
>> +            goto out;
>> +        cmdline = prop;
>> +    }
>> +out:
>> +    strscpy(early_command_line, cmdline, COMMAND_LINE_SIZE);
>> +}
>> +
> 
> Can you explain why we need that and can't use the already existing
> cmdline stuff?
> 

I was afraid of breaking the other initialization code for the cmdline
buffer. I will try using it to see if there are any problems.

> Christophe
> 
>> +static unsigned long __init rotate_xor(unsigned long hash, const void 
>> *area,
>> +                size_t size)
>> +{
>> +    size_t i;
>> +    unsigned long *ptr = (unsigned long *)area;
>> +
>> +    for (i = 0; i < size / sizeof(hash); i++) {
>> +        /* Rotate by odd number of bits and XOR. */
>> +        hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
>> +        hash ^= ptr[i];
>> +    }
>> +
>> +    return hash;
>> +}
>> +
>> +/* Attempt to create a simple but unpredictable starting entropy. */
>> +static unsigned long __init get_boot_seed(void *fdt)
>> +{
>> +    unsigned long hash = 0;
>> +
>> +    hash = rotate_xor(hash, build_str, sizeof(build_str));
>> +    hash = rotate_xor(hash, fdt, fdt_totalsize(fdt));
>> +
>> +    return hash;
>> +}
>> +
>> +static __init u64 get_kaslr_seed(void *fdt)
>> +{
>> +    int node, len;
>> +    fdt64_t *prop;
>> +    u64 ret;
>> +
>> +    node = fdt_path_offset(fdt, "/chosen");
>> +    if (node < 0)
>> +        return 0;
>> +
>> +    prop = fdt_getprop_w(fdt, node, "kaslr-seed", &len);
>> +    if (!prop || len != sizeof(u64))
>> +        return 0;
>> +
>> +    ret = fdt64_to_cpu(*prop);
>> +    *prop = 0;
>> +    return ret;
>> +}
>> +
>> +static __init bool regions_overlap(u32 s1, u32 e1, u32 s2, u32 e2)
>> +{
>> +    return e1 >= s2 && e2 >= s1;
>> +}
>> +
>> +static __init bool overlaps_reserved_region(const void *fdt, u32 start,
>> +                       u32 end, struct regions *regions)
>> +{
>> +    int subnode, len, i;
>> +    u64 base, size;
>> +
>> +    /* check for overlap with /memreserve/ entries */
>> +    for (i = 0; i < fdt_num_mem_rsv(fdt); i++) {
>> +        if (fdt_get_mem_rsv(fdt, i, &base, &size) < 0)
>> +            continue;
>> +        if (regions_overlap(start, end, base, base + size))
>> +            return true;
>> +    }
>> +
>> +    if (regions->reserved_mem < 0)
>> +        return false;
>> +
>> +    /* check for overlap with static reservations in /reserved-memory */
>> +    for (subnode = fdt_first_subnode(fdt, regions->reserved_mem);
>> +         subnode >= 0;
>> +         subnode = fdt_next_subnode(fdt, subnode)) {
>> +        const fdt32_t *reg;
>> +        u64 rsv_end;
>> +
>> +        len = 0;
>> +        reg = fdt_getprop(fdt, subnode, "reg", &len);
>> +        while (len >= (regions->reserved_mem_addr_cells +
>> +                   regions->reserved_mem_size_cells)) {
>> +            base = fdt32_to_cpu(reg[0]);
>> +            if (regions->reserved_mem_addr_cells == 2)
>> +                base = (base << 32) | fdt32_to_cpu(reg[1]);
>> +
>> +            reg += regions->reserved_mem_addr_cells;
>> +            len -= 4 * regions->reserved_mem_addr_cells;
>> +
>> +            size = fdt32_to_cpu(reg[0]);
>> +            if (regions->reserved_mem_size_cells == 2)
>> +                size = (size << 32) | fdt32_to_cpu(reg[1]);
>> +
>> +            reg += regions->reserved_mem_size_cells;
>> +            len -= 4 * regions->reserved_mem_size_cells;
>> +
>> +            if (base >= regions->pa_end)
>> +                continue;
>> +
>> +            rsv_end = min(base + size, (u64)U32_MAX);
>> +
>> +            if (regions_overlap(start, end, base, rsv_end))
>> +                return true;
>> +        }
>> +    }
>> +    return false;
>> +}
>> +
>> +static __init bool overlaps_region(const void *fdt, u32 start,
>> +                       u32 end, struct regions *regions)
>> +{
>> +    if (regions_overlap(start, end, regions->dtb_start,
>> +                  regions->dtb_end))
>> +        return true;
>> +
>> +    if (regions_overlap(start, end, regions->initrd_start,
>> +                  regions->initrd_end))
>> +        return true;
>> +
>> +    if (regions_overlap(start, end, regions->crash_start,
>> +                  regions->crash_end))
>> +        return true;
>> +
>> +    return overlaps_reserved_region(fdt, start, end, regions);
>> +}
>> +
>> +static void __init get_crash_kernel(void *fdt, unsigned long size,
>> +                struct regions *regions)
>> +{
>> +#ifdef CONFIG_KEXEC_CORE
>> +    unsigned long long crash_size, crash_base;
>> +    int ret;
>> +
>> +    ret = parse_crashkernel(early_command_line, size, &crash_size,
>> +            &crash_base);
>> +    if (ret != 0 || crash_size == 0)
>> +        return;
>> +    if (crash_base == 0)
>> +        crash_base = KDUMP_KERNELBASE;
>> +
>> +    regions->crash_start = (unsigned long)crash_base;
>> +    regions->crash_end = (unsigned long)(crash_base + crash_size);
>> +
>> +    DBG("crash_base=0x%llx crash_size=0x%llx\n", crash_base, 
>> crash_size);
>> +#endif
>> +}
>> +
>> +static void __init get_initrd_range(void *fdt, struct regions *regions)
>> +{
>> +    u64 start, end;
>> +    int node, len;
>> +    const __be32 *prop;
>> +
>> +    node = fdt_path_offset(fdt, "/chosen");
>> +    if (node < 0)
>> +        return;
>> +
>> +    prop = fdt_getprop(fdt, node, "linux,initrd-start", &len);
>> +    if (!prop)
>> +        return;
>> +    start = of_read_number(prop, len / 4);
>> +
>> +    prop = fdt_getprop(fdt, node, "linux,initrd-end", &len);
>> +    if (!prop)
>> +        return;
>> +    end = of_read_number(prop, len / 4);
>> +
>> +    regions->initrd_start = (unsigned long)start;
>> +    regions->initrd_end = (unsigned long)end;
>> +
>> +    DBG("initrd_start=0x%llx  initrd_end=0x%llx\n", start, end);
>> +}
>> +
>> +static __init unsigned long get_usable_offset(const void *fdt, struct 
>> regions *regions,
>> +                unsigned long start)
>> +{
>> +    unsigned long pa;
>> +    unsigned long pa_end;
>> +
>> +    for (pa = start; pa > regions->pa_start; pa -= SZ_16K) {
>> +        pa_end = pa + regions->kernel_size;
>> +        if (overlaps_region(fdt, pa, pa_end, regions))
>> +            continue;
>> +
>> +        return pa;
>> +    }
>> +    return 0;
>> +}
>> +
>> +static __init void get_cell_sizes(const void *fdt, int node, int 
>> *addr_cells,
>> +               int *size_cells)
>> +{
>> +    const int *prop;
>> +    int len;
>> +
>> +    /*
>> +     * Retrieve the #address-cells and #size-cells properties
>> +     * from the 'node', or use the default if not provided.
>> +     */
>> +    *addr_cells = *size_cells = 1;
>> +
>> +    prop = fdt_getprop(fdt, node, "#address-cells", &len);
>> +    if (len == 4)
>> +        *addr_cells = fdt32_to_cpu(*prop);
>> +    prop = fdt_getprop(fdt, node, "#size-cells", &len);
>> +    if (len == 4)
>> +        *size_cells = fdt32_to_cpu(*prop);
>> +}
>> +
>>   static unsigned long __init kaslr_choose_location(void *dt_ptr, 
>> phys_addr_t size,
>>                       unsigned long kernel_sz)
>>   {
>> -    /* return a fixed offset of 64M for now */
>> -    return 0x4000000;
>> +    unsigned long offset, random;
>> +    unsigned long ram, linear_sz;
>> +    unsigned long kaslr_offset;
>> +    u64 seed;
>> +    struct regions regions;
>> +    unsigned long index;
>> +
>> +    random = get_boot_seed(dt_ptr);
>> +
>> +    seed = get_tb() << 32;
>> +    seed ^= get_tb();
>> +    random = rotate_xor(random, &seed, sizeof(seed));
>> +
>> +    /*
>> +     * Retrieve (and wipe) the seed from the FDT
>> +     */
>> +    seed = get_kaslr_seed(dt_ptr);
>> +    if (seed)
>> +        random = rotate_xor(random, &seed, sizeof(seed));
>> +
>> +    ram = min((phys_addr_t)__max_low_memory, size);
>> +    ram = map_mem_in_cams(ram, CONFIG_LOWMEM_CAM_NUM, true);
>> +    linear_sz = min(ram, (unsigned long)SZ_512M);
>> +
>> +    /* If the linear size is smaller than 64M, do not randomize */
>> +    if (linear_sz < SZ_64M)
>> +        return 0;
>> +
>> +    memset(&regions, 0, sizeof(regions));
>> +
>> +    /* check for a reserved-memory node and record its cell sizes */
>> +    regions.reserved_mem = fdt_path_offset(dt_ptr, "/reserved-memory");
>> +    if (regions.reserved_mem >= 0)
>> +        get_cell_sizes(dt_ptr, regions.reserved_mem,
>> +                   &regions.reserved_mem_addr_cells,
>> +                   &regions.reserved_mem_size_cells);
>> +
>> +    regions.pa_start = 0;
>> +    regions.pa_end = linear_sz;
>> +    regions.dtb_start = __pa(dt_ptr);
>> +    regions.dtb_end = __pa(dt_ptr) + fdt_totalsize(dt_ptr);
>> +    regions.kernel_size = kernel_sz;
>> +
>> +    get_initrd_range(dt_ptr, &regions);
>> +    get_crash_kernel(dt_ptr, ram, &regions);
>> +
>> +    /*
>> +     * Decide which 64M we want to start
>> +     * Only use the low 8 bits of the random seed
>> +     */
>> +    index = random & 0xFF;
>> +    index %= linear_sz / SZ_64M;
>> +
>> +    /* Decide offset inside 64M */
>> +    if (index == 0) {
>> +        offset = random % (SZ_64M - round_up(kernel_sz, SZ_16K) * 2);
>> +        offset += round_up(kernel_sz, SZ_16K);
>> +        offset = round_up(offset, SZ_16K);
>> +    } else {
>> +        offset = random % (SZ_64M - kernel_sz);
>> +        offset = round_down(offset, SZ_16K);
>> +    }
>> +
>> +    while (index >= 0) {
>> +        offset = offset + index * SZ_64M;
>> +        kaslr_offset = get_usable_offset(dt_ptr, &regions, offset);
>> +        if (kaslr_offset)
>> +            break;
>> +        index--;
>> +    }
>> +
>> +    /* Did not find any usable region? Give up randomization */
>> +    if (index < 0)
>> +        kaslr_offset = 0;
>> +
>> +    return kaslr_offset;
>>   }
>>   /*
>> @@ -58,6 +387,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, 
>> phys_addr_t size)
>>       kernel_sz = (unsigned long)_end - KERNELBASE;
>> +    kaslr_get_cmdline(dt_ptr);
>> +
>>       offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
>>       if (offset == 0)
>>
> 
> .
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
  2019-07-29 11:38   ` Christophe Leroy
@ 2019-07-29 14:04     ` Jason Yan
  0 siblings, 0 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-29 14:04 UTC (permalink / raw)
  To: Christophe Leroy, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang



On 2019/7/29 19:38, Christophe Leroy wrote:
> 
> 
> On 17/07/2019 at 10:06, Jason Yan wrote:
>> One may want to disable kaslr at boot time, so provide a cmdline parameter
>> 'nokaslr' to support this.
>>
>> Signed-off-by: Jason Yan <yanaijie@huawei.com>
>> Cc: Diana Craciun <diana.craciun@nxp.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Nicholas Piggin <npiggin@gmail.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> ---
>>   arch/powerpc/kernel/kaslr_booke.c | 14 ++++++++++++++
>>   1 file changed, 14 insertions(+)
>>
>> diff --git a/arch/powerpc/kernel/kaslr_booke.c 
>> b/arch/powerpc/kernel/kaslr_booke.c
>> index 00339c05879f..e65a5d9d2ff1 100644
>> --- a/arch/powerpc/kernel/kaslr_booke.c
>> +++ b/arch/powerpc/kernel/kaslr_booke.c
>> @@ -373,6 +373,18 @@ static unsigned long __init 
>> kaslr_choose_location(void *dt_ptr, phys_addr_t size
>>       return kaslr_offset;
>>   }
>> +static inline __init bool kaslr_disabled(void)
>> +{
>> +    char *str;
>> +
>> +    str = strstr(early_command_line, "nokaslr");
> 
> Why use early_command_line instead of boot_command_line?
> 

Will switch to boot_command_line.

> 
>> +    if ((str == early_command_line) ||
>> +        (str > early_command_line && *(str - 1) == ' '))
> 
> Is that stuff really needed?
> 
> Why not just:
> 
> return strstr(early_command_line, "nokaslr") != NULL;
> 

This code is derived from other arches such as arm64/mips. It's trying to
make sure that 'nokaslr' is a separate word and not part of another word
such as 'abcnokaslr'.
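
For illustration, an untested sketch of how the helper might look after
switching to boot_command_line, keeping the word-boundary check (like the
arm64/mips variants, it only guards the left-hand boundary of the word):

static inline __init bool kaslr_disabled(void)
{
	char *str;

	/* Match "nokaslr" only at the start of the command line or
	 * right after a space, so that "abcnokaslr" does not match. */
	str = strstr(boot_command_line, "nokaslr");

	return str && (str == boot_command_line || *(str - 1) == ' ');
}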

>> +        return true;
>> +
>> +    return false;
>> +}
> 
> 
>> +
>>   /*
>>    * To see if we need to relocate the kernel to a random offset
>>    * void *dt_ptr - address of the device tree
>> @@ -388,6 +400,8 @@ notrace void __init kaslr_early_init(void *dt_ptr, 
>> phys_addr_t size)
>>       kernel_sz = (unsigned long)_end - KERNELBASE;
>>       kaslr_get_cmdline(dt_ptr);
>> +    if (kaslr_disabled())
>> +        return;
>>       offset = kaslr_choose_location(dt_ptr, size, kernel_sz);
>>
> 
> Christophe
> 
> .
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
  2019-07-29 11:43   ` Christophe Leroy
@ 2019-07-29 14:08     ` Jason Yan
  0 siblings, 0 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-29 14:08 UTC (permalink / raw)
  To: Christophe Leroy, mpe, linuxppc-dev, diana.craciun, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang


On 2019/7/29 19:43, Christophe Leroy wrote:
> 
> 
> On 17/07/2019 at 10:06, Jason Yan wrote:
>> When kaslr is enabled, the kernel offset is different for every boot.
>> This makes the kernel more difficult to debug. Dump out the kernel
>> offset on panic so that we can easily debug the kernel.
>>
>> Signed-off-by: Jason Yan <yanaijie@huawei.com>
>> Cc: Diana Craciun <diana.craciun@nxp.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Cc: Christophe Leroy <christophe.leroy@c-s.fr>
>> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
>> Cc: Paul Mackerras <paulus@samba.org>
>> Cc: Nicholas Piggin <npiggin@gmail.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> ---
>>   arch/powerpc/include/asm/page.h     |  5 +++++
>>   arch/powerpc/kernel/machine_kexec.c |  1 +
>>   arch/powerpc/kernel/setup-common.c  | 23 +++++++++++++++++++++++
>>   3 files changed, 29 insertions(+)
>>
>> diff --git a/arch/powerpc/include/asm/page.h 
>> b/arch/powerpc/include/asm/page.h
>> index 60a68d3a54b1..cd3ac530e58d 100644
>> --- a/arch/powerpc/include/asm/page.h
>> +++ b/arch/powerpc/include/asm/page.h
>> @@ -317,6 +317,11 @@ struct vm_area_struct;
>>   extern unsigned long kimage_vaddr;
>> +static inline unsigned long kaslr_offset(void)
>> +{
>> +    return kimage_vaddr - KERNELBASE;
>> +}
>> +
>>   #include <asm-generic/memory_model.h>
>>   #endif /* __ASSEMBLY__ */
>>   #include <asm/slice.h>
>> diff --git a/arch/powerpc/kernel/machine_kexec.c 
>> b/arch/powerpc/kernel/machine_kexec.c
>> index c4ed328a7b96..078fe3d76feb 100644
>> --- a/arch/powerpc/kernel/machine_kexec.c
>> +++ b/arch/powerpc/kernel/machine_kexec.c
>> @@ -86,6 +86,7 @@ void arch_crash_save_vmcoreinfo(void)
>>       VMCOREINFO_STRUCT_SIZE(mmu_psize_def);
>>       VMCOREINFO_OFFSET(mmu_psize_def, shift);
>>   #endif
>> +    vmcoreinfo_append_str("KERNELOFFSET=%lx\n", kaslr_offset());
>>   }
>>   /*
>> diff --git a/arch/powerpc/kernel/setup-common.c 
>> b/arch/powerpc/kernel/setup-common.c
>> index 1f8db666468d..49e540c0adeb 100644
>> --- a/arch/powerpc/kernel/setup-common.c
>> +++ b/arch/powerpc/kernel/setup-common.c
>> @@ -715,12 +715,35 @@ static struct notifier_block ppc_panic_block = {
>>       .priority = INT_MIN /* may not return; must be done last */
>>   };
>> +/*
>> + * Dump out kernel offset information on panic.
>> + */
>> +static int dump_kernel_offset(struct notifier_block *self, unsigned 
>> long v,
>> +                  void *p)
>> +{
>> +    const unsigned long offset = kaslr_offset();
>> +
>> +    if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && offset > 0)
>> +        pr_emerg("Kernel Offset: 0x%lx from 0x%lx\n",
>> +             offset, KERNELBASE);
>> +    else
>> +        pr_emerg("Kernel Offset: disabled\n");
> 
> Do we really need that else branch?
> 
> Why not just make the below atomic_notifier_chain_register()
> conditional on IS_ENABLED(CONFIG_RANDOMIZE_BASE) && offset > 0,
> and not print anything otherwise?
> 

I was trying to keep the same fashion as x86/arm64. But I agree with you
that it's simpler to not print anything if not randomized.
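
Something like the following, then (an untested sketch of the suggestion,
registering the notifier only when the base was actually randomized):

void __init setup_panic(void)
{
	/* Only report the offset if the kernel was really randomized */
	if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && kaslr_offset() > 0)
		atomic_notifier_chain_register(&panic_notifier_list,
					       &kernel_offset_notifier);

	/* PPC64 always does a hard irq disable in its panic handler */
	if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic)
		return;
	atomic_notifier_chain_register(&panic_notifier_list,
				       &ppc_panic_block);
}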

> Christophe
> 
>> +
>> +    return 0;
>> +}
>> +
>> +static struct notifier_block kernel_offset_notifier = {
>> +    .notifier_call = dump_kernel_offset
>> +};
>> +
>>   void __init setup_panic(void)
>>   {
>>       /* PPC64 always does a hard irq disable in its panic handler */
>>       if (!IS_ENABLED(CONFIG_PPC64) && !ppc_md.panic)
>>           return;
>>       atomic_notifier_chain_register(&panic_notifier_list, 
>> &ppc_panic_block);
>> +    atomic_notifier_chain_register(&panic_notifier_list,
>> +                       &kernel_offset_notifier);
>>   }
>>   #ifdef CONFIG_CHECK_CACHE_COHERENCY
>>
> 
> .
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32
  2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
                   ` (10 preceding siblings ...)
  2019-07-25  7:16 ` [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
@ 2019-07-29 14:30 ` Diana Madalina Craciun
  11 siblings, 0 replies; 37+ messages in thread
From: Diana Madalina Craciun @ 2019-07-29 14:30 UTC (permalink / raw)
  To: Jason Yan, mpe, linuxppc-dev, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening
  Cc: linux-kernel, wangkefeng.wang, yebin10, thunder.leizhen,
	jingxiangfeng, fanchengyang

Reviewed-by: Diana Craciun <diana.craciun@nxp.com>
Tested-by: Diana Craciun <diana.craciun@nxp.com>


On 7/17/2019 10:49 AM, Jason Yan wrote:
> This series implements KASLR for powerpc/fsl_booke/32, as a security
> feature that deters exploit attempts relying on knowledge of the location
> of kernel internals.
>
> Since CONFIG_RELOCATABLE is already supported, what we need to do is
> map or copy the kernel to a proper place and relocate. Freescale Book-E
> parts expect lowmem to be mapped by fixed TLB entries (TLB1). The TLB1
> entries are not suitable for mapping the kernel directly in a randomized
> region, so we chose to copy the kernel to a proper place and restart to
> relocate.
>
> Entropy is derived from the banner and timer base, which will change on
> every build and boot. This is not so safe by itself, so additionally the
> bootloader may pass entropy via the /chosen/kaslr-seed node in the device tree.
>
> We will use the first 512M of the low memory to randomize the kernel
> image. The memory will be split into 64M zones. We will use the lower 8
> bits of the entropy to decide the index of the 64M zone. Then we choose a
> 16K-aligned offset inside the 64M zone to put the kernel in.
>
>     KERNELBASE
>
>         |-->   64M   <--|
>         |               |
>         +---------------+    +----------------+---------------+
>         |               |....|    |kernel|    |               |
>         +---------------+    +----------------+---------------+
>         |                         |
>         |----->   offset    <-----|
>
>                               kimage_vaddr
>
> We also check if we will overlap with some areas like the dtb area, the
> initrd area or the crashkernel area. If we cannot find a proper area,
> kaslr will be disabled and the kernel will boot from its original location.
>
> Jason Yan (10):
>   powerpc: unify definition of M_IF_NEEDED
>   powerpc: move memstart_addr and kernstart_addr to init-common.c
>   powerpc: introduce kimage_vaddr to store the kernel base
>   powerpc/fsl_booke/32: introduce create_tlb_entry() helper
>   powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
>   powerpc/fsl_booke/32: implement KASLR infrastructure
>   powerpc/fsl_booke/32: randomize the kernel image offset
>   powerpc/fsl_booke/kaslr: clear the original kernel if randomized
>   powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
>   powerpc/fsl_booke/kaslr: dump out kernel offset information on panic
>
>  arch/powerpc/Kconfig                          |  11 +
>  arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
>  arch/powerpc/include/asm/page.h               |   7 +
>  arch/powerpc/kernel/Makefile                  |   1 +
>  arch/powerpc/kernel/early_32.c                |   2 +-
>  arch/powerpc/kernel/exceptions-64e.S          |  10 -
>  arch/powerpc/kernel/fsl_booke_entry_mapping.S |  23 +-
>  arch/powerpc/kernel/head_fsl_booke.S          |  61 ++-
>  arch/powerpc/kernel/kaslr_booke.c             | 439 ++++++++++++++++++
>  arch/powerpc/kernel/machine_kexec.c           |   1 +
>  arch/powerpc/kernel/misc_64.S                 |   5 -
>  arch/powerpc/kernel/setup-common.c            |  23 +
>  arch/powerpc/mm/init-common.c                 |   7 +
>  arch/powerpc/mm/init_32.c                     |   5 -
>  arch/powerpc/mm/init_64.c                     |   5 -
>  arch/powerpc/mm/mmu_decl.h                    |  10 +
>  arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
>  17 files changed, 580 insertions(+), 48 deletions(-)
>  create mode 100644 arch/powerpc/kernel/kaslr_booke.c
>


^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c
  2019-07-17  8:06 ` [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c Jason Yan
  2019-07-29 11:00   ` Christophe Leroy
@ 2019-07-29 14:31   ` Christoph Hellwig
  2019-07-30  0:47     ` Jason Yan
  1 sibling, 1 reply; 37+ messages in thread
From: Christoph Hellwig @ 2019-07-29 14:31 UTC (permalink / raw)
  To: Jason Yan
  Cc: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening, wangkefeng.wang,
	linux-kernel, jingxiangfeng, thunder.leizhen, fanchengyang,
	yebin10

I think you need to keep the more restrictive EXPORT_SYMBOL_GPL from
the 64-bit code to keep the intention of all authors intact.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 03/10] powerpc: introduce kimage_vaddr to store the kernel base
  2019-07-17  8:06 ` [RFC PATCH 03/10] powerpc: introduce kimage_vaddr to store the kernel base Jason Yan
  2019-07-29 11:00   ` Christophe Leroy
@ 2019-07-29 14:32   ` Christoph Hellwig
  1 sibling, 0 replies; 37+ messages in thread
From: Christoph Hellwig @ 2019-07-29 14:32 UTC (permalink / raw)
  To: Jason Yan
  Cc: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening, wangkefeng.wang,
	linux-kernel, jingxiangfeng, thunder.leizhen, fanchengyang,
	yebin10

On Wed, Jul 17, 2019 at 04:06:14PM +0800, Jason Yan wrote:
> Now the kernel base is a fixed value - KERNELBASE. To support KASLR, we
> need a variable to store the kernel base.

This should probably be merged into the patch actually using it.

^ permalink raw reply	[flat|nested] 37+ messages in thread

* Re: [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c
  2019-07-29 14:31   ` Christoph Hellwig
@ 2019-07-30  0:47     ` Jason Yan
  0 siblings, 0 replies; 37+ messages in thread
From: Jason Yan @ 2019-07-30  0:47 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: mpe, linuxppc-dev, diana.craciun, christophe.leroy, benh, paulus,
	npiggin, keescook, kernel-hardening, wangkefeng.wang,
	linux-kernel, jingxiangfeng, thunder.leizhen, fanchengyang,
	yebin10



On 2019/7/29 22:31, Christoph Hellwig wrote:
> I think you need to keep the more restrictive EXPORT_SYMBOL_GPL from
> the 64-bit code to keep the intention of all authors intact.
> 

Oh yes, I will fix in v2. Thanks.
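
For reference, a sketch of what keeping the GPL-only export class might
look like when the definitions move (illustrative only, not the actual v2):

/* arch/powerpc/mm/init-common.c */
phys_addr_t memstart_addr = (phys_addr_t)~0ull;
EXPORT_SYMBOL_GPL(memstart_addr);
phys_addr_t kernstart_addr;
EXPORT_SYMBOL_GPL(kernstart_addr);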

> .
> 


^ permalink raw reply	[flat|nested] 37+ messages in thread

end of thread, other threads:[~2019-07-30  0:47 UTC | newest]

Thread overview: 37+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-07-17  8:06 [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
2019-07-17  8:06 ` [RFC PATCH 01/10] powerpc: unify definition of M_IF_NEEDED Jason Yan
2019-07-29 10:59   ` Christophe Leroy
2019-07-17  8:06 ` [RFC PATCH 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c Jason Yan
2019-07-29 11:00   ` Christophe Leroy
2019-07-29 14:31   ` Christoph Hellwig
2019-07-30  0:47     ` Jason Yan
2019-07-17  8:06 ` [RFC PATCH 03/10] powerpc: introduce kimage_vaddr to store the kernel base Jason Yan
2019-07-29 11:00   ` Christophe Leroy
2019-07-29 14:32   ` Christoph Hellwig
2019-07-17  8:06 ` [RFC PATCH 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper Jason Yan
2019-07-29 11:05   ` Christophe Leroy
2019-07-29 13:26     ` Jason Yan
2019-07-17  8:06 ` [RFC PATCH 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper Jason Yan
2019-07-29 11:08   ` Christophe Leroy
2019-07-29 13:35     ` Jason Yan
2019-07-17  8:06 ` [RFC PATCH 06/10] powerpc/fsl_booke/32: implement KASLR infrastructure Jason Yan
2019-07-29 11:16   ` Christophe Leroy
2019-07-17  8:06 ` [RFC PATCH 07/10] powerpc/fsl_booke/32: randomize the kernel image offset Jason Yan
2019-07-29 11:33   ` Christophe Leroy
2019-07-29 13:53     ` Jason Yan
2019-07-17  8:06 ` [RFC PATCH 08/10] powerpc/fsl_booke/kaslr: clear the original kernel if randomized Jason Yan
2019-07-29 11:19   ` Christophe Leroy
2019-07-29 13:43     ` Jason Yan
2019-07-17  8:06 ` [RFC PATCH 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter Jason Yan
2019-07-29 11:38   ` Christophe Leroy
2019-07-29 14:04     ` Jason Yan
2019-07-17  8:06 ` [RFC PATCH 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic Jason Yan
2019-07-29 11:43   ` Christophe Leroy
2019-07-29 14:08     ` Jason Yan
2019-07-25  7:16 ` [RFC PATCH 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan
2019-07-25 19:58   ` Kees Cook
2019-07-26  7:20     ` Jason Yan
2019-07-26 16:15       ` Kees Cook
2019-07-26  7:04   ` Diana Madalina Craciun
2019-07-26  7:26     ` Jason Yan
2019-07-29 14:30 ` Diana Madalina Craciun
