From: Jason Yan <yanaijie@huawei.com>
To: mpe@ellerman.id.au, linuxppc-dev@lists.ozlabs.org, diana.craciun@nxp.com, christophe.leroy@c-s.fr, benh@kernel.crashing.org, paulus@samba.org, npiggin@gmail.com, keescook@chromium.org, kernel-hardening@lists.openwall.com
Cc: linux-kernel@vger.kernel.org, wangkefeng.wang@huawei.com, yebin10@huawei.com, thunder.leizhen@huawei.com, jingxiangfeng@huawei.com, fanchengyang@huawei.com, zhaohongjiang@huawei.com, Jason Yan <yanaijie@huawei.com>
Subject: [PATCH v3 00/10] implement KASLR for powerpc/fsl_booke/32
Date: Wed, 31 Jul 2019 17:43:08 +0800
Message-ID: <20190731094318.26538-1-yanaijie@huawei.com>

This series implements KASLR for powerpc/fsl_booke/32, as a security
feature that deters exploit attempts relying on knowledge of the
location of kernel internals.

Since CONFIG_RELOCATABLE is already supported, all we need to do is
map or copy the kernel to a proper place and relocate. Freescale
Book-E parts expect lowmem to be mapped by fixed TLB entries (TLB1).
The TLB1 entries are not suitable for mapping the kernel directly in
a randomized region, so we choose to copy the kernel to a proper
place and restart to relocate.

Entropy is derived from the banner and the timer base, which will
change on every build and boot. This is not very secure on its own,
so additionally the bootloader may pass entropy via the
/chosen/kaslr-seed node in the device tree.

We will use the first 512M of low memory to randomize the kernel
image. The memory will be split into 64M zones. We will use the lower
8 bits of the entropy to decide the index of the 64M zone. Then we
choose a 16K-aligned offset inside the 64M zone to put the kernel in.
    KERNELBASE

        |-->   64M   <--|
        |               |
        +---------------+    +----------------+---------------+
        |               |....|    |kernel|    |               |
        +---------------+    +----------------+---------------+
        |                         |
        |----->   offset    <-----|

                              kimage_vaddr

We also check whether we would overlap with some areas like the dtb
area, the initrd area or the crashkernel area. If we cannot find a
proper area, KASLR will be disabled and we boot from the original
kernel.

Changes since v2:
 - Remove unnecessary #ifdef
 - Use SZ_64M instead of 0x4000000
 - Call early_init_dt_scan_chosen() to init boot_command_line
 - Rename kaslr_second_init() to kaslr_late_init()

Changes since v1:
 - Remove some useless 'extern' keywords.
 - Replace EXPORT_SYMBOL with EXPORT_SYMBOL_GPL
 - Improve some assembly code
 - Use memzero_explicit instead of memset
 - Use boot_command_line and remove early_command_line
 - Do not print the kaslr offset if kaslr is disabled

Jason Yan (10):
  powerpc: unify definition of M_IF_NEEDED
  powerpc: move memstart_addr and kernstart_addr to init-common.c
  powerpc: introduce kimage_vaddr to store the kernel base
  powerpc/fsl_booke/32: introduce create_tlb_entry() helper
  powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper
  powerpc/fsl_booke/32: implement KASLR infrastructure
  powerpc/fsl_booke/32: randomize the kernel image offset
  powerpc/fsl_booke/kaslr: clear the original kernel if randomized
  powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter
  powerpc/fsl_booke/kaslr: dump out kernel offset information on panic

 arch/powerpc/Kconfig                          |  11 +
 arch/powerpc/include/asm/nohash/mmu-book3e.h  |  10 +
 arch/powerpc/include/asm/page.h               |   7 +
 arch/powerpc/kernel/Makefile                  |   1 +
 arch/powerpc/kernel/early_32.c                |   2 +-
 arch/powerpc/kernel/exceptions-64e.S          |  10 -
 arch/powerpc/kernel/fsl_booke_entry_mapping.S |  23 +-
 arch/powerpc/kernel/head_fsl_booke.S          |  55 ++-
 arch/powerpc/kernel/kaslr_booke.c             | 427 ++++++++++++++++++
 arch/powerpc/kernel/machine_kexec.c           |   1 +
 arch/powerpc/kernel/misc_64.S                 |   5 -
 arch/powerpc/kernel/setup-common.c            |  19 +
 arch/powerpc/mm/init-common.c                 |   7 +
 arch/powerpc/mm/init_32.c                     |   5 -
 arch/powerpc/mm/init_64.c                     |   5 -
 arch/powerpc/mm/mmu_decl.h                    |  10 +
 arch/powerpc/mm/nohash/fsl_booke.c            |   8 +-
 17 files changed, 558 insertions(+), 48 deletions(-)
 create mode 100644 arch/powerpc/kernel/kaslr_booke.c

--
2.17.2
Thread overview: 37+ messages

2019-07-31  9:43 [PATCH v3 00/10] implement KASLR for powerpc/fsl_booke/32 Jason Yan [this message]
2019-07-31  9:43 ` [PATCH v3 01/10] powerpc: unify definition of M_IF_NEEDED Jason Yan
2019-07-31  9:43 ` [PATCH v3 02/10] powerpc: move memstart_addr and kernstart_addr to init-common.c Jason Yan
2019-07-31  9:43 ` [PATCH v3 03/10] powerpc: introduce kimage_vaddr to store the kernel base Jason Yan
2019-07-31  9:43 ` [PATCH v3 04/10] powerpc/fsl_booke/32: introduce create_tlb_entry() helper Jason Yan
2019-07-31  9:43 ` [PATCH v3 05/10] powerpc/fsl_booke/32: introduce reloc_kernel_entry() helper Jason Yan
2019-07-31  9:43 ` [PATCH v3 06/10] powerpc/fsl_booke/32: implement KASLR infrastructure Jason Yan
2019-08-02  8:41   ` Diana Madalina Craciun
2019-08-05  5:40     ` Jason Yan
2019-07-31  9:43 ` [PATCH v3 07/10] powerpc/fsl_booke/32: randomize the kernel image offset Jason Yan
2019-07-31  9:43 ` [PATCH v3 08/10] powerpc/fsl_booke/kaslr: clear the original kernel if randomized Jason Yan
2019-07-31  9:43 ` [PATCH v3 09/10] powerpc/fsl_booke/kaslr: support nokaslr cmdline parameter Jason Yan
2019-07-31  9:43 ` [PATCH v3 10/10] powerpc/fsl_booke/kaslr: dump out kernel offset information on panic Jason Yan
2019-08-01 14:36 ` [PATCH v3 00/10] implement KASLR for powerpc/fsl_booke/32 Diana Madalina Craciun
2019-08-02  0:48   ` Jason Yan
2019-08-02  8:48     ` Diana Madalina Craciun