From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933625AbcLIOmR (ORCPT );
	Fri, 9 Dec 2016 09:42:17 -0500
Received: from mx1.redhat.com ([209.132.183.28]:56968 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S932594AbcLIOmP (ORCPT );
	Fri, 9 Dec 2016 09:42:15 -0500
From: Baoquan He 
To: linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, hpa@zytor.com, mingo@redhat.com, x86@kernel.org,
	keescook@chromium.org, yinghai@kernel.org, bp@suse.de,
	thgarnie@google.com, kuleshovmail@gmail.com, luto@kernel.org,
	mcgrof@kernel.org, anderson@redhat.com, dyoung@redhat.com,
	xlpang@redhat.com, Baoquan He 
Subject: [PATCH v2 1/2] x86/64: Make kernel text mapping always take one whole page table in early boot code
Date: Fri, 9 Dec 2016 22:41:57 +0800
Message-Id: <1481294518-29595-2-git-send-email-bhe@redhat.com>
In-Reply-To: <1481294518-29595-1-git-send-email-bhe@redhat.com>
References: <1481294518-29595-1-git-send-email-bhe@redhat.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16
	(mx1.redhat.com [10.5.110.39]); Fri, 09 Dec 2016 14:42:15 +0000 (UTC)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

In early boot code, level2_kernel_pgt is used to map the kernel text.
Its size varies with KERNEL_IMAGE_SIZE and is fixed at compile time.
In fact we can make it always occupy all 512 entries of one whole page
table, because cleanup_highmap() will later clean up the unused
entries.

With this change, the kernel text mapping size can be decided at
runtime: 512M if KASLR is disabled, 1G if KASLR is enabled.

Signed-off-by: Baoquan He 
Acked-by: Kees Cook 
---
v1->v2:
    Fixed a typo in the patch log that Alexander found.
 arch/x86/include/asm/page_64_types.h |  3 ++-
 arch/x86/kernel/head_64.S            | 15 ++++++++-------
 arch/x86/mm/init_64.c                |  2 +-
 3 files changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 9215e05..62a20ea 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -56,8 +56,9 @@
  * are fully set up. If kernel ASLR is configured, it can extend the
  * kernel page table mapping, reducing the size of the modules area.
  */
+#define KERNEL_MAPPING_SIZE_EXT	(1024 * 1024 * 1024)
 #if defined(CONFIG_RANDOMIZE_BASE)
-#define KERNEL_IMAGE_SIZE	(1024 * 1024 * 1024)
+#define KERNEL_IMAGE_SIZE	KERNEL_MAPPING_SIZE_EXT
 #else
 #define KERNEL_IMAGE_SIZE	(512 * 1024 * 1024)
 #endif
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index b4421cc..c4b40e7c9 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -453,17 +453,18 @@ NEXT_PAGE(level3_kernel_pgt)
 
 NEXT_PAGE(level2_kernel_pgt)
 	/*
-	 * 512 MB kernel mapping. We spend a full page on this pagetable
-	 * anyway.
+	 * Kernel image size is limited to 512 MB. The kernel code+data+bss
+	 * must not be bigger than that.
 	 *
-	 * The kernel code+data+bss must not be bigger than that.
+	 * We spend a full page on this pagetable anyway, so take the whole
+	 * page here so that the kernel mapping size can be decided at runtime,
+	 * 512M if no kaslr, 1G if kaslr enabled. Later cleanup_highmap will
+	 * clean up those unused entries.
 	 *
-	 * (NOTE: at +512MB starts the module area, see MODULES_VADDR.
-	 * If you want to increase this then increase MODULES_VADDR
-	 * too.)
+	 * The module area starts after kernel mapping area.
 	 */
 	PMDS(0, __PAGE_KERNEL_LARGE_EXEC,
-		KERNEL_IMAGE_SIZE/PMD_SIZE)
+		PTRS_PER_PMD)
 
 NEXT_PAGE(level2_fixmap_pgt)
 	.fill	506,8,0
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 14b9dd7..e95b977 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -307,7 +307,7 @@ void __init init_extra_mapping_uc(unsigned long phys, unsigned long size)
 void __init cleanup_highmap(void)
 {
 	unsigned long vaddr = __START_KERNEL_map;
-	unsigned long vaddr_end = __START_KERNEL_map + KERNEL_IMAGE_SIZE;
+	unsigned long vaddr_end = __START_KERNEL_map + KERNEL_MAPPING_SIZE_EXT;
 	unsigned long end = roundup((unsigned long)_brk_end, PMD_SIZE) - 1;
 	pmd_t *pmd = level2_kernel_pgt;
-- 
2.5.5