From: Ingo Molnar
To: linux-kernel@vger.kernel.org
Cc: Dave Hansen, Andy Lutomirski, Thomas Gleixner, H. Peter Anvin,
    Peter Zijlstra, Borislav Petkov, Linus Torvalds
Subject: [PATCH 06/24] x86/mm/kaiser: Make sure the static PGDs are 8k in size
Date: Mon, 27 Nov 2017 11:49:05 +0100
Message-Id: <20171127104923.14378-7-mingo@kernel.org>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20171127104923.14378-1-mingo@kernel.org>
References: <20171127104923.14378-1-mingo@kernel.org>
List-ID: <linux-kernel.vger.kernel.org>

From: Dave Hansen

A few PGDs come out of the kernel binary instead of being allocated
dynamically.  Before this patch, they are all 8k-aligned, but they
must also be 8k in *size*.

The original KAISER patch did not do this.  It probably just lucked out
that it did not trample over data after the last PGD.

Signed-off-by: Dave Hansen
Signed-off-by: Thomas Gleixner
Cc: Andy Lutomirski
Cc: Borislav Petkov
Cc: Brian Gerst
Cc: Denys Vlasenko
Cc: H. Peter Anvin
Cc: Josh Poimboeuf
Cc: Linus Torvalds
Cc: Peter Zijlstra
Cc: Rik van Riel
Cc: daniel.gruss@iaik.tugraz.at
Cc: hughd@google.com
Cc: keescook@google.com
Cc: linux-mm@kvack.org
Cc: michael.schwarz@iaik.tugraz.at
Cc: moritz.lipp@iaik.tugraz.at
Cc: richard.fellner@student.tugraz.at
Link: https://lkml.kernel.org/r/20171123003450.76492124@viggo.jf.intel.com
Signed-off-by: Ingo Molnar
---
 arch/x86/kernel/head_64.S | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 43d1cffd1fcf..58087ab1782e 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -342,11 +342,24 @@ GLOBAL(early_recursion_flag)
 GLOBAL(name)
 
 #ifdef CONFIG_KAISER
+/*
+ * Each PGD needs to be 8k long and 8k aligned.  We do not
+ * ever go out to userspace with these, so we do not
+ * strictly *need* the second page, but this allows us to
+ * have a single set_pgd() implementation that does not
+ * need to worry about whether it has 4k or 8k to work
+ * with.
+ *
+ * This ensures PGDs are 8k long:
+ */
+#define KAISER_USER_PGD_FILL	512
+/* This ensures they are 8k-aligned: */
 #define NEXT_PGD_PAGE(name) \
 	.balign 2 * PAGE_SIZE; \
 GLOBAL(name)
 #else
 #define NEXT_PGD_PAGE(name) NEXT_PAGE(name)
+#define KAISER_USER_PGD_FILL	0
 #endif
 
 /* Automate the creation of 1 to 1 mapping pmd entries */
@@ -365,6 +378,7 @@ NEXT_PGD_PAGE(early_top_pgt)
 #else
 	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
 #endif
+	.fill	KAISER_USER_PGD_FILL,8,0
 
 NEXT_PAGE(early_dynamic_pgts)
 	.fill	512*EARLY_DYNAMIC_PAGE_TABLES,8,0
@@ -379,6 +393,7 @@ NEXT_PGD_PAGE(init_top_pgt)
 	.org	init_top_pgt + PGD_START_KERNEL*8, 0
 	/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
 	.quad	level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
+	.fill	KAISER_USER_PGD_FILL,8,0
 
 NEXT_PAGE(level3_ident_pgt)
 	.quad	level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
@@ -391,6 +406,7 @@ NEXT_PAGE(level2_ident_pgt)
 #else
 NEXT_PGD_PAGE(init_top_pgt)
 	.fill	512,8,0
+	.fill	KAISER_USER_PGD_FILL,8,0
 #endif
 
 #ifdef CONFIG_X86_5LEVEL
-- 
2.14.1
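
To illustrate the resulting layout, here is a minimal standalone sketch
(not part of the patch; the example_pgt label is made up and PAGE_SIZE is
assumed to be 4096) of what a static PGD ends up looking like with
CONFIG_KAISER=y, i.e. how the added fill grows the object from 4k to the
full 8k that the 8k alignment already implied:

	/* 8k alignment, as NEXT_PGD_PAGE() gets from ".balign 2 * PAGE_SIZE" */
	.balign	2 * 4096
example_pgt:				/* hypothetical label, for illustration only */
	.fill	512, 8, 0		/* kernel half: 512 entries * 8 bytes = 4k */
	.fill	512, 8, 0		/* KAISER_USER_PGD_FILL quadwords: user half, another 4k */
	/* total: 8k in size, matching the 8k alignment above */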