From: Joerg Roedel
To: Thomas Gleixner, Ingo Molnar, H. Peter Anvin
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 Linus Torvalds, Andy Lutomirski, Dave Hansen, Josh Poimboeuf,
 Juergen Gross, Peter Zijlstra, Borislav Petkov, Jiri Kosina,
 Boris Ostrovsky, Brian Gerst, David Laight, Denys Vlasenko,
 Eduardo Valentin, Greg KH, Will Deacon, aliguori@amazon.com,
 daniel.gruss@iaik.tugraz.at, hughd@google.com, keescook@google.com,
 Andrea Arcangeli, Waiman Long, Pavel Machek, jroedel@suse.de,
 joro@8bytes.org
Subject: [PATCH 17/34] x86/pgtable/32: Allocate 8k page-tables when PTI is enabled
Date: Mon, 5 Mar 2018 11:25:46 +0100
Message-Id: <1520245563-8444-18-git-send-email-joro@8bytes.org>
In-Reply-To: <1520245563-8444-1-git-send-email-joro@8bytes.org>
References: <1520245563-8444-1-git-send-email-joro@8bytes.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Joerg Roedel

Allocate a kernel and a user page-table root when PTI is enabled. Also
allocate a full page per root for PAE because otherwise the bit to flip
in cr3 to switch between them would be non-constant, which creates a
lot of hassle. Keep that for a later optimization.
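[ Aside, not part of the patch: with both roots in one 8k allocation
  aligned to 2*PAGE_SIZE, the user copy sits exactly PAGE_SIZE above
  the kernel one, so switching between them only needs to toggle a
  single, constant address bit (bit 12) in cr3. A minimal sketch of
  that invariant; user_pgd_of()/kernel_pgd_of() are hypothetical
  helper names, not kernel API: ]

/* Illustrative sketch only, not part of this patch. */
#include <stdint.h>

#define PAGE_SIZE	4096UL

static inline uintptr_t user_pgd_of(uintptr_t kernel_pgd)
{
	/* The kernel root is 2*PAGE_SIZE aligned, so bit 12 is clear. */
	return kernel_pgd | PAGE_SIZE;		/* set bit 12 */
}

static inline uintptr_t kernel_pgd_of(uintptr_t user_pgd)
{
	return user_pgd & ~PAGE_SIZE;		/* clear bit 12 */
}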
Signed-off-by: Joerg Roedel
---
 arch/x86/kernel/head_32.S | 20 +++++++++++++++-----
 arch/x86/mm/pgtable.c     |  5 +++--
 2 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index c290209..1f35d60 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -512,11 +512,18 @@ ENTRY(initial_code)
 ENTRY(setup_once_ref)
 	.long setup_once
 
+#ifdef CONFIG_PAGE_TABLE_ISOLATION
+#define PGD_ALIGN	(2 * PAGE_SIZE)
+#define PTI_USER_PGD_FILL	1024
+#else
+#define PGD_ALIGN	(PAGE_SIZE)
+#define PTI_USER_PGD_FILL	0
+#endif
 /*
  * BSS section
  */
 __PAGE_ALIGNED_BSS
-	.align PAGE_SIZE
+	.align PGD_ALIGN
 #ifdef CONFIG_X86_PAE
 .globl initial_pg_pmd
 initial_pg_pmd:
@@ -526,14 +533,17 @@ initial_pg_pmd:
 initial_page_table:
 	.fill 1024,4,0
 #endif
+	.align PGD_ALIGN
 initial_pg_fixmap:
 	.fill 1024,4,0
-.globl empty_zero_page
-empty_zero_page:
-	.fill 4096,1,0
 .globl swapper_pg_dir
+	.align PGD_ALIGN
 swapper_pg_dir:
 	.fill 1024,4,0
+	.fill PTI_USER_PGD_FILL,4,0
+.globl empty_zero_page
+empty_zero_page:
+	.fill 4096,1,0
 EXPORT_SYMBOL(empty_zero_page)
 
 /*
@@ -542,7 +552,7 @@ EXPORT_SYMBOL(empty_zero_page)
 #ifdef CONFIG_X86_PAE
 __PAGE_ALIGNED_DATA
 	/* Page-aligned for the benefit of paravirt? */
-	.align PAGE_SIZE
+	.align PGD_ALIGN
 ENTRY(initial_page_table)
 	.long	pa(initial_pg_pmd+PGD_IDENT_ATTR),0	/* low identity map */
 # if KPMDS == 3
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 004abf9..a81d42e 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -338,7 +338,8 @@ static inline pgd_t *_pgd_alloc(void)
 	 * We allocate one page for pgd.
 	 */
 	if (!SHARED_KERNEL_PMD)
-		return (pgd_t *)__get_free_page(PGALLOC_GFP);
+		return (pgd_t *)__get_free_pages(PGALLOC_GFP,
+						 PGD_ALLOCATION_ORDER);
 
 	/*
 	 * Now PAE kernel is not running as a Xen domain. We can allocate
@@ -350,7 +351,7 @@ static inline pgd_t *_pgd_alloc(void)
 static inline void _pgd_free(pgd_t *pgd)
 {
 	if (!SHARED_KERNEL_PMD)
-		free_page((unsigned long)pgd);
+		free_pages((unsigned long)pgd, PGD_ALLOCATION_ORDER);
 	else
 		kmem_cache_free(pgd_cache, pgd);
 }
-- 
2.7.4
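[ For reference: PGD_ALLOCATION_ORDER is not defined in this diff; it
  comes from arch/x86/include/asm/pgalloc.h. A sketch of its likely
  shape, not the exact upstream hunk: with PTI enabled the pgd
  allocation is order 1 (two pages, 8k), otherwise order 0 (one page,
  4k): ]

#ifdef CONFIG_PAGE_TABLE_ISOLATION
#define PGD_ALLOCATION_ORDER	1
#else
#define PGD_ALLOCATION_ORDER	0
#endif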