From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752643AbdKHTrI (ORCPT );
	Wed, 8 Nov 2017 14:47:08 -0500
Received: from mga03.intel.com ([134.134.136.65]:21189 "EHLO mga03.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1752524AbdKHTrA (ORCPT );
	Wed, 8 Nov 2017 14:47:00 -0500
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.44,365,1505804400"; d="scan'208";a="1241646026"
Subject: [PATCH 01/30] x86, mm: do not set _PAGE_USER for init_mm page tables
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, dave.hansen@linux.intel.com, tglx@linutronix.de,
	moritz.lipp@iaik.tugraz.at, daniel.gruss@iaik.tugraz.at,
	michael.schwarz@iaik.tugraz.at, richard.fellner@student.tugraz.at,
	luto@kernel.org, torvalds@linux-foundation.org, keescook@google.com,
	hughd@google.com, x86@kernel.org
From: Dave Hansen <dave.hansen@linux.intel.com>
Date: Wed, 08 Nov 2017 11:46:47 -0800
References: <20171108194646.907A1942@viggo.jf.intel.com>
In-Reply-To: <20171108194646.907A1942@viggo.jf.intel.com>
Message-Id: <20171108194647.ABC9BC79@viggo.jf.intel.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Dave Hansen <dave.hansen@linux.intel.com>

init_mm is for kernel-exclusive use.  If someone is allocating page
tables for it, do not set _PAGE_USER on them.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Moritz Lipp <moritz.lipp@iaik.tugraz.at>
Cc: Daniel Gruss <daniel.gruss@iaik.tugraz.at>
Cc: Michael Schwarz <michael.schwarz@iaik.tugraz.at>
Cc: Richard Fellner <richard.fellner@student.tugraz.at>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Kees Cook <keescook@google.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: x86@kernel.org
---
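
A quick sanity check on what the new helper actually changes, kept here
below the cut line: _PAGE_TABLE and _KERNPG_TABLE differ only in
_PAGE_USER, so returning _KERNPG_TABLE for init_mm withholds exactly
that one bit.  The stand-alone sketch below is not kernel code; the bit
values are the architectural x86 PTE bits written out by hand, and the
real macros in arch/x86/include/asm/pgtable_types.h may additionally
carry the SME encryption mask, which is identical in both flavors:

#include <stdio.h>

/* Architectural x86 PTE bits 0, 1, 2, 5 and 6, spelled out by hand. */
#define _PAGE_PRESENT	0x001UL
#define _PAGE_RW	0x002UL
#define _PAGE_USER	0x004UL
#define _PAGE_ACCESSED	0x020UL
#define _PAGE_DIRTY	0x040UL

/* Same relationship as the kernel definitions: user vs. kernel table. */
#define _KERNPG_TABLE	(_PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED | _PAGE_DIRTY)
#define _PAGE_TABLE	(_KERNPG_TABLE | _PAGE_USER)

int main(void)
{
	printf("_PAGE_TABLE   = %#lx\n", _PAGE_TABLE);
	printf("_KERNPG_TABLE = %#lx\n", _KERNPG_TABLE);
	printf("difference    = %#lx (_PAGE_USER)\n",
	       _PAGE_TABLE ^ _KERNPG_TABLE);
	return 0;
}

Built with any C compiler, this prints 0x67 vs. 0x63; the difference is
0x4 (_PAGE_USER, PTE bit 2), which is exactly the bit the helper keeps
out of init_mm page tables.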

 b/arch/x86/include/asm/pgalloc.h |   33 ++++++++++++++++++++++++++++-----
 1 file changed, 28 insertions(+), 5 deletions(-)

diff -puN arch/x86/include/asm/pgalloc.h~kaiser-prep-clear-_PAGE_USER-for-init_mm arch/x86/include/asm/pgalloc.h
--- a/arch/x86/include/asm/pgalloc.h~kaiser-prep-clear-_PAGE_USER-for-init_mm	2017-11-08 10:45:25.928681403 -0800
+++ b/arch/x86/include/asm/pgalloc.h	2017-11-08 10:45:25.931681403 -0800
@@ -61,20 +61,37 @@ static inline void __pte_free_tlb(struct
 	___pte_free_tlb(tlb, pte);
 }
 
+/*
+ * init_mm is for kernel-exclusive use.  Any page tables that
+ * are set up for it should not be usable by userspace.
+ *
+ * This also *signals* to code (like KAISER) that this page table
+ * entry is for kernel-exclusive use.
+ */
+static inline pteval_t mm_pgtable_flags(struct mm_struct *mm)
+{
+	if (!mm || (mm == &init_mm))
+		return _KERNPG_TABLE;
+	return _PAGE_TABLE;
+}
+
 static inline void pmd_populate_kernel(struct mm_struct *mm,
 				       pmd_t *pmd, pte_t *pte)
 {
+	pteval_t pgtable_flags = mm_pgtable_flags(mm);
+
 	paravirt_alloc_pte(mm, __pa(pte) >> PAGE_SHIFT);
-	set_pmd(pmd, __pmd(__pa(pte) | _PAGE_TABLE));
+	set_pmd(pmd, __pmd(__pa(pte) | pgtable_flags));
 }
 
 static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 				struct page *pte)
 {
+	pteval_t pgtable_flags = mm_pgtable_flags(mm);
 	unsigned long pfn = page_to_pfn(pte);
 
 	paravirt_alloc_pte(mm, pfn);
-	set_pmd(pmd, __pmd(((pteval_t)pfn << PAGE_SHIFT) | _PAGE_TABLE));
+	set_pmd(pmd, __pmd(((pteval_t)pfn << PAGE_SHIFT) | pgtable_flags));
 }
 
 #define pmd_pgtable(pmd) pmd_page(pmd)
@@ -117,16 +134,20 @@ extern void pud_populate(struct mm_struc
 #else	/* !CONFIG_X86_PAE */
 static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
 {
+	pteval_t pgtable_flags = mm_pgtable_flags(mm);
+
 	paravirt_alloc_pmd(mm, __pa(pmd) >> PAGE_SHIFT);
-	set_pud(pud, __pud(_PAGE_TABLE | __pa(pmd)));
+	set_pud(pud, __pud(__pa(pmd) | pgtable_flags));
 }
 #endif	/* CONFIG_X86_PAE */
 
 #if CONFIG_PGTABLE_LEVELS > 3
 static inline void p4d_populate(struct mm_struct *mm, p4d_t *p4d, pud_t *pud)
 {
+	pteval_t pgtable_flags = mm_pgtable_flags(mm);
+
 	paravirt_alloc_pud(mm, __pa(pud) >> PAGE_SHIFT);
-	set_p4d(p4d, __p4d(_PAGE_TABLE | __pa(pud)));
+	set_p4d(p4d, __p4d(__pa(pud) | pgtable_flags));
 }
 
 static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
@@ -155,8 +176,10 @@ static inline void __pud_free_tlb(struct
 #if CONFIG_PGTABLE_LEVELS > 4
 static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, p4d_t *p4d)
 {
+	pteval_t pgtable_flags = mm_pgtable_flags(mm);
+
 	paravirt_alloc_p4d(mm, __pa(p4d) >> PAGE_SHIFT);
-	set_pgd(pgd, __pgd(_PAGE_TABLE | __pa(p4d)));
+	set_pgd(pgd, __pgd(__pa(p4d) | pgtable_flags));
 }
 
 static inline p4d_t *p4d_alloc_one(struct mm_struct *mm, unsigned long addr)
_