From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
 Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
 Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
 Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov,
 Pavel Machek, Peter Zijlstra, Randy Dunlap, "Ravi V.
 Shankar", Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu,
 Haitao Huang
Cc: Yu-cheng Yu
Subject: [PATCH v21 08/26] x86/mm: Introduce _PAGE_COW
Date: Wed, 17 Feb 2021 14:27:12 -0800
Message-Id: <20210217222730.15819-9-yu-cheng.yu@intel.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20210217222730.15819-1-yu-cheng.yu@intel.com>
References: <20210217222730.15819-1-yu-cheng.yu@intel.com>
MIME-Version: 1.0

There is essentially no room left in the x86 hardware PTEs on some OSes
(not Linux).  That left the hardware architects looking for a way to
represent a new memory type (shadow stack) within the existing bits.
They chose to repurpose a lightly-used state: Write=0, Dirty=1.

The reason this state is lightly used is that Dirty=1 is normally set by
hardware, and hardware will not normally set Dirty=1 on a Write=0 PTE.
Software must normally be involved to create one of these PTEs, so
software can simply opt to not create them.

In places where Linux normally creates Write=0, Dirty=1, it can use the
software-defined _PAGE_COW in place of the hardware _PAGE_DIRTY.  In
other words, whenever Linux needs to create Write=0, Dirty=1, it instead
creates Write=0, Cow=1, except for shadow stack, which is Write=0,
Dirty=1.  This clearly separates shadow stack from other data, and
results in the following:

(a) A modified, copy-on-write (COW) page: (Write=0, Cow=1)
(b) A R/O page that has been COW'ed: (Write=0, Cow=1)
    The user page is in a R/O VMA, and get_user_pages() needs a writable
    copy.  The page fault handler creates a copy of the page and sets
    the new copy's PTE as Write=0 and Cow=1.
(c) A shadow stack PTE: (Write=0, Dirty=1)
(d) A shared shadow stack PTE: (Write=0, Cow=1)
    When a shadow stack page is being shared among processes (this
    happens at fork()), its PTE is made Dirty=0, so the next shadow
    stack access causes a fault, and the page is duplicated and Dirty=1
    is set again.  This is the COW equivalent for shadow stack pages,
    even though it's copy-on-access rather than copy-on-write.
(e) A page where the processor observed a Write=1 PTE, started a write,
    set Dirty=1, but then observed a Write=0 PTE.  That's possible
    today, but will not happen on processors that support shadow stack.

Define _PAGE_COW, update the pte_*() helpers, and apply the same changes
to pmd and pud.

After this, there are six free bits left in the 64-bit PTE, and no more
free bits in the 32-bit PTE (except for PAE).  Shadow stack is not
implemented for the 32-bit kernel.

Signed-off-by: Yu-cheng Yu
---
 arch/x86/include/asm/pgtable.h       | 185 ++++++++++++++++++++++++---
 arch/x86/include/asm/pgtable_types.h |  42 +++++-
 2 files changed, 206 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index a02c67291cfc..1a6c0784af0a 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -121,11 +121,21 @@ extern pmdval_t early_pmd_flags;
 /*
  * The following only work if pte_present() is true.
  * Undefined behaviour if not..
  */
-static inline int pte_dirty(pte_t pte)
+static inline bool pte_dirty(pte_t pte)
 {
-	return pte_flags(pte) & _PAGE_DIRTY;
+	/*
+	 * A dirty PTE has Dirty=1 or Cow=1.
+	 */
+	return pte_flags(pte) & _PAGE_DIRTY_BITS;
 }
 
+static inline bool pte_shstk(pte_t pte)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
+		return false;
+
+	return (pte_flags(pte) & (_PAGE_RW | _PAGE_DIRTY)) == _PAGE_DIRTY;
+}
 
 static inline u32 read_pkru(void)
 {
@@ -160,9 +170,20 @@ static inline int pte_young(pte_t pte)
 	return pte_flags(pte) & _PAGE_ACCESSED;
 }
 
-static inline int pmd_dirty(pmd_t pmd)
+static inline bool pmd_dirty(pmd_t pmd)
 {
-	return pmd_flags(pmd) & _PAGE_DIRTY;
+	/*
+	 * A dirty PMD has Dirty=1 or Cow=1.
+	 */
+	return pmd_flags(pmd) & _PAGE_DIRTY_BITS;
+}
+
+static inline bool pmd_shstk(pmd_t pmd)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
+		return false;
+
+	return (pmd_flags(pmd) & (_PAGE_RW | _PAGE_DIRTY)) == _PAGE_DIRTY;
 }
 
 static inline int pmd_young(pmd_t pmd)
@@ -170,9 +191,12 @@ static inline int pmd_young(pmd_t pmd)
 	return pmd_flags(pmd) & _PAGE_ACCESSED;
 }
 
-static inline int pud_dirty(pud_t pud)
+static inline bool pud_dirty(pud_t pud)
 {
-	return pud_flags(pud) & _PAGE_DIRTY;
+	/*
+	 * A dirty PUD has Dirty=1 or Cow=1.
+	 */
+	return pud_flags(pud) & _PAGE_DIRTY_BITS;
 }
 
 static inline int pud_young(pud_t pud)
@@ -182,7 +206,7 @@ static inline int pud_young(pud_t pud)
 
 static inline int pte_write(pte_t pte)
 {
-	return pte_flags(pte) & _PAGE_RW;
+	return (pte_flags(pte) & _PAGE_RW) || pte_shstk(pte);
 }
 
 static inline int pte_huge(pte_t pte)
@@ -314,6 +338,24 @@ static inline pte_t pte_clear_flags(pte_t pte, pteval_t clear)
 	return native_make_pte(v & ~clear);
 }
 
+static inline pte_t pte_make_cow(pte_t pte)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
+		return pte;
+
+	pte = pte_clear_flags(pte, _PAGE_DIRTY);
+	return pte_set_flags(pte, _PAGE_COW);
+}
+
+static inline pte_t pte_clear_cow(pte_t pte)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
+		return pte;
+
+	pte = pte_set_flags(pte, _PAGE_DIRTY);
+	return pte_clear_flags(pte, _PAGE_COW);
+}
+
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
 static inline int pte_uffd_wp(pte_t pte)
 {
@@ -333,7 +375,7 @@ static inline pte_t pte_clear_uffd_wp(pte_t pte)
 
 static inline pte_t pte_mkclean(pte_t pte)
 {
-	return pte_clear_flags(pte, _PAGE_DIRTY);
+	return pte_clear_flags(pte, _PAGE_DIRTY_BITS);
 }
 
 static inline pte_t pte_mkold(pte_t pte)
@@ -343,7 +385,16 @@ static inline pte_t pte_mkold(pte_t pte)
 
 static inline pte_t pte_wrprotect(pte_t pte)
 {
-	return pte_clear_flags(pte, _PAGE_RW);
+	pte = pte_clear_flags(pte, _PAGE_RW);
+
+	/*
+	 * Blindly clearing _PAGE_RW might accidentally create
+	 * a shadow stack PTE (RW=0, Dirty=1). Move the hardware
+	 * dirty value to the software bit.
+	 */
+	if (pte_dirty(pte))
+		pte = pte_make_cow(pte);
+	return pte;
 }
 
 static inline pte_t pte_mkexec(pte_t pte)
@@ -353,7 +404,18 @@ static inline pte_t pte_mkexec(pte_t pte)
 
 static inline pte_t pte_mkdirty(pte_t pte)
 {
-	return pte_set_flags(pte, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+	pteval_t dirty = _PAGE_DIRTY;
+
+	/* Avoid creating (HW)Dirty=1, Write=0 PTEs */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK) && !pte_write(pte))
+		dirty = _PAGE_COW;
+
+	return pte_set_flags(pte, dirty | _PAGE_SOFT_DIRTY);
+}
+
+static inline pte_t pte_mkwrite_shstk(pte_t pte)
+{
+	return pte_clear_cow(pte);
 }
 
 static inline pte_t pte_mkyoung(pte_t pte)
@@ -363,7 +425,12 @@ static inline pte_t pte_mkyoung(pte_t pte)
 
 static inline pte_t pte_mkwrite(pte_t pte)
 {
-	return pte_set_flags(pte, _PAGE_RW);
+	pte = pte_set_flags(pte, _PAGE_RW);
+
+	if (pte_dirty(pte))
+		pte = pte_clear_cow(pte);
+
+	return pte;
 }
 
 static inline pte_t pte_mkhuge(pte_t pte)
@@ -410,6 +477,24 @@ static inline pmd_t pmd_clear_flags(pmd_t pmd, pmdval_t clear)
 	return native_make_pmd(v & ~clear);
 }
 
+static inline pmd_t pmd_make_cow(pmd_t pmd)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
+		return pmd;
+
+	pmd = pmd_clear_flags(pmd, _PAGE_DIRTY);
+	return pmd_set_flags(pmd, _PAGE_COW);
+}
+
+static inline pmd_t pmd_clear_cow(pmd_t pmd)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
+		return pmd;
+
+	pmd = pmd_set_flags(pmd, _PAGE_DIRTY);
+	return pmd_clear_flags(pmd, _PAGE_COW);
+}
+
 #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_WP
 static inline int pmd_uffd_wp(pmd_t pmd)
 {
@@ -434,17 +519,36 @@ static inline pmd_t pmd_mkold(pmd_t pmd)
 
 static inline pmd_t pmd_mkclean(pmd_t pmd)
 {
-	return pmd_clear_flags(pmd, _PAGE_DIRTY);
+	return pmd_clear_flags(pmd, _PAGE_DIRTY_BITS);
 }
 
 static inline pmd_t pmd_wrprotect(pmd_t pmd)
 {
-	return pmd_clear_flags(pmd, _PAGE_RW);
+	pmd = pmd_clear_flags(pmd, _PAGE_RW);
+	/*
+	 * Blindly clearing _PAGE_RW might accidentally create
+	 * a shadow stack PMD (RW=0, Dirty=1). Move the hardware
+	 * dirty value to the software bit.
+	 */
+	if (pmd_dirty(pmd))
+		pmd = pmd_make_cow(pmd);
+	return pmd;
 }
 
 static inline pmd_t pmd_mkdirty(pmd_t pmd)
 {
-	return pmd_set_flags(pmd, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+	pmdval_t dirty = _PAGE_DIRTY;
+
+	/* Avoid creating (HW)Dirty=1, Write=0 PMDs */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK) && !(pmd_flags(pmd) & _PAGE_RW))
+		dirty = _PAGE_COW;
+
+	return pmd_set_flags(pmd, dirty | _PAGE_SOFT_DIRTY);
+}
+
+static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd)
+{
+	return pmd_clear_cow(pmd);
 }
 
 static inline pmd_t pmd_mkdevmap(pmd_t pmd)
@@ -464,7 +568,11 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd)
 
 static inline pmd_t pmd_mkwrite(pmd_t pmd)
 {
-	return pmd_set_flags(pmd, _PAGE_RW);
+	pmd = pmd_set_flags(pmd, _PAGE_RW);
+
+	if (pmd_dirty(pmd))
+		pmd = pmd_clear_cow(pmd);
+	return pmd;
 }
 
 static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
@@ -481,6 +589,24 @@ static inline pud_t pud_clear_flags(pud_t pud, pudval_t clear)
 	return native_make_pud(v & ~clear);
 }
 
+static inline pud_t pud_make_cow(pud_t pud)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
+		return pud;
+
+	pud = pud_clear_flags(pud, _PAGE_DIRTY);
+	return pud_set_flags(pud, _PAGE_COW);
+}
+
+static inline pud_t pud_clear_cow(pud_t pud)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_SHSTK))
+		return pud;
+
+	pud = pud_set_flags(pud, _PAGE_DIRTY);
+	return pud_clear_flags(pud, _PAGE_COW);
+}
+
 static inline pud_t pud_mkold(pud_t pud)
 {
 	return pud_clear_flags(pud, _PAGE_ACCESSED);
@@ -488,17 +614,32 @@ static inline pud_t pud_mkold(pud_t pud)
 
 static inline pud_t pud_mkclean(pud_t pud)
 {
-	return pud_clear_flags(pud, _PAGE_DIRTY);
+	return pud_clear_flags(pud, _PAGE_DIRTY_BITS);
 }
 
 static inline pud_t pud_wrprotect(pud_t pud)
 {
-	return pud_clear_flags(pud, _PAGE_RW);
+	pud = pud_clear_flags(pud, _PAGE_RW);
+
+	/*
+	 * Blindly clearing _PAGE_RW might accidentally create
+	 * a shadow stack PUD (RW=0, Dirty=1). Move the hardware
+	 * dirty value to the software bit.
+	 */
+	if (pud_dirty(pud))
+		pud = pud_make_cow(pud);
+	return pud;
 }
 
 static inline pud_t pud_mkdirty(pud_t pud)
 {
-	return pud_set_flags(pud, _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+	pudval_t dirty = _PAGE_DIRTY;
+
+	/* Avoid creating (HW)Dirty=1, Write=0 PUDs */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK) && !(pud_flags(pud) & _PAGE_RW))
+		dirty = _PAGE_COW;
+
+	return pud_set_flags(pud, dirty | _PAGE_SOFT_DIRTY);
 }
 
 static inline pud_t pud_mkdevmap(pud_t pud)
@@ -518,7 +659,11 @@ static inline pud_t pud_mkyoung(pud_t pud)
 
 static inline pud_t pud_mkwrite(pud_t pud)
 {
-	return pud_set_flags(pud, _PAGE_RW);
+	pud = pud_set_flags(pud, _PAGE_RW);
+
+	if (pud_dirty(pud))
+		pud = pud_clear_cow(pud);
+	return pud;
 }
 
 #ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
@@ -1131,7 +1276,7 @@ extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
 #define pmd_write pmd_write
 static inline int pmd_write(pmd_t pmd)
 {
-	return pmd_flags(pmd) & _PAGE_RW;
+	return (pmd_flags(pmd) & _PAGE_RW) || pmd_shstk(pmd);
 }
 
 #define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index b8b79d618bbc..437d7ff0ae80 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -23,7 +23,8 @@
 #define _PAGE_BIT_SOFTW2	10	/* " */
 #define _PAGE_BIT_SOFTW3	11	/* " */
 #define _PAGE_BIT_PAT_LARGE	12	/* On 2MB or 1GB pages */
-#define _PAGE_BIT_SOFTW4	58	/* available for programmer */
+#define _PAGE_BIT_SOFTW4	57	/* available for programmer */
+#define _PAGE_BIT_SOFTW5	58	/* available for programmer */
 #define _PAGE_BIT_PKEY_BIT0	59	/* Protection Keys, bit 1/4 */
 #define _PAGE_BIT_PKEY_BIT1	60	/* Protection Keys, bit 2/4 */
 #define _PAGE_BIT_PKEY_BIT2	61	/* Protection Keys, bit 3/4 */
@@ -36,6 +37,15 @@
 #define _PAGE_BIT_SOFT_DIRTY	_PAGE_BIT_SOFTW3 /* software dirty tracking */
 #define _PAGE_BIT_DEVMAP	_PAGE_BIT_SOFTW4
 
+/*
+ * Indicates a copy-on-write page.
+ */
+#ifdef CONFIG_X86_CET
+#define _PAGE_BIT_COW		_PAGE_BIT_SOFTW5 /* copy-on-write */
+#else
+#define _PAGE_BIT_COW		0
+#endif
+
 /* If _PAGE_BIT_PRESENT is clear, we use these: */
 /* - if the user mapped it with PROT_NONE; pte_present gives true */
 #define _PAGE_BIT_PROTNONE	_PAGE_BIT_GLOBAL
@@ -117,6 +127,36 @@
 #define _PAGE_DEVMAP	(_AT(pteval_t, 0))
 #endif
 
+/*
+ * The hardware requires shadow stack to be read-only and Dirty.
+ * _PAGE_COW is a software-only bit used to separate copy-on-write PTEs
+ * from shadow stack PTEs:
+ * (a) A modified, copy-on-write (COW) page: (Write=0, Cow=1)
+ * (b) A R/O page that has been COW'ed: (Write=0, Cow=1)
+ *     The user page is in a R/O VMA, and get_user_pages() needs a
+ *     writable copy. The page fault handler creates a copy of the page
+ *     and sets the new copy's PTE as Write=0, Cow=1.
+ * (c) A shadow stack PTE: (Write=0, Dirty=1)
+ * (d) A shared (copy-on-access) shadow stack PTE: (Write=0, Cow=1)
+ *     When a shadow stack page is being shared among processes (this
+ *     happens at fork()), its PTE is cleared of _PAGE_DIRTY, so the next
+ *     shadow stack access causes a fault, and the page is duplicated and
+ *     _PAGE_DIRTY is set again. This is the COW equivalent for shadow
+ *     stack pages, even though it's copy-on-access rather than
+ *     copy-on-write.
+ * (e) A page where the processor observed a Write=1 PTE, started a write,
+ *     set Dirty=1, but then observed a Write=0 PTE (changed by another
+ *     thread). That's possible today, but will not happen on processors
+ *     that support shadow stack.
+ */
+#ifdef CONFIG_X86_CET
+#define _PAGE_COW	(_AT(pteval_t, 1) << _PAGE_BIT_COW)
+#else
+#define _PAGE_COW	(_AT(pteval_t, 0))
+#endif
+
+#define _PAGE_DIRTY_BITS (_PAGE_DIRTY | _PAGE_COW)
+
 #define _PAGE_PROTNONE	(_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
 
 /*
-- 
2.21.0
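
For illustration only, here is a minimal userspace sketch of the
Write/Dirty/Cow states described in the commit message. The flag values
below are arbitrary stand-ins rather than the real _PAGE_* constants, and
wrprotect()/mkwrite() merely mirror the intent of pte_wrprotect() and
pte_mkwrite() in the patch, not the kernel implementation.

/*
 * Toy model: a COW'ed data page and a shadow stack page are both
 * read-only and logically dirty, yet remain distinguishable because
 * one carries the software Cow bit and the other the hardware Dirty bit.
 */
#include <stdbool.h>
#include <stdio.h>

#define RW    (1u << 1)   /* stand-in for _PAGE_RW */
#define DIRTY (1u << 6)   /* stand-in for _PAGE_DIRTY (hardware) */
#define COW   (1u << 5)   /* stand-in for _PAGE_COW (software) */

typedef unsigned int pteval;

/* Shadow stack state: Write=0, Dirty=1 */
static bool is_shstk(pteval p)
{
	return (p & (RW | DIRTY)) == DIRTY;
}

/* "Dirty" means hardware Dirty or software Cow */
static bool is_dirty(pteval p)
{
	return p & (DIRTY | COW);
}

/* Clearing Write moves a hardware Dirty bit to the software Cow bit. */
static pteval wrprotect(pteval p)
{
	p &= ~RW;
	if (p & DIRTY)
		p = (p & ~DIRTY) | COW;
	return p;
}

/* Making the entry writable again moves Cow back to hardware Dirty. */
static pteval mkwrite(pteval p)
{
	p |= RW;
	if (p & COW)
		p = (p & ~COW) | DIRTY;
	return p;
}

int main(void)
{
	pteval data = RW | DIRTY;	/* ordinary dirty data page */
	pteval shstk = DIRTY;		/* shadow stack page: Write=0, Dirty=1 */

	pteval cow = wrprotect(data);	/* e.g. fork(): becomes Write=0, Cow=1 */

	printf("COW'ed data page:   shstk=%d dirty=%d\n", is_shstk(cow), is_dirty(cow));
	printf("shadow stack page:  shstk=%d dirty=%d\n", is_shstk(shstk), is_dirty(shstk));

	cow = mkwrite(cow);		/* a write fault resolves the COW */
	printf("writable data page: shstk=%d dirty=%d\n", is_shstk(cow), is_dirty(cow));

	return 0;
}

Running the sketch prints shstk=0 dirty=1 for the COW'ed data page and
shstk=1 dirty=1 for the shadow stack page, which is exactly the separation
_PAGE_COW is introduced to provide.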