From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
 linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
 Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
 Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
 Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov,
 Pavel Machek, Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar",
 Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu, Haitao Huang
Cc: Yu-cheng Yu, "Kirill A. Shutemov"
Subject: [PATCH v26 11/30] x86/mm: Update pte_modify for _PAGE_COW
Date: Tue, 27 Apr 2021 13:42:56 -0700
Message-Id: <20210427204315.24153-12-yu-cheng.yu@intel.com>
In-Reply-To: <20210427204315.24153-1-yu-cheng.yu@intel.com>
References: <20210427204315.24153-1-yu-cheng.yu@intel.com>

A read-only and Dirty PTE has historically indicated a copy-on-write
page.  However, newer x86 processors also treat a read-only and Dirty
PTE as a shadow stack page.  To separate the two cases, the
software-defined _PAGE_COW bit replaces _PAGE_DIRTY for copy-on-write,
and the pte_*() helpers are updated accordingly.

pte_modify() changes a PTE to 'newprot', but it does not go through the
pte_*() helpers.  Introduce fixup_dirty_pte(), which re-encodes a dirty
PTE as either _PAGE_DIRTY or _PAGE_COW, depending on _PAGE_RW.

Apply the same change to pmd_modify().

Signed-off-by: Yu-cheng Yu
Reviewed-by: Kirill A. Shutemov
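For readers following without the rest of the series at hand, below is a
minimal user-space model (a sketch, not the kernel implementation) of the
pte_mkclean()/pte_mkdirty() behavior that fixup_dirty_pte() relies on.
The MODEL_PAGE_RW and MODEL_PAGE_DIRTY positions match the architectural
bits; the MODEL_PAGE_COW position is an arbitrary stand-in for the
software bit defined earlier in the series, and all MODEL_* names are
illustrative, not kernel identifiers.

#include <stdint.h>
#include <stdio.h>

#define MODEL_PAGE_RW		(1ULL << 1)	/* hardware write bit */
#define MODEL_PAGE_DIRTY	(1ULL << 6)	/* hardware dirty bit */
#define MODEL_PAGE_COW		(1ULL << 58)	/* software bit; position illustrative */

/* Models pte_mkclean(): clear both dirty encodings. */
static uint64_t model_mkclean(uint64_t pte)
{
	return pte & ~(MODEL_PAGE_DIRTY | MODEL_PAGE_COW);
}

/*
 * Models pte_mkdirty(): a writable PTE takes the hardware dirty bit; a
 * read-only PTE takes the software COW bit instead, keeping the
 * read-only+Dirty combination reserved for shadow stack pages.
 */
static uint64_t model_mkdirty(uint64_t pte)
{
	return pte | ((pte & MODEL_PAGE_RW) ? MODEL_PAGE_DIRTY : MODEL_PAGE_COW);
}

int main(void)
{
	/* The mkclean-then-mkdirty sequence used by fixup_dirty_pte(): */
	uint64_t writable = model_mkdirty(model_mkclean(MODEL_PAGE_RW | MODEL_PAGE_DIRTY));
	uint64_t readonly = model_mkdirty(model_mkclean(MODEL_PAGE_DIRTY));

	printf("writable:  %#llx\n", (unsigned long long)writable);	/* RW|DIRTY */
	printf("read-only: %#llx\n", (unsigned long long)readonly);	/* COW */
	return 0;
}

Going through pte_mkclean() and then pte_mkdirty(), rather than
open-coding the bit swap, keeps the dirty-encoding logic in one place.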
---
 arch/x86/include/asm/pgtable.h | 37 ++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 9c056d5815de..e1739f590ca6 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -799,6 +799,23 @@ static inline pmd_t pmd_mkinvalid(pmd_t pmd)
 
 static inline u64 flip_protnone_guard(u64 oldval, u64 val, u64 mask);
 
+static inline pteval_t fixup_dirty_pte(pteval_t pteval)
+{
+	pte_t pte = __pte(pteval);
+
+	/*
+	 * Fix up potential shadow stack page flags because the RO, Dirty
+	 * PTE is special.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		if (pte_dirty(pte)) {
+			pte = pte_mkclean(pte);
+			pte = pte_mkdirty(pte);
+		}
+	}
+	return pte_val(pte);
+}
+
 static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 {
 	pteval_t val = pte_val(pte), oldval = val;
@@ -809,16 +826,36 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 	 */
 	val &= _PAGE_CHG_MASK;
 	val |= check_pgprot(newprot) & ~_PAGE_CHG_MASK;
+	val = fixup_dirty_pte(val);
 	val = flip_protnone_guard(oldval, val, PTE_PFN_MASK);
 	return __pte(val);
 }
 
+static inline int pmd_write(pmd_t pmd);
+static inline pmdval_t fixup_dirty_pmd(pmdval_t pmdval)
+{
+	pmd_t pmd = __pmd(pmdval);
+
+	/*
+	 * Fix up potential shadow stack page flags because the RO, Dirty
+	 * PMD is special.
+	 */
+	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
+		if (pmd_dirty(pmd)) {
+			pmd = pmd_mkclean(pmd);
+			pmd = pmd_mkdirty(pmd);
+		}
+	}
+	return pmd_val(pmd);
+}
+
 static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
 {
 	pmdval_t val = pmd_val(pmd), oldval = val;
 
 	val &= _HPAGE_CHG_MASK;
 	val |= check_pgprot(newprot) & ~_HPAGE_CHG_MASK;
+	val = fixup_dirty_pmd(val);
 	val = flip_protnone_guard(oldval, val, PHYSICAL_PMD_PAGE_MASK);
 	return __pmd(val);
 }
-- 
2.21.0
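As a worked example of the failure mode the fixup prevents, consider
mprotect()'ing a dirty, writable page to read-only.  That path goes
through pte_modify(), which preserves the dirty bit (it is in
_PAGE_CHG_MASK) while the new protection drops the write bit; without
the fixup the result is the read-only+Dirty encoding that shadow stack
CPUs treat as a shadow stack page.  The sketch below reuses the same
user-space model and illustrative MODEL_* bit positions as above, so it
is only a demonstration of the logic, not the kernel code.

#include <stdint.h>
#include <stdio.h>

#define MODEL_PAGE_RW		(1ULL << 1)
#define MODEL_PAGE_DIRTY	(1ULL << 6)
#define MODEL_PAGE_COW		(1ULL << 58)	/* position illustrative */

/* Models fixup_dirty_pte(): re-encode the dirty state based on RW. */
static uint64_t model_fixup_dirty(uint64_t val)
{
	if (val & (MODEL_PAGE_DIRTY | MODEL_PAGE_COW)) {	/* pte_dirty() */
		val &= ~(MODEL_PAGE_DIRTY | MODEL_PAGE_COW);	/* pte_mkclean() */
		val |= (val & MODEL_PAGE_RW) ? MODEL_PAGE_DIRTY
					     : MODEL_PAGE_COW;	/* pte_mkdirty() */
	}
	return val;
}

int main(void)
{
	/* A dirty, writable anonymous page... */
	uint64_t pte = MODEL_PAGE_RW | MODEL_PAGE_DIRTY;

	/*
	 * ...made read-only: the dirty bit survives the protection
	 * change, the write bit does not.
	 */
	uint64_t without_fixup = pte & ~MODEL_PAGE_RW;
	uint64_t with_fixup = model_fixup_dirty(without_fixup);

	printf("without fixup: %#llx (RO+Dirty: reads as shadow stack)\n",
	       (unsigned long long)without_fixup);
	printf("with fixup:    %#llx (RO+COW: ordinary dirty read-only)\n",
	       (unsigned long long)with_fixup);
	return 0;
}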