From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen,
	Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
	Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
	Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	"Ravi V. Shankar", Vedvyas Shanbhogue, Dave Martin
Cc: Yu-cheng Yu
Subject: [PATCH v8 16/27] mm: Handle THP/HugeTLB shadow stack page fault
Date: Tue, 13 Aug 2019 13:52:14 -0700
Message-Id: <20190813205225.12032-17-yu-cheng.yu@intel.com>
In-Reply-To: <20190813205225.12032-1-yu-cheng.yu@intel.com>
References: <20190813205225.12032-1-yu-cheng.yu@intel.com>

This patch implements THP shadow stack (SHSTK) copying in the same way
as the previous patch does for regular PTEs.

In copy_huge_pmd(), clear the dirty bit from the PMD to cause a page
fault upon the next SHSTK access to the PMD.  At that time, fix the PMD
and copy/re-use the page.

Signed-off-by: Yu-cheng Yu
---

 arch/x86/mm/pgtable.c         | 8 ++++++++
 include/asm-generic/pgtable.h | 6 ++++++
 mm/huge_memory.c              | 4 ++++
 3 files changed, 18 insertions(+)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 0c10d0c5e329..6f3959ca2a08 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -886,6 +886,14 @@ inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma)
 	return pte;
 }
 
+inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pmd_mkdirty_shstk(pmd);
+	else
+		return pmd;
+}
+
 inline bool arch_copy_pte_mapping(vm_flags_t vm_flags)
 {
 	return (vm_flags & VM_SHSTK);
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 89b0fa132f1f..438ce73b57ea 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1194,12 +1194,18 @@ static inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma)
 	return pte;
 }
 
+static inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
+{
+	return pmd;
+}
+
 static inline bool arch_copy_pte_mapping(vm_flags_t vm_flags)
 {
 	return false;
 }
 #else
 pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma);
+pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma);
 bool arch_copy_pte_mapping(vm_flags_t vm_flags);
 #endif
 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1334ede667a8..39d66c628121 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -611,6 +611,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 
 		entry = mk_huge_pmd(page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		entry = pmd_set_vma_features(entry, vma);
 		page_add_new_anon_rmap(page, vma, haddr, true);
 		mem_cgroup_commit_charge(page, memcg, false, true);
 		lru_cache_add_active_or_unevictable(page, vma);
@@ -1253,6 +1254,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 		pte_t entry;
 		entry = mk_pte(pages[i], vma->vm_page_prot);
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		entry = pte_set_vma_features(entry, vma);
 		memcg = (void *)page_private(pages[i]);
 		set_page_private(pages[i], 0);
 		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
@@ -1335,6 +1337,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		pmd_t entry;
 		entry = pmd_mkyoung(orig_pmd);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		entry = pmd_set_vma_features(entry, vma);
 		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
 			update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 		ret |= VM_FAULT_WRITE;
@@ -1407,6 +1410,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		pmd_t entry;
 		entry = mk_huge_pmd(new_page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		entry = pmd_set_vma_features(entry, vma);
 		pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
 		page_add_new_anon_rmap(new_page, vma, haddr, true);
 		mem_cgroup_commit_charge(new_page, memcg, false, true);
-- 
2.17.1