From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
    Arnd Bergmann, Andy Lutomirski, Balbir Singh, Cyrill Gorcunov,
    Dave Hansen, Florian Weimer, "H.J. Lu", Jann Horn, Jonathan Corbet,
    Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
    Peter Zijlstra, "Ravi V. Shankar", Vedvyas Shanbhogue
Cc: Yu-cheng Yu
Subject: [RFC PATCH v3 15/24] mm: Handle THP/HugeTLB shadow stack page fault
Date: Thu, 30 Aug 2018 07:38:55 -0700
Message-Id: <20180830143904.3168-16-yu-cheng.yu@intel.com>
In-Reply-To: <20180830143904.3168-1-yu-cheng.yu@intel.com>
References: <20180830143904.3168-1-yu-cheng.yu@intel.com>

This patch implements THP shadow stack memory copying in the same way
the previous patch does for a regular PTE. In copy_huge_pmd(), we clear
the dirty bit from the PMD; the next shadow stack access to that PMD
then triggers a page fault, at which point the page is copied or
re-used and the PMD is fixed up.
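Note: the copy_huge_pmd() change described above lands in an earlier
patch of this series and is not part of the diff below. As a minimal
sketch of the idea (the exact helpers and their placement inside
copy_huge_pmd() are assumptions here, not taken from this patch), the
fork-time transfer would look something like:

	/*
	 * Sketch only: when copy_huge_pmd() duplicates a shadow stack
	 * PMD, clear the hardware dirty bit.  A shadow stack access
	 * requires a read-only but dirty entry, so the next access
	 * faults; the fault paths below then copy or re-use the page
	 * and restore the state via pmd_set_vma_features().
	 */
	if (vma->vm_flags & VM_SHSTK)
		pmd = pmd_mkclean(pmd);	/* force a fault on next shadow stack access */
	set_pmd_at(dst_mm, addr, dst_pmd, pmd);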
Signed-off-by: Yu-cheng Yu
---
 arch/x86/mm/pgtable.c         | 8 ++++++++
 include/asm-generic/pgtable.h | 6 ++++++
 mm/huge_memory.c              | 4 ++++
 3 files changed, 18 insertions(+)

diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index c63261128ac3..0ab38bfbedfc 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -881,4 +881,12 @@ inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma)
 	else
 		return pte;
 }
+
+inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
+{
+	if (vma->vm_flags & VM_SHSTK)
+		return pmd_mkdirty_shstk(pmd);
+	else
+		return pmd;
+}
 #endif /* CONFIG_X86_INTEL_SHADOW_STACK_USER */
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 558a485617cd..0f25186cd38d 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -1151,8 +1151,14 @@ static inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma)
 {
 	return pte;
 }
+
+static inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma)
+{
+	return pmd;
+}
 #else
 pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma);
+pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma);
 #endif
 
 #endif /* _ASM_GENERIC_PGTABLE_H */
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c3bc7e9c9a2a..5b4c8f2fb85e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -597,6 +597,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 
 		entry = mk_huge_pmd(page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		entry = pmd_set_vma_features(entry, vma);
 		page_add_new_anon_rmap(page, vma, haddr, true);
 		mem_cgroup_commit_charge(page, memcg, false, true);
 		lru_cache_add_active_or_unevictable(page, vma);
@@ -1194,6 +1195,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf,
 		pte_t entry;
 		entry = mk_pte(pages[i], vma->vm_page_prot);
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		entry = pte_set_vma_features(entry, vma);
 		memcg = (void *)page_private(pages[i]);
 		set_page_private(pages[i], 0);
 		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
@@ -1278,6 +1280,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		pmd_t entry;
 		entry = pmd_mkyoung(orig_pmd);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		entry = pmd_set_vma_features(entry, vma);
 		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1))
 			update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 		ret |= VM_FAULT_WRITE;
@@ -1349,6 +1352,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		pmd_t entry;
 		entry = mk_huge_pmd(new_page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		entry = pmd_set_vma_features(entry, vma);
 		pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
 		page_add_new_anon_rmap(new_page, vma, haddr, true);
 		mem_cgroup_commit_charge(new_page, memcg, false, true);
--
2.17.1
Peter Anvin" , Thomas Gleixner , Ingo Molnar , linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann , Andy Lutomirski , Balbir Singh , Cyrill Gorcunov , Dave Hansen , Florian Weimer , "H.J. Lu" , Jann Horn , Jonathan Corbet , Kees Cook , Mike Kravetz , Nadav Amit , Oleg Nesterov , Pavel Machek Peter Cc: Yu-cheng Yu List-Id: linux-api@vger.kernel.org This patch implements THP shadow stack memory copying in the same way as the previous patch for regular PTE. In copy_huge_pmd(), we clear the dirty bit from the PMD. On the next shadow stack access to the PMD, a page fault occurs. At that time, the page is copied/re-used and the PMD is fixed. Signed-off-by: Yu-cheng Yu --- arch/x86/mm/pgtable.c | 8 ++++++++ include/asm-generic/pgtable.h | 6 ++++++ mm/huge_memory.c | 4 ++++ 3 files changed, 18 insertions(+) diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c index c63261128ac3..0ab38bfbedfc 100644 --- a/arch/x86/mm/pgtable.c +++ b/arch/x86/mm/pgtable.c @@ -881,4 +881,12 @@ inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma) else return pte; } + +inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma) +{ + if (vma->vm_flags & VM_SHSTK) + return pmd_mkdirty_shstk(pmd); + else + return pmd; +} #endif /* CONFIG_X86_INTEL_SHADOW_STACK_USER */ diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h index 558a485617cd..0f25186cd38d 100644 --- a/include/asm-generic/pgtable.h +++ b/include/asm-generic/pgtable.h @@ -1151,8 +1151,14 @@ static inline pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma) { return pte; } + +static inline pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma) +{ + return pmd; +} #else pte_t pte_set_vma_features(pte_t pte, struct vm_area_struct *vma); +pmd_t pmd_set_vma_features(pmd_t pmd, struct vm_area_struct *vma); #endif #endif /* _ASM_GENERIC_PGTABLE_H */ diff --git a/mm/huge_memory.c b/mm/huge_memory.c index c3bc7e9c9a2a..5b4c8f2fb85e 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -597,6 +597,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf, entry = mk_huge_pmd(page, vma->vm_page_prot); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); + entry = pmd_set_vma_features(entry, vma); page_add_new_anon_rmap(page, vma, haddr, true); mem_cgroup_commit_charge(page, memcg, false, true); lru_cache_add_active_or_unevictable(page, vma); @@ -1194,6 +1195,7 @@ static vm_fault_t do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pte_t entry; entry = mk_pte(pages[i], vma->vm_page_prot); entry = maybe_mkwrite(pte_mkdirty(entry), vma); + entry = pte_set_vma_features(entry, vma); memcg = (void *)page_private(pages[i]); set_page_private(pages[i], 0); page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false); @@ -1278,6 +1280,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd) pmd_t entry; entry = pmd_mkyoung(orig_pmd); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); + entry = pmd_set_vma_features(entry, vma); if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry, 1)) update_mmu_cache_pmd(vma, vmf->address, vmf->pmd); ret |= VM_FAULT_WRITE; @@ -1349,6 +1352,7 @@ vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd) pmd_t entry; entry = mk_huge_pmd(new_page, vma->vm_page_prot); entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma); + entry = pmd_set_vma_features(entry, vma); pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd); 
page_add_new_anon_rmap(new_page, vma, haddr, true); mem_cgroup_commit_charge(new_page, memcg, false, true); -- 2.17.1