From: Yu-cheng Yu
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
    linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
    Balbir Singh, Cyrill Gorcunov, Dave Hansen, Florian Weimer,
    "H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
    Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra,
    "Ravi V. Shankar", Vedvyas Shanbhogue
Cc: Yu-cheng Yu
Subject: [RFC PATCH v2 13/27] mm: Handle shadow stack page fault
Date: Tue, 10 Jul 2018 15:26:25 -0700
Message-Id: <20180710222639.8241-14-yu-cheng.yu@intel.com>
In-Reply-To: <20180710222639.8241-1-yu-cheng.yu@intel.com>
References: <20180710222639.8241-1-yu-cheng.yu@intel.com>

When a task does fork(), its shadow stack must be duplicated for the
child.  However, the child may not actually use all pages of the
copied shadow stack.  This patch implements a flow that is similar to
copy-on-write of an anonymous page, but for shadow stack memory.

A shadow stack PTE must be read-only and dirty.  We use this dirty-bit
requirement to effect the copying of shadow stack pages: in
copy_one_pte(), we clear the dirty bit from the shadow stack PTE, so
the next shadow stack access to the PTE causes a page fault.  At that
time, we copy or re-use the page and fix the PTE.
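For context, the fault-time fixup relies on pte_mkdirty_shstk(), which
an earlier x86 patch in this series introduces.  A minimal sketch of
the idea, assuming the _PAGE_DIRTY_HW/_PAGE_DIRTY_SW bit split from
that patch (illustrative only, not part of this diff):

	/*
	 * Sketch only: the real helper lives in the x86 pgtable
	 * patches of this series.  A shadow stack PTE is read-only
	 * with the hardware dirty bit (_PAGE_DIRTY_HW) set; a PTE
	 * that is dirty only in software (_PAGE_DIRTY_SW) is an
	 * ordinary read-only PTE, so a shadow stack access to it
	 * faults.
	 */
	static inline pte_t pte_mkdirty_shstk(pte_t pte)
	{
		pte = pte_clear_flags(pte, _PAGE_DIRTY_SW);
		return pte_set_flags(pte, _PAGE_DIRTY_HW);
	}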
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 mm/memory.c | 30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 7206a634270b..a2695dbc0418 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2453,7 +2453,13 @@ static inline void wp_page_reuse(struct vm_fault *vmf)
 
 	flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 	entry = pte_mkyoung(vmf->orig_pte);
-	entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+
+	if (is_shstk_mapping(vma->vm_flags))
+		entry = pte_mkdirty_shstk(entry);
+	else
+		entry = pte_mkdirty(entry);
+
+	entry = maybe_mkwrite(entry, vma);
 	if (ptep_set_access_flags(vma, vmf->address, vmf->pte, entry, 1))
 		update_mmu_cache(vma, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2526,7 +2532,11 @@ static int wp_page_copy(struct vm_fault *vmf)
 		}
 		flush_cache_page(vma, vmf->address, pte_pfn(vmf->orig_pte));
 		entry = mk_pte(new_page, vma->vm_page_prot);
-		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		if (is_shstk_mapping(vma->vm_flags))
+			entry = pte_mkdirty_shstk(entry);
+		else
+			entry = pte_mkdirty(entry);
+		entry = maybe_mkwrite(entry, vma);
 		/*
 		 * Clear the pte entry and flush it first, before updating the
 		 * pte with the new entry. This will avoid a race condition
@@ -3201,6 +3211,14 @@ static int do_anonymous_page(struct vm_fault *vmf)
 	mem_cgroup_commit_charge(page, memcg, false, false);
 	lru_cache_add_active_or_unevictable(page, vma);
 setpte:
+	/*
+	 * If this is within a shadow stack mapping, mark
+	 * the PTE dirty.  We don't use pte_mkdirty(),
+	 * because the PTE must have _PAGE_DIRTY_HW set.
+	 */
+	if (is_shstk_mapping(vma->vm_flags))
+		entry = pte_mkdirty_shstk(entry);
+
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
 
 	/* No need to invalidate - it was non-present before */
@@ -3983,6 +4001,14 @@ static int handle_pte_fault(struct vm_fault *vmf)
 	entry = vmf->orig_pte;
 	if (unlikely(!pte_same(*vmf->pte, entry)))
 		goto unlock;
+
+	/*
+	 * Shadow stack PTEs are copy-on-access, so they take the
+	 * do_wp_page() path whether or not this is a write fault.
+	 */
+	if (is_shstk_mapping(vmf->vma->vm_flags))
+		return do_wp_page(vmf);
+
 	if (vmf->flags & FAULT_FLAG_WRITE) {
 		if (!pte_write(entry))
 			return do_wp_page(vmf);
--
2.17.1
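Note (not part of this patch): the fork-time half described in the
changelog, clearing the dirty bit in copy_one_pte(), is done by a
separate patch in this series.  A hedged sketch of that step, using
the same illustrative bit names as above, not the actual code:

	/*
	 * Sketch of the copy_one_pte() side.  Moving the dirty
	 * state from _PAGE_DIRTY_HW to _PAGE_DIRTY_SW leaves the
	 * child's PTE read-only but no longer a valid shadow stack
	 * PTE, so the first shadow stack access faults and reaches
	 * the do_wp_page() paths modified above.
	 */
	if (is_shstk_mapping(vm_flags)) {
		pte = pte_clear_flags(pte, _PAGE_DIRTY_HW);
		pte = pte_set_flags(pte, _PAGE_DIRTY_SW);
	}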