From: Yu-cheng Yu <yu-cheng.yu@intel.com>
To: linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org, x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H.J. Lu" <hjl.tools@gmail.com>,
	Vedvyas Shanbhogue <vedvyas.shanbhogue@intel.com>,
	"Ravi V. Shankar" <ravi.v.shankar@intel.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Andy Lutomirski <luto@amacapital.net>,
	Jonathan Corbet <corbet@lwn.net>, Oleg Nesterov <oleg@redhat.com>,
	Arnd Bergmann <arnd@arndb.de>,
	Mike Kravetz <mike.kravetz@oracle.com>
Cc: Yu-cheng Yu <yu-cheng.yu@intel.com>
Subject: [PATCH 9/9] x86/cet: Handle THP/HugeTLB shadow stack page copying
Date: Thu,  7 Jun 2018 07:37:05 -0700	[thread overview]
Message-ID: <20180607143705.3531-10-yu-cheng.yu@intel.com> (raw)
In-Reply-To: <20180607143705.3531-1-yu-cheng.yu@intel.com>

This patch implements THP shadow stack memory copying in the same
way as the previous patch does for regular PTEs.

In copy_huge_pmd(), we clear the dirty bit from the PMD.  On the
next shadow stack access to the PMD, a page fault occurs.  At that
point the page is copied or re-used and the PMD is fixed up.
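
To illustrate the idea (this is a sketch only, not the actual helper
introduced in patch 6/9): a shadow stack PMD is read-only plus
hardware-dirty, so write-protecting it for fork() also has to move
the hardware dirty bit to a software dirty bit; otherwise the
read-only + dirty combination would still decode as shadow stack.
Assuming the bit names from patches 4/9 and 5/9, something along
these lines:

	static inline void pmdp_set_wrprotect_flush(struct vm_area_struct *vma,
						    unsigned long addr,
						    pmd_t *pmdp)
	{
		pmd_t pmd = *pmdp;

		/* Clear _PAGE_RW; shadow stack PMDs are already R/O. */
		pmd = pmd_wrprotect(pmd);

		/*
		 * Assumed semantics, not the series code: move the
		 * hardware dirty bit to the software one so the entry
		 * no longer looks like a shadow stack mapping.
		 */
		if (pmd_flags(pmd) & _PAGE_DIRTY_HW) {
			pmd = pmd_clear_flags(pmd, _PAGE_DIRTY_HW);
			pmd = pmd_set_flags(pmd, _PAGE_DIRTY_SW);
		}

		set_pmd_at(vma->vm_mm, addr, pmdp, pmd);
		/* Flush so no CPU keeps using the old dirty entry. */
		flush_tlb_range(vma, addr, addr + HPAGE_PMD_SIZE);
	}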

Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 mm/huge_memory.c | 10 +++++++++-
 mm/hugetlb.c     |  2 +-
 2 files changed, 10 insertions(+), 2 deletions(-)
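
A note for reading the hunks below: pmd_mkdirty_shstk() and
pte_mkdirty_shstk() come from patch 5/9.  As a rough sketch of the
assumed semantics (again not the actual series code), the PMD
variant would be roughly:

	/*
	 * Sketch only: mark a PMD hardware-dirty so that, combined
	 * with the R/O protection of a shadow stack VMA, the CPU
	 * treats it as a valid shadow stack entry.
	 */
	static inline pmd_t pmd_mkdirty_shstk(pmd_t pmd)
	{
		pmd = pmd_clear_flags(pmd, _PAGE_DIRTY_SW);
		return pmd_set_flags(pmd, _PAGE_DIRTY_HW);
	}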

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a3a1815f8e11..c6e72ccc4274 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -600,6 +600,8 @@ static int __do_huge_pmd_anonymous_page(struct vm_fault *vmf, struct page *page,
 
 		entry = mk_huge_pmd(page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		if (is_shstk_mapping(vma->vm_flags))
+			entry = pmd_mkdirty_shstk(entry);
 		page_add_new_anon_rmap(page, vma, haddr, true);
 		mem_cgroup_commit_charge(page, memcg, false, true);
 		lru_cache_add_active_or_unevictable(page, vma);
@@ -976,7 +978,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	mm_inc_nr_ptes(dst_mm);
 	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
 
-	pmdp_set_wrprotect(src_mm, addr, src_pmd);
+	pmdp_set_wrprotect_flush(vma, addr, src_pmd);
 	pmd = pmd_mkold(pmd_wrprotect(pmd));
 	set_pmd_at(dst_mm, addr, dst_pmd, pmd);
 
@@ -1196,6 +1198,8 @@ static int do_huge_pmd_wp_page_fallback(struct vm_fault *vmf, pmd_t orig_pmd,
 		pte_t entry;
 		entry = mk_pte(pages[i], vma->vm_page_prot);
 		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+		if (is_shstk_mapping(vma->vm_flags))
+			entry = pte_mkdirty_shstk(entry);
 		memcg = (void *)page_private(pages[i]);
 		set_page_private(pages[i], 0);
 		page_add_new_anon_rmap(pages[i], vmf->vma, haddr, false);
@@ -1280,6 +1284,8 @@ int do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		pmd_t entry;
 		entry = pmd_mkyoung(orig_pmd);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		if (is_shstk_mapping(vma->vm_flags))
+			entry = pmd_mkdirty_shstk(entry);
 		if (pmdp_set_access_flags(vma, haddr, vmf->pmd, entry,  1))
 			update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
 		ret |= VM_FAULT_WRITE;
@@ -1350,6 +1356,8 @@ int do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
 		pmd_t entry;
 		entry = mk_huge_pmd(new_page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
+		if (is_shstk_mapping(vma->vm_flags))
+			entry = pmd_mkdirty_shstk(entry);
 		pmdp_huge_clear_flush_notify(vma, haddr, vmf->pmd);
 		page_add_new_anon_rmap(new_page, vma, haddr, true);
 		mem_cgroup_commit_charge(new_page, memcg, false, true);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 218679138255..d694cfab9f90 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3293,7 +3293,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				 *
 				 * See Documentation/vm/mmu_notifier.txt
 				 */
-				huge_ptep_set_wrprotect(src, addr, src_pte);
+				huge_ptep_set_wrprotect_flush(vma, addr, src_pte);
 			}
 			entry = huge_ptep_get(src_pte);
 			ptepage = pte_page(entry);
-- 
2.15.1


Thread overview: 68+ messages
2018-06-07 14:36 [PATCH 0/9] Control Flow Enforcement - Part (2) Yu-cheng Yu
2018-06-07 14:36 ` [PATCH 1/9] x86/cet: Control protection exception handler Yu-cheng Yu
2018-06-07 15:46   ` Andy Lutomirski
2018-06-07 16:23     ` Yu-cheng Yu
2018-06-08  4:17   ` kbuild test robot
2018-06-08  4:18   ` kbuild test robot
2018-06-07 14:36 ` [PATCH 2/9] x86/cet: Add Kconfig option for user-mode shadow stack Yu-cheng Yu
2018-06-07 15:47   ` Andy Lutomirski
2018-06-07 15:58     ` Yu-cheng Yu
2018-06-07 16:28       ` Andy Lutomirski
2018-06-07 14:36 ` [PATCH 3/9] mm: Introduce VM_SHSTK for shadow stack memory Yu-cheng Yu
2018-06-07 14:37 ` [PATCH 4/9] x86/mm: Change _PAGE_DIRTY to _PAGE_DIRTY_HW Yu-cheng Yu
2018-06-08  3:53   ` kbuild test robot
2018-06-07 14:37 ` [PATCH 5/9] x86/mm: Introduce _PAGE_DIRTY_SW Yu-cheng Yu
2018-06-08  5:15   ` kbuild test robot
2018-06-07 14:37 ` [PATCH 6/9] x86/mm: Introduce ptep_set_wrprotect_flush and related functions Yu-cheng Yu
2018-06-07 16:24   ` Andy Lutomirski
2018-06-07 18:21     ` Dave Hansen
2018-06-07 18:24       ` Andy Lutomirski
2018-06-07 20:29     ` Dave Hansen
2018-06-07 20:36       ` Yu-cheng Yu
2018-06-08  0:59       ` Andy Lutomirski
2018-06-08  1:20         ` Dave Hansen
2018-06-08  4:43   ` kbuild test robot
2018-06-08 14:13   ` kbuild test robot
2018-06-07 14:37 ` [PATCH 7/9] x86/mm: Shadow stack page fault error checking Yu-cheng Yu
2018-06-07 16:26   ` Andy Lutomirski
2018-06-07 16:46     ` Yu-cheng Yu
2018-06-07 16:56     ` Dave Hansen
2018-06-07 14:37 ` [PATCH 8/9] x86/cet: Handle shadow stack page fault Yu-cheng Yu
2018-06-07 14:37 ` [PATCH 9/9] x86/cet: Handle THP/HugeTLB shadow stack page copying Yu-cheng Yu [this message]
