From: Chih-En Lin <shiyn.lin@gmail.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Qi Zheng <zhengqi.arch@bytedance.com>,
	David Hildenbrand <david@redhat.com>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	John Hubbard <jhubbard@nvidia.com>, Nadav Amit <namit@vmware.com>,
	Barry Song <baohua@kernel.org>,
	Pasha Tatashin <pasha.tatashin@soleen.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Masami Hiramatsu <mhiramat@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Arnaldo Carvalho de Melo <acme@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Namhyung Kim <namhyung@kernel.org>,
	Ian Rogers <irogers@google.com>,
	Adrian Hunter <adrian.hunter@intel.com>,
	Yu Zhao <yuzhao@google.com>, Steven Barrett <steven@liquorix.net>,
	Juergen Gross <jgross@suse.com>, Peter Xu <peterx@redhat.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	Tong Tiangen <tongtiangen@huawei.com>,
	Christoph Hellwig <hch@infradead.org>,
	"Liam R. Howlett" <Liam.Howlett@Oracle.com>,
	Yang Shi <shy828301@gmail.com>, Vlastimil Babka <vbabka@suse.cz>,
	Alex Sierra <alex.sierra@amd.com>,
	Vincent Whitchurch <vincent.whitchurch@axis.com>,
	Anshuman Khandual <anshuman.khandual@arm.com>,
	Li kunyu <kunyu@nfschina.com>, Liu Shixin <liushixin2@huawei.com>,
	Hugh Dickins <hughd@google.com>, Minchan Kim <minchan@kernel.org>,
	Joey Gouly <joey.gouly@arm.com>,
	Chih-En Lin <shiyn.lin@gmail.com>, Michal Hocko <mhocko@suse.com>,
	Suren Baghdasaryan <surenb@google.com>,
	"Zach O'Keefe" <zokeefe@google.com>,
	Gautam Menghani <gautammenghani201@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Mark Brown <broonie@kernel.org>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Andrei Vagin <avagin@gmail.com>,
	Shakeel Butt <shakeelb@google.com>,
	Daniel Bristot de Oliveira <bristot@kernel.org>,
	"Jason A. Donenfeld" <Jason@zx2c4.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Alexey Gladkov <legion@kernel.org>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-trace-kernel@vger.kernel.org,
	linux-perf-users@vger.kernel.org,
	Dinglan Peng <peng301@purdue.edu>,
	Pedro Fonseca <pfonseca@purdue.edu>,
	Jim Huang <jserv@ccns.ncku.edu.tw>,
	Huichun Feng <foxhoundsk.tw@gmail.com>
Subject: [PATCH v5 03/17] mm: Add Copy-On-Write PTE to fork()
Date: Fri, 14 Apr 2023 22:23:27 +0800
Message-ID: <20230414142341.354556-4-shiyn.lin@gmail.com>
In-Reply-To: <20230414142341.354556-1-shiyn.lin@gmail.com>

Add copy_cow_pte_range() and recover_pte_range() to support copy-on-write
(COW) PTE tables in the fork system call. During a COW PTE fork, when
processing a shareable PTE table, we traverse all of its entries to
determine whether each mapped page can be shared between processes. If
the PTE table can be shared, we account the mapped pages and then share
the table. However, once we find a mapped page that cannot be shared,
e.g., a pinned page, we have to copy it via copy_present_page(), which
means falling back to the default path, page table copying
(copy_pte_range()). And, since we may have already processed some COW-ed
PTE entries, we have to recover those entries before taking the default
path.
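
Condensed, the per-PMD decision implemented by the copy_pmd_range() hunk
below behaves roughly as follows (a sketch, not the literal diff):

  ret = copy_cow_pte_range(dst_vma, src_vma, dst_pmd, src_pmd,
                           addr, next, &recover_end);
  if (!ret)
          continue;       /* PTE table is now COW-shared with the child */
  if (ret == -EAGAIN) {
          /* Unshareable page: undo the already shared entries ... */
          if (recover_pte_range(dst_vma, src_vma, dst_pmd, src_pmd,
                                recover_end))
                  return -ENOMEM;
          /* ... then fall through to the default copy_pte_range() path. */
  }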

All COW PTE behavior is protected by the pte lock.
The logic for handling nonpresent/present pte entries and errors in
copy_cow_pte_range() is the same as in copy_pte_range(). But, to keep
the code clean (e.g., to avoid conditional locking), we introduce new
functions instead of modifying copy_pte_range().

To track the lifetime of a COW-ed PTE table, introduce a refcount for
the PTE table. We reuse the _refcount in struct page of the page table
to maintain the number of processes referencing the COW-ed PTE table.
Forking with COW PTE increases the refcount. When someone writes to a
COW-ed PTE, the write fault breaks COW on that PTE table: if the
refcount of the COW-ed PTE table is one, the faulting process reuses
the table; otherwise, it decreases the refcount and duplicates the
table.
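
As a minimal sketch of this lifetime rule (the break-COW fault handling
itself lands later in this series; only the refcount helpers added to
include/linux/mm.h below are part of this patch), the write-fault side
behaves roughly like:

  if (cow_pte_count(pmd) == 1) {
          /* Last reference: the faulting process reuses the PTE table. */
          /* ... make the pmd entry writable again ... */
  } else {
          /* Still shared: drop our reference and duplicate the table. */
          pmd_put_pte(pmd);
          /* ... allocate a fresh PTE table and copy the entries into it ... */
  }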

Since the PTE table is shared between the parent and the child, the
state of the parent's pte entries differs between a COW PTE fork and a
normal fork: COW PTE handles all the pte entries on the child side,
which means it also clears the dirty and accessed bits of the parent's
pte entries.

And, since some architectures, e.g., s390 and powerpc32, don't support
the required PMD entry and PTE table operations, add a new Kconfig
option, COW_PTE. It depends on (HAVE_ARCH_TRANSPARENT_HUGEPAGE &&
!PREEMPT_RT), the same condition as the TRANSPARENT_HUGEPAGE config,
since most of the COW PTE operations depend on it.
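
For reference, opting in from userspace is expected to look like the
sketch below (PR_SET_COW_PTE and its request number come from the prctl
patch earlier in this series, so this is illustrative only):

  #include <sys/prctl.h>
  #include <unistd.h>

  int main(void)
  {
          /* PR_SET_COW_PTE is defined by the prctl patch in this series. */
          if (prctl(PR_SET_COW_PTE, 0, 0, 0, 0))
                  return 1;       /* kernel built without COW_PTE */

          if (fork() == 0) {      /* page tables are now COW-shared */
                  /* child: the first write to a COW-ed PTE breaks COW */
                  _exit(0);
          }
          return 0;
  }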

Signed-off-by: Chih-En Lin <shiyn.lin@gmail.com>
---
 include/linux/mm.h |  20 +++
 mm/Kconfig         |   9 ++
 mm/memory.c        | 303 +++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 332 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1f79667824eb..828f8a1b1e32 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2636,6 +2636,23 @@ static inline bool ptlock_init(struct page *page) { return true; }
 static inline void ptlock_free(struct page *page) {}
 #endif /* USE_SPLIT_PTE_PTLOCKS */
 
+#ifdef CONFIG_COW_PTE
+static inline int pmd_get_pte(pmd_t *pmd)
+{
+	return page_ref_inc_return(pmd_page(*pmd));
+}
+
+static inline bool pmd_put_pte(pmd_t *pmd)
+{
+	return page_ref_add_unless(pmd_page(*pmd), -1, 1);
+}
+
+static inline int cow_pte_count(pmd_t *pmd)
+{
+	return page_count(pmd_page(*pmd));
+}
+#endif
+
 static inline void pgtable_init(void)
 {
 	ptlock_cache_init();
@@ -2648,6 +2665,9 @@ static inline bool pgtable_pte_page_ctor(struct page *page)
 		return false;
 	__SetPageTable(page);
 	inc_lruvec_page_state(page, NR_PAGETABLE);
+#ifdef CONFIG_COW_PTE
+	set_page_count(page, 1);
+#endif
 	return true;
 }
 
diff --git a/mm/Kconfig b/mm/Kconfig
index 4751031f3f05..0eac8851601b 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -841,6 +841,15 @@ config READ_ONLY_THP_FOR_FS
 
 endif # TRANSPARENT_HUGEPAGE
 
+menuconfig COW_PTE
+	bool "Copy-on-write PTE table"
+	depends on HAVE_ARCH_TRANSPARENT_HUGEPAGE && !PREEMPT_RT
+	help
+	  Extend the copy-on-write (COW) mechanism to the PTE table
+	  (the bottom level of the page-table hierarchy). To enable this
+	  feature, a process must call prctl(PR_SET_COW_PTE) before the
+	  fork system call.
+
 #
 # UP and nommu archs use km based percpu allocator
 #
diff --git a/mm/memory.c b/mm/memory.c
index 0476cf22ea33..3b1c4a7e632c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -749,11 +749,17 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *dst_vma,
 		struct vm_area_struct *src_vma, unsigned long addr, int *rss)
 {
+	/* With COW PTE, dst_vma is src_vma. */
 	unsigned long vm_flags = dst_vma->vm_flags;
 	pte_t pte = *src_pte;
 	struct page *page;
 	swp_entry_t entry = pte_to_swp_entry(pte);
 
+	/*
+	 * If it's COW PTE, the parent shares the PTE table with the child,
+	 * so the following modifications by the child also affect the parent.
+	 */
+
 	if (likely(!non_swap_entry(entry))) {
 		if (swap_duplicate(entry) < 0)
 			return -EIO;
@@ -896,6 +902,8 @@ copy_present_page(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma
 /*
  * Copy one pte.  Returns 0 if succeeded, or -EAGAIN if one preallocated page
  * is required to copy this pte.
+ * However, if prealloc is NULL, this is COW PTE; we should return -EAGAIN
+ * and fall back to copying the PTE table.
  */
 static inline int
 copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
@@ -922,6 +930,14 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		if (unlikely(page_try_dup_anon_rmap(page, false, src_vma))) {
 			/* Page may be pinned, we have to copy. */
 			folio_put(folio);
+			/*
+			 * If prealloc is NULL, we are processing a shared
+			 * page table (COW PTE, in copy_cow_pte_range()). We
+			 * cannot call copy_present_page() right now; instead,
+			 * we should fall back to copy_pte_range().
+			 */
+			if (!prealloc)
+				return -EAGAIN;
 			return copy_present_page(dst_vma, src_vma, dst_pte, src_pte,
 						 addr, rss, prealloc, page);
 		}
@@ -942,6 +958,11 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	}
 	VM_BUG_ON(page && folio_test_anon(folio) && PageAnonExclusive(page));
 
+	/*
+	 * If it's COW PTE, the parent shares the PTE table with the child,
+	 * which means the following will also affect the parent.
+	 */
+
 	/*
 	 * If it's a shared mapping, mark it clean in
 	 * the child
@@ -950,6 +971,7 @@ copy_present_pte(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 		pte = pte_mkclean(pte);
 	pte = pte_mkold(pte);
 
+	/* For COW PTE, dst_vma is still src_vma. */
 	if (!userfaultfd_wp(dst_vma))
 		pte = pte_clear_uffd_wp(pte);
 
@@ -975,6 +997,8 @@ static inline struct folio *page_copy_prealloc(struct mm_struct *src_mm,
 	return new_folio;
 }
 
+
+/* copy_pte_range() will immediately allocate a new page table. */
 static int
 copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	       pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
@@ -1099,6 +1123,227 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	return ret;
 }
 
+#ifdef CONFIG_COW_PTE
+/*
+ * copy_cow_pte_range() tries to share the page table with the child.
+ * The logic for non-present, present, and error handling is the same as
+ * in copy_pte_range(), but dst_vma and dst_pte are src_vma and src_pte.
+ *
+ * We cannot preserve soft-dirty information, because the PTE table will
+ * be shared between multiple processes.
+ */
+static int
+copy_cow_pte_range(struct vm_area_struct *dst_vma,
+		   struct vm_area_struct *src_vma,
+		   pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
+		   unsigned long end, unsigned long *recover_end)
+{
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
+	struct mm_struct *src_mm = src_vma->vm_mm;
+	struct vma_iterator vmi;
+	struct vm_area_struct *curr = src_vma;
+	pte_t *src_pte, *orig_src_pte;
+	spinlock_t *src_ptl;
+	int ret = 0;
+	int rss[NR_MM_COUNTERS];
+	swp_entry_t entry = (swp_entry_t){0};
+	unsigned long vm_end, orig_addr = addr;
+	pgtable_t pte_table = pmd_pgtable(*src_pmd);
+
+	end = (addr + PMD_SIZE) & PMD_MASK;
+	addr = addr & PMD_MASK;
+
+	/*
+	 * Increase the refcount to prevent the parent's PTE table from
+	 * being dropped/reused. Only increase the refcount the first
+	 * time it is attached.
+	 */
+	src_ptl = pte_lockptr(src_mm, src_pmd);
+	spin_lock(src_ptl);
+	pmd_get_pte(src_pmd);
+	pmd_install(dst_mm, dst_pmd, &pte_table);
+	spin_unlock(src_ptl);
+
+	/*
+	 * We should handle all of the entries in this PTE table in this
+	 * traversal, since we cannot promise that the next vma will not do
+	 * a lazy fork. A lazy fork would skip the copying, which could
+	 * leave the COW-ed PTE table in an incomplete state.
+	 */
+	vma_iter_init(&vmi, src_mm, addr);
+	for_each_vma_range(vmi, curr, end) {
+		vm_end = min(end, curr->vm_end);
+		addr = max(addr, curr->vm_start);
+
+		/* We don't share the PTE with VM_DONTCOPY. */
+		if (curr->vm_flags & VM_DONTCOPY) {
+			*recover_end = addr;
+			return -EAGAIN;
+		}
+again:
+		init_rss_vec(rss);
+		src_pte = pte_offset_map(src_pmd, addr);
+		src_ptl = pte_lockptr(src_mm, src_pmd);
+		orig_src_pte = src_pte;
+		spin_lock(src_ptl);
+		arch_enter_lazy_mmu_mode();
+
+		do {
+			if (pte_none(*src_pte))
+				continue;
+			if (unlikely(!pte_present(*src_pte))) {
+				/*
+				 * Although the parent's PTE is COW-ed, we
+				 * still need to handle all the swap entries.
+				 */
+				ret = copy_nonpresent_pte(dst_mm, src_mm,
+							  src_pte, src_pte,
+							  curr, curr,
+							  addr, rss);
+				if (ret == -EIO) {
+					entry = pte_to_swp_entry(*src_pte);
+					break;
+				} else if (ret == -EBUSY) {
+					break;
+				} else if (!ret)
+					continue;
+				/*
+				 * Device exclusive entry restored, continue by
+				 * copying the now present pte.
+				 */
+				WARN_ON_ONCE(ret != -ENOENT);
+			}
+			/*
+			 * copy_present_pte() will determine whether the
+			 * mapped page should be a COW mapping or not.
+			 */
+			ret = copy_present_pte(curr, curr, src_pte, src_pte,
+					       addr, rss, NULL);
+			/*
+			 * If we need a pre-allocated page for this pte,
+			 * drop the lock, recover all the entries, fall
+			 * back to copy_pte_range(), and try again.
+			 */
+			if (unlikely(ret == -EAGAIN))
+				break;
+		} while (src_pte++, addr += PAGE_SIZE, addr != vm_end);
+
+		arch_leave_lazy_mmu_mode();
+		add_mm_rss_vec(dst_mm, rss);
+		spin_unlock(src_ptl);
+		pte_unmap(orig_src_pte);
+		cond_resched();
+
+		if (ret == -EIO) {
+			VM_WARN_ON_ONCE(!entry.val);
+			if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) {
+				ret = -ENOMEM;
+				goto out;
+			}
+			entry.val = 0;
+		} else if (ret == -EBUSY) {
+			goto out;
+		} else if (ret == -EAGAIN) {
+			/*
+			 * We have to allocate the page immediately, but
+			 * first we should recover the processed entries and
+			 * fall back to copy_pte_range().
+			 */
+			*recover_end = addr;
+			return -EAGAIN;
+		} else if (ret) {
+			VM_WARN_ON_ONCE(1);
+		}
+
+		/* We've captured and resolved the error. Reset, try again. */
+		ret = 0;
+		if (addr != vm_end)
+			goto again;
+	}
+
+out:
+	/*
+	 * All the pte entries can be COW-mapped.
+	 * Now we can share the PTE table with the child (COW PTE).
+	 */
+	pmdp_set_wrprotect(src_mm, orig_addr, src_pmd);
+	set_pmd_at(dst_mm, orig_addr, dst_pmd, pmd_wrprotect(*src_pmd));
+
+	return ret;
+}
+
+/* When recovering the pte entries, hold both page table locks the whole time. */
+static int
+recover_pte_range(struct vm_area_struct *dst_vma,
+		  struct vm_area_struct *src_vma,
+		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long end)
+{
+	struct mm_struct *dst_mm = dst_vma->vm_mm;
+	struct mm_struct *src_mm = src_vma->vm_mm;
+	struct vma_iterator vmi;
+	struct vm_area_struct *curr = src_vma;
+	pte_t *orig_src_pte, *orig_dst_pte;
+	pte_t *src_pte, *dst_pte;
+	spinlock_t *src_ptl, *dst_ptl;
+	unsigned long vm_end, addr = end & PMD_MASK;
+	int ret = 0;
+
+	/* Before we allocate the new PTE table, clear the pmd entry. */
+	mm_dec_nr_ptes(dst_mm);
+	pmd_clear(dst_pmd);
+	if (pte_alloc(dst_mm, dst_pmd))
+		return -ENOMEM;
+
+	/*
+	 * Traverse all the vmas that cover this PTE table, up to the
+	 * recover end address (the first unshareable page).
+	 */
+	vma_iter_init(&vmi, src_mm, addr);
+	for_each_vma_range(vmi, curr, end) {
+		vm_end = min(end, curr->vm_end);
+		addr = max(addr, curr->vm_start);
+
+		orig_dst_pte = dst_pte = pte_offset_map(dst_pmd, addr);
+		dst_ptl = pte_lockptr(dst_mm, dst_pmd);
+		spin_lock(dst_ptl);
+
+		orig_src_pte = src_pte = pte_offset_map(src_pmd, addr);
+		src_ptl = pte_lockptr(src_mm, src_pmd);
+		spin_lock(src_ptl);
+		arch_enter_lazy_mmu_mode();
+
+		do {
+			if (pte_none(*src_pte))
+				continue;
+			/*
+			 * COW mapping state (e.g., PageAnonExclusive)
+			 * should already be handled by copy_cow_pte_range().
+			 * We can simply set the entry in the child.
+			 */
+			set_pte_at(dst_mm, addr, dst_pte, *src_pte);
+		} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
+
+		arch_leave_lazy_mmu_mode();
+		spin_unlock(src_ptl);
+		pte_unmap(orig_src_pte);
+
+		spin_unlock(dst_ptl);
+		pte_unmap(orig_dst_pte);
+	}
+	/*
+	 * After recovering the entries, release the child's reference.
+	 * The parent may still share with others, so don't make it writable.
+	 */
+	spin_lock(src_ptl);
+	pmd_put_pte(src_pmd);
+	spin_unlock(src_ptl);
+
+	cond_resched();
+
+	return ret;
+}
+#endif /* CONFIG_COW_PTE */
+
 static inline int
 copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 	       pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
@@ -1127,6 +1372,64 @@ copy_pmd_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
 				continue;
 			/* fall through */
 		}
+
+#ifdef CONFIG_COW_PTE
+		/*
+		 * If MMF_COW_PTE is set, try to share the PTE page table
+		 * first (copy_cow_pte_range()). In other words, we attempt
+		 * to do COW on the PTE table (and the mapped pages).
+		 * However, if there is any unshareable page (e.g., a pinned
+		 * page or a device private page), it falls back to the
+		 * default path, which copies the page table immediately.
+		 * In such a case, it stores the address of the first
+		 * unshareable page in recover_end, goes back to the
+		 * beginning of the PTE table, and recovers the COW-ed PTE
+		 * entries until it meets the same unshareable page again.
+		 * During the recovery, because the COW-ed PTE entries are
+		 * logically the same as a COW mapping, it only needs to
+		 * allocate a new PTE table and set the COW-ed entries in it.
+		 */
+		if (test_bit(MMF_COW_PTE, &src_mm->flags)) {
+			unsigned long recover_end = 0;
+			int ret;
+
+			/*
+			 * Write-protecting a pmd entry of a normal PTE table
+			 * would trigger pmd_bad(). Skip the bad check here.
+			 */
+			if (pmd_none(*src_pmd))
+				continue;
+			/* Skip if this PTE table was already COW-shared this time. */
+			if (!pmd_none(*dst_pmd) && !pmd_write(*dst_pmd))
+				continue;
+
+			ret = copy_cow_pte_range(dst_vma, src_vma,
+						 dst_pmd, src_pmd,
+						 addr, next, &recover_end);
+			if (!ret) {
+				/* COW PTE succeeded. */
+				continue;
+			} else if (ret == -EAGAIN) {
+				/* fall back to normal copy method. */
+				if (recover_pte_range(dst_vma, src_vma,
+						      dst_pmd, src_pmd,
+						      recover_end))
+					return -ENOMEM;
+				/*
+				 * Since we processed all the entries of the
+				 * PTE table, recover_end may not be in src_vma.
+				 * If we already handled src_vma, skip it.
+				 */
+				if (!range_in_vma(src_vma, recover_end,
+						  recover_end + PAGE_SIZE))
+					continue;
+				else
+					addr = recover_end;
+				/* fall through */
+			} else if (ret)
+				return -ENOMEM;
+		}
+#endif /* CONFIG_COW_PTE */
 		if (pmd_none_or_clear_bad(src_pmd))
 			continue;
 		if (copy_pte_range(dst_vma, src_vma, dst_pmd, src_pmd,
-- 
2.34.1


