* [PATCH v4 0/6] Fix some bugs related to rmap and dax
@ 2022-03-02  8:27 Muchun Song
  2022-03-02  8:27 ` [PATCH v4 1/6] mm: rmap: fix cache flush on THP pages Muchun Song
                   ` (5 more replies)
  0 siblings, 6 replies; 17+ messages in thread
From: Muchun Song @ 2022-03-02  8:27 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	smuchun, Muchun Song

This series is based on next-20220225.

Patches 1-2 fix a cache flush bug; because subsequent patches depend on
those changes, they are placed in this series.  Patches 3-4 are
preparation for fixing a DAX bug in patch 5.  Patch 6 is a code cleanup
since the previous patch removes the usage of follow_invalidate_pte().

v4:
- Fix compilation error on riscv.

v3:
- Based on next-20220225.

v2:
- Avoid overly long lines in lots of places, as suggested by Christoph.
- Fix a compiler warning reported by the kernel test robot since pmd_pfn()
  is not defined when !CONFIG_TRANSPARENT_HUGEPAGE on powerpc.
- Split out a new patch 4 in preparation for fixing the dax bug.

Muchun Song (6):
  mm: rmap: fix cache flush on THP pages
  dax: fix cache flush on PMD-mapped pages
  mm: rmap: introduce pfn_mkclean_range() to clean PTEs
  mm: pvmw: add support for walking devmap pages
  dax: fix missing writeprotect the pte entry
  mm: remove range parameter from follow_invalidate_pte()

 fs/dax.c             | 82 +++++-----------------------------------------------
 include/linux/mm.h   |  3 --
 include/linux/rmap.h |  3 ++
 mm/internal.h        | 26 +++++++++++------
 mm/memory.c          | 23 ++-------------
 mm/page_vma_mapped.c |  5 ++--
 mm/rmap.c            | 68 +++++++++++++++++++++++++++++++++++--------
 7 files changed, 89 insertions(+), 121 deletions(-)

-- 
2.11.0



* [PATCH v4 1/6] mm: rmap: fix cache flush on THP pages
  2022-03-02  8:27 [PATCH v4 0/6] Fix some bugs related to rmap and dax Muchun Song
@ 2022-03-02  8:27 ` Muchun Song
  2022-03-10  0:01   ` Dan Williams
  2022-03-02  8:27 ` [PATCH v4 2/6] dax: fix cache flush on PMD-mapped pages Muchun Song
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Muchun Song @ 2022-03-02  8:27 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	smuchun, Muchun Song

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so for a THP it covers only the head page and none of the other subpages.
Replace it with flush_cache_range() to fix this issue.  At least, no
problems have been found due to this, perhaps because few architectures
have virtually indexed caches.
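
For a PMD-mapped THP, the single-page call would have to be repeated for
every subpage to flush the whole mapping.  A rough sketch of the
difference (illustration only; vma, address and folio are the same
variables as in the hunk below):

  unsigned long i;

  /* old: flushes only the PAGE_SIZE range at @address */
  flush_cache_page(vma, address, folio_pfn(folio));

  /* what a correct per-page variant would have to do instead */
  for (i = 0; i < HPAGE_PMD_NR; i++)
          flush_cache_page(vma, address + i * PAGE_SIZE,
                           folio_pfn(folio) + i);

  /* new: one call covers the whole PMD-sized mapping */
  flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);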

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index fc46a3d7b704..723682ddb9e8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -970,7 +970,8 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;
 
-			flush_cache_page(vma, address, folio_pfn(folio));
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);
-- 
2.11.0



* [PATCH v4 2/6] dax: fix cache flush on PMD-mapped pages
  2022-03-02  8:27 [PATCH v4 0/6] Fix some bugs related to rmap and dax Muchun Song
  2022-03-02  8:27 ` [PATCH v4 1/6] mm: rmap: fix cache flush on THP pages Muchun Song
@ 2022-03-02  8:27 ` Muchun Song
  2022-03-10  0:06   ` Dan Williams
  2022-03-02  8:27 ` [PATCH v4 3/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs Muchun Song
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Muchun Song @ 2022-03-02  8:27 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	smuchun, Muchun Song

flush_cache_page() only removes a PAGE_SIZE-sized range from the cache,
so for a PMD-mapped page it covers only the head page and none of the
other subpages.  Replace it with flush_cache_range() to fix this issue.

Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 67a08a32fccb..a372304c9695 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -845,7 +845,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
 				goto unlock_pmd;
 
-			flush_cache_page(vma, address, pfn);
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			pmd = pmdp_invalidate(vma, address, pmdp);
 			pmd = pmd_wrprotect(pmd);
 			pmd = pmd_mkclean(pmd);
-- 
2.11.0



* [PATCH v4 3/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs
  2022-03-02  8:27 [PATCH v4 0/6] Fix some bugs related to rmap and dax Muchun Song
  2022-03-02  8:27 ` [PATCH v4 1/6] mm: rmap: fix cache flush on THP pages Muchun Song
  2022-03-02  8:27 ` [PATCH v4 2/6] dax: fix cache flush on PMD-mapped pages Muchun Song
@ 2022-03-02  8:27 ` Muchun Song
  2022-03-10  0:26   ` Dan Williams
  2022-03-02  8:27 ` [PATCH v4 4/6] mm: pvmw: add support for walking devmap pages Muchun Song
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Muchun Song @ 2022-03-02  8:27 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	smuchun, Muchun Song

page_mkclean_one() is supposed to be used with a pfn that has an
associated struct page, but not all pfns (e.g. DAX) have a struct
page.  Introduce a new function, pfn_mkclean_range(), to clean the PTEs
(including PMDs) mapped with a range of pfns that have no struct page
associated with them.  This helper will be used by the DAX device in the
next patch to make pfns clean.
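
A minimal sketch of the intended call site (illustration only; the real
caller is added to the DAX writeback path in the next patch, where
@mapping, @pfn, @nr_pages and @pgoff describe the entry being written
back):

  struct vm_area_struct *vma;
  pgoff_t end = pgoff + nr_pages - 1;

  i_mmap_lock_read(mapping);
  vma_interval_tree_foreach(vma, &mapping->i_mmap, pgoff, end) {
          /* clean and write-protect every PTE/PMD mapping the range */
          pfn_mkclean_range(pfn, nr_pages, pgoff, vma);
          cond_resched();
  }
  i_mmap_unlock_read(mapping);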

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/rmap.h |  3 +++
 mm/internal.h        | 26 +++++++++++++--------
 mm/rmap.c            | 65 +++++++++++++++++++++++++++++++++++++++++++---------
 3 files changed, 74 insertions(+), 20 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b58ddb8b2220..a6ec0d3e40c1 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -263,6 +263,9 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
  */
 int folio_mkclean(struct folio *);
 
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma);
+
 void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
 
 /*
diff --git a/mm/internal.h b/mm/internal.h
index f45292dc4ef5..ff873944749f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -516,26 +516,22 @@ void mlock_page_drain(int cpu);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of the user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+		  struct vm_area_struct *vma)
 {
-	pgoff_t pgoff;
 	unsigned long address;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page);
 	if (pgoff >= vma->vm_pgoff) {
 		address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		/* Check for address beyond vma (or wrapped through 0?) */
 		if (address < vma->vm_start || address >= vma->vm_end)
 			address = -EFAULT;
-	} else if (PageHead(page) &&
-		   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+	} else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
 		/* Test above avoids possibility of wrap to 0 on 32-bit */
 		address = vma->vm_start;
 	} else {
@@ -545,6 +541,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
+ * Return the start of user virtual address of a page within a vma.
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
+/*
  * Then at what user virtual address will none of the range be found in vma?
  * Assumes that vma_address() already returned a good starting address.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 723682ddb9e8..ad5cf0e45a73 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -929,12 +929,12 @@ int folio_referenced(struct folio *folio, int is_locked,
 	return pra.referenced;
 }
 
-static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
-			    unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int cleaned = 0;
+	struct vm_area_struct *vma = pvmw->vma;
 	struct mmu_notifier_range range;
-	int *cleaned = arg;
+	unsigned long address = pvmw->address;
 
 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
@@ -942,16 +942,16 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 	 */
 	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
 				0, vma, vma->vm_mm, address,
-				vma_address_end(&pvmw));
+				vma_address_end(pvmw));
 	mmu_notifier_invalidate_range_start(&range);
 
-	while (page_vma_mapped_walk(&pvmw)) {
+	while (page_vma_mapped_walk(pvmw)) {
 		int ret = 0;
 
-		address = pvmw.address;
-		if (pvmw.pte) {
+		address = pvmw->address;
+		if (pvmw->pte) {
 			pte_t entry;
-			pte_t *pte = pvmw.pte;
+			pte_t *pte = pvmw->pte;
 
 			if (!pte_dirty(*pte) && !pte_write(*pte))
 				continue;
@@ -964,7 +964,7 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 			ret = 1;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			pmd_t *pmd = pvmw.pmd;
+			pmd_t *pmd = pvmw->pmd;
 			pmd_t entry;
 
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
@@ -991,11 +991,22 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
 		 * See Documentation/vm/mmu_notifier.rst
 		 */
 		if (ret)
-			(*cleaned)++;
+			cleaned++;
 	}
 
 	mmu_notifier_invalidate_range_end(&range);
 
+	return cleaned;
+}
+
+static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
+			     unsigned long address, void *arg)
+{
+	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
+	int *cleaned = arg;
+
+	*cleaned += page_vma_mkclean_one(&pvmw);
+
 	return true;
 }
 
@@ -1033,6 +1044,38 @@ int folio_mkclean(struct folio *folio)
 EXPORT_SYMBOL_GPL(folio_mkclean);
 
 /**
+ * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped within the range
+ *                     [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff)
+ *                     within the @vma of shared mappings. Since clean PTEs
+ *                     should also be read-only, write-protects them too.
+ * @pfn: start pfn.
+ * @nr_pages: number of physically contiguous pages starting with @pfn.
+ * @pgoff: page offset that @pfn is mapped at.
+ * @vma: vma that @pfn is mapped within.
+ *
+ * Returns the number of cleaned PTEs (including PMDs).
+ */
+int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
+		      struct vm_area_struct *vma)
+{
+	struct page_vma_mapped_walk pvmw = {
+		.pfn		= pfn,
+		.nr_pages	= nr_pages,
+		.pgoff		= pgoff,
+		.vma		= vma,
+		.flags		= PVMW_SYNC,
+	};
+
+	if (invalid_mkclean_vma(vma, NULL))
+		return 0;
+
+	pvmw.address = vma_pgoff_address(pgoff, nr_pages, vma);
+	VM_BUG_ON_VMA(pvmw.address == -EFAULT, vma);
+
+	return page_vma_mkclean_one(&pvmw);
+}
+
+/**
  * page_move_anon_rmap - move a page to our anon_vma
  * @page:	the page to move to our anon_vma
  * @vma:	the vma the page belongs to
-- 
2.11.0



* [PATCH v4 4/6] mm: pvmw: add support for walking devmap pages
  2022-03-02  8:27 [PATCH v4 0/6] Fix some bugs related to rmap and dax Muchun Song
                   ` (2 preceding siblings ...)
  2022-03-02  8:27 ` [PATCH v4 3/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs Muchun Song
@ 2022-03-02  8:27 ` Muchun Song
  2022-03-02  8:27 ` [PATCH v4 5/6] dax: fix missing writeprotect the pte entry Muchun Song
  2022-03-02  8:27 ` [PATCH v4 6/6] mm: remove range parameter from follow_invalidate_pte() Muchun Song
  5 siblings, 0 replies; 17+ messages in thread
From: Muchun Song @ 2022-03-02  8:27 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	smuchun, Muchun Song

page_vma_mapped_walk() currently cannot be used to check whether a huge
devmap page is mapped into a vma.  Add support for walking huge devmap
pages so that DAX can use it in the next patch.
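
For context (illustration only, not part of the diff): on x86, for
example, pmd_trans_huge() explicitly excludes devmap PMDs, so a walker
that only tests pmd_trans_huge() cannot be relied on to take the
PMD-level path for a huge DAX mapping.  Roughly:

  pmd_t pmde = READ_ONCE(*pvmw->pmd);

  /* both tests are needed to reliably catch a huge (DAX) devmap mapping */
  if (pmd_trans_huge(pmde) || pmd_devmap(pmde) ||
      is_pmd_migration_entry(pmde)) {
          /* PMD-level handling, as in the hunk below */
  }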

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/page_vma_mapped.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index 1187f9c1ec5b..f9ffa84adf4d 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -210,10 +210,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		 */
 		pmde = READ_ONCE(*pvmw->pmd);
 
-		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+		if (pmd_trans_huge(pmde) || pmd_devmap(pmde) ||
+		    is_pmd_migration_entry(pmde)) {
 			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 			pmde = *pvmw->pmd;
-			if (likely(pmd_trans_huge(pmde))) {
+			if (likely(pmd_trans_huge(pmde) || pmd_devmap(pmde))) {
 				if (pvmw->flags & PVMW_MIGRATION)
 					return not_found(pvmw);
 				if (!check_pmd(pmd_pfn(pmde), pvmw))
-- 
2.11.0



* [PATCH v4 5/6] dax: fix missing writeprotect the pte entry
  2022-03-02  8:27 [PATCH v4 0/6] Fix some bugs related to rmap and dax Muchun Song
                   ` (3 preceding siblings ...)
  2022-03-02  8:27 ` [PATCH v4 4/6] mm: pvmw: add support for walking devmap pages Muchun Song
@ 2022-03-02  8:27 ` Muchun Song
  2022-03-10  0:59   ` Dan Williams
  2022-03-02  8:27 ` [PATCH v4 6/6] mm: remove range parameter from follow_invalidate_pte() Muchun Song
  5 siblings, 1 reply; 17+ messages in thread
From: Muchun Song @ 2022-03-02  8:27 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	smuchun, Muchun Song

Currently dax_mapping_entry_mkclean() fails to clean and write protect
the pte entry within a DAX PMD entry during an fsync/msync operation. This
can result in data loss in the following sequence:

  1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
     making the pmd entry dirty and writeable.
  2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
     write to the same file, dirtying PMD radix tree entry (already
     done in 1)) and making the pte entry dirty and writeable.
  3) fsync, flushing out PMD data and cleaning the radix tree entry. We
     currently fail to mark the pte entry as clean and write protected
     since the vma of process B is not covered in dax_entry_mkclean().
  4) process B writes to the pte. These don't cause any page faults since
     the pte entry is dirty and writeable. The radix tree entry remains
     clean.
  5) fsync, which fails to flush the dirty PMD data because the radix tree
     entry was clean.
  6) crash - dirty data that should have been fsync'd as part of 5) could
     still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the pfns to fix this issue.

Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 fs/dax.c | 83 ++++++----------------------------------------------------------
 1 file changed, 7 insertions(+), 76 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index a372304c9695..7fd4a16769f9 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -24,6 +24,7 @@
 #include <linux/sizes.h>
 #include <linux/mmu_notifier.h>
 #include <linux/iomap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>
 
 #define CREATE_TRACE_POINTS
@@ -789,87 +790,17 @@ static void *dax_insert_entry(struct xa_state *xas,
 	return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-	unsigned long address;
-
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-		unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+			      unsigned long npfn, pgoff_t start)
 {
 	struct vm_area_struct *vma;
-	pte_t pte, *ptep = NULL;
-	pmd_t *pmdp = NULL;
-	spinlock_t *ptl;
+	pgoff_t end = start + npfn - 1;
 
 	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		struct mmu_notifier_range range;
-		unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+		pfn_mkclean_range(pfn, npfn, start, vma);
 		cond_resched();
-
-		if (!(vma->vm_flags & VM_SHARED))
-			continue;
-
-		address = pgoff_address(index, vma);
-
-		/*
-		 * follow_invalidate_pte() will use the range to call
-		 * mmu_notifier_invalidate_range_start() on our behalf before
-		 * taking any lock.
-		 */
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-			continue;
-
-		/*
-		 * No need to call mmu_notifier_invalidate_range() as we are
-		 * downgrading page table protection not changing it to point
-		 * to a new page.
-		 *
-		 * See Documentation/vm/mmu_notifier.rst
-		 */
-		if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-			pmd_t pmd;
-
-			if (pfn != pmd_pfn(*pmdp))
-				goto unlock_pmd;
-			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-				goto unlock_pmd;
-
-			flush_cache_range(vma, address,
-					  address + HPAGE_PMD_SIZE);
-			pmd = pmdp_invalidate(vma, address, pmdp);
-			pmd = pmd_wrprotect(pmd);
-			pmd = pmd_mkclean(pmd);
-			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-			spin_unlock(ptl);
-		} else {
-			if (pfn != pte_pfn(*ptep))
-				goto unlock_pte;
-			if (!pte_dirty(*ptep) && !pte_write(*ptep))
-				goto unlock_pte;
-
-			flush_cache_page(vma, address, pfn);
-			pte = ptep_clear_flush(vma, address, ptep);
-			pte = pte_wrprotect(pte);
-			pte = pte_mkclean(pte);
-			set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-			pte_unmap_unlock(ptep, ptl);
-		}
-
-		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
@@ -937,7 +868,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	count = 1UL << dax_entry_order(entry);
 	index = xas->xa_index & ~(count - 1);
 
-	dax_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, pfn, count, index);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag. There
-- 
2.11.0



* [PATCH v4 6/6] mm: remove range parameter from follow_invalidate_pte()
  2022-03-02  8:27 [PATCH v4 0/6] Fix some bugs related to rmap and dax Muchun Song
                   ` (4 preceding siblings ...)
  2022-03-02  8:27 ` [PATCH v4 5/6] dax: fix missing writeprotect the pte entry Muchun Song
@ 2022-03-02  8:27 ` Muchun Song
  2022-03-10  1:02   ` Dan Williams
  5 siblings, 1 reply; 17+ messages in thread
From: Muchun Song @ 2022-03-02  8:27 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	smuchun, Muchun Song

The only user (DAX) of the range parameter of follow_invalidate_pte()
is gone, so it is safe to remove the range parameter and make the
function static to simplify the code.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/mm.h |  3 ---
 mm/memory.c        | 23 +++--------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index c9bada4096ac..be7ec4c37ebe 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1871,9 +1871,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int
 copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index cc6968dc8e4e..278ab6d62b54 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4964,9 +4964,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp)
+static int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
+				 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4993,31 +4992,17 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 		if (!pmdpp)
 			goto out;
 
-		if (range) {
-			mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-						NULL, mm, address & PMD_MASK,
-						(address & PMD_MASK) + PMD_SIZE);
-			mmu_notifier_invalidate_range_start(range);
-		}
 		*ptlp = pmd_lock(mm, pmd);
 		if (pmd_huge(*pmd)) {
 			*pmdpp = pmd;
 			return 0;
 		}
 		spin_unlock(*ptlp);
-		if (range)
-			mmu_notifier_invalidate_range_end(range);
 	}
 
 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
-	if (range) {
-		mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-					address & PAGE_MASK,
-					(address & PAGE_MASK) + PAGE_SIZE);
-		mmu_notifier_invalidate_range_start(range);
-	}
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
 	if (!pte_present(*ptep))
 		goto unlock;
@@ -5025,8 +5010,6 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 	return 0;
 unlock:
 	pte_unmap_unlock(ptep, *ptlp);
-	if (range)
-		mmu_notifier_invalidate_range_end(range);
 out:
 	return -EINVAL;
 }
@@ -5055,7 +5038,7 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp)
 {
-	return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+	return follow_invalidate_pte(mm, address, ptepp, NULL, ptlp);
 }
 EXPORT_SYMBOL_GPL(follow_pte);
 
-- 
2.11.0



* Re: [PATCH v4 1/6] mm: rmap: fix cache flush on THP pages
  2022-03-02  8:27 ` [PATCH v4 1/6] mm: rmap: fix cache flush on THP pages Muchun Song
@ 2022-03-10  0:01   ` Dan Williams
  0 siblings, 0 replies; 17+ messages in thread
From: Dan Williams @ 2022-03-10  0:01 UTC (permalink / raw)
  To: Muchun Song
  Cc: Matthew Wilcox, Jan Kara, Al Viro, Andrew Morton,
	Alistair Popple, Yang Shi, Ralph Campbell, Hugh Dickins,
	xiyuyang19, Kirill A. Shutemov, Ross Zwisler, Christoph Hellwig,
	linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, Linux MM,
	duanxiongchun, Muchun Song

On Wed, Mar 2, 2022 at 12:29 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> The flush_cache_page() only remove a PAGE_SIZE sized range from the cache.
> However, it does not cover the full pages in a THP except a head page.
> Replace it with flush_cache_range() to fix this issue. At least, no
> problems were found due to this. Maybe because the architectures that
> have virtual indexed caches is less.
>
> Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Reviewed-by: Yang Shi <shy828301@gmail.com>

Reviewed-by: Dan Williams <dan.j.williams@intel.com>


* Re: [PATCH v4 2/6] dax: fix cache flush on PMD-mapped pages
  2022-03-02  8:27 ` [PATCH v4 2/6] dax: fix cache flush on PMD-mapped pages Muchun Song
@ 2022-03-10  0:06   ` Dan Williams
  2022-03-10 13:48     ` Muchun Song
  0 siblings, 1 reply; 17+ messages in thread
From: Dan Williams @ 2022-03-10  0:06 UTC (permalink / raw)
  To: Muchun Song
  Cc: Matthew Wilcox, Jan Kara, Al Viro, Andrew Morton,
	Alistair Popple, Yang Shi, Ralph Campbell, Hugh Dickins,
	xiyuyang19, Kirill A. Shutemov, Ross Zwisler, Christoph Hellwig,
	linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, Linux MM,
	duanxiongchun, Muchun Song

On Wed, Mar 2, 2022 at 12:29 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> The flush_cache_page() only remove a PAGE_SIZE sized range from the cache.
> However, it does not cover the full pages in a THP except a head page.
> Replace it with flush_cache_range() to fix this issue.

This needs to clarify that this is just a documentation issue with
respect to properly documenting the expected usage of cache flushing
before modifying the pmd. However, in practice this is not a problem
due to the fact that DAX is not available on architectures with
virtually indexed caches per:

d92576f1167c dax: does not work correctly with virtual aliasing caches

Otherwise, you can add:

Reviewed-by: Dan Williams <dan.j.williams@intel.com>


* Re: [PATCH v4 3/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs
  2022-03-02  8:27 ` [PATCH v4 3/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs Muchun Song
@ 2022-03-10  0:26   ` Dan Williams
  2022-03-10  0:40     ` Dan Williams
  0 siblings, 1 reply; 17+ messages in thread
From: Dan Williams @ 2022-03-10  0:26 UTC (permalink / raw)
  To: Muchun Song
  Cc: Matthew Wilcox, Jan Kara, Al Viro, Andrew Morton,
	Alistair Popple, Yang Shi, Ralph Campbell, Hugh Dickins,
	xiyuyang19, Kirill A. Shutemov, Ross Zwisler, Christoph Hellwig,
	linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, Linux MM,
	duanxiongchun, Muchun Song

On Wed, Mar 2, 2022 at 12:29 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> The page_mkclean_one() is supposed to be used with the pfn that has a
> associated struct page, but not all the pfns (e.g. DAX) have a struct
> page. Introduce a new function pfn_mkclean_range() to cleans the PTEs
> (including PMDs) mapped with range of pfns which has no struct page
> associated with them. This helper will be used by DAX device in the
> next patch to make pfns clean.

This seems unfortunate given the desire to kill off
CONFIG_FS_DAX_LIMITED which is the only way to get DAX without 'struct
page'.

I would special case these helpers behind CONFIG_FS_DAX_LIMITED such
that they can be deleted when that support is finally removed.
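
A rough, untested sketch of that idea (not from this series), e.g. in
include/linux/rmap.h:

  #ifdef CONFIG_FS_DAX_LIMITED
  /* only the page-less DAX case needs the pfn-based helper */
  int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages,
                        pgoff_t pgoff, struct vm_area_struct *vma);
  #else
  static inline int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages,
                                      pgoff_t pgoff, struct vm_area_struct *vma)
  {
          /* with struct pages available, callers can use page_mkclean_one() */
          return 0;
  }
  #endif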

>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  include/linux/rmap.h |  3 +++
>  mm/internal.h        | 26 +++++++++++++--------
>  mm/rmap.c            | 65 +++++++++++++++++++++++++++++++++++++++++++---------
>  3 files changed, 74 insertions(+), 20 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index b58ddb8b2220..a6ec0d3e40c1 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -263,6 +263,9 @@ unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
>   */
>  int folio_mkclean(struct folio *);
>
> +int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
> +                     struct vm_area_struct *vma);
> +
>  void remove_migration_ptes(struct folio *src, struct folio *dst, bool locked);
>
>  /*
> diff --git a/mm/internal.h b/mm/internal.h
> index f45292dc4ef5..ff873944749f 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -516,26 +516,22 @@ void mlock_page_drain(int cpu);
>  extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
>
>  /*
> - * At what user virtual address is page expected in vma?
> - * Returns -EFAULT if all of the page is outside the range of vma.
> - * If page is a compound head, the entire compound page is considered.
> + * * Return the start of user virtual address at the specific offset within
> + * a vma.
>   */
>  static inline unsigned long
> -vma_address(struct page *page, struct vm_area_struct *vma)
> +vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
> +                 struct vm_area_struct *vma)
>  {
> -       pgoff_t pgoff;
>         unsigned long address;
>
> -       VM_BUG_ON_PAGE(PageKsm(page), page);    /* KSM page->index unusable */
> -       pgoff = page_to_pgoff(page);
>         if (pgoff >= vma->vm_pgoff) {
>                 address = vma->vm_start +
>                         ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
>                 /* Check for address beyond vma (or wrapped through 0?) */
>                 if (address < vma->vm_start || address >= vma->vm_end)
>                         address = -EFAULT;
> -       } else if (PageHead(page) &&
> -                  pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
> +       } else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
>                 /* Test above avoids possibility of wrap to 0 on 32-bit */
>                 address = vma->vm_start;
>         } else {
> @@ -545,6 +541,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
>  }
>
>  /*
> + * Return the start of user virtual address of a page within a vma.
> + * Returns -EFAULT if all of the page is outside the range of vma.
> + * If page is a compound head, the entire compound page is considered.
> + */
> +static inline unsigned long
> +vma_address(struct page *page, struct vm_area_struct *vma)
> +{
> +       VM_BUG_ON_PAGE(PageKsm(page), page);    /* KSM page->index unusable */
> +       return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
> +}
> +
> +/*
>   * Then at what user virtual address will none of the range be found in vma?
>   * Assumes that vma_address() already returned a good starting address.
>   */
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 723682ddb9e8..ad5cf0e45a73 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -929,12 +929,12 @@ int folio_referenced(struct folio *folio, int is_locked,
>         return pra.referenced;
>  }
>
> -static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
> -                           unsigned long address, void *arg)
> +static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
>  {
> -       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
> +       int cleaned = 0;
> +       struct vm_area_struct *vma = pvmw->vma;
>         struct mmu_notifier_range range;
> -       int *cleaned = arg;
> +       unsigned long address = pvmw->address;
>
>         /*
>          * We have to assume the worse case ie pmd for invalidation. Note that
> @@ -942,16 +942,16 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
>          */
>         mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
>                                 0, vma, vma->vm_mm, address,
> -                               vma_address_end(&pvmw));
> +                               vma_address_end(pvmw));
>         mmu_notifier_invalidate_range_start(&range);
>
> -       while (page_vma_mapped_walk(&pvmw)) {
> +       while (page_vma_mapped_walk(pvmw)) {
>                 int ret = 0;
>
> -               address = pvmw.address;
> -               if (pvmw.pte) {
> +               address = pvmw->address;
> +               if (pvmw->pte) {
>                         pte_t entry;
> -                       pte_t *pte = pvmw.pte;
> +                       pte_t *pte = pvmw->pte;
>
>                         if (!pte_dirty(*pte) && !pte_write(*pte))
>                                 continue;
> @@ -964,7 +964,7 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
>                         ret = 1;
>                 } else {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -                       pmd_t *pmd = pvmw.pmd;
> +                       pmd_t *pmd = pvmw->pmd;
>                         pmd_t entry;
>
>                         if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
> @@ -991,11 +991,22 @@ static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
>                  * See Documentation/vm/mmu_notifier.rst
>                  */
>                 if (ret)
> -                       (*cleaned)++;
> +                       cleaned++;
>         }
>
>         mmu_notifier_invalidate_range_end(&range);
>
> +       return cleaned;
> +}
> +
> +static bool page_mkclean_one(struct folio *folio, struct vm_area_struct *vma,
> +                            unsigned long address, void *arg)
> +{
> +       DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, address, PVMW_SYNC);
> +       int *cleaned = arg;
> +
> +       *cleaned += page_vma_mkclean_one(&pvmw);
> +
>         return true;
>  }
>
> @@ -1033,6 +1044,38 @@ int folio_mkclean(struct folio *folio)
>  EXPORT_SYMBOL_GPL(folio_mkclean);
>
>  /**
> + * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of
> + *                     [@pfn, @pfn + @nr_pages) at the specific offset (@pgoff)
> + *                     within the @vma of shared mappings. And since clean PTEs
> + *                     should also be readonly, write protects them too.
> + * @pfn: start pfn.
> + * @nr_pages: number of physically contiguous pages srarting with @pfn.
> + * @pgoff: page offset that the @pfn mapped with.
> + * @vma: vma that @pfn mapped within.
> + *
> + * Returns the number of cleaned PTEs (including PMDs).
> + */
> +int pfn_mkclean_range(unsigned long pfn, unsigned long nr_pages, pgoff_t pgoff,
> +                     struct vm_area_struct *vma)
> +{
> +       struct page_vma_mapped_walk pvmw = {
> +               .pfn            = pfn,
> +               .nr_pages       = nr_pages,
> +               .pgoff          = pgoff,
> +               .vma            = vma,
> +               .flags          = PVMW_SYNC,
> +       };
> +
> +       if (invalid_mkclean_vma(vma, NULL))
> +               return 0;
> +
> +       pvmw.address = vma_pgoff_address(pgoff, nr_pages, vma);
> +       VM_BUG_ON_VMA(pvmw.address == -EFAULT, vma);
> +
> +       return page_vma_mkclean_one(&pvmw);
> +}
> +
> +/**
>   * page_move_anon_rmap - move a page to our anon_vma
>   * @page:      the page to move to our anon_vma
>   * @vma:       the vma the page belongs to
> --
> 2.11.0
>


* Re: [PATCH v4 3/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs
  2022-03-10  0:26   ` Dan Williams
@ 2022-03-10  0:40     ` Dan Williams
  0 siblings, 0 replies; 17+ messages in thread
From: Dan Williams @ 2022-03-10  0:40 UTC (permalink / raw)
  To: Muchun Song
  Cc: Matthew Wilcox, Jan Kara, Al Viro, Andrew Morton,
	Alistair Popple, Yang Shi, Ralph Campbell, Hugh Dickins,
	xiyuyang19, Kirill A. Shutemov, Ross Zwisler, Christoph Hellwig,
	linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, Linux MM,
	duanxiongchun, Muchun Song

On Wed, Mar 9, 2022 at 4:26 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Wed, Mar 2, 2022 at 12:29 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > The page_mkclean_one() is supposed to be used with the pfn that has a
> > associated struct page, but not all the pfns (e.g. DAX) have a struct
> > page. Introduce a new function pfn_mkclean_range() to cleans the PTEs
> > (including PMDs) mapped with range of pfns which has no struct page
> > associated with them. This helper will be used by DAX device in the
> > next patch to make pfns clean.
>
> This seems unfortunate given the desire to kill off
> CONFIG_FS_DAX_LIMITED which is the only way to get DAX without 'struct
> page'.
>
> I would special case these helpers behind CONFIG_FS_DAX_LIMITED such
> that they can be deleted when that support is finally removed.

...unless this support is to be used for other PFN_MAP scenarios where
a 'struct page' is not available? If so then the "(e.g. DAX)" should
be clarified to those other cases.


* Re: [PATCH v4 5/6] dax: fix missing writeprotect the pte entry
  2022-03-02  8:27 ` [PATCH v4 5/6] dax: fix missing writeprotect the pte entry Muchun Song
@ 2022-03-10  0:59   ` Dan Williams
  2022-03-11  9:04     ` Muchun Song
  0 siblings, 1 reply; 17+ messages in thread
From: Dan Williams @ 2022-03-10  0:59 UTC (permalink / raw)
  To: Muchun Song
  Cc: Matthew Wilcox, Jan Kara, Al Viro, Andrew Morton,
	Alistair Popple, Yang Shi, Ralph Campbell, Hugh Dickins,
	xiyuyang19, Kirill A. Shutemov, Ross Zwisler, Christoph Hellwig,
	linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, Linux MM,
	duanxiongchun, Muchun Song

On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> Currently dax_mapping_entry_mkclean() fails to clean and write protect
> the pte entry within a DAX PMD entry during an *sync operation. This
> can result in data loss in the following sequence:
>
>   1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
>      making the pmd entry dirty and writeable.
>   2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
>      write to the same file, dirtying PMD radix tree entry (already
>      done in 1)) and making the pte entry dirty and writeable.
>   3) fsync, flushing out PMD data and cleaning the radix tree entry. We
>      currently fail to mark the pte entry as clean and write protected
>      since the vma of process B is not covered in dax_entry_mkclean().
>   4) process B writes to the pte. These don't cause any page faults since
>      the pte entry is dirty and writeable. The radix tree entry remains
>      clean.
>   5) fsync, which fails to flush the dirty PMD data because the radix tree
>      entry was clean.
>   6) crash - dirty data that should have been fsync'd as part of 5) could
>      still have been in the processor cache, and is lost.

Excellent description.

>
> Just to use pfn_mkclean_range() to clean the pfns to fix this issue.

So the original motivation for CONFIG_FS_DAX_LIMITED was for archs
that do not have spare PTE bits to indicate pmd_devmap(). So this fix
can only work in the CONFIG_FS_DAX_LIMITED=n case and in that case it
seems you can use the current page_mkclean_one(), right? So perhaps
the fix is to skip patch 3, keep patch 4 and make this patch use
page_mkclean_one() along with this:

diff --git a/fs/Kconfig b/fs/Kconfig
index 7a2b11c0b803..42108adb7a78 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -83,6 +83,7 @@ config FS_DAX_PMD
        depends on FS_DAX
        depends on ZONE_DEVICE
        depends on TRANSPARENT_HUGEPAGE
+       depends on !FS_DAX_LIMITED

 # Selected by DAX drivers that do not expect filesystem DAX to support
 # get_user_pages() of DAX mappings. I.e. "limited" indicates no support

...to preclude the pmd conflict in that case?


* Re: [PATCH v4 6/6] mm: remove range parameter from follow_invalidate_pte()
  2022-03-02  8:27 ` [PATCH v4 6/6] mm: remove range parameter from follow_invalidate_pte() Muchun Song
@ 2022-03-10  1:02   ` Dan Williams
  0 siblings, 0 replies; 17+ messages in thread
From: Dan Williams @ 2022-03-10  1:02 UTC (permalink / raw)
  To: Muchun Song
  Cc: Matthew Wilcox, Jan Kara, Al Viro, Andrew Morton,
	Alistair Popple, Yang Shi, Ralph Campbell, Hugh Dickins,
	xiyuyang19, Kirill A. Shutemov, Ross Zwisler, Christoph Hellwig,
	linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, Linux MM,
	duanxiongchun, Muchun Song

On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> The only user (DAX) of range parameter of follow_invalidate_pte()
> is gone, it safe to remove the range paramter and make it static
> to simlify the code.
>

Looks good. I suspect this savings is still valid if the "just use
page_mkclean_one() directly" feedback is workable.

Otherwise you can add:

Reviewed-by: Dan Williams <dan.j.williams@intel.com>


* Re: [PATCH v4 2/6] dax: fix cache flush on PMD-mapped pages
  2022-03-10  0:06   ` Dan Williams
@ 2022-03-10 13:48     ` Muchun Song
  0 siblings, 0 replies; 17+ messages in thread
From: Muchun Song @ 2022-03-10 13:48 UTC (permalink / raw)
  To: Dan Williams
  Cc: Matthew Wilcox, Jan Kara, Al Viro, Andrew Morton,
	Alistair Popple, Yang Shi, Ralph Campbell, Hugh Dickins,
	Xiyu Yang, Kirill A. Shutemov, Ross Zwisler, Christoph Hellwig,
	linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, Linux MM,
	Xiongchun duan, Muchun Song

On Thu, Mar 10, 2022 at 8:06 AM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Wed, Mar 2, 2022 at 12:29 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > The flush_cache_page() only remove a PAGE_SIZE sized range from the cache.
> > However, it does not cover the full pages in a THP except a head page.
> > Replace it with flush_cache_range() to fix this issue.
>
> This needs to clarify that this is just a documentation issue with the
> respect to properly documenting the expected usage of cache flushing
> before modifying the pmd. However, in practice this is not a problem
> due to the fact that DAX is not available on architectures with
> virtually indexed caches per:

Right. I'll add this into the commit log.

>
> d92576f1167c dax: does not work correctly with virtual aliasing caches
>
> Otherwise, you can add:
>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>

Thanks.


* Re: [PATCH v4 5/6] dax: fix missing writeprotect the pte entry
  2022-03-10  0:59   ` Dan Williams
@ 2022-03-11  9:04     ` Muchun Song
  2022-03-14 20:50       ` Dan Williams
  0 siblings, 1 reply; 17+ messages in thread
From: Muchun Song @ 2022-03-11  9:04 UTC (permalink / raw)
  To: Dan Williams
  Cc: Matthew Wilcox, Jan Kara, Al Viro, Andrew Morton,
	Alistair Popple, Yang Shi, Ralph Campbell, Hugh Dickins,
	Xiyu Yang, Kirill A. Shutemov, Ross Zwisler, Christoph Hellwig,
	linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, Linux MM,
	Xiongchun duan, Muchun Song

On Thu, Mar 10, 2022 at 8:59 AM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > Currently dax_mapping_entry_mkclean() fails to clean and write protect
> > the pte entry within a DAX PMD entry during an *sync operation. This
> > can result in data loss in the following sequence:
> >
> >   1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
> >      making the pmd entry dirty and writeable.
> >   2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
> >      write to the same file, dirtying PMD radix tree entry (already
> >      done in 1)) and making the pte entry dirty and writeable.
> >   3) fsync, flushing out PMD data and cleaning the radix tree entry. We
> >      currently fail to mark the pte entry as clean and write protected
> >      since the vma of process B is not covered in dax_entry_mkclean().
> >   4) process B writes to the pte. These don't cause any page faults since
> >      the pte entry is dirty and writeable. The radix tree entry remains
> >      clean.
> >   5) fsync, which fails to flush the dirty PMD data because the radix tree
> >      entry was clean.
> >   6) crash - dirty data that should have been fsync'd as part of 5) could
> >      still have been in the processor cache, and is lost.
>
> Excellent description.
>
> >
> > Just to use pfn_mkclean_range() to clean the pfns to fix this issue.
>
> So the original motivation for CONFIG_FS_DAX_LIMITED was for archs
> that do not have spare PTE bits to indicate pmd_devmap(). So this fix
> can only work in the CONFIG_FS_DAX_LIMITED=n case and in that case it
> seems you can use the current page_mkclean_one(), right?

I don't know the history of CONFIG_FS_DAX_LIMITED.
page_mkclean_one() needs a struct page associated with
the pfn; do struct pages exist with CONFIG_FS_DAX_LIMITED
and !FS_DAX_PMD? If yes, I think you are right, but I don't
see that guarantee. I am not familiar with the DAX code, so what am
I missing here?

Thanks.


* Re: [PATCH v4 5/6] dax: fix missing writeprotect the pte entry
  2022-03-11  9:04     ` Muchun Song
@ 2022-03-14 20:50       ` Dan Williams
  2022-03-15  7:51         ` Muchun Song
  0 siblings, 1 reply; 17+ messages in thread
From: Dan Williams @ 2022-03-14 20:50 UTC (permalink / raw)
  To: Muchun Song
  Cc: Matthew Wilcox, Jan Kara, Al Viro, Andrew Morton,
	Alistair Popple, Yang Shi, Ralph Campbell, Hugh Dickins,
	Xiyu Yang, Kirill A. Shutemov, Ross Zwisler, Christoph Hellwig,
	linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, Linux MM,
	Xiongchun duan, Muchun Song

On Fri, Mar 11, 2022 at 1:06 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Thu, Mar 10, 2022 at 8:59 AM Dan Williams <dan.j.williams@intel.com> wrote:
> >
> > On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > >
> > > Currently dax_mapping_entry_mkclean() fails to clean and write protect
> > > the pte entry within a DAX PMD entry during an *sync operation. This
> > > can result in data loss in the following sequence:
> > >
> > >   1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
> > >      making the pmd entry dirty and writeable.
> > >   2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
> > >      write to the same file, dirtying PMD radix tree entry (already
> > >      done in 1)) and making the pte entry dirty and writeable.
> > >   3) fsync, flushing out PMD data and cleaning the radix tree entry. We
> > >      currently fail to mark the pte entry as clean and write protected
> > >      since the vma of process B is not covered in dax_entry_mkclean().
> > >   4) process B writes to the pte. These don't cause any page faults since
> > >      the pte entry is dirty and writeable. The radix tree entry remains
> > >      clean.
> > >   5) fsync, which fails to flush the dirty PMD data because the radix tree
> > >      entry was clean.
> > >   6) crash - dirty data that should have been fsync'd as part of 5) could
> > >      still have been in the processor cache, and is lost.
> >
> > Excellent description.
> >
> > >
> > > Just to use pfn_mkclean_range() to clean the pfns to fix this issue.
> >
> > So the original motivation for CONFIG_FS_DAX_LIMITED was for archs
> > that do not have spare PTE bits to indicate pmd_devmap(). So this fix
> > can only work in the CONFIG_FS_DAX_LIMITED=n case and in that case it
> > seems you can use the current page_mkclean_one(), right?
>
> I don't know the history of CONFIG_FS_DAX_LIMITED.
> page_mkclean_one() need a struct page associated with
> the pfn,  do the struct pages exist when CONFIG_FS_DAX_LIMITED
> and ! FS_DAX_PMD?

CONFIG_FS_DAX_LIMITED was created to preserve some DAX use for S390,
which does not have CONFIG_ARCH_HAS_PTE_DEVMAP. Without PTE_DEVMAP,
get_user_pages() for DAX mappings fails.

To your question, no, there are no pages at all in the
CONFIG_FS_DAX_LIMITED=y case. So page_mkclean_one() could only be
deployed for PMD mappings, but I think it is reasonable to just
disable PMD mappings for the CONFIG_FS_DAX_LIMITED=y case.

Going forward the hope is to remove the ARCH_HAS_PTE_DEVMAP
requirement for DAX, and use PTE_SPECIAL for the S390 case. However,
that still wants to have 'struct page' availability as an across the
board requirement.

> If yes, I think you are right. But I don't
> see this guarantee. I am not familiar with DAX code, so what am
> I missing here?

Perhaps I missed a 'struct page' dependency? I thought the bug you are
fixing only triggers in the presence of PMDs. The
CONFIG_FS_DAX_LIMITED=y case can still use the current "page-less"
mkclean path for PTEs.


* Re: [PATCH v4 5/6] dax: fix missing writeprotect the pte entry
  2022-03-14 20:50       ` Dan Williams
@ 2022-03-15  7:51         ` Muchun Song
  0 siblings, 0 replies; 17+ messages in thread
From: Muchun Song @ 2022-03-15  7:51 UTC (permalink / raw)
  To: Dan Williams
  Cc: Matthew Wilcox, Jan Kara, Al Viro, Andrew Morton,
	Alistair Popple, Yang Shi, Ralph Campbell, Hugh Dickins,
	Xiyu Yang, Kirill A. Shutemov, Ross Zwisler, Christoph Hellwig,
	linux-fsdevel, Linux NVDIMM, Linux Kernel Mailing List, Linux MM,
	Xiongchun duan, Muchun Song

On Tue, Mar 15, 2022 at 4:50 AM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Fri, Mar 11, 2022 at 1:06 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > On Thu, Mar 10, 2022 at 8:59 AM Dan Williams <dan.j.williams@intel.com> wrote:
> > >
> > > On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > >
> > > > Currently dax_mapping_entry_mkclean() fails to clean and write protect
> > > > the pte entry within a DAX PMD entry during an *sync operation. This
> > > > can result in data loss in the following sequence:
> > > >
> > > >   1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
> > > >      making the pmd entry dirty and writeable.
> > > >   2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
> > > >      write to the same file, dirtying PMD radix tree entry (already
> > > >      done in 1)) and making the pte entry dirty and writeable.
> > > >   3) fsync, flushing out PMD data and cleaning the radix tree entry. We
> > > >      currently fail to mark the pte entry as clean and write protected
> > > >      since the vma of process B is not covered in dax_entry_mkclean().
> > > >   4) process B writes to the pte. These don't cause any page faults since
> > > >      the pte entry is dirty and writeable. The radix tree entry remains
> > > >      clean.
> > > >   5) fsync, which fails to flush the dirty PMD data because the radix tree
> > > >      entry was clean.
> > > >   6) crash - dirty data that should have been fsync'd as part of 5) could
> > > >      still have been in the processor cache, and is lost.
> > >
> > > Excellent description.
> > >
> > > >
> > > > Just to use pfn_mkclean_range() to clean the pfns to fix this issue.
> > >
> > > So the original motivation for CONFIG_FS_DAX_LIMITED was for archs
> > > that do not have spare PTE bits to indicate pmd_devmap(). So this fix
> > > can only work in the CONFIG_FS_DAX_LIMITED=n case and in that case it
> > > seems you can use the current page_mkclean_one(), right?
> >
> > I don't know the history of CONFIG_FS_DAX_LIMITED.
> > page_mkclean_one() need a struct page associated with
> > the pfn,  do the struct pages exist when CONFIG_FS_DAX_LIMITED
> > and ! FS_DAX_PMD?
>
> CONFIG_FS_DAX_LIMITED was created to preserve some DAX use for S390
> which does not have CONFIG_ARCH_HAS_PTE_DEVMAP. Without PTE_DEVMAP
> then get_user_pages() for DAX mappings fails.
>
> To your question, no, there are no pages at all in the
> CONFIG_FS_DAX_LIMITED=y case. So page_mkclean_one() could only be
> deployed for PMD mappings, but I think it is reasonable to just
> disable PMD mappings for the CONFIG_FS_DAX_LIMITED=y case.
>
> Going forward the hope is to remove the ARCH_HAS_PTE_DEVMAP
> requirement for DAX, and use PTE_SPECIAL for the S390 case. However,
> that still wants to have 'struct page' availability as an across the
> board requirement.

Got it. Thanks for your patient explanation.

>
> > If yes, I think you are right. But I don't
> > see this guarantee. I am not familiar with DAX code, so what am
> > I missing here?
>
> Perhaps I missed a 'struct page' dependency? I thought the bug you are
> fixing only triggers in the presence of PMDs. The

Right.

> CONFIG_FS_DAX_LIMITED=y case can still use the current "page-less"
> mkclean path for PTEs.

But I think introducing pfn_mkclean_range() could make the code
simpler and easier to maintain here since it can handle both PTE
and PMD mappings.  And page_vma_mapped_walk() has been able to work
on PFNs since commit [1], which is the case here, so we do not need
extra code to handle the page-less case.  What do you
think?

[1] https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git/commit/?id=b786e44a4dbfe64476e7120ec7990b89a37be37d

