* [PATCH v2 0/6] Fix some bugs related to rmap and dax
@ 2022-02-02 14:33 Muchun Song
  2022-02-02 14:33 ` [PATCH v2 1/6] mm: rmap: fix cache flush on THP pages Muchun Song
                   ` (5 more replies)
  0 siblings, 6 replies; 13+ messages in thread
From: Muchun Song @ 2022-02-02 14:33 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	Muchun Song

Patches 1-2 fix a cache flush bug; because subsequent patches depend on
those changes, they are placed in this series.  Patches 3-4 are
preparation for fixing a dax bug in patch 5.  Patch 6 is a code cleanup
since the previous patch removes the usage of follow_invalidate_pte().

Changes in v2:
  - Avoid overly long lines in lots of places, as suggested by Christoph.
  - Fix a compiler warning reported by the kernel test robot, since pmd_pfn()
    is not defined when !CONFIG_TRANSPARENT_HUGEPAGE on the powerpc
    architecture.
  - Split a new patch 4 for preparation of fixing the dax bug.

Muchun Song (6):
  mm: rmap: fix cache flush on THP pages
  dax: fix cache flush on PMD-mapped pages
  mm: page_vma_mapped: support checking if a pfn is mapped into a vma
  mm: rmap: introduce pfn_mkclean_range() to clean PTEs
  dax: fix missing writeprotect the pte entry
  mm: remove range parameter from follow_invalidate_pte()

 fs/dax.c                | 82 ++++------------------------------------------
 include/linux/mm.h      |  3 --
 include/linux/rmap.h    | 17 ++++++++--
 include/linux/swapops.h | 13 +++++---
 mm/internal.h           | 52 +++++++++++++++++++----------
 mm/memory.c             | 23 ++-----------
 mm/page_vma_mapped.c    | 68 ++++++++++++++++++++++++--------------
 mm/rmap.c               | 87 ++++++++++++++++++++++++++++++++++++++-----------
 8 files changed, 180 insertions(+), 165 deletions(-)

-- 
2.11.0


^ permalink raw reply	[flat|nested] 13+ messages in thread

* [PATCH v2 1/6] mm: rmap: fix cache flush on THP pages
  2022-02-02 14:33 [PATCH v2 0/6] Fix some bugs related to rmap and dax Muchun Song
@ 2022-02-02 14:33 ` Muchun Song
  2022-02-03 10:16   ` Jan Kara
  2022-02-02 14:33 ` [PATCH v2 2/6] dax: fix cache flush on PMD-mapped pages Muchun Song
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Muchun Song @ 2022-02-02 14:33 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	Muchun Song

flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
However, it does not cover all the pages of a THP, only the head page.
Replace it with flush_cache_range() to fix this issue. At least, no
problems have been found due to this so far, probably because few
architectures have virtually indexed caches.
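
For reference, flushing the same mapping one base page at a time would
look like the sketch below (illustrative only, not part of the patch);
a single flush_cache_range() call over HPAGE_PMD_SIZE is the simpler
equivalent this patch switches to:

	/*
	 * Illustrative sketch: per-base-page flushing of a PMD-mapped
	 * THP.  The patch instead issues one flush_cache_range() call
	 * over [address, address + HPAGE_PMD_SIZE).
	 */
	unsigned long addr, pfn = page_to_pfn(page);

	for (addr = address; addr < address + HPAGE_PMD_SIZE;
	     addr += PAGE_SIZE, pfn++)
		flush_cache_page(vma, addr, pfn);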

Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
---
 mm/rmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index b0fd9dc19eba..0ba12dc9fae3 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -974,7 +974,8 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
 				continue;
 
-			flush_cache_page(vma, address, page_to_pfn(page));
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			entry = pmdp_invalidate(vma, address, pmd);
 			entry = pmd_wrprotect(entry);
 			entry = pmd_mkclean(entry);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 2/6] dax: fix cache flush on PMD-mapped pages
  2022-02-02 14:33 [PATCH v2 0/6] Fix some bugs related to rmap and dax Muchun Song
  2022-02-02 14:33 ` [PATCH v2 1/6] mm: rmap: fix cache flush on THP pages Muchun Song
@ 2022-02-02 14:33 ` Muchun Song
  2022-02-03 10:17   ` Jan Kara
  2022-02-02 14:33 ` [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma Muchun Song
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 13+ messages in thread
From: Muchun Song @ 2022-02-02 14:33 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	Muchun Song

flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
However, it does not cover all the pages of a PMD-sized DAX mapping,
only the first one. Replace it with flush_cache_range() to fix this
issue.

Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 fs/dax.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/fs/dax.c b/fs/dax.c
index 88be1c02a151..e031e4b6c13c 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -857,7 +857,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
 			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
 				goto unlock_pmd;
 
-			flush_cache_page(vma, address, pfn);
+			flush_cache_range(vma, address,
+					  address + HPAGE_PMD_SIZE);
 			pmd = pmdp_invalidate(vma, address, pmdp);
 			pmd = pmd_wrprotect(pmd);
 			pmd = pmd_mkclean(pmd);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma
  2022-02-02 14:33 [PATCH v2 0/6] Fix some bugs related to rmap and dax Muchun Song
  2022-02-02 14:33 ` [PATCH v2 1/6] mm: rmap: fix cache flush on THP pages Muchun Song
  2022-02-02 14:33 ` [PATCH v2 2/6] dax: fix cache flush on PMD-mapped pages Muchun Song
@ 2022-02-02 14:33 ` Muchun Song
  2022-02-02 16:43   ` kernel test robot
                     ` (2 more replies)
  2022-02-02 14:33 ` [PATCH v2 4/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs Muchun Song
                   ` (2 subsequent siblings)
  5 siblings, 3 replies; 13+ messages in thread
From: Muchun Song @ 2022-02-02 14:33 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	Muchun Song

page_vma_mapped_walk() is supposed to check if a page is mapped into a vma.
However, not all page frames (e.g. PFN_DEV) have an associated struct page.
Anyone who wants to check whether a pfn (without a struct page) is mapped
into a vma would end up duplicating much of this function. So add support
for checking if a pfn is mapped into a vma. The next patch will use this
new feature for DAX.
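
A minimal sketch of a hypothetical caller of the new interface (variable
names are illustrative; the caller is expected to have computed the start
address of the pfn range within the vma): instead of a page, it fills in
pfn, nr and index and sets PVMW_PFN_WALK:

	struct page_vma_mapped_walk pvmw = {
		.pfn		= pfn,
		.nr		= npfn,
		.index		= pgoff,
		.vma		= vma,
		.address	= address,
		.flags		= PVMW_SYNC | PVMW_PFN_WALK,
	};

	while (page_vma_mapped_walk(&pvmw)) {
		/* pvmw.pte or pvmw.pmd points to the mapping entry. */
	}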

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/rmap.h    | 14 ++++++++--
 include/linux/swapops.h | 13 +++++++---
 mm/internal.h           | 28 +++++++++++++-------
 mm/page_vma_mapped.c    | 68 +++++++++++++++++++++++++++++++------------------
 4 files changed, 83 insertions(+), 40 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 221c3c6438a7..78373935ad49 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -204,9 +204,18 @@ int make_device_exclusive_range(struct mm_struct *mm, unsigned long start,
 #define PVMW_SYNC		(1 << 0)
 /* Look for migarion entries rather than present PTEs */
 #define PVMW_MIGRATION		(1 << 1)
+/* Walk the page table by checking the pfn instead of a struct page */
+#define PVMW_PFN_WALK		(1 << 2)
 
 struct page_vma_mapped_walk {
-	struct page *page;
+	union {
+		struct page *page;
+		struct {
+			unsigned long pfn;
+			unsigned int nr;
+			pgoff_t index;
+		};
+	};
 	struct vm_area_struct *vma;
 	unsigned long address;
 	pmd_t *pmd;
@@ -218,7 +227,8 @@ struct page_vma_mapped_walk {
 static inline void page_vma_mapped_walk_done(struct page_vma_mapped_walk *pvmw)
 {
 	/* HugeTLB pte is set to the relevant page table entry without pte_mapped. */
-	if (pvmw->pte && !PageHuge(pvmw->page))
+	if (pvmw->pte && (pvmw->flags & PVMW_PFN_WALK ||
+			  !PageHuge(pvmw->page)))
 		pte_unmap(pvmw->pte);
 	if (pvmw->ptl)
 		spin_unlock(pvmw->ptl);
diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index d356ab4047f7..d28bf65fd6a5 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -247,17 +247,22 @@ static inline int is_writable_migration_entry(swp_entry_t entry)
 
 #endif
 
-static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+static inline unsigned long pfn_swap_entry_to_pfn(swp_entry_t entry)
 {
-	struct page *p = pfn_to_page(swp_offset(entry));
+	unsigned long pfn = swp_offset(entry);
 
 	/*
 	 * Any use of migration entries may only occur while the
 	 * corresponding page is locked
 	 */
-	BUG_ON(is_migration_entry(entry) && !PageLocked(p));
+	BUG_ON(is_migration_entry(entry) && !PageLocked(pfn_to_page(pfn)));
+
+	return pfn;
+}
 
-	return p;
+static inline struct page *pfn_swap_entry_to_page(swp_entry_t entry)
+{
+	return pfn_to_page(pfn_swap_entry_to_pfn(entry));
 }
 
 /*
diff --git a/mm/internal.h b/mm/internal.h
index deb9bda18e59..5458cd08df33 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -478,25 +478,35 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
- * Then at what user virtual address will none of the page be found in vma?
- * Assumes that vma_address() already returned a good starting address.
- * If page is a compound head, the entire compound page is considered.
+ * Return the end of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address_end(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address_end(pgoff_t pgoff, unsigned long nr_pages,
+		      struct vm_area_struct *vma)
 {
-	pgoff_t pgoff;
-	unsigned long address;
+	unsigned long address = vma->vm_start;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page) + compound_nr(page);
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
+	address += (pgoff + nr_pages - vma->vm_pgoff) << PAGE_SHIFT;
 	/* Check for address beyond vma (or wrapped through 0?) */
 	if (address < vma->vm_start || address > vma->vm_end)
 		address = vma->vm_end;
 	return address;
 }
 
+/*
+ * Return the end of user virtual address of a page within a vma. Assumes that
+ * vma_address() already returned a good starting address. If page is a compound
+ * head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address_end(struct page *page, struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	return vma_pgoff_address_end(page_to_pgoff(page), compound_nr(page),
+				     vma);
+}
+
 static inline struct file *maybe_unlock_mmap_for_io(struct vm_fault *vmf,
 						    struct file *fpin)
 {
diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
index f7b331081791..bd172268084f 100644
--- a/mm/page_vma_mapped.c
+++ b/mm/page_vma_mapped.c
@@ -53,10 +53,17 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw)
 	return true;
 }
 
-static inline bool pfn_is_match(struct page *page, unsigned long pfn)
+static inline bool pfn_is_match(struct page_vma_mapped_walk *pvmw,
+				unsigned long pfn)
 {
-	unsigned long page_pfn = page_to_pfn(page);
+	struct page *page;
+	unsigned long page_pfn;
 
+	if (pvmw->flags & PVMW_PFN_WALK)
+		return pfn >= pvmw->pfn && pfn - pvmw->pfn < pvmw->nr;
+
+	page = pvmw->page;
+	page_pfn = page_to_pfn(page);
 	/* normal page and hugetlbfs page */
 	if (!PageTransCompound(page) || PageHuge(page))
 		return page_pfn == pfn;
@@ -116,7 +123,7 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 		pfn = pte_pfn(*pvmw->pte);
 	}
 
-	return pfn_is_match(pvmw->page, pfn);
+	return pfn_is_match(pvmw, pfn);
 }
 
 static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
@@ -127,24 +134,24 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
 }
 
 /**
- * page_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at
- * @pvmw->address
- * @pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags
- * must be set. pmd, pte and ptl must be NULL.
+ * page_vma_mapped_walk - check if @pvmw->page or @pvmw->pfn is mapped in
+ * @pvmw->vma at @pvmw->address
+ * @pvmw: pointer to struct page_vma_mapped_walk. page (or pfn and nr and
+ * index), vma, address and flags must be set. pmd, pte and ptl must be NULL.
  *
- * Returns true if the page is mapped in the vma. @pvmw->pmd and @pvmw->pte point
- * to relevant page table entries. @pvmw->ptl is locked. @pvmw->address is
- * adjusted if needed (for PTE-mapped THPs).
+ * Returns true if the page or pfn is mapped in the vma. @pvmw->pmd and
+ * @pvmw->pte point to relevant page table entries. @pvmw->ptl is locked.
+ * @pvmw->address is adjusted if needed (for PTE-mapped THPs).
  *
  * If @pvmw->pmd is set but @pvmw->pte is not, you have found PMD-mapped page
- * (usually THP). For PTE-mapped THP, you should run page_vma_mapped_walk() in
- * a loop to find all PTEs that map the THP.
+ * (usually THP or Huge DEVMAP). For PMD-mapped page, you should run
+ * page_vma_mapped_walk() in a loop to find all PTEs that map the huge page.
  *
  * For HugeTLB pages, @pvmw->pte is set to the relevant page table entry
  * regardless of which page table level the page is mapped at. @pvmw->pmd is
  * NULL.
  *
- * Returns false if there are no more page table entries for the page in
+ * Returns false if there are no more page table entries for the page or pfn in
  * the vma. @pvmw->ptl is unlocked and @pvmw->pte is unmapped.
  *
  * If you need to stop the walk before page_vma_mapped_walk() returned false,
@@ -153,8 +160,9 @@ static void step_forward(struct page_vma_mapped_walk *pvmw, unsigned long size)
 bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 {
 	struct mm_struct *mm = pvmw->vma->vm_mm;
-	struct page *page = pvmw->page;
+	struct page *page = NULL;
 	unsigned long end;
+	unsigned long pfn;
 	pgd_t *pgd;
 	p4d_t *p4d;
 	pud_t *pud;
@@ -164,7 +172,11 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	if (pvmw->pmd && !pvmw->pte)
 		return not_found(pvmw);
 
-	if (unlikely(PageHuge(page))) {
+	if (!(pvmw->flags & PVMW_PFN_WALK))
+		page = pvmw->page;
+	pfn = page ? page_to_pfn(page) : pvmw->pfn;
+
+	if (unlikely(page && PageHuge(page))) {
 		/* The only possible mapping was handled on last iteration */
 		if (pvmw->pte)
 			return not_found(pvmw);
@@ -187,9 +199,13 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 	 * any PageKsm page: whose page->index misleads vma_address()
 	 * and vma_address_end() to disaster.
 	 */
-	end = PageTransCompound(page) ?
-		vma_address_end(page, pvmw->vma) :
-		pvmw->address + PAGE_SIZE;
+	if (page)
+		end = PageTransCompound(page) ?
+		      vma_address_end(page, pvmw->vma) :
+		      pvmw->address + PAGE_SIZE;
+	else
+		end = vma_pgoff_address_end(pvmw->index, pvmw->nr, pvmw->vma);
+
 	if (pvmw->pte)
 		goto next_pte;
 restart:
@@ -217,14 +233,14 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 		 * subsequent update.
 		 */
 		pmde = READ_ONCE(*pvmw->pmd);
-
-		if (pmd_trans_huge(pmde) || is_pmd_migration_entry(pmde)) {
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+		if (pmd_leaf(pmde) || is_pmd_migration_entry(pmde)) {
 			pvmw->ptl = pmd_lock(mm, pvmw->pmd);
 			pmde = *pvmw->pmd;
-			if (likely(pmd_trans_huge(pmde))) {
+			if (likely(pmd_leaf(pmde))) {
 				if (pvmw->flags & PVMW_MIGRATION)
 					return not_found(pvmw);
-				if (pmd_page(pmde) != page)
+				if (pmd_pfn(pmde) != pfn)
 					return not_found(pvmw);
 				return true;
 			}
@@ -236,20 +252,22 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
 					return not_found(pvmw);
 				entry = pmd_to_swp_entry(pmde);
 				if (!is_migration_entry(entry) ||
-				    pfn_swap_entry_to_page(entry) != page)
+				    pfn_swap_entry_to_pfn(entry) != pfn)
 					return not_found(pvmw);
 				return true;
 			}
 			/* THP pmd was split under us: handle on pte level */
 			spin_unlock(pvmw->ptl);
 			pvmw->ptl = NULL;
-		} else if (!pmd_present(pmde)) {
+		} else
+#endif
+		if (!pmd_present(pmde)) {
 			/*
 			 * If PVMW_SYNC, take and drop THP pmd lock so that we
 			 * cannot return prematurely, while zap_huge_pmd() has
 			 * cleared *pmd but not decremented compound_mapcount().
 			 */
-			if ((pvmw->flags & PVMW_SYNC) &&
+			if ((pvmw->flags & PVMW_SYNC) && page &&
 			    PageTransCompound(page)) {
 				spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 4/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs
  2022-02-02 14:33 [PATCH v2 0/6] Fix some bugs related to rmap and dax Muchun Song
                   ` (2 preceding siblings ...)
  2022-02-02 14:33 ` [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma Muchun Song
@ 2022-02-02 14:33 ` Muchun Song
  2022-02-02 14:33 ` [PATCH v2 5/6] dax: fix missing writeprotect the pte entry Muchun Song
  2022-02-02 14:33 ` [PATCH v2 6/6] mm: remove range parameter from follow_invalidate_pte() Muchun Song
  5 siblings, 0 replies; 13+ messages in thread
From: Muchun Song @ 2022-02-02 14:33 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	Muchun Song

page_mkclean_one() is supposed to be used with a pfn that has an
associated struct page, but not all pfns (e.g. DAX) have one. Introduce
a new function, pfn_mkclean_range(), to clean the PTEs (including PMDs)
that map a range of pfns which have no struct page associated with them.
This helper will be used by the DAX code in the next patch to make pfns
clean.
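
A sketch of the intended DAX usage (roughly what the next patch does in
fs/dax.c, iterating over all shared mappings of the file range):

	i_mmap_lock_read(mapping);
	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, start + npfn - 1)
		pfn_mkclean_range(pfn, npfn, start, vma);
	i_mmap_unlock_read(mapping);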

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/rmap.h |  3 ++
 mm/internal.h        | 26 ++++++++++------
 mm/rmap.c            | 84 +++++++++++++++++++++++++++++++++++++++++-----------
 3 files changed, 86 insertions(+), 27 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 78373935ad49..668a1e81b442 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -241,6 +241,9 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw);
  */
 unsigned long page_address_in_vma(struct page *, struct vm_area_struct *);
 
+int pfn_mkclean_range(unsigned long pfn, int npfn, pgoff_t pgoff,
+		      struct vm_area_struct *vma);
+
 /*
  * Cleans the PTEs of shared mappings.
  * (and since clean PTEs should also be readonly, write protects them too)
diff --git a/mm/internal.h b/mm/internal.h
index 5458cd08df33..dc71256e568f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -449,26 +449,22 @@ extern void clear_page_mlock(struct page *page);
 extern pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma);
 
 /*
- * At what user virtual address is page expected in vma?
- * Returns -EFAULT if all of the page is outside the range of vma.
- * If page is a compound head, the entire compound page is considered.
+ * Return the start of user virtual address at the specific offset within
+ * a vma.
  */
 static inline unsigned long
-vma_address(struct page *page, struct vm_area_struct *vma)
+vma_pgoff_address(pgoff_t pgoff, unsigned long nr_pages,
+		  struct vm_area_struct *vma)
 {
-	pgoff_t pgoff;
 	unsigned long address;
 
-	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
-	pgoff = page_to_pgoff(page);
 	if (pgoff >= vma->vm_pgoff) {
 		address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		/* Check for address beyond vma (or wrapped through 0?) */
 		if (address < vma->vm_start || address >= vma->vm_end)
 			address = -EFAULT;
-	} else if (PageHead(page) &&
-		   pgoff + compound_nr(page) - 1 >= vma->vm_pgoff) {
+	} else if (pgoff + nr_pages - 1 >= vma->vm_pgoff) {
 		/* Test above avoids possibility of wrap to 0 on 32-bit */
 		address = vma->vm_start;
 	} else {
@@ -478,6 +474,18 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 }
 
 /*
+ * Return the start of user virtual address of a page within a vma.
+ * Returns -EFAULT if all of the page is outside the range of vma.
+ * If page is a compound head, the entire compound page is considered.
+ */
+static inline unsigned long
+vma_address(struct page *page, struct vm_area_struct *vma)
+{
+	VM_BUG_ON_PAGE(PageKsm(page), page);	/* KSM page->index unusable */
+	return vma_pgoff_address(page_to_pgoff(page), compound_nr(page), vma);
+}
+
+/*
  * Return the end of user virtual address at the specific offset within
  * a vma.
  */
diff --git a/mm/rmap.c b/mm/rmap.c
index 0ba12dc9fae3..8f1860dc22bc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -928,34 +928,33 @@ int page_referenced(struct page *page,
 	return pra.referenced;
 }
 
-static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
-			    unsigned long address, void *arg)
+static int page_vma_mkclean_one(struct page_vma_mapped_walk *pvmw)
 {
-	struct page_vma_mapped_walk pvmw = {
-		.page = page,
-		.vma = vma,
-		.address = address,
-		.flags = PVMW_SYNC,
-	};
+	int cleaned = 0;
+	struct vm_area_struct *vma = pvmw->vma;
 	struct mmu_notifier_range range;
-	int *cleaned = arg;
+	unsigned long end;
+
+	if (pvmw->flags & PVMW_PFN_WALK)
+		end = vma_pgoff_address_end(pvmw->index, pvmw->nr, vma);
+	else
+		end = vma_address_end(pvmw->page, vma);
 
 	/*
 	 * We have to assume the worse case ie pmd for invalidation. Note that
 	 * the page can not be free from this function.
 	 */
-	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE,
-				0, vma, vma->vm_mm, address,
-				vma_address_end(page, vma));
+	mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_PAGE, 0, vma,
+				vma->vm_mm, pvmw->address, end);
 	mmu_notifier_invalidate_range_start(&range);
 
-	while (page_vma_mapped_walk(&pvmw)) {
+	while (page_vma_mapped_walk(pvmw)) {
 		int ret = 0;
+		unsigned long address = pvmw->address;
 
-		address = pvmw.address;
-		if (pvmw.pte) {
+		if (pvmw->pte) {
 			pte_t entry;
-			pte_t *pte = pvmw.pte;
+			pte_t *pte = pvmw->pte;
 
 			if (!pte_dirty(*pte) && !pte_write(*pte))
 				continue;
@@ -968,7 +967,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			ret = 1;
 		} else {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-			pmd_t *pmd = pvmw.pmd;
+			pmd_t *pmd = pvmw->pmd;
 			pmd_t entry;
 
 			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
@@ -995,11 +994,27 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 		 * See Documentation/vm/mmu_notifier.rst
 		 */
 		if (ret)
-			(*cleaned)++;
+			cleaned++;
 	}
 
 	mmu_notifier_invalidate_range_end(&range);
 
+	return cleaned;
+}
+
+static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
+			    unsigned long address, void *arg)
+{
+	struct page_vma_mapped_walk pvmw = {
+		.page		= page,
+		.vma		= vma,
+		.address	= address,
+		.flags		= PVMW_SYNC,
+	};
+	int *cleaned = arg;
+
+	*cleaned += page_vma_mkclean_one(&pvmw);
+
 	return true;
 }
 
@@ -1037,6 +1052,39 @@ int folio_mkclean(struct folio *folio)
 EXPORT_SYMBOL_GPL(folio_mkclean);
 
 /**
+ * pfn_mkclean_range - Cleans the PTEs (including PMDs) mapped with range of
+ *                     [@pfn, @pfn + @npfn) at the specific offset (@pgoff)
+ *                     within the @vma of shared mappings. And since clean PTEs
+ *                     should also be readonly, write protects them too.
+ * @pfn: start pfn.
+ * @npfn: number of physically contiguous pfns starting with @pfn.
+ * @pgoff: page offset that the @pfn is mapped with.
+ * @vma: vma that @pfn mapped within.
+ *
+ * Returns the number of cleaned PTEs (including PMDs).
+ */
+int pfn_mkclean_range(unsigned long pfn, int npfn, pgoff_t pgoff,
+		      struct vm_area_struct *vma)
+{
+	unsigned long address = vma_pgoff_address(pgoff, npfn, vma);
+	struct page_vma_mapped_walk pvmw = {
+		.pfn		= pfn,
+		.nr		= npfn,
+		.index		= pgoff,
+		.vma		= vma,
+		.address	= address,
+		.flags		= PVMW_SYNC | PVMW_PFN_WALK,
+	};
+
+	if (invalid_mkclean_vma(vma, NULL))
+		return 0;
+
+	VM_BUG_ON_VMA(address == -EFAULT, vma);
+
+	return page_vma_mkclean_one(&pvmw);
+}
+
+/**
  * page_move_anon_rmap - move a page to our anon_vma
  * @page:	the page to move to our anon_vma
  * @vma:	the vma the page belongs to
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 5/6] dax: fix missing writeprotect the pte entry
  2022-02-02 14:33 [PATCH v2 0/6] Fix some bugs related to rmap and dax Muchun Song
                   ` (3 preceding siblings ...)
  2022-02-02 14:33 ` [PATCH v2 4/6] mm: rmap: introduce pfn_mkclean_range() to clean PTEs Muchun Song
@ 2022-02-02 14:33 ` Muchun Song
  2022-02-02 14:33 ` [PATCH v2 6/6] mm: remove range parameter from follow_invalidate_pte() Muchun Song
  5 siblings, 0 replies; 13+ messages in thread
From: Muchun Song @ 2022-02-02 14:33 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	Muchun Song

Currently dax_entry_mkclean() fails to clean and write protect the pte
entry within a DAX PMD entry during an *sync operation. This can result
in data loss in the following sequence:

  1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
     making the pmd entry dirty and writeable.
  2) process B mmaps the same file at @offset (e.g. 4K) with @length
     (e.g. 4K) and writes to it, dirtying the PMD radix tree entry
     (already done in 1)) and making the pte entry dirty and writeable.
  3) fsync, flushing out PMD data and cleaning the radix tree entry. We
     currently fail to mark the pte entry as clean and write protected
     since the vma of process B is not covered in dax_entry_mkclean().
  4) process B writes to the pte. These don't cause any page faults since
     the pte entry is dirty and writeable. The radix tree entry remains
     clean.
  5) fsync, which fails to flush the dirty PMD data because the radix tree
     entry was clean.
  6) crash - dirty data that should have been fsync'd as part of 5) could
     still have been in the processor cache, and is lost.

Use pfn_mkclean_range() to clean the pfns to fix this issue.
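
For reference, the resulting call chain (together with the helper
introduced in the previous patch) is:

	dax_writeback_one()
	  dax_entry_mkclean(mapping, pfn, count, index)
	    vma_interval_tree_foreach(vma, &mapping->i_mmap, ...)
	      pfn_mkclean_range(pfn, count, index, vma)	/* patch 4 */
	        page_vma_mkclean_one(&pvmw)		/* PVMW_SYNC | PVMW_PFN_WALK */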

Fixes: 4b4bb46d00b3 ("dax: clear dirty entry tags on cache flush")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 fs/dax.c | 83 ++++++----------------------------------------------------------
 1 file changed, 7 insertions(+), 76 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index e031e4b6c13c..b64ac02d55d7 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -25,6 +25,7 @@
 #include <linux/sizes.h>
 #include <linux/mmu_notifier.h>
 #include <linux/iomap.h>
+#include <linux/rmap.h>
 #include <asm/pgalloc.h>
 
 #define CREATE_TRACE_POINTS
@@ -801,87 +802,17 @@ static void *dax_insert_entry(struct xa_state *xas,
 	return entry;
 }
 
-static inline
-unsigned long pgoff_address(pgoff_t pgoff, struct vm_area_struct *vma)
-{
-	unsigned long address;
-
-	address = vma->vm_start + ((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
-	return address;
-}
-
 /* Walk all mappings of a given index of a file and writeprotect them */
-static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
-		unsigned long pfn)
+static void dax_entry_mkclean(struct address_space *mapping, unsigned long pfn,
+			      unsigned long npfn, pgoff_t start)
 {
 	struct vm_area_struct *vma;
-	pte_t pte, *ptep = NULL;
-	pmd_t *pmdp = NULL;
-	spinlock_t *ptl;
+	pgoff_t end = start + npfn - 1;
 
 	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, index) {
-		struct mmu_notifier_range range;
-		unsigned long address;
-
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, start, end) {
+		pfn_mkclean_range(pfn, npfn, start, vma);
 		cond_resched();
-
-		if (!(vma->vm_flags & VM_SHARED))
-			continue;
-
-		address = pgoff_address(index, vma);
-
-		/*
-		 * follow_invalidate_pte() will use the range to call
-		 * mmu_notifier_invalidate_range_start() on our behalf before
-		 * taking any lock.
-		 */
-		if (follow_invalidate_pte(vma->vm_mm, address, &range, &ptep,
-					  &pmdp, &ptl))
-			continue;
-
-		/*
-		 * No need to call mmu_notifier_invalidate_range() as we are
-		 * downgrading page table protection not changing it to point
-		 * to a new page.
-		 *
-		 * See Documentation/vm/mmu_notifier.rst
-		 */
-		if (pmdp) {
-#ifdef CONFIG_FS_DAX_PMD
-			pmd_t pmd;
-
-			if (pfn != pmd_pfn(*pmdp))
-				goto unlock_pmd;
-			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
-				goto unlock_pmd;
-
-			flush_cache_range(vma, address,
-					  address + HPAGE_PMD_SIZE);
-			pmd = pmdp_invalidate(vma, address, pmdp);
-			pmd = pmd_wrprotect(pmd);
-			pmd = pmd_mkclean(pmd);
-			set_pmd_at(vma->vm_mm, address, pmdp, pmd);
-unlock_pmd:
-#endif
-			spin_unlock(ptl);
-		} else {
-			if (pfn != pte_pfn(*ptep))
-				goto unlock_pte;
-			if (!pte_dirty(*ptep) && !pte_write(*ptep))
-				goto unlock_pte;
-
-			flush_cache_page(vma, address, pfn);
-			pte = ptep_clear_flush(vma, address, ptep);
-			pte = pte_wrprotect(pte);
-			pte = pte_mkclean(pte);
-			set_pte_at(vma->vm_mm, address, ptep, pte);
-unlock_pte:
-			pte_unmap_unlock(ptep, ptl);
-		}
-
-		mmu_notifier_invalidate_range_end(&range);
 	}
 	i_mmap_unlock_read(mapping);
 }
@@ -949,7 +880,7 @@ static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 	count = 1UL << dax_entry_order(entry);
 	index = xas->xa_index & ~(count - 1);
 
-	dax_entry_mkclean(mapping, index, pfn);
+	dax_entry_mkclean(mapping, pfn, count, index);
 	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
 	/*
 	 * After we have flushed the cache, we can clear the dirty tag. There
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH v2 6/6] mm: remove range parameter from follow_invalidate_pte()
  2022-02-02 14:33 [PATCH v2 0/6] Fix some bugs related to rmap and dax Muchun Song
                   ` (4 preceding siblings ...)
  2022-02-02 14:33 ` [PATCH v2 5/6] dax: fix missing writeprotect the pte entry Muchun Song
@ 2022-02-02 14:33 ` Muchun Song
  5 siblings, 0 replies; 13+ messages in thread
From: Muchun Song @ 2022-02-02 14:33 UTC (permalink / raw)
  To: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch
  Cc: linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun,
	Muchun Song

The only user (DAX) of the range parameter of follow_invalidate_pte()
is gone, so it is safe to remove the range parameter and make the
function static to simplify the code.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/mm.h |  3 ---
 mm/memory.c        | 23 +++--------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index d211a06784d5..7895b17f6847 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1814,9 +1814,6 @@ void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
 		unsigned long end, unsigned long floor, unsigned long ceiling);
 int
 copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp);
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
diff --git a/mm/memory.c b/mm/memory.c
index 514a81cdd1ae..e8ce066be5f2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4869,9 +4869,8 @@ int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address)
 }
 #endif /* __PAGETABLE_PMD_FOLDED */
 
-int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
-			  struct mmu_notifier_range *range, pte_t **ptepp,
-			  pmd_t **pmdpp, spinlock_t **ptlp)
+static int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
+				 pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -4898,31 +4897,17 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 		if (!pmdpp)
 			goto out;
 
-		if (range) {
-			mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0,
-						NULL, mm, address & PMD_MASK,
-						(address & PMD_MASK) + PMD_SIZE);
-			mmu_notifier_invalidate_range_start(range);
-		}
 		*ptlp = pmd_lock(mm, pmd);
 		if (pmd_huge(*pmd)) {
 			*pmdpp = pmd;
 			return 0;
 		}
 		spin_unlock(*ptlp);
-		if (range)
-			mmu_notifier_invalidate_range_end(range);
 	}
 
 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
 		goto out;
 
-	if (range) {
-		mmu_notifier_range_init(range, MMU_NOTIFY_CLEAR, 0, NULL, mm,
-					address & PAGE_MASK,
-					(address & PAGE_MASK) + PAGE_SIZE);
-		mmu_notifier_invalidate_range_start(range);
-	}
 	ptep = pte_offset_map_lock(mm, pmd, address, ptlp);
 	if (!pte_present(*ptep))
 		goto unlock;
@@ -4930,8 +4915,6 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 	return 0;
 unlock:
 	pte_unmap_unlock(ptep, *ptlp);
-	if (range)
-		mmu_notifier_invalidate_range_end(range);
 out:
 	return -EINVAL;
 }
@@ -4960,7 +4943,7 @@ int follow_invalidate_pte(struct mm_struct *mm, unsigned long address,
 int follow_pte(struct mm_struct *mm, unsigned long address,
 	       pte_t **ptepp, spinlock_t **ptlp)
 {
-	return follow_invalidate_pte(mm, address, NULL, ptepp, NULL, ptlp);
+	return follow_invalidate_pte(mm, address, ptepp, NULL, ptlp);
 }
 EXPORT_SYMBOL_GPL(follow_pte);
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma
  2022-02-02 14:33 ` [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma Muchun Song
@ 2022-02-02 16:43   ` kernel test robot
  2022-02-02 18:28   ` Matthew Wilcox
  2022-02-03  0:18     ` kernel test robot
  2 siblings, 0 replies; 13+ messages in thread
From: kernel test robot @ 2022-02-02 16:43 UTC (permalink / raw)
  To: kbuild-all

[-- Attachment #1: Type: text/plain, Size: 8401 bytes --]

Hi Muchun,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on hnaz-mm/master v5.17-rc2 next-20220202]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Muchun-Song/Fix-some-bugs-related-to-ramp-and-dax/20220202-223615
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 9f7fb8de5d9bac17b6392a14af40baf555d9129b
config: um-i386_defconfig (https://download.01.org/0day-ci/archive/20220203/202202030043.3NXtgGau-lkp(a)intel.com/config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://github.com/0day-ci/linux/commit/64a64c01138fd43f8d9ac17a47c813b55231c325
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Muchun-Song/Fix-some-bugs-related-to-ramp-and-dax/20220202-223615
        git checkout 64a64c01138fd43f8d9ac17a47c813b55231c325
        # save the config file to linux build tree
        mkdir build_dir
        make W=1 O=build_dir ARCH=um SUBARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   mm/page_vma_mapped.c: In function 'page_vma_mapped_walk':
>> mm/page_vma_mapped.c:165:16: warning: variable 'pfn' set but not used [-Wunused-but-set-variable]
     165 |  unsigned long pfn;
         |                ^~~


vim +/pfn +165 mm/page_vma_mapped.c

   135	
   136	/**
   137	 * page_vma_mapped_walk - check if @pvmw->page or @pvmw->pfn is mapped in
   138	 * @pvmw->vma at @pvmw->address
   139	 * @pvmw: pointer to struct page_vma_mapped_walk. page (or pfn and nr and
   140	 * index), vma, address and flags must be set. pmd, pte and ptl must be NULL.
   141	 *
   142	 * Returns true if the page or pfn is mapped in the vma. @pvmw->pmd and
   143	 * @pvmw->pte point to relevant page table entries. @pvmw->ptl is locked.
   144	 * @pvmw->address is adjusted if needed (for PTE-mapped THPs).
   145	 *
   146	 * If @pvmw->pmd is set but @pvmw->pte is not, you have found PMD-mapped page
   147	 * (usually THP or Huge DEVMAP). For PMD-mapped page, you should run
   148	 * page_vma_mapped_walk() in a loop to find all PTEs that map the huge page.
   149	 *
   150	 * For HugeTLB pages, @pvmw->pte is set to the relevant page table entry
   151	 * regardless of which page table level the page is mapped at. @pvmw->pmd is
   152	 * NULL.
   153	 *
   154	 * Returns false if there are no more page table entries for the page or pfn in
   155	 * the vma. @pvmw->ptl is unlocked and @pvmw->pte is unmapped.
   156	 *
   157	 * If you need to stop the walk before page_vma_mapped_walk() returned false,
   158	 * use page_vma_mapped_walk_done(). It will do the housekeeping.
   159	 */
   160	bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
   161	{
   162		struct mm_struct *mm = pvmw->vma->vm_mm;
   163		struct page *page = NULL;
   164		unsigned long end;
 > 165		unsigned long pfn;
   166		pgd_t *pgd;
   167		p4d_t *p4d;
   168		pud_t *pud;
   169		pmd_t pmde;
   170	
   171		/* The only possible pmd mapping has been handled on last iteration */
   172		if (pvmw->pmd && !pvmw->pte)
   173			return not_found(pvmw);
   174	
   175		if (!(pvmw->flags & PVMW_PFN_WALK))
   176			page = pvmw->page;
   177		pfn = page ? page_to_pfn(page) : pvmw->pfn;
   178	
   179		if (unlikely(page && PageHuge(page))) {
   180			/* The only possible mapping was handled on last iteration */
   181			if (pvmw->pte)
   182				return not_found(pvmw);
   183	
   184			/* when pud is not present, pte will be NULL */
   185			pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
   186			if (!pvmw->pte)
   187				return false;
   188	
   189			pvmw->ptl = huge_pte_lockptr(page_hstate(page), mm, pvmw->pte);
   190			spin_lock(pvmw->ptl);
   191			if (!check_pte(pvmw))
   192				return not_found(pvmw);
   193			return true;
   194		}
   195	
   196		/*
   197		 * Seek to next pte only makes sense for THP.
   198		 * But more important than that optimization, is to filter out
   199		 * any PageKsm page: whose page->index misleads vma_address()
   200		 * and vma_address_end() to disaster.
   201		 */
   202		if (page)
   203			end = PageTransCompound(page) ?
   204			      vma_address_end(page, pvmw->vma) :
   205			      pvmw->address + PAGE_SIZE;
   206		else
   207			end = vma_pgoff_address_end(pvmw->index, pvmw->nr, pvmw->vma);
   208	
   209		if (pvmw->pte)
   210			goto next_pte;
   211	restart:
   212		do {
   213			pgd = pgd_offset(mm, pvmw->address);
   214			if (!pgd_present(*pgd)) {
   215				step_forward(pvmw, PGDIR_SIZE);
   216				continue;
   217			}
   218			p4d = p4d_offset(pgd, pvmw->address);
   219			if (!p4d_present(*p4d)) {
   220				step_forward(pvmw, P4D_SIZE);
   221				continue;
   222			}
   223			pud = pud_offset(p4d, pvmw->address);
   224			if (!pud_present(*pud)) {
   225				step_forward(pvmw, PUD_SIZE);
   226				continue;
   227			}
   228	
   229			pvmw->pmd = pmd_offset(pud, pvmw->address);
   230			/*
   231			 * Make sure the pmd value isn't cached in a register by the
   232			 * compiler and used as a stale value after we've observed a
   233			 * subsequent update.
   234			 */
   235			pmde = READ_ONCE(*pvmw->pmd);
   236	#ifdef CONFIG_TRANSPARENT_HUGEPAGE
   237			if (pmd_leaf(pmde) || is_pmd_migration_entry(pmde)) {
   238				pvmw->ptl = pmd_lock(mm, pvmw->pmd);
   239				pmde = *pvmw->pmd;
   240				if (likely(pmd_leaf(pmde))) {
   241					if (pvmw->flags & PVMW_MIGRATION)
   242						return not_found(pvmw);
   243					if (pmd_pfn(pmde) != pfn)
   244						return not_found(pvmw);
   245					return true;
   246				}
   247				if (!pmd_present(pmde)) {
   248					swp_entry_t entry;
   249	
   250					if (!thp_migration_supported() ||
   251					    !(pvmw->flags & PVMW_MIGRATION))
   252						return not_found(pvmw);
   253					entry = pmd_to_swp_entry(pmde);
   254					if (!is_migration_entry(entry) ||
   255					    pfn_swap_entry_to_pfn(entry) != pfn)
   256						return not_found(pvmw);
   257					return true;
   258				}
   259				/* THP pmd was split under us: handle on pte level */
   260				spin_unlock(pvmw->ptl);
   261				pvmw->ptl = NULL;
   262			} else
   263	#endif
   264			if (!pmd_present(pmde)) {
   265				/*
   266				 * If PVMW_SYNC, take and drop THP pmd lock so that we
   267				 * cannot return prematurely, while zap_huge_pmd() has
   268				 * cleared *pmd but not decremented compound_mapcount().
   269				 */
   270				if ((pvmw->flags & PVMW_SYNC) && page &&
   271				    PageTransCompound(page)) {
   272					spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
   273	
   274					spin_unlock(ptl);
   275				}
   276				step_forward(pvmw, PMD_SIZE);
   277				continue;
   278			}
   279			if (!map_pte(pvmw))
   280				goto next_pte;
   281	this_pte:
   282			if (check_pte(pvmw))
   283				return true;
   284	next_pte:
   285			do {
   286				pvmw->address += PAGE_SIZE;
   287				if (pvmw->address >= end)
   288					return not_found(pvmw);
   289				/* Did we cross page table boundary? */
   290				if ((pvmw->address & (PMD_SIZE - PAGE_SIZE)) == 0) {
   291					if (pvmw->ptl) {
   292						spin_unlock(pvmw->ptl);
   293						pvmw->ptl = NULL;
   294					}
   295					pte_unmap(pvmw->pte);
   296					pvmw->pte = NULL;
   297					goto restart;
   298				}
   299				pvmw->pte++;
   300				if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
   301					pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   302					spin_lock(pvmw->ptl);
   303				}
   304			} while (pte_none(*pvmw->pte));
   305	
   306			if (!pvmw->ptl) {
   307				pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   308				spin_lock(pvmw->ptl);
   309			}
   310			goto this_pte;
   311		} while (pvmw->address < end);
   312	
   313		return false;
   314	}
   315	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma
  2022-02-02 14:33 ` [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma Muchun Song
  2022-02-02 16:43   ` kernel test robot
@ 2022-02-02 18:28   ` Matthew Wilcox
  2022-02-03  0:18     ` kernel test robot
  2 siblings, 0 replies; 13+ messages in thread
From: Matthew Wilcox @ 2022-02-02 18:28 UTC (permalink / raw)
  To: Muchun Song
  Cc: dan.j.williams, jack, viro, akpm, apopple, shy828301, rcampbell,
	hughd, xiyuyang19, kirill.shutemov, zwisler, hch, linux-fsdevel,
	nvdimm, linux-kernel, linux-mm, duanxiongchun

On Wed, Feb 02, 2022 at 10:33:04PM +0800, Muchun Song wrote:
> page_vma_mapped_walk() is supposed to check if a page is mapped into a vma.
> However, not all page frames (e.g. PFN_DEV) have an associated struct page.
> Anyone who wants to check whether a pfn (without a struct page) is mapped
> into a vma would end up duplicating much of this function. So add support
> for checking if a pfn is mapped into a vma. The next patch will use this
> new feature for DAX.

I'm coming to more or less the same solution for fixing the bug in
page_mapped_in_vma().  If you call it with a head page, it will look
for any page in the THP instead of the precise page.  I think we can do
a fairly significant simplification though, so I'm going to go off
and work on that next ...


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma
  2022-02-02 14:33 ` [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma Muchun Song
@ 2022-02-03  0:18     ` kernel test robot
  2022-02-02 18:28   ` Matthew Wilcox
  2022-02-03  0:18     ` kernel test robot
  2 siblings, 0 replies; 13+ messages in thread
From: kernel test robot @ 2022-02-03  0:18 UTC (permalink / raw)
  To: Muchun Song, dan.j.williams, willy, jack, viro, akpm, apopple,
	shy828301, rcampbell, hughd, xiyuyang19
  Cc: llvm, kbuild-all

Hi Muchun,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on hnaz-mm/master v5.17-rc2 next-20220202]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Muchun-Song/Fix-some-bugs-related-to-ramp-and-dax/20220202-223615
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 9f7fb8de5d9bac17b6392a14af40baf555d9129b
config: i386-randconfig-a014-20220131 (https://download.01.org/0day-ci/archive/20220203/202202030744.TkknA58x-lkp@intel.com/config)
compiler: clang version 14.0.0 (https://github.com/llvm/llvm-project 6b1e844b69f15bb7dffaf9365cd2b355d2eb7579)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/64a64c01138fd43f8d9ac17a47c813b55231c325
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Muchun-Song/Fix-some-bugs-related-to-ramp-and-dax/20220202-223615
        git checkout 64a64c01138fd43f8d9ac17a47c813b55231c325
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

>> mm/page_vma_mapped.c:165:16: warning: variable 'pfn' set but not used [-Wunused-but-set-variable]
           unsigned long pfn;
                         ^
   1 warning generated.


vim +/pfn +165 mm/page_vma_mapped.c

   135	
   136	/**
   137	 * page_vma_mapped_walk - check if @pvmw->page or @pvmw->pfn is mapped in
   138	 * @pvmw->vma at @pvmw->address
   139	 * @pvmw: pointer to struct page_vma_mapped_walk. page (or pfn and nr and
   140	 * index), vma, address and flags must be set. pmd, pte and ptl must be NULL.
   141	 *
   142	 * Returns true if the page or pfn is mapped in the vma. @pvmw->pmd and
   143	 * @pvmw->pte point to relevant page table entries. @pvmw->ptl is locked.
   144	 * @pvmw->address is adjusted if needed (for PTE-mapped THPs).
   145	 *
   146	 * If @pvmw->pmd is set but @pvmw->pte is not, you have found PMD-mapped page
   147	 * (usually THP or Huge DEVMAP). For PMD-mapped page, you should run
   148	 * page_vma_mapped_walk() in a loop to find all PTEs that map the huge page.
   149	 *
   150	 * For HugeTLB pages, @pvmw->pte is set to the relevant page table entry
   151	 * regardless of which page table level the page is mapped at. @pvmw->pmd is
   152	 * NULL.
   153	 *
   154	 * Returns false if there are no more page table entries for the page or pfn in
   155	 * the vma. @pvmw->ptl is unlocked and @pvmw->pte is unmapped.
   156	 *
   157	 * If you need to stop the walk before page_vma_mapped_walk() returned false,
   158	 * use page_vma_mapped_walk_done(). It will do the housekeeping.
   159	 */
   160	bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
   161	{
   162		struct mm_struct *mm = pvmw->vma->vm_mm;
   163		struct page *page = NULL;
   164		unsigned long end;
 > 165		unsigned long pfn;
   166		pgd_t *pgd;
   167		p4d_t *p4d;
   168		pud_t *pud;
   169		pmd_t pmde;
   170	
   171		/* The only possible pmd mapping has been handled on last iteration */
   172		if (pvmw->pmd && !pvmw->pte)
   173			return not_found(pvmw);
   174	
   175		if (!(pvmw->flags & PVMW_PFN_WALK))
   176			page = pvmw->page;
   177		pfn = page ? page_to_pfn(page) : pvmw->pfn;
   178	
   179		if (unlikely(page && PageHuge(page))) {
   180			/* The only possible mapping was handled on last iteration */
   181			if (pvmw->pte)
   182				return not_found(pvmw);
   183	
   184			/* when pud is not present, pte will be NULL */
   185			pvmw->pte = huge_pte_offset(mm, pvmw->address, page_size(page));
   186			if (!pvmw->pte)
   187				return false;
   188	
   189			pvmw->ptl = huge_pte_lockptr(page_hstate(page), mm, pvmw->pte);
   190			spin_lock(pvmw->ptl);
   191			if (!check_pte(pvmw))
   192				return not_found(pvmw);
   193			return true;
   194		}
   195	
   196		/*
   197		 * Seek to next pte only makes sense for THP.
   198		 * But more important than that optimization, is to filter out
   199		 * any PageKsm page: whose page->index misleads vma_address()
   200		 * and vma_address_end() to disaster.
   201		 */
   202		if (page)
   203			end = PageTransCompound(page) ?
   204			      vma_address_end(page, pvmw->vma) :
   205			      pvmw->address + PAGE_SIZE;
   206		else
   207			end = vma_pgoff_address_end(pvmw->index, pvmw->nr, pvmw->vma);
   208	
   209		if (pvmw->pte)
   210			goto next_pte;
   211	restart:
   212		do {
   213			pgd = pgd_offset(mm, pvmw->address);
   214			if (!pgd_present(*pgd)) {
   215				step_forward(pvmw, PGDIR_SIZE);
   216				continue;
   217			}
   218			p4d = p4d_offset(pgd, pvmw->address);
   219			if (!p4d_present(*p4d)) {
   220				step_forward(pvmw, P4D_SIZE);
   221				continue;
   222			}
   223			pud = pud_offset(p4d, pvmw->address);
   224			if (!pud_present(*pud)) {
   225				step_forward(pvmw, PUD_SIZE);
   226				continue;
   227			}
   228	
   229			pvmw->pmd = pmd_offset(pud, pvmw->address);
   230			/*
   231			 * Make sure the pmd value isn't cached in a register by the
   232			 * compiler and used as a stale value after we've observed a
   233			 * subsequent update.
   234			 */
   235			pmde = READ_ONCE(*pvmw->pmd);
   236	#ifdef CONFIG_TRANSPARENT_HUGEPAGE
   237			if (pmd_leaf(pmde) || is_pmd_migration_entry(pmde)) {
   238				pvmw->ptl = pmd_lock(mm, pvmw->pmd);
   239				pmde = *pvmw->pmd;
   240				if (likely(pmd_leaf(pmde))) {
   241					if (pvmw->flags & PVMW_MIGRATION)
   242						return not_found(pvmw);
   243					if (pmd_pfn(pmde) != pfn)
   244						return not_found(pvmw);
   245					return true;
   246				}
   247				if (!pmd_present(pmde)) {
   248					swp_entry_t entry;
   249	
   250					if (!thp_migration_supported() ||
   251					    !(pvmw->flags & PVMW_MIGRATION))
   252						return not_found(pvmw);
   253					entry = pmd_to_swp_entry(pmde);
   254					if (!is_migration_entry(entry) ||
   255					    pfn_swap_entry_to_pfn(entry) != pfn)
   256						return not_found(pvmw);
   257					return true;
   258				}
   259				/* THP pmd was split under us: handle on pte level */
   260				spin_unlock(pvmw->ptl);
   261				pvmw->ptl = NULL;
   262			} else
   263	#endif
   264			if (!pmd_present(pmde)) {
   265				/*
   266				 * If PVMW_SYNC, take and drop THP pmd lock so that we
   267				 * cannot return prematurely, while zap_huge_pmd() has
   268				 * cleared *pmd but not decremented compound_mapcount().
   269				 */
   270				if ((pvmw->flags & PVMW_SYNC) && page &&
   271				    PageTransCompound(page)) {
   272					spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
   273	
   274					spin_unlock(ptl);
   275				}
   276				step_forward(pvmw, PMD_SIZE);
   277				continue;
   278			}
   279			if (!map_pte(pvmw))
   280				goto next_pte;
   281	this_pte:
   282			if (check_pte(pvmw))
   283				return true;
   284	next_pte:
   285			do {
   286				pvmw->address += PAGE_SIZE;
   287				if (pvmw->address >= end)
   288					return not_found(pvmw);
   289				/* Did we cross page table boundary? */
   290				if ((pvmw->address & (PMD_SIZE - PAGE_SIZE)) == 0) {
   291					if (pvmw->ptl) {
   292						spin_unlock(pvmw->ptl);
   293						pvmw->ptl = NULL;
   294					}
   295					pte_unmap(pvmw->pte);
   296					pvmw->pte = NULL;
   297					goto restart;
   298				}
   299				pvmw->pte++;
   300				if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
   301					pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   302					spin_lock(pvmw->ptl);
   303				}
   304			} while (pte_none(*pvmw->pte));
   305	
   306			if (!pvmw->ptl) {
   307				pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   308				spin_lock(pvmw->ptl);
   309			}
   310			goto this_pte;
   311		} while (pvmw->address < end);
   312	
   313		return false;
   314	}
   315	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

   260				spin_unlock(pvmw->ptl);
   261				pvmw->ptl = NULL;
   262			} else
   263	#endif
   264			if (!pmd_present(pmde)) {
   265				/*
   266				 * If PVMW_SYNC, take and drop THP pmd lock so that we
   267				 * cannot return prematurely, while zap_huge_pmd() has
   268				 * cleared *pmd but not decremented compound_mapcount().
   269				 */
   270				if ((pvmw->flags & PVMW_SYNC) && page &&
   271				    PageTransCompound(page)) {
   272					spinlock_t *ptl = pmd_lock(mm, pvmw->pmd);
   273	
   274					spin_unlock(ptl);
   275				}
   276				step_forward(pvmw, PMD_SIZE);
   277				continue;
   278			}
   279			if (!map_pte(pvmw))
   280				goto next_pte;
   281	this_pte:
   282			if (check_pte(pvmw))
   283				return true;
   284	next_pte:
   285			do {
   286				pvmw->address += PAGE_SIZE;
   287				if (pvmw->address >= end)
   288					return not_found(pvmw);
   289				/* Did we cross page table boundary? */
   290				if ((pvmw->address & (PMD_SIZE - PAGE_SIZE)) == 0) {
   291					if (pvmw->ptl) {
   292						spin_unlock(pvmw->ptl);
   293						pvmw->ptl = NULL;
   294					}
   295					pte_unmap(pvmw->pte);
   296					pvmw->pte = NULL;
   297					goto restart;
   298				}
   299				pvmw->pte++;
   300				if ((pvmw->flags & PVMW_SYNC) && !pvmw->ptl) {
   301					pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   302					spin_lock(pvmw->ptl);
   303				}
   304			} while (pte_none(*pvmw->pte));
   305	
   306			if (!pvmw->ptl) {
   307				pvmw->ptl = pte_lockptr(mm, pvmw->pmd);
   308				spin_lock(pvmw->ptl);
   309			}
   310			goto this_pte;
   311		} while (pvmw->address < end);
   312	
   313		return false;
   314	}
   315	
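
As a usage illustration of the interface described by the kernel-doc above
(the caller sets either page, or pfn plus nr plus index, together with vma,
address and flags): the following is a hypothetical caller sketch, not taken
from the series. The field and flag names follow the quoted code; the
function wrapped around them is made up, and the struct layout in this
version of the series may differ in detail.

        /* Hypothetical sketch of a pfn-based walk as enabled by patch 3/6. */
        static void walk_pfn_range(struct vm_area_struct *vma, unsigned long pfn,
                                   unsigned long nr_pages, pgoff_t pgoff,
                                   unsigned long address)
        {
                struct page_vma_mapped_walk pvmw = {
                        .pfn     = pfn,
                        .nr      = nr_pages,
                        .index   = pgoff,
                        .vma     = vma,
                        .address = address,
                        .flags   = PVMW_PFN_WALK,
                };

                while (page_vma_mapped_walk(&pvmw)) {
                        if (pvmw.pte) {
                                /* pvmw.pte maps one pfn of the range (PTE level). */
                        } else {
                                /* pvmw.pmd maps the range as a huge page. */
                        }
                }
        }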

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 1/6] mm: rmap: fix cache flush on THP pages
  2022-02-02 14:33 ` [PATCH v2 1/6] mm: rmap: fix cache flush on THP pages Muchun Song
@ 2022-02-03 10:16   ` Jan Kara
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Kara @ 2022-02-03 10:16 UTC (permalink / raw)
  To: Muchun Song
  Cc: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch,
	linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun

On Wed 02-02-22 22:33:02, Muchun Song wrote:
> The flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
> However, for a THP it does not cover all of the subpages, only the head
> page. Replace it with flush_cache_range() to fix this issue. So far no
> problems have been observed due to this, perhaps because few architectures
> have virtually indexed caches.
> 
> Fixes: f27176cfc363 ("mm: convert page_mkclean_one() to use page_vma_mapped_walk()")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Reviewed-by: Yang Shi <shy828301@gmail.com>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  mm/rmap.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b0fd9dc19eba..0ba12dc9fae3 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -974,7 +974,8 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
>  			if (!pmd_dirty(*pmd) && !pmd_write(*pmd))
>  				continue;
>  
> -			flush_cache_page(vma, address, page_to_pfn(page));
> +			flush_cache_range(vma, address,
> +					  address + HPAGE_PMD_SIZE);
>  			entry = pmdp_invalidate(vma, address, pmd);
>  			entry = pmd_wrprotect(entry);
>  			entry = pmd_mkclean(entry);
> -- 
> 2.11.0
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR
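
As an aside on the change quoted above (the same reasoning applies to the
dax counterpart in patch 2/6): flush_cache_page() covers a single PAGE_SIZE
window, so for a PMD-mapped THP it only touches the head page. The fragment
below is illustrative only and not part of the patch; it contrasts a
per-subpage loop one could write with the single ranged call the patch uses.

        unsigned long i;

        /* Before the fix: only the first PAGE_SIZE at 'address' is flushed. */
        flush_cache_page(vma, address, page_to_pfn(page));

        /*
         * The same coverage spelled out per subpage (a compound page has
         * contiguous pfns), shown only for illustration:
         */
        for (i = 0; i < HPAGE_PMD_NR; i++)
                flush_cache_page(vma, address + i * PAGE_SIZE,
                                 page_to_pfn(page) + i);

        /* What the patch does instead: one call over the whole huge page. */
        flush_cache_range(vma, address, address + HPAGE_PMD_SIZE);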

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: [PATCH v2 2/6] dax: fix cache flush on PMD-mapped pages
  2022-02-02 14:33 ` [PATCH v2 2/6] dax: fix cache flush on PMD-mapped pages Muchun Song
@ 2022-02-03 10:17   ` Jan Kara
  0 siblings, 0 replies; 13+ messages in thread
From: Jan Kara @ 2022-02-03 10:17 UTC (permalink / raw)
  To: Muchun Song
  Cc: dan.j.williams, willy, jack, viro, akpm, apopple, shy828301,
	rcampbell, hughd, xiyuyang19, kirill.shutemov, zwisler, hch,
	linux-fsdevel, nvdimm, linux-kernel, linux-mm, duanxiongchun

On Wed 02-02-22 22:33:03, Muchun Song wrote:
> The flush_cache_page() only removes a PAGE_SIZE sized range from the cache.
> However, for a THP it does not cover all of the subpages, only the head
> page. Replace it with flush_cache_range() to fix this issue.
> 
> Fixes: f729c8c9b24f ("dax: wrprotect pmd_t in dax_mapping_entry_mkclean")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  fs/dax.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index 88be1c02a151..e031e4b6c13c 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -857,7 +857,8 @@ static void dax_entry_mkclean(struct address_space *mapping, pgoff_t index,
>  			if (!pmd_dirty(*pmdp) && !pmd_write(*pmdp))
>  				goto unlock_pmd;
>  
> -			flush_cache_page(vma, address, pfn);
> +			flush_cache_range(vma, address,
> +					  address + HPAGE_PMD_SIZE);
>  			pmd = pmdp_invalidate(vma, address, pmdp);
>  			pmd = pmd_wrprotect(pmd);
>  			pmd = pmd_mkclean(pmd);
> -- 
> 2.11.0
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2022-02-03 10:17 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-02 14:33 [PATCH v2 0/6] Fix some bugs related to ramp and dax Muchun Song
2022-02-02 14:33 ` [PATCH v2 1/6] mm: rmap: fix cache flush on THP pages Muchun Song
2022-02-03 10:16   ` Jan Kara
2022-02-02 14:33 ` [PATCH v2 2/6] dax: fix cache flush on PMD-mapped pages Muchun Song
2022-02-03 10:17   ` Jan Kara
2022-02-02 14:33 ` [PATCH v2 3/6] mm: page_vma_mapped: support checking if a pfn is mapped into a vma Muchun Song
2022-02-02 16:43   ` kernel test robot
2022-02-02 18:28   ` Matthew Wilcox
2022-02-03  0:18   ` kernel test robot
2022-02-03  0:18     ` kernel test robot
2022-02-02 14:33 ` [PATCH v2 4/6] mm: rmap: introduce pfn_mkclean_range() to cleans PTEs Muchun Song
2022-02-02 14:33 ` [PATCH v2 5/6] dax: fix missing writeprotect the pte entry Muchun Song
2022-02-02 14:33 ` [PATCH v2 6/6] mm: remove range parameter from follow_invalidate_pte() Muchun Song
