linux-mm.kvack.org archive mirror
* [PATCH 00/13] Fix the DAX-gup mistake
@ 2022-09-04  2:16 Dan Williams
  2022-09-04  2:16 ` [PATCH 01/13] fsdax: Rename "busy page" to "pinned page" Dan Williams
                   ` (13 more replies)
  0 siblings, 14 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm
  Cc: Jason Gunthorpe, Jan Kara, Christoph Hellwig, Darrick J. Wong,
	John Hubbard, Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

tl;dr: Move the pin of 'struct dev_pagemap' instances from gup-time to
map time, move the unpin of 'struct dev_pagemap' to truncate_inode_pages()
for fsdax and devdax inodes, and use page_maybe_dma_pinned() to
determine when filesystems can safely truncate DAX mappings versus
waiting for in-flight DMA to complete.

The longer story is that DAX has caused friction with folio development
and other device-memory use cases due to its hack of using a
page-reference count of 1 to indicate that the page is DMA idle. That
situation arose from the mistake of not managing DAX page reference
counts at map time. The lack of page reference counting at map time grew
organically from the original DAX experiment of attempting to manage DAX
mappings without page structures. The page lock, dirty tracking and
other entry management were supported sans pages. However, page
support was then bolted on incrementally to solve problems with gup,
memory-failure, and all the other kernel services that are missing when
a pfn does not have an associated page structure.

Since then John has led an effort to account for when a page is pinned
for DMA vs other sources that elevate the reference count. The
page_maybe_dma_pinned() helper slots in seamlessly to replace the need
to track transitions to page->_refcount == 1.
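
As a concrete illustration, the DMA-busy test in the DAX truncate path
moves from a raw refcount comparison to the pin-accounting helper
(condensed from the patch 2 hunks later in the thread, not new code):

	/* old scheme: busy while _refcount exceeds the map-time baseline of 1 */
	if (page_ref_count(page) > 1)
		return page;

	/* new scheme: busy only while gup holds a FOLL_PIN (DMA) reference */
	if (page_maybe_dma_pinned(page))
		return page;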

The larger change in this set comes from Jason's observation that
inserting DAX mappings without any reference taken is a bug. So
dax_insert_entry(), which fsdax uses, is updated to take 'struct
dev_pagemap' references, and devdax is updated to reuse the same.

This allows for 'struct dev_pagemap' manipulation to be self-contained
to DAX-specific paths. It is also a foundation for removing
pte_devmap(), treating DAX pages as just another vm_normal_page(), and
perhaps converting more of the DAX infrastructure to reuse typical page
mapping helpers. One of the immediate hurdles is the use of
pmd_devmap() to distinguish large page mappings that are not transparent
huge pages.

---

Dan Williams (13):
      fsdax: Rename "busy page" to "pinned page"
      fsdax: Use page_maybe_dma_pinned() for DAX vs DMA collisions
      fsdax: Delete put_devmap_managed_page_refs()
      fsdax: Update dax_insert_entry() calling convention to return an error
      fsdax: Cleanup dax_associate_entry()
      fsdax: Rework dax_insert_entry() calling convention
      fsdax: Manage pgmap references at entry insertion and deletion
      devdax: Minor warning fixups
      devdax: Move address_space helpers to the DAX core
      dax: Prep dax_{associate,disassociate}_entry() for compound pages
      devdax: add PUD support to the DAX mapping infrastructure
      devdax: Use dax_insert_entry() + dax_delete_mapping_entry()
      mm/gup: Drop DAX pgmap accounting


 .clang-format             |    1
 drivers/Makefile          |    2
 drivers/dax/Kconfig       |    6
 drivers/dax/Makefile      |    1
 drivers/dax/bus.c         |    2
 drivers/dax/dax-private.h |    1
 drivers/dax/device.c      |   73 ++-
 drivers/dax/mapping.c     | 1020 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/dax/super.c       |    2
 fs/dax.c                  | 1049 ++-------------------------------------------
 fs/ext4/inode.c           |    9
 fs/fuse/dax.c             |   10
 fs/xfs/xfs_file.c         |    8
 fs/xfs/xfs_inode.c        |    2
 include/linux/dax.h       |  124 ++++-
 include/linux/huge_mm.h   |   23 -
 include/linux/memremap.h  |   24 +
 include/linux/mm.h        |   58 +-
 mm/gup.c                  |   92 +---
 mm/huge_memory.c          |   54 --
 mm/memremap.c             |   31 -
 mm/swap.c                 |    2
 22 files changed, 1326 insertions(+), 1268 deletions(-)
 create mode 100644 drivers/dax/mapping.c

base-commit: 1c23f9e627a7b412978b4e852793c5e3c3efc555



* [PATCH 01/13] fsdax: Rename "busy page" to "pinned page"
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
@ 2022-09-04  2:16 ` Dan Williams
  2022-09-04  2:16 ` [PATCH 02/13] fsdax: Use page_maybe_dma_pinned() for DAX vs DMA collisions Dan Williams
                   ` (12 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Jason Gunthorpe,
	Christoph Hellwig, linux-mm, nvdimm, linux-fsdevel

The reason FSDAX needs to hold off truncate is that pages may be
undergoing DMA. Replace the DAX-specific "busy" terminology with the
"pinned" term. This is in preparation for moving FSDAX from watching
transitions of page->_refcount to '1' to observing
page_maybe_dma_pinned() returning false.

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 fs/dax.c            |   16 ++++++++--------
 fs/ext4/inode.c     |    2 +-
 fs/fuse/dax.c       |    4 ++--
 fs/xfs/xfs_file.c   |    2 +-
 fs/xfs/xfs_inode.c  |    2 +-
 include/linux/dax.h |   10 ++++++----
 6 files changed, 19 insertions(+), 17 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index c440dcef4b1b..0f22f7b46de0 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -407,7 +407,7 @@ static void dax_disassociate_entry(void *entry, struct address_space *mapping,
 	}
 }
 
-static struct page *dax_busy_page(void *entry)
+static struct page *dax_pinned_page(void *entry)
 {
 	unsigned long pfn;
 
@@ -665,7 +665,7 @@ static void *grab_mapping_entry(struct xa_state *xas,
 }
 
 /**
- * dax_layout_busy_page_range - find first pinned page in @mapping
+ * dax_layout_pinned_page_range - find first pinned page in @mapping
  * @mapping: address space to scan for a page with ref count > 1
  * @start: Starting offset. Page containing 'start' is included.
  * @end: End offset. Page containing 'end' is included. If 'end' is LLONG_MAX,
@@ -682,7 +682,7 @@ static void *grab_mapping_entry(struct xa_state *xas,
  * to be able to run unmap_mapping_range() and subsequently not race
  * mapping_mapped() becoming true.
  */
-struct page *dax_layout_busy_page_range(struct address_space *mapping,
+struct page *dax_layout_pinned_page_range(struct address_space *mapping,
 					loff_t start, loff_t end)
 {
 	void *entry;
@@ -727,7 +727,7 @@ struct page *dax_layout_busy_page_range(struct address_space *mapping,
 		if (unlikely(dax_is_locked(entry)))
 			entry = get_unlocked_entry(&xas, 0);
 		if (entry)
-			page = dax_busy_page(entry);
+			page = dax_pinned_page(entry);
 		put_unlocked_entry(&xas, entry, WAKE_NEXT);
 		if (page)
 			break;
@@ -742,13 +742,13 @@ struct page *dax_layout_busy_page_range(struct address_space *mapping,
 	xas_unlock_irq(&xas);
 	return page;
 }
-EXPORT_SYMBOL_GPL(dax_layout_busy_page_range);
+EXPORT_SYMBOL_GPL(dax_layout_pinned_page_range);
 
-struct page *dax_layout_busy_page(struct address_space *mapping)
+struct page *dax_layout_pinned_page(struct address_space *mapping)
 {
-	return dax_layout_busy_page_range(mapping, 0, LLONG_MAX);
+	return dax_layout_pinned_page_range(mapping, 0, LLONG_MAX);
 }
-EXPORT_SYMBOL_GPL(dax_layout_busy_page);
+EXPORT_SYMBOL_GPL(dax_layout_pinned_page);
 
 static int __dax_invalidate_entry(struct address_space *mapping,
 					  pgoff_t index, bool trunc)
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 601214453c3a..bf49bf506965 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3957,7 +3957,7 @@ int ext4_break_layouts(struct inode *inode)
 		return -EINVAL;
 
 	do {
-		page = dax_layout_busy_page(inode->i_mapping);
+		page = dax_layout_pinned_page(inode->i_mapping);
 		if (!page)
 			return 0;
 
diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
index e23e802a8013..e0b846f16bc5 100644
--- a/fs/fuse/dax.c
+++ b/fs/fuse/dax.c
@@ -443,7 +443,7 @@ static int fuse_setup_new_dax_mapping(struct inode *inode, loff_t pos,
 
 	/*
 	 * Can't do inline reclaim in fault path. We call
-	 * dax_layout_busy_page() before we free a range. And
+	 * dax_layout_pinned_page() before we free a range. And
 	 * fuse_wait_dax_page() drops mapping->invalidate_lock and requires it.
 	 * In fault path we enter with mapping->invalidate_lock held and can't
 	 * drop it. Also in fault path we hold mapping->invalidate_lock shared
@@ -671,7 +671,7 @@ static int __fuse_dax_break_layouts(struct inode *inode, bool *retry,
 {
 	struct page *page;
 
-	page = dax_layout_busy_page_range(inode->i_mapping, start, end);
+	page = dax_layout_pinned_page_range(inode->i_mapping, start, end);
 	if (!page)
 		return 0;
 
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index c6c80265c0b2..954bb6e83796 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -822,7 +822,7 @@ xfs_break_dax_layouts(
 
 	ASSERT(xfs_isilocked(XFS_I(inode), XFS_MMAPLOCK_EXCL));
 
-	page = dax_layout_busy_page(inode->i_mapping);
+	page = dax_layout_pinned_page(inode->i_mapping);
 	if (!page)
 		return 0;
 
diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index 28493c8e9bb2..9d0bea03501e 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -3481,7 +3481,7 @@ xfs_mmaplock_two_inodes_and_break_dax_layout(
 	 * need to unlock & lock the XFS_MMAPLOCK_EXCL which is not suitable
 	 * for this nested lock case.
 	 */
-	page = dax_layout_busy_page(VFS_I(ip2)->i_mapping);
+	page = dax_layout_pinned_page(VFS_I(ip2)->i_mapping);
 	if (page && page_ref_count(page) != 1) {
 		xfs_iunlock(ip2, XFS_MMAPLOCK_EXCL);
 		xfs_iunlock(ip1, XFS_MMAPLOCK_EXCL);
diff --git a/include/linux/dax.h b/include/linux/dax.h
index ba985333e26b..54f099166a29 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -157,8 +157,8 @@ static inline void fs_put_dax(struct dax_device *dax_dev, void *holder)
 int dax_writeback_mapping_range(struct address_space *mapping,
 		struct dax_device *dax_dev, struct writeback_control *wbc);
 
-struct page *dax_layout_busy_page(struct address_space *mapping);
-struct page *dax_layout_busy_page_range(struct address_space *mapping, loff_t start, loff_t end);
+struct page *dax_layout_pinned_page(struct address_space *mapping);
+struct page *dax_layout_pinned_page_range(struct address_space *mapping, loff_t start, loff_t end);
 dax_entry_t dax_lock_page(struct page *page);
 void dax_unlock_page(struct page *page, dax_entry_t cookie);
 dax_entry_t dax_lock_mapping_entry(struct address_space *mapping,
@@ -166,12 +166,14 @@ dax_entry_t dax_lock_mapping_entry(struct address_space *mapping,
 void dax_unlock_mapping_entry(struct address_space *mapping,
 		unsigned long index, dax_entry_t cookie);
 #else
-static inline struct page *dax_layout_busy_page(struct address_space *mapping)
+static inline struct page *dax_layout_pinned_page(struct address_space *mapping)
 {
 	return NULL;
 }
 
-static inline struct page *dax_layout_busy_page_range(struct address_space *mapping, pgoff_t start, pgoff_t nr_pages)
+static inline struct page *
+dax_layout_pinned_page_range(struct address_space *mapping, pgoff_t start,
+			     pgoff_t nr_pages)
 {
 	return NULL;
 }




* [PATCH 02/13] fsdax: Use page_maybe_dma_pinned() for DAX vs DMA collisions
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
  2022-09-04  2:16 ` [PATCH 01/13] fsdax: Rename "busy page" to "pinned page" Dan Williams
@ 2022-09-04  2:16 ` Dan Williams
  2022-09-06 12:07   ` Jason Gunthorpe
  2022-09-04  2:16 ` [PATCH 03/13] fsdax: Delete put_devmap_managed_page_refs() Dan Williams
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm
  Cc: Jan Kara, Darrick J. Wong, Christoph Hellwig, John Hubbard,
	Jason Gunthorpe, Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

The pin_user_pages() + page_maybe_dma_pinned() infrastructure is a
framework for tackling the kernel's struggles with gup+DMA.

DAX presents a unique flavor of the gup+DMA problem since pinned pages
are identical to physical filesystem blocks. Unlike the page-cache case,
a mapping of a file cannot be truncated while DMA is in flight because
the DMA must complete before the filesystem block is reclaimed.

DAX has a homegrown solution to this problem based on watching the
page->_refcount go idle. Beyond the awkwardness of catching that idle
transition in put_page(), it is overkill when only the
page_maybe_dma_pinned()
transition needs to be captured.

Move the wakeup of filesystem-DAX truncate paths
({ext4,xfs,fuse_dax}_break_layouts()) to unpin_user_pages() with a new
wakeup_fsdax_pin_waiters() helper, and use !page_maybe_dma_pinned() as
the wake condition.
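
For reference, the two halves of that handshake, condensed from the
hunks below (the truncate side sleeps on the pin state, the gup unpin
side wakes it):

	/* truncate side, e.g. xfs_break_dax_layouts(): sleep until unpinned */
	return ___wait_var_event(page, !page_maybe_dma_pinned(page),
				 TASK_INTERRUPTIBLE, 0, 0,
				 xfs_wait_dax_page(inode));

	/* unpin side, gup_put_folio(): drop the reference, then wake waiters */
	folio_put_refs(folio, refs);
	if (flags & FOLL_PIN)
		wakeup_fsdax_pin_waiters(folio);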

Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Reported-by: Jason Gunthorpe <jgg@nvidia.com>
Reported-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 fs/dax.c           |    4 ++--
 fs/ext4/inode.c    |    7 +++----
 fs/fuse/dax.c      |    6 +++---
 fs/xfs/xfs_file.c  |    6 +++---
 include/linux/mm.h |   28 ++++++++++++++++++++++++++++
 mm/gup.c           |    6 ++++--
 6 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 0f22f7b46de0..aceb587bc27e 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -395,7 +395,7 @@ static void dax_disassociate_entry(void *entry, struct address_space *mapping,
 	for_each_mapped_pfn(entry, pfn) {
 		struct page *page = pfn_to_page(pfn);
 
-		WARN_ON_ONCE(trunc && page_ref_count(page) > 1);
+		WARN_ON_ONCE(trunc && page_maybe_dma_pinned(page));
 		if (dax_mapping_is_cow(page->mapping)) {
 			/* keep the CoW flag if this page is still shared */
 			if (page->index-- > 0)
@@ -414,7 +414,7 @@ static struct page *dax_pinned_page(void *entry)
 	for_each_mapped_pfn(entry, pfn) {
 		struct page *page = pfn_to_page(pfn);
 
-		if (page_ref_count(page) > 1)
+		if (page_maybe_dma_pinned(page))
 			return page;
 	}
 	return NULL;
diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index bf49bf506965..5e68e64f155a 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -3961,10 +3961,9 @@ int ext4_break_layouts(struct inode *inode)
 		if (!page)
 			return 0;
 
-		error = ___wait_var_event(&page->_refcount,
-				atomic_read(&page->_refcount) == 1,
-				TASK_INTERRUPTIBLE, 0, 0,
-				ext4_wait_dax_page(inode));
+		error = ___wait_var_event(page, !page_maybe_dma_pinned(page),
+					  TASK_INTERRUPTIBLE, 0, 0,
+					  ext4_wait_dax_page(inode));
 	} while (error == 0);
 
 	return error;
diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
index e0b846f16bc5..6419ca420c42 100644
--- a/fs/fuse/dax.c
+++ b/fs/fuse/dax.c
@@ -676,9 +676,9 @@ static int __fuse_dax_break_layouts(struct inode *inode, bool *retry,
 		return 0;
 
 	*retry = true;
-	return ___wait_var_event(&page->_refcount,
-			atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
-			0, 0, fuse_wait_dax_page(inode));
+	return ___wait_var_event(page, !page_maybe_dma_pinned(page),
+				 TASK_INTERRUPTIBLE, 0, 0,
+				 fuse_wait_dax_page(inode));
 }
 
 /* dmap_end == 0 leads to unmapping of whole file */
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 954bb6e83796..dbffb9481b71 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -827,9 +827,9 @@ xfs_break_dax_layouts(
 		return 0;
 
 	*retry = true;
-	return ___wait_var_event(&page->_refcount,
-			atomic_read(&page->_refcount) == 1, TASK_INTERRUPTIBLE,
-			0, 0, xfs_wait_dax_page(inode));
+	return ___wait_var_event(page, !page_maybe_dma_pinned(page),
+				 TASK_INTERRUPTIBLE, 0, 0,
+				 xfs_wait_dax_page(inode));
 }
 
 int
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 3bedc449c14d..557d5447ebec 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1517,6 +1517,34 @@ static inline bool page_maybe_dma_pinned(struct page *page)
 	return folio_maybe_dma_pinned(page_folio(page));
 }
 
+#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_FS_DAX)
+/*
+ * Unlike typical file backed pages that support truncating a page from
+ * a file while it is under active DMA, DAX pages need to hold off
+ * truncate operations until transient page pins are released.
+ *
+ * The filesystem (via dax_layout_pinned_page()) takes steps to make
+ * sure that any observation of the !page_maybe_dma_pinned() state is
+ * stable until the truncation completes.
+ */
+static inline void wakeup_fsdax_pin_waiters(struct folio *folio)
+{
+	struct page *page = &folio->page;
+
+	if (!folio_is_zone_device(folio))
+		return;
+	if (page->pgmap->type != MEMORY_DEVICE_FS_DAX)
+		return;
+	if (folio_maybe_dma_pinned(folio))
+		return;
+	wake_up_var(page);
+}
+#else /* CONFIG_ZONE_DEVICE && CONFIG_FS_DAX */
+static inline void wakeup_fsdax_pin_waiters(struct folio *folio)
+{
+}
+#endif /* CONFIG_ZONE_DEVICE && CONFIG_FS_DAX */
+
 /*
  * This should most likely only be called during fork() to see whether we
  * should break the cow immediately for an anon page on the src mm.
diff --git a/mm/gup.c b/mm/gup.c
index 732825157430..499c46296fda 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -177,8 +177,10 @@ static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
 			refs *= GUP_PIN_COUNTING_BIAS;
 	}
 
-	if (!put_devmap_managed_page_refs(&folio->page, refs))
-		folio_put_refs(folio, refs);
+	folio_put_refs(folio, refs);
+
+	if (flags & FOLL_PIN)
+		wakeup_fsdax_pin_waiters(folio);
 }
 
 /**




* [PATCH 03/13] fsdax: Delete put_devmap_managed_page_refs()
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
  2022-09-04  2:16 ` [PATCH 01/13] fsdax: Rename "busy page" to "pinned page" Dan Williams
  2022-09-04  2:16 ` [PATCH 02/13] fsdax: Use page_maybe_dma_pinned() for DAX vs DMA collisions Dan Williams
@ 2022-09-04  2:16 ` Dan Williams
  2022-09-04  2:16 ` [PATCH 04/13] fsdax: Update dax_insert_entry() calling convention to return an error Dan Williams
                   ` (10 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Jason Gunthorpe,
	Christoph Hellwig, John Hubbard, linux-mm, nvdimm, linux-fsdevel

Now that fsdax DMA-idle detection no longer depends on catching
transitions of page->_refcount to 1, remove
put_devmap_managed_page_refs() and associated infrastructure.

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 include/linux/mm.h |   30 ------------------------------
 mm/gup.c           |    3 +--
 mm/memremap.c      |   18 ------------------
 mm/swap.c          |    2 --
 4 files changed, 1 insertion(+), 52 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 557d5447ebec..24f8682d0cd7 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1048,30 +1048,6 @@ vm_fault_t finish_mkwrite_fault(struct vm_fault *vmf);
  *   back into memory.
  */
 
-#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_FS_DAX)
-DECLARE_STATIC_KEY_FALSE(devmap_managed_key);
-
-bool __put_devmap_managed_page_refs(struct page *page, int refs);
-static inline bool put_devmap_managed_page_refs(struct page *page, int refs)
-{
-	if (!static_branch_unlikely(&devmap_managed_key))
-		return false;
-	if (!is_zone_device_page(page))
-		return false;
-	return __put_devmap_managed_page_refs(page, refs);
-}
-#else /* CONFIG_ZONE_DEVICE && CONFIG_FS_DAX */
-static inline bool put_devmap_managed_page_refs(struct page *page, int refs)
-{
-	return false;
-}
-#endif /* CONFIG_ZONE_DEVICE && CONFIG_FS_DAX */
-
-static inline bool put_devmap_managed_page(struct page *page)
-{
-	return put_devmap_managed_page_refs(page, 1);
-}
-
 /* 127: arbitrary random number, small enough to assemble well */
 #define folio_ref_zero_or_close_to_overflow(folio) \
 	((unsigned int) folio_ref_count(folio) + 127u <= 127u)
@@ -1168,12 +1144,6 @@ static inline void put_page(struct page *page)
 {
 	struct folio *folio = page_folio(page);
 
-	/*
-	 * For some devmap managed pages we need to catch refcount transition
-	 * from 2 to 1:
-	 */
-	if (put_devmap_managed_page(&folio->page))
-		return;
 	folio_put(folio);
 }
 
diff --git a/mm/gup.c b/mm/gup.c
index 499c46296fda..67dfffe97917 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -87,8 +87,7 @@ static inline struct folio *try_get_folio(struct page *page, int refs)
 	 * belongs to this folio.
 	 */
 	if (unlikely(page_folio(page) != folio)) {
-		if (!put_devmap_managed_page_refs(&folio->page, refs))
-			folio_put_refs(folio, refs);
+		folio_put_refs(folio, refs);
 		goto retry;
 	}
 
diff --git a/mm/memremap.c b/mm/memremap.c
index 58b20c3c300b..433500e955fb 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -507,21 +507,3 @@ void free_zone_device_page(struct page *page)
 	 */
 	set_page_count(page, 1);
 }
-
-#ifdef CONFIG_FS_DAX
-bool __put_devmap_managed_page_refs(struct page *page, int refs)
-{
-	if (page->pgmap->type != MEMORY_DEVICE_FS_DAX)
-		return false;
-
-	/*
-	 * fsdax page refcounts are 1-based, rather than 0-based: if
-	 * refcount is 1, then the page is free and the refcount is
-	 * stable because nobody holds a reference on the page.
-	 */
-	if (page_ref_sub_return(page, refs) == 1)
-		wake_up_var(&page->_refcount);
-	return true;
-}
-EXPORT_SYMBOL(__put_devmap_managed_page_refs);
-#endif /* CONFIG_FS_DAX */
diff --git a/mm/swap.c b/mm/swap.c
index 9cee7f6a3809..b346dd24cde8 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -960,8 +960,6 @@ void release_pages(struct page **pages, int nr)
 				unlock_page_lruvec_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
-			if (put_devmap_managed_page(&folio->page))
-				continue;
 			if (folio_put_testzero(folio))
 				free_zone_device_page(&folio->page);
 			continue;




* [PATCH 04/13] fsdax: Update dax_insert_entry() calling convention to return an error
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (2 preceding siblings ...)
  2022-09-04  2:16 ` [PATCH 03/13] fsdax: Delete put_devmap_managed_page_refs() Dan Williams
@ 2022-09-04  2:16 ` Dan Williams
  2022-09-04  2:16 ` [PATCH 05/13] fsdax: Cleanup dax_associate_entry() Dan Williams
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Jason Gunthorpe,
	Christoph Hellwig, John Hubbard, linux-mm, nvdimm, linux-fsdevel

In preparation for teaching dax_insert_entry() to take live @pgmap
references, enable it to return errors. Given the observation that all
callers overwrite the passed in entry with the return value, just update
@entry in place and convert the return code to a vm_fault_t status.
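
The resulting caller pattern, taken from the dax_load_hole() hunk below:

	/* before: the updated entry is the return value */
	*entry = dax_insert_entry(xas, vmf, iter, *entry, pfn, DAX_ZERO_PAGE);

	/* after: @entry is updated in place, errors come back as a vm_fault_t */
	ret = dax_insert_entry(xas, vmf, iter, entry, pfn, DAX_ZERO_PAGE);
	if (ret)
		goto out;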

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 fs/dax.c |   27 +++++++++++++++++++--------
 1 file changed, 19 insertions(+), 8 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index aceb587bc27e..d2fb58a7449b 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -853,14 +853,15 @@ static bool dax_fault_is_cow(const struct iomap_iter *iter)
  * already in the tree, we will skip the insertion and just dirty the PMD as
  * appropriate.
  */
-static void *dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
-		const struct iomap_iter *iter, void *entry, pfn_t pfn,
-		unsigned long flags)
+static vm_fault_t dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
+				   const struct iomap_iter *iter, void **pentry,
+				   pfn_t pfn, unsigned long flags)
 {
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	void *new_entry = dax_make_entry(pfn, flags);
 	bool dirty = !dax_fault_is_synchronous(iter, vmf->vma);
 	bool cow = dax_fault_is_cow(iter);
+	void *entry = *pentry;
 
 	if (dirty)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
@@ -906,7 +907,8 @@ static void *dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
 		xas_set_mark(xas, PAGECACHE_TAG_TOWRITE);
 
 	xas_unlock_irq(xas);
-	return entry;
+	*pentry = entry;
+	return 0;
 }
 
 static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
@@ -1154,9 +1156,12 @@ static vm_fault_t dax_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 	pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
 	vm_fault_t ret;
 
-	*entry = dax_insert_entry(xas, vmf, iter, *entry, pfn, DAX_ZERO_PAGE);
+	ret = dax_insert_entry(xas, vmf, iter, entry, pfn, DAX_ZERO_PAGE);
+	if (ret)
+		goto out;
 
 	ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
+out:
 	trace_dax_load_hole(inode, vmf, ret);
 	return ret;
 }
@@ -1173,6 +1178,7 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 	struct page *zero_page;
 	spinlock_t *ptl;
 	pmd_t pmd_entry;
+	vm_fault_t ret;
 	pfn_t pfn;
 
 	zero_page = mm_get_huge_zero_page(vmf->vma->vm_mm);
@@ -1181,8 +1187,10 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 		goto fallback;
 
 	pfn = page_to_pfn_t(zero_page);
-	*entry = dax_insert_entry(xas, vmf, iter, *entry, pfn,
-				  DAX_PMD | DAX_ZERO_PAGE);
+	ret = dax_insert_entry(xas, vmf, iter, entry, pfn,
+			       DAX_PMD | DAX_ZERO_PAGE);
+	if (ret)
+		return ret;
 
 	if (arch_needs_pgtable_deposit()) {
 		pgtable = pte_alloc_one(vma->vm_mm);
@@ -1534,6 +1542,7 @@ static vm_fault_t dax_fault_iter(struct vm_fault *vmf,
 	loff_t pos = (loff_t)xas->xa_index << PAGE_SHIFT;
 	bool write = iter->flags & IOMAP_WRITE;
 	unsigned long entry_flags = pmd ? DAX_PMD : 0;
+	vm_fault_t ret;
 	int err = 0;
 	pfn_t pfn;
 	void *kaddr;
@@ -1558,7 +1567,9 @@ static vm_fault_t dax_fault_iter(struct vm_fault *vmf,
 	if (err)
 		return pmd ? VM_FAULT_FALLBACK : dax_fault_return(err);
 
-	*entry = dax_insert_entry(xas, vmf, iter, *entry, pfn, entry_flags);
+	ret = dax_insert_entry(xas, vmf, iter, entry, pfn, entry_flags);
+	if (ret)
+		return ret;
 
 	if (write &&
 	    srcmap->type != IOMAP_HOLE && srcmap->addr != iomap->addr) {




* [PATCH 05/13] fsdax: Cleanup dax_associate_entry()
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (3 preceding siblings ...)
  2022-09-04  2:16 ` [PATCH 04/13] fsdax: Update dax_insert_entry() calling convention to return an error Dan Williams
@ 2022-09-04  2:16 ` Dan Williams
  2022-09-04  2:16 ` [PATCH 06/13] fsdax: Rework dax_insert_entry() calling convention Dan Williams
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Jason Gunthorpe,
	Christoph Hellwig, John Hubbard, linux-mm, nvdimm, linux-fsdevel

Pass @vmf to drop the separate @vma and @address arguments to
dax_associate_entry(), use the existing DAX flags to convey the @cow
argument, and replace the open-coded ALIGN().

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 fs/dax.c |    9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index d2fb58a7449b..fad1c8a1d913 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -362,7 +362,7 @@ static inline void dax_mapping_set_cow(struct page *page)
  * FS_DAX_MAPPING_COW, and use page->index as refcount.
  */
 static void dax_associate_entry(void *entry, struct address_space *mapping,
-		struct vm_area_struct *vma, unsigned long address, bool cow)
+				struct vm_fault *vmf, unsigned long flags)
 {
 	unsigned long size = dax_entry_size(entry), pfn, index;
 	int i = 0;
@@ -370,11 +370,11 @@ static void dax_associate_entry(void *entry, struct address_space *mapping,
 	if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
 		return;
 
-	index = linear_page_index(vma, address & ~(size - 1));
+	index = linear_page_index(vmf->vma, ALIGN(vmf->address, size));
 	for_each_mapped_pfn(entry, pfn) {
 		struct page *page = pfn_to_page(pfn);
 
-		if (cow) {
+		if (flags & DAX_COW) {
 			dax_mapping_set_cow(page);
 		} else {
 			WARN_ON_ONCE(page->mapping);
@@ -882,8 +882,7 @@ static vm_fault_t dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
 		void *old;
 
 		dax_disassociate_entry(entry, mapping, false);
-		dax_associate_entry(new_entry, mapping, vmf->vma, vmf->address,
-				cow);
+		dax_associate_entry(new_entry, mapping, vmf, flags);
 		/*
 		 * Only swap our new entry into the page cache if the current
 		 * entry is a zero page or an empty entry.  If a normal PTE or




* [PATCH 06/13] fsdax: Rework dax_insert_entry() calling convention
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (4 preceding siblings ...)
  2022-09-04  2:16 ` [PATCH 05/13] fsdax: Cleanup dax_associate_entry() Dan Williams
@ 2022-09-04  2:16 ` Dan Williams
  2022-09-04  2:16 ` [PATCH 07/13] fsdax: Manage pgmap references at entry insertion and deletion Dan Williams
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Jason Gunthorpe,
	Christoph Hellwig, John Hubbard, linux-mm, nvdimm, linux-fsdevel

Move the determination of @dirty and @cow in dax_insert_entry() to flags
(DAX_DIRTY and DAX_COW) that are passed in. This allows the
iomap-related code to remain in fs/dax.c in preparation for the Xarray
infrastructure to move to drivers/dax/mapping.c.

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 fs/dax.c |   44 +++++++++++++++++++++++++++++++++++---------
 1 file changed, 35 insertions(+), 9 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index fad1c8a1d913..65d55c5ecdef 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -75,11 +75,19 @@ fs_initcall(init_dax_wait_table);
  * block allocation.
  */
 #define DAX_SHIFT	(4)
+#define DAX_MASK	((1UL << DAX_SHIFT) - 1)
 #define DAX_LOCKED	(1UL << 0)
 #define DAX_PMD		(1UL << 1)
 #define DAX_ZERO_PAGE	(1UL << 2)
 #define DAX_EMPTY	(1UL << 3)
 
+/*
+ * These flags are not conveyed in Xarray value entries, they are just
+ * modifiers to dax_insert_entry().
+ */
+#define DAX_DIRTY (1UL << (DAX_SHIFT + 0))
+#define DAX_COW   (1UL << (DAX_SHIFT + 1))
+
 static unsigned long dax_to_pfn(void *entry)
 {
 	return xa_to_value(entry) >> DAX_SHIFT;
@@ -87,7 +95,8 @@ static unsigned long dax_to_pfn(void *entry)
 
 static void *dax_make_entry(pfn_t pfn, unsigned long flags)
 {
-	return xa_mk_value(flags | (pfn_t_to_pfn(pfn) << DAX_SHIFT));
+	return xa_mk_value((flags & DAX_MASK) |
+			   (pfn_t_to_pfn(pfn) << DAX_SHIFT));
 }
 
 static bool dax_is_locked(void *entry)
@@ -846,6 +855,20 @@ static bool dax_fault_is_cow(const struct iomap_iter *iter)
 		(iter->iomap.flags & IOMAP_F_SHARED);
 }
 
+static unsigned long dax_iter_flags(const struct iomap_iter *iter,
+				    struct vm_fault *vmf)
+{
+	unsigned long flags = 0;
+
+	if (!dax_fault_is_synchronous(iter, vmf->vma))
+		flags |= DAX_DIRTY;
+
+	if (dax_fault_is_cow(iter))
+		flags |= DAX_COW;
+
+	return flags;
+}
+
 /*
  * By this point grab_mapping_entry() has ensured that we have a locked entry
  * of the appropriate size so we don't have to worry about downgrading PMDs to
@@ -854,13 +877,13 @@ static bool dax_fault_is_cow(const struct iomap_iter *iter)
  * appropriate.
  */
 static vm_fault_t dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
-				   const struct iomap_iter *iter, void **pentry,
-				   pfn_t pfn, unsigned long flags)
+				   void **pentry, pfn_t pfn,
+				   unsigned long flags)
 {
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	void *new_entry = dax_make_entry(pfn, flags);
-	bool dirty = !dax_fault_is_synchronous(iter, vmf->vma);
-	bool cow = dax_fault_is_cow(iter);
+	bool dirty = flags & DAX_DIRTY;
+	bool cow = flags & DAX_COW;
 	void *entry = *pentry;
 
 	if (dirty)
@@ -1155,7 +1178,8 @@ static vm_fault_t dax_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 	pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
 	vm_fault_t ret;
 
-	ret = dax_insert_entry(xas, vmf, iter, entry, pfn, DAX_ZERO_PAGE);
+	ret = dax_insert_entry(xas, vmf, entry, pfn,
+			       DAX_ZERO_PAGE | dax_iter_flags(iter, vmf));
 	if (ret)
 		goto out;
 
@@ -1186,8 +1210,9 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 		goto fallback;
 
 	pfn = page_to_pfn_t(zero_page);
-	ret = dax_insert_entry(xas, vmf, iter, entry, pfn,
-			       DAX_PMD | DAX_ZERO_PAGE);
+	ret = dax_insert_entry(xas, vmf, entry, pfn,
+			       DAX_PMD | DAX_ZERO_PAGE |
+				       dax_iter_flags(iter, vmf));
 	if (ret)
 		return ret;
 
@@ -1566,7 +1591,8 @@ static vm_fault_t dax_fault_iter(struct vm_fault *vmf,
 	if (err)
 		return pmd ? VM_FAULT_FALLBACK : dax_fault_return(err);
 
-	ret = dax_insert_entry(xas, vmf, iter, entry, pfn, entry_flags);
+	ret = dax_insert_entry(xas, vmf, entry, pfn,
+			       entry_flags | dax_iter_flags(iter, vmf));
 	if (ret)
 		return ret;
 




* [PATCH 07/13] fsdax: Manage pgmap references at entry insertion and deletion
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (5 preceding siblings ...)
  2022-09-04  2:16 ` [PATCH 06/13] fsdax: Rework dax_insert_entry() calling convention Dan Williams
@ 2022-09-04  2:16 ` Dan Williams
  2022-09-06 12:30   ` Jason Gunthorpe
  2022-09-04  2:16 ` [PATCH 08/13] devdax: Minor warning fixups Dan Williams
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Jason Gunthorpe,
	Christoph Hellwig, John Hubbard, linux-mm, nvdimm, linux-fsdevel

The percpu_ref in 'struct dev_pagemap' is used to coordinate active
mappings of device-memory with the device-removal / unbind path. It
enables the semantic that initiating device-removal (or
device-driver-unbind) blocks new mapping and DMA attempts, and waits for
mapping revocation or inflight DMA to complete.

Expand the scope of the reference count to pin the DAX device active at
mapping time rather than later at the first gup event. With a device
reference held while any page on that device is mapped, the need to
manage pgmap reference counts in the gup code is eliminated. That
cleanup is saved for a follow-on change.

For now, teach dax_insert_entry() and dax_delete_mapping_entry() to take
and drop pgmap references respectively.
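
Condensed from the hunks below, association now takes one pgmap
reference per pfn covered by the entry, and disassociation drops them:

	/* dax_associate_entry(): pin the pgmap for the span of the entry */
	pgmap = get_dev_pagemap_many(dax_to_pfn(entry), NULL, PHYS_PFN(size));
	if (!pgmap)
		return VM_FAULT_SIGBUS;

	/* dax_disassociate_entry(): drop those references with the entry */
	put_dev_pagemap_many(page->pgmap, PHYS_PFN(size));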

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 fs/dax.c                 |   30 ++++++++++++++++++++++++------
 include/linux/memremap.h |   18 ++++++++++++++----
 mm/memremap.c            |   13 ++++++++-----
 3 files changed, 46 insertions(+), 15 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 65d55c5ecdef..b23222b0dae4 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -370,14 +370,24 @@ static inline void dax_mapping_set_cow(struct page *page)
  * whether this entry is shared by multiple files.  If so, set the page->mapping
  * FS_DAX_MAPPING_COW, and use page->index as refcount.
  */
-static void dax_associate_entry(void *entry, struct address_space *mapping,
-				struct vm_fault *vmf, unsigned long flags)
+static vm_fault_t dax_associate_entry(void *entry,
+				      struct address_space *mapping,
+				      struct vm_fault *vmf, unsigned long flags)
 {
 	unsigned long size = dax_entry_size(entry), pfn, index;
+	struct dev_pagemap *pgmap;
 	int i = 0;
 
 	if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
-		return;
+		return 0;
+
+	if (!size)
+		return 0;
+
+	pfn = dax_to_pfn(entry);
+	pgmap = get_dev_pagemap_many(pfn, NULL, PHYS_PFN(size));
+	if (!pgmap)
+		return VM_FAULT_SIGBUS;
 
 	index = linear_page_index(vmf->vma, ALIGN(vmf->address, size));
 	for_each_mapped_pfn(entry, pfn) {
@@ -391,19 +401,27 @@ static void dax_associate_entry(void *entry, struct address_space *mapping,
 			page->index = index + i++;
 		}
 	}
+
+	return 0;
 }
 
 static void dax_disassociate_entry(void *entry, struct address_space *mapping,
 		bool trunc)
 {
-	unsigned long pfn;
+	unsigned long size = dax_entry_size(entry), pfn;
+	struct page *page;
 
 	if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
 		return;
 
-	for_each_mapped_pfn(entry, pfn) {
-		struct page *page = pfn_to_page(pfn);
+	if (!size)
+		return;
+
+	page = pfn_to_page(dax_to_pfn(entry));
+	put_dev_pagemap_many(page->pgmap, PHYS_PFN(size));
 
+	for_each_mapped_pfn(entry, pfn) {
+		page = pfn_to_page(pfn);
 		WARN_ON_ONCE(trunc && page_maybe_dma_pinned(page));
 		if (dax_mapping_is_cow(page->mapping)) {
 			/* keep the CoW flag if this page is still shared */
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index c3b4cc84877b..fd57407e7f3d 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -191,8 +191,13 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid);
 void memunmap_pages(struct dev_pagemap *pgmap);
 void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap);
 void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap);
-struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
-		struct dev_pagemap *pgmap);
+struct dev_pagemap *get_dev_pagemap_many(unsigned long pfn,
+					 struct dev_pagemap *pgmap, int refs);
+static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
+						  struct dev_pagemap *pgmap)
+{
+	return get_dev_pagemap_many(pfn, pgmap, 1);
+}
 bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn);
 
 unsigned long vmem_altmap_offset(struct vmem_altmap *altmap);
@@ -244,10 +249,15 @@ static inline unsigned long memremap_compat_align(void)
 }
 #endif /* CONFIG_ZONE_DEVICE */
 
-static inline void put_dev_pagemap(struct dev_pagemap *pgmap)
+static inline void put_dev_pagemap_many(struct dev_pagemap *pgmap, int refs)
 {
 	if (pgmap)
-		percpu_ref_put(&pgmap->ref);
+		percpu_ref_put_many(&pgmap->ref, refs);
+}
+
+static inline void put_dev_pagemap(struct dev_pagemap *pgmap)
+{
+	put_dev_pagemap_many(pgmap, 1);
 }
 
 #endif /* _LINUX_MEMREMAP_H_ */
diff --git a/mm/memremap.c b/mm/memremap.c
index 433500e955fb..4debe7b211ae 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -430,15 +430,16 @@ void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns)
 }
 
 /**
- * get_dev_pagemap() - take a new live reference on the dev_pagemap for @pfn
+ * get_dev_pagemap_many() - take new live references(s) on the dev_pagemap for @pfn
  * @pfn: page frame number to lookup page_map
  * @pgmap: optional known pgmap that already has a reference
+ * @refs: number of references to take
  *
  * If @pgmap is non-NULL and covers @pfn it will be returned as-is.  If @pgmap
  * is non-NULL but does not cover @pfn the reference to it will be released.
  */
-struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
-		struct dev_pagemap *pgmap)
+struct dev_pagemap *get_dev_pagemap_many(unsigned long pfn,
+					 struct dev_pagemap *pgmap, int refs)
 {
 	resource_size_t phys = PFN_PHYS(pfn);
 
@@ -454,13 +455,15 @@ struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 	/* fall back to slow path lookup */
 	rcu_read_lock();
 	pgmap = xa_load(&pgmap_array, PHYS_PFN(phys));
-	if (pgmap && !percpu_ref_tryget_live(&pgmap->ref))
+	if (pgmap && !percpu_ref_tryget_live_rcu(&pgmap->ref))
 		pgmap = NULL;
+	if (pgmap && refs > 1)
+		percpu_ref_get_many(&pgmap->ref, refs - 1);
 	rcu_read_unlock();
 
 	return pgmap;
 }
-EXPORT_SYMBOL_GPL(get_dev_pagemap);
+EXPORT_SYMBOL_GPL(get_dev_pagemap_many);
 
 void free_zone_device_page(struct page *page)
 {




* [PATCH 08/13] devdax: Minor warning fixups
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (6 preceding siblings ...)
  2022-09-04  2:16 ` [PATCH 07/13] fsdax: Manage pgmap references at entry insertion and deletion Dan Williams
@ 2022-09-04  2:16 ` Dan Williams
  2022-09-04  2:16 ` [PATCH 09/13] devdax: Move address_space helpers to the DAX core Dan Williams
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm; +Cc: hch, linux-mm, nvdimm, linux-fsdevel

Fix a missing prototype warning for dev_dax_probe(), and fix
dax_holder() comment block format.

Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/dax/dax-private.h |    1 +
 drivers/dax/super.c       |    2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/dax/dax-private.h b/drivers/dax/dax-private.h
index 1c974b7caae6..202cafd836e8 100644
--- a/drivers/dax/dax-private.h
+++ b/drivers/dax/dax-private.h
@@ -87,6 +87,7 @@ static inline struct dax_mapping *to_dax_mapping(struct device *dev)
 }
 
 phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff, unsigned long size);
+int dev_dax_probe(struct dev_dax *dev_dax);
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static inline bool dax_align_valid(unsigned long align)
diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index 9b5e2a5eb0ae..4909ad945a49 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -475,7 +475,7 @@ EXPORT_SYMBOL_GPL(put_dax);
 /**
  * dax_holder() - obtain the holder of a dax device
  * @dax_dev: a dax_device instance
-
+ *
  * Return: the holder's data which represents the holder if registered,
  * otherwize NULL.
  */




* [PATCH 09/13] devdax: Move address_space helpers to the DAX core
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (7 preceding siblings ...)
  2022-09-04  2:16 ` [PATCH 08/13] devdax: Minor warning fixups Dan Williams
@ 2022-09-04  2:16 ` Dan Williams
  2022-09-04  2:16 ` [PATCH 10/13] dax: Prep dax_{associate,disassociate}_entry() for compound pages Dan Williams
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Jason Gunthorpe,
	Christoph Hellwig, John Hubbard, linux-mm, nvdimm, linux-fsdevel

In preparation for decamping 'get_dev_pagemap()' from code paths outside
of DAX, device-dax needs to track mapping references. Reuse the same
infrastructure as fsdax (dax_{insert,delete_mapping}_entry()). For now,
just move that infrastructure into a common location.

The move involves splitting iomap and supporting helpers into fs/dax.c
and all 'struct address_space' and DAX-entry manipulation into
drivers/dax/mapping.c. grab_mapping_entry() is renamed
dax_grab_mapping_entry(), and some common definitions and declarations
are moved to include/linux/dax.h.

No functional change is intended, just code movement.

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 .clang-format            |    1 
 drivers/Makefile         |    2 
 drivers/dax/Kconfig      |    5 
 drivers/dax/Makefile     |    1 
 drivers/dax/mapping.c    |  973 ++++++++++++++++++++++++++++++++++++++++++
 fs/dax.c                 | 1071 +---------------------------------------------
 include/linux/dax.h      |  116 ++++-
 include/linux/memremap.h |    6 
 8 files changed, 1106 insertions(+), 1069 deletions(-)
 create mode 100644 drivers/dax/mapping.c

diff --git a/.clang-format b/.clang-format
index 1247d54f9e49..336fa266386e 100644
--- a/.clang-format
+++ b/.clang-format
@@ -269,6 +269,7 @@ ForEachMacros:
   - 'for_each_link_cpus'
   - 'for_each_link_platforms'
   - 'for_each_lru'
+  - 'for_each_mapped_pfn'
   - 'for_each_matching_node'
   - 'for_each_matching_node_and_match'
   - 'for_each_mem_pfn_range'
diff --git a/drivers/Makefile b/drivers/Makefile
index 057857258bfd..ec6c4146b966 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -71,7 +71,7 @@ obj-$(CONFIG_FB_INTEL)          += video/fbdev/intelfb/
 obj-$(CONFIG_PARPORT)		+= parport/
 obj-y				+= base/ block/ misc/ mfd/ nfc/
 obj-$(CONFIG_LIBNVDIMM)		+= nvdimm/
-obj-$(CONFIG_DAX)		+= dax/
+obj-y				+= dax/
 obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
 obj-$(CONFIG_NUBUS)		+= nubus/
 obj-y				+= cxl/
diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
index 5fdf269a822e..3ed4da3935e5 100644
--- a/drivers/dax/Kconfig
+++ b/drivers/dax/Kconfig
@@ -1,6 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 menuconfig DAX
 	tristate "DAX: direct access to differentiated memory"
+	depends on MMU
 	select SRCU
 	default m if NVDIMM_DAX
 
@@ -49,6 +50,10 @@ config DEV_DAX_HMEM_DEVICES
 	depends on DEV_DAX_HMEM && DAX=y
 	def_bool y
 
+config DAX_MAPPING
+	depends on DAX
+	def_bool y
+
 config DEV_DAX_KMEM
 	tristate "KMEM DAX: volatile-use of persistent memory"
 	default DEV_DAX
diff --git a/drivers/dax/Makefile b/drivers/dax/Makefile
index 90a56ca3b345..d57f1f34e8a8 100644
--- a/drivers/dax/Makefile
+++ b/drivers/dax/Makefile
@@ -3,6 +3,7 @@ obj-$(CONFIG_DAX) += dax.o
 obj-$(CONFIG_DEV_DAX) += device_dax.o
 obj-$(CONFIG_DEV_DAX_KMEM) += kmem.o
 obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
+obj-$(CONFIG_DAX_MAPPING) += mapping.o
 
 dax-y := super.o
 dax-y += bus.o
diff --git a/drivers/dax/mapping.c b/drivers/dax/mapping.c
new file mode 100644
index 000000000000..0810af7d9503
--- /dev/null
+++ b/drivers/dax/mapping.c
@@ -0,0 +1,973 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Direct Access mapping infrastructure split from fs/dax.c
+ * Copyright (c) 2013-2014 Intel Corporation
+ * Author: Matthew Wilcox <matthew.r.wilcox@intel.com>
+ * Author: Ross Zwisler <ross.zwisler@linux.intel.com>
+ */
+
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/dax.h>
+#include <linux/rmap.h>
+#include <linux/pfn_t.h>
+#include <linux/sizes.h>
+#include <linux/pagemap.h>
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/fs_dax.h>
+
+/* We choose 4096 entries - same as per-zone page wait tables */
+#define DAX_WAIT_TABLE_BITS 12
+#define DAX_WAIT_TABLE_ENTRIES (1 << DAX_WAIT_TABLE_BITS)
+
+static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];
+
+static int __init init_dax_wait_table(void)
+{
+	int i;
+
+	for (i = 0; i < DAX_WAIT_TABLE_ENTRIES; i++)
+		init_waitqueue_head(wait_table + i);
+	return 0;
+}
+fs_initcall(init_dax_wait_table);
+
+static unsigned long dax_to_pfn(void *entry)
+{
+	return xa_to_value(entry) >> DAX_SHIFT;
+}
+
+static void *dax_make_entry(pfn_t pfn, unsigned long flags)
+{
+	return xa_mk_value((flags & DAX_MASK) |
+			   (pfn_t_to_pfn(pfn) << DAX_SHIFT));
+}
+
+static bool dax_is_locked(void *entry)
+{
+	return xa_to_value(entry) & DAX_LOCKED;
+}
+
+static unsigned int dax_entry_order(void *entry)
+{
+	if (xa_to_value(entry) & DAX_PMD)
+		return PMD_ORDER;
+	return 0;
+}
+
+static unsigned long dax_is_pmd_entry(void *entry)
+{
+	return xa_to_value(entry) & DAX_PMD;
+}
+
+static bool dax_is_pte_entry(void *entry)
+{
+	return !(xa_to_value(entry) & DAX_PMD);
+}
+
+static int dax_is_zero_entry(void *entry)
+{
+	return xa_to_value(entry) & DAX_ZERO_PAGE;
+}
+
+static int dax_is_empty_entry(void *entry)
+{
+	return xa_to_value(entry) & DAX_EMPTY;
+}
+
+/*
+ * true if the entry that was found is of a smaller order than the entry
+ * we were looking for
+ */
+static bool dax_is_conflict(void *entry)
+{
+	return entry == XA_RETRY_ENTRY;
+}
+
+/*
+ * DAX page cache entry locking
+ */
+struct exceptional_entry_key {
+	struct xarray *xa;
+	pgoff_t entry_start;
+};
+
+struct wait_exceptional_entry_queue {
+	wait_queue_entry_t wait;
+	struct exceptional_entry_key key;
+};
+
+/**
+ * enum dax_wake_mode: waitqueue wakeup behaviour
+ * @WAKE_ALL: wake all waiters in the waitqueue
+ * @WAKE_NEXT: wake only the first waiter in the waitqueue
+ */
+enum dax_wake_mode {
+	WAKE_ALL,
+	WAKE_NEXT,
+};
+
+static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas, void *entry,
+					      struct exceptional_entry_key *key)
+{
+	unsigned long hash;
+	unsigned long index = xas->xa_index;
+
+	/*
+	 * If 'entry' is a PMD, align the 'index' that we use for the wait
+	 * queue to the start of that PMD.  This ensures that all offsets in
+	 * the range covered by the PMD map to the same bit lock.
+	 */
+	if (dax_is_pmd_entry(entry))
+		index &= ~PG_PMD_COLOUR;
+	key->xa = xas->xa;
+	key->entry_start = index;
+
+	hash = hash_long((unsigned long)xas->xa ^ index, DAX_WAIT_TABLE_BITS);
+	return wait_table + hash;
+}
+
+static int wake_exceptional_entry_func(wait_queue_entry_t *wait,
+				       unsigned int mode, int sync, void *keyp)
+{
+	struct exceptional_entry_key *key = keyp;
+	struct wait_exceptional_entry_queue *ewait =
+		container_of(wait, struct wait_exceptional_entry_queue, wait);
+
+	if (key->xa != ewait->key.xa ||
+	    key->entry_start != ewait->key.entry_start)
+		return 0;
+	return autoremove_wake_function(wait, mode, sync, NULL);
+}
+
+/*
+ * @entry may no longer be the entry at the index in the mapping.
+ * The important information it's conveying is whether the entry at
+ * this index used to be a PMD entry.
+ */
+static void dax_wake_entry(struct xa_state *xas, void *entry,
+			   enum dax_wake_mode mode)
+{
+	struct exceptional_entry_key key;
+	wait_queue_head_t *wq;
+
+	wq = dax_entry_waitqueue(xas, entry, &key);
+
+	/*
+	 * Checking for locked entry and prepare_to_wait_exclusive() happens
+	 * under the i_pages lock, ditto for entry handling in our callers.
+	 * So at this point all tasks that could have seen our entry locked
+	 * must be in the waitqueue and the following check will see them.
+	 */
+	if (waitqueue_active(wq))
+		__wake_up(wq, TASK_NORMAL, mode == WAKE_ALL ? 0 : 1, &key);
+}
+
+/*
+ * Look up entry in page cache, wait for it to become unlocked if it
+ * is a DAX entry and return it.  The caller must subsequently call
+ * put_unlocked_entry() if it did not lock the entry or dax_unlock_entry()
+ * if it did.  The entry returned may have a larger order than @order.
+ * If @order is larger than the order of the entry found in i_pages, this
+ * function returns a dax_is_conflict entry.
+ *
+ * Must be called with the i_pages lock held.
+ */
+static void *get_unlocked_entry(struct xa_state *xas, unsigned int order)
+{
+	void *entry;
+	struct wait_exceptional_entry_queue ewait;
+	wait_queue_head_t *wq;
+
+	init_wait(&ewait.wait);
+	ewait.wait.func = wake_exceptional_entry_func;
+
+	for (;;) {
+		entry = xas_find_conflict(xas);
+		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
+			return entry;
+		if (dax_entry_order(entry) < order)
+			return XA_RETRY_ENTRY;
+		if (!dax_is_locked(entry))
+			return entry;
+
+		wq = dax_entry_waitqueue(xas, entry, &ewait.key);
+		prepare_to_wait_exclusive(wq, &ewait.wait,
+					  TASK_UNINTERRUPTIBLE);
+		xas_unlock_irq(xas);
+		xas_reset(xas);
+		schedule();
+		finish_wait(wq, &ewait.wait);
+		xas_lock_irq(xas);
+	}
+}
+
+/*
+ * The only thing keeping the address space around is the i_pages lock
+ * (it's cycled in clear_inode() after removing the entries from i_pages)
+ * After we call xas_unlock_irq(), we cannot touch xas->xa.
+ */
+static void wait_entry_unlocked(struct xa_state *xas, void *entry)
+{
+	struct wait_exceptional_entry_queue ewait;
+	wait_queue_head_t *wq;
+
+	init_wait(&ewait.wait);
+	ewait.wait.func = wake_exceptional_entry_func;
+
+	wq = dax_entry_waitqueue(xas, entry, &ewait.key);
+	/*
+	 * Unlike get_unlocked_entry() there is no guarantee that this
+	 * path ever successfully retrieves an unlocked entry before an
+	 * inode dies. Perform a non-exclusive wait in case this path
+	 * never successfully performs its own wake up.
+	 */
+	prepare_to_wait(wq, &ewait.wait, TASK_UNINTERRUPTIBLE);
+	xas_unlock_irq(xas);
+	schedule();
+	finish_wait(wq, &ewait.wait);
+}
+
+static void put_unlocked_entry(struct xa_state *xas, void *entry,
+			       enum dax_wake_mode mode)
+{
+	if (entry && !dax_is_conflict(entry))
+		dax_wake_entry(xas, entry, mode);
+}
+
+/*
+ * We used the xa_state to get the entry, but then we locked the entry and
+ * dropped the xa_lock, so we know the xa_state is stale and must be reset
+ * before use.
+ */
+void dax_unlock_entry(struct xa_state *xas, void *entry)
+{
+	void *old;
+
+	WARN_ON(dax_is_locked(entry));
+	xas_reset(xas);
+	xas_lock_irq(xas);
+	old = xas_store(xas, entry);
+	xas_unlock_irq(xas);
+	WARN_ON(!dax_is_locked(old));
+	dax_wake_entry(xas, entry, WAKE_NEXT);
+}
+
+/*
+ * Return: The entry stored at this location before it was locked.
+ */
+static void *dax_lock_entry(struct xa_state *xas, void *entry)
+{
+	unsigned long v = xa_to_value(entry);
+
+	return xas_store(xas, xa_mk_value(v | DAX_LOCKED));
+}
+
+static unsigned long dax_entry_size(void *entry)
+{
+	if (dax_is_zero_entry(entry))
+		return 0;
+	else if (dax_is_empty_entry(entry))
+		return 0;
+	else if (dax_is_pmd_entry(entry))
+		return PMD_SIZE;
+	else
+		return PAGE_SIZE;
+}
+
+static unsigned long dax_end_pfn(void *entry)
+{
+	return dax_to_pfn(entry) + dax_entry_size(entry) / PAGE_SIZE;
+}
+
+/*
+ * Iterate through all mapped pfns represented by an entry, i.e. skip
+ * 'empty' and 'zero' entries.
+ */
+#define for_each_mapped_pfn(entry, pfn) \
+	for (pfn = dax_to_pfn(entry); pfn < dax_end_pfn(entry); pfn++)
+
+static bool dax_mapping_is_cow(struct address_space *mapping)
+{
+	return (unsigned long)mapping == PAGE_MAPPING_DAX_COW;
+}
+
+/*
+ * Set the page->mapping with FS_DAX_MAPPING_COW flag, increase the refcount.
+ */
+static void dax_mapping_set_cow(struct page *page)
+{
+	if ((uintptr_t)page->mapping != PAGE_MAPPING_DAX_COW) {
+		/*
+		 * Reset the index if the page was already mapped
+		 * regularly before.
+		 */
+		if (page->mapping)
+			page->index = 1;
+		page->mapping = (void *)PAGE_MAPPING_DAX_COW;
+	}
+	page->index++;
+}
+
+struct page *dax_pinned_page(void *entry)
+{
+	unsigned long pfn;
+
+	for_each_mapped_pfn(entry, pfn) {
+		struct page *page = pfn_to_page(pfn);
+
+		if (page_maybe_dma_pinned(page))
+			return page;
+	}
+	return NULL;
+}
+
+/*
+ * When it is called in dax_insert_entry(), the cow flag will indicate that
+ * whether this entry is shared by multiple files.  If so, set the page->mapping
+ * FS_DAX_MAPPING_COW, and use page->index as refcount.
+ */
+static vm_fault_t dax_associate_entry(void *entry,
+				      struct address_space *mapping,
+				      struct vm_fault *vmf, unsigned long flags)
+{
+	unsigned long size = dax_entry_size(entry), pfn, index;
+	struct dev_pagemap *pgmap;
+	int i = 0;
+
+	if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
+		return 0;
+
+	if (!size)
+		return 0;
+
+	pfn = dax_to_pfn(entry);
+	pgmap = get_dev_pagemap_many(pfn, NULL, PHYS_PFN(size));
+	if (!pgmap)
+		return VM_FAULT_SIGBUS;
+
+	index = linear_page_index(vmf->vma, ALIGN(vmf->address, size));
+	for_each_mapped_pfn(entry, pfn) {
+		struct page *page = pfn_to_page(pfn);
+
+		if (flags & DAX_COW) {
+			dax_mapping_set_cow(page);
+		} else {
+			WARN_ON_ONCE(page->mapping);
+			page->mapping = mapping;
+			page->index = index + i++;
+		}
+	}
+
+	return 0;
+}
+
+static void dax_disassociate_entry(void *entry, struct address_space *mapping,
+				   bool trunc)
+{
+	unsigned long size = dax_entry_size(entry), pfn;
+	struct page *page;
+
+	if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
+		return;
+
+	if (!size)
+		return;
+
+	page = pfn_to_page(dax_to_pfn(entry));
+	put_dev_pagemap_many(page->pgmap, PHYS_PFN(size));
+
+	for_each_mapped_pfn(entry, pfn) {
+		page = pfn_to_page(pfn);
+		WARN_ON_ONCE(trunc && page_maybe_dma_pinned(page));
+		if (dax_mapping_is_cow(page->mapping)) {
+			/* keep the CoW flag if this page is still shared */
+			if (page->index-- > 0)
+				continue;
+		} else
+			WARN_ON_ONCE(page->mapping && page->mapping != mapping);
+		page->mapping = NULL;
+		page->index = 0;
+	}
+}
+
+/*
+ * dax_lock_page - Lock the DAX entry corresponding to a page
+ * @page: The page whose entry we want to lock
+ *
+ * Context: Process context.
+ * Return: A cookie to pass to dax_unlock_page() or 0 if the entry could
+ * not be locked.
+ */
+dax_entry_t dax_lock_page(struct page *page)
+{
+	XA_STATE(xas, NULL, 0);
+	void *entry;
+
+	/* Ensure page->mapping isn't freed while we look at it */
+	rcu_read_lock();
+	for (;;) {
+		struct address_space *mapping = READ_ONCE(page->mapping);
+
+		entry = NULL;
+		if (!mapping || !dax_mapping(mapping))
+			break;
+
+		/*
+		 * In the device-dax case there's no need to lock, a
+		 * struct dev_pagemap pin is sufficient to keep the
+		 * inode alive, and we assume we have dev_pagemap pin
+		 * otherwise we would not have a valid pfn_to_page()
+		 * translation.
+		 */
+		entry = (void *)~0UL;
+		if (S_ISCHR(mapping->host->i_mode))
+			break;
+
+		xas.xa = &mapping->i_pages;
+		xas_lock_irq(&xas);
+		if (mapping != page->mapping) {
+			xas_unlock_irq(&xas);
+			continue;
+		}
+		xas_set(&xas, page->index);
+		entry = xas_load(&xas);
+		if (dax_is_locked(entry)) {
+			rcu_read_unlock();
+			wait_entry_unlocked(&xas, entry);
+			rcu_read_lock();
+			continue;
+		}
+		dax_lock_entry(&xas, entry);
+		xas_unlock_irq(&xas);
+		break;
+	}
+	rcu_read_unlock();
+	return (dax_entry_t)entry;
+}
+
+void dax_unlock_page(struct page *page, dax_entry_t cookie)
+{
+	struct address_space *mapping = page->mapping;
+	XA_STATE(xas, &mapping->i_pages, page->index);
+
+	if (S_ISCHR(mapping->host->i_mode))
+		return;
+
+	dax_unlock_entry(&xas, (void *)cookie);
+}
+
+/*
+ * dax_lock_mapping_entry - Lock the DAX entry corresponding to a mapping
+ * @mapping: the file's mapping whose entry we want to lock
+ * @index: the offset within this file
+ * @page: output the dax page corresponding to this dax entry
+ *
+ * Return: A cookie to pass to dax_unlock_mapping_entry() or 0 if the entry
+ * could not be locked.
+ */
+dax_entry_t dax_lock_mapping_entry(struct address_space *mapping, pgoff_t index,
+				   struct page **page)
+{
+	XA_STATE(xas, NULL, 0);
+	void *entry;
+
+	rcu_read_lock();
+	for (;;) {
+		entry = NULL;
+		if (!dax_mapping(mapping))
+			break;
+
+		xas.xa = &mapping->i_pages;
+		xas_lock_irq(&xas);
+		xas_set(&xas, index);
+		entry = xas_load(&xas);
+		if (dax_is_locked(entry)) {
+			rcu_read_unlock();
+			wait_entry_unlocked(&xas, entry);
+			rcu_read_lock();
+			continue;
+		}
+		if (!entry || dax_is_zero_entry(entry) ||
+		    dax_is_empty_entry(entry)) {
+			/*
+			 * Because we are looking up the entry by the file's
+			 * mapping and index, it may not have been inserted
+			 * yet, or it may be a zero/empty entry.  We don't
+			 * treat this as an error, so return a special value
+			 * and do not output @page.
+			 */
+			entry = (void *)~0UL;
+		} else {
+			*page = pfn_to_page(dax_to_pfn(entry));
+			dax_lock_entry(&xas, entry);
+		}
+		xas_unlock_irq(&xas);
+		break;
+	}
+	rcu_read_unlock();
+	return (dax_entry_t)entry;
+}
+
+void dax_unlock_mapping_entry(struct address_space *mapping, pgoff_t index,
+			      dax_entry_t cookie)
+{
+	XA_STATE(xas, &mapping->i_pages, index);
+
+	if (cookie == ~0UL)
+		return;
+
+	dax_unlock_entry(&xas, (void *)cookie);
+}
+
+/*
+ * Find page cache entry at given index. If it is a DAX entry, return it
+ * with the entry locked. If the page cache doesn't contain an entry at
+ * that index, add a locked empty entry.
+ *
+ * When requesting an entry with size DAX_PMD, dax_grab_mapping_entry() will
+ * either return that locked entry or will return VM_FAULT_FALLBACK.
+ * This will happen if there are any PTE entries within the PMD range
+ * that we are requesting.
+ *
+ * We always favor PTE entries over PMD entries. There isn't a flow where we
+ * evict PTE entries in order to 'upgrade' them to a PMD entry.  A PMD
+ * insertion will fail if it finds any PTE entries already in the tree, and a
+ * PTE insertion will cause an existing PMD entry to be unmapped and
+ * downgraded to PTE entries.  This happens for both PMD zero pages as
+ * well as PMD empty entries.
+ *
+ * The exception to this downgrade path is for PMD entries that have
+ * real storage backing them.  We will leave these real PMD entries in
+ * the tree, and PTE writes will simply dirty the entire PMD entry.
+ *
+ * Note: Unlike filemap_fault() we don't honor FAULT_FLAG_RETRY flags. For
+ * persistent memory the benefit is doubtful. We can add that later if we can
+ * show it helps.
+ *
+ * On error, this function does not return an ERR_PTR.  Instead it returns
+ * a VM_FAULT code, encoded as an xarray internal entry.  The ERR_PTR values
+ * overlap with xarray value entries.
+ */
+void *dax_grab_mapping_entry(struct xa_state *xas,
+			     struct address_space *mapping, unsigned int order)
+{
+	unsigned long index = xas->xa_index;
+	bool pmd_downgrade; /* splitting PMD entry into PTE entries? */
+	void *entry;
+
+retry:
+	pmd_downgrade = false;
+	xas_lock_irq(xas);
+	entry = get_unlocked_entry(xas, order);
+
+	if (entry) {
+		if (dax_is_conflict(entry))
+			goto fallback;
+		if (!xa_is_value(entry)) {
+			xas_set_err(xas, -EIO);
+			goto out_unlock;
+		}
+
+		if (order == 0) {
+			if (dax_is_pmd_entry(entry) &&
+			    (dax_is_zero_entry(entry) ||
+			     dax_is_empty_entry(entry))) {
+				pmd_downgrade = true;
+			}
+		}
+	}
+
+	if (pmd_downgrade) {
+		/*
+		 * Make sure 'entry' remains valid while we drop
+		 * the i_pages lock.
+		 */
+		dax_lock_entry(xas, entry);
+
+		/*
+		 * Besides huge zero pages the only other thing that gets
+		 * downgraded are empty entries which don't need to be
+		 * unmapped.
+		 */
+		if (dax_is_zero_entry(entry)) {
+			xas_unlock_irq(xas);
+			unmap_mapping_pages(mapping,
+					    xas->xa_index & ~PG_PMD_COLOUR,
+					    PG_PMD_NR, false);
+			xas_reset(xas);
+			xas_lock_irq(xas);
+		}
+
+		dax_disassociate_entry(entry, mapping, false);
+		xas_store(xas, NULL); /* undo the PMD join */
+		dax_wake_entry(xas, entry, WAKE_ALL);
+		mapping->nrpages -= PG_PMD_NR;
+		entry = NULL;
+		xas_set(xas, index);
+	}
+
+	if (entry) {
+		dax_lock_entry(xas, entry);
+	} else {
+		unsigned long flags = DAX_EMPTY;
+
+		if (order > 0)
+			flags |= DAX_PMD;
+		entry = dax_make_entry(pfn_to_pfn_t(0), flags);
+		dax_lock_entry(xas, entry);
+		if (xas_error(xas))
+			goto out_unlock;
+		mapping->nrpages += 1UL << order;
+	}
+
+out_unlock:
+	xas_unlock_irq(xas);
+	if (xas_nomem(xas, mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM))
+		goto retry;
+	if (xas->xa_node == XA_ERROR(-ENOMEM))
+		return xa_mk_internal(VM_FAULT_OOM);
+	if (xas_error(xas))
+		return xa_mk_internal(VM_FAULT_SIGBUS);
+	return entry;
+fallback:
+	xas_unlock_irq(xas);
+	return xa_mk_internal(VM_FAULT_FALLBACK);
+}
+
+/**
+ * dax_layout_pinned_page_range - find first pinned page in @mapping
+ * @mapping: address space to scan for a DMA-pinned page
+ * @start: Starting offset. Page containing 'start' is included.
+ * @end: End offset. Page containing 'end' is included. If 'end' is LLONG_MAX,
+ *       pages from 'start' till the end of file are included.
+ *
+ * DAX requires ZONE_DEVICE mapped pages. These pages are never
+ * 'onlined' to the page allocator so they are considered idle when
+ * they are not DMA-pinned. A filesystem uses this interface to
+ * determine if any page in the mapping is pinned, i.e. subject to
+ * DMA or other pin_user_pages() usages (see page_maybe_dma_pinned()).
+ *
+ * It is expected that the filesystem is holding locks to block the
+ * establishment of new mappings in this address_space. I.e. it expects
+ * to be able to run unmap_mapping_range() and subsequently not race
+ * mapping_mapped() becoming true.
+ */
+struct page *dax_layout_pinned_page_range(struct address_space *mapping,
+					  loff_t start, loff_t end)
+{
+	void *entry;
+	unsigned int scanned = 0;
+	struct page *page = NULL;
+	pgoff_t start_idx = start >> PAGE_SHIFT;
+	pgoff_t end_idx;
+	XA_STATE(xas, &mapping->i_pages, start_idx);
+
+	/*
+	 * In the 'limited' case get_user_pages() for dax is disabled.
+	 */
+	if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
+		return NULL;
+
+	if (!dax_mapping(mapping) || !mapping_mapped(mapping))
+		return NULL;
+
+	/* If end == LLONG_MAX, include all pages from start to the end of file */
+	if (end == LLONG_MAX)
+		end_idx = ULONG_MAX;
+	else
+		end_idx = end >> PAGE_SHIFT;
+	/*
+	 * If we race get_user_pages_fast() here either we'll see the
+	 * elevated page count in the iteration and wait, or
+	 * get_user_pages_fast() will see that the page it took a reference
+	 * against is no longer mapped in the page tables and bail to the
+	 * get_user_pages() slow path.  The slow path is protected by
+	 * pte_lock() and pmd_lock(). New references are not taken without
+	 * holding those locks, and unmap_mapping_pages() will not zero the
+	 * pte or pmd without holding the respective lock, so we are
+	 * guaranteed to either see new references or prevent new
+	 * references from being established.
+	 */
+	unmap_mapping_pages(mapping, start_idx, end_idx - start_idx + 1, 0);
+
+	xas_lock_irq(&xas);
+	xas_for_each(&xas, entry, end_idx) {
+		if (WARN_ON_ONCE(!xa_is_value(entry)))
+			continue;
+		if (unlikely(dax_is_locked(entry)))
+			entry = get_unlocked_entry(&xas, 0);
+		if (entry)
+			page = dax_pinned_page(entry);
+		put_unlocked_entry(&xas, entry, WAKE_NEXT);
+		if (page)
+			break;
+		if (++scanned % XA_CHECK_SCHED)
+			continue;
+
+		xas_pause(&xas);
+		xas_unlock_irq(&xas);
+		cond_resched();
+		xas_lock_irq(&xas);
+	}
+	xas_unlock_irq(&xas);
+	return page;
+}
+EXPORT_SYMBOL_GPL(dax_layout_pinned_page_range);
+
+struct page *dax_layout_pinned_page(struct address_space *mapping)
+{
+	return dax_layout_pinned_page_range(mapping, 0, LLONG_MAX);
+}
+EXPORT_SYMBOL_GPL(dax_layout_pinned_page);
+
+static int __dax_invalidate_entry(struct address_space *mapping, pgoff_t index,
+				  bool trunc)
+{
+	XA_STATE(xas, &mapping->i_pages, index);
+	int ret = 0;
+	void *entry;
+
+	xas_lock_irq(&xas);
+	entry = get_unlocked_entry(&xas, 0);
+	if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
+		goto out;
+	if (!trunc && (xas_get_mark(&xas, PAGECACHE_TAG_DIRTY) ||
+		       xas_get_mark(&xas, PAGECACHE_TAG_TOWRITE)))
+		goto out;
+	dax_disassociate_entry(entry, mapping, trunc);
+	xas_store(&xas, NULL);
+	mapping->nrpages -= 1UL << dax_entry_order(entry);
+	ret = 1;
+out:
+	put_unlocked_entry(&xas, entry, WAKE_ALL);
+	xas_unlock_irq(&xas);
+	return ret;
+}
+
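+/*
+ * Invalidate DAX entry if it is clean.
+ */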
+int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
+				      pgoff_t index)
+{
+	return __dax_invalidate_entry(mapping, index, false);
+}
+
+/*
+ * Delete DAX entry at @index from @mapping.  Wait for it
+ * to be unlocked before deleting it.
+ */
+int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
+{
+	int ret = __dax_invalidate_entry(mapping, index, true);
+
+	/*
+	 * This gets called from truncate / punch_hole path. As such, the caller
+	 * must hold locks protecting against concurrent modifications of the
+	 * page cache (usually fs-private i_mmap_sem for writing). Since the
+	 * caller has seen a DAX entry for this index, we better find it
+	 * at that index as well...
+	 */
+	WARN_ON_ONCE(!ret);
+	return ret;
+}
+
+/*
+ * By this point dax_grab_mapping_entry() has ensured that we have a locked entry
+ * of the appropriate size so we don't have to worry about downgrading PMDs to
+ * PTEs.  If we happen to be trying to insert a PTE and there is a PMD
+ * already in the tree, we will skip the insertion and just dirty the PMD as
+ * appropriate.
+ */
+vm_fault_t dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
+			    void **pentry, pfn_t pfn, unsigned long flags)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	void *new_entry = dax_make_entry(pfn, flags);
+	bool dirty = flags & DAX_DIRTY;
+	bool cow = flags & DAX_COW;
+	void *entry = *pentry;
+
+	if (dirty)
+		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
+
+	if (cow || (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE))) {
+		unsigned long index = xas->xa_index;
+		/* we are replacing a zero page with block mapping */
+		if (dax_is_pmd_entry(entry))
+			unmap_mapping_pages(mapping, index & ~PG_PMD_COLOUR,
+					    PG_PMD_NR, false);
+		else /* pte entry */
+			unmap_mapping_pages(mapping, index, 1, false);
+	}
+
+	xas_reset(xas);
+	xas_lock_irq(xas);
+	if (cow || dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
+		void *old;
+
+		dax_disassociate_entry(entry, mapping, false);
+		dax_associate_entry(new_entry, mapping, vmf, flags);
+		/*
+		 * Only swap our new entry into the page cache if the current
+		 * entry is a zero page or an empty entry.  If a normal PTE or
+		 * PMD entry is already in the cache, we leave it alone.  This
+		 * means that if we are trying to insert a PTE and the
+		 * existing entry is a PMD, we will just leave the PMD in the
+		 * tree and dirty it if necessary.
+		 */
+		old = dax_lock_entry(xas, new_entry);
+		WARN_ON_ONCE(old !=
+			     xa_mk_value(xa_to_value(entry) | DAX_LOCKED));
+		entry = new_entry;
+	} else {
+		xas_load(xas); /* Walk the xa_state */
+	}
+
+	if (dirty)
+		xas_set_mark(xas, PAGECACHE_TAG_DIRTY);
+
+	if (cow)
+		xas_set_mark(xas, PAGECACHE_TAG_TOWRITE);
+
+	xas_unlock_irq(xas);
+	*pentry = entry;
+	return 0;
+}
+
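+/*
+ * Write-protect all mappings of the pfns backing @entry, flush the caches,
+ * and clear the dirty tag.  Expects to be called, and returns, with the
+ * i_pages lock held.
+ */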
+int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+		      struct address_space *mapping, void *entry)
+{
+	unsigned long pfn, index, count, end;
+	long ret = 0;
+	struct vm_area_struct *vma;
+
+	/*
+	 * A page got tagged dirty in DAX mapping? Something is seriously
+	 * wrong.
+	 */
+	if (WARN_ON(!xa_is_value(entry)))
+		return -EIO;
+
+	if (unlikely(dax_is_locked(entry))) {
+		void *old_entry = entry;
+
+		entry = get_unlocked_entry(xas, 0);
+
+		/* Entry got punched out / reallocated? */
+		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
+			goto put_unlocked;
+		/*
+		 * Entry got reallocated elsewhere? No need to writeback.
+		 * We have to compare pfns as we must not bail out due to
+		 * difference in lockbit or entry type.
+		 */
+		if (dax_to_pfn(old_entry) != dax_to_pfn(entry))
+			goto put_unlocked;
+		if (WARN_ON_ONCE(dax_is_empty_entry(entry) ||
+					dax_is_zero_entry(entry))) {
+			ret = -EIO;
+			goto put_unlocked;
+		}
+
+		/* Another fsync thread may have already done this entry */
+		if (!xas_get_mark(xas, PAGECACHE_TAG_TOWRITE))
+			goto put_unlocked;
+	}
+
+	/* Lock the entry to serialize with page faults */
+	dax_lock_entry(xas, entry);
+
+	/*
+	 * We can clear the tag now but we have to be careful so that concurrent
+	 * dax_writeback_one() calls for the same index cannot finish before we
+	 * actually flush the caches. This is achieved as the calls will look
+	 * at the entry only under the i_pages lock and once they do that
+	 * they will see the entry locked and wait for it to unlock.
+	 */
+	xas_clear_mark(xas, PAGECACHE_TAG_TOWRITE);
+	xas_unlock_irq(xas);
+
+	/*
+	 * If dax_writeback_mapping_range() was given a wbc->range_start
+	 * in the middle of a PMD, the 'index' we use needs to be
+	 * aligned to the start of the PMD.
+	 * This allows us to flush for PMD_SIZE and not have to worry about
+	 * partial PMD writebacks.
+	 */
+	pfn = dax_to_pfn(entry);
+	count = 1UL << dax_entry_order(entry);
+	index = xas->xa_index & ~(count - 1);
+	end = index + count - 1;
+
+	/* Walk all mappings of a given index of a file and writeprotect them */
+	i_mmap_lock_read(mapping);
+	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, end) {
+		pfn_mkclean_range(pfn, count, index, vma);
+		cond_resched();
+	}
+	i_mmap_unlock_read(mapping);
+
+	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
+	/*
+	 * After we have flushed the cache, we can clear the dirty tag. There
+	 * cannot be new dirty data in the pfn after the flush has completed as
+	 * the pfn mappings are writeprotected and fault waits for mapping
+	 * entry lock.
+	 */
+	xas_reset(xas);
+	xas_lock_irq(xas);
+	xas_store(xas, entry);
+	xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
+	dax_wake_entry(xas, entry, WAKE_NEXT);
+
+	trace_dax_writeback_one(mapping->host, index, count);
+	return ret;
+
+ put_unlocked:
+	put_unlocked_entry(xas, entry, WAKE_NEXT);
+	return ret;
+}
+
+/*
+ * dax_insert_pfn_mkwrite - insert PTE or PMD entry into page tables
+ * @vmf: The description of the fault
+ * @pfn: PFN to insert
+ * @order: Order of entry to insert.
+ *
+ * This function inserts a writeable PTE or PMD entry into the page tables
+ * for an mmaped DAX file.  It also marks the page cache entry as dirty.
+ */
+vm_fault_t dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn,
+				  unsigned int order)
+{
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, order);
+	void *entry;
+	vm_fault_t ret;
+
+	xas_lock_irq(&xas);
+	entry = get_unlocked_entry(&xas, order);
+	/* Did we race with someone splitting entry or so? */
+	if (!entry || dax_is_conflict(entry) ||
+	    (order == 0 && !dax_is_pte_entry(entry))) {
+		put_unlocked_entry(&xas, entry, WAKE_NEXT);
+		xas_unlock_irq(&xas);
+		trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
+						      VM_FAULT_NOPAGE);
+		return VM_FAULT_NOPAGE;
+	}
+	xas_set_mark(&xas, PAGECACHE_TAG_DIRTY);
+	dax_lock_entry(&xas, entry);
+	xas_unlock_irq(&xas);
+	if (order == 0)
+		ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
+#ifdef CONFIG_FS_DAX_PMD
+	else if (order == PMD_ORDER)
+		ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
+#endif
+	else
+		ret = VM_FAULT_FALLBACK;
+	dax_unlock_entry(&xas, entry);
+	trace_dax_insert_pfn_mkwrite(mapping->host, vmf, ret);
+	return ret;
+}
diff --git a/fs/dax.c b/fs/dax.c
index b23222b0dae4..79e49e718d33 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -27,809 +27,8 @@
 #include <linux/rmap.h>
 #include <asm/pgalloc.h>
 
-#define CREATE_TRACE_POINTS
 #include <trace/events/fs_dax.h>
 
-static inline unsigned int pe_order(enum page_entry_size pe_size)
-{
-	if (pe_size == PE_SIZE_PTE)
-		return PAGE_SHIFT - PAGE_SHIFT;
-	if (pe_size == PE_SIZE_PMD)
-		return PMD_SHIFT - PAGE_SHIFT;
-	if (pe_size == PE_SIZE_PUD)
-		return PUD_SHIFT - PAGE_SHIFT;
-	return ~0;
-}
-
-/* We choose 4096 entries - same as per-zone page wait tables */
-#define DAX_WAIT_TABLE_BITS 12
-#define DAX_WAIT_TABLE_ENTRIES (1 << DAX_WAIT_TABLE_BITS)
-
-/* The 'colour' (ie low bits) within a PMD of a page offset.  */
-#define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)
-#define PG_PMD_NR	(PMD_SIZE >> PAGE_SHIFT)
-
-/* The order of a PMD entry */
-#define PMD_ORDER	(PMD_SHIFT - PAGE_SHIFT)
-
-static wait_queue_head_t wait_table[DAX_WAIT_TABLE_ENTRIES];
-
-static int __init init_dax_wait_table(void)
-{
-	int i;
-
-	for (i = 0; i < DAX_WAIT_TABLE_ENTRIES; i++)
-		init_waitqueue_head(wait_table + i);
-	return 0;
-}
-fs_initcall(init_dax_wait_table);
-
-/*
- * DAX pagecache entries use XArray value entries so they can't be mistaken
- * for pages.  We use one bit for locking, one bit for the entry size (PMD)
- * and two more to tell us if the entry is a zero page or an empty entry that
- * is just used for locking.  In total four special bits.
- *
- * If the PMD bit isn't set the entry has size PAGE_SIZE, and if the ZERO_PAGE
- * and EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
- * block allocation.
- */
-#define DAX_SHIFT	(4)
-#define DAX_MASK	((1UL << DAX_SHIFT) - 1)
-#define DAX_LOCKED	(1UL << 0)
-#define DAX_PMD		(1UL << 1)
-#define DAX_ZERO_PAGE	(1UL << 2)
-#define DAX_EMPTY	(1UL << 3)
-
-/*
- * These flags are not conveyed in Xarray value entries, they are just
- * modifiers to dax_insert_entry().
- */
-#define DAX_DIRTY (1UL << (DAX_SHIFT + 0))
-#define DAX_COW   (1UL << (DAX_SHIFT + 1))
-
-static unsigned long dax_to_pfn(void *entry)
-{
-	return xa_to_value(entry) >> DAX_SHIFT;
-}
-
-static void *dax_make_entry(pfn_t pfn, unsigned long flags)
-{
-	return xa_mk_value((flags & DAX_MASK) |
-			   (pfn_t_to_pfn(pfn) << DAX_SHIFT));
-}
-
-static bool dax_is_locked(void *entry)
-{
-	return xa_to_value(entry) & DAX_LOCKED;
-}
-
-static unsigned int dax_entry_order(void *entry)
-{
-	if (xa_to_value(entry) & DAX_PMD)
-		return PMD_ORDER;
-	return 0;
-}
-
-static unsigned long dax_is_pmd_entry(void *entry)
-{
-	return xa_to_value(entry) & DAX_PMD;
-}
-
-static bool dax_is_pte_entry(void *entry)
-{
-	return !(xa_to_value(entry) & DAX_PMD);
-}
-
-static int dax_is_zero_entry(void *entry)
-{
-	return xa_to_value(entry) & DAX_ZERO_PAGE;
-}
-
-static int dax_is_empty_entry(void *entry)
-{
-	return xa_to_value(entry) & DAX_EMPTY;
-}
-
-/*
- * true if the entry that was found is of a smaller order than the entry
- * we were looking for
- */
-static bool dax_is_conflict(void *entry)
-{
-	return entry == XA_RETRY_ENTRY;
-}
-
-/*
- * DAX page cache entry locking
- */
-struct exceptional_entry_key {
-	struct xarray *xa;
-	pgoff_t entry_start;
-};
-
-struct wait_exceptional_entry_queue {
-	wait_queue_entry_t wait;
-	struct exceptional_entry_key key;
-};
-
-/**
- * enum dax_wake_mode: waitqueue wakeup behaviour
- * @WAKE_ALL: wake all waiters in the waitqueue
- * @WAKE_NEXT: wake only the first waiter in the waitqueue
- */
-enum dax_wake_mode {
-	WAKE_ALL,
-	WAKE_NEXT,
-};
-
-static wait_queue_head_t *dax_entry_waitqueue(struct xa_state *xas,
-		void *entry, struct exceptional_entry_key *key)
-{
-	unsigned long hash;
-	unsigned long index = xas->xa_index;
-
-	/*
-	 * If 'entry' is a PMD, align the 'index' that we use for the wait
-	 * queue to the start of that PMD.  This ensures that all offsets in
-	 * the range covered by the PMD map to the same bit lock.
-	 */
-	if (dax_is_pmd_entry(entry))
-		index &= ~PG_PMD_COLOUR;
-	key->xa = xas->xa;
-	key->entry_start = index;
-
-	hash = hash_long((unsigned long)xas->xa ^ index, DAX_WAIT_TABLE_BITS);
-	return wait_table + hash;
-}
-
-static int wake_exceptional_entry_func(wait_queue_entry_t *wait,
-		unsigned int mode, int sync, void *keyp)
-{
-	struct exceptional_entry_key *key = keyp;
-	struct wait_exceptional_entry_queue *ewait =
-		container_of(wait, struct wait_exceptional_entry_queue, wait);
-
-	if (key->xa != ewait->key.xa ||
-	    key->entry_start != ewait->key.entry_start)
-		return 0;
-	return autoremove_wake_function(wait, mode, sync, NULL);
-}
-
-/*
- * @entry may no longer be the entry at the index in the mapping.
- * The important information it's conveying is whether the entry at
- * this index used to be a PMD entry.
- */
-static void dax_wake_entry(struct xa_state *xas, void *entry,
-			   enum dax_wake_mode mode)
-{
-	struct exceptional_entry_key key;
-	wait_queue_head_t *wq;
-
-	wq = dax_entry_waitqueue(xas, entry, &key);
-
-	/*
-	 * Checking for locked entry and prepare_to_wait_exclusive() happens
-	 * under the i_pages lock, ditto for entry handling in our callers.
-	 * So at this point all tasks that could have seen our entry locked
-	 * must be in the waitqueue and the following check will see them.
-	 */
-	if (waitqueue_active(wq))
-		__wake_up(wq, TASK_NORMAL, mode == WAKE_ALL ? 0 : 1, &key);
-}
-
-/*
- * Look up entry in page cache, wait for it to become unlocked if it
- * is a DAX entry and return it.  The caller must subsequently call
- * put_unlocked_entry() if it did not lock the entry or dax_unlock_entry()
- * if it did.  The entry returned may have a larger order than @order.
- * If @order is larger than the order of the entry found in i_pages, this
- * function returns a dax_is_conflict entry.
- *
- * Must be called with the i_pages lock held.
- */
-static void *get_unlocked_entry(struct xa_state *xas, unsigned int order)
-{
-	void *entry;
-	struct wait_exceptional_entry_queue ewait;
-	wait_queue_head_t *wq;
-
-	init_wait(&ewait.wait);
-	ewait.wait.func = wake_exceptional_entry_func;
-
-	for (;;) {
-		entry = xas_find_conflict(xas);
-		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
-			return entry;
-		if (dax_entry_order(entry) < order)
-			return XA_RETRY_ENTRY;
-		if (!dax_is_locked(entry))
-			return entry;
-
-		wq = dax_entry_waitqueue(xas, entry, &ewait.key);
-		prepare_to_wait_exclusive(wq, &ewait.wait,
-					  TASK_UNINTERRUPTIBLE);
-		xas_unlock_irq(xas);
-		xas_reset(xas);
-		schedule();
-		finish_wait(wq, &ewait.wait);
-		xas_lock_irq(xas);
-	}
-}
-
-/*
- * The only thing keeping the address space around is the i_pages lock
- * (it's cycled in clear_inode() after removing the entries from i_pages)
- * After we call xas_unlock_irq(), we cannot touch xas->xa.
- */
-static void wait_entry_unlocked(struct xa_state *xas, void *entry)
-{
-	struct wait_exceptional_entry_queue ewait;
-	wait_queue_head_t *wq;
-
-	init_wait(&ewait.wait);
-	ewait.wait.func = wake_exceptional_entry_func;
-
-	wq = dax_entry_waitqueue(xas, entry, &ewait.key);
-	/*
-	 * Unlike get_unlocked_entry() there is no guarantee that this
-	 * path ever successfully retrieves an unlocked entry before an
-	 * inode dies. Perform a non-exclusive wait in case this path
-	 * never successfully performs its own wake up.
-	 */
-	prepare_to_wait(wq, &ewait.wait, TASK_UNINTERRUPTIBLE);
-	xas_unlock_irq(xas);
-	schedule();
-	finish_wait(wq, &ewait.wait);
-}
-
-static void put_unlocked_entry(struct xa_state *xas, void *entry,
-			       enum dax_wake_mode mode)
-{
-	if (entry && !dax_is_conflict(entry))
-		dax_wake_entry(xas, entry, mode);
-}
-
-/*
- * We used the xa_state to get the entry, but then we locked the entry and
- * dropped the xa_lock, so we know the xa_state is stale and must be reset
- * before use.
- */
-static void dax_unlock_entry(struct xa_state *xas, void *entry)
-{
-	void *old;
-
-	BUG_ON(dax_is_locked(entry));
-	xas_reset(xas);
-	xas_lock_irq(xas);
-	old = xas_store(xas, entry);
-	xas_unlock_irq(xas);
-	BUG_ON(!dax_is_locked(old));
-	dax_wake_entry(xas, entry, WAKE_NEXT);
-}
-
-/*
- * Return: The entry stored at this location before it was locked.
- */
-static void *dax_lock_entry(struct xa_state *xas, void *entry)
-{
-	unsigned long v = xa_to_value(entry);
-	return xas_store(xas, xa_mk_value(v | DAX_LOCKED));
-}
-
-static unsigned long dax_entry_size(void *entry)
-{
-	if (dax_is_zero_entry(entry))
-		return 0;
-	else if (dax_is_empty_entry(entry))
-		return 0;
-	else if (dax_is_pmd_entry(entry))
-		return PMD_SIZE;
-	else
-		return PAGE_SIZE;
-}
-
-static unsigned long dax_end_pfn(void *entry)
-{
-	return dax_to_pfn(entry) + dax_entry_size(entry) / PAGE_SIZE;
-}
-
-/*
- * Iterate through all mapped pfns represented by an entry, i.e. skip
- * 'empty' and 'zero' entries.
- */
-#define for_each_mapped_pfn(entry, pfn) \
-	for (pfn = dax_to_pfn(entry); \
-			pfn < dax_end_pfn(entry); pfn++)
-
-static inline bool dax_mapping_is_cow(struct address_space *mapping)
-{
-	return (unsigned long)mapping == PAGE_MAPPING_DAX_COW;
-}
-
-/*
- * Set the page->mapping with FS_DAX_MAPPING_COW flag, increase the refcount.
- */
-static inline void dax_mapping_set_cow(struct page *page)
-{
-	if ((uintptr_t)page->mapping != PAGE_MAPPING_DAX_COW) {
-		/*
-		 * Reset the index if the page was already mapped
-		 * regularly before.
-		 */
-		if (page->mapping)
-			page->index = 1;
-		page->mapping = (void *)PAGE_MAPPING_DAX_COW;
-	}
-	page->index++;
-}
-
-/*
- * When it is called in dax_insert_entry(), the cow flag will indicate that
- * whether this entry is shared by multiple files.  If so, set the page->mapping
- * FS_DAX_MAPPING_COW, and use page->index as refcount.
- */
-static vm_fault_t dax_associate_entry(void *entry,
-				      struct address_space *mapping,
-				      struct vm_fault *vmf, unsigned long flags)
-{
-	unsigned long size = dax_entry_size(entry), pfn, index;
-	struct dev_pagemap *pgmap;
-	int i = 0;
-
-	if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
-		return 0;
-
-	if (!size)
-		return 0;
-
-	pfn = dax_to_pfn(entry);
-	pgmap = get_dev_pagemap_many(pfn, NULL, PHYS_PFN(size));
-	if (!pgmap)
-		return VM_FAULT_SIGBUS;
-
-	index = linear_page_index(vmf->vma, ALIGN(vmf->address, size));
-	for_each_mapped_pfn(entry, pfn) {
-		struct page *page = pfn_to_page(pfn);
-
-		if (flags & DAX_COW) {
-			dax_mapping_set_cow(page);
-		} else {
-			WARN_ON_ONCE(page->mapping);
-			page->mapping = mapping;
-			page->index = index + i++;
-		}
-	}
-
-	return 0;
-}
-
-static void dax_disassociate_entry(void *entry, struct address_space *mapping,
-		bool trunc)
-{
-	unsigned long size = dax_entry_size(entry), pfn;
-	struct page *page;
-
-	if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
-		return;
-
-	if (!size)
-		return;
-
-	page = pfn_to_page(dax_to_pfn(entry));
-	put_dev_pagemap_many(page->pgmap, PHYS_PFN(size));
-
-	for_each_mapped_pfn(entry, pfn) {
-		page = pfn_to_page(pfn);
-		WARN_ON_ONCE(trunc && page_maybe_dma_pinned(page));
-		if (dax_mapping_is_cow(page->mapping)) {
-			/* keep the CoW flag if this page is still shared */
-			if (page->index-- > 0)
-				continue;
-		} else
-			WARN_ON_ONCE(page->mapping && page->mapping != mapping);
-		page->mapping = NULL;
-		page->index = 0;
-	}
-}
-
-static struct page *dax_pinned_page(void *entry)
-{
-	unsigned long pfn;
-
-	for_each_mapped_pfn(entry, pfn) {
-		struct page *page = pfn_to_page(pfn);
-
-		if (page_maybe_dma_pinned(page))
-			return page;
-	}
-	return NULL;
-}
-
-/*
- * dax_lock_page - Lock the DAX entry corresponding to a page
- * @page: The page whose entry we want to lock
- *
- * Context: Process context.
- * Return: A cookie to pass to dax_unlock_page() or 0 if the entry could
- * not be locked.
- */
-dax_entry_t dax_lock_page(struct page *page)
-{
-	XA_STATE(xas, NULL, 0);
-	void *entry;
-
-	/* Ensure page->mapping isn't freed while we look at it */
-	rcu_read_lock();
-	for (;;) {
-		struct address_space *mapping = READ_ONCE(page->mapping);
-
-		entry = NULL;
-		if (!mapping || !dax_mapping(mapping))
-			break;
-
-		/*
-		 * In the device-dax case there's no need to lock, a
-		 * struct dev_pagemap pin is sufficient to keep the
-		 * inode alive, and we assume we have dev_pagemap pin
-		 * otherwise we would not have a valid pfn_to_page()
-		 * translation.
-		 */
-		entry = (void *)~0UL;
-		if (S_ISCHR(mapping->host->i_mode))
-			break;
-
-		xas.xa = &mapping->i_pages;
-		xas_lock_irq(&xas);
-		if (mapping != page->mapping) {
-			xas_unlock_irq(&xas);
-			continue;
-		}
-		xas_set(&xas, page->index);
-		entry = xas_load(&xas);
-		if (dax_is_locked(entry)) {
-			rcu_read_unlock();
-			wait_entry_unlocked(&xas, entry);
-			rcu_read_lock();
-			continue;
-		}
-		dax_lock_entry(&xas, entry);
-		xas_unlock_irq(&xas);
-		break;
-	}
-	rcu_read_unlock();
-	return (dax_entry_t)entry;
-}
-
-void dax_unlock_page(struct page *page, dax_entry_t cookie)
-{
-	struct address_space *mapping = page->mapping;
-	XA_STATE(xas, &mapping->i_pages, page->index);
-
-	if (S_ISCHR(mapping->host->i_mode))
-		return;
-
-	dax_unlock_entry(&xas, (void *)cookie);
-}
-
-/*
- * dax_lock_mapping_entry - Lock the DAX entry corresponding to a mapping
- * @mapping: the file's mapping whose entry we want to lock
- * @index: the offset within this file
- * @page: output the dax page corresponding to this dax entry
- *
- * Return: A cookie to pass to dax_unlock_mapping_entry() or 0 if the entry
- * could not be locked.
- */
-dax_entry_t dax_lock_mapping_entry(struct address_space *mapping, pgoff_t index,
-		struct page **page)
-{
-	XA_STATE(xas, NULL, 0);
-	void *entry;
-
-	rcu_read_lock();
-	for (;;) {
-		entry = NULL;
-		if (!dax_mapping(mapping))
-			break;
-
-		xas.xa = &mapping->i_pages;
-		xas_lock_irq(&xas);
-		xas_set(&xas, index);
-		entry = xas_load(&xas);
-		if (dax_is_locked(entry)) {
-			rcu_read_unlock();
-			wait_entry_unlocked(&xas, entry);
-			rcu_read_lock();
-			continue;
-		}
-		if (!entry ||
-		    dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
-			/*
-			 * Because we are looking for entry from file's mapping
-			 * and index, so the entry may not be inserted for now,
-			 * or even a zero/empty entry.  We don't think this is
-			 * an error case.  So, return a special value and do
-			 * not output @page.
-			 */
-			entry = (void *)~0UL;
-		} else {
-			*page = pfn_to_page(dax_to_pfn(entry));
-			dax_lock_entry(&xas, entry);
-		}
-		xas_unlock_irq(&xas);
-		break;
-	}
-	rcu_read_unlock();
-	return (dax_entry_t)entry;
-}
-
-void dax_unlock_mapping_entry(struct address_space *mapping, pgoff_t index,
-		dax_entry_t cookie)
-{
-	XA_STATE(xas, &mapping->i_pages, index);
-
-	if (cookie == ~0UL)
-		return;
-
-	dax_unlock_entry(&xas, (void *)cookie);
-}
-
-/*
- * Find page cache entry at given index. If it is a DAX entry, return it
- * with the entry locked. If the page cache doesn't contain an entry at
- * that index, add a locked empty entry.
- *
- * When requesting an entry with size DAX_PMD, grab_mapping_entry() will
- * either return that locked entry or will return VM_FAULT_FALLBACK.
- * This will happen if there are any PTE entries within the PMD range
- * that we are requesting.
- *
- * We always favor PTE entries over PMD entries. There isn't a flow where we
- * evict PTE entries in order to 'upgrade' them to a PMD entry.  A PMD
- * insertion will fail if it finds any PTE entries already in the tree, and a
- * PTE insertion will cause an existing PMD entry to be unmapped and
- * downgraded to PTE entries.  This happens for both PMD zero pages as
- * well as PMD empty entries.
- *
- * The exception to this downgrade path is for PMD entries that have
- * real storage backing them.  We will leave these real PMD entries in
- * the tree, and PTE writes will simply dirty the entire PMD entry.
- *
- * Note: Unlike filemap_fault() we don't honor FAULT_FLAG_RETRY flags. For
- * persistent memory the benefit is doubtful. We can add that later if we can
- * show it helps.
- *
- * On error, this function does not return an ERR_PTR.  Instead it returns
- * a VM_FAULT code, encoded as an xarray internal entry.  The ERR_PTR values
- * overlap with xarray value entries.
- */
-static void *grab_mapping_entry(struct xa_state *xas,
-		struct address_space *mapping, unsigned int order)
-{
-	unsigned long index = xas->xa_index;
-	bool pmd_downgrade;	/* splitting PMD entry into PTE entries? */
-	void *entry;
-
-retry:
-	pmd_downgrade = false;
-	xas_lock_irq(xas);
-	entry = get_unlocked_entry(xas, order);
-
-	if (entry) {
-		if (dax_is_conflict(entry))
-			goto fallback;
-		if (!xa_is_value(entry)) {
-			xas_set_err(xas, -EIO);
-			goto out_unlock;
-		}
-
-		if (order == 0) {
-			if (dax_is_pmd_entry(entry) &&
-			    (dax_is_zero_entry(entry) ||
-			     dax_is_empty_entry(entry))) {
-				pmd_downgrade = true;
-			}
-		}
-	}
-
-	if (pmd_downgrade) {
-		/*
-		 * Make sure 'entry' remains valid while we drop
-		 * the i_pages lock.
-		 */
-		dax_lock_entry(xas, entry);
-
-		/*
-		 * Besides huge zero pages the only other thing that gets
-		 * downgraded are empty entries which don't need to be
-		 * unmapped.
-		 */
-		if (dax_is_zero_entry(entry)) {
-			xas_unlock_irq(xas);
-			unmap_mapping_pages(mapping,
-					xas->xa_index & ~PG_PMD_COLOUR,
-					PG_PMD_NR, false);
-			xas_reset(xas);
-			xas_lock_irq(xas);
-		}
-
-		dax_disassociate_entry(entry, mapping, false);
-		xas_store(xas, NULL);	/* undo the PMD join */
-		dax_wake_entry(xas, entry, WAKE_ALL);
-		mapping->nrpages -= PG_PMD_NR;
-		entry = NULL;
-		xas_set(xas, index);
-	}
-
-	if (entry) {
-		dax_lock_entry(xas, entry);
-	} else {
-		unsigned long flags = DAX_EMPTY;
-
-		if (order > 0)
-			flags |= DAX_PMD;
-		entry = dax_make_entry(pfn_to_pfn_t(0), flags);
-		dax_lock_entry(xas, entry);
-		if (xas_error(xas))
-			goto out_unlock;
-		mapping->nrpages += 1UL << order;
-	}
-
-out_unlock:
-	xas_unlock_irq(xas);
-	if (xas_nomem(xas, mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM))
-		goto retry;
-	if (xas->xa_node == XA_ERROR(-ENOMEM))
-		return xa_mk_internal(VM_FAULT_OOM);
-	if (xas_error(xas))
-		return xa_mk_internal(VM_FAULT_SIGBUS);
-	return entry;
-fallback:
-	xas_unlock_irq(xas);
-	return xa_mk_internal(VM_FAULT_FALLBACK);
-}
-
-/**
- * dax_layout_pinned_page_range - find first pinned page in @mapping
- * @mapping: address space to scan for a page with ref count > 1
- * @start: Starting offset. Page containing 'start' is included.
- * @end: End offset. Page containing 'end' is included. If 'end' is LLONG_MAX,
- *       pages from 'start' till the end of file are included.
- *
- * DAX requires ZONE_DEVICE mapped pages. These pages are never
- * 'onlined' to the page allocator so they are considered idle when
- * page->count == 1. A filesystem uses this interface to determine if
- * any page in the mapping is busy, i.e. for DMA, or other
- * get_user_pages() usages.
- *
- * It is expected that the filesystem is holding locks to block the
- * establishment of new mappings in this address_space. I.e. it expects
- * to be able to run unmap_mapping_range() and subsequently not race
- * mapping_mapped() becoming true.
- */
-struct page *dax_layout_pinned_page_range(struct address_space *mapping,
-					loff_t start, loff_t end)
-{
-	void *entry;
-	unsigned int scanned = 0;
-	struct page *page = NULL;
-	pgoff_t start_idx = start >> PAGE_SHIFT;
-	pgoff_t end_idx;
-	XA_STATE(xas, &mapping->i_pages, start_idx);
-
-	/*
-	 * In the 'limited' case get_user_pages() for dax is disabled.
-	 */
-	if (IS_ENABLED(CONFIG_FS_DAX_LIMITED))
-		return NULL;
-
-	if (!dax_mapping(mapping) || !mapping_mapped(mapping))
-		return NULL;
-
-	/* If end == LLONG_MAX, all pages from start to till end of file */
-	if (end == LLONG_MAX)
-		end_idx = ULONG_MAX;
-	else
-		end_idx = end >> PAGE_SHIFT;
-	/*
-	 * If we race get_user_pages_fast() here either we'll see the
-	 * elevated page count in the iteration and wait, or
-	 * get_user_pages_fast() will see that the page it took a reference
-	 * against is no longer mapped in the page tables and bail to the
-	 * get_user_pages() slow path.  The slow path is protected by
-	 * pte_lock() and pmd_lock(). New references are not taken without
-	 * holding those locks, and unmap_mapping_pages() will not zero the
-	 * pte or pmd without holding the respective lock, so we are
-	 * guaranteed to either see new references or prevent new
-	 * references from being established.
-	 */
-	unmap_mapping_pages(mapping, start_idx, end_idx - start_idx + 1, 0);
-
-	xas_lock_irq(&xas);
-	xas_for_each(&xas, entry, end_idx) {
-		if (WARN_ON_ONCE(!xa_is_value(entry)))
-			continue;
-		if (unlikely(dax_is_locked(entry)))
-			entry = get_unlocked_entry(&xas, 0);
-		if (entry)
-			page = dax_pinned_page(entry);
-		put_unlocked_entry(&xas, entry, WAKE_NEXT);
-		if (page)
-			break;
-		if (++scanned % XA_CHECK_SCHED)
-			continue;
-
-		xas_pause(&xas);
-		xas_unlock_irq(&xas);
-		cond_resched();
-		xas_lock_irq(&xas);
-	}
-	xas_unlock_irq(&xas);
-	return page;
-}
-EXPORT_SYMBOL_GPL(dax_layout_pinned_page_range);
-
-struct page *dax_layout_pinned_page(struct address_space *mapping)
-{
-	return dax_layout_pinned_page_range(mapping, 0, LLONG_MAX);
-}
-EXPORT_SYMBOL_GPL(dax_layout_pinned_page);
-
-static int __dax_invalidate_entry(struct address_space *mapping,
-					  pgoff_t index, bool trunc)
-{
-	XA_STATE(xas, &mapping->i_pages, index);
-	int ret = 0;
-	void *entry;
-
-	xas_lock_irq(&xas);
-	entry = get_unlocked_entry(&xas, 0);
-	if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
-		goto out;
-	if (!trunc &&
-	    (xas_get_mark(&xas, PAGECACHE_TAG_DIRTY) ||
-	     xas_get_mark(&xas, PAGECACHE_TAG_TOWRITE)))
-		goto out;
-	dax_disassociate_entry(entry, mapping, trunc);
-	xas_store(&xas, NULL);
-	mapping->nrpages -= 1UL << dax_entry_order(entry);
-	ret = 1;
-out:
-	put_unlocked_entry(&xas, entry, WAKE_ALL);
-	xas_unlock_irq(&xas);
-	return ret;
-}
-
-/*
- * Delete DAX entry at @index from @mapping.  Wait for it
- * to be unlocked before deleting it.
- */
-int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
-{
-	int ret = __dax_invalidate_entry(mapping, index, true);
-
-	/*
-	 * This gets called from truncate / punch_hole path. As such, the caller
-	 * must hold locks protecting against concurrent modifications of the
-	 * page cache (usually fs-private i_mmap_sem for writing). Since the
-	 * caller has seen a DAX entry for this index, we better find it
-	 * at that index as well...
-	 */
-	WARN_ON_ONCE(!ret);
-	return ret;
-}
-
-/*
- * Invalidate DAX entry if it is clean.
- */
-int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
-				      pgoff_t index)
-{
-	return __dax_invalidate_entry(mapping, index, false);
-}
-
 static pgoff_t dax_iomap_pgoff(const struct iomap *iomap, loff_t pos)
 {
 	return PHYS_PFN(iomap->addr + (pos & PAGE_MASK) - iomap->offset);
@@ -856,195 +55,6 @@ static int copy_cow_page_dax(struct vm_fault *vmf, const struct iomap_iter *iter
 	return 0;
 }
 
-/*
- * MAP_SYNC on a dax mapping guarantees dirty metadata is
- * flushed on write-faults (non-cow), but not read-faults.
- */
-static bool dax_fault_is_synchronous(const struct iomap_iter *iter,
-		struct vm_area_struct *vma)
-{
-	return (iter->flags & IOMAP_WRITE) && (vma->vm_flags & VM_SYNC) &&
-		(iter->iomap.flags & IOMAP_F_DIRTY);
-}
-
-static bool dax_fault_is_cow(const struct iomap_iter *iter)
-{
-	return (iter->flags & IOMAP_WRITE) &&
-		(iter->iomap.flags & IOMAP_F_SHARED);
-}
-
-static unsigned long dax_iter_flags(const struct iomap_iter *iter,
-				    struct vm_fault *vmf)
-{
-	unsigned long flags = 0;
-
-	if (!dax_fault_is_synchronous(iter, vmf->vma))
-		flags |= DAX_DIRTY;
-
-	if (dax_fault_is_cow(iter))
-		flags |= DAX_COW;
-
-	return flags;
-}
-
-/*
- * By this point grab_mapping_entry() has ensured that we have a locked entry
- * of the appropriate size so we don't have to worry about downgrading PMDs to
- * PTEs.  If we happen to be trying to insert a PTE and there is a PMD
- * already in the tree, we will skip the insertion and just dirty the PMD as
- * appropriate.
- */
-static vm_fault_t dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
-				   void **pentry, pfn_t pfn,
-				   unsigned long flags)
-{
-	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
-	void *new_entry = dax_make_entry(pfn, flags);
-	bool dirty = flags & DAX_DIRTY;
-	bool cow = flags & DAX_COW;
-	void *entry = *pentry;
-
-	if (dirty)
-		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
-
-	if (cow || (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE))) {
-		unsigned long index = xas->xa_index;
-		/* we are replacing a zero page with block mapping */
-		if (dax_is_pmd_entry(entry))
-			unmap_mapping_pages(mapping, index & ~PG_PMD_COLOUR,
-					PG_PMD_NR, false);
-		else /* pte entry */
-			unmap_mapping_pages(mapping, index, 1, false);
-	}
-
-	xas_reset(xas);
-	xas_lock_irq(xas);
-	if (cow || dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
-		void *old;
-
-		dax_disassociate_entry(entry, mapping, false);
-		dax_associate_entry(new_entry, mapping, vmf, flags);
-		/*
-		 * Only swap our new entry into the page cache if the current
-		 * entry is a zero page or an empty entry.  If a normal PTE or
-		 * PMD entry is already in the cache, we leave it alone.  This
-		 * means that if we are trying to insert a PTE and the
-		 * existing entry is a PMD, we will just leave the PMD in the
-		 * tree and dirty it if necessary.
-		 */
-		old = dax_lock_entry(xas, new_entry);
-		WARN_ON_ONCE(old != xa_mk_value(xa_to_value(entry) |
-					DAX_LOCKED));
-		entry = new_entry;
-	} else {
-		xas_load(xas);	/* Walk the xa_state */
-	}
-
-	if (dirty)
-		xas_set_mark(xas, PAGECACHE_TAG_DIRTY);
-
-	if (cow)
-		xas_set_mark(xas, PAGECACHE_TAG_TOWRITE);
-
-	xas_unlock_irq(xas);
-	*pentry = entry;
-	return 0;
-}
-
-static int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
-		struct address_space *mapping, void *entry)
-{
-	unsigned long pfn, index, count, end;
-	long ret = 0;
-	struct vm_area_struct *vma;
-
-	/*
-	 * A page got tagged dirty in DAX mapping? Something is seriously
-	 * wrong.
-	 */
-	if (WARN_ON(!xa_is_value(entry)))
-		return -EIO;
-
-	if (unlikely(dax_is_locked(entry))) {
-		void *old_entry = entry;
-
-		entry = get_unlocked_entry(xas, 0);
-
-		/* Entry got punched out / reallocated? */
-		if (!entry || WARN_ON_ONCE(!xa_is_value(entry)))
-			goto put_unlocked;
-		/*
-		 * Entry got reallocated elsewhere? No need to writeback.
-		 * We have to compare pfns as we must not bail out due to
-		 * difference in lockbit or entry type.
-		 */
-		if (dax_to_pfn(old_entry) != dax_to_pfn(entry))
-			goto put_unlocked;
-		if (WARN_ON_ONCE(dax_is_empty_entry(entry) ||
-					dax_is_zero_entry(entry))) {
-			ret = -EIO;
-			goto put_unlocked;
-		}
-
-		/* Another fsync thread may have already done this entry */
-		if (!xas_get_mark(xas, PAGECACHE_TAG_TOWRITE))
-			goto put_unlocked;
-	}
-
-	/* Lock the entry to serialize with page faults */
-	dax_lock_entry(xas, entry);
-
-	/*
-	 * We can clear the tag now but we have to be careful so that concurrent
-	 * dax_writeback_one() calls for the same index cannot finish before we
-	 * actually flush the caches. This is achieved as the calls will look
-	 * at the entry only under the i_pages lock and once they do that
-	 * they will see the entry locked and wait for it to unlock.
-	 */
-	xas_clear_mark(xas, PAGECACHE_TAG_TOWRITE);
-	xas_unlock_irq(xas);
-
-	/*
-	 * If dax_writeback_mapping_range() was given a wbc->range_start
-	 * in the middle of a PMD, the 'index' we use needs to be
-	 * aligned to the start of the PMD.
-	 * This allows us to flush for PMD_SIZE and not have to worry about
-	 * partial PMD writebacks.
-	 */
-	pfn = dax_to_pfn(entry);
-	count = 1UL << dax_entry_order(entry);
-	index = xas->xa_index & ~(count - 1);
-	end = index + count - 1;
-
-	/* Walk all mappings of a given index of a file and writeprotect them */
-	i_mmap_lock_read(mapping);
-	vma_interval_tree_foreach(vma, &mapping->i_mmap, index, end) {
-		pfn_mkclean_range(pfn, count, index, vma);
-		cond_resched();
-	}
-	i_mmap_unlock_read(mapping);
-
-	dax_flush(dax_dev, page_address(pfn_to_page(pfn)), count * PAGE_SIZE);
-	/*
-	 * After we have flushed the cache, we can clear the dirty tag. There
-	 * cannot be new dirty data in the pfn after the flush has completed as
-	 * the pfn mappings are writeprotected and fault waits for mapping
-	 * entry lock.
-	 */
-	xas_reset(xas);
-	xas_lock_irq(xas);
-	xas_store(xas, entry);
-	xas_clear_mark(xas, PAGECACHE_TAG_DIRTY);
-	dax_wake_entry(xas, entry, WAKE_NEXT);
-
-	trace_dax_writeback_one(mapping->host, index, count);
-	return ret;
-
- put_unlocked:
-	put_unlocked_entry(xas, entry, WAKE_NEXT);
-	return ret;
-}
-
 /*
  * Flush the mapping to the persistent domain within the byte range of [start,
  * end]. This is required by data integrity operations to ensure file data is
@@ -1181,6 +191,37 @@ static int dax_iomap_cow_copy(loff_t pos, uint64_t length, size_t align_size,
 	return 0;
 }
 
+/*
+ * MAP_SYNC on a dax mapping guarantees dirty metadata is
+ * flushed on write-faults (non-cow), but not read-faults.
+ */
+static bool dax_fault_is_synchronous(const struct iomap_iter *iter,
+				     struct vm_area_struct *vma)
+{
+	return (iter->flags & IOMAP_WRITE) && (vma->vm_flags & VM_SYNC) &&
+	       (iter->iomap.flags & IOMAP_F_DIRTY);
+}
+
+static bool dax_fault_is_cow(const struct iomap_iter *iter)
+{
+	return (iter->flags & IOMAP_WRITE) &&
+	       (iter->iomap.flags & IOMAP_F_SHARED);
+}
+
+static unsigned long dax_iter_flags(const struct iomap_iter *iter,
+				    struct vm_fault *vmf)
+{
+	unsigned long flags = 0;
+
+	if (!dax_fault_is_synchronous(iter, vmf->vma))
+		flags |= DAX_DIRTY;
+
+	if (dax_fault_is_cow(iter))
+		flags |= DAX_COW;
+
+	return flags;
+}
+
 /*
  * The user has performed a load from a hole in the file.  Allocating a new
  * page in the file would cause excessive storage usage for workloads with
@@ -1663,7 +704,7 @@ static vm_fault_t dax_iomap_pte_fault(struct vm_fault *vmf, pfn_t *pfnp,
 	if ((vmf->flags & FAULT_FLAG_WRITE) && !vmf->cow_page)
 		iter.flags |= IOMAP_WRITE;
 
-	entry = grab_mapping_entry(&xas, mapping, 0);
+	entry = dax_grab_mapping_entry(&xas, mapping, 0);
 	if (xa_is_internal(entry)) {
 		ret = xa_to_internal(entry);
 		goto out;
@@ -1780,12 +821,12 @@ static vm_fault_t dax_iomap_pmd_fault(struct vm_fault *vmf, pfn_t *pfnp,
 		goto fallback;
 
 	/*
-	 * grab_mapping_entry() will make sure we get an empty PMD entry,
+	 * dax_grab_mapping_entry() will make sure we get an empty PMD entry,
 	 * a zero PMD entry or a DAX PMD.  If it can't (because a PTE
 	 * entry is already in the array, for instance), it will return
 	 * VM_FAULT_FALLBACK.
 	 */
-	entry = grab_mapping_entry(&xas, mapping, PMD_ORDER);
+	entry = dax_grab_mapping_entry(&xas, mapping, PMD_ORDER);
 	if (xa_is_internal(entry)) {
 		ret = xa_to_internal(entry);
 		goto fallback;
@@ -1859,50 +900,6 @@ vm_fault_t dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
 }
 EXPORT_SYMBOL_GPL(dax_iomap_fault);
 
-/*
- * dax_insert_pfn_mkwrite - insert PTE or PMD entry into page tables
- * @vmf: The description of the fault
- * @pfn: PFN to insert
- * @order: Order of entry to insert.
- *
- * This function inserts a writeable PTE or PMD entry into the page tables
- * for an mmaped DAX file.  It also marks the page cache entry as dirty.
- */
-static vm_fault_t
-dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn, unsigned int order)
-{
-	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
-	XA_STATE_ORDER(xas, &mapping->i_pages, vmf->pgoff, order);
-	void *entry;
-	vm_fault_t ret;
-
-	xas_lock_irq(&xas);
-	entry = get_unlocked_entry(&xas, order);
-	/* Did we race with someone splitting entry or so? */
-	if (!entry || dax_is_conflict(entry) ||
-	    (order == 0 && !dax_is_pte_entry(entry))) {
-		put_unlocked_entry(&xas, entry, WAKE_NEXT);
-		xas_unlock_irq(&xas);
-		trace_dax_insert_pfn_mkwrite_no_entry(mapping->host, vmf,
-						      VM_FAULT_NOPAGE);
-		return VM_FAULT_NOPAGE;
-	}
-	xas_set_mark(&xas, PAGECACHE_TAG_DIRTY);
-	dax_lock_entry(&xas, entry);
-	xas_unlock_irq(&xas);
-	if (order == 0)
-		ret = vmf_insert_mixed_mkwrite(vmf->vma, vmf->address, pfn);
-#ifdef CONFIG_FS_DAX_PMD
-	else if (order == PMD_ORDER)
-		ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
-#endif
-	else
-		ret = VM_FAULT_FALLBACK;
-	dax_unlock_entry(&xas, entry);
-	trace_dax_insert_pfn_mkwrite(mapping->host, vmf, ret);
-	return ret;
-}
-
 /**
  * dax_finish_sync_fault - finish synchronous page fault
  * @vmf: The description of the fault
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 54f099166a29..05ce7992ac43 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -157,41 +157,50 @@ static inline void fs_put_dax(struct dax_device *dax_dev, void *holder)
 int dax_writeback_mapping_range(struct address_space *mapping,
 		struct dax_device *dax_dev, struct writeback_control *wbc);
 
-struct page *dax_layout_pinned_page(struct address_space *mapping);
-struct page *dax_layout_pinned_page_range(struct address_space *mapping, loff_t start, loff_t end);
+#else
+static inline int dax_writeback_mapping_range(struct address_space *mapping,
+		struct dax_device *dax_dev, struct writeback_control *wbc)
+{
+	return -EOPNOTSUPP;
+}
+
+#endif
+
+int dax_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
+		const struct iomap_ops *ops);
+int dax_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
+		const struct iomap_ops *ops);
+
+#if IS_ENABLED(CONFIG_DAX)
+int dax_read_lock(void);
+void dax_read_unlock(int id);
 dax_entry_t dax_lock_page(struct page *page);
 void dax_unlock_page(struct page *page, dax_entry_t cookie);
+void run_dax(struct dax_device *dax_dev);
 dax_entry_t dax_lock_mapping_entry(struct address_space *mapping,
 		unsigned long index, struct page **page);
 void dax_unlock_mapping_entry(struct address_space *mapping,
 		unsigned long index, dax_entry_t cookie);
+struct page *dax_layout_pinned_page(struct address_space *mapping);
+struct page *dax_layout_pinned_page_range(struct address_space *mapping, loff_t start, loff_t end);
 #else
-static inline struct page *dax_layout_pinned_page(struct address_space *mapping)
-{
-	return NULL;
-}
-
-static inline struct page *
-dax_layout_pinned_page_range(struct address_space *mapping, pgoff_t start,
-			     pgoff_t nr_pages)
+static inline dax_entry_t dax_lock_page(struct page *page)
 {
-	return NULL;
+	if (IS_DAX(page->mapping->host))
+		return ~0UL;
+	return 0;
 }
 
-static inline int dax_writeback_mapping_range(struct address_space *mapping,
-		struct dax_device *dax_dev, struct writeback_control *wbc)
+static inline void dax_unlock_page(struct page *page, dax_entry_t cookie)
 {
-	return -EOPNOTSUPP;
 }
 
-static inline dax_entry_t dax_lock_page(struct page *page)
+static inline int dax_read_lock(void)
 {
-	if (IS_DAX(page->mapping->host))
-		return ~0UL;
 	return 0;
 }
 
-static inline void dax_unlock_page(struct page *page, dax_entry_t cookie)
+static inline void dax_read_unlock(int id)
 {
 }
 
@@ -205,24 +214,17 @@ static inline void dax_unlock_mapping_entry(struct address_space *mapping,
 		unsigned long index, dax_entry_t cookie)
 {
 }
-#endif
-
-int dax_zero_range(struct inode *inode, loff_t pos, loff_t len, bool *did_zero,
-		const struct iomap_ops *ops);
-int dax_truncate_page(struct inode *inode, loff_t pos, bool *did_zero,
-		const struct iomap_ops *ops);
 
-#if IS_ENABLED(CONFIG_DAX)
-int dax_read_lock(void);
-void dax_read_unlock(int id);
-#else
-static inline int dax_read_lock(void)
+static inline struct page *dax_layout_pinned_page(struct address_space *mapping)
 {
-	return 0;
+	return NULL;
 }
 
-static inline void dax_read_unlock(int id)
+static inline struct page *
+dax_layout_pinned_page_range(struct address_space *mapping, loff_t start,
+			     loff_t end)
 {
+	return NULL;
 }
 #endif /* CONFIG_DAX */
 bool dax_alive(struct dax_device *dax_dev);
@@ -245,6 +247,10 @@ vm_fault_t dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
 		    pfn_t *pfnp, int *errp, const struct iomap_ops *ops);
 vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf,
 		enum page_entry_size pe_size, pfn_t pfn);
+struct page *dax_pinned_page(void *entry);
+void *dax_grab_mapping_entry(struct xa_state *xas,
+			     struct address_space *mapping, unsigned int order);
+void dax_unlock_entry(struct xa_state *xas, void *entry);
 int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
 int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
 				      pgoff_t index);
@@ -261,6 +267,54 @@ static inline bool dax_mapping(struct address_space *mapping)
 	return mapping->host && IS_DAX(mapping->host);
 }
 
+/*
+ * DAX pagecache entries use XArray value entries so they can't be mistaken
+ * for pages.  We use one bit for locking, one bit for the entry size (PMD)
+ * and two more to tell us if the entry is a zero page or an empty entry that
+ * is just used for locking.  In total four special bits.
+ *
+ * If the PMD bit isn't set the entry has size PAGE_SIZE, and if the ZERO_PAGE
+ * and EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
+ * block allocation.
+ */
+#define DAX_SHIFT (4)
+#define DAX_MASK ((1UL << DAX_SHIFT) - 1)
+#define DAX_LOCKED (1UL << 0)
+#define DAX_PMD (1UL << 1)
+#define DAX_ZERO_PAGE (1UL << 2)
+#define DAX_EMPTY (1UL << 3)
+
+/*
+ * These flags are not conveyed in Xarray value entries, they are just
+ * modifiers to dax_insert_entry().
+ */
+#define DAX_DIRTY (1UL << (DAX_SHIFT + 0))
+#define DAX_COW   (1UL << (DAX_SHIFT + 1))
+vm_fault_t dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
+			    void **pentry, pfn_t pfn, unsigned long flags);
+vm_fault_t dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn,
+				  unsigned int order);
+int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
+		      struct address_space *mapping, void *entry);
+
+/* The 'colour' (ie low bits) within a PMD of a page offset.  */
+#define PG_PMD_COLOUR ((PMD_SIZE >> PAGE_SHIFT) - 1)
+#define PG_PMD_NR (PMD_SIZE >> PAGE_SHIFT)
+
+/* The order of a PMD entry */
+#define PMD_ORDER (PMD_SHIFT - PAGE_SHIFT)
+
+static inline unsigned int pe_order(enum page_entry_size pe_size)
+{
+	if (pe_size == PE_SIZE_PTE)
+		return PAGE_SHIFT - PAGE_SHIFT;
+	if (pe_size == PE_SIZE_PMD)
+		return PMD_SHIFT - PAGE_SHIFT;
+	if (pe_size == PE_SIZE_PUD)
+		return PUD_SHIFT - PAGE_SHIFT;
+	return ~0;
+}
+
 #ifdef CONFIG_DEV_DAX_HMEM_DEVICES
 void hmem_register_device(int target_nid, struct resource *r);
 #else
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index fd57407e7f3d..e5d30eec3bf1 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -221,6 +221,12 @@ static inline void devm_memunmap_pages(struct device *dev,
 {
 }
 
+static inline struct dev_pagemap *
+get_dev_pagemap_many(unsigned long pfn, struct dev_pagemap *pgmap, int refs)
+{
+	return NULL;
+}
+
 static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn,
 		struct dev_pagemap *pgmap)
 {



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 10/13] dax: Prep dax_{associate, disassociate}_entry() for compound pages
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (8 preceding siblings ...)
  2022-09-04  2:16 ` [PATCH 09/13] devdax: Move address_space helpers to the DAX core Dan Williams
@ 2022-09-04  2:16 ` Dan Williams
  2022-09-04  2:17 ` [PATCH 11/13] devdax: add PUD support to the DAX mapping infrastructure Dan Williams
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:16 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Jason Gunthorpe,
	Christoph Hellwig, John Hubbard, linux-mm, nvdimm, linux-fsdevel

In preparation for device-dax to use the same mapping machinery as
fsdax, add support for device-dax compound pages.

Presently this is handled by dax_set_mapping() which is careful to only
update page->mapping for head pages. However, it does that by looking at
properties in the 'struct dev_dax' instance associated with the page.
Switch to just checking PageHead() directly.
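
A minimal sketch of the loop shape this moves to, simplified from the
diff below (locking and the CoW case are omitted):

	for_each_mapped_pfn(entry, pfn) {
		struct page *page = compound_head(pfn_to_page(pfn));

		page->mapping = mapping;
		page->index = index + i++;

		/* compound pages only track mapping/index on the head */
		if (PageHead(page))
			break;
	}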

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/dax/Kconfig   |    1 +
 drivers/dax/mapping.c |   18 ++++++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
index 3ed4da3935e5..2dcc8744277d 100644
--- a/drivers/dax/Kconfig
+++ b/drivers/dax/Kconfig
@@ -10,6 +10,7 @@ if DAX
 config DEV_DAX
 	tristate "Device DAX: direct access mapping device"
 	depends on TRANSPARENT_HUGEPAGE
+	depends on !FS_DAX_LIMITED
 	help
 	  Support raw access to differentiated (persistence, bandwidth,
 	  latency...) memory via an mmap(2) capable character
diff --git a/drivers/dax/mapping.c b/drivers/dax/mapping.c
index 0810af7d9503..6bd38ddba2cb 100644
--- a/drivers/dax/mapping.c
+++ b/drivers/dax/mapping.c
@@ -351,6 +351,8 @@ static vm_fault_t dax_associate_entry(void *entry,
 	for_each_mapped_pfn(entry, pfn) {
 		struct page *page = pfn_to_page(pfn);
 
+		page = compound_head(page);
+
 		if (flags & DAX_COW) {
 			dax_mapping_set_cow(page);
 		} else {
@@ -358,6 +360,13 @@ static vm_fault_t dax_associate_entry(void *entry,
 			page->mapping = mapping;
 			page->index = index + i++;
 		}
+
+		/*
+		 * page->mapping and page->index are only manipulated on
+		 * head pages
+		 */
+		if (PageHead(page))
+			break;
 	}
 
 	return 0;
@@ -380,6 +389,8 @@ static void dax_disassociate_entry(void *entry, struct address_space *mapping,
 
 	for_each_mapped_pfn(entry, pfn) {
 		page = pfn_to_page(pfn);
+		page = compound_head(page);
+
 		WARN_ON_ONCE(trunc && page_maybe_dma_pinned(page));
 		if (dax_mapping_is_cow(page->mapping)) {
 			/* keep the CoW flag if this page is still shared */
@@ -389,6 +400,13 @@ static void dax_disassociate_entry(void *entry, struct address_space *mapping,
 			WARN_ON_ONCE(page->mapping && page->mapping != mapping);
 		page->mapping = NULL;
 		page->index = 0;
+
+		/*
+		 * page->mapping and page->index are only manipulated on
+		 * head pages
+		 */
+		if (PageHead(page))
+			break;
 	}
 }
 



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 11/13] devdax: add PUD support to the DAX mapping infrastructure
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (9 preceding siblings ...)
  2022-09-04  2:16 ` [PATCH 10/13] dax: Prep dax_{associate, disassociate}_entry() for compound pages Dan Williams
@ 2022-09-04  2:17 ` Dan Williams
  2022-09-04  2:17 ` [PATCH 12/13] devdax: Use dax_insert_entry() + dax_delete_mapping_entry() Dan Williams
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:17 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Jason Gunthorpe,
	Christoph Hellwig, John Hubbard, linux-mm, nvdimm, linux-fsdevel

In preparation for using the DAX mapping infrastructure for device-dax,
update the helpers to handle PUD entries.

In practice the code related to @size_downgrade will go unused for PUD
entries since only devdax creates DAX PUD entries and devdax enforces
aligned mappings. The conversion is included for completeness.

The addition of PUD support to dax_insert_pfn_mkwrite() requires a new
stub for vmf_insert_pfn_pud() in the
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=n case.
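
For reference, the flag selection for a freshly grabbed entry ends up
keyed off the fault order; condensed from the dax_grab_mapping_entry()
hunk below:

	unsigned long flags = DAX_EMPTY;

	if (order == PUD_SHIFT - PAGE_SHIFT)
		flags |= DAX_PUD;
	else if (order == PMD_SHIFT - PAGE_SHIFT)
		flags |= DAX_PMD;
	entry = dax_make_entry(pfn_to_pfn_t(0), flags);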

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/dax/mapping.c   |   50 ++++++++++++++++++++++++++++++++++++-----------
 include/linux/dax.h     |   30 +++++++++++++++++++---------
 include/linux/huge_mm.h |   11 ++++++++--
 3 files changed, 67 insertions(+), 24 deletions(-)

diff --git a/drivers/dax/mapping.c b/drivers/dax/mapping.c
index 6bd38ddba2cb..6eaa0fe33c16 100644
--- a/drivers/dax/mapping.c
+++ b/drivers/dax/mapping.c
@@ -13,6 +13,7 @@
 #include <linux/pfn_t.h>
 #include <linux/sizes.h>
 #include <linux/pagemap.h>
+#include <linux/huge_mm.h>
 
 #define CREATE_TRACE_POINTS
 #include <trace/events/fs_dax.h>
@@ -51,6 +52,8 @@ static bool dax_is_locked(void *entry)
 
 static unsigned int dax_entry_order(void *entry)
 {
+	if (xa_to_value(entry) & DAX_PUD)
+		return PUD_ORDER;
 	if (xa_to_value(entry) & DAX_PMD)
 		return PMD_ORDER;
 	return 0;
@@ -61,9 +64,14 @@ static unsigned long dax_is_pmd_entry(void *entry)
 	return xa_to_value(entry) & DAX_PMD;
 }
 
+static unsigned long dax_is_pud_entry(void *entry)
+{
+	return xa_to_value(entry) & DAX_PUD;
+}
+
 static bool dax_is_pte_entry(void *entry)
 {
-	return !(xa_to_value(entry) & DAX_PMD);
+	return !(xa_to_value(entry) & (DAX_PMD|DAX_PUD));
 }
 
 static int dax_is_zero_entry(void *entry)
@@ -272,6 +280,8 @@ static unsigned long dax_entry_size(void *entry)
 		return 0;
 	else if (dax_is_pmd_entry(entry))
 		return PMD_SIZE;
+	else if (dax_is_pud_entry(entry))
+		return PUD_SIZE;
 	else
 		return PAGE_SIZE;
 }
@@ -572,11 +582,11 @@ void *dax_grab_mapping_entry(struct xa_state *xas,
 			     struct address_space *mapping, unsigned int order)
 {
 	unsigned long index = xas->xa_index;
-	bool pmd_downgrade; /* splitting PMD entry into PTE entries? */
+	bool size_downgrade; /* splitting entry into PTE entries? */
 	void *entry;
 
 retry:
-	pmd_downgrade = false;
+	size_downgrade = false;
 	xas_lock_irq(xas);
 	entry = get_unlocked_entry(xas, order);
 
@@ -589,15 +599,25 @@ void *dax_grab_mapping_entry(struct xa_state *xas,
 		}
 
 		if (order == 0) {
-			if (dax_is_pmd_entry(entry) &&
+			if (!dax_is_pte_entry(entry) &&
 			    (dax_is_zero_entry(entry) ||
 			     dax_is_empty_entry(entry))) {
-				pmd_downgrade = true;
+				size_downgrade = true;
 			}
 		}
 	}
 
-	if (pmd_downgrade) {
+	if (size_downgrade) {
+		unsigned long colour, nr;
+
+		if (dax_is_pmd_entry(entry)) {
+			colour = PG_PMD_COLOUR;
+			nr = PG_PMD_NR;
+		} else {
+			colour = PG_PUD_COLOUR;
+			nr = PG_PUD_NR;
+		}
+
 		/*
 		 * Make sure 'entry' remains valid while we drop
 		 * the i_pages lock.
@@ -611,9 +631,8 @@ void *dax_grab_mapping_entry(struct xa_state *xas,
 		 */
 		if (dax_is_zero_entry(entry)) {
 			xas_unlock_irq(xas);
-			unmap_mapping_pages(mapping,
-					    xas->xa_index & ~PG_PMD_COLOUR,
-					    PG_PMD_NR, false);
+			unmap_mapping_pages(mapping, xas->xa_index & ~colour,
+					    nr, false);
 			xas_reset(xas);
 			xas_lock_irq(xas);
 		}
@@ -621,7 +640,7 @@ void *dax_grab_mapping_entry(struct xa_state *xas,
 		dax_disassociate_entry(entry, mapping, false);
 		xas_store(xas, NULL); /* undo the PMD join */
 		dax_wake_entry(xas, entry, WAKE_ALL);
-		mapping->nrpages -= PG_PMD_NR;
+		mapping->nrpages -= nr;
 		entry = NULL;
 		xas_set(xas, index);
 	}
@@ -631,7 +650,9 @@ void *dax_grab_mapping_entry(struct xa_state *xas,
 	} else {
 		unsigned long flags = DAX_EMPTY;
 
-		if (order > 0)
+		if (order == PUD_SHIFT - PAGE_SHIFT)
+			flags |= DAX_PUD;
+		else if (order == PMD_SHIFT - PAGE_SHIFT)
 			flags |= DAX_PMD;
 		entry = dax_make_entry(pfn_to_pfn_t(0), flags);
 		dax_lock_entry(xas, entry);
@@ -811,7 +832,10 @@ vm_fault_t dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
 	if (cow || (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE))) {
 		unsigned long index = xas->xa_index;
 		/* we are replacing a zero page with block mapping */
-		if (dax_is_pmd_entry(entry))
+		if (dax_is_pud_entry(entry))
+			unmap_mapping_pages(mapping, index & ~PG_PUD_COLOUR,
+					    PG_PUD_NR, false);
+		else if (dax_is_pmd_entry(entry))
 			unmap_mapping_pages(mapping, index & ~PG_PMD_COLOUR,
 					    PG_PMD_NR, false);
 		else /* pte entry */
@@ -983,6 +1007,8 @@ vm_fault_t dax_insert_pfn_mkwrite(struct vm_fault *vmf, pfn_t pfn,
 	else if (order == PMD_ORDER)
 		ret = vmf_insert_pfn_pmd(vmf, pfn, FAULT_FLAG_WRITE);
 #endif
+	else if (order == PUD_ORDER)
+		ret = vmf_insert_pfn_pud(vmf, pfn, FAULT_FLAG_WRITE);
 	else
 		ret = VM_FAULT_FALLBACK;
 	dax_unlock_entry(&xas, entry);
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 05ce7992ac43..81fcc0e4a070 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -268,21 +268,24 @@ static inline bool dax_mapping(struct address_space *mapping)
 }
 
 /*
- * DAX pagecache entries use XArray value entries so they can't be mistaken
- * for pages.  We use one bit for locking, one bit for the entry size (PMD)
- * and two more to tell us if the entry is a zero page or an empty entry that
- * is just used for locking.  In total four special bits.
+ * DAX pagecache entries use XArray value entries so they can't be
+ * mistaken for pages.  We use one bit for locking, two bits for the
+ * entry size (PMD, PUD) and two more to tell us if the entry is a zero
+ * page or an empty entry that is just used for locking.  In total 5
+ * special bits which limits the max pfn that can be stored as:
+ * (1UL << 57 - PAGE_SHIFT). 63 - DAX_SHIFT - 1 (for xa_mk_value()).
  *
- * If the PMD bit isn't set the entry has size PAGE_SIZE, and if the ZERO_PAGE
- * and EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
- * block allocation.
+ * If the P{M,U}D bits are not set the entry has size PAGE_SIZE, and if
+ * the ZERO_PAGE and EMPTY bits aren't set the entry is a normal DAX
+ * entry with a filesystem block allocation.
  */
-#define DAX_SHIFT (4)
+#define DAX_SHIFT (5)
 #define DAX_MASK ((1UL << DAX_SHIFT) - 1)
 #define DAX_LOCKED (1UL << 0)
 #define DAX_PMD (1UL << 1)
-#define DAX_ZERO_PAGE (1UL << 2)
-#define DAX_EMPTY (1UL << 3)
+#define DAX_PUD (1UL << 2)
+#define DAX_ZERO_PAGE (1UL << 3)
+#define DAX_EMPTY (1UL << 4)
 
 /*
  * These flags are not conveyed in Xarray value entries, they are just
@@ -304,6 +307,13 @@ int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 /* The order of a PMD entry */
 #define PMD_ORDER (PMD_SHIFT - PAGE_SHIFT)
 
+/* The 'colour' (ie low bits) within a PUD of a page offset.  */
+#define PG_PUD_COLOUR ((PUD_SIZE >> PAGE_SHIFT) - 1)
+#define PG_PUD_NR (PUD_SIZE >> PAGE_SHIFT)
+
+/* The order of a PUD entry */
+#define PUD_ORDER (PUD_SHIFT - PAGE_SHIFT)
+
 static inline unsigned int pe_order(enum page_entry_size pe_size)
 {
 	if (pe_size == PE_SIZE_PTE)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 768e5261fdae..de73f5a16252 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -18,10 +18,19 @@ int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 
 #ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
 void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud);
+vm_fault_t vmf_insert_pfn_pud_prot(struct vm_fault *vmf, pfn_t pfn,
+				   pgprot_t pgprot, bool write);
 #else
 static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 {
 }
+
+static inline vm_fault_t vmf_insert_pfn_pud_prot(struct vm_fault *vmf,
+						 pfn_t pfn, pgprot_t pgprot,
+						 bool write)
+{
+	return VM_FAULT_SIGBUS;
+}
 #endif
 
 vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf);
@@ -58,8 +67,6 @@ static inline vm_fault_t vmf_insert_pfn_pmd(struct vm_fault *vmf, pfn_t pfn,
 {
 	return vmf_insert_pfn_pmd_prot(vmf, pfn, vmf->vma->vm_page_prot, write);
 }
-vm_fault_t vmf_insert_pfn_pud_prot(struct vm_fault *vmf, pfn_t pfn,
-				   pgprot_t pgprot, bool write);
 
 /**
  * vmf_insert_pfn_pud - insert a pud size pfn



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 12/13] devdax: Use dax_insert_entry() + dax_delete_mapping_entry()
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (10 preceding siblings ...)
  2022-09-04  2:17 ` [PATCH 11/13] devdax: add PUD support to the DAX mapping infrastructure Dan Williams
@ 2022-09-04  2:17 ` Dan Williams
  2022-09-04  2:17 ` [PATCH 13/13] mm/gup: Drop DAX pgmap accounting Dan Williams
  2022-09-06 13:05 ` [PATCH 00/13] Fix the DAX-gup mistake Jason Gunthorpe
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:17 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Jason Gunthorpe,
	Christoph Hellwig, John Hubbard, linux-mm, nvdimm, linux-fsdevel

Track entries and take pgmap references at mapping insertion time.
Revoke mappings and drop the associated pgmap references at device
destruction or inode eviction time. With this in place, and the fsdax
equivalent already in place, the gup code no longer needs to consider
PTE_DEVMAP as an indicator to get a pgmap reference before taking a page
reference.

In other words, GUP takes additional references on mapped pages. Until
now, DAX in all its forms was failing to take references at mapping
time. With that fixed there is no longer a requirement for the gup code
to manage @pgmap references. That cleanup is saved for a follow-on
patch.
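
Each devdax fault handler follows the same insertion pattern; a
condensed sketch of the PTE case from the diff below (fallback / error
reporting comes back as an internal XArray entry from the grab):

	entry = dax_grab_mapping_entry(&xas, mapping, 0);
	if (xa_is_internal(entry))
		return xa_to_internal(entry);

	ret = dax_insert_entry(&xas, vmf, &entry, pfn, 0);
	dax_unlock_entry(&xas, entry);
	if (ret)
		return ret;

	return vmf_insert_mixed(vmf->vma, vmf->address, pfn);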

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Jason Gunthorpe <jgg@nvidia.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/dax/bus.c     |    2 +
 drivers/dax/device.c  |   73 +++++++++++++++++++++++++++++--------------------
 drivers/dax/mapping.c |    3 ++
 3 files changed, 47 insertions(+), 31 deletions(-)

diff --git a/drivers/dax/bus.c b/drivers/dax/bus.c
index 1dad813ee4a6..f4dd9b8b88a9 100644
--- a/drivers/dax/bus.c
+++ b/drivers/dax/bus.c
@@ -384,7 +384,7 @@ void kill_dev_dax(struct dev_dax *dev_dax)
 	struct inode *inode = dax_inode(dax_dev);
 
 	kill_dax(dax_dev);
-	unmap_mapping_range(inode->i_mapping, 0, 0, 1);
+	truncate_inode_pages(inode->i_mapping, 0);
 
 	/*
 	 * Dynamic dax region have the pgmap allocated via dev_kzalloc()
diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 5494d745ced5..7f306939807e 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -73,38 +73,15 @@ __weak phys_addr_t dax_pgoff_to_phys(struct dev_dax *dev_dax, pgoff_t pgoff,
 	return -1;
 }
 
-static void dax_set_mapping(struct vm_fault *vmf, pfn_t pfn,
-			      unsigned long fault_size)
-{
-	unsigned long i, nr_pages = fault_size / PAGE_SIZE;
-	struct file *filp = vmf->vma->vm_file;
-	struct dev_dax *dev_dax = filp->private_data;
-	pgoff_t pgoff;
-
-	/* mapping is only set on the head */
-	if (dev_dax->pgmap->vmemmap_shift)
-		nr_pages = 1;
-
-	pgoff = linear_page_index(vmf->vma,
-			ALIGN(vmf->address, fault_size));
-
-	for (i = 0; i < nr_pages; i++) {
-		struct page *page = pfn_to_page(pfn_t_to_pfn(pfn) + i);
-
-		page = compound_head(page);
-		if (page->mapping)
-			continue;
-
-		page->mapping = filp->f_mapping;
-		page->index = pgoff + i;
-	}
-}
-
 static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax,
 				struct vm_fault *vmf)
 {
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
+	XA_STATE(xas, &mapping->i_pages, vmf->pgoff);
 	struct device *dev = &dev_dax->dev;
 	phys_addr_t phys;
+	vm_fault_t ret;
+	void *entry;
 	pfn_t pfn;
 	unsigned int fault_size = PAGE_SIZE;
 
@@ -128,7 +105,16 @@ static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax,
 
 	pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP);
 
-	dax_set_mapping(vmf, pfn, fault_size);
+	entry = dax_grab_mapping_entry(&xas, mapping, 0);
+	if (xa_is_internal(entry))
+		return xa_to_internal(entry);
+
+	ret = dax_insert_entry(&xas, vmf, &entry, pfn, 0);
+
+	dax_unlock_entry(&xas, entry);
+
+	if (ret)
+		return ret;
 
 	return vmf_insert_mixed(vmf->vma, vmf->address, pfn);
 }
@@ -136,10 +122,14 @@ static vm_fault_t __dev_dax_pte_fault(struct dev_dax *dev_dax,
 static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax,
 				struct vm_fault *vmf)
 {
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
+	XA_STATE(xas, &mapping->i_pages, vmf->pgoff);
 	struct device *dev = &dev_dax->dev;
 	phys_addr_t phys;
+	vm_fault_t ret;
 	pgoff_t pgoff;
+	void *entry;
 	pfn_t pfn;
 	unsigned int fault_size = PMD_SIZE;
 
@@ -171,7 +161,16 @@ static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax,
 
 	pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP);
 
-	dax_set_mapping(vmf, pfn, fault_size);
+	entry = dax_grab_mapping_entry(&xas, mapping, PMD_ORDER);
+	if (xa_is_internal(entry))
+		return xa_to_internal(entry);
+
+	ret = dax_insert_entry(&xas, vmf, &entry, pfn, DAX_PMD);
+
+	dax_unlock_entry(&xas, entry);
+
+	if (ret)
+		return ret;
 
 	return vmf_insert_pfn_pmd(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE);
 }
@@ -180,10 +179,14 @@ static vm_fault_t __dev_dax_pmd_fault(struct dev_dax *dev_dax,
 static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
 				struct vm_fault *vmf)
 {
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	unsigned long pud_addr = vmf->address & PUD_MASK;
+	XA_STATE(xas, &mapping->i_pages, vmf->pgoff);
 	struct device *dev = &dev_dax->dev;
 	phys_addr_t phys;
+	vm_fault_t ret;
 	pgoff_t pgoff;
+	void *entry;
 	pfn_t pfn;
 	unsigned int fault_size = PUD_SIZE;
 
@@ -216,7 +219,16 @@ static vm_fault_t __dev_dax_pud_fault(struct dev_dax *dev_dax,
 
 	pfn = phys_to_pfn_t(phys, PFN_DEV|PFN_MAP);
 
-	dax_set_mapping(vmf, pfn, fault_size);
+	entry = dax_grab_mapping_entry(&xas, mapping, PUD_ORDER);
+	if (xa_is_internal(entry))
+		return xa_to_internal(entry);
+
+	ret = dax_insert_entry(&xas, vmf, &entry, pfn, DAX_PUD);
+
+	dax_unlock_entry(&xas, entry);
+
+	if (ret)
+		return ret;
 
 	return vmf_insert_pfn_pud(vmf, pfn, vmf->flags & FAULT_FLAG_WRITE);
 }
@@ -494,3 +506,4 @@ MODULE_LICENSE("GPL v2");
 module_init(dax_init);
 module_exit(dax_exit);
 MODULE_ALIAS_DAX_DEVICE(0);
+MODULE_IMPORT_NS(DAX);
diff --git a/drivers/dax/mapping.c b/drivers/dax/mapping.c
index 6eaa0fe33c16..b9851cfd4cbd 100644
--- a/drivers/dax/mapping.c
+++ b/drivers/dax/mapping.c
@@ -261,6 +261,7 @@ void dax_unlock_entry(struct xa_state *xas, void *entry)
 	WARN_ON(!dax_is_locked(old));
 	dax_wake_entry(xas, entry, WAKE_NEXT);
 }
+EXPORT_SYMBOL_NS_GPL(dax_unlock_entry, DAX);
 
 /*
  * Return: The entry stored at this location before it was locked.
@@ -674,6 +675,7 @@ void *dax_grab_mapping_entry(struct xa_state *xas,
 	xas_unlock_irq(xas);
 	return xa_mk_internal(VM_FAULT_FALLBACK);
 }
+EXPORT_SYMBOL_NS_GPL(dax_grab_mapping_entry, DAX);
 
 /**
  * dax_layout_pinned_page_range - find first pinned page in @mapping
@@ -875,6 +877,7 @@ vm_fault_t dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
 	*pentry = entry;
 	return 0;
 }
+EXPORT_SYMBOL_NS_GPL(dax_insert_entry, DAX);
 
 int dax_writeback_one(struct xa_state *xas, struct dax_device *dax_dev,
 		      struct address_space *mapping, void *entry)



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* [PATCH 13/13] mm/gup: Drop DAX pgmap accounting
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (11 preceding siblings ...)
  2022-09-04  2:17 ` [PATCH 12/13] devdax: Use dax_insert_entry() + dax_delete_mapping_entry() Dan Williams
@ 2022-09-04  2:17 ` Dan Williams
  2022-09-06 13:05 ` [PATCH 00/13] Fix the DAX-gup mistake Jason Gunthorpe
  13 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-04  2:17 UTC (permalink / raw)
  To: akpm
  Cc: Matthew Wilcox, Jan Kara, Darrick J. Wong, Christoph Hellwig,
	John Hubbard, Jason Gunthorpe, linux-mm, nvdimm, linux-fsdevel

Now that pgmap accounting is handled at map time, it can be dropped from
gup time.

A hurdle still remains: filesystem-DAX huge pages are not compound
pages, which requires infrastructure like __gup_device_huge_p{m,u}d()
to stick around.

Additionally, even with this change ZONE_DEVICE pages are still not
suitable to be returned from vm_normal_page(), so this cleanup is
limited to deleting the pgmap reference manipulation. This is an
incremental step on the path to removing pte_devmap() altogether.

Note that follow_pmd_devmap() can be deleted entirely since a few added
pmd_devmap() checks allow the transparent huge page path to be reused.
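
The net effect on the slow path is that a devmap pte resolves directly
to its page and the reference is then taken via try_grab_page() as for
any other page; condensed from the follow_page_pte() hunk below:

	page = vm_normal_page(vma, address, pte);
	if (!page && pte_devmap(pte)) {
		/*
		 * ZONE_DEVICE pages are not yet vm_normal_page()
		 * instances, but no pgmap reference is needed here
		 * anymore.
		 */
		page = pte_page(pte);
	}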

Cc: Matthew Wilcox <willy@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Cc: "Darrick J. Wong" <djwong@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: John Hubbard <jhubbard@nvidia.com>
Reported-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 include/linux/huge_mm.h |   12 +------
 mm/gup.c                |   83 +++++++++++------------------------------------
 mm/huge_memory.c        |   54 +------------------------------
 3 files changed, 22 insertions(+), 127 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index de73f5a16252..b8ed373c6090 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -263,10 +263,8 @@ static inline bool folio_test_pmd_mappable(struct folio *folio)
 	return folio_order(folio) >= HPAGE_PMD_ORDER;
 }
 
-struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
-		pmd_t *pmd, int flags, struct dev_pagemap **pgmap);
 struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags, struct dev_pagemap **pgmap);
+		pud_t *pud, int flags);
 
 vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);
 
@@ -418,14 +416,8 @@ static inline void mm_put_huge_zero_page(struct mm_struct *mm)
 	return;
 }
 
-static inline struct page *follow_devmap_pmd(struct vm_area_struct *vma,
-	unsigned long addr, pmd_t *pmd, int flags, struct dev_pagemap **pgmap)
-{
-	return NULL;
-}
-
 static inline struct page *follow_devmap_pud(struct vm_area_struct *vma,
-	unsigned long addr, pud_t *pud, int flags, struct dev_pagemap **pgmap)
+	unsigned long addr, pud_t *pud, int flags)
 {
 	return NULL;
 }
diff --git a/mm/gup.c b/mm/gup.c
index 67dfffe97917..3832edd27dfd 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -25,7 +25,6 @@
 #include "internal.h"
 
 struct follow_page_context {
-	struct dev_pagemap *pgmap;
 	unsigned int page_mask;
 };
 
@@ -490,8 +489,7 @@ static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
 }
 
 static struct page *follow_page_pte(struct vm_area_struct *vma,
-		unsigned long address, pmd_t *pmd, unsigned int flags,
-		struct dev_pagemap **pgmap)
+		unsigned long address, pmd_t *pmd, unsigned int flags)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	struct page *page;
@@ -535,17 +533,13 @@ static struct page *follow_page_pte(struct vm_area_struct *vma,
 	}
 
 	page = vm_normal_page(vma, address, pte);
-	if (!page && pte_devmap(pte) && (flags & (FOLL_GET | FOLL_PIN))) {
+	if (!page && pte_devmap(pte)) {
 		/*
-		 * Only return device mapping pages in the FOLL_GET or FOLL_PIN
-		 * case since they are only valid while holding the pgmap
-		 * reference.
+		 * ZONE_DEVICE pages are not yet treated as vm_normal_page()
+		 * instances, with respect to mapcount and compound-page
+		 * metadata
 		 */
-		*pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap);
-		if (*pgmap)
-			page = pte_page(pte);
-		else
-			goto no_page;
+		page = pte_page(pte);
 	} else if (unlikely(!page)) {
 		if (flags & FOLL_DUMP) {
 			/* Avoid special (like zero) pages in core dumps */
@@ -663,15 +657,8 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 			return no_page_table(vma, flags);
 		goto retry;
 	}
-	if (pmd_devmap(pmdval)) {
-		ptl = pmd_lock(mm, pmd);
-		page = follow_devmap_pmd(vma, address, pmd, flags, &ctx->pgmap);
-		spin_unlock(ptl);
-		if (page)
-			return page;
-	}
-	if (likely(!pmd_trans_huge(pmdval)))
-		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
+	if (likely(!(pmd_trans_huge(pmdval) || pmd_devmap(pmdval))))
+		return follow_page_pte(vma, address, pmd, flags);
 
 	if ((flags & FOLL_NUMA) && pmd_protnone(pmdval))
 		return no_page_table(vma, flags);
@@ -689,9 +676,9 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		pmd_migration_entry_wait(mm, pmd);
 		goto retry_locked;
 	}
-	if (unlikely(!pmd_trans_huge(*pmd))) {
+	if (unlikely(!(pmd_trans_huge(*pmd) || pmd_devmap(pmdval)))) {
 		spin_unlock(ptl);
-		return follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
+		return follow_page_pte(vma, address, pmd, flags);
 	}
 	if (flags & FOLL_SPLIT_PMD) {
 		int ret;
@@ -709,7 +696,7 @@ static struct page *follow_pmd_mask(struct vm_area_struct *vma,
 		}
 
 		return ret ? ERR_PTR(ret) :
-			follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
+			follow_page_pte(vma, address, pmd, flags);
 	}
 	page = follow_trans_huge_pmd(vma, address, pmd, flags);
 	spin_unlock(ptl);
@@ -746,7 +733,7 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 	}
 	if (pud_devmap(*pud)) {
 		ptl = pud_lock(mm, pud);
-		page = follow_devmap_pud(vma, address, pud, flags, &ctx->pgmap);
+		page = follow_devmap_pud(vma, address, pud, flags);
 		spin_unlock(ptl);
 		if (page)
 			return page;
@@ -793,9 +780,6 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
  *
  * @flags can have FOLL_ flags set, defined in <linux/mm.h>
  *
- * When getting pages from ZONE_DEVICE memory, the @ctx->pgmap caches
- * the device's dev_pagemap metadata to avoid repeating expensive lookups.
- *
  * When getting an anonymous page and the caller has to trigger unsharing
  * of a shared anonymous page first, -EMLINK is returned. The caller should
  * trigger a fault with FAULT_FLAG_UNSHARE set. Note that unsharing is only
@@ -850,7 +834,7 @@ static struct page *follow_page_mask(struct vm_area_struct *vma,
 struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 			 unsigned int foll_flags)
 {
-	struct follow_page_context ctx = { NULL };
+	struct follow_page_context ctx = { 0 };
 	struct page *page;
 
 	if (vma_is_secretmem(vma))
@@ -860,8 +844,6 @@ struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
 		return NULL;
 
 	page = follow_page_mask(vma, address, foll_flags, &ctx);
-	if (ctx.pgmap)
-		put_dev_pagemap(ctx.pgmap);
 	return page;
 }
 
@@ -1121,7 +1103,7 @@ static long __get_user_pages(struct mm_struct *mm,
 {
 	long ret = 0, i = 0;
 	struct vm_area_struct *vma = NULL;
-	struct follow_page_context ctx = { NULL };
+	struct follow_page_context ctx = { 0 };
 
 	if (!nr_pages)
 		return 0;
@@ -1244,8 +1226,6 @@ static long __get_user_pages(struct mm_struct *mm,
 		nr_pages -= page_increm;
 	} while (nr_pages);
 out:
-	if (ctx.pgmap)
-		put_dev_pagemap(ctx.pgmap);
 	return i ? i : ret;
 }
 
@@ -2325,9 +2305,8 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
 static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 			 unsigned int flags, struct page **pages, int *nr)
 {
-	struct dev_pagemap *pgmap = NULL;
-	int nr_start = *nr, ret = 0;
 	pte_t *ptep, *ptem;
+	int ret = 0;
 
 	ptem = ptep = pte_offset_map(&pmd, addr);
 	do {
@@ -2348,12 +2327,6 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 		if (pte_devmap(pte)) {
 			if (unlikely(flags & FOLL_LONGTERM))
 				goto pte_unmap;
-
-			pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
-			if (unlikely(!pgmap)) {
-				undo_dev_pagemap(nr, nr_start, flags, pages);
-				goto pte_unmap;
-			}
 		} else if (pte_special(pte))
 			goto pte_unmap;
 
@@ -2400,8 +2373,6 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
 	ret = 1;
 
 pte_unmap:
-	if (pgmap)
-		put_dev_pagemap(pgmap);
 	pte_unmap(ptem);
 	return ret;
 }
@@ -2428,28 +2399,17 @@ static int __gup_device_huge(unsigned long pfn, unsigned long addr,
 			     unsigned long end, unsigned int flags,
 			     struct page **pages, int *nr)
 {
-	int nr_start = *nr;
-	struct dev_pagemap *pgmap = NULL;
-
 	do {
 		struct page *page = pfn_to_page(pfn);
 
-		pgmap = get_dev_pagemap(pfn, pgmap);
-		if (unlikely(!pgmap)) {
-			undo_dev_pagemap(nr, nr_start, flags, pages);
-			break;
-		}
 		SetPageReferenced(page);
 		pages[*nr] = page;
-		if (unlikely(!try_grab_page(page, flags))) {
-			undo_dev_pagemap(nr, nr_start, flags, pages);
+		if (unlikely(!try_grab_page(page, flags)))
 			break;
-		}
 		(*nr)++;
 		pfn++;
 	} while (addr += PAGE_SIZE, addr != end);
 
-	put_dev_pagemap(pgmap);
 	return addr == end;
 }
 
@@ -2458,16 +2418,14 @@ static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 				 struct page **pages, int *nr)
 {
 	unsigned long fault_pfn;
-	int nr_start = *nr;
 
 	fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	if (!__gup_device_huge(fault_pfn, addr, end, flags, pages, nr))
 		return 0;
 
-	if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
-		undo_dev_pagemap(nr, nr_start, flags, pages);
+	if (unlikely(pmd_val(orig) != pmd_val(*pmdp)))
 		return 0;
-	}
+
 	return 1;
 }
 
@@ -2476,16 +2434,13 @@ static int __gup_device_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 				 struct page **pages, int *nr)
 {
 	unsigned long fault_pfn;
-	int nr_start = *nr;
 
 	fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	if (!__gup_device_huge(fault_pfn, addr, end, flags, pages, nr))
 		return 0;
 
-	if (unlikely(pud_val(orig) != pud_val(*pudp))) {
-		undo_dev_pagemap(nr, nr_start, flags, pages);
+	if (unlikely(pud_val(orig) != pud_val(*pudp)))
 		return 0;
-	}
 	return 1;
 }
 #else
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8a7c1b344abe..ef68296f2158 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1031,55 +1031,6 @@ static void touch_pmd(struct vm_area_struct *vma, unsigned long addr,
 		update_mmu_cache_pmd(vma, addr, pmd);
 }
 
-struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
-		pmd_t *pmd, int flags, struct dev_pagemap **pgmap)
-{
-	unsigned long pfn = pmd_pfn(*pmd);
-	struct mm_struct *mm = vma->vm_mm;
-	struct page *page;
-
-	assert_spin_locked(pmd_lockptr(mm, pmd));
-
-	/*
-	 * When we COW a devmap PMD entry, we split it into PTEs, so we should
-	 * not be in this function with `flags & FOLL_COW` set.
-	 */
-	WARN_ONCE(flags & FOLL_COW, "mm: In follow_devmap_pmd with FOLL_COW set");
-
-	/* FOLL_GET and FOLL_PIN are mutually exclusive. */
-	if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
-			 (FOLL_PIN | FOLL_GET)))
-		return NULL;
-
-	if (flags & FOLL_WRITE && !pmd_write(*pmd))
-		return NULL;
-
-	if (pmd_present(*pmd) && pmd_devmap(*pmd))
-		/* pass */;
-	else
-		return NULL;
-
-	if (flags & FOLL_TOUCH)
-		touch_pmd(vma, addr, pmd, flags & FOLL_WRITE);
-
-	/*
-	 * device mapped pages can only be returned if the
-	 * caller will manage the page reference count.
-	 */
-	if (!(flags & (FOLL_GET | FOLL_PIN)))
-		return ERR_PTR(-EEXIST);
-
-	pfn += (addr & ~PMD_MASK) >> PAGE_SHIFT;
-	*pgmap = get_dev_pagemap(pfn, *pgmap);
-	if (!*pgmap)
-		return ERR_PTR(-EFAULT);
-	page = pfn_to_page(pfn);
-	if (!try_grab_page(page, flags))
-		page = ERR_PTR(-ENOMEM);
-
-	return page;
-}
-
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
 		  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
@@ -1196,7 +1147,7 @@ static void touch_pud(struct vm_area_struct *vma, unsigned long addr,
 }
 
 struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
-		pud_t *pud, int flags, struct dev_pagemap **pgmap)
+			       pud_t *pud, int flags)
 {
 	unsigned long pfn = pud_pfn(*pud);
 	struct mm_struct *mm = vma->vm_mm;
@@ -1230,9 +1181,6 @@ struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
 		return ERR_PTR(-EEXIST);
 
 	pfn += (addr & ~PUD_MASK) >> PAGE_SHIFT;
-	*pgmap = get_dev_pagemap(pfn, *pgmap);
-	if (!*pgmap)
-		return ERR_PTR(-EFAULT);
 	page = pfn_to_page(pfn);
 	if (!try_grab_page(page, flags))
 		page = ERR_PTR(-ENOMEM);



^ permalink raw reply related	[flat|nested] 33+ messages in thread

* Re: [PATCH 02/13] fsdax: Use page_maybe_dma_pinned() for DAX vs DMA collisions
  2022-09-04  2:16 ` [PATCH 02/13] fsdax: Use page_maybe_dma_pinned() for DAX vs DMA collisions Dan Williams
@ 2022-09-06 12:07   ` Jason Gunthorpe
  0 siblings, 0 replies; 33+ messages in thread
From: Jason Gunthorpe @ 2022-09-06 12:07 UTC (permalink / raw)
  To: Dan Williams
  Cc: akpm, Jan Kara, Darrick J. Wong, Christoph Hellwig, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

On Sat, Sep 03, 2022 at 07:16:12PM -0700, Dan Williams wrote:

> diff --git a/mm/gup.c b/mm/gup.c
> index 732825157430..499c46296fda 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -177,8 +177,10 @@ static void gup_put_folio(struct folio *folio, int refs, unsigned int flags)
>  			refs *= GUP_PIN_COUNTING_BIAS;
>  	}
>  
> -	if (!put_devmap_managed_page_refs(&folio->page, refs))
> -		folio_put_refs(folio, refs);
> +	folio_put_refs(folio, refs);

Can this be made unconditional at this point in the series? Shouldn't
something have been changed on the get ref side?

Jason


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 07/13] fsdax: Manage pgmap references at entry insertion and deletion
  2022-09-04  2:16 ` [PATCH 07/13] fsdax: Manage pgmap references at entry insertion and deletion Dan Williams
@ 2022-09-06 12:30   ` Jason Gunthorpe
  0 siblings, 0 replies; 33+ messages in thread
From: Jason Gunthorpe @ 2022-09-06 12:30 UTC (permalink / raw)
  To: Dan Williams
  Cc: akpm, Matthew Wilcox, Jan Kara, Darrick J. Wong,
	Christoph Hellwig, John Hubbard, linux-mm, nvdimm, linux-fsdevel

On Sat, Sep 03, 2022 at 07:16:40PM -0700, Dan Williams wrote:

> +	pfn = dax_to_pfn(entry);
> +	pgmap = get_dev_pagemap_many(pfn, NULL, PHYS_PFN(size));
> +	if (!pgmap)
> +		return VM_FAULT_SIGBUS;

I'm not sure this makes sense to me; why do we need to hold this
reference here?

The entire point of normal struct page refcounting is that once we put
the pte the refcount can be elevated by anything.

So this can't be protective because when we get here:

> +	page = pfn_to_page(dax_to_pfn(entry));
> +	put_dev_pagemap_many(page->pgmap, PHYS_PFN(size));

We don't know that all the page references have gone away.

When the page refcount reaches zero we call, from free_zone_device_page():

	page->pgmap->ops->page_free(page);

Shouldn't we be managing the pgmap at this point instead? Ie when we
make the pageref go from 0->1 we incr the pgmap and when it goes from
1->0 we decr it?

Otherwise, what prevents the above from UAFing?

Jason


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
                   ` (12 preceding siblings ...)
  2022-09-04  2:17 ` [PATCH 13/13] mm/gup: Drop DAX pgmap accounting Dan Williams
@ 2022-09-06 13:05 ` Jason Gunthorpe
  2022-09-06 17:23   ` Dan Williams
  13 siblings, 1 reply; 33+ messages in thread
From: Jason Gunthorpe @ 2022-09-06 13:05 UTC (permalink / raw)
  To: Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

On Sat, Sep 03, 2022 at 07:16:00PM -0700, Dan Williams wrote:
> tl;dr: Move the pin of 'struct dev_pagemap' instances from gup-time to
> map time, move the unpin of 'struct dev_pagemap' to truncate_inode_pages()
> for fsdax and devdax inodes, and use page_maybe_dma_pinned() to
> determine when filesystems can safely truncate DAX mappings vs DMA.
> 
> The longer story is that DAX has caused friction with folio development
> and other device-memory use cases due to its hack of using a
> page-reference count of 1 to indicate that the page is DMA idle. That
> situation arose from the mistake of not managing DAX page reference
> counts at map time. The lack of page reference counting at map time grew
> organically from the original DAX experiment of attempting to manage DAX
> mappings without page structures. The page lock, dirty tracking and
> other entry management was supported sans pages. However, the page
> support was then bolted on incrementally so solve problems with gup,
> memory-failure, and all the other kernel services that are missing when
> a pfn does not have an associated page structure.
> 
> Since then John has led an effort to account for when a page is pinned
> for DMA vs other sources that elevate the reference count. The
> page_maybe_dma_pinned() helper slots in seamlessly to replace the need
> to track transitions to page->_refount == 1.
> 
> The larger change in this set comes from Jason's observation that
> inserting DAX mappings without any reference taken is a bug. So
> dax_insert_entry(), that fsdax uses, is updated to take 'struct
> dev_pagemap' references, and devdax is updated to reuse the same.

It wasn't pagemap references that were the problem, it was struct page
references.

The pagemap is just something that should be ref'd in the background:
as long as a struct page has a positive reference the pagemap should be
considered referenced. IMHO free_zone_device_page() should be dealing
with this - put the pagemap after calling page_free().

Pagemap is protecting page->pgmap from UAF so we must ensure we hold
it when we do pgmap->ops

That should be the only put, and it should pair with the only get
which happens when the driver takes a 0 refcount page out of its free
list and makes it have a refcount of 1.
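
Something like this hypothetical placement (the existing teardown in
free_zone_device_page() is elided; the put_dev_pagemap() after the
page_free() callback is the proposal, not what the series currently
does):

	void free_zone_device_page(struct page *page)
	{
		struct dev_pagemap *pgmap = page->pgmap;

		/* existing teardown elided */
		pgmap->ops->page_free(page);

		/* page is free, now it is safe to drop the pgmap ref */
		put_dev_pagemap(pgmap);
	}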

> page mapping helpers. One of the immediate hurdles is the usage of
> pmd_devmap() to distinguish large page mappings that are not transparent
> huge pages.

And this is because the struct page refcounting is not right :|

I had thought the progression would be to make fsdax use compound
folios, install compound folios in the PMD, remove all the special
case refcounting for DAX from the pagetable code, then address the
pgmap issue from the basis of working page->refcount, eg by putting a
pgmap put in right after the op->page_free call.

Can we continue to have the weird page->refcount behavior and still
change the other things?

Jason


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-06 13:05 ` [PATCH 00/13] Fix the DAX-gup mistake Jason Gunthorpe
@ 2022-09-06 17:23   ` Dan Williams
  2022-09-06 17:29     ` Jason Gunthorpe
  0 siblings, 1 reply; 33+ messages in thread
From: Dan Williams @ 2022-09-06 17:23 UTC (permalink / raw)
  To: Jason Gunthorpe, Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

Jason Gunthorpe wrote:
> On Sat, Sep 03, 2022 at 07:16:00PM -0700, Dan Williams wrote:
> > tl;dr: Move the pin of 'struct dev_pagemap' instances from gup-time to
> > map time, move the unpin of 'struct dev_pagemap' to truncate_inode_pages()
> > for fsdax and devdax inodes, and use page_maybe_dma_pinned() to
> > determine when filesystems can safely truncate DAX mappings vs DMA.
> > 
> > The longer story is that DAX has caused friction with folio development
> > and other device-memory use cases due to its hack of using a
> > page-reference count of 1 to indicate that the page is DMA idle. That
> > situation arose from the mistake of not managing DAX page reference
> > counts at map time. The lack of page reference counting at map time grew
> > organically from the original DAX experiment of attempting to manage DAX
> > mappings without page structures. The page lock, dirty tracking and
> > other entry management was supported sans pages. However, the page
> > support was then bolted on incrementally so solve problems with gup,
> > memory-failure, and all the other kernel services that are missing when
> > a pfn does not have an associated page structure.
> > 
> > Since then John has led an effort to account for when a page is pinned
> > for DMA vs other sources that elevate the reference count. The
> > page_maybe_dma_pinned() helper slots in seamlessly to replace the need
> > to track transitions to page->_refount == 1.
> > 
> > The larger change in this set comes from Jason's observation that
> > inserting DAX mappings without any reference taken is a bug. So
> > dax_insert_entry(), that fsdax uses, is updated to take 'struct
> > dev_pagemap' references, and devdax is updated to reuse the same.
> 
> It wasn't pagemap references that were the problem, it was struct page
> references.
> 
> The pagemap is just something that should be ref'd in the background:
> as long as a struct page has a positive reference the pagemap should be
> considered referenced. IMHO free_zone_device_page() should be dealing
> with this - put the pagemap after calling page_free().

Yes.

I think I got caught admiring the solution of the
page_maybe_dma_pinned() conversion for replacing the ugly observation of
the 2 -> 1 refcount transition, and then introduced this breakage. I
will rework this to take the pgmap reference on the 0 -> 1 refcount
transition and use free_zone_device_page() to drop the pgmap
reference.

> Pagemap is protecting page->pgmap from UAF so we must ensure we hold
> it when we do pgmap->ops
> 
> That should be the only put, and it should pair with the only get
> which happens when the driver takes a 0 refcount page out of its free
> list and makes it have a refcount of 1.
> 
> > page mapping helpers. One of the immediate hurdles is the usage of
> > pmd_devmap() to distinguish large page mappings that are not transparent
> > huge pages.
> 
> And this is because the struct page refcounting is not right :|
> 
> I had thought the progression would be to make fsdax use compound
> folios, install compound folios in the PMD, remove all the special
> case refcounting for DAX from the pagetable code, then address the
> pgmap issue from the basis of working page->refcount, eg by putting a
> pgmap put in right after the op->page_free call.

As far as I can see, as long as the pgmap is managed at map and
free_zone_device_page() time, the large folio conversion can come
later.

> Can we continue to have the weird page->refcount behavior and still
> change the other things?

No at a minimum the pgmap vs page->refcount problem needs to be solved
first.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-06 17:23   ` Dan Williams
@ 2022-09-06 17:29     ` Jason Gunthorpe
  2022-09-06 18:37       ` Dan Williams
  0 siblings, 1 reply; 33+ messages in thread
From: Jason Gunthorpe @ 2022-09-06 17:29 UTC (permalink / raw)
  To: Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

On Tue, Sep 06, 2022 at 10:23:41AM -0700, Dan Williams wrote:

> > Can we continue to have the weird page->refcount behavior and still
> > change the other things?
> 
> No at a minimum the pgmap vs page->refcount problem needs to be solved
> first.

So who will do the put page after the PTE/PMD's are cleared out? In
the normal case the tlb flusher does it integrated into zap..

Can we safely have the put page in the fsdax side after the zap?

Jason


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-06 17:29     ` Jason Gunthorpe
@ 2022-09-06 18:37       ` Dan Williams
  2022-09-06 18:49         ` Jason Gunthorpe
  0 siblings, 1 reply; 33+ messages in thread
From: Dan Williams @ 2022-09-06 18:37 UTC (permalink / raw)
  To: Jason Gunthorpe, Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

Jason Gunthorpe wrote:
> On Tue, Sep 06, 2022 at 10:23:41AM -0700, Dan Williams wrote:
> 
> > > Can we continue to have the weird page->refcount behavior and still
> > > change the other things?
> > 
> > No at a minimum the pgmap vs page->refcount problem needs to be solved
> > first.
> 
> So who will do the put page after the PTE/PMD's are cleared out? In
> the normal case the tlb flusher does it integrated into zap..

AFAICS the zap manages the _mapcount not _refcount. Are you talking
about page_remove_rmap() or some other reference count drop?

> Can we safely have the put page in the fsdax side after the zap?

The _refcount is managed from the lifetime insert_page() to
truncate_inode_pages(), where for DAX those are managed from
dax_insert_entry() to dax_delete_mapping_entry().

I think that is sufficient modulo the gap you identified where I was not
accounting for additional _refcount increases outside of gup while the
DAX entry is live.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-06 18:37       ` Dan Williams
@ 2022-09-06 18:49         ` Jason Gunthorpe
  2022-09-06 19:41           ` Dan Williams
  0 siblings, 1 reply; 33+ messages in thread
From: Jason Gunthorpe @ 2022-09-06 18:49 UTC (permalink / raw)
  To: Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

On Tue, Sep 06, 2022 at 11:37:36AM -0700, Dan Williams wrote:
> Jason Gunthorpe wrote:
> > On Tue, Sep 06, 2022 at 10:23:41AM -0700, Dan Williams wrote:
> > 
> > > > Can we continue to have the weird page->refcount behavior and still
> > > > change the other things?
> > > 
> > > No at a minimum the pgmap vs page->refcount problem needs to be solved
> > > first.
> > 
> > So who will do the put page after the PTE/PMD's are cleared out? In
> > the normal case the tlb flusher does it integrated into zap..
> 
> AFAICS the zap manages the _mapcount not _refcount. Are you talking
> about page_remove_rmap() or some other reference count drop?

No, page refcount.

__tlb_remove_page() eventually causes a put_page() via
tlb_batch_pages_flush() calling free_pages_and_swap_cache()

Eg:

 *  MMU_GATHER_NO_GATHER
 *
 *  If the option is set the mmu_gather will not track individual pages for
 *  delayed page free anymore. A platform that enables the option needs to
 *  provide its own implementation of the __tlb_remove_page_size() function to
 *  free pages.

> > Can we safely have the put page in the fsdax side after the zap?
> 
> The _refcount is managed from the lifetime insert_page() to
> truncate_inode_pages(), where for DAX those are managed from
> dax_insert_entry() to dax_delete_mapping_entry().

As long as we all understand the page doesn't become re-allocatable
until the refcount reaches 0 and the free op is called it may be OK!

Jason



^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-06 18:49         ` Jason Gunthorpe
@ 2022-09-06 19:41           ` Dan Williams
  2022-09-07  0:54             ` Dan Williams
  0 siblings, 1 reply; 33+ messages in thread
From: Dan Williams @ 2022-09-06 19:41 UTC (permalink / raw)
  To: Jason Gunthorpe, Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

Jason Gunthorpe wrote:
> On Tue, Sep 06, 2022 at 11:37:36AM -0700, Dan Williams wrote:
> > Jason Gunthorpe wrote:
> > > On Tue, Sep 06, 2022 at 10:23:41AM -0700, Dan Williams wrote:
> > > 
> > > > > Can we continue to have the weird page->refcount behavior and still
> > > > > change the other things?
> > > > 
> > > > No at a minimum the pgmap vs page->refcount problem needs to be solved
> > > > first.
> > > 
> > > So who will do the put page after the PTE/PMD's are cleared out? In
> > > the normal case the tlb flusher does it integrated into zap..
> > 
> > AFAICS the zap manages the _mapcount not _refcount. Are you talking
> > about page_remove_rmap() or some other reference count drop?
> 
> No, page refcount.
> 
> __tlb_remove_page() eventually causes a put_page() via
> tlb_batch_pages_flush() calling free_pages_and_swap_cache()
> 
> Eg:
> 
>  *  MMU_GATHER_NO_GATHER
>  *
>  *  If the option is set the mmu_gather will not track individual pages for
>  *  delayed page free anymore. A platform that enables the option needs to
>  *  provide its own implementation of the __tlb_remove_page_size() function to
>  *  free pages.

Ok, yes, that is a vm_normal_page() mechanism which I was going to defer
since it is incremental to the _refcount handling fix; this set
maintains that DAX pages are still !vm_normal_page().

> > > Can we safely have the put page in the fsdax side after the zap?
> > 
> > The _refcount is managed from the lifetime insert_page() to
> > truncate_inode_pages(), where for DAX those are managed from
> > dax_insert_entry() to dax_delete_mapping_entry().
> 
> As long as we all understand the page doesn't become re-allocatable
> until the refcount reaches 0 and the free op is called it may be OK!

Yes, but this does mean that page_maybe_dma_pinned() is not sufficient
for deciding when the filesystem can safely reuse the page; it really
needs to wait for the reference count to drop to 0, similar to how it
waits for the page-idle condition today.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-06 19:41           ` Dan Williams
@ 2022-09-07  0:54             ` Dan Williams
  2022-09-07 12:58               ` Jason Gunthorpe
  0 siblings, 1 reply; 33+ messages in thread
From: Dan Williams @ 2022-09-07  0:54 UTC (permalink / raw)
  To: Dan Williams, Jason Gunthorpe
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

Dan Williams wrote:
> Jason Gunthorpe wrote:
> > On Tue, Sep 06, 2022 at 11:37:36AM -0700, Dan Williams wrote:
> > > Jason Gunthorpe wrote:
> > > > On Tue, Sep 06, 2022 at 10:23:41AM -0700, Dan Williams wrote:
> > > > 
> > > > > > Can we continue to have the weird page->refcount behavior and still
> > > > > > change the other things?
> > > > > 
> > > > > No at a minimum the pgmap vs page->refcount problem needs to be solved
> > > > > first.
> > > > 
> > > > So who will do the put page after the PTE/PMD's are cleared out? In
> > > > the normal case the tlb flusher does it integrated into zap..
> > > 
> > > AFAICS the zap manages the _mapcount not _refcount. Are you talking
> > > about page_remove_rmap() or some other reference count drop?
> > 
> > No, page refcount.
> > 
> > __tlb_remove_page() eventually causes a put_page() via
> > tlb_batch_pages_flush() calling free_pages_and_swap_cache()
> > 
> > Eg:
> > 
> >  *  MMU_GATHER_NO_GATHER
> >  *
> >  *  If the option is set the mmu_gather will not track individual pages for
> >  *  delayed page free anymore. A platform that enables the option needs to
> >  *  provide its own implementation of the __tlb_remove_page_size() function to
> >  *  free pages.
> 
> Ok, yes, that is a vm_normal_page() mechanism which I was going to defer
> since it is incremental to the _refcount handling fix; this set
> maintains that DAX pages are still !vm_normal_page().
> 
> > > > Can we safely have the put page in the fsdax side after the zap?
> > > 
> > > The _refcount is managed from the lifetime insert_page() to
> > > truncate_inode_pages(), where for DAX those are managed from
> > > dax_insert_entry() to dax_delete_mapping_entry().
> > 
> > As long as we all understand the page doesn't become re-allocatable
> > until the refcount reaches 0 and the free op is called it may be OK!
> 
> Yes, but this does mean that page_maybe_dma_pinned() is not sufficient
> for deciding when the filesystem can safely reuse the page; it really
> needs to wait for the reference count to drop to 0, similar to how it
> waits for the page-idle condition today.

This gets tricky with break_layouts(). For example xfs_break_layouts()
wants to ensure that the page is gup idle while holding the mmap lock.
If the page is not gup idle it needs to drop locks and retry. It is
possible the path to drop a page reference also needs to acquire
filesystem locs. Consider odd cases like DMA from one offset to another
in the same file. So waiting with filesystem locks held is off the
table, which also means that deferring the wait until
dax_delete_mapping_entry() time is also off the table.

That means that even after the conversion to make DAX page references
0-based it will still be the case that filesystem code will be waiting
for the 2 -> 1 transition to indicate "mapped DAX page has no more
external references".

Then dax_delete_mapping_entry() can validate that it is performing the 1
-> 0 transition since no more references should have been taken while
holding filesystem locks.
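
I.e. the check the filesystem waits on would look something like this
(dax_page_idle() is a hypothetical helper name here; the live mapping
entry itself holds the one base reference):

	/* "idle" means mapped but with no external references */
	static bool dax_page_idle(struct page *page)
	{
		return page_ref_count(page) == 1;
	}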


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-07  0:54             ` Dan Williams
@ 2022-09-07 12:58               ` Jason Gunthorpe
  2022-09-07 17:10                 ` Dan Williams
  0 siblings, 1 reply; 33+ messages in thread
From: Jason Gunthorpe @ 2022-09-07 12:58 UTC (permalink / raw)
  To: Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

On Tue, Sep 06, 2022 at 05:54:54PM -0700, Dan Williams wrote:
> Dan Williams wrote:
> > Jason Gunthorpe wrote:
> > > On Tue, Sep 06, 2022 at 11:37:36AM -0700, Dan Williams wrote:
> > > > Jason Gunthorpe wrote:
> > > > > On Tue, Sep 06, 2022 at 10:23:41AM -0700, Dan Williams wrote:
> > > > > 
> > > > > > > Can we continue to have the weird page->refcount behavior and still
> > > > > > > change the other things?
> > > > > > 
> > > > > > No at a minimum the pgmap vs page->refcount problem needs to be solved
> > > > > > first.
> > > > > 
> > > > > So who will do the put page after the PTE/PMD's are cleared out? In
> > > > > the normal case the tlb flusher does it integrated into zap..
> > > > 
> > > > AFAICS the zap manages the _mapcount not _refcount. Are you talking
> > > > about page_remove_rmap() or some other reference count drop?
> > > 
> > > No, page refcount.
> > > 
> > > __tlb_remove_page() eventually causes a put_page() via
> > > tlb_batch_pages_flush() calling free_pages_and_swap_cache()
> > > 
> > > Eg:
> > > 
> > >  *  MMU_GATHER_NO_GATHER
> > >  *
> > >  *  If the option is set the mmu_gather will not track individual pages for
> > >  *  delayed page free anymore. A platform that enables the option needs to
> > >  *  provide its own implementation of the __tlb_remove_page_size() function to
> > >  *  free pages.
> > 
> > Ok, yes, that is a vm_normal_page() mechanism which I was going to defer
> > since it is incremental to the _refcount handling fix; this set
> > maintains that DAX pages are still !vm_normal_page().
> > 
> > > > > Can we safely have the put page in the fsdax side after the zap?
> > > > 
> > > > The _refcount is managed from the lifetime insert_page() to
> > > > truncate_inode_pages(), where for DAX those are managed from
> > > > dax_insert_entry() to dax_delete_mapping_entry().
> > > 
> > > As long as we all understand the page doesn't become re-allocatable
> > > until the refcount reaches 0 and the free op is called it may be OK!
> > 
> > Yes, but this does mean that page_maybe_dma_pinned() is not sufficient for
> > when the filesystem can safely reuse the page, it really needs to wait
> > for the reference count to drop to 0 similar to how it waits for the
> > page-idle condition today.
> 
> This gets tricky with break_layouts(). For example xfs_break_layouts()
> wants to ensure that the page is gup idle while holding the mmap lock.
> If the page is not gup idle it needs to drop locks and retry. It is
> possible the path to drop a page reference also needs to acquire
> filesystem locks. Consider odd cases like DMA from one offset to another
> in the same file. So waiting with filesystem locks held is off the
> table, which also means that deferring the wait until
> dax_delete_mapping_entry() time is also off the table.
> 
> That means that even after the conversion to make DAX page references
> 0-based it will still be the case that filesystem code will be waiting
> for the 2 -> 1 transition to indicate "mapped DAX page has no more
> external references".

Why?

If you are doing the break layouts wouldn't you first zap the ptes,
which will bring the reference to 0 if there are no other users.

If the reference did not become 0 then you have to drop all locks,
sleep until it reaches zero, and retry?

How does adding 2->1 help anything?

Jason


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-07 12:58               ` Jason Gunthorpe
@ 2022-09-07 17:10                 ` Dan Williams
  2022-09-07 18:43                   ` Dan Williams
  0 siblings, 1 reply; 33+ messages in thread
From: Dan Williams @ 2022-09-07 17:10 UTC (permalink / raw)
  To: Jason Gunthorpe, Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

Jason Gunthorpe wrote:
> On Tue, Sep 06, 2022 at 05:54:54PM -0700, Dan Williams wrote:
> > Dan Williams wrote:
> > > Jason Gunthorpe wrote:
> > > > On Tue, Sep 06, 2022 at 11:37:36AM -0700, Dan Williams wrote:
> > > > > Jason Gunthorpe wrote:
> > > > > > On Tue, Sep 06, 2022 at 10:23:41AM -0700, Dan Williams wrote:
> > > > > > 
> > > > > > > > Can we continue to have the weird page->refcount behavior and still
> > > > > > > > change the other things?
> > > > > > > 
> > > > > > > No at a minimum the pgmap vs page->refcount problem needs to be solved
> > > > > > > first.
> > > > > > 
> > > > > > So who will do the put page after the PTE/PMD's are cleared out? In
> > > > > > the normal case the tlb flusher does it integrated into zap..
> > > > > 
> > > > > AFAICS the zap manages the _mapcount not _refcount. Are you talking
> > > > > about page_remove_rmap() or some other reference count drop?
> > > > 
> > > > No, page refcount.
> > > > 
> > > > __tlb_remove_page() eventually causes a put_page() via
> > > > tlb_batch_pages_flush() calling free_pages_and_swap_cache()
> > > > 
> > > > Eg:
> > > > 
> > > >  *  MMU_GATHER_NO_GATHER
> > > >  *
> > > >  *  If the option is set the mmu_gather will not track individual pages for
> > > >  *  delayed page free anymore. A platform that enables the option needs to
> > > >  *  provide its own implementation of the __tlb_remove_page_size() function to
> > > >  *  free pages.
> > > 
> > > Ok, yes, that is a vm_normal_page() mechanism which I was going to defer
> > > since it is incremental to the _refcount handling fix and maintain that
> > > DAX pages are still !vm_normal_page() in this set.
> > > 
> > > > > > Can we safely have the put page in the fsdax side after the zap?
> > > > > 
> > > > > The _refcount is managed from the lifetime insert_page() to
> > > > > truncate_inode_pages(), where for DAX those are managed from
> > > > > dax_insert_entry() to dax_delete_mapping_entry().
> > > > 
> > > > As long as we all understand the page doesn't become re-allocatable
> > > > until the refcount reaches 0 and the free op is called it may be OK!
> > > 
> > > Yes, but this does mean that page_maybe_dma_pinned() is not sufficient for
> > > when the filesystem can safely reuse the page, it really needs to wait
> > > for the reference count to drop to 0 similar to how it waits for the
> > > page-idle condition today.
> > 
> > This gets tricky with break_layouts(). For example xfs_break_layouts()
> > wants to ensure that the page is gup idle while holding the mmap lock.
> > If the page is not gup idle it needs to drop locks and retry. It is
> > possible the path to drop a page reference also needs to acquire
> > filesystem locks. Consider odd cases like DMA from one offset to another
> > in the same file. So waiting with filesystem locks held is off the
> > table, which also means that deferring the wait until
> > dax_delete_mapping_entry() time is also off the table.
> > 
> > That means that even after the conversion to make DAX page references
> > 0-based it will still be the case that filesystem code will be waiting
> > for the 2 -> 1 transition to indicate "mapped DAX page has no more
> > external references".
> 
> Why?
> 
> If you are doing the break layouts wouldn't you first zap the ptes,
> which will bring the reference to 0 if there are no other users.

The internals of break layouts do zap the ptes, but they do not remove
the mapping entries. So, I was limiting my thinking to that constraint,
but now that I push on it, the need to keep the entry around until the
final truncate_setsize() event seems soft. It should be ok to upgrade
break layouts to delete mapping entries, wait for _refcount to drop to
zero, and then re-check that nothing installed a new entry after
re-acquiring the filesystem locks.
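
Roughly this shape (untested sketch; fs_unlock_wait_idle_relock() is a
hypothetical stand-in for the drop-locks / wait-for-final-put /
re-acquire dance, while dax_layout_busy_page() and
dax_delete_mapping_entry() are the existing helpers):

#include <linux/dax.h>
#include <linux/fs.h>
#include <linux/pagemap.h>

static int break_layouts_sketch(struct inode *inode)
{
	struct address_space *mapping = inode->i_mapping;
	struct page *page;
	int error;

	for (;;) {
		page = dax_layout_busy_page(mapping);
		if (!page)
			return 0; /* no pinned pages, safe to proceed */

		/*
		 * Tear down the entry, dropping the map-time reference.
		 * dax_delete_mapping_entry() returns nonzero when it
		 * actually removed an entry.
		 */
		if (!dax_delete_mapping_entry(mapping, page->index))
			return -EBUSY;

		/*
		 * Hypothetical helper: drop fs locks, wait for the
		 * final put (_refcount == 0), re-acquire fs locks, then
		 * loop to re-check for any newly installed entry.
		 */
		error = fs_unlock_wait_idle_relock(inode, page);
		if (error)
			return error;
	}
}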


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-07 17:10                 ` Dan Williams
@ 2022-09-07 18:43                   ` Dan Williams
  2022-09-07 19:30                     ` Jason Gunthorpe
  0 siblings, 1 reply; 33+ messages in thread
From: Dan Williams @ 2022-09-07 18:43 UTC (permalink / raw)
  To: Dan Williams, Jason Gunthorpe
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

Dan Williams wrote:
> Jason Gunthorpe wrote:
> > On Tue, Sep 06, 2022 at 05:54:54PM -0700, Dan Williams wrote:
> > > Dan Williams wrote:
> > > > Jason Gunthorpe wrote:
> > > > > On Tue, Sep 06, 2022 at 11:37:36AM -0700, Dan Williams wrote:
> > > > > > Jason Gunthorpe wrote:
> > > > > > > On Tue, Sep 06, 2022 at 10:23:41AM -0700, Dan Williams wrote:
> > > > > > > 
> > > > > > > > > Can we continue to have the weird page->refcount behavior and still
> > > > > > > > > change the other things?
> > > > > > > > 
> > > > > > > > No at a minimum the pgmap vs page->refcount problem needs to be solved
> > > > > > > > first.
> > > > > > > 
> > > > > > > So who will do the put page after the PTE/PMD's are cleared out? In
> > > > > > > the normal case the tlb flusher does it integrated into zap..
> > > > > > 
> > > > > > AFAICS the zap manages the _mapcount not _refcount. Are you talking
> > > > > > about page_remove_rmap() or some other reference count drop?
> > > > > 
> > > > > No, page refcount.
> > > > > 
> > > > > __tlb_remove_page() eventually causes a put_page() via
> > > > > tlb_batch_pages_flush() calling free_pages_and_swap_cache()
> > > > > 
> > > > > Eg:
> > > > > 
> > > > >  *  MMU_GATHER_NO_GATHER
> > > > >  *
> > > > >  *  If the option is set the mmu_gather will not track individual pages for
> > > > >  *  delayed page free anymore. A platform that enables the option needs to
> > > > >  *  provide its own implementation of the __tlb_remove_page_size() function to
> > > > >  *  free pages.
> > > > 
> > > > Ok, yes, that is a vm_normal_page() mechanism which I was going to defer
> > > > since it is incremental to the _refcount handling fix and maintain that
> > > > DAX pages are still !vm_normal_page() in this set.
> > > > 
> > > > > > > Can we safely have the put page in the fsdax side after the zap?
> > > > > > 
> > > > > > The _refcount is managed from the lifetime insert_page() to
> > > > > > truncate_inode_pages(), where for DAX those are managed from
> > > > > > dax_insert_entry() to dax_delete_mapping_entry().
> > > > > 
> > > > > As long as we all understand the page doesn't become re-allocatable
> > > > > until the refcount reaches 0 and the free op is called it may be OK!
> > > > 
> > > > Yes, but this does mean that page_maybe_dma_pinned() is not sufficient for
> > > > when the filesystem can safely reuse the page, it really needs to wait
> > > > for the reference count to drop to 0 similar to how it waits for the
> > > > page-idle condition today.
> > > 
> > > This gets tricky with break_layouts(). For example xfs_break_layouts()
> > > wants to ensure that the page is gup idle while holding the mmap lock.
> > > If the page is not gup idle it needs to drop locks and retry. It is
> > > possible the path to drop a page reference also needs to acquire
> > > filesystem locks. Consider odd cases like DMA from one offset to another
> > > in the same file. So waiting with filesystem locks held is off the
> > > table, which also means that deferring the wait until
> > > dax_delete_mapping_entry() time is also off the table.
> > > 
> > > That means that even after the conversion to make DAX page references
> > > 0-based it will still be the case that filesystem code will be waiting
> > > for the 2 -> 1 transition to indicate "mapped DAX page has no more
> > > external references".
> > 
> > Why?
> > 
> > If you are doing the break layouts wouldn't you first zap the ptes,
> > which will bring the reference to 0 if there are no other users.
> 
> The internals of break layouts do zap the ptes, but they do not remove
> the mapping entries. So, I was limiting my thinking to that constraint,
> but now that I push on it, the need to keep the entry around until the
> final truncate_setsize() event seems soft. It should be ok to upgrade
> break layouts to delete mapping entries, wait for _refcount to drop to
> zero, and then re-check that nothing installed a new entry after
> re-acquiring the filesystem locks.

It is still the case that while waiting for the page to go idle it is
associated with its given file / inode. It is possible that
memory-failure, or some other event that requires looking up the page's
association, fires in that time span.

If that happens the page's association to the file needs to be kept
intact. So it is still the case that while waiting for the final put the
page count needs to remain elevated to maintain the page's association
to the file until break layouts can be sure it is doing the final put
under filesystem locks. I.e. break layouts is "make it safe to do the
truncate", not "do the truncate up front".

The truncate will still move from being done while the _refcount is 1 to
being done while the _refcount is 0, but the condition for break layouts
to signal it is safe to proceed is when it can observe _refcount == 0,
or the 1 -> 0 transition.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-07 18:43                   ` Dan Williams
@ 2022-09-07 19:30                     ` Jason Gunthorpe
  2022-09-07 20:45                       ` Dan Williams
  0 siblings, 1 reply; 33+ messages in thread
From: Jason Gunthorpe @ 2022-09-07 19:30 UTC (permalink / raw)
  To: Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

On Wed, Sep 07, 2022 at 11:43:52AM -0700, Dan Williams wrote:

> It is still the case that while waiting for the page to go idle it is
> associated with its given file / inode. It is possible that
> memory-failure, or some other event that requires looking up the page's
> association, fires in that time span.

Can't the page->mapping remain set to the address space even if it is
not installed into any PTEs? Zap should only remove the PTEs, not
clear the page->mapping.

Or, said another way, page->mapping should only change while the page
refcount is 0 and thus the filesystem is completely in control of when
it changes, and can do so under its own locks

If the refcount is 0 then memory failure should not happen - it would
require someone accessed the page without referencing it. The only
thing that could do that is the kernel, and if the kernel is
referencing a 0 refcount page (eg it got converted to meta-data or
something), it is probably not linked to an address space anymore
anyhow?

> under filesystem locks. I.e. break layouts is "make it safe to do the
> truncate", not "do the truncate up front".

The truncate action reorganizes the metadata in the filesystem; the
lead-up to it is to fence off all access to the DAX pages from all
sources, so it does seem to me that zeroing the refcount in advance is
exactly the right thing to do.

It returns the page back to the exclusive control of the filesystem,
and nothing else does this.

Jason


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-07 19:30                     ` Jason Gunthorpe
@ 2022-09-07 20:45                       ` Dan Williams
  2022-09-08 18:49                         ` Jason Gunthorpe
  0 siblings, 1 reply; 33+ messages in thread
From: Dan Williams @ 2022-09-07 20:45 UTC (permalink / raw)
  To: Jason Gunthorpe, Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

Jason Gunthorpe wrote:
> On Wed, Sep 07, 2022 at 11:43:52AM -0700, Dan Williams wrote:
> 
> > It is still the case that while waiting for the page to go idle it is
> > associated with its given file / inode. It is possible that
> > memory-failure, or some other event that requires looking up the page's
> > association, fires in that time span.
> 
> Can't the page->mapping remain set to the address space even if it is
> not installed into any PTEs? Zap should only remove the PTEs, not
> clear the page->mapping.
> 
> Or, said another way, page->mapping should only change while the page
> refcount is 0 and thus the filesystem is completely in control of when
> it changes, and can do so under its own locks
> 
> If the refcount is 0 then memory failure should not happen - it would
> require someone accessed the page without referencing it. The only
> thing that could do that is the kernel, and if the kernel is
> referencing a 0 refcount page (eg it got converted to meta-data or
> something), it is probably not linked to an address space anymore
> anyhow?

First, thank you for helping me think through this, I am going to need
this thread in 6 months when I revisit this code.

I agree with the observation that page->mapping should only change while
the reference count is zero, but my problem is catching the 1 -> 0
transition in its natural location in free_zone_device_page(). That,
plus the fact that the entry needs to be maintained until the page is
actually disconnected from the file, means that break layouts holds off
truncate until it can observe the 0-refcount condition while holding
filesystem locks; the final truncate then deletes the mapping entry,
which is already at 0.

I.e. break layouts waits until _refcount reaches 0, but entry removal
still needs one more dax_delete_mapping_entry() event to transition to
the _refcount == 0 plus no address_space entry condition. Effectively
simulating _mapcount with address_space tracking until DAX pages can
become vm_normal_page().


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-07 20:45                       ` Dan Williams
@ 2022-09-08 18:49                         ` Jason Gunthorpe
  2022-09-08 19:27                           ` Dan Williams
  0 siblings, 1 reply; 33+ messages in thread
From: Jason Gunthorpe @ 2022-09-08 18:49 UTC (permalink / raw)
  To: Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

On Wed, Sep 07, 2022 at 01:45:35PM -0700, Dan Williams wrote:
> Jason Gunthorpe wrote:
> > On Wed, Sep 07, 2022 at 11:43:52AM -0700, Dan Williams wrote:
> > 
> > > It is still the case that while waiting for the page to go idle it is
> > > associated with its given file / inode. It is possible that
> > > memory-failure, or some other event that requires looking up the page's
> > > association, fires in that time span.
> > 
> > Can't the page->mapping remain set to the address space even if it is
> > not installed into any PTEs? Zap should only remove the PTEs, not
> > clear the page->mapping.
> > 
> > Or, said another way, page->mapping should only change while the page
> > refcount is 0 and thus the filesystem is completely in control of when
> > it changes, and can do so under its own locks
> > 
> > If the refcount is 0 then memory failure should not happen - it would
> > require someone accessed the page without referencing it. The only
> > thing that could do that is the kernel, and if the kernel is
> > referencing a 0 refcount page (eg it got converted to meta-data or
> > something), it is probably not linked to an address space anymore
> > anyhow?
> 
> First, thank you for helping me think through this, I am going to need
> this thread in 6 months when I revisit this code.
> 
> I agree with the observation that page->mapping should only change while
> the reference count is zero, but my problem is catching the 1 -> 0
> transition in its natural location in free_zone_device_page(). That,
> plus the fact that the entry needs to be maintained until the page is
> actually disconnected from the file, means that break layouts holds off
> truncate until it can observe the 0-refcount condition while holding
> filesystem locks; the final truncate then deletes the mapping entry,
> which is already at 0.

Okay, that makes sense to me.. but what does "entry needs to be
maintained" mean?

> I.e. break layouts waits until _refcount reaches 0, but entry removal
> still needs one more dax_delete_mapping_entry() event to transition to
> the _refcount == 0 plus no address_space entry condition. Effectively
> simulating _mapcount with address_space tracking until DAX pages can
> become vm_normal_page().

This I don't follow.. Who will do the one more
dax_delete_mapping_entry()?

I'm not sure what it has to do with normal_page?

Jason


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-08 18:49                         ` Jason Gunthorpe
@ 2022-09-08 19:27                           ` Dan Williams
  2022-09-09 11:53                             ` Jason Gunthorpe
  2022-09-09 18:11                             ` Matthew Wilcox
  0 siblings, 2 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-08 19:27 UTC (permalink / raw)
  To: Jason Gunthorpe, Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

Jason Gunthorpe wrote:
> On Wed, Sep 07, 2022 at 01:45:35PM -0700, Dan Williams wrote:
> > Jason Gunthorpe wrote:
> > > On Wed, Sep 07, 2022 at 11:43:52AM -0700, Dan Williams wrote:
> > > 
> > > > It is still the case that while waiting for the page to go idle it is
> > > > associated with its given file / inode. It is possible that
> > > > memory-failure, or some other event that requires looking up the page's
> > > > association, fires in that time span.
> > > 
> > > Can't the page->mapping remain set to the address space even if it is
> > > not installed into any PTEs? Zap should only remove the PTEs, not
> > > clear the page->mapping.
> > > 
> > > Or, said another way, page->mapping should only change while the page
> > > refcount is 0 and thus the filesystem is completely in control of when
> > > it changes, and can do so under its own locks
> > > 
> > > If the refcount is 0 then memory failure should not happen - it would
> > > require someone accessed the page without referencing it. The only
> > > thing that could do that is the kernel, and if the kernel is
> > > referencing a 0 refcount page (eg it got converted to meta-data or
> > > something), it is probably not linked to an address space anymore
> > > anyhow?
> > 
> > First, thank you for helping me think through this, I am going to need
> > this thread in 6 months when I revisit this code.
> > 
> > I agree with the observation that page->mapping should only change while
> > the reference count is zero, but my problem is catching the 1 -> 0
> > transition in its natural location in free_zone_device_page(). That,
> > plus the fact that the entry needs to be maintained until the page is
> > actually disconnected from the file, means that break layouts holds off
> > truncate until it can observe the 0-refcount condition while holding
> > filesystem locks; the final truncate then deletes the mapping entry,
> > which is already at 0.
> 
> Okay, that makes sense to me.. but what does "entry needs to be
> maintained" mean?

I am talking about keeping an entry in the address_space until that page
is truncated out of the file, and I think the "zapped but not truncated"
state needs a new Xarray-value flag (DAX_ZAP) to track it. So the
life-cycle of a DAX page that is truncated becomes:

0/ devm_memremap_pages() to instantiate the page with _refcount==0

1/ dax_insert_entry() and vmf_insert_mixed() add an entry to the
address_space and install a pte for the page. _refcount==1.

2/ gup elevates _refcount to 2

3/ truncate or punch hole attempts to free the DAX page

4/ break layouts zaps the ptes, drops the reference from 1/, and waits
   for _refcount to drop to zero while holding fs locks.

5/ at the 1 -> 0 transition the address_space entry is tagged with a new
   flag DAX_ZAP to track that this page is unreferenced, but still
   associated with the mapping until the final truncate. I.e. the DAX_ZAP
   flag lets the fsdax core track when it has already dropped a page
   reference, but still has use for things like memory-failure to
   opportunistically use page->mapping on a 0-reference page.

I think this could be done without the DAX_ZAP flag, but I want to have
some safety to catch programming errors where the truncate path finds
entries already at a zero reference count without having first been
zapped.
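
A minimal sketch of what that could look like, following the style of
the existing value-entry flags in fs/dax.c (the bit position and helper
names are hypothetical, and DAX_SHIFT would need to grow to make room
for the new bit):

#include <linux/xarray.h>

#define DAX_ZAP		(1UL << 4)	/* zapped but not yet truncated */

/*
 * Tag an entry at the 1 -> 0 _refcount transition: the ptes are gone
 * and the map-time reference is dropped, but the page stays associated
 * with the mapping until the final truncate. Called under the xa_lock.
 */
static void dax_zap_entry(struct xa_state *xas, void *entry)
{
	unsigned long v = xa_to_value(entry);

	xas_store(xas, xa_mk_value(v | DAX_ZAP));
}

/*
 * Truncate-time sanity check: a zero-refcount entry must have been
 * zapped first, otherwise something dropped a reference it did not own.
 */
static bool dax_is_zapped(void *entry)
{
	return xa_to_value(entry) & DAX_ZAP;
}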

> > I.e. break layouts waits until _refcount reaches 0, but entry removal
> > still needs one more dax_delete_mapping_entry() event to transitition to
> > the _refcount == 0 plus no address_space entry condition. Effectively
> > simulating _mapcount with address_space tracking until DAX pages can
> > become vm_normal_page().
> 
> This I don't follow.. Who will do the one more
> dax_delete_mapping_entry()?

The core of dax_delete_mapping_entry() is __dax_invalidate_entry(). I am
thinking something like __dax_invalidate_entry(mapping, index, ZAP) is
called for break layouts and __dax_invalidate_entry(mapping, index,
TRUNC) is called for finally disconnecting that page from its mapping.
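
I.e. something like this (names are illustrative; today
__dax_invalidate_entry() takes a bool @trunc):

enum dax_wipe {
	DAX_WIPE_ZAP,	/* break layouts: page idle, entry retained */
	DAX_WIPE_TRUNC,	/* truncate: entry removed from the mapping */
};

static int __dax_invalidate_entry(struct address_space *mapping,
				  pgoff_t index, enum dax_wipe wipe);

/* break layouts path: mark the entry zapped once the page is idle */
static int dax_zap_mapping_entry(struct address_space *mapping,
				 pgoff_t index)
{
	return __dax_invalidate_entry(mapping, index, DAX_WIPE_ZAP);
}

/* final truncate path: disconnect the (already zapped) entry */
int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index)
{
	return __dax_invalidate_entry(mapping, index, DAX_WIPE_TRUNC);
}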

> 
> I'm not sure what it has to do with normal_page?
> 

This thread is mainly about DAX slowly reinventing _mapcount that gets
managed in all the right places for a normal_page. Longer term I think
we either need to get back to the page-less DAX experiment and find a
way to unwind some of this page usage creep, or cut over to finally
making DAX-pages be normal pages and delete most of the special case
handling in fs/dax.c. I am open to discussing the former, but I think
the latter is more likely.


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-08 19:27                           ` Dan Williams
@ 2022-09-09 11:53                             ` Jason Gunthorpe
  2022-09-09 17:52                               ` Dan Williams
  2022-09-09 18:11                             ` Matthew Wilcox
  1 sibling, 1 reply; 33+ messages in thread
From: Jason Gunthorpe @ 2022-09-09 11:53 UTC (permalink / raw)
  To: Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

On Thu, Sep 08, 2022 at 12:27:06PM -0700, Dan Williams wrote:
>    flag lets the fsdax core track when it has already dropped a page
>    reference, but still has use for things like memory-failure to
>    opportunistically use page->mapping on a 0-reference page.

This is not straightforward; as discussed before, the page->mapping is
allowed to change while the refcount is zero, so there is no generic
way to safely obtain a pointer to the address space from a 0 reference
page.

You'd have to pass the 0 reference page into a new pgmap operation
which could obtain an appropriate internal lock to read page->mapping.
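
Something along these lines, as an entirely hypothetical sketch (the
existing ops are elided; no such op exists today):

struct dev_pagemap_ops {
	/* ... existing ops (page_free, migrate_to_ram, ...) ... */

	/*
	 * Hypothetical: resolve page->mapping for a 0-refcount page
	 * under whatever lock the pgmap owner uses to serialize
	 * mapping changes; returns NULL if the page is no longer
	 * associated with an address_space.
	 */
	struct address_space *(*get_mapping)(struct dev_pagemap *pgmap,
					     struct page *page);
};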

> > I'm not sure what it has to do with normal_page?
> 
> This thread is mainly about DAX slowly reinventing _mapcount that gets
> managed in all the right places for a normal_page. Longer term I think
> we either need to get back to the page-less DAX experiment and find a
> way to unwind some of this page usage creep, or cut over to finally
> making DAX-pages be normal pages and delete most of the special case
> handling in fs/dax.c. I am open to discussing the former, but I think
> the latter is more likely.

I think the latter is inevitable at this point..

Jason


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-09 11:53                             ` Jason Gunthorpe
@ 2022-09-09 17:52                               ` Dan Williams
  0 siblings, 0 replies; 33+ messages in thread
From: Dan Williams @ 2022-09-09 17:52 UTC (permalink / raw)
  To: Jason Gunthorpe, Dan Williams
  Cc: akpm, Jan Kara, Christoph Hellwig, Darrick J. Wong, John Hubbard,
	Matthew Wilcox, linux-mm, nvdimm, linux-fsdevel

Jason Gunthorpe wrote:
> On Thu, Sep 08, 2022 at 12:27:06PM -0700, Dan Williams wrote:
> >    flag lets the fsdax core track when it has already dropped a page
> >    reference, but still has use for things like memory-failure to
> >    opportunistically use page->mapping on a 0-reference page.
> 
> This is not straightforward; as discussed before, the page->mapping is
> allowed to change while the refcount is zero, so there is no generic
> way to safely obtain a pointer to the address space from a 0 reference
> page.

Agree.

> 
> You'd have to pass the 0 reference page into a new pgmap operation
> which could obtain an appropriate internal lock to read page->mapping.

Correct, that's what the memory-failure code does via dax_lock_page().
It pins the pgmap, freezes page->mapping associations via
rcu_read_lock(), speculatively reads page->mapping, takes the Xarray
lock, revalidates that page->mapping is the one we read speculatively, and
then finally locks the entry in place until the memory-failure handling
completes.
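
Roughly this flow, heavily simplified from the real dax_lock_page() in
fs/dax.c (the pgmap pin and the wait/retry for already-locked entries
are omitted; dax_lock_entry() is an internal fs/dax.c helper):

#include <linux/dax.h>
#include <linux/pagemap.h>
#include <linux/xarray.h>

static dax_entry_t dax_lock_page_sketch(struct page *page)
{
	XA_STATE(xas, NULL, 0);
	void *entry = NULL;

	/* rcu_read_lock() keeps the address_space from being freed
	 * out from under us while we peek at page->mapping */
	rcu_read_lock();
	for (;;) {
		struct address_space *mapping = READ_ONCE(page->mapping);

		if (!mapping || !dax_mapping(mapping))
			break;

		/* take the Xarray lock, then revalidate the mapping */
		xas.xa = &mapping->i_pages;
		xas_lock_irq(&xas);
		if (mapping != page->mapping) {
			xas_unlock_irq(&xas);
			continue;
		}

		/* lock the entry in place until memory-failure is done */
		xas_set(&xas, page->index);
		entry = xas_load(&xas);
		dax_lock_entry(&xas, entry);
		xas_unlock_irq(&xas);
		break;
	}
	rcu_read_unlock();
	return (dax_entry_t)entry;
}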


^ permalink raw reply	[flat|nested] 33+ messages in thread

* Re: [PATCH 00/13] Fix the DAX-gup mistake
  2022-09-08 19:27                           ` Dan Williams
  2022-09-09 11:53                             ` Jason Gunthorpe
@ 2022-09-09 18:11                             ` Matthew Wilcox
  1 sibling, 0 replies; 33+ messages in thread
From: Matthew Wilcox @ 2022-09-09 18:11 UTC (permalink / raw)
  To: Dan Williams
  Cc: Jason Gunthorpe, akpm, Jan Kara, Christoph Hellwig,
	Darrick J. Wong, John Hubbard, linux-mm, nvdimm, linux-fsdevel

On Thu, Sep 08, 2022 at 12:27:06PM -0700, Dan Williams wrote:
> This thread is mainly about DAX slowly reinventing _mapcount that gets
> managed in all the right places for a normal_page. Longer term I think
> we either need to get back to the page-less DAX experiment and find a
> way to unwind some of this page usage creep, or cut over to finally
> making DAX-pages be normal pages and delete most of the special case
> handling in fs/dax.c. I am open to discussing the former, but I think
> the latter is more likely.

I'm still very much in favour of the former.  I've started on replacing
scatterlist with phyr, but haven't got very far yet.



^ permalink raw reply	[flat|nested] 33+ messages in thread

end of thread, other threads:[~2022-09-09 18:11 UTC | newest]

Thread overview: 33+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-04  2:16 [PATCH 00/13] Fix the DAX-gup mistake Dan Williams
2022-09-04  2:16 ` [PATCH 01/13] fsdax: Rename "busy page" to "pinned page" Dan Williams
2022-09-04  2:16 ` [PATCH 02/13] fsdax: Use page_maybe_dma_pinned() for DAX vs DMA collisions Dan Williams
2022-09-06 12:07   ` Jason Gunthorpe
2022-09-04  2:16 ` [PATCH 03/13] fsdax: Delete put_devmap_managed_page_refs() Dan Williams
2022-09-04  2:16 ` [PATCH 04/13] fsdax: Update dax_insert_entry() calling convention to return an error Dan Williams
2022-09-04  2:16 ` [PATCH 05/13] fsdax: Cleanup dax_associate_entry() Dan Williams
2022-09-04  2:16 ` [PATCH 06/13] fsdax: Rework dax_insert_entry() calling convention Dan Williams
2022-09-04  2:16 ` [PATCH 07/13] fsdax: Manage pgmap references at entry insertion and deletion Dan Williams
2022-09-06 12:30   ` Jason Gunthorpe
2022-09-04  2:16 ` [PATCH 08/13] devdax: Minor warning fixups Dan Williams
2022-09-04  2:16 ` [PATCH 09/13] devdax: Move address_space helpers to the DAX core Dan Williams
2022-09-04  2:16 ` [PATCH 10/13] dax: Prep dax_{associate, disassociate}_entry() for compound pages Dan Williams
2022-09-04  2:17 ` [PATCH 11/13] devdax: add PUD support to the DAX mapping infrastructure Dan Williams
2022-09-04  2:17 ` [PATCH 12/13] devdax: Use dax_insert_entry() + dax_delete_mapping_entry() Dan Williams
2022-09-04  2:17 ` [PATCH 13/13] mm/gup: Drop DAX pgmap accounting Dan Williams
2022-09-06 13:05 ` [PATCH 00/13] Fix the DAX-gup mistake Jason Gunthorpe
2022-09-06 17:23   ` Dan Williams
2022-09-06 17:29     ` Jason Gunthorpe
2022-09-06 18:37       ` Dan Williams
2022-09-06 18:49         ` Jason Gunthorpe
2022-09-06 19:41           ` Dan Williams
2022-09-07  0:54             ` Dan Williams
2022-09-07 12:58               ` Jason Gunthorpe
2022-09-07 17:10                 ` Dan Williams
2022-09-07 18:43                   ` Dan Williams
2022-09-07 19:30                     ` Jason Gunthorpe
2022-09-07 20:45                       ` Dan Williams
2022-09-08 18:49                         ` Jason Gunthorpe
2022-09-08 19:27                           ` Dan Williams
2022-09-09 11:53                             ` Jason Gunthorpe
2022-09-09 17:52                               ` Dan Williams
2022-09-09 18:11                             ` Matthew Wilcox

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).