* [PATCH v4 0/5] DAX common 4k zero page
@ 2017-07-21 22:39 Ross Zwisler
  2017-07-21 22:39 ` [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed() Ross Zwisler
                   ` (4 more replies)
  0 siblings, 5 replies; 16+ messages in thread
From: Ross Zwisler @ 2017-07-21 22:39 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel
  Cc: Ross Zwisler, Darrick J. Wong, Theodore Ts'o, Alexander Viro,
	Andreas Dilger, Christoph Hellwig, Dan Williams, Dave Chinner,
	David Airlie, Ingo Molnar, Inki Dae, Jan Kara, Jonathan Corbet,
	Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim, Kyungmin Park,
	Matthew Wilcox, Patrik Jakobsson, Rob Clark, Seung-Woo Kim,
	Steven Rostedt, Tomi Valkeinen, dri-devel, freedreno,
	linux-arm-kernel, linux-arm-msm, linux-doc, linux-ext4,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-samsung-soc,
	linux-xfs

Changes since v3:
 - Rebased onto the current linux/master which is based on v4.13-rc1.

 - Instead of adding vm_insert_mkwrite_mixed() and duplicating code from
   vm_insert_mixed(), just add a 'mkwrite' parameter to vm_insert_mixed()
   and update all call sites.  (Vivek)

 - Added a sanity check to the mkwrite case of insert_pfn() to be sure the
   pfn for the pte we are about to make writable matches the pfn for our
   fault. (Jan)

 - Fixed up some changelog wording for clarity. (Jan)

---

When servicing mmap() reads from file holes the current DAX code allocates
a page cache page of all zeroes and places the struct page pointer in the
mapping->page_tree radix tree.  This has three major drawbacks:

1) It consumes memory unnecessarily.  For every 4k page that is read via a
DAX mmap() over a hole, we allocate a new page cache page.  This means that
if you read 1GiB worth of pages, you end up using 1GiB of zeroed memory.

2) It is slower than using a common zero page because each page fault has
more work to do.  Instead of just inserting a common zero page we have to
allocate a page cache page, zero it, and then insert it.

3) The fact that we had to check for both DAX exceptional entries and for
page cache pages in the radix tree made the DAX code more complex.

This series solves these issues by following the lead of the DAX PMD code
and using a common 4k zero page instead.  This reduces memory usage and
decreases latencies for some workloads, and it simplifies the DAX code,
removing over 100 lines in total.
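
The core of the new read fault path is small.  Here is a rough sketch of
the idea, with names and error handling simplified from the dax_load_hole()
rework in patch 3 (illustrative only, not the exact code):

  /*
   * Read fault over a hole: record a zero-page entry in the radix tree
   * and map the shared zero page read-only, instead of allocating and
   * zeroing a new page cache page.
   */
  static int dax_load_hole_sketch(struct vm_fault *vmf)
  {
  	struct page *zero_page = ZERO_PAGE(0);

  	/* dax_insert_mapping_entry(..., RADIX_DAX_ZERO_PAGE) goes here */
  	vm_insert_mixed(vmf->vma, vmf->address,
  			page_to_pfn_t(zero_page), false);
  	return VM_FAULT_NOPAGE;
  }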

This series has passed my targeted testing and a full xfstests run on both
XFS and ext4.

Ross Zwisler (5):
  mm: add mkwrite param to vm_insert_mixed()
  dax: relocate some dax functions
  dax: use common 4k zero page for dax mmap reads
  dax: remove DAX code from page_cache_tree_insert()
  dax: move all DAX radix tree defs to fs/dax.c

 Documentation/filesystems/dax.txt       |   5 +-
 drivers/dax/device.c                    |   2 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c |   3 +-
 drivers/gpu/drm/gma500/framebuffer.c    |   2 +-
 drivers/gpu/drm/msm/msm_gem.c           |   3 +-
 drivers/gpu/drm/omapdrm/omap_gem.c      |   6 +-
 drivers/gpu/drm/ttm/ttm_bo_vm.c         |   2 +-
 fs/dax.c                                | 342 +++++++++++++-------------------
 fs/ext2/file.c                          |  25 +--
 fs/ext4/file.c                          |  32 +--
 fs/xfs/xfs_file.c                       |   2 +-
 include/linux/dax.h                     |  45 -----
 include/linux/mm.h                      |   2 +-
 include/trace/events/fs_dax.h           |   2 -
 mm/filemap.c                            |  13 +-
 mm/memory.c                             |  27 ++-
 16 files changed, 181 insertions(+), 332 deletions(-)

-- 
2.9.4


* [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed()
  2017-07-21 22:39 [PATCH v4 0/5] DAX common 4k zero page Ross Zwisler
@ 2017-07-21 22:39 ` Ross Zwisler
  2017-07-22 16:21   ` Dan Williams
  2017-07-24 11:25   ` Jan Kara
  2017-07-21 22:39 ` [PATCH v4 2/5] dax: relocate some dax functions Ross Zwisler
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 16+ messages in thread
From: Ross Zwisler @ 2017-07-21 22:39 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel
  Cc: Ross Zwisler, Darrick J. Wong, Theodore Ts'o, Alexander Viro,
	Andreas Dilger, Christoph Hellwig, Dan Williams, Dave Chinner,
	David Airlie, Ingo Molnar, Inki Dae, Jan Kara, Jonathan Corbet,
	Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim, Kyungmin Park,
	Matthew Wilcox, Patrik Jakobsson, Rob Clark, Seung-Woo Kim,
	Steven Rostedt, Tomi Valkeinen, dri-devel, freedreno,
	linux-arm-kernel, linux-arm-msm, linux-doc, linux-ext4,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-samsung-soc,
	linux-xfs

To be able to use the common 4k zero page in DAX we need to have our PTE
fault path look more like our PMD fault path where a PTE entry can be
marked as dirty and writeable as it is first inserted, rather than waiting
for a follow-up dax_pfn_mkwrite() => finish_mkwrite_fault() call.

Right now we can rely on having a dax_pfn_mkwrite() call because we can
distinguish between these two cases in do_wp_page():

	case 1: 4k zero page => writable DAX storage
	case 2: read-only DAX storage => writeable DAX storage

This distinction is made via vm_normal_page().  vm_normal_page() returns
NULL for the common 4k zero page, though, just as it does for DAX ptes.
Instead of special casing the DAX + 4k zero page case, we will simplify our
DAX PTE page fault sequence so that it matches our DAX PMD sequence, and
get rid of the dax_pfn_mkwrite() helper.  We will instead use
dax_iomap_fault() to handle write-protection faults.

This means that insert_pfn() needs to follow the lead of insert_pfn_pmd()
and allow us to pass in a 'mkwrite' flag.  If 'mkwrite' is set insert_pfn()
will do the work that was previously done by wp_page_reuse() as part of the
dax_pfn_mkwrite() call path.
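
With the flag in place a DAX write fault can install the PTE dirty and
writeable in one step.  A sketch of the intended call site, mirroring the
dax_insert_mapping() change later in this series:

  /* Make the PTE writeable immediately when servicing a write fault. */
  ret = vm_insert_mixed(vma, vaddr, pfn, vmf->flags & FAULT_FLAG_WRITE);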

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 drivers/dax/device.c                    |  2 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c |  3 ++-
 drivers/gpu/drm/gma500/framebuffer.c    |  2 +-
 drivers/gpu/drm/msm/msm_gem.c           |  3 ++-
 drivers/gpu/drm/omapdrm/omap_gem.c      |  6 ++++--
 drivers/gpu/drm/ttm/ttm_bo_vm.c         |  2 +-
 fs/dax.c                                |  2 +-
 include/linux/mm.h                      |  2 +-
 mm/memory.c                             | 27 +++++++++++++++++++++------
 9 files changed, 34 insertions(+), 15 deletions(-)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index e9f3b3e..3973521 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -273,7 +273,7 @@ static int __dev_dax_pte_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
 
 	pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
 
-	rc = vm_insert_mixed(vmf->vma, vmf->address, pfn);
+	rc = vm_insert_mixed(vmf->vma, vmf->address, pfn, false);
 
 	if (rc == -ENOMEM)
 		return VM_FAULT_OOM;
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index c23479b..bfa6648 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -466,7 +466,8 @@ int exynos_drm_gem_fault(struct vm_fault *vmf)
 	}
 
 	pfn = page_to_pfn(exynos_gem->pages[page_offset]);
-	ret = vm_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV));
+	ret = vm_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV),
+			false);
 
 out:
 	switch (ret) {
diff --git a/drivers/gpu/drm/gma500/framebuffer.c b/drivers/gpu/drm/gma500/framebuffer.c
index 7da70b6..6dd865f 100644
--- a/drivers/gpu/drm/gma500/framebuffer.c
+++ b/drivers/gpu/drm/gma500/framebuffer.c
@@ -134,7 +134,7 @@ static int psbfb_vm_fault(struct vm_fault *vmf)
 		pfn = (phys_addr >> PAGE_SHIFT);
 
 		ret = vm_insert_mixed(vma, address,
-				__pfn_to_pfn_t(pfn, PFN_DEV));
+				__pfn_to_pfn_t(pfn, PFN_DEV), false);
 		if (unlikely((ret == -EBUSY) || (ret != 0 && i > 0)))
 			break;
 		else if (unlikely(ret != 0)) {
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 65f3554..c187fd1 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -249,7 +249,8 @@ int msm_gem_fault(struct vm_fault *vmf)
 	VERB("Inserting %p pfn %lx, pa %lx", (void *)vmf->address,
 			pfn, pfn << PAGE_SHIFT);
 
-	ret = vm_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV));
+	ret = vm_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV),
+			false);
 
 out_unlock:
 	mutex_unlock(&msm_obj->lock);
diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
index 5c5c86d..26eebcd 100644
--- a/drivers/gpu/drm/omapdrm/omap_gem.c
+++ b/drivers/gpu/drm/omapdrm/omap_gem.c
@@ -393,7 +393,8 @@ static int fault_1d(struct drm_gem_object *obj,
 	VERB("Inserting %p pfn %lx, pa %lx", (void *)vmf->address,
 			pfn, pfn << PAGE_SHIFT);
 
-	return vm_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV));
+	return vm_insert_mixed(vma, vmf->address, __pfn_to_pfn_t(pfn, PFN_DEV),
+			false);
 }
 
 /* Special handling for the case of faulting in 2d tiled buffers */
@@ -486,7 +487,8 @@ static int fault_2d(struct drm_gem_object *obj,
 			pfn, pfn << PAGE_SHIFT);
 
 	for (i = n; i > 0; i--) {
-		vm_insert_mixed(vma, vaddr, __pfn_to_pfn_t(pfn, PFN_DEV));
+		vm_insert_mixed(vma, vaddr, __pfn_to_pfn_t(pfn, PFN_DEV),
+				false);
 		pfn += priv->usergart[fmt].stride_pfn;
 		vaddr += PAGE_SIZE * m;
 	}
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index b442d12..e85bfa7 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -248,7 +248,7 @@ static int ttm_bo_vm_fault(struct vm_fault *vmf)
 
 		if (vma->vm_flags & VM_MIXEDMAP)
 			ret = vm_insert_mixed(&cvma, address,
-					__pfn_to_pfn_t(pfn, PFN_DEV));
+					__pfn_to_pfn_t(pfn, PFN_DEV), false);
 		else
 			ret = vm_insert_pfn(&cvma, address, pfn);
 
diff --git a/fs/dax.c b/fs/dax.c
index 306c2b6..c844a51 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -899,7 +899,7 @@ static int dax_insert_mapping(struct address_space *mapping,
 	*entryp = ret;
 
 	trace_dax_insert_mapping(mapping->host, vmf, ret);
-	return vm_insert_mixed(vma, vaddr, pfn);
+	return vm_insert_mixed(vma, vaddr, pfn, false);
 }
 
 /**
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 46b9ac5..3eabc40 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2292,7 +2292,7 @@ int vm_insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
 			unsigned long pfn, pgprot_t pgprot);
 int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
-			pfn_t pfn);
+			pfn_t pfn, bool mkwrite);
 int vm_iomap_memory(struct vm_area_struct *vma, phys_addr_t start, unsigned long len);
 
 
diff --git a/mm/memory.c b/mm/memory.c
index 0e517be..d351911 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1646,7 +1646,7 @@ int vm_insert_page(struct vm_area_struct *vma, unsigned long addr,
 EXPORT_SYMBOL(vm_insert_page);
 
 static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
-			pfn_t pfn, pgprot_t prot)
+			pfn_t pfn, pgprot_t prot, bool mkwrite)
 {
 	struct mm_struct *mm = vma->vm_mm;
 	int retval;
@@ -1658,14 +1658,28 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
 	if (!pte)
 		goto out;
 	retval = -EBUSY;
-	if (!pte_none(*pte))
-		goto out_unlock;
+	if (!pte_none(*pte)) {
+		if (mkwrite) {
+			if (WARN_ON_ONCE(pte_pfn(*pte) != pfn_t_to_pfn(pfn)))
+				goto out_unlock;
+			entry = *pte;
+			goto out_mkwrite;
+		} else
+			goto out_unlock;
+	}
 
 	/* Ok, finally just insert the thing.. */
 	if (pfn_t_devmap(pfn))
 		entry = pte_mkdevmap(pfn_t_pte(pfn, prot));
 	else
 		entry = pte_mkspecial(pfn_t_pte(pfn, prot));
+
+out_mkwrite:
+	if (mkwrite) {
+		entry = pte_mkyoung(entry);
+		entry = maybe_mkwrite(pte_mkdirty(entry), vma);
+	}
+
 	set_pte_at(mm, addr, pte, entry);
 	update_mmu_cache(vma, addr, pte); /* XXX: why not for insert_page? */
 
@@ -1736,14 +1750,15 @@ int vm_insert_pfn_prot(struct vm_area_struct *vma, unsigned long addr,
 
 	track_pfn_insert(vma, &pgprot, __pfn_to_pfn_t(pfn, PFN_DEV));
 
-	ret = insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot);
+	ret = insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV), pgprot,
+			false);
 
 	return ret;
 }
 EXPORT_SYMBOL(vm_insert_pfn_prot);
 
 int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
-			pfn_t pfn)
+			pfn_t pfn, bool mkwrite)
 {
 	pgprot_t pgprot = vma->vm_page_prot;
 
@@ -1772,7 +1787,7 @@ int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
 		page = pfn_to_page(pfn_t_to_pfn(pfn));
 		return insert_page(vma, addr, page, pgprot);
 	}
-	return insert_pfn(vma, addr, pfn, pgprot);
+	return insert_pfn(vma, addr, pfn, pgprot, mkwrite);
 }
 EXPORT_SYMBOL(vm_insert_mixed);
 
-- 
2.9.4


* [PATCH v4 2/5] dax: relocate some dax functions
  2017-07-21 22:39 [PATCH v4 0/5] DAX common 4k zero page Ross Zwisler
  2017-07-21 22:39 ` [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed() Ross Zwisler
@ 2017-07-21 22:39 ` Ross Zwisler
  2017-07-24 11:35   ` Jan Kara
  2017-07-21 22:39 ` [PATCH v4 3/5] dax: use common 4k zero page for dax mmap reads Ross Zwisler
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 16+ messages in thread
From: Ross Zwisler @ 2017-07-21 22:39 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel
  Cc: Ross Zwisler, Darrick J. Wong, Theodore Ts'o, Alexander Viro,
	Andreas Dilger, Christoph Hellwig, Dan Williams, Dave Chinner,
	David Airlie, Ingo Molnar, Inki Dae, Jan Kara, Jonathan Corbet,
	Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim, Kyungmin Park,
	Matthew Wilcox, Patrik Jakobsson, Rob Clark, Seung-Woo Kim,
	Steven Rostedt, Tomi Valkeinen, dri-devel, freedreno,
	linux-arm-kernel, linux-arm-msm, linux-doc, linux-ext4,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-samsung-soc,
	linux-xfs

dax_load_hole() will soon need to call dax_insert_mapping_entry(), so it
needs to be moved lower in dax.c, below that function's definition.

dax_wake_mapping_entry_waiter() will soon be removed from dax.h and be made
static to dax.c, so we need to move its definition above all its callers.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 fs/dax.c | 138 +++++++++++++++++++++++++++++++--------------------------------
 1 file changed, 69 insertions(+), 69 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index c844a51..779dc5e 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -121,6 +121,31 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
 }
 
 /*
+ * We do not necessarily hold the mapping->tree_lock when we call this
+ * function so it is possible that 'entry' is no longer a valid item in the
+ * radix tree.  This is okay because all we really need to do is to find the
+ * correct waitqueue where tasks might be waiting for that old 'entry' and
+ * wake them.
+ */
+void dax_wake_mapping_entry_waiter(struct address_space *mapping,
+		pgoff_t index, void *entry, bool wake_all)
+{
+	struct exceptional_entry_key key;
+	wait_queue_head_t *wq;
+
+	wq = dax_entry_waitqueue(mapping, index, entry, &key);
+
+	/*
+	 * Checking for locked entry and prepare_to_wait_exclusive() happens
+	 * under mapping->tree_lock, ditto for entry handling in our callers.
+	 * So at this point all tasks that could have seen our entry locked
+	 * must be in the waitqueue and the following check will see them.
+	 */
+	if (waitqueue_active(wq))
+		__wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
+}
+
+/*
  * Check whether the given slot is locked. The function must be called with
  * mapping->tree_lock held
  */
@@ -392,31 +417,6 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
 	return entry;
 }
 
-/*
- * We do not necessarily hold the mapping->tree_lock when we call this
- * function so it is possible that 'entry' is no longer a valid item in the
- * radix tree.  This is okay because all we really need to do is to find the
- * correct waitqueue where tasks might be waiting for that old 'entry' and
- * wake them.
- */
-void dax_wake_mapping_entry_waiter(struct address_space *mapping,
-		pgoff_t index, void *entry, bool wake_all)
-{
-	struct exceptional_entry_key key;
-	wait_queue_head_t *wq;
-
-	wq = dax_entry_waitqueue(mapping, index, entry, &key);
-
-	/*
-	 * Checking for locked entry and prepare_to_wait_exclusive() happens
-	 * under mapping->tree_lock, ditto for entry handling in our callers.
-	 * So at this point all tasks that could have seen our entry locked
-	 * must be in the waitqueue and the following check will see them.
-	 */
-	if (waitqueue_active(wq))
-		__wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
-}
-
 static int __dax_invalidate_mapping_entry(struct address_space *mapping,
 					  pgoff_t index, bool trunc)
 {
@@ -468,50 +468,6 @@ int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
 	return __dax_invalidate_mapping_entry(mapping, index, false);
 }
 
-/*
- * The user has performed a load from a hole in the file.  Allocating
- * a new page in the file would cause excessive storage usage for
- * workloads with sparse files.  We allocate a page cache page instead.
- * We'll kick it out of the page cache if it's ever written to,
- * otherwise it will simply fall out of the page cache under memory
- * pressure without ever having been dirtied.
- */
-static int dax_load_hole(struct address_space *mapping, void **entry,
-			 struct vm_fault *vmf)
-{
-	struct inode *inode = mapping->host;
-	struct page *page;
-	int ret;
-
-	/* Hole page already exists? Return it...  */
-	if (!radix_tree_exceptional_entry(*entry)) {
-		page = *entry;
-		goto finish_fault;
-	}
-
-	/* This will replace locked radix tree entry with a hole page */
-	page = find_or_create_page(mapping, vmf->pgoff,
-				   vmf->gfp_mask | __GFP_ZERO);
-	if (!page) {
-		ret = VM_FAULT_OOM;
-		goto out;
-	}
-
-finish_fault:
-	vmf->page = page;
-	ret = finish_fault(vmf);
-	vmf->page = NULL;
-	*entry = page;
-	if (!ret) {
-		/* Grab reference for PTE that is now referencing the page */
-		get_page(page);
-		ret = VM_FAULT_NOPAGE;
-	}
-out:
-	trace_dax_load_hole(inode, vmf, ret);
-	return ret;
-}
-
 static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev,
 		sector_t sector, size_t size, struct page *to,
 		unsigned long vaddr)
@@ -938,6 +894,50 @@ int dax_pfn_mkwrite(struct vm_fault *vmf)
 }
 EXPORT_SYMBOL_GPL(dax_pfn_mkwrite);
 
+/*
+ * The user has performed a load from a hole in the file.  Allocating
+ * a new page in the file would cause excessive storage usage for
+ * workloads with sparse files.  We allocate a page cache page instead.
+ * We'll kick it out of the page cache if it's ever written to,
+ * otherwise it will simply fall out of the page cache under memory
+ * pressure without ever having been dirtied.
+ */
+static int dax_load_hole(struct address_space *mapping, void **entry,
+			 struct vm_fault *vmf)
+{
+	struct inode *inode = mapping->host;
+	struct page *page;
+	int ret;
+
+	/* Hole page already exists? Return it...  */
+	if (!radix_tree_exceptional_entry(*entry)) {
+		page = *entry;
+		goto finish_fault;
+	}
+
+	/* This will replace locked radix tree entry with a hole page */
+	page = find_or_create_page(mapping, vmf->pgoff,
+				   vmf->gfp_mask | __GFP_ZERO);
+	if (!page) {
+		ret = VM_FAULT_OOM;
+		goto out;
+	}
+
+finish_fault:
+	vmf->page = page;
+	ret = finish_fault(vmf);
+	vmf->page = NULL;
+	*entry = page;
+	if (!ret) {
+		/* Grab reference for PTE that is now referencing the page */
+		get_page(page);
+		ret = VM_FAULT_NOPAGE;
+	}
+out:
+	trace_dax_load_hole(inode, vmf, ret);
+	return ret;
+}
+
 static bool dax_range_is_aligned(struct block_device *bdev,
 				 unsigned int offset, unsigned int length)
 {
-- 
2.9.4


* [PATCH v4 3/5] dax: use common 4k zero page for dax mmap reads
  2017-07-21 22:39 [PATCH v4 0/5] DAX common 4k zero page Ross Zwisler
  2017-07-21 22:39 ` [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed() Ross Zwisler
  2017-07-21 22:39 ` [PATCH v4 2/5] dax: relocate some dax functions Ross Zwisler
@ 2017-07-21 22:39 ` Ross Zwisler
  2017-07-24 11:46   ` Jan Kara
  2017-07-21 22:39 ` [PATCH v4 4/5] dax: remove DAX code from page_cache_tree_insert() Ross Zwisler
  2017-07-21 22:39 ` [PATCH v4 5/5] dax: move all DAX radix tree defs to fs/dax.c Ross Zwisler
  4 siblings, 1 reply; 16+ messages in thread
From: Ross Zwisler @ 2017-07-21 22:39 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel
  Cc: Ross Zwisler, Darrick J. Wong, Theodore Ts'o, Alexander Viro,
	Andreas Dilger, Christoph Hellwig, Dan Williams, Dave Chinner,
	David Airlie, Ingo Molnar, Inki Dae, Jan Kara, Jonathan Corbet,
	Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim, Kyungmin Park,
	Matthew Wilcox, Patrik Jakobsson, Rob Clark, Seung-Woo Kim,
	Steven Rostedt, Tomi Valkeinen, dri-devel, freedreno,
	linux-arm-kernel, linux-arm-msm, linux-doc, linux-ext4,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-samsung-soc,
	linux-xfs

When servicing mmap() reads from file holes the current DAX code allocates
a page cache page of all zeroes and places the struct page pointer in the
mapping->page_tree radix tree.  This has three major drawbacks:

1) It consumes memory unnecessarily.  For every 4k page that is read via a
DAX mmap() over a hole, we allocate a new page cache page.  This means that
if you read 1GiB worth of pages, you end up using 1GiB of zeroed memory.
This is easily visible by looking at the overall memory consumption of the
system or by looking at /proc/[pid]/smaps:

	7f62e72b3000-7f63272b3000 rw-s 00000000 103:00 12   /root/dax/data
	Size:            1048576 kB
	Rss:             1048576 kB
	Pss:             1048576 kB
	Shared_Clean:          0 kB
	Shared_Dirty:          0 kB
	Private_Clean:   1048576 kB
	Private_Dirty:         0 kB
	Referenced:      1048576 kB
	Anonymous:             0 kB
	LazyFree:              0 kB
	AnonHugePages:         0 kB
	ShmemPmdMapped:        0 kB
	Shared_Hugetlb:        0 kB
	Private_Hugetlb:       0 kB
	Swap:                  0 kB
	SwapPss:               0 kB
	KernelPageSize:        4 kB
	MMUPageSize:           4 kB
	Locked:                0 kB

2) It is slower than using a common zero page because each page fault has
more work to do.  Instead of just inserting a common zero page we have to
allocate a page cache page, zero it, and then insert it.  Here are the
average latencies of dax_load_hole() as measured by ftrace on a random test
box:

Old method, using zeroed page cache pages:	3.4 us
New method, using the common 4k zero page:	0.8 us

This was the average latency over 1 GiB of sequential reads done by this
simple fio script:

  [global]
  size=1G
  filename=/root/dax/data
  fallocate=none
  [io]
  rw=read
  ioengine=mmap

3) The fact that we had to check for both DAX exceptional entries and for
page cache pages in the radix tree made the DAX code more complex.

Solve these issues by following the lead of the DAX PMD code and using a
common 4k zero page instead.  As with the PMD code we will now insert a DAX
exceptional entry into the radix tree instead of a struct page pointer,
which allows us to remove all the special casing in the DAX code.

Note that we still check pretty aggressively for regular pages in the DAX
radix tree, especially where we take action based on the bits set in the
radix tree entry.  If we ever find a regular page in our radix tree now,
that most likely means that someone besides DAX is inserting pages (which
has happened lots of times in the past), and we want to find that out early
and fail loudly.
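
A sketch of the sanity-check pattern this adds, taken in spirit from the
grab_mapping_entry() hunk below:

  entry = get_unlocked_mapping_entry(mapping, index, &slot);
  if (WARN_ON_ONCE(entry && !radix_tree_exceptional_entry(entry))) {
  	/* Only DAX should be storing entries here; scream and bail. */
  	entry = ERR_PTR(-EIO);
  	goto out_unlock;
  }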

This solution also removes the extra memory consumption.  Here is that same
/proc/[pid]/smaps after 1GiB of reading from a hole with the new code:

	7f2054a74000-7f2094a74000 rw-s 00000000 103:00 12   /root/dax/data
	Size:            1048576 kB
	Rss:                   0 kB
	Pss:                   0 kB
	Shared_Clean:          0 kB
	Shared_Dirty:          0 kB
	Private_Clean:         0 kB
	Private_Dirty:         0 kB
	Referenced:            0 kB
	Anonymous:             0 kB
	LazyFree:              0 kB
	AnonHugePages:         0 kB
	ShmemPmdMapped:        0 kB
	Shared_Hugetlb:        0 kB
	Private_Hugetlb:       0 kB
	Swap:                  0 kB
	SwapPss:               0 kB
	KernelPageSize:        4 kB
	MMUPageSize:           4 kB
	Locked:                0 kB

Overall system memory consumption is similarly improved.

Another major change is that we remove dax_pfn_mkwrite() from our fault
flow, and instead rely on the page fault itself to make the PTE dirty and
writeable.  The following description from the patch adding the 'mkwrite'
parameter to vm_insert_mixed() explains this a little more:

***
  To be able to use the common 4k zero page in DAX we need to have our PTE
  fault path look more like our PMD fault path where a PTE entry can be
  marked as dirty and writeable as it is first inserted, rather than
  waiting for a follow-up dax_pfn_mkwrite() => finish_mkwrite_fault() call.

  Right now we can rely on having a dax_pfn_mkwrite() call because we can
  distinguish between these two cases in do_wp_page():

          case 1: 4k zero page => writable DAX storage
          case 2: read-only DAX storage => writeable DAX storage

  This distinction is made via vm_normal_page().  vm_normal_page() returns
  NULL for the common 4k zero page, though, just as it does for DAX ptes.
  Instead of special casing the DAX + 4k zero page case, we will
  simplify our DAX PTE page fault sequence so that it matches our DAX PMD
  sequence, and get rid of the dax_pfn_mkwrite() helper.  We will instead
  use dax_iomap_fault() to handle write-protection faults.

  This means that insert_pfn() needs to follow the lead of insert_pfn_pmd()
  and allow us to pass in a 'mkwrite' flag.  If 'mkwrite' is set
  insert_pfn() will do the work that was previously done by wp_page_reuse()
  as part of the dax_pfn_mkwrite() call path.
***
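
On the filesystem side the net effect is that the pfn_mkwrite handlers now
just reuse the normal DAX fault handler.  For example, ext2's
vm_operations_struct ends up looking like this (see the ext2/ext4/xfs hunks
below):

  static const struct vm_operations_struct ext2_dax_vm_ops = {
  	.fault		= ext2_dax_fault,
  	.page_mkwrite	= ext2_dax_fault,
  	.pfn_mkwrite	= ext2_dax_fault,	/* was ext2_dax_pfn_mkwrite() */
  };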

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
---
 Documentation/filesystems/dax.txt |   5 +-
 fs/dax.c                          | 240 ++++++++++++--------------------------
 fs/ext2/file.c                    |  25 +---
 fs/ext4/file.c                    |  32 +----
 fs/xfs/xfs_file.c                 |   2 +-
 include/linux/dax.h               |  12 +-
 include/trace/events/fs_dax.h     |   2 -
 7 files changed, 83 insertions(+), 235 deletions(-)

diff --git a/Documentation/filesystems/dax.txt b/Documentation/filesystems/dax.txt
index a7e6e14..3be3b26 100644
--- a/Documentation/filesystems/dax.txt
+++ b/Documentation/filesystems/dax.txt
@@ -63,9 +63,8 @@ Filesystem support consists of
 - implementing an mmap file operation for DAX files which sets the
   VM_MIXEDMAP and VM_HUGEPAGE flags on the VMA, and setting the vm_ops to
   include handlers for fault, pmd_fault, page_mkwrite, pfn_mkwrite. These
-  handlers should probably call dax_iomap_fault() (for fault and page_mkwrite
-  handlers), dax_iomap_pmd_fault(), dax_pfn_mkwrite() passing the appropriate
-  iomap operations.
+  handlers should probably call dax_iomap_fault() passing the appropriate
+  fault size and iomap operations.
 - calling iomap_zero_range() passing appropriate iomap operations instead of
   block_truncate_page() for DAX files
 - ensuring that there is sufficient locking between reads, writes,
diff --git a/fs/dax.c b/fs/dax.c
index 779dc5e..fb0e4c1 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -66,7 +66,7 @@ static int dax_is_pte_entry(void *entry)
 
 static int dax_is_zero_entry(void *entry)
 {
-	return (unsigned long)entry & RADIX_DAX_HZP;
+	return (unsigned long)entry & RADIX_DAX_ZERO_PAGE;
 }
 
 static int dax_is_empty_entry(void *entry)
@@ -206,7 +206,8 @@ static void *get_unlocked_mapping_entry(struct address_space *mapping,
 	for (;;) {
 		entry = __radix_tree_lookup(&mapping->page_tree, index, NULL,
 					  &slot);
-		if (!entry || !radix_tree_exceptional_entry(entry) ||
+		if (!entry ||
+		    WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)) ||
 		    !slot_locked(mapping, slot)) {
 			if (slotp)
 				*slotp = slot;
@@ -241,14 +242,9 @@ static void dax_unlock_mapping_entry(struct address_space *mapping,
 }
 
 static void put_locked_mapping_entry(struct address_space *mapping,
-				     pgoff_t index, void *entry)
+		pgoff_t index)
 {
-	if (!radix_tree_exceptional_entry(entry)) {
-		unlock_page(entry);
-		put_page(entry);
-	} else {
-		dax_unlock_mapping_entry(mapping, index);
-	}
+	dax_unlock_mapping_entry(mapping, index);
 }
 
 /*
@@ -258,7 +254,7 @@ static void put_locked_mapping_entry(struct address_space *mapping,
 static void put_unlocked_mapping_entry(struct address_space *mapping,
 				       pgoff_t index, void *entry)
 {
-	if (!radix_tree_exceptional_entry(entry))
+	if (!entry)
 		return;
 
 	/* We have to wake up next waiter for the radix tree entry lock */
@@ -266,15 +262,15 @@ static void put_unlocked_mapping_entry(struct address_space *mapping,
 }
 
 /*
- * Find radix tree entry at given index. If it points to a page, return with
- * the page locked. If it points to the exceptional entry, return with the
- * radix tree entry locked. If the radix tree doesn't contain given index,
- * create empty exceptional entry for the index and return with it locked.
+ * Find radix tree entry at given index. If it points to an exceptional entry,
+ * return it with the radix tree entry locked. If the radix tree doesn't
+ * contain given index, create an empty exceptional entry for the index and
+ * return with it locked.
  *
  * When requesting an entry with size RADIX_DAX_PMD, grab_mapping_entry() will
  * either return that locked entry or will return an error.  This error will
- * happen if there are any 4k entries (either zero pages or DAX entries)
- * within the 2MiB range that we are requesting.
+ * happen if there are any 4k entries within the 2MiB range that we are
+ * requesting.
  *
  * We always favor 4k entries over 2MiB entries. There isn't a flow where we
  * evict 4k entries in order to 'upgrade' them to a 2MiB entry.  A 2MiB
@@ -301,18 +297,21 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
 	spin_lock_irq(&mapping->tree_lock);
 	entry = get_unlocked_mapping_entry(mapping, index, &slot);
 
+	if (WARN_ON_ONCE(entry && !radix_tree_exceptional_entry(entry))) {
+		entry = ERR_PTR(-EIO);
+		goto out_unlock;
+	}
+
 	if (entry) {
 		if (size_flag & RADIX_DAX_PMD) {
-			if (!radix_tree_exceptional_entry(entry) ||
-			    dax_is_pte_entry(entry)) {
+			if (dax_is_pte_entry(entry)) {
 				put_unlocked_mapping_entry(mapping, index,
 						entry);
 				entry = ERR_PTR(-EEXIST);
 				goto out_unlock;
 			}
 		} else { /* trying to grab a PTE entry */
-			if (radix_tree_exceptional_entry(entry) &&
-			    dax_is_pmd_entry(entry) &&
+			if (dax_is_pmd_entry(entry) &&
 			    (dax_is_zero_entry(entry) ||
 			     dax_is_empty_entry(entry))) {
 				pmd_downgrade = true;
@@ -346,7 +345,7 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
 				mapping_gfp_mask(mapping) & ~__GFP_HIGHMEM);
 		if (err) {
 			if (pmd_downgrade)
-				put_locked_mapping_entry(mapping, index, entry);
+				put_locked_mapping_entry(mapping, index);
 			return ERR_PTR(err);
 		}
 		spin_lock_irq(&mapping->tree_lock);
@@ -396,21 +395,6 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
 		spin_unlock_irq(&mapping->tree_lock);
 		return entry;
 	}
-	/* Normal page in radix tree? */
-	if (!radix_tree_exceptional_entry(entry)) {
-		struct page *page = entry;
-
-		get_page(page);
-		spin_unlock_irq(&mapping->tree_lock);
-		lock_page(page);
-		/* Page got truncated? Retry... */
-		if (unlikely(page->mapping != mapping)) {
-			unlock_page(page);
-			put_page(page);
-			goto restart;
-		}
-		return page;
-	}
 	entry = lock_slot(mapping, slot);
  out_unlock:
 	spin_unlock_irq(&mapping->tree_lock);
@@ -426,7 +410,7 @@ static int __dax_invalidate_mapping_entry(struct address_space *mapping,
 
 	spin_lock_irq(&mapping->tree_lock);
 	entry = get_unlocked_mapping_entry(mapping, index, NULL);
-	if (!entry || !radix_tree_exceptional_entry(entry))
+	if (!entry || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry)))
 		goto out;
 	if (!trunc &&
 	    (radix_tree_tag_get(page_tree, index, PAGECACHE_TAG_DIRTY) ||
@@ -508,47 +492,27 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
 				      unsigned long flags)
 {
 	struct radix_tree_root *page_tree = &mapping->page_tree;
-	int error = 0;
-	bool hole_fill = false;
 	void *new_entry;
 	pgoff_t index = vmf->pgoff;
 
 	if (vmf->flags & FAULT_FLAG_WRITE)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 
-	/* Replacing hole page with block mapping? */
-	if (!radix_tree_exceptional_entry(entry)) {
-		hole_fill = true;
-		/*
-		 * Unmap the page now before we remove it from page cache below.
-		 * The page is locked so it cannot be faulted in again.
-		 */
-		unmap_mapping_range(mapping, vmf->pgoff << PAGE_SHIFT,
-				    PAGE_SIZE, 0);
-		error = radix_tree_preload(vmf->gfp_mask & ~__GFP_HIGHMEM);
-		if (error)
-			return ERR_PTR(error);
-	} else if (dax_is_zero_entry(entry) && !(flags & RADIX_DAX_HZP)) {
-		/* replacing huge zero page with PMD block mapping */
-		unmap_mapping_range(mapping,
-			(vmf->pgoff << PAGE_SHIFT) & PMD_MASK, PMD_SIZE, 0);
+	if (dax_is_zero_entry(entry) && !(flags & RADIX_DAX_ZERO_PAGE)) {
+		/* we are replacing a zero page with block mapping */
+		if (dax_is_pmd_entry(entry))
+			unmap_mapping_range(mapping,
+					(vmf->pgoff << PAGE_SHIFT) & PMD_MASK,
+					PMD_SIZE, 0);
+		else /* pte entry */
+			unmap_mapping_range(mapping, vmf->pgoff << PAGE_SHIFT,
+					PAGE_SIZE, 0);
 	}
 
 	spin_lock_irq(&mapping->tree_lock);
 	new_entry = dax_radix_locked_entry(sector, flags);
 
-	if (hole_fill) {
-		__delete_from_page_cache(entry, NULL);
-		/* Drop pagecache reference */
-		put_page(entry);
-		error = __radix_tree_insert(page_tree, index,
-				dax_radix_order(new_entry), new_entry);
-		if (error) {
-			new_entry = ERR_PTR(error);
-			goto unlock;
-		}
-		mapping->nrexceptional++;
-	} else if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
+	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
 		/*
 		 * Only swap our new entry into the radix tree if the current
 		 * entry is a zero page or an empty entry.  If a normal PTE or
@@ -565,23 +529,14 @@ static void *dax_insert_mapping_entry(struct address_space *mapping,
 		WARN_ON_ONCE(ret != entry);
 		__radix_tree_replace(page_tree, node, slot,
 				     new_entry, NULL, NULL);
+		entry = new_entry;
 	}
+
 	if (vmf->flags & FAULT_FLAG_WRITE)
 		radix_tree_tag_set(page_tree, index, PAGECACHE_TAG_DIRTY);
- unlock:
+
 	spin_unlock_irq(&mapping->tree_lock);
-	if (hole_fill) {
-		radix_tree_preload_end();
-		/*
-		 * We don't need hole page anymore, it has been replaced with
-		 * locked radix tree entry now.
-		 */
-		if (mapping->a_ops->freepage)
-			mapping->a_ops->freepage(entry);
-		unlock_page(entry);
-		put_page(entry);
-	}
-	return new_entry;
+	return entry;
 }
 
 static inline unsigned long
@@ -680,7 +635,7 @@ static int dax_writeback_one(struct block_device *bdev,
 	spin_lock_irq(&mapping->tree_lock);
 	entry2 = get_unlocked_mapping_entry(mapping, index, &slot);
 	/* Entry got punched out / reallocated? */
-	if (!entry2 || !radix_tree_exceptional_entry(entry2))
+	if (!entry2 || WARN_ON_ONCE(!radix_tree_exceptional_entry(entry2)))
 		goto put_unlocked;
 	/*
 	 * Entry got reallocated elsewhere? No need to writeback. We have to
@@ -752,7 +707,7 @@ static int dax_writeback_one(struct block_device *bdev,
 	trace_dax_writeback_one(mapping->host, index, size >> PAGE_SHIFT);
  dax_unlock:
 	dax_read_unlock(id);
-	put_locked_mapping_entry(mapping, index, entry);
+	put_locked_mapping_entry(mapping, index);
 	return ret;
 
  put_unlocked:
@@ -827,11 +782,10 @@ EXPORT_SYMBOL_GPL(dax_writeback_mapping_range);
 
 static int dax_insert_mapping(struct address_space *mapping,
 		struct block_device *bdev, struct dax_device *dax_dev,
-		sector_t sector, size_t size, void **entryp,
+		sector_t sector, size_t size, void *entry,
 		struct vm_area_struct *vma, struct vm_fault *vmf)
 {
 	unsigned long vaddr = vmf->address;
-	void *entry = *entryp;
 	void *ret, *kaddr;
 	pgoff_t pgoff;
 	int id, rc;
@@ -852,87 +806,41 @@ static int dax_insert_mapping(struct address_space *mapping,
 	ret = dax_insert_mapping_entry(mapping, vmf, entry, sector, 0);
 	if (IS_ERR(ret))
 		return PTR_ERR(ret);
-	*entryp = ret;
 
 	trace_dax_insert_mapping(mapping->host, vmf, ret);
-	return vm_insert_mixed(vma, vaddr, pfn, false);
+	return vm_insert_mixed(vma, vaddr, pfn, vmf->flags & FAULT_FLAG_WRITE);
 }
 
-/**
- * dax_pfn_mkwrite - handle first write to DAX page
- * @vmf: The description of the fault
- */
-int dax_pfn_mkwrite(struct vm_fault *vmf)
-{
-	struct file *file = vmf->vma->vm_file;
-	struct address_space *mapping = file->f_mapping;
-	struct inode *inode = mapping->host;
-	void *entry, **slot;
-	pgoff_t index = vmf->pgoff;
-
-	spin_lock_irq(&mapping->tree_lock);
-	entry = get_unlocked_mapping_entry(mapping, index, &slot);
-	if (!entry || !radix_tree_exceptional_entry(entry)) {
-		if (entry)
-			put_unlocked_mapping_entry(mapping, index, entry);
-		spin_unlock_irq(&mapping->tree_lock);
-		trace_dax_pfn_mkwrite_no_entry(inode, vmf, VM_FAULT_NOPAGE);
-		return VM_FAULT_NOPAGE;
-	}
-	radix_tree_tag_set(&mapping->page_tree, index, PAGECACHE_TAG_DIRTY);
-	entry = lock_slot(mapping, slot);
-	spin_unlock_irq(&mapping->tree_lock);
-	/*
-	 * If we race with somebody updating the PTE and finish_mkwrite_fault()
-	 * fails, we don't care. We need to return VM_FAULT_NOPAGE and retry
-	 * the fault in either case.
-	 */
-	finish_mkwrite_fault(vmf);
-	put_locked_mapping_entry(mapping, index, entry);
-	trace_dax_pfn_mkwrite(inode, vmf, VM_FAULT_NOPAGE);
-	return VM_FAULT_NOPAGE;
-}
-EXPORT_SYMBOL_GPL(dax_pfn_mkwrite);
-
 /*
- * The user has performed a load from a hole in the file.  Allocating
- * a new page in the file would cause excessive storage usage for
- * workloads with sparse files.  We allocate a page cache page instead.
- * We'll kick it out of the page cache if it's ever written to,
- * otherwise it will simply fall out of the page cache under memory
- * pressure without ever having been dirtied.
+ * The user has performed a load from a hole in the file.  Allocating a new
+ * page in the file would cause excessive storage usage for workloads with
+ * sparse files.  Instead we insert a read-only mapping of the 4k zero page.
+ * If this page is ever written to we will re-fault and change the mapping to
+ * point to real DAX storage instead.
  */
-static int dax_load_hole(struct address_space *mapping, void **entry,
+static int dax_load_hole(struct address_space *mapping, void *entry,
 			 struct vm_fault *vmf)
 {
 	struct inode *inode = mapping->host;
-	struct page *page;
-	int ret;
-
-	/* Hole page already exists? Return it...  */
-	if (!radix_tree_exceptional_entry(*entry)) {
-		page = *entry;
-		goto finish_fault;
-	}
+	unsigned long vaddr = vmf->address;
+	int ret = VM_FAULT_NOPAGE;
+	struct page *zero_page;
+	void *entry2;
 
-	/* This will replace locked radix tree entry with a hole page */
-	page = find_or_create_page(mapping, vmf->pgoff,
-				   vmf->gfp_mask | __GFP_ZERO);
-	if (!page) {
+	zero_page = ZERO_PAGE(0);
+	if (unlikely(!zero_page)) {
 		ret = VM_FAULT_OOM;
 		goto out;
 	}
 
-finish_fault:
-	vmf->page = page;
-	ret = finish_fault(vmf);
-	vmf->page = NULL;
-	*entry = page;
-	if (!ret) {
-		/* Grab reference for PTE that is now referencing the page */
-		get_page(page);
-		ret = VM_FAULT_NOPAGE;
+	entry2 = dax_insert_mapping_entry(mapping, vmf, entry, 0,
+			RADIX_DAX_ZERO_PAGE);
+	if (IS_ERR(entry2)) {
+		ret = VM_FAULT_SIGBUS;
+		goto out;
 	}
+
+	vm_insert_mixed(vmf->vma, vaddr, page_to_pfn_t(zero_page), false);
 out:
 	trace_dax_load_hole(inode, vmf, ret);
 	return ret;
@@ -1220,7 +1128,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
 			major = VM_FAULT_MAJOR;
 		}
 		error = dax_insert_mapping(mapping, iomap.bdev, iomap.dax_dev,
-				sector, PAGE_SIZE, &entry, vmf->vma, vmf);
+				sector, PAGE_SIZE, entry, vmf->vma, vmf);
 		/* -EBUSY is fine, somebody else faulted on the same PTE */
 		if (error == -EBUSY)
 			error = 0;
@@ -1228,7 +1136,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
 	case IOMAP_UNWRITTEN:
 	case IOMAP_HOLE:
 		if (!(vmf->flags & FAULT_FLAG_WRITE)) {
-			vmf_ret = dax_load_hole(mapping, &entry, vmf);
+			vmf_ret = dax_load_hole(mapping, entry, vmf);
 			goto finish_iomap;
 		}
 		/*FALLTHRU*/
@@ -1255,7 +1163,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
 		ops->iomap_end(inode, pos, PAGE_SIZE, copied, flags, &iomap);
 	}
  unlock_entry:
-	put_locked_mapping_entry(mapping, vmf->pgoff, entry);
+	put_locked_mapping_entry(mapping, vmf->pgoff);
  out:
 	trace_dax_pte_fault_done(inode, vmf, vmf_ret);
 	return vmf_ret;
@@ -1269,7 +1177,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
 #define PG_PMD_COLOUR	((PMD_SIZE >> PAGE_SHIFT) - 1)
 
 static int dax_pmd_insert_mapping(struct vm_fault *vmf, struct iomap *iomap,
-		loff_t pos, void **entryp)
+		loff_t pos, void *entry)
 {
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	const sector_t sector = dax_iomap_sector(iomap, pos);
@@ -1300,11 +1208,10 @@ static int dax_pmd_insert_mapping(struct vm_fault *vmf, struct iomap *iomap,
 		goto unlock_fallback;
 	dax_read_unlock(id);
 
-	ret = dax_insert_mapping_entry(mapping, vmf, *entryp, sector,
+	ret = dax_insert_mapping_entry(mapping, vmf, entry, sector,
 			RADIX_DAX_PMD);
 	if (IS_ERR(ret))
 		goto fallback;
-	*entryp = ret;
 
 	trace_dax_pmd_insert_mapping(inode, vmf, length, pfn, ret);
 	return vmf_insert_pfn_pmd(vmf->vma, vmf->address, vmf->pmd,
@@ -1318,7 +1225,7 @@ static int dax_pmd_insert_mapping(struct vm_fault *vmf, struct iomap *iomap,
 }
 
 static int dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
-		void **entryp)
+		void *entry)
 {
 	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	unsigned long pmd_addr = vmf->address & PMD_MASK;
@@ -1333,11 +1240,10 @@ static int dax_pmd_load_hole(struct vm_fault *vmf, struct iomap *iomap,
 	if (unlikely(!zero_page))
 		goto fallback;
 
-	ret = dax_insert_mapping_entry(mapping, vmf, *entryp, 0,
-			RADIX_DAX_PMD | RADIX_DAX_HZP);
+	ret = dax_insert_mapping_entry(mapping, vmf, entry, 0,
+			RADIX_DAX_PMD | RADIX_DAX_ZERO_PAGE);
 	if (IS_ERR(ret))
 		goto fallback;
-	*entryp = ret;
 
 	ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
 	if (!pmd_none(*(vmf->pmd))) {
@@ -1403,10 +1309,10 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
 		goto fallback;
 
 	/*
-	 * grab_mapping_entry() will make sure we get a 2M empty entry, a DAX
-	 * PMD or a HZP entry.  If it can't (because a 4k page is already in
-	 * the tree, for instance), it will return -EEXIST and we just fall
-	 * back to 4k entries.
+	 * grab_mapping_entry() will make sure we get a 2MiB empty entry, a
+	 * 2MiB zero page entry or a DAX PMD.  If it can't (because a 4k page
+	 * is already in the tree, for instance), it will return -EEXIST and
+	 * we just fall back to 4k entries.
 	 */
 	entry = grab_mapping_entry(mapping, pgoff, RADIX_DAX_PMD);
 	if (IS_ERR(entry))
@@ -1439,13 +1345,13 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
 
 	switch (iomap.type) {
 	case IOMAP_MAPPED:
-		result = dax_pmd_insert_mapping(vmf, &iomap, pos, &entry);
+		result = dax_pmd_insert_mapping(vmf, &iomap, pos, entry);
 		break;
 	case IOMAP_UNWRITTEN:
 	case IOMAP_HOLE:
 		if (WARN_ON_ONCE(write))
 			break;
-		result = dax_pmd_load_hole(vmf, &iomap, &entry);
+		result = dax_pmd_load_hole(vmf, &iomap, entry);
 		break;
 	default:
 		WARN_ON_ONCE(1);
@@ -1468,7 +1374,7 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
 				&iomap);
 	}
  unlock_entry:
-	put_locked_mapping_entry(mapping, pgoff, entry);
+	put_locked_mapping_entry(mapping, pgoff);
  fallback:
 	if (result == VM_FAULT_FALLBACK) {
 		split_huge_pmd(vma, vmf->pmd, vmf->address);
diff --git a/fs/ext2/file.c b/fs/ext2/file.c
index d34d32b..ff3a363 100644
--- a/fs/ext2/file.c
+++ b/fs/ext2/file.c
@@ -107,29 +107,6 @@ static int ext2_dax_fault(struct vm_fault *vmf)
 	return ret;
 }
 
-static int ext2_dax_pfn_mkwrite(struct vm_fault *vmf)
-{
-	struct inode *inode = file_inode(vmf->vma->vm_file);
-	struct ext2_inode_info *ei = EXT2_I(inode);
-	loff_t size;
-	int ret;
-
-	sb_start_pagefault(inode->i_sb);
-	file_update_time(vmf->vma->vm_file);
-	down_read(&ei->dax_sem);
-
-	/* check that the faulting page hasn't raced with truncate */
-	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	if (vmf->pgoff >= size)
-		ret = VM_FAULT_SIGBUS;
-	else
-		ret = dax_pfn_mkwrite(vmf);
-
-	up_read(&ei->dax_sem);
-	sb_end_pagefault(inode->i_sb);
-	return ret;
-}
-
 static const struct vm_operations_struct ext2_dax_vm_ops = {
 	.fault		= ext2_dax_fault,
 	/*
@@ -138,7 +115,7 @@ static const struct vm_operations_struct ext2_dax_vm_ops = {
 	 * will always fail and fail back to regular faults.
 	 */
 	.page_mkwrite	= ext2_dax_fault,
-	.pfn_mkwrite	= ext2_dax_pfn_mkwrite,
+	.pfn_mkwrite	= ext2_dax_fault,
 };
 
 static int ext2_file_mmap(struct file *file, struct vm_area_struct *vma)
diff --git a/fs/ext4/file.c b/fs/ext4/file.c
index 58294c9..9dda70e 100644
--- a/fs/ext4/file.c
+++ b/fs/ext4/file.c
@@ -311,41 +311,11 @@ static int ext4_dax_fault(struct vm_fault *vmf)
 	return ext4_dax_huge_fault(vmf, PE_SIZE_PTE);
 }
 
-/*
- * Handle write fault for VM_MIXEDMAP mappings. Similarly to ext4_dax_fault()
- * handler we check for races agaist truncate. Note that since we cycle through
- * i_mmap_sem, we are sure that also any hole punching that began before we
- * were called is finished by now and so if it included part of the file we
- * are working on, our pte will get unmapped and the check for pte_same() in
- * wp_pfn_shared() fails. Thus fault gets retried and things work out as
- * desired.
- */
-static int ext4_dax_pfn_mkwrite(struct vm_fault *vmf)
-{
-	struct inode *inode = file_inode(vmf->vma->vm_file);
-	struct super_block *sb = inode->i_sb;
-	loff_t size;
-	int ret;
-
-	sb_start_pagefault(sb);
-	file_update_time(vmf->vma->vm_file);
-	down_read(&EXT4_I(inode)->i_mmap_sem);
-	size = (i_size_read(inode) + PAGE_SIZE - 1) >> PAGE_SHIFT;
-	if (vmf->pgoff >= size)
-		ret = VM_FAULT_SIGBUS;
-	else
-		ret = dax_pfn_mkwrite(vmf);
-	up_read(&EXT4_I(inode)->i_mmap_sem);
-	sb_end_pagefault(sb);
-
-	return ret;
-}
-
 static const struct vm_operations_struct ext4_dax_vm_ops = {
 	.fault		= ext4_dax_fault,
 	.huge_fault	= ext4_dax_huge_fault,
 	.page_mkwrite	= ext4_dax_fault,
-	.pfn_mkwrite	= ext4_dax_pfn_mkwrite,
+	.pfn_mkwrite	= ext4_dax_fault,
 };
 #else
 #define ext4_dax_vm_ops	ext4_file_vm_ops
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index c4893e2..62db8ff 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -1130,7 +1130,7 @@ xfs_filemap_pfn_mkwrite(
 	if (vmf->pgoff >= size)
 		ret = VM_FAULT_SIGBUS;
 	else if (IS_DAX(inode))
-		ret = dax_pfn_mkwrite(vmf);
+		ret = dax_iomap_fault(vmf, PE_SIZE_PTE, &xfs_iomap_ops);
 	xfs_iunlock(ip, XFS_MMAPLOCK_SHARED);
 	sb_end_pagefault(inode->i_sb);
 	return ret;
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 7948118..29cced8 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -90,18 +90,17 @@ void dax_write_cache(struct dax_device *dax_dev, bool wc);
 
 /*
  * We use lowest available bit in exceptional entry for locking, one bit for
- * the entry size (PMD) and two more to tell us if the entry is a huge zero
- * page (HZP) or an empty entry that is just used for locking.  In total four
- * special bits.
+ * the entry size (PMD) and two more to tell us if the entry is a zero page or
+ * an empty entry that is just used for locking.  In total four special bits.
  *
- * If the PMD bit isn't set the entry has size PAGE_SIZE, and if the HZP and
- * EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
+ * If the PMD bit isn't set the entry has size PAGE_SIZE, and if the ZERO_PAGE
+ * and EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
  * block allocation.
  */
 #define RADIX_DAX_SHIFT	(RADIX_TREE_EXCEPTIONAL_SHIFT + 4)
 #define RADIX_DAX_ENTRY_LOCK (1 << RADIX_TREE_EXCEPTIONAL_SHIFT)
 #define RADIX_DAX_PMD (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 1))
-#define RADIX_DAX_HZP (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 2))
+#define RADIX_DAX_ZERO_PAGE (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 2))
 #define RADIX_DAX_EMPTY (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 3))
 
 static inline unsigned long dax_radix_sector(void *entry)
@@ -152,7 +151,6 @@ static inline unsigned int dax_radix_order(void *entry)
 	return 0;
 }
 #endif
-int dax_pfn_mkwrite(struct vm_fault *vmf);
 
 static inline bool dax_mapping(struct address_space *mapping)
 {
diff --git a/include/trace/events/fs_dax.h b/include/trace/events/fs_dax.h
index 08bb3ed..fbc4a06 100644
--- a/include/trace/events/fs_dax.h
+++ b/include/trace/events/fs_dax.h
@@ -190,8 +190,6 @@ DEFINE_EVENT(dax_pte_fault_class, name, \
 
 DEFINE_PTE_FAULT_EVENT(dax_pte_fault);
 DEFINE_PTE_FAULT_EVENT(dax_pte_fault_done);
-DEFINE_PTE_FAULT_EVENT(dax_pfn_mkwrite_no_entry);
-DEFINE_PTE_FAULT_EVENT(dax_pfn_mkwrite);
 DEFINE_PTE_FAULT_EVENT(dax_load_hole);
 
 TRACE_EVENT(dax_insert_mapping,
-- 
2.9.4


* [PATCH v4 4/5] dax: remove DAX code from page_cache_tree_insert()
  2017-07-21 22:39 [PATCH v4 0/5] DAX common 4k zero page Ross Zwisler
                   ` (2 preceding siblings ...)
  2017-07-21 22:39 ` [PATCH v4 3/5] dax: use common 4k zero page for dax mmap reads Ross Zwisler
@ 2017-07-21 22:39 ` Ross Zwisler
  2017-07-24 11:47   ` Jan Kara
  2017-07-21 22:39 ` [PATCH v4 5/5] dax: move all DAX radix tree defs to fs/dax.c Ross Zwisler
  4 siblings, 1 reply; 16+ messages in thread
From: Ross Zwisler @ 2017-07-21 22:39 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel
  Cc: Ross Zwisler, Darrick J. Wong, Theodore Ts'o, Alexander Viro,
	Andreas Dilger, Christoph Hellwig, Dan Williams, Dave Chinner,
	David Airlie, Ingo Molnar, Inki Dae, Jan Kara, Jonathan Corbet,
	Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim, Kyungmin Park,
	Matthew Wilcox, Patrik Jakobsson, Rob Clark, Seung-Woo Kim,
	Steven Rostedt, Tomi Valkeinen, dri-devel, freedreno,
	linux-arm-kernel, linux-arm-msm, linux-doc, linux-ext4,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-samsung-soc,
	linux-xfs

Now that we no longer insert struct page pointers in DAX radix trees we can
remove the special casing for DAX in page_cache_tree_insert().  This also
allows us to make dax_wake_mapping_entry_waiter() local to fs/dax.c,
removing it from dax.h.

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Suggested-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c            |  2 +-
 include/linux/dax.h |  2 --
 mm/filemap.c        | 13 ++-----------
 3 files changed, 3 insertions(+), 14 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index fb0e4c1..0e27d90 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -127,7 +127,7 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
  * correct waitqueue where tasks might be waiting for that old 'entry' and
  * wake them.
  */
-void dax_wake_mapping_entry_waiter(struct address_space *mapping,
+static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
 		pgoff_t index, void *entry, bool wake_all)
 {
 	struct exceptional_entry_key key;
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 29cced8..afa99bb 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -122,8 +122,6 @@ int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
 int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
 int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
 				      pgoff_t index);
-void dax_wake_mapping_entry_waiter(struct address_space *mapping,
-		pgoff_t index, void *entry, bool wake_all);
 
 #ifdef CONFIG_FS_DAX
 int __dax_zero_page_range(struct block_device *bdev,
diff --git a/mm/filemap.c b/mm/filemap.c
index a497024..1bf1265 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -130,17 +130,8 @@ static int page_cache_tree_insert(struct address_space *mapping,
 			return -EEXIST;
 
 		mapping->nrexceptional--;
-		if (!dax_mapping(mapping)) {
-			if (shadowp)
-				*shadowp = p;
-		} else {
-			/* DAX can replace empty locked entry with a hole */
-			WARN_ON_ONCE(p !=
-				dax_radix_locked_entry(0, RADIX_DAX_EMPTY));
-			/* Wakeup waiters for exceptional entry lock */
-			dax_wake_mapping_entry_waiter(mapping, page->index, p,
-						      true);
-		}
+		if (shadowp)
+			*shadowp = p;
 	}
 	__radix_tree_replace(&mapping->page_tree, node, slot, page,
 			     workingset_update_node, mapping);
-- 
2.9.4


* [PATCH v4 5/5] dax: move all DAX radix tree defs to fs/dax.c
  2017-07-21 22:39 [PATCH v4 0/5] DAX common 4k zero page Ross Zwisler
                   ` (3 preceding siblings ...)
  2017-07-21 22:39 ` [PATCH v4 4/5] dax: remove DAX code from page_cache_tree_insert() Ross Zwisler
@ 2017-07-21 22:39 ` Ross Zwisler
  2017-07-24 11:48   ` Jan Kara
  4 siblings, 1 reply; 16+ messages in thread
From: Ross Zwisler @ 2017-07-21 22:39 UTC (permalink / raw)
  To: Andrew Morton, linux-kernel
  Cc: Ross Zwisler, Darrick J. Wong, Theodore Ts'o, Alexander Viro,
	Andreas Dilger, Christoph Hellwig, Dan Williams, Dave Chinner,
	David Airlie, Ingo Molnar, Inki Dae, Jan Kara, Jonathan Corbet,
	Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim, Kyungmin Park,
	Matthew Wilcox, Patrik Jakobsson, Rob Clark, Seung-Woo Kim,
	Steven Rostedt, Tomi Valkeinen, dri-devel, freedreno,
	linux-arm-kernel, linux-arm-msm, linux-doc, linux-ext4,
	linux-fsdevel, linux-mm, linux-nvdimm, linux-samsung-soc,
	linux-xfs

Now that we no longer insert struct page pointers in DAX radix trees the
page cache code no longer needs to know anything about DAX exceptional
entries.  Move all the DAX exceptional entry definitions from dax.h to
fs/dax.c.
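
For reference, a small example of how these definitions encode an entry;
this just restates what dax_radix_locked_entry() and dax_radix_order()
below compute (the concrete order value assumes x86_64 page sizes):

  /* A locked PMD zero-page entry for sector 0: */
  void *entry = dax_radix_locked_entry(0, RADIX_DAX_PMD | RADIX_DAX_ZERO_PAGE);
  /* dax_radix_order(entry) == PMD_SHIFT - PAGE_SHIFT (9 on x86_64) */
  /* dax_radix_sector(entry) == 0 */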

Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Suggested-by: Jan Kara <jack@suse.cz>
---
 fs/dax.c            | 34 ++++++++++++++++++++++++++++++++++
 include/linux/dax.h | 41 -----------------------------------------
 2 files changed, 34 insertions(+), 41 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 0e27d90..e7acc45 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -54,6 +54,40 @@ static int __init init_dax_wait_table(void)
 }
 fs_initcall(init_dax_wait_table);
 
+/*
+ * We use lowest available bit in exceptional entry for locking, one bit for
+ * the entry size (PMD) and two more to tell us if the entry is a zero page or
+ * an empty entry that is just used for locking.  In total four special bits.
+ *
+ * If the PMD bit isn't set the entry has size PAGE_SIZE, and if the ZERO_PAGE
+ * and EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
+ * block allocation.
+ */
+#define RADIX_DAX_SHIFT		(RADIX_TREE_EXCEPTIONAL_SHIFT + 4)
+#define RADIX_DAX_ENTRY_LOCK	(1 << RADIX_TREE_EXCEPTIONAL_SHIFT)
+#define RADIX_DAX_PMD		(1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 1))
+#define RADIX_DAX_ZERO_PAGE	(1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 2))
+#define RADIX_DAX_EMPTY		(1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 3))
+
+static unsigned long dax_radix_sector(void *entry)
+{
+	return (unsigned long)entry >> RADIX_DAX_SHIFT;
+}
+
+static void *dax_radix_locked_entry(sector_t sector, unsigned long flags)
+{
+	return (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY | flags |
+			((unsigned long)sector << RADIX_DAX_SHIFT) |
+			RADIX_DAX_ENTRY_LOCK);
+}
+
+static unsigned int dax_radix_order(void *entry)
+{
+	if ((unsigned long)entry & RADIX_DAX_PMD)
+		return PMD_SHIFT - PAGE_SHIFT;
+	return 0;
+}
+
 static int dax_is_pmd_entry(void *entry)
 {
 	return (unsigned long)entry & RADIX_DAX_PMD;
diff --git a/include/linux/dax.h b/include/linux/dax.h
index afa99bb..d0e3272 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -88,33 +88,6 @@ void dax_flush(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
 		size_t size);
 void dax_write_cache(struct dax_device *dax_dev, bool wc);
 
-/*
- * We use lowest available bit in exceptional entry for locking, one bit for
- * the entry size (PMD) and two more to tell us if the entry is a zero page or
- * an empty entry that is just used for locking.  In total four special bits.
- *
- * If the PMD bit isn't set the entry has size PAGE_SIZE, and if the ZERO_PAGE
- * and EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
- * block allocation.
- */
-#define RADIX_DAX_SHIFT	(RADIX_TREE_EXCEPTIONAL_SHIFT + 4)
-#define RADIX_DAX_ENTRY_LOCK (1 << RADIX_TREE_EXCEPTIONAL_SHIFT)
-#define RADIX_DAX_PMD (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 1))
-#define RADIX_DAX_ZERO_PAGE (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 2))
-#define RADIX_DAX_EMPTY (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 3))
-
-static inline unsigned long dax_radix_sector(void *entry)
-{
-	return (unsigned long)entry >> RADIX_DAX_SHIFT;
-}
-
-static inline void *dax_radix_locked_entry(sector_t sector, unsigned long flags)
-{
-	return (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY | flags |
-			((unsigned long)sector << RADIX_DAX_SHIFT) |
-			RADIX_DAX_ENTRY_LOCK);
-}
-
 ssize_t dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter,
 		const struct iomap_ops *ops);
 int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
@@ -136,20 +109,6 @@ static inline int __dax_zero_page_range(struct block_device *bdev,
 }
 #endif
 
-#ifdef CONFIG_FS_DAX_PMD
-static inline unsigned int dax_radix_order(void *entry)
-{
-	if ((unsigned long)entry & RADIX_DAX_PMD)
-		return PMD_SHIFT - PAGE_SHIFT;
-	return 0;
-}
-#else
-static inline unsigned int dax_radix_order(void *entry)
-{
-	return 0;
-}
-#endif
-
 static inline bool dax_mapping(struct address_space *mapping)
 {
 	return mapping->host && IS_DAX(mapping->host);
-- 
2.9.4


^ permalink raw reply related	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed()
  2017-07-21 22:39 ` [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed() Ross Zwisler
@ 2017-07-22 16:21   ` Dan Williams
  2017-07-24 11:15     ` Jan Kara
  2017-07-24 11:25   ` Jan Kara
  1 sibling, 1 reply; 16+ messages in thread
From: Dan Williams @ 2017-07-22 16:21 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Andrew Morton, Linux Kernel Mailing List, Jan Kara, linux-doc,
	David Airlie, Dave Chinner, dri-devel, linux-mm, Andreas Dilger,
	Patrik Jakobsson, Christoph Hellwig, linux-samsung-soc,
	Joonyoung Shim, Darrick J. Wong, Tomi Valkeinen, Kyungmin Park,
	Krzysztof Kozlowski, Ingo Molnar, ext4 hackers, Matthew Wilcox,
	linux-arm-msm, Steven Rostedt, Inki Dae, linux-nvdimm,
	Alexander Viro, linux-arm-kernel, Theodore Ts'o,
	Jonathan Corbet, Seung-Woo Kim, linux-xfs, Rob Clark, Kukjin Kim,
	linux-fsdevel, freedreno

On Fri, Jul 21, 2017 at 3:39 PM, Ross Zwisler
<ross.zwisler@linux.intel.com> wrote:
> To be able to use the common 4k zero page in DAX we need to have our PTE
> fault path look more like our PMD fault path where a PTE entry can be
> marked as dirty and writeable as it is first inserted, rather than waiting
> for a follow-up dax_pfn_mkwrite() => finish_mkwrite_fault() call.
>
> Right now we can rely on having a dax_pfn_mkwrite() call because we can
> distinguish between these two cases in do_wp_page():
>
>         case 1: 4k zero page => writable DAX storage
>         case 2: read-only DAX storage => writeable DAX storage
>
> This distinction is made via vm_normal_page().  vm_normal_page() returns
> false for the common 4k zero page, though, just as it does for DAX ptes.
> Instead of special casing the DAX + 4k zero page case, we will simplify our
> DAX PTE page fault sequence so that it matches our DAX PMD sequence, and
> get rid of the dax_pfn_mkwrite() helper.  We will instead use
> dax_iomap_fault() to handle write-protection faults.
>
> This means that insert_pfn() needs to follow the lead of insert_pfn_pmd()
> and allow us to pass in a 'mkwrite' flag.  If 'mkwrite' is set insert_pfn()
> will do the work that was previously done by wp_page_reuse() as part of the
> dax_pfn_mkwrite() call path.
>
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> ---
>  drivers/dax/device.c                    |  2 +-
>  drivers/gpu/drm/exynos/exynos_drm_gem.c |  3 ++-
>  drivers/gpu/drm/gma500/framebuffer.c    |  2 +-
>  drivers/gpu/drm/msm/msm_gem.c           |  3 ++-
>  drivers/gpu/drm/omapdrm/omap_gem.c      |  6 ++++--
>  drivers/gpu/drm/ttm/ttm_bo_vm.c         |  2 +-
>  fs/dax.c                                |  2 +-
>  include/linux/mm.h                      |  2 +-
>  mm/memory.c                             | 27 +++++++++++++++++++++------
>  9 files changed, 34 insertions(+), 15 deletions(-)
>
> diff --git a/drivers/dax/device.c b/drivers/dax/device.c
> index e9f3b3e..3973521 100644
> --- a/drivers/dax/device.c
> +++ b/drivers/dax/device.c
> @@ -273,7 +273,7 @@ static int __dev_dax_pte_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
>
>         pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
>
> -       rc = vm_insert_mixed(vmf->vma, vmf->address, pfn);
> +       rc = vm_insert_mixed(vmf->vma, vmf->address, pfn, false);

Ugh, I generally find bool flags unreadable. They place a tax on
jumping to function definition to recall what true and false mean. If
we want to go this 'add an argument' route can we at least add an enum
like:

enum {
    PTE_MKDIRTY,
    PTE_MKCLEAN,
};

...to differentiate the two cases?
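
With that, the call site above would read something like

    rc = vm_insert_mixed(vmf->vma, vmf->address, pfn, PTE_MKCLEAN);

which at least makes the intent visible without jumping to the prototype.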


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed()
  2017-07-22 16:21   ` Dan Williams
@ 2017-07-24 11:15     ` Jan Kara
  2017-07-24 15:13       ` Ross Zwisler
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Kara @ 2017-07-24 11:15 UTC (permalink / raw)
  To: Dan Williams
  Cc: Ross Zwisler, Andrew Morton, Linux Kernel Mailing List, Jan Kara,
	linux-doc, David Airlie, Dave Chinner, dri-devel, linux-mm,
	Andreas Dilger, Patrik Jakobsson, Christoph Hellwig,
	linux-samsung-soc, Joonyoung Shim, Darrick J. Wong,
	Tomi Valkeinen, Kyungmin Park, Krzysztof Kozlowski, Ingo Molnar,
	ext4 hackers, Matthew Wilcox, linux-arm-msm, Steven Rostedt,
	Inki Dae, linux-nvdimm, Alexander Viro, linux-arm-kernel,
	Theodore Ts'o, Jonathan Corbet, Seung-Woo Kim, linux-xfs,
	Rob Clark, Kukjin Kim, linux-fsdevel, freedreno

On Sat 22-07-17 09:21:31, Dan Williams wrote:
> On Fri, Jul 21, 2017 at 3:39 PM, Ross Zwisler
> <ross.zwisler@linux.intel.com> wrote:
> > To be able to use the common 4k zero page in DAX we need to have our PTE
> > fault path look more like our PMD fault path where a PTE entry can be
> > marked as dirty and writeable as it is first inserted, rather than waiting
> > for a follow-up dax_pfn_mkwrite() => finish_mkwrite_fault() call.
> >
> > Right now we can rely on having a dax_pfn_mkwrite() call because we can
> > distinguish between these two cases in do_wp_page():
> >
> >         case 1: 4k zero page => writable DAX storage
> >         case 2: read-only DAX storage => writeable DAX storage
> >
> > This distinction is made via vm_normal_page().  vm_normal_page() returns
> > false for the common 4k zero page, though, just as it does for DAX ptes.
> > Instead of special casing the DAX + 4k zero page case, we will simplify our
> > DAX PTE page fault sequence so that it matches our DAX PMD sequence, and
> > get rid of the dax_pfn_mkwrite() helper.  We will instead use
> > dax_iomap_fault() to handle write-protection faults.
> >
> > This means that insert_pfn() needs to follow the lead of insert_pfn_pmd()
> > and allow us to pass in a 'mkwrite' flag.  If 'mkwrite' is set insert_pfn()
> > will do the work that was previously done by wp_page_reuse() as part of the
> > dax_pfn_mkwrite() call path.
> >
> > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> > ---
> >  drivers/dax/device.c                    |  2 +-
> >  drivers/gpu/drm/exynos/exynos_drm_gem.c |  3 ++-
> >  drivers/gpu/drm/gma500/framebuffer.c    |  2 +-
> >  drivers/gpu/drm/msm/msm_gem.c           |  3 ++-
> >  drivers/gpu/drm/omapdrm/omap_gem.c      |  6 ++++--
> >  drivers/gpu/drm/ttm/ttm_bo_vm.c         |  2 +-
> >  fs/dax.c                                |  2 +-
> >  include/linux/mm.h                      |  2 +-
> >  mm/memory.c                             | 27 +++++++++++++++++++++------
> >  9 files changed, 34 insertions(+), 15 deletions(-)
> >
> > diff --git a/drivers/dax/device.c b/drivers/dax/device.c
> > index e9f3b3e..3973521 100644
> > --- a/drivers/dax/device.c
> > +++ b/drivers/dax/device.c
> > @@ -273,7 +273,7 @@ static int __dev_dax_pte_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
> >
> >         pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
> >
> > -       rc = vm_insert_mixed(vmf->vma, vmf->address, pfn);
> > +       rc = vm_insert_mixed(vmf->vma, vmf->address, pfn, false);
> 
> Ugh, I generally find bool flags unreadable. They place a tax on
> jumping to function definition to recall what true and false mean. If
> we want to go this 'add an argument' route can we at least add an enum
> like:
> 
> enum {
>     PTE_MKDIRTY,
>     PTE_MKCLEAN,
> };
> 
> ...to differentiate the two cases?

So the way I usually deal with this is to create e.g.:

__vm_insert_mixed() that takes the bool argument, have vm_insert_mixed()
pass false, and vm_insert_mixed_mkwrite() pass true. That way there's no
code duplication, old call sites can stay unchanged, and the naming clearly
says what's going on...
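
E.g. something like this (just a sketch, not even compile-tested):

int vm_insert_mixed(struct vm_area_struct *vma, unsigned long addr,
			pfn_t pfn)
{
	/* existing callers: plain insert, -EBUSY if a pte is already there */
	return __vm_insert_mixed(vma, addr, pfn, false);
}
EXPORT_SYMBOL(vm_insert_mixed);

int vm_insert_mixed_mkwrite(struct vm_area_struct *vma, unsigned long addr,
			pfn_t pfn)
{
	/* DAX write fault: insert and also mark the pte dirty and writeable */
	return __vm_insert_mixed(vma, addr, pfn, true);
}
EXPORT_SYMBOL(vm_insert_mixed_mkwrite);

where __vm_insert_mixed() is the current vm_insert_mixed() body made static,
with the extra bool passed down to insert_pfn().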

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed()
  2017-07-21 22:39 ` [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed() Ross Zwisler
  2017-07-22 16:21   ` Dan Williams
@ 2017-07-24 11:25   ` Jan Kara
  2017-07-24 15:23     ` Ross Zwisler
  1 sibling, 1 reply; 16+ messages in thread
From: Jan Kara @ 2017-07-24 11:25 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Andrew Morton, linux-kernel, Darrick J. Wong, Theodore Ts'o,
	Alexander Viro, Andreas Dilger, Christoph Hellwig, Dan Williams,
	Dave Chinner, David Airlie, Ingo Molnar, Inki Dae, Jan Kara,
	Jonathan Corbet, Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim,
	Kyungmin Park, Matthew Wilcox, Patrik Jakobsson, Rob Clark,
	Seung-Woo Kim, Steven Rostedt, Tomi Valkeinen, dri-devel,
	freedreno, linux-arm-kernel, linux-arm-msm, linux-doc,
	linux-ext4, linux-fsdevel, linux-mm, linux-nvdimm,
	linux-samsung-soc, linux-xfs

> @@ -1658,14 +1658,28 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
>  	if (!pte)
>  		goto out;
>  	retval = -EBUSY;
> -	if (!pte_none(*pte))
> -		goto out_unlock;
> +	if (!pte_none(*pte)) {
> +		if (mkwrite) {
> +			if (WARN_ON_ONCE(pte_pfn(*pte) != pfn_t_to_pfn(pfn)))

Is the WARN_ON_ONCE() really appropriate here? Your testcase with private
mappings has triggered this situation if I'm right...

Otherwise the patch looks good to me.

								Honza

-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 2/5] dax: relocate some dax functions
  2017-07-21 22:39 ` [PATCH v4 2/5] dax: relocate some dax functions Ross Zwisler
@ 2017-07-24 11:35   ` Jan Kara
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Kara @ 2017-07-24 11:35 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Andrew Morton, linux-kernel, Darrick J. Wong, Theodore Ts'o,
	Alexander Viro, Andreas Dilger, Christoph Hellwig, Dan Williams,
	Dave Chinner, David Airlie, Ingo Molnar, Inki Dae, Jan Kara,
	Jonathan Corbet, Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim,
	Kyungmin Park, Matthew Wilcox, Patrik Jakobsson, Rob Clark,
	Seung-Woo Kim, Steven Rostedt, Tomi Valkeinen, dri-devel,
	freedreno, linux-arm-kernel, linux-arm-msm, linux-doc,
	linux-ext4, linux-fsdevel, linux-mm, linux-nvdimm,
	linux-samsung-soc, linux-xfs

On Fri 21-07-17 16:39:52, Ross Zwisler wrote:
> dax_load_hole() will soon need to call dax_insert_mapping_entry(), so it
> needs to be moved lower in dax.c so the definition exists.
> 
> dax_wake_mapping_entry_waiter() will soon be removed from dax.h and be made
> static to dax.c, so we need to move its definition above all its callers.
> 
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza


> ---
>  fs/dax.c | 138 +++++++++++++++++++++++++++++++--------------------------------
>  1 file changed, 69 insertions(+), 69 deletions(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index c844a51..779dc5e 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -121,6 +121,31 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
>  }
>  
>  /*
> + * We do not necessarily hold the mapping->tree_lock when we call this
> + * function so it is possible that 'entry' is no longer a valid item in the
> + * radix tree.  This is okay because all we really need to do is to find the
> + * correct waitqueue where tasks might be waiting for that old 'entry' and
> + * wake them.
> + */
> +void dax_wake_mapping_entry_waiter(struct address_space *mapping,
> +		pgoff_t index, void *entry, bool wake_all)
> +{
> +	struct exceptional_entry_key key;
> +	wait_queue_head_t *wq;
> +
> +	wq = dax_entry_waitqueue(mapping, index, entry, &key);
> +
> +	/*
> +	 * Checking for locked entry and prepare_to_wait_exclusive() happens
> +	 * under mapping->tree_lock, ditto for entry handling in our callers.
> +	 * So at this point all tasks that could have seen our entry locked
> +	 * must be in the waitqueue and the following check will see them.
> +	 */
> +	if (waitqueue_active(wq))
> +		__wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
> +}
> +
> +/*
>   * Check whether the given slot is locked. The function must be called with
>   * mapping->tree_lock held
>   */
> @@ -392,31 +417,6 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
>  	return entry;
>  }
>  
> -/*
> - * We do not necessarily hold the mapping->tree_lock when we call this
> - * function so it is possible that 'entry' is no longer a valid item in the
> - * radix tree.  This is okay because all we really need to do is to find the
> - * correct waitqueue where tasks might be waiting for that old 'entry' and
> - * wake them.
> - */
> -void dax_wake_mapping_entry_waiter(struct address_space *mapping,
> -		pgoff_t index, void *entry, bool wake_all)
> -{
> -	struct exceptional_entry_key key;
> -	wait_queue_head_t *wq;
> -
> -	wq = dax_entry_waitqueue(mapping, index, entry, &key);
> -
> -	/*
> -	 * Checking for locked entry and prepare_to_wait_exclusive() happens
> -	 * under mapping->tree_lock, ditto for entry handling in our callers.
> -	 * So at this point all tasks that could have seen our entry locked
> -	 * must be in the waitqueue and the following check will see them.
> -	 */
> -	if (waitqueue_active(wq))
> -		__wake_up(wq, TASK_NORMAL, wake_all ? 0 : 1, &key);
> -}
> -
>  static int __dax_invalidate_mapping_entry(struct address_space *mapping,
>  					  pgoff_t index, bool trunc)
>  {
> @@ -468,50 +468,6 @@ int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
>  	return __dax_invalidate_mapping_entry(mapping, index, false);
>  }
>  
> -/*
> - * The user has performed a load from a hole in the file.  Allocating
> - * a new page in the file would cause excessive storage usage for
> - * workloads with sparse files.  We allocate a page cache page instead.
> - * We'll kick it out of the page cache if it's ever written to,
> - * otherwise it will simply fall out of the page cache under memory
> - * pressure without ever having been dirtied.
> - */
> -static int dax_load_hole(struct address_space *mapping, void **entry,
> -			 struct vm_fault *vmf)
> -{
> -	struct inode *inode = mapping->host;
> -	struct page *page;
> -	int ret;
> -
> -	/* Hole page already exists? Return it...  */
> -	if (!radix_tree_exceptional_entry(*entry)) {
> -		page = *entry;
> -		goto finish_fault;
> -	}
> -
> -	/* This will replace locked radix tree entry with a hole page */
> -	page = find_or_create_page(mapping, vmf->pgoff,
> -				   vmf->gfp_mask | __GFP_ZERO);
> -	if (!page) {
> -		ret = VM_FAULT_OOM;
> -		goto out;
> -	}
> -
> -finish_fault:
> -	vmf->page = page;
> -	ret = finish_fault(vmf);
> -	vmf->page = NULL;
> -	*entry = page;
> -	if (!ret) {
> -		/* Grab reference for PTE that is now referencing the page */
> -		get_page(page);
> -		ret = VM_FAULT_NOPAGE;
> -	}
> -out:
> -	trace_dax_load_hole(inode, vmf, ret);
> -	return ret;
> -}
> -
>  static int copy_user_dax(struct block_device *bdev, struct dax_device *dax_dev,
>  		sector_t sector, size_t size, struct page *to,
>  		unsigned long vaddr)
> @@ -938,6 +894,50 @@ int dax_pfn_mkwrite(struct vm_fault *vmf)
>  }
>  EXPORT_SYMBOL_GPL(dax_pfn_mkwrite);
>  
> +/*
> + * The user has performed a load from a hole in the file.  Allocating
> + * a new page in the file would cause excessive storage usage for
> + * workloads with sparse files.  We allocate a page cache page instead.
> + * We'll kick it out of the page cache if it's ever written to,
> + * otherwise it will simply fall out of the page cache under memory
> + * pressure without ever having been dirtied.
> + */
> +static int dax_load_hole(struct address_space *mapping, void **entry,
> +			 struct vm_fault *vmf)
> +{
> +	struct inode *inode = mapping->host;
> +	struct page *page;
> +	int ret;
> +
> +	/* Hole page already exists? Return it...  */
> +	if (!radix_tree_exceptional_entry(*entry)) {
> +		page = *entry;
> +		goto finish_fault;
> +	}
> +
> +	/* This will replace locked radix tree entry with a hole page */
> +	page = find_or_create_page(mapping, vmf->pgoff,
> +				   vmf->gfp_mask | __GFP_ZERO);
> +	if (!page) {
> +		ret = VM_FAULT_OOM;
> +		goto out;
> +	}
> +
> +finish_fault:
> +	vmf->page = page;
> +	ret = finish_fault(vmf);
> +	vmf->page = NULL;
> +	*entry = page;
> +	if (!ret) {
> +		/* Grab reference for PTE that is now referencing the page */
> +		get_page(page);
> +		ret = VM_FAULT_NOPAGE;
> +	}
> +out:
> +	trace_dax_load_hole(inode, vmf, ret);
> +	return ret;
> +}
> +
>  static bool dax_range_is_aligned(struct block_device *bdev,
>  				 unsigned int offset, unsigned int length)
>  {
> -- 
> 2.9.4
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 3/5] dax: use common 4k zero page for dax mmap reads
  2017-07-21 22:39 ` [PATCH v4 3/5] dax: use common 4k zero page for dax mmap reads Ross Zwisler
@ 2017-07-24 11:46   ` Jan Kara
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Kara @ 2017-07-24 11:46 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Andrew Morton, linux-kernel, Darrick J. Wong, Theodore Ts'o,
	Alexander Viro, Andreas Dilger, Christoph Hellwig, Dan Williams,
	Dave Chinner, David Airlie, Ingo Molnar, Inki Dae, Jan Kara,
	Jonathan Corbet, Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim,
	Kyungmin Park, Matthew Wilcox, Patrik Jakobsson, Rob Clark,
	Seung-Woo Kim, Steven Rostedt, Tomi Valkeinen, dri-devel,
	freedreno, linux-arm-kernel, linux-arm-msm, linux-doc,
	linux-ext4, linux-fsdevel, linux-mm, linux-nvdimm,
	linux-samsung-soc, linux-xfs

On Fri 21-07-17 16:39:53, Ross Zwisler wrote:
> When servicing mmap() reads from file holes the current DAX code allocates
> a page cache page of all zeroes and places the struct page pointer in the
> mapping->page_tree radix tree.  This has three major drawbacks:
> 
> 1) It consumes memory unnecessarily.  For every 4k page that is read via a
> DAX mmap() over a hole, we allocate a new page cache page.  This means that
> if you read 1GiB worth of pages, you end up using 1GiB of zeroed memory.
> This is easily visible by looking at the overall memory consumption of the
> system or by looking at /proc/[pid]/smaps:
> 
> 	7f62e72b3000-7f63272b3000 rw-s 00000000 103:00 12   /root/dax/data
> 	Size:            1048576 kB
> 	Rss:             1048576 kB
> 	Pss:             1048576 kB
> 	Shared_Clean:          0 kB
> 	Shared_Dirty:          0 kB
> 	Private_Clean:   1048576 kB
> 	Private_Dirty:         0 kB
> 	Referenced:      1048576 kB
> 	Anonymous:             0 kB
> 	LazyFree:              0 kB
> 	AnonHugePages:         0 kB
> 	ShmemPmdMapped:        0 kB
> 	Shared_Hugetlb:        0 kB
> 	Private_Hugetlb:       0 kB
> 	Swap:                  0 kB
> 	SwapPss:               0 kB
> 	KernelPageSize:        4 kB
> 	MMUPageSize:           4 kB
> 	Locked:                0 kB
> 
> 2) It is slower than using a common zero page because each page fault has
> more work to do.  Instead of just inserting a common zero page we have to
> allocate a page cache page, zero it, and then insert it.  Here are the
> average latencies of dax_load_hole() as measured by ftrace on a random test
> box:
> 
> Old method, using zeroed page cache pages:	3.4 us
> New method, using the common 4k zero page:	0.8 us
> 
> This was the average latency over 1 GiB of sequential reads done by this
> simple fio script:
> 
>   [global]
>   size=1G
>   filename=/root/dax/data
>   fallocate=none
>   [io]
>   rw=read
>   ioengine=mmap
> 
> 3) The fact that we had to check for both DAX exceptional entries and for
> page cache pages in the radix tree made the DAX code more complex.
> 
> Solve these issues by following the lead of the DAX PMD code and using a
> common 4k zero page instead.  As with the PMD code we will now insert a DAX
> exceptional entry into the radix tree instead of a struct page pointer
> which allows us to remove all the special casing in the DAX code.
> 
> Note that we do still pretty aggressively check for regular pages in the
> DAX radix tree, especially where we take action based on the bits set in
> the page.  If we ever find a regular page in our radix tree now that most
> likely means that someone besides DAX is inserting pages (which has
> happened lots of times in the past), and we want to find that out early and
> fail loudly.
> 
> This solution also removes the extra memory consumption.  Here is that same
> /proc/[pid]/smaps after 1GiB of reading from a hole with the new code:
> 
> 	7f2054a74000-7f2094a74000 rw-s 00000000 103:00 12   /root/dax/data
> 	Size:            1048576 kB
> 	Rss:                   0 kB
> 	Pss:                   0 kB
> 	Shared_Clean:          0 kB
> 	Shared_Dirty:          0 kB
> 	Private_Clean:         0 kB
> 	Private_Dirty:         0 kB
> 	Referenced:            0 kB
> 	Anonymous:             0 kB
> 	LazyFree:              0 kB
> 	AnonHugePages:         0 kB
> 	ShmemPmdMapped:        0 kB
> 	Shared_Hugetlb:        0 kB
> 	Private_Hugetlb:       0 kB
> 	Swap:                  0 kB
> 	SwapPss:               0 kB
> 	KernelPageSize:        4 kB
> 	MMUPageSize:           4 kB
> 	Locked:                0 kB
> 
> Overall system memory consumption is similarly improved.
> 
> Another major change is that we remove dax_pfn_mkwrite() from our fault
> flow, and instead rely on the page fault itself to make the PTE dirty and
> writeable.  The following description from the patch adding the
> vm_insert_mixed_mkwrite() call explains this a little more:
> 
> ***
>   To be able to use the common 4k zero page in DAX we need to have our PTE
>   fault path look more like our PMD fault path where a PTE entry can be
>   marked as dirty and writeable as it is first inserted, rather than
>   waiting for a follow-up dax_pfn_mkwrite() => finish_mkwrite_fault() call.
> 
>   Right now we can rely on having a dax_pfn_mkwrite() call because we can
>   distinguish between these two cases in do_wp_page():
> 
>           case 1: 4k zero page => writable DAX storage
>           case 2: read-only DAX storage => writeable DAX storage
> 
>   This distinction is made via vm_normal_page().  vm_normal_page()
>   returns false for the common 4k zero page, though, just as it does for
>   DAX ptes.  Instead of special casing the DAX + 4k zero page case, we will
>   simplify our DAX PTE page fault sequence so that it matches our DAX PMD
>   sequence, and get rid of the dax_pfn_mkwrite() helper.  We will instead
>   use dax_iomap_fault() to handle write-protection faults.
> 
>   This means that insert_pfn() needs to follow the lead of insert_pfn_pmd()
>   and allow us to pass in a 'mkwrite' flag.  If 'mkwrite' is set
>   insert_pfn() will do the work that was previously done by wp_page_reuse()
>   as part of the dax_pfn_mkwrite() call path.
> ***
> 
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>

The patch looks good to me. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

And I really like that we've got rid of these pagecache hole pages!

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 4/5] dax: remove DAX code from page_cache_tree_insert()
  2017-07-21 22:39 ` [PATCH v4 4/5] dax: remove DAX code from page_cache_tree_insert() Ross Zwisler
@ 2017-07-24 11:47   ` Jan Kara
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Kara @ 2017-07-24 11:47 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Andrew Morton, linux-kernel, Darrick J. Wong, Theodore Ts'o,
	Alexander Viro, Andreas Dilger, Christoph Hellwig, Dan Williams,
	Dave Chinner, David Airlie, Ingo Molnar, Inki Dae, Jan Kara,
	Jonathan Corbet, Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim,
	Kyungmin Park, Matthew Wilcox, Patrik Jakobsson, Rob Clark,
	Seung-Woo Kim, Steven Rostedt, Tomi Valkeinen, dri-devel,
	freedreno, linux-arm-kernel, linux-arm-msm, linux-doc,
	linux-ext4, linux-fsdevel, linux-mm, linux-nvdimm,
	linux-samsung-soc, linux-xfs

On Fri 21-07-17 16:39:54, Ross Zwisler wrote:
> Now that we no longer insert struct page pointers in DAX radix trees we can
> remove the special casing for DAX in page_cache_tree_insert().  This also
> allows us to make dax_wake_mapping_entry_waiter() local to fs/dax.c,
> removing it from dax.h.
> 
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> Suggested-by: Jan Kara <jack@suse.cz>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza


> ---
>  fs/dax.c            |  2 +-
>  include/linux/dax.h |  2 --
>  mm/filemap.c        | 13 ++-----------
>  3 files changed, 3 insertions(+), 14 deletions(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index fb0e4c1..0e27d90 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -127,7 +127,7 @@ static int wake_exceptional_entry_func(wait_queue_entry_t *wait, unsigned int mo
>   * correct waitqueue where tasks might be waiting for that old 'entry' and
>   * wake them.
>   */
> -void dax_wake_mapping_entry_waiter(struct address_space *mapping,
> +static void dax_wake_mapping_entry_waiter(struct address_space *mapping,
>  		pgoff_t index, void *entry, bool wake_all)
>  {
>  	struct exceptional_entry_key key;
> diff --git a/include/linux/dax.h b/include/linux/dax.h
> index 29cced8..afa99bb 100644
> --- a/include/linux/dax.h
> +++ b/include/linux/dax.h
> @@ -122,8 +122,6 @@ int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
>  int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
>  int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
>  				      pgoff_t index);
> -void dax_wake_mapping_entry_waiter(struct address_space *mapping,
> -		pgoff_t index, void *entry, bool wake_all);
>  
>  #ifdef CONFIG_FS_DAX
>  int __dax_zero_page_range(struct block_device *bdev,
> diff --git a/mm/filemap.c b/mm/filemap.c
> index a497024..1bf1265 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -130,17 +130,8 @@ static int page_cache_tree_insert(struct address_space *mapping,
>  			return -EEXIST;
>  
>  		mapping->nrexceptional--;
> -		if (!dax_mapping(mapping)) {
> -			if (shadowp)
> -				*shadowp = p;
> -		} else {
> -			/* DAX can replace empty locked entry with a hole */
> -			WARN_ON_ONCE(p !=
> -				dax_radix_locked_entry(0, RADIX_DAX_EMPTY));
> -			/* Wakeup waiters for exceptional entry lock */
> -			dax_wake_mapping_entry_waiter(mapping, page->index, p,
> -						      true);
> -		}
> +		if (shadowp)
> +			*shadowp = p;
>  	}
>  	__radix_tree_replace(&mapping->page_tree, node, slot, page,
>  			     workingset_update_node, mapping);
> -- 
> 2.9.4
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 5/5] dax: move all DAX radix tree defs to fs/dax.c
  2017-07-21 22:39 ` [PATCH v4 5/5] dax: move all DAX radix tree defs to fs/dax.c Ross Zwisler
@ 2017-07-24 11:48   ` Jan Kara
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Kara @ 2017-07-24 11:48 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Andrew Morton, linux-kernel, Darrick J. Wong, Theodore Ts'o,
	Alexander Viro, Andreas Dilger, Christoph Hellwig, Dan Williams,
	Dave Chinner, David Airlie, Ingo Molnar, Inki Dae, Jan Kara,
	Jonathan Corbet, Joonyoung Shim, Krzysztof Kozlowski, Kukjin Kim,
	Kyungmin Park, Matthew Wilcox, Patrik Jakobsson, Rob Clark,
	Seung-Woo Kim, Steven Rostedt, Tomi Valkeinen, dri-devel,
	freedreno, linux-arm-kernel, linux-arm-msm, linux-doc,
	linux-ext4, linux-fsdevel, linux-mm, linux-nvdimm,
	linux-samsung-soc, linux-xfs

On Fri 21-07-17 16:39:55, Ross Zwisler wrote:
> Now that we no longer insert struct page pointers in DAX radix trees the
> page cache code no longer needs to know anything about DAX exceptional
> entries.  Move all the DAX exceptional entry definitions from dax.h to
> fs/dax.c.
> 
> Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> Suggested-by: Jan Kara <jack@suse.cz>

Looks good. You can add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza


> ---
>  fs/dax.c            | 34 ++++++++++++++++++++++++++++++++++
>  include/linux/dax.h | 41 -----------------------------------------
>  2 files changed, 34 insertions(+), 41 deletions(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index 0e27d90..e7acc45 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -54,6 +54,40 @@ static int __init init_dax_wait_table(void)
>  }
>  fs_initcall(init_dax_wait_table);
>  
> +/*
> + * We use lowest available bit in exceptional entry for locking, one bit for
> + * the entry size (PMD) and two more to tell us if the entry is a zero page or
> + * an empty entry that is just used for locking.  In total four special bits.
> + *
> + * If the PMD bit isn't set the entry has size PAGE_SIZE, and if the ZERO_PAGE
> + * and EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
> + * block allocation.
> + */
> +#define RADIX_DAX_SHIFT		(RADIX_TREE_EXCEPTIONAL_SHIFT + 4)
> +#define RADIX_DAX_ENTRY_LOCK	(1 << RADIX_TREE_EXCEPTIONAL_SHIFT)
> +#define RADIX_DAX_PMD		(1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 1))
> +#define RADIX_DAX_ZERO_PAGE	(1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 2))
> +#define RADIX_DAX_EMPTY		(1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 3))
> +
> +static unsigned long dax_radix_sector(void *entry)
> +{
> +	return (unsigned long)entry >> RADIX_DAX_SHIFT;
> +}
> +
> +static void *dax_radix_locked_entry(sector_t sector, unsigned long flags)
> +{
> +	return (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY | flags |
> +			((unsigned long)sector << RADIX_DAX_SHIFT) |
> +			RADIX_DAX_ENTRY_LOCK);
> +}
> +
> +static unsigned int dax_radix_order(void *entry)
> +{
> +	if ((unsigned long)entry & RADIX_DAX_PMD)
> +		return PMD_SHIFT - PAGE_SHIFT;
> +	return 0;
> +}
> +
>  static int dax_is_pmd_entry(void *entry)
>  {
>  	return (unsigned long)entry & RADIX_DAX_PMD;
> diff --git a/include/linux/dax.h b/include/linux/dax.h
> index afa99bb..d0e3272 100644
> --- a/include/linux/dax.h
> +++ b/include/linux/dax.h
> @@ -88,33 +88,6 @@ void dax_flush(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
>  		size_t size);
>  void dax_write_cache(struct dax_device *dax_dev, bool wc);
>  
> -/*
> - * We use lowest available bit in exceptional entry for locking, one bit for
> - * the entry size (PMD) and two more to tell us if the entry is a zero page or
> - * an empty entry that is just used for locking.  In total four special bits.
> - *
> - * If the PMD bit isn't set the entry has size PAGE_SIZE, and if the ZERO_PAGE
> - * and EMPTY bits aren't set the entry is a normal DAX entry with a filesystem
> - * block allocation.
> - */
> -#define RADIX_DAX_SHIFT	(RADIX_TREE_EXCEPTIONAL_SHIFT + 4)
> -#define RADIX_DAX_ENTRY_LOCK (1 << RADIX_TREE_EXCEPTIONAL_SHIFT)
> -#define RADIX_DAX_PMD (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 1))
> -#define RADIX_DAX_ZERO_PAGE (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 2))
> -#define RADIX_DAX_EMPTY (1 << (RADIX_TREE_EXCEPTIONAL_SHIFT + 3))
> -
> -static inline unsigned long dax_radix_sector(void *entry)
> -{
> -	return (unsigned long)entry >> RADIX_DAX_SHIFT;
> -}
> -
> -static inline void *dax_radix_locked_entry(sector_t sector, unsigned long flags)
> -{
> -	return (void *)(RADIX_TREE_EXCEPTIONAL_ENTRY | flags |
> -			((unsigned long)sector << RADIX_DAX_SHIFT) |
> -			RADIX_DAX_ENTRY_LOCK);
> -}
> -
>  ssize_t dax_iomap_rw(struct kiocb *iocb, struct iov_iter *iter,
>  		const struct iomap_ops *ops);
>  int dax_iomap_fault(struct vm_fault *vmf, enum page_entry_size pe_size,
> @@ -136,20 +109,6 @@ static inline int __dax_zero_page_range(struct block_device *bdev,
>  }
>  #endif
>  
> -#ifdef CONFIG_FS_DAX_PMD
> -static inline unsigned int dax_radix_order(void *entry)
> -{
> -	if ((unsigned long)entry & RADIX_DAX_PMD)
> -		return PMD_SHIFT - PAGE_SHIFT;
> -	return 0;
> -}
> -#else
> -static inline unsigned int dax_radix_order(void *entry)
> -{
> -	return 0;
> -}
> -#endif
> -
>  static inline bool dax_mapping(struct address_space *mapping)
>  {
>  	return mapping->host && IS_DAX(mapping->host);
> -- 
> 2.9.4
> 
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed()
  2017-07-24 11:15     ` Jan Kara
@ 2017-07-24 15:13       ` Ross Zwisler
  0 siblings, 0 replies; 16+ messages in thread
From: Ross Zwisler @ 2017-07-24 15:13 UTC (permalink / raw)
  To: Jan Kara
  Cc: Dan Williams, Ross Zwisler, Andrew Morton,
	Linux Kernel Mailing List, linux-doc, David Airlie, Dave Chinner,
	dri-devel, linux-mm, Andreas Dilger, Patrik Jakobsson,
	Christoph Hellwig, linux-samsung-soc, Joonyoung Shim,
	Darrick J. Wong, Tomi Valkeinen, Kyungmin Park,
	Krzysztof Kozlowski, Ingo Molnar, ext4 hackers, Matthew Wilcox,
	linux-arm-msm, Steven Rostedt, Inki Dae, linux-nvdimm,
	Alexander Viro, linux-arm-kernel, Theodore Ts'o,
	Jonathan Corbet, Seung-Woo Kim, linux-xfs, Rob Clark, Kukjin Kim,
	linux-fsdevel, freedreno

On Mon, Jul 24, 2017 at 01:15:31PM +0200, Jan Kara wrote:
> On Sat 22-07-17 09:21:31, Dan Williams wrote:
> > On Fri, Jul 21, 2017 at 3:39 PM, Ross Zwisler
> > <ross.zwisler@linux.intel.com> wrote:
> > > To be able to use the common 4k zero page in DAX we need to have our PTE
> > > fault path look more like our PMD fault path where a PTE entry can be
> > > marked as dirty and writeable as it is first inserted, rather than waiting
> > > for a follow-up dax_pfn_mkwrite() => finish_mkwrite_fault() call.
> > >
> > > Right now we can rely on having a dax_pfn_mkwrite() call because we can
> > > distinguish between these two cases in do_wp_page():
> > >
> > >         case 1: 4k zero page => writable DAX storage
> > >         case 2: read-only DAX storage => writeable DAX storage
> > >
> > > This distinction is made via vm_normal_page().  vm_normal_page() returns
> > > false for the common 4k zero page, though, just as it does for DAX ptes.
> > > Instead of special casing the DAX + 4k zero page case, we will simplify our
> > > DAX PTE page fault sequence so that it matches our DAX PMD sequence, and
> > > get rid of the dax_pfn_mkwrite() helper.  We will instead use
> > > dax_iomap_fault() to handle write-protection faults.
> > >
> > > This means that insert_pfn() needs to follow the lead of insert_pfn_pmd()
> > > and allow us to pass in a 'mkwrite' flag.  If 'mkwrite' is set insert_pfn()
> > > will do the work that was previously done by wp_page_reuse() as part of the
> > > dax_pfn_mkwrite() call path.
> > >
> > > Signed-off-by: Ross Zwisler <ross.zwisler@linux.intel.com>
> > > ---
> > >  drivers/dax/device.c                    |  2 +-
> > >  drivers/gpu/drm/exynos/exynos_drm_gem.c |  3 ++-
> > >  drivers/gpu/drm/gma500/framebuffer.c    |  2 +-
> > >  drivers/gpu/drm/msm/msm_gem.c           |  3 ++-
> > >  drivers/gpu/drm/omapdrm/omap_gem.c      |  6 ++++--
> > >  drivers/gpu/drm/ttm/ttm_bo_vm.c         |  2 +-
> > >  fs/dax.c                                |  2 +-
> > >  include/linux/mm.h                      |  2 +-
> > >  mm/memory.c                             | 27 +++++++++++++++++++++------
> > >  9 files changed, 34 insertions(+), 15 deletions(-)
> > >
> > > diff --git a/drivers/dax/device.c b/drivers/dax/device.c
> > > index e9f3b3e..3973521 100644
> > > --- a/drivers/dax/device.c
> > > +++ b/drivers/dax/device.c
> > > @@ -273,7 +273,7 @@ static int __dev_dax_pte_fault(struct dev_dax *dev_dax, struct vm_fault *vmf)
> > >
> > >         pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
> > >
> > > -       rc = vm_insert_mixed(vmf->vma, vmf->address, pfn);
> > > +       rc = vm_insert_mixed(vmf->vma, vmf->address, pfn, false);
> > 
> > Ugh, I generally find bool flags unreadable. They place a tax on
> > jumping to function definition to recall what true and false mean. If
> > we want to go this 'add an argument' route can we at least add an enum
> > like:
> > 
> > enum {
> >     PTE_MKDIRTY,
> >     PTE_MKCLEAN,
> > };
> > 
> > ...to differentiate the two cases?
> 
> So the way I usually deal with this is to create e.g.:
> 
> __vm_insert_mixed() that takes the bool argument, have vm_insert_mixed()
> pass false, and vm_insert_mixed_mkwrite() pass true. That way there's no
> code duplication, old call sites can stay unchanged, and the naming clearly
> says what's going on...

Ah, that does seem cleaner.  I'll try that for v5.


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed()
  2017-07-24 11:25   ` Jan Kara
@ 2017-07-24 15:23     ` Ross Zwisler
  2017-07-24 15:59       ` Jan Kara
  0 siblings, 1 reply; 16+ messages in thread
From: Ross Zwisler @ 2017-07-24 15:23 UTC (permalink / raw)
  To: Jan Kara
  Cc: Ross Zwisler, Andrew Morton, linux-kernel, Darrick J. Wong,
	Theodore Ts'o, Alexander Viro, Andreas Dilger,
	Christoph Hellwig, Dan Williams, Dave Chinner, David Airlie,
	Ingo Molnar, Inki Dae, Jonathan Corbet, Joonyoung Shim,
	Krzysztof Kozlowski, Kukjin Kim, Kyungmin Park, Matthew Wilcox,
	Patrik Jakobsson, Rob Clark, Seung-Woo Kim, Steven Rostedt,
	Tomi Valkeinen, dri-devel, freedreno, linux-arm-kernel,
	linux-arm-msm, linux-doc, linux-ext4, linux-fsdevel, linux-mm,
	linux-nvdimm, linux-samsung-soc, linux-xfs

On Mon, Jul 24, 2017 at 01:25:30PM +0200, Jan Kara wrote:
> > @@ -1658,14 +1658,28 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
> >  	if (!pte)
> >  		goto out;
> >  	retval = -EBUSY;
> > -	if (!pte_none(*pte))
> > -		goto out_unlock;
> > +	if (!pte_none(*pte)) {
> > +		if (mkwrite) {
> > +			if (WARN_ON_ONCE(pte_pfn(*pte) != pfn_t_to_pfn(pfn)))
> 
> Is the WARN_ON_ONCE() really appropriate here? Your testcase with private
> mappings has triggered this situation if I'm right...

Yep, I think this WARN_ON_ONCE() is correct.  The test with private mappings
had collisions between read-only DAX mappings which were being faulted in via
insert_pfn(), and read/write COW page cache mappings which were being faulted
in by wp_page_copy().

I was hitting a false-positive warning when I had the WARN_ON_ONCE() in
insert_pfn() outside of the mkwrite case, i.e.:

	if (!pte_none(*pte)) {
		if (WARN_ON_ONCE(pte_pfn(*pte) != pfn_t_to_pfn(pfn)))
			goto out_unlock;
		if (mkwrite) {
			entry = *pte;
			goto out_mkwrite;
		} else
			goto out_unlock;
	}

This was triggering when one thread was faulting in a read-only DAX mapping
when another thread had already faulted in a read-write COW page cache page.

The patches I sent out have the warning in the mkwrite case, which would mean
that we were getting a fault for a read/write PTE in insert_pfn() and the PFN
didn't match what was already in the PTE.

This can't ever happen in the private mapping case because we will never
install a read/write PTE for normal storage, only for COW page cache pages.
Essentially I don't think we should ever be able to hit this warning, and if
we do I'd like to get the bug report so that I can track down how it was
happening and make sure that it's safe.  It is in the mkwrite path of
insert_pfn() which is currently only used by the DAX code.

Does that make sense to you, or would you recommend leaving it out?  (If so,
why?)
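
For reference, the hunk as posted keeps the check inside the mkwrite branch,
roughly:

	if (!pte_none(*pte)) {
		if (mkwrite) {
			if (WARN_ON_ONCE(pte_pfn(*pte) != pfn_t_to_pfn(pfn)))
				goto out_unlock;
			entry = *pte;
			goto out_mkwrite;
		} else
			goto out_unlock;
	}

so the warning can only fire when a pte is already present whose pfn
disagrees with the pfn we are inserting with mkwrite set.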


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed()
  2017-07-24 15:23     ` Ross Zwisler
@ 2017-07-24 15:59       ` Jan Kara
  0 siblings, 0 replies; 16+ messages in thread
From: Jan Kara @ 2017-07-24 15:59 UTC (permalink / raw)
  To: Ross Zwisler
  Cc: Jan Kara, Andrew Morton, linux-kernel, Darrick J. Wong,
	Theodore Ts'o, Alexander Viro, Andreas Dilger,
	Christoph Hellwig, Dan Williams, Dave Chinner, David Airlie,
	Ingo Molnar, Inki Dae, Jonathan Corbet, Joonyoung Shim,
	Krzysztof Kozlowski, Kukjin Kim, Kyungmin Park, Matthew Wilcox,
	Patrik Jakobsson, Rob Clark, Seung-Woo Kim, Steven Rostedt,
	Tomi Valkeinen, dri-devel, freedreno, linux-arm-kernel,
	linux-arm-msm, linux-doc, linux-ext4, linux-fsdevel, linux-mm,
	linux-nvdimm, linux-samsung-soc, linux-xfs

On Mon 24-07-17 09:23:57, Ross Zwisler wrote:
> On Mon, Jul 24, 2017 at 01:25:30PM +0200, Jan Kara wrote:
> > > @@ -1658,14 +1658,28 @@ static int insert_pfn(struct vm_area_struct *vma, unsigned long addr,
> > >  	if (!pte)
> > >  		goto out;
> > >  	retval = -EBUSY;
> > > -	if (!pte_none(*pte))
> > > -		goto out_unlock;
> > > +	if (!pte_none(*pte)) {
> > > +		if (mkwrite) {
> > > +			if (WARN_ON_ONCE(pte_pfn(*pte) != pfn_t_to_pfn(pfn)))
> > 
> > Is the WARN_ON_ONCE() really appropriate here? Your testcase with private
> > mappings has triggered this situation if I'm right...
> 
> Yep, I think this WARN_ON_ONCE() is correct.  The test with private mappings
> had collisions between read-only DAX mappings which were being faulted in via
> insert_pfn(), and read/write COW page cache mappings which were being faulted
> in by wp_page_copy().
> 
> I was hitting a false-positive warning when I had the WARN_ON_ONCE() in
> insert_pfn() outside of the mkwrite case, i.e.:
> 
> 	if (!pte_none(*pte)) {
> 		if (WARN_ON_ONCE(pte_pfn(*pte) != pfn_t_to_pfn(pfn)))
> 			goto out_unlock;
> 		if (mkwrite) {
> 			entry = *pte;
> 			goto out_mkwrite;
> 		} else
> 			goto out_unlock;
> 	}
> 
> This was triggering when one thread was faulting in a read-only DAX mapping
> when another thread had already faulted in a read-write COW page cache page.
> 
> The patches I sent out have the warning in the mkwrite case, which would mean
> that we were getting a fault for a read/write PTE in insert_pfn() and the PFN
> didn't match what was already in the PTE.
> 
> This can't ever happen in the private mapping case because we will never
> install a read/write PTE for normal storage, only for COW page cache pages.
> Essentially I don't think we should ever be able to hit this warning, and if
> we do I'd like to get the bug report so that I can track down how it was
> happening and make sure that it's safe.  It is in the mkwrite path of
> insert_pfn() which is currently only used by the DAX code.
> 
> Does that make sense to you, or would you recommend leaving it out?  (If so,
> why?)

Ah, OK, makes sense. So feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR


^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2017-07-24 15:59 UTC | newest]

Thread overview: 16+ messages
2017-07-21 22:39 [PATCH v4 0/5] DAX common 4k zero page Ross Zwisler
2017-07-21 22:39 ` [PATCH v4 1/5] mm: add mkwrite param to vm_insert_mixed() Ross Zwisler
2017-07-22 16:21   ` Dan Williams
2017-07-24 11:15     ` Jan Kara
2017-07-24 15:13       ` Ross Zwisler
2017-07-24 11:25   ` Jan Kara
2017-07-24 15:23     ` Ross Zwisler
2017-07-24 15:59       ` Jan Kara
2017-07-21 22:39 ` [PATCH v4 2/5] dax: relocate some dax functions Ross Zwisler
2017-07-24 11:35   ` Jan Kara
2017-07-21 22:39 ` [PATCH v4 3/5] dax: use common 4k zero page for dax mmap reads Ross Zwisler
2017-07-24 11:46   ` Jan Kara
2017-07-21 22:39 ` [PATCH v4 4/5] dax: remove DAX code from page_cache_tree_insert() Ross Zwisler
2017-07-24 11:47   ` Jan Kara
2017-07-21 22:39 ` [PATCH v4 5/5] dax: move all DAX radix tree defs to fs/dax.c Ross Zwisler
2017-07-24 11:48   ` Jan Kara
