linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax
@ 2021-04-08 12:04 Shiyang Ruan
  2021-04-08 12:04 ` [PATCH v4 1/7] fsdax: Introduce dax_iomap_cow_copy() Shiyang Ruan
                   ` (6 more replies)
  0 siblings, 7 replies; 17+ messages in thread
From: Shiyang Ruan @ 2021-04-08 12:04 UTC (permalink / raw)
  To: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel
  Cc: darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn

This patchset attempts to add CoW support for fsdax, taking XFS, which
has both reflink and fsdax features, as an example.

Changes from V3:
 - Take out the first 3 patches as a cleanup patchset[1], which was
    sent yesterday.
 - Fix usage of code in dax_iomap_cow_copy()
 - Add comments for macro definitions
 - Fix other code style problems and mistakes

Changes from V2:
 - Fix the mistake in iomap_apply2() and dax_dedupe_file_range_compare()
 - Add CoW judgement in dax_iomap_zero()
 - Fix other code style problems and mistakes

Changes from V1:
 - Factor some helper functions to simplify dax fault code
 - Introduce iomap_apply2() for dax_dedupe_file_range_compare()
 - Fix mistakes and other problems
 - Rebased on v5.11

One of the key mechanisms that needs to be implemented in fsdax is CoW:
copy the data from the srcmap before we actually write data to the
destination iomap.  We only copy the ranges whose data will not be
changed by the write.

Another mechanism is range comparison.  In the page cache case,
readpage() is used to load on-disk data into the page cache so that the
data can be compared.  In the fsdax case, readpage() does not work, so
we need another way to compare data, based on direct access.
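
For illustration only (not part of this series): on a DAX mount, mmap()
gives direct load access to the backing memory, so a user-space analogue
of the comparison looks like the sketch below.  The file paths and
length are placeholders.

	/* Sketch: compare `len` bytes of two mapped files directly. */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		const size_t len = 4096;	/* example length */
		int fd1 = open("/mnt/pmem/a", O_RDONLY);	/* placeholder paths */
		int fd2 = open("/mnt/pmem/b", O_RDONLY);
		void *a, *b;

		if (fd1 < 0 || fd2 < 0)
			return 1;
		a = mmap(NULL, len, PROT_READ, MAP_SHARED, fd1, 0);
		b = mmap(NULL, len, PROT_READ, MAP_SHARED, fd2, 0);
		if (a == MAP_FAILED || b == MAP_FAILED)
			return 1;
		/* in-kernel fsdax gets the addresses via dax_direct_access() instead */
		printf("same: %d\n", !memcmp(a, b, len));
		munmap(a, len);
		munmap(b, len);
		close(fd1);
		close(fd2);
		return 0;
	}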

With these two mechanisms implemented in fsdax, we are able to make
reflink and fsdax work together in XFS.

Some of the patches are picked up from Goldwyn's patchset.  I made some
changes to adapt them to this patchset.


(Rebased on v5.12-rc5 and patchset[1])

[1]: https://lore.kernel.org/linux-xfs/20210407133823.828176-1-ruansy.fnst@fujitsu.com/
==

Shiyang Ruan (7):
  fsdax: Introduce dax_iomap_cow_copy()
  fsdax: Replace mmap entry in case of CoW
  fsdax: Add dax_iomap_cow_copy() for dax_iomap_zero
  iomap: Introduce iomap_apply2() for operations on two files
  fsdax: Dedup file range to use a compare function
  fs/xfs: Handle CoW for fsdax write() path
  fs/xfs: Add dedupe support for fsdax

 fs/dax.c               | 202 +++++++++++++++++++++++++++++++++++------
 fs/iomap/apply.c       |  52 +++++++++++
 fs/iomap/buffered-io.c |   2 +-
 fs/remap_range.c       |  45 +++++++--
 fs/xfs/xfs_bmap_util.c |   3 +-
 fs/xfs/xfs_file.c      |  29 ++++--
 fs/xfs/xfs_inode.c     |   8 +-
 fs/xfs/xfs_inode.h     |   1 +
 fs/xfs/xfs_iomap.c     |  58 +++++++++++-
 fs/xfs/xfs_iomap.h     |   4 +
 fs/xfs/xfs_iops.c      |   7 +-
 fs/xfs/xfs_reflink.c   |  17 ++--
 include/linux/dax.h    |   7 +-
 include/linux/fs.h     |  12 ++-
 include/linux/iomap.h  |   7 +-
 15 files changed, 393 insertions(+), 61 deletions(-)

-- 
2.31.0





* [PATCH v4 1/7] fsdax: Introduce dax_iomap_cow_copy()
  2021-04-08 12:04 [PATCH v4 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax Shiyang Ruan
@ 2021-04-08 12:04 ` Shiyang Ruan
  2021-04-08 21:53   ` Darrick J. Wong
  2021-04-08 12:04 ` [PATCH v4 2/7] fsdax: Replace mmap entry in case of CoW Shiyang Ruan
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Shiyang Ruan @ 2021-04-08 12:04 UTC (permalink / raw)
  To: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel
  Cc: darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn

In the case where the iomap is a write operation and the iomap is not
equal to the srcmap after iomap_begin(), we consider it a CoW operation.

The destination extent which the iomap indicates is a newly allocated
extent, so the data needs to be copied from the srcmap to it.  In
theory, it is better to copy only the head and tail ranges which lie
outside of the written area instead of copying the whole aligned range.
But in a dax page fault, the range is always aligned, so we have to
copy the whole range in that case.
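
A user-space sketch of the copy arithmetic described above (memcpy()
stands in for copy_mc_to_kernel(), ALIGN_SZ for the PAGE_SIZE/PMD_SIZE
alignment, and saddr/daddr are assumed to point at the start of the
aligned block containing pos):

	#include <string.h>

	#define ALIGN_SZ	4096UL			/* PAGE_SIZE or PMD_SIZE */
	#define ROUND_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

	void cow_copy_sketch(unsigned long pos, unsigned long length,
			     const char *saddr, char *daddr)
	{
		unsigned long head_off = pos & (ALIGN_SZ - 1);
		unsigned long end = pos + length;
		unsigned long pg_end = ROUND_UP(end, ALIGN_SZ);

		if (head_off == 0 && end == pg_end) {
			/* aligned fault path: copy the whole range */
			memcpy(daddr, saddr, length);
			return;
		}
		/* write path: copy only the edges, the write fills the middle */
		if (head_off)
			memcpy(daddr, saddr, head_off);
		if (end < pg_end)
			memcpy(daddr + head_off + length,
			       saddr + head_off + length, pg_end - end);
	}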

Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 fs/dax.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 77 insertions(+), 5 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index 8d7e4e2cc0fb..b4fd3813457a 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1038,6 +1038,61 @@ static int dax_iomap_direct_access(struct iomap *iomap, loff_t pos, size_t size,
 	return rc;
 }
 
+/**
+ * dax_iomap_cow_copy(): Copy the data from source to destination before write.
+ * @pos:	address to do copy from.
+ * @length:	size of copy operation.
+ * @align_size:	aligned w.r.t align_size (either PMD_SIZE or PAGE_SIZE)
+ * @srcmap:	iomap srcmap
+ * @daddr:	destination address to copy to.
+ *
+ * This can be called from two places. Either during DAX write fault, to copy
+ * the length size data to daddr. Or, while doing normal DAX write operation,
+ * dax_iomap_actor() might call this to do the copy of either start or end
+ * unaligned address. In this case the rest of the copy of aligned ranges is
+ * taken care by dax_iomap_actor() itself.
+ * Also, note DAX fault will always result in aligned pos and pos + length.
+ */
+static int dax_iomap_cow_copy(loff_t pos, loff_t length, size_t align_size,
+		struct iomap *srcmap, void *daddr)
+{
+	loff_t head_off = pos & (align_size - 1);
+	size_t size = ALIGN(head_off + length, align_size);
+	loff_t end = pos + length;
+	loff_t pg_end = round_up(end, align_size);
+	bool copy_all = head_off == 0 && end == pg_end;
+	void *saddr = 0;
+	int ret = 0;
+
+	ret = dax_iomap_direct_access(srcmap, pos, size, &saddr, NULL);
+	if (ret)
+		return ret;
+
+	if (copy_all) {
+		ret = copy_mc_to_kernel(daddr, saddr, length);
+		return ret ? -EIO : 0;
+	}
+
+	/* Copy the head part of the range.  Note: we pass offset as length. */
+	if (head_off) {
+		ret = copy_mc_to_kernel(daddr, saddr, head_off);
+		if (ret)
+			return -EIO;
+	}
+
+	/* Copy the tail part of the range */
+	if (end < pg_end) {
+		loff_t tail_off = head_off + length;
+		loff_t tail_len = pg_end - end;
+
+		ret = copy_mc_to_kernel(daddr + tail_off, saddr + tail_off,
+					tail_len);
+		if (ret)
+			return -EIO;
+	}
+	return 0;
+}
+
 /*
  * The user has performed a load from a hole in the file.  Allocating a new
  * page in the file would cause excessive storage usage for workloads with
@@ -1167,11 +1222,12 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 	struct dax_device *dax_dev = iomap->dax_dev;
 	struct iov_iter *iter = data;
 	loff_t end = pos + length, done = 0;
+	bool write = iov_iter_rw(iter) == WRITE;
 	ssize_t ret = 0;
 	size_t xfer;
 	int id;
 
-	if (iov_iter_rw(iter) == READ) {
+	if (!write) {
 		end = min(end, i_size_read(inode));
 		if (pos >= end)
 			return 0;
@@ -1180,7 +1236,8 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 			return iov_iter_zero(min(length, end - pos), iter);
 	}
 
-	if (WARN_ON_ONCE(iomap->type != IOMAP_MAPPED))
+	if (WARN_ON_ONCE(iomap->type != IOMAP_MAPPED &&
+			!(iomap->flags & IOMAP_F_SHARED)))
 		return -EIO;
 
 	/*
@@ -1219,6 +1276,13 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 			break;
 		}
 
+		if (write && srcmap->addr != iomap->addr) {
+			ret = dax_iomap_cow_copy(pos, length, PAGE_SIZE, srcmap,
+						 kaddr);
+			if (ret)
+				break;
+		}
+
 		map_len = PFN_PHYS(map_len);
 		kaddr += offset;
 		map_len -= offset;
@@ -1230,7 +1294,7 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		 * validated via access_ok() in either vfs_read() or
 		 * vfs_write(), depending on which operation we are doing.
 		 */
-		if (iov_iter_rw(iter) == WRITE)
+		if (write)
 			xfer = dax_copy_from_iter(dax_dev, pgoff, kaddr,
 					map_len, iter);
 		else
@@ -1382,6 +1446,7 @@ static vm_fault_t dax_fault_actor(struct vm_fault *vmf, pfn_t *pfnp,
 	unsigned long entry_flags = pmd ? DAX_PMD : 0;
 	int err = 0;
 	pfn_t pfn;
+	void *kaddr;
 
 	/* if we are reading UNWRITTEN and HOLE, return a hole. */
 	if (!write &&
@@ -1392,18 +1457,25 @@ static vm_fault_t dax_fault_actor(struct vm_fault *vmf, pfn_t *pfnp,
 			return dax_pmd_load_hole(xas, vmf, iomap, entry);
 	}
 
-	if (iomap->type != IOMAP_MAPPED) {
+	if (iomap->type != IOMAP_MAPPED && !(iomap->flags & IOMAP_F_SHARED)) {
 		WARN_ON_ONCE(1);
 		return pmd ? VM_FAULT_FALLBACK : VM_FAULT_SIGBUS;
 	}
 
-	err = dax_iomap_direct_access(iomap, pos, size, NULL, &pfn);
+	err = dax_iomap_direct_access(iomap, pos, size, &kaddr, &pfn);
 	if (err)
 		return pmd ? VM_FAULT_FALLBACK : dax_fault_return(err);
 
 	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn, entry_flags,
 				  write && !sync);
 
+	if (write &&
+	    srcmap->addr != IOMAP_HOLE && srcmap->addr != iomap->addr) {
+		err = dax_iomap_cow_copy(pos, size, size, srcmap, kaddr);
+		if (err)
+			return dax_fault_return(err);
+	}
+
 	if (sync)
 		return dax_fault_synchronous_pfnp(pfnp, pfn);
 
-- 
2.31.0





* [PATCH v4 2/7] fsdax: Replace mmap entry in case of CoW
  2021-04-08 12:04 [PATCH v4 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax Shiyang Ruan
  2021-04-08 12:04 ` [PATCH v4 1/7] fsdax: Introduce dax_iomap_cow_copy() Shiyang Ruan
@ 2021-04-08 12:04 ` Shiyang Ruan
  2021-04-08 22:11   ` Darrick J. Wong
  2021-04-08 12:04 ` [PATCH v4 3/7] fsdax: Add dax_iomap_cow_copy() for dax_iomap_zero Shiyang Ruan
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Shiyang Ruan @ 2021-04-08 12:04 UTC (permalink / raw)
  To: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel
  Cc: darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn, Goldwyn Rodrigues, Ritesh Harjani

We replace the existing entry with the newly allocated one in the case
of CoW.  Also, we mark the entry with PAGECACHE_TAG_TOWRITE so that
writeback marks this entry as write-protected.  This helps us with
snapshots, so that new write pagefaults after a snapshot trigger a CoW.

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Ritesh Harjani <riteshh@linux.ibm.com>
---
 fs/dax.c | 39 ++++++++++++++++++++++++++++-----------
 1 file changed, 28 insertions(+), 11 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index b4fd3813457a..e6c1354b27a8 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -722,6 +722,10 @@ static int copy_cow_page_dax(struct block_device *bdev, struct dax_device *dax_d
 	return 0;
 }
 
+/* DAX Insert Flag for the entry we insert */
+#define DAX_IF_DIRTY		(1 << 0)
+#define DAX_IF_COW		(1 << 1)
+
 /*
  * By this point grab_mapping_entry() has ensured that we have a locked entry
  * of the appropriate size so we don't have to worry about downgrading PMDs to
@@ -729,16 +733,19 @@ static int copy_cow_page_dax(struct block_device *bdev, struct dax_device *dax_d
  * already in the tree, we will skip the insertion and just dirty the PMD as
  * appropriate.
  */
-static void *dax_insert_entry(struct xa_state *xas,
-		struct address_space *mapping, struct vm_fault *vmf,
-		void *entry, pfn_t pfn, unsigned long flags, bool dirty)
+static void *dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
+		void *entry, pfn_t pfn, unsigned long flags,
+		unsigned int insert_flags)
 {
+	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
 	void *new_entry = dax_make_entry(pfn, flags);
+	bool dirty = insert_flags & DAX_IF_DIRTY;
+	bool cow = insert_flags & DAX_IF_COW;
 
 	if (dirty)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 
-	if (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE)) {
+	if (cow || (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE))) {
 		unsigned long index = xas->xa_index;
 		/* we are replacing a zero page with block mapping */
 		if (dax_is_pmd_entry(entry))
@@ -750,7 +757,7 @@ static void *dax_insert_entry(struct xa_state *xas,
 
 	xas_reset(xas);
 	xas_lock_irq(xas);
-	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
+	if (cow || dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
 		void *old;
 
 		dax_disassociate_entry(entry, mapping, false);
@@ -774,6 +781,9 @@ static void *dax_insert_entry(struct xa_state *xas,
 	if (dirty)
 		xas_set_mark(xas, PAGECACHE_TAG_DIRTY);
 
+	if (cow)
+		xas_set_mark(xas, PAGECACHE_TAG_TOWRITE);
+
 	xas_unlock_irq(xas);
 	return entry;
 }
@@ -1109,8 +1119,7 @@ static vm_fault_t dax_load_hole(struct xa_state *xas,
 	pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
 	vm_fault_t ret;
 
-	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn,
-			DAX_ZERO_PAGE, false);
+	*entry = dax_insert_entry(xas, vmf, *entry, pfn, DAX_ZERO_PAGE, 0);
 
 	ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
 	trace_dax_load_hole(inode, vmf, ret);
@@ -1137,8 +1146,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 		goto fallback;
 
 	pfn = page_to_pfn_t(zero_page);
-	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn,
-			DAX_PMD | DAX_ZERO_PAGE, false);
+	*entry = dax_insert_entry(xas, vmf, *entry, pfn,
+				  DAX_PMD | DAX_ZERO_PAGE, 0);
 
 	if (arch_needs_pgtable_deposit()) {
 		pgtable = pte_alloc_one(vma->vm_mm);
@@ -1444,6 +1453,7 @@ static vm_fault_t dax_fault_actor(struct vm_fault *vmf, pfn_t *pfnp,
 	bool write = vmf->flags & FAULT_FLAG_WRITE;
 	bool sync = dax_fault_is_synchronous(flags, vmf->vma, iomap);
 	unsigned long entry_flags = pmd ? DAX_PMD : 0;
+	unsigned int insert_flags = 0;
 	int err = 0;
 	pfn_t pfn;
 	void *kaddr;
@@ -1466,8 +1476,15 @@ static vm_fault_t dax_fault_actor(struct vm_fault *vmf, pfn_t *pfnp,
 	if (err)
 		return pmd ? VM_FAULT_FALLBACK : dax_fault_return(err);
 
-	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn, entry_flags,
-				  write && !sync);
+	if (write) {
+		if (!sync)
+			insert_flags |= DAX_IF_DIRTY;
+		if (iomap->flags & IOMAP_F_SHARED)
+			insert_flags |= DAX_IF_COW;
+	}
+
+	*entry = dax_insert_entry(xas, vmf, *entry, pfn, entry_flags,
+				  insert_flags);
 
 	if (write &&
 	    srcmap->addr != IOMAP_HOLE && srcmap->addr != iomap->addr) {
-- 
2.31.0





* [PATCH v4 3/7] fsdax: Add dax_iomap_cow_copy() for dax_iomap_zero
  2021-04-08 12:04 [PATCH v4 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax Shiyang Ruan
  2021-04-08 12:04 ` [PATCH v4 1/7] fsdax: Introduce dax_iomap_cow_copy() Shiyang Ruan
  2021-04-08 12:04 ` [PATCH v4 2/7] fsdax: Replace mmap entry in case of CoW Shiyang Ruan
@ 2021-04-08 12:04 ` Shiyang Ruan
  2021-04-08 12:04 ` [PATCH v4 4/7] iomap: Introduce iomap_apply2() for operations on two files Shiyang Ruan
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 17+ messages in thread
From: Shiyang Ruan @ 2021-04-08 12:04 UTC (permalink / raw)
  To: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel
  Cc: darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn, Ritesh Harjani

Punching a hole in a reflinked file needs dax_iomap_cow_copy() too;
otherwise, the data in the unaligned edges will not be correct.  So,
pass the srcmap to dax_iomap_zero() and replace memset() with
dax_iomap_cow_copy() when the iomap and srcmap differ.
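
A rough user-space illustration of why the copy is needed (the block
size and helper below are made up for the example; the real code copies
only the unaligned edges via dax_iomap_cow_copy()):

	#include <string.h>

	#define BLKSZ 4096

	/* Zero [offset, offset + size) of a block that may be shared (CoW). */
	void zero_subrange(char *dst_blk, const char *src_blk,
			   size_t offset, size_t size, int shared)
	{
		if (shared)
			/* bring the head and tail over from the source block */
			memcpy(dst_blk, src_blk, BLKSZ);
		memset(dst_blk + offset, 0, size);
	}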

Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
Reviewed-by: Ritesh Harjani <riteshh@linux.ibm.com>
---
 fs/dax.c               | 25 +++++++++++++++----------
 fs/iomap/buffered-io.c |  2 +-
 include/linux/dax.h    |  3 ++-
 3 files changed, 18 insertions(+), 12 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index e6c1354b27a8..fcd1e932716e 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1186,7 +1186,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
 }
 #endif /* CONFIG_FS_DAX_PMD */
 
-s64 dax_iomap_zero(loff_t pos, u64 length, struct iomap *iomap)
+s64 dax_iomap_zero(loff_t pos, u64 length, struct iomap *iomap,
+		struct iomap *srcmap)
 {
 	sector_t sector = iomap_sector(iomap, pos & PAGE_MASK);
 	pgoff_t pgoff;
@@ -1208,19 +1209,23 @@ s64 dax_iomap_zero(loff_t pos, u64 length, struct iomap *iomap)
 
 	if (page_aligned)
 		rc = dax_zero_page_range(iomap->dax_dev, pgoff, 1);
-	else
+	else {
 		rc = dax_direct_access(iomap->dax_dev, pgoff, 1, &kaddr, NULL);
-	if (rc < 0) {
-		dax_read_unlock(id);
-		return rc;
-	}
-
-	if (!page_aligned) {
-		memset(kaddr + offset, 0, size);
+		if (rc < 0)
+			goto out;
+		if (iomap->addr != srcmap->addr) {
+			rc = dax_iomap_cow_copy(offset, size, PAGE_SIZE, srcmap,
+						kaddr);
+			if (rc < 0)
+				goto out;
+		} else
+			memset(kaddr + offset, 0, size);
 		dax_flush(iomap->dax_dev, kaddr + offset, size);
 	}
+
+out:
 	dax_read_unlock(id);
-	return size;
+	return rc < 0 ? rc : size;
 }
 
 static loff_t
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 414769a6ad11..67936e9967b8 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -933,7 +933,7 @@ static loff_t iomap_zero_range_actor(struct inode *inode, loff_t pos,
 		s64 bytes;
 
 		if (IS_DAX(inode))
-			bytes = dax_iomap_zero(pos, length, iomap);
+			bytes = dax_iomap_zero(pos, length, iomap, srcmap);
 		else
 			bytes = iomap_zero(inode, pos, length, iomap, srcmap);
 		if (bytes < 0)
diff --git a/include/linux/dax.h b/include/linux/dax.h
index b52f084aa643..3275e01ed33d 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -237,7 +237,8 @@ vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf,
 int dax_delete_mapping_entry(struct address_space *mapping, pgoff_t index);
 int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
 				      pgoff_t index);
-s64 dax_iomap_zero(loff_t pos, u64 length, struct iomap *iomap);
+s64 dax_iomap_zero(loff_t pos, u64 length, struct iomap *iomap,
+		struct iomap *srcmap);
 static inline bool dax_mapping(struct address_space *mapping)
 {
 	return mapping->host && IS_DAX(mapping->host);
-- 
2.31.0





* [PATCH v4 4/7] iomap: Introduce iomap_apply2() for operations on two files
  2021-04-08 12:04 [PATCH v4 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax Shiyang Ruan
                   ` (2 preceding siblings ...)
  2021-04-08 12:04 ` [PATCH v4 3/7] fsdax: Add dax_iomap_cow_copy() for dax_iomap_zero Shiyang Ruan
@ 2021-04-08 12:04 ` Shiyang Ruan
  2021-04-08 22:26   ` Darrick J. Wong
  2021-04-08 12:04 ` [PATCH v4 5/7] fsdax: Dedup file range to use a compare function Shiyang Ruan
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 17+ messages in thread
From: Shiyang Ruan @ 2021-04-08 12:04 UTC (permalink / raw)
  To: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel
  Cc: darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn

Some operations, such as comparing a range of data in two files under
fsdax mode, require nested iomap_begin()/iomap_end() calls on two files.
Thus, we introduce iomap_apply2() to accept arguments for two files and
iomap_actor2_t for actions on two files.
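
For reference, the intended calling pattern is roughly the sketch below;
my_actor2 here is a placeholder for an iomap_actor2_t (the real user is
the dedupe compare added in the next patch):

	while (len > 0) {
		loff_t done;

		/* each call covers at most min(smap.length, dmap.length) */
		done = iomap_apply2(src, srcoff, dest, destoff, len, 0,
				    ops, data, my_actor2);
		if (done <= 0)
			break;
		len -= done;
		srcoff += done;
		destoff += done;
	}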

Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
---
 fs/iomap/apply.c      | 52 +++++++++++++++++++++++++++++++++++++++++++
 include/linux/iomap.h |  7 +++++-
 2 files changed, 58 insertions(+), 1 deletion(-)

diff --git a/fs/iomap/apply.c b/fs/iomap/apply.c
index 26ab6563181f..0493da5286ad 100644
--- a/fs/iomap/apply.c
+++ b/fs/iomap/apply.c
@@ -97,3 +97,55 @@ iomap_apply(struct inode *inode, loff_t pos, loff_t length, unsigned flags,
 
 	return written ? written : ret;
 }
+
+loff_t
+iomap_apply2(struct inode *ino1, loff_t pos1, struct inode *ino2, loff_t pos2,
+		loff_t length, unsigned int flags, const struct iomap_ops *ops,
+		void *data, iomap_actor2_t actor)
+{
+	struct iomap smap = { .type = IOMAP_HOLE };
+	struct iomap dmap = { .type = IOMAP_HOLE };
+	loff_t written = 0, ret, ret2 = 0;
+	loff_t len1 = length, len2, min_len;
+
+	ret = ops->iomap_begin(ino1, pos1, len1, flags, &smap, NULL);
+	if (ret)
+		goto out;
+	if (WARN_ON(smap.offset > pos1)) {
+		written = -EIO;
+		goto out_src;
+	}
+	if (WARN_ON(smap.length == 0)) {
+		written = -EIO;
+		goto out_src;
+	}
+	len2 = min_t(loff_t, len1, smap.length);
+
+	ret = ops->iomap_begin(ino2, pos2, len2, flags, &dmap, NULL);
+	if (ret)
+		goto out_src;
+	if (WARN_ON(dmap.offset > pos2)) {
+		written = -EIO;
+		goto out_dest;
+	}
+	if (WARN_ON(dmap.length == 0)) {
+		written = -EIO;
+		goto out_dest;
+	}
+	min_len = min_t(loff_t, len2, dmap.length);
+
+	written = actor(ino1, pos1, ino2, pos2, min_len, data, &smap, &dmap);
+
+out_dest:
+	if (ops->iomap_end)
+		ret2 = ops->iomap_end(ino2, pos2, len2,
+				      written > 0 ? written : 0, flags, &dmap);
+out_src:
+	if (ops->iomap_end)
+		ret = ops->iomap_end(ino1, pos1, len1,
+				     written > 0 ? written : 0, flags, &smap);
+out:
+	if (written)
+		return written;
+	return ret ?: ret2;
+}
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index d202fd2d0f91..9493c48bcc9c 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -150,10 +150,15 @@ struct iomap_ops {
  */
 typedef loff_t (*iomap_actor_t)(struct inode *inode, loff_t pos, loff_t len,
 		void *data, struct iomap *iomap, struct iomap *srcmap);
-
+typedef loff_t (*iomap_actor2_t)(struct inode *ino1, loff_t pos1,
+		struct inode *ino2, loff_t pos2, loff_t len, void *data,
+		struct iomap *smap, struct iomap *dmap);
 loff_t iomap_apply(struct inode *inode, loff_t pos, loff_t length,
 		unsigned flags, const struct iomap_ops *ops, void *data,
 		iomap_actor_t actor);
+loff_t iomap_apply2(struct inode *ino1, loff_t pos1, struct inode *ino2,
+		loff_t pos2, loff_t length, unsigned int flags,
+		const struct iomap_ops *ops, void *data, iomap_actor2_t actor);
 
 ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
 		const struct iomap_ops *ops);
-- 
2.31.0





* [PATCH v4 5/7] fsdax: Dedup file range to use a compare function
  2021-04-08 12:04 [PATCH v4 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax Shiyang Ruan
                   ` (3 preceding siblings ...)
  2021-04-08 12:04 ` [PATCH v4 4/7] iomap: Introduce iomap_apply2() for operations on two files Shiyang Ruan
@ 2021-04-08 12:04 ` Shiyang Ruan
  2021-04-08 22:33   ` Darrick J. Wong
  2021-04-08 12:04 ` [PATCH v4 6/7] fs/xfs: Handle CoW for fsdax write() path Shiyang Ruan
  2021-04-08 12:04 ` [PATCH v4 7/7] fs/xfs: Add dedupe support for fsdax Shiyang Ruan
  6 siblings, 1 reply; 17+ messages in thread
From: Shiyang Ruan @ 2021-04-08 12:04 UTC (permalink / raw)
  To: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel
  Cc: darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn, Goldwyn Rodrigues

With dax we cannot deal with readpage() etc.  So, we create a dax
comparison function which is similar to vfs_dedupe_file_range_compare(),
and introduce dax_remap_file_range_prep() for filesystems to use.
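
The compare is essentially the chunked loop sketched below in user-space
terms (chunk size is arbitrary; the kernel version gets the addresses of
each chunk through iomap_apply2() and dax_direct_access()):

	#include <stdbool.h>
	#include <string.h>

	#define CHUNK 4096

	bool ranges_same(const char *src, long srcoff,
			 const char *dest, long destoff, long len)
	{
		while (len > 0) {
			long n = len < CHUNK ? len : CHUNK;

			if (memcmp(src + srcoff, dest + destoff, n))
				return false;	/* mismatch: stop early */
			len -= n;
			srcoff += n;
			destoff += n;
		}
		return true;
	}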

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
---
 fs/dax.c             | 56 ++++++++++++++++++++++++++++++++++++++++++++
 fs/remap_range.c     | 45 ++++++++++++++++++++++++++++-------
 fs/xfs/xfs_reflink.c |  9 +++++--
 include/linux/dax.h  |  4 ++++
 include/linux/fs.h   | 12 ++++++----
 5 files changed, 112 insertions(+), 14 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index fcd1e932716e..ba924b6629a6 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1849,3 +1849,59 @@ vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf,
 	return dax_insert_pfn_mkwrite(vmf, pfn, order);
 }
 EXPORT_SYMBOL_GPL(dax_finish_sync_fault);
+
+static loff_t dax_range_compare_actor(struct inode *ino1, loff_t pos1,
+		struct inode *ino2, loff_t pos2, loff_t len, void *data,
+		struct iomap *smap, struct iomap *dmap)
+{
+	void *saddr, *daddr;
+	bool *same = data;
+	int ret;
+
+	if (smap->type == IOMAP_HOLE && dmap->type == IOMAP_HOLE) {
+		*same = true;
+		return len;
+	}
+
+	if (smap->type == IOMAP_HOLE || dmap->type == IOMAP_HOLE) {
+		*same = false;
+		return 0;
+	}
+
+	ret = dax_iomap_direct_access(smap, pos1, ALIGN(pos1 + len, PAGE_SIZE),
+				      &saddr, NULL);
+	if (ret < 0)
+		return -EIO;
+
+	ret = dax_iomap_direct_access(dmap, pos2, ALIGN(pos2 + len, PAGE_SIZE),
+				      &daddr, NULL);
+	if (ret < 0)
+		return -EIO;
+
+	*same = !memcmp(saddr, daddr, len);
+	return len;
+}
+
+int dax_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
+		struct inode *dest, loff_t destoff, loff_t len, bool *is_same,
+		const struct iomap_ops *ops)
+{
+	int id, ret = 0;
+
+	id = dax_read_lock();
+	while (len) {
+		ret = iomap_apply2(src, srcoff, dest, destoff, len, 0, ops,
+				   is_same, dax_range_compare_actor);
+		if (ret < 0 || !*is_same)
+			goto out;
+
+		len -= ret;
+		srcoff += ret;
+		destoff += ret;
+	}
+	ret = 0;
+out:
+	dax_read_unlock(id);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(dax_dedupe_file_range_compare);
diff --git a/fs/remap_range.c b/fs/remap_range.c
index e4a5fdd7ad7b..1fab0db49c68 100644
--- a/fs/remap_range.c
+++ b/fs/remap_range.c
@@ -14,6 +14,7 @@
 #include <linux/compat.h>
 #include <linux/mount.h>
 #include <linux/fs.h>
+#include <linux/dax.h>
 #include "internal.h"
 
 #include <linux/uaccess.h>
@@ -199,9 +200,9 @@ static void vfs_unlock_two_pages(struct page *page1, struct page *page2)
  * Compare extents of two files to see if they are the same.
  * Caller must have locked both inodes to prevent write races.
  */
-static int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
-					 struct inode *dest, loff_t destoff,
-					 loff_t len, bool *is_same)
+int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
+				  struct inode *dest, loff_t destoff,
+				  loff_t len, bool *is_same)
 {
 	loff_t src_poff;
 	loff_t dest_poff;
@@ -280,6 +281,7 @@ static int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
 out_error:
 	return error;
 }
+EXPORT_SYMBOL(vfs_dedupe_file_range_compare);
 
 /*
  * Check that the two inodes are eligible for cloning, the ranges make
@@ -289,9 +291,11 @@ static int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
  * If there's an error, then the usual negative error code is returned.
  * Otherwise returns 0 with *len set to the request length.
  */
-int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
-				  struct file *file_out, loff_t pos_out,
-				  loff_t *len, unsigned int remap_flags)
+static int
+__generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+				struct file *file_out, loff_t pos_out,
+				loff_t *len, unsigned int remap_flags,
+				const struct iomap_ops *ops)
 {
 	struct inode *inode_in = file_inode(file_in);
 	struct inode *inode_out = file_inode(file_out);
@@ -351,8 +355,15 @@ int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
 	if (remap_flags & REMAP_FILE_DEDUP) {
 		bool		is_same = false;
 
-		ret = vfs_dedupe_file_range_compare(inode_in, pos_in,
-				inode_out, pos_out, *len, &is_same);
+		if (!IS_DAX(inode_in) && !IS_DAX(inode_out))
+			ret = vfs_dedupe_file_range_compare(inode_in, pos_in,
+					inode_out, pos_out, *len, &is_same);
+		else if (IS_DAX(inode_in) && IS_DAX(inode_out) && ops)
+			ret = dax_dedupe_file_range_compare(inode_in, pos_in,
+					inode_out, pos_out, *len, &is_same,
+					ops);
+		else
+			return -EINVAL;
 		if (ret)
 			return ret;
 		if (!is_same)
@@ -370,6 +381,24 @@ int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
 
 	return ret;
 }
+
+int dax_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+			      struct file *file_out, loff_t pos_out,
+			      loff_t *len, unsigned int remap_flags,
+			      const struct iomap_ops *ops)
+{
+	return __generic_remap_file_range_prep(file_in, pos_in, file_out,
+					       pos_out, len, remap_flags, ops);
+}
+EXPORT_SYMBOL(dax_remap_file_range_prep);
+
+int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+				  struct file *file_out, loff_t pos_out,
+				  loff_t *len, unsigned int remap_flags)
+{
+	return __generic_remap_file_range_prep(file_in, pos_in, file_out,
+					       pos_out, len, remap_flags, NULL);
+}
 EXPORT_SYMBOL(generic_remap_file_range_prep);
 
 loff_t do_clone_file_range(struct file *file_in, loff_t pos_in,
diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
index 725c7d8e4438..9ef9f98725a2 100644
--- a/fs/xfs/xfs_reflink.c
+++ b/fs/xfs/xfs_reflink.c
@@ -1329,8 +1329,13 @@ xfs_reflink_remap_prep(
 	if (IS_DAX(inode_in) || IS_DAX(inode_out))
 		goto out_unlock;
 
-	ret = generic_remap_file_range_prep(file_in, pos_in, file_out, pos_out,
-			len, remap_flags);
+	if (!IS_DAX(inode_in))
+		ret = generic_remap_file_range_prep(file_in, pos_in, file_out,
+						    pos_out, len, remap_flags);
+	else
+		ret = dax_remap_file_range_prep(file_in, pos_in, file_out,
+						pos_out, len, remap_flags,
+						&xfs_read_iomap_ops);
 	if (ret || *len == 0)
 		goto out_unlock;
 
diff --git a/include/linux/dax.h b/include/linux/dax.h
index 3275e01ed33d..32e1c34349f2 100644
--- a/include/linux/dax.h
+++ b/include/linux/dax.h
@@ -239,6 +239,10 @@ int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
 				      pgoff_t index);
 s64 dax_iomap_zero(loff_t pos, u64 length, struct iomap *iomap,
 		struct iomap *srcmap);
+int dax_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
+				  struct inode *dest, loff_t destoff,
+				  loff_t len, bool *is_same,
+				  const struct iomap_ops *ops);
 static inline bool dax_mapping(struct address_space *mapping)
 {
 	return mapping->host && IS_DAX(mapping->host);
diff --git a/include/linux/fs.h b/include/linux/fs.h
index ec8f3ddf4a6a..28861e334dac 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -70,6 +70,7 @@ struct fsverity_info;
 struct fsverity_operations;
 struct fs_context;
 struct fs_parameter_spec;
+struct iomap_ops;
 
 extern void __init inode_init(void);
 extern void __init inode_init_early(void);
@@ -1989,10 +1990,13 @@ extern ssize_t vfs_copy_file_range(struct file *, loff_t , struct file *,
 extern ssize_t generic_copy_file_range(struct file *file_in, loff_t pos_in,
 				       struct file *file_out, loff_t pos_out,
 				       size_t len, unsigned int flags);
-extern int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
-					 struct file *file_out, loff_t pos_out,
-					 loff_t *count,
-					 unsigned int remap_flags);
+int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+				  struct file *file_out, loff_t pos_out,
+				  loff_t *count, unsigned int remap_flags);
+int dax_remap_file_range_prep(struct file *file_in, loff_t pos_in,
+			      struct file *file_out, loff_t pos_out,
+			      loff_t *len, unsigned int remap_flags,
+			      const struct iomap_ops *ops);
 extern loff_t do_clone_file_range(struct file *file_in, loff_t pos_in,
 				  struct file *file_out, loff_t pos_out,
 				  loff_t len, unsigned int remap_flags);
-- 
2.31.0





* [PATCH v4 6/7] fs/xfs: Handle CoW for fsdax write() path
  2021-04-08 12:04 [PATCH v4 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax Shiyang Ruan
                   ` (4 preceding siblings ...)
  2021-04-08 12:04 ` [PATCH v4 5/7] fsdax: Dedup file range to use a compare function Shiyang Ruan
@ 2021-04-08 12:04 ` Shiyang Ruan
  2021-04-08 22:40   ` Darrick J. Wong
  2021-04-08 12:04 ` [PATCH v4 7/7] fs/xfs: Add dedupe support for fsdax Shiyang Ruan
  6 siblings, 1 reply; 17+ messages in thread
From: Shiyang Ruan @ 2021-04-08 12:04 UTC (permalink / raw)
  To: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel
  Cc: darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn

In fsdax mode, WRITE and ZERO on a shared extent need CoW to be
performed.  After CoW, the newly allocated extents need to be remapped
to the file.  So, add an iomap_end for the dax write ops to do the
remapping work.

Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
---
 fs/xfs/xfs_bmap_util.c |  3 +--
 fs/xfs/xfs_file.c      |  9 +++----
 fs/xfs/xfs_iomap.c     | 58 +++++++++++++++++++++++++++++++++++++++++-
 fs/xfs/xfs_iomap.h     |  4 +++
 fs/xfs/xfs_iops.c      |  7 +++--
 fs/xfs/xfs_reflink.c   |  3 +--
 6 files changed, 69 insertions(+), 15 deletions(-)

diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
index e7d68318e6a5..9fcea33dd2c9 100644
--- a/fs/xfs/xfs_bmap_util.c
+++ b/fs/xfs/xfs_bmap_util.c
@@ -954,8 +954,7 @@ xfs_free_file_space(
 		return 0;
 	if (offset + len > XFS_ISIZE(ip))
 		len = XFS_ISIZE(ip) - offset;
-	error = iomap_zero_range(VFS_I(ip), offset, len, NULL,
-			&xfs_buffered_write_iomap_ops);
+	error = xfs_iomap_zero_range(VFS_I(ip), offset, len, NULL);
 	if (error)
 		return error;
 
diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index a007ca0711d9..5795d5d6f869 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -684,11 +684,8 @@ xfs_file_dax_write(
 	pos = iocb->ki_pos;
 
 	trace_xfs_file_dax_write(iocb, from);
-	ret = dax_iomap_rw(iocb, from, &xfs_direct_write_iomap_ops);
-	if (ret > 0 && iocb->ki_pos > i_size_read(inode)) {
-		i_size_write(inode, iocb->ki_pos);
-		error = xfs_setfilesize(ip, pos, ret);
-	}
+	ret = dax_iomap_rw(iocb, from, &xfs_dax_write_iomap_ops);
+
 out:
 	if (iolock)
 		xfs_iunlock(ip, iolock);
@@ -1309,7 +1306,7 @@ __xfs_filemap_fault(
 
 		ret = dax_iomap_fault(vmf, pe_size, &pfn, NULL,
 				(write_fault && !vmf->cow_page) ?
-				 &xfs_direct_write_iomap_ops :
+				 &xfs_dax_write_iomap_ops :
 				 &xfs_read_iomap_ops);
 		if (ret & VM_FAULT_NEEDDSYNC)
 			ret = dax_finish_sync_fault(vmf, pe_size, pfn);
diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
index e17ab7f42928..f818f989687b 100644
--- a/fs/xfs/xfs_iomap.c
+++ b/fs/xfs/xfs_iomap.c
@@ -760,7 +760,8 @@ xfs_direct_write_iomap_begin(
 
 		/* may drop and re-acquire the ilock */
 		error = xfs_reflink_allocate_cow(ip, &imap, &cmap, &shared,
-				&lockmode, flags & IOMAP_DIRECT);
+				&lockmode,
+				flags & IOMAP_DIRECT || IS_DAX(inode));
 		if (error)
 			goto out_unlock;
 		if (shared)
@@ -853,6 +854,38 @@ const struct iomap_ops xfs_direct_write_iomap_ops = {
 	.iomap_begin		= xfs_direct_write_iomap_begin,
 };
 
+static int
+xfs_dax_write_iomap_end(
+	struct inode		*inode,
+	loff_t			pos,
+	loff_t			length,
+	ssize_t			written,
+	unsigned int		flags,
+	struct iomap		*iomap)
+{
+	int			error = 0;
+	xfs_inode_t		*ip = XFS_I(inode);
+	bool			cow = xfs_is_cow_inode(ip);
+
+	if (pos + written > i_size_read(inode)) {
+		i_size_write(inode, pos + written);
+		error = xfs_setfilesize(ip, pos, written);
+		if (error && cow) {
+			xfs_reflink_cancel_cow_range(ip, pos, written, true);
+			return error;
+		}
+	}
+	if (cow)
+		error = xfs_reflink_end_cow(ip, pos, written);
+
+	return error;
+}
+
+const struct iomap_ops xfs_dax_write_iomap_ops = {
+	.iomap_begin		= xfs_direct_write_iomap_begin,
+	.iomap_end		= xfs_dax_write_iomap_end,
+};
+
 static int
 xfs_buffered_write_iomap_begin(
 	struct inode		*inode,
@@ -1314,3 +1347,26 @@ xfs_xattr_iomap_begin(
 const struct iomap_ops xfs_xattr_iomap_ops = {
 	.iomap_begin		= xfs_xattr_iomap_begin,
 };
+
+int
+xfs_iomap_zero_range(
+	struct inode		*inode,
+	loff_t			offset,
+	loff_t			len,
+	bool			*did_zero)
+{
+	return iomap_zero_range(inode, offset, len, did_zero,
+			IS_DAX(inode) ? &xfs_dax_write_iomap_ops :
+					&xfs_buffered_write_iomap_ops);
+}
+
+int
+xfs_iomap_truncate_page(
+	struct inode		*inode,
+	loff_t			pos,
+	bool			*did_zero)
+{
+	return iomap_truncate_page(inode, pos, did_zero,
+			IS_DAX(inode) ? &xfs_dax_write_iomap_ops :
+					&xfs_buffered_write_iomap_ops);
+}
diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h
index 7d3703556d0e..8adb2bf78a5a 100644
--- a/fs/xfs/xfs_iomap.h
+++ b/fs/xfs/xfs_iomap.h
@@ -14,6 +14,9 @@ struct xfs_bmbt_irec;
 int xfs_iomap_write_direct(struct xfs_inode *ip, xfs_fileoff_t offset_fsb,
 		xfs_fileoff_t count_fsb, struct xfs_bmbt_irec *imap);
 int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t, bool);
+int xfs_iomap_zero_range(struct inode *inode, loff_t offset, loff_t len,
+		bool *did_zero);
+int xfs_iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero);
 xfs_fileoff_t xfs_iomap_eof_align_last_fsb(struct xfs_inode *ip,
 		xfs_fileoff_t end_fsb);
 
@@ -42,6 +45,7 @@ xfs_aligned_fsb_count(
 
 extern const struct iomap_ops xfs_buffered_write_iomap_ops;
 extern const struct iomap_ops xfs_direct_write_iomap_ops;
+extern const struct iomap_ops xfs_dax_write_iomap_ops;
 extern const struct iomap_ops xfs_read_iomap_ops;
 extern const struct iomap_ops xfs_seek_iomap_ops;
 extern const struct iomap_ops xfs_xattr_iomap_ops;
diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
index 66ebccb5a6ff..db8eeaa8a773 100644
--- a/fs/xfs/xfs_iops.c
+++ b/fs/xfs/xfs_iops.c
@@ -879,8 +879,8 @@ xfs_setattr_size(
 	 */
 	if (newsize > oldsize) {
 		trace_xfs_zero_eof(ip, oldsize, newsize - oldsize);
-		error = iomap_zero_range(inode, oldsize, newsize - oldsize,
-				&did_zeroing, &xfs_buffered_write_iomap_ops);
+		error = xfs_iomap_zero_range(inode, oldsize, newsize - oldsize,
+				&did_zeroing);
 	} else {
 		/*
 		 * iomap won't detect a dirty page over an unwritten block (or a
@@ -892,8 +892,7 @@ xfs_setattr_size(
 						     newsize);
 		if (error)
 			return error;
-		error = iomap_truncate_page(inode, newsize, &did_zeroing,
-				&xfs_buffered_write_iomap_ops);
+		error = xfs_iomap_truncate_page(inode, newsize, &did_zeroing);
 	}
 
 	if (error)
diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
index 9ef9f98725a2..a4cd6e8a7aa0 100644
--- a/fs/xfs/xfs_reflink.c
+++ b/fs/xfs/xfs_reflink.c
@@ -1266,8 +1266,7 @@ xfs_reflink_zero_posteof(
 		return 0;
 
 	trace_xfs_zero_eof(ip, isize, pos - isize);
-	return iomap_zero_range(VFS_I(ip), isize, pos - isize, NULL,
-			&xfs_buffered_write_iomap_ops);
+	return xfs_iomap_zero_range(VFS_I(ip), isize, pos - isize, NULL);
 }
 
 /*
-- 
2.31.0





* [PATCH v4 7/7] fs/xfs: Add dedupe support for fsdax
  2021-04-08 12:04 [PATCH v4 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax Shiyang Ruan
                   ` (5 preceding siblings ...)
  2021-04-08 12:04 ` [PATCH v4 6/7] fs/xfs: Handle CoW for fsdax write() path Shiyang Ruan
@ 2021-04-08 12:04 ` Shiyang Ruan
  2021-04-08 13:12   ` Su Yue
  2021-04-08 23:04   ` Darrick J. Wong
  6 siblings, 2 replies; 17+ messages in thread
From: Shiyang Ruan @ 2021-04-08 12:04 UTC (permalink / raw)
  To: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel
  Cc: darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn

Add xfs_break_two_dax_layouts() to break the layouts of two dax files.
Then call the range compare function only when both files are DAX or
neither is.

Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
---
 fs/xfs/xfs_file.c    | 20 ++++++++++++++++++++
 fs/xfs/xfs_inode.c   |  8 +++++++-
 fs/xfs/xfs_inode.h   |  1 +
 fs/xfs/xfs_reflink.c |  5 +++--
 4 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
index 5795d5d6f869..1fd457167c12 100644
--- a/fs/xfs/xfs_file.c
+++ b/fs/xfs/xfs_file.c
@@ -842,6 +842,26 @@ xfs_break_dax_layouts(
 			0, 0, xfs_wait_dax_page(inode));
 }
 
+int
+xfs_break_two_dax_layouts(
+	struct inode		*src,
+	struct inode		*dest)
+{
+	int			error;
+	bool			retry = false;
+
+retry:
+	error = xfs_break_dax_layouts(src, &retry);
+	if (error || retry)
+		goto retry;
+
+	error = xfs_break_dax_layouts(dest, &retry);
+	if (error || retry)
+		goto retry;
+
+	return error;
+}
+
 int
 xfs_break_layouts(
 	struct inode		*inode,
diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
index f93370bd7b1e..c01786917eef 100644
--- a/fs/xfs/xfs_inode.c
+++ b/fs/xfs/xfs_inode.c
@@ -3713,8 +3713,10 @@ xfs_ilock2_io_mmap(
 	struct xfs_inode	*ip2)
 {
 	int			ret;
+	struct inode		*inode1 = VFS_I(ip1);
+	struct inode		*inode2 = VFS_I(ip2);
 
-	ret = xfs_iolock_two_inodes_and_break_layout(VFS_I(ip1), VFS_I(ip2));
+	ret = xfs_iolock_two_inodes_and_break_layout(inode1, inode2);
 	if (ret)
 		return ret;
 	if (ip1 == ip2)
@@ -3722,6 +3724,10 @@ xfs_ilock2_io_mmap(
 	else
 		xfs_lock_two_inodes(ip1, XFS_MMAPLOCK_EXCL,
 				    ip2, XFS_MMAPLOCK_EXCL);
+
+	if (IS_DAX(inode1) && IS_DAX(inode2))
+		ret = xfs_break_two_dax_layouts(inode1, inode2);
+
 	return 0;
 }
 
diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
index 88ee4c3930ae..5ef21924dddc 100644
--- a/fs/xfs/xfs_inode.h
+++ b/fs/xfs/xfs_inode.h
@@ -435,6 +435,7 @@ enum xfs_prealloc_flags {
 
 int	xfs_update_prealloc_flags(struct xfs_inode *ip,
 				  enum xfs_prealloc_flags flags);
+int	xfs_break_two_dax_layouts(struct inode *inode1, struct inode *inode2);
 int	xfs_break_layouts(struct inode *inode, uint *iolock,
 		enum layout_break_reason reason);
 
diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
index a4cd6e8a7aa0..4426bcc8a985 100644
--- a/fs/xfs/xfs_reflink.c
+++ b/fs/xfs/xfs_reflink.c
@@ -29,6 +29,7 @@
 #include "xfs_iomap.h"
 #include "xfs_sb.h"
 #include "xfs_ag_resv.h"
+#include <linux/dax.h>
 
 /*
  * Copy on Write of Shared Blocks
@@ -1324,8 +1325,8 @@ xfs_reflink_remap_prep(
 	if (XFS_IS_REALTIME_INODE(src) || XFS_IS_REALTIME_INODE(dest))
 		goto out_unlock;
 
-	/* Don't share DAX file data for now. */
-	if (IS_DAX(inode_in) || IS_DAX(inode_out))
+	/* Don't share DAX file data with non-DAX file. */
+	if (IS_DAX(inode_in) != IS_DAX(inode_out))
 		goto out_unlock;
 
 	if (!IS_DAX(inode_in))
-- 
2.31.0


* Re: [PATCH v4 7/7] fs/xfs: Add dedupe support for fsdax
  2021-04-08 12:04 ` [PATCH v4 7/7] fs/xfs: Add dedupe support for fsdax Shiyang Ruan
@ 2021-04-08 13:12   ` Su Yue
  2021-04-09  1:56     ` ruansy.fnst
  2021-04-08 23:04   ` Darrick J. Wong
  1 sibling, 1 reply; 17+ messages in thread
From: Su Yue @ 2021-04-08 13:12 UTC (permalink / raw)
  To: Shiyang Ruan
  Cc: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel,
	darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn


On Thu 08 Apr 2021 at 20:04, Shiyang Ruan <ruansy.fnst@fujitsu.com> wrote:

> Add xfs_break_two_dax_layouts() to break the layouts of two dax files.
> Then call the range compare function only when both files are DAX or
> neither is.
>
> Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
>
Not familiar with xfs code, but reading code makes my sleep better :)
See below.

> ---
>  fs/xfs/xfs_file.c    | 20 ++++++++++++++++++++
>  fs/xfs/xfs_inode.c   |  8 +++++++-
>  fs/xfs/xfs_inode.h   |  1 +
>  fs/xfs/xfs_reflink.c |  5 +++--
>  4 files changed, 31 insertions(+), 3 deletions(-)
>
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index 5795d5d6f869..1fd457167c12 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -842,6 +842,26 @@ xfs_break_dax_layouts(
>  			0, 0, xfs_wait_dax_page(inode));
>  }
>
> +int
> +xfs_break_two_dax_layouts(
> +	struct inode		*src,
> +	struct inode		*dest)
> +{
> +	int			error;
> +	bool			retry = false;
> +
> +retry:
>
Should there be a 'retry = false;' here?  xfs_break_dax_layouts() won't
set retry back to false if there is no busy page in inode->i_mapping,
so an endless loop will happen once retry becomes true.
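
Something like the following sketch would avoid it (untested, and it
also stops retrying on error):

retry:
	retry = false;
	error = xfs_break_dax_layouts(src, &retry);
	if (error)
		return error;
	if (retry)
		goto retry;

	error = xfs_break_dax_layouts(dest, &retry);
	if (error)
		return error;
	if (retry)
		goto retry;

	return 0;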

> +	error = xfs_break_dax_layouts(src, &retry);
> +	if (error || retry)
> +		goto retry;
> +
> +	error = xfs_break_dax_layouts(dest, &retry);
> +	if (error || retry)
> +		goto retry;
> +
> +	return error;
> +}
> +
>  int
>  xfs_break_layouts(
>  	struct inode		*inode,
> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> index f93370bd7b1e..c01786917eef 100644
> --- a/fs/xfs/xfs_inode.c
> +++ b/fs/xfs/xfs_inode.c
> @@ -3713,8 +3713,10 @@ xfs_ilock2_io_mmap(
>  	struct xfs_inode	*ip2)
>  {
>  	int			ret;
> +	struct inode		*inode1 = VFS_I(ip1);
> +	struct inode		*inode2 = VFS_I(ip2);
>
> -	ret = xfs_iolock_two_inodes_and_break_layout(VFS_I(ip1), VFS_I(ip2));
> +	ret = xfs_iolock_two_inodes_and_break_layout(inode1, inode2);
>  	if (ret)
>  		return ret;
>  	if (ip1 == ip2)
> @@ -3722,6 +3724,10 @@ xfs_ilock2_io_mmap(
>  	else
>  		xfs_lock_two_inodes(ip1, XFS_MMAPLOCK_EXCL,
>  				    ip2, XFS_MMAPLOCK_EXCL);
> +
> +	if (IS_DAX(inode1) && IS_DAX(inode2))
> +		ret = xfs_break_two_dax_layouts(inode1, inode2);
> +
ret is ignored here.

--
Su
>  	return 0;
>  }
>
> diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
> index 88ee4c3930ae..5ef21924dddc 100644
> --- a/fs/xfs/xfs_inode.h
> +++ b/fs/xfs/xfs_inode.h
> @@ -435,6 +435,7 @@ enum xfs_prealloc_flags {
>
>  int	xfs_update_prealloc_flags(struct xfs_inode *ip,
>  				  enum xfs_prealloc_flags flags);
> +int	xfs_break_two_dax_layouts(struct inode *inode1, struct inode *inode2);
>  int	xfs_break_layouts(struct inode *inode, uint *iolock,
>  		enum layout_break_reason reason);
>
> diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
> index a4cd6e8a7aa0..4426bcc8a985 100644
> --- a/fs/xfs/xfs_reflink.c
> +++ b/fs/xfs/xfs_reflink.c
> @@ -29,6 +29,7 @@
>  #include "xfs_iomap.h"
>  #include "xfs_sb.h"
>  #include "xfs_ag_resv.h"
> +#include <linux/dax.h>
>
>  /*
>   * Copy on Write of Shared Blocks
> @@ -1324,8 +1325,8 @@ xfs_reflink_remap_prep(
>  	if (XFS_IS_REALTIME_INODE(src) || XFS_IS_REALTIME_INODE(dest))
>  		goto out_unlock;
>
> -	/* Don't share DAX file data for now. */
> -	if (IS_DAX(inode_in) || IS_DAX(inode_out))
> +	/* Don't share DAX file data with non-DAX file. */
> +	if (IS_DAX(inode_in) != IS_DAX(inode_out))
>  		goto out_unlock;
>
>  	if (!IS_DAX(inode_in))


* Re: [PATCH v4 1/7] fsdax: Introduce dax_iomap_cow_copy()
  2021-04-08 12:04 ` [PATCH v4 1/7] fsdax: Introduce dax_iomap_cow_copy() Shiyang Ruan
@ 2021-04-08 21:53   ` Darrick J. Wong
  2021-04-09  2:30     ` ruansy.fnst
  0 siblings, 1 reply; 17+ messages in thread
From: Darrick J. Wong @ 2021-04-08 21:53 UTC (permalink / raw)
  To: Shiyang Ruan
  Cc: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel,
	darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn

On Thu, Apr 08, 2021 at 08:04:26PM +0800, Shiyang Ruan wrote:
> In the case where the iomap is a write operation and the iomap is not
> equal to the srcmap after iomap_begin(), we consider it a CoW operation.
> 
> The destination extent which the iomap indicates is a newly allocated
> extent, so the data needs to be copied from the srcmap to it.  In
> theory, it is better to copy only the head and tail ranges which lie
> outside of the written area instead of copying the whole aligned range.
> But in a dax page fault, the range is always aligned, so we have to
> copy the whole range in that case.
> 
> Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> ---
>  fs/dax.c | 82 ++++++++++++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 77 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index 8d7e4e2cc0fb..b4fd3813457a 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -1038,6 +1038,61 @@ static int dax_iomap_direct_access(struct iomap *iomap, loff_t pos, size_t size,
>  	return rc;
>  }
>  
> +/**
> + * dax_iomap_cow_copy(): Copy the data from source to destination before write.
> + * @pos:	address to do copy from.
> + * @length:	size of copy operation.
> + * @align_size:	aligned w.r.t align_size (either PMD_SIZE or PAGE_SIZE)
> + * @srcmap:	iomap srcmap
> + * @daddr:	destination address to copy to.
> + *
> + * This can be called from two places. Either during DAX write fault, to copy
> + * the length size data to daddr. Or, while doing normal DAX write operation,
> + * dax_iomap_actor() might call this to do the copy of either start or end
> + * unaligned address. In this case the rest of the copy of aligned ranges is
> + * taken care by dax_iomap_actor() itself.

Er... what?  This description is very confusing to me.  /me reads the
code, and ...

OH.

Given a range (pos, length) and a mapping for a source file, this
function copies all the bytes between pos and (pos + length) to daddr if
the range is aligned to @align_size.  But if pos and length are not both
aligned to @align_size then it'll copy *around* the range, leaving the
area in the middle uncopied waiting for write_iter to fill it in with
whatever's in the iovec.

Yikes, this function is doing double duty and ought to be split into
two functions.

The first function does the COW work for a write fault to an mmap
region and does a straight copy.  Page faults are always aligned, so
this functionality is needed by dax_fault_actor.  Maybe this could be
named dax_fault_cow?

The second function does the prep COW work *around* a write so that we
always copy entire page/blocks.  This cow-around code is needed by
dax_iomap_actor.  This should perhaps be named dax_iomap_cow_around()?
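
Roughly like this, reusing the helpers already in this patch (untested
sketch; the names are only suggestions):

static int dax_fault_cow(loff_t pos, size_t size, struct iomap *srcmap,
		void *daddr)
{
	void *saddr;
	int ret;

	ret = dax_iomap_direct_access(srcmap, pos, size, &saddr, NULL);
	if (ret)
		return ret;
	return copy_mc_to_kernel(daddr, saddr, size) ? -EIO : 0;
}

static int dax_iomap_cow_around(loff_t pos, loff_t length, size_t align_size,
		struct iomap *srcmap, void *daddr)
{
	loff_t head_off = pos & (align_size - 1);
	loff_t end = pos + length;
	loff_t pg_end = round_up(end, align_size);
	size_t size = ALIGN(head_off + length, align_size);
	void *saddr;
	int ret;

	if (head_off == 0 && end == pg_end)
		return 0;	/* fully aligned, nothing to copy around */

	ret = dax_iomap_direct_access(srcmap, pos, size, &saddr, NULL);
	if (ret)
		return ret;

	if (head_off && copy_mc_to_kernel(daddr, saddr, head_off))
		return -EIO;
	if (end < pg_end &&
	    copy_mc_to_kernel(daddr + head_off + length,
			      saddr + head_off + length, pg_end - end))
		return -EIO;
	return 0;
}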

> + * Also, note DAX fault will always result in aligned pos and pos + length.
> + */
> +static int dax_iomap_cow_copy(loff_t pos, loff_t length, size_t align_size,
> +		struct iomap *srcmap, void *daddr)
> +{
> +	loff_t head_off = pos & (align_size - 1);
> +	size_t size = ALIGN(head_off + length, align_size);
> +	loff_t end = pos + length;
> +	loff_t pg_end = round_up(end, align_size);
> +	bool copy_all = head_off == 0 && end == pg_end;
> +	void *saddr = 0;
> +	int ret = 0;
> +
> +	ret = dax_iomap_direct_access(srcmap, pos, size, &saddr, NULL);
> +	if (ret)
> +		return ret;
> +
> +	if (copy_all) {
> +		ret = copy_mc_to_kernel(daddr, saddr, length);
> +		return ret ? -EIO : 0;

I find it /very/ interesting that copy_mc_to_kernel takes an unsigned
int argument but returns an unsigned long (counting the bytes that
didn't get copied, oddly...but that's an existing API so I guess I'll
let it go.)

> +	}
> +
> +	/* Copy the head part of the range.  Note: we pass offset as length. */
> +	if (head_off) {
> +		ret = copy_mc_to_kernel(daddr, saddr, head_off);
> +		if (ret)
> +			return -EIO;
> +	}
> +
> +	/* Copy the tail part of the range */
> +	if (end < pg_end) {
> +		loff_t tail_off = head_off + length;
> +		loff_t tail_len = pg_end - end;
> +
> +		ret = copy_mc_to_kernel(daddr + tail_off, saddr + tail_off,
> +					tail_len);
> +		if (ret)
> +			return -EIO;
> +	}
> +	return 0;
> +}
> +
>  /*
>   * The user has performed a load from a hole in the file.  Allocating a new
>   * page in the file would cause excessive storage usage for workloads with
> @@ -1167,11 +1222,12 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
>  	struct dax_device *dax_dev = iomap->dax_dev;
>  	struct iov_iter *iter = data;
>  	loff_t end = pos + length, done = 0;
> +	bool write = iov_iter_rw(iter) == WRITE;
>  	ssize_t ret = 0;
>  	size_t xfer;
>  	int id;
>  
> -	if (iov_iter_rw(iter) == READ) {
> +	if (!write) {
>  		end = min(end, i_size_read(inode));
>  		if (pos >= end)
>  			return 0;
> @@ -1180,7 +1236,8 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
>  			return iov_iter_zero(min(length, end - pos), iter);
>  	}
>  
> -	if (WARN_ON_ONCE(iomap->type != IOMAP_MAPPED))
> +	if (WARN_ON_ONCE(iomap->type != IOMAP_MAPPED &&
> +			!(iomap->flags & IOMAP_F_SHARED)))

This is a bit subtle.  Could we add a comment:

	/*
	 * In DAX mode, we allow either pure overwrites of written extents,
	 * or writes to unwritten extents as part of a copy-on-write
	 * operation.
	 */
	if (WARN_ON_ONCE(...))

>  		return -EIO;
>  
>  	/*
> @@ -1219,6 +1276,13 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
>  			break;
>  		}
>  
> +		if (write && srcmap->addr != iomap->addr) {

Do you have to check if srcmap is not a hole?

--D

> +			ret = dax_iomap_cow_copy(pos, length, PAGE_SIZE, srcmap,
> +						 kaddr);
> +			if (ret)
> +				break;
> +		}
> +
>  		map_len = PFN_PHYS(map_len);
>  		kaddr += offset;
>  		map_len -= offset;
> @@ -1230,7 +1294,7 @@ dax_iomap_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
>  		 * validated via access_ok() in either vfs_read() or
>  		 * vfs_write(), depending on which operation we are doing.
>  		 */
> -		if (iov_iter_rw(iter) == WRITE)
> +		if (write)
>  			xfer = dax_copy_from_iter(dax_dev, pgoff, kaddr,
>  					map_len, iter);
>  		else
> @@ -1382,6 +1446,7 @@ static vm_fault_t dax_fault_actor(struct vm_fault *vmf, pfn_t *pfnp,
>  	unsigned long entry_flags = pmd ? DAX_PMD : 0;
>  	int err = 0;
>  	pfn_t pfn;
> +	void *kaddr;
>  
>  	/* if we are reading UNWRITTEN and HOLE, return a hole. */
>  	if (!write &&
> @@ -1392,18 +1457,25 @@ static vm_fault_t dax_fault_actor(struct vm_fault *vmf, pfn_t *pfnp,
>  			return dax_pmd_load_hole(xas, vmf, iomap, entry);
>  	}
>  
> -	if (iomap->type != IOMAP_MAPPED) {
> +	if (iomap->type != IOMAP_MAPPED && !(iomap->flags & IOMAP_F_SHARED)) {
>  		WARN_ON_ONCE(1);
>  		return pmd ? VM_FAULT_FALLBACK : VM_FAULT_SIGBUS;
>  	}
>  
> -	err = dax_iomap_direct_access(iomap, pos, size, NULL, &pfn);
> +	err = dax_iomap_direct_access(iomap, pos, size, &kaddr, &pfn);
>  	if (err)
>  		return pmd ? VM_FAULT_FALLBACK : dax_fault_return(err);
>  
>  	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn, entry_flags,
>  				  write && !sync);
>  
> +	if (write &&
> +	    srcmap->addr != IOMAP_HOLE && srcmap->addr != iomap->addr) {
> +		err = dax_iomap_cow_copy(pos, size, size, srcmap, kaddr);
> +		if (err)
> +			return dax_fault_return(err);
> +	}
> +
>  	if (sync)
>  		return dax_fault_synchronous_pfnp(pfnp, pfn);
>  
> -- 
> 2.31.0
> 
> 
> 


* Re: [PATCH v4 2/7] fsdax: Replace mmap entry in case of CoW
  2021-04-08 12:04 ` [PATCH v4 2/7] fsdax: Replace mmap entry in case of CoW Shiyang Ruan
@ 2021-04-08 22:11   ` Darrick J. Wong
  0 siblings, 0 replies; 17+ messages in thread
From: Darrick J. Wong @ 2021-04-08 22:11 UTC (permalink / raw)
  To: Shiyang Ruan
  Cc: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel,
	darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn, Goldwyn Rodrigues, Ritesh Harjani

On Thu, Apr 08, 2021 at 08:04:27PM +0800, Shiyang Ruan wrote:
> We replace the existing entry with the newly allocated one in the case
> of CoW.  Also, we mark the entry with PAGECACHE_TAG_TOWRITE so that
> writeback marks this entry as write-protected.  This helps us with
> snapshots, so that new write pagefaults after a snapshot trigger a CoW.
> 
> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
> Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Ritesh Harjani <riteshh@linux.ibm.com>
> ---
>  fs/dax.c | 39 ++++++++++++++++++++++++++++-----------
>  1 file changed, 28 insertions(+), 11 deletions(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index b4fd3813457a..e6c1354b27a8 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -722,6 +722,10 @@ static int copy_cow_page_dax(struct block_device *bdev, struct dax_device *dax_d
>  	return 0;
>  }
>  
> +/* DAX Insert Flag for the entry we insert */

Might be worth mentioning that these are xarray marks for the inserted
entry, since this comment didn't help much.

> +#define DAX_IF_DIRTY		(1 << 0)
> +#define DAX_IF_COW		(1 << 1)
> +
>  /*
>   * By this point grab_mapping_entry() has ensured that we have a locked entry
>   * of the appropriate size so we don't have to worry about downgrading PMDs to
> @@ -729,16 +733,19 @@ static int copy_cow_page_dax(struct block_device *bdev, struct dax_device *dax_d
>   * already in the tree, we will skip the insertion and just dirty the PMD as
>   * appropriate.
>   */
> -static void *dax_insert_entry(struct xa_state *xas,
> -		struct address_space *mapping, struct vm_fault *vmf,
> -		void *entry, pfn_t pfn, unsigned long flags, bool dirty)
> +static void *dax_insert_entry(struct xa_state *xas, struct vm_fault *vmf,
> +		void *entry, pfn_t pfn, unsigned long flags,
> +		unsigned int insert_flags)

Urk, two flags arguments.  Oh, I see.  We insert (shifted) pfn_t values
into the mapping as xarray values, so @flags determines the state flags
of the new entry value, whereas @insert_flags determines what xarray
mark we're going to attach (if any) to the inserted value.

--D

>  {
> +	struct address_space *mapping = vmf->vma->vm_file->f_mapping;
>  	void *new_entry = dax_make_entry(pfn, flags);
> +	bool dirty = insert_flags & DAX_IF_DIRTY;
> +	bool cow = insert_flags & DAX_IF_COW;
>  
>  	if (dirty)
>  		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
>  
> -	if (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE)) {
> +	if (cow || (dax_is_zero_entry(entry) && !(flags & DAX_ZERO_PAGE))) {
>  		unsigned long index = xas->xa_index;
>  		/* we are replacing a zero page with block mapping */
>  		if (dax_is_pmd_entry(entry))
> @@ -750,7 +757,7 @@ static void *dax_insert_entry(struct xa_state *xas,
>  
>  	xas_reset(xas);
>  	xas_lock_irq(xas);
> -	if (dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
> +	if (cow || dax_is_zero_entry(entry) || dax_is_empty_entry(entry)) {
>  		void *old;
>  
>  		dax_disassociate_entry(entry, mapping, false);
> @@ -774,6 +781,9 @@ static void *dax_insert_entry(struct xa_state *xas,
>  	if (dirty)
>  		xas_set_mark(xas, PAGECACHE_TAG_DIRTY);
>  
> +	if (cow)
> +		xas_set_mark(xas, PAGECACHE_TAG_TOWRITE);
> +
>  	xas_unlock_irq(xas);
>  	return entry;
>  }
> @@ -1109,8 +1119,7 @@ static vm_fault_t dax_load_hole(struct xa_state *xas,
>  	pfn_t pfn = pfn_to_pfn_t(my_zero_pfn(vaddr));
>  	vm_fault_t ret;
>  
> -	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn,
> -			DAX_ZERO_PAGE, false);
> +	*entry = dax_insert_entry(xas, vmf, *entry, pfn, DAX_ZERO_PAGE, 0);
>  
>  	ret = vmf_insert_mixed(vmf->vma, vaddr, pfn);
>  	trace_dax_load_hole(inode, vmf, ret);
> @@ -1137,8 +1146,8 @@ static vm_fault_t dax_pmd_load_hole(struct xa_state *xas, struct vm_fault *vmf,
>  		goto fallback;
>  
>  	pfn = page_to_pfn_t(zero_page);
> -	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn,
> -			DAX_PMD | DAX_ZERO_PAGE, false);
> +	*entry = dax_insert_entry(xas, vmf, *entry, pfn,
> +				  DAX_PMD | DAX_ZERO_PAGE, 0);
>  
>  	if (arch_needs_pgtable_deposit()) {
>  		pgtable = pte_alloc_one(vma->vm_mm);
> @@ -1444,6 +1453,7 @@ static vm_fault_t dax_fault_actor(struct vm_fault *vmf, pfn_t *pfnp,
>  	bool write = vmf->flags & FAULT_FLAG_WRITE;
>  	bool sync = dax_fault_is_synchronous(flags, vmf->vma, iomap);
>  	unsigned long entry_flags = pmd ? DAX_PMD : 0;
> +	unsigned int insert_flags = 0;
>  	int err = 0;
>  	pfn_t pfn;
>  	void *kaddr;
> @@ -1466,8 +1476,15 @@ static vm_fault_t dax_fault_actor(struct vm_fault *vmf, pfn_t *pfnp,
>  	if (err)
>  		return pmd ? VM_FAULT_FALLBACK : dax_fault_return(err);
>  
> -	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn, entry_flags,
> -				  write && !sync);
> +	if (write) {
> +		if (!sync)
> +			insert_flags |= DAX_IF_DIRTY;
> +		if (iomap->flags & IOMAP_F_SHARED)
> +			insert_flags |= DAX_IF_COW;
> +	}
> +
> +	*entry = dax_insert_entry(xas, vmf, *entry, pfn, entry_flags,
> +				  insert_flags);
>  
>  	if (write &&
>  	    srcmap->addr != IOMAP_HOLE && srcmap->addr != iomap->addr) {
> -- 
> 2.31.0
> 
> 
> 


* Re: [PATCH v4 4/7] iomap: Introduce iomap_apply2() for operations on two files
  2021-04-08 12:04 ` [PATCH v4 4/7] iomap: Introduce iomap_apply2() for operations on two files Shiyang Ruan
@ 2021-04-08 22:26   ` Darrick J. Wong
  0 siblings, 0 replies; 17+ messages in thread
From: Darrick J. Wong @ 2021-04-08 22:26 UTC (permalink / raw)
  To: Shiyang Ruan
  Cc: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel,
	darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn

On Thu, Apr 08, 2021 at 08:04:29PM +0800, Shiyang Ruan wrote:
> Some operations, such as comparing a range of data in two files under
> fsdax mode, require nested iomap_begin()/iomap_end() calls on two files.  Thus,
> we introduce iomap_apply2() to accept arguments from two files and
> iomap_actor2_t for actions on two files.
> 
> Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>

Kinda wish we weren't propagating even more indirect call usage, but oh
well.

Reviewed-by: Darrick J. Wong <djwong@kernel.org>

--D

> ---
>  fs/iomap/apply.c      | 52 +++++++++++++++++++++++++++++++++++++++++++
>  include/linux/iomap.h |  7 +++++-
>  2 files changed, 58 insertions(+), 1 deletion(-)
> 
> diff --git a/fs/iomap/apply.c b/fs/iomap/apply.c
> index 26ab6563181f..0493da5286ad 100644
> --- a/fs/iomap/apply.c
> +++ b/fs/iomap/apply.c
> @@ -97,3 +97,55 @@ iomap_apply(struct inode *inode, loff_t pos, loff_t length, unsigned flags,
>  
>  	return written ? written : ret;
>  }
> +
> +loff_t
> +iomap_apply2(struct inode *ino1, loff_t pos1, struct inode *ino2, loff_t pos2,
> +		loff_t length, unsigned int flags, const struct iomap_ops *ops,
> +		void *data, iomap_actor2_t actor)
> +{
> +	struct iomap smap = { .type = IOMAP_HOLE };
> +	struct iomap dmap = { .type = IOMAP_HOLE };
> +	loff_t written = 0, ret, ret2 = 0;
> +	loff_t len1 = length, len2, min_len;
> +
> +	ret = ops->iomap_begin(ino1, pos1, len1, flags, &smap, NULL);
> +	if (ret)
> +		goto out;
> +	if (WARN_ON(smap.offset > pos1)) {
> +		written = -EIO;
> +		goto out_src;
> +	}
> +	if (WARN_ON(smap.length == 0)) {
> +		written = -EIO;
> +		goto out_src;
> +	}
> +	len2 = min_t(loff_t, len1, smap.length);
> +
> +	ret = ops->iomap_begin(ino2, pos2, len2, flags, &dmap, NULL);
> +	if (ret)
> +		goto out_src;
> +	if (WARN_ON(dmap.offset > pos2)) {
> +		written = -EIO;
> +		goto out_dest;
> +	}
> +	if (WARN_ON(dmap.length == 0)) {
> +		written = -EIO;
> +		goto out_dest;
> +	}
> +	min_len = min_t(loff_t, len2, dmap.length);
> +
> +	written = actor(ino1, pos1, ino2, pos2, min_len, data, &smap, &dmap);
> +
> +out_dest:
> +	if (ops->iomap_end)
> +		ret2 = ops->iomap_end(ino2, pos2, len2,
> +				      written > 0 ? written : 0, flags, &dmap);
> +out_src:
> +	if (ops->iomap_end)
> +		ret = ops->iomap_end(ino1, pos1, len1,
> +				     written > 0 ? written : 0, flags, &smap);
> +out:
> +	if (written)
> +		return written;
> +	return ret ?: ret2;
> +}
> diff --git a/include/linux/iomap.h b/include/linux/iomap.h
> index d202fd2d0f91..9493c48bcc9c 100644
> --- a/include/linux/iomap.h
> +++ b/include/linux/iomap.h
> @@ -150,10 +150,15 @@ struct iomap_ops {
>   */
>  typedef loff_t (*iomap_actor_t)(struct inode *inode, loff_t pos, loff_t len,
>  		void *data, struct iomap *iomap, struct iomap *srcmap);
> -
> +typedef loff_t (*iomap_actor2_t)(struct inode *ino1, loff_t pos1,
> +		struct inode *ino2, loff_t pos2, loff_t len, void *data,
> +		struct iomap *smap, struct iomap *dmap);
>  loff_t iomap_apply(struct inode *inode, loff_t pos, loff_t length,
>  		unsigned flags, const struct iomap_ops *ops, void *data,
>  		iomap_actor_t actor);
> +loff_t iomap_apply2(struct inode *ino1, loff_t pos1, struct inode *ino2,
> +		loff_t pos2, loff_t length, unsigned int flags,
> +		const struct iomap_ops *ops, void *data, iomap_actor2_t actor);
>  
>  ssize_t iomap_file_buffered_write(struct kiocb *iocb, struct iov_iter *from,
>  		const struct iomap_ops *ops);
> -- 
> 2.31.0
> 
> 
> 


* Re: [PATCH v4 5/7] fsdax: Dedup file range to use a compare function
  2021-04-08 12:04 ` [PATCH v4 5/7] fsdax: Dedup file range to use a compare function Shiyang Ruan
@ 2021-04-08 22:33   ` Darrick J. Wong
  0 siblings, 0 replies; 17+ messages in thread
From: Darrick J. Wong @ 2021-04-08 22:33 UTC (permalink / raw)
  To: Shiyang Ruan
  Cc: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel,
	darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn, Goldwyn Rodrigues

On Thu, Apr 08, 2021 at 08:04:30PM +0800, Shiyang Ruan wrote:
> With dax we cannot deal with readpage() etc. So, we create a dax
> comparison function which is similar to
> vfs_dedupe_file_range_compare().
> And introduce dax_remap_file_range_prep() for filesystem use.
> 
> Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
> Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> ---
>  fs/dax.c             | 56 ++++++++++++++++++++++++++++++++++++++++++++
>  fs/remap_range.c     | 45 ++++++++++++++++++++++++++++-------
>  fs/xfs/xfs_reflink.c |  9 +++++--
>  include/linux/dax.h  |  4 ++++
>  include/linux/fs.h   | 12 ++++++----
>  5 files changed, 112 insertions(+), 14 deletions(-)
> 
> diff --git a/fs/dax.c b/fs/dax.c
> index fcd1e932716e..ba924b6629a6 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -1849,3 +1849,59 @@ vm_fault_t dax_finish_sync_fault(struct vm_fault *vmf,
>  	return dax_insert_pfn_mkwrite(vmf, pfn, order);
>  }
>  EXPORT_SYMBOL_GPL(dax_finish_sync_fault);
> +
> +static loff_t dax_range_compare_actor(struct inode *ino1, loff_t pos1,
> +		struct inode *ino2, loff_t pos2, loff_t len, void *data,
> +		struct iomap *smap, struct iomap *dmap)
> +{
> +	void *saddr, *daddr;
> +	bool *same = data;
> +	int ret;
> +
> +	if (smap->type == IOMAP_HOLE && dmap->type == IOMAP_HOLE) {
> +		*same = true;
> +		return len;
> +	}
> +
> +	if (smap->type == IOMAP_HOLE || dmap->type == IOMAP_HOLE) {
> +		*same = false;
> +		return 0;
> +	}
> +
> +	ret = dax_iomap_direct_access(smap, pos1, ALIGN(pos1 + len, PAGE_SIZE),
> +				      &saddr, NULL);
> +	if (ret < 0)
> +		return -EIO;
> +
> +	ret = dax_iomap_direct_access(dmap, pos2, ALIGN(pos2 + len, PAGE_SIZE),
> +				      &daddr, NULL);
> +	if (ret < 0)
> +		return -EIO;
> +
> +	*same = !memcmp(saddr, daddr, len);
> +	return len;
> +}
> +
> +int dax_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
> +		struct inode *dest, loff_t destoff, loff_t len, bool *is_same,
> +		const struct iomap_ops *ops)
> +{
> +	int id, ret = 0;
> +
> +	id = dax_read_lock();
> +	while (len) {
> +		ret = iomap_apply2(src, srcoff, dest, destoff, len, 0, ops,
> +				   is_same, dax_range_compare_actor);
> +		if (ret < 0 || !*is_same)
> +			goto out;
> +
> +		len -= ret;
> +		srcoff += ret;
> +		destoff += ret;
> +	}
> +	ret = 0;
> +out:
> +	dax_read_unlock(id);
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(dax_dedupe_file_range_compare);
> diff --git a/fs/remap_range.c b/fs/remap_range.c
> index e4a5fdd7ad7b..1fab0db49c68 100644
> --- a/fs/remap_range.c
> +++ b/fs/remap_range.c
> @@ -14,6 +14,7 @@
>  #include <linux/compat.h>
>  #include <linux/mount.h>
>  #include <linux/fs.h>
> +#include <linux/dax.h>
>  #include "internal.h"
>  
>  #include <linux/uaccess.h>
> @@ -199,9 +200,9 @@ static void vfs_unlock_two_pages(struct page *page1, struct page *page2)
>   * Compare extents of two files to see if they are the same.
>   * Caller must have locked both inodes to prevent write races.
>   */
> -static int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
> -					 struct inode *dest, loff_t destoff,
> -					 loff_t len, bool *is_same)
> +int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
> +				  struct inode *dest, loff_t destoff,
> +				  loff_t len, bool *is_same)
>  {
>  	loff_t src_poff;
>  	loff_t dest_poff;
> @@ -280,6 +281,7 @@ static int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
>  out_error:
>  	return error;
>  }
> +EXPORT_SYMBOL(vfs_dedupe_file_range_compare);
>  
>  /*
>   * Check that the two inodes are eligible for cloning, the ranges make
> @@ -289,9 +291,11 @@ static int vfs_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
>   * If there's an error, then the usual negative error code is returned.
>   * Otherwise returns 0 with *len set to the request length.
>   */
> -int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
> -				  struct file *file_out, loff_t pos_out,
> -				  loff_t *len, unsigned int remap_flags)
> +static int
> +__generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
> +				struct file *file_out, loff_t pos_out,
> +				loff_t *len, unsigned int remap_flags,
> +				const struct iomap_ops *ops)

Can we rename @ops to @dax_read_ops instead?

>  {
>  	struct inode *inode_in = file_inode(file_in);
>  	struct inode *inode_out = file_inode(file_out);
> @@ -351,8 +355,15 @@ int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
>  	if (remap_flags & REMAP_FILE_DEDUP) {
>  		bool		is_same = false;
>  
> -		ret = vfs_dedupe_file_range_compare(inode_in, pos_in,
> -				inode_out, pos_out, *len, &is_same);
> +		if (!IS_DAX(inode_in) && !IS_DAX(inode_out))

If we already checked that IS_DAX(inode_in) == IS_DAX(inode_out), then
can we only check one of these?

	if (!IS_DAX(inode_in))
		ret = vfs_dedupe_file_range_compare(...);
	else if (dax_compare_ops)
		ret = dax_dedupe_file_range_compare(...);
	else
		ret = -EINVAL;

> +			ret = vfs_dedupe_file_range_compare(inode_in, pos_in,
> +					inode_out, pos_out, *len, &is_same);
> +		else if (IS_DAX(inode_in) && IS_DAX(inode_out) && ops)
> +			ret = dax_dedupe_file_range_compare(inode_in, pos_in,
> +					inode_out, pos_out, *len, &is_same,
> +					ops);
> +		else
> +			return -EINVAL;
>  		if (ret)
>  			return ret;
>  		if (!is_same)
> @@ -370,6 +381,24 @@ int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
>  
>  	return ret;
>  }
> +
> +int dax_remap_file_range_prep(struct file *file_in, loff_t pos_in,
> +			      struct file *file_out, loff_t pos_out,
> +			      loff_t *len, unsigned int remap_flags,
> +			      const struct iomap_ops *ops)
> +{
> +	return __generic_remap_file_range_prep(file_in, pos_in, file_out,
> +					       pos_out, len, remap_flags, ops);
> +}
> +EXPORT_SYMBOL(dax_remap_file_range_prep);
> +
> +int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
> +				  struct file *file_out, loff_t pos_out,
> +				  loff_t *len, unsigned int remap_flags)
> +{
> +	return __generic_remap_file_range_prep(file_in, pos_in, file_out,
> +					       pos_out, len, remap_flags, NULL);
> +}
>  EXPORT_SYMBOL(generic_remap_file_range_prep);
>  
>  loff_t do_clone_file_range(struct file *file_in, loff_t pos_in,
> diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
> index 725c7d8e4438..9ef9f98725a2 100644
> --- a/fs/xfs/xfs_reflink.c
> +++ b/fs/xfs/xfs_reflink.c
> @@ -1329,8 +1329,13 @@ xfs_reflink_remap_prep(
>  	if (IS_DAX(inode_in) || IS_DAX(inode_out))
>  		goto out_unlock;
>  
> -	ret = generic_remap_file_range_prep(file_in, pos_in, file_out, pos_out,
> -			len, remap_flags);
> +	if (!IS_DAX(inode_in))
> +		ret = generic_remap_file_range_prep(file_in, pos_in, file_out,
> +						    pos_out, len, remap_flags);
> +	else
> +		ret = dax_remap_file_range_prep(file_in, pos_in, file_out,
> +						pos_out, len, remap_flags,
> +						&xfs_read_iomap_ops);
>  	if (ret || *len == 0)
>  		goto out_unlock;
>  
> diff --git a/include/linux/dax.h b/include/linux/dax.h
> index 3275e01ed33d..32e1c34349f2 100644
> --- a/include/linux/dax.h
> +++ b/include/linux/dax.h
> @@ -239,6 +239,10 @@ int dax_invalidate_mapping_entry_sync(struct address_space *mapping,
>  				      pgoff_t index);
>  s64 dax_iomap_zero(loff_t pos, u64 length, struct iomap *iomap,
>  		struct iomap *srcmap);
> +int dax_dedupe_file_range_compare(struct inode *src, loff_t srcoff,
> +				  struct inode *dest, loff_t destoff,
> +				  loff_t len, bool *is_same,
> +				  const struct iomap_ops *ops);

What happens if dax isn't enabled in the kernel build?

--D
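
One possible answer (sketched here as an assumption, not part of the patch) is to
provide a stub when CONFIG_FS_DAX is disabled, with the real declaration moved under
the corresponding #if block in include/linux/dax.h:

	#if !IS_ENABLED(CONFIG_FS_DAX)
	static inline int dax_dedupe_file_range_compare(struct inode *src,
			loff_t srcoff, struct inode *dest, loff_t destoff,
			loff_t len, bool *is_same, const struct iomap_ops *ops)
	{
		return -EOPNOTSUPP;
	}
	#endif

Whether such a stub or a build-time dependency is the right answer is left to the
patch author.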

>  static inline bool dax_mapping(struct address_space *mapping)
>  {
>  	return mapping->host && IS_DAX(mapping->host);
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index ec8f3ddf4a6a..28861e334dac 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -70,6 +70,7 @@ struct fsverity_info;
>  struct fsverity_operations;
>  struct fs_context;
>  struct fs_parameter_spec;
> +struct iomap_ops;
>  
>  extern void __init inode_init(void);
>  extern void __init inode_init_early(void);
> @@ -1989,10 +1990,13 @@ extern ssize_t vfs_copy_file_range(struct file *, loff_t , struct file *,
>  extern ssize_t generic_copy_file_range(struct file *file_in, loff_t pos_in,
>  				       struct file *file_out, loff_t pos_out,
>  				       size_t len, unsigned int flags);
> -extern int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
> -					 struct file *file_out, loff_t pos_out,
> -					 loff_t *count,
> -					 unsigned int remap_flags);
> +int generic_remap_file_range_prep(struct file *file_in, loff_t pos_in,
> +				  struct file *file_out, loff_t pos_out,
> +				  loff_t *count, unsigned int remap_flags);
> +int dax_remap_file_range_prep(struct file *file_in, loff_t pos_in,
> +			      struct file *file_out, loff_t pos_out,
> +			      loff_t *len, unsigned int remap_flags,
> +			      const struct iomap_ops *ops);
>  extern loff_t do_clone_file_range(struct file *file_in, loff_t pos_in,
>  				  struct file *file_out, loff_t pos_out,
>  				  loff_t len, unsigned int remap_flags);
> -- 
> 2.31.0
> 
> 
> 


* Re: [PATCH v4 6/7] fs/xfs: Handle CoW for fsdax write() path
  2021-04-08 12:04 ` [PATCH v4 6/7] fs/xfs: Handle CoW for fsdax write() path Shiyang Ruan
@ 2021-04-08 22:40   ` Darrick J. Wong
  0 siblings, 0 replies; 17+ messages in thread
From: Darrick J. Wong @ 2021-04-08 22:40 UTC (permalink / raw)
  To: Shiyang Ruan
  Cc: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel,
	darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn

On Thu, Apr 08, 2021 at 08:04:31PM +0800, Shiyang Ruan wrote:
> In fsdax mode, WRITE and ZERO on a shared extent need CoW performed. After
> CoW, newly allocated extents need to be remapped to the file.  So, add an
> iomap_end for dax write ops to do the remapping work.
> 
> Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> ---
>  fs/xfs/xfs_bmap_util.c |  3 +--
>  fs/xfs/xfs_file.c      |  9 +++----
>  fs/xfs/xfs_iomap.c     | 58 +++++++++++++++++++++++++++++++++++++++++-
>  fs/xfs/xfs_iomap.h     |  4 +++
>  fs/xfs/xfs_iops.c      |  7 +++--
>  fs/xfs/xfs_reflink.c   |  3 +--
>  6 files changed, 69 insertions(+), 15 deletions(-)
> 
> diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
> index e7d68318e6a5..9fcea33dd2c9 100644
> --- a/fs/xfs/xfs_bmap_util.c
> +++ b/fs/xfs/xfs_bmap_util.c
> @@ -954,8 +954,7 @@ xfs_free_file_space(
>  		return 0;
>  	if (offset + len > XFS_ISIZE(ip))
>  		len = XFS_ISIZE(ip) - offset;
> -	error = iomap_zero_range(VFS_I(ip), offset, len, NULL,
> -			&xfs_buffered_write_iomap_ops);
> +	error = xfs_iomap_zero_range(VFS_I(ip), offset, len, NULL);
>  	if (error)
>  		return error;
>  
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index a007ca0711d9..5795d5d6f869 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -684,11 +684,8 @@ xfs_file_dax_write(
>  	pos = iocb->ki_pos;
>  
>  	trace_xfs_file_dax_write(iocb, from);
> -	ret = dax_iomap_rw(iocb, from, &xfs_direct_write_iomap_ops);
> -	if (ret > 0 && iocb->ki_pos > i_size_read(inode)) {
> -		i_size_write(inode, iocb->ki_pos);
> -		error = xfs_setfilesize(ip, pos, ret);
> -	}
> +	ret = dax_iomap_rw(iocb, from, &xfs_dax_write_iomap_ops);
> +
>  out:
>  	if (iolock)
>  		xfs_iunlock(ip, iolock);
> @@ -1309,7 +1306,7 @@ __xfs_filemap_fault(
>  
>  		ret = dax_iomap_fault(vmf, pe_size, &pfn, NULL,
>  				(write_fault && !vmf->cow_page) ?
> -				 &xfs_direct_write_iomap_ops :
> +				 &xfs_dax_write_iomap_ops :
>  				 &xfs_read_iomap_ops);
>  		if (ret & VM_FAULT_NEEDDSYNC)
>  			ret = dax_finish_sync_fault(vmf, pe_size, pfn);
> diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
> index e17ab7f42928..f818f989687b 100644
> --- a/fs/xfs/xfs_iomap.c
> +++ b/fs/xfs/xfs_iomap.c
> @@ -760,7 +760,8 @@ xfs_direct_write_iomap_begin(
>  
>  		/* may drop and re-acquire the ilock */
>  		error = xfs_reflink_allocate_cow(ip, &imap, &cmap, &shared,
> -				&lockmode, flags & IOMAP_DIRECT);
> +				&lockmode,
> +				flags & IOMAP_DIRECT || IS_DAX(inode));

Parentheses, please:
				(flags & IOMAP_DIRECT) || IS_DAX(inode));

>  		if (error)
>  			goto out_unlock;
>  		if (shared)
> @@ -853,6 +854,38 @@ const struct iomap_ops xfs_direct_write_iomap_ops = {
>  	.iomap_begin		= xfs_direct_write_iomap_begin,
>  };
>  
> +static int
> +xfs_dax_write_iomap_end(
> +	struct inode		*inode,
> +	loff_t			pos,
> +	loff_t			length,
> +	ssize_t			written,
> +	unsigned int		flags,
> +	struct iomap		*iomap)
> +{
> +	int			error = 0;
> +	xfs_inode_t		*ip = XFS_I(inode);

Please don't use typedefs:

	struct xfs_inode	*ip = XFS_I(inode);

> +	bool			cow = xfs_is_cow_inode(ip);
> +
> +	if (pos + written > i_size_read(inode)) {

What if we wrote zero bytes?  Usually that means error, right?

> +		i_size_write(inode, pos + written);
> +		error = xfs_setfilesize(ip, pos, written);
> +		if (error && cow) {
> +			xfs_reflink_cancel_cow_range(ip, pos, written, true);
> +			return error;
> +		}
> +	}
> +	if (cow)
> +		error = xfs_reflink_end_cow(ip, pos, written);
> +
> +	return error;
> +}
> +
> +const struct iomap_ops xfs_dax_write_iomap_ops = {
> +	.iomap_begin		= xfs_direct_write_iomap_begin,
> +	.iomap_end		= xfs_dax_write_iomap_end,
> +};
> +
>  static int
>  xfs_buffered_write_iomap_begin(
>  	struct inode		*inode,
> @@ -1314,3 +1347,26 @@ xfs_xattr_iomap_begin(
>  const struct iomap_ops xfs_xattr_iomap_ops = {
>  	.iomap_begin		= xfs_xattr_iomap_begin,
>  };
> +
> +int
> +xfs_iomap_zero_range(
> +	struct inode		*inode,

Might as well pass the xfs_inode pointers directly into these two functions.

--D

> +	loff_t			offset,
> +	loff_t			len,
> +	bool			*did_zero)
> +{
> +	return iomap_zero_range(inode, offset, len, did_zero,
> +			IS_DAX(inode) ? &xfs_dax_write_iomap_ops :
> +					&xfs_buffered_write_iomap_ops);
> +}
> +
> +int
> +xfs_iomap_truncate_page(
> +	struct inode		*inode,
> +	loff_t			pos,
> +	bool			*did_zero)
> +{
> +	return iomap_truncate_page(inode, pos, did_zero,
> +			IS_DAX(inode) ? &xfs_dax_write_iomap_ops :
> +					&xfs_buffered_write_iomap_ops);
> +}
> diff --git a/fs/xfs/xfs_iomap.h b/fs/xfs/xfs_iomap.h
> index 7d3703556d0e..8adb2bf78a5a 100644
> --- a/fs/xfs/xfs_iomap.h
> +++ b/fs/xfs/xfs_iomap.h
> @@ -14,6 +14,9 @@ struct xfs_bmbt_irec;
>  int xfs_iomap_write_direct(struct xfs_inode *ip, xfs_fileoff_t offset_fsb,
>  		xfs_fileoff_t count_fsb, struct xfs_bmbt_irec *imap);
>  int xfs_iomap_write_unwritten(struct xfs_inode *, xfs_off_t, xfs_off_t, bool);
> +int xfs_iomap_zero_range(struct inode *inode, loff_t offset, loff_t len,
> +		bool *did_zero);
> +int xfs_iomap_truncate_page(struct inode *inode, loff_t pos, bool *did_zero);
>  xfs_fileoff_t xfs_iomap_eof_align_last_fsb(struct xfs_inode *ip,
>  		xfs_fileoff_t end_fsb);
>  
> @@ -42,6 +45,7 @@ xfs_aligned_fsb_count(
>  
>  extern const struct iomap_ops xfs_buffered_write_iomap_ops;
>  extern const struct iomap_ops xfs_direct_write_iomap_ops;
> +extern const struct iomap_ops xfs_dax_write_iomap_ops;
>  extern const struct iomap_ops xfs_read_iomap_ops;
>  extern const struct iomap_ops xfs_seek_iomap_ops;
>  extern const struct iomap_ops xfs_xattr_iomap_ops;
> diff --git a/fs/xfs/xfs_iops.c b/fs/xfs/xfs_iops.c
> index 66ebccb5a6ff..db8eeaa8a773 100644
> --- a/fs/xfs/xfs_iops.c
> +++ b/fs/xfs/xfs_iops.c
> @@ -879,8 +879,8 @@ xfs_setattr_size(
>  	 */
>  	if (newsize > oldsize) {
>  		trace_xfs_zero_eof(ip, oldsize, newsize - oldsize);
> -		error = iomap_zero_range(inode, oldsize, newsize - oldsize,
> -				&did_zeroing, &xfs_buffered_write_iomap_ops);
> +		error = xfs_iomap_zero_range(inode, oldsize, newsize - oldsize,
> +				&did_zeroing);
>  	} else {
>  		/*
>  		 * iomap won't detect a dirty page over an unwritten block (or a
> @@ -892,8 +892,7 @@ xfs_setattr_size(
>  						     newsize);
>  		if (error)
>  			return error;
> -		error = iomap_truncate_page(inode, newsize, &did_zeroing,
> -				&xfs_buffered_write_iomap_ops);
> +		error = xfs_iomap_truncate_page(inode, newsize, &did_zeroing);
>  	}
>  
>  	if (error)
> diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
> index 9ef9f98725a2..a4cd6e8a7aa0 100644
> --- a/fs/xfs/xfs_reflink.c
> +++ b/fs/xfs/xfs_reflink.c
> @@ -1266,8 +1266,7 @@ xfs_reflink_zero_posteof(
>  		return 0;
>  
>  	trace_xfs_zero_eof(ip, isize, pos - isize);
> -	return iomap_zero_range(VFS_I(ip), isize, pos - isize, NULL,
> -			&xfs_buffered_write_iomap_ops);
> +	return xfs_iomap_zero_range(VFS_I(ip), isize, pos - isize, NULL);
>  }
>  
>  /*
> -- 
> 2.31.0
> 
> 
> 


* Re: [PATCH v4 7/7] fs/xfs: Add dedupe support for fsdax
  2021-04-08 12:04 ` [PATCH v4 7/7] fs/xfs: Add dedupe support for fsdax Shiyang Ruan
  2021-04-08 13:12   ` Su Yue
@ 2021-04-08 23:04   ` Darrick J. Wong
  1 sibling, 0 replies; 17+ messages in thread
From: Darrick J. Wong @ 2021-04-08 23:04 UTC (permalink / raw)
  To: Shiyang Ruan
  Cc: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel,
	darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn

On Thu, Apr 08, 2021 at 08:04:32PM +0800, Shiyang Ruan wrote:
> Add xfs_break_two_dax_layouts() to break layouts for two dax files.  Then
> call the compare range function only when both files are DAX or both are not.
> 
> Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> ---
>  fs/xfs/xfs_file.c    | 20 ++++++++++++++++++++
>  fs/xfs/xfs_inode.c   |  8 +++++++-
>  fs/xfs/xfs_inode.h   |  1 +
>  fs/xfs/xfs_reflink.c |  5 +++--
>  4 files changed, 31 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
> index 5795d5d6f869..1fd457167c12 100644
> --- a/fs/xfs/xfs_file.c
> +++ b/fs/xfs/xfs_file.c
> @@ -842,6 +842,26 @@ xfs_break_dax_layouts(
>  			0, 0, xfs_wait_dax_page(inode));
>  }
>  
> +int
> +xfs_break_two_dax_layouts(
> +	struct inode		*src,
> +	struct inode		*dest)
> +{
> +	int			error;
> +	bool			retry = false;
> +
> +retry:
> +	error = xfs_break_dax_layouts(src, &retry);
> +	if (error || retry)
> +		goto retry;
> +
> +	error = xfs_break_dax_layouts(dest, &retry);
> +	if (error || retry)
> +		goto retry;
> +
> +	return error;
> +}
> +
>  int
>  xfs_break_layouts(
>  	struct inode		*inode,
> diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
> index f93370bd7b1e..c01786917eef 100644
> --- a/fs/xfs/xfs_inode.c
> +++ b/fs/xfs/xfs_inode.c
> @@ -3713,8 +3713,10 @@ xfs_ilock2_io_mmap(
>  	struct xfs_inode	*ip2)
>  {
>  	int			ret;
> +	struct inode		*inode1 = VFS_I(ip1);
> +	struct inode		*inode2 = VFS_I(ip2);
>  
> -	ret = xfs_iolock_two_inodes_and_break_layout(VFS_I(ip1), VFS_I(ip2));
> +	ret = xfs_iolock_two_inodes_and_break_layout(inode1, inode2);
>  	if (ret)
>  		return ret;
>  	if (ip1 == ip2)
> @@ -3722,6 +3724,10 @@ xfs_ilock2_io_mmap(
>  	else
>  		xfs_lock_two_inodes(ip1, XFS_MMAPLOCK_EXCL,
>  				    ip2, XFS_MMAPLOCK_EXCL);
> +
> +	if (IS_DAX(inode1) && IS_DAX(inode2))
> +		ret = xfs_break_two_dax_layouts(inode1, inode2);

This is wrong on many levels.

The first problem is that xfs_break_two_dax_layouts calls
xfs_break_dax_layouts twice even if inode1 == inode2, which is
unnecessary.

The second problem is that xfs_break_dax_layouts can cycle the MMAPLOCK
on the inode that it's processing.  Since there are two inodes in play
here, you must be /very/ careful about maintaining correct locking order,
which for the MMAPLOCK is increasing order of xfs_inode.i_ino.  If you
drop the MMAPLOCK for the lower-numbered inode for any reason, you have
to drop both MMAPLOCKs and try again.

In other words, you have to replace all that nice MMAPLOCK code with a
new xfs_mmaplock_two_inodes_and_break_dax_layouts function that is
structured similarly to what xfs_iolock_two_inodes_and_break_layout
does for the IOLOCK and PNFS layouts.
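
For illustration, a rough sketch of that structure (untested; the helper name follows
the suggestion above, everything else is an assumption rather than final code, and
lockdep nesting annotations are elided):

	static int
	xfs_mmaplock_two_inodes_and_break_dax_layout(
		struct xfs_inode	*ip1,
		struct xfs_inode	*ip2)
	{
		bool			retry;
		int			error;

		/* Always take the MMAPLOCKs in increasing i_ino order. */
		if (ip1->i_ino > ip2->i_ino)
			swap(ip1, ip2);

	again:
		retry = false;
		xfs_ilock(ip1, XFS_MMAPLOCK_EXCL);
		error = xfs_break_dax_layouts(VFS_I(ip1), &retry);
		if (error || retry) {
			/* on retry the lock was cycled inside xfs_break_dax_layouts() */
			xfs_iunlock(ip1, XFS_MMAPLOCK_EXCL);
			if (error)
				return error;
			goto again;
		}

		if (ip1 == ip2)
			return 0;

		xfs_ilock(ip2, XFS_MMAPLOCK_EXCL);
		error = xfs_break_dax_layouts(VFS_I(ip2), &retry);
		if (error || retry) {
			/* dropping either lock invalidates both; redo from scratch */
			xfs_iunlock(ip2, XFS_MMAPLOCK_EXCL);
			xfs_iunlock(ip1, XFS_MMAPLOCK_EXCL);
			if (error)
				return error;
			goto again;
		}

		return 0;
	}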

> +
>  	return 0;
>  }
>  
> diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h
> index 88ee4c3930ae..5ef21924dddc 100644
> --- a/fs/xfs/xfs_inode.h
> +++ b/fs/xfs/xfs_inode.h
> @@ -435,6 +435,7 @@ enum xfs_prealloc_flags {
>  
>  int	xfs_update_prealloc_flags(struct xfs_inode *ip,
>  				  enum xfs_prealloc_flags flags);
> +int	xfs_break_two_dax_layouts(struct inode *inode1, struct inode *inode2);
>  int	xfs_break_layouts(struct inode *inode, uint *iolock,
>  		enum layout_break_reason reason);
>  
> diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
> index a4cd6e8a7aa0..4426bcc8a985 100644
> --- a/fs/xfs/xfs_reflink.c
> +++ b/fs/xfs/xfs_reflink.c
> @@ -29,6 +29,7 @@
>  #include "xfs_iomap.h"
>  #include "xfs_sb.h"
>  #include "xfs_ag_resv.h"
> +#include <linux/dax.h>

Why is this necessary?

--D

>  
>  /*
>   * Copy on Write of Shared Blocks
> @@ -1324,8 +1325,8 @@ xfs_reflink_remap_prep(
>  	if (XFS_IS_REALTIME_INODE(src) || XFS_IS_REALTIME_INODE(dest))
>  		goto out_unlock;
>  
> -	/* Don't share DAX file data for now. */
> -	if (IS_DAX(inode_in) || IS_DAX(inode_out))
> +	/* Don't share DAX file data with non-DAX file. */
> +	if (IS_DAX(inode_in) != IS_DAX(inode_out))
>  		goto out_unlock;
>  
>  	if (!IS_DAX(inode_in))
> -- 
> 2.31.0
> 
> 
> 


* RE: [PATCH v4 7/7] fs/xfs: Add dedupe support for fsdax
  2021-04-08 13:12   ` Su Yue
@ 2021-04-09  1:56     ` ruansy.fnst
  0 siblings, 0 replies; 17+ messages in thread
From: ruansy.fnst @ 2021-04-09  1:56 UTC (permalink / raw)
  To: Su Yue
  Cc: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel,
	darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn



> -----Original Message-----
> From: Su Yue <l@damenly.su>
> Subject: Re: [PATCH v4 7/7] fs/xfs: Add dedupe support for fsdax
> 
> 
> On Thu 08 Apr 2021 at 20:04, Shiyang Ruan <ruansy.fnst@fujitsu.com> wrote:
> 
> > Add xfs_break_two_dax_layouts() to break layouts for two dax files.
> > Then call the compare range function only when both files are DAX or both are not.
> >
> > Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> >
> Not familiar with xfs code, but reading code makes my sleep better :)  See below.
> 
> > ---
> >  fs/xfs/xfs_file.c    | 20 ++++++++++++++++++++
> >  fs/xfs/xfs_inode.c   |  8 +++++++-
> >  fs/xfs/xfs_inode.h   |  1 +
> >  fs/xfs/xfs_reflink.c |  5 +++--
> >  4 files changed, 31 insertions(+), 3 deletions(-)
> >
> > diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c index
> > 5795d5d6f869..1fd457167c12 100644
> > --- a/fs/xfs/xfs_file.c
> > +++ b/fs/xfs/xfs_file.c
> > @@ -842,6 +842,26 @@ xfs_break_dax_layouts(
> >  			0, 0, xfs_wait_dax_page(inode));
> >  }
> >
> > +int
> > +xfs_break_two_dax_layouts(
> > +	struct inode		*src,
> > +	struct inode		*dest)
> > +{
> > +	int			error;
> > +	bool			retry = false;
> > +
> > +retry:
> >
> 'retry = false;' here?  xfs_break_dax_layouts() won't set retry back to false if there
> is no busy page in inode->i_mapping.
> An infinite loop will happen once retry becomes true.

Yes, I should move 'retry=false;' under the retry label.
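
(A minimal sketch of the fixed loop, under the assumption that a real error should
return rather than retry:)

	retry:
		retry = false;		/* reset on every pass */
		error = xfs_break_dax_layouts(src, &retry);
		if (error)
			return error;
		if (retry)
			goto retry;

		error = xfs_break_dax_layouts(dest, &retry);
		if (error)
			return error;
		if (retry)
			goto retry;

		return 0;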

> 
> > +	error = xfs_break_dax_layouts(src, &retry);
> > +	if (error || retry)
> > +		goto retry;
> > +
> > +	error = xfs_break_dax_layouts(dest, &retry);
> > +	if (error || retry)
> > +		goto retry;
> > +
> > +	return error;
> > +}
> > +
> >  int
> >  xfs_break_layouts(
> >  	struct inode		*inode,
> > diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c index
> > f93370bd7b1e..c01786917eef 100644
> > --- a/fs/xfs/xfs_inode.c
> > +++ b/fs/xfs/xfs_inode.c
> > @@ -3713,8 +3713,10 @@ xfs_ilock2_io_mmap(
> >  	struct xfs_inode	*ip2)
> >  {
> >  	int			ret;
> > +	struct inode		*inode1 = VFS_I(ip1);
> > +	struct inode		*inode2 = VFS_I(ip2);
> >
> > -	ret = xfs_iolock_two_inodes_and_break_layout(VFS_I(ip1),
> > VFS_I(ip2));
> > +	ret = xfs_iolock_two_inodes_and_break_layout(inode1, inode2);
> >  	if (ret)
> >  		return ret;
> >  	if (ip1 == ip2)
> > @@ -3722,6 +3724,10 @@ xfs_ilock2_io_mmap(
> >  	else
> >  		xfs_lock_two_inodes(ip1, XFS_MMAPLOCK_EXCL,
> >  				    ip2, XFS_MMAPLOCK_EXCL);
> > +
> > +	if (IS_DAX(inode1) && IS_DAX(inode2))
> > +		ret = xfs_break_two_dax_layouts(inode1, inode2);
> > +
> ret is ignored here.

Thanks, it's my mistake.


--
Thanks,
Ruan Shiyang.
> 
> --
> Su
> >  	return 0;
> >  }
> >
> > diff --git a/fs/xfs/xfs_inode.h b/fs/xfs/xfs_inode.h index
> > 88ee4c3930ae..5ef21924dddc 100644
> > --- a/fs/xfs/xfs_inode.h
> > +++ b/fs/xfs/xfs_inode.h
> > @@ -435,6 +435,7 @@ enum xfs_prealloc_flags {
> >
> >  int	xfs_update_prealloc_flags(struct xfs_inode *ip,
> >  				  enum xfs_prealloc_flags flags);
> > +int	xfs_break_two_dax_layouts(struct inode *inode1, struct
> > inode *inode2);
> >  int	xfs_break_layouts(struct inode *inode, uint *iolock,
> >  		enum layout_break_reason reason);
> >
> > diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c index
> > a4cd6e8a7aa0..4426bcc8a985 100644
> > --- a/fs/xfs/xfs_reflink.c
> > +++ b/fs/xfs/xfs_reflink.c
> > @@ -29,6 +29,7 @@
> >  #include "xfs_iomap.h"
> >  #include "xfs_sb.h"
> >  #include "xfs_ag_resv.h"
> > +#include <linux/dax.h>
> >
> >  /*
> >   * Copy on Write of Shared Blocks
> > @@ -1324,8 +1325,8 @@ xfs_reflink_remap_prep(
> >  	if (XFS_IS_REALTIME_INODE(src) || XFS_IS_REALTIME_INODE(dest))
> >  		goto out_unlock;
> >
> > -	/* Don't share DAX file data for now. */
> > -	if (IS_DAX(inode_in) || IS_DAX(inode_out))
> > +	/* Don't share DAX file data with non-DAX file. */
> > +	if (IS_DAX(inode_in) != IS_DAX(inode_out))
> >  		goto out_unlock;
> >
> >  	if (!IS_DAX(inode_in))


* RE: [PATCH v4 1/7] fsdax: Introduce dax_iomap_cow_copy()
  2021-04-08 21:53   ` Darrick J. Wong
@ 2021-04-09  2:30     ` ruansy.fnst
  0 siblings, 0 replies; 17+ messages in thread
From: ruansy.fnst @ 2021-04-09  2:30 UTC (permalink / raw)
  To: Darrick J. Wong
  Cc: linux-kernel, linux-xfs, linux-nvdimm, linux-fsdevel,
	darrick.wong, dan.j.williams, willy, jack, viro, linux-btrfs,
	david, hch, rgoldwyn


> -----Original Message-----
> From: Darrick J. Wong <djwong@kernel.org>
> Sent: Friday, April 9, 2021 5:53 AM
> Subject: Re: [PATCH v4 1/7] fsdax: Introduce dax_iomap_cow_copy()
> 
> On Thu, Apr 08, 2021 at 08:04:26PM +0800, Shiyang Ruan wrote:
> > In the case where the iomap is a write operation and iomap is not
> > equal to srcmap after iomap_begin, we consider it a CoW operation.
> >
> > The destination extent which iomap indicates is a newly allocated extent.
> > So, we need to copy the data from srcmap to the newly allocated extent.
> > In theory, it is better to copy only the head and tail ranges which lie
> > outside the unaligned write range, instead of copying the whole aligned
> > range.  But in a dax page fault, it will always be an aligned range.  So,
> > we have to copy the whole range in this case.
> >
> > Signed-off-by: Shiyang Ruan <ruansy.fnst@fujitsu.com>
> > Reviewed-by: Christoph Hellwig <hch@lst.de>
> > ---
> >  fs/dax.c | 82
> > ++++++++++++++++++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 77 insertions(+), 5 deletions(-)
> >
> > diff --git a/fs/dax.c b/fs/dax.c
> > index 8d7e4e2cc0fb..b4fd3813457a 100644
> > --- a/fs/dax.c
> > +++ b/fs/dax.c
> > @@ -1038,6 +1038,61 @@ static int dax_iomap_direct_access(struct iomap
> *iomap, loff_t pos, size_t size,
> >  	return rc;
> >  }
> >
> > +/**
> > + * dax_iomap_cow_copy(): Copy the data from source to destination before
> write.
> > + * @pos:	address to do copy from.
> > + * @length:	size of copy operation.
> > + * @align_size:	aligned w.r.t align_size (either PMD_SIZE or PAGE_SIZE)
> > + * @srcmap:	iomap srcmap
> > + * @daddr:	destination address to copy to.
> > + *
> > + * This can be called from two places. Either during DAX write fault,
> > +to copy
> > + * the length size data to daddr. Or, while doing normal DAX write
> > +operation,
> > + * dax_iomap_actor() might call this to do the copy of either start
> > +or end
> > + * unaligned address. In this case the rest of the copy of aligned
> > +ranges is
> > + * taken care by dax_iomap_actor() itself.
> 
> Er... what?  This description is very confusing to me.  /me reads the code,
> and ...
> 
> OH.
> 
> Given a range (pos, length) and a mapping for a source file, this function copies
> all the bytes between pos and (pos + length) to daddr if the range is aligned to
> @align_size.  But if pos and length are not both aligned to @align_size then it'll
> copy *around* the range, leaving the area in the middle uncopied waiting for
> write_iter to fill it in with whatever's in the iovec.
> 
> Yikes, this function is doing double duty and ought to be split into two functions.
> 
> The first function does the COW work for a write fault to an mmap region and
> does a straight copy.  Page faults are always aligned, so this functionality is
> needed by dax_fault_actor.  Maybe this could be named dax_fault_cow?
> 
> The second function does the prep COW work *around* a write so that we
> always copy entire page/blocks.  This cow-around code is needed by
> dax_iomap_actor.  This should perhaps be named dax_iomap_cow_around()?

Two functions seem easier to understand.  But I think the code from dax_iomap_direct_access() and above would be duplicated in these two functions.
How about making the description better instead?
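
For what it's worth, a rough sketch of the split described above (illustration only;
the names and signatures are assumptions layered on top of the patch, not final code):

	/* fault path: pos/size are already PAGE_SIZE/PMD_SIZE aligned */
	static int dax_fault_cow(loff_t pos, size_t size, struct iomap *srcmap,
			void *daddr)
	{
		void *saddr;
		int ret;

		ret = dax_iomap_direct_access(srcmap, pos, size, &saddr, NULL);
		if (ret)
			return ret;
		return copy_mc_to_kernel(daddr, saddr, size) ? -EIO : 0;
	}

	/* write path: copy only the unaligned head and tail around the range */
	static int dax_iomap_cow_around(loff_t pos, loff_t length, size_t align_size,
			struct iomap *srcmap, void *daddr)
	{
		loff_t head_off = pos & (align_size - 1);
		loff_t end = pos + length;
		loff_t pg_end = round_up(end, align_size);
		size_t size = ALIGN(head_off + length, align_size);
		void *saddr;
		int ret;

		ret = dax_iomap_direct_access(srcmap, pos, size, &saddr, NULL);
		if (ret)
			return ret;

		if (head_off && copy_mc_to_kernel(daddr, saddr, head_off))
			return -EIO;
		if (end < pg_end &&
		    copy_mc_to_kernel(daddr + head_off + length,
				      saddr + head_off + length, pg_end - end))
			return -EIO;
		return 0;
	}

As the sketch shows, the dax_iomap_direct_access() call would indeed be duplicated in
both helpers, which is the redundancy mentioned above.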

> 
> > + * Also, note DAX fault will always result in aligned pos and pos + length.
> > + */
> > +static int dax_iomap_cow_copy(loff_t pos, loff_t length, size_t align_size,
> > +		struct iomap *srcmap, void *daddr)
> > +{
> > +	loff_t head_off = pos & (align_size - 1);
> > +	size_t size = ALIGN(head_off + length, align_size);
> > +	loff_t end = pos + length;
> > +	loff_t pg_end = round_up(end, align_size);
> > +	bool copy_all = head_off == 0 && end == pg_end;
> > +	void *saddr = 0;
> > +	int ret = 0;
> > +
> > +	ret = dax_iomap_direct_access(srcmap, pos, size, &saddr, NULL);
> > +	if (ret)
> > +		return ret;
> > +
> > +	if (copy_all) {
> > +		ret = copy_mc_to_kernel(daddr, saddr, length);
> > +		return ret ? -EIO : 0;
> 
> I find it /very/ interesting that copy_mc_to_kernel takes an unsigned int
> argument but returns an unsigned long (counting the bytes that didn't get
> copied, oddly...but that's an existing API so I guess I'll let it go.)
> 
> > +	}
> > +
> > +	/* Copy the head part of the range.  Note: we pass offset as length. */
> > +	if (head_off) {
> > +		ret = copy_mc_to_kernel(daddr, saddr, head_off);
> > +		if (ret)
> > +			return -EIO;
> > +	}
> > +
> > +	/* Copy the tail part of the range */
> > +	if (end < pg_end) {
> > +		loff_t tail_off = head_off + length;
> > +		loff_t tail_len = pg_end - end;
> > +
> > +		ret = copy_mc_to_kernel(daddr + tail_off, saddr + tail_off,
> > +					tail_len);
> > +		if (ret)
> > +			return -EIO;
> > +	}
> > +	return 0;
> > +}
> > +
> >  /*
> >   * The user has performed a load from a hole in the file.  Allocating a new
> >   * page in the file would cause excessive storage usage for workloads
> > with @@ -1167,11 +1222,12 @@ dax_iomap_actor(struct inode *inode,
> loff_t pos, loff_t length, void *data,
> >  	struct dax_device *dax_dev = iomap->dax_dev;
> >  	struct iov_iter *iter = data;
> >  	loff_t end = pos + length, done = 0;
> > +	bool write = iov_iter_rw(iter) == WRITE;
> >  	ssize_t ret = 0;
> >  	size_t xfer;
> >  	int id;
> >
> > -	if (iov_iter_rw(iter) == READ) {
> > +	if (!write) {
> >  		end = min(end, i_size_read(inode));
> >  		if (pos >= end)
> >  			return 0;
> > @@ -1180,7 +1236,8 @@ dax_iomap_actor(struct inode *inode, loff_t pos,
> loff_t length, void *data,
> >  			return iov_iter_zero(min(length, end - pos), iter);
> >  	}
> >
> > -	if (WARN_ON_ONCE(iomap->type != IOMAP_MAPPED))
> > +	if (WARN_ON_ONCE(iomap->type != IOMAP_MAPPED &&
> > +			!(iomap->flags & IOMAP_F_SHARED)))
> 
> This is a bit subtle.  Could we add a comment:
> 
> 	/*
> 	 * In DAX mode, we allow either pure overwrites of written extents,
> 	 * or writes to unwritten extents as part of a copy-on-write
> 	 * operation.
> 	 */
> 	if (WARN_ON_ONCE(...))

OK.

> 
> >  		return -EIO;
> >
> >  	/*
> > @@ -1219,6 +1276,13 @@ dax_iomap_actor(struct inode *inode, loff_t pos,
> loff_t length, void *data,
> >  			break;
> >  		}
> >
> > +		if (write && srcmap->addr != iomap->addr) {
> 
> Do you have to check if srcmap is not a hole?
This dax_iomap_actor() is called by iomap_apply(), in which srcmap has already been checked: if srcmap is a HOLE, iomap_apply() passes the actor iomap == srcmap.  So I didn't check it here.  But in the dax_fault_actor() case, since we are not using iomap_apply(), the check is needed.
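
(For reference, the hole handling in iomap_apply() around this kernel version passes
the same map twice when there is no source map, roughly:)

	/* fs/iomap/apply.c, iomap_apply() */
	written = actor(inode, pos, length, data, &iomap,
			srcmap.type != IOMAP_HOLE ? &srcmap : &iomap);

So when the source is a hole, the actor sees srcmap == iomap and the
"srcmap->addr != iomap->addr" test already excludes that case.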


--
Thanks,
Ruan Shiyang.
> 
> --D
> 
> > +			ret = dax_iomap_cow_copy(pos, length, PAGE_SIZE, srcmap,
> > +						 kaddr);
> > +			if (ret)
> > +				break;
> > +		}
> > +
> >  		map_len = PFN_PHYS(map_len);
> >  		kaddr += offset;
> >  		map_len -= offset;
> > @@ -1230,7 +1294,7 @@ dax_iomap_actor(struct inode *inode, loff_t pos,
> loff_t length, void *data,
> >  		 * validated via access_ok() in either vfs_read() or
> >  		 * vfs_write(), depending on which operation we are doing.
> >  		 */
> > -		if (iov_iter_rw(iter) == WRITE)
> > +		if (write)
> >  			xfer = dax_copy_from_iter(dax_dev, pgoff, kaddr,
> >  					map_len, iter);
> >  		else
> > @@ -1382,6 +1446,7 @@ static vm_fault_t dax_fault_actor(struct vm_fault
> *vmf, pfn_t *pfnp,
> >  	unsigned long entry_flags = pmd ? DAX_PMD : 0;
> >  	int err = 0;
> >  	pfn_t pfn;
> > +	void *kaddr;
> >
> >  	/* if we are reading UNWRITTEN and HOLE, return a hole. */
> >  	if (!write &&
> > @@ -1392,18 +1457,25 @@ static vm_fault_t dax_fault_actor(struct
> vm_fault *vmf, pfn_t *pfnp,
> >  			return dax_pmd_load_hole(xas, vmf, iomap, entry);
> >  	}
> >
> > -	if (iomap->type != IOMAP_MAPPED) {
> > +	if (iomap->type != IOMAP_MAPPED && !(iomap->flags &
> IOMAP_F_SHARED))
> > +{
> >  		WARN_ON_ONCE(1);
> >  		return pmd ? VM_FAULT_FALLBACK : VM_FAULT_SIGBUS;
> >  	}
> >
> > -	err = dax_iomap_direct_access(iomap, pos, size, NULL, &pfn);
> > +	err = dax_iomap_direct_access(iomap, pos, size, &kaddr, &pfn);
> >  	if (err)
> >  		return pmd ? VM_FAULT_FALLBACK : dax_fault_return(err);
> >
> >  	*entry = dax_insert_entry(xas, mapping, vmf, *entry, pfn, entry_flags,
> >  				  write && !sync);
> >
> > +	if (write &&
> > +	    srcmap->addr != IOMAP_HOLE && srcmap->addr != iomap->addr) {
> > +		err = dax_iomap_cow_copy(pos, size, size, srcmap, kaddr);
> > +		if (err)
> > +			return dax_fault_return(err);
> > +	}
> > +
> >  	if (sync)
> >  		return dax_fault_synchronous_pfnp(pfnp, pfn);
> >
> > --
> > 2.31.0
> >
> >
> >


Thread overview: 17+ messages
2021-04-08 12:04 [PATCH v4 0/7] fsdax,xfs: Add reflink&dedupe support for fsdax Shiyang Ruan
2021-04-08 12:04 ` [PATCH v4 1/7] fsdax: Introduce dax_iomap_cow_copy() Shiyang Ruan
2021-04-08 21:53   ` Darrick J. Wong
2021-04-09  2:30     ` ruansy.fnst
2021-04-08 12:04 ` [PATCH v4 2/7] fsdax: Replace mmap entry in case of CoW Shiyang Ruan
2021-04-08 22:11   ` Darrick J. Wong
2021-04-08 12:04 ` [PATCH v4 3/7] fsdax: Add dax_iomap_cow_copy() for dax_iomap_zero Shiyang Ruan
2021-04-08 12:04 ` [PATCH v4 4/7] iomap: Introduce iomap_apply2() for operations on two files Shiyang Ruan
2021-04-08 22:26   ` Darrick J. Wong
2021-04-08 12:04 ` [PATCH v4 5/7] fsdax: Dedup file range to use a compare function Shiyang Ruan
2021-04-08 22:33   ` Darrick J. Wong
2021-04-08 12:04 ` [PATCH v4 6/7] fs/xfs: Handle CoW for fsdax write() path Shiyang Ruan
2021-04-08 22:40   ` Darrick J. Wong
2021-04-08 12:04 ` [PATCH v4 7/7] fs/xfs: Add dedupe support for fsdax Shiyang Ruan
2021-04-08 13:12   ` Su Yue
2021-04-09  1:56     ` ruansy.fnst
2021-04-08 23:04   ` Darrick J. Wong
