* [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range
@ 2020-03-09 12:41 fdmanana
  2020-03-09 12:41 ` [PATCH v3 1/4] Btrfs: fix missing file extent item for hole after ranged fsync fdmanana
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: fdmanana @ 2020-03-09 12:41 UTC (permalink / raw)
  To: linux-btrfs; +Cc: josef

From: Filipe Manana <fdmanana@suse.com>

This patchset fixes a bug when not using NO_HOLES and makes ranged fsyncs
respect the given file range when using the NO_HOLES feature.

The bug is about missing file extent items representing a hole after doing
a ranged fsync on a file and replaying the log.

Btrfs doesn't respect the given range for an fsync when the inode has the
"need full sync" bit set - it treats the fsync as a full ranged one, operating
on the whole file and doing more IO and CPU work than needed.

That behaviour was needed to fix a corruption bug. Commit 0c713cbab6200b
("Btrfs: fix race between ranged fsync and writeback of adjacent ranges")
fixed that bug by turning the ranged fsync into a full ranged one.

Later the hole detection code of fsync was simplified a lot in order to
fix another bug when using the NO_HOLES feature - done by commit
0e56315ca147b3 ("Btrfs: fix missing hole after hole punching and fsync when
using NO_HOLES"). That commit now makes it easy to avoid turning the ranged
fsyncs into non-ranged fsyncs.

This patchset does those two changes. The first patch fixes the bug mentioned
before, patches 2 and 3 are preparation cleanups for patch 4, which is the
one that makes fsync respect the given file range when using NO_HOLES.

V3: Updated patch one so that the range is set to full before locking the
    inode, to make sure we do writeback and wait for ordered extent
    completion as much as possible before locking the inode.
    Remaining patches are unchanged.

V2: Added one more patch to the series, which is the first patch, that
    fixes the bug regarding missing holes after doing a ranged fsync.

    The remaining patches remain the same, only patch 4 had a trivial
    conflict when rebasing against patch 1 and got its changelog
    updated. Now all fstests pass with version 2 of this patchset.

Filipe Manana (4):
  Btrfs: fix missing file extent item for hole after ranged fsync
  Btrfs: add helper to get the end offset of a file extent item
  Btrfs: factor out inode items copy loop from btrfs_log_inode()
  Btrfs: make ranged full fsyncs more efficient

 fs/btrfs/ctree.h     |   1 +
 fs/btrfs/file-item.c |  40 ++++--
 fs/btrfs/file.c      |  23 ++--
 fs/btrfs/inode.c     |  10 +-
 fs/btrfs/send.c      |  44 +-----
 fs/btrfs/tree-log.c  | 379 +++++++++++++++++++++++++++++----------------------
 6 files changed, 261 insertions(+), 236 deletions(-)

-- 
2.11.0



* [PATCH v3 1/4] Btrfs: fix missing file extent item for hole after ranged fsync
  2020-03-09 12:41 [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range fdmanana
@ 2020-03-09 12:41 ` fdmanana
  2020-03-09 12:41 ` [PATCH v3 2/4] Btrfs: add helper to get the end offset of a file extent item fdmanana
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: fdmanana @ 2020-03-09 12:41 UTC (permalink / raw)
  To: linux-btrfs; +Cc: josef

From: Filipe Manana <fdmanana@suse.com>

When doing a fast fsync for a range that starts at an offset greater than
zero, we can end up with a log that, when replayed, causes the respective
inode to miss a file extent item representing a hole if we are not using the
NO_HOLES feature. This is because for fast fsyncs we don't log any extents
that cover a range different from the one requested in the fsync.

Example scenario to trigger it:

  $ mkfs.btrfs -O ^no-holes -f /dev/sdd
  $ mount /dev/sdd /mnt

  # Create a file with a single 256K extent and fsync it to clear the full
  # sync bit in the inode - we want the msync below to trigger a fast fsync.
  $ xfs_io -f -c "pwrite -S 0xab 0 256K" -c "fsync" /mnt/foo

  # Force a transaction commit and wipe out the log tree.
  $ sync

  # Dirty 768K of data, increasing the file size to 1M, and flush only
  # the range from 256K to 512K without updating the log tree
  # (sync_file_range() does not trigger fsync, it only starts writeback
  # and waits for it to finish).

  $ xfs_io -c "pwrite -S 0xcd 256K 768K" /mnt/foo
  $ xfs_io -c "sync_range -abw 256K 256K" /mnt/foo

  # Now dirty the range from 768K to 1M again and sync that range.
  $ xfs_io -c "mmap -w 768K 256K"        \
           -c "mwrite -S 0xef 768K 256K" \
           -c "msync -s 768K 256K"       \
           -c "munmap"                   \
           /mnt/foo

  <power fail>

  # Mount to replay the log.
  $ mount /dev/sdd /mnt
  $ umount /mnt

  $ btrfs check /dev/sdd
  Opening filesystem to check...
  Checking filesystem on /dev/sdd
  UUID: 482fb574-b288-478e-a190-a9c44a78fca6
  [1/7] checking root items
  [2/7] checking extents
  [3/7] checking free space cache
  [4/7] checking fs roots
  root 5 inode 257 errors 100, file extent discount
  Found file extent holes:
       start: 262144, len: 524288
  ERROR: errors found in fs roots
  found 720896 bytes used, error(s) found
  total csum bytes: 512
  total tree bytes: 131072
  total fs tree bytes: 32768
  total extent tree bytes: 16384
  btree space waste bytes: 123514
  file data blocks allocated: 589824
    referenced 589824
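
For reference, the same reproducer expressed directly in terms of the
syscalls behind the xfs_io commands above (an untested sketch, error handling
omitted, operating on /mnt/foo as above):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdlib.h>
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          char *buf = malloc(768 * 1024);
          char *map;
          int fd = open("/mnt/foo", O_RDWR | O_CREAT, 0644);

          /* Write a single 256K extent and fsync to clear the full sync bit. */
          memset(buf, 0xab, 256 * 1024);
          pwrite(fd, buf, 256 * 1024, 0);
          fsync(fd);
          sync();         /* commit the transaction and wipe out the log tree */

          /* Dirty 768K at offset 256K, then write back only [256K, 512K). */
          memset(buf, 0xcd, 768 * 1024);
          pwrite(fd, buf, 768 * 1024, 256 * 1024);
          sync_file_range(fd, 256 * 1024, 256 * 1024,
                          SYNC_FILE_RANGE_WAIT_BEFORE | SYNC_FILE_RANGE_WRITE |
                          SYNC_FILE_RANGE_WAIT_AFTER);

          /* Dirty [768K, 1M) through a mapping and msync it (fast fsync). */
          map = mmap(NULL, 256 * 1024, PROT_READ | PROT_WRITE, MAP_SHARED,
                     fd, 768 * 1024);
          memset(map, 0xef, 256 * 1024);
          msync(map, 256 * 1024, MS_SYNC);
          munmap(map, 256 * 1024);

          close(fd);
          free(buf);
          return 0;       /* the power failure happens after this point */
  }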

Fix this issue by setting the range to full (0 to LLONG_MAX) when the
NO_HOLES feature is not enabled. This results in extra work being done
but it guarantees that we don't end up with missing holes after
replaying the log.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
 fs/btrfs/file.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 31c72371a164..4a536387e992 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -2072,6 +2072,16 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
 	btrfs_init_log_ctx(&ctx, inode);
 
 	/*
+	 * Set the range to full if the NO_HOLES feature is not enabled.
+	 * This is to avoid missing file extent items representing holes after
+	 * replaying the log.
+	 */
+	if (!btrfs_fs_incompat(fs_info, NO_HOLES)) {
+		start = 0;
+		end = LLONG_MAX;
+	}
+
+	/*
 	 * We write the dirty pages in the range and wait until they complete
 	 * out of the ->i_mutex. If so, we can flush the dirty pages by
 	 * multi-task, and make the performance up.  See
-- 
2.11.0



* [PATCH v3 2/4] Btrfs: add helper to get the end offset of a file extent item
  2020-03-09 12:41 [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range fdmanana
  2020-03-09 12:41 ` [PATCH v3 1/4] Btrfs: fix missing file extent item for hole after ranged fsync fdmanana
@ 2020-03-09 12:41 ` fdmanana
  2020-03-09 12:41 ` [PATCH v3 3/4] Btrfs: factor out inode items copy loop from btrfs_log_inode() fdmanana
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: fdmanana @ 2020-03-09 12:41 UTC (permalink / raw)
  To: linux-btrfs; +Cc: josef

From: Filipe Manana <fdmanana@suse.com>

Getting the end offset for a file extent item requires a bit of code since
the extent can be either inline or regular/prealloc. Several places all over
the code base open code this logic, and another patch later in this series
will need it again. Therefore encapsulate this logic in a helper function
and use it.
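
For illustration, a typical call site would look roughly like the following
sketch (modeled on the existing callers, with ino, start, root and path
standing for the caller's usual variables; not code taken from this patch):

  struct btrfs_key key;
  u64 extent_end;
  int ret;

  key.objectid = ino;
  key.type = BTRFS_EXTENT_DATA_KEY;
  key.offset = start;

  ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
  if (ret < 0)
          return ret;
  /* ... make sure path->slots[0] points at a file extent item ... */
  extent_end = btrfs_file_extent_end(path);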

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
 fs/btrfs/ctree.h     |  1 +
 fs/btrfs/file-item.c | 40 ++++++++++++++++++++++++++++------------
 fs/btrfs/inode.c     | 10 +---------
 fs/btrfs/send.c      | 44 +++-----------------------------------------
 fs/btrfs/tree-log.c  | 15 +--------------
 5 files changed, 34 insertions(+), 76 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index ecd016f7dab1..3dc417620120 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -2837,6 +2837,7 @@ int btrfs_inode_clear_file_extent_range(struct btrfs_inode *inode, u64 start,
 int btrfs_inode_set_file_extent_range(struct btrfs_inode *inode, u64 start,
 				      u64 len);
 void btrfs_inode_safe_disk_i_size_write(struct inode *inode, u64 new_i_size);
+u64 btrfs_file_extent_end(const struct btrfs_path *path);
 
 /* inode.c */
 struct extent_map *btrfs_get_extent_fiemap(struct btrfs_inode *inode,
diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
index 6c849e8fd5a1..b618ad5339ba 100644
--- a/fs/btrfs/file-item.c
+++ b/fs/btrfs/file-item.c
@@ -1040,18 +1040,7 @@ void btrfs_extent_item_to_extent_map(struct btrfs_inode *inode,
 
 	btrfs_item_key_to_cpu(leaf, &key, slot);
 	extent_start = key.offset;
-
-	if (type == BTRFS_FILE_EXTENT_REG ||
-	    type == BTRFS_FILE_EXTENT_PREALLOC) {
-		extent_end = extent_start +
-			btrfs_file_extent_num_bytes(leaf, fi);
-	} else if (type == BTRFS_FILE_EXTENT_INLINE) {
-		size_t size;
-		size = btrfs_file_extent_ram_bytes(leaf, fi);
-		extent_end = ALIGN(extent_start + size,
-				   fs_info->sectorsize);
-	}
-
+	extent_end = btrfs_file_extent_end(path);
 	em->ram_bytes = btrfs_file_extent_ram_bytes(leaf, fi);
 	if (type == BTRFS_FILE_EXTENT_REG ||
 	    type == BTRFS_FILE_EXTENT_PREALLOC) {
@@ -1098,3 +1087,30 @@ void btrfs_extent_item_to_extent_map(struct btrfs_inode *inode,
 			  root->root_key.objectid);
 	}
 }
+
+/*
+ * Returns the end offset (non inclusive) of the file extent item the given path
+ * points to. If it points to an inline extent, the returned offset is rounded
+ * up to the sector size.
+ */
+u64 btrfs_file_extent_end(const struct btrfs_path *path)
+{
+	const struct extent_buffer *leaf = path->nodes[0];
+	const int slot = path->slots[0];
+	struct btrfs_file_extent_item *fi;
+	struct btrfs_key key;
+	u64 end;
+
+	btrfs_item_key_to_cpu(leaf, &key, slot);
+	ASSERT(key.type == BTRFS_EXTENT_DATA_KEY);
+	fi = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);
+
+	if (btrfs_file_extent_type(leaf, fi) == BTRFS_FILE_EXTENT_INLINE) {
+		end = btrfs_file_extent_ram_bytes(leaf, fi);
+		end = ALIGN(key.offset + end, leaf->fs_info->sectorsize);
+	} else {
+		end = key.offset + btrfs_file_extent_num_bytes(leaf, fi);
+	}
+
+	return end;
+}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 8a3bc19d83ff..3a636b405088 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6520,6 +6520,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
 
 	extent_type = btrfs_file_extent_type(leaf, item);
 	extent_start = found_key.offset;
+	extent_end = btrfs_file_extent_end(path);
 	if (extent_type == BTRFS_FILE_EXTENT_REG ||
 	    extent_type == BTRFS_FILE_EXTENT_PREALLOC) {
 		/* Only regular file could have regular/prealloc extent */
@@ -6530,18 +6531,9 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
 				   btrfs_ino(inode));
 			goto out;
 		}
-		extent_end = extent_start +
-		       btrfs_file_extent_num_bytes(leaf, item);
-
 		trace_btrfs_get_extent_show_fi_regular(inode, leaf, item,
 						       extent_start);
 	} else if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
-		size_t size;
-
-		size = btrfs_file_extent_ram_bytes(leaf, item);
-		extent_end = ALIGN(extent_start + size,
-				   fs_info->sectorsize);
-
 		trace_btrfs_get_extent_show_fi_inline(inode, leaf, item,
 						      path->slots[0],
 						      extent_start);
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index 6b86841315be..e47f768cec3d 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -5586,10 +5586,7 @@ static int get_last_extent(struct send_ctx *sctx, u64 offset)
 {
 	struct btrfs_path *path;
 	struct btrfs_root *root = sctx->send_root;
-	struct btrfs_file_extent_item *fi;
 	struct btrfs_key key;
-	u64 extent_end;
-	u8 type;
 	int ret;
 
 	path = alloc_path_for_send();
@@ -5609,18 +5606,7 @@ static int get_last_extent(struct send_ctx *sctx, u64 offset)
 	if (key.objectid != sctx->cur_ino || key.type != BTRFS_EXTENT_DATA_KEY)
 		goto out;
 
-	fi = btrfs_item_ptr(path->nodes[0], path->slots[0],
-			    struct btrfs_file_extent_item);
-	type = btrfs_file_extent_type(path->nodes[0], fi);
-	if (type == BTRFS_FILE_EXTENT_INLINE) {
-		u64 size = btrfs_file_extent_ram_bytes(path->nodes[0], fi);
-		extent_end = ALIGN(key.offset + size,
-				   sctx->send_root->fs_info->sectorsize);
-	} else {
-		extent_end = key.offset +
-			btrfs_file_extent_num_bytes(path->nodes[0], fi);
-	}
-	sctx->cur_inode_last_extent = extent_end;
+	sctx->cur_inode_last_extent = btrfs_file_extent_end(path);
 out:
 	btrfs_free_path(path);
 	return ret;
@@ -5674,16 +5660,7 @@ static int range_is_hole_in_parent(struct send_ctx *sctx,
 			break;
 
 		fi = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);
-		if (btrfs_file_extent_type(leaf, fi) ==
-		    BTRFS_FILE_EXTENT_INLINE) {
-			u64 size = btrfs_file_extent_ram_bytes(leaf, fi);
-
-			extent_end = ALIGN(key.offset + size,
-					   root->fs_info->sectorsize);
-		} else {
-			extent_end = key.offset +
-				btrfs_file_extent_num_bytes(leaf, fi);
-		}
+		extent_end = btrfs_file_extent_end(path);
 		if (extent_end <= start)
 			goto next;
 		if (btrfs_file_extent_disk_bytenr(leaf, fi) == 0) {
@@ -5704,9 +5681,6 @@ static int range_is_hole_in_parent(struct send_ctx *sctx,
 static int maybe_send_hole(struct send_ctx *sctx, struct btrfs_path *path,
 			   struct btrfs_key *key)
 {
-	struct btrfs_file_extent_item *fi;
-	u64 extent_end;
-	u8 type;
 	int ret = 0;
 
 	if (sctx->cur_ino != key->objectid || !need_send_hole(sctx))
@@ -5718,18 +5692,6 @@ static int maybe_send_hole(struct send_ctx *sctx, struct btrfs_path *path,
 			return ret;
 	}
 
-	fi = btrfs_item_ptr(path->nodes[0], path->slots[0],
-			    struct btrfs_file_extent_item);
-	type = btrfs_file_extent_type(path->nodes[0], fi);
-	if (type == BTRFS_FILE_EXTENT_INLINE) {
-		u64 size = btrfs_file_extent_ram_bytes(path->nodes[0], fi);
-		extent_end = ALIGN(key->offset + size,
-				   sctx->send_root->fs_info->sectorsize);
-	} else {
-		extent_end = key->offset +
-			btrfs_file_extent_num_bytes(path->nodes[0], fi);
-	}
-
 	if (path->slots[0] == 0 &&
 	    sctx->cur_inode_last_extent < key->offset) {
 		/*
@@ -5755,7 +5717,7 @@ static int maybe_send_hole(struct send_ctx *sctx, struct btrfs_path *path,
 		else
 			ret = 0;
 	}
-	sctx->cur_inode_last_extent = extent_end;
+	sctx->cur_inode_last_extent = btrfs_file_extent_end(path);
 	return ret;
 }
 
diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
index 19c107be9ef6..b723ee03de26 100644
--- a/fs/btrfs/tree-log.c
+++ b/fs/btrfs/tree-log.c
@@ -4555,9 +4555,7 @@ static int btrfs_log_holes(struct btrfs_trans_handle *trans,
 		return ret;
 
 	while (true) {
-		struct btrfs_file_extent_item *extent;
 		struct extent_buffer *leaf = path->nodes[0];
-		u64 len;
 
 		if (path->slots[0] >= btrfs_header_nritems(path->nodes[0])) {
 			ret = btrfs_next_leaf(root, path);
@@ -4606,18 +4604,7 @@ static int btrfs_log_holes(struct btrfs_trans_handle *trans,
 			leaf = path->nodes[0];
 		}
 
-		extent = btrfs_item_ptr(leaf, path->slots[0],
-					struct btrfs_file_extent_item);
-		if (btrfs_file_extent_type(leaf, extent) ==
-		    BTRFS_FILE_EXTENT_INLINE) {
-			len = btrfs_file_extent_ram_bytes(leaf, extent);
-			prev_extent_end = ALIGN(key.offset + len,
-						fs_info->sectorsize);
-		} else {
-			len = btrfs_file_extent_num_bytes(leaf, extent);
-			prev_extent_end = key.offset + len;
-		}
-
+		prev_extent_end = btrfs_file_extent_end(path);
 		path->slots[0]++;
 		cond_resched();
 	}
-- 
2.11.0



* [PATCH v3 3/4] Btrfs: factor out inode items copy loop from btrfs_log_inode()
  2020-03-09 12:41 [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range fdmanana
  2020-03-09 12:41 ` [PATCH v3 1/4] Btrfs: fix missing file extent item for hole after ranged fsync fdmanana
  2020-03-09 12:41 ` [PATCH v3 2/4] Btrfs: add helper to get the end offset of a file extent item fdmanana
@ 2020-03-09 12:41 ` fdmanana
  2020-03-09 12:41 ` [PATCH v3 4/4] Btrfs: make ranged full fsyncs more efficient fdmanana
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: fdmanana @ 2020-03-09 12:41 UTC (permalink / raw)
  To: linux-btrfs; +Cc: josef

From: Filipe Manana <fdmanana@suse.com>

The function btrfs_log_inode() is quite large and so is its loop, which
iterates over the inode items from the fs/subvolume tree and copies them into
a log tree. Because this is a large loop inside a very large function
and because an upcoming patch in this series needs to add some more logic
inside that loop, move the loop into a helper function to make it a bit
more manageable.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
 fs/btrfs/tree-log.c | 277 ++++++++++++++++++++++++++--------------------------
 1 file changed, 141 insertions(+), 136 deletions(-)

diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
index b723ee03de26..bd4854ced55d 100644
--- a/fs/btrfs/tree-log.c
+++ b/fs/btrfs/tree-log.c
@@ -4941,6 +4941,141 @@ static int log_conflicting_inodes(struct btrfs_trans_handle *trans,
 	return ret;
 }
 
+static int copy_inode_items_to_log(struct btrfs_trans_handle *trans,
+				   struct btrfs_inode *inode,
+				   struct btrfs_key *min_key,
+				   const struct btrfs_key *max_key,
+				   struct btrfs_path *path,
+				   struct btrfs_path *dst_path,
+				   const u64 logged_isize,
+				   const bool recursive_logging,
+				   const int inode_only,
+				   struct btrfs_log_ctx *ctx,
+				   bool *need_log_inode_item)
+{
+	struct btrfs_root *root = inode->root;
+	int ins_start_slot = 0;
+	int ins_nr = 0;
+	int ret;
+
+	while (1) {
+		ret = btrfs_search_forward(root, min_key, path, trans->transid);
+		if (ret < 0)
+			return ret;
+		if (ret > 0) {
+			ret = 0;
+			break;
+		}
+again:
+		/* note, ins_nr might be > 0 here, cleanup outside the loop */
+		if (min_key->objectid != max_key->objectid)
+			break;
+		if (min_key->type > max_key->type)
+			break;
+
+		if (min_key->type == BTRFS_INODE_ITEM_KEY)
+			*need_log_inode_item = false;
+
+		if ((min_key->type == BTRFS_INODE_REF_KEY ||
+		     min_key->type == BTRFS_INODE_EXTREF_KEY) &&
+		    inode->generation == trans->transid &&
+		    !recursive_logging) {
+			u64 other_ino = 0;
+			u64 other_parent = 0;
+
+			ret = btrfs_check_ref_name_override(path->nodes[0],
+					path->slots[0], min_key, inode,
+					&other_ino, &other_parent);
+			if (ret < 0) {
+				return ret;
+			} else if (ret > 0 && ctx &&
+				   other_ino != btrfs_ino(BTRFS_I(ctx->inode))) {
+				if (ins_nr > 0) {
+					ins_nr++;
+				} else {
+					ins_nr = 1;
+					ins_start_slot = path->slots[0];
+				}
+				ret = copy_items(trans, inode, dst_path, path,
+						 ins_start_slot,
+						 ins_nr, inode_only,
+						 logged_isize);
+				if (ret < 0)
+					return ret;
+				ins_nr = 0;
+
+				ret = log_conflicting_inodes(trans, root, path,
+						ctx, other_ino, other_parent);
+				if (ret)
+					return ret;
+				btrfs_release_path(path);
+				goto next_key;
+			}
+		}
+
+		/* Skip xattrs, we log them later with btrfs_log_all_xattrs() */
+		if (min_key->type == BTRFS_XATTR_ITEM_KEY) {
+			if (ins_nr == 0)
+				goto next_slot;
+			ret = copy_items(trans, inode, dst_path, path,
+					 ins_start_slot,
+					 ins_nr, inode_only, logged_isize);
+			if (ret < 0)
+				return ret;
+			ins_nr = 0;
+			goto next_slot;
+		}
+
+		if (ins_nr && ins_start_slot + ins_nr == path->slots[0]) {
+			ins_nr++;
+			goto next_slot;
+		} else if (!ins_nr) {
+			ins_start_slot = path->slots[0];
+			ins_nr = 1;
+			goto next_slot;
+		}
+
+		ret = copy_items(trans, inode, dst_path, path,
+				 ins_start_slot, ins_nr, inode_only,
+				 logged_isize);
+		if (ret < 0)
+			return ret;
+		ins_nr = 1;
+		ins_start_slot = path->slots[0];
+next_slot:
+		path->slots[0]++;
+		if (path->slots[0] < btrfs_header_nritems(path->nodes[0])) {
+			btrfs_item_key_to_cpu(path->nodes[0], min_key,
+					      path->slots[0]);
+			goto again;
+		}
+		if (ins_nr) {
+			ret = copy_items(trans, inode, dst_path, path,
+					 ins_start_slot,
+					 ins_nr, inode_only, logged_isize);
+			if (ret < 0)
+				return ret;
+			ins_nr = 0;
+		}
+		btrfs_release_path(path);
+next_key:
+		if (min_key->offset < (u64)-1) {
+			min_key->offset++;
+		} else if (min_key->type < max_key->type) {
+			min_key->type++;
+			min_key->offset = 0;
+		} else {
+			break;
+		}
+	}
+	if (ins_nr)
+		ret = copy_items(trans, inode, dst_path, path,
+				 ins_start_slot, ins_nr, inode_only,
+				 logged_isize);
+
+	return ret;
+}
+
 /* log a single inode in the tree log.
  * At least one parent directory for this inode must exist in the tree
  * or be logged already.
@@ -4970,9 +5105,6 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 	struct btrfs_root *log = root->log_root;
 	int err = 0;
 	int ret;
-	int nritems;
-	int ins_start_slot = 0;
-	int ins_nr;
 	bool fast_search = false;
 	u64 ino = btrfs_ino(inode);
 	struct extent_map_tree *em_tree = &inode->extent_tree;
@@ -5103,139 +5235,12 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 		goto out_unlock;
 	}
 
-	while (1) {
-		ins_nr = 0;
-		ret = btrfs_search_forward(root, &min_key,
-					   path, trans->transid);
-		if (ret < 0) {
-			err = ret;
-			goto out_unlock;
-		}
-		if (ret != 0)
-			break;
-again:
-		/* note, ins_nr might be > 0 here, cleanup outside the loop */
-		if (min_key.objectid != ino)
-			break;
-		if (min_key.type > max_key.type)
-			break;
-
-		if (min_key.type == BTRFS_INODE_ITEM_KEY)
-			need_log_inode_item = false;
-
-		if ((min_key.type == BTRFS_INODE_REF_KEY ||
-		     min_key.type == BTRFS_INODE_EXTREF_KEY) &&
-		    inode->generation == trans->transid &&
-		    !recursive_logging) {
-			u64 other_ino = 0;
-			u64 other_parent = 0;
-
-			ret = btrfs_check_ref_name_override(path->nodes[0],
-					path->slots[0], &min_key, inode,
-					&other_ino, &other_parent);
-			if (ret < 0) {
-				err = ret;
-				goto out_unlock;
-			} else if (ret > 0 && ctx &&
-				   other_ino != btrfs_ino(BTRFS_I(ctx->inode))) {
-				if (ins_nr > 0) {
-					ins_nr++;
-				} else {
-					ins_nr = 1;
-					ins_start_slot = path->slots[0];
-				}
-				ret = copy_items(trans, inode, dst_path, path,
-						 ins_start_slot,
-						 ins_nr, inode_only,
-						 logged_isize);
-				if (ret < 0) {
-					err = ret;
-					goto out_unlock;
-				}
-				ins_nr = 0;
-
-				err = log_conflicting_inodes(trans, root, path,
-						ctx, other_ino, other_parent);
-				if (err)
-					goto out_unlock;
-				btrfs_release_path(path);
-				goto next_key;
-			}
-		}
-
-		/* Skip xattrs, we log them later with btrfs_log_all_xattrs() */
-		if (min_key.type == BTRFS_XATTR_ITEM_KEY) {
-			if (ins_nr == 0)
-				goto next_slot;
-			ret = copy_items(trans, inode, dst_path, path,
-					 ins_start_slot,
-					 ins_nr, inode_only, logged_isize);
-			if (ret < 0) {
-				err = ret;
-				goto out_unlock;
-			}
-			ins_nr = 0;
-			goto next_slot;
-		}
-
-		if (ins_nr && ins_start_slot + ins_nr == path->slots[0]) {
-			ins_nr++;
-			goto next_slot;
-		} else if (!ins_nr) {
-			ins_start_slot = path->slots[0];
-			ins_nr = 1;
-			goto next_slot;
-		}
-
-		ret = copy_items(trans, inode, dst_path, path,
-				 ins_start_slot, ins_nr, inode_only,
-				 logged_isize);
-		if (ret < 0) {
-			err = ret;
-			goto out_unlock;
-		}
-		ins_nr = 1;
-		ins_start_slot = path->slots[0];
-next_slot:
-
-		nritems = btrfs_header_nritems(path->nodes[0]);
-		path->slots[0]++;
-		if (path->slots[0] < nritems) {
-			btrfs_item_key_to_cpu(path->nodes[0], &min_key,
-					      path->slots[0]);
-			goto again;
-		}
-		if (ins_nr) {
-			ret = copy_items(trans, inode, dst_path, path,
-					 ins_start_slot,
-					 ins_nr, inode_only, logged_isize);
-			if (ret < 0) {
-				err = ret;
-				goto out_unlock;
-			}
-			ins_nr = 0;
-		}
-		btrfs_release_path(path);
-next_key:
-		if (min_key.offset < (u64)-1) {
-			min_key.offset++;
-		} else if (min_key.type < max_key.type) {
-			min_key.type++;
-			min_key.offset = 0;
-		} else {
-			break;
-		}
-	}
-	if (ins_nr) {
-		ret = copy_items(trans, inode, dst_path, path,
-				 ins_start_slot, ins_nr, inode_only,
-				 logged_isize);
-		if (ret < 0) {
-			err = ret;
-			goto out_unlock;
-		}
-		ins_nr = 0;
-	}
+	err = copy_inode_items_to_log(trans, inode, &min_key, &max_key,
+				      path, dst_path, logged_isize,
+				      recursive_logging, inode_only, ctx,
+				      &need_log_inode_item);
+	if (err)
+		goto out_unlock;
 
 	btrfs_release_path(path);
 	btrfs_release_path(dst_path);
-- 
2.11.0



* [PATCH v3 4/4] Btrfs: make ranged full fsyncs more efficient
  2020-03-09 12:41 [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range fdmanana
                   ` (2 preceding siblings ...)
  2020-03-09 12:41 ` [PATCH v3 3/4] Btrfs: factor out inode items copy loop from btrfs_log_inode() fdmanana
@ 2020-03-09 12:41 ` fdmanana
  2020-03-11 17:44 ` [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range Josef Bacik
  2020-03-12 20:33 ` David Sterba
  5 siblings, 0 replies; 9+ messages in thread
From: fdmanana @ 2020-03-09 12:41 UTC (permalink / raw)
  To: linux-btrfs; +Cc: josef

From: Filipe Manana <fdmanana@suse.com>

Commit 0c713cbab6200b ("Btrfs: fix race between ranged fsync and writeback
of adjacent ranges") fixed a bug where we could end up with file extent
items in a log tree that represent file ranges that overlap due to a race
between the hole detection of a ranged full fsync and writeback for a
different file range.

The problem was solved by forcing any ranged full fsync to become a
non-ranged full fsync - setting the range start to 0 and the end offset to
LLONG_MAX. This was a simple solution because the code that detected and
marked holes was very complex, it used to be done at copy_items() and
implied several searches on the fs/subvolume tree. The drawback of that
solution was that we started to flush delalloc for the entire file and
wait for all the ordered extents to complete for ranged full fsyncs
(including ordered extents covering ranges completely outside the given
range). Fortunately ranged full fsyncs are not the most common case
(hopefully for most workloads).

However a later fix for detecting and marking holes was made by commit
0e56315ca147b3 ("Btrfs: fix missing hole after hole punching and fsync
when using NO_HOLES") and it simplified the detection of holes a lot:
copy_items() no longer does it and we now do it in a much simpler way at
btrfs_log_holes().

This now makes it possible to have the hole detection code operate only on
the given range, instead of on the whole file, while also avoiding the need
to flush delalloc for the entire file and to wait for ordered extents that
cover ranges that don't overlap the given range.

Special care must also be taken to skip file extent items that fall
entirely outside the fsync range when copying inode items from the
fs/subvolume tree into the log tree. This is needed to avoid races with
ordered extent completion for extents falling outside the fsync range,
which could cause us to end up with file extent items in the log tree that
have overlapping ranges. For example, if the fsync range is [1M, 2M], when
we copy inode items we could copy an extent item for the range [0, 512K],
then release the search path and, before moving to the next leaf, an
ordered extent for the range [256K, 512K] completes. This would cause us
to copy the new extent item for the range [256K, 512K] into the log tree
after we have copied one for the range [0, 512K] - the extents overlap,
resulting in a corruption.

So this change just does these steps:

1) When the NO_HOLES feature is enabled, leave the initial range intact -
   no longer set it to [0, LLONG_MAX] when the full sync bit is set in the
   inode. If NO_HOLES is not enabled, always set the range to a full one,
   just like before this change, to avoid missing file extent items
   representing holes after replaying the log (for both full and fast
   fsyncs) - see the sketch after this list;

2) Make the hole detection code operate only on the fsync range;

3) Make the code that copies items from the fs/subvolume tree skip copying
   file extent items that cover a range completely outside the range of
   the fsync.
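
As a reference for 1), the range adjustment that remains at the top of
btrfs_sync_file() after this change is the one introduced by the first
patch in this series, roughly (a sketch, not the literal final code):

  /* Holes still need file extent items without NO_HOLES, so log it all. */
  if (!btrfs_fs_incompat(fs_info, NO_HOLES)) {
          start = 0;
          end = LLONG_MAX;
  }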

Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
 fs/btrfs/file.c     | 13 --------
 fs/btrfs/tree-log.c | 93 +++++++++++++++++++++++++++++++++++++++++++++--------
 2 files changed, 79 insertions(+), 27 deletions(-)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 4a536387e992..18c88f514a0d 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -2103,19 +2103,6 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
 	atomic_inc(&root->log_batch);
 
 	/*
-	 * If the inode needs a full sync, make sure we use a full range to
-	 * avoid log tree corruption, due to hole detection racing with ordered
-	 * extent completion for adjacent ranges, and assertion failures during
-	 * hole detection. Do this while holding the inode lock, to avoid races
-	 * with other tasks.
-	 */
-	if (test_bit(BTRFS_INODE_NEEDS_FULL_SYNC,
-		     &BTRFS_I(inode)->runtime_flags)) {
-		start = 0;
-		end = LLONG_MAX;
-	}
-
-	/*
 	 * Before we acquired the inode's lock, someone may have dirtied more
 	 * pages in the target range. We need to make sure that writeback for
 	 * any such pages does not start while we are logging the inode, because
diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
index bd4854ced55d..d8cc46517865 100644
--- a/fs/btrfs/tree-log.c
+++ b/fs/btrfs/tree-log.c
@@ -96,8 +96,8 @@ enum {
 static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 			   struct btrfs_root *root, struct btrfs_inode *inode,
 			   int inode_only,
-			   const loff_t start,
-			   const loff_t end,
+			   u64 start,
+			   u64 end,
 			   struct btrfs_log_ctx *ctx);
 static int link_to_fixup_dir(struct btrfs_trans_handle *trans,
 			     struct btrfs_root *root,
@@ -4534,13 +4534,15 @@ static int btrfs_log_all_xattrs(struct btrfs_trans_handle *trans,
 static int btrfs_log_holes(struct btrfs_trans_handle *trans,
 			   struct btrfs_root *root,
 			   struct btrfs_inode *inode,
-			   struct btrfs_path *path)
+			   struct btrfs_path *path,
+			   const u64 start,
+			   const u64 end)
 {
 	struct btrfs_fs_info *fs_info = root->fs_info;
 	struct btrfs_key key;
 	const u64 ino = btrfs_ino(inode);
 	const u64 i_size = i_size_read(&inode->vfs_inode);
-	u64 prev_extent_end = 0;
+	u64 prev_extent_end = start;
 	int ret;
 
 	if (!btrfs_fs_incompat(fs_info, NO_HOLES) || i_size == 0)
@@ -4548,14 +4550,21 @@ static int btrfs_log_holes(struct btrfs_trans_handle *trans,
 
 	key.objectid = ino;
 	key.type = BTRFS_EXTENT_DATA_KEY;
-	key.offset = 0;
+	key.offset = start;
 
 	ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
 	if (ret < 0)
 		return ret;
 
+	if (ret > 0 && path->slots[0] > 0) {
+		btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0] - 1);
+		if (key.objectid == ino && key.type == BTRFS_EXTENT_DATA_KEY)
+			path->slots[0]--;
+	}
+
 	while (true) {
 		struct extent_buffer *leaf = path->nodes[0];
+		u64 extent_end;
 
 		if (path->slots[0] >= btrfs_header_nritems(path->nodes[0])) {
 			ret = btrfs_next_leaf(root, path);
@@ -4572,9 +4581,18 @@ static int btrfs_log_holes(struct btrfs_trans_handle *trans,
 		if (key.objectid != ino || key.type != BTRFS_EXTENT_DATA_KEY)
 			break;
 
+		extent_end = btrfs_file_extent_end(path);
+		if (extent_end <= start)
+			goto next_slot;
+
 		/* We have a hole, log it. */
 		if (prev_extent_end < key.offset) {
-			const u64 hole_len = key.offset - prev_extent_end;
+			u64 hole_len;
+
+			if (key.offset >= end)
+				hole_len = end - prev_extent_end;
+			else
+				hole_len = key.offset - prev_extent_end;
 
 			/*
 			 * Release the path to avoid deadlocks with other code
@@ -4604,16 +4622,20 @@ static int btrfs_log_holes(struct btrfs_trans_handle *trans,
 			leaf = path->nodes[0];
 		}
 
-		prev_extent_end = btrfs_file_extent_end(path);
+		prev_extent_end = min(extent_end, end);
+		if (extent_end >= end)
+			break;
+next_slot:
 		path->slots[0]++;
 		cond_resched();
 	}
 
-	if (prev_extent_end < i_size) {
+	if (prev_extent_end < end && prev_extent_end < i_size) {
 		u64 hole_len;
 
 		btrfs_release_path(path);
-		hole_len = ALIGN(i_size - prev_extent_end, fs_info->sectorsize);
+		hole_len = min(ALIGN(i_size, fs_info->sectorsize), end);
+		hole_len -= prev_extent_end;
 		ret = btrfs_insert_file_extent(trans, root->log_root,
 					       ino, prev_extent_end, 0, 0,
 					       hole_len, 0, hole_len,
@@ -4950,6 +4972,8 @@ static int copy_inode_items_to_log(struct btrfs_trans_handle *trans,
 				   const u64 logged_isize,
 				   const bool recursive_logging,
 				   const int inode_only,
+				   const u64 start,
+				   const u64 end,
 				   struct btrfs_log_ctx *ctx,
 				   bool *need_log_inode_item)
 {
@@ -4958,6 +4982,21 @@ static int copy_inode_items_to_log(struct btrfs_trans_handle *trans,
 	int ins_nr = 0;
 	int ret;
 
+	/*
+	 * We must make sure we don't copy extent items that are entirely out of
+	 * the range [start, end - 1]. This is not just an optimization to avoid
+	 * copying but also needed to avoid a corruption where we end up with
+	 * file extent items in the log tree that have overlapping ranges - this
+	 * can happen if we race with ordered extent completion for ranges that
+	 * are outside our target range. For example we copy an extent item and
+	 * when we move to the next leaf, that extent was trimmed and a new one
+	 * covering a subrange of it, but with a higher key, was inserted - we
+	 * would then copy this other extent too, resulting in a log tree with
+	 * 2 extent items that represent overlapping ranges.
+	 *
+	 * We can copy the entire extents at the range boundaries however, even
+	 * if they cover an area outside the target range. That's ok.
+	 */
 	while (1) {
 		ret = btrfs_search_forward(root, min_key, path, trans->transid);
 		if (ret < 0)
@@ -5026,6 +5065,29 @@ static int copy_inode_items_to_log(struct btrfs_trans_handle *trans,
 			goto next_slot;
 		}
 
+		if (min_key->type == BTRFS_EXTENT_DATA_KEY) {
+			const u64 extent_end = btrfs_file_extent_end(path);
+
+			if (extent_end <= start) {
+				if (ins_nr > 0) {
+					ret = copy_items(trans, inode, dst_path,
+							 path, ins_start_slot,
+							 ins_nr, inode_only,
+							 logged_isize);
+					if (ret < 0)
+						return ret;
+					ins_nr = 0;
+				}
+				goto next_slot;
+			}
+			if (extent_end >= end) {
+				ins_nr++;
+				if (ins_nr == 1)
+					ins_start_slot = path->slots[0];
+				break;
+			}
+		}
+
 		if (ins_nr && ins_start_slot + ins_nr == path->slots[0]) {
 			ins_nr++;
 			goto next_slot;
@@ -5093,8 +5155,8 @@ static int copy_inode_items_to_log(struct btrfs_trans_handle *trans,
 static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 			   struct btrfs_root *root, struct btrfs_inode *inode,
 			   int inode_only,
-			   const loff_t start,
-			   const loff_t end,
+			   u64 start,
+			   u64 end,
 			   struct btrfs_log_ctx *ctx)
 {
 	struct btrfs_fs_info *fs_info = root->fs_info;
@@ -5122,6 +5184,9 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 		return -ENOMEM;
 	}
 
+	start = ALIGN_DOWN(start, fs_info->sectorsize);
+	end = ALIGN(end, fs_info->sectorsize);
+
 	min_key.objectid = ino;
 	min_key.type = BTRFS_INODE_ITEM_KEY;
 	min_key.offset = 0;
@@ -5237,8 +5302,8 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 
 	err = copy_inode_items_to_log(trans, inode, &min_key, &max_key,
 				      path, dst_path, logged_isize,
-				      recursive_logging, inode_only, ctx,
-				      &need_log_inode_item);
+				      recursive_logging, inode_only,
+				      start, end, ctx, &need_log_inode_item);
 	if (err)
 		goto out_unlock;
 
@@ -5251,7 +5316,7 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
 	if (max_key.type >= BTRFS_EXTENT_DATA_KEY && !fast_search) {
 		btrfs_release_path(path);
 		btrfs_release_path(dst_path);
-		err = btrfs_log_holes(trans, root, inode, path);
+		err = btrfs_log_holes(trans, root, inode, path, start, end);
 		if (err)
 			goto out_unlock;
 	}
-- 
2.11.0



* Re: [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range
  2020-03-09 12:41 [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range fdmanana
                   ` (3 preceding siblings ...)
  2020-03-09 12:41 ` [PATCH v3 4/4] Btrfs: make ranged full fsyncs more efficient fdmanana
@ 2020-03-11 17:44 ` Josef Bacik
  2020-03-12 20:33 ` David Sterba
  5 siblings, 0 replies; 9+ messages in thread
From: Josef Bacik @ 2020-03-11 17:44 UTC (permalink / raw)
  To: fdmanana, linux-btrfs

On 3/9/20 8:41 AM, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
> 
> This patchset fixes a bug when not using NO_HOLES and makes ranged fsyncs
> respect the given file range when using the NO_HOLES feature.
> 
> The bug is about missing file extent items representing a hole after doing
> a ranged fsync on a file and replaying the log.
> 
> Btrfs doesn't respect the given range for an fsync when the inode has the
> "need full sync" bit set - it treats the fsync as a full ranged one, operating
> on the whole file and doing more IO and CPU work than needed.
> 
> That behaviour was needed to fix a corruption bug. Commit 0c713cbab6200b
> ("Btrfs: fix race between ranged fsync and writeback of adjacent ranges")
> fixed that bug by turning the ranged fsync into a full ranged one.
> 
> Later the hole detection code of fsync was simplified a lot in order to
> fix another bug when using the NO_HOLES feature - done by commit
> 0e56315ca147b3 ("Btrfs: fix missing hole after hole punching and fsync when
> using NO_HOLES"). That commit now makes it easy to avoid turning the ranged
> fsyncs into non-ranged fsyncs.
> 
> This patchset does those two changes. The first patch fixes the bug mentioned
> before, patches 2 and 3 are preparation cleanups for patch 4, which is the
> one that makes fsync respect the given file range when using NO_HOLES.
> 
> V3: Updated patch one so that the range is set to full before locking the
>      inode, to make sure we do writeback and wait for ordered extent
>      completion as much as possible before locking the inode.
>      Remaining patches are unchanged.
> 
> V2: Added one more patch to the series, which is the first patch, that
>      fixes the bug regarding missing holes after doing a ranged fsync.
> 
>      The remaining patches remain the same, only patch 4 had a trivial
>      conflict when rebasing against patch 1 and got its changelog
>      updated. Now all fstests pass with version 2 of this patchset.
> 
> Filipe Manana (4):
>    Btrfs: fix missing file extent item for hole after ranged fsync
>    Btrfs: add helper to get the end offset of a file extent item
>    Btrfs: factor out inode items copy loop from btrfs_log_inode()
>    Btrfs: make ranged full fsyncs more efficient
> 

You can add

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

to the whole series.  Thanks,

Josef


* Re: [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range
  2020-03-09 12:41 [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range fdmanana
                   ` (4 preceding siblings ...)
  2020-03-11 17:44 ` [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range Josef Bacik
@ 2020-03-12 20:33 ` David Sterba
  2020-03-13 11:01   ` Filipe Manana
  5 siblings, 1 reply; 9+ messages in thread
From: David Sterba @ 2020-03-12 20:33 UTC (permalink / raw)
  To: fdmanana; +Cc: linux-btrfs, josef

On Mon, Mar 09, 2020 at 12:41:04PM +0000, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
> 
> This patchset fixes a bug when not using NO_HOLES and makes ranged fsyncs
> respect the given file range when using the NO_HOLES feature.
> 
> The bug is about missing file extent items representing a hole after doing
> a ranged fsync on a file and replaying the log.
> 
> Btrfs doesn't respect the given range for an fsync when the inode has the
> "need full sync" bit set - it treats the fsync as a full ranged one, operating
> on the whole file and doing more IO and CPU work than needed.
> 
> That behaviour was needed to fix a corruption bug. Commit 0c713cbab6200b
> ("Btrfs: fix race between ranged fsync and writeback of adjacent ranges")
> fixed that bug by turning the ranged fsync into a full ranged one.
> 
> Later the hole detection code of fsync was simplified a lot in order to
> fix another bug when using the NO_HOLES feature - done by commit
> 0e56315ca147b3 ("Btrfs: fix missing hole after hole punching and fsync when
> using NO_HOLES"). That commit now makes it easy to avoid turning the ranged
> fsyncs into non-ranged fsyncs.
> 
> This patchset does those two changes. The first patch fixes the bug mentioned
> before, patches 2 and 3 are preparation cleanups for patch 4, which is the
> one that makes fsync respect the given file range when using NO_HOLES.
> 
> V3: Updated patch one so that the range is set to full before locking the
>     inode, to make sure we do writeback and wait for ordered extent
>     completion as much as possible before locking the inode.
>     Remaining patches are unchanged.
> 
> V2: Added one more patch to the series, which is the first patch, that
>     fixes the bug regarding missing holes after doing a ranged fsync.
> 
>     The remaining patches remain the same, only patch 4 had a trivial
>     conflict when rebasing against patch 1 and got its changelog
>     updated. Now all fstests pass with version 2 of this patchset.
> 
> Filipe Manana (4):
>   Btrfs: fix missing file extent item for hole after ranged fsync
>   Btrfs: add helper to get the end offset of a file extent item
>   Btrfs: factor out inode items copy loop from btrfs_log_inode()
>   Btrfs: make ranged full fsyncs more efficient

Moved from for-next to misc-next, thanks.


* Re: [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range
  2020-03-12 20:33 ` David Sterba
@ 2020-03-13 11:01   ` Filipe Manana
  2020-03-13 13:17     ` David Sterba
  0 siblings, 1 reply; 9+ messages in thread
From: Filipe Manana @ 2020-03-13 11:01 UTC (permalink / raw)
  To: dsterba, Filipe Manana, linux-btrfs, Josef Bacik

On Thu, Mar 12, 2020 at 8:33 PM David Sterba <dsterba@suse.cz> wrote:
>
> On Mon, Mar 09, 2020 at 12:41:04PM +0000, fdmanana@kernel.org wrote:
> > From: Filipe Manana <fdmanana@suse.com>
> >
> > This patchset fixes a bug when not using NO_HOLES and makes ranged fsyncs
> > respect the given file range when using the NO_HOLES feature.
> >
> > The bug is about missing file extent items representing a hole after doing
> > a ranged fsync on a file and replaying the log.
> >
> > Btrfs doesn't respect the given range for an fsync when the inode has the
> > "need full sync" bit set - it treats the fsync as a full ranged one, operating
> > on the whole file and doing more IO and CPU work than needed.
> >
> > That behaviour was needed to fix a corruption bug. Commit 0c713cbab6200b
> > ("Btrfs: fix race between ranged fsync and writeback of adjacent ranges")
> > fixed that bug by turning the ranged fsync into a full ranged one.
> >
> > Later the hole detection code of fsync was simplified a lot in order to
> > fix another bug when using the NO_HOLES feature - done by commit
> > 0e56315ca147b3 ("Btrfs: fix missing hole after hole punching and fsync when
> > using NO_HOLES"). That commit now makes it easy to avoid turning the ranged
> > fsyncs into non-ranged fsyncs.
> >
> > This patchset does those two changes. The first patch fixes the bug mentioned
> > before, patches 2 and 3 are preparation cleanups for patch 4, which is the
> > one that makes fsync respect the given file range when using NO_HOLES.
> >
> > V3: Updated patch one so that the range is set to full before locking the
> >     inode, to make sure we do writeback and wait for ordered extent
> >     completion as much as possible before locking the inode.
> >     Remaining patches are unchanged.
> >
> > V2: Added one more patch to the series, which is the first patch, that
> >     fixes the bug regarding missing holes after doing a ranged fsync.
> >
> >     The remaining patches remain the same, only patch 4 had a trivial
> >     conflict when rebasing against patch 1 and got its changelog
> >     updated. Now all fstests pass with version 2 of this patchset.
> >
> > Filipe Manana (4):
> >   Btrfs: fix missing file extent item for hole after ranged fsync
> >   Btrfs: add helper to get the end offset of a file extent item
> >   Btrfs: factor out inode items copy loop from btrfs_log_inode()
> >   Btrfs: make ranged full fsyncs more efficient
>
> Moved from for-next to misc-next, thanks.

Btw, Josef's reviewed-by tag is missing in the changelog of patch 4
(however there's an unusual reviewed-by tag from you).
Thanks.


* Re: [PATCH v3 0/4] Btrfs: make ranged fsyncs always respect the given range
  2020-03-13 11:01   ` Filipe Manana
@ 2020-03-13 13:17     ` David Sterba
  0 siblings, 0 replies; 9+ messages in thread
From: David Sterba @ 2020-03-13 13:17 UTC (permalink / raw)
  To: Filipe Manana; +Cc: dsterba, linux-btrfs, Josef Bacik

On Fri, Mar 13, 2020 at 11:01:35AM +0000, Filipe Manana wrote:
> > > Filipe Manana (4):
> > >   Btrfs: fix missing file extent item for hole after ranged fsync
> > >   Btrfs: add helper to get the end offset of a file extent item
> > >   Btrfs: factor out inode items copy loop from btrfs_log_inode()
> > >   Btrfs: make ranged full fsyncs more efficient
> >
> > Moved from for-next to misc-next, thanks.
> 
> Btw, Josef's reviewed-by tag is missing in the changelog of patch 4
> (however there's an unusual reviewed-by tag from you).

I meant to put my rev-by on the cleanup patches; for the 4th patch I'll
update it.

