* [PATCH v7 0/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation
@ 2020-06-28 11:39 Qu Wenruo
  2020-06-28 11:39 ` [PATCH v7 1/3] " Qu Wenruo
                   ` (4 more replies)
  0 siblings, 5 replies; 6+ messages in thread
From: Qu Wenruo @ 2020-06-28 11:39 UTC (permalink / raw)
  To: linux-btrfs

Before this patch, btrfs_truncate_block() never checks the NODATACOW
bit, thus when we run out of data space we can return ENOSPC when
truncating a NODATACOW inode.

This patchset addresses the problem by following the same behavior as
the buffered write path.
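
For readers skimming the thread, a condensed sketch of that fallback
pattern, distilled from the btrfs_truncate_block() hunk in patch 1
(simplified, error paths and cleanup trimmed; not literal code):

  size_t write_bytes = blocksize;

  /* Try the normal data space reservation first. */
  ret = btrfs_check_data_free_space(inode, &data_reserved, block_start,
                                    blocksize);
  if (ret < 0) {
      /*
       * Out of data space: for NODATACOW/PREALLOC inodes, fall back to
       * the nocow check, which needs no data space reservation at all.
       */
      if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
                                    BTRFS_INODE_PREALLOC)) &&
          btrfs_check_can_nocow(BTRFS_I(inode), block_start,
                                &write_bytes, false) > 0)
          only_release_metadata = true;
      else
          return ret;
  }
  /* Metadata is reserved either way. */
  ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode), blocksize);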

Changelog:
v2:
- Rebased to misc-next
  Only one minor conflict in ctree.h

v3:
- Added two new patches
- Refactor check_can_nocow()
  Since the introduction of nowait, check_can_nocow() is in fact split
  into two usage patterns: check_can_nocow(nowait = false) paired with
  btrfs_drew_write_unlock(), and standalone check_can_nocow(nowait = true).
  Refactor them into two variants: start_nocow_check() paired with
  end_nocow_check(), and standalone try_nocow_check(), with comments added.

- Rebased to latest misc-next

- Added btrfs_assert_drew_write_locked() for btrfs_end_nocow_check()
  This one is a little concerning, as it sits in the hot path of
  buffered write.
  It calls percpu_counter_sum() in that hot path, causing an obvious
  performance drop for CONFIG_BTRFS_DEBUG builds.
  Not sure if the assert is worth it, since there aren't any other users.

v4:
- Rebased to latest misc-next

- Comment update to follow the relaxed kernel-doc format

- Re-order the patches
  So the fix comes first, followed by the refactors.

- Naming updates for the refactor patch
  Now the exported pair is btrfs_check_nocow_lock() and
  btrfs_check_nocow_unlock().

- Remove the btrfs_assert_drew_write_lock()
  It's extremely slow for btrfs/187, and we only have two call sites so
  far, thus it's not really worth it.

v5:
- Fix a stupid bug where check_nocow_nolock() passed the wrong bool

- Update the comment for btrfs_check_nocow_lock() as there is no
  "nowait" parameter anymore

v6:
- Replace one btrfs_drew_write_unlock() with the wrapper

v7:
- Fix a "BTRF_I()" typo
  Reported-by: kernel test robot <lkp@intel.com>

Qu Wenruo (3):
  btrfs: allow btrfs_truncate_block() to fallback to nocow for data
    space reservation
  btrfs: add comments for btrfs_check_can_nocow() and can_nocow_extent()
  btrfs: refactor btrfs_check_can_nocow() into two variants

 fs/btrfs/ctree.h |  3 +++
 fs/btrfs/file.c  | 61 +++++++++++++++++++++++++++++++++++-----------
 fs/btrfs/inode.c | 63 +++++++++++++++++++++++++++++++++++++++++-------
 3 files changed, 104 insertions(+), 23 deletions(-)

-- 
2.27.0



* [PATCH v7 1/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation
  2020-06-28 11:39 [PATCH v7 0/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation Qu Wenruo
@ 2020-06-28 11:39 ` Qu Wenruo
  2020-06-28 11:39 ` [PATCH v7 2/3] btrfs: add comments for btrfs_check_can_nocow() and can_nocow_extent() Qu Wenruo
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Qu Wenruo @ 2020-06-28 11:39 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Martin Doucha, Filipe Manana, Anand Jain

[BUG]
When the data space is exhausted, even if the inode has the NOCOW
attribute, btrfs will still refuse to truncate an unaligned range due
to ENOSPC.

The following script can reproduce it pretty easily:
  #!/bin/bash

  dev=/dev/test/test
  mnt=/mnt/btrfs

  umount $dev &> /dev/null
  umount $mnt &> /dev/null

  mkfs.btrfs -f $dev -b 1G
  mount -o nospace_cache $dev $mnt
  touch $mnt/foobar
  chattr +C $mnt/foobar

  xfs_io -f -c "pwrite -b 4k 0 4k" $mnt/foobar > /dev/null
  xfs_io -f -c "pwrite -b 4k 0 1G" $mnt/padding &> /dev/null
  sync

  xfs_io -c "fpunch 0 2k" $mnt/foobar
  umount $mnt

Current btrfs will fail at the fpunch part.

[CAUSE]
Because btrfs_truncate_block() always reserves space without checking
the NOCOW attribute.

Since the writeback path already follows the NOCOW bit, we only need to
touch the space reservation code in btrfs_truncate_block().

[FIX]
Make btrfs_truncate_block() follow btrfs_buffered_write(): try to
reserve data space first, and fall back to the NOCOW check only when we
don't have enough space.

This try-reserve-first approach is an optimization introduced in
btrfs_buffered_write() to avoid the expensive btrfs_check_can_nocow()
call.

This patch exports check_can_nocow() as btrfs_check_can_nocow() and
uses it in btrfs_truncate_block() to fix the problem.

Reported-by: Martin Doucha <martin.doucha@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Anand Jain <anand.jain@oracle.com>
---
 fs/btrfs/ctree.h |  2 ++
 fs/btrfs/file.c  | 12 ++++++------
 fs/btrfs/inode.c | 44 +++++++++++++++++++++++++++++++++++++-------
 3 files changed, 45 insertions(+), 13 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index d8301bf240e0..b7e14189da32 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -3035,6 +3035,8 @@ int btrfs_dirty_pages(struct inode *inode, struct page **pages,
 		      size_t num_pages, loff_t pos, size_t write_bytes,
 		      struct extent_state **cached);
 int btrfs_fdatawrite_range(struct inode *inode, loff_t start, loff_t end);
+int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
+			  size_t *write_bytes, bool nowait);
 
 /* tree-defrag.c */
 int btrfs_defrag_leaves(struct btrfs_trans_handle *trans,
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index fccf5862cd3e..0115ef7f7943 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1533,8 +1533,8 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
 	return ret;
 }
 
-static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
-				    size_t *write_bytes, bool nowait)
+int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
+			  size_t *write_bytes, bool nowait)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct btrfs_root *root = inode->root;
@@ -1649,8 +1649,8 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
 		if (ret < 0) {
 			if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
 						      BTRFS_INODE_PREALLOC)) &&
-			    check_can_nocow(BTRFS_I(inode), pos,
-					    &write_bytes, false) > 0) {
+			    btrfs_check_can_nocow(BTRFS_I(inode), pos,
+						  &write_bytes, false) > 0) {
 				/*
 				 * For nodata cow case, no need to reserve
 				 * data space.
@@ -1927,8 +1927,8 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
 		 */
 		if (!(BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
 					      BTRFS_INODE_PREALLOC)) ||
-		    check_can_nocow(BTRFS_I(inode), pos, &nocow_bytes,
-				    true) <= 0) {
+		    btrfs_check_can_nocow(BTRFS_I(inode), pos, &nocow_bytes,
+					  true) <= 0) {
 			inode_unlock(inode);
 			return -EAGAIN;
 		}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 86f7aa377da9..9d9ec7838f13 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4519,11 +4519,13 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
 	struct extent_state *cached_state = NULL;
 	struct extent_changeset *data_reserved = NULL;
 	char *kaddr;
+	bool only_release_metadata = false;
 	u32 blocksize = fs_info->sectorsize;
 	pgoff_t index = from >> PAGE_SHIFT;
 	unsigned offset = from & (blocksize - 1);
 	struct page *page;
 	gfp_t mask = btrfs_alloc_write_mask(mapping);
+	size_t write_bytes = blocksize;
 	int ret = 0;
 	u64 block_start;
 	u64 block_end;
@@ -4535,11 +4537,27 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
 	block_start = round_down(from, blocksize);
 	block_end = block_start + blocksize - 1;
 
-	ret = btrfs_delalloc_reserve_space(inode, &data_reserved,
-					   block_start, blocksize);
-	if (ret)
-		goto out;
 
+	ret = btrfs_check_data_free_space(inode, &data_reserved, block_start,
+					  blocksize);
+	if (ret < 0) {
+		if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
+					      BTRFS_INODE_PREALLOC)) &&
+		    btrfs_check_can_nocow(BTRFS_I(inode), block_start,
+					  &write_bytes, false) > 0) {
+			/* For nocow case, no need to reserve data space. */
+			only_release_metadata = true;
+		} else {
+			goto out;
+		}
+	}
+	ret = btrfs_delalloc_reserve_metadata(BTRFS_I(inode), blocksize);
+	if (ret < 0) {
+		if (!only_release_metadata)
+			btrfs_free_reserved_data_space(inode, data_reserved,
+					block_start, blocksize);
+		goto out;
+	}
 again:
 	page = find_or_create_page(mapping, index, mask);
 	if (!page) {
@@ -4608,14 +4626,26 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
 	set_page_dirty(page);
 	unlock_extent_cached(io_tree, block_start, block_end, &cached_state);
 
+	if (only_release_metadata)
+		set_extent_bit(&BTRFS_I(inode)->io_tree, block_start,
+				block_end, EXTENT_NORESERVE, NULL, NULL,
+				GFP_NOFS);
+
 out_unlock:
-	if (ret)
-		btrfs_delalloc_release_space(inode, data_reserved, block_start,
-					     blocksize, true);
+	if (ret) {
+		if (!only_release_metadata)
+			btrfs_delalloc_release_space(inode, data_reserved,
+					block_start, blocksize, true);
+		else
+			btrfs_delalloc_release_metadata(BTRFS_I(inode),
+					blocksize, true);
+	}
 	btrfs_delalloc_release_extents(BTRFS_I(inode), blocksize);
 	unlock_page(page);
 	put_page(page);
 out:
+	if (only_release_metadata)
+		btrfs_drew_write_unlock(&BTRFS_I(inode)->root->snapshot_lock);
 	extent_changeset_free(data_reserved);
 	return ret;
 }
-- 
2.27.0



* [PATCH v7 2/3] btrfs: add comments for btrfs_check_can_nocow() and can_nocow_extent()
  2020-06-28 11:39 [PATCH v7 0/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation Qu Wenruo
  2020-06-28 11:39 ` [PATCH v7 1/3] " Qu Wenruo
@ 2020-06-28 11:39 ` Qu Wenruo
  2020-06-28 11:39 ` [PATCH v7 3/3] btrfs: refactor btrfs_check_can_nocow() into two variants Qu Wenruo
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 6+ messages in thread
From: Qu Wenruo @ 2020-06-28 11:39 UTC (permalink / raw)
  To: linux-btrfs

These two functions have extra conditions that their callers need to
meet, and some less common parameters that are used to return values.

Adding comments should save reviewers some time.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/file.c  | 21 +++++++++++++++++++++
 fs/btrfs/inode.c | 21 +++++++++++++++++++--
 2 files changed, 40 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 0115ef7f7943..2bee888cb929 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1533,6 +1533,27 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
 	return ret;
 }
 
+/*
+ * Check if we can do nocow write into the range [@pos, @pos + @write_bytes)
+ *
+ * @pos:	 File offset
+ * @write_bytes: The length to write, will be updated to the nocow writeable
+ *		 range
+ * @nowait:	 Whether this function could sleep.
+ *
+ * This function will flush ordered extents in the range to ensure proper
+ * nocow checks for (nowait == false) case.
+ *
+ * Return:
+ * >0 and update @write_bytes if we can do nocow write.
+ * 0 if we can't do nocow write.
+ * -EAGAIN if we can't get the needed lock or there are ordered extents for
+ * (nowait == true) case.
+ * <0 if other error happened.
+ *
+ * NOTE: For wait (nowait==false) calls, callers need to release the drew write
+ * 	 lock of inode->root->snapshot_lock when return value > 0.
+ */
 int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
 			  size_t *write_bytes, bool nowait)
 {
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9d9ec7838f13..339d739b2d29 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6952,8 +6952,25 @@ static struct extent_map *btrfs_new_extent_direct(struct inode *inode,
 }
 
 /*
- * returns 1 when the nocow is safe, < 1 on error, 0 if the
- * block must be cow'd
+ * Check if we can do nocow write into the range [@offset, @offset + @len)
+ *
+ * @offset:	File offset
+ * @len:	The length to write, will be updated to the nocow writeable
+ *		range
+ * @orig_start:	(Optional) Return the original file offset of the file extent
+ * @orig_block_len: (Optional) Return the original on-disk length of the file extent
+ * @ram_bytes:	(Optional) Return the ram_bytes of the file extent
+ *
+ * This function only checks the file extent items; it does NOT flush or wait
+ * for any ordered extents in the range (see the NOTE below).
+ *
+ * Return:
+ * >0 and update @len if we can do nocow write.
+ * 0 if we can't do nocow write.
+ * <0 if error happened.
+ *
+ * NOTE: This only checks the file extents, caller is responsible to wait for
+ *	 any ordered extents.
  */
 noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
 			      u64 *orig_start, u64 *orig_block_len,
-- 
2.27.0



* [PATCH v7 3/3] btrfs: refactor btrfs_check_can_nocow() into two variants
  2020-06-28 11:39 [PATCH v7 0/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation Qu Wenruo
  2020-06-28 11:39 ` [PATCH v7 1/3] " Qu Wenruo
  2020-06-28 11:39 ` [PATCH v7 2/3] btrfs: add comments for btrfs_check_can_nocow() and can_nocow_extent() Qu Wenruo
@ 2020-06-28 11:39 ` Qu Wenruo
  2020-06-29 15:40 ` [PATCH v7 0/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation David Sterba
  2020-06-29 17:09 ` David Sterba
  4 siblings, 0 replies; 6+ messages in thread
From: Qu Wenruo @ 2020-06-28 11:39 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Anand Jain

The function btrfs_check_can_nocow() now has two completely different
call patterns.

For the nowait variant, callers don't need to do any cleanup.
For the wait variant, callers need to release the lock if they can do
a nocow write.

This is somewhat confusing, and is already a problem for the exported
btrfs_check_can_nocow().

So this patch separates the two patterns into different functions.
The nowait variant becomes check_nocow_nolock().
The wait variant becomes the pair btrfs_check_nocow_lock() and
btrfs_check_nocow_unlock().
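
For reference, a condensed view of the resulting caller pattern,
distilled from the btrfs_buffered_write() and btrfs_file_write_iter()
hunks below (simplified; not literal code):

  /* Wait variant: holds the drew write lock when it returns > 0. */
  if (btrfs_check_nocow_lock(BTRFS_I(inode), pos, &write_bytes) > 0)
      only_release_metadata = true;  /* no data space reservation needed */
  ...
  if (only_release_metadata)
      btrfs_check_nocow_unlock(BTRFS_I(inode));

  /* Nowait variant: never leaves the lock held, nothing to release. */
  if (check_nocow_nolock(BTRFS_I(inode), pos, &nocow_bytes) <= 0)
      return -EAGAIN;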

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Anand Jain <anand.jain@oracle.com>
---
 fs/btrfs/ctree.h |  5 +--
 fs/btrfs/file.c  | 82 +++++++++++++++++++++++++++---------------------
 fs/btrfs/inode.c |  8 ++---
 3 files changed, 53 insertions(+), 42 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index b7e14189da32..ce716c4e3b13 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -3035,8 +3035,9 @@ int btrfs_dirty_pages(struct inode *inode, struct page **pages,
 		      size_t num_pages, loff_t pos, size_t write_bytes,
 		      struct extent_state **cached);
 int btrfs_fdatawrite_range(struct inode *inode, loff_t start, loff_t end);
-int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
-			  size_t *write_bytes, bool nowait);
+int btrfs_check_nocow_lock(struct btrfs_inode *inode, loff_t pos,
+			   size_t *write_bytes);
+void btrfs_check_nocow_unlock(struct btrfs_inode *inode);
 
 /* tree-defrag.c */
 int btrfs_defrag_leaves(struct btrfs_trans_handle *trans,
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 2bee888cb929..3c84f0df1292 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1533,29 +1533,8 @@ lock_and_cleanup_extent_if_need(struct btrfs_inode *inode, struct page **pages,
 	return ret;
 }
 
-/*
- * Check if we can do nocow write into the range [@pos, @pos + @write_bytes)
- *
- * @pos:	 File offset
- * @write_bytes: The length to write, will be updated to the nocow writeable
- *		 range
- * @nowait:	 Whether this function could sleep.
- *
- * This function will flush ordered extents in the range to ensure proper
- * nocow checks for (nowait == false) case.
- *
- * Return:
- * >0 and update @write_bytes if we can do nocow write.
- * 0 if we can't do nocow write.
- * -EAGAIN if we can't get the needed lock or there are ordered extents for
- * (nowait == true) case.
- * <0 if other error happened.
- *
- * NOTE: For wait (nowait==false) calls, callers need to release the drew write
- * 	 lock of inode->root->snapshot_lock when return value > 0.
- */
-int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
-			  size_t *write_bytes, bool nowait)
+static int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
+			   size_t *write_bytes, bool nowait)
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct btrfs_root *root = inode->root;
@@ -1563,6 +1542,9 @@ int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
 	u64 num_bytes;
 	int ret;
 
+	if (!(inode->flags & (BTRFS_INODE_NODATACOW | BTRFS_INODE_PREALLOC)))
+		return 0;
+
 	if (!nowait && !btrfs_drew_try_write_lock(&root->snapshot_lock))
 		return -EAGAIN;
 
@@ -1605,6 +1587,41 @@ int btrfs_check_can_nocow(struct btrfs_inode *inode, loff_t pos,
 	return ret;
 }
 
+static int check_nocow_nolock(struct btrfs_inode *inode, loff_t pos,
+			      size_t *write_bytes)
+{
+	return check_can_nocow(inode, pos, write_bytes, true);
+}
+
+/*
+ * Check if we can do nocow write into the range [@pos, @pos + @write_bytes)
+ *
+ * @pos:	 File offset
+ * @write_bytes: The length to write, will be updated to the nocow writeable
+ *		 range
+ *
+ * This function will flush ordered extents in the range to ensure proper
+ * nocow checks.
+ *
+ * Return:
+ * >0 and update @write_bytes if we can do nocow write.
+ * 0 if we can't do nocow write.
+ * -EAGAIN if we can't get the needed lock.
+ * <0 if other error happened.
+ *
+ * NOTE: Callers need to release the lock by btrfs_check_nocow_unlock().
+ */
+int btrfs_check_nocow_lock(struct btrfs_inode *inode, loff_t pos,
+			   size_t *write_bytes)
+{
+	return check_can_nocow(inode, pos, write_bytes, false);
+}
+
+void btrfs_check_nocow_unlock(struct btrfs_inode *inode)
+{
+	btrfs_drew_write_unlock(&inode->root->snapshot_lock);
+}
+
 static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
 					       struct iov_iter *i)
 {
@@ -1612,7 +1629,6 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
 	loff_t pos = iocb->ki_pos;
 	struct inode *inode = file_inode(file);
 	struct btrfs_fs_info *fs_info = btrfs_sb(inode->i_sb);
-	struct btrfs_root *root = BTRFS_I(inode)->root;
 	struct page **pages = NULL;
 	struct extent_changeset *data_reserved = NULL;
 	u64 release_bytes = 0;
@@ -1668,10 +1684,8 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
 		ret = btrfs_check_data_free_space(inode, &data_reserved, pos,
 						  write_bytes);
 		if (ret < 0) {
-			if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
-						      BTRFS_INODE_PREALLOC)) &&
-			    btrfs_check_can_nocow(BTRFS_I(inode), pos,
-						  &write_bytes, false) > 0) {
+			if (btrfs_check_nocow_lock(BTRFS_I(inode), pos,
+						   &write_bytes) > 0) {
 				/*
 				 * For nodata cow case, no need to reserve
 				 * data space.
@@ -1700,7 +1714,7 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
 						data_reserved, pos,
 						write_bytes);
 			else
-				btrfs_drew_write_unlock(&root->snapshot_lock);
+				btrfs_check_nocow_unlock(BTRFS_I(inode));
 			break;
 		}
 
@@ -1804,7 +1818,7 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
 
 		release_bytes = 0;
 		if (only_release_metadata)
-			btrfs_drew_write_unlock(&root->snapshot_lock);
+			btrfs_check_nocow_unlock(BTRFS_I(inode));
 
 		if (only_release_metadata && copied > 0) {
 			lockstart = round_down(pos,
@@ -1831,7 +1845,7 @@ static noinline ssize_t btrfs_buffered_write(struct kiocb *iocb,
 
 	if (release_bytes) {
 		if (only_release_metadata) {
-			btrfs_drew_write_unlock(&root->snapshot_lock);
+			btrfs_check_nocow_unlock(BTRFS_I(inode));
 			btrfs_delalloc_release_metadata(BTRFS_I(inode),
 					release_bytes, true);
 		} else {
@@ -1946,10 +1960,8 @@ static ssize_t btrfs_file_write_iter(struct kiocb *iocb,
 		 * We will allocate space in case nodatacow is not set,
 		 * so bail
 		 */
-		if (!(BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
-					      BTRFS_INODE_PREALLOC)) ||
-		    btrfs_check_can_nocow(BTRFS_I(inode), pos, &nocow_bytes,
-					  true) <= 0) {
+		if (check_nocow_nolock(BTRFS_I(inode), pos, &nocow_bytes)
+		    <= 0) {
 			inode_unlock(inode);
 			return -EAGAIN;
 		}
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 339d739b2d29..58eb91f07b0b 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4541,10 +4541,8 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
 	ret = btrfs_check_data_free_space(inode, &data_reserved, block_start,
 					  blocksize);
 	if (ret < 0) {
-		if ((BTRFS_I(inode)->flags & (BTRFS_INODE_NODATACOW |
-					      BTRFS_INODE_PREALLOC)) &&
-		    btrfs_check_can_nocow(BTRFS_I(inode), block_start,
-					  &write_bytes, false) > 0) {
+		if (btrfs_check_nocow_lock(BTRFS_I(inode), block_start,
+					   &write_bytes) > 0) {
 			/* For nocow case, no need to reserve data space. */
 			only_release_metadata = true;
 		} else {
@@ -4645,7 +4643,7 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
 	put_page(page);
 out:
 	if (only_release_metadata)
-		btrfs_drew_write_unlock(&BTRFS_I(inode)->root->snapshot_lock);
+		btrfs_check_nocow_unlock(BTRFS_I(inode));
 	extent_changeset_free(data_reserved);
 	return ret;
 }
-- 
2.27.0



* Re: [PATCH v7 0/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation
  2020-06-28 11:39 [PATCH v7 0/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation Qu Wenruo
                   ` (2 preceding siblings ...)
  2020-06-28 11:39 ` [PATCH v7 3/3] btrfs: refactor btrfs_check_can_nocow() into two variants Qu Wenruo
@ 2020-06-29 15:40 ` David Sterba
  2020-06-29 17:09 ` David Sterba
  4 siblings, 0 replies; 6+ messages in thread
From: David Sterba @ 2020-06-29 15:40 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Sun, Jun 28, 2020 at 07:39:28PM +0800, Qu Wenruo wrote:
> v7:
> - Fix a "BTRF_I()" typo
>   Reported-by: kernel test robot <lkp@intel.com>

Please don't resend the whole patchset only a few hours after the last
one. If it's just a trivial fix, you can reply to the patch where it
happens with the fix. Or compile before you hit send.


* Re: [PATCH v7 0/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation
  2020-06-28 11:39 [PATCH v7 0/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation Qu Wenruo
                   ` (3 preceding siblings ...)
  2020-06-29 15:40 ` [PATCH v7 0/3] btrfs: allow btrfs_truncate_block() to fallback to nocow for data space reservation David Sterba
@ 2020-06-29 17:09 ` David Sterba
  4 siblings, 0 replies; 6+ messages in thread
From: David Sterba @ 2020-06-29 17:09 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Sun, Jun 28, 2020 at 07:39:28PM +0800, Qu Wenruo wrote:
> v6:
> - Replace one btrfs_drew_write_unlock() with the wrapper
> 
> v7:
> - Fix a "BTRF_I()" typo
>   Reported-by: kernel test robot <lkp@intel.com>

I had v5 with the drew lock fixup and a few other formatting fixups in
misc-next, so I applied that, assuming there are no other changes in v6/v7.

