linux-btrfs.vger.kernel.org archive mirror
* [PATCH 0/3] Factor out common ordered extent flushing code
@ 2019-04-22  7:46 Nikolay Borisov
  2019-04-22  7:46 ` [PATCH 1/3] btrfs: Implement btrfs_lock_and_flush_ordered_range Nikolay Borisov
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Nikolay Borisov @ 2019-04-22  7:46 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Nikolay Borisov

There are several places in the code which need to ensure that a particular
range is locked and has no pending ordered extents. Currently this logic is
open-coded at each of those sites. This patchset refactors the code so that
the logic is encapsulated in a new function, btrfs_lock_and_flush_ordered_range.

Patch 1 introduces a new function which contains the previously duplicated logic.

Patch 2 replaces occurrences of this code pattern in its callers.

Patch 3 introduces a micro-optimization so that the unlock in the function can
happen without searching the io tree for the extent state again. This change is
careful not to leak extra extent_state references.

The patches have undergone multiple successful xfstests runs.
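
For reference, the open-coded idiom being factored out looks roughly like this
(distilled from the diffs below; not new code, just the existing pattern):

	while (1) {
		lock_extent(tree, start, end);
		ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), start,
						     end - start + 1);
		if (!ordered)
			break;
		unlock_extent(tree, start, end);
		btrfs_start_ordered_extent(inode, ordered, 1);
		btrfs_put_ordered_extent(ordered);
	}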


Nikolay Borisov (3):
  btrfs: Implement btrfs_lock_and_flush_ordered_range
  btrfs: Use newly introduced btrfs_lock_and_flush_ordered_range
  btrfs: Always use a cached extent_state in
    btrfs_lock_and_flush_ordered_range

 fs/btrfs/extent_io.c    | 27 +++----------------------
 fs/btrfs/file.c         | 14 ++-----------
 fs/btrfs/inode.c        | 17 ++--------------
 fs/btrfs/ordered-data.c | 44 +++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/ordered-data.h |  3 +++
 5 files changed, 54 insertions(+), 51 deletions(-)

-- 
2.17.1



* [PATCH 1/3] btrfs: Implement btrfs_lock_and_flush_ordered_range
  2019-04-22  7:46 [PATCH 0/3] Factor out common ordered extent flushing code Nikolay Borisov
@ 2019-04-22  7:46 ` Nikolay Borisov
  2019-05-02 14:13   ` David Sterba
  2019-04-22  7:46 ` [PATCH 2/3] btrfs: Use newly introduced btrfs_lock_and_flush_ordered_range Nikolay Borisov
  2019-04-22  7:46 ` [PATCH 3/3] btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range Nikolay Borisov
  2 siblings, 1 reply; 5+ messages in thread
From: Nikolay Borisov @ 2019-04-22  7:46 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Nikolay Borisov

There is a recurring idiom in the btrfs codebase for locking a range and
flushing any pending ordered extents in it. Factor it out into a separate
function that can be reused. Subsequent patches will replace the existing
open-coded instances with calls to this function.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/ordered-data.c | 32 ++++++++++++++++++++++++++++++++
 fs/btrfs/ordered-data.h |  3 +++
 2 files changed, 35 insertions(+)

diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 4d9bb0dea9af..65f6409c1c9f 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -954,6 +954,38 @@ int btrfs_find_ordered_sum(struct inode *inode, u64 offset, u64 disk_bytenr,
 	return index;
 }
 
+/*
+ * btrfs_lock_and_flush_ordered_range - Lock the passed range and ensure all
+ * pending ordered extents in it are run to completion.
+ *
+ * @tree:         IO tree used for locking out other users of the range
+ * @inode:        Inode whose ordered tree is to be searched
+ * @start:        Beginning of range to lock and flush
+ * @end:          Last byte of range to lock and flush
+ * @cached_state: If passed, will return the extent state responsible for the
+ * locked range. It's the caller's responsibility to free the cached state.
+ *
+ * This function always returns with the given range locked, ensuring that
+ * after it returns no ordered extent in the range can still be pending.
+ */
+void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
+					struct inode *inode, u64 start, u64 end,
+					struct extent_state **cached_state)
+{
+	struct btrfs_ordered_extent *ordered;
+
+	while (1) {
+		lock_extent_bits(tree, start, end, cached_state);
+		ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), start,
+						     end - start + 1);
+		if (!ordered)
+			break;
+		unlock_extent_cached(tree, start, end, cached_state);
+		btrfs_start_ordered_extent(inode, ordered, 1);
+		btrfs_put_ordered_extent(ordered);
+	}
+}
+
 int __init ordered_data_init(void)
 {
 	btrfs_ordered_extent_cache = kmem_cache_create("btrfs_ordered_extent",
diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
index 4c5991c3de14..3f6a7d12f435 100644
--- a/fs/btrfs/ordered-data.h
+++ b/fs/btrfs/ordered-data.h
@@ -188,6 +188,9 @@ u64 btrfs_wait_ordered_extents(struct btrfs_root *root, u64 nr,
 			       const u64 range_start, const u64 range_len);
 u64 btrfs_wait_ordered_roots(struct btrfs_fs_info *fs_info, u64 nr,
 			      const u64 range_start, const u64 range_len);
+void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
+					struct inode *inode, u64 start, u64 end,
+					struct extent_state **cached_state);
 int __init ordered_data_init(void);
 void __cold ordered_data_exit(void);
 
-- 
2.17.1



* [PATCH 2/3] btrfs: Use newly introduced btrfs_lock_and_flush_ordered_range
  2019-04-22  7:46 [PATCH 0/3] Factor out common ordered extent flushing code Nikolay Borisov
  2019-04-22  7:46 ` [PATCH 1/3] btrfs: Implement btrfs_lock_and_flush_ordered_range Nikolay Borisov
@ 2019-04-22  7:46 ` Nikolay Borisov
  2019-04-22  7:46 ` [PATCH 3/3] btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range Nikolay Borisov
  2 siblings, 0 replies; 5+ messages in thread
From: Nikolay Borisov @ 2019-04-22  7:46 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Nikolay Borisov

There are several functions which open-code
btrfs_lock_and_flush_ordered_range. Replace those instances with a call
to the new function. No functional changes.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/extent_io.c | 27 +++------------------------
 fs/btrfs/file.c      | 14 ++------------
 fs/btrfs/inode.c     | 17 ++---------------
 3 files changed, 7 insertions(+), 51 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 9aa79ad794c9..bf5712249986 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3206,21 +3206,10 @@ static inline void contiguous_readpages(struct extent_io_tree *tree,
 					     unsigned long *bio_flags,
 					     u64 *prev_em_start)
 {
-	struct inode *inode;
-	struct btrfs_ordered_extent *ordered;
+	struct inode *inode = pages[0]->mapping->host;
 	int index;
 
-	inode = pages[0]->mapping->host;
-	while (1) {
-		lock_extent(tree, start, end);
-		ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), start,
-						     end - start + 1);
-		if (!ordered)
-			break;
-		unlock_extent(tree, start, end);
-		btrfs_start_ordered_extent(inode, ordered, 1);
-		btrfs_put_ordered_extent(ordered);
-	}
+	btrfs_lock_and_flush_ordered_range(tree, inode, start, end, NULL);
 
 	for (index = 0; index < nr_pages; index++) {
 		__do_readpage(tree, pages[index], btrfs_get_extent, em_cached,
@@ -3237,21 +3226,11 @@ static int __extent_read_full_page(struct extent_io_tree *tree,
 				   unsigned int read_flags)
 {
 	struct inode *inode = page->mapping->host;
-	struct btrfs_ordered_extent *ordered;
 	u64 start = page_offset(page);
 	u64 end = start + PAGE_SIZE - 1;
 	int ret;
 
-	while (1) {
-		lock_extent(tree, start, end);
-		ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), start,
-						PAGE_SIZE);
-		if (!ordered)
-			break;
-		unlock_extent(tree, start, end);
-		btrfs_start_ordered_extent(inode, ordered, 1);
-		btrfs_put_ordered_extent(ordered);
-	}
+	btrfs_lock_and_flush_ordered_range(tree, inode, start, end, NULL);
 
 	ret = __do_readpage(tree, page, get_extent, NULL, bio, mirror_num,
 			    bio_flags, read_flags, NULL);
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index c857a884a90f..2030b9bcb977 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1541,7 +1541,6 @@ static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct btrfs_root *root = inode->root;
-	struct btrfs_ordered_extent *ordered;
 	u64 lockstart, lockend;
 	u64 num_bytes;
 	int ret;
@@ -1554,17 +1553,8 @@ static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
 	lockend = round_up(pos + *write_bytes,
 			   fs_info->sectorsize) - 1;
 
-	while (1) {
-		lock_extent(&inode->io_tree, lockstart, lockend);
-		ordered = btrfs_lookup_ordered_range(inode, lockstart,
-						     lockend - lockstart + 1);
-		if (!ordered) {
-			break;
-		}
-		unlock_extent(&inode->io_tree, lockstart, lockend);
-		btrfs_start_ordered_extent(&inode->vfs_inode, ordered, 1);
-		btrfs_put_ordered_extent(ordered);
-	}
+	btrfs_lock_and_flush_ordered_range(&inode->io_tree, &inode->vfs_inode,
+					   lockstart, lockend, NULL);
 
 	num_bytes = lockend - lockstart + 1;
 	ret = can_nocow_extent(&inode->vfs_inode, lockstart, &num_bytes,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 5afb1648276f..58d320b168e9 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4985,21 +4985,8 @@ int btrfs_cont_expand(struct inode *inode, loff_t oldsize, loff_t size)
 	if (size <= hole_start)
 		return 0;
 
-	while (1) {
-		struct btrfs_ordered_extent *ordered;
-
-		lock_extent_bits(io_tree, hole_start, block_end - 1,
-				 &cached_state);
-		ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), hole_start,
-						     block_end - hole_start);
-		if (!ordered)
-			break;
-		unlock_extent_cached(io_tree, hole_start, block_end - 1,
-				     &cached_state);
-		btrfs_start_ordered_extent(inode, ordered, 1);
-		btrfs_put_ordered_extent(ordered);
-	}
-
+	btrfs_lock_and_flush_ordered_range(io_tree, inode, hole_start,
+					   block_end - 1, &cached_state);
 	cur_offset = hole_start;
 	while (1) {
 		em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, cur_offset,
-- 
2.17.1



* [PATCH 3/3] btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range
  2019-04-22  7:46 [PATCH 0/3] Factor out common ordered extent flushing code Nikolay Borisov
  2019-04-22  7:46 ` [PATCH 1/3] btrfs: Implement btrfs_lock_and_flush_ordered_range Nikolay Borisov
  2019-04-22  7:46 ` [PATCH 2/3] btrfs: Use newly introduced btrfs_lock_and_flush_ordered_range Nikolay Borisov
@ 2019-04-22  7:46 ` Nikolay Borisov
  2 siblings, 0 replies; 5+ messages in thread
From: Nikolay Borisov @ 2019-04-22  7:46 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Nikolay Borisov

If no cached_state argument is passed to
btrfs_lock_and_flush_ordered_range, use one local to the function. This
optimises the case where an ordered extent is found, since the unlock
function can then release that state directly without searching the io
tree for it again.
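
To illustrate, a caller that wants to keep the cached state might use it like
this (a hypothetical sketch, not part of this series; error handling omitted):

	struct extent_state *cached = NULL;

	btrfs_lock_and_flush_ordered_range(io_tree, inode, start, end,
					   &cached);
	/* operate on the locked range, now free of ordered extents */
	unlock_extent_cached(io_tree, start, end, &cached);

When the caller passes NULL instead, the function now gets the same benefit
internally via the local cachedp.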

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/ordered-data.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 65f6409c1c9f..5da2abe320b6 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -973,14 +973,26 @@ void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
 					struct extent_state **cached_state)
 {
 	struct btrfs_ordered_extent *ordered;
+	struct extent_state *cachedp = NULL;
+
+	if (cached_state)
+		cachedp = *cached_state;
 
 	while (1) {
-		lock_extent_bits(tree, start, end, cached_state);
+		lock_extent_bits(tree, start, end, &cachedp);
 		ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), start,
 						     end - start + 1);
-		if (!ordered)
+		if (!ordered) {
+			/*
+			 * If no external cached_state has been passed, then
+			 * decrement the extra ref taken for cachedp since we
+			 * aren't exposing it outside of this function.
+			 */
+			if (!cached_state)
+				refcount_dec(&cachedp->refs);
 			break;
-		unlock_extent_cached(tree, start, end, cached_state);
+		}
+		unlock_extent_cached(tree, start, end, &cachedp);
 		btrfs_start_ordered_extent(inode, ordered, 1);
 		btrfs_put_ordered_extent(ordered);
 	}
-- 
2.17.1



* Re: [PATCH 1/3] btrfs: Implement btrfs_lock_and_flush_ordered_range
  2019-04-22  7:46 ` [PATCH 1/3] btrfs: Implement btrfs_lock_and_flush_ordered_range Nikolay Borisov
@ 2019-05-02 14:13   ` David Sterba
  0 siblings, 0 replies; 5+ messages in thread
From: David Sterba @ 2019-05-02 14:13 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: linux-btrfs

On Mon, Apr 22, 2019 at 10:46:51AM +0300, Nikolay Borisov wrote:
> There is a recurring idiom in the btrfs codebase for locking a range and
> flushing any pending ordered extents in it. Factor it out into a separate
> function that can be reused. Subsequent patches will replace the existing
> open-coded instances with calls to this function.
> 
> Signed-off-by: Nikolay Borisov <nborisov@suse.com>
> ---
>  fs/btrfs/ordered-data.c | 32 ++++++++++++++++++++++++++++++++
>  fs/btrfs/ordered-data.h |  3 +++
>  2 files changed, 35 insertions(+)
> 
> diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
> index 4d9bb0dea9af..65f6409c1c9f 100644
> --- a/fs/btrfs/ordered-data.c
> +++ b/fs/btrfs/ordered-data.c
> @@ -954,6 +954,38 @@ int btrfs_find_ordered_sum(struct inode *inode, u64 offset, u64 disk_bytenr,
>  	return index;
>  }
>  
> +/*
> + * btrfs_lock_and_flush_ordered_range - Lock the passed range and ensure all
> + * pending ordered extents in it are run to completion.
> + *
> + * @tree:         IO tree used for locking out other users of the range
> + * @inode:        Inode whose ordered tree is to be searched
> + * @start:        Beginning of range to lock and flush
> + * @end:          Last byte of range to lock and flush
> + * @cached_state: If passed, will return the extent state responsible for the
> + * locked range. It's the caller's responsibility to free the cached state.
> + *
> + * This function always returns with the given range locked, ensuring that
> + * after it returns no ordered extent in the range can still be pending.
> + */
> +void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
> +					struct inode *inode, u64 start, u64 end,
> +					struct extent_state **cached_state)
> +{

Please use btrfs_inode instead of inode for interfaces that are internal
to btrfs. This is not yet consistent across the codebase, but the plan is
to switch everything to btrfs_inode, so new code should follow that.
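
Something along these lines, for illustration (a sketch of the suggested
direction, not a tested change):

void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
					struct btrfs_inode *inode,
					u64 start, u64 end,
					struct extent_state **cached_state)

with btrfs_lookup_ordered_range() then taking the btrfs_inode directly and
btrfs_start_ordered_extent() taking &inode->vfs_inode; callers that have a
struct inode would pass BTRFS_I(inode).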

