* [PATCH v2 0/3] Ordered extent flushing refactor
@ 2019-05-07  7:19 Nikolay Borisov
  2019-05-07  7:19 ` [PATCH v2 1/3] btrfs: Implement btrfs_lock_and_flush_ordered_range Nikolay Borisov
                   ` (4 more replies)
  0 siblings, 5 replies; 7+ messages in thread
From: Nikolay Borisov @ 2019-05-07  7:19 UTC
  To: linux-btrfs; +Cc: Nikolay Borisov

Here is v2 of factoring out the common code for flushing an ordered
extent range. The main change in this version is the switch from inode
to btrfs_inode in the function interfaces, as per David's feedback.

Nikolay Borisov (3):
  btrfs: Implement btrfs_lock_and_flush_ordered_range
  btrfs: Use newly introduced btrfs_lock_and_flush_ordered_range
  btrfs: Always use a cached extent_state in
    btrfs_lock_and_flush_ordered_range

 fs/btrfs/extent_io.c    | 29 ++++----------------------
 fs/btrfs/file.c         | 14 ++-----------
 fs/btrfs/inode.c        | 17 ++--------------
 fs/btrfs/ordered-data.c | 45 +++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/ordered-data.h |  4 ++++
 5 files changed, 57 insertions(+), 52 deletions(-)

-- 
2.17.1



* [PATCH v2 1/3] btrfs: Implement btrfs_lock_and_flush_ordered_range
  2019-05-07  7:19 [PATCH v2 0/3] Ordered extent flushing refactor Nikolay Borisov
@ 2019-05-07  7:19 ` Nikolay Borisov
  2019-05-07  7:19 ` [PATCH v2 2/3] btrfs: Use newly introduced btrfs_lock_and_flush_ordered_range Nikolay Borisov
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Nikolay Borisov @ 2019-05-07  7:19 UTC
  To: linux-btrfs; +Cc: Nikolay Borisov

There is a certain idiom used in multiple places in the btrfs codebase
when flushing an ordered range: lock the range, look up any pending
ordered extents in it and, if one is found, unlock, wait for it to
complete and retry. Factor this into a separate function that can be
reused. Future patches will replace the existing open-coded copies with
calls to that function.
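
For illustration, a minimal usage sketch (hypothetical caller, not
taken from this series; the caller is assumed to unlock the range
itself when done):

	struct extent_io_tree *tree = &BTRFS_I(inode)->io_tree;

	/* Returns with [start, end] locked and no ordered extent pending. */
	btrfs_lock_and_flush_ordered_range(tree, BTRFS_I(inode), start, end,
					   NULL);

	/* ... operate on the locked, flushed range ... */

	unlock_extent(tree, start, end);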

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/ordered-data.c | 33 +++++++++++++++++++++++++++++++++
 fs/btrfs/ordered-data.h |  4 ++++
 2 files changed, 37 insertions(+)

diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 52889da69113..37401cc04a6b 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -962,6 +962,39 @@ int btrfs_find_ordered_sum(struct inode *inode, u64 offset, u64 disk_bytenr,
 	return index;
 }
 
+/*
+ * btrfs_lock_and_flush_ordered_range - Lock the passed range and ensure all
+ * pending ordered extents in it are run to completion.
+ *
+ * @tree:         IO tree used for locking out other users of the range
+ * @inode:        Inode whose ordered tree is to be searched
+ * @start:        Beginning of range to flush
+ * @end:          Last byte of range to lock
+ * @cached_state: If passed, will return the extent state responsible for the
+ * locked range. It's the caller's responsibility to free the cached state.
+ *
+ * This function always returns with the given range locked, ensuring that
+ * after it's called no ordered extent can be pending.
+ */
+void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
+					struct btrfs_inode *inode, u64 start,
+					u64 end,
+					struct extent_state **cached_state)
+{
+	struct btrfs_ordered_extent *ordered;
+
+	while (1) {
+		lock_extent_bits(tree, start, end, cached_state);
+		ordered = btrfs_lookup_ordered_range(inode, start,
+						     end - start + 1);
+		if (!ordered)
+			break;
+		unlock_extent_cached(tree, start, end, cached_state);
+		btrfs_start_ordered_extent(&inode->vfs_inode, ordered, 1);
+		btrfs_put_ordered_extent(ordered);
+	}
+}
+
 int __init ordered_data_init(void)
 {
 	btrfs_ordered_extent_cache = kmem_cache_create("btrfs_ordered_extent",
diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
index 4c5991c3de14..9b68179d580f 100644
--- a/fs/btrfs/ordered-data.h
+++ b/fs/btrfs/ordered-data.h
@@ -188,6 +188,10 @@ u64 btrfs_wait_ordered_extents(struct btrfs_root *root, u64 nr,
 			       const u64 range_start, const u64 range_len);
 u64 btrfs_wait_ordered_roots(struct btrfs_fs_info *fs_info, u64 nr,
 			      const u64 range_start, const u64 range_len);
+void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
+					struct btrfs_inode *inode, u64 start,
+					u64 end,
+					struct extent_state **cached_state);
 int __init ordered_data_init(void);
 void __cold ordered_data_exit(void);
 
-- 
2.17.1



* [PATCH v2 2/3] btrfs: Use newly introduced btrfs_lock_and_flush_ordered_range
  2019-05-07  7:19 [PATCH v2 0/3] Ordered extent flushing refactor Nikolay Borisov
  2019-05-07  7:19 ` [PATCH v2 1/3] btrfs: Implement btrfs_lock_and_flush_ordered_range Nikolay Borisov
@ 2019-05-07  7:19 ` Nikolay Borisov
  2019-05-07  7:19 ` [PATCH v2 3/3] btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range Nikolay Borisov
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 7+ messages in thread
From: Nikolay Borisov @ 2019-05-07  7:19 UTC
  To: linux-btrfs; +Cc: Nikolay Borisov

There are several functions which open code
btrfs_lock_and_flush_ordered_range; just replace them with a call to
that function. No functional changes.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/extent_io.c | 29 ++++-------------------------
 fs/btrfs/file.c      | 14 ++------------
 fs/btrfs/inode.c     | 17 ++---------------
 3 files changed, 8 insertions(+), 52 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 13fca7bfc1f2..8998819d185a 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -3206,21 +3206,10 @@ static inline void contiguous_readpages(struct extent_io_tree *tree,
 					     unsigned long *bio_flags,
 					     u64 *prev_em_start)
 {
-	struct inode *inode;
-	struct btrfs_ordered_extent *ordered;
+	struct btrfs_inode *inode = BTRFS_I(pages[0]->mapping->host);
 	int index;
 
-	inode = pages[0]->mapping->host;
-	while (1) {
-		lock_extent(tree, start, end);
-		ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), start,
-						     end - start + 1);
-		if (!ordered)
-			break;
-		unlock_extent(tree, start, end);
-		btrfs_start_ordered_extent(inode, ordered, 1);
-		btrfs_put_ordered_extent(ordered);
-	}
+	btrfs_lock_and_flush_ordered_range(tree, inode, start, end, NULL);
 
 	for (index = 0; index < nr_pages; index++) {
 		__do_readpage(tree, pages[index], btrfs_get_extent, em_cached,
@@ -3236,22 +3225,12 @@ static int __extent_read_full_page(struct extent_io_tree *tree,
 				   unsigned long *bio_flags,
 				   unsigned int read_flags)
 {
-	struct inode *inode = page->mapping->host;
-	struct btrfs_ordered_extent *ordered;
+	struct btrfs_inode *inode = BTRFS_I(page->mapping->host);
 	u64 start = page_offset(page);
 	u64 end = start + PAGE_SIZE - 1;
 	int ret;
 
-	while (1) {
-		lock_extent(tree, start, end);
-		ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), start,
-						PAGE_SIZE);
-		if (!ordered)
-			break;
-		unlock_extent(tree, start, end);
-		btrfs_start_ordered_extent(inode, ordered, 1);
-		btrfs_put_ordered_extent(ordered);
-	}
+	btrfs_lock_and_flush_ordered_range(tree, inode, start, end, NULL);
 
 	ret = __do_readpage(tree, page, get_extent, NULL, bio, mirror_num,
 			    bio_flags, read_flags, NULL);
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index 9dbea72a61fe..d8abce428176 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -1550,7 +1550,6 @@ static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
 {
 	struct btrfs_fs_info *fs_info = inode->root->fs_info;
 	struct btrfs_root *root = inode->root;
-	struct btrfs_ordered_extent *ordered;
 	u64 lockstart, lockend;
 	u64 num_bytes;
 	int ret;
@@ -1563,17 +1562,8 @@ static noinline int check_can_nocow(struct btrfs_inode *inode, loff_t pos,
 	lockend = round_up(pos + *write_bytes,
 			   fs_info->sectorsize) - 1;
 
-	while (1) {
-		lock_extent(&inode->io_tree, lockstart, lockend);
-		ordered = btrfs_lookup_ordered_range(inode, lockstart,
-						     lockend - lockstart + 1);
-		if (!ordered) {
-			break;
-		}
-		unlock_extent(&inode->io_tree, lockstart, lockend);
-		btrfs_start_ordered_extent(&inode->vfs_inode, ordered, 1);
-		btrfs_put_ordered_extent(ordered);
-	}
+	btrfs_lock_and_flush_ordered_range(&inode->io_tree, inode, lockstart,
+					   lockend, NULL);
 
 	num_bytes = lockend - lockstart + 1;
 	ret = can_nocow_extent(&inode->vfs_inode, lockstart, &num_bytes,
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index d1de8eb253eb..45f999bb3dfe 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -5008,21 +5008,8 @@ int btrfs_cont_expand(struct inode *inode, loff_t oldsize, loff_t size)
 	if (size <= hole_start)
 		return 0;
 
-	while (1) {
-		struct btrfs_ordered_extent *ordered;
-
-		lock_extent_bits(io_tree, hole_start, block_end - 1,
-				 &cached_state);
-		ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), hole_start,
-						     block_end - hole_start);
-		if (!ordered)
-			break;
-		unlock_extent_cached(io_tree, hole_start, block_end - 1,
-				     &cached_state);
-		btrfs_start_ordered_extent(inode, ordered, 1);
-		btrfs_put_ordered_extent(ordered);
-	}
-
+	btrfs_lock_and_flush_ordered_range(io_tree, BTRFS_I(inode), hole_start,
+					   block_end - 1, &cached_state);
 	cur_offset = hole_start;
 	while (1) {
 		em = btrfs_get_extent(BTRFS_I(inode), NULL, 0, cur_offset,
-- 
2.17.1



* [PATCH v2 3/3] btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range
  2019-05-07  7:19 [PATCH v2 0/3] Ordered extent flushing refactor Nikolay Borisov
  2019-05-07  7:19 ` [PATCH v2 1/3] btrfs: Implement btrfs_lock_and_flush_ordered_range Nikolay Borisov
  2019-05-07  7:19 ` [PATCH v2 2/3] btrfs: Use newly introduced btrfs_lock_and_flush_ordered_range Nikolay Borisov
@ 2019-05-07  7:19 ` Nikolay Borisov
  2019-05-09 15:35   ` David Sterba
  2019-05-07 17:39 ` [PATCH v2 0/3] Ordered extent flushing refactor Josef Bacik
  2019-05-10 12:05 ` David Sterba
  4 siblings, 1 reply; 7+ messages in thread
From: Nikolay Borisov @ 2019-05-07  7:19 UTC
  To: linux-btrfs; +Cc: Nikolay Borisov

In case no cached_state argument is passed to
btrfs_lock_and_flush_ordered_range, use one local to the function. This
optimises the case when an ordered extent is found, since the unlock
function will be able to unlock that state directly, without searching
for it again.
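
To illustrate (hypothetical caller, not part of this series; inode is
a struct btrfs_inode *): when the caller supplies a cached_state, the
final unlock can drop the lock without another tree search:

	struct extent_state *cached = NULL;

	btrfs_lock_and_flush_ordered_range(&inode->io_tree, inode, start,
					   end, &cached);

	/* ... work on the locked range ... */

	/* Unlocks via the cached state, avoiding an rbtree lookup. */
	unlock_extent_cached(&inode->io_tree, start, end, &cached);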

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/ordered-data.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 37401cc04a6b..df02ed25b7db 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -982,14 +982,26 @@ void btrfs_lock_and_flush_ordered_range(struct extent_io_tree *tree,
 					struct extent_state **cached_state)
 {
 	struct btrfs_ordered_extent *ordered;
+	struct extent_state *cachedp = NULL;
+
+	if (cached_state)
+		cachedp = *cached_state;
 
 	while (1) {
-		lock_extent_bits(tree, start, end, cached_state);
+		lock_extent_bits(tree, start, end, &cachedp);
 		ordered = btrfs_lookup_ordered_range(inode, start,
 						     end - start + 1);
-		if (!ordered)
+		if (!ordered) {
+			/*
+			 * If no external cached_state has been passed, then
+			 * decrement the extra ref taken for cachedp, since we
+			 * aren't exposing it outside of this function.
+			 */
+			if (!cached_state)
+				refcount_dec(&cachedp->refs);
 			break;
-		unlock_extent_cached(tree, start, end, cached_state);
+		}
+		unlock_extent_cached(tree, start, end, &cachedp);
 		btrfs_start_ordered_extent(&inode->vfs_inode, ordered, 1);
 		btrfs_put_ordered_extent(ordered);
 	}
-- 
2.17.1



* Re: [PATCH v2 0/3] Ordered extent flushing refactor
  2019-05-07  7:19 [PATCH v2 0/3] Ordered extent flushing refactor Nikolay Borisov
                   ` (2 preceding siblings ...)
  2019-05-07  7:19 ` [PATCH v2 3/3] btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range Nikolay Borisov
@ 2019-05-07 17:39 ` Josef Bacik
  2019-05-10 12:05 ` David Sterba
  4 siblings, 0 replies; 7+ messages in thread
From: Josef Bacik @ 2019-05-07 17:39 UTC
  To: Nikolay Borisov; +Cc: linux-btrfs

On Tue, May 07, 2019 at 10:19:21AM +0300, Nikolay Borisov wrote:
> Here is v2 of factoring out the common code for flushing an ordered
> extent range. The main change in this version is the switch from inode
> to btrfs_inode in the function interfaces, as per David's feedback.
> 
> Nikolay Borisov (3):
>   btrfs: Implement btrfs_lock_and_flush_ordered_range
>   btrfs: Use newly introduced btrfs_lock_and_flush_ordered_range
>   btrfs: Always use a cached extent_state in
>     btrfs_lock_and_flush_ordered_range
> 

You can add

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

for the whole series.  Thanks,

Josef


* Re: [PATCH v2 3/3] btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range
  2019-05-07  7:19 ` [PATCH v2 3/3] btrfs: Always use a cached extent_state in btrfs_lock_and_flush_ordered_range Nikolay Borisov
@ 2019-05-09 15:35   ` David Sterba
  0 siblings, 0 replies; 7+ messages in thread
From: David Sterba @ 2019-05-09 15:35 UTC
  To: Nikolay Borisov; +Cc: linux-btrfs

On Tue, May 07, 2019 at 10:19:24AM +0300, Nikolay Borisov wrote:
> In case no cached_state argument is passed to
> btrfs_lock_and_flush_ordered_range, use one local to the function. This
> optimises the case when an ordered extent is found, since the unlock
> function will be able to unlock that state directly, without searching
> for it again.

This will speed up all callers that previously did not cache the state,
right? That can bring some improvement. I wonder if the caching can be
used in more places; there are still many plain lock_extent calls.

check_can_nocow calls unlock_extent from two locations, so passing the
cached pointer could help in the case where an ordered extent is found
and the unlock thus happens outside of
btrfs_lock_and_flush_ordered_range. Elsewhere it's only the locking
part, so the cache would have to be passed along, and this might not
make sense in all cases. Anyway, that's for another patch.
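
A rough sketch of that idea for check_can_nocow (hypothetical and
untested, not a patch; the elided parts stay as in the current code):

	struct extent_state *cached_state = NULL;

	/* ... lockstart/lockend computed as before ... */

	btrfs_lock_and_flush_ordered_range(&inode->io_tree, inode, lockstart,
					   lockend, &cached_state);

	/* ... can_nocow_extent() check as before ... */

	/* Both existing unlock_extent call sites become cached unlocks: */
	unlock_extent_cached(&inode->io_tree, lockstart, lockend,
			     &cached_state);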


* Re: [PATCH v2 0/3] Ordered extent flushing refactor
  2019-05-07  7:19 [PATCH v2 0/3] Ordered extent flushing refactor Nikolay Borisov
                   ` (3 preceding siblings ...)
  2019-05-07 17:39 ` [PATCH v2 0/3] Ordered extent flushing refactor Josef Bacik
@ 2019-05-10 12:05 ` David Sterba
  4 siblings, 0 replies; 7+ messages in thread
From: David Sterba @ 2019-05-10 12:05 UTC
  To: Nikolay Borisov; +Cc: linux-btrfs

On Tue, May 07, 2019 at 10:19:21AM +0300, Nikolay Borisov wrote:
> Here is v2 of factoring out the common code for flushing an ordered
> extent range. The main change in this version is the switch from inode
> to btrfs_inode in the function interfaces, as per David's feedback.
> 
> Nikolay Borisov (3):
>   btrfs: Implement btrfs_lock_and_flush_ordered_range
>   btrfs: Use newly introduced btrfs_lock_and_flush_ordered_range
>   btrfs: Always use a cached extent_state in
>     btrfs_lock_and_flush_ordered_range

1-3 added to 5.3 queue, thanks.

