linux-btrfs.vger.kernel.org archive mirror
* [PATCH 0/3] Refactor delayed refs processing loop
@ 2018-08-15  7:39 Nikolay Borisov
  2018-08-15  7:39 ` [PATCH 1/3] btrfs: Factor out ref head locking code in __btrfs_run_delayed_refs Nikolay Borisov
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Nikolay Borisov @ 2018-08-15  7:39 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Nikolay Borisov

Here is a small series which aims to rectify the eyesore that the delayed refs
processing loop currently is. In fact, it's really two loops in the guise of a
single 'while' construct. All in all this should bring no functional changes,
and I've verified this with multiple xfstests runs with no problems.

Patch 1 factors out the code which selects a ref head with pending delayed
refs and locks it.

Patch 2 introduces a new function which encompasses the internal loop, i.e.
processing the delayed refs of a single head; it's more or less a copy of most
of the code in __btrfs_run_delayed_refs. The only difference is that the
function can return -EAGAIN if we detect a delayed ref whose sequence number is
higher than what is currently in the tree mod log.

Patch 3 finally makes the loop in __btrfs_run_delayed_refs use the function
introduced in the previous patch, deleting most of the code and changing the
loop to a 'do {} while' construct. I strove to retain all the semantics of the
old code, so there should be no surprises.
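
In rough outline, the loop after patch 3 ends up looking like this (a
simplified sketch only; the error handling around cleanup_ref_head is
trimmed):

	do {
		if (!locked_ref) {
			/* patch 1: select and lock the next ref head */
			locked_ref = btrfs_obtain_ref_head(trans);
			if (IS_ERR_OR_NULL(locked_ref)) {
				if (PTR_ERR(locked_ref) == -EAGAIN)
					continue; /* head went away, try another */
				break; /* nothing left to process */
			}
			count++;
		}

		spin_lock(&locked_ref->lock);
		btrfs_merge_delayed_refs(trans, delayed_refs, locked_ref);

		/* patch 2: run all refs of this head, may return -EAGAIN */
		ret = btrfs_run_delayed_refs_for_head(trans, locked_ref,
						      &actual_count);
		if (ret < 0 && ret != -EAGAIN)
			return ret;
		else if (!ret)
			cleanup_ref_head(trans, locked_ref);

		locked_ref = NULL;
		cond_resched();
	} while ((nr != -1 && count < nr) || locked_ref);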

Nikolay Borisov (3):
  btrfs: Factor out ref head locking code in __btrfs_run_delayed_refs
  btrfs: Factor out loop processing all refs of a head
  btrfs: refactor __btrfs_run_delayed_refs loop

 fs/btrfs/extent-tree.c | 210 ++++++++++++++++++++++++++++++-------------------
 1 file changed, 128 insertions(+), 82 deletions(-)

-- 
2.7.4


* [PATCH 1/3] btrfs: Factor out ref head locking code in __btrfs_run_delayed_refs
  2018-08-15  7:39 [PATCH 0/3] Refactor delayed refs processing loop Nikolay Borisov
@ 2018-08-15  7:39 ` Nikolay Borisov
  2018-09-21 14:30   ` David Sterba
  2018-08-15  7:39 ` [PATCH 2/3] btrfs: Factor out loop processing all refs of a head Nikolay Borisov
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 8+ messages in thread
From: Nikolay Borisov @ 2018-08-15  7:39 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Nikolay Borisov

This is in preparation for refactoring the giant loop in
__btrfs_run_delayed_refs. As a first step, define a new function which
acquires a reference to a btrfs_delayed_ref_head and use it. No
functional changes.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/extent-tree.c | 54 ++++++++++++++++++++++++++++++++++----------------
 1 file changed, 37 insertions(+), 17 deletions(-)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index de6f75f5547b..6a2c86c8a756 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2502,6 +2502,40 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans,
 	return 0;
 }
 
+STATIC
+struct btrfs_delayed_ref_head *btrfs_obtain_ref_head(
+					struct btrfs_trans_handle *trans)
+{
+	struct btrfs_delayed_ref_root *delayed_refs =
+		&trans->transaction->delayed_refs;
+	struct btrfs_delayed_ref_head *head = NULL;
+	int ret;
+
+	spin_lock(&delayed_refs->lock);
+	head = btrfs_select_ref_head(trans);
+	if (!head) {
+		spin_unlock(&delayed_refs->lock);
+		return head;
+	}
+
+	/*
+	 * grab the lock that says we are going to process  all the refs for
+	 * this head
+	 */
+	ret = btrfs_delayed_ref_lock(trans, head);
+	spin_unlock(&delayed_refs->lock);
+	/*
+	 * we may have dropped the spin lock to get the head
+	 * mutex lock, and that might have given someone else
+	 * time to free the head.  If that's true, it has been
+	 * removed from our list and we can move on.
+	 */
+	if (ret == -EAGAIN)
+		head = ERR_PTR(-EAGAIN);
+
+	return head;
+}
+
 /*
  * Returns 0 on success or if called with an already aborted transaction.
  * Returns -ENOMEM or -EIO on failure and will abort the transaction.
@@ -2526,24 +2560,10 @@ static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
 			if (count >= nr)
 				break;
 
-			spin_lock(&delayed_refs->lock);
-			locked_ref = btrfs_select_ref_head(trans);
-			if (!locked_ref) {
-				spin_unlock(&delayed_refs->lock);
+			locked_ref = btrfs_obtain_ref_head(trans);
+			if (!locked_ref)
 				break;
-			}
-
-			/* grab the lock that says we are going to process
-			 * all the refs for this head */
-			ret = btrfs_delayed_ref_lock(trans, locked_ref);
-			spin_unlock(&delayed_refs->lock);
-			/*
-			 * we may have dropped the spin lock to get the head
-			 * mutex lock, and that might have given someone else
-			 * time to free the head.  If that's true, it has been
-			 * removed from our list and we can move on.
-			 */
-			if (ret == -EAGAIN) {
+			else if (PTR_ERR(locked_ref) == -EAGAIN) {
 				locked_ref = NULL;
 				count++;
 				continue;
-- 
2.7.4


* [PATCH 2/3] btrfs: Factor out loop processing all refs of a head
  2018-08-15  7:39 [PATCH 0/3] Refactor delayed refs processing loop Nikolay Borisov
  2018-08-15  7:39 ` [PATCH 1/3] btrfs: Factor out ref head locking code in __btrfs_run_delayed_refs Nikolay Borisov
@ 2018-08-15  7:39 ` Nikolay Borisov
  2018-09-21 14:39   ` David Sterba
  2018-08-15  7:39 ` [PATCH 3/3] btrfs: refactor __btrfs_run_delayed_refs loop Nikolay Borisov
  2018-09-21 14:43 ` [PATCH 0/3] Refactor delayed refs processing loop David Sterba
  3 siblings, 1 reply; 8+ messages in thread
From: Nikolay Borisov @ 2018-08-15  7:39 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Nikolay Borisov

This patch introduces a new helper encompassing the implicit inner loop
in __btrfs_run_delayed_refs which processes all the refs for a given
head. The code is mostly copy/paste; the only difference is that if we
detect a newer reference, -EAGAIN is returned so that callers can react
correctly. Also, at the end of the loop the head is relocked and
btrfs_merge_delayed_refs is run again to retain the pre-refactoring
semantics.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/extent-tree.c | 79 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 79 insertions(+)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 6a2c86c8a756..165a29871814 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2536,6 +2536,85 @@ struct btrfs_delayed_ref_head *btrfs_obtain_ref_head(
 	return head;
 }
 
+STATIC
+int btrfs_run_delayed_refs_for_head(struct btrfs_trans_handle *trans,
+				    struct btrfs_delayed_ref_head *locked_ref,
+				    unsigned long *run_refs)
+{
+	struct btrfs_fs_info *fs_info = trans->fs_info;
+	struct btrfs_delayed_ref_root *delayed_refs;
+	struct btrfs_delayed_extent_op *extent_op;
+	struct btrfs_delayed_ref_node *ref;
+	int must_insert_reserved = 0;
+	int ret;
+
+	delayed_refs = &trans->transaction->delayed_refs;
+
+	while ((ref = select_delayed_ref(locked_ref))) {
+
+		if (ref->seq &&
+		    btrfs_check_delayed_seq(fs_info, ref->seq)) {
+			spin_unlock(&locked_ref->lock);
+			unselect_delayed_ref_head(delayed_refs, locked_ref);
+			return -EAGAIN;
+		}
+
+		(*run_refs)++;
+		ref->in_tree = 0;
+		rb_erase(&ref->ref_node, &locked_ref->ref_tree);
+		RB_CLEAR_NODE(&ref->ref_node);
+		if (!list_empty(&ref->add_list))
+			list_del(&ref->add_list);
+		/*
+		 * When we play the delayed ref, also correct the ref_mod on
+		 * head
+		 */
+		switch (ref->action) {
+		case BTRFS_ADD_DELAYED_REF:
+		case BTRFS_ADD_DELAYED_EXTENT:
+			locked_ref->ref_mod -= ref->ref_mod;
+			break;
+		case BTRFS_DROP_DELAYED_REF:
+			locked_ref->ref_mod += ref->ref_mod;
+			break;
+		default:
+			WARN_ON(1);
+		}
+		atomic_dec(&delayed_refs->num_entries);
+
+		/*
+		 * Record the must-insert_reserved flag before we drop the
+		 * spin lock.
+		 */
+		must_insert_reserved = locked_ref->must_insert_reserved;
+		locked_ref->must_insert_reserved = 0;
+
+		extent_op = locked_ref->extent_op;
+		locked_ref->extent_op = NULL;
+		spin_unlock(&locked_ref->lock);
+
+		ret = run_one_delayed_ref(trans, ref, extent_op,
+					  must_insert_reserved);
+
+		btrfs_free_delayed_extent_op(extent_op);
+		if (ret) {
+			unselect_delayed_ref_head(delayed_refs, locked_ref);
+			btrfs_put_delayed_ref(ref);
+			btrfs_debug(fs_info, "run_one_delayed_ref returned %d",
+				    ret);
+			return ret;
+		}
+
+		btrfs_put_delayed_ref(ref);
+		cond_resched();
+
+		spin_lock(&locked_ref->lock);
+		btrfs_merge_delayed_refs(trans, delayed_refs, locked_ref);
+	}
+
+	return 0;
+}
+
 /*
  * Returns 0 on success or if called with an already aborted transaction.
  * Returns -ENOMEM or -EIO on failure and will abort the transaction.
-- 
2.7.4


* [PATCH 3/3] btrfs: refactor __btrfs_run_delayed_refs loop
  2018-08-15  7:39 [PATCH 0/3] Refactor delayed refs processing loop Nikolay Borisov
  2018-08-15  7:39 ` [PATCH 1/3] btrfs: Factor out ref head locking code in __btrfs_run_delayed_refs Nikolay Borisov
  2018-08-15  7:39 ` [PATCH 2/3] btrfs: Factor out loop processing all refs of a head Nikolay Borisov
@ 2018-08-15  7:39 ` Nikolay Borisov
  2018-09-21 14:39   ` David Sterba
  2018-09-21 14:43 ` [PATCH 0/3] Refactor delayed refs processing loop David Sterba
  3 siblings, 1 reply; 8+ messages in thread
From: Nikolay Borisov @ 2018-08-15  7:39 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Nikolay Borisov

Refactor the delayed refs loop by using the newly introduced
btrfs_run_delayed_refs_for_head function. This greatly simplifies
__btrfs_run_delayed_refs and makes it more obvious what is happening.
We now have one loop which iterates over the existing delayed heads;
each selected ref head is then processed by the new helper. All existing
semantics of the code are preserved, so there are no functional changes.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
---
 fs/btrfs/extent-tree.c | 107 +++++++++++++------------------------------------
 1 file changed, 27 insertions(+), 80 deletions(-)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 165a29871814..6a66b7f56b28 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -2550,6 +2550,9 @@ int btrfs_run_delayed_refs_for_head(struct btrfs_trans_handle *trans,
 
 	delayed_refs = &trans->transaction->delayed_refs;
 
+	lockdep_assert_held(&locked_ref->mutex);
+	lockdep_assert_held(&locked_ref->lock);
+
 	while ((ref = select_delayed_ref(locked_ref))) {
 
 		if (ref->seq &&
@@ -2624,31 +2627,24 @@ static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
 {
 	struct btrfs_fs_info *fs_info = trans->fs_info;
 	struct btrfs_delayed_ref_root *delayed_refs;
-	struct btrfs_delayed_ref_node *ref;
 	struct btrfs_delayed_ref_head *locked_ref = NULL;
-	struct btrfs_delayed_extent_op *extent_op;
 	ktime_t start = ktime_get();
 	int ret;
 	unsigned long count = 0;
 	unsigned long actual_count = 0;
-	int must_insert_reserved = 0;
 
 	delayed_refs = &trans->transaction->delayed_refs;
-	while (1) {
+	do {
 		if (!locked_ref) {
-			if (count >= nr)
-				break;
-
 			locked_ref = btrfs_obtain_ref_head(trans);
-			if (!locked_ref)
-				break;
-			else if (PTR_ERR(locked_ref) == -EAGAIN) {
-				locked_ref = NULL;
-				count++;
-				continue;
+			if (IS_ERR_OR_NULL(locked_ref)) {
+				if (PTR_ERR(locked_ref) == -EAGAIN) {
+					continue;
+				} else
+					break;
 			}
+			count++;
 		}
-
 		/*
 		 * We need to try and merge add/drops of the same ref since we
 		 * can run into issues with relocate dropping the implicit ref
@@ -2664,23 +2660,19 @@ static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
 		spin_lock(&locked_ref->lock);
 		btrfs_merge_delayed_refs(trans, delayed_refs, locked_ref);
 
-		ref = select_delayed_ref(locked_ref);
-
-		if (ref && ref->seq &&
-		    btrfs_check_delayed_seq(fs_info, ref->seq)) {
-			spin_unlock(&locked_ref->lock);
-			unselect_delayed_ref_head(delayed_refs, locked_ref);
-			locked_ref = NULL;
-			cond_resched();
-			count++;
-			continue;
-		}
-
-		/*
-		 * We're done processing refs in this ref_head, clean everything
-		 * up and move on to the next ref_head.
-		 */
-		if (!ref) {
+		ret = btrfs_run_delayed_refs_for_head(trans, locked_ref,
+						      &actual_count);
+		if (ret < 0 && ret != -EAGAIN) {
+			/*
+			 * Error, btrfs_run_delayed_refs_for_head already
+			 * unlocked everything so just bail out
+			 */
+			return ret;
+		} else if (!ret) {
+			/*
+			 * Success, perform the usual cleanup of a processed
+			 * head
+			 */
 			ret = cleanup_ref_head(trans, locked_ref);
 			if (ret > 0 ) {
 				/* We dropped our lock, we need to loop. */
@@ -2689,61 +2681,16 @@ static noinline int __btrfs_run_delayed_refs(struct btrfs_trans_handle *trans,
 			} else if (ret) {
 				return ret;
 			}
-			locked_ref = NULL;
-			count++;
-			continue;
-		}
-
-		actual_count++;
-		ref->in_tree = 0;
-		rb_erase(&ref->ref_node, &locked_ref->ref_tree);
-		RB_CLEAR_NODE(&ref->ref_node);
-		if (!list_empty(&ref->add_list))
-			list_del(&ref->add_list);
-		/*
-		 * When we play the delayed ref, also correct the ref_mod on
-		 * head
-		 */
-		switch (ref->action) {
-		case BTRFS_ADD_DELAYED_REF:
-		case BTRFS_ADD_DELAYED_EXTENT:
-			locked_ref->ref_mod -= ref->ref_mod;
-			break;
-		case BTRFS_DROP_DELAYED_REF:
-			locked_ref->ref_mod += ref->ref_mod;
-			break;
-		default:
-			WARN_ON(1);
 		}
-		atomic_dec(&delayed_refs->num_entries);
 
 		/*
-		 * Record the must-insert_reserved flag before we drop the spin
-		 * lock.
+		 * Either success case or btrfs_run_delayed_refs_for_head
+		 * returned -EAGAIN, meaning we need to select another head
 		 */
-		must_insert_reserved = locked_ref->must_insert_reserved;
-		locked_ref->must_insert_reserved = 0;
 
-		extent_op = locked_ref->extent_op;
-		locked_ref->extent_op = NULL;
-		spin_unlock(&locked_ref->lock);
-
-		ret = run_one_delayed_ref(trans, ref, extent_op,
-					  must_insert_reserved);
-
-		btrfs_free_delayed_extent_op(extent_op);
-		if (ret) {
-			unselect_delayed_ref_head(delayed_refs, locked_ref);
-			btrfs_put_delayed_ref(ref);
-			btrfs_debug(fs_info, "run_one_delayed_ref returned %d",
-				    ret);
-			return ret;
-		}
-
-		btrfs_put_delayed_ref(ref);
-		count++;
+		locked_ref = NULL;
 		cond_resched();
-	}
+	} while ((nr != -1 && count < nr) || locked_ref);
 
 	/*
 	 * We don't want to include ref heads since we can have empty ref heads
-- 
2.7.4


* Re: [PATCH 1/3] btrfs: Factor out ref head locking code in __btrfs_run_delayed_refs
  2018-08-15  7:39 ` [PATCH 1/3] btrfs: Factor out ref head locking code in __btrfs_run_delayed_refs Nikolay Borisov
@ 2018-09-21 14:30   ` David Sterba
  0 siblings, 0 replies; 8+ messages in thread
From: David Sterba @ 2018-09-21 14:30 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: linux-btrfs

On Wed, Aug 15, 2018 at 10:39:54AM +0300, Nikolay Borisov wrote:
> This is in preparation for refactoring the giant loop in
> __btrfs_run_delayed_refs. As a first step, define a new function which
> acquires a reference to a btrfs_delayed_ref_head and use it. No
> functional changes.
> 
> Signed-off-by: Nikolay Borisov <nborisov@suse.com>
> ---
>  fs/btrfs/extent-tree.c | 54 ++++++++++++++++++++++++++++++++++----------------
>  1 file changed, 37 insertions(+), 17 deletions(-)
> 
> diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
> index de6f75f5547b..6a2c86c8a756 100644
> --- a/fs/btrfs/extent-tree.c
> +++ b/fs/btrfs/extent-tree.c
> @@ -2502,6 +2502,40 @@ static int cleanup_ref_head(struct btrfs_trans_handle *trans,
>  	return 0;
>  }
>  
> +STATIC
> +struct btrfs_delayed_ref_head *btrfs_obtain_ref_head(
> +					struct btrfs_trans_handle *trans)

The STATIC is now gone from misc-next so it's just 'static'. Please
don't split the type and function name. It's allowed to fixup the style
once the function prototype is updated.
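
I.e. presumably something along these lines (just a sketch of the intended
style, not necessarily the final committed form):

	static struct btrfs_delayed_ref_head *btrfs_obtain_ref_head(
						struct btrfs_trans_handle *trans)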

> +{
> +	struct btrfs_delayed_ref_root *delayed_refs =
> +		&trans->transaction->delayed_refs;
> +	struct btrfs_delayed_ref_head *head = NULL;
> +	int ret;
> +
> +	spin_lock(&delayed_refs->lock);
> +	head = btrfs_select_ref_head(trans);
> +	if (!head) {
> +		spin_unlock(&delayed_refs->lock);
> +		return head;
> +	}
> +
> +	/*
> +	 * grab the lock that says we are going to process  all the refs for
> +	 * this head
> +	 */
> +	ret = btrfs_delayed_ref_lock(trans, head);
> +	spin_unlock(&delayed_refs->lock);
> +	/*
> +	 * we may have dropped the spin lock to get the head
> +	 * mutex lock, and that might have given someone else
> +	 * time to free the head.  If that's true, it has been
> +	 * removed from our list and we can move on.
> +	 */

As you move the entire comment, you can reformat it or reword it if you
find a better way to say what it's supposed to say.
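
E.g. simply reflowing it to the usual width and capitalization would be
enough, something like:

	/*
	 * We may have dropped the spin lock to get the head mutex lock, and
	 * that might have given someone else time to free the head. If that's
	 * true, it has been removed from our list and we can move on.
	 */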

Otherwise

Reviewed-by: David Sterba <dsterba@suse.com>


* Re: [PATCH 3/3] btrfs: refactor __btrfs_run_delayed_refs loop
  2018-08-15  7:39 ` [PATCH 3/3] btrfs: refactor __btrfs_run_delayed_refs loop Nikolay Borisov
@ 2018-09-21 14:39   ` David Sterba
  0 siblings, 0 replies; 8+ messages in thread
From: David Sterba @ 2018-09-21 14:39 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: linux-btrfs

On Wed, Aug 15, 2018 at 10:39:56AM +0300, Nikolay Borisov wrote:
> Refactor the delayed refs loop by using the newly introduced
> btrfs_run_delayed_refs_for_head function. This greatly simplifies
> __btrfs_run_delayed_refs and makes it more obvious what is happening.
> We now have one loop which iterates over the existing delayed heads;
> each selected ref head is then processed by the new helper. All existing
> semantics of the code are preserved, so there are no functional changes.

What a mess, it took me some time to understand and find the hidden loop;
this is a perfect counterexample. Thanks for fixing it up.

Reviewed-by: David Sterba <dsterba@suse.com>


> -		rb_erase(&ref->ref_node, &locked_ref->ref_tree);

There was a merge conflict with the rb_root_cached tree update, but the
fixup was trivial.
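
(Assuming the fixup is just the switch to the rb_root_cached helpers, the
quoted line presumably becomes roughly:

	rb_erase_cached(&ref->ref_node, &locked_ref->ref_tree);

with ref_tree now being a struct rb_root_cached.)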

> +	} while ((nr != -1 && count < nr) || locked_ref);

I can't be the first to notice that the '-1' is a signed constant compared
against an unsigned long. This part is not obvious, as it means 'process all
delayed refs' when called from contexts like transaction commit. Replacing
it with something more explicit would be good.
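
E.g. something along these lines would read better (the constant name here is
purely illustrative, it does not exist in btrfs today):

	/* illustrative sentinel meaning "run all delayed refs" */
	#define BTRFS_RUN_ALL_DELAYED_REFS	((unsigned long)-1)

	} while ((nr != BTRFS_RUN_ALL_DELAYED_REFS && count < nr) || locked_ref);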


* Re: [PATCH 2/3] btrfs: Factor out loop processing all refs of a head
  2018-08-15  7:39 ` [PATCH 2/3] btrfs: Factor out loop processing all refs of a head Nikolay Borisov
@ 2018-09-21 14:39   ` David Sterba
  0 siblings, 0 replies; 8+ messages in thread
From: David Sterba @ 2018-09-21 14:39 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: linux-btrfs

On Wed, Aug 15, 2018 at 10:39:55AM +0300, Nikolay Borisov wrote:
> This patch introduces a new helper encompassing the implicit inner loop
> in __btrfs_run_delayed_refs which processes all the refs for a given
> head. The code is mostly copy/paste; the only difference is that if we
> detect a newer reference, -EAGAIN is returned so that callers can react
> correctly. Also, at the end of the loop the head is relocked and
> btrfs_merge_delayed_refs is run again to retain the pre-refactoring
> semantics.
> 
> Signed-off-by: Nikolay Borisov <nborisov@suse.com>

Reviewed-by: David Sterba <dsterba@suse.com>


* Re: [PATCH 0/3] Refactor delayed refs processing loop
  2018-08-15  7:39 [PATCH 0/3] Refactor delayed refs processing loop Nikolay Borisov
                   ` (2 preceding siblings ...)
  2018-08-15  7:39 ` [PATCH 3/3] btrfs: refactor __btrfs_run_delayed_refs loop Nikolay Borisov
@ 2018-09-21 14:43 ` David Sterba
  3 siblings, 0 replies; 8+ messages in thread
From: David Sterba @ 2018-09-21 14:43 UTC (permalink / raw)
  To: Nikolay Borisov; +Cc: linux-btrfs

On Wed, Aug 15, 2018 at 10:39:53AM +0300, Nikolay Borisov wrote:
> Here is a small series which aims to rectify the eyesore that the delayed refs
> processing loop currently is. In fact, it's really two loops in the guise of a
> single 'while' construct. All in all this should bring no functional changes,
> and I've verified this with multiple xfstests runs with no problems.
> 
> Patch 1 factors out the code which selects a ref head with pending delayed
> refs and locks it.
> 
> Patch 2 introduces a new function which encompasses the internal loop, i.e.
> processing the delayed refs of a single head; it's more or less a copy of most
> of the code in __btrfs_run_delayed_refs. The only difference is that the
> function can return -EAGAIN if we detect a delayed ref whose sequence number is
> higher than what is currently in the tree mod log.
> 
> Patch 3 finally makes the loop in __btrfs_run_delayed_refs use the function
> introduced in the previous patch, deleting most of the code and changing the
> loop to a 'do {} while' construct. I strove to retain all the semantics of the
> old code, so there should be no surprises.
> 
> Nikolay Borisov (3):
>   btrfs: Factor out ref head locking code in __btrfs_run_delayed_refs
>   btrfs: Factor out loop processing all refs of a head
>   btrfs: refactor __btrfs_run_delayed_refs loop

Added to misc-next. It's been in next for some time and I haven't
noticed problems related to that patchset.

