* [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots
@ 2020-03-26  8:32 Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 01/39] btrfs: backref: Introduce the skeleton of btrfs_backref_iter Qu Wenruo
                   ` (41 more replies)
  0 siblings, 42 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs

This patchset is based on the misc-5.7 branch.

The branch can be fetched from GitHub for review/testing:
https://github.com/adam900710/linux/tree/backref_cache_all

The patchset survives all the existing qgroup/volume/replace/balance tests.


=== BACKGROUND ===
One of the biggest problems with qgroup is its performance impact.
Although we have improved it since the v5.0 kernel, there is still
something slowing qgroup down: the backref walk.

Before this patchset, we use btrfs_find_all_roots() to iterate all roots
referring to one extent.
That function does a pretty good job, but it doesn't have any cache,
which means that even when we're looking up the same extent, we still
need to do the full backref walk.

On the other hand, relocation maintains its own backref cache, which
provides a much faster backref walk.

So this patchset mostly tries to make the qgroup backref walk (at least
the commit root backref walk) use the same mechanism provided by
relocation.
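
To illustrate the difference, here is a rough sketch (illustrative
pseudocode only; cache_lookup() and cache_build() are hypothetical
names, not the final kernel API):

  /* Before: every accounting run repeats the full backref walk */
  ret = btrfs_find_all_roots(trans, fs_info, bytenr, seq, &roots, false);

  /* After: walk the backrefs of an extent once, then reuse the cache */
  node = cache_lookup(cache, bytenr);
  if (!node)
          node = cache_build(cache, bytenr); /* one full walk, cached */
  /* Collect the roots by following the cached upper edges from @node */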

=== BENCHMARK ===
For the performance improvement, the last patch includes a benchmark.
The following content is copied verbatim from that patch:
------
Here is a small script to test it:

  mkfs.btrfs -f $dev
  mount $dev -o space_cache=v2 $mnt

  btrfs subvolume create $mnt/src

  for ((i = 0; i < 64; i++)); do
          for (( j = 0; j < 16; j++)); do
                  xfs_io -f -c "pwrite 0 2k" $mnt/src/file_inline_$(($i * 16 + $j)) > /dev/null
          done
          xfs_io -f -c "pwrite 0 1M" $mnt/src/file_reg_$i > /dev/null
          sync
          btrfs subvol snapshot $mnt/src $mnt/snapshot_$i
  done
  sync

  btrfs quota enable $mnt
  btrfs quota rescan -w $mnt

Here is the benchmark for the above small test.
The performance metric is the total execution time of get_old_roots()
for the patched kernel (*), and of find_all_roots() for the original kernel.

*: With CONFIG_BTRFS_FS_CHECK_INTEGRITY disabled, as get_old_roots()
   will call find_all_roots() to verify the result if that config is
   enabled.

                | Number of calls | Total exec time
----------------+-----------------+----------------
find_all_roots()| 732             | 529991034 ns
get_old_roots() | 732             | 127998312 ns
----------------+-----------------+----------------
diff            | 0.00 %          | -75.8 %
------


=== PATCHSET STRUCTURE ===
Patch 01~14 are refactors of the relocation backref code.
Patch 15~31 are code moves.
Patch 32 is the patch that is already in misc-next.
Patch 33 is the final preparation for the qgroup backref cache.
Patch 34~39 are the qgroup backref cache implementation.

=== CHANGELOG ===
v1:
- Use the btrfs_backref_ prefix for exported structures/functions
- Add one extra patch to rename backref_(node/edge/cache)
  The renaming itself is not small, thus it's better to do the rename
  first, then move them to backref.[ch].
- Add extra Reviewed-by tags.

v2:
- Rebased to the misc-next branch
- Add new Reviewed-by tags from v1.

Qu Wenruo (39):
  btrfs: backref: Introduce the skeleton of btrfs_backref_iter
  btrfs: backref: Implement btrfs_backref_iter_next()
  btrfs: relocation: Use btrfs_backref_iter infrastructure
  btrfs: relocation: Rename mark_block_processed() and
    __mark_block_processed()
  btrfs: relocation: Add backref_cache::pending_edge and
    backref_cache::useless_node members
  btrfs: relocation: Add backref_cache::fs_info member
  btrfs: relocation: Make reloc root search specific for relocation
    backref cache
  btrfs: relocation: Refactor direct tree backref processing into its
    own function
  btrfs: relocation: Refactor indirect tree backref processing into its
    own function
  btrfs: relocation: Use wrapper to replace open-coded edge linking
  btrfs: relocation: Specify essential members for alloc_backref_node()
  btrfs: relocation: Remove the open-coded goto loop for breadth-first
    search
  btrfs: relocation: Refactor the finishing part of upper linkage into
    finish_upper_links()
  btrfs: relocation: Refactor the useless nodes handling into its own
    function
  btrfs: relocation: Add btrfs_ prefix for backref_node/edge/cache
  btrfs: Move btrfs_backref_(node|edge|cache) structures to backref.h
  btrfs: Rename tree_entry to simple_node and export it
  btrfs: Rename backref_cache_init() to btrfs_backref_cache_init() and
    move it to backref.c
  btrfs: Rename alloc_backref_node() to btrfs_backref_alloc_node() and
    move it to backref.c
  btrfs: Rename alloc_backref_edge() to btrfs_backref_alloc_edge() and
    move it to backref.c
  btrfs: Rename link_backref_edge() to btrfs_backref_link_edge() and
    move it to backref.h
  btrfs: Rename free_backref_(node|edge) to
    btrfs_backref_free_(node|edge) and move them to backref.h
  btrfs: Rename drop_backref_node() to btrfs_backref_drop_node() and
    move its needed facilities to backref.h
  btrfs: Rename remove_backref_node() to btrfs_backref_cleanup_node()
    and move it to backref.c
  btrfs: Rename backref_cache_cleanup() to btrfs_backref_release_cache()
    and move it to backref.c
  btrfs: Rename backref_tree_panic() to btrfs_backref_panic(), and move
    it to backref.c
  btrfs: Rename should_ignore_root() to btrfs_should_ignore_reloc_root()
    and export it
  btrfs: relocation: Open-code read_fs_root() for
    handle_indirect_tree_backref()
  btrfs: Rename handle_one_tree_block() to btrfs_backref_add_tree_node()
    and move it to backref.c
  btrfs: Rename finish_upper_links() to
    btrfs_backref_finish_upper_links() and move it to backref.c
  btrfs: relocation: Move error handling of build_backref_tree() to
    backref.c
  btrfs: backref: Only ignore reloc roots for indirect backref resolve if
    the backref cache is for relocation purposes
  btrfs: qgroup: Introduce qgroup backref cache
  btrfs: qgroup: Introduce qgroup_backref_cache_build() function
  btrfs: qgroup: Introduce a function to iterate through backref_cache
    to find all parents for specified node
  btrfs: qgroup: Introduce helpers to get needed tree block info
  btrfs: qgroup: Introduce verification for function to ensure old roots
    ulist matches btrfs_find_all_roots() result
  btrfs: qgroup: Introduce a new function to get old_roots ulist using
    backref cache
  btrfs: qgroup: Use backref cache to speed up old_roots search

 fs/btrfs/backref.c    |  808 ++++++++++++++++++++++++++++
 fs/btrfs/backref.h    |  319 +++++++++++
 fs/btrfs/ctree.h      |    5 +
 fs/btrfs/disk-io.c    |    1 +
 fs/btrfs/misc.h       |   54 ++
 fs/btrfs/qgroup.c     |  516 +++++++++++++++++-
 fs/btrfs/relocation.c | 1187 ++++++++---------------------------------
 7 files changed, 1925 insertions(+), 965 deletions(-)

-- 
2.26.0



* [PATCH v2 01/39] btrfs: backref: Introduce the skeleton of btrfs_backref_iter
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-04-01 15:37   ` David Sterba
  2020-03-26  8:32 ` [PATCH v2 02/39] btrfs: backref: Implement btrfs_backref_iter_next() Qu Wenruo
                   ` (40 subsequent siblings)
  41 siblings, 1 reply; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Johannes Thumshirn, Josef Bacik

Due to the complex nature of the btrfs extent tree, when we want to
iterate all backrefs of one extent, it involves quite a lot of work, like
searching the EXTENT_ITEM/METADATA_ITEM and iterating through inline and
keyed backrefs.

Normally this would result in pretty complex code, something like:
  btrfs_search_slot()
  /* Ensure we are at EXTENT_ITEM/METADATA_ITEM */
  while (1) {	/* Loop for extent tree items */
	while (ptr < end) { /* Loop for inlined items */
		/* REAL WORK HERE */
	}
  next:
  	ret = btrfs_next_item()
	/* Ensure we're still at keyed item for specified bytenr */
  }

The idea of btrfs_backref_iter is to avoid such a complex and
hard-to-read code structure, in favor of something like the following:

  iter = btrfs_backref_iter_alloc();
  ret = btrfs_backref_iter_start(iter, bytenr);
  if (ret < 0)
	goto out;
  for (; ; ret = btrfs_backref_iter_next(iter)) {
	/* REAL WORK HERE */
  }
  out:
  btrfs_backref_iter_free(iter);

This patch is just the skeleton + btrfs_backref_iter_start() code.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/backref.c | 110 +++++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/backref.h |  39 ++++++++++++++++
 2 files changed, 149 insertions(+)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 9c380e7edf62..b27e90e362d6 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2295,3 +2295,113 @@ void free_ipath(struct inode_fs_paths *ipath)
 	kvfree(ipath->fspath);
 	kfree(ipath);
 }
+
+struct btrfs_backref_iter *btrfs_backref_iter_alloc(
+		struct btrfs_fs_info *fs_info, gfp_t gfp_flag)
+{
+	struct btrfs_backref_iter *ret;
+
+	ret = kzalloc(sizeof(*ret), gfp_flag);
+	if (!ret)
+		return NULL;
+
+	ret->path = btrfs_alloc_path();
+	if (!ret->path) {
+		kfree(ret);
+		return NULL;
+	}
+
+	/* Current backref iterator only supports iteration in commit root */
+	ret->path->search_commit_root = 1;
+	ret->path->skip_locking = 1;
+	ret->fs_info = fs_info;
+
+	return ret;
+}
+
+int btrfs_backref_iter_start(struct btrfs_backref_iter *iter, u64 bytenr)
+{
+	struct btrfs_fs_info *fs_info = iter->fs_info;
+	struct btrfs_path *path = iter->path;
+	struct btrfs_extent_item *ei;
+	struct btrfs_key key;
+	int ret;
+
+	key.objectid = bytenr;
+	key.type = BTRFS_METADATA_ITEM_KEY;
+	key.offset = (u64)-1;
+	iter->bytenr = bytenr;
+
+	ret = btrfs_search_slot(NULL, fs_info->extent_root, &key, path, 0, 0);
+	if (ret < 0)
+		return ret;
+	if (ret == 0) {
+		ret = -EUCLEAN;
+		goto release;
+	}
+	if (path->slots[0] == 0) {
+		WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
+		ret = -EUCLEAN;
+		goto release;
+	}
+	path->slots[0]--;
+
+	btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
+	if ((key.type != BTRFS_EXTENT_ITEM_KEY &&
+	     key.type != BTRFS_METADATA_ITEM_KEY) || key.objectid != bytenr) {
+		ret = -ENOENT;
+		goto release;
+	}
+	memcpy(&iter->cur_key, &key, sizeof(key));
+	iter->item_ptr = (u32)btrfs_item_ptr_offset(path->nodes[0],
+						    path->slots[0]);
+	iter->end_ptr = (u32)(iter->item_ptr +
+			btrfs_item_size_nr(path->nodes[0], path->slots[0]));
+	ei = btrfs_item_ptr(path->nodes[0], path->slots[0],
+			    struct btrfs_extent_item);
+
+	/*
+	 * Only iteration on tree backrefs is supported yet.
+	 *
+	 * This is an extra precaution for non skinny-metadata, where
+	 * EXTENT_ITEM is also used for tree blocks, so we can only use
+	 * extent flags to determine if it's a tree block.
+	 */
+	if (btrfs_extent_flags(path->nodes[0], ei) & BTRFS_EXTENT_FLAG_DATA) {
+		ret = -ENOTSUPP;
+		goto release;
+	}
+	iter->cur_ptr = (u32)(iter->item_ptr + sizeof(*ei));
+
+	/* If there is no inline backref, go search for keyed backref */
+	if (iter->cur_ptr >= iter->end_ptr) {
+		ret = btrfs_next_item(fs_info->extent_root, path);
+
+		/* No inline nor keyed ref */
+		if (ret > 0) {
+			ret = -ENOENT;
+			goto release;
+		}
+		if (ret < 0)
+			goto release;
+
+		btrfs_item_key_to_cpu(path->nodes[0], &iter->cur_key,
+				path->slots[0]);
+		if (iter->cur_key.objectid != bytenr ||
+		    (iter->cur_key.type != BTRFS_SHARED_BLOCK_REF_KEY &&
+		     iter->cur_key.type != BTRFS_TREE_BLOCK_REF_KEY)) {
+			ret = -ENOENT;
+			goto release;
+		}
+		iter->cur_ptr = (u32)btrfs_item_ptr_offset(path->nodes[0],
+							   path->slots[0]);
+		iter->item_ptr = iter->cur_ptr;
+		iter->end_ptr = (u32)(iter->item_ptr + btrfs_item_size_nr(
+				      path->nodes[0], path->slots[0]));
+	}
+
+	return 0;
+release:
+	btrfs_backref_iter_release(iter);
+	return ret;
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 723d6da99114..4217e9019f4a 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -78,4 +78,43 @@ struct prelim_ref {
 	u64 wanted_disk_byte;
 };
 
+/*
+ * Helper structure to help iterate backrefs of one extent.
+ *
+ * Now it only supports iteration for tree block in commit root.
+ */
+struct btrfs_backref_iter {
+	u64 bytenr;
+	struct btrfs_path *path;
+	struct btrfs_fs_info *fs_info;
+	struct btrfs_key cur_key;
+	u32 item_ptr;
+	u32 cur_ptr;
+	u32 end_ptr;
+};
+
+struct btrfs_backref_iter *btrfs_backref_iter_alloc(
+		struct btrfs_fs_info *fs_info, gfp_t gfp_flag);
+
+static inline void btrfs_backref_iter_free(struct btrfs_backref_iter *iter)
+{
+	if (!iter)
+		return;
+	btrfs_free_path(iter->path);
+	kfree(iter);
+}
+
+int btrfs_backref_iter_start(struct btrfs_backref_iter *iter, u64 bytenr);
+
+static inline void
+btrfs_backref_iter_release(struct btrfs_backref_iter *iter)
+{
+	iter->bytenr = 0;
+	iter->item_ptr = 0;
+	iter->cur_ptr = 0;
+	iter->end_ptr = 0;
+	btrfs_release_path(iter->path);
+	memset(&iter->cur_key, 0, sizeof(iter->cur_key));
+}
+
 #endif
-- 
2.26.0



* [PATCH v2 02/39] btrfs: backref: Implement btrfs_backref_iter_next()
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 01/39] btrfs: backref: Introduce the skeleton of btrfs_backref_iter Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 03/39] btrfs: relocation: Use btrfs_backref_iter infrastructure Qu Wenruo
                   ` (39 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Johannes Thumshirn, Josef Bacik

This function will advance to the next inline/keyed backref for the
btrfs_backref_iter infrastructure.
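
Here is a usage sketch combining it with the helpers from the previous
patch (error handling trimmed; based only on the interfaces introduced
in patches 01 and 02):

	iter = btrfs_backref_iter_alloc(fs_info, GFP_NOFS);
	if (!iter)
		return -ENOMEM;
	ret = btrfs_backref_iter_start(iter, bytenr);
	for (; ret == 0; ret = btrfs_backref_iter_next(iter)) {
		if (btrfs_backref_iter_is_inline_ref(iter)) {
			/* Inline ref, data is at iter->cur_ptr */
		} else {
			/* Keyed ref, described by iter->cur_key */
		}
	}
	/* ret > 0: no more backrefs; ret < 0: error */
	btrfs_backref_iter_release(iter);
	btrfs_backref_iter_free(iter);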

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/backref.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/backref.h | 34 +++++++++++++++++++++++++++
 2 files changed, 92 insertions(+)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index b27e90e362d6..a1044f093f6c 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2405,3 +2405,61 @@ int btrfs_backref_iter_start(struct btrfs_backref_iter *iter, u64 bytenr)
 	btrfs_backref_iter_release(iter);
 	return ret;
 }
+
+/*
+ * Go to the next backref item of the current bytenr, either inlined or keyed.
+ *
+ * The caller needs to check whether it's an inline ref via iter->cur_key.
+ *
+ * Return 0 if we get the next backref without problem.
+ * Return >0 if there is no extra backref for this bytenr.
+ * Return <0 if something went wrong.
+ */
+int btrfs_backref_iter_next(struct btrfs_backref_iter *iter)
+{
+	struct extent_buffer *eb = btrfs_backref_get_eb(iter);
+	struct btrfs_path *path = iter->path;
+	struct btrfs_extent_inline_ref *iref;
+	int ret;
+	u32 size;
+
+	if (btrfs_backref_iter_is_inline_ref(iter)) {
+		/* We're still inside the inline refs */
+		ASSERT(iter->cur_ptr < iter->end_ptr);
+
+		if (btrfs_backref_has_tree_block_info(iter)) {
+			/* First tree block info */
+			size = sizeof(struct btrfs_tree_block_info);
+		} else {
+			/* Use inline ref type to determine the size */
+			int type;
+
+			iref = (struct btrfs_extent_inline_ref *)
+				((unsigned long)iter->cur_ptr);
+			type = btrfs_extent_inline_ref_type(eb, iref);
+
+			size = btrfs_extent_inline_ref_size(type);
+		}
+		iter->cur_ptr += size;
+		if (iter->cur_ptr < iter->end_ptr)
+			return 0;
+
+		/* All inline items iterated, fall through */
+	}
+	/* We're at keyed items, there is no inline item, just go next item */
+	ret = btrfs_next_item(iter->fs_info->extent_root, iter->path);
+	if (ret)
+		return ret;
+
+	btrfs_item_key_to_cpu(path->nodes[0], &iter->cur_key, path->slots[0]);
+	if (iter->cur_key.objectid != iter->bytenr ||
+	    (iter->cur_key.type != BTRFS_TREE_BLOCK_REF_KEY &&
+	     iter->cur_key.type != BTRFS_SHARED_BLOCK_REF_KEY))
+		return 1;
+	iter->item_ptr = (u32)btrfs_item_ptr_offset(path->nodes[0],
+					path->slots[0]);
+	iter->cur_ptr = iter->item_ptr;
+	iter->end_ptr = iter->item_ptr + (u32)btrfs_item_size_nr(path->nodes[0],
+						path->slots[0]);
+	return 0;
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 4217e9019f4a..3226dea35e2c 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -104,8 +104,42 @@ static inline void btrfs_backref_iter_free(struct btrfs_backref_iter *iter)
 	kfree(iter);
 }
 
+static inline struct extent_buffer *
+btrfs_backref_get_eb(struct btrfs_backref_iter *iter)
+{
+	if (!iter)
+		return NULL;
+	return iter->path->nodes[0];
+}
+
+/*
+ * For metadata with an EXTENT_ITEM key (non skinny-metadata), the first inline
+ * data is btrfs_tree_block_info, without a btrfs_extent_inline_ref header.
+ *
+ * This helper is here to determine if that's the case.
+ */
+static inline bool btrfs_backref_has_tree_block_info(
+		struct btrfs_backref_iter *iter)
+{
+	if (iter->cur_key.type == BTRFS_EXTENT_ITEM_KEY &&
+	    iter->cur_ptr - iter->item_ptr == sizeof(struct btrfs_extent_item))
+		return true;
+	return false;
+}
+
 int btrfs_backref_iter_start(struct btrfs_backref_iter *iter, u64 bytenr);
 
+int btrfs_backref_iter_next(struct btrfs_backref_iter *iter);
+
+static inline bool
+btrfs_backref_iter_is_inline_ref(struct btrfs_backref_iter *iter)
+{
+	if (iter->cur_key.type == BTRFS_EXTENT_ITEM_KEY ||
+	    iter->cur_key.type == BTRFS_METADATA_ITEM_KEY)
+		return true;
+	return false;
+}
+
 static inline void
 btrfs_backref_iter_release(struct btrfs_backref_iter *iter)
 {
-- 
2.26.0



* [PATCH v2 03/39] btrfs: relocation: Use btrfs_backref_iter infrastructure
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 01/39] btrfs: backref: Introduce the skeleton of btrfs_backref_iter Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 02/39] btrfs: backref: Implement btrfs_backref_iter_next() Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 04/39] btrfs: relocation: Rename mark_block_processed() and __mark_block_processed() Qu Wenruo
                   ` (38 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Johannes Thumshirn, Josef Bacik

build_backref_tree(), the core function of relocation, needs to
iterate all backref items of one tree block.

We don't really want to spend our code and reviewers' time going
through tons of supporting code just for the backref walk.

Use the btrfs_backref_iter infrastructure to do the loop.

The backref item loop becomes much easier to read:

	ret = btrfs_backref_iter_start(iter, cur->bytenr);
	for (; ret == 0; ret = btrfs_backref_iter_next(iter)) {
		/* The really important work */
	}

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/relocation.c | 190 ++++++++++++++----------------------------
 1 file changed, 61 insertions(+), 129 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index f65595602aa8..cf0406a1705b 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -654,48 +654,6 @@ static struct btrfs_root *read_fs_root(struct btrfs_fs_info *fs_info,
 	return btrfs_get_fs_root(fs_info, &key, false);
 }
 
-static noinline_for_stack
-int find_inline_backref(struct extent_buffer *leaf, int slot,
-			unsigned long *ptr, unsigned long *end)
-{
-	struct btrfs_key key;
-	struct btrfs_extent_item *ei;
-	struct btrfs_tree_block_info *bi;
-	u32 item_size;
-
-	btrfs_item_key_to_cpu(leaf, &key, slot);
-
-	item_size = btrfs_item_size_nr(leaf, slot);
-	if (item_size < sizeof(*ei)) {
-		btrfs_print_v0_err(leaf->fs_info);
-		btrfs_handle_fs_error(leaf->fs_info, -EINVAL, NULL);
-		return 1;
-	}
-	ei = btrfs_item_ptr(leaf, slot, struct btrfs_extent_item);
-	WARN_ON(!(btrfs_extent_flags(leaf, ei) &
-		  BTRFS_EXTENT_FLAG_TREE_BLOCK));
-
-	if (key.type == BTRFS_EXTENT_ITEM_KEY &&
-	    item_size <= sizeof(*ei) + sizeof(*bi)) {
-		WARN_ON(item_size < sizeof(*ei) + sizeof(*bi));
-		return 1;
-	}
-	if (key.type == BTRFS_METADATA_ITEM_KEY &&
-	    item_size <= sizeof(*ei)) {
-		WARN_ON(item_size < sizeof(*ei));
-		return 1;
-	}
-
-	if (key.type == BTRFS_EXTENT_ITEM_KEY) {
-		bi = (struct btrfs_tree_block_info *)(ei + 1);
-		*ptr = (unsigned long)(bi + 1);
-	} else {
-		*ptr = (unsigned long)(ei + 1);
-	}
-	*end = (unsigned long)ei + item_size;
-	return 0;
-}
-
 /*
  * build backref tree for a given tree block. root of the backref tree
  * corresponds the tree block, leaves of the backref tree correspond
@@ -715,10 +673,9 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 					struct btrfs_key *node_key,
 					int level, u64 bytenr)
 {
+	struct btrfs_backref_iter *iter;
 	struct backref_cache *cache = &rc->backref_cache;
-	struct btrfs_path *path1; /* For searching extent root */
-	struct btrfs_path *path2; /* For searching parent of TREE_BLOCK_REF */
-	struct extent_buffer *eb;
+	struct btrfs_path *path; /* For searching parent of TREE_BLOCK_REF */
 	struct btrfs_root *root;
 	struct backref_node *cur;
 	struct backref_node *upper;
@@ -727,9 +684,6 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	struct backref_node *exist = NULL;
 	struct backref_edge *edge;
 	struct rb_node *rb_node;
-	struct btrfs_key key;
-	unsigned long end;
-	unsigned long ptr;
 	LIST_HEAD(list); /* Pending edge list, upper node needs to be checked */
 	LIST_HEAD(useless);
 	int cowonly;
@@ -737,9 +691,11 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	int err = 0;
 	bool need_check = true;
 
-	path1 = btrfs_alloc_path();
-	path2 = btrfs_alloc_path();
-	if (!path1 || !path2) {
+	iter = btrfs_backref_iter_alloc(rc->extent_root->fs_info, GFP_NOFS);
+	if (!iter)
+		return ERR_PTR(-ENOMEM);
+	path = btrfs_alloc_path();
+	if (!path) {
 		err = -ENOMEM;
 		goto out;
 	}
@@ -755,25 +711,28 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	node->lowest = 1;
 	cur = node;
 again:
-	end = 0;
-	ptr = 0;
-	key.objectid = cur->bytenr;
-	key.type = BTRFS_METADATA_ITEM_KEY;
-	key.offset = (u64)-1;
-
-	path1->search_commit_root = 1;
-	path1->skip_locking = 1;
-	ret = btrfs_search_slot(NULL, rc->extent_root, &key, path1,
-				0, 0);
+	ret = btrfs_backref_iter_start(iter, cur->bytenr);
 	if (ret < 0) {
 		err = ret;
 		goto out;
 	}
-	ASSERT(ret);
-	ASSERT(path1->slots[0]);
-
-	path1->slots[0]--;
 
+	/*
+	 * We skip the first btrfs_tree_block_info, as we don't use the key
+	 * stored in it, but fetch it from the tree block.
+	 */
+	if (btrfs_backref_has_tree_block_info(iter)) {
+		ret = btrfs_backref_iter_next(iter);
+		if (ret < 0) {
+			err = ret;
+			goto out;
+		}
+		/* No extra backref? This means the tree block is corrupted */
+		if (ret > 0) {
+			err = -EUCLEAN;
+			goto out;
+		}
+	}
 	WARN_ON(cur->checked);
 	if (!list_empty(&cur->upper)) {
 		/*
@@ -795,42 +754,21 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		exist = NULL;
 	}
 
-	while (1) {
-		cond_resched();
-		eb = path1->nodes[0];
-
-		if (ptr >= end) {
-			if (path1->slots[0] >= btrfs_header_nritems(eb)) {
-				ret = btrfs_next_leaf(rc->extent_root, path1);
-				if (ret < 0) {
-					err = ret;
-					goto out;
-				}
-				if (ret > 0)
-					break;
-				eb = path1->nodes[0];
-			}
+	for (; ret == 0; ret = btrfs_backref_iter_next(iter)) {
+		struct extent_buffer *eb;
+		struct btrfs_key key;
+		int type;
 
-			btrfs_item_key_to_cpu(eb, &key, path1->slots[0]);
-			if (key.objectid != cur->bytenr) {
-				WARN_ON(exist);
-				break;
-			}
+		cond_resched();
+		eb = btrfs_backref_get_eb(iter);
 
-			if (key.type == BTRFS_EXTENT_ITEM_KEY ||
-			    key.type == BTRFS_METADATA_ITEM_KEY) {
-				ret = find_inline_backref(eb, path1->slots[0],
-							  &ptr, &end);
-				if (ret)
-					goto next;
-			}
-		}
+		key.objectid = iter->bytenr;
+		if (btrfs_backref_iter_is_inline_ref(iter)) {
+			struct btrfs_extent_inline_ref *iref;
 
-		if (ptr < end) {
 			/* update key for inline back ref */
-			struct btrfs_extent_inline_ref *iref;
-			int type;
-			iref = (struct btrfs_extent_inline_ref *)ptr;
+			iref = (struct btrfs_extent_inline_ref *)
+				((unsigned long)iter->cur_ptr);
 			type = btrfs_get_extent_inline_ref_type(eb, iref,
 							BTRFS_REF_TYPE_BLOCK);
 			if (type == BTRFS_REF_TYPE_INVALID) {
@@ -839,9 +777,9 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			}
 			key.type = type;
 			key.offset = btrfs_extent_inline_ref_offset(eb, iref);
-
-			WARN_ON(key.type != BTRFS_TREE_BLOCK_REF_KEY &&
-				key.type != BTRFS_SHARED_BLOCK_REF_KEY);
+		} else {
+			key.type = iter->cur_key.type;
+			key.offset = iter->cur_key.offset;
 		}
 
 		/*
@@ -854,7 +792,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		     (key.type == BTRFS_SHARED_BLOCK_REF_KEY &&
 		      exist->bytenr == key.offset))) {
 			exist = NULL;
-			goto next;
+			continue;
 		}
 
 		/* SHARED_BLOCK_REF means key.offset is the parent bytenr */
@@ -900,7 +838,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			edge->node[LOWER] = cur;
 			edge->node[UPPER] = upper;
 
-			goto next;
+			continue;
 		} else if (unlikely(key.type == BTRFS_EXTENT_REF_V0_KEY)) {
 			err = -EINVAL;
 			btrfs_print_v0_err(rc->extent_root->fs_info);
@@ -908,7 +846,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 					      NULL);
 			goto out;
 		} else if (key.type != BTRFS_TREE_BLOCK_REF_KEY) {
-			goto next;
+			continue;
 		}
 
 		/*
@@ -941,21 +879,21 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		level = cur->level + 1;
 
 		/* Search the tree to find parent blocks referring the block. */
-		path2->search_commit_root = 1;
-		path2->skip_locking = 1;
-		path2->lowest_level = level;
-		ret = btrfs_search_slot(NULL, root, node_key, path2, 0, 0);
-		path2->lowest_level = 0;
+		path->search_commit_root = 1;
+		path->skip_locking = 1;
+		path->lowest_level = level;
+		ret = btrfs_search_slot(NULL, root, node_key, path, 0, 0);
+		path->lowest_level = 0;
 		if (ret < 0) {
 			btrfs_put_root(root);
 			err = ret;
 			goto out;
 		}
-		if (ret > 0 && path2->slots[level] > 0)
-			path2->slots[level]--;
+		if (ret > 0 && path->slots[level] > 0)
+			path->slots[level]--;
 
-		eb = path2->nodes[level];
-		if (btrfs_node_blockptr(eb, path2->slots[level]) !=
+		eb = path->nodes[level];
+		if (btrfs_node_blockptr(eb, path->slots[level]) !=
 		    cur->bytenr) {
 			btrfs_err(root->fs_info,
 	"couldn't find block (%llu) (level %d) in tree (%llu) with key (%llu %u %llu)",
@@ -972,7 +910,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 
 		/* Add all nodes and edges in the path */
 		for (; level < BTRFS_MAX_LEVEL; level++) {
-			if (!path2->nodes[level]) {
+			if (!path->nodes[level]) {
 				ASSERT(btrfs_root_bytenr(&root->root_item) ==
 				       lower->bytenr);
 				if (should_ignore_root(root)) {
@@ -991,7 +929,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 				goto out;
 			}
 
-			eb = path2->nodes[level];
+			eb = path->nodes[level];
 			rb_node = tree_search(&cache->rb_root, eb->start);
 			if (!rb_node) {
 				upper = alloc_backref_node(cache);
@@ -1051,20 +989,14 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			lower = upper;
 			upper = NULL;
 		}
-		btrfs_release_path(path2);
-next:
-		if (ptr < end) {
-			ptr += btrfs_extent_inline_ref_size(key.type);
-			if (ptr >= end) {
-				WARN_ON(ptr > end);
-				ptr = 0;
-				end = 0;
-			}
-		}
-		if (ptr >= end)
-			path1->slots[0]++;
+		btrfs_release_path(path);
 	}
-	btrfs_release_path(path1);
+	if (ret < 0) {
+		err = ret;
+		goto out;
+	}
+	ret = 0;
+	btrfs_backref_iter_release(iter);
 
 	cur->checked = 1;
 	WARN_ON(exist);
@@ -1182,8 +1114,8 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		}
 	}
 out:
-	btrfs_free_path(path1);
-	btrfs_free_path(path2);
+	btrfs_backref_iter_free(iter);
+	btrfs_free_path(path);
 	if (err) {
 		while (!list_empty(&useless)) {
 			lower = list_entry(useless.next,
-- 
2.26.0



* [PATCH v2 04/39] btrfs: relocation: Rename mark_block_processed() and __mark_block_processed()
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (2 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 03/39] btrfs: relocation: Use btrfs_backref_iter infrastructure Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 05/39] btrfs: relocation: Add backref_cache::pending_edge and backref_cache::useless_node members Qu Wenruo
                   ` (37 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Nikolay Borisov, Josef Bacik

These two functions are weirdly named: mark_block_processed() in fact
just marks a range dirty unconditionally, while __mark_block_processed()
does an extra check before doing the marking.

This patch open-codes the old mark_block_processed() and renames
__mark_block_processed() to remove the "__" prefix.

Since we're here, also kill the forward declaration, which in turn allows
replacing in_block_group() with the in_range() macro.
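
For reference, the in_range() macro (as found in the btrfs headers,
quoted here for context) makes the replacement a pure rename:

	#define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))

so in_block_group(bytenr, rc->block_group) becomes
in_range(bytenr, rc->block_group->start, rc->block_group->length).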

Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/relocation.c | 56 +++++++++++++++++--------------------------
 1 file changed, 22 insertions(+), 34 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index cf0406a1705b..7171e80454ba 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -237,8 +237,22 @@ struct reloc_control {
 
 static void remove_backref_node(struct backref_cache *cache,
 				struct backref_node *node);
-static void __mark_block_processed(struct reloc_control *rc,
-				   struct backref_node *node);
+
+static void mark_block_processed(struct reloc_control *rc,
+				 struct backref_node *node)
+{
+	u32 blocksize;
+
+	if (node->level == 0 ||
+	    in_range(node->bytenr, rc->block_group->start,
+		     rc->block_group->length)) {
+		blocksize = rc->extent_root->fs_info->nodesize;
+		set_extent_bits(&rc->processed_blocks, node->bytenr,
+				node->bytenr + blocksize - 1, EXTENT_DIRTY);
+	}
+	node->processed = 1;
+}
+
 
 static void mapping_tree_init(struct mapping_tree *tree)
 {
@@ -1104,7 +1118,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			if (list_empty(&lower->upper))
 				list_add(&lower->list, &useless);
 		}
-		__mark_block_processed(rc, upper);
+		mark_block_processed(rc, upper);
 		if (upper->level > 0) {
 			list_add(&upper->list, &cache->detached);
 			upper->detached = 1;
@@ -1596,14 +1610,6 @@ static struct inode *find_next_inode(struct btrfs_root *root, u64 objectid)
 	return NULL;
 }
 
-static int in_block_group(u64 bytenr, struct btrfs_block_group *block_group)
-{
-	if (bytenr >= block_group->start &&
-	    bytenr < block_group->start + block_group->length)
-		return 1;
-	return 0;
-}
-
 /*
  * get new location of data
  */
@@ -1701,7 +1707,8 @@ int replace_file_extents(struct btrfs_trans_handle *trans,
 		num_bytes = btrfs_file_extent_disk_num_bytes(leaf, fi);
 		if (bytenr == 0)
 			continue;
-		if (!in_block_group(bytenr, rc->block_group))
+		if (!in_range(bytenr, rc->block_group->start,
+			      rc->block_group->length))
 			continue;
 
 		/*
@@ -2663,7 +2670,7 @@ struct btrfs_root *select_reloc_root(struct btrfs_trans_handle *trans,
 			ASSERT(next->root);
 			list_add_tail(&next->list,
 				      &rc->backref_cache.changed);
-			__mark_block_processed(rc, next);
+			mark_block_processed(rc, next);
 			break;
 		}
 
@@ -3013,25 +3020,6 @@ static int finish_pending_nodes(struct btrfs_trans_handle *trans,
 	return err;
 }
 
-static void mark_block_processed(struct reloc_control *rc,
-				 u64 bytenr, u32 blocksize)
-{
-	set_extent_bits(&rc->processed_blocks, bytenr, bytenr + blocksize - 1,
-			EXTENT_DIRTY);
-}
-
-static void __mark_block_processed(struct reloc_control *rc,
-				   struct backref_node *node)
-{
-	u32 blocksize;
-	if (node->level == 0 ||
-	    in_block_group(node->bytenr, rc->block_group)) {
-		blocksize = rc->extent_root->fs_info->nodesize;
-		mark_block_processed(rc, node->bytenr, blocksize);
-	}
-	node->processed = 1;
-}
-
 /*
  * mark a block and all blocks directly/indirectly reference the block
  * as processed.
@@ -3050,7 +3038,7 @@ static void update_processed_blocks(struct reloc_control *rc,
 			if (next->processed)
 				break;
 
-			__mark_block_processed(rc, next);
+			mark_block_processed(rc, next);
 
 			if (list_empty(&next->upper))
 				break;
@@ -4619,7 +4607,7 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
 		}
 
 		if (first_cow)
-			__mark_block_processed(rc, node);
+			mark_block_processed(rc, node);
 
 		if (first_cow && level > 0)
 			rc->nodes_relocated += buf->len;
-- 
2.26.0



* [PATCH v2 05/39] btrfs: relocation: Add backref_cache::pending_edge and backref_cache::useless_node members
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (3 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 04/39] btrfs: relocation: Rename mark_block_processed() and __mark_block_processed() Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 06/39] btrfs: relocation: Add backref_cache::fs_info member Qu Wenruo
                   ` (36 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Josef Bacik

These two new members will act the same as the existing local lists,
@useless and @list in build_backref_tree().

Currently build_backref_tree() is only executed serially, thus moving
these local lists into backref_cache is still safe.

Also, since we're here, use list_first_entry() to replace a lot of
list_entry() calls after !list_empty().
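
For reference, list_first_entry() is just a more descriptive spelling of
the same access (from include/linux/list.h):

	#define list_first_entry(ptr, type, member) \
		list_entry((ptr)->next, type, member)

so list_entry(list.next, type, member) after a !list_empty() check is
exactly list_first_entry(&list, type, member).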

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/relocation.c | 74 +++++++++++++++++++++++++++----------------
 1 file changed, 46 insertions(+), 28 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 7171e80454ba..108ea3d428bc 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -158,6 +158,12 @@ struct backref_cache {
 
 	int nr_nodes;
 	int nr_edges;
+
+	/* The list of unchecked backref edges during backref cache build */
+	struct list_head pending_edge;
+
+	/* The list of useless backref nodes during backref cache build */
+	struct list_head useless_node;
 };
 
 /*
@@ -269,6 +275,8 @@ static void backref_cache_init(struct backref_cache *cache)
 	INIT_LIST_HEAD(&cache->changed);
 	INIT_LIST_HEAD(&cache->detached);
 	INIT_LIST_HEAD(&cache->leaves);
+	INIT_LIST_HEAD(&cache->pending_edge);
+	INIT_LIST_HEAD(&cache->useless_node);
 }
 
 static void backref_cache_cleanup(struct backref_cache *cache)
@@ -292,6 +300,8 @@ static void backref_cache_cleanup(struct backref_cache *cache)
 
 	for (i = 0; i < BTRFS_MAX_LEVEL; i++)
 		ASSERT(list_empty(&cache->pending[i]));
+	ASSERT(list_empty(&cache->pending_edge));
+	ASSERT(list_empty(&cache->useless_node));
 	ASSERT(list_empty(&cache->changed));
 	ASSERT(list_empty(&cache->detached));
 	ASSERT(RB_EMPTY_ROOT(&cache->rb_root));
@@ -698,8 +708,6 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	struct backref_node *exist = NULL;
 	struct backref_edge *edge;
 	struct rb_node *rb_node;
-	LIST_HEAD(list); /* Pending edge list, upper node needs to be checked */
-	LIST_HEAD(useless);
 	int cowonly;
 	int ret;
 	int err = 0;
@@ -763,7 +771,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		 * check its backrefs
 		 */
 		if (!exist->checked)
-			list_add_tail(&edge->list[UPPER], &list);
+			list_add_tail(&edge->list[UPPER], &cache->pending_edge);
 	} else {
 		exist = NULL;
 	}
@@ -841,7 +849,8 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 				 *  backrefs for the upper level block isn't
 				 *  cached, add the block to pending list
 				 */
-				list_add_tail(&edge->list[UPPER], &list);
+				list_add_tail(&edge->list[UPPER],
+					      &cache->pending_edge);
 			} else {
 				upper = rb_entry(rb_node, struct backref_node,
 						 rb_node);
@@ -883,7 +892,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			       cur->bytenr);
 			if (should_ignore_root(root)) {
 				btrfs_put_root(root);
-				list_add(&cur->list, &useless);
+				list_add(&cur->list, &cache->useless_node);
 			} else {
 				cur->root = root;
 			}
@@ -929,7 +938,8 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 				       lower->bytenr);
 				if (should_ignore_root(root)) {
 					btrfs_put_root(root);
-					list_add(&lower->list, &useless);
+					list_add(&lower->list,
+						 &cache->useless_node);
 				} else {
 					lower->root = root;
 				}
@@ -978,7 +988,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 				if (!upper->checked && need_check) {
 					need_check = false;
 					list_add_tail(&edge->list[UPPER],
-						      &list);
+						      &cache->pending_edge);
 				} else {
 					if (upper->checked)
 						need_check = true;
@@ -1016,8 +1026,9 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	WARN_ON(exist);
 
 	/* the pending list isn't empty, take the first block to process */
-	if (!list_empty(&list)) {
-		edge = list_entry(list.next, struct backref_edge, list[UPPER]);
+	if (!list_empty(&cache->pending_edge)) {
+		edge = list_entry(cache->pending_edge.next,
+				  struct backref_edge, list[UPPER]);
 		list_del_init(&edge->list[UPPER]);
 		cur = edge->node[UPPER];
 		goto again;
@@ -1038,10 +1049,11 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	}
 
 	list_for_each_entry(edge, &node->upper, list[LOWER])
-		list_add_tail(&edge->list[UPPER], &list);
+		list_add_tail(&edge->list[UPPER], &cache->pending_edge);
 
-	while (!list_empty(&list)) {
-		edge = list_entry(list.next, struct backref_edge, list[UPPER]);
+	while (!list_empty(&cache->pending_edge)) {
+		edge = list_first_entry(&cache->pending_edge,
+				struct backref_edge, list[UPPER]);
 		list_del_init(&edge->list[UPPER]);
 		upper = edge->node[UPPER];
 		if (upper->detached) {
@@ -1049,7 +1061,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			lower = edge->node[LOWER];
 			free_backref_edge(cache, edge);
 			if (list_empty(&lower->upper))
-				list_add(&lower->list, &useless);
+				list_add(&lower->list, &cache->useless_node);
 			continue;
 		}
 
@@ -1089,7 +1101,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		list_add_tail(&edge->list[UPPER], &upper->lower);
 
 		list_for_each_entry(edge, &upper->upper, list[LOWER])
-			list_add_tail(&edge->list[UPPER], &list);
+			list_add_tail(&edge->list[UPPER], &cache->pending_edge);
 	}
 	/*
 	 * process useless backref nodes. backref nodes for tree leaves
@@ -1097,8 +1109,9 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	 * tree blocks are left in the cache to avoid unnecessary backref
 	 * lookup.
 	 */
-	while (!list_empty(&useless)) {
-		upper = list_entry(useless.next, struct backref_node, list);
+	while (!list_empty(&cache->useless_node)) {
+		upper = list_first_entry(&cache->useless_node,
+				   struct backref_node, list);
 		list_del_init(&upper->list);
 		ASSERT(list_empty(&upper->upper));
 		if (upper == node)
@@ -1108,7 +1121,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			upper->lowest = 0;
 		}
 		while (!list_empty(&upper->lower)) {
-			edge = list_entry(upper->lower.next,
+			edge = list_first_entry(&upper->lower,
 					  struct backref_edge, list[UPPER]);
 			list_del(&edge->list[UPPER]);
 			list_del(&edge->list[LOWER]);
@@ -1116,7 +1129,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			free_backref_edge(cache, edge);
 
 			if (list_empty(&lower->upper))
-				list_add(&lower->list, &useless);
+				list_add(&lower->list, &cache->useless_node);
 		}
 		mark_block_processed(rc, upper);
 		if (upper->level > 0) {
@@ -1131,14 +1144,14 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	btrfs_backref_iter_free(iter);
 	btrfs_free_path(path);
 	if (err) {
-		while (!list_empty(&useless)) {
-			lower = list_entry(useless.next,
+		while (!list_empty(&cache->useless_node)) {
+			lower = list_first_entry(&cache->useless_node,
 					   struct backref_node, list);
 			list_del_init(&lower->list);
 		}
-		while (!list_empty(&list)) {
-			edge = list_first_entry(&list, struct backref_edge,
-						list[UPPER]);
+		while (!list_empty(&cache->pending_edge)) {
+			edge = list_first_entry(&cache->pending_edge,
+					struct backref_edge, list[UPPER]);
 			list_del(&edge->list[UPPER]);
 			list_del(&edge->list[LOWER]);
 			lower = edge->node[LOWER];
@@ -1151,20 +1164,21 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			 */
 			if (list_empty(&lower->upper) &&
 			    RB_EMPTY_NODE(&lower->rb_node))
-				list_add(&lower->list, &useless);
+				list_add(&lower->list, &cache->useless_node);
 
 			if (!RB_EMPTY_NODE(&upper->rb_node))
 				continue;
 
 			/* Add this guy's upper edges to the list to process */
 			list_for_each_entry(edge, &upper->upper, list[LOWER])
-				list_add_tail(&edge->list[UPPER], &list);
+				list_add_tail(&edge->list[UPPER],
+					      &cache->pending_edge);
 			if (list_empty(&upper->upper))
-				list_add(&upper->list, &useless);
+				list_add(&upper->list, &cache->useless_node);
 		}
 
-		while (!list_empty(&useless)) {
-			lower = list_entry(useless.next,
+		while (!list_empty(&cache->useless_node)) {
+			lower = list_first_entry(&cache->useless_node,
 					   struct backref_node, list);
 			list_del_init(&lower->list);
 			if (lower == node)
@@ -1173,9 +1187,13 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		}
 
 		remove_backref_node(cache, node);
+		ASSERT(list_empty(&cache->useless_node) &&
+		       list_empty(&cache->pending_edge));
 		return ERR_PTR(err);
 	}
 	ASSERT(!node || !node->detached);
+	ASSERT(list_empty(&cache->useless_node) &&
+	       list_empty(&cache->pending_edge));
 	return node;
 }
 
-- 
2.26.0



* [PATCH v2 06/39] btrfs: relocation: Add backref_cache::fs_info member
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (4 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 05/39] btrfs: relocation: Add backref_cache::pending_edge and backref_cache::useless_node members Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 07/39] btrfs: relocation: Make reloc root search specific for relocation backref cache Qu Wenruo
                   ` (35 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Josef Bacik

Add this member so that we can grab fs_info without the help of
reloc_control.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/relocation.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 108ea3d428bc..eb117f2138cb 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -164,6 +164,8 @@ struct backref_cache {
 
 	/* The list of useless backref nodes during backref cache build */
 	struct list_head useless_node;
+
+	struct btrfs_fs_info *fs_info;
 };
 
 /*
@@ -266,7 +268,8 @@ static void mapping_tree_init(struct mapping_tree *tree)
 	spin_lock_init(&tree->lock);
 }
 
-static void backref_cache_init(struct backref_cache *cache)
+static void backref_cache_init(struct btrfs_fs_info *fs_info,
+			       struct backref_cache *cache)
 {
 	int i;
 	cache->rb_root = RB_ROOT;
@@ -277,6 +280,7 @@ static void backref_cache_init(struct backref_cache *cache)
 	INIT_LIST_HEAD(&cache->leaves);
 	INIT_LIST_HEAD(&cache->pending_edge);
 	INIT_LIST_HEAD(&cache->useless_node);
+	cache->fs_info = fs_info;
 }
 
 static void backref_cache_cleanup(struct backref_cache *cache)
@@ -4172,7 +4176,7 @@ static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
 
 	INIT_LIST_HEAD(&rc->reloc_roots);
 	INIT_LIST_HEAD(&rc->dirty_subvol_roots);
-	backref_cache_init(&rc->backref_cache);
+	backref_cache_init(fs_info, &rc->backref_cache);
 	mapping_tree_init(&rc->reloc_root_tree);
 	extent_io_tree_init(fs_info, &rc->processed_blocks,
 			    IO_TREE_RELOC_BLOCKS, NULL);
-- 
2.26.0



* [PATCH v2 07/39] btrfs: relocation: Make reloc root search specific for relocation backref cache
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (5 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 06/39] btrfs: relocation: Add backref_cache::fs_info member Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 08/39] btrfs: relocation: Refactor direct tree backref processing into its own function Qu Wenruo
                   ` (34 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Josef Bacik

find_reloc_root() searches reloc_control::reloc_root_tree to find the
reloc root.
This behavior is only useful for the relocation backref cache.

For the incoming general purpose backref cache, we don't care who owns
the reloc root, only whether it is a reloc root.

So this patch makes the following modifications to make the reloc root
search more specific to the relocation backref cache:
- Add backref_node::is_reloc_root
  This will be an extra indicator for the general purpose backref cache.
  The user doesn't need to read the root key from backref_node::root to
  determine if it's a reloc root.
  Also, for a reloc tree root the node is useless to the general purpose
  cache and will be queued to the useless list.

- Add backref_cache::is_reloc
  This allows the backref cache code to behave differently for the
  general purpose backref cache and the relocation backref cache (see
  the sketch after this list).

- Make find_reloc_root() accept fs_info
  Just a personal taste.

- Export find_reloc_root()
  So backref.c can utilize this function.
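
Put together, backref cache code can now tell the two cases apart
without reading any root key; a minimal sketch based on the members
added in this patch:

	if (node->is_reloc_root) {
		if (cache->is_reloc) {
			/* Relocation cache needs the exact reloc root */
			root = find_reloc_root(fs_info, node->bytenr);
			if (WARN_ON(!root))
				return -ENOENT;
			node->root = root;
		} else {
			/* Generic cache: reloc root nodes are useless */
			list_add(&node->list, &cache->useless_node);
		}
	}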

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/ctree.h      |  2 ++
 fs/btrfs/relocation.c | 50 +++++++++++++++++++++++++++++++++----------
 2 files changed, 41 insertions(+), 11 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 8aa7b9dac405..1e8a0a513e73 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -3381,6 +3381,8 @@ void btrfs_reloc_pre_snapshot(struct btrfs_pending_snapshot *pending,
 int btrfs_reloc_post_snapshot(struct btrfs_trans_handle *trans,
 			      struct btrfs_pending_snapshot *pending);
 int btrfs_should_cancel_balance(struct btrfs_fs_info *fs_info);
+struct btrfs_root *find_reloc_root(struct btrfs_fs_info *fs_info,
+				   u64 bytenr);
 
 /* scrub.c */
 int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index eb117f2138cb..72eb2b52d14a 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -122,6 +122,12 @@ struct backref_node {
 	 * backref node.
 	 */
 	unsigned int detached:1;
+
+	/*
+	 * For generic purpose backref cache, where we only care if it's a reloc
+	 * For general purpose backref cache, where we only care whether it's
+	 * a reloc root, not about the source subvolid.
+	unsigned int is_reloc_root:1;
 };
 
 /*
@@ -166,6 +172,14 @@ struct backref_cache {
 	struct list_head useless_node;
 
 	struct btrfs_fs_info *fs_info;
+
+	/*
+	 * Whether this cache is for relocation
+	 *
+	 * Relocation backref cache requires more info for reloc root compared
+	 * to the generic backref cache.
+	 */
+	unsigned int is_reloc;
 };
 
 /*
@@ -269,7 +283,7 @@ static void mapping_tree_init(struct mapping_tree *tree)
 }
 
 static void backref_cache_init(struct btrfs_fs_info *fs_info,
-			       struct backref_cache *cache)
+			       struct backref_cache *cache, int is_reloc)
 {
 	int i;
 	cache->rb_root = RB_ROOT;
@@ -281,6 +295,7 @@ static void backref_cache_init(struct btrfs_fs_info *fs_info,
 	INIT_LIST_HEAD(&cache->pending_edge);
 	INIT_LIST_HEAD(&cache->useless_node);
 	cache->fs_info = fs_info;
+	cache->is_reloc = is_reloc;
 }
 
 static void backref_cache_cleanup(struct backref_cache *cache)
@@ -653,13 +668,14 @@ static int should_ignore_root(struct btrfs_root *root)
 /*
  * find reloc tree by address of tree root
  */
-static struct btrfs_root *find_reloc_root(struct reloc_control *rc,
-					  u64 bytenr)
+struct btrfs_root *find_reloc_root(struct btrfs_fs_info *fs_info, u64 bytenr)
 {
+	struct reloc_control *rc = fs_info->reloc_ctl;
 	struct rb_node *rb_node;
 	struct mapping_node *node;
 	struct btrfs_root *root = NULL;
 
+	ASSERT(rc);
 	spin_lock(&rc->reloc_root_tree.lock);
 	rb_node = tree_search(&rc->reloc_root_tree.rb_root, bytenr);
 	if (rb_node) {
@@ -703,6 +719,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 {
 	struct btrfs_backref_iter *iter;
 	struct backref_cache *cache = &rc->backref_cache;
+	struct btrfs_fs_info *fs_info = cache->fs_info;
 	struct btrfs_path *path; /* For searching parent of TREE_BLOCK_REF */
 	struct btrfs_root *root;
 	struct backref_node *cur;
@@ -824,13 +841,24 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		/* SHARED_BLOCK_REF means key.offset is the parent bytenr */
 		if (key.type == BTRFS_SHARED_BLOCK_REF_KEY) {
 			if (key.objectid == key.offset) {
-				/*
-				 * Only root blocks of reloc trees use backref
-				 * pointing to itself.
-				 */
-				root = find_reloc_root(rc, cur->bytenr);
-				ASSERT(root);
-				cur->root = root;
+				cur->is_reloc_root = 1;
+				/* Only reloc backref cache cares exact root */
+				if (cache->is_reloc) {
+					root = find_reloc_root(fs_info,
+							cur->bytenr);
+					if (WARN_ON(!root)) {
+						err = -ENOENT;
+						goto out;
+					}
+					cur->root = root;
+				} else {
+					/*
+					 * For generic purpose backref cache,
+					 * reloc root node is useless.
+					 */
+					list_add(&cur->list,
+						&cache->useless_node);
+				}
 				break;
 			}
 
@@ -4176,7 +4204,7 @@ static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
 
 	INIT_LIST_HEAD(&rc->reloc_roots);
 	INIT_LIST_HEAD(&rc->dirty_subvol_roots);
-	backref_cache_init(fs_info, &rc->backref_cache);
+	backref_cache_init(fs_info, &rc->backref_cache, 1);
 	mapping_tree_init(&rc->reloc_root_tree);
 	extent_io_tree_init(fs_info, &rc->processed_blocks,
 			    IO_TREE_RELOC_BLOCKS, NULL);
-- 
2.26.0



* [PATCH v2 08/39] btrfs: relocation: Refactor direct tree backref processing into its own function
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (6 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 07/39] btrfs: relocation: Make reloc root search specific for relocation backref cache Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 09/39] btrfs: relocation: Refactor indirect " Qu Wenruo
                   ` (33 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Josef Bacik

For BTRFS_SHARED_BLOCK_REF_KEY, its processing is straightforward, as we
know the parent node bytenr directly.

If the parent is already cached, or is a root, call it a day.
If the parent is not cached, add it to the pending list.

This patch will just refactor this part into its own function,
handle_direct_tree_backref(), and add some comments explaining the
@ref_key parameter.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/relocation.c | 131 +++++++++++++++++++++++++-----------------
 1 file changed, 79 insertions(+), 52 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 72eb2b52d14a..3a1a54350139 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -698,6 +698,82 @@ static struct btrfs_root *read_fs_root(struct btrfs_fs_info *fs_info,
 	return btrfs_get_fs_root(fs_info, &key, false);
 }
 
+/*
+ * Handle direct tree backref.
+ *
+ * Direct tree backref means the backref item shows its parent bytenr
+ * directly. This is for SHARED_BLOCK_REF backrefs (keyed or inlined).
+ *
+ * @ref_key:	The converted backref key.
+ *		For keyed backref, it's the item key.
+ *		For inlined backref, objectid is the bytenr,
+ *		type is btrfs_inline_ref_type, offset is
+ *		btrfs_inline_ref_offset.
+ */
+static int handle_direct_tree_backref(struct backref_cache *cache,
+				      struct btrfs_key *ref_key,
+				      struct backref_node *cur)
+{
+	struct backref_edge *edge;
+	struct backref_node *upper;
+	struct rb_node *rb_node;
+
+	ASSERT(ref_key->type == BTRFS_SHARED_BLOCK_REF_KEY);
+
+	/* Only reloc root uses backref pointing to itself */
+	if (ref_key->objectid == ref_key->offset) {
+		struct btrfs_root *root;
+
+		cur->is_reloc_root = 1;
+		/* Only reloc backref cache cares exact root */
+		if (cache->is_reloc) {
+			root = find_reloc_root(cache->fs_info, cur->bytenr);
+			if (WARN_ON(!root))
+				return -ENOENT;
+			cur->root = root;
+		} else {
+			/*
+			 * For generic purpose backref cache, reloc root node
+			 * is useless.
+			 */
+			list_add(&cur->list, &cache->useless_node);
+		}
+		return 0;
+	}
+
+	edge = alloc_backref_edge(cache);
+	if (!edge)
+		return -ENOMEM;
+
+	rb_node = tree_search(&cache->rb_root, ref_key->offset);
+	if (!rb_node) {
+		/* Parent node not yet cached */
+		upper = alloc_backref_node(cache);
+		if (!upper) {
+			free_backref_edge(cache, edge);
+			return -ENOMEM;
+		}
+		upper->bytenr = ref_key->offset;
+		upper->level = cur->level + 1;
+
+		/*
+		 *  backrefs for the upper level block isn't
+		 *  cached, add the block to pending list
+		 */
+		list_add_tail(&edge->list[UPPER], &cache->pending_edge);
+	} else {
+		/* Parent node already cached */
+		upper = rb_entry(rb_node, struct backref_node,
+				 rb_node);
+		ASSERT(upper->checked);
+		INIT_LIST_HEAD(&edge->list[UPPER]);
+	}
+	list_add_tail(&edge->list[LOWER], &cur->upper);
+	edge->node[LOWER] = cur;
+	edge->node[UPPER] = upper;
+	return 0;
+}
+
 /*
  * build backref tree for a given tree block. root of the backref tree
  * corresponds the tree block, leaves of the backref tree correspond
@@ -719,7 +795,6 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 {
 	struct btrfs_backref_iter *iter;
 	struct backref_cache *cache = &rc->backref_cache;
-	struct btrfs_fs_info *fs_info = cache->fs_info;
 	struct btrfs_path *path; /* For searching parent of TREE_BLOCK_REF */
 	struct btrfs_root *root;
 	struct backref_node *cur;
@@ -840,59 +915,11 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 
 		/* SHARED_BLOCK_REF means key.offset is the parent bytenr */
 		if (key.type == BTRFS_SHARED_BLOCK_REF_KEY) {
-			if (key.objectid == key.offset) {
-				cur->is_reloc_root = 1;
-				/* Only reloc backref cache cares exact root */
-				if (cache->is_reloc) {
-					root = find_reloc_root(fs_info,
-							cur->bytenr);
-					if (WARN_ON(!root)) {
-						err = -ENOENT;
-						goto out;
-					}
-					cur->root = root;
-				} else {
-					/*
-					 * For generic purpose backref cache,
-					 * reloc root node is useless.
-					 */
-					list_add(&cur->list,
-						&cache->useless_node);
-				}
-				break;
-			}
-
-			edge = alloc_backref_edge(cache);
-			if (!edge) {
-				err = -ENOMEM;
+			ret = handle_direct_tree_backref(cache, &key, cur);
+			if (ret < 0) {
+				err = ret;
 				goto out;
 			}
-			rb_node = tree_search(&cache->rb_root, key.offset);
-			if (!rb_node) {
-				upper = alloc_backref_node(cache);
-				if (!upper) {
-					free_backref_edge(cache, edge);
-					err = -ENOMEM;
-					goto out;
-				}
-				upper->bytenr = key.offset;
-				upper->level = cur->level + 1;
-				/*
-				 *  backrefs for the upper level block isn't
-				 *  cached, add the block to pending list
-				 */
-				list_add_tail(&edge->list[UPPER],
-					      &cache->pending_edge);
-			} else {
-				upper = rb_entry(rb_node, struct backref_node,
-						 rb_node);
-				ASSERT(upper->checked);
-				INIT_LIST_HEAD(&edge->list[UPPER]);
-			}
-			list_add_tail(&edge->list[LOWER], &cur->upper);
-			edge->node[LOWER] = cur;
-			edge->node[UPPER] = upper;
-
 			continue;
 		} else if (unlikely(key.type == BTRFS_EXTENT_REF_V0_KEY)) {
 			err = -EINVAL;
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 09/39] btrfs: relocation: Refactor indirect tree backref processing into its own function
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (7 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 08/39] btrfs: relocation: Refactor direct tree backref processing into its own function Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 10/39] btrfs: relocation: Use wrapper to replace open-coded edge linking Qu Wenruo
                   ` (32 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Josef Bacik

The processing of indirect tree backrefs (TREE_BLOCK_REF) is the most
complex work.

We need to grab the fs root, do a tree search to locate all its parent
nodes, link all the needed edges, and put all uncached edges onto the
pending edge list.

This is definitely worth a helper function.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/relocation.c | 294 +++++++++++++++++++++++-------------------
 1 file changed, 159 insertions(+), 135 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 3a1a54350139..611ccb579938 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -774,6 +774,163 @@ static int handle_direct_tree_backref(struct backref_cache *cache,
 	return 0;
 }
 
+/*
+ * Handle indirect tree backref.
+ *
+ * Indirect tree backref means we only know which tree the node belongs to.
+ * We need to do a tree search to find out the parents. This is for
+ * TREE_BLOCK_REF backref (keyed or inlined).
+ *
+ * @ref_key:	The same as @ref_key in handle_direct_tree_backref()
+ * @tree_key:	The first key of this tree block.
+ * @path:	A clean (released) path, to avoid allocating a path every time
+ *		the function gets called.
+ */
+static int handle_indirect_tree_backref(struct backref_cache *cache,
+					struct btrfs_path *path,
+					struct btrfs_key *ref_key,
+					struct btrfs_key *tree_key,
+					struct backref_node *cur)
+{
+	struct btrfs_fs_info *fs_info = cache->fs_info;
+	struct backref_node *upper;
+	struct backref_node *lower;
+	struct backref_edge *edge;
+	struct extent_buffer *eb;
+	struct btrfs_root *root;
+	struct rb_node *rb_node;
+	int level;
+	bool need_check = true;
+	int ret;
+
+	root = read_fs_root(fs_info, ref_key->offset);
+	if (IS_ERR(root))
+		return PTR_ERR(root);
+	if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
+		cur->cowonly = 1;
+
+	if (btrfs_root_level(&root->root_item) == cur->level) {
+		/* tree root */
+		ASSERT(btrfs_root_bytenr(&root->root_item) == cur->bytenr);
+		if (should_ignore_root(root)) {
+			btrfs_put_root(root);
+			list_add(&cur->list, &cache->useless_node);
+		} else {
+			cur->root = root;
+		}
+		return 0;
+	}
+
+	level = cur->level + 1;
+
+	/* Search the tree to find parent blocks referring the block. */
+	path->search_commit_root = 1;
+	path->skip_locking = 1;
+	path->lowest_level = level;
+	ret = btrfs_search_slot(NULL, root, tree_key, path, 0, 0);
+	path->lowest_level = 0;
+	if (ret < 0) {
+		btrfs_put_root(root);
+		return ret;
+	}
+	if (ret > 0 && path->slots[level] > 0)
+		path->slots[level]--;
+
+	eb = path->nodes[level];
+	if (btrfs_node_blockptr(eb, path->slots[level]) != cur->bytenr) {
+		btrfs_err(fs_info,
+"couldn't find block (%llu) (level %d) in tree (%llu) with key (%llu %u %llu)",
+			  cur->bytenr, level - 1, root->root_key.objectid,
+			  tree_key->objectid, tree_key->type, tree_key->offset);
+		btrfs_put_root(root);
+		ret = -ENOENT;
+		goto out;
+	}
+	lower = cur;
+
+	/* Add all nodes and edges in the path */
+	for (; level < BTRFS_MAX_LEVEL; level++) {
+		if (!path->nodes[level]) {
+			ASSERT(btrfs_root_bytenr(&root->root_item) ==
+			       lower->bytenr);
+			if (should_ignore_root(root)) {
+				btrfs_put_root(root);
+				list_add(&lower->list, &cache->useless_node);
+			} else {
+				lower->root = root;
+			}
+			break;
+		}
+
+		edge = alloc_backref_edge(cache);
+		if (!edge) {
+			btrfs_put_root(root);
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		eb = path->nodes[level];
+		rb_node = tree_search(&cache->rb_root, eb->start);
+		if (!rb_node) {
+			upper = alloc_backref_node(cache);
+			if (!upper) {
+				btrfs_put_root(root);
+				free_backref_edge(cache, edge);
+				ret = -ENOMEM;
+				goto out;
+			}
+			upper->bytenr = eb->start;
+			upper->owner = btrfs_header_owner(eb);
+			upper->level = lower->level + 1;
+			if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
+				upper->cowonly = 1;
+
+			/*
+			 * if we know the block isn't shared we can avoid
+			 * checking its backrefs.
+			 */
+			if (btrfs_block_can_be_shared(root, eb))
+				upper->checked = 0;
+			else
+				upper->checked = 1;
+
+			/*
+			 * add the block to pending list if we need check its
+			 * backrefs, we only do this once while walking up a
+			 * tree as we will catch anything else later on.
+			 */
+			if (!upper->checked && need_check) {
+				need_check = false;
+				list_add_tail(&edge->list[UPPER],
+					      &cache->pending_edge);
+			} else {
+				if (upper->checked)
+					need_check = true;
+				INIT_LIST_HEAD(&edge->list[UPPER]);
+			}
+		} else {
+			upper = rb_entry(rb_node, struct backref_node, rb_node);
+			ASSERT(upper->checked);
+			INIT_LIST_HEAD(&edge->list[UPPER]);
+			if (!upper->owner)
+				upper->owner = btrfs_header_owner(eb);
+		}
+		list_add_tail(&edge->list[LOWER], &lower->upper);
+		edge->node[LOWER] = lower;
+		edge->node[UPPER] = upper;
+
+		if (rb_node) {
+			btrfs_put_root(root);
+			break;
+		}
+		lower = upper;
+		upper = NULL;
+	}
+out:
+	btrfs_release_path(path);
+	return ret;
+}
+
 /*
  * build backref tree for a given tree block. root of the backref tree
  * corresponds the tree block, leaves of the backref tree correspond
@@ -796,7 +953,6 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	struct btrfs_backref_iter *iter;
 	struct backref_cache *cache = &rc->backref_cache;
 	struct btrfs_path *path; /* For searching parent of TREE_BLOCK_REF */
-	struct btrfs_root *root;
 	struct backref_node *cur;
 	struct backref_node *upper;
 	struct backref_node *lower;
@@ -807,7 +963,6 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	int cowonly;
 	int ret;
 	int err = 0;
-	bool need_check = true;
 
 	iter = btrfs_backref_iter_alloc(rc->extent_root->fs_info, GFP_NOFS);
 	if (!iter)
@@ -936,143 +1091,12 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		 * means the root objectid. We need to search the tree to get
 		 * its parent bytenr.
 		 */
-		root = read_fs_root(rc->extent_root->fs_info, key.offset);
-		if (IS_ERR(root)) {
-			err = PTR_ERR(root);
-			goto out;
-		}
-
-		if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
-			cur->cowonly = 1;
-
-		if (btrfs_root_level(&root->root_item) == cur->level) {
-			/* tree root */
-			ASSERT(btrfs_root_bytenr(&root->root_item) ==
-			       cur->bytenr);
-			if (should_ignore_root(root)) {
-				btrfs_put_root(root);
-				list_add(&cur->list, &cache->useless_node);
-			} else {
-				cur->root = root;
-			}
-			break;
-		}
-
-		level = cur->level + 1;
-
-		/* Search the tree to find parent blocks referring the block. */
-		path->search_commit_root = 1;
-		path->skip_locking = 1;
-		path->lowest_level = level;
-		ret = btrfs_search_slot(NULL, root, node_key, path, 0, 0);
-		path->lowest_level = 0;
+		ret = handle_indirect_tree_backref(cache, path, &key, node_key,
+						   cur);
 		if (ret < 0) {
-			btrfs_put_root(root);
 			err = ret;
 			goto out;
 		}
-		if (ret > 0 && path->slots[level] > 0)
-			path->slots[level]--;
-
-		eb = path->nodes[level];
-		if (btrfs_node_blockptr(eb, path->slots[level]) !=
-		    cur->bytenr) {
-			btrfs_err(root->fs_info,
-	"couldn't find block (%llu) (level %d) in tree (%llu) with key (%llu %u %llu)",
-				  cur->bytenr, level - 1,
-				  root->root_key.objectid,
-				  node_key->objectid, node_key->type,
-				  node_key->offset);
-			btrfs_put_root(root);
-			err = -ENOENT;
-			goto out;
-		}
-		lower = cur;
-		need_check = true;
-
-		/* Add all nodes and edges in the path */
-		for (; level < BTRFS_MAX_LEVEL; level++) {
-			if (!path->nodes[level]) {
-				ASSERT(btrfs_root_bytenr(&root->root_item) ==
-				       lower->bytenr);
-				if (should_ignore_root(root)) {
-					btrfs_put_root(root);
-					list_add(&lower->list,
-						 &cache->useless_node);
-				} else {
-					lower->root = root;
-				}
-				break;
-			}
-
-			edge = alloc_backref_edge(cache);
-			if (!edge) {
-				btrfs_put_root(root);
-				err = -ENOMEM;
-				goto out;
-			}
-
-			eb = path->nodes[level];
-			rb_node = tree_search(&cache->rb_root, eb->start);
-			if (!rb_node) {
-				upper = alloc_backref_node(cache);
-				if (!upper) {
-					btrfs_put_root(root);
-					free_backref_edge(cache, edge);
-					err = -ENOMEM;
-					goto out;
-				}
-				upper->bytenr = eb->start;
-				upper->owner = btrfs_header_owner(eb);
-				upper->level = lower->level + 1;
-				if (!test_bit(BTRFS_ROOT_REF_COWS,
-					      &root->state))
-					upper->cowonly = 1;
-
-				/*
-				 * if we know the block isn't shared
-				 * we can void checking its backrefs.
-				 */
-				if (btrfs_block_can_be_shared(root, eb))
-					upper->checked = 0;
-				else
-					upper->checked = 1;
-
-				/*
-				 * add the block to pending list if we
-				 * need check its backrefs, we only do this once
-				 * while walking up a tree as we will catch
-				 * anything else later on.
-				 */
-				if (!upper->checked && need_check) {
-					need_check = false;
-					list_add_tail(&edge->list[UPPER],
-						      &cache->pending_edge);
-				} else {
-					if (upper->checked)
-						need_check = true;
-					INIT_LIST_HEAD(&edge->list[UPPER]);
-				}
-			} else {
-				upper = rb_entry(rb_node, struct backref_node,
-						 rb_node);
-				ASSERT(upper->checked);
-				INIT_LIST_HEAD(&edge->list[UPPER]);
-				if (!upper->owner)
-					upper->owner = btrfs_header_owner(eb);
-			}
-			list_add_tail(&edge->list[LOWER], &lower->upper);
-			edge->node[LOWER] = lower;
-			edge->node[UPPER] = upper;
-
-			if (rb_node) {
-				btrfs_put_root(root);
-				break;
-			}
-			lower = upper;
-			upper = NULL;
-		}
-		btrfs_release_path(path);
 	}
 	if (ret < 0) {
 		err = ret;
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 10/39] btrfs: relocation: Use wrapper to replace open-coded edge linking
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (8 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 09/39] btrfs: relocation: Refactor indirect " Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 11/39] btrfs: relocation: Specify essential members for alloc_backref_node() Qu Wenruo
                   ` (31 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Josef Bacik, Nikolay Borisov

Since backref_edge is used to connect upper and lower backref nodes, and
needs to access both nodes, some code can look pretty nasty:

		list_add_tail(&edge->list[LOWER], &cur->upper);

The above code links @cur to the LOWER side of the edge, while both the
words "LOWER" and "upper" show up.
This can sometimes be very confusing for readers.

This patch introduces a new wrapper, link_backref_edge(), to handle the
linking behavior.
It also has an extra ASSERT() to ensure callers won't pass the wrong
nodes in.

Also, this updates the comments on the related lists of backref_node and
backref_edge, to make it clearer what each list links to.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/relocation.c | 53 ++++++++++++++++++++++++++++++-------------
 1 file changed, 37 insertions(+), 16 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 611ccb579938..a022b46804ff 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -92,10 +92,12 @@ struct backref_node {
 	u64 owner;
 	/* link to pending, changed or detached list */
 	struct list_head list;
-	/* list of upper level blocks reference this block */
+
+	/* List of upper level edges, which links this node to its parent(s) */
 	struct list_head upper;
-	/* list of child blocks in the cache */
+	/* List of lower level edges, which links this node to its child(ren) */
 	struct list_head lower;
+
 	/* NULL if this node is not tree root */
 	struct btrfs_root *root;
 	/* extent buffer got by COW the block */
@@ -130,17 +132,26 @@ struct backref_node {
 	unsigned int is_reloc_root:1;
 };
 
+#define LOWER	0
+#define UPPER	1
+#define RELOCATION_RESERVED_NODES	256
 /*
- * present a block pointer in the backref cache
+ * present an edge connecting upper and lower backref nodes.
  */
 struct backref_edge {
+	/*
+	 * list[LOWER] is linked to backref_node::upper of lower level node,
+	 * and list[UPPER] is linked to backref_node::lower of upper level node.
+	 *
+	 * Also, build_backref_tree() uses list[UPPER] for pending edges, before
+	 * linking list[UPPER] to its upper level nodes.
+	 */
 	struct list_head list[2];
+
+	/* Two related nodes */
 	struct backref_node *node[2];
 };
 
-#define LOWER	0
-#define UPPER	1
-#define RELOCATION_RESERVED_NODES	256
 
 struct backref_cache {
 	/* red black tree of all backref nodes in the cache */
@@ -363,6 +374,22 @@ static struct backref_edge *alloc_backref_edge(struct backref_cache *cache)
 	return edge;
 }
 
+#define		LINK_LOWER	(1 << 0)
+#define		LINK_UPPER	(1 << 1)
+static void link_backref_edge(struct backref_edge *edge,
+			      struct backref_node *lower,
+			      struct backref_node *upper,
+			      int link_which)
+{
+	ASSERT(upper && lower && upper->level == lower->level + 1);
+	edge->node[LOWER] = lower;
+	edge->node[UPPER] = upper;
+	if (link_which & LINK_LOWER)
+		list_add_tail(&edge->list[LOWER], &lower->upper);
+	if (link_which & LINK_UPPER)
+		list_add_tail(&edge->list[UPPER], &upper->lower);
+}
+
 static void free_backref_edge(struct backref_cache *cache,
 			      struct backref_edge *edge)
 {
@@ -768,9 +795,7 @@ static int handle_direct_tree_backref(struct backref_cache *cache,
 		ASSERT(upper->checked);
 		INIT_LIST_HEAD(&edge->list[UPPER]);
 	}
-	list_add_tail(&edge->list[LOWER], &cur->upper);
-	edge->node[LOWER] = cur;
-	edge->node[UPPER] = upper;
+	link_backref_edge(edge, cur, upper, LINK_LOWER);
 	return 0;
 }
 
@@ -915,9 +940,7 @@ static int handle_indirect_tree_backref(struct backref_cache *cache,
 			if (!upper->owner)
 				upper->owner = btrfs_header_owner(eb);
 		}
-		list_add_tail(&edge->list[LOWER], &lower->upper);
-		edge->node[LOWER] = lower;
-		edge->node[UPPER] = upper;
+		link_backref_edge(edge, lower, upper, LINK_LOWER);
 
 		if (rb_node) {
 			btrfs_put_root(root);
@@ -1340,10 +1363,8 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 			if (!new_edge)
 				goto fail;
 
-			new_edge->node[UPPER] = new_node;
-			new_edge->node[LOWER] = edge->node[LOWER];
-			list_add_tail(&new_edge->list[UPPER],
-				      &new_node->lower);
+			link_backref_edge(new_edge, edge->node[LOWER], new_node,
+					  LINK_UPPER);
 		}
 	} else {
 		list_add_tail(&new_node->lower, &cache->leaves);
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 11/39] btrfs: relocation: Specify essential members for alloc_backref_node()
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (9 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 10/39] btrfs: relocation: Use wrapper to replace open-coded edge linking Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 12/39] btrfs: relocation: Remove the open-coded goto loop for breadth-first search Qu Wenruo
                   ` (30 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Josef Bacik, Nikolay Borisov

Bytenr and level are essential members of backref_node, thus it makes
sense to initialize them at allocation time.

Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/relocation.c | 39 +++++++++++++++++++--------------------
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index a022b46804ff..1ea184e8afb2 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -339,18 +339,23 @@ static void backref_cache_cleanup(struct backref_cache *cache)
 	ASSERT(!cache->nr_edges);
 }
 
-static struct backref_node *alloc_backref_node(struct backref_cache *cache)
+static struct backref_node *alloc_backref_node(struct backref_cache *cache,
+						u64 bytenr, int level)
 {
 	struct backref_node *node;
 
+	ASSERT(level >= 0 && level < BTRFS_MAX_LEVEL);
 	node = kzalloc(sizeof(*node), GFP_NOFS);
-	if (node) {
-		INIT_LIST_HEAD(&node->list);
-		INIT_LIST_HEAD(&node->upper);
-		INIT_LIST_HEAD(&node->lower);
-		RB_CLEAR_NODE(&node->rb_node);
-		cache->nr_nodes++;
-	}
+	if (!node)
+		return node;
+	INIT_LIST_HEAD(&node->list);
+	INIT_LIST_HEAD(&node->upper);
+	INIT_LIST_HEAD(&node->lower);
+	RB_CLEAR_NODE(&node->rb_node);
+	cache->nr_nodes++;
+
+	node->level = level;
+	node->bytenr = bytenr;
 	return node;
 }
 
@@ -775,13 +780,12 @@ static int handle_direct_tree_backref(struct backref_cache *cache,
 	rb_node = tree_search(&cache->rb_root, ref_key->offset);
 	if (!rb_node) {
 		/* Parent node not yet cached */
-		upper = alloc_backref_node(cache);
+		upper = alloc_backref_node(cache, ref_key->offset,
+					   cur->level + 1);
 		if (!upper) {
 			free_backref_edge(cache, edge);
 			return -ENOMEM;
 		}
-		upper->bytenr = ref_key->offset;
-		upper->level = cur->level + 1;
 
 		/*
 		 * backrefs for the upper level block aren't
 		 * cached, add the block to the pending list
 		eb = path->nodes[level];
 		rb_node = tree_search(&cache->rb_root, eb->start);
 		if (!rb_node) {
-			upper = alloc_backref_node(cache);
+			upper = alloc_backref_node(cache, eb->start,
+						   lower->level + 1);
 			if (!upper) {
 				btrfs_put_root(root);
 				free_backref_edge(cache, edge);
 				ret = -ENOMEM;
 				goto out;
 			}
-			upper->bytenr = eb->start;
 			upper->owner = btrfs_header_owner(eb);
-			upper->level = lower->level + 1;
 			if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
 				upper->cowonly = 1;
 
@@ -996,14 +999,12 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		goto out;
 	}
 
-	node = alloc_backref_node(cache);
+	node = alloc_backref_node(cache, bytenr, level);
 	if (!node) {
 		err = -ENOMEM;
 		goto out;
 	}
 
-	node->bytenr = bytenr;
-	node->level = level;
 	node->lowest = 1;
 	cur = node;
 again:
@@ -1346,12 +1347,10 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 	if (!node)
 		return 0;
 
-	new_node = alloc_backref_node(cache);
+	new_node = alloc_backref_node(cache, dest->node->start, node->level);
 	if (!new_node)
 		return -ENOMEM;
 
-	new_node->bytenr = dest->node->start;
-	new_node->level = node->level;
 	new_node->lowest = node->lowest;
 	new_node->checked = 1;
 	new_node->root = btrfs_grab_root(dest);
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 12/39] btrfs: relocation: Remove the open-coded goto loop for breadth-first search
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (10 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 11/39] btrfs: relocation: Specify essential members for alloc_backref_node() Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 13/39] btrfs: relocation: Refactor the finishing part of upper linkage into finish_upper_links() Qu Wenruo
                   ` (29 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Josef Bacik

build_backref_tree() uses "goto again;" to implement a breadth-first
search to build the backref cache.

This patch will extract most of its work into a wrapper,
handle_one_tree_block(), and use a do {} while() loop to implement the
same breadth-first search.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/relocation.c | 168 ++++++++++++++++++++++--------------------
 1 file changed, 88 insertions(+), 80 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 1ea184e8afb2..462b6df54b11 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -957,76 +957,31 @@ static int handle_indirect_tree_backref(struct backref_cache *cache,
 	return ret;
 }
 
-/*
- * build backref tree for a given tree block. root of the backref tree
- * corresponds the tree block, leaves of the backref tree correspond
- * roots of b-trees that reference the tree block.
- *
- * the basic idea of this function is check backrefs of a given block
- * to find upper level blocks that reference the block, and then check
- * backrefs of these upper level blocks recursively. the recursion stop
- * when tree root is reached or backrefs for the block is cached.
- *
- * NOTE: if we find backrefs for a block are cached, we know backrefs
- * for all upper level blocks that directly/indirectly reference the
- * block are also cached.
- */
-static noinline_for_stack
-struct backref_node *build_backref_tree(struct reloc_control *rc,
-					struct btrfs_key *node_key,
-					int level, u64 bytenr)
+static int handle_one_tree_block(struct backref_cache *cache,
+				 struct btrfs_path *path,
+				 struct btrfs_backref_iter *iter,
+				 struct btrfs_key *node_key,
+				 struct backref_node *cur)
 {
-	struct btrfs_backref_iter *iter;
-	struct backref_cache *cache = &rc->backref_cache;
-	struct btrfs_path *path; /* For searching parent of TREE_BLOCK_REF */
-	struct backref_node *cur;
-	struct backref_node *upper;
-	struct backref_node *lower;
-	struct backref_node *node = NULL;
-	struct backref_node *exist = NULL;
+	struct btrfs_fs_info *fs_info = cache->fs_info;
 	struct backref_edge *edge;
-	struct rb_node *rb_node;
-	int cowonly;
+	struct backref_node *exist;
 	int ret;
-	int err = 0;
-
-	iter = btrfs_backref_iter_alloc(rc->extent_root->fs_info, GFP_NOFS);
-	if (!iter)
-		return ERR_PTR(-ENOMEM);
-	path = btrfs_alloc_path();
-	if (!path) {
-		err = -ENOMEM;
-		goto out;
-	}
 
-	node = alloc_backref_node(cache, bytenr, level);
-	if (!node) {
-		err = -ENOMEM;
-		goto out;
-	}
-
-	node->lowest = 1;
-	cur = node;
-again:
 	ret = btrfs_backref_iter_start(iter, cur->bytenr);
-	if (ret < 0) {
-		err = ret;
-		goto out;
-	}
-
+	if (ret < 0)
+		return ret;
 	/*
 	 * We skip the first btrfs_tree_block_info, as we don't use the key
 	 * stored in it, but fetch it from the tree block.
 	 */
 	if (btrfs_backref_has_tree_block_info(iter)) {
 		ret = btrfs_backref_iter_next(iter);
-		if (ret < 0) {
-			err = ret;
+		if (ret < 0)
 			goto out;
-		}
 		/* No extra backref? This means the tree block is corrupted */
 		if (ret > 0) {
-			err = -EUCLEAN;
+			ret = -EUCLEAN;
 			goto out;
 		}
 	}
@@ -1069,7 +1024,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			type = btrfs_get_extent_inline_ref_type(eb, iref,
 							BTRFS_REF_TYPE_BLOCK);
 			if (type == BTRFS_REF_TYPE_INVALID) {
-				err = -EUCLEAN;
+				ret = -EUCLEAN;
 				goto out;
 			}
 			key.type = type;
@@ -1095,16 +1050,13 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		/* SHARED_BLOCK_REF means key.offset is the parent bytenr */
 		if (key.type == BTRFS_SHARED_BLOCK_REF_KEY) {
 			ret = handle_direct_tree_backref(cache, &key, cur);
-			if (ret < 0) {
-				err = ret;
+			if (ret < 0)
 				goto out;
-			}
 			continue;
 		} else if (unlikely(key.type == BTRFS_EXTENT_REF_V0_KEY)) {
-			err = -EINVAL;
-			btrfs_print_v0_err(rc->extent_root->fs_info);
-			btrfs_handle_fs_error(rc->extent_root->fs_info, err,
-					      NULL);
+			ret = -EINVAL;
+			btrfs_print_v0_err(fs_info);
+			btrfs_handle_fs_error(fs_info, ret, NULL);
 			goto out;
 		} else if (key.type != BTRFS_TREE_BLOCK_REF_KEY) {
 			continue;
@@ -1117,30 +1069,86 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		 */
 		ret = handle_indirect_tree_backref(cache, path, &key, node_key,
 						   cur);
-		if (ret < 0) {
-			err = ret;
+		if (ret < 0)
 			goto out;
-		}
-	}
-	if (ret < 0) {
-		err = ret;
-		goto out;
 	}
 	ret = 0;
-	btrfs_backref_iter_release(iter);
-
 	cur->checked = 1;
 	WARN_ON(exist);
+out:
+	btrfs_backref_iter_release(iter);
+	return ret;
+}
 
-	/* the pending list isn't empty, take the first block to process */
-	if (!list_empty(&cache->pending_edge)) {
-		edge = list_entry(cache->pending_edge.next,
-				  struct backref_edge, list[UPPER]);
-		list_del_init(&edge->list[UPPER]);
-		cur = edge->node[UPPER];
-		goto again;
+/*
+ * build backref tree for a given tree block. root of the backref tree
+ * corresponds to the tree block, leaves of the backref tree correspond
+ * to roots of b-trees that reference the tree block.
+ *
+ * the basic idea of this function is to check backrefs of a given block
+ * to find upper level blocks that reference the block, and then check
+ * backrefs of these upper level blocks recursively. the recursion stops
+ * when the tree root is reached or the backrefs for the block are cached.
+ *
+ * NOTE: if we find backrefs for a block are cached, we know backrefs
+ * for all upper level blocks that directly/indirectly reference the
+ * block are also cached.
+ */
+static noinline_for_stack
+struct backref_node *build_backref_tree(struct reloc_control *rc,
+					struct btrfs_key *node_key,
+					int level, u64 bytenr)
+{
+	struct btrfs_backref_iter *iter;
+	struct backref_cache *cache = &rc->backref_cache;
+	struct btrfs_path *path; /* For searching parent of TREE_BLOCK_REF */
+	struct backref_node *cur;
+	struct backref_node *upper;
+	struct backref_node *lower;
+	struct backref_node *node = NULL;
+	struct backref_edge *edge;
+	struct rb_node *rb_node;
+	int cowonly;
+	int ret;
+	int err = 0;
+
+	iter = btrfs_backref_iter_alloc(rc->extent_root->fs_info, GFP_NOFS);
+	if (!iter)
+		return ERR_PTR(-ENOMEM);
+	path = btrfs_alloc_path();
+	if (!path) {
+		err = -ENOMEM;
+		goto out;
 	}
 
+	node = alloc_backref_node(cache, bytenr, level);
+	if (!node) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	node->lowest = 1;
+	cur = node;
+
+	/* Breadth-first search to build backref cache */
+	do {
+		ret = handle_one_tree_block(cache, path, iter, node_key, cur);
+		if (ret < 0) {
+			err = ret;
+			goto out;
+		}
+		edge = list_first_entry_or_null(&cache->pending_edge,
+				struct backref_edge, list[UPPER]);
+		/*
+		 * the pending list isn't empty, take the first block to
+		 * process
+		 */
+		if (edge) {
+			list_del_init(&edge->list[UPPER]);
+			cur = edge->node[UPPER];
+		}
+	} while (edge);
+
 	/*
 	 * everything goes well, connect backref nodes and insert backref nodes
 	 * into the cache.
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 13/39] btrfs: relocation: Refactor the finishing part of upper linkage into finish_upper_links()
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (11 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 12/39] btrfs: relocation: Remove the open-coded goto loop for breadth-first search Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 14/39] btrfs: relocation: Refactor the useless nodes handling into its own function Qu Wenruo
                   ` (28 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Josef Bacik

After handle_one_tree_block(), all newly added (not cached) edges and
nodes have the following features:

- Only backref_edge::list[LOWER] is linked.
  This means we can only iterate from bottom to top, not the other
  direction.

- Newly added nodes are not added to the cache rb_tree yet

So to finish the backref cache, we still need to finish those links and
add all the nodes into the backref cache rb_tree.

This patch will refactor the existing code into finish_upper_links(),
add more comments for each branch, and explain why we need to do all
this work.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/relocation.c | 186 ++++++++++++++++++++++++++----------------
 1 file changed, 117 insertions(+), 69 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 462b6df54b11..37885d0a1d6c 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -1080,6 +1080,118 @@ static int handle_one_tree_block(struct backref_cache *cache,
 	return ret;
 }
 
+/*
+ * In handle_one_tree_block(), we have only linked the lower node to the edge,
+ * but the upper node hasn't been linked to the edge.
+ * This means we can only iterate through backref_node::upper to reach parent
+ * edges, but not through backref_node::lower to reach children edges.
+ *
+ * This function will finish linking backref_node::lower to the related edges,
+ * so that the backref cache can be bi-directionally iterated.
+ *
+ * Also, this will add the nodes to backref cache for next run.
+ */
+static int finish_upper_links(struct backref_cache *cache,
+			      struct backref_node *start)
+{
+	struct list_head *useless_node = &cache->useless_node;
+	struct backref_edge *edge;
+	struct rb_node *rb_node;
+	LIST_HEAD(pending_edge);
+
+	ASSERT(start->checked);
+
+	/* Insert this node to cache if it's not cowonly */
+	if (!start->cowonly) {
+		rb_node = tree_insert(&cache->rb_root, start->bytenr,
+				      &start->rb_node);
+		if (rb_node)
+			backref_tree_panic(rb_node, -EEXIST, start->bytenr);
+		list_add_tail(&start->lower, &cache->leaves);
+	}
+
+	/*
+	 * Use breadth first search to iterate all related edges.
+	 *
+	 * The start point is all the edges of this node
+	 */
+	list_for_each_entry(edge, &start->upper, list[LOWER])
+		list_add_tail(&edge->list[UPPER], &pending_edge);
+
+	while (!list_empty(&pending_edge)) {
+		struct backref_node *upper;
+		struct backref_node *lower;
+		struct rb_node *rb_node;
+
+		edge = list_first_entry(&pending_edge, struct backref_edge,
+				  list[UPPER]);
+		list_del_init(&edge->list[UPPER]);
+		upper = edge->node[UPPER];
+		lower = edge->node[LOWER];
+
+		/* Parent is detached, no need to keep any edges */
+		if (upper->detached) {
+			list_del(&edge->list[LOWER]);
+			free_backref_edge(cache, edge);
+
+			/* Lower node is orphan, queue for cleanup */
+			if (list_empty(&lower->upper))
+				list_add(&lower->list, useless_node);
+			continue;
+		}
+
+		/*
+		 * All new nodes added in current build_backref_tree() haven't
+		 * been linked to the cache rb tree.
+		 * So if we have upper->rb_node populated, this means a cache
+		 * hit. We only need to link the edge, as @upper and all its
+		 * parent have already been linked.
+		 */
+		if (!RB_EMPTY_NODE(&upper->rb_node)) {
+			if (upper->lowest) {
+				list_del_init(&upper->lower);
+				upper->lowest = 0;
+			}
+
+			list_add_tail(&edge->list[UPPER], &upper->lower);
+			continue;
+		}
+
+		/* Sanity check, we shouldn't have any unchecked nodes */
+		if (!upper->checked) {
+			ASSERT(0);
+			return -EUCLEAN;
+		}
+
+		/* Sanity check, cowonly node has non-cowonly parent */
+		if (start->cowonly != upper->cowonly) {
+			ASSERT(0);
+			return -EUCLEAN;
+		}
+
+		/* Only cache non-cowonly (subvolume trees) tree blocks */
+		if (!upper->cowonly) {
+			rb_node = tree_insert(&cache->rb_root, upper->bytenr,
+					      &upper->rb_node);
+			if (rb_node) {
+				backref_tree_panic(rb_node, -EEXIST,
+						   upper->bytenr);
+				return -EUCLEAN;
+			}
+		}
+
+		list_add_tail(&edge->list[UPPER], &upper->lower);
+
+		/*
+		 * Also queue all the parent edges of this uncached node
+		 * to finish the upper linkage
+		 */
+		list_for_each_entry(edge, &upper->upper, list[LOWER])
+			list_add_tail(&edge->list[UPPER], &pending_edge);
+	}
+	return 0;
+}
+
 /*
  * build backref tree for a given tree block. root of the backref tree
  * corresponds to the tree block, leaves of the backref tree correspond
@@ -1107,8 +1219,6 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	struct backref_node *lower;
 	struct backref_node *node = NULL;
 	struct backref_edge *edge;
-	struct rb_node *rb_node;
-	int cowonly;
 	int ret;
 	int err = 0;
 
@@ -1149,75 +1259,13 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		}
 	} while (edge);
 
-	/*
-	 * everything goes well, connect backref nodes and insert backref nodes
-	 * into the cache.
-	 */
-	ASSERT(node->checked);
-	cowonly = node->cowonly;
-	if (!cowonly) {
-		rb_node = tree_insert(&cache->rb_root, node->bytenr,
-				      &node->rb_node);
-		if (rb_node)
-			backref_tree_panic(rb_node, -EEXIST, node->bytenr);
-		list_add_tail(&node->lower, &cache->leaves);
+	/* Finish the upper linkage of newly added edges/nodes */
+	ret = finish_upper_links(cache, node);
+	if (ret < 0) {
+		err = ret;
+		goto out;
 	}
 
-	list_for_each_entry(edge, &node->upper, list[LOWER])
-		list_add_tail(&edge->list[UPPER], &cache->pending_edge);
-
-	while (!list_empty(&cache->pending_edge)) {
-		edge = list_first_entry(&cache->pending_edge,
-				struct backref_edge, list[UPPER]);
-		list_del_init(&edge->list[UPPER]);
-		upper = edge->node[UPPER];
-		if (upper->detached) {
-			list_del(&edge->list[LOWER]);
-			lower = edge->node[LOWER];
-			free_backref_edge(cache, edge);
-			if (list_empty(&lower->upper))
-				list_add(&lower->list, &cache->useless_node);
-			continue;
-		}
-
-		if (!RB_EMPTY_NODE(&upper->rb_node)) {
-			if (upper->lowest) {
-				list_del_init(&upper->lower);
-				upper->lowest = 0;
-			}
-
-			list_add_tail(&edge->list[UPPER], &upper->lower);
-			continue;
-		}
-
-		if (!upper->checked) {
-			/*
-			 * Still want to blow up for developers since this is a
-			 * logic bug.
-			 */
-			ASSERT(0);
-			err = -EINVAL;
-			goto out;
-		}
-		if (cowonly != upper->cowonly) {
-			ASSERT(0);
-			err = -EINVAL;
-			goto out;
-		}
-
-		if (!cowonly) {
-			rb_node = tree_insert(&cache->rb_root, upper->bytenr,
-					      &upper->rb_node);
-			if (rb_node)
-				backref_tree_panic(rb_node, -EEXIST,
-						   upper->bytenr);
-		}
-
-		list_add_tail(&edge->list[UPPER], &upper->lower);
-
-		list_for_each_entry(edge, &upper->upper, list[LOWER])
-			list_add_tail(&edge->list[UPPER], &cache->pending_edge);
-	}
 	/*
 	 * process useless backref nodes. backref nodes for tree leaves
 	 * are deleted from the cache. backref nodes for upper level
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 14/39] btrfs: relocation: Refactor the useless nodes handling into its own function
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (12 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 13/39] btrfs: relocation: Refactor the finishing part of upper linkage into finish_upper_links() Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 15/39] btrfs: relocation: Add btrfs_ prefix for backref_node/edge/cache Qu Wenruo
                   ` (27 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Josef Bacik

This patch will also add some comments for the cleanup.

Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/relocation.c | 112 ++++++++++++++++++++++++++++--------------
 1 file changed, 75 insertions(+), 37 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 37885d0a1d6c..663b782f8de1 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -1192,6 +1192,79 @@ static int finish_upper_links(struct backref_cache *cache,
 	return 0;
 }
 
+/*
+ * For useless nodes, do two major clean ups:
+ * - Cleanup the children edges and nodes
+ *   If a child node becomes orphaned (no parent) during the cleanup, that
+ *   child node will also be cleaned up.
+ *
+ * - Freeing up leaves (level 0), keeping non-leaf nodes detached
+ *   Non-leaf nodes are kept in the cache, but marked as "detached"
+ *
+ * Return false if @node is not in the @useless_nodes list.
+ * Return true if @node is in the @useless_nodes list.
+ */
+static bool handle_useless_nodes(struct reloc_control *rc,
+				 struct backref_node *node)
+{
+	struct backref_cache *cache = &rc->backref_cache;
+	struct list_head *useless_node = &cache->useless_node;
+	bool ret = false;
+
+	while (!list_empty(useless_node)) {
+		struct backref_node *cur;
+
+		cur = list_first_entry(useless_node, struct backref_node,
+				 list);
+		list_del_init(&cur->list);
+
+		/* Only tree root nodes can be added to @useless_nodes */
+		ASSERT(list_empty(&cur->upper));
+
+		if (cur == node)
+			ret = true;
+
+		/* The node is the lowest node */
+		if (cur->lowest) {
+			list_del_init(&cur->lower);
+			cur->lowest = 0;
+		}
+
+		/* Cleanup the lower edges */
+		while (!list_empty(&cur->lower)) {
+			struct backref_edge *edge;
+			struct backref_node *lower;
+
+			edge = list_entry(cur->lower.next,
+					  struct backref_edge, list[UPPER]);
+			list_del(&edge->list[UPPER]);
+			list_del(&edge->list[LOWER]);
+			lower = edge->node[LOWER];
+			free_backref_edge(cache, edge);
+
+			/* Child node is also orphan, queue for cleanup */
+			if (list_empty(&lower->upper))
+				list_add(&lower->list, useless_node);
+		}
+		/* Mark this block processed for relocation */
+		mark_block_processed(rc, cur);
+
+		/*
+		 * Backref nodes for tree leaves are deleted from the cache.
+		 * Backref nodes for upper level tree blocks are left in the
+		 * cache to avoid unnecessary backref lookup.
+		 */
+		if (cur->level > 0) {
+			list_add(&cur->list, &cache->detached);
+			cur->detached = 1;
+		} else {
+			rb_erase(&cur->rb_node, &cache->rb_root);
+			free_backref_node(cache, cur);
+		}
+	}
+	return ret;
+}
+
 /*
  * build backref tree for a given tree block. root of the backref tree
  * corresponds the tree block, leaves of the backref tree correspond
@@ -1266,43 +1339,8 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		goto out;
 	}
 
-	/*
-	 * process useless backref nodes. backref nodes for tree leaves
-	 * are deleted from the cache. backref nodes for upper level
-	 * tree blocks are left in the cache to avoid unnecessary backref
-	 * lookup.
-	 */
-	while (!list_empty(&cache->useless_node)) {
-		upper = list_first_entry(&cache->useless_node,
-				   struct backref_node, list);
-		list_del_init(&upper->list);
-		ASSERT(list_empty(&upper->upper));
-		if (upper == node)
-			node = NULL;
-		if (upper->lowest) {
-			list_del_init(&upper->lower);
-			upper->lowest = 0;
-		}
-		while (!list_empty(&upper->lower)) {
-			edge = list_first_entry(&upper->lower,
-					  struct backref_edge, list[UPPER]);
-			list_del(&edge->list[UPPER]);
-			list_del(&edge->list[LOWER]);
-			lower = edge->node[LOWER];
-			free_backref_edge(cache, edge);
-
-			if (list_empty(&lower->upper))
-				list_add(&lower->list, &cache->useless_node);
-		}
-		mark_block_processed(rc, upper);
-		if (upper->level > 0) {
-			list_add(&upper->list, &cache->detached);
-			upper->detached = 1;
-		} else {
-			rb_erase(&upper->rb_node, &cache->rb_root);
-			free_backref_node(cache, upper);
-		}
-	}
+	if (handle_useless_nodes(rc, node))
+		node = NULL;
 out:
 	btrfs_backref_iter_free(iter);
 	btrfs_free_path(path);
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 15/39] btrfs: relocation: Add btrfs_ prefix for backref_node/edge/cache
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (13 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 14/39] btrfs: relocation: Refactor the useless nodes handling into its own function Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 16/39] btrfs: Move btrfs_backref_(node|edge|cache) structures to backref.h Qu Wenruo
                   ` (26 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs

Those three structures are the main elements of the backref cache. Add
the "btrfs_" prefix for later export.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/relocation.c | 282 +++++++++++++++++++++---------------------
 1 file changed, 143 insertions(+), 139 deletions(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 663b782f8de1..94a000ea2759 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -73,7 +73,7 @@
  */
 
 /*
- * backref_node, mapping_node and tree_block start with this
+ * btrfs_backref_node, mapping_node and tree_block start with this
  */
 struct tree_entry {
 	struct rb_node rb_node;
@@ -83,7 +83,7 @@ struct tree_entry {
 /*
  * present a tree block in the backref cache
  */
-struct backref_node {
+struct btrfs_backref_node {
 	struct rb_node rb_node;
 	u64 bytenr;
 
@@ -138,10 +138,11 @@ struct backref_node {
 /*
  * present an edge connecting upper and lower backref nodes.
  */
-struct backref_edge {
+struct btrfs_backref_edge {
 	/*
-	 * list[LOWER] is linked to backref_node::upper of lower level node,
-	 * and list[UPPER] is linked to backref_node::lower of upper level node.
+	 * list[LOWER] is linked to btrfs_backref_node::upper of lower level
+	 * node, and list[UPPER] is linked to btrfs_backref_node::lower of
+	 * upper level node.
 	 *
 	 * Also, build_backref_tree() uses list[UPPER] for pending edges, before
 	 * linking list[UPPER] to its upper level nodes.
@@ -149,15 +150,15 @@ struct backref_edge {
 	struct list_head list[2];
 
 	/* Two related nodes */
-	struct backref_node *node[2];
+	struct btrfs_backref_node *node[2];
 };
 
 
-struct backref_cache {
+struct btrfs_backref_cache {
 	/* red black tree of all backref nodes in the cache */
 	struct rb_root rb_root;
 	/* for passing backref nodes to btrfs_reloc_cow_block */
-	struct backref_node *path[BTRFS_MAX_LEVEL];
+	struct btrfs_backref_node *path[BTRFS_MAX_LEVEL];
 	/*
 	 * list of blocks that have been cowed but some block
 	 * pointers in upper level blocks may not reflect the
@@ -237,7 +238,7 @@ struct reloc_control {
 
 	struct btrfs_block_rsv *block_rsv;
 
-	struct backref_cache backref_cache;
+	struct btrfs_backref_cache backref_cache;
 
 	struct file_extent_cluster cluster;
 	/* tree blocks have been processed */
@@ -268,11 +269,11 @@ struct reloc_control {
 #define MOVE_DATA_EXTENTS	0
 #define UPDATE_DATA_PTRS	1
 
-static void remove_backref_node(struct backref_cache *cache,
-				struct backref_node *node);
+static void remove_backref_node(struct btrfs_backref_cache *cache,
+				struct btrfs_backref_node *node);
 
 static void mark_block_processed(struct reloc_control *rc,
-				 struct backref_node *node)
+				 struct btrfs_backref_node *node)
 {
 	u32 blocksize;
 
@@ -294,7 +295,7 @@ static void mapping_tree_init(struct mapping_tree *tree)
 }
 
 static void backref_cache_init(struct btrfs_fs_info *fs_info,
-			       struct backref_cache *cache, int is_reloc)
+			       struct btrfs_backref_cache *cache, int is_reloc)
 {
 	int i;
 	cache->rb_root = RB_ROOT;
@@ -309,20 +310,20 @@ static void backref_cache_init(struct btrfs_fs_info *fs_info,
 	cache->is_reloc = is_reloc;
 }
 
-static void backref_cache_cleanup(struct backref_cache *cache)
+static void backref_cache_cleanup(struct btrfs_backref_cache *cache)
 {
-	struct backref_node *node;
+	struct btrfs_backref_node *node;
 	int i;
 
 	while (!list_empty(&cache->detached)) {
 		node = list_entry(cache->detached.next,
-				  struct backref_node, list);
+				  struct btrfs_backref_node, list);
 		remove_backref_node(cache, node);
 	}
 
 	while (!list_empty(&cache->leaves)) {
 		node = list_entry(cache->leaves.next,
-				  struct backref_node, lower);
+				  struct btrfs_backref_node, lower);
 		remove_backref_node(cache, node);
 	}
 
@@ -339,10 +340,10 @@ static void backref_cache_cleanup(struct backref_cache *cache)
 	ASSERT(!cache->nr_edges);
 }
 
-static struct backref_node *alloc_backref_node(struct backref_cache *cache,
-						u64 bytenr, int level)
+static struct btrfs_backref_node *alloc_backref_node(
+		struct btrfs_backref_cache *cache, u64 bytenr, int level)
 {
-	struct backref_node *node;
+	struct btrfs_backref_node *node;
 
 	ASSERT(level >= 0 && level < BTRFS_MAX_LEVEL);
 	node = kzalloc(sizeof(*node), GFP_NOFS);
@@ -359,8 +360,8 @@ static struct backref_node *alloc_backref_node(struct backref_cache *cache,
 	return node;
 }
 
-static void free_backref_node(struct backref_cache *cache,
-			      struct backref_node *node)
+static void free_backref_node(struct btrfs_backref_cache *cache,
+			      struct btrfs_backref_node *node)
 {
 	if (node) {
 		cache->nr_nodes--;
@@ -369,9 +370,10 @@ static void free_backref_node(struct backref_cache *cache,
 	}
 }
 
-static struct backref_edge *alloc_backref_edge(struct backref_cache *cache)
+static struct btrfs_backref_edge *alloc_backref_edge(
+		struct btrfs_backref_cache *cache)
 {
-	struct backref_edge *edge;
+	struct btrfs_backref_edge *edge;
 
 	edge = kzalloc(sizeof(*edge), GFP_NOFS);
 	if (edge)
@@ -381,9 +383,9 @@ static struct backref_edge *alloc_backref_edge(struct backref_cache *cache)
 
 #define		LINK_LOWER	(1 << 0)
 #define		LINK_UPPER	(1 << 1)
-static void link_backref_edge(struct backref_edge *edge,
-			      struct backref_node *lower,
-			      struct backref_node *upper,
+static void link_backref_edge(struct btrfs_backref_edge *edge,
+			      struct btrfs_backref_node *lower,
+			      struct btrfs_backref_node *upper,
 			      int link_which)
 {
 	ASSERT(upper && lower && upper->level == lower->level + 1);
@@ -395,8 +397,8 @@ static void link_backref_edge(struct backref_edge *edge,
 		list_add_tail(&edge->list[UPPER], &upper->lower);
 }
 
-static void free_backref_edge(struct backref_cache *cache,
-			      struct backref_edge *edge)
+static void free_backref_edge(struct btrfs_backref_cache *cache,
+			      struct btrfs_backref_edge *edge)
 {
 	if (edge) {
 		cache->nr_edges--;
@@ -450,8 +452,8 @@ static void backref_tree_panic(struct rb_node *rb_node, int errno, u64 bytenr)
 {
 
 	struct btrfs_fs_info *fs_info = NULL;
-	struct backref_node *bnode = rb_entry(rb_node, struct backref_node,
-					      rb_node);
+	struct btrfs_backref_node *bnode = rb_entry(rb_node,
+			struct btrfs_backref_node, rb_node);
 	if (bnode->root)
 		fs_info = bnode->root->fs_info;
 	btrfs_panic(fs_info, errno,
@@ -462,16 +464,16 @@ static void backref_tree_panic(struct rb_node *rb_node, int errno, u64 bytenr)
 /*
  * walk up backref nodes until reach node presents tree root
  */
-static struct backref_node *walk_up_backref(struct backref_node *node,
-					    struct backref_edge *edges[],
-					    int *index)
+static struct btrfs_backref_node *walk_up_backref(
+		struct btrfs_backref_node *node,
+		struct btrfs_backref_edge *edges[], int *index)
 {
-	struct backref_edge *edge;
+	struct btrfs_backref_edge *edge;
 	int idx = *index;
 
 	while (!list_empty(&node->upper)) {
 		edge = list_entry(node->upper.next,
-				  struct backref_edge, list[LOWER]);
+				  struct btrfs_backref_edge, list[LOWER]);
 		edges[idx++] = edge;
 		node = edge->node[UPPER];
 	}
@@ -483,11 +485,11 @@ static struct backref_node *walk_up_backref(struct backref_node *node,
 /*
  * walk down backref nodes to find start of next reference path
  */
-static struct backref_node *walk_down_backref(struct backref_edge *edges[],
-					      int *index)
+static struct btrfs_backref_node *walk_down_backref(
+		struct btrfs_backref_edge *edges[], int *index)
 {
-	struct backref_edge *edge;
-	struct backref_node *lower;
+	struct btrfs_backref_edge *edge;
+	struct btrfs_backref_node *lower;
 	int idx = *index;
 
 	while (idx > 0) {
@@ -498,7 +500,7 @@ static struct backref_node *walk_down_backref(struct backref_edge *edges[],
 			continue;
 		}
 		edge = list_entry(edge->list[LOWER].next,
-				  struct backref_edge, list[LOWER]);
+				  struct btrfs_backref_edge, list[LOWER]);
 		edges[idx - 1] = edge;
 		*index = idx;
 		return edge->node[UPPER];
@@ -507,7 +509,7 @@ static struct backref_node *walk_down_backref(struct backref_edge *edges[],
 	return NULL;
 }
 
-static void unlock_node_buffer(struct backref_node *node)
+static void unlock_node_buffer(struct btrfs_backref_node *node)
 {
 	if (node->locked) {
 		btrfs_tree_unlock(node->eb);
@@ -515,7 +517,7 @@ static void unlock_node_buffer(struct backref_node *node)
 	}
 }
 
-static void drop_node_buffer(struct backref_node *node)
+static void drop_node_buffer(struct btrfs_backref_node *node)
 {
 	if (node->eb) {
 		unlock_node_buffer(node);
@@ -524,8 +526,8 @@ static void drop_node_buffer(struct backref_node *node)
 	}
 }
 
-static void drop_backref_node(struct backref_cache *tree,
-			      struct backref_node *node)
+static void drop_backref_node(struct btrfs_backref_cache *tree,
+			      struct btrfs_backref_node *node)
 {
 	BUG_ON(!list_empty(&node->upper));
 
@@ -540,18 +542,18 @@ static void drop_backref_node(struct backref_cache *tree,
 /*
  * remove a backref node from the backref cache
  */
-static void remove_backref_node(struct backref_cache *cache,
-				struct backref_node *node)
+static void remove_backref_node(struct btrfs_backref_cache *cache,
+				struct btrfs_backref_node *node)
 {
-	struct backref_node *upper;
-	struct backref_edge *edge;
+	struct btrfs_backref_node *upper;
+	struct btrfs_backref_edge *edge;
 
 	if (!node)
 		return;
 
 	BUG_ON(!node->lowest && !node->detached);
 	while (!list_empty(&node->upper)) {
-		edge = list_entry(node->upper.next, struct backref_edge,
+		edge = list_entry(node->upper.next, struct btrfs_backref_edge,
 				  list[LOWER]);
 		upper = edge->node[UPPER];
 		list_del(&edge->list[LOWER]);
@@ -578,8 +580,8 @@ static void remove_backref_node(struct backref_cache *cache,
 	drop_backref_node(cache, node);
 }
 
-static void update_backref_node(struct backref_cache *cache,
-				struct backref_node *node, u64 bytenr)
+static void update_backref_node(struct btrfs_backref_cache *cache,
+				struct btrfs_backref_node *node, u64 bytenr)
 {
 	struct rb_node *rb_node;
 	rb_erase(&node->rb_node, &cache->rb_root);
@@ -593,9 +595,9 @@ static void update_backref_node(struct backref_cache *cache,
  * update backref cache after a transaction commit
  */
 static int update_backref_cache(struct btrfs_trans_handle *trans,
-				struct backref_cache *cache)
+				struct btrfs_backref_cache *cache)
 {
-	struct backref_node *node;
+	struct btrfs_backref_node *node;
 	int level = 0;
 
 	if (cache->last_trans == 0) {
@@ -613,13 +615,13 @@ static int update_backref_cache(struct btrfs_trans_handle *trans,
 	 */
 	while (!list_empty(&cache->detached)) {
 		node = list_entry(cache->detached.next,
-				  struct backref_node, list);
+				  struct btrfs_backref_node, list);
 		remove_backref_node(cache, node);
 	}
 
 	while (!list_empty(&cache->changed)) {
 		node = list_entry(cache->changed.next,
-				  struct backref_node, list);
+				  struct btrfs_backref_node, list);
 		list_del_init(&node->list);
 		BUG_ON(node->pending);
 		update_backref_node(cache, node, node->new_bytenr);
@@ -742,12 +744,12 @@ static struct btrfs_root *read_fs_root(struct btrfs_fs_info *fs_info,
  *		type is btrfs_inline_ref_type, offset is
  *		btrfs_inline_ref_offset.
  */
-static int handle_direct_tree_backref(struct backref_cache *cache,
+static int handle_direct_tree_backref(struct btrfs_backref_cache *cache,
 				      struct btrfs_key *ref_key,
-				      struct backref_node *cur)
+				      struct btrfs_backref_node *cur)
 {
-	struct backref_edge *edge;
-	struct backref_node *upper;
+	struct btrfs_backref_edge *edge;
+	struct btrfs_backref_node *upper;
 	struct rb_node *rb_node;
 
 	ASSERT(ref_key->type == BTRFS_SHARED_BLOCK_REF_KEY);
@@ -794,7 +796,7 @@ static int handle_direct_tree_backref(struct backref_cache *cache,
 		list_add_tail(&edge->list[UPPER], &cache->pending_edge);
 	} else {
 		/* Parent node already cached */
-		upper = rb_entry(rb_node, struct backref_node,
+		upper = rb_entry(rb_node, struct btrfs_backref_node,
 				 rb_node);
 		ASSERT(upper->checked);
 		INIT_LIST_HEAD(&edge->list[UPPER]);
@@ -815,16 +817,16 @@ static int handle_direct_tree_backref(struct backref_cache *cache,
  * @path:	A clean (released) path, to avoid allocating a path every time
  *		the function gets called.
  */
-static int handle_indirect_tree_backref(struct backref_cache *cache,
+static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 					struct btrfs_path *path,
 					struct btrfs_key *ref_key,
 					struct btrfs_key *tree_key,
-					struct backref_node *cur)
+					struct btrfs_backref_node *cur)
 {
 	struct btrfs_fs_info *fs_info = cache->fs_info;
-	struct backref_node *upper;
-	struct backref_node *lower;
-	struct backref_edge *edge;
+	struct btrfs_backref_node *upper;
+	struct btrfs_backref_node *lower;
+	struct btrfs_backref_edge *edge;
 	struct extent_buffer *eb;
 	struct btrfs_root *root;
 	struct rb_node *rb_node;
@@ -937,7 +939,8 @@ static int handle_indirect_tree_backref(struct backref_cache *cache,
 				INIT_LIST_HEAD(&edge->list[UPPER]);
 			}
 		} else {
-			upper = rb_entry(rb_node, struct backref_node, rb_node);
+			upper = rb_entry(rb_node, struct btrfs_backref_node,
+					 rb_node);
 			ASSERT(upper->checked);
 			INIT_LIST_HEAD(&edge->list[UPPER]);
 			if (!upper->owner)
@@ -957,15 +960,15 @@ static int handle_indirect_tree_backref(struct backref_cache *cache,
 	return ret;
 }
 
-static int handle_one_tree_block(struct backref_cache *cache,
+static int handle_one_tree_block(struct btrfs_backref_cache *cache,
 				 struct btrfs_path *path,
 				 struct btrfs_backref_iter *iter,
 				 struct btrfs_key *node_key,
-				 struct backref_node *cur)
+				 struct btrfs_backref_node *cur)
 {
 	struct btrfs_fs_info *fs_info = cache->fs_info;
-	struct backref_edge *edge;
-	struct backref_node *exist;
+	struct btrfs_backref_edge *edge;
+	struct btrfs_backref_node *exist;
 	int ret;
 
 	ret = btrfs_backref_iter_start(iter, cur->bytenr);
@@ -992,7 +995,7 @@ static int handle_one_tree_block(struct backref_cache *cache,
 		 * backref of type BTRFS_TREE_BLOCK_REF_KEY
 		 */
 		ASSERT(list_is_singular(&cur->upper));
-		edge = list_entry(cur->upper.next, struct backref_edge,
+		edge = list_entry(cur->upper.next, struct btrfs_backref_edge,
 				  list[LOWER]);
 		ASSERT(list_empty(&edge->list[UPPER]));
 		exist = edge->node[UPPER];
@@ -1083,19 +1086,20 @@ static int handle_one_tree_block(struct backref_cache *cache,
 /*
  * In handle_one_tree_backref(), we have only linked the lower node to the edge,
  * but the upper node hasn't been linked to the edge.
- * This means we can only iterate through backref_node::upper to reach parent
- * edges, but not through backref_node::lower to reach children edges.
+ * This means we can only iterate through btrfs_backref_node::upper to reach
+ * parent edges, but not through btrfs_backref_node::lower to reach children
+ * edges.
  *
- * This function will finish linking backref_node::lower to the related edges,
- * so that the backref cache can be bi-directionally iterated.
+ * This function will finish linking btrfs_backref_node::lower to the related
+ * edges, so that the backref cache can be bi-directionally iterated.
  *
  * Also, this will add the nodes to backref cache for next run.
  */
-static int finish_upper_links(struct backref_cache *cache,
-			      struct backref_node *start)
+static int finish_upper_links(struct btrfs_backref_cache *cache,
+			      struct btrfs_backref_node *start)
 {
 	struct list_head *useless_node = &cache->useless_node;
-	struct backref_edge *edge;
+	struct btrfs_backref_edge *edge;
 	struct rb_node *rb_node;
 	LIST_HEAD(pending_edge);
 
@@ -1119,12 +1123,12 @@ static int finish_upper_links(struct backref_cache *cache,
 		list_add_tail(&edge->list[UPPER], &pending_edge);
 
 	while (!list_empty(&pending_edge)) {
-		struct backref_node *upper;
-		struct backref_node *lower;
+		struct btrfs_backref_node *upper;
+		struct btrfs_backref_node *lower;
 		struct rb_node *rb_node;
 
-		edge = list_first_entry(&pending_edge, struct backref_edge,
-				  list[UPPER]);
+		edge = list_first_entry(&pending_edge,
+				struct btrfs_backref_edge, list[UPPER]);
 		list_del_init(&edge->list[UPPER]);
 		upper = edge->node[UPPER];
 		lower = edge->node[LOWER];
@@ -1205,16 +1209,16 @@ static int finish_upper_links(struct backref_cache *cache,
  * Return true if @node is in the @useless_nodes list.
  */
 static bool handle_useless_nodes(struct reloc_control *rc,
-				 struct backref_node *node)
+				 struct btrfs_backref_node *node)
 {
-	struct backref_cache *cache = &rc->backref_cache;
+	struct btrfs_backref_cache *cache = &rc->backref_cache;
 	struct list_head *useless_node = &cache->useless_node;
 	bool ret = false;
 
 	while (!list_empty(useless_node)) {
-		struct backref_node *cur;
+		struct btrfs_backref_node *cur;
 
-		cur = list_first_entry(useless_node, struct backref_node,
+		cur = list_first_entry(useless_node, struct btrfs_backref_node,
 				 list);
 		list_del_init(&cur->list);
 
@@ -1232,11 +1236,11 @@ static bool handle_useless_nodes(struct reloc_control *rc,
 
 		/* Cleanup the lower edges */
 		while (!list_empty(&cur->lower)) {
-			struct backref_edge *edge;
-			struct backref_node *lower;
+			struct btrfs_backref_edge *edge;
+			struct btrfs_backref_node *lower;
 
 			edge = list_entry(cur->lower.next,
-					  struct backref_edge, list[UPPER]);
+					struct btrfs_backref_edge, list[UPPER]);
 			list_del(&edge->list[UPPER]);
 			list_del(&edge->list[LOWER]);
 			lower = edge->node[LOWER];
@@ -1280,18 +1284,18 @@ static bool handle_useless_nodes(struct reloc_control *rc,
  * block are also cached.
  */
 static noinline_for_stack
-struct backref_node *build_backref_tree(struct reloc_control *rc,
-					struct btrfs_key *node_key,
-					int level, u64 bytenr)
+struct btrfs_backref_node *build_backref_tree(struct reloc_control *rc,
+					      struct btrfs_key *node_key,
+					      int level, u64 bytenr)
 {
 	struct btrfs_backref_iter *iter;
-	struct backref_cache *cache = &rc->backref_cache;
+	struct btrfs_backref_cache *cache = &rc->backref_cache;
 	struct btrfs_path *path; /* For searching parent of TREE_BLOCK_REF */
-	struct backref_node *cur;
-	struct backref_node *upper;
-	struct backref_node *lower;
-	struct backref_node *node = NULL;
-	struct backref_edge *edge;
+	struct btrfs_backref_node *cur;
+	struct btrfs_backref_node *upper;
+	struct btrfs_backref_node *lower;
+	struct btrfs_backref_node *node = NULL;
+	struct btrfs_backref_edge *edge;
 	int ret;
 	int err = 0;
 
@@ -1321,7 +1325,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			goto out;
 		}
 		edge = list_first_entry_or_null(&cache->pending_edge,
-				struct backref_edge, list[UPPER]);
+				struct btrfs_backref_edge, list[UPPER]);
 		/*
 		 * the pending list isn't empty, take the first block to
 		 * process
@@ -1347,12 +1351,12 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	if (err) {
 		while (!list_empty(&cache->useless_node)) {
 			lower = list_first_entry(&cache->useless_node,
-					   struct backref_node, list);
+					   struct btrfs_backref_node, list);
 			list_del_init(&lower->list);
 		}
 		while (!list_empty(&cache->pending_edge)) {
 			edge = list_first_entry(&cache->pending_edge,
-					struct backref_edge, list[UPPER]);
+					struct btrfs_backref_edge, list[UPPER]);
 			list_del(&edge->list[UPPER]);
 			list_del(&edge->list[LOWER]);
 			lower = edge->node[LOWER];
@@ -1380,7 +1384,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 
 		while (!list_empty(&cache->useless_node)) {
 			lower = list_first_entry(&cache->useless_node,
-					   struct backref_node, list);
+					   struct btrfs_backref_node, list);
 			list_del_init(&lower->list);
 			if (lower == node)
 				node = NULL;
@@ -1409,11 +1413,11 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 			      struct btrfs_root *dest)
 {
 	struct btrfs_root *reloc_root = src->reloc_root;
-	struct backref_cache *cache = &rc->backref_cache;
-	struct backref_node *node = NULL;
-	struct backref_node *new_node;
-	struct backref_edge *edge;
-	struct backref_edge *new_edge;
+	struct btrfs_backref_cache *cache = &rc->backref_cache;
+	struct btrfs_backref_node *node = NULL;
+	struct btrfs_backref_node *new_node;
+	struct btrfs_backref_edge *edge;
+	struct btrfs_backref_edge *new_edge;
 	struct rb_node *rb_node;
 
 	if (cache->last_trans > 0)
@@ -1421,7 +1425,7 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 
 	rb_node = tree_search(&cache->rb_root, src->commit_root->start);
 	if (rb_node) {
-		node = rb_entry(rb_node, struct backref_node, rb_node);
+		node = rb_entry(rb_node, struct btrfs_backref_node, rb_node);
 		if (node->detached)
 			node = NULL;
 		else
@@ -1432,7 +1436,7 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 		rb_node = tree_search(&cache->rb_root,
 				      reloc_root->commit_root->start);
 		if (rb_node) {
-			node = rb_entry(rb_node, struct backref_node,
+			node = rb_entry(rb_node, struct btrfs_backref_node,
 					rb_node);
 			BUG_ON(node->detached);
 		}
@@ -1478,7 +1482,7 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 fail:
 	while (!list_empty(&new_node->lower)) {
 		new_edge = list_entry(new_node->lower.next,
-				      struct backref_edge, list[UPPER]);
+				      struct btrfs_backref_edge, list[UPPER]);
 		list_del(&new_edge->list[UPPER]);
 		free_backref_edge(cache, new_edge);
 	}
@@ -2853,10 +2857,10 @@ static int record_reloc_root_in_trans(struct btrfs_trans_handle *trans,
 static noinline_for_stack
 struct btrfs_root *select_reloc_root(struct btrfs_trans_handle *trans,
 				     struct reloc_control *rc,
-				     struct backref_node *node,
-				     struct backref_edge *edges[])
+				     struct btrfs_backref_node *node,
+				     struct btrfs_backref_edge *edges[])
 {
-	struct backref_node *next;
+	struct btrfs_backref_node *next;
 	struct btrfs_root *root;
 	int index = 0;
 
@@ -2916,12 +2920,12 @@ struct btrfs_root *select_reloc_root(struct btrfs_trans_handle *trans,
  * counted. return -ENOENT if the block is root of reloc tree.
  */
 static noinline_for_stack
-struct btrfs_root *select_one_root(struct backref_node *node)
+struct btrfs_root *select_one_root(struct btrfs_backref_node *node)
 {
-	struct backref_node *next;
+	struct btrfs_backref_node *next;
 	struct btrfs_root *root;
 	struct btrfs_root *fs_root = NULL;
-	struct backref_edge *edges[BTRFS_MAX_LEVEL - 1];
+	struct btrfs_backref_edge *edges[BTRFS_MAX_LEVEL - 1];
 	int index = 0;
 
 	next = node;
@@ -2953,12 +2957,12 @@ struct btrfs_root *select_one_root(struct backref_node *node)
 
 static noinline_for_stack
 u64 calcu_metadata_size(struct reloc_control *rc,
-			struct backref_node *node, int reserve)
+			struct btrfs_backref_node *node, int reserve)
 {
 	struct btrfs_fs_info *fs_info = rc->extent_root->fs_info;
-	struct backref_node *next = node;
-	struct backref_edge *edge;
-	struct backref_edge *edges[BTRFS_MAX_LEVEL - 1];
+	struct btrfs_backref_node *next = node;
+	struct btrfs_backref_edge *edge;
+	struct btrfs_backref_edge *edges[BTRFS_MAX_LEVEL - 1];
 	u64 num_bytes = 0;
 	int index = 0;
 
@@ -2976,7 +2980,7 @@ u64 calcu_metadata_size(struct reloc_control *rc,
 				break;
 
 			edge = list_entry(next->upper.next,
-					  struct backref_edge, list[LOWER]);
+					struct btrfs_backref_edge, list[LOWER]);
 			edges[index++] = edge;
 			next = edge->node[UPPER];
 		}
@@ -2987,7 +2991,7 @@ u64 calcu_metadata_size(struct reloc_control *rc,
 
 static int reserve_metadata_space(struct btrfs_trans_handle *trans,
 				  struct reloc_control *rc,
-				  struct backref_node *node)
+				  struct btrfs_backref_node *node)
 {
 	struct btrfs_root *root = rc->extent_root;
 	struct btrfs_fs_info *fs_info = root->fs_info;
@@ -3035,14 +3039,14 @@ static int reserve_metadata_space(struct btrfs_trans_handle *trans,
  */
 static int do_relocation(struct btrfs_trans_handle *trans,
 			 struct reloc_control *rc,
-			 struct backref_node *node,
+			 struct btrfs_backref_node *node,
 			 struct btrfs_key *key,
 			 struct btrfs_path *path, int lowest)
 {
 	struct btrfs_fs_info *fs_info = rc->extent_root->fs_info;
-	struct backref_node *upper;
-	struct backref_edge *edge;
-	struct backref_edge *edges[BTRFS_MAX_LEVEL - 1];
+	struct btrfs_backref_node *upper;
+	struct btrfs_backref_edge *edge;
+	struct btrfs_backref_edge *edges[BTRFS_MAX_LEVEL - 1];
 	struct btrfs_root *root;
 	struct extent_buffer *eb;
 	u32 blocksize;
@@ -3198,7 +3202,7 @@ static int do_relocation(struct btrfs_trans_handle *trans,
 
 static int link_to_upper(struct btrfs_trans_handle *trans,
 			 struct reloc_control *rc,
-			 struct backref_node *node,
+			 struct btrfs_backref_node *node,
 			 struct btrfs_path *path)
 {
 	struct btrfs_key key;
@@ -3212,15 +3216,15 @@ static int finish_pending_nodes(struct btrfs_trans_handle *trans,
 				struct btrfs_path *path, int err)
 {
 	LIST_HEAD(list);
-	struct backref_cache *cache = &rc->backref_cache;
-	struct backref_node *node;
+	struct btrfs_backref_cache *cache = &rc->backref_cache;
+	struct btrfs_backref_node *node;
 	int level;
 	int ret;
 
 	for (level = 0; level < BTRFS_MAX_LEVEL; level++) {
 		while (!list_empty(&cache->pending[level])) {
 			node = list_entry(cache->pending[level].next,
-					  struct backref_node, list);
+					  struct btrfs_backref_node, list);
 			list_move_tail(&node->list, &list);
 			BUG_ON(!node->pending);
 
@@ -3240,11 +3244,11 @@ static int finish_pending_nodes(struct btrfs_trans_handle *trans,
  * as processed.
  */
 static void update_processed_blocks(struct reloc_control *rc,
-				    struct backref_node *node)
+				    struct btrfs_backref_node *node)
 {
-	struct backref_node *next = node;
-	struct backref_edge *edge;
-	struct backref_edge *edges[BTRFS_MAX_LEVEL - 1];
+	struct btrfs_backref_node *next = node;
+	struct btrfs_backref_edge *edge;
+	struct btrfs_backref_edge *edges[BTRFS_MAX_LEVEL - 1];
 	int index = 0;
 
 	while (next) {
@@ -3259,7 +3263,7 @@ static void update_processed_blocks(struct reloc_control *rc,
 				break;
 
 			edge = list_entry(next->upper.next,
-					  struct backref_edge, list[LOWER]);
+					struct btrfs_backref_edge, list[LOWER]);
 			edges[index++] = edge;
 			next = edge->node[UPPER];
 		}
@@ -3304,7 +3308,7 @@ static int get_tree_block_key(struct btrfs_fs_info *fs_info,
  */
 static int relocate_tree_block(struct btrfs_trans_handle *trans,
 				struct reloc_control *rc,
-				struct backref_node *node,
+				struct btrfs_backref_node *node,
 				struct btrfs_key *key,
 				struct btrfs_path *path)
 {
@@ -3366,7 +3370,7 @@ int relocate_tree_blocks(struct btrfs_trans_handle *trans,
 			 struct reloc_control *rc, struct rb_root *blocks)
 {
 	struct btrfs_fs_info *fs_info = rc->extent_root->fs_info;
-	struct backref_node *node;
+	struct btrfs_backref_node *node;
 	struct btrfs_path *path;
 	struct tree_block *block;
 	struct tree_block *next;
@@ -4785,7 +4789,7 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
 {
 	struct btrfs_fs_info *fs_info = root->fs_info;
 	struct reloc_control *rc;
-	struct backref_node *node;
+	struct btrfs_backref_node *node;
 	int first_cow = 0;
 	int level;
 	int ret = 0;
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 16/39] btrfs: Move btrfs_backref_(node|edge|cache) structures to backref.h
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (14 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 15/39] btrfs: relocation: Add btrfs_ prefix for backref_node/edge/cache Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 17/39] btrfs: Rename tree_entry to simple_node and export it Qu Wenruo
                   ` (25 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs

These 3 structures are the main part of the btrfs backref cache. Move them
to backref.h to build the basis for later reuse.
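
As a quick illustration of the bi-directional map these structures form,
here is a minimal userspace sketch (editor's illustration, not kernel
code: plain arrays stand in for the kernel's struct list_head, and the
names only mirror the real btrfs helpers). Every edge is reachable from
both of its endpoints, so a cached tree block can be walked either
towards its parents or towards its children:

  #include <stdio.h>

  #define LOWER 0
  #define UPPER 1
  #define MAX_EDGES 8

  struct edge;

  struct node {
          unsigned long long bytenr;
          int level;
          /* edges where this node is the LOWER end, i.e. links to parents */
          struct edge *upper[MAX_EDGES];
          int nr_upper;
          /* edges where this node is the UPPER end, i.e. links to children */
          struct edge *lower[MAX_EDGES];
          int nr_lower;
  };

  struct edge {
          struct node *node[2];   /* node[LOWER] and node[UPPER] */
  };

  static void link_edge(struct edge *edge, struct node *lower,
                        struct node *upper)
  {
          edge->node[LOWER] = lower;
          edge->node[UPPER] = upper;
          lower->upper[lower->nr_upper++] = edge;
          upper->lower[upper->nr_lower++] = edge;
  }

  int main(void)
  {
          /* a shared tree block: one child referred to by two parents */
          struct node leaf = { .bytenr = 4096, .level = 0 };
          struct node parent1 = { .bytenr = 8192, .level = 1 };
          struct node parent2 = { .bytenr = 12288, .level = 1 };
          struct edge e1, e2;
          int i;

          link_edge(&e1, &leaf, &parent1);
          link_edge(&e2, &leaf, &parent2);

          /* walk upwards: every parent of @leaf is one edge away */
          for (i = 0; i < leaf.nr_upper; i++)
                  printf("parent bytenr=%llu\n",
                         leaf.upper[i]->node[UPPER]->bytenr);
          return 0;
  }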

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.h    | 119 ++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/relocation.c | 113 ---------------------------------------
 2 files changed, 119 insertions(+), 113 deletions(-)

diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 3226dea35e2c..76858ec099d9 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -151,4 +151,123 @@ btrfs_backref_iter_release(struct btrfs_backref_iter *iter)
 	memset(&iter->cur_key, 0, sizeof(iter->cur_key));
 }
 
+/*
+ * Backref cache related structures.
+ *
+ * The whole objective of the backref cache is to build a bi-directional map
+ * of tree blocks (represented by btrfs_backref_node) and all their parents.
+ */
+
+/*
+ * present a tree block in the backref cache
+ */
+struct btrfs_backref_node {
+	struct rb_node rb_node;
+	u64 bytenr;
+
+	u64 new_bytenr;
+	/* objectid of tree block owner, can be not uptodate */
+	u64 owner;
+	/* link to pending, changed or detached list */
+	struct list_head list;
+
+	/* List of upper level edges, which links this node to its parent(s) */
+	struct list_head upper;
+	/* List of lower level edges, which links this node to its child(ren) */
+	struct list_head lower;
+
+	/* NULL if this node is not tree root */
+	struct btrfs_root *root;
+	/* extent buffer got by COW the block */
+	struct extent_buffer *eb;
+	/* level of tree block */
+	unsigned int level:8;
+	/* is the block in non-reference counted tree */
+	unsigned int cowonly:1;
+	/* 1 if no child node in the cache */
+	unsigned int lowest:1;
+	/* is the extent buffer locked */
+	unsigned int locked:1;
+	/* has the block been processed */
+	unsigned int processed:1;
+	/* have backrefs of this block been checked */
+	unsigned int checked:1;
+	/*
+	 * 1 if corresponding block has been cowed but some upper
+	 * level block pointers may not point to the new location
+	 */
+	unsigned int pending:1;
+	/*
+	 * 1 if the backref node isn't connected to any other
+	 * backref node.
+	 */
+	unsigned int detached:1;
+
+	/*
+	 * For the generic purpose backref cache, we only care whether it's a
+	 * reloc root, not the source subvolid.
+	 */
+	unsigned int is_reloc_root:1;
+};
+
+#define LOWER	0
+#define UPPER	1
+/*
+ * present an edge connecting upper and lower backref nodes.
+ */
+struct btrfs_backref_edge {
+	/*
+	 * list[LOWER] is linked to btrfs_backref_node::upper of lower level
+	 * node, and list[UPPER] is linked to btrfs_backref_node::lower of
+	 * upper level node.
+	 *
+	 * Also, build_backref_tree() uses list[UPPER] for pending edges, before
+	 * linking list[UPPER] to its upper level nodes.
+	 */
+	struct list_head list[2];
+
+	/* Two related nodes */
+	struct btrfs_backref_node *node[2];
+};
+
+struct btrfs_backref_cache {
+	/* red black tree of all backref nodes in the cache */
+	struct rb_root rb_root;
+	/* for passing backref nodes to btrfs_reloc_cow_block */
+	struct btrfs_backref_node *path[BTRFS_MAX_LEVEL];
+	/*
+	 * list of blocks that have been cowed but some block
+	 * pointers in upper level blocks may not reflect the
+	 * new location
+	 */
+	struct list_head pending[BTRFS_MAX_LEVEL];
+	/* list of backref nodes with no child node */
+	struct list_head leaves;
+	/* list of blocks that have been cowed in current transaction */
+	struct list_head changed;
+	/* list of detached backref node. */
+	struct list_head detached;
+
+	u64 last_trans;
+
+	int nr_nodes;
+	int nr_edges;
+
+	/* The list of unchecked backref edges during backref cache build */
+	struct list_head pending_edge;
+
+	/* The list of useless backref nodes during backref cache build */
+	struct list_head useless_node;
+
+	struct btrfs_fs_info *fs_info;
+
+	/*
+	 * Whether this cache is for relocation
+	 *
+	 * The relocation backref cache requires more info for reloc roots
+	 * compared to the generic backref cache.
+	 */
+	unsigned int is_reloc;
+};
+
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 94a000ea2759..ec7d28d63347 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -80,120 +80,7 @@ struct tree_entry {
 	u64 bytenr;
 };
 
-/*
- * present a tree block in the backref cache
- */
-struct btrfs_backref_node {
-	struct rb_node rb_node;
-	u64 bytenr;
-
-	u64 new_bytenr;
-	/* objectid of tree block owner, can be not uptodate */
-	u64 owner;
-	/* link to pending, changed or detached list */
-	struct list_head list;
-
-	/* List of upper level edges, which links this node to its parent(s) */
-	struct list_head upper;
-	/* List of lower level edges, which links this node to its child(ren) */
-	struct list_head lower;
-
-	/* NULL if this node is not tree root */
-	struct btrfs_root *root;
-	/* extent buffer got by COW the block */
-	struct extent_buffer *eb;
-	/* level of tree block */
-	unsigned int level:8;
-	/* is the block in non-reference counted tree */
-	unsigned int cowonly:1;
-	/* 1 if no child node in the cache */
-	unsigned int lowest:1;
-	/* is the extent buffer locked */
-	unsigned int locked:1;
-	/* has the block been processed */
-	unsigned int processed:1;
-	/* have backrefs of this block been checked */
-	unsigned int checked:1;
-	/*
-	 * 1 if corresponding block has been cowed but some upper
-	 * level block pointers may not point to the new location
-	 */
-	unsigned int pending:1;
-	/*
-	 * 1 if the backref node isn't connected to any other
-	 * backref node.
-	 */
-	unsigned int detached:1;
-
-	/*
-	 * For generic purpose backref cache, where we only care if it's a reloc
-	 * root, doesn't care the source subvolid.
-	 */
-	unsigned int is_reloc_root:1;
-};
-
-#define LOWER	0
-#define UPPER	1
 #define RELOCATION_RESERVED_NODES	256
-/*
- * present an edge connecting upper and lower backref nodes.
- */
-struct btrfs_backref_edge {
-	/*
-	 * list[LOWER] is linked to btrfs_backref_node::upper of lower level
-	 * node, and list[UPPER] is linked to btrfs_backref_node::lower of
-	 * upper level node.
-	 *
-	 * Also, build_backref_tree() uses list[UPPER] for pending edges, before
-	 * linking list[UPPER] to its upper level nodes.
-	 */
-	struct list_head list[2];
-
-	/* Two related nodes */
-	struct btrfs_backref_node *node[2];
-};
-
-
-struct btrfs_backref_cache {
-	/* red black tree of all backref nodes in the cache */
-	struct rb_root rb_root;
-	/* for passing backref nodes to btrfs_reloc_cow_block */
-	struct btrfs_backref_node *path[BTRFS_MAX_LEVEL];
-	/*
-	 * list of blocks that have been cowed but some block
-	 * pointers in upper level blocks may not reflect the
-	 * new location
-	 */
-	struct list_head pending[BTRFS_MAX_LEVEL];
-	/* list of backref nodes with no child node */
-	struct list_head leaves;
-	/* list of blocks that have been cowed in current transaction */
-	struct list_head changed;
-	/* list of detached backref node. */
-	struct list_head detached;
-
-	u64 last_trans;
-
-	int nr_nodes;
-	int nr_edges;
-
-	/* The list of unchecked backref edges during backref cache build */
-	struct list_head pending_edge;
-
-	/* The list of useless backref nodes during backref cache build */
-	struct list_head useless_node;
-
-	struct btrfs_fs_info *fs_info;
-
-	/*
-	 * Whether this cache is for relocation
-	 *
-	 * Reloction backref cache require more info for reloc root compared
-	 * to generic backref cache.
-	 */
-	unsigned int is_reloc;
-};
-
 /*
  * map address of tree root to tree
  */
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 17/39] btrfs: Rename tree_entry to simple_node and export it
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (15 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 16/39] btrfs: Move btrfs_backref_(node|edge|cache) structures to backref.h Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-04-01 15:48   ` David Sterba
  2020-03-26  8:32 ` [PATCH v2 18/39] btrfs: Rename backref_cache_init() to btrfs_backref_cache_init() and move it to backref.c Qu Wenruo
                   ` (24 subsequent siblings)
  41 siblings, 1 reply; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs

Structure tree_entry provides a very simple rb_tree which only uses
bytenr as the search index.

That tree_entry is used in 3 structures: backref_node, mapping_node and
tree_block.

Since we're going to make backref_node independent from relocation, it's
a good time to extract tree_entry into simple_node, and export it
into misc.h.
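
As an editor's side note, the trick that makes this work is the C11
anonymous struct: every user embeds the same two members at offset 0,
so a pointer to the user structure can be treated as a pointer to
simple_node. A hedged sketch in plain C (a void pointer stands in for
struct rb_node; simple_node here only mirrors the kernel structure):

  #include <stdio.h>

  struct simple_node {
          void *rb_node;          /* stand-in for struct rb_node */
          unsigned long long bytenr;
  };

  /* A user embeds an identical anonymous struct at offset 0 ... */
  struct tree_block {
          struct {
                  void *rb_node;
                  unsigned long long bytenr;
          };                      /* same layout as simple_node */
          int level;
  };

  int main(void)
  {
          struct tree_block block;

          block.rb_node = NULL;
          block.bytenr = 13631488;        /* arbitrary example bytenr */
          block.level = 1;

          /* ... so generic code can treat it as a simple_node */
          struct simple_node *entry = (struct simple_node *)&block;

          printf("search index bytenr=%llu\n", entry->bytenr);
          return 0;
  }

In the kernel version below, simple_search()/simple_insert() walk the
embedded rb_node keyed by bytenr, and callers convert the returned
rb_node back to their own type with rb_entry().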

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.h    |   6 ++-
 fs/btrfs/misc.h       |  54 +++++++++++++++++++++
 fs/btrfs/relocation.c | 109 +++++++++++++-----------------------------
 3 files changed, 90 insertions(+), 79 deletions(-)

diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 76858ec099d9..f3eae9e9f84b 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -162,8 +162,10 @@ btrfs_backref_iter_release(struct btrfs_backref_iter *iter)
  * present a tree block in the backref cache
  */
 struct btrfs_backref_node {
-	struct rb_node rb_node;
-	u64 bytenr;
+	struct {
+		struct rb_node rb_node;
+		u64 bytenr;
+	}; /* Use simple_node for search/insert */
 
 	u64 new_bytenr;
 	/* objectid of tree block owner, can be not uptodate */
diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
index 72bab64ecf60..d199bfdb210e 100644
--- a/fs/btrfs/misc.h
+++ b/fs/btrfs/misc.h
@@ -6,6 +6,7 @@
 #include <linux/sched.h>
 #include <linux/wait.h>
 #include <asm/div64.h>
+#include <linux/rbtree.h>
 
 #define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
 
@@ -58,4 +59,57 @@ static inline bool has_single_bit_set(u64 n)
 	return is_power_of_two_u64(n);
 }
 
+/*
+ * Simple bytenr-based rb_tree related structures
+ *
+ * Any structure that wants to use bytenr as its single search index should
+ * have these members at the start of the structure.
+ */
+struct simple_node {
+	struct rb_node rb_node;
+	u64 bytenr;
+};
+
+static inline struct rb_node *simple_search(struct rb_root *root, u64 bytenr)
+{
+	struct rb_node *n = root->rb_node;
+	struct simple_node *entry;
+
+	while (n) {
+		entry = rb_entry(n, struct simple_node, rb_node);
+
+		if (bytenr < entry->bytenr)
+			n = n->rb_left;
+		else if (bytenr > entry->bytenr)
+			n = n->rb_right;
+		else
+			return n;
+	}
+	return NULL;
+}
+
+static inline struct rb_node *simple_insert(struct rb_root *root, u64 bytenr,
+					    struct rb_node *node)
+{
+	struct rb_node **p = &root->rb_node;
+	struct rb_node *parent = NULL;
+	struct simple_node *entry;
+
+	while (*p) {
+		parent = *p;
+		entry = rb_entry(parent, struct simple_node, rb_node);
+
+		if (bytenr < entry->bytenr)
+			p = &(*p)->rb_left;
+		else if (bytenr > entry->bytenr)
+			p = &(*p)->rb_right;
+		else
+			return parent;
+	}
+
+	rb_link_node(node, parent, p);
+	rb_insert_color(node, root);
+	return NULL;
+}
+
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index ec7d28d63347..d62537792ac0 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -24,6 +24,7 @@
 #include "delalloc-space.h"
 #include "block-group.h"
 #include "backref.h"
+#include "misc.h"
 
 /*
  * Relocation overview
@@ -72,21 +73,15 @@
  * The entry point of relocation is relocate_block_group() function.
  */
 
-/*
- * btrfs_backref_node, mapping_node and tree_block start with this
- */
-struct tree_entry {
-	struct rb_node rb_node;
-	u64 bytenr;
-};
-
 #define RELOCATION_RESERVED_NODES	256
 /*
  * map address of tree root to tree
  */
 struct mapping_node {
-	struct rb_node rb_node;
-	u64 bytenr;
+	struct {
+		struct rb_node rb_node;
+		u64 bytenr;
+	}; /* Use simple_node for search/insert */
 	void *data;
 };
 
@@ -99,8 +94,10 @@ struct mapping_tree {
  * present a tree block to process
  */
 struct tree_block {
-	struct rb_node rb_node;
-	u64 bytenr;
+	struct {
+		struct rb_node rb_node;
+		u64 bytenr;
+	}; /* Use simple_node for search/insert */
 	struct btrfs_key key;
 	unsigned int level:8;
 	unsigned int key_ready:1;
@@ -293,48 +290,6 @@ static void free_backref_edge(struct btrfs_backref_cache *cache,
 	}
 }
 
-static struct rb_node *tree_insert(struct rb_root *root, u64 bytenr,
-				   struct rb_node *node)
-{
-	struct rb_node **p = &root->rb_node;
-	struct rb_node *parent = NULL;
-	struct tree_entry *entry;
-
-	while (*p) {
-		parent = *p;
-		entry = rb_entry(parent, struct tree_entry, rb_node);
-
-		if (bytenr < entry->bytenr)
-			p = &(*p)->rb_left;
-		else if (bytenr > entry->bytenr)
-			p = &(*p)->rb_right;
-		else
-			return parent;
-	}
-
-	rb_link_node(node, parent, p);
-	rb_insert_color(node, root);
-	return NULL;
-}
-
-static struct rb_node *tree_search(struct rb_root *root, u64 bytenr)
-{
-	struct rb_node *n = root->rb_node;
-	struct tree_entry *entry;
-
-	while (n) {
-		entry = rb_entry(n, struct tree_entry, rb_node);
-
-		if (bytenr < entry->bytenr)
-			n = n->rb_left;
-		else if (bytenr > entry->bytenr)
-			n = n->rb_right;
-		else
-			return n;
-	}
-	return NULL;
-}
-
 static void backref_tree_panic(struct rb_node *rb_node, int errno, u64 bytenr)
 {
 
@@ -473,7 +428,7 @@ static void update_backref_node(struct btrfs_backref_cache *cache,
 	struct rb_node *rb_node;
 	rb_erase(&node->rb_node, &cache->rb_root);
 	node->bytenr = bytenr;
-	rb_node = tree_insert(&cache->rb_root, node->bytenr, &node->rb_node);
+	rb_node = simple_insert(&cache->rb_root, node->bytenr, &node->rb_node);
 	if (rb_node)
 		backref_tree_panic(rb_node, -EEXIST, bytenr);
 }
@@ -598,7 +553,7 @@ struct btrfs_root *find_reloc_root(struct btrfs_fs_info *fs_info, u64 bytenr)
 
 	ASSERT(rc);
 	spin_lock(&rc->reloc_root_tree.lock);
-	rb_node = tree_search(&rc->reloc_root_tree.rb_root, bytenr);
+	rb_node = simple_search(&rc->reloc_root_tree.rb_root, bytenr);
 	if (rb_node) {
 		node = rb_entry(rb_node, struct mapping_node, rb_node);
 		root = (struct btrfs_root *)node->data;
@@ -666,7 +621,7 @@ static int handle_direct_tree_backref(struct btrfs_backref_cache *cache,
 	if (!edge)
 		return -ENOMEM;
 
-	rb_node = tree_search(&cache->rb_root, ref_key->offset);
+	rb_node = simple_search(&cache->rb_root, ref_key->offset);
 	if (!rb_node) {
 		/* Parent node not yet cached */
 		upper = alloc_backref_node(cache, ref_key->offset,
@@ -788,7 +743,7 @@ static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 		}
 
 		eb = path->nodes[level];
-		rb_node = tree_search(&cache->rb_root, eb->start);
+		rb_node = simple_search(&cache->rb_root, eb->start);
 		if (!rb_node) {
 			upper = alloc_backref_node(cache, eb->start,
 						   lower->level + 1);
@@ -994,8 +949,8 @@ static int finish_upper_links(struct btrfs_backref_cache *cache,
 
 	/* Insert this node to cache if it's not cowonly */
 	if (!start->cowonly) {
-		rb_node = tree_insert(&cache->rb_root, start->bytenr,
-				      &start->rb_node);
+		rb_node = simple_insert(&cache->rb_root, start->bytenr,
+					&start->rb_node);
 		if (rb_node)
 			backref_tree_panic(rb_node, -EEXIST, start->bytenr);
 		list_add_tail(&start->lower, &cache->leaves);
@@ -1062,8 +1017,8 @@ static int finish_upper_links(struct btrfs_backref_cache *cache,
 
 		/* Only cache non-cowonly (subvolume trees) tree blocks */
 		if (!upper->cowonly) {
-			rb_node = tree_insert(&cache->rb_root, upper->bytenr,
-					      &upper->rb_node);
+			rb_node = simple_insert(&cache->rb_root, upper->bytenr,
+						&upper->rb_node);
 			if (rb_node) {
 				backref_tree_panic(rb_node, -EEXIST,
 						   upper->bytenr);
@@ -1310,7 +1265,7 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 	if (cache->last_trans > 0)
 		update_backref_cache(trans, cache);
 
-	rb_node = tree_search(&cache->rb_root, src->commit_root->start);
+	rb_node = simple_search(&cache->rb_root, src->commit_root->start);
 	if (rb_node) {
 		node = rb_entry(rb_node, struct btrfs_backref_node, rb_node);
 		if (node->detached)
@@ -1320,8 +1275,8 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 	}
 
 	if (!node) {
-		rb_node = tree_search(&cache->rb_root,
-				      reloc_root->commit_root->start);
+		rb_node = simple_search(&cache->rb_root,
+					reloc_root->commit_root->start);
 		if (rb_node) {
 			node = rb_entry(rb_node, struct btrfs_backref_node,
 					rb_node);
@@ -1354,8 +1309,8 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 		list_add_tail(&new_node->lower, &cache->leaves);
 	}
 
-	rb_node = tree_insert(&cache->rb_root, new_node->bytenr,
-			      &new_node->rb_node);
+	rb_node = simple_insert(&cache->rb_root, new_node->bytenr,
+				&new_node->rb_node);
 	if (rb_node)
 		backref_tree_panic(rb_node, -EEXIST, new_node->bytenr);
 
@@ -1395,8 +1350,8 @@ static int __must_check __add_reloc_root(struct btrfs_root *root)
 	node->data = root;
 
 	spin_lock(&rc->reloc_root_tree.lock);
-	rb_node = tree_insert(&rc->reloc_root_tree.rb_root,
-			      node->bytenr, &node->rb_node);
+	rb_node = simple_insert(&rc->reloc_root_tree.rb_root,
+				node->bytenr, &node->rb_node);
 	spin_unlock(&rc->reloc_root_tree.lock);
 	if (rb_node) {
 		btrfs_panic(fs_info, -EEXIST,
@@ -1422,8 +1377,8 @@ static void __del_reloc_root(struct btrfs_root *root)
 
 	if (rc && root->node) {
 		spin_lock(&rc->reloc_root_tree.lock);
-		rb_node = tree_search(&rc->reloc_root_tree.rb_root,
-				      root->commit_root->start);
+		rb_node = simple_search(&rc->reloc_root_tree.rb_root,
+					root->commit_root->start);
 		if (rb_node) {
 			node = rb_entry(rb_node, struct mapping_node, rb_node);
 			rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
@@ -1466,8 +1421,8 @@ static int __update_reloc_root(struct btrfs_root *root)
 	struct reloc_control *rc = fs_info->reloc_ctl;
 
 	spin_lock(&rc->reloc_root_tree.lock);
-	rb_node = tree_search(&rc->reloc_root_tree.rb_root,
-			      root->commit_root->start);
+	rb_node = simple_search(&rc->reloc_root_tree.rb_root,
+				root->commit_root->start);
 	if (rb_node) {
 		node = rb_entry(rb_node, struct mapping_node, rb_node);
 		rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
@@ -1480,8 +1435,8 @@ static int __update_reloc_root(struct btrfs_root *root)
 
 	spin_lock(&rc->reloc_root_tree.lock);
 	node->bytenr = root->node->start;
-	rb_node = tree_insert(&rc->reloc_root_tree.rb_root,
-			      node->bytenr, &node->rb_node);
+	rb_node = simple_insert(&rc->reloc_root_tree.rb_root,
+				node->bytenr, &node->rb_node);
 	spin_unlock(&rc->reloc_root_tree.lock);
 	if (rb_node)
 		backref_tree_panic(rb_node, -EEXIST, node->bytenr);
@@ -3624,7 +3579,7 @@ static int add_tree_block(struct reloc_control *rc,
 	block->level = level;
 	block->key_ready = 0;
 
-	rb_node = tree_insert(blocks, block->bytenr, &block->rb_node);
+	rb_node = simple_insert(blocks, block->bytenr, &block->rb_node);
 	if (rb_node)
 		backref_tree_panic(rb_node, -EEXIST, block->bytenr);
 
@@ -3647,7 +3602,7 @@ static int __add_tree_block(struct reloc_control *rc,
 	if (tree_block_processed(bytenr, rc))
 		return 0;
 
-	if (tree_search(blocks, bytenr))
+	if (simple_search(blocks, bytenr))
 		return 0;
 
 	path = btrfs_alloc_path();
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 18/39] btrfs: Rename backref_cache_init() to btrfs_backref_cache_init() and move it to backref.c
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (16 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 17/39] btrfs: Rename tree_entry to simple_node and export it Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 19/39] btrfs: Rename alloc_backref_node() to btrfs_backref_alloc_node() and move it backref.c Qu Wenruo
                   ` (23 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c    | 17 +++++++++++++++++
 fs/btrfs/backref.h    |  2 ++
 fs/btrfs/relocation.c | 18 +-----------------
 3 files changed, 20 insertions(+), 17 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index a1044f093f6c..997f609c97f8 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2463,3 +2463,20 @@ int btrfs_backref_iter_next(struct btrfs_backref_iter *iter)
 						path->slots[0]);
 	return 0;
 }
+
+void btrfs_backref_init_cache(struct btrfs_fs_info *fs_info,
+			      struct btrfs_backref_cache *cache, int is_reloc)
+{
+	int i;
+
+	cache->rb_root = RB_ROOT;
+	for (i = 0; i < BTRFS_MAX_LEVEL; i++)
+		INIT_LIST_HEAD(&cache->pending[i]);
+	INIT_LIST_HEAD(&cache->changed);
+	INIT_LIST_HEAD(&cache->detached);
+	INIT_LIST_HEAD(&cache->leaves);
+	INIT_LIST_HEAD(&cache->pending_edge);
+	INIT_LIST_HEAD(&cache->useless_node);
+	cache->fs_info = fs_info;
+	cache->is_reloc = is_reloc;
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index f3eae9e9f84b..25a0b1a5ce32 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -272,4 +272,6 @@ struct btrfs_backref_cache {
 	unsigned int is_reloc;
 };
 
+void btrfs_backref_init_cache(struct btrfs_fs_info *fs_info,
+			      struct btrfs_backref_cache *cache, int is_reloc);
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index d62537792ac0..1fa7c4a67f3d 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -178,22 +178,6 @@ static void mapping_tree_init(struct mapping_tree *tree)
 	spin_lock_init(&tree->lock);
 }
 
-static void backref_cache_init(struct btrfs_fs_info *fs_info,
-			       struct btrfs_backref_cache *cache, int is_reloc)
-{
-	int i;
-	cache->rb_root = RB_ROOT;
-	for (i = 0; i < BTRFS_MAX_LEVEL; i++)
-		INIT_LIST_HEAD(&cache->pending[i]);
-	INIT_LIST_HEAD(&cache->changed);
-	INIT_LIST_HEAD(&cache->detached);
-	INIT_LIST_HEAD(&cache->leaves);
-	INIT_LIST_HEAD(&cache->pending_edge);
-	INIT_LIST_HEAD(&cache->useless_node);
-	cache->fs_info = fs_info;
-	cache->is_reloc = is_reloc;
-}
-
 static void backref_cache_cleanup(struct btrfs_backref_cache *cache)
 {
 	struct btrfs_backref_node *node;
@@ -4215,7 +4199,7 @@ static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info)
 
 	INIT_LIST_HEAD(&rc->reloc_roots);
 	INIT_LIST_HEAD(&rc->dirty_subvol_roots);
-	backref_cache_init(fs_info, &rc->backref_cache, 1);
+	btrfs_backref_init_cache(fs_info, &rc->backref_cache, 1);
 	mapping_tree_init(&rc->reloc_root_tree);
 	extent_io_tree_init(fs_info, &rc->processed_blocks,
 			    IO_TREE_RELOC_BLOCKS, NULL);
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 19/39] btrfs: Rename alloc_backref_node() to btrfs_backref_alloc_node() and move it backref.c
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (17 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 18/39] btrfs: Rename backref_cache_init() to btrfs_backref_cache_init() and move it to backref.c Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 20/39] btrfs: Rename alloc_backref_edge() to btrfs_backref_alloc_edge() " Qu Wenruo
                   ` (22 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c    | 20 ++++++++++++++++++++
 fs/btrfs/backref.h    |  2 ++
 fs/btrfs/relocation.c | 31 ++++++-------------------------
 3 files changed, 28 insertions(+), 25 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 997f609c97f8..079de971f302 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2480,3 +2480,23 @@ void btrfs_backref_init_cache(struct btrfs_fs_info *fs_info,
 	cache->fs_info = fs_info;
 	cache->is_reloc = is_reloc;
 }
+
+struct btrfs_backref_node *btrfs_backref_alloc_node(
+		struct btrfs_backref_cache *cache, u64 bytenr, int level)
+{
+	struct btrfs_backref_node *node;
+
+	ASSERT(level >= 0 && level < BTRFS_MAX_LEVEL);
+	node = kzalloc(sizeof(*node), GFP_NOFS);
+	if (!node)
+		return node;
+	INIT_LIST_HEAD(&node->list);
+	INIT_LIST_HEAD(&node->upper);
+	INIT_LIST_HEAD(&node->lower);
+	RB_CLEAR_NODE(&node->rb_node);
+	cache->nr_nodes++;
+
+	node->level = level;
+	node->bytenr = bytenr;
+	return node;
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 25a0b1a5ce32..54364fbec65b 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -274,4 +274,6 @@ struct btrfs_backref_cache {
 
 void btrfs_backref_init_cache(struct btrfs_fs_info *fs_info,
 			      struct btrfs_backref_cache *cache, int is_reloc);
+struct btrfs_backref_node *btrfs_backref_alloc_node(
+		struct btrfs_backref_cache *cache, u64 bytenr, int level);
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 1fa7c4a67f3d..4f22e9bd8e3c 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -208,26 +208,6 @@ static void backref_cache_cleanup(struct btrfs_backref_cache *cache)
 	ASSERT(!cache->nr_edges);
 }
 
-static struct btrfs_backref_node *alloc_backref_node(
-		struct btrfs_backref_cache *cache, u64 bytenr, int level)
-{
-	struct btrfs_backref_node *node;
-
-	ASSERT(level >= 0 && level < BTRFS_MAX_LEVEL);
-	node = kzalloc(sizeof(*node), GFP_NOFS);
-	if (!node)
-		return node;
-	INIT_LIST_HEAD(&node->list);
-	INIT_LIST_HEAD(&node->upper);
-	INIT_LIST_HEAD(&node->lower);
-	RB_CLEAR_NODE(&node->rb_node);
-	cache->nr_nodes++;
-
-	node->level = level;
-	node->bytenr = bytenr;
-	return node;
-}
-
 static void free_backref_node(struct btrfs_backref_cache *cache,
 			      struct btrfs_backref_node *node)
 {
@@ -608,7 +588,7 @@ static int handle_direct_tree_backref(struct btrfs_backref_cache *cache,
 	rb_node = simple_search(&cache->rb_root, ref_key->offset);
 	if (!rb_node) {
 		/* Parent node not yet cached */
-		upper = alloc_backref_node(cache, ref_key->offset,
+		upper = btrfs_backref_alloc_node(cache, ref_key->offset,
 					   cur->level + 1);
 		if (!upper) {
 			free_backref_edge(cache, edge);
@@ -729,8 +709,8 @@ static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 		eb = path->nodes[level];
 		rb_node = simple_search(&cache->rb_root, eb->start);
 		if (!rb_node) {
-			upper = alloc_backref_node(cache, eb->start,
-						   lower->level + 1);
+			upper = btrfs_backref_alloc_node(cache, eb->start,
+							 lower->level + 1);
 			if (!upper) {
 				btrfs_put_root(root);
 				free_backref_edge(cache, edge);
@@ -1134,7 +1114,7 @@ struct btrfs_backref_node *build_backref_tree(struct reloc_control *rc,
 		goto out;
 	}
 
-	node = alloc_backref_node(cache, bytenr, level);
+	node = btrfs_backref_alloc_node(cache, bytenr, level);
 	if (!node) {
 		err = -ENOMEM;
 		goto out;
@@ -1271,7 +1251,8 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 	if (!node)
 		return 0;
 
-	new_node = alloc_backref_node(cache, dest->node->start, node->level);
+	new_node = btrfs_backref_alloc_node(cache, dest->node->start,
+					    node->level);
 	if (!new_node)
 		return -ENOMEM;
 
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 20/39] btrfs: Rename alloc_backref_edge() to btrfs_backref_alloc_edge() and move it backref.c
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (18 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 19/39] btrfs: Rename alloc_backref_node() to btrfs_backref_alloc_node() and move it backref.c Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 21/39] btrfs: Rename link_backref_edge() to btrfs_backref_link_edge() and move it backref.h Qu Wenruo
                   ` (21 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c    | 11 +++++++++++
 fs/btrfs/backref.h    |  2 ++
 fs/btrfs/relocation.c | 17 +++--------------
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 079de971f302..9475b6ccc7eb 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2500,3 +2500,14 @@ struct btrfs_backref_node *btrfs_backref_alloc_node(
 	node->bytenr = bytenr;
 	return node;
 }
+
+struct btrfs_backref_edge *btrfs_backref_alloc_edge(
+		struct btrfs_backref_cache *cache)
+{
+	struct btrfs_backref_edge *edge;
+
+	edge = kzalloc(sizeof(*edge), GFP_NOFS);
+	if (edge)
+		cache->nr_edges++;
+	return edge;
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 54364fbec65b..f04b366144ad 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -276,4 +276,6 @@ void btrfs_backref_init_cache(struct btrfs_fs_info *fs_info,
 			      struct btrfs_backref_cache *cache, int is_reloc);
 struct btrfs_backref_node *btrfs_backref_alloc_node(
 		struct btrfs_backref_cache *cache, u64 bytenr, int level);
+struct btrfs_backref_edge *btrfs_backref_alloc_edge(
+		struct btrfs_backref_cache *cache);
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 4f22e9bd8e3c..87bc0b20cbfd 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -218,17 +218,6 @@ static void free_backref_node(struct btrfs_backref_cache *cache,
 	}
 }
 
-static struct btrfs_backref_edge *alloc_backref_edge(
-		struct btrfs_backref_cache *cache)
-{
-	struct btrfs_backref_edge *edge;
-
-	edge = kzalloc(sizeof(*edge), GFP_NOFS);
-	if (edge)
-		cache->nr_edges++;
-	return edge;
-}
-
 #define		LINK_LOWER	(1 << 0)
 #define		LINK_UPPER	(1 << 1)
 static void link_backref_edge(struct btrfs_backref_edge *edge,
@@ -581,7 +570,7 @@ static int handle_direct_tree_backref(struct btrfs_backref_cache *cache,
 		return 0;
 	}
 
-	edge = alloc_backref_edge(cache);
+	edge = btrfs_backref_alloc_edge(cache);
 	if (!edge)
 		return -ENOMEM;
 
@@ -699,7 +688,7 @@ static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 			break;
 		}
 
-		edge = alloc_backref_edge(cache);
+		edge = btrfs_backref_alloc_edge(cache);
 		if (!edge) {
 			btrfs_put_root(root);
 			ret = -ENOMEM;
@@ -1263,7 +1252,7 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 
 	if (!node->lowest) {
 		list_for_each_entry(edge, &node->lower, list[UPPER]) {
-			new_edge = alloc_backref_edge(cache);
+			new_edge = btrfs_backref_alloc_edge(cache);
 			if (!new_edge)
 				goto fail;
 
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 21/39] btrfs: Rename link_backref_edge() to btrfs_backref_link_edge() and move it backref.h
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (19 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 20/39] btrfs: Rename alloc_backref_edge() to btrfs_backref_alloc_edge() " Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:32 ` [PATCH v2 22/39] btrfs: Rename free_backref_(node|edge) to btrfs_backref_free_(node|edge) and move them to backref.h Qu Wenruo
                   ` (20 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.h    | 16 ++++++++++++++++
 fs/btrfs/relocation.c | 23 ++++-------------------
 2 files changed, 20 insertions(+), 19 deletions(-)

diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index f04b366144ad..c09fd4024cc2 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -278,4 +278,20 @@ struct btrfs_backref_node *btrfs_backref_alloc_node(
 		struct btrfs_backref_cache *cache, u64 bytenr, int level);
 struct btrfs_backref_edge *btrfs_backref_alloc_edge(
 		struct btrfs_backref_cache *cache);
+
+#define		LINK_LOWER	(1 << 0)
+#define		LINK_UPPER	(1 << 1)
+static inline void btrfs_backref_link_edge(struct btrfs_backref_edge *edge,
+					   struct btrfs_backref_node *lower,
+					   struct btrfs_backref_node *upper,
+					   int link_which)
+{
+	ASSERT(upper && lower && upper->level == lower->level + 1);
+	edge->node[LOWER] = lower;
+	edge->node[UPPER] = upper;
+	if (link_which & LINK_LOWER)
+		list_add_tail(&edge->list[LOWER], &lower->upper);
+	if (link_which & LINK_UPPER)
+		list_add_tail(&edge->list[UPPER], &upper->lower);
+}
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 87bc0b20cbfd..3a95f5cd353a 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -218,21 +218,6 @@ static void free_backref_node(struct btrfs_backref_cache *cache,
 	}
 }
 
-#define		LINK_LOWER	(1 << 0)
-#define		LINK_UPPER	(1 << 1)
-static void link_backref_edge(struct btrfs_backref_edge *edge,
-			      struct btrfs_backref_node *lower,
-			      struct btrfs_backref_node *upper,
-			      int link_which)
-{
-	ASSERT(upper && lower && upper->level == lower->level + 1);
-	edge->node[LOWER] = lower;
-	edge->node[UPPER] = upper;
-	if (link_which & LINK_LOWER)
-		list_add_tail(&edge->list[LOWER], &lower->upper);
-	if (link_which & LINK_UPPER)
-		list_add_tail(&edge->list[UPPER], &upper->lower);
-}
 
 static void free_backref_edge(struct btrfs_backref_cache *cache,
 			      struct btrfs_backref_edge *edge)
@@ -596,7 +581,7 @@ static int handle_direct_tree_backref(struct btrfs_backref_cache *cache,
 		ASSERT(upper->checked);
 		INIT_LIST_HEAD(&edge->list[UPPER]);
 	}
-	link_backref_edge(edge, cur, upper, LINK_LOWER);
+	btrfs_backref_link_edge(edge, cur, upper, LINK_LOWER);
 	return 0;
 }
 
@@ -741,7 +726,7 @@ static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 			if (!upper->owner)
 				upper->owner = btrfs_header_owner(eb);
 		}
-		link_backref_edge(edge, lower, upper, LINK_LOWER);
+		btrfs_backref_link_edge(edge, lower, upper, LINK_LOWER);
 
 		if (rb_node) {
 			btrfs_put_root(root);
@@ -1256,8 +1241,8 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 			if (!new_edge)
 				goto fail;
 
-			link_backref_edge(new_edge, edge->node[LOWER], new_node,
-					  LINK_UPPER);
+			btrfs_backref_link_edge(new_edge, edge->node[LOWER],
+						new_node, LINK_UPPER);
 		}
 	} else {
 		list_add_tail(&new_node->lower, &cache->leaves);
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 22/39] btrfs: Rename free_backref_(node|edge) to btrfs_backref_free_(node|edge) and move them to backref.h
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (20 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 21/39] btrfs: Rename link_backref_edge() to btrfs_backref_link_edge() and move it backref.h Qu Wenruo
@ 2020-03-26  8:32 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 23/39] btrfs: Rename drop_backref_node() to btrfs_backref_drop_node() and move its needed facilities " Qu Wenruo
                   ` (19 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:32 UTC (permalink / raw)
  To: linux-btrfs

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.h    | 20 ++++++++++++++++++++
 fs/btrfs/relocation.c | 42 +++++++++++-------------------------------
 2 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index c09fd4024cc2..5464d0dc1669 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -8,6 +8,7 @@
 
 #include <linux/btrfs.h>
 #include "ulist.h"
+#include "disk-io.h"
 #include "extent_io.h"
 
 struct inode_fs_paths {
@@ -294,4 +295,23 @@ static inline void btrfs_backref_link_edge(struct btrfs_backref_edge *edge,
 	if (link_which & LINK_UPPER)
 		list_add_tail(&edge->list[UPPER], &upper->lower);
 }
+static inline void btrfs_backref_free_node(struct btrfs_backref_cache *cache,
+					   struct btrfs_backref_node *node)
+{
+	if (node) {
+		cache->nr_nodes--;
+		btrfs_put_root(node->root);
+		kfree(node);
+	}
+}
+
+static inline void btrfs_backref_free_edge(struct btrfs_backref_cache *cache,
+					   struct btrfs_backref_edge *edge)
+{
+	if (edge) {
+		cache->nr_edges--;
+		kfree(edge);
+	}
+}
+
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 3a95f5cd353a..b09141f4d4c8 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -208,26 +208,6 @@ static void backref_cache_cleanup(struct btrfs_backref_cache *cache)
 	ASSERT(!cache->nr_edges);
 }
 
-static void free_backref_node(struct btrfs_backref_cache *cache,
-			      struct btrfs_backref_node *node)
-{
-	if (node) {
-		cache->nr_nodes--;
-		btrfs_put_root(node->root);
-		kfree(node);
-	}
-}
-
-
-static void free_backref_edge(struct btrfs_backref_cache *cache,
-			      struct btrfs_backref_edge *edge)
-{
-	if (edge) {
-		cache->nr_edges--;
-		kfree(edge);
-	}
-}
-
 static void backref_tree_panic(struct rb_node *rb_node, int errno, u64 bytenr)
 {
 
@@ -316,7 +296,7 @@ static void drop_backref_node(struct btrfs_backref_cache *tree,
 	list_del(&node->lower);
 	if (!RB_EMPTY_NODE(&node->rb_node))
 		rb_erase(&node->rb_node, &tree->rb_root);
-	free_backref_node(tree, node);
+	btrfs_backref_free_node(tree, node);
 }
 
 /*
@@ -338,7 +318,7 @@ static void remove_backref_node(struct btrfs_backref_cache *cache,
 		upper = edge->node[UPPER];
 		list_del(&edge->list[LOWER]);
 		list_del(&edge->list[UPPER]);
-		free_backref_edge(cache, edge);
+		btrfs_backref_free_edge(cache, edge);
 
 		if (RB_EMPTY_NODE(&upper->rb_node)) {
 			BUG_ON(!list_empty(&node->upper));
@@ -565,7 +545,7 @@ static int handle_direct_tree_backref(struct btrfs_backref_cache *cache,
 		upper = btrfs_backref_alloc_node(cache, ref_key->offset,
 					   cur->level + 1);
 		if (!upper) {
-			free_backref_edge(cache, edge);
+			btrfs_backref_free_edge(cache, edge);
 			return -ENOMEM;
 		}
 
@@ -687,7 +667,7 @@ static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 							 lower->level + 1);
 			if (!upper) {
 				btrfs_put_root(root);
-				free_backref_edge(cache, edge);
+				btrfs_backref_free_edge(cache, edge);
 				ret = -ENOMEM;
 				goto out;
 			}
@@ -916,7 +896,7 @@ static int finish_upper_links(struct btrfs_backref_cache *cache,
 		/* Parent is detached, no need to keep any edges */
 		if (upper->detached) {
 			list_del(&edge->list[LOWER]);
-			free_backref_edge(cache, edge);
+			btrfs_backref_free_edge(cache, edge);
 
 			/* Lower node is orphan, queue for cleanup */
 			if (list_empty(&lower->upper))
@@ -1024,7 +1004,7 @@ static bool handle_useless_nodes(struct reloc_control *rc,
 			list_del(&edge->list[UPPER]);
 			list_del(&edge->list[LOWER]);
 			lower = edge->node[LOWER];
-			free_backref_edge(cache, edge);
+			btrfs_backref_free_edge(cache, edge);
 
 			/* Child node is also orphan, queue for cleanup */
 			if (list_empty(&lower->upper))
@@ -1043,7 +1023,7 @@ static bool handle_useless_nodes(struct reloc_control *rc,
 			cur->detached = 1;
 		} else {
 			rb_erase(&cur->rb_node, &cache->rb_root);
-			free_backref_node(cache, cur);
+			btrfs_backref_free_node(cache, cur);
 		}
 	}
 	return ret;
@@ -1141,7 +1121,7 @@ struct btrfs_backref_node *build_backref_tree(struct reloc_control *rc,
 			list_del(&edge->list[LOWER]);
 			lower = edge->node[LOWER];
 			upper = edge->node[UPPER];
-			free_backref_edge(cache, edge);
+			btrfs_backref_free_edge(cache, edge);
 
 			/*
 			 * Lower is no longer linked to any upper backref nodes
@@ -1168,7 +1148,7 @@ struct btrfs_backref_node *build_backref_tree(struct reloc_control *rc,
 			list_del_init(&lower->list);
 			if (lower == node)
 				node = NULL;
-			free_backref_node(cache, lower);
+			btrfs_backref_free_node(cache, lower);
 		}
 
 		remove_backref_node(cache, node);
@@ -1265,9 +1245,9 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 		new_edge = list_entry(new_node->lower.next,
 				      struct btrfs_backref_edge, list[UPPER]);
 		list_del(&new_edge->list[UPPER]);
-		free_backref_edge(cache, new_edge);
+		btrfs_backref_free_edge(cache, new_edge);
 	}
-	free_backref_node(cache, new_node);
+	btrfs_backref_free_node(cache, new_node);
 	return -ENOMEM;
 }
 
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 23/39] btrfs: Rename drop_backref_node() to btrfs_backref_drop_node() and move its needed facilities to backref.h
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (21 preceding siblings ...)
  2020-03-26  8:32 ` [PATCH v2 22/39] btrfs: Rename free_backref_(node|edge) to btrfs_backref_free_(node|edge) and move them to backref.h Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 24/39] btrfs: Rename remove_backref_node() to btrfs_backref_cleanup_node() and move it to backref.c Qu Wenruo
                   ` (18 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

Add an extra comment for drop_backref_node(): since it is similar to
remove_backref_node(), the comment explains the difference between the
two.
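
The contract difference can be shown with a tiny stand-alone sketch
(editor's illustration: counters stand in for the kernel's edge lists,
and the names only mirror the real functions):

  #include <assert.h>
  #include <stdio.h>

  struct node {
          int nr_upper;   /* parent edges still linked */
          int nr_lower;   /* child edges still linked */
  };

  /* drop: only legal once no parent edge references the node;
   * child edges are left untouched (mirrors the BUG_ON() in the
   * patch below) */
  static void drop_node(struct node *n)
  {
          assert(n->nr_upper == 0);
          printf("dropped node, child edges kept: %d\n", n->nr_lower);
  }

  /* remove: detach all parent edges first, then drop the node */
  static void remove_node(struct node *n)
  {
          while (n->nr_upper > 0)
                  n->nr_upper--;  /* stand-in for list_del() + free edge */
          drop_node(n);
  }

  int main(void)
  {
          struct node a = { .nr_upper = 2, .nr_lower = 1 };

          /* calling drop_node(&a) directly would trip the assertion */
          remove_node(&a);
          return 0;
  }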

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.h    | 39 +++++++++++++++++++++++++++++++++++++
 fs/btrfs/relocation.c | 45 +++++++------------------------------------
 2 files changed, 46 insertions(+), 38 deletions(-)

diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 5464d0dc1669..5e8d368d9a82 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -314,4 +314,43 @@ static inline void btrfs_backref_free_edge(struct btrfs_backref_cache *cache,
 	}
 }
 
+static inline void btrfs_backref_unlock_node_buffer(
+		struct btrfs_backref_node *node)
+{
+	if (node->locked) {
+		btrfs_tree_unlock(node->eb);
+		node->locked = 0;
+	}
+}
+
+static inline void btrfs_backref_drop_node_buffer(
+		struct btrfs_backref_node *node)
+{
+	if (node->eb) {
+		btrfs_backref_unlock_node_buffer(node);
+		free_extent_buffer(node->eb);
+		node->eb = NULL;
+	}
+}
+
+/*
+ * Drop the backref node from cache without cleaning up its child
+ * edges.
+ *
+ * This can only be called on a node without parent edges.
+ * The child edges are still kept as is.
+ */
+static inline void btrfs_backref_drop_node(struct btrfs_backref_cache *tree,
+					   struct btrfs_backref_node *node)
+{
+	BUG_ON(!list_empty(&node->upper));
+
+	btrfs_backref_drop_node_buffer(node);
+	list_del(&node->list);
+	list_del(&node->lower);
+	if (!RB_EMPTY_NODE(&node->rb_node))
+		rb_erase(&node->rb_node, &tree->rb_root);
+	btrfs_backref_free_node(tree, node);
+}
+
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index b09141f4d4c8..fd94bd18f2ad 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -268,37 +268,6 @@ static struct btrfs_backref_node *walk_down_backref(
 	*index = 0;
 	return NULL;
 }
-
-static void unlock_node_buffer(struct btrfs_backref_node *node)
-{
-	if (node->locked) {
-		btrfs_tree_unlock(node->eb);
-		node->locked = 0;
-	}
-}
-
-static void drop_node_buffer(struct btrfs_backref_node *node)
-{
-	if (node->eb) {
-		unlock_node_buffer(node);
-		free_extent_buffer(node->eb);
-		node->eb = NULL;
-	}
-}
-
-static void drop_backref_node(struct btrfs_backref_cache *tree,
-			      struct btrfs_backref_node *node)
-{
-	BUG_ON(!list_empty(&node->upper));
-
-	drop_node_buffer(node);
-	list_del(&node->list);
-	list_del(&node->lower);
-	if (!RB_EMPTY_NODE(&node->rb_node))
-		rb_erase(&node->rb_node, &tree->rb_root);
-	btrfs_backref_free_node(tree, node);
-}
-
 /*
  * remove a backref node from the backref cache
  */
@@ -322,7 +291,7 @@ static void remove_backref_node(struct btrfs_backref_cache *cache,
 
 		if (RB_EMPTY_NODE(&upper->rb_node)) {
 			BUG_ON(!list_empty(&node->upper));
-			drop_backref_node(cache, node);
+			btrfs_backref_drop_node(cache, node);
 			node = upper;
 			node->lowest = 1;
 			continue;
@@ -337,7 +306,7 @@ static void remove_backref_node(struct btrfs_backref_cache *cache,
 		}
 	}
 
-	drop_backref_node(cache, node);
+	btrfs_backref_drop_node(cache, node);
 }
 
 static void update_backref_node(struct btrfs_backref_cache *cache,
@@ -2844,7 +2813,7 @@ static int do_relocation(struct btrfs_trans_handle *trans,
 				if (node->eb->start == bytenr)
 					goto next;
 			}
-			drop_node_buffer(upper);
+			btrfs_backref_drop_node_buffer(upper);
 		}
 
 		if (!upper->eb) {
@@ -2943,15 +2912,15 @@ static int do_relocation(struct btrfs_trans_handle *trans,
 		}
 next:
 		if (!upper->pending)
-			drop_node_buffer(upper);
+			btrfs_backref_drop_node_buffer(upper);
 		else
-			unlock_node_buffer(upper);
+			btrfs_backref_unlock_node_buffer(upper);
 		if (err)
 			break;
 	}
 
 	if (!err && node->pending) {
-		drop_node_buffer(node);
+		btrfs_backref_drop_node_buffer(node);
 		list_move_tail(&node->list, &rc->backref_cache.changed);
 		node->pending = 0;
 	}
@@ -4575,7 +4544,7 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
 		BUG_ON(node->bytenr != buf->start &&
 		       node->new_bytenr != buf->start);
 
-		drop_node_buffer(node);
+		btrfs_backref_drop_node_buffer(node);
 		atomic_inc(&cow->refs);
 		node->eb = cow;
 		node->new_bytenr = cow->start;
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 24/39] btrfs: Rename remove_backref_node() to btrfs_backref_cleanup_node() and move it to backref.c
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (22 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 23/39] btrfs: Rename drop_backref_node() to btrfs_backref_drop_node() and move its needed facilities " Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 25/39] btrfs: Rename backref_cache_cleanup() to btrfs_backref_release_cache() " Qu Wenruo
                   ` (17 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

Also add a comment explaining the cleanup process, to distinguish it
from btrfs_backref_drop_node().

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c    | 38 +++++++++++++++++++++++++++++++
 fs/btrfs/backref.h    |  9 ++++++++
 fs/btrfs/relocation.c | 53 ++++---------------------------------------
 3 files changed, 52 insertions(+), 48 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 9475b6ccc7eb..5cab1b71d0b5 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2511,3 +2511,41 @@ struct btrfs_backref_edge *btrfs_backref_alloc_edge(
 		cache->nr_edges++;
 	return edge;
 }
+
+void btrfs_backref_cleanup_node(struct btrfs_backref_cache *cache,
+				struct btrfs_backref_node *node)
+{
+	struct btrfs_backref_node *upper;
+	struct btrfs_backref_edge *edge;
+
+	if (!node)
+		return;
+
+	BUG_ON(!node->lowest && !node->detached);
+	while (!list_empty(&node->upper)) {
+		edge = list_entry(node->upper.next, struct btrfs_backref_edge,
+				  list[LOWER]);
+		upper = edge->node[UPPER];
+		list_del(&edge->list[LOWER]);
+		list_del(&edge->list[UPPER]);
+		btrfs_backref_free_edge(cache, edge);
+
+		if (RB_EMPTY_NODE(&upper->rb_node)) {
+			BUG_ON(!list_empty(&node->upper));
+			btrfs_backref_drop_node(cache, node);
+			node = upper;
+			node->lowest = 1;
+			continue;
+		}
+		/*
+		 * add the node to leaf node list if no other
+		 * child block is cached.
+		 */
+		if (list_empty(&upper->lower)) {
+			list_add_tail(&upper->lower, &cache->leaves);
+			upper->lowest = 1;
+		}
+	}
+
+	btrfs_backref_drop_node(cache, node);
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 5e8d368d9a82..5bc3700fea1d 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -353,4 +353,13 @@ static inline void btrfs_backref_drop_node(struct btrfs_backref_cache *tree,
 	btrfs_backref_free_node(tree, node);
 }
 
+/*
+ * Drop the backref node from cache, also cleaning up all its
+ * upper edges and any uncached nodes in the path.
+ *
+ * This cleanup happens bottom up, thus the node should either
+ * be the lowest node in the cache or a detached node.
+ */
+void btrfs_backref_cleanup_node(struct btrfs_backref_cache *cache,
+				struct btrfs_backref_node *node);
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index fd94bd18f2ad..04d9b88d92aa 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -153,9 +153,6 @@ struct reloc_control {
 #define MOVE_DATA_EXTENTS	0
 #define UPDATE_DATA_PTRS	1
 
-static void remove_backref_node(struct btrfs_backref_cache *cache,
-				struct btrfs_backref_node *node);
-
 static void mark_block_processed(struct reloc_control *rc,
 				 struct btrfs_backref_node *node)
 {
@@ -186,13 +183,13 @@ static void backref_cache_cleanup(struct btrfs_backref_cache *cache)
 	while (!list_empty(&cache->detached)) {
 		node = list_entry(cache->detached.next,
 				  struct btrfs_backref_node, list);
-		remove_backref_node(cache, node);
+		btrfs_backref_cleanup_node(cache, node);
 	}
 
 	while (!list_empty(&cache->leaves)) {
 		node = list_entry(cache->leaves.next,
 				  struct btrfs_backref_node, lower);
-		remove_backref_node(cache, node);
+		btrfs_backref_cleanup_node(cache, node);
 	}
 
 	cache->last_trans = 0;
@@ -268,46 +265,6 @@ static struct btrfs_backref_node *walk_down_backref(
 	*index = 0;
 	return NULL;
 }
-/*
- * remove a backref node from the backref cache
- */
-static void remove_backref_node(struct btrfs_backref_cache *cache,
-				struct btrfs_backref_node *node)
-{
-	struct btrfs_backref_node *upper;
-	struct btrfs_backref_edge *edge;
-
-	if (!node)
-		return;
-
-	BUG_ON(!node->lowest && !node->detached);
-	while (!list_empty(&node->upper)) {
-		edge = list_entry(node->upper.next, struct btrfs_backref_edge,
-				  list[LOWER]);
-		upper = edge->node[UPPER];
-		list_del(&edge->list[LOWER]);
-		list_del(&edge->list[UPPER]);
-		btrfs_backref_free_edge(cache, edge);
-
-		if (RB_EMPTY_NODE(&upper->rb_node)) {
-			BUG_ON(!list_empty(&node->upper));
-			btrfs_backref_drop_node(cache, node);
-			node = upper;
-			node->lowest = 1;
-			continue;
-		}
-		/*
-		 * add the node to leaf node list if no other
-		 * child block cached.
-		 */
-		if (list_empty(&upper->lower)) {
-			list_add_tail(&upper->lower, &cache->leaves);
-			upper->lowest = 1;
-		}
-	}
-
-	btrfs_backref_drop_node(cache, node);
-}
 
 static void update_backref_node(struct btrfs_backref_cache *cache,
 				struct btrfs_backref_node *node, u64 bytenr)
@@ -345,7 +302,7 @@ static int update_backref_cache(struct btrfs_trans_handle *trans,
 	while (!list_empty(&cache->detached)) {
 		node = list_entry(cache->detached.next,
 				  struct btrfs_backref_node, list);
-		remove_backref_node(cache, node);
+		btrfs_backref_cleanup_node(cache, node);
 	}
 
 	while (!list_empty(&cache->changed)) {
@@ -1120,7 +1077,7 @@ struct btrfs_backref_node *build_backref_tree(struct reloc_control *rc,
 			btrfs_backref_free_node(cache, lower);
 		}
 
-		remove_backref_node(cache, node);
+		btrfs_backref_cleanup_node(cache, node);
 		ASSERT(list_empty(&cache->useless_node) &&
 		       list_empty(&cache->pending_edge));
 		return ERR_PTR(err);
@@ -3088,7 +3045,7 @@ static int relocate_tree_block(struct btrfs_trans_handle *trans,
 	}
 out:
 	if (ret || node->level == 0 || node->cowonly)
-		remove_backref_node(&rc->backref_cache, node);
+		btrfs_backref_cleanup_node(&rc->backref_cache, node);
 	return ret;
 }
 
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 25/39] btrfs: Rename backref_cache_cleanup() to btrfs_backref_release_cache() and move it to backref.c
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (23 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 24/39] btrfs: Rename remove_backref_node() to btrfs_backref_cleanup_node() and move it to backref.c Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 26/39] btrfs: Rename backref_tree_panic() to btrfs_backref_panic(), " Qu Wenruo
                   ` (16 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

Since we're releasing all existing nodes/edges, rather than just
cleaning up the mess after an error, "release" is a more proper name
here.
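
As an editorial sketch of the distinction (not part of the patch):
btrfs_backref_release_cache() is the normal full-teardown path, so
besides draining the detached and leaf lists it can afford to ASSERT()
that nothing else is left behind:

	/* Normal teardown: drain what is left, then verify emptiness;
	 * after this call cache->nr_nodes and cache->nr_edges are 0. */
	btrfs_backref_release_cache(&rc->backref_cache);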

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c    | 30 ++++++++++++++++++++++++++++++
 fs/btrfs/backref.h    |  3 +++
 fs/btrfs/relocation.c | 32 +-------------------------------
 3 files changed, 34 insertions(+), 31 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 5cab1b71d0b5..8dae00cfa69f 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2549,3 +2549,33 @@ void btrfs_backref_cleanup_node(struct btrfs_backref_cache *cache,
 
 	btrfs_backref_drop_node(cache, node);
 }
+
+void btrfs_backref_release_cache(struct btrfs_backref_cache *cache)
+{
+	struct btrfs_backref_node *node;
+	int i;
+
+	while (!list_empty(&cache->detached)) {
+		node = list_entry(cache->detached.next,
+				  struct btrfs_backref_node, list);
+		btrfs_backref_cleanup_node(cache, node);
+	}
+
+	while (!list_empty(&cache->leaves)) {
+		node = list_entry(cache->leaves.next,
+				  struct btrfs_backref_node, lower);
+		btrfs_backref_cleanup_node(cache, node);
+	}
+
+	cache->last_trans = 0;
+
+	for (i = 0; i < BTRFS_MAX_LEVEL; i++)
+		ASSERT(list_empty(&cache->pending[i]));
+	ASSERT(list_empty(&cache->pending_edge));
+	ASSERT(list_empty(&cache->useless_node));
+	ASSERT(list_empty(&cache->changed));
+	ASSERT(list_empty(&cache->detached));
+	ASSERT(RB_EMPTY_ROOT(&cache->rb_root));
+	ASSERT(!cache->nr_nodes);
+	ASSERT(!cache->nr_edges);
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 5bc3700fea1d..c77de13570bc 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -362,4 +362,7 @@ static inline void btrfs_backref_drop_node(struct btrfs_backref_cache *tree,
  */
 void btrfs_backref_cleanup_node(struct btrfs_backref_cache *cache,
 				struct btrfs_backref_node *node);
+
+/* Release all nodes/edges from current cache */
+void btrfs_backref_release_cache(struct btrfs_backref_cache *cache);
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 04d9b88d92aa..4649de9fd02a 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -175,36 +175,6 @@ static void mapping_tree_init(struct mapping_tree *tree)
 	spin_lock_init(&tree->lock);
 }
 
-static void backref_cache_cleanup(struct btrfs_backref_cache *cache)
-{
-	struct btrfs_backref_node *node;
-	int i;
-
-	while (!list_empty(&cache->detached)) {
-		node = list_entry(cache->detached.next,
-				  struct btrfs_backref_node, list);
-		btrfs_backref_cleanup_node(cache, node);
-	}
-
-	while (!list_empty(&cache->leaves)) {
-		node = list_entry(cache->leaves.next,
-				  struct btrfs_backref_node, lower);
-		btrfs_backref_cleanup_node(cache, node);
-	}
-
-	cache->last_trans = 0;
-
-	for (i = 0; i < BTRFS_MAX_LEVEL; i++)
-		ASSERT(list_empty(&cache->pending[i]));
-	ASSERT(list_empty(&cache->pending_edge));
-	ASSERT(list_empty(&cache->useless_node));
-	ASSERT(list_empty(&cache->changed));
-	ASSERT(list_empty(&cache->detached));
-	ASSERT(RB_EMPTY_ROOT(&cache->rb_root));
-	ASSERT(!cache->nr_nodes);
-	ASSERT(!cache->nr_edges);
-}
-
 static void backref_tree_panic(struct rb_node *rb_node, int errno, u64 bytenr)
 {
 
@@ -3933,7 +3903,7 @@ static noinline_for_stack int relocate_block_group(struct reloc_control *rc)
 	rc->create_reloc_tree = 0;
 	set_reloc_control(rc);
 
-	backref_cache_cleanup(&rc->backref_cache);
+	btrfs_backref_release_cache(&rc->backref_cache);
 	btrfs_block_rsv_release(fs_info, rc->block_rsv, (u64)-1, NULL);
 
 	/*
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 26/39] btrfs: Rename backref_tree_panic() to btrfs_backref_panic(), and move it to backref.c
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (24 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 25/39] btrfs: Rename backref_cache_cleanup() to btrfs_backref_release_cache() " Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 27/39] btrfs: Rename should_ignore_root() to btrfs_should_ignore_reloc_root() and export it Qu Wenruo
                   ` (15 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

Also change the parameter: since all callers can easily grab an
fs_info, there is no need for all the dancing to extract it from the
rb_node.
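
In caller terms the change looks like this (an editorial before/after
sketch based on the call sites in the diff):

	/* Before: the helper dug an fs_info out of the rb_node, and
	 * could even pass a NULL fs_info if bnode->root was not set. */
	backref_tree_panic(rb_node, -EEXIST, node->bytenr);

	/* After: every caller already has an fs_info at hand. */
	btrfs_backref_panic(cache->fs_info, node->bytenr, -EEXIST);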

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.h    |  9 +++++++++
 fs/btrfs/relocation.c | 29 +++++++++--------------------
 2 files changed, 18 insertions(+), 20 deletions(-)

diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index c77de13570bc..c355ca816349 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -365,4 +365,13 @@ void btrfs_backref_cleanup_node(struct btrfs_backref_cache *cache,
 
 /* Release all nodes/edges from current cache */
 void btrfs_backref_release_cache(struct btrfs_backref_cache *cache);
+
+static inline void btrfs_backref_panic(struct btrfs_fs_info *fs_info,
+				       u64 bytenr, int errno)
+{
+	btrfs_panic(fs_info, errno,
+		    "Inconsistency in backref cache found at offset %llu",
+		    bytenr);
+}
+
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 4649de9fd02a..5448cc2f1b28 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -175,19 +175,6 @@ static void mapping_tree_init(struct mapping_tree *tree)
 	spin_lock_init(&tree->lock);
 }
 
-static void backref_tree_panic(struct rb_node *rb_node, int errno, u64 bytenr)
-{
-
-	struct btrfs_fs_info *fs_info = NULL;
-	struct btrfs_backref_node *bnode = rb_entry(rb_node,
-			struct btrfs_backref_node, rb_node);
-	if (bnode->root)
-		fs_info = bnode->root->fs_info;
-	btrfs_panic(fs_info, errno,
-		    "Inconsistency in backref cache found at offset %llu",
-		    bytenr);
-}
-
 /*
  * walk up backref nodes until reach node presents tree root
  */
@@ -244,7 +231,7 @@ static void update_backref_node(struct btrfs_backref_cache *cache,
 	node->bytenr = bytenr;
 	rb_node = simple_insert(&cache->rb_root, node->bytenr, &node->rb_node);
 	if (rb_node)
-		backref_tree_panic(rb_node, -EEXIST, bytenr);
+		btrfs_backref_panic(cache->fs_info, bytenr, -EEXIST);
 }
 
 /*
@@ -766,7 +753,8 @@ static int finish_upper_links(struct btrfs_backref_cache *cache,
 		rb_node = simple_insert(&cache->rb_root, start->bytenr,
 					&start->rb_node);
 		if (rb_node)
-			backref_tree_panic(rb_node, -EEXIST, start->bytenr);
+			btrfs_backref_panic(cache->fs_info, start->bytenr,
+					    -EEXIST);
 		list_add_tail(&start->lower, &cache->leaves);
 	}
 
@@ -834,8 +822,8 @@ static int finish_upper_links(struct btrfs_backref_cache *cache,
 			rb_node = simple_insert(&cache->rb_root, upper->bytenr,
 						&upper->rb_node);
 			if (rb_node) {
-				backref_tree_panic(rb_node, -EEXIST,
-						   upper->bytenr);
+				btrfs_backref_panic(cache->fs_info,
+						upper->bytenr, -EEXIST);
 				return -EUCLEAN;
 			}
 		}
@@ -1127,7 +1115,7 @@ static int clone_backref_node(struct btrfs_trans_handle *trans,
 	rb_node = simple_insert(&cache->rb_root, new_node->bytenr,
 				&new_node->rb_node);
 	if (rb_node)
-		backref_tree_panic(rb_node, -EEXIST, new_node->bytenr);
+		btrfs_backref_panic(trans->fs_info, new_node->bytenr, -EEXIST);
 
 	if (!new_node->lowest) {
 		list_for_each_entry(new_edge, &new_node->lower, list[UPPER]) {
@@ -1254,7 +1242,7 @@ static int __update_reloc_root(struct btrfs_root *root)
 				node->bytenr, &node->rb_node);
 	spin_unlock(&rc->reloc_root_tree.lock);
 	if (rb_node)
-		backref_tree_panic(rb_node, -EEXIST, node->bytenr);
+		btrfs_backref_panic(fs_info, node->bytenr, -EEXIST);
 	return 0;
 }
 
@@ -3396,7 +3384,8 @@ static int add_tree_block(struct reloc_control *rc,
 
 	rb_node = simple_insert(blocks, block->bytenr, &block->rb_node);
 	if (rb_node)
-		backref_tree_panic(rb_node, -EEXIST, block->bytenr);
+		btrfs_backref_panic(rc->extent_root->fs_info, block->bytenr,
+				    -EEXIST);
 
 	return 0;
 }
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 27/39] btrfs: Rename should_ignore_root() to btrfs_should_ignore_reloc_root() and export it
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (25 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 26/39] btrfs: Rename backref_tree_panic() to btrfs_backref_panic(), " Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 28/39] btrfs: relocation: Open-code read_fs_root() for handle_indirect_tree_backref() Qu Wenruo
                   ` (14 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

This function mostly serves a single purpose for the relocation backref
cache, but since we're moving the main part of the backref cache to
backref.c, we need to export it.

And to avoid confusion, rename the function to
btrfs_should_ignore_reloc_root() to make the name a little clearer.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/ctree.h      |  1 +
 fs/btrfs/relocation.c | 10 ++++++----
 2 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 1e8a0a513e73..01b03e8a671f 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -3383,6 +3383,7 @@ int btrfs_reloc_post_snapshot(struct btrfs_trans_handle *trans,
 int btrfs_should_cancel_balance(struct btrfs_fs_info *fs_info);
 struct btrfs_root *find_reloc_root(struct btrfs_fs_info *fs_info,
 				   u64 bytenr);
+int btrfs_should_ignore_reloc_root(struct btrfs_root *root);
 
 /* scrub.c */
 int btrfs_scrub_dev(struct btrfs_fs_info *fs_info, u64 devid, u64 start,
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 5448cc2f1b28..23ed17cd54c4 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -305,7 +305,8 @@ static bool reloc_root_is_dead(struct btrfs_root *root)
  *
  * Reloc tree after swap is considered dead, thus not considered as valid.
  * This is enough for most callers, as they don't distinguish dead reloc root
- * from no reloc root.  But should_ignore_root() below is a special case.
+ * from no reloc root.  But btrfs_should_ignore_reloc_root() below is a
+ * special case.
  */
 static bool have_reloc_root(struct btrfs_root *root)
 {
@@ -316,7 +317,7 @@ static bool have_reloc_root(struct btrfs_root *root)
 	return true;
 }
 
-static int should_ignore_root(struct btrfs_root *root)
+int btrfs_should_ignore_reloc_root(struct btrfs_root *root)
 {
 	struct btrfs_root *reloc_root;
 
@@ -342,6 +343,7 @@ static int should_ignore_root(struct btrfs_root *root)
 	 */
 	return 1;
 }
+
 /*
  * find reloc tree by address of tree root
  */
@@ -486,7 +488,7 @@ static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 	if (btrfs_root_level(&root->root_item) == cur->level) {
 		/* tree root */
 		ASSERT(btrfs_root_bytenr(&root->root_item) == cur->bytenr);
-		if (should_ignore_root(root)) {
+		if (btrfs_should_ignore_reloc_root(root)) {
 			btrfs_put_root(root);
 			list_add(&cur->list, &cache->useless_node);
 		} else {
@@ -527,7 +529,7 @@ static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 		if (!path->nodes[level]) {
 			ASSERT(btrfs_root_bytenr(&root->root_item) ==
 			       lower->bytenr);
-			if (should_ignore_root(root)) {
+			if (btrfs_should_ignore_reloc_root(root)) {
 				btrfs_put_root(root);
 				list_add(&lower->list, &cache->useless_node);
 			} else {
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 28/39] btrfs: relocation: Open-code read_fs_root() for handle_indirect_tree_backref()
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (26 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 27/39] btrfs: Rename should_ignore_root() to btrfs_should_ignore_reloc_root() and export it Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 29/39] btrfs: Rename handle_one_tree_block() to btrfs_backref_add_tree_node() and move it to backref.c Qu Wenruo
                   ` (13 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

The backref code is going to be moved to backref.c, and read_fs_root()
is just a simple wrapper; open-code it to prepare for the incoming code
move.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/relocation.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 23ed17cd54c4..c44a797b3b25 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -474,12 +474,16 @@ static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 	struct btrfs_backref_edge *edge;
 	struct extent_buffer *eb;
 	struct btrfs_root *root;
+	struct btrfs_key root_key;
 	struct rb_node *rb_node;
 	int level;
 	bool need_check = true;
 	int ret;
 
-	root = read_fs_root(fs_info, ref_key->offset);
+	root_key.objectid = ref_key->offset;
+	root_key.type = BTRFS_ROOT_ITEM_KEY;
+	root_key.offset = (u64)-1;
+	root = btrfs_get_fs_root(fs_info, &root_key, false);
 	if (IS_ERR(root))
 		return PTR_ERR(root);
 	if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 29/39] btrfs: Rename handle_one_tree_block() to btrfs_backref_add_tree_node() and move it to backref.c
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (27 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 28/39] btrfs: relocation: Open-code read_fs_root() for handle_indirect_tree_backref() Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 30/39] btrfs: Rename finish_upper_links() to btrfs_backref_finish_upper_links() " Qu Wenruo
                   ` (12 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

This function is the major part of the backref cache build process;
move it to backref.c so we can reuse it later.
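
For context, this helper is the first phase of a breadth-first cache
build; here is a minimal editorial sketch of the driver loop, mirroring
build_backref_tree() in the diff with the pending-edge walk elided:

	/* Breadth-first search: adding one tree block queues its
	 * uncached parents on cache->pending_edge for later rounds. */
	do {
		ret = btrfs_backref_add_tree_node(cache, path, iter,
						  node_key, cur);
		if (ret < 0)
			goto out;
		/* ... pop the next @cur from cache->pending_edge ... */
	} while (cur);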

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c    | 356 +++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/backref.h    |  15 ++
 fs/btrfs/relocation.c | 358 +-----------------------------------------
 3 files changed, 373 insertions(+), 356 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 8dae00cfa69f..b13cf6e144c8 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -13,6 +13,7 @@
 #include "transaction.h"
 #include "delayed-ref.h"
 #include "locking.h"
+#include "misc.h"
 
 /* Just an arbitrary number so we can be sure this happened */
 #define BACKREF_FOUND_SHARED 6
@@ -2579,3 +2580,358 @@ void btrfs_backref_release_cache(struct btrfs_backref_cache *cache)
 	ASSERT(!cache->nr_nodes);
 	ASSERT(!cache->nr_edges);
 }
+
+/*
+ * Handle direct tree backref.
+ *
+ * Direct tree backref means, the backref item shows its parent bytenr
+ * directly. This is for SHARED_BLOCK_REF backref (keyed or inlined).
+ *
+ * @ref_key:	The converted backref key.
+ *		For keyed backref, it's the item key.
+ *		For inlined backref, objectid is the bytenr,
+ *		type is btrfs_inline_ref_type, offset is
+ *		btrfs_inline_ref_offset.
+ */
+static int handle_direct_tree_backref(struct btrfs_backref_cache *cache,
+				      struct btrfs_key *ref_key,
+				      struct btrfs_backref_node *cur)
+{
+	struct btrfs_backref_edge *edge;
+	struct btrfs_backref_node *upper;
+	struct rb_node *rb_node;
+
+	ASSERT(ref_key->type == BTRFS_SHARED_BLOCK_REF_KEY);
+
+	/* Only reloc root uses backref pointing to itself */
+	if (ref_key->objectid == ref_key->offset) {
+		struct btrfs_root *root;
+
+		cur->is_reloc_root = 1;
+		/* Only reloc backref cache cares exact root */
+		if (cache->is_reloc) {
+			root = find_reloc_root(cache->fs_info, cur->bytenr);
+			if (WARN_ON(!root))
+				return -ENOENT;
+			cur->root = root;
+		} else {
+			/*
+			 * For generic purpose backref cache, reloc root node
+			 * is useless.
+			 */
+			list_add(&cur->list, &cache->useless_node);
+		}
+		return 0;
+	}
+
+	edge = btrfs_backref_alloc_edge(cache);
+	if (!edge)
+		return -ENOMEM;
+
+	rb_node = simple_search(&cache->rb_root, ref_key->offset);
+	if (!rb_node) {
+		/* Parent node not yet cached */
+		upper = btrfs_backref_alloc_node(cache, ref_key->offset,
+					   cur->level + 1);
+		if (!upper) {
+			btrfs_backref_free_edge(cache, edge);
+			return -ENOMEM;
+		}
+
+		/*
+		 *  backrefs for the upper level block aren't
+		 *  cached, add the block to pending list
+		 */
+		list_add_tail(&edge->list[UPPER], &cache->pending_edge);
+	} else {
+		/* Parent node already cached */
+		upper = rb_entry(rb_node, struct btrfs_backref_node,
+				 rb_node);
+		ASSERT(upper->checked);
+		INIT_LIST_HEAD(&edge->list[UPPER]);
+	}
+	btrfs_backref_link_edge(edge, cur, upper, LINK_LOWER);
+	return 0;
+}
+
+/*
+ * Handle indirect tree backref.
+ *
+ * Indirect tree backref means, we only know which tree the node belongs to.
+ * Need to do a tree search to find out parents. This is for TREE_BLOCK_REF
+ * backref (keyed or inlined).
+ *
+ * @ref_key:	The same as @ref_key in  handle_direct_tree_backref()
+ * @tree_key:	The first key of this tree block.
+ * @path:	A clean (released) path, to avoid allocating path every time
+ *		the function gets called.
+ */
+static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
+					struct btrfs_path *path,
+					struct btrfs_key *ref_key,
+					struct btrfs_key *tree_key,
+					struct btrfs_backref_node *cur)
+{
+	struct btrfs_fs_info *fs_info = cache->fs_info;
+	struct btrfs_backref_node *upper;
+	struct btrfs_backref_node *lower;
+	struct btrfs_backref_edge *edge;
+	struct extent_buffer *eb;
+	struct btrfs_root *root;
+	struct btrfs_key root_key;
+	struct rb_node *rb_node;
+	int level;
+	bool need_check = true;
+	int ret;
+
+	root_key.objectid = ref_key->offset;
+	root_key.type = BTRFS_ROOT_ITEM_KEY;
+	root_key.offset = (u64)-1;
+	root = btrfs_get_fs_root(fs_info, &root_key, false);
+	if (IS_ERR(root))
+		return PTR_ERR(root);
+	if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
+		cur->cowonly = 1;
+
+	if (btrfs_root_level(&root->root_item) == cur->level) {
+		/* tree root */
+		ASSERT(btrfs_root_bytenr(&root->root_item) == cur->bytenr);
+		if (btrfs_should_ignore_reloc_root(root)) {
+			btrfs_put_root(root);
+			list_add(&cur->list, &cache->useless_node);
+		} else {
+			cur->root = root;
+		}
+		return 0;
+	}
+
+	level = cur->level + 1;
+
+	/* Search the tree to find parent blocks referring the block. */
+	path->search_commit_root = 1;
+	path->skip_locking = 1;
+	path->lowest_level = level;
+	ret = btrfs_search_slot(NULL, root, tree_key, path, 0, 0);
+	path->lowest_level = 0;
+	if (ret < 0) {
+		btrfs_put_root(root);
+		return ret;
+	}
+	if (ret > 0 && path->slots[level] > 0)
+		path->slots[level]--;
+
+	eb = path->nodes[level];
+	if (btrfs_node_blockptr(eb, path->slots[level]) != cur->bytenr) {
+		btrfs_err(fs_info,
+"couldn't find block (%llu) (level %d) in tree (%llu) with key (%llu %u %llu)",
+			  cur->bytenr, level - 1, root->root_key.objectid,
+			  tree_key->objectid, tree_key->type, tree_key->offset);
+		btrfs_put_root(root);
+		ret = -ENOENT;
+		goto out;
+	}
+	lower = cur;
+
+	/* Add all nodes and edges in the path */
+	for (; level < BTRFS_MAX_LEVEL; level++) {
+		if (!path->nodes[level]) {
+			ASSERT(btrfs_root_bytenr(&root->root_item) ==
+			       lower->bytenr);
+			if (btrfs_should_ignore_reloc_root(root)) {
+				btrfs_put_root(root);
+				list_add(&lower->list, &cache->useless_node);
+			} else {
+				lower->root = root;
+			}
+			break;
+		}
+
+		edge = btrfs_backref_alloc_edge(cache);
+		if (!edge) {
+			btrfs_put_root(root);
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		eb = path->nodes[level];
+		rb_node = simple_search(&cache->rb_root, eb->start);
+		if (!rb_node) {
+			upper = btrfs_backref_alloc_node(cache, eb->start,
+							 lower->level + 1);
+			if (!upper) {
+				btrfs_put_root(root);
+				btrfs_backref_free_edge(cache, edge);
+				ret = -ENOMEM;
+				goto out;
+			}
+			upper->owner = btrfs_header_owner(eb);
+			if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
+				upper->cowonly = 1;
+
+			/*
+			 * if we know the block isn't shared we can avoid
+			 * checking its backrefs.
+			 */
+			if (btrfs_block_can_be_shared(root, eb))
+				upper->checked = 0;
+			else
+				upper->checked = 1;
+
+			/*
+			 * add the block to pending list if we need to check its
+			 * backrefs, we only do this once while walking up a
+			 * tree as we will catch anything else later on.
+			 */
+			if (!upper->checked && need_check) {
+				need_check = false;
+				list_add_tail(&edge->list[UPPER],
+					      &cache->pending_edge);
+			} else {
+				if (upper->checked)
+					need_check = true;
+				INIT_LIST_HEAD(&edge->list[UPPER]);
+			}
+		} else {
+			upper = rb_entry(rb_node, struct btrfs_backref_node,
+					 rb_node);
+			ASSERT(upper->checked);
+			INIT_LIST_HEAD(&edge->list[UPPER]);
+			if (!upper->owner)
+				upper->owner = btrfs_header_owner(eb);
+		}
+		btrfs_backref_link_edge(edge, lower, upper, LINK_LOWER);
+
+		if (rb_node) {
+			btrfs_put_root(root);
+			break;
+		}
+		lower = upper;
+		upper = NULL;
+	}
+out:
+	btrfs_release_path(path);
+	return ret;
+}
+
+int btrfs_backref_add_tree_node(struct btrfs_backref_cache *cache,
+				struct btrfs_path *path,
+				struct btrfs_backref_iter *iter,
+				struct btrfs_key *node_key,
+				struct btrfs_backref_node *cur)
+{
+	struct btrfs_fs_info *fs_info = cache->fs_info;
+	struct btrfs_backref_edge *edge;
+	struct btrfs_backref_node *exist;
+	int ret;
+
+	ret = btrfs_backref_iter_start(iter, cur->bytenr);
+	if (ret < 0)
+		return ret;
+	/*
+	 * We skip the first btrfs_tree_block_info, as we don't use the key
+	 * stored in it, but fetch it from the tree block.
+	 */
+	if (btrfs_backref_has_tree_block_info(iter)) {
+		ret = btrfs_backref_iter_next(iter);
+		if (ret < 0)
+			goto out;
+		/* No extra backref? This means the tree block is corrupted */
+		if (ret > 0) {
+			ret = -EUCLEAN;
+			goto out;
+		}
+	}
+	WARN_ON(cur->checked);
+	if (!list_empty(&cur->upper)) {
+		/*
+		 * the backref was added previously when processing
+		 * backref of type BTRFS_TREE_BLOCK_REF_KEY
+		 */
+		ASSERT(list_is_singular(&cur->upper));
+		edge = list_entry(cur->upper.next, struct btrfs_backref_edge,
+				  list[LOWER]);
+		ASSERT(list_empty(&edge->list[UPPER]));
+		exist = edge->node[UPPER];
+		/*
+		 * add the upper level block to pending list if we need to
+		 * check its backrefs
+		 */
+		if (!exist->checked)
+			list_add_tail(&edge->list[UPPER], &cache->pending_edge);
+	} else {
+		exist = NULL;
+	}
+
+	for (; ret == 0; ret = btrfs_backref_iter_next(iter)) {
+		struct extent_buffer *eb;
+		struct btrfs_key key;
+		int type;
+
+		cond_resched();
+		eb = btrfs_backref_get_eb(iter);
+
+		key.objectid = iter->bytenr;
+		if (btrfs_backref_iter_is_inline_ref(iter)) {
+			struct btrfs_extent_inline_ref *iref;
+
+			/* update key for inline back ref */
+			iref = (struct btrfs_extent_inline_ref *)
+				((unsigned long)iter->cur_ptr);
+			type = btrfs_get_extent_inline_ref_type(eb, iref,
+							BTRFS_REF_TYPE_BLOCK);
+			if (type == BTRFS_REF_TYPE_INVALID) {
+				ret = -EUCLEAN;
+				goto out;
+			}
+			key.type = type;
+			key.offset = btrfs_extent_inline_ref_offset(eb, iref);
+		} else {
+			key.type = iter->cur_key.type;
+			key.offset = iter->cur_key.offset;
+		}
+
+		/*
+		 * Parent node found and matches current inline ref, no need to
+		 * rebuild this node for this inline ref.
+		 */
+		if (exist &&
+		    ((key.type == BTRFS_TREE_BLOCK_REF_KEY &&
+		      exist->owner == key.offset) ||
+		     (key.type == BTRFS_SHARED_BLOCK_REF_KEY &&
+		      exist->bytenr == key.offset))) {
+			exist = NULL;
+			continue;
+		}
+
+		/* SHARED_BLOCK_REF means key.offset is the parent bytenr */
+		if (key.type == BTRFS_SHARED_BLOCK_REF_KEY) {
+			ret = handle_direct_tree_backref(cache, &key, cur);
+			if (ret < 0)
+				goto out;
+			continue;
+		} else if (unlikely(key.type == BTRFS_EXTENT_REF_V0_KEY)) {
+			ret = -EINVAL;
+			btrfs_print_v0_err(fs_info);
+			btrfs_handle_fs_error(fs_info, ret, NULL);
+			goto out;
+		} else if (key.type != BTRFS_TREE_BLOCK_REF_KEY) {
+			continue;
+		}
+
+		/*
+		 * key.type == BTRFS_TREE_BLOCK_REF_KEY, inline ref offset
+		 * means the root objectid. We need to search the tree to get
+		 * its parent bytenr.
+		 */
+		ret = handle_indirect_tree_backref(cache, path, &key, node_key,
+						   cur);
+		if (ret < 0)
+			goto out;
+	}
+	ret = 0;
+	cur->checked = 1;
+	WARN_ON(exist);
+out:
+	btrfs_backref_iter_release(iter);
+	return ret;
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index c355ca816349..929342b99e2b 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -374,4 +374,19 @@ static inline void btrfs_backref_panic(struct btrfs_fs_info *fs_info,
 		    bytenr);
 }
 
+/*
+ * Add backref node @cur into @cache.
+ *
+ * NOTE: Even if the function returned 0, @cur is not yet cached as its upper
+ *	 links aren't yet bi-directional. Needs to finish such links.
+ *
+ * @path:	Released path for indirect tree backref lookup
+ * @iter:	Released backref iter for extent tree search
+ * @node_key:	The first key of the tree block
+ */
+int btrfs_backref_add_tree_node(struct btrfs_backref_cache *cache,
+				struct btrfs_path *path,
+				struct btrfs_backref_iter *iter,
+				struct btrfs_key *node_key,
+				struct btrfs_backref_node *cur);
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index c44a797b3b25..d8ab93425b16 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -377,361 +377,6 @@ static struct btrfs_root *read_fs_root(struct btrfs_fs_info *fs_info,
 	return btrfs_get_fs_root(fs_info, &key, false);
 }
 
-/*
- * Handle direct tree backref.
- *
- * Direct tree backref means, the backref item shows its parent bytenr
- * directly. This is for SHARED_BLOCK_REF backref (keyed or inlined).
- *
- * @ref_key:	The converted backref key.
- *		For keyed backref, it's the item key.
- *		For inlined backref, objectid is the bytenr,
- *		type is btrfs_inline_ref_type, offset is
- *		btrfs_inline_ref_offset.
- */
-static int handle_direct_tree_backref(struct btrfs_backref_cache *cache,
-				      struct btrfs_key *ref_key,
-				      struct btrfs_backref_node *cur)
-{
-	struct btrfs_backref_edge *edge;
-	struct btrfs_backref_node *upper;
-	struct rb_node *rb_node;
-
-	ASSERT(ref_key->type == BTRFS_SHARED_BLOCK_REF_KEY);
-
-	/* Only reloc root uses backref pointing to itself */
-	if (ref_key->objectid == ref_key->offset) {
-		struct btrfs_root *root;
-
-		cur->is_reloc_root = 1;
-		/* Only reloc backref cache cares exact root */
-		if (cache->is_reloc) {
-			root = find_reloc_root(cache->fs_info, cur->bytenr);
-			if (WARN_ON(!root))
-				return -ENOENT;
-			cur->root = root;
-		} else {
-			/*
-			 * For generic purpose backref cache, reloc root node
-			 * is useless.
-			 */
-			list_add(&cur->list, &cache->useless_node);
-		}
-		return 0;
-	}
-
-	edge = btrfs_backref_alloc_edge(cache);
-	if (!edge)
-		return -ENOMEM;
-
-	rb_node = simple_search(&cache->rb_root, ref_key->offset);
-	if (!rb_node) {
-		/* Parent node not yet cached */
-		upper = btrfs_backref_alloc_node(cache, ref_key->offset,
-					   cur->level + 1);
-		if (!upper) {
-			btrfs_backref_free_edge(cache, edge);
-			return -ENOMEM;
-		}
-
-		/*
-		 *  backrefs for the upper level block isn't
-		 *  cached, add the block to pending list
-		 */
-		list_add_tail(&edge->list[UPPER], &cache->pending_edge);
-	} else {
-		/* Parent node already cached */
-		upper = rb_entry(rb_node, struct btrfs_backref_node,
-				 rb_node);
-		ASSERT(upper->checked);
-		INIT_LIST_HEAD(&edge->list[UPPER]);
-	}
-	btrfs_backref_link_edge(edge, cur, upper, LINK_LOWER);
-	return 0;
-}
-
-/*
- * Handle indirect tree backref.
- *
- * Indirect tree backref means, we only know which tree the node belongs to.
- * Need to do a tree search to find out parents. This is for TREE_BLOCK_REF
- * backref (keyed or inlined).
- *
- * @ref_key:	The same as @ref_key in  handle_direct_tree_backref()
- * @tree_key:	The first key of this tree block.
- * @path:	A clean (released) path, to avoid allocating path everytime
- *		the function get called.
- */
-static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
-					struct btrfs_path *path,
-					struct btrfs_key *ref_key,
-					struct btrfs_key *tree_key,
-					struct btrfs_backref_node *cur)
-{
-	struct btrfs_fs_info *fs_info = cache->fs_info;
-	struct btrfs_backref_node *upper;
-	struct btrfs_backref_node *lower;
-	struct btrfs_backref_edge *edge;
-	struct extent_buffer *eb;
-	struct btrfs_root *root;
-	struct btrfs_key root_key;
-	struct rb_node *rb_node;
-	int level;
-	bool need_check = true;
-	int ret;
-
-	root_key.objectid = ref_key->offset;
-	root_key.type = BTRFS_ROOT_ITEM_KEY;
-	root_key.offset = (u64)-1;
-	root = btrfs_get_fs_root(fs_info, &root_key, false);
-	if (IS_ERR(root))
-		return PTR_ERR(root);
-	if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
-		cur->cowonly = 1;
-
-	if (btrfs_root_level(&root->root_item) == cur->level) {
-		/* tree root */
-		ASSERT(btrfs_root_bytenr(&root->root_item) == cur->bytenr);
-		if (btrfs_should_ignore_reloc_root(root)) {
-			btrfs_put_root(root);
-			list_add(&cur->list, &cache->useless_node);
-		} else {
-			cur->root = root;
-		}
-		return 0;
-	}
-
-	level = cur->level + 1;
-
-	/* Search the tree to find parent blocks referring the block. */
-	path->search_commit_root = 1;
-	path->skip_locking = 1;
-	path->lowest_level = level;
-	ret = btrfs_search_slot(NULL, root, tree_key, path, 0, 0);
-	path->lowest_level = 0;
-	if (ret < 0) {
-		btrfs_put_root(root);
-		return ret;
-	}
-	if (ret > 0 && path->slots[level] > 0)
-		path->slots[level]--;
-
-	eb = path->nodes[level];
-	if (btrfs_node_blockptr(eb, path->slots[level]) != cur->bytenr) {
-		btrfs_err(fs_info,
-"couldn't find block (%llu) (level %d) in tree (%llu) with key (%llu %u %llu)",
-			  cur->bytenr, level - 1, root->root_key.objectid,
-			  tree_key->objectid, tree_key->type, tree_key->offset);
-		btrfs_put_root(root);
-		ret = -ENOENT;
-		goto out;
-	}
-	lower = cur;
-
-	/* Add all nodes and edges in the path */
-	for (; level < BTRFS_MAX_LEVEL; level++) {
-		if (!path->nodes[level]) {
-			ASSERT(btrfs_root_bytenr(&root->root_item) ==
-			       lower->bytenr);
-			if (btrfs_should_ignore_reloc_root(root)) {
-				btrfs_put_root(root);
-				list_add(&lower->list, &cache->useless_node);
-			} else {
-				lower->root = root;
-			}
-			break;
-		}
-
-		edge = btrfs_backref_alloc_edge(cache);
-		if (!edge) {
-			btrfs_put_root(root);
-			ret = -ENOMEM;
-			goto out;
-		}
-
-		eb = path->nodes[level];
-		rb_node = simple_search(&cache->rb_root, eb->start);
-		if (!rb_node) {
-			upper = btrfs_backref_alloc_node(cache, eb->start,
-							 lower->level + 1);
-			if (!upper) {
-				btrfs_put_root(root);
-				btrfs_backref_free_edge(cache, edge);
-				ret = -ENOMEM;
-				goto out;
-			}
-			upper->owner = btrfs_header_owner(eb);
-			if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
-				upper->cowonly = 1;
-
-			/*
-			 * if we know the block isn't shared we can void
-			 * checking its backrefs.
-			 */
-			if (btrfs_block_can_be_shared(root, eb))
-				upper->checked = 0;
-			else
-				upper->checked = 1;
-
-			/*
-			 * add the block to pending list if we need check its
-			 * backrefs, we only do this once while walking up a
-			 * tree as we will catch anything else later on.
-			 */
-			if (!upper->checked && need_check) {
-				need_check = false;
-				list_add_tail(&edge->list[UPPER],
-					      &cache->pending_edge);
-			} else {
-				if (upper->checked)
-					need_check = true;
-				INIT_LIST_HEAD(&edge->list[UPPER]);
-			}
-		} else {
-			upper = rb_entry(rb_node, struct btrfs_backref_node,
-					 rb_node);
-			ASSERT(upper->checked);
-			INIT_LIST_HEAD(&edge->list[UPPER]);
-			if (!upper->owner)
-				upper->owner = btrfs_header_owner(eb);
-		}
-		btrfs_backref_link_edge(edge, lower, upper, LINK_LOWER);
-
-		if (rb_node) {
-			btrfs_put_root(root);
-			break;
-		}
-		lower = upper;
-		upper = NULL;
-	}
-out:
-	btrfs_release_path(path);
-	return ret;
-}
-
-static int handle_one_tree_block(struct btrfs_backref_cache *cache,
-				 struct btrfs_path *path,
-				 struct btrfs_backref_iter *iter,
-				 struct btrfs_key *node_key,
-				 struct btrfs_backref_node *cur)
-{
-	struct btrfs_fs_info *fs_info = cache->fs_info;
-	struct btrfs_backref_edge *edge;
-	struct btrfs_backref_node *exist;
-	int ret;
-
-	ret = btrfs_backref_iter_start(iter, cur->bytenr);
-	if (ret < 0)
-		return ret;
-	/*
-	 * We skip the first btrfs_tree_block_info, as we don't use the key
-	 * stored in it, but fetch it from the tree block.
-	 */
-	if (btrfs_backref_has_tree_block_info(iter)) {
-		ret = btrfs_backref_iter_next(iter);
-		if (ret < 0)
-			goto out;
-		/* No extra backref? This means the tree block is corrupted */
-		if (ret > 0) {
-			ret = -EUCLEAN;
-			goto out;
-		}
-	}
-	WARN_ON(cur->checked);
-	if (!list_empty(&cur->upper)) {
-		/*
-		 * the backref was added previously when processing
-		 * backref of type BTRFS_TREE_BLOCK_REF_KEY
-		 */
-		ASSERT(list_is_singular(&cur->upper));
-		edge = list_entry(cur->upper.next, struct btrfs_backref_edge,
-				  list[LOWER]);
-		ASSERT(list_empty(&edge->list[UPPER]));
-		exist = edge->node[UPPER];
-		/*
-		 * add the upper level block to pending list if we need
-		 * check its backrefs
-		 */
-		if (!exist->checked)
-			list_add_tail(&edge->list[UPPER], &cache->pending_edge);
-	} else {
-		exist = NULL;
-	}
-
-	for (; ret == 0; ret = btrfs_backref_iter_next(iter)) {
-		struct extent_buffer *eb;
-		struct btrfs_key key;
-		int type;
-
-		cond_resched();
-		eb = btrfs_backref_get_eb(iter);
-
-		key.objectid = iter->bytenr;
-		if (btrfs_backref_iter_is_inline_ref(iter)) {
-			struct btrfs_extent_inline_ref *iref;
-
-			/* update key for inline back ref */
-			iref = (struct btrfs_extent_inline_ref *)
-				((unsigned long)iter->cur_ptr);
-			type = btrfs_get_extent_inline_ref_type(eb, iref,
-							BTRFS_REF_TYPE_BLOCK);
-			if (type == BTRFS_REF_TYPE_INVALID) {
-				ret = -EUCLEAN;
-				goto out;
-			}
-			key.type = type;
-			key.offset = btrfs_extent_inline_ref_offset(eb, iref);
-		} else {
-			key.type = iter->cur_key.type;
-			key.offset = iter->cur_key.offset;
-		}
-
-		/*
-		 * Parent node found and matches current inline ref, no need to
-		 * rebuild this node for this inline ref.
-		 */
-		if (exist &&
-		    ((key.type == BTRFS_TREE_BLOCK_REF_KEY &&
-		      exist->owner == key.offset) ||
-		     (key.type == BTRFS_SHARED_BLOCK_REF_KEY &&
-		      exist->bytenr == key.offset))) {
-			exist = NULL;
-			continue;
-		}
-
-		/* SHARED_BLOCK_REF means key.offset is the parent bytenr */
-		if (key.type == BTRFS_SHARED_BLOCK_REF_KEY) {
-			ret = handle_direct_tree_backref(cache, &key, cur);
-			if (ret < 0)
-				goto out;
-			continue;
-		} else if (unlikely(key.type == BTRFS_EXTENT_REF_V0_KEY)) {
-			ret = -EINVAL;
-			btrfs_print_v0_err(fs_info);
-			btrfs_handle_fs_error(fs_info, ret, NULL);
-			goto out;
-		} else if (key.type != BTRFS_TREE_BLOCK_REF_KEY) {
-			continue;
-		}
-
-		/*
-		 * key.type == BTRFS_TREE_BLOCK_REF_KEY, inline ref offset
-		 * means the root objectid. We need to search the tree to get
-		 * its parent bytenr.
-		 */
-		ret = handle_indirect_tree_backref(cache, path, &key, node_key,
-						   cur);
-		if (ret < 0)
-			goto out;
-	}
-	ret = 0;
-	cur->checked = 1;
-	WARN_ON(exist);
-out:
-	btrfs_backref_iter_release(iter);
-	return ret;
-}
-
 /*
  * In handle_one_tree_backref(), we have only linked the lower node to the edge,
  * but the upper node hasn't been linked to the edge.
@@ -969,7 +614,8 @@ struct btrfs_backref_node *build_backref_tree(struct reloc_control *rc,
 
 	/* Breadth-first search to build backref cache */
 	do {
-		ret = handle_one_tree_block(cache, path, iter, node_key, cur);
+		ret = btrfs_backref_add_tree_node(cache, path, iter, node_key,
+						  cur);
 		if (ret < 0) {
 			err = ret;
 			goto out;
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 30/39] btrfs: Rename finish_upper_links() to btrfs_backref_finish_upper_links() and move it to backref.c
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (28 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 29/39] btrfs: Rename handle_one_tree_block() to btrfs_backref_add_tree_node() and move it to backref.c Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 31/39] btrfs: relocation: Move error handling of build_backref_tree() " Qu Wenruo
                   ` (11 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

This is the second major part of the generic backref cache. Move it to
backref.c so we can reuse it.
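
An editorial sketch of what the second phase buys us: after
btrfs_backref_add_tree_node(), edges are reachable only from their
lower node, so of the two walks below only the first works until this
function has run (fields as defined in backref.h):

	/* Towards parents: available right after phase one. */
	list_for_each_entry(edge, &node->upper, list[LOWER])
		upper = edge->node[UPPER];

	/* Towards children: only valid once
	 * btrfs_backref_finish_upper_links() has populated ->lower. */
	list_for_each_entry(edge, &node->lower, list[UPPER])
		lower = edge->node[LOWER];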

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c    | 102 +++++++++++++++++++++++++++++++++++++
 fs/btrfs/backref.h    |   5 ++
 fs/btrfs/relocation.c | 116 +-----------------------------------------
 3 files changed, 108 insertions(+), 115 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index b13cf6e144c8..5bc8c7d6145e 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2935,3 +2935,105 @@ int btrfs_backref_add_tree_node(struct btrfs_backref_cache *cache,
 	btrfs_backref_iter_release(iter);
 	return ret;
 }
+
+int btrfs_backref_finish_upper_links(struct btrfs_backref_cache *cache,
+				     struct btrfs_backref_node *start)
+{
+	struct list_head *useless_node = &cache->useless_node;
+	struct btrfs_backref_edge *edge;
+	struct rb_node *rb_node;
+	LIST_HEAD(pending_edge);
+
+	ASSERT(start->checked);
+
+	/* Insert this node to cache if it's not cowonly */
+	if (!start->cowonly) {
+		rb_node = simple_insert(&cache->rb_root, start->bytenr,
+					&start->rb_node);
+		if (rb_node)
+			btrfs_backref_panic(cache->fs_info, start->bytenr,
+					    -EEXIST);
+		list_add_tail(&start->lower, &cache->leaves);
+	}
+
+	/*
+	 * Use breadth first search to iterate all related edges.
+	 *
+	 * The start point is all the edges of this node
+	 */
+	list_for_each_entry(edge, &start->upper, list[LOWER])
+		list_add_tail(&edge->list[UPPER], &pending_edge);
+
+	while (!list_empty(&pending_edge)) {
+		struct btrfs_backref_node *upper;
+		struct btrfs_backref_node *lower;
+		struct rb_node *rb_node;
+
+		edge = list_first_entry(&pending_edge,
+				struct btrfs_backref_edge, list[UPPER]);
+		list_del_init(&edge->list[UPPER]);
+		upper = edge->node[UPPER];
+		lower = edge->node[LOWER];
+
+		/* Parent is detached, no need to keep any edges */
+		if (upper->detached) {
+			list_del(&edge->list[LOWER]);
+			btrfs_backref_free_edge(cache, edge);
+
+			/* Lower node is orphan, queue for cleanup */
+			if (list_empty(&lower->upper))
+				list_add(&lower->list, useless_node);
+			continue;
+		}
+
+		/*
+		 * All new nodes added in current build_backref_tree() haven't
+		 * been linked to the cache rb tree.
+		 * So if we have upper->rb_node populated, this means a cache
+		 * hit. We only need to link the edge, as @upper and all its
+		 * parent have already been linked.
+		 */
+		if (!RB_EMPTY_NODE(&upper->rb_node)) {
+			if (upper->lowest) {
+				list_del_init(&upper->lower);
+				upper->lowest = 0;
+			}
+
+			list_add_tail(&edge->list[UPPER], &upper->lower);
+			continue;
+		}
+
+		/* Sanity check, we shouldn't have any unchecked nodes */
+		if (!upper->checked) {
+			ASSERT(0);
+			return -EUCLEAN;
+		}
+
+		/* Sanity check, cowonly node has non-cowonly parent */
+		if (start->cowonly != upper->cowonly) {
+			ASSERT(0);
+			return -EUCLEAN;
+		}
+
+		/* Only cache non-cowonly (subvolume trees) tree blocks */
+		if (!upper->cowonly) {
+			rb_node = simple_insert(&cache->rb_root, upper->bytenr,
+						&upper->rb_node);
+			if (rb_node) {
+				btrfs_backref_panic(cache->fs_info,
+						upper->bytenr, -EEXIST);
+				return -EUCLEAN;
+			}
+		}
+
+		list_add_tail(&edge->list[UPPER], &upper->lower);
+
+		/*
+		 * Also queue all the parent edges of this uncached node
+		 * to finish the upper linkage
+		 */
+		list_for_each_entry(edge, &upper->upper, list[LOWER])
+			list_add_tail(&edge->list[UPPER], &pending_edge);
+	}
+	return 0;
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 929342b99e2b..415ab4a05bd8 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -379,6 +379,7 @@ static inline void btrfs_backref_panic(struct btrfs_fs_info *fs_info,
  *
  * NOTE: Even if the function returned 0, @cur is not yet cached as its upper
  *	 links aren't yet bi-directional. Needs to finish such links.
+ *	 Use btrfs_backref_finish_upper_links() to finish such linkage.
  *
  * @path:	Released path for indirect tree backref lookup
  * @iter:	Released backref iter for extent tree search
@@ -389,4 +390,8 @@ int btrfs_backref_add_tree_node(struct btrfs_backref_cache *cache,
 				struct btrfs_backref_iter *iter,
 				struct btrfs_key *node_key,
 				struct btrfs_backref_node *cur);
+
+/* Finish the upwards linkage created by btrfs_backref_add_tree_node(). */
+int btrfs_backref_finish_upper_links(struct btrfs_backref_cache *cache,
+				     struct btrfs_backref_node *start);
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index d8ab93425b16..cd2037421406 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -377,120 +377,6 @@ static struct btrfs_root *read_fs_root(struct btrfs_fs_info *fs_info,
 	return btrfs_get_fs_root(fs_info, &key, false);
 }
 
-/*
- * In handle_one_tree_backref(), we have only linked the lower node to the edge,
- * but the upper node hasn't been linked to the edge.
- * This means we can only iterate through btrfs_backref_node::upper to reach
- * parent edges, but not through btrfs_backref_node::lower to reach children
- * edges.
- *
- * This function will finish the btrfs_backref_node::lower to related edges,
- * so that backref cache can be bi-directionally iterated.
- *
- * Also, this will add the nodes to backref cache for next run.
- */
-static int finish_upper_links(struct btrfs_backref_cache *cache,
-			      struct btrfs_backref_node *start)
-{
-	struct list_head *useless_node = &cache->useless_node;
-	struct btrfs_backref_edge *edge;
-	struct rb_node *rb_node;
-	LIST_HEAD(pending_edge);
-
-	ASSERT(start->checked);
-
-	/* Insert this node to cache if it's not cowonly */
-	if (!start->cowonly) {
-		rb_node = simple_insert(&cache->rb_root, start->bytenr,
-					&start->rb_node);
-		if (rb_node)
-			btrfs_backref_panic(cache->fs_info, start->bytenr,
-					    -EEXIST);
-		list_add_tail(&start->lower, &cache->leaves);
-	}
-
-	/*
-	 * Use breadth first search to iterate all related edges.
-	 *
-	 * The start point is all the edges of this node
-	 */
-	list_for_each_entry(edge, &start->upper, list[LOWER])
-		list_add_tail(&edge->list[UPPER], &pending_edge);
-
-	while (!list_empty(&pending_edge)) {
-		struct btrfs_backref_node *upper;
-		struct btrfs_backref_node *lower;
-		struct rb_node *rb_node;
-
-		edge = list_first_entry(&pending_edge,
-				struct btrfs_backref_edge, list[UPPER]);
-		list_del_init(&edge->list[UPPER]);
-		upper = edge->node[UPPER];
-		lower = edge->node[LOWER];
-
-		/* Parent is detached, no need to keep any edges */
-		if (upper->detached) {
-			list_del(&edge->list[LOWER]);
-			btrfs_backref_free_edge(cache, edge);
-
-			/* Lower node is orphan, queue for cleanup */
-			if (list_empty(&lower->upper))
-				list_add(&lower->list, useless_node);
-			continue;
-		}
-
-		/*
-		 * All new nodes added in current build_backref_tree() haven't
-		 * been linked to the cache rb tree.
-		 * So if we have upper->rb_node populated, this means a cache
-		 * hit. We only need to link the edge, as @upper and all its
-		 * parent have already been linked.
-		 */
-		if (!RB_EMPTY_NODE(&upper->rb_node)) {
-			if (upper->lowest) {
-				list_del_init(&upper->lower);
-				upper->lowest = 0;
-			}
-
-			list_add_tail(&edge->list[UPPER], &upper->lower);
-			continue;
-		}
-
-		/* Sanity check, we shouldn't have any unchecked nodes */
-		if (!upper->checked) {
-			ASSERT(0);
-			return -EUCLEAN;
-		}
-
-		/* Sanity check, cowonly node has non-cowonly parent */
-		if (start->cowonly != upper->cowonly) {
-			ASSERT(0);
-			return -EUCLEAN;
-		}
-
-		/* Only cache non-cowonly (subvolume trees) tree blocks */
-		if (!upper->cowonly) {
-			rb_node = simple_insert(&cache->rb_root, upper->bytenr,
-						&upper->rb_node);
-			if (rb_node) {
-				btrfs_backref_panic(cache->fs_info,
-						upper->bytenr, -EEXIST);
-				return -EUCLEAN;
-			}
-		}
-
-		list_add_tail(&edge->list[UPPER], &upper->lower);
-
-		/*
-		 * Also queue all the parent edges of this uncached node
-		 * to finish the upper linkage
-		 */
-		list_for_each_entry(edge, &upper->upper, list[LOWER])
-			list_add_tail(&edge->list[UPPER], &pending_edge);
-	}
-	return 0;
-}
-
 /*
  * For useless nodes, do two major clean ups:
  * - Cleanup the children edges and nodes
@@ -633,7 +519,7 @@ struct btrfs_backref_node *build_backref_tree(struct reloc_control *rc,
 	} while (edge);
 
 	/* Finish the upper linkage of newly added edges/nodes */
-	ret = finish_upper_links(cache, node);
+	ret = btrfs_backref_finish_upper_links(cache, node);
 	if (ret < 0) {
 		err = ret;
 		goto out;
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 31/39] btrfs: relocation: Move error handling of build_backref_tree() to backref.c
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (29 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 30/39] btrfs: Rename finish_upper_links() to btrfs_backref_finish_upper_links() " Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 32/39] btrfs: backref: Only ignore reloc roots for indirect backref resolve if the backref cache is for relocation purpose Qu Wenruo
                   ` (10 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

The error cleanup will be extracted into a new function,
btrfs_backref_error_cleanup(), moved to backref.c, and exported for
later use.
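
With this extraction all three exported building blocks are in place; a
hedged sketch of how a future caller (e.g. the qgroup code later in the
series) is expected to combine them:

	ret = btrfs_backref_add_tree_node(cache, path, iter, node_key, cur);
	if (ret >= 0)
		ret = btrfs_backref_finish_upper_links(cache, node);
	if (ret < 0) {
		btrfs_backref_error_cleanup(cache, node);
		return ERR_PTR(ret);
	}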

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c    | 54 +++++++++++++++++++++++++++++++++++++++++++
 fs/btrfs/backref.h    |  3 +++
 fs/btrfs/relocation.c | 48 +-------------------------------------
 3 files changed, 58 insertions(+), 47 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 5bc8c7d6145e..21d29d3d0a7e 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -3037,3 +3037,57 @@ int btrfs_backref_finish_upper_links(struct btrfs_backref_cache *cache,
 	}
 	return 0;
 }
+
+void btrfs_backref_error_cleanup(struct btrfs_backref_cache *cache,
+				 struct btrfs_backref_node *node)
+{
+	struct btrfs_backref_node *lower;
+	struct btrfs_backref_node *upper;
+	struct btrfs_backref_edge *edge;
+
+	while (!list_empty(&cache->useless_node)) {
+		lower = list_first_entry(&cache->useless_node,
+				   struct btrfs_backref_node, list);
+		list_del_init(&lower->list);
+	}
+	while (!list_empty(&cache->pending_edge)) {
+		edge = list_first_entry(&cache->pending_edge,
+				struct btrfs_backref_edge, list[UPPER]);
+		list_del(&edge->list[UPPER]);
+		list_del(&edge->list[LOWER]);
+		lower = edge->node[LOWER];
+		upper = edge->node[UPPER];
+		btrfs_backref_free_edge(cache, edge);
+
+		/*
+		 * Lower is no longer linked to any upper backref nodes
+		 * and isn't in the cache, we can free it ourselves.
+		 */
+		if (list_empty(&lower->upper) &&
+		    RB_EMPTY_NODE(&lower->rb_node))
+			list_add(&lower->list, &cache->useless_node);
+
+		if (!RB_EMPTY_NODE(&upper->rb_node))
+			continue;
+
+		/* Add this guy's upper edges to the list to process */
+		list_for_each_entry(edge, &upper->upper, list[LOWER])
+			list_add_tail(&edge->list[UPPER],
+				      &cache->pending_edge);
+		if (list_empty(&upper->upper))
+			list_add(&upper->list, &cache->useless_node);
+	}
+
+	while (!list_empty(&cache->useless_node)) {
+		lower = list_first_entry(&cache->useless_node,
+				   struct btrfs_backref_node, list);
+		list_del_init(&lower->list);
+		if (lower == node)
+			node = NULL;
+		btrfs_backref_free_node(cache, lower);
+	}
+
+	btrfs_backref_cleanup_node(cache, node);
+	ASSERT(list_empty(&cache->useless_node) &&
+	       list_empty(&cache->pending_edge));
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 415ab4a05bd8..fe2e055a49a9 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -394,4 +394,7 @@ int btrfs_backref_add_tree_node(struct btrfs_backref_cache *cache,
 /* Finish the upwards linkage created by btrfs_backref_add_tree_node(). */
 int btrfs_backref_finish_upper_links(struct btrfs_backref_cache *cache,
 				     struct btrfs_backref_node *start);
+
+void btrfs_backref_error_cleanup(struct btrfs_backref_cache *cache,
+				 struct btrfs_backref_node *node);
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index cd2037421406..15527a732fca 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -473,8 +473,6 @@ struct btrfs_backref_node *build_backref_tree(struct reloc_control *rc,
 	struct btrfs_backref_cache *cache = &rc->backref_cache;
 	struct btrfs_path *path; /* For searching parent of TREE_BLOCK_REF */
 	struct btrfs_backref_node *cur;
-	struct btrfs_backref_node *upper;
-	struct btrfs_backref_node *lower;
 	struct btrfs_backref_node *node = NULL;
 	struct btrfs_backref_edge *edge;
 	int ret;
@@ -531,51 +529,7 @@ struct btrfs_backref_node *build_backref_tree(struct reloc_control *rc,
 	btrfs_backref_iter_free(iter);
 	btrfs_free_path(path);
 	if (err) {
-		while (!list_empty(&cache->useless_node)) {
-			lower = list_first_entry(&cache->useless_node,
-					   struct btrfs_backref_node, list);
-			list_del_init(&lower->list);
-		}
-		while (!list_empty(&cache->pending_edge)) {
-			edge = list_first_entry(&cache->pending_edge,
-					struct btrfs_backref_edge, list[UPPER]);
-			list_del(&edge->list[UPPER]);
-			list_del(&edge->list[LOWER]);
-			lower = edge->node[LOWER];
-			upper = edge->node[UPPER];
-			btrfs_backref_free_edge(cache, edge);
-
-			/*
-			 * Lower is no longer linked to any upper backref nodes
-			 * and isn't in the cache, we can free it ourselves.
-			 */
-			if (list_empty(&lower->upper) &&
-			    RB_EMPTY_NODE(&lower->rb_node))
-				list_add(&lower->list, &cache->useless_node);
-
-			if (!RB_EMPTY_NODE(&upper->rb_node))
-				continue;
-
-			/* Add this guy's upper edges to the list to process */
-			list_for_each_entry(edge, &upper->upper, list[LOWER])
-				list_add_tail(&edge->list[UPPER],
-					      &cache->pending_edge);
-			if (list_empty(&upper->upper))
-				list_add(&upper->list, &cache->useless_node);
-		}
-
-		while (!list_empty(&cache->useless_node)) {
-			lower = list_first_entry(&cache->useless_node,
-					   struct btrfs_backref_node, list);
-			list_del_init(&lower->list);
-			if (lower == node)
-				node = NULL;
-			btrfs_backref_free_node(cache, lower);
-		}
-
-		btrfs_backref_cleanup_node(cache, node);
-		ASSERT(list_empty(&cache->useless_node) &&
-		       list_empty(&cache->pending_edge));
+		btrfs_backref_error_cleanup(cache, node);
 		return ERR_PTR(err);
 	}
 	ASSERT(!node || !node->detached);
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 32/39] btrfs: backref: Only ignore reloc roots for indirect backref resolve if the backref cache is for relocation purpose
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (30 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 31/39] btrfs: relocation: Move error handling of build_backref_tree() " Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 33/39] btrfs: qgroup: Introduce qgroup backref cache Qu Wenruo
                   ` (9 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

For relocation tree detection, the relocation backref cache uses
should_ignore_reloc_root(), which relies on relocation specific checks
like testing the DEAD_RELOC_ROOT bit.

However for the general-purpose backref cache, we can't rely on that
check, as it's possible that relocation is also running.

For the general-purpose backref cache, we instead detect a reloc root by
its SHARED_BLOCK_REF item: only a reloc root node has its parent bytenr
pointing back to itself.

In that case, the backref cache will mark the reloc root node useless,
dropping any child orphan nodes.
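
As a sketch of what that detection looks like (the identifier names
follow the backref code in this series; the actual check lives in the
direct tree backref handling path):

	/*
	 * Only a reloc root has a SHARED_BLOCK_REF backref whose parent
	 * bytenr (key->offset) points back at the block itself.
	 */
	if (ref_key->type == BTRFS_SHARED_BLOCK_REF_KEY &&
	    ref_key->offset == cur->bytenr) {
		/* Reloc root, queue it as useless to drop its subtree */
		list_add(&cur->list, &cache->useless_node);
	}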

So only call should_ignore_reloc_root() if the backref cache is for
relocation.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c | 16 ++++++++++++++--
 1 file changed, 14 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 21d29d3d0a7e..ccf39aec28f7 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2696,7 +2696,17 @@ static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 	if (btrfs_root_level(&root->root_item) == cur->level) {
 		/* tree root */
 		ASSERT(btrfs_root_bytenr(&root->root_item) == cur->bytenr);
-		if (btrfs_should_ignore_reloc_root(root)) {
+		/*
+		 * For the reloc backref cache, we may ignore reloc roots.
+		 * But for the general-purpose backref cache, we can't rely
+		 * on btrfs_should_ignore_reloc_root() as it may conflict with
+		 * a currently running relocation and lead to a missing root.
+		 *
+		 * For the general-purpose backref cache, reloc root detection
+		 * relies completely on the direct backref (key->offset is the
+		 * parent bytenr), thus only do such a check for the reloc cache.
+		 */
+		if (btrfs_should_ignore_reloc_root(root) && cache->is_reloc) {
 			btrfs_put_root(root);
 			list_add(&cur->list, &cache->useless_node);
 		} else {
@@ -2737,7 +2747,9 @@ static int handle_indirect_tree_backref(struct btrfs_backref_cache *cache,
 		if (!path->nodes[level]) {
 			ASSERT(btrfs_root_bytenr(&root->root_item) ==
 			       lower->bytenr);
-			if (btrfs_should_ignore_reloc_root(root)) {
+			/* Same as previous should_ignore_reloc_root() call */
+			if (btrfs_should_ignore_reloc_root(root) &&
+			    cache->is_reloc) {
 				btrfs_put_root(root);
 				list_add(&lower->list, &cache->useless_node);
 			} else {
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 33/39] btrfs: qgroup: Introduce qgroup backref cache
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (31 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 32/39] btrfs: backref: Only ignore reloc roots for indirect backref resolve if the backref cache is for relocation purpose Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 34/39] btrfs: qgroup: Introduce qgroup_backref_cache_build() function Qu Wenruo
                   ` (8 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

This adds two new members to btrfs_fs_info:
- struct btrfs_backref_cache *qgroup_backref_cache
  It only gets initialized at qgroup enable time.
  This is to avoid further bloating up the fs_info structure.

- struct mutex qgroup_backref_lock
  This is initialized at fs_info initialization time.

This patch only introduces the skeleton, just the initialization and
cleanup of these newly introduced members, with no usage of them yet.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/ctree.h   |  2 ++
 fs/btrfs/disk-io.c |  1 +
 fs/btrfs/qgroup.c  | 53 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 56 insertions(+)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 01b03e8a671f..70e90b549d3e 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -888,6 +888,8 @@ struct btrfs_fs_info {
 	struct btrfs_workqueue *qgroup_rescan_workers;
 	struct completion qgroup_rescan_completion;
 	struct btrfs_work qgroup_rescan_work;
+	struct mutex qgroup_backref_lock;
+	struct btrfs_backref_cache *qgroup_backref_cache;
 	bool qgroup_rescan_running;	/* protected by qgroup_rescan_lock */
 
 	/* filesystem state */
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index a6cb5cbbdb9f..e79d287c362f 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2107,6 +2107,7 @@ static void btrfs_init_qgroup(struct btrfs_fs_info *fs_info)
 	fs_info->qgroup_ulist = NULL;
 	fs_info->qgroup_rescan_running = false;
 	mutex_init(&fs_info->qgroup_rescan_lock);
+	mutex_init(&fs_info->qgroup_backref_lock);
 }
 
 static int btrfs_init_workqueues(struct btrfs_fs_info *fs_info,
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index c3888fb367e7..31b320860b71 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -339,6 +339,19 @@ int btrfs_read_qgroup_config(struct btrfs_fs_info *fs_info)
 	if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags))
 		return 0;
 
+	mutex_lock(&fs_info->qgroup_backref_lock);
+	if (!fs_info->qgroup_backref_cache) {
+		fs_info->qgroup_backref_cache = kzalloc(
+				sizeof(struct btrfs_backref_cache), GFP_KERNEL);
+		if (!fs_info->qgroup_backref_cache) {
+			mutex_unlock(&fs_info->qgroup_backref_lock);
+			return -ENOMEM;
+		}
+		btrfs_backref_init_cache(fs_info,
+				fs_info->qgroup_backref_cache, 0);
+	}
+	mutex_unlock(&fs_info->qgroup_backref_lock);
+
 	fs_info->qgroup_ulist = ulist_alloc(GFP_KERNEL);
 	if (!fs_info->qgroup_ulist) {
 		ret = -ENOMEM;
@@ -528,6 +541,14 @@ void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)
 	 */
 	ulist_free(fs_info->qgroup_ulist);
 	fs_info->qgroup_ulist = NULL;
+
+	mutex_lock(&fs_info->qgroup_backref_lock);
+	if (fs_info->qgroup_backref_cache) {
+		btrfs_backref_release_cache(fs_info->qgroup_backref_cache);
+		kfree(fs_info->qgroup_backref_cache);
+		fs_info->qgroup_backref_cache = NULL;
+	}
+	mutex_unlock(&fs_info->qgroup_backref_lock);
 }
 
 static int add_qgroup_relation_item(struct btrfs_trans_handle *trans, u64 src,
@@ -891,6 +912,20 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info)
 	int slot;
 
 	mutex_lock(&fs_info->qgroup_ioctl_lock);
+	mutex_lock(&fs_info->qgroup_backref_lock);
+	if (!fs_info->qgroup_backref_cache) {
+		fs_info->qgroup_backref_cache = kzalloc(
+				sizeof(struct btrfs_backref_cache), GFP_KERNEL);
+		if (!fs_info->qgroup_backref_cache) {
+			mutex_unlock(&fs_info->qgroup_backref_lock);
+			ret = -ENOMEM;
+			goto out;
+		}
+		btrfs_backref_init_cache(fs_info, fs_info->qgroup_backref_cache,
+					 0);
+	}
+	mutex_unlock(&fs_info->qgroup_backref_lock);
+
 	if (fs_info->quota_root)
 		goto out;
 
@@ -1095,6 +1130,14 @@ int btrfs_quota_disable(struct btrfs_fs_info *fs_info)
 		goto end_trans;
 	}
 
+	mutex_lock(&fs_info->qgroup_backref_lock);
+	if (fs_info->qgroup_backref_cache) {
+		btrfs_backref_release_cache(fs_info->qgroup_backref_cache);
+		kfree(fs_info->qgroup_backref_cache);
+		fs_info->qgroup_backref_cache = NULL;
+	}
+	mutex_unlock(&fs_info->qgroup_backref_lock);
+
 	list_del(&quota_root->dirty_list);
 
 	btrfs_tree_lock(quota_root->node);
@@ -2561,6 +2604,16 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans)
 	}
 	trace_qgroup_num_dirty_extents(fs_info, trans->transid,
 				       num_dirty_extents);
+
+	/*
+	 * Qgroup accounting happens at commit transaction time, thus the
+	 * backref cache will no longer be valid in the next transaction.
+	 * Free it up.
+	 */
+	mutex_lock(&fs_info->qgroup_backref_lock);
+	if (fs_info->qgroup_backref_cache)
+		btrfs_backref_release_cache(fs_info->qgroup_backref_cache);
+	mutex_unlock(&fs_info->qgroup_backref_lock);
 	return ret;
 }
 
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 34/39] btrfs: qgroup: Introduce qgroup_backref_cache_build() function
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (32 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 33/39] btrfs: qgroup: Introduce qgroup backref cache Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 35/39] btrfs: qgroup: Introduce a function to iterate through backref_cache to find all parents for specified node Qu Wenruo
                   ` (7 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

This is the main function to build the general-purpose backref cache
for qgroup.

The major differences from the relocation backref cache are:
- No processed extent_io_tree
  As we don't need to track relocation progress

- Don't care about reloc roots
  Since reloc roots don't contribute to qgroup accounting, they are
  queued to the useless list

- Always populate backref_node::owner
  This is the main index for the qgroup backref cache to find out the
  owner of one tree block (see the sketch below).
  The @owner parameter comes from the tree block header owner, which
  doesn't reflect reloc trees. But the backref cache mechanism will
  detect reloc trees and remove them from the cache, thus the header
  owner is very accurate for qgroup usage.
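
As a sketch of how that owner index is consumed later in this series
(taken from the root iteration patch):

	/* A node with no parent is a subvolume tree root */
	if (list_empty(&node->upper)) {
		ASSERT(is_fstree(node->owner));
		ret = ulist_add(roots, node->owner, 0, GFP_NOFS);
	}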

This function will be utilized in incoming patches.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/qgroup.c | 154 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 154 insertions(+)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 31b320860b71..d7d50943f482 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -22,6 +22,7 @@
 #include "extent_io.h"
 #include "qgroup.h"
 #include "block-group.h"
+#include "misc.h"
 
 /* TODO XXX FIXME
  *  - subvol delete -> delete when ref goes to 0? delete limits also?
@@ -1606,6 +1607,159 @@ int btrfs_qgroup_trace_extent_nolock(struct btrfs_fs_info *fs_info,
 	return 0;
 }
 
+static bool handle_useless_nodes(struct btrfs_backref_cache *cache,
+				 struct btrfs_backref_node *node)
+{
+	struct list_head *useless_node = &cache->useless_node;
+	bool ret = false;
+
+	while (!list_empty(useless_node)) {
+		struct btrfs_backref_node *cur;
+
+		cur = list_first_entry(useless_node, struct btrfs_backref_node,
+				 list);
+		list_del_init(&cur->list);
+
+		/* Only tree root nodes can be added to @useless_node */
+		ASSERT(list_empty(&cur->upper));
+
+		if (cur == node)
+			ret = true;
+
+		/* The node is the lowest node */
+		if (cur->lowest) {
+			list_del_init(&cur->lower);
+			cur->lowest = 0;
+		}
+
+		/* Cleanup the lower edges */
+		while (!list_empty(&cur->lower)) {
+			struct btrfs_backref_edge *edge;
+			struct btrfs_backref_node *lower;
+
+			edge = list_entry(cur->lower.next,
+					struct btrfs_backref_edge, list[UPPER]);
+			list_del(&edge->list[UPPER]);
+			list_del(&edge->list[LOWER]);
+			lower = edge->node[LOWER];
+			btrfs_backref_free_edge(cache, edge);
+
+			/* Child node is also orphan, queue for cleanup */
+			if (list_empty(&lower->upper))
+				list_add(&lower->list, useless_node);
+		}
+
+		/*
+		 * Backref nodes for tree leaves are deleted from the cache.
+		 * Backref nodes for upper level tree blocks are left in the
+		 * cache to avoid unnecessary backref lookup.
+		 */
+		if (cur->level > 0) {
+			list_add(&cur->list, &cache->detached);
+			cur->detached = 1;
+		} else {
+			rb_erase(&cur->rb_node, &cache->rb_root);
+			btrfs_backref_free_node(cache, cur);
+		}
+	}
+	return ret;
+}
+
+/*
+ * Build backref cache for one tree block.
+ *
+ * @node_key:	The first key of the tree block.
+ * @level:	Tree level
+ * @bytenr:	The bytenr of the tree block.
+ * @owner:	The owner from btrfs_header.
+ *
+ * Caller must ensure the tree block belongs to a subvolume tree.
+ *
+ * Return the cached backref_node if the tree block is useful for owner
+ * iteration.
+ *
+ * Return NULL if the tree block doesn't make sense for owner iteration.
+ * (E.g. the tree block belongs to a reloc tree)
+ *
+ * Return ERR_PTR() if something wrong happened.
+ */
+static struct btrfs_backref_node *qgroup_backref_cache_build(
+		struct btrfs_fs_info *fs_info,
+		struct btrfs_key *node_key,
+		int level, u64 bytenr, u64 owner)
+{
+	struct btrfs_backref_iter *iter;
+	struct btrfs_backref_cache *cache = fs_info->qgroup_backref_cache;
+	struct btrfs_path *path;
+	struct btrfs_backref_node *cur;
+	struct btrfs_backref_node *node = NULL;
+	struct btrfs_backref_edge *edge;
+	struct rb_node *rb_node;
+	int ret;
+
+	ASSERT(is_fstree(owner));
+
+	rb_node = simple_search(&cache->rb_root, bytenr);
+	/* Already cached, return the cached node directly */
+	if (rb_node)
+		return rb_entry(rb_node, struct btrfs_backref_node, rb_node);
+
+	iter = btrfs_backref_iter_alloc(fs_info, GFP_NOFS);
+	if (!iter)
+		return ERR_PTR(-ENOMEM);
+	path = btrfs_alloc_path();
+	if (!path) {
+		ret = -ENOMEM;
+		goto out;
+	}
+
+	node = btrfs_backref_alloc_node(cache, bytenr, level);
+	if (!node) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	node->owner = owner;
+	node->lowest = 1;
+	cur = node;
+
+	/* Breadth-first search to build backref cache */
+	do {
+		ret = btrfs_backref_add_tree_node(cache, path, iter, node_key,
+						  cur);
+		if (ret < 0)
+			goto out;
+		edge = list_first_entry_or_null(&cache->pending_edge,
+				struct btrfs_backref_edge, list[UPPER]);
+		/*
+		 * the pending list isn't empty, take the first block to
+		 * process.
+		 */
+		if (edge) {
+			list_del_init(&edge->list[UPPER]);
+			cur = edge->node[UPPER];
+		}
+	} while (edge);
+
+	/* Finish the upper linkage of newly added edges/nodes */
+	ret = btrfs_backref_finish_upper_links(cache, node);
+	if (ret < 0)
+		goto out;
+
+	if (handle_useless_nodes(cache, node))
+		node = NULL;
+out:
+	btrfs_backref_iter_free(iter);
+	btrfs_free_path(path);
+	if (ret < 0) {
+		btrfs_backref_error_cleanup(cache, node);
+		return ERR_PTR(ret);
+	}
+	ASSERT(!node || !node->detached);
+	ASSERT(list_empty(&cache->useless_node) &&
+	       list_empty(&cache->pending_edge));
+	return node;
+}
+
 int btrfs_qgroup_trace_extent_post(struct btrfs_fs_info *fs_info,
 				   struct btrfs_qgroup_extent_record *qrecord)
 {
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 35/39] btrfs: qgroup: Introduce a function to iterate through backref_cache to find all parents for specified node
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (33 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 34/39] btrfs: qgroup: Introduce qgroup_backref_cache_build() function Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 36/39] btrfs: qgroup: Introduce helpers to get needed tree block info Qu Wenruo
                   ` (6 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

Introduce a new static function, iterate_all_roots(), to find all
roots for the specified backref node.

This function does a recursive depth-first search, and queues each root
objectid it hits into the result ulist.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/qgroup.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index d7d50943f482..69522aa3224b 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1760,6 +1760,46 @@ static struct btrfs_backref_node *qgroup_backref_cache_build(
 	return node;
 }
 
+/* Iterate all roots in the backref_cache, and add root objectid into @roots */
+static int iterate_all_roots(struct btrfs_backref_node *node,
+			     struct ulist *roots)
+{
+	struct btrfs_backref_edge *edge;
+	struct btrfs_backref_node *upper;
+	int ret = 0;
+
+	ASSERT(node->level < BTRFS_MAX_LEVEL);
+
+	/* Useless node, exit directly */
+	if (node->detached || node->is_reloc_root || node->cowonly)
+		goto out;
+
+	/* Find a root, queue to @roots ulist */
+	if (list_empty(&node->upper)) {
+		ASSERT(is_fstree(node->owner));
+		ret = ulist_add(roots, node->owner, 0, GFP_NOFS);
+		goto out;
+	}
+
+	/* Go upper level */
+	list_for_each_entry(edge, &node->upper, list[LOWER]) {
+		upper = edge->node[UPPER];
+
+		if (upper->level != node->level + 1 ||
+		    upper->level >= BTRFS_MAX_LEVEL) {
+			ret = -EUCLEAN;
+			goto out;
+		}
+		ret = iterate_all_roots(upper, roots);
+		if (ret < 0)
+			goto out;
+	}
+out:
+	if (ret < 0)
+		ulist_release(roots);
+	return ret;
+}
+
 int btrfs_qgroup_trace_extent_post(struct btrfs_fs_info *fs_info,
 				   struct btrfs_qgroup_extent_record *qrecord)
 {
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 36/39] btrfs: qgroup: Introduce helpers to get needed tree block info
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (34 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 35/39] btrfs: qgroup: Introduce a function to iterate through backref_cache to find all parents for specified node Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 37/39] btrfs: qgroup: Introduce verification for function to ensure old roots ulist matches btrfs_find_all_roots() result Qu Wenruo
                   ` (5 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

Introduce two helpers, get_tree_key() and get_tree_info(), to
grab the needed tree block info (level, first key and owner) for the
qgroup backref cache.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/qgroup.c | 66 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 69522aa3224b..988b14de6569 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1760,6 +1760,72 @@ static struct btrfs_backref_node *qgroup_backref_cache_build(
 	return node;
 }
 
+static int get_tree_key(struct btrfs_fs_info *fs_info, u64 bytenr, int level,
+			struct btrfs_key *node_key, u64 *owner)
+{
+	struct extent_buffer *eb;
+
+	eb = read_tree_block(fs_info, bytenr, 0, level, NULL);
+	if (IS_ERR(eb))
+		return PTR_ERR(eb);
+	if (btrfs_header_level(eb))
+		btrfs_node_key_to_cpu(eb, node_key, 0);
+	else
+		btrfs_item_key_to_cpu(eb, node_key, 0);
+	*owner = btrfs_header_owner(eb);
+	free_extent_buffer(eb);
+	return 0;
+}
+
+/*
+ * Helper to get the tree level, first key and owner for
+ * qgroup_backref_cache_build().
+ *
+ * Caller should have done one extent_from_logical() call to ensure the
+ * extent exists and it's a tree block.
+ */
+static int get_tree_info(struct btrfs_fs_info *fs_info,
+			 struct btrfs_path *path, u64 bytenr,
+			 struct btrfs_key *node_key, u64 *owner, u8 *level)
+{
+	struct btrfs_extent_item *ei;
+	struct btrfs_key key;
+	unsigned long ptr = 0;
+	u64 extent_flag;
+	u64 tmp;
+	u32 item_size;
+	int ret;
+
+	path->search_commit_root = 1;
+	path->skip_locking = 1;
+
+	ret = extent_from_logical(fs_info, bytenr, path, &key, &extent_flag);
+	if (ret < 0)
+		goto out;
+	ASSERT(extent_flag & BTRFS_EXTENT_FLAG_TREE_BLOCK);
+
+	ei = btrfs_item_ptr(path->nodes[0], path->slots[0],
+			    struct btrfs_extent_item);
+	item_size = btrfs_item_size_nr(path->nodes[0], path->slots[0]);
+
+	/*
+	 * Don't trust the owner returned from tree_backref_for_extent(): for
+	 * SHARED_BLOCK type, the returned owner is just the parent tree block.
+	 */
+	ret = tree_backref_for_extent(&ptr, path->nodes[0], &key,
+			ei, item_size, &tmp, level);
+	if (ret < 0)
+		goto out;
+
+	/* Instead, get the owner from btrfs header */
+	ret = get_tree_key(fs_info, bytenr, *level, node_key, owner);
+	if (ret < 0)
+		goto out;
+out:
+	btrfs_release_path(path);
+	return ret;
+}
+
 /* Iterate all roots in the backref_cache, and add root objectid into @roots */
 static int iterate_all_roots(struct btrfs_backref_node *node,
 			     struct ulist *roots)
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 37/39] btrfs: qgroup: Introduce verification for function to ensure old roots ulist matches btrfs_find_all_roots() result
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (35 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 36/39] btrfs: qgroup: Introduce helpers to get needed tree block info Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 38/39] btrfs: qgroup: Introduce a new function to get old_roots ulist using backref cache Qu Wenruo
                   ` (4 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

This patch introduces a new function, verify_old_roots(), for qgroup
to verify the backref cache based result against the old
btrfs_find_all_roots() result.

Since it impacts performance heavily, as we are doing two different
backref walks, this verification will only be enabled under
CONFIG_BTRFS_FS_CHECK_INTEGRITY.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/qgroup.c | 72 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 988b14de6569..07a0101836ff 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1826,6 +1826,78 @@ static int get_tree_info(struct btrfs_fs_info *fs_info,
 	return ret;
 }
 
+#ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY
+/*
+ * Compare the result with old btrfs_find_all_roots() to ensure the new backref
+ * cache based result is correct.
+ */
+static int verify_old_roots(struct btrfs_fs_info *fs_info,
+			    struct ulist *result, u64 bytenr)
+{
+	struct ulist *old;
+	struct ulist_iterator uiter;
+	struct ulist_node *unode;
+	bool not_fstree = false;
+	int ret = 0;
+
+	ret = btrfs_find_all_roots(NULL, fs_info, bytenr, 0, &old, true);
+	if (ret < 0)
+		return ret;
+
+	/*
+	 * Check the first node, as find_all_roots() will also return
+	 * non-subvolume tree owners.
+	 * Since subvolume tree blocks won't be shared with non-subvolume
+	 * trees, if we find a non-subvolume tree root, we don't need
+	 * to verify at all.
+	 */
+	ULIST_ITER_INIT(&uiter);
+	while ((unode = ulist_next(old, &uiter))) {
+		if (!is_fstree(unode->val))
+			not_fstree = true;
+		break;
+	}
+
+	if (not_fstree && result->nnodes != 0) {
+		btrfs_err(fs_info,
+"qgroup backref cache error, bytenr=%llu found cached node for non-fs tree",
+			  bytenr);
+		ret = -EUCLEAN;
+		goto out;
+	}
+	if (not_fstree)
+		goto out;
+
+	if (result->nnodes != old->nnodes) {
+		btrfs_err(fs_info,
+"qgroup backref cache error, bytenr=%llu nr nodes mismatch, old method=%lu cache method=%lu",
+			  bytenr, old->nnodes, result->nnodes);
+		ret = -EUCLEAN;
+		goto out;
+	}
+	ULIST_ITER_INIT(&uiter);
+	while ((unode = ulist_next(result, &uiter))) {
+		/*
+		 * @result and @old have the same amount of nodes, so if we
+		 * delete each @result node from @old, we either delete all
+		 * nodes from @old (verification pass), or we will hit
+		 * a missing node (verification failure).
+		 */
+		ret = ulist_del(old, unode->val, 0);
+		if (ret) {
+			btrfs_err(fs_info,
+	"qgroup backref cache error, bytenr=%llu root %llu not found in cached result",
+				  bytenr, unode->val);
+			ret = -EUCLEAN;
+			goto out;
+		}
+	}
+out:
+	ulist_free(old);
+	return ret;
+}
+#endif
+
 /* Iterate all roots in the backref_cache, and add root objectid into @roots */
 static int iterate_all_roots(struct btrfs_backref_node *node,
 			     struct ulist *roots)
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 38/39] btrfs: qgroup: Introduce a new function to get old_roots ulist using backref cache
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (36 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 37/39] btrfs: qgroup: Introduce verification for function to ensure old roots ulist matches btrfs_find_all_roots() result Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-26  8:33 ` [PATCH v2 39/39] btrfs: qgroup: Use backref cache to speed up old_roots search Qu Wenruo
                   ` (3 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

The new function, get_old_roots(), will replace the old
btrfs_find_all_roots() to do a backref cache based search for all
referring roots.

The workflow includes:
- Search the extent tree to get basic tree info
  Including: first key, level and owner (from btrfs_header).

- Skip all non-subvolume tree blocks
  Since non-subvolume tree blocks will never be shared with subvolume
  trees, skipping them speeds up the procedure.

- Build the backref cache for the tree block
  Either we get the backref_node inserted into the cache, or the block
  is referred to exclusively by a reloc tree, which doesn't contribute
  to qgroup.

- Find all roots using the returned backref_node
  It's a simple recursive depth-first search. The result is stored
  into a ulist, just like with the old btrfs_find_all_roots().

- Verify the cached result against the old btrfs_find_all_roots() for
  DEBUG builds
  If CONFIG_BTRFS_FS_CHECK_INTEGRITY is enabled, we again call
  btrfs_find_all_roots() just as we used to, then verify the result
  against the result from the backref cache.

  This is very performance heavy as it kills all the benefit we get from
  the backref cache, thus it should only be enabled for DEBUG builds.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/qgroup.c | 109 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 109 insertions(+)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 07a0101836ff..ab19b2bfa112 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -1938,6 +1938,115 @@ static int iterate_all_roots(struct btrfs_backref_node *node,
 	return ret;
 }
 
+static struct ulist *get_old_roots(struct btrfs_fs_info *fs_info, u64 bytenr)
+{
+	struct ulist *tree_blocks = NULL;
+	struct btrfs_path *path = NULL;
+	struct btrfs_key key;
+	struct ulist_iterator uiter;
+	struct ulist_node *unode;
+	struct ulist *ret_ulist;
+	u64 extent_flag;
+	int ret;
+
+	ret_ulist = ulist_alloc(GFP_NOFS);
+	if (!ret_ulist)
+		return ERR_PTR(-ENOMEM);
+
+	path = btrfs_alloc_path();
+	if (!path) {
+		ret = -ENOMEM;
+		goto out;
+	}
+	path->search_commit_root = 1;
+	path->skip_locking = 1;
+
+	ret = extent_from_logical(fs_info, bytenr, path, &key, &extent_flag);
+	if (ret == -ENOENT) {
+		/* No backref for this extent, returning empty old_root */
+		ret = 0;
+		goto out;
+	}
+	if (ret < 0)
+		goto out;
+
+	if (extent_flag & BTRFS_EXTENT_FLAG_TREE_BLOCK) {
+		tree_blocks = ulist_alloc(GFP_NOFS);
+		if (!tree_blocks) {
+			ret = -ENOMEM;
+			goto out;
+		}
+
+		ret = ulist_add(tree_blocks, bytenr, 0, GFP_NOFS);
+		if (ret < 0)
+			goto out;
+	} else {
+		ret = btrfs_find_all_leafs(NULL, fs_info, bytenr, 0,
+				&tree_blocks, NULL, true);
+		if (ret < 0)
+			goto out;
+	}
+	btrfs_release_path(path);
+
+	/*
+	 * Add all related tree blocks to backref cache and get all roots
+	 * from each iteration
+	 */
+	ULIST_ITER_INIT(&uiter);
+	while ((unode = ulist_next(tree_blocks, &uiter))) {
+		struct btrfs_backref_node *node;
+		struct btrfs_key node_key;
+		u64 owner;
+		u8 tree_level;
+
+		ret = get_tree_info(fs_info, path, unode->val,
+				&node_key, &owner, &tree_level);
+		if (ret < 0)
+			goto out;
+
+		/* Not a subvolume tree, exit directly */
+		if (!is_fstree(owner))
+			goto out;
+
+		mutex_lock(&fs_info->qgroup_backref_lock);
+		/*
+		 * This can happen when the rescan worker is running while
+		 * qgroup is being disabled.
+		 * Just exit without verification, qgroup will clean up itself.
+		 */
+		if (!fs_info->qgroup_backref_cache) {
+			mutex_unlock(&fs_info->qgroup_backref_lock);
+			goto out_no_verify;
+		}
+
+		node = qgroup_backref_cache_build(fs_info, &node_key,
+				tree_level, unode->val, owner);
+		if (IS_ERR(node)) {
+			ret = PTR_ERR(node);
+			mutex_unlock(&fs_info->qgroup_backref_lock);
+			goto out;
+		}
+		if (node)
+			ret = iterate_all_roots(node, ret_ulist);
+		mutex_unlock(&fs_info->qgroup_backref_lock);
+		if (ret < 0)
+			goto out;
+	}
+out:
+	if (IS_ENABLED(CONFIG_BTRFS_FS_CHECK_INTEGRITY) && !ret) {
+		ret = verify_old_roots(fs_info, ret_ulist, bytenr);
+		WARN_ON(ret < 0);
+	}
+out_no_verify:
+	btrfs_free_path(path);
+	ulist_free(tree_blocks);
+	if (ret < 0) {
+		ulist_free(ret_ulist);
+		return ERR_PTR(ret);
+	}
+	return ret_ulist;
+}
+
 int btrfs_qgroup_trace_extent_post(struct btrfs_fs_info *fs_info,
 				   struct btrfs_qgroup_extent_record *qrecord)
 {
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v2 39/39] btrfs: qgroup: Use backref cache to speed up old_roots search
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (37 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 38/39] btrfs: qgroup: Introduce a new function to get old_roots ulist using backref cache Qu Wenruo
@ 2020-03-26  8:33 ` Qu Wenruo
  2020-03-27 15:51 ` [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots David Sterba
                   ` (2 subsequent siblings)
  41 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-03-26  8:33 UTC (permalink / raw)
  To: linux-btrfs

Now use the backref cache based backref walk mechanism.

This mechanism trades memory usage for a faster, and more qgroup
specific, backref walk.

Compared to the original btrfs_find_all_roots(), it has the following
behavior differences:
- Skip non-subvolume trees from the very beginning
  Since only subvolume trees contribute to qgroup numbers, skipping them
  saves us time.

- Skip reloc trees earlier
  Reloc trees don't contribute to qgroup, and btrfs_find_all_roots()
  doesn't account them either, thus we don't need to account them.
  Here we use the detached nodes in the backref cache to skip them
  faster and earlier (see the snippet after this list).

- Cached results
  Well, the backref cache is obviously cached, right.
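
For the reloc tree skip mentioned above, the corresponding check in the
root iteration introduced earlier in this series is simply:

	/* Useless node, exit directly */
	if (node->detached || node->is_reloc_root || node->cowonly)
		goto out;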

The major performance improvement happens for backref walks in the
commit tree; one of the most obvious users is qgroup rescan.

Here is a small script to test it:

  mkfs.btrfs -f $dev
  mount $dev -o space_cache=v2 $mnt

  btrfs subvolume create $mnt/src

  for ((i = 0; i < 64; i++)); do
          for (( j = 0; j < 16; j++)); do
                  xfs_io -f -c "pwrite 0 2k" \
			$mnt/src/file_inline_$(($i * 16 + $j)) > /dev/null
          done
          xfs_io -f -c "pwrite 0 1M" $mnt/src/file_reg_$i > /dev/null
          sync
          btrfs subvol snapshot $mnt/src $mnt/snapshot_$i
  done
  sync

  btrfs quota enable $mnt
  btrfs quota rescan -w $mnt

Here is the benchmark for above small tests.
The performance material is the total execution time of get_old_roots()
for patched kernel (*), and find_all_roots() for original kernel.

*: With CONFIG_BTRFS_FS_CHECK_INTEGRITY disabled, as get_old_roots()
   will call find_all_roots() to verify the result if that config is
   enabled.

		|  Number of calls | Total exec time |
------------------------------------------------------
find_all_roots()|  732		   | 529991034ns
get_old_roots() |  732		   | 127998312ns
------------------------------------------------------
diff		|  0.00 %	   | -75.8 %

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/qgroup.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index ab19b2bfa112..4f36206a96aa 100644
--- a/fs/btrfs/qgroup.c
+++ b/fs/btrfs/qgroup.c
@@ -2054,8 +2054,9 @@ int btrfs_qgroup_trace_extent_post(struct btrfs_fs_info *fs_info,
 	u64 bytenr = qrecord->bytenr;
 	int ret;
 
-	ret = btrfs_find_all_roots(NULL, fs_info, bytenr, 0, &old_root, false);
-	if (ret < 0) {
+	old_root = get_old_roots(fs_info, bytenr);
+	if (IS_ERR(old_root)) {
+		ret = PTR_ERR(old_root);
 		fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
 		btrfs_warn(fs_info,
 "error accounting new delayed refs extent (err code: %d), quota inconsistent",
@@ -3001,12 +3002,12 @@ int btrfs_qgroup_account_extents(struct btrfs_trans_handle *trans)
 			 * extent record
 			 */
 			if (WARN_ON(!record->old_roots)) {
-				/* Search commit root to find old_roots */
-				ret = btrfs_find_all_roots(NULL, fs_info,
-						record->bytenr, 0,
-						&record->old_roots, false);
-				if (ret < 0)
+				record->old_roots = get_old_roots(fs_info,
+						record->bytenr);
+				if (IS_ERR(record->old_roots)) {
+					ret = PTR_ERR(record->old_roots);
 					goto cleanup;
+				}
 			}
 
 			/* Free the reserved data space */
@@ -3585,10 +3586,11 @@ static int qgroup_rescan_leaf(struct btrfs_trans_handle *trans,
 		else
 			num_bytes = found.offset;
 
-		ret = btrfs_find_all_roots(NULL, fs_info, found.objectid, 0,
-					   &roots, false);
-		if (ret < 0)
+		roots = get_old_roots(fs_info, found.objectid);
+		if (IS_ERR(roots)) {
+			ret = PTR_ERR(roots);
 			goto out;
+		}
 		/* For rescan, just pass old_roots as NULL */
 		ret = btrfs_qgroup_account_extent(trans, found.objectid,
 						  num_bytes, NULL, roots);
-- 
2.26.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (38 preceding siblings ...)
  2020-03-26  8:33 ` [PATCH v2 39/39] btrfs: qgroup: Use backref cache to speed up old_roots search Qu Wenruo
@ 2020-03-27 15:51 ` David Sterba
  2020-04-02 16:18 ` David Sterba
  2020-04-03 15:44 ` David Sterba
  41 siblings, 0 replies; 52+ messages in thread
From: David Sterba @ 2020-03-27 15:51 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Thu, Mar 26, 2020 at 04:32:37PM +0800, Qu Wenruo wrote:
> This patchset is based on misc-5.7 branch.
> 
> The branch can be fetched from github for review/testing.
> https://github.com/adam900710/linux/tree/backref_cache_all
> 
> The patchset survives all the existing qgroup/volume/replace/balance tests.

Thanks for the rebase, the whole patchset passed fstests so I'll start
merging it. The backref cache (patches 33-39) still needs review, but
because the tests pass it can be in for-next. The cleanup part (1-32)
seems safe, so that'll go to misc-next soonish once I go through the
patches; there are some minor style issues.

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 01/39] btrfs: backref: Introduce the skeleton of btrfs_backref_iter
  2020-03-26  8:32 ` [PATCH v2 01/39] btrfs: backref: Introduce the skeleton of btrfs_backref_iter Qu Wenruo
@ 2020-04-01 15:37   ` David Sterba
  2020-04-01 23:31     ` Qu Wenruo
  0 siblings, 1 reply; 52+ messages in thread
From: David Sterba @ 2020-04-01 15:37 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs, Johannes Thumshirn, Josef Bacik

On Thu, Mar 26, 2020 at 04:32:38PM +0800, Qu Wenruo wrote:
> --- a/fs/btrfs/backref.h
> +++ b/fs/btrfs/backref.h
> @@ -78,4 +78,43 @@ struct prelim_ref {
>  	u64 wanted_disk_byte;
>  };
>  
> +/*
> + * Helper structure to help iterate backrefs of one extent.
> + *
> + * Now it only supports iteration for tree block in commit root.
> + */
> +struct btrfs_backref_iter {
> +	u64 bytenr;
> +	struct btrfs_path *path;
> +	struct btrfs_fs_info *fs_info;
> +	struct btrfs_key cur_key;
> +	u32 item_ptr;
> +	u32 cur_ptr;
> +	u32 end_ptr;
> +};
> +
> +struct btrfs_backref_iter *btrfs_backref_iter_alloc(
> +		struct btrfs_fs_info *fs_info, gfp_t gfp_flag);
> +
> +static inline void btrfs_backref_iter_free(struct btrfs_backref_iter *iter)
> +{
> +	if (!iter)
> +		return;
> +	btrfs_free_path(iter->path);
> +	kfree(iter);
> +}

Why do you make so many functions static inline? It makes sense for some
of them but in the following patches there are functions that are either
too big (so when they're inlined it bloats the asm) or called
infrequently so the inlining does not bring much. Code in header files
should be kept to a minimum.

There are also functions not used anywhere else than in backref.c so
they don't need to be exported for now. For example
btrfs_backref_iter_is_inline_ref.

> +
> +int btrfs_backref_iter_start(struct btrfs_backref_iter *iter, u64 bytenr);
> +
> +static inline void
> +btrfs_backref_iter_release(struct btrfs_backref_iter *iter)

Please keep the function type and name on the same line, arguments can
go to the next line.

> +{
> +	iter->bytenr = 0;
> +	iter->item_ptr = 0;
> +	iter->cur_ptr = 0;
> +	iter->end_ptr = 0;
> +	btrfs_release_path(iter->path);
> +	memset(&iter->cur_key, 0, sizeof(iter->cur_key));
> +}
> +
>  #endif
> -- 
> 2.26.0

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 17/39] btrfs: Rename tree_entry to simple_node and export it
  2020-03-26  8:32 ` [PATCH v2 17/39] btrfs: Rename tree_entry to simple_node and export it Qu Wenruo
@ 2020-04-01 15:48   ` David Sterba
  2020-04-01 23:40     ` Qu Wenruo
  0 siblings, 1 reply; 52+ messages in thread
From: David Sterba @ 2020-04-01 15:48 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Thu, Mar 26, 2020 at 04:32:54PM +0800, Qu Wenruo wrote:
> Structure tree_entry provides a very simple rb_tree which only uses
> bytenr as search index.
> 
> That tree_entry is used in 3 structures: backref_node, mapping_node and
> tree_block.
> 
> Since we're going to make backref_node independnt from relocation, it's
> a good time to extract the tree_entry into simple_node, and export it
> into misc.h.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/backref.h    |   6 ++-
>  fs/btrfs/misc.h       |  54 +++++++++++++++++++++
>  fs/btrfs/relocation.c | 109 +++++++++++++-----------------------------
>  3 files changed, 90 insertions(+), 79 deletions(-)
> 
> diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
> index 76858ec099d9..f3eae9e9f84b 100644
> --- a/fs/btrfs/backref.h
> +++ b/fs/btrfs/backref.h
> @@ -162,8 +162,10 @@ btrfs_backref_iter_release(struct btrfs_backref_iter *iter)
>   * present a tree block in the backref cache
>   */
>  struct btrfs_backref_node {
> -	struct rb_node rb_node;
> -	u64 bytenr;
> +	struct {
> +		struct rb_node rb_node;
> +		u64 bytenr;
> +	}; /* Use simple_node for search/insert */

Why is this an anonymous struct? This should be the simple_node as I see
below, for some simple rb search API.

>  
>  	u64 new_bytenr;
>  	/* objectid of tree block owner, can be not uptodate */
> diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
> index 72bab64ecf60..d199bfdb210e 100644
> --- a/fs/btrfs/misc.h
> +++ b/fs/btrfs/misc.h
> @@ -6,6 +6,7 @@
>  #include <linux/sched.h>
>  #include <linux/wait.h>
>  #include <asm/div64.h>
> +#include <linux/rbtree.h>
>  
>  #define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
>  
> @@ -58,4 +59,57 @@ static inline bool has_single_bit_set(u64 n)
>  	return is_power_of_two_u64(n);
>  }
>  
> +/*
> + * Simple bytenr based rb_tree relate structures
> + *
> + * Any structure wants to use bytenr as single search index should have their
> + * structure start with these members.

This is not very clean coding style, relying on particular placement and
order in another struct.

> + */
> +struct simple_node {
> +	struct rb_node rb_node;
> +	u64 bytenr;
> +};
> +
> +static inline struct rb_node *simple_search(struct rb_root *root, u64 bytenr)

simple_search is IMHO too vague; it's related to an rb-tree so this
could be reflected in the name somehow.

I think it's ok if you do this as a middle step before making it a
proper struct hook and API but I don't like the end result as it's not
really an improvement.

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 01/39] btrfs: backref: Introduce the skeleton of btrfs_backref_iter
  2020-04-01 15:37   ` David Sterba
@ 2020-04-01 23:31     ` Qu Wenruo
  2020-04-02  1:01       ` David Sterba
  0 siblings, 1 reply; 52+ messages in thread
From: Qu Wenruo @ 2020-04-01 23:31 UTC (permalink / raw)
  To: dsterba, linux-btrfs, Johannes Thumshirn, Josef Bacik



On 2020/4/1 11:37 PM, David Sterba wrote:
> On Thu, Mar 26, 2020 at 04:32:38PM +0800, Qu Wenruo wrote:
>> --- a/fs/btrfs/backref.h
>> +++ b/fs/btrfs/backref.h
>> @@ -78,4 +78,43 @@ struct prelim_ref {
>>  	u64 wanted_disk_byte;
>>  };
>>  
>> +/*
>> + * Helper structure to help iterate backrefs of one extent.
>> + *
>> + * Now it only supports iteration for tree block in commit root.
>> + */
>> +struct btrfs_backref_iter {
>> +	u64 bytenr;
>> +	struct btrfs_path *path;
>> +	struct btrfs_fs_info *fs_info;
>> +	struct btrfs_key cur_key;
>> +	u32 item_ptr;
>> +	u32 cur_ptr;
>> +	u32 end_ptr;
>> +};
>> +
>> +struct btrfs_backref_iter *btrfs_backref_iter_alloc(
>> +		struct btrfs_fs_info *fs_info, gfp_t gfp_flag);
>> +
>> +static inline void btrfs_backref_iter_free(struct btrfs_backref_iter *iter)
>> +{
>> +	if (!iter)
>> +		return;
>> +	btrfs_free_path(iter->path);
>> +	kfree(iter);
>> +}
> 
> Why do you make so many functions static inline? It makes sense for some
> of them but in the following patches there are functions that are either
> too big (so when they're inlined it bloats the asm) or called
> infrequently so the inlining does not bring much. Code in header files
> should be kept to minimum.

Most of them meet the criteria of being either too small or too
infrequently called.

> 
> There are also functions not used anywhere else than in backref.c so
> they don't need to be exported for now. For example
> btrfs_backref_iter_is_inline_ref.

But they are used in later patches, thus I exported them now to avoid
re-exporting them later.

> 
>> +
>> +int btrfs_backref_iter_start(struct btrfs_backref_iter *iter, u64 bytenr);
>> +
>> +static inline void
>> +btrfs_backref_iter_release(struct btrfs_backref_iter *iter)
> 
> Please keep the function type and name on the same line, arguments can
> go to the next line.

Forgot this one...

Do I need to resend?

Thanks,
Qu

> 
>> +{
>> +	iter->bytenr = 0;
>> +	iter->item_ptr = 0;
>> +	iter->cur_ptr = 0;
>> +	iter->end_ptr = 0;
>> +	btrfs_release_path(iter->path);
>> +	memset(&iter->cur_key, 0, sizeof(iter->cur_key));
>> +}
>> +
>>  #endif
>> -- 
>> 2.26.0

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 17/39] btrfs: Rename tree_entry to simple_node and export it
  2020-04-01 15:48   ` David Sterba
@ 2020-04-01 23:40     ` Qu Wenruo
  2020-04-02  0:52       ` Qu Wenruo
  2020-04-02  1:09       ` David Sterba
  0 siblings, 2 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-04-01 23:40 UTC (permalink / raw)
  To: dsterba, linux-btrfs



On 2020/4/1 11:48 PM, David Sterba wrote:
> On Thu, Mar 26, 2020 at 04:32:54PM +0800, Qu Wenruo wrote:
>> Structure tree_entry provides a very simple rb_tree which only uses
>> bytenr as search index.
>>
>> That tree_entry is used in 3 structures: backref_node, mapping_node and
>> tree_block.
>>
>> Since we're going to make backref_node independnt from relocation, it's
>> a good time to extract the tree_entry into simple_node, and export it
>> into misc.h.
>>
>> Signed-off-by: Qu Wenruo <wqu@suse.com>
>> ---
>>  fs/btrfs/backref.h    |   6 ++-
>>  fs/btrfs/misc.h       |  54 +++++++++++++++++++++
>>  fs/btrfs/relocation.c | 109 +++++++++++++-----------------------------
>>  3 files changed, 90 insertions(+), 79 deletions(-)
>>
>> diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
>> index 76858ec099d9..f3eae9e9f84b 100644
>> --- a/fs/btrfs/backref.h
>> +++ b/fs/btrfs/backref.h
>> @@ -162,8 +162,10 @@ btrfs_backref_iter_release(struct btrfs_backref_iter *iter)
>>   * present a tree block in the backref cache
>>   */
>>  struct btrfs_backref_node {
>> -	struct rb_node rb_node;
>> -	u64 bytenr;
>> +	struct {
>> +		struct rb_node rb_node;
>> +		u64 bytenr;
>> +	}; /* Use simple_node for search/insert */
> 
> Why is this anonymous struct? This should be the simple_node as I see
> below. For some simple rb search API.

If using simple_node, we need a ton of extra wrappers to wrap things like
rb_entry(), rb_postorder_()

Thus here we still want bytenr/rb_node directly embedded into the
structure.

The ideal method would be an anonymous but typed structure.
Unfortunately no C standard supports this.
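
(A quick sketch of what I mean: with the members embedded directly, the
existing call sites keep working without any wrapper, e.g.

	rb_node = simple_search(&cache->rb_root, bytenr);
	if (rb_node)
		node = rb_entry(rb_node, struct btrfs_backref_node,
				rb_node);

while a named simple_node member would need an extra container_of() hop
for every such lookup.)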

> 
>>  
>>  	u64 new_bytenr;
>>  	/* objectid of tree block owner, can be not uptodate */
>> diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
>> index 72bab64ecf60..d199bfdb210e 100644
>> --- a/fs/btrfs/misc.h
>> +++ b/fs/btrfs/misc.h
>> @@ -6,6 +6,7 @@
>>  #include <linux/sched.h>
>>  #include <linux/wait.h>
>>  #include <asm/div64.h>
>> +#include <linux/rbtree.h>
>>  
>>  #define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
>>  
>> @@ -58,4 +59,57 @@ static inline bool has_single_bit_set(u64 n)
>>  	return is_power_of_two_u64(n);
>>  }
>>  
>> +/*
>> + * Simple bytenr based rb_tree relate structures
>> + *
>> + * Any structure wants to use bytenr as single search index should have their
>> + * structure start with these members.
> 
> This is not very clean coding style, relying on particular placement and
> order in another struct.

Order is not a problem: since we call container_of(), there is no need
for any particular order or placement.
A user could easily put rb_node at the end of the structure and bytenr
at the beginning, and everything would still work.

The anonymous structure is mostly here to inform callers that we're
using the simple_node structure.

> 
>> + */
>> +struct simple_node {
>> +	struct rb_node rb_node;
>> +	u64 bytenr;
>> +};
>> +
>> +static inline struct rb_node *simple_search(struct rb_root *root, u64 bytenr)
> 
> simple_search is IMHO too vague, it's related to a rb-tree so this could
> be reflected in the name somehow.
> 
> I think it's ok if you do this as a middle step before making it a
> proper struct hook and API but I don't like the end result as it's not
> really an improvement.
> 
That's what I mean by "simple": it's really just a simple helper, not
even a full wrapper, for bytenr based rb tree search.

Adding too many wrappers may simply kill the "simple" part.

Although I have to admit that most of the simple_node part is only there
to reuse code across relocation.c and backref.c, since no other users
utilize such a simple facility.

Any idea how to improve this situation? Or do we really need to go with
full wrappers?

Thanks,
Qu

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 17/39] btrfs: Rename tree_entry to simple_node and export it
  2020-04-01 23:40     ` Qu Wenruo
@ 2020-04-02  0:52       ` Qu Wenruo
  2020-04-02  1:09       ` David Sterba
  1 sibling, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-04-02  0:52 UTC (permalink / raw)
  To: dsterba, linux-btrfs


On 2020/4/2 7:40 AM, Qu Wenruo wrote:
> 
> 
> On 2020/4/1 11:48 PM, David Sterba wrote:
>> On Thu, Mar 26, 2020 at 04:32:54PM +0800, Qu Wenruo wrote:
>>> Structure tree_entry provides a very simple rb_tree which only uses
>>> bytenr as search index.
>>>
>>> That tree_entry is used in 3 structures: backref_node, mapping_node and
>>> tree_block.
>>>
>>> Since we're going to make backref_node independnt from relocation, it's
>>> a good time to extract the tree_entry into simple_node, and export it
>>> into misc.h.
>>>
>>> Signed-off-by: Qu Wenruo <wqu@suse.com>
>>> ---
>>>  fs/btrfs/backref.h    |   6 ++-
>>>  fs/btrfs/misc.h       |  54 +++++++++++++++++++++
>>>  fs/btrfs/relocation.c | 109 +++++++++++++-----------------------------
>>>  3 files changed, 90 insertions(+), 79 deletions(-)
>>>
>>> diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
>>> index 76858ec099d9..f3eae9e9f84b 100644
>>> --- a/fs/btrfs/backref.h
>>> +++ b/fs/btrfs/backref.h
>>> @@ -162,8 +162,10 @@ btrfs_backref_iter_release(struct btrfs_backref_iter *iter)
>>>   * present a tree block in the backref cache
>>>   */
>>>  struct btrfs_backref_node {
>>> -	struct rb_node rb_node;
>>> -	u64 bytenr;
>>> +	struct {
>>> +		struct rb_node rb_node;
>>> +		u64 bytenr;
>>> +	}; /* Use simple_node for search/insert */
>>
>> Why is this anonymous struct? This should be the simple_node as I see
>> below. For some simple rb search API.
> 
> If using simple_node, we need a ton of extra wrapper to wrap things like
> rb_entry(), rb_postorder_()
> 
> Thus here we still want byte/rb_node directly embeded into the structure.
> 
> The ideal method would be anonymous but typed structure.
> Unfortunately no such C standard supports this.
> 
>>
>>>  
>>>  	u64 new_bytenr;
>>>  	/* objectid of tree block owner, can be not uptodate */
>>> diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
>>> index 72bab64ecf60..d199bfdb210e 100644
>>> --- a/fs/btrfs/misc.h
>>> +++ b/fs/btrfs/misc.h
>>> @@ -6,6 +6,7 @@
>>>  #include <linux/sched.h>
>>>  #include <linux/wait.h>
>>>  #include <asm/div64.h>
>>> +#include <linux/rbtree.h>
>>>  
>>>  #define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
>>>  
>>> @@ -58,4 +59,57 @@ static inline bool has_single_bit_set(u64 n)
>>>  	return is_power_of_two_u64(n);
>>>  }
>>>  
>>> +/*
>>> + * Simple bytenr based rb_tree relate structures
>>> + *
>>> + * Any structure wants to use bytenr as single search index should have their
>>> + * structure start with these members.
>>
>> This is not very clean coding style, relying on particular placement and
>> order in another struct.
> 
> Order is not a problem, since we call container_of(), thus there is no
> need for any order or placement.
> User can easily put rb_node at the end of the structure, and bytenr at
> the beginning of the structure, and everything still goes well.

My bad, the order is still a pretty important thing...

Thus we still need to keep everything in the correct order to make the
code work...

> 
> The anonymous structure is mostly here to inform callers that we're
> using simple_node structure.
> 
>>
>>> + */
>>> +struct simple_node {
>>> +	struct rb_node rb_node;
>>> +	u64 bytenr;
>>> +};
>>> +
>>> +static inline struct rb_node *simple_search(struct rb_root *root, u64 bytenr)
>>
>> simple_search is IMHO too vague, it's related to a rb-tree so this could
>> be reflected in the name somehow.
>>
>> I think it's ok if you do this as a middle step before making it a
>> proper struct hook and API but I don't like the end result as it's not
>> really an improvement.
>>
> That's the what I mean for "simple", it's really just a simple, not even
> a full wrapper, for bytenr based rb tree search.
> 
> Adding too many wrappers may simply kill the "simple" part.
> 
> Although I have to admit, that most of the simple_node part is only to
> reuse code across relocation.c and backref.c. Since no other users
> utilize such simple facility.
> 
> Any idea to improve such situation? Or we really need to go full wrappers?
> 
> Thanks,
> Qu
> 



^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 01/39] btrfs: backref: Introduce the skeleton of btrfs_backref_iter
  2020-04-01 23:31     ` Qu Wenruo
@ 2020-04-02  1:01       ` David Sterba
  2020-04-02  1:27         ` Qu Wenruo
  0 siblings, 1 reply; 52+ messages in thread
From: David Sterba @ 2020-04-02  1:01 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: dsterba, linux-btrfs, Johannes Thumshirn, Josef Bacik

On Thu, Apr 02, 2020 at 07:31:28AM +0800, Qu Wenruo wrote:
> On 2020/4/1 11:37 PM, David Sterba wrote:
> > On Thu, Mar 26, 2020 at 04:32:38PM +0800, Qu Wenruo wrote:
> >> --- a/fs/btrfs/backref.h
> >> +++ b/fs/btrfs/backref.h
> >> @@ -78,4 +78,43 @@ struct prelim_ref {
> >>  	u64 wanted_disk_byte;
> >>  };
> >>  
> >> +/*
> >> + * Helper structure to help iterate backrefs of one extent.
> >> + *
> >> + * Now it only supports iteration for tree block in commit root.
> >> + */
> >> +struct btrfs_backref_iter {
> >> +	u64 bytenr;
> >> +	struct btrfs_path *path;
> >> +	struct btrfs_fs_info *fs_info;
> >> +	struct btrfs_key cur_key;
> >> +	u32 item_ptr;
> >> +	u32 cur_ptr;
> >> +	u32 end_ptr;
> >> +};
> >> +
> >> +struct btrfs_backref_iter *btrfs_backref_iter_alloc(
> >> +		struct btrfs_fs_info *fs_info, gfp_t gfp_flag);
> >> +
> >> +static inline void btrfs_backref_iter_free(struct btrfs_backref_iter *iter)
> >> +{
> >> +	if (!iter)
> >> +		return;
> >> +	btrfs_free_path(iter->path);
> >> +	kfree(iter);
> >> +}
> > 
> > Why do you make so many functions static inline? It makes sense for some
> > of them but in the following patches there are functions that are either
> > too big (so when they're inlined it bloats the asm) or called
> > infrequently so the inlining does not bring much. Code in header files
> > should be kept to minimum.
> 
> Most of them meet the requirement of being either too small, or too
> infrequently called.

So the rules or recommendations I use to decide if a function should be
static inline:

* it's like a macro with type checking
* it results in a few instructions (where few is like 3-6)
* the function is good for code readability, like for a helper that does
  some checks and returns a result, or dereferences a few pointers, and
  the function name is self-explaining
* there should be some performance reason where the function call would
  be too costly, e.g. if the function is on a hot path or in a loop

And all of that can be irrelevant if the compiler does some fancy
optimization, like function cloning, where it keeps one copy intact
for the public interface and then inlines other copies, possibly
applying more optimizations, e.g. based on parameters or some analysis
that splits the function into hot and cold parts.

Unless we find some suboptimal result of compilation that could be fixed
by static inlines, I tend to not use them besides the trivial cases that
help code readability.
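
To make that concrete, here is a hypothetical helper that would pass
these rules, reusing the iterator members quoted above (illustration
only, not code from the series):

  /*
   * Hypothetical example: a few instructions, type checking, and a
   * self-explaining name.
   */
  static inline bool btrfs_backref_iter_is_done(
                  const struct btrfs_backref_iter *iter)
  {
          /* The cursor has walked past the end of the current item. */
          return iter->cur_ptr >= iter->end_ptr;
  }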

> > There are also functions not used anywhere other than in backref.c, so
> > they don't need to be exported for now, for example
> > btrfs_backref_iter_is_inline_ref.
> 
> But it's used in later patches, thus I exported it to avoid having to
> re-export it later.

I grepped the whole branch with the backref cache and assumed that if
you introduce a function in the cleanup part, it would be used in the
other one. But btrfs_backref_iter_is_inline_ref wasn't.

> >> +int btrfs_backref_iter_start(struct btrfs_backref_iter *iter, u64 bytenr);
> >> +
> >> +static inline void
> >> +btrfs_backref_iter_release(struct btrfs_backref_iter *iter)
> > 
> > Please keep the function type and name on the same line, arguments can
> > go to the next line.
> 
> Forgot this one...
> 
> Do I need to resend?

I fix such things when applying the patches, so for that reason alone it's
not necessary to resend.
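
As an aside, here is a minimal usage sketch of the iterator API quoted
above; the exact return conventions of btrfs_backref_iter_start() and
btrfs_backref_iter_next() are assumptions here, not taken from the
patch:

  struct btrfs_backref_iter *iter;
  int ret;

  iter = btrfs_backref_iter_alloc(fs_info, GFP_NOFS);
  if (!iter)
          return -ENOMEM;

  /* Assumed convention: 0 = have an item, >0 = done, <0 = error. */
  ret = btrfs_backref_iter_start(iter, bytenr);
  while (ret == 0) {
          /* Examine the current backref item through iter->cur_key. */
          ret = btrfs_backref_iter_next(iter);
  }

  /* Release drops the path references, free destroys the iterator. */
  btrfs_backref_iter_release(iter);
  btrfs_backref_iter_free(iter);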

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 17/39] btrfs: Rename tree_entry to simple_node and export it
  2020-04-01 23:40     ` Qu Wenruo
  2020-04-02  0:52       ` Qu Wenruo
@ 2020-04-02  1:09       ` David Sterba
  2020-04-02  1:32         ` Qu Wenruo
  1 sibling, 1 reply; 52+ messages in thread
From: David Sterba @ 2020-04-02  1:09 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: dsterba, linux-btrfs

On Thu, Apr 02, 2020 at 07:40:29AM +0800, Qu Wenruo wrote:
> >>  struct btrfs_backref_node {
> >> -	struct rb_node rb_node;
> >> -	u64 bytenr;
> >> +	struct {
> >> +		struct rb_node rb_node;
> >> +		u64 bytenr;
> >> +	}; /* Use simple_node for search/insert */
> > 
> > Why is this anonymous struct? This should be the simple_node as I see
> > below. For some simple rb search API.
> 
> >> If using simple_node, we need a ton of extra wrappers for things like
> >> rb_entry(), rb_postorder_()
> >>
> >> Thus here we still want bytenr/rb_node directly embedded into the
> >> structure.
> >>
> >> The ideal method would be an anonymous but typed structure.
> >> Unfortunately, no C standard supports this.

My idea was to have something like this (simplified):

	struct tree_node {
		struct rb_node node;
		u64 bytenr;
	};

	struct backref_node {
		...
		struct tree_node cache_node;
		...
	};

	struct backref_node bnode;

when the rb_node is needed, pass &bnode.cache_node.node. All the
rb_* functions should work without adding another interface layer.

> >>  	u64 new_bytenr;
> >>  	/* objectid of tree block owner, can be not uptodate */
> >> diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
> >> index 72bab64ecf60..d199bfdb210e 100644
> >> --- a/fs/btrfs/misc.h
> >> +++ b/fs/btrfs/misc.h
> >> @@ -6,6 +6,7 @@
> >>  #include <linux/sched.h>
> >>  #include <linux/wait.h>
> >>  #include <asm/div64.h>
> >> +#include <linux/rbtree.h>
> >>  
> >>  #define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
> >>  
> >> @@ -58,4 +59,57 @@ static inline bool has_single_bit_set(u64 n)
> >>  	return is_power_of_two_u64(n);
> >>  }
> >>  
> >> +/*
> >> + * Simple bytenr based rb_tree relate structures
> >> + *
> >> + * Any structure wants to use bytenr as single search index should have their
> >> + * structure start with these members.
> > 
> > This is not very clean coding style, relying on particular placement and
> > order in another struct.
> 
> Order is not a problem, since we call container_of(), thus there is no
> need for any particular order or placement.
> A user can easily put rb_node at the end of the structure and bytenr at
> the beginning, and everything still works.
> 
> The anonymous structure is mostly here to inform callers that we're
> using the simple_node structure.
> 
> > 
> >> + */
> >> +struct simple_node {
> >> +	struct rb_node rb_node;
> >> +	u64 bytenr;
> >> +};
> >> +
> >> +static inline struct rb_node *simple_search(struct rb_root *root, u64 bytenr)
> > 
> > simple_search is IMHO too vague; it's related to an rb-tree, so this
> > could be reflected in the name somehow.
> > 
> > I think it's ok if you do this as a middle step before making it a
> > proper struct hook and API, but I don't like the end result as it's not
> > really an improvement.
> > 
> That's what I mean by "simple": it's really just a simple helper, not
> even a full wrapper, for bytenr-based rb-tree search.
> 
> Adding too many wrappers may simply kill the "simple" part.
> 
> Although I have to admit that most of the simple_node part exists only
> to reuse code across relocation.c and backref.c, since no other users
> utilize such a simple facility.
> 
> Any idea how to improve this situation? Or do we really need to go with
> full wrappers?

If the above works we won't need to add more wrappers. But after some
thinking I'm ok with the way you implemented it, as it will certainly
clean up some things, and once it's merged we'll have another chance to
look at the code and fix up only the structures.
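
For reference, here is a minimal sketch of what the simple_search()
helper under discussion could look like, assuming the simple_node
layout quoted above (the actual patch may differ in details):

  static inline struct rb_node *simple_search(struct rb_root *root,
                                              u64 bytenr)
  {
          struct rb_node *node = root->rb_node;

          while (node) {
                  struct simple_node *entry;

                  /* Users embed these members first, so this rb_entry()
                   * is valid for any of them. */
                  entry = rb_entry(node, struct simple_node, rb_node);

                  if (bytenr < entry->bytenr)
                          node = node->rb_left;
                  else if (bytenr > entry->bytenr)
                          node = node->rb_right;
                  else
                          return node;
          }
          return NULL;
  }

A caller such as btrfs_backref_node, with the anonymous struct embedded
first, can then turn the returned rb_node into its own type with a
single rb_entry()/container_of().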

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 01/39] btrfs: backref: Introduce the skeleton of btrfs_backref_iter
  2020-04-02  1:01       ` David Sterba
@ 2020-04-02  1:27         ` Qu Wenruo
  0 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-04-02  1:27 UTC (permalink / raw)
  To: dsterba, linux-btrfs, Johannes Thumshirn, Josef Bacik



On 2020/4/2 9:01 AM, David Sterba wrote:
> On Thu, Apr 02, 2020 at 07:31:28AM +0800, Qu Wenruo wrote:
> On 2020/4/1 11:37 PM, David Sterba wrote:
>>> On Thu, Mar 26, 2020 at 04:32:38PM +0800, Qu Wenruo wrote:
>>>> --- a/fs/btrfs/backref.h
>>>> +++ b/fs/btrfs/backref.h
>>>> @@ -78,4 +78,43 @@ struct prelim_ref {
>>>>  	u64 wanted_disk_byte;
>>>>  };
>>>>  
>>>> +/*
>>>> + * Helper structure to help iterate backrefs of one extent.
>>>> + *
>>>> + * Now it only supports iteration for tree block in commit root.
>>>> + */
>>>> +struct btrfs_backref_iter {
>>>> +	u64 bytenr;
>>>> +	struct btrfs_path *path;
>>>> +	struct btrfs_fs_info *fs_info;
>>>> +	struct btrfs_key cur_key;
>>>> +	u32 item_ptr;
>>>> +	u32 cur_ptr;
>>>> +	u32 end_ptr;
>>>> +};
>>>> +
>>>> +struct btrfs_backref_iter *btrfs_backref_iter_alloc(
>>>> +		struct btrfs_fs_info *fs_info, gfp_t gfp_flag);
>>>> +
>>>> +static inline void btrfs_backref_iter_free(struct btrfs_backref_iter *iter)
>>>> +{
>>>> +	if (!iter)
>>>> +		return;
>>>> +	btrfs_free_path(iter->path);
>>>> +	kfree(iter);
>>>> +}
>>>
>>> Why do you make so many functions static inline? It makes sense for some
>>> of them, but in the following patches there are functions that are either
>>> too big (so when they're inlined it bloats the asm) or called
>>> infrequently, so the inlining does not bring much. Code in header files
>>> should be kept to a minimum.
>>
>> Most of them meet the requirement of being either too small, or too
>> infrequently called.
> 
> So the rules or recommendations I use to decide if a function should be
> static inline:
> 
> * it's like a macro with type checking
> * it results in a few instructions (where few is like 3-6)
> * the function is good for code readability, like for a helper that does
>   some checks and returns a result, or dereferences a few pointers, and
>   the function name is self-explaining

After re-checking backref.h, I still find that most (if not all) of these
inlined functions meet these conditions.

They are short, mostly just doing a basic check and then freeing up some
pointers.

> * there should be some performance reason where the function call would
>   be too costly, e.g. if the function is on a hot path or in a loop

This is my main concern.

The compiler can easily "uninline" functions if they are static inline
within the same C file.
But if we turn them into regular exported functions, then every call
will always be a real function call, and they can't be "inlined" back.

That's my major concern: an exported function is a barrier where the
compiler can't do its best to optimize.

Thus I agree with conditions 1~3, but not the 4th if we're handling
exported functions, as it's a one-directional optimization.

If there is some proof (though I don't believe there is) that the
compiler can uninline/inline such exported functions, then I'm
completely happy to make them regular functions.

> 
> And all of that can be irrelevant if the compiler does some fancy
> optimization, like function cloning, where it keeps one copy intact
> for the public interface and then inlines other copies, possibly
> applying more optimizations, e.g. based on parameters or some analysis
> that splits the function into hot and cold parts.
> 
> Unless we find some suboptimal result of compilation that could be fixed
> by static inlines, I tend to not use them besides the trivial cases that
> help code readability.

But these functions are small and self-explaining, and thus rarely get
read that much.

> 
>>> There are also functions not used anywhere other than in backref.c, so
>>> they don't need to be exported for now, for example
>>> btrfs_backref_iter_is_inline_ref.
>>
>> But it's used in later patches, thus I exported it to avoid having to
>> re-export it later.
> 
> I grepped the whole branch with the backref cache and assumed that if
> you introduce a function in the cleanup part, it would be used in the
> other one. But btrfs_backref_iter_is_inline_ref wasn't.

Oh, you're right. That function is only temporarily used.
It's introduced in patch 02, then utilized in patch 03, where the major
backref cache handling is still in relocation.c.

Then in patch 29 the backref cache code gets moved to backref.c, after
which no one uses that function outside of backref.c, thus it no longer
needs to be exported.

Would you mind unexporting it in patch 29 too?

Thanks,
Qu

> 
>>>> +int btrfs_backref_iter_start(struct btrfs_backref_iter *iter, u64 bytenr);
>>>> +
>>>> +static inline void
>>>> +btrfs_backref_iter_release(struct btrfs_backref_iter *iter)
>>>
>>> Please keep the function type and name on the same line, arguments can
>>> go to the next line.
>>
>> Forgot this one...
>>
>> Do I need to resend?
> 
> I fix such things when applying the patches, so for that reason alone it's
> not necessary to resend.
> 

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 17/39] btrfs: Rename tree_entry to simple_node and export it
  2020-04-02  1:09       ` David Sterba
@ 2020-04-02  1:32         ` Qu Wenruo
  0 siblings, 0 replies; 52+ messages in thread
From: Qu Wenruo @ 2020-04-02  1:32 UTC (permalink / raw)
  To: dsterba, linux-btrfs



On 2020/4/2 9:09 AM, David Sterba wrote:
> On Thu, Apr 02, 2020 at 07:40:29AM +0800, Qu Wenruo wrote:
>>>>  struct btrfs_backref_node {
>>>> -	struct rb_node rb_node;
>>>> -	u64 bytenr;
>>>> +	struct {
>>>> +		struct rb_node rb_node;
>>>> +		u64 bytenr;
>>>> +	}; /* Use simple_node for search/insert */
>>>
>>> Why is this an anonymous struct? This should be the simple_node as I see
>>> below, for some simple rb search API.
>>
>> If using simple_node, we need a ton of extra wrappers for things like
>> rb_entry(), rb_postorder_()
>>
>> Thus here we still want bytenr/rb_node directly embedded into the
>> structure.
>>
>> The ideal method would be an anonymous but typed structure.
>> Unfortunately, no C standard supports this.
> 
> My idea was to have something like this (simplified):
> 
> 	struct tree_node {
> 		struct rb_node node;
> 		u64 bytenr;
> 	};
> 
> 	struct backref_node {
> 		...
> 		struct tree_node cache_node;
> 		...
> 	};
> 
> 	struct backref_node bnode;
> 
> when the rb_node is needed, pass &bnode.cache_node.node. All the
> rb_* functions should work without adding another interface layer.

The problem is the function relocate_tree_blocks(), in which we call
rbtree_postorder_for_each_entry_safe().

If we use tree_node directly, we need to call container_of() again to
grab the tree_block structure, which almost defeats the purpose of
rbtree_postorder_for_each_entry_safe().

This also applies to rb_first() callers like free_block_list().
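
To sketch that difference (names like "cache" and free_node() here are
placeholders, not the real helpers):

  /* With the anonymous struct, rb_node sits directly in the containing
   * type, so the postorder macro yields it in one step: */
  struct btrfs_backref_node *node, *next;

  rbtree_postorder_for_each_entry_safe(node, next, &cache->rb_root,
                                       rb_node)
          free_node(node);

  /* With an embedded "struct tree_node cache_node" member, the
   * iteration lands on tree_node, and every entry needs an extra
   * container_of() hop to reach the owning structure: */
  struct tree_node *tn, *tnext;

  rbtree_postorder_for_each_entry_safe(tn, tnext, &cache->rb_root,
                                       node) {
          struct btrfs_backref_node *bnode =
                  container_of(tn, struct btrfs_backref_node,
                               cache_node);

          free_node(bnode);
  }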

> 
>>>>  	u64 new_bytenr;
>>>>  	/* objectid of tree block owner, can be not uptodate */
>>>> diff --git a/fs/btrfs/misc.h b/fs/btrfs/misc.h
>>>> index 72bab64ecf60..d199bfdb210e 100644
>>>> --- a/fs/btrfs/misc.h
>>>> +++ b/fs/btrfs/misc.h
>>>> @@ -6,6 +6,7 @@
>>>>  #include <linux/sched.h>
>>>>  #include <linux/wait.h>
>>>>  #include <asm/div64.h>
>>>> +#include <linux/rbtree.h>
>>>>  
>>>>  #define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
>>>>  
>>>> @@ -58,4 +59,57 @@ static inline bool has_single_bit_set(u64 n)
>>>>  	return is_power_of_two_u64(n);
>>>>  }
>>>>  
>>>> +/*
>>>> + * Simple bytenr based rb_tree relate structures
>>>> + *
>>>> + * Any structure wants to use bytenr as single search index should have their
>>>> + * structure start with these members.
>>>
>>> This is not very clean coding style, relying on particular placement and
>>> order in another struct.
>>
>> Order is not a problem, since we call container_of(), thus there is no
>> need for any particular order or placement.
>> A user can easily put rb_node at the end of the structure and bytenr at
>> the beginning, and everything still works.
>>
>> The anonymous structure is mostly here to inform callers that we're
>> using the simple_node structure.
>>
>>>
>>>> + */
>>>> +struct simple_node {
>>>> +	struct rb_node rb_node;
>>>> +	u64 bytenr;
>>>> +};
>>>> +
>>>> +static inline struct rb_node *simple_search(struct rb_root *root, u64 bytenr)
>>>
>>> simple_search is IMHO too vague; it's related to an rb-tree, so this
>>> could be reflected in the name somehow.
>>>
>>> I think it's ok if you do this as a middle step before making it a
>>> proper struct hook and API, but I don't like the end result as it's not
>>> really an improvement.
>>>
>> That's what I mean by "simple": it's really just a simple helper, not
>> even a full wrapper, for bytenr-based rb-tree search.
>>
>> Adding too many wrappers may simply kill the "simple" part.
>>
>> Although I have to admit that most of the simple_node part exists only
>> to reuse code across relocation.c and backref.c, since no other users
>> utilize such a simple facility.
>>
>> Any idea how to improve this situation? Or do we really need to go with
>> full wrappers?
> 
> If the above works we won't need to add more wrappers. But after some
> thinking I'm ok with the way you implemented it, as it will certainly
> clean up some things, and once it's merged we'll have another chance to
> look at the code and fix up only the structures.

Looking forward to better cleanups.

Thanks,
Qu

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (39 preceding siblings ...)
  2020-03-27 15:51 ` [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots David Sterba
@ 2020-04-02 16:18 ` David Sterba
  2020-04-03 15:44 ` David Sterba
  41 siblings, 0 replies; 52+ messages in thread
From: David Sterba @ 2020-04-02 16:18 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

I went through the patches; overall, the patch separation made the
review easy. Thanks.

There are some things that I fixed or updated.

On Thu, Mar 26, 2020 at 04:32:37PM +0800, Qu Wenruo wrote:
> Qu Wenruo (39):
>   btrfs: backref: Introduce the skeleton of btrfs_backref_iter
>   btrfs: backref: Implement btrfs_backref_iter_next()
>   btrfs: relocation: Use btrfs_backref_iter infrastructure
>   btrfs: relocation: Rename mark_block_processed() and
>     __mark_block_processed()
>   btrfs: relocation: Add backref_cache::pending_edge and
>     backref_cache::useless_node members
>   btrfs: relocation: Add backref_cache::fs_info member
>   btrfs: relocation: Make reloc root search specific for relocation
>     backref cache
>   btrfs: relocation: Refactor direct tree backref processing into its
>     own function
>   btrfs: relocation: Refactor indirect tree backref processing into its
>     own function
>   btrfs: relocation: Use wrapper to replace open-coded edge linking
>   btrfs: relocation: Specify essential members for alloc_backref_node()
>   btrfs: relocation: Remove the open-coded goto loop for breadth-first
>     search
>   btrfs: relocation: Refactor the finishing part of upper linkage into
>     finish_upper_links()
>   btrfs: relocation: Refactor the useless nodes handling into its own
>     function
>   btrfs: relocation: Add btrfs_ prefix for backref_node/edge/cache
>   btrfs: Move btrfs_backref_(node|edge|cache) structures to backref.h
>   btrfs: Rename tree_entry to simple_node and export it
>   btrfs: Rename backref_cache_init() to btrfs_backref_cache_init() and
>     move it to backref.c
>   btrfs: Rename alloc_backref_node() to btrfs_backref_alloc_node() and
>     move it backref.c
>   btrfs: Rename alloc_backref_edge() to btrfs_backref_alloc_edge() and
>     move it backref.c
>   btrfs: Rename link_backref_edge() to btrfs_backref_link_edge() and
>     move it backref.h
>   btrfs: Rename free_backref_(node|edge) to
>     btrfs_backref_free_(node|edge) and move them to backref.h
>   btrfs: Rename drop_backref_node() to btrfs_backref_drop_node() and
>     move its needed facilities to backref.h
>   btrfs: Rename remove_backref_node() to btrfs_backref_cleanup_node()
>     and move it to backref.c
>   btrfs: Rename backref_cache_cleanup() to btrfs_backref_release_cache()
>     and move it to backref.c
>   btrfs: Rename backref_tree_panic() to btrfs_backref_panic(), and move
>     it to backref.c
>   btrfs: Rename should_ignore_root() to btrfs_should_ignore_reloc_root()
>     and export it
>   btrfs: relocation: Open-code read_fs_root() for
>     handle_indirect_tree_backref()
>   btrfs: Rename handle_one_tree_block() to btrfs_backref_add_tree_node()
>     and move it to backref.c
>   btrfs: Rename finish_upper_links() to
>     btrfs_backref_finish_upper_links() and move it to backref.c
>   btrfs: relocation: Move error handling of build_backref_tree() to
>     backref.c
>   btrfs: backref: Only ignore reloc roots for indrect backref resolve if
>     the backref cache is for reloction purpose

This subject line is way too long, and it's also quite hard to grasp
what the patch is actually doing. The other subjects about moving
functions are too long as well. I understand you want to put the new
name there too, but IMHO it's not necessary: when the function is
'renamed and moved', the details are in the patch. So the final list of
subject lines I arrived at:

  btrfs: backref: introduce the skeleton of btrfs_backref_iter
  btrfs: backref: implement btrfs_backref_iter_next()
  btrfs: reloc: use btrfs_backref_iter infrastructure
  btrfs: reloc: rename mark_block_processed and __mark_block_processed
  btrfs: reloc: add backref_cache::pending_edge and backref_cache::useless_node
  btrfs: reloc: add backref_cache::fs_info member
  btrfs: reloc: make reloc root search-specific for relocation backref cache
  btrfs: reloc: refactor direct tree backref processing into its own function
  btrfs: reloc: refactor indirect tree backref processing into its own function
  btrfs: reloc: use wrapper to replace open-coded edge linking
  btrfs: reloc: pass essential members for alloc_backref_node()
  btrfs: reloc: remove the open-coded goto loop for breadth-first search
  btrfs: reloc: refactor finishing part of upper linkage into finish_upper_links()
  btrfs: reloc: refactor useless nodes handling into its own function
  btrfs: reloc: add btrfs_ prefix for backref_node/edge/cache
  btrfs: move btrfs_backref_(node|edge|cache) structures to backref.h
  btrfs: rename tree_entry to simple_node and export it
  btrfs: rename and move backref_cache_init()
  btrfs: rename and move alloc_backref_node()
  btrfs: rename and move alloc_backref_edge()
  btrfs: rename and move link_backref_edge()
  btrfs: rename and move free_backref_(node|edge)
  btrfs: rename and move drop_backref_node()
  btrfs: rename and move remove_backref_node()
  btrfs: rename and move backref_cache_cleanup()
  btrfs: rename and move backref_tree_panic()
  btrfs: rename and move should_ignore_root()
  btrfs: reloc: open-code read_fs_root() for handle_indirect_tree_backref()
  btrfs: rename and move handle_one_tree_block()
  btrfs: rename and move finish_upper_links()
  btrfs: reloc: move error handling of build_backref_tree() to backref.c
  btrfs: backref: distinguish reloc and non-reloc use of indirect resolution

For a cleanup series I'd really like to see more focus on making the
code also look better, namely when comments are moved or updated.

There's a common style that the old comments don't follow, e.g. no
capital letter at the beginning, or not using the full line width. There
are also grammar mistakes and spelling typos. It's ok to fix those on
the fly.

The function comments should go to the .c file, not the headers (eg.
btrfs_backref_finish_upper_links, btrfs_backref_add_tree_node,
btrfs_backref_cleanup_node).

When you add something to the end of a header file, please keep an empty
line before the last #endif.

For the static inlines I want to do another round; most of them are
acceptable, so I'll look for some clear examples where they're misused. A
quick grep over the code base shows there are many, so it would be a
wider cleanup.

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots
  2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
                   ` (40 preceding siblings ...)
  2020-04-02 16:18 ` David Sterba
@ 2020-04-03 15:44 ` David Sterba
  41 siblings, 0 replies; 52+ messages in thread
From: David Sterba @ 2020-04-03 15:44 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Thu, Mar 26, 2020 at 04:32:37PM +0800, Qu Wenruo wrote:
> Qu Wenruo (39):
>   btrfs: backref: Introduce the skeleton of btrfs_backref_iter
>   btrfs: backref: Implement btrfs_backref_iter_next()
>   btrfs: relocation: Use btrfs_backref_iter infrastructure
>   btrfs: relocation: Rename mark_block_processed() and
>     __mark_block_processed()
>   btrfs: relocation: Add backref_cache::pending_edge and
>     backref_cache::useless_node members
>   btrfs: relocation: Add backref_cache::fs_info member
>   btrfs: relocation: Make reloc root search specific for relocation
>     backref cache
>   btrfs: relocation: Refactor direct tree backref processing into its
>     own function
>   btrfs: relocation: Refactor indirect tree backref processing into its
>     own function
>   btrfs: relocation: Use wrapper to replace open-coded edge linking
>   btrfs: relocation: Specify essential members for alloc_backref_node()
>   btrfs: relocation: Remove the open-coded goto loop for breadth-first
>     search
>   btrfs: relocation: Refactor the finishing part of upper linkage into
>     finish_upper_links()
>   btrfs: relocation: Refactor the useless nodes handling into its own
>     function
>   btrfs: relocation: Add btrfs_ prefix for backref_node/edge/cache
>   btrfs: Move btrfs_backref_(node|edge|cache) structures to backref.h
>   btrfs: Rename tree_entry to simple_node and export it
>   btrfs: Rename backref_cache_init() to btrfs_backref_cache_init() and
>     move it to backref.c
>   btrfs: Rename alloc_backref_node() to btrfs_backref_alloc_node() and
>     move it backref.c
>   btrfs: Rename alloc_backref_edge() to btrfs_backref_alloc_edge() and
>     move it backref.c
>   btrfs: Rename link_backref_edge() to btrfs_backref_link_edge() and
>     move it backref.h
>   btrfs: Rename free_backref_(node|edge) to
>     btrfs_backref_free_(node|edge) and move them to backref.h
>   btrfs: Rename drop_backref_node() to btrfs_backref_drop_node() and
>     move its needed facilities to backref.h
>   btrfs: Rename remove_backref_node() to btrfs_backref_cleanup_node()
>     and move it to backref.c
>   btrfs: Rename backref_cache_cleanup() to btrfs_backref_release_cache()
>     and move it to backref.c
>   btrfs: Rename backref_tree_panic() to btrfs_backref_panic(), and move
>     it to backref.c
>   btrfs: Rename should_ignore_root() to btrfs_should_ignore_reloc_root()
>     and export it
>   btrfs: relocation: Open-code read_fs_root() for
>     handle_indirect_tree_backref()
>   btrfs: Rename handle_one_tree_block() to btrfs_backref_add_tree_node()
>     and move it to backref.c
>   btrfs: Rename finish_upper_links() to
>     btrfs_backref_finish_upper_links() and move it to backref.c
>   btrfs: relocation: Move error handling of build_backref_tree() to
>     backref.c
>   btrfs: backref: Only ignore reloc roots for indrect backref resolve if
>     the backref cache is for reloction purpose

Patches 1-32 are in misc-next.

^ permalink raw reply	[flat|nested] 52+ messages in thread

end of thread, other threads:[~2020-04-03 15:45 UTC | newest]

Thread overview: 52+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-03-26  8:32 [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 01/39] btrfs: backref: Introduce the skeleton of btrfs_backref_iter Qu Wenruo
2020-04-01 15:37   ` David Sterba
2020-04-01 23:31     ` Qu Wenruo
2020-04-02  1:01       ` David Sterba
2020-04-02  1:27         ` Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 02/39] btrfs: backref: Implement btrfs_backref_iter_next() Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 03/39] btrfs: relocation: Use btrfs_backref_iter infrastructure Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 04/39] btrfs: relocation: Rename mark_block_processed() and __mark_block_processed() Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 05/39] btrfs: relocation: Add backref_cache::pending_edge and backref_cache::useless_node members Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 06/39] btrfs: relocation: Add backref_cache::fs_info member Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 07/39] btrfs: relocation: Make reloc root search specific for relocation backref cache Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 08/39] btrfs: relocation: Refactor direct tree backref processing into its own function Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 09/39] btrfs: relocation: Refactor indirect " Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 10/39] btrfs: relocation: Use wrapper to replace open-coded edge linking Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 11/39] btrfs: relocation: Specify essential members for alloc_backref_node() Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 12/39] btrfs: relocation: Remove the open-coded goto loop for breadth-first search Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 13/39] btrfs: relocation: Refactor the finishing part of upper linkage into finish_upper_links() Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 14/39] btrfs: relocation: Refactor the useless nodes handling into its own function Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 15/39] btrfs: relocation: Add btrfs_ prefix for backref_node/edge/cache Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 16/39] btrfs: Move btrfs_backref_(node|edge|cache) structures to backref.h Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 17/39] btrfs: Rename tree_entry to simple_node and export it Qu Wenruo
2020-04-01 15:48   ` David Sterba
2020-04-01 23:40     ` Qu Wenruo
2020-04-02  0:52       ` Qu Wenruo
2020-04-02  1:09       ` David Sterba
2020-04-02  1:32         ` Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 18/39] btrfs: Rename backref_cache_init() to btrfs_backref_cache_init() and move it to backref.c Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 19/39] btrfs: Rename alloc_backref_node() to btrfs_backref_alloc_node() and move it backref.c Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 20/39] btrfs: Rename alloc_backref_edge() to btrfs_backref_alloc_edge() " Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 21/39] btrfs: Rename link_backref_edge() to btrfs_backref_link_edge() and move it backref.h Qu Wenruo
2020-03-26  8:32 ` [PATCH v2 22/39] btrfs: Rename free_backref_(node|edge) to btrfs_backref_free_(node|edge) and move them to backref.h Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 23/39] btrfs: Rename drop_backref_node() to btrfs_backref_drop_node() and move its needed facilities " Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 24/39] btrfs: Rename remove_backref_node() to btrfs_backref_cleanup_node() and move it to backref.c Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 25/39] btrfs: Rename backref_cache_cleanup() to btrfs_backref_release_cache() " Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 26/39] btrfs: Rename backref_tree_panic() to btrfs_backref_panic(), " Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 27/39] btrfs: Rename should_ignore_root() to btrfs_should_ignore_reloc_root() and export it Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 28/39] btrfs: relocation: Open-code read_fs_root() for handle_indirect_tree_backref() Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 29/39] btrfs: Rename handle_one_tree_block() to btrfs_backref_add_tree_node() and move it to backref.c Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 30/39] btrfs: Rename finish_upper_links() to btrfs_backref_finish_upper_links() " Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 31/39] btrfs: relocation: Move error handling of build_backref_tree() " Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 32/39] btrfs: backref: Only ignore reloc roots for indrect backref resolve if the backref cache is for reloction purpose Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 33/39] btrfs: qgroup: Introduce qgroup backref cache Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 34/39] btrfs: qgroup: Introduce qgroup_backref_cache_build() function Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 35/39] btrfs: qgroup: Introduce a function to iterate through backref_cache to find all parents for specified node Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 36/39] btrfs: qgroup: Introduce helpers to get needed tree block info Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 37/39] btrfs: qgroup: Introduce verification for function to ensure old roots ulist matches btrfs_find_all_roots() result Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 38/39] btrfs: qgroup: Introduce a new function to get old_roots ulist using backref cache Qu Wenruo
2020-03-26  8:33 ` [PATCH v2 39/39] btrfs: qgroup: Use backref cache to speed up old_roots search Qu Wenruo
2020-03-27 15:51 ` [PATCH v2 00/39] btrfs: qgroup: Use backref cache based backref walk for commit roots David Sterba
2020-04-02 16:18 ` David Sterba
2020-04-03 15:44 ` David Sterba

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.