Linux-BTRFS Archive on lore.kernel.org
* [PATCH 0/3] Btrfs: relocation: Refactor build_backref_tree() using btrfs_backref_iterator infrastructure
@ 2020-02-14  6:32 Qu Wenruo
  2020-02-14  6:33 ` [PATCH 1/3] btrfs: backref: Introduce the skeleton of btrfs_backref_iterator Qu Wenruo
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Qu Wenruo @ 2020-02-14  6:32 UTC (permalink / raw)
  To: linux-btrfs

This is part 1 of the upcoming refactor patches for build_backref_tree().

[THE PLAN]
The overall plan of refactoring build_backref_tree() is:
- Refactor how we iterate through backref items
  This patchset, probably the smallest part.

- Make build_backref_tree() easier to read.
  In short, that function does a breadth-first search to build a map
  from one tree block to all the root nodes referring to it.

  It involves the backref iteration part and a lot of backref-cache-only
  work.
  At least I hope to make this function less bulky and more structured.

- Make build_backref_tree() independent from relocation
  Probably the hardest part.

  Currently it even accepts reloc_control as its first parameter.
  I don't have a clear plan yet, but at the very least
  build_backref_tree() should cover the generic cases, instead of taking
  relocation-dependent shortcuts.

[THIS PATCHSET]
For the patchset itself, the main purpose is to change how we iterate
through all backref items of one tree block.

The old way:

  path->search_commit_root = 1;
  path->skip_locking = 1;
  ret = btrfs_search_slot(NULL, extent_root, path, &key, 0, 0);
  ptr = btrfs_item_ptr_offset(leaf, slot);
  end = btrfs_item_end_nr(leaf, slot);
  /* Inline item loop */
  while (ptr < end) {
	/* Handle each inline item here */
  }
  while (1) {
  	ret = btrfs_next_item();
	btrfs_item_key_to_cpu();
	if (key.objectid != bytenr ||
	    !(key.type == XXX || key.type == YYY))
		break;
	/* Handle each keyed item here */
  }
  
The new way:

  iterator = btrfs_backref_iterator_alloc();
  for (ret = btrfs_backref_iterator_start(iterator, bytenr);
       ret == 0; ret = btrfs_backref_iterator_next(iterator)) {
	/*
	 * Handle both keyed and inlined item here.
	 *
	 * We can use iterator->key to determine if it's inlined or
	 * keyed.
	 * Even for inlined item, it can be easily converted to keyed
	 * item, just like we did in build_backref_tree().
	 */
  }

Currently, only build_backref_tree() can utilize this infrastructure.

backref.c has more requirements, as it needs to handle iteration for
both data and metadata, and for both the commit root and the current
root.
More importantly, backref.c uses depth-first search, so it is not a
perfect match for btrfs_backref_iterator.

Naming suggestions are welcome.
The current naming, btrfs_backref_iterator_*, already looks pretty long
to me; shorter names would be much better.

Qu Wenruo (3):
  btrfs: backref: Introduce the skeleton of btrfs_backref_iterator
  btrfs: backref: Implement btrfs_backref_iterator_next()
  btrfs: relocation: Use btrfs_backref_iterator infrastructure

 fs/btrfs/backref.c    | 106 +++++++++++++++++++++++
 fs/btrfs/backref.h    |  96 +++++++++++++++++++++
 fs/btrfs/relocation.c | 193 ++++++++++++++----------------------------
 3 files changed, 264 insertions(+), 131 deletions(-)

-- 
2.25.0


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH 1/3] btrfs: backref: Introduce the skeleton of btrfs_backref_iterator
  2020-02-14  6:32 [PATCH 0/3] Btrfs: relocation: Refactor build_backref_tree() using btrfs_backref_iterator infrastructure Qu Wenruo
@ 2020-02-14  6:33 ` Qu Wenruo
  2020-02-14  6:33 ` [PATCH 2/3] btrfs: backref: Implement btrfs_backref_iterator_next() Qu Wenruo
  2020-02-14  6:33 ` [PATCH 3/3] btrfs: relocation: Use btrfs_backref_iterator infrastructure Qu Wenruo
  2 siblings, 0 replies; 6+ messages in thread
From: Qu Wenruo @ 2020-02-14  6:33 UTC (permalink / raw)
  To: linux-btrfs

Due to the complex nature of the btrfs extent tree, iterating all
backrefs of one extent involves quite a lot of work, like searching for
the EXTENT_ITEM/METADATA_ITEM and iterating through inline and keyed
backrefs.

Normally this results in pretty complex code, something like:
  btrfs_search_slot()
  /* Ensure we are at EXTENT_ITEM/METADATA_ITEM */
  while (1) {	/* Loop for extent tree items */
	while (ptr < end) { /* Loop for inlined items */
		/* REAL WORK HERE */
	}
  next:
  	ret = btrfs_next_item()
	/* Ensure we're still at keyed item for specified bytenr */
  }

The idea of btrfs_backref_iterator is to avoid such a complex and
hard-to-read code structure, and use something like the following instead:

  iterator = btrfs_backref_iterator_alloc();
  ret = btrfs_backref_iterator_start(iterator, bytenr);
  if (ret < 0)
	goto out;
  for (; ret == 0; ret = btrfs_backref_iterator_next(iterator)) {
	/* REAL WORK HERE */
  }
  out:
  btrfs_backref_iterator_free(iterator);

This patch is just the skeleton + btrfs_backref_iterator_start() code.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c | 58 +++++++++++++++++++++++++++++++++++
 fs/btrfs/backref.h | 76 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 134 insertions(+)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index e5d85311d5d5..73c4829609c9 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2252,3 +2252,61 @@ void free_ipath(struct inode_fs_paths *ipath)
 	kvfree(ipath->fspath);
 	kfree(ipath);
 }
+
+int btrfs_backref_iterator_start(struct btrfs_backref_iterator *iterator,
+				 u64 bytenr)
+{
+	struct btrfs_fs_info *fs_info = iterator->fs_info;
+	struct btrfs_path *path = iterator->path;
+	struct btrfs_extent_item *ei;
+	struct btrfs_key key;
+	int ret;
+
+	key.objectid = bytenr;
+	key.type = BTRFS_METADATA_ITEM_KEY;
+	key.offset = (u64)-1;
+
+	ret = btrfs_search_slot(NULL, fs_info->extent_root, &key, path, 0, 0);
+	if (ret < 0)
+		return ret;
+	if (ret == 0) {
+		ret = -EUCLEAN;
+		goto release;
+	}
+	if (path->slots[0] == 0) {
+		WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
+		ret = -EUCLEAN;
+		goto release;
+	}
+	path->slots[0]--;
+
+	btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]);
+	if (!(key.type == BTRFS_EXTENT_ITEM_KEY ||
+	      key.type == BTRFS_METADATA_ITEM_KEY) || key.objectid != bytenr) {
+		ret = -ENOENT;
+		goto release;
+	}
+	memcpy(&iterator->cur_key, &key, sizeof(key));
+	iterator->end_ptr = btrfs_item_end_nr(path->nodes[0], path->slots[0]);
+	iterator->item_ptr = btrfs_item_ptr_offset(path->nodes[0],
+						   path->slots[0]);
+	ei = btrfs_item_ptr(path->nodes[0], path->slots[0],
+			    struct btrfs_extent_item);
+
+	/* Only support iteration on tree backrefs for now */
+	if (btrfs_extent_flags(path->nodes[0], ei) & BTRFS_EXTENT_FLAG_DATA) {
+		ret = -ENOTTY;
+		goto release;
+	}
+	iterator->cur_ptr = iterator->item_ptr + sizeof(*ei);
+	return 0;
+release:
+	iterator->end_ptr = 0;
+	iterator->cur_ptr = 0;
+	iterator->item_ptr = 0;
+	iterator->cur_key.objectid = 0;
+	iterator->cur_key.type = 0;
+	iterator->cur_key.offset = 0;
+	btrfs_release_path(path);
+	return ret;
+}
diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index 777f61dc081e..b53301f2147f 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -74,4 +74,80 @@ struct prelim_ref {
 	u64 wanted_disk_byte;
 };
 
+/*
+ * Helper structure to help iterate backrefs of one extent.
+ *
+ * For now it only supports iterating tree block backrefs in the commit root.
+ */
+struct btrfs_backref_iterator {
+	u64 bytenr;
+	struct btrfs_path *path;
+	struct btrfs_fs_info *fs_info;
+	struct btrfs_key cur_key;
+	unsigned long item_ptr;
+	unsigned long cur_ptr;
+	unsigned long end_ptr;
+};
+
+static inline struct btrfs_backref_iterator *
+btrfs_backref_iterator_alloc(struct btrfs_fs_info *fs_info, gfp_t gfp_flag)
+{
+	struct btrfs_backref_iterator *ret;
+
+	ret = kzalloc(sizeof(*ret), gfp_flag);
+	if (!ret)
+		return NULL;
+
+	ret->path = btrfs_alloc_path();
+	if (!ret->path) {
+		kfree(ret);
+		return NULL;
+	}
+
+	/* Current backref iterator only supports iteration in commit root */
+	ret->path->search_commit_root = 1;
+	ret->path->skip_locking = 1;
+	ret->path->reada = READA_FORWARD;
+	ret->fs_info = fs_info;
+
+	return ret;
+}
+
+static inline void btrfs_backref_iterator_free(
+		struct btrfs_backref_iterator *iterator)
+{
+	if (!iterator)
+		return;
+	btrfs_free_path(iterator->path);
+	kfree(iterator);
+}
+
+static inline struct extent_buffer *
+btrfs_backref_get_eb(struct btrfs_backref_iterator *iterator)
+{
+	if (!iterator)
+		return NULL;
+	return iterator->path->nodes[0];
+}
+
+/*
+ * For metadata with EXTENT_ITEM key (non-skinny) case, the first inline data
+ * is btrfs_tree_block_info, without a btrfs_extent_inline_ref header.
+ *
+ * This helper is here to determine if that's the case.
+ */
+static inline bool btrfs_backref_has_tree_block_info(
+		struct btrfs_backref_iterator *iterator)
+{
+	if (iterator->cur_key.type == BTRFS_EXTENT_ITEM_KEY &&
+	    iterator->cur_ptr - iterator->item_ptr ==
+	    sizeof(struct btrfs_extent_item))
+		return true;
+	return false;
+}
+
+int btrfs_backref_iterator_start(struct btrfs_backref_iterator *iterator,
+				 u64 bytenr);
+int btrfs_backref_iterator_next(struct btrfs_backref_iterator *iterator);
+
 #endif
-- 
2.25.0



* [PATCH 2/3] btrfs: backref: Implement btrfs_backref_iterator_next()
  2020-02-14  6:32 [PATCH 0/3] Btrfs: relocation: Refactor build_backref_tree() using btrfs_backref_iterator infrastructure Qu Wenruo
  2020-02-14  6:33 ` [PATCH 1/3] btrfs: backref: Introduce the skeleton of btrfs_backref_iterator Qu Wenruo
@ 2020-02-14  6:33 ` Qu Wenruo
  2020-02-14  7:25   ` Qu Wenruo
  2020-02-14  6:33 ` [PATCH 3/3] btrfs: relocation: Use btrfs_backref_iterator infrastructure Qu Wenruo
  2 siblings, 1 reply; 6+ messages in thread
From: Qu Wenruo @ 2020-02-14  6:33 UTC (permalink / raw)
  To: linux-btrfs

This function advances to the next inline/keyed backref for the
btrfs_backref_iterator infrastructure.

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 73c4829609c9..0f231cd69675 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -2310,3 +2310,51 @@ int btrfs_backref_iterator_start(struct btrfs_backref_iterator *iterator,
 	btrfs_release_path(path);
 	return ret;
 }
+
+int btrfs_backref_iterator_next(struct btrfs_backref_iterator *iterator)
+{
+	struct extent_buffer *eb = btrfs_backref_get_eb(iterator);
+	struct btrfs_path *path = iterator->path;
+	struct btrfs_extent_inline_ref *iref;
+	int ret;
+	u32 size;
+
+	if (iterator->cur_key.type == BTRFS_EXTENT_ITEM_KEY ||
+	    iterator->cur_key.type == BTRFS_METADATA_ITEM_KEY) {
+		/* We're still inside the inline refs */
+		if (btrfs_backref_has_tree_block_info(iterator)) {
+			/* First tree block info */
+			size = sizeof(struct btrfs_tree_block_info);
+		} else {
+			/* Use inline ref type to determine the size */
+			int type;
+
+			iref = (struct btrfs_extent_inline_ref *)
+				(iterator->cur_ptr);
+			type = btrfs_extent_inline_ref_type(eb, iref);
+
+			size = btrfs_extent_inline_ref_size(type);
+		}
+		iterator->cur_ptr += size;
+		if (iterator->cur_ptr < iterator->end_ptr)
+			return 0;
+
+		/* All inline items iterated, fall through */
+	}
+	/* We're at keyed items, there is no inline item, just go next item */
+	ret = btrfs_next_item(iterator->fs_info->extent_root, iterator->path);
+	if (ret)
+		return ret;
+
+	btrfs_item_key_to_cpu(path->nodes[0], &iterator->cur_key,
+			      path->slots[0]);
+	if (iterator->cur_key.objectid != iterator->bytenr ||
+	    (iterator->cur_key.type != BTRFS_TREE_BLOCK_REF_KEY &&
+	     iterator->cur_key.type != BTRFS_SHARED_BLOCK_REF_KEY))
+		return 1;
+	iterator->item_ptr = btrfs_item_ptr_offset(path->nodes[0],
+						   path->slots[0]);
+	iterator->cur_ptr = iterator->item_ptr;
+	iterator->end_ptr = btrfs_item_end_nr(path->nodes[0], path->slots[0]);
+	return 0;
+}
-- 
2.25.0



* [PATCH 3/3] btrfs: relocation: Use btrfs_backref_iterator infrastructure
  2020-02-14  6:32 [PATCH 0/3] Btrfs: relocation: Refactor build_backref_tree() using btrfs_backref_iterator infrastructure Qu Wenruo
  2020-02-14  6:33 ` [PATCH 1/3] btrfs: backref: Introduce the skeleton of btrfs_backref_iterator Qu Wenruo
  2020-02-14  6:33 ` [PATCH 2/3] btrfs: backref: Implement btrfs_backref_iterator_next() Qu Wenruo
@ 2020-02-14  6:33 ` Qu Wenruo
  2020-02-14  6:56   ` Qu Wenruo
  2 siblings, 1 reply; 6+ messages in thread
From: Qu Wenruo @ 2020-02-14  6:33 UTC (permalink / raw)
  To: linux-btrfs

The core function of relocation, build_backref_tree(), needs to
iterate all backref items of one tree block.

We don't really want to spend our code and reviewers' time going
through tons of supporting code just for the backref walk.

Use the btrfs_backref_iterator infrastructure to do the loop.

The backref item loop becomes much easier to read:

	ret = btrfs_backref_iterator_start(iterator, cur->bytenr);
	for (; ret == 0; ret = btrfs_backref_iterator_next(iterator)) {
		/* The really important work */
	}

Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/backref.h    |  20 +++++
 fs/btrfs/relocation.c | 193 ++++++++++++++----------------------------
 2 files changed, 82 insertions(+), 131 deletions(-)

diff --git a/fs/btrfs/backref.h b/fs/btrfs/backref.h
index b53301f2147f..016999339be1 100644
--- a/fs/btrfs/backref.h
+++ b/fs/btrfs/backref.h
@@ -150,4 +150,24 @@ int btrfs_backref_iterator_start(struct btrfs_backref_iterator *iterator,
 				 u64 bytenr);
 int btrfs_backref_iterator_next(struct btrfs_backref_iterator *iterator);
 
+static inline bool
+btrfs_backref_iterator_is_inline_ref(struct btrfs_backref_iterator *iterator)
+{
+	if (iterator->cur_key.type == BTRFS_EXTENT_ITEM_KEY ||
+	    iterator->cur_key.type == BTRFS_METADATA_ITEM_KEY)
+		return true;
+	return false;
+}
+
+static inline void
+btrfs_backref_iterator_release(struct btrfs_backref_iterator *iterator)
+{
+	iterator->bytenr = 0;
+	iterator->item_ptr = 0;
+	iterator->cur_ptr = 0;
+	iterator->end_ptr = 0;
+	btrfs_release_path(iterator->path);
+	memset(&iterator->cur_key, 0, sizeof(iterator->cur_key));
+}
+
 #endif
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index b1365a516a25..21e4482c8232 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -22,6 +22,7 @@
 #include "print-tree.h"
 #include "delalloc-space.h"
 #include "block-group.h"
+#include "backref.h"
 
 /*
  * backref_node, mapping_node and tree_block start with this
@@ -604,48 +605,6 @@ static struct btrfs_root *read_fs_root(struct btrfs_fs_info *fs_info,
 	return btrfs_get_fs_root(fs_info, &key, false);
 }
 
-static noinline_for_stack
-int find_inline_backref(struct extent_buffer *leaf, int slot,
-			unsigned long *ptr, unsigned long *end)
-{
-	struct btrfs_key key;
-	struct btrfs_extent_item *ei;
-	struct btrfs_tree_block_info *bi;
-	u32 item_size;
-
-	btrfs_item_key_to_cpu(leaf, &key, slot);
-
-	item_size = btrfs_item_size_nr(leaf, slot);
-	if (item_size < sizeof(*ei)) {
-		btrfs_print_v0_err(leaf->fs_info);
-		btrfs_handle_fs_error(leaf->fs_info, -EINVAL, NULL);
-		return 1;
-	}
-	ei = btrfs_item_ptr(leaf, slot, struct btrfs_extent_item);
-	WARN_ON(!(btrfs_extent_flags(leaf, ei) &
-		  BTRFS_EXTENT_FLAG_TREE_BLOCK));
-
-	if (key.type == BTRFS_EXTENT_ITEM_KEY &&
-	    item_size <= sizeof(*ei) + sizeof(*bi)) {
-		WARN_ON(item_size < sizeof(*ei) + sizeof(*bi));
-		return 1;
-	}
-	if (key.type == BTRFS_METADATA_ITEM_KEY &&
-	    item_size <= sizeof(*ei)) {
-		WARN_ON(item_size < sizeof(*ei));
-		return 1;
-	}
-
-	if (key.type == BTRFS_EXTENT_ITEM_KEY) {
-		bi = (struct btrfs_tree_block_info *)(ei + 1);
-		*ptr = (unsigned long)(bi + 1);
-	} else {
-		*ptr = (unsigned long)(ei + 1);
-	}
-	*end = (unsigned long)ei + item_size;
-	return 0;
-}
-
 /*
  * build backref tree for a given tree block. root of the backref tree
  * corresponds the tree block, leaves of the backref tree correspond
@@ -665,10 +624,9 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 					struct btrfs_key *node_key,
 					int level, u64 bytenr)
 {
+	struct btrfs_backref_iterator *iterator;
 	struct backref_cache *cache = &rc->backref_cache;
-	struct btrfs_path *path1; /* For searching extent root */
-	struct btrfs_path *path2; /* For searching parent of TREE_BLOCK_REF */
-	struct extent_buffer *eb;
+	struct btrfs_path *path; /* For searching parent of TREE_BLOCK_REF */
 	struct btrfs_root *root;
 	struct backref_node *cur;
 	struct backref_node *upper;
@@ -677,9 +635,6 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	struct backref_node *exist = NULL;
 	struct backref_edge *edge;
 	struct rb_node *rb_node;
-	struct btrfs_key key;
-	unsigned long end;
-	unsigned long ptr;
 	LIST_HEAD(list); /* Pending edge list, upper node needs to be checked */
 	LIST_HEAD(useless);
 	int cowonly;
@@ -687,14 +642,14 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	int err = 0;
 	bool need_check = true;
 
-	path1 = btrfs_alloc_path();
-	path2 = btrfs_alloc_path();
-	if (!path1 || !path2) {
+	iterator = btrfs_backref_iterator_alloc(rc->extent_root->fs_info,
+						GFP_NOFS);
+	path = btrfs_alloc_path();
+	if (!iterator || !path) {
 		err = -ENOMEM;
 		goto out;
 	}
-	path1->reada = READA_FORWARD;
-	path2->reada = READA_FORWARD;
+	path->reada = READA_FORWARD;
 
 	node = alloc_backref_node(cache);
 	if (!node) {
@@ -707,25 +662,28 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 	node->lowest = 1;
 	cur = node;
 again:
-	end = 0;
-	ptr = 0;
-	key.objectid = cur->bytenr;
-	key.type = BTRFS_METADATA_ITEM_KEY;
-	key.offset = (u64)-1;
-
-	path1->search_commit_root = 1;
-	path1->skip_locking = 1;
-	ret = btrfs_search_slot(NULL, rc->extent_root, &key, path1,
-				0, 0);
+	ret = btrfs_backref_iterator_start(iterator, cur->bytenr);
 	if (ret < 0) {
 		err = ret;
 		goto out;
 	}
-	ASSERT(ret);
-	ASSERT(path1->slots[0]);
-
-	path1->slots[0]--;
 
+	/*
+	 * We skip the first btrfs_tree_block_info, as we don't use the key
+	 * stored in it, but fetch it from the tree block.
+	 */
+	if (btrfs_backref_has_tree_block_info(iterator)) {
+		ret = btrfs_backref_iterator_next(iterator);
+		if (ret < 0) {
+			err = ret;
+			goto out;
+		}
+		/* No extra backref? This means the tree block is corrupted */
+		if (ret > 0) {
+			err = -EUCLEAN;
+			goto out;
+		}
+	}
 	WARN_ON(cur->checked);
 	if (!list_empty(&cur->upper)) {
 		/*
@@ -747,42 +705,21 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		exist = NULL;
 	}
 
-	while (1) {
-		cond_resched();
-		eb = path1->nodes[0];
-
-		if (ptr >= end) {
-			if (path1->slots[0] >= btrfs_header_nritems(eb)) {
-				ret = btrfs_next_leaf(rc->extent_root, path1);
-				if (ret < 0) {
-					err = ret;
-					goto out;
-				}
-				if (ret > 0)
-					break;
-				eb = path1->nodes[0];
-			}
+	for (; ret == 0; ret = btrfs_backref_iterator_next(iterator)) {
+		struct extent_buffer *eb;
+		struct btrfs_key key;
+		int type;
 
-			btrfs_item_key_to_cpu(eb, &key, path1->slots[0]);
-			if (key.objectid != cur->bytenr) {
-				WARN_ON(exist);
-				break;
-			}
+		cond_resched();
+		eb = btrfs_backref_get_eb(iterator);
 
-			if (key.type == BTRFS_EXTENT_ITEM_KEY ||
-			    key.type == BTRFS_METADATA_ITEM_KEY) {
-				ret = find_inline_backref(eb, path1->slots[0],
-							  &ptr, &end);
-				if (ret)
-					goto next;
-			}
-		}
+		key.objectid = iterator->bytenr;
+		if (btrfs_backref_iterator_is_inline_ref(iterator)) {
+			struct btrfs_extent_inline_ref *iref;
 
-		if (ptr < end) {
 			/* update key for inline back ref */
-			struct btrfs_extent_inline_ref *iref;
-			int type;
-			iref = (struct btrfs_extent_inline_ref *)ptr;
+			iref = (struct btrfs_extent_inline_ref *)
+				iterator->cur_ptr;
 			type = btrfs_get_extent_inline_ref_type(eb, iref,
 							BTRFS_REF_TYPE_BLOCK);
 			if (type == BTRFS_REF_TYPE_INVALID) {
@@ -791,9 +728,9 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			}
 			key.type = type;
 			key.offset = btrfs_extent_inline_ref_offset(eb, iref);
-
-			WARN_ON(key.type != BTRFS_TREE_BLOCK_REF_KEY &&
-				key.type != BTRFS_SHARED_BLOCK_REF_KEY);
+		} else {
+			key.type = iterator->cur_key.type;
+			key.offset = iterator->cur_key.offset;
 		}
 
 		/*
@@ -806,7 +743,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		     (key.type == BTRFS_SHARED_BLOCK_REF_KEY &&
 		      exist->bytenr == key.offset))) {
 			exist = NULL;
-			goto next;
+			continue;
 		}
 
 		/* SHARED_BLOCK_REF means key.offset is the parent bytenr */
@@ -852,7 +789,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			edge->node[LOWER] = cur;
 			edge->node[UPPER] = upper;
 
-			goto next;
+			continue;
 		} else if (unlikely(key.type == BTRFS_EXTENT_REF_V0_KEY)) {
 			err = -EINVAL;
 			btrfs_print_v0_err(rc->extent_root->fs_info);
@@ -860,7 +797,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 					      NULL);
 			goto out;
 		} else if (key.type != BTRFS_TREE_BLOCK_REF_KEY) {
-			goto next;
+			continue;
 		}
 
 		/*
@@ -891,20 +828,20 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		level = cur->level + 1;
 
 		/* Search the tree to find parent blocks referring the block. */
-		path2->search_commit_root = 1;
-		path2->skip_locking = 1;
-		path2->lowest_level = level;
-		ret = btrfs_search_slot(NULL, root, node_key, path2, 0, 0);
-		path2->lowest_level = 0;
+		path->search_commit_root = 1;
+		path->skip_locking = 1;
+		path->lowest_level = level;
+		ret = btrfs_search_slot(NULL, root, node_key, path, 0, 0);
+		path->lowest_level = 0;
 		if (ret < 0) {
 			err = ret;
 			goto out;
 		}
-		if (ret > 0 && path2->slots[level] > 0)
-			path2->slots[level]--;
+		if (ret > 0 && path->slots[level] > 0)
+			path->slots[level]--;
 
-		eb = path2->nodes[level];
-		if (btrfs_node_blockptr(eb, path2->slots[level]) !=
+		eb = path->nodes[level];
+		if (btrfs_node_blockptr(eb, path->slots[level]) !=
 		    cur->bytenr) {
 			btrfs_err(root->fs_info,
 	"couldn't find block (%llu) (level %d) in tree (%llu) with key (%llu %u %llu)",
@@ -920,7 +857,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 
 		/* Add all nodes and edges in the path */
 		for (; level < BTRFS_MAX_LEVEL; level++) {
-			if (!path2->nodes[level]) {
+			if (!path->nodes[level]) {
 				ASSERT(btrfs_root_bytenr(&root->root_item) ==
 				       lower->bytenr);
 				if (should_ignore_root(root))
@@ -936,7 +873,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 				goto out;
 			}
 
-			eb = path2->nodes[level];
+			eb = path->nodes[level];
 			rb_node = tree_search(&cache->rb_root, eb->start);
 			if (!rb_node) {
 				upper = alloc_backref_node(cache);
@@ -993,20 +930,14 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 			lower = upper;
 			upper = NULL;
 		}
-		btrfs_release_path(path2);
-next:
-		if (ptr < end) {
-			ptr += btrfs_extent_inline_ref_size(key.type);
-			if (ptr >= end) {
-				WARN_ON(ptr > end);
-				ptr = 0;
-				end = 0;
-			}
-		}
-		if (ptr >= end)
-			path1->slots[0]++;
+		btrfs_release_path(path);
+	}
+	if (ret < 0) {
+		err = ret;
+		goto out;
 	}
-	btrfs_release_path(path1);
+	ret = 0;
+	btrfs_backref_iterator_release(iterator);
 
 	cur->checked = 1;
 	WARN_ON(exist);
@@ -1124,8 +1055,8 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
 		}
 	}
 out:
-	btrfs_free_path(path1);
-	btrfs_free_path(path2);
+	btrfs_backref_iterator_free(iterator);
+	btrfs_free_path(path);
 	if (err) {
 		while (!list_empty(&useless)) {
 			lower = list_entry(useless.next,
-- 
2.25.0



* Re: [PATCH 3/3] btrfs: relocation: Use btrfs_backref_iterator infrastructure
  2020-02-14  6:33 ` [PATCH 3/3] btrfs: relocation: Use btrfs_backref_iterator infrastructure Qu Wenruo
@ 2020-02-14  6:56   ` Qu Wenruo
  0 siblings, 0 replies; 6+ messages in thread
From: Qu Wenruo @ 2020-02-14  6:56 UTC (permalink / raw)
  To: linux-btrfs




On 2020/2/14 2:33 PM, Qu Wenruo wrote:
> In the core function of relocation, build_backref_tree, it needs to
> iterate all backref items of one tree block.
> 
> We don't really want to spend our code and reviewers' time to going
> through tons of supportive code just for the backref walk.
> 
> Use btrfs_backref_iterator infrastructure to do the loop.
> 
> The backref items look would be much more easier to read:
> 
> 	ret = btrfs_backref_iterator_start(iterator, cur->bytenr);
> 	for (; ret == 0; ret = btrfs_backref_iterator_next(iterator)) {
> 		/* The really important work */
> 	}
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>

There is a memory leak in the error handling path; btrfs/101 would
trigger OOM.

Will update to address this problem.

Thanks,
Qu
> +		path->skip_locking = 1;
> +		path->lowest_level = level;
> +		ret = btrfs_search_slot(NULL, root, node_key, path, 0, 0);
> +		path->lowest_level = 0;
>  		if (ret < 0) {
>  			err = ret;
>  			goto out;
>  		}
> -		if (ret > 0 && path2->slots[level] > 0)
> -			path2->slots[level]--;
> +		if (ret > 0 && path->slots[level] > 0)
> +			path->slots[level]--;
>  
> -		eb = path2->nodes[level];
> -		if (btrfs_node_blockptr(eb, path2->slots[level]) !=
> +		eb = path->nodes[level];
> +		if (btrfs_node_blockptr(eb, path->slots[level]) !=
>  		    cur->bytenr) {
>  			btrfs_err(root->fs_info,
>  	"couldn't find block (%llu) (level %d) in tree (%llu) with key (%llu %u %llu)",
> @@ -920,7 +857,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
>  
>  		/* Add all nodes and edges in the path */
>  		for (; level < BTRFS_MAX_LEVEL; level++) {
> -			if (!path2->nodes[level]) {
> +			if (!path->nodes[level]) {
>  				ASSERT(btrfs_root_bytenr(&root->root_item) ==
>  				       lower->bytenr);
>  				if (should_ignore_root(root))
> @@ -936,7 +873,7 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
>  				goto out;
>  			}
>  
> -			eb = path2->nodes[level];
> +			eb = path->nodes[level];
>  			rb_node = tree_search(&cache->rb_root, eb->start);
>  			if (!rb_node) {
>  				upper = alloc_backref_node(cache);
> @@ -993,20 +930,14 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
>  			lower = upper;
>  			upper = NULL;
>  		}
> -		btrfs_release_path(path2);
> -next:
> -		if (ptr < end) {
> -			ptr += btrfs_extent_inline_ref_size(key.type);
> -			if (ptr >= end) {
> -				WARN_ON(ptr > end);
> -				ptr = 0;
> -				end = 0;
> -			}
> -		}
> -		if (ptr >= end)
> -			path1->slots[0]++;
> +		btrfs_release_path(path);
> +	}
> +	if (ret < 0) {
> +		err = ret;
> +		goto out;
>  	}
> -	btrfs_release_path(path1);
> +	ret = 0;
> +	btrfs_backref_iterator_release(iterator);
>  
>  	cur->checked = 1;
>  	WARN_ON(exist);
> @@ -1124,8 +1055,8 @@ struct backref_node *build_backref_tree(struct reloc_control *rc,
>  		}
>  	}
>  out:
> -	btrfs_free_path(path1);
> -	btrfs_free_path(path2);
> +	btrfs_backref_iterator_free(iterator);
> +	btrfs_free_path(path);
>  	if (err) {
>  		while (!list_empty(&useless)) {
>  			lower = list_entry(useless.next,
> 



* Re: [PATCH 2/3] btrfs: backref: Implement btrfs_backref_iterator_next()
  2020-02-14  6:33 ` [PATCH 2/3] btrfs: backref: Implement btrfs_backref_iterator_next() Qu Wenruo
@ 2020-02-14  7:25   ` Qu Wenruo
  0 siblings, 0 replies; 6+ messages in thread
From: Qu Wenruo @ 2020-02-14  7:25 UTC (permalink / raw)
  To: linux-btrfs




On 2020/2/14 2:33 PM, Qu Wenruo wrote:
> This function advances to the next inline/keyed backref for the
> btrfs_backref_iterator infrastructure.
> 
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
>  fs/btrfs/backref.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 48 insertions(+)
> 
> diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
> index 73c4829609c9..0f231cd69675 100644
> --- a/fs/btrfs/backref.c
> +++ b/fs/btrfs/backref.c
> @@ -2310,3 +2310,51 @@ int btrfs_backref_iterator_start(struct btrfs_backref_iterator *iterator,
>  	btrfs_release_path(path);
>  	return ret;
>  }
> +
> +int btrfs_backref_iterator_next(struct btrfs_backref_iterator *iterator)
> +{
> +	struct extent_buffer *eb = btrfs_backref_get_eb(iterator);
> +	struct btrfs_path *path = iterator->path;
> +	struct btrfs_extent_inline_ref *iref;
> +	int ret;
> +	u32 size;
> +
> +	if (iterator->cur_key.type == BTRFS_EXTENT_ITEM_KEY ||
> +	    iterator->cur_key.type == BTRFS_METADATA_ITEM_KEY) {
> +		/* We're still inside the inline refs */
> +		if (btrfs_backref_has_tree_block_info(iterator)) {
> +			/* First tree block info */
> +			size = sizeof(struct btrfs_tree_block_info);
> +		} else {
> +			/* Use inline ref type to determine the size */
> +			int type;
> +
> +			iref = (struct btrfs_extent_inline_ref *)
> +				(iterator->cur_ptr);
> +			type = btrfs_extent_inline_ref_offset(eb, iref);

Oh, I blindly auto-completed the function name without checking the last
word. This should be btrfs_extent_inline_ref_type(), not _offset().

> +
> +			size = btrfs_extent_inline_ref_size(type);
> +		}
> +		iterator->cur_ptr += size;
> +		if (iterator->cur_ptr < iterator->end_ptr)
> +			return 0;
> +
> +		/* All inline items iterated, fall through */
> +	}
> +	/* We're at keyed items, there is no inline item, just go next item */
> +	ret = btrfs_next_item(iterator->fs_info->extent_root, iterator->path);
> +	if (ret > 0 || ret < 0)
> +		return ret;
> +
> +	btrfs_item_key_to_cpu(path->nodes[0], &iterator->cur_key,
> +			      path->slots[0]);
> +	if (iterator->cur_key.objectid != iterator->bytenr ||
> +	    (iterator->cur_key.type != BTRFS_TREE_BLOCK_REF_KEY &&
> +	     iterator->cur_key.type != BTRFS_SHARED_BLOCK_REF_KEY))
> +		return 1;
> +	iterator->item_ptr = btrfs_item_ptr_offset(path->nodes[0],
> +						   path->slots[0]);
> +	iterator->cur_ptr = iterator->item_ptr;
> +	iterator->end_ptr = btrfs_item_end_nr(path->nodes[0], path->slots[0]);
> +	return 0;
> +}
> 



