* [PATCH V2 00/10] unify organization structure of block group cache
@ 2019-12-18  5:18 damenly.su
  2019-12-18  5:18 ` [PATCH V2 01/10] btrfs-progs: handle error if btrfs_write_one_block_group() failed damenly.su
                   ` (10 more replies)
  0 siblings, 11 replies; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue

From: Su Yue <Damenly_Su@gmx.com>

In progs, block group caches are stored in btrfs_fs_info::block_group_cache,
whose type is extent_io_tree. All block group cache adding/finding/freeing
is done in the misleading set/clear_extent_bits way. However, the kernel
side uses a red-black tree in btrfs_fs_info directly. The
latter structure is more reasonable and intuitive.

This patchset transforms the structure of block group caches from the
extent_io_tree cache to a red-black tree and a list.

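For illustration only, here is a rough sketch of the target layout after
this series (member names are taken from the patches below; all other
members are omitted):

  struct btrfs_block_group_cache {
          /* ... other members ... */
          struct rb_node cache_node;      /* in fs_info->block_group_cache_tree */
          struct list_head dirty_list;    /* in trans->dirty_bgs while dirty */
  };

  struct btrfs_fs_info {
          /* ... other members ... */
          struct rb_root block_group_cache_tree;
  };

  struct btrfs_trans_handle {
          /* ... other members ... */
          struct list_head dirty_bgs;
  };
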
patch[1] handles an error to avoid a warning after the reform.
patch[2-6] prepare the rb tree reform.
patch[7-8] prepare linking dirty block groups into the transaction.
patch[9] does the actual replacement.
patch[10] does the cleanup.

This patchset passed the progs tests and did not cause any regression.

---
Changelog:
v2:
   Adjust block group cache tree search and lookup functions to
   progs behavior.
   Use rbtree_postorder_for_each_entry_safe() in patch[9] (Qu Wenruo).
   Add reviewed-by tags.

Su Yue (10):
  btrfs-progs: handle error if btrfs_write_one_block_group() failed
  btrfs-progs: block_group: add rb tree related members
  btrfs-progs: port block group cache tree insertion and lookup
    functions
  btrfs-progs: reform the function block_group_cache_tree_search()
  btrfs-progs: adjust ported block group lookup functions in kernel
    version
  btrfs-progs: abstract function btrfs_add_block_group_cache()
  btrfs-progs: block_group: add dirty_bgs list related members
  btrfs-progs: pass @trans to functions touching dirty block groups
  btrfs-progs: reform block groups caches structure
  btrfs-progs: cleanups after block group cache reform

 check/main.c                |   6 +-
 check/mode-lowmem.c         |   6 +-
 cmds/rescue-chunk-recover.c |  10 +-
 ctree.h                     |  29 ++--
 disk-io.c                   |   4 +-
 extent-tree.c               | 304 +++++++++++++++---------------------
 extent_io.h                 |   2 -
 image/main.c                |  10 +-
 transaction.c               |   8 +-
 transaction.h               |   3 +-
 10 files changed, 165 insertions(+), 217 deletions(-)

-- 
2.21.0 (Apple Git-122.2)



* [PATCH V2 01/10] btrfs-progs: handle error if btrfs_write_one_block_group() failed
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
@ 2019-12-18  5:18 ` damenly.su
  2019-12-18  5:18 ` [PATCH V2 02/10] btrfs-progs: block_group: add rb tree related members damenly.su
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue, Qu Wenruo

From: Su Yue <Damenly_Su@gmx.com>

Just break the loop and return the error code on failure.
Functions in the call chain are able to handle it.

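For reference, a sketch of the call chain touched by this patch (not part
of the diff; names are the ones appearing in the hunks below):

  /*
   * update_cowonly_root()
   *   -> btrfs_write_dirty_block_groups()   now returns ret instead of 0
   *        -> write_one_cache_group()       an error breaks the loop early
   */
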
Signed-off-by: Su Yue <Damenly_Su@gmx.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
---
 extent-tree.c | 4 +++-
 transaction.c | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/extent-tree.c b/extent-tree.c
index 53be4f4c7369..4a3db029e811 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -1596,9 +1596,11 @@ int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans)
 
 		cache = (struct btrfs_block_group_cache *)(unsigned long)ptr;
 		ret = write_one_cache_group(trans, path, cache);
+		if (ret)
+			break;
 	}
 	btrfs_free_path(path);
-	return 0;
+	return ret;
 }
 
 static struct btrfs_space_info *__find_space_info(struct btrfs_fs_info *info,
diff --git a/transaction.c b/transaction.c
index 45bb9e1f9de6..c9035c765a74 100644
--- a/transaction.c
+++ b/transaction.c
@@ -77,7 +77,9 @@ static int update_cowonly_root(struct btrfs_trans_handle *trans,
 					&root->root_item);
 		if (ret < 0)
 			return ret;
-		btrfs_write_dirty_block_groups(trans);
+		ret = btrfs_write_dirty_block_groups(trans);
+		if (ret)
+			return ret;
 	}
 	return 0;
 }
-- 
2.21.0 (Apple Git-122.2)



* [PATCH V2 02/10] btrfs-progs: block_group: add rb tree related members
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
  2019-12-18  5:18 ` [PATCH V2 01/10] btrfs-progs: handle error if btrfs_write_one_block_group() failed damenly.su
@ 2019-12-18  5:18 ` damenly.su
  2019-12-18  5:18 ` [PATCH V2 03/10] btrfs-progs: port block group cache tree insertion and lookup functions damenly.su
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue, Qu Wenruo

From: Su Yue <Damenly_Su@gmx.com>

To convert from the existing extent cache to a plain rb tree, add
btrfs_block_group_cache::cache_node and
btrfs_fs_info::block_group_cache_tree.

Signed-off-by: Su Yue <Damenly_Su@gmx.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
---
 ctree.h   | 21 ++++++++++++---------
 disk-io.c |  2 ++
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/ctree.h b/ctree.h
index 3e50d0863bde..f3f5f52f2559 100644
--- a/ctree.h
+++ b/ctree.h
@@ -1107,16 +1107,18 @@ struct btrfs_block_group_cache {
 	int cached;
 	int ro;
 	/*
-         * If the free space extent count exceeds this number, convert the block
-         * group to bitmaps.
-         */
-        u32 bitmap_high_thresh;
-        /*
-         * If the free space extent count drops below this number, convert the
-         * block group back to extents.
-         */
-        u32 bitmap_low_thresh;
+	 * If the free space extent count exceeds this number, convert the block
+	 * group to bitmaps.
+	 */
+	u32 bitmap_high_thresh;
+	/*
+	 * If the free space extent count drops below this number, convert the
+	 * block group back to extents.
+	 */
+	u32 bitmap_low_thresh;
 
+	/* Block group cache stuff */
+	struct rb_node cache_node;
 };
 
 struct btrfs_device;
@@ -1146,6 +1148,7 @@ struct btrfs_fs_info {
 	struct extent_io_tree extent_ins;
 	struct extent_io_tree *excluded_extents;
 
+	struct rb_root block_group_cache_tree;
 	/* logical->physical extent mapping */
 	struct btrfs_mapping_tree mapping_tree;
 
diff --git a/disk-io.c b/disk-io.c
index 659f8b93a7ca..b7ae72a99f59 100644
--- a/disk-io.c
+++ b/disk-io.c
@@ -797,6 +797,8 @@ struct btrfs_fs_info *btrfs_new_fs_info(int writable, u64 sb_bytenr)
 	extent_io_tree_init(&fs_info->block_group_cache);
 	extent_io_tree_init(&fs_info->pinned_extents);
 	extent_io_tree_init(&fs_info->extent_ins);
+
+	fs_info->block_group_cache_tree = RB_ROOT;
 	fs_info->excluded_extents = NULL;
 
 	fs_info->fs_root_tree = RB_ROOT;
-- 
2.21.0 (Apple Git-122.2)



* [PATCH V2 03/10] btrfs-progs: port block group cache tree insertion and lookup functions
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
  2019-12-18  5:18 ` [PATCH V2 01/10] btrfs-progs: handle error if btrfs_write_one_block_group() failed damenly.su
  2019-12-18  5:18 ` [PATCH V2 02/10] btrfs-progs: block_group: add rb tree related members damenly.su
@ 2019-12-18  5:18 ` damenly.su
  2019-12-18  5:18 ` [PATCH V2 04/10] btrfs-progs: reform the function block_group_cache_tree_search() damenly.su
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue, Qu Wenruo

From: Su Yue <Damenly_Su@gmx.com>

Simply copy and paste the code, removing lock operations that are
useless in progs. The newly added lookup functions are temporarily
named with the _kernel suffix.

Signed-off-by: Su Yue <Damenly_Su@gmx.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
---
 extent-tree.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 86 insertions(+)

diff --git a/extent-tree.c b/extent-tree.c
index 4a3db029e811..ab576f8732a2 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -164,6 +164,92 @@ err:
 	return 0;
 }
 
+/*
+ * This adds the block group to the fs_info rb tree for the block group cache
+ */
+static int btrfs_add_block_group_cache(struct btrfs_fs_info *info,
+				struct btrfs_block_group_cache *block_group)
+{
+	struct rb_node **p;
+	struct rb_node *parent = NULL;
+	struct btrfs_block_group_cache *cache;
+
+	p = &info->block_group_cache_tree.rb_node;
+
+	while (*p) {
+		parent = *p;
+		cache = rb_entry(parent, struct btrfs_block_group_cache,
+				 cache_node);
+		if (block_group->key.objectid < cache->key.objectid)
+			p = &(*p)->rb_left;
+		else if (block_group->key.objectid > cache->key.objectid)
+			p = &(*p)->rb_right;
+		else
+			return -EEXIST;
+	}
+
+	rb_link_node(&block_group->cache_node, parent, p);
+	rb_insert_color(&block_group->cache_node,
+			&info->block_group_cache_tree);
+
+	return 0;
+}
+
+/*
+ * This will return the block group at or after bytenr if contains is 0, else
+ * it will return the block group that contains the bytenr
+ */
+static struct btrfs_block_group_cache *block_group_cache_tree_search(
+		struct btrfs_fs_info *info, u64 bytenr, int contains)
+{
+	struct btrfs_block_group_cache *cache, *ret = NULL;
+	struct rb_node *n;
+	u64 end, start;
+
+	n = info->block_group_cache_tree.rb_node;
+
+	while (n) {
+		cache = rb_entry(n, struct btrfs_block_group_cache,
+				 cache_node);
+		end = cache->key.objectid + cache->key.offset - 1;
+		start = cache->key.objectid;
+
+		if (bytenr < start) {
+			if (!contains && (!ret || start < ret->key.objectid))
+				ret = cache;
+			n = n->rb_left;
+		} else if (bytenr > start) {
+			if (contains && bytenr <= end) {
+				ret = cache;
+				break;
+			}
+			n = n->rb_right;
+		} else {
+			ret = cache;
+			break;
+		}
+	}
+	return ret;
+}
+
+/*
+ * Return the block group that starts at or after bytenr
+ */
+struct btrfs_block_group_cache *btrfs_lookup_first_block_group_kernel(
+		struct btrfs_fs_info *info, u64 bytenr)
+{
+	return block_group_cache_tree_search(info, bytenr, 0);
+}
+
+/*
+ * Return the block group that contains the given bytenr
+ */
+struct btrfs_block_group_cache *btrfs_lookup_block_group_kernel(
+		struct btrfs_fs_info *info, u64 bytenr)
+{
+	return block_group_cache_tree_search(info, bytenr, 1);
+}
+
 /*
  * Return the block group that contains @bytenr, otherwise return the next one
  * that starts after @bytenr
-- 
2.21.0 (Apple Git-122.2)



* [PATCH V2 04/10] btrfs-progs: reform the function block_group_cache_tree_search()
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
                   ` (2 preceding siblings ...)
  2019-12-18  5:18 ` [PATCH V2 03/10] btrfs-progs: port block group cache tree insertion and lookup functions damenly.su
@ 2019-12-18  5:18 ` damenly.su
  2019-12-18  9:51   ` Qu Wenruo
  2019-12-18  5:18 ` [PATCH V2 05/10] btrfs-progs: adjust ported block group lookup functions in kernel version damenly.su
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue

From: Su Yue <Damenly_Su@gmx.com>

Change @contains to @next in block_group_cache_tree_search().
Now the function will try to find the block group containing
@bytenr. If none is found, it returns NULL if @next is zero,
otherwise it returns the next block group.

This will be used in a later commit.

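As an illustration with a hypothetical layout (block groups [0, 8M) and
[16M, 24M); not taken from a real filesystem):

  /*
   * block_group_cache_tree_search(info,  4M, 0) -> the [0, 8M) block group
   * block_group_cache_tree_search(info, 12M, 0) -> NULL
   * block_group_cache_tree_search(info, 12M, 1) -> the [16M, 24M) block group
   */
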
Signed-off-by: Su Yue <Damenly_Su@gmx.com>
---
 extent-tree.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/extent-tree.c b/extent-tree.c
index ab576f8732a2..fdfa29a2409f 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -196,11 +196,15 @@ static int btrfs_add_block_group_cache(struct btrfs_fs_info *info,
 }
 
 /*
- * This will return the block group at or after bytenr if contains is 0, else
- * it will return the block group that contains the bytenr
+ * This will return the block group which contains @bytenr if it exists.
+ * If found nothing, the return depends on @next.
+ *
+ * @next:
+ *   if 0, return NULL if there's no block group containing the bytenr.
+ *   if 1, return the block group which starts after @bytenr.
  */
 static struct btrfs_block_group_cache *block_group_cache_tree_search(
-		struct btrfs_fs_info *info, u64 bytenr, int contains)
+		struct btrfs_fs_info *info, u64 bytenr, int next)
 {
 	struct btrfs_block_group_cache *cache, *ret = NULL;
 	struct rb_node *n;
@@ -215,11 +219,11 @@ static struct btrfs_block_group_cache *block_group_cache_tree_search(
 		start = cache->key.objectid;
 
 		if (bytenr < start) {
-			if (!contains && (!ret || start < ret->key.objectid))
+			if (next && (!ret || start < ret->key.objectid))
 				ret = cache;
 			n = n->rb_left;
 		} else if (bytenr > start) {
-			if (contains && bytenr <= end) {
+			if (bytenr <= end) {
 				ret = cache;
 				break;
 			}
@@ -229,6 +233,7 @@ static struct btrfs_block_group_cache *block_group_cache_tree_search(
 			break;
 		}
 	}
+
 	return ret;
 }
 
-- 
2.21.0 (Apple Git-122.2)



* [PATCH V2 05/10] btrfs-progs: adjust ported block group lookup functions in kernel version
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
                   ` (3 preceding siblings ...)
  2019-12-18  5:18 ` [PATCH V2 04/10] btrfs-progs: reform the function block_group_cache_tree_search() damenly.su
@ 2019-12-18  5:18 ` damenly.su
  2019-12-18  9:52   ` Qu Wenruo
  2019-12-18  5:18 ` [PATCH V2 06/10] btrfs-progs: abstract function btrfs_add_block_group_cache() damenly.su
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue

From: Su Yue <Damenly_Su@gmx.com>

btrfs_lookup_first_block_group() and
btrfs_lookup_first_block_group_kernel() behave differently.
There are many places calling the lookup functions, including the
extent allocation code. It's too complicated to check and change
them all, and it would influence many functionalities in progs.

So here, just make the kernel version lookup functions follow the
progs behavior.

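A hypothetical example of the difference, with a block group [0, 8M)
followed by [16M, 24M) and bytenr = 4M:

  /*
   * Upstream kernel semantics ("starts at or after bytenr"):
   *   btrfs_lookup_first_block_group() -> the [16M, 24M) block group
   *
   * Progs semantics, kept here for the _kernel variants:
   *   btrfs_lookup_first_block_group() -> the [0, 8M) block group
   *   (the group containing 4M; the next one only if none contains it)
   */
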
Signed-off-by: Su Yue <Damenly_Su@gmx.com>
---
 extent-tree.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/extent-tree.c b/extent-tree.c
index fdfa29a2409f..3f7b82dc88a2 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -238,12 +238,13 @@ static struct btrfs_block_group_cache *block_group_cache_tree_search(
 }
 
 /*
- * Return the block group that starts at or after bytenr
+ * Return the block group that contains @bytenr, otherwise return the next one
+ * that starts after @bytenr
  */
 struct btrfs_block_group_cache *btrfs_lookup_first_block_group_kernel(
 		struct btrfs_fs_info *info, u64 bytenr)
 {
-	return block_group_cache_tree_search(info, bytenr, 0);
+	return block_group_cache_tree_search(info, bytenr, 1);
 }
 
 /*
@@ -252,7 +253,7 @@ struct btrfs_block_group_cache *btrfs_lookup_first_block_group_kernel(
 struct btrfs_block_group_cache *btrfs_lookup_block_group_kernel(
 		struct btrfs_fs_info *info, u64 bytenr)
 {
-	return block_group_cache_tree_search(info, bytenr, 1);
+	return block_group_cache_tree_search(info, bytenr, 0);
 }
 
 /*
-- 
2.21.0 (Apple Git-122.2)



* [PATCH V2 06/10] btrfs-progs: abstract function btrfs_add_block_group_cache()
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
                   ` (4 preceding siblings ...)
  2019-12-18  5:18 ` [PATCH V2 05/10] btrfs-progs: adjust ported block group lookup functions in kernel version damenly.su
@ 2019-12-18  5:18 ` damenly.su
  2019-12-18  5:18 ` [PATCH V2 07/10] btrfs-progs: block_group: add dirty_bgs list related members damenly.su
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue, Qu Wenruo

From: Su Yue <Damenly_Su@gmx.com>

The new function btrfs_add_block_group_cache() abstracts the old
set_extent_bits and set_state_private operations.

Rename the rb tree version to btrfs_add_block_group_cache_kernel().

Signed-off-by: Su Yue <Damenly_Su@gmx.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
---
 extent-tree.c | 50 ++++++++++++++++++++++++++------------------------
 1 file changed, 26 insertions(+), 24 deletions(-)

diff --git a/extent-tree.c b/extent-tree.c
index 3f7b82dc88a2..9e681273d4b8 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -164,10 +164,31 @@ err:
 	return 0;
 }
 
+static int btrfs_add_block_group_cache(struct btrfs_fs_info *info,
+				       struct btrfs_block_group_cache *cache,
+				       int bits)
+{
+	int ret;
+
+	ret = set_extent_bits(&info->block_group_cache, cache->key.objectid,
+			      cache->key.objectid + cache->key.offset - 1,
+			      bits);
+	if (ret)
+		return ret;
+
+	ret = set_state_private(&info->block_group_cache, cache->key.objectid,
+				(unsigned long)cache);
+	if (ret)
+		clear_extent_bits(&info->block_group_cache, cache->key.objectid,
+				  cache->key.objectid + cache->key.offset - 1,
+				  bits);
+	return ret;
+}
+
 /*
  * This adds the block group to the fs_info rb tree for the block group cache
  */
-static int btrfs_add_block_group_cache(struct btrfs_fs_info *info,
+static int btrfs_add_block_group_cache_kernel(struct btrfs_fs_info *info,
 				struct btrfs_block_group_cache *block_group)
 {
 	struct rb_node **p;
@@ -2764,7 +2785,6 @@ error:
 static int read_one_block_group(struct btrfs_fs_info *fs_info,
 				 struct btrfs_path *path)
 {
-	struct extent_io_tree *block_group_cache = &fs_info->block_group_cache;
 	struct extent_buffer *leaf = path->nodes[0];
 	struct btrfs_space_info *space_info;
 	struct btrfs_block_group_cache *cache;
@@ -2814,11 +2834,7 @@ static int read_one_block_group(struct btrfs_fs_info *fs_info,
 	}
 	cache->space_info = space_info;
 
-	set_extent_bits(block_group_cache, cache->key.objectid,
-			cache->key.objectid + cache->key.offset - 1,
-			bit | EXTENT_LOCKED);
-	set_state_private(block_group_cache, cache->key.objectid,
-			  (unsigned long)cache);
+	btrfs_add_block_group_cache(fs_info, cache, bit | EXTENT_LOCKED);
 	return 0;
 }
 
@@ -2870,9 +2886,6 @@ btrfs_add_block_group(struct btrfs_fs_info *fs_info, u64 bytes_used, u64 type,
 	int ret;
 	int bit = 0;
 	struct btrfs_block_group_cache *cache;
-	struct extent_io_tree *block_group_cache;
-
-	block_group_cache = &fs_info->block_group_cache;
 
 	cache = kzalloc(sizeof(*cache), GFP_NOFS);
 	BUG_ON(!cache);
@@ -2889,13 +2902,8 @@ btrfs_add_block_group(struct btrfs_fs_info *fs_info, u64 bytes_used, u64 type,
 	BUG_ON(ret);
 
 	bit = block_group_state_bits(type);
-	ret = set_extent_bits(block_group_cache, chunk_offset,
-			      chunk_offset + size - 1,
-			      bit | EXTENT_LOCKED);
-	BUG_ON(ret);
 
-	ret = set_state_private(block_group_cache, chunk_offset,
-				(unsigned long)cache);
+	ret = btrfs_add_block_group_cache(fs_info, cache, bit | EXTENT_LOCKED);
 	BUG_ON(ret);
 	set_avail_alloc_bits(fs_info, type);
 
@@ -2945,9 +2953,7 @@ int btrfs_make_block_groups(struct btrfs_trans_handle *trans,
 	int bit;
 	struct btrfs_root *extent_root = fs_info->extent_root;
 	struct btrfs_block_group_cache *cache;
-	struct extent_io_tree *block_group_cache;
 
-	block_group_cache = &fs_info->block_group_cache;
 	total_bytes = btrfs_super_total_bytes(fs_info->super_copy);
 	group_align = 64 * fs_info->sectorsize;
 
@@ -2991,12 +2997,8 @@ int btrfs_make_block_groups(struct btrfs_trans_handle *trans,
 					0, &cache->space_info);
 		BUG_ON(ret);
 		set_avail_alloc_bits(fs_info, group_type);
-
-		set_extent_bits(block_group_cache, cur_start,
-				cur_start + group_size - 1,
-				bit | EXTENT_LOCKED);
-		set_state_private(block_group_cache, cur_start,
-				  (unsigned long)cache);
+		btrfs_add_block_group_cache(fs_info, cache,
+					    bit | EXTENT_LOCKED);
 		cur_start += group_size;
 	}
 	/* then insert all the items */
-- 
2.21.0 (Apple Git-122.2)



* [PATCH V2 07/10] btrfs-progs: block_group: add dirty_bgs list related members
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
                   ` (5 preceding siblings ...)
  2019-12-18  5:18 ` [PATCH V2 06/10] btrfs-progs: abstract function btrfs_add_block_group_cache() damenly.su
@ 2019-12-18  5:18 ` damenly.su
  2019-12-18  5:18 ` [PATCH V2 08/10] btrfs-progs: pass @trans to functions touching dirty block groups damenly.su
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue, Qu Wenruo

From: Su Yue <Damenly_Su@gmx.com>

The old style uses the extent bit BLOCK_GROUP_DIRTY to mark dirty block
groups in the extent cache. To replace it, add btrfs_trans_handle::dirty_bgs
and btrfs_block_group_cache::dirty_list.

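For illustration, a minimal sketch of the intended dirty-marking idiom
(the helper name is made up; the real conversions happen later in this
series):

  /* Sketch only: mark a block group dirty in the current transaction. */
  static void mark_block_group_dirty(struct btrfs_trans_handle *trans,
                                     struct btrfs_block_group_cache *cache)
  {
          if (list_empty(&cache->dirty_list))
                  list_add_tail(&cache->dirty_list, &trans->dirty_bgs);
  }
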
Signed-off-by: Su Yue <Damenly_Su@gmx.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
---
 ctree.h       | 3 +++
 extent-tree.c | 4 ++++
 transaction.c | 1 +
 transaction.h | 3 ++-
 4 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/ctree.h b/ctree.h
index f3f5f52f2559..61ce53c46302 100644
--- a/ctree.h
+++ b/ctree.h
@@ -1119,6 +1119,9 @@ struct btrfs_block_group_cache {
 
 	/* Block group cache stuff */
 	struct rb_node cache_node;
+
+	/* For dirty block groups */
+	struct list_head dirty_list;
 };
 
 struct btrfs_device;
diff --git a/extent-tree.c b/extent-tree.c
index 9e681273d4b8..615d823ec4de 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -2814,6 +2814,8 @@ static int read_one_block_group(struct btrfs_fs_info *fs_info,
 	cache->pinned = 0;
 	cache->flags = btrfs_block_group_flags(&bgi);
 	cache->used = btrfs_block_group_used(&bgi);
+	INIT_LIST_HEAD(&cache->dirty_list);
+
 	if (cache->flags & BTRFS_BLOCK_GROUP_DATA) {
 		bit = BLOCK_GROUP_DATA;
 	} else if (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM) {
@@ -2895,6 +2897,7 @@ btrfs_add_block_group(struct btrfs_fs_info *fs_info, u64 bytes_used, u64 type,
 	cache->key.type = BTRFS_BLOCK_GROUP_ITEM_KEY;
 	cache->used = bytes_used;
 	cache->flags = type;
+	INIT_LIST_HEAD(&cache->dirty_list);
 
 	exclude_super_stripes(fs_info, cache);
 	ret = update_space_info(fs_info, cache->flags, size, bytes_used,
@@ -2992,6 +2995,7 @@ int btrfs_make_block_groups(struct btrfs_trans_handle *trans,
 		cache->key.type = BTRFS_BLOCK_GROUP_ITEM_KEY;
 		cache->used = 0;
 		cache->flags = group_type;
+		INIT_LIST_HEAD(&cache->dirty_list);
 
 		ret = update_space_info(fs_info, group_type, group_size,
 					0, &cache->space_info);
diff --git a/transaction.c b/transaction.c
index c9035c765a74..269e52c01d29 100644
--- a/transaction.c
+++ b/transaction.c
@@ -52,6 +52,7 @@ struct btrfs_trans_handle* btrfs_start_transaction(struct btrfs_root *root,
 	root->last_trans = h->transid;
 	root->commit_root = root->node;
 	extent_buffer_get(root->node);
+	INIT_LIST_HEAD(&h->dirty_bgs);
 
 	return h;
 }
diff --git a/transaction.h b/transaction.h
index 750f456b3cc0..8fa65508fa8d 100644
--- a/transaction.h
+++ b/transaction.h
@@ -22,6 +22,7 @@
 #include "kerncompat.h"
 #include "ctree.h"
 #include "delayed-ref.h"
+#include "kernel-lib/list.h"
 
 struct btrfs_trans_handle {
 	struct btrfs_fs_info *fs_info;
@@ -35,7 +36,7 @@ struct btrfs_trans_handle {
 	unsigned long blocks_used;
 	struct btrfs_block_group_cache *block_group;
 	struct btrfs_delayed_ref_root delayed_refs;
-
+	struct list_head dirty_bgs;
 };
 
 struct btrfs_trans_handle* btrfs_start_transaction(struct btrfs_root *root,
-- 
2.21.0 (Apple Git-122.2)



* [PATCH V2 08/10] btrfs-progs: pass @trans to functions touching dirty block groups
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
                   ` (6 preceding siblings ...)
  2019-12-18  5:18 ` [PATCH V2 07/10] btrfs-progs: block_group: add dirty_bgs list related members damenly.su
@ 2019-12-18  5:18 ` damenly.su
  2019-12-18  5:18 ` [PATCH V2 09/10] btrfs-progs: reform block groups caches structure damenly.su
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue, Qu Wenruo

From: Su Yue <Damenly_Su@gmx.com>

We are going to touch dirty_bgs in the transaction handle directly, so
every call chain should pass the parameter @trans down to the end functions.

Signed-off-by: Su Yue <Damenly_Su@gmx.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
---
 check/main.c                |  6 +++---
 check/mode-lowmem.c         |  6 +++---
 cmds/rescue-chunk-recover.c |  6 +++---
 ctree.h                     |  4 ++--
 extent-tree.c               | 18 +++++++++---------
 image/main.c                |  5 +++--
 6 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/check/main.c b/check/main.c
index 08dc9e66d013..7d797750e6d6 100644
--- a/check/main.c
+++ b/check/main.c
@@ -6651,8 +6651,8 @@ static int delete_extent_records(struct btrfs_trans_handle *trans,
 			u64 bytes = (found_key.type == BTRFS_EXTENT_ITEM_KEY) ?
 				found_key.offset : fs_info->nodesize;
 
-			ret = btrfs_update_block_group(fs_info->extent_root,
-						       bytenr, bytes, 0, 0);
+			ret = btrfs_update_block_group(trans, bytenr, bytes,
+						       0, 0);
 			if (ret)
 				break;
 		}
@@ -6730,7 +6730,7 @@ static int record_extent(struct btrfs_trans_handle *trans,
 		}
 
 		btrfs_mark_buffer_dirty(leaf);
-		ret = btrfs_update_block_group(extent_root, rec->start,
+		ret = btrfs_update_block_group(trans, rec->start,
 					       rec->max_size, 1, 0);
 		if (ret)
 			goto fail;
diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c
index f53a0c39e86e..74c60368ca01 100644
--- a/check/mode-lowmem.c
+++ b/check/mode-lowmem.c
@@ -735,7 +735,7 @@ static int repair_tree_block_ref(struct btrfs_root *root,
 		}
 		btrfs_mark_buffer_dirty(eb);
 		printf("Added an extent item [%llu %u]\n", bytenr, node_size);
-		btrfs_update_block_group(extent_root, bytenr, node_size, 1, 0);
+		btrfs_update_block_group(trans, bytenr, node_size, 1, 0);
 
 		nrefs->refs[level] = 0;
 		nrefs->full_backref[level] =
@@ -3292,8 +3292,8 @@ static int repair_extent_data_item(struct btrfs_root *root,
 		btrfs_set_extent_flags(eb, ei, BTRFS_EXTENT_FLAG_DATA);
 
 		btrfs_mark_buffer_dirty(eb);
-		ret = btrfs_update_block_group(extent_root, disk_bytenr,
-					       num_bytes, 1, 0);
+		ret = btrfs_update_block_group(trans, disk_bytenr, num_bytes,
+					       1, 0);
 		btrfs_release_path(&path);
 	}
 
diff --git a/cmds/rescue-chunk-recover.c b/cmds/rescue-chunk-recover.c
index 171b4d07ecf9..461b66c6e13b 100644
--- a/cmds/rescue-chunk-recover.c
+++ b/cmds/rescue-chunk-recover.c
@@ -1084,7 +1084,7 @@ err:
 	return ret;
 }
 
-static int block_group_free_all_extent(struct btrfs_root *root,
+static int block_group_free_all_extent(struct btrfs_trans_handle *trans,
 				       struct block_group_record *bg)
 {
 	struct btrfs_block_group_cache *cache;
@@ -1092,7 +1092,7 @@ static int block_group_free_all_extent(struct btrfs_root *root,
 	u64 start;
 	u64 end;
 
-	info = root->fs_info;
+	info = trans->fs_info;
 	cache = btrfs_lookup_block_group(info, bg->objectid);
 	if (!cache)
 		return -ENOENT;
@@ -1124,7 +1124,7 @@ static int remove_chunk_extent_item(struct btrfs_trans_handle *trans,
 		if (ret)
 			return ret;
 
-		ret = block_group_free_all_extent(root, chunk->bg_rec);
+		ret = block_group_free_all_extent(trans, chunk->bg_rec);
 		if (ret)
 			return ret;
 	}
diff --git a/ctree.h b/ctree.h
index 61ce53c46302..53882d04ac03 100644
--- a/ctree.h
+++ b/ctree.h
@@ -2568,8 +2568,8 @@ int btrfs_make_block_group(struct btrfs_trans_handle *trans,
 			   u64 type, u64 chunk_offset, u64 size);
 int btrfs_make_block_groups(struct btrfs_trans_handle *trans,
 			    struct btrfs_fs_info *fs_info);
-int btrfs_update_block_group(struct btrfs_root *root, u64 bytenr, u64 num,
-			     int alloc, int mark_free);
+int btrfs_update_block_group(struct btrfs_trans_handle *trans, u64 bytenr,
+			     u64 num, int alloc, int mark_free);
 int btrfs_record_file_extent(struct btrfs_trans_handle *trans,
 			      struct btrfs_root *root, u64 objectid,
 			      struct btrfs_inode_item *inode,
diff --git a/extent-tree.c b/extent-tree.c
index 615d823ec4de..f50d1c8b0a77 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -1872,9 +1872,10 @@ static int do_chunk_alloc(struct btrfs_trans_handle *trans,
 	return 0;
 }
 
-static int update_block_group(struct btrfs_fs_info *info, u64 bytenr,
+static int update_block_group(struct btrfs_trans_handle *trans, u64 bytenr,
 			      u64 num_bytes, int alloc, int mark_free)
 {
+	struct btrfs_fs_info *info = trans->fs_info;
 	struct btrfs_block_group_cache *cache;
 	u64 total = num_bytes;
 	u64 old_val;
@@ -2237,8 +2238,7 @@ static int __free_extent(struct btrfs_trans_handle *trans,
 			goto fail;
 		}
 
-		update_block_group(trans->fs_info, bytenr, num_bytes, 0,
-				   mark_free);
+		update_block_group(trans, bytenr, num_bytes, 0, mark_free);
 	}
 fail:
 	btrfs_free_path(path);
@@ -2570,7 +2570,7 @@ static int alloc_reserved_tree_block(struct btrfs_trans_handle *trans,
 	if (ret)
 		return ret;
 
-	ret = update_block_group(fs_info, ins.objectid, fs_info->nodesize, 1,
+	ret = update_block_group(trans, ins.objectid, fs_info->nodesize, 1,
 				 0);
 	if (sinfo) {
 		if (fs_info->nodesize > sinfo->bytes_reserved) {
@@ -3026,11 +3026,11 @@ int btrfs_make_block_groups(struct btrfs_trans_handle *trans,
 	return 0;
 }
 
-int btrfs_update_block_group(struct btrfs_root *root,
+int btrfs_update_block_group(struct btrfs_trans_handle *trans,
 			     u64 bytenr, u64 num_bytes, int alloc,
 			     int mark_free)
 {
-	return update_block_group(root->fs_info, bytenr, num_bytes,
+	return update_block_group(trans, bytenr, num_bytes,
 				  alloc, mark_free);
 }
 
@@ -3444,12 +3444,12 @@ int btrfs_fix_block_accounting(struct btrfs_trans_handle *trans)
 		btrfs_item_key_to_cpu(leaf, &key, slot);
 		if (key.type == BTRFS_EXTENT_ITEM_KEY) {
 			bytes_used += key.offset;
-			ret = btrfs_update_block_group(root,
+			ret = btrfs_update_block_group(trans,
 				  key.objectid, key.offset, 1, 0);
 			BUG_ON(ret);
 		} else if (key.type == BTRFS_METADATA_ITEM_KEY) {
 			bytes_used += fs_info->nodesize;
-			ret = btrfs_update_block_group(root,
+			ret = btrfs_update_block_group(trans,
 				  key.objectid, fs_info->nodesize, 1, 0);
 			if (ret)
 				goto out;
@@ -3604,7 +3604,7 @@ static int __btrfs_record_file_extent(struct btrfs_trans_handle *trans,
 					       BTRFS_EXTENT_FLAG_DATA);
 			btrfs_mark_buffer_dirty(leaf);
 
-			ret = btrfs_update_block_group(root, disk_bytenr,
+			ret = btrfs_update_block_group(trans, disk_bytenr,
 						       num_bytes, 1, 0);
 			if (ret)
 				goto fail;
diff --git a/image/main.c b/image/main.c
index bddb49720f0a..f88ffb16bafe 100644
--- a/image/main.c
+++ b/image/main.c
@@ -2338,8 +2338,9 @@ again:
 	return 0;
 }
 
-static void fixup_block_groups(struct btrfs_fs_info *fs_info)
+static void fixup_block_groups(struct btrfs_trans_handle *trans)
 {
+	struct btrfs_fs_info *fs_info = trans->fs_info;
 	struct btrfs_block_group_cache *bg;
 	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
 	struct cache_extent *ce;
@@ -2499,7 +2500,7 @@ static int fixup_chunks_and_devices(struct btrfs_fs_info *fs_info,
 		return PTR_ERR(trans);
 	}
 
-	fixup_block_groups(fs_info);
+	fixup_block_groups(trans);
 	ret = fixup_dev_extents(trans);
 	if (ret < 0)
 		goto error;
-- 
2.21.0 (Apple Git-122.2)



* [PATCH V2 09/10] btrfs-progs: reform block groups caches structure
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
                   ` (7 preceding siblings ...)
  2019-12-18  5:18 ` [PATCH V2 08/10] btrfs-progs: pass @trans to functions touching dirty block groups damenly.su
@ 2019-12-18  5:18 ` damenly.su
  2019-12-18  5:18 ` [PATCH V2 10/10] btrfs-progs: cleanups after block group cache reform damenly.su
  2020-01-22 17:52 ` [PATCH V2 00/10] unify organization structure of block group cache David Sterba
  10 siblings, 0 replies; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue

From: Su Yue <Damenly_Su@gmx.com>

This commit organizes the block group caches in
btrfs_fs_info::block_group_cache_tree, and dirty block groups are
linked into btrfs_trans_handle::dirty_bgs.

To keep the series bisectable, it mostly replaces in place (a short
before/after sketch follows this list):
1. Replace the old block group lookup functions with the new functions
introduced in the former commits.
2. set_extent_bits(..., BLOCK_GROUP_DIRTY) calls are replaced by linking
the block group cache into trans::dirty_bgs. Checking and clearing the
bit are converted too.
3. set_extent_bits(..., bit | EXTENT_LOCKED) calls are replaced by the
new btrfs_add_block_group_cache(), which inserts caches into
btrfs_fs_info::block_group_cache_tree directly. Other operations are
converted to tree operations.

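A rough before/after sketch of the conversion, pieced together from the
hunks below (illustration only; @start and @end are the block group's
key.objectid and its last byte):

  /* Old style: extent bits plus a private pointer */
  set_extent_bits(&info->block_group_cache, start, end, bit | EXTENT_LOCKED);
  set_state_private(&info->block_group_cache, start, (unsigned long)cache);
  set_extent_bits(&info->block_group_cache, start, end, BLOCK_GROUP_DIRTY);

  /* New style: rb tree insertion plus a per-transaction dirty list */
  btrfs_add_block_group_cache(fs_info, cache);
  if (list_empty(&cache->dirty_list))
          list_add_tail(&cache->dirty_list, &trans->dirty_bgs);
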
Signed-off-by: Su Yue <Damenly_Su@gmx.com>
---
 cmds/rescue-chunk-recover.c |   4 +-
 extent-tree.c               | 214 ++++++------------------------------
 image/main.c                |   5 +-
 transaction.c               |   3 +-
 4 files changed, 39 insertions(+), 187 deletions(-)

diff --git a/cmds/rescue-chunk-recover.c b/cmds/rescue-chunk-recover.c
index 461b66c6e13b..a13acc015d11 100644
--- a/cmds/rescue-chunk-recover.c
+++ b/cmds/rescue-chunk-recover.c
@@ -1100,8 +1100,8 @@ static int block_group_free_all_extent(struct btrfs_trans_handle *trans,
 	start = cache->key.objectid;
 	end = start + cache->key.offset - 1;
 
-	set_extent_bits(&info->block_group_cache, start, end,
-			BLOCK_GROUP_DIRTY);
+	if (list_empty(&cache->dirty_list))
+		list_add_tail(&cache->dirty_list, &trans->dirty_bgs);
 	set_extent_dirty(&info->free_space_cache, start, end);
 
 	cache->used = 0;
diff --git a/extent-tree.c b/extent-tree.c
index f50d1c8b0a77..b7d5aa104a37 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -24,6 +24,7 @@
 #include "kernel-lib/radix-tree.h"
 #include "ctree.h"
 #include "disk-io.h"
+#include "kernel-lib/rbtree.h"
 #include "print-tree.h"
 #include "transaction.h"
 #include "crypto/crc32c.h"
@@ -164,31 +165,10 @@ err:
 	return 0;
 }
 
-static int btrfs_add_block_group_cache(struct btrfs_fs_info *info,
-				       struct btrfs_block_group_cache *cache,
-				       int bits)
-{
-	int ret;
-
-	ret = set_extent_bits(&info->block_group_cache, cache->key.objectid,
-			      cache->key.objectid + cache->key.offset - 1,
-			      bits);
-	if (ret)
-		return ret;
-
-	ret = set_state_private(&info->block_group_cache, cache->key.objectid,
-				(unsigned long)cache);
-	if (ret)
-		clear_extent_bits(&info->block_group_cache, cache->key.objectid,
-				  cache->key.objectid + cache->key.offset - 1,
-				  bits);
-	return ret;
-}
-
 /*
  * This adds the block group to the fs_info rb tree for the block group cache
  */
-static int btrfs_add_block_group_cache_kernel(struct btrfs_fs_info *info,
+static int btrfs_add_block_group_cache(struct btrfs_fs_info *info,
 				struct btrfs_block_group_cache *block_group)
 {
 	struct rb_node **p;
@@ -262,7 +242,7 @@ static struct btrfs_block_group_cache *block_group_cache_tree_search(
  * Return the block group that contains @bytenr, otherwise return the next one
  * that starts after @bytenr
  */
-struct btrfs_block_group_cache *btrfs_lookup_first_block_group_kernel(
+struct btrfs_block_group_cache *btrfs_lookup_first_block_group(
 		struct btrfs_fs_info *info, u64 bytenr)
 {
 	return block_group_cache_tree_search(info, bytenr, 1);
@@ -271,78 +251,12 @@ struct btrfs_block_group_cache *btrfs_lookup_first_block_group_kernel(
 /*
  * Return the block group that contains the given bytenr
  */
-struct btrfs_block_group_cache *btrfs_lookup_block_group_kernel(
+struct btrfs_block_group_cache *btrfs_lookup_block_group(
 		struct btrfs_fs_info *info, u64 bytenr)
 {
 	return block_group_cache_tree_search(info, bytenr, 0);
 }
 
-/*
- * Return the block group that contains @bytenr, otherwise return the next one
- * that starts after @bytenr
- */
-struct btrfs_block_group_cache *btrfs_lookup_first_block_group(struct
-						       btrfs_fs_info *info,
-						       u64 bytenr)
-{
-	struct extent_io_tree *block_group_cache;
-	struct btrfs_block_group_cache *block_group = NULL;
-	u64 ptr;
-	u64 start;
-	u64 end;
-	int ret;
-
-	bytenr = max_t(u64, bytenr,
-		       BTRFS_SUPER_INFO_OFFSET + BTRFS_SUPER_INFO_SIZE);
-	block_group_cache = &info->block_group_cache;
-	ret = find_first_extent_bit(block_group_cache,
-				    bytenr, &start, &end,
-				    BLOCK_GROUP_DATA | BLOCK_GROUP_METADATA |
-				    BLOCK_GROUP_SYSTEM);
-	if (ret) {
-		return NULL;
-	}
-	ret = get_state_private(block_group_cache, start, &ptr);
-	if (ret)
-		return NULL;
-
-	block_group = (struct btrfs_block_group_cache *)(unsigned long)ptr;
-	return block_group;
-}
-
-/*
- * Return the block group that contains the given @bytenr
- */
-struct btrfs_block_group_cache *btrfs_lookup_block_group(struct
-							 btrfs_fs_info *info,
-							 u64 bytenr)
-{
-	struct extent_io_tree *block_group_cache;
-	struct btrfs_block_group_cache *block_group = NULL;
-	u64 ptr;
-	u64 start;
-	u64 end;
-	int ret;
-
-	block_group_cache = &info->block_group_cache;
-	ret = find_first_extent_bit(block_group_cache,
-				    bytenr, &start, &end,
-				    BLOCK_GROUP_DATA | BLOCK_GROUP_METADATA |
-				    BLOCK_GROUP_SYSTEM);
-	if (ret) {
-		return NULL;
-	}
-	ret = get_state_private(block_group_cache, start, &ptr);
-	if (ret)
-		return NULL;
-
-	block_group = (struct btrfs_block_group_cache *)(unsigned long)ptr;
-	if (block_group->key.objectid <= bytenr && bytenr <
-	    block_group->key.objectid + block_group->key.offset)
-		return block_group;
-	return NULL;
-}
-
 static int block_group_bits(struct btrfs_block_group_cache *cache, u64 bits)
 {
 	return (cache->flags & bits) == bits;
@@ -432,28 +346,18 @@ btrfs_find_block_group(struct btrfs_root *root, struct btrfs_block_group_cache
 		       *hint, u64 search_start, int data, int owner)
 {
 	struct btrfs_block_group_cache *cache;
-	struct extent_io_tree *block_group_cache;
 	struct btrfs_block_group_cache *found_group = NULL;
 	struct btrfs_fs_info *info = root->fs_info;
 	u64 used;
 	u64 last = 0;
 	u64 hint_last;
-	u64 start;
-	u64 end;
 	u64 free_check;
-	u64 ptr;
-	int bit;
-	int ret;
 	int full_search = 0;
 	int factor = 10;
 
-	block_group_cache = &info->block_group_cache;
-
 	if (!owner)
 		factor = 10;
 
-	bit = block_group_state_bits(data);
-
 	if (search_start) {
 		struct btrfs_block_group_cache *shint;
 		shint = btrfs_lookup_block_group(info, search_start);
@@ -483,16 +387,10 @@ btrfs_find_block_group(struct btrfs_root *root, struct btrfs_block_group_cache
 	}
 again:
 	while(1) {
-		ret = find_first_extent_bit(block_group_cache, last,
-					    &start, &end, bit);
-		if (ret)
-			break;
-
-		ret = get_state_private(block_group_cache, start, &ptr);
-		if (ret)
+		cache = btrfs_lookup_first_block_group(info, last);
+		if (!cache)
 			break;
 
-		cache = (struct btrfs_block_group_cache *)(unsigned long)ptr;
 		last = cache->key.objectid + cache->key.offset;
 		used = cache->used;
 
@@ -1676,38 +1574,18 @@ fail:
 
 int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans)
 {
-	struct extent_io_tree *block_group_cache;
 	struct btrfs_block_group_cache *cache;
-	int ret;
 	struct btrfs_path *path;
-	u64 last = 0;
-	u64 start;
-	u64 end;
-	u64 ptr;
+	int ret;
 
-	block_group_cache = &trans->fs_info->block_group_cache;
 	path = btrfs_alloc_path();
 	if (!path)
 		return -ENOMEM;
 
-	while(1) {
-		ret = find_first_extent_bit(block_group_cache, last,
-					    &start, &end, BLOCK_GROUP_DIRTY);
-		if (ret) {
-			if (last == 0)
-				break;
-			last = 0;
-			continue;
-		}
-
-		last = end + 1;
-		ret = get_state_private(block_group_cache, start, &ptr);
-		BUG_ON(ret);
-
-		clear_extent_bits(block_group_cache, start, end,
-				  BLOCK_GROUP_DIRTY);
-
-		cache = (struct btrfs_block_group_cache *)(unsigned long)ptr;
+	while (!list_empty(&trans->dirty_bgs)) {
+		cache = list_first_entry(&trans->dirty_bgs,
+				 struct btrfs_block_group_cache, dirty_list);
+		list_del_init(&cache->dirty_list);
 		ret = write_one_cache_group(trans, path, cache);
 		if (ret)
 			break;
@@ -1880,8 +1758,6 @@ static int update_block_group(struct btrfs_trans_handle *trans, u64 bytenr,
 	u64 total = num_bytes;
 	u64 old_val;
 	u64 byte_in_group;
-	u64 start;
-	u64 end;
 
 	/* block accounting for super block */
 	old_val = btrfs_super_bytes_used(info->super_copy);
@@ -1898,11 +1774,8 @@ static int update_block_group(struct btrfs_trans_handle *trans, u64 bytenr,
 		}
 		byte_in_group = bytenr - cache->key.objectid;
 		WARN_ON(byte_in_group > cache->key.offset);
-		start = cache->key.objectid;
-		end = start + cache->key.offset - 1;
-		set_extent_bits(&info->block_group_cache, start, end,
-				BLOCK_GROUP_DIRTY);
-
+		if (list_empty(&cache->dirty_list))
+			list_add_tail(&cache->dirty_list, &trans->dirty_bgs);
 		old_val = cache->used;
 		num_bytes = min(total, cache->key.offset - byte_in_group);
 
@@ -2691,29 +2564,24 @@ struct extent_buffer *btrfs_alloc_free_block(struct btrfs_trans_handle *trans,
 int btrfs_free_block_groups(struct btrfs_fs_info *info)
 {
 	struct btrfs_space_info *sinfo;
-	struct btrfs_block_group_cache *cache;
+	struct btrfs_block_group_cache *cache, *next;
 	u64 start;
 	u64 end;
-	u64 ptr;
 	int ret;
 
-	while(1) {
-		ret = find_first_extent_bit(&info->block_group_cache, 0,
-					    &start, &end, (unsigned int)-1);
-		if (ret)
-			break;
-		ret = get_state_private(&info->block_group_cache, start, &ptr);
-		if (!ret) {
-			cache = u64_to_ptr(ptr);
-			if (cache->free_space_ctl) {
-				btrfs_remove_free_space_cache(cache);
-				kfree(cache->free_space_ctl);
-			}
-			kfree(cache);
+	rbtree_postorder_for_each_entry_safe(cache, next,
+			     &info->block_group_cache_tree, cache_node) {
+		if (!list_empty(&cache->dirty_list))
+			list_del_init(&cache->dirty_list);
+		rb_erase(&cache->cache_node, &info->block_group_cache_tree);
+		RB_CLEAR_NODE(&cache->cache_node);
+		if (cache->free_space_ctl) {
+			btrfs_remove_free_space_cache(cache);
+			kfree(cache->free_space_ctl);
 		}
-		clear_extent_bits(&info->block_group_cache, start,
-				  end, (unsigned int)-1);
+		kfree(cache);
 	}
+
 	while(1) {
 		ret = find_first_extent_bit(&info->free_space_cache, 0,
 					    &start, &end, EXTENT_DIRTY);
@@ -2791,7 +2659,6 @@ static int read_one_block_group(struct btrfs_fs_info *fs_info,
 	struct btrfs_block_group_item bgi;
 	struct btrfs_key key;
 	int slot = path->slots[0];
-	int bit = 0;
 	int ret;
 
 	btrfs_item_key_to_cpu(leaf, &key, slot);
@@ -2816,13 +2683,6 @@ static int read_one_block_group(struct btrfs_fs_info *fs_info,
 	cache->used = btrfs_block_group_used(&bgi);
 	INIT_LIST_HEAD(&cache->dirty_list);
 
-	if (cache->flags & BTRFS_BLOCK_GROUP_DATA) {
-		bit = BLOCK_GROUP_DATA;
-	} else if (cache->flags & BTRFS_BLOCK_GROUP_SYSTEM) {
-		bit = BLOCK_GROUP_SYSTEM;
-	} else if (cache->flags & BTRFS_BLOCK_GROUP_METADATA) {
-		bit = BLOCK_GROUP_METADATA;
-	}
 	set_avail_alloc_bits(fs_info, cache->flags);
 	if (btrfs_chunk_readonly(fs_info, cache->key.objectid))
 		cache->ro = 1;
@@ -2836,7 +2696,7 @@ static int read_one_block_group(struct btrfs_fs_info *fs_info,
 	}
 	cache->space_info = space_info;
 
-	btrfs_add_block_group_cache(fs_info, cache, bit | EXTENT_LOCKED);
+	btrfs_add_block_group_cache(fs_info, cache);
 	return 0;
 }
 
@@ -2886,7 +2746,6 @@ btrfs_add_block_group(struct btrfs_fs_info *fs_info, u64 bytes_used, u64 type,
 		      u64 chunk_offset, u64 size)
 {
 	int ret;
-	int bit = 0;
 	struct btrfs_block_group_cache *cache;
 
 	cache = kzalloc(sizeof(*cache), GFP_NOFS);
@@ -2904,9 +2763,7 @@ btrfs_add_block_group(struct btrfs_fs_info *fs_info, u64 bytes_used, u64 type,
 				&cache->space_info);
 	BUG_ON(ret);
 
-	bit = block_group_state_bits(type);
-
-	ret = btrfs_add_block_group_cache(fs_info, cache, bit | EXTENT_LOCKED);
+	ret = btrfs_add_block_group_cache(fs_info, cache);
 	BUG_ON(ret);
 	set_avail_alloc_bits(fs_info, type);
 
@@ -2953,7 +2810,6 @@ int btrfs_make_block_groups(struct btrfs_trans_handle *trans,
 	u64 total_data = 0;
 	u64 total_metadata = 0;
 	int ret;
-	int bit;
 	struct btrfs_root *extent_root = fs_info->extent_root;
 	struct btrfs_block_group_cache *cache;
 
@@ -2965,7 +2821,6 @@ int btrfs_make_block_groups(struct btrfs_trans_handle *trans,
 		group_size = total_bytes / 12;
 		group_size = min_t(u64, group_size, total_bytes - cur_start);
 		if (cur_start == 0) {
-			bit = BLOCK_GROUP_SYSTEM;
 			group_type = BTRFS_BLOCK_GROUP_SYSTEM;
 			group_size /= 4;
 			group_size &= ~(group_align - 1);
@@ -3001,8 +2856,7 @@ int btrfs_make_block_groups(struct btrfs_trans_handle *trans,
 					0, &cache->space_info);
 		BUG_ON(ret);
 		set_avail_alloc_bits(fs_info, group_type);
-		btrfs_add_block_group_cache(fs_info, cache,
-					    bit | EXTENT_LOCKED);
+		btrfs_add_block_group_cache(fs_info, cache);
 		cur_start += group_size;
 	}
 	/* then insert all the items */
@@ -3278,8 +3132,9 @@ static int free_block_group_cache(struct btrfs_trans_handle *trans,
 		btrfs_remove_free_space_cache(cache);
 		kfree(cache->free_space_ctl);
 	}
-	clear_extent_bits(&fs_info->block_group_cache, bytenr, bytenr + len - 1,
-			  (unsigned int)-1);
+	if (!list_empty(&cache->dirty_list))
+		list_del(&cache->dirty_list);
+	rb_erase(&cache->cache_node, &fs_info->block_group_cache_tree);
 	ret = free_space_info(fs_info, flags, len, 0, NULL);
 	if (ret < 0)
 		goto out;
@@ -3412,13 +3267,12 @@ int btrfs_fix_block_accounting(struct btrfs_trans_handle *trans)
 		cache = btrfs_lookup_first_block_group(fs_info, start);
 		if (!cache)
 			break;
+
 		start = cache->key.objectid + cache->key.offset;
 		cache->used = 0;
 		cache->space_info->bytes_used = 0;
-		set_extent_bits(&root->fs_info->block_group_cache,
-				cache->key.objectid,
-				cache->key.objectid + cache->key.offset -1,
-				BLOCK_GROUP_DIRTY);
+		if (list_empty(&cache->dirty_list))
+			list_add_tail(&cache->dirty_list, &trans->dirty_bgs);
 	}
 
 	btrfs_init_path(&path);
diff --git a/image/main.c b/image/main.c
index f88ffb16bafe..95eb3cc3d4de 100644
--- a/image/main.c
+++ b/image/main.c
@@ -2365,9 +2365,8 @@ static void fixup_block_groups(struct btrfs_trans_handle *trans)
 
 		/* Update the block group item and mark the bg dirty */
 		bg->flags = map->type;
-		set_extent_bits(&fs_info->block_group_cache, ce->start,
-				ce->start + ce->size - 1, BLOCK_GROUP_DIRTY);
-
+		if (list_empty(&bg->dirty_list))
+			list_add_tail(&bg->dirty_list, &trans->dirty_bgs);
 		/*
 		 * Chunk and bg flags can be different, changing bg flags
 		 * without update avail_data/meta_alloc_bits will lead to
diff --git a/transaction.c b/transaction.c
index 269e52c01d29..b6b81b2178c8 100644
--- a/transaction.c
+++ b/transaction.c
@@ -203,8 +203,7 @@ commit_tree:
 	 * again, we need to exhause both dirty blocks and delayed refs
 	 */
 	while (!RB_EMPTY_ROOT(&trans->delayed_refs.href_root) ||
-	       test_range_bit(&fs_info->block_group_cache, 0, (u64)-1,
-			      BLOCK_GROUP_DIRTY, 0)) {
+	       !list_empty(&trans->dirty_bgs)) {
 		ret = btrfs_write_dirty_block_groups(trans);
 		if (ret < 0)
 			goto error;
-- 
2.21.0 (Apple Git-122.2)



* [PATCH V2 10/10] btrfs-progs: cleanups after block group cache reform
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
                   ` (8 preceding siblings ...)
  2019-12-18  5:18 ` [PATCH V2 09/10] btrfs-progs: reform block groups caches structure damenly.su
@ 2019-12-18  5:18 ` damenly.su
  2020-01-22 17:52 ` [PATCH V2 00/10] unify organization structure of block group cache David Sterba
  10 siblings, 0 replies; 15+ messages in thread
From: damenly.su @ 2019-12-18  5:18 UTC (permalink / raw)
  To: linux-btrfs; +Cc: Su Yue, Qu Wenruo

From: Su Yue <Damenly_Su@gmx.com>

btrfs_fs_info::block_group_cache and the bit BLOCK_GROUP_DIRTY are now
unused, as is block_group_state_bits().

Remove them.

Signed-off-by: Su Yue <Damenly_Su@gmx.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
---
 ctree.h       |  1 -
 disk-io.c     |  2 --
 extent-tree.c | 12 ------------
 extent_io.h   |  2 --
 4 files changed, 17 deletions(-)

diff --git a/ctree.h b/ctree.h
index 53882d04ac03..6d2fad6406d7 100644
--- a/ctree.h
+++ b/ctree.h
@@ -1146,7 +1146,6 @@ struct btrfs_fs_info {
 
 	struct extent_io_tree extent_cache;
 	struct extent_io_tree free_space_cache;
-	struct extent_io_tree block_group_cache;
 	struct extent_io_tree pinned_extents;
 	struct extent_io_tree extent_ins;
 	struct extent_io_tree *excluded_extents;
diff --git a/disk-io.c b/disk-io.c
index b7ae72a99f59..95958d9706da 100644
--- a/disk-io.c
+++ b/disk-io.c
@@ -794,7 +794,6 @@ struct btrfs_fs_info *btrfs_new_fs_info(int writable, u64 sb_bytenr)
 
 	extent_io_tree_init(&fs_info->extent_cache);
 	extent_io_tree_init(&fs_info->free_space_cache);
-	extent_io_tree_init(&fs_info->block_group_cache);
 	extent_io_tree_init(&fs_info->pinned_extents);
 	extent_io_tree_init(&fs_info->extent_ins);
 
@@ -1069,7 +1068,6 @@ void btrfs_cleanup_all_caches(struct btrfs_fs_info *fs_info)
 	free_mapping_cache_tree(&fs_info->mapping_tree.cache_tree);
 	extent_io_tree_cleanup(&fs_info->extent_cache);
 	extent_io_tree_cleanup(&fs_info->free_space_cache);
-	extent_io_tree_cleanup(&fs_info->block_group_cache);
 	extent_io_tree_cleanup(&fs_info->pinned_extents);
 	extent_io_tree_cleanup(&fs_info->extent_ins);
 }
diff --git a/extent-tree.c b/extent-tree.c
index b7d5aa104a37..11879d89d1a7 100644
--- a/extent-tree.c
+++ b/extent-tree.c
@@ -329,18 +329,6 @@ wrapped:
 	goto again;
 }
 
-static int block_group_state_bits(u64 flags)
-{
-	int bits = 0;
-	if (flags & BTRFS_BLOCK_GROUP_DATA)
-		bits |= BLOCK_GROUP_DATA;
-	if (flags & BTRFS_BLOCK_GROUP_METADATA)
-		bits |= BLOCK_GROUP_METADATA;
-	if (flags & BTRFS_BLOCK_GROUP_SYSTEM)
-		bits |= BLOCK_GROUP_SYSTEM;
-	return bits;
-}
-
 static struct btrfs_block_group_cache *
 btrfs_find_block_group(struct btrfs_root *root, struct btrfs_block_group_cache
 		       *hint, u64 search_start, int data, int owner)
diff --git a/extent_io.h b/extent_io.h
index 1715acc60708..7f88e3f8a305 100644
--- a/extent_io.h
+++ b/extent_io.h
@@ -47,8 +47,6 @@
 #define BLOCK_GROUP_METADATA	(1U << 2)
 #define BLOCK_GROUP_SYSTEM	(1U << 4)
 
-#define BLOCK_GROUP_DIRTY 	(1U)
-
 /*
  * The extent buffer bitmap operations are done with byte granularity instead of
  * word granularity for two reasons:
-- 
2.21.0 (Apple Git-122.2)



* Re: [PATCH V2 04/10] btrfs-progs: reform the function block_group_cache_tree_search()
  2019-12-18  5:18 ` [PATCH V2 04/10] btrfs-progs: reform the function block_group_cache_tree_search() damenly.su
@ 2019-12-18  9:51   ` Qu Wenruo
  0 siblings, 0 replies; 15+ messages in thread
From: Qu Wenruo @ 2019-12-18  9:51 UTC (permalink / raw)
  To: damenly.su, linux-btrfs; +Cc: Su Yue





On 2019/12/18 1:18 PM, damenly.su@gmail.com wrote:
> From: Su Yue <Damenly_Su@gmx.com>
> 
> Change @contains to @next in block_group_cache_tree_search().
> Now the function will try to find the block group containing
> @bytenr. If none is found, it returns NULL if @next is zero,
> otherwise it returns the next block group.
> 
> This will be used in a later commit.
> 
> Signed-off-by: Su Yue <Damenly_Su@gmx.com>

Reviewed-by: Qu Wenruo <wqu@suse.com>

The @next looks pretty good, clearer than the old @contains.

Thanks,
Qu

> ---
>  extent-tree.c | 15 ++++++++++-----
>  1 file changed, 10 insertions(+), 5 deletions(-)
> 
> diff --git a/extent-tree.c b/extent-tree.c
> index ab576f8732a2..fdfa29a2409f 100644
> --- a/extent-tree.c
> +++ b/extent-tree.c
> @@ -196,11 +196,15 @@ static int btrfs_add_block_group_cache(struct btrfs_fs_info *info,
>  }
>  
>  /*
> - * This will return the block group at or after bytenr if contains is 0, else
> - * it will return the block group that contains the bytenr
> + * This will return the block group which contains @bytenr if it exists.
> + * If found nothing, the return depends on @next.
> + *
> + * @next:
> + *   if 0, return NULL if there's no block group containing the bytenr.
> + *   if 1, return the block group which starts after @bytenr.
>   */
>  static struct btrfs_block_group_cache *block_group_cache_tree_search(
> -		struct btrfs_fs_info *info, u64 bytenr, int contains)
> +		struct btrfs_fs_info *info, u64 bytenr, int next)
>  {
>  	struct btrfs_block_group_cache *cache, *ret = NULL;
>  	struct rb_node *n;
> @@ -215,11 +219,11 @@ static struct btrfs_block_group_cache *block_group_cache_tree_search(
>  		start = cache->key.objectid;
>  
>  		if (bytenr < start) {
> -			if (!contains && (!ret || start < ret->key.objectid))
> +			if (next && (!ret || start < ret->key.objectid))
>  				ret = cache;
>  			n = n->rb_left;
>  		} else if (bytenr > start) {
> -			if (contains && bytenr <= end) {
> +			if (bytenr <= end) {
>  				ret = cache;
>  				break;
>  			}
> @@ -229,6 +233,7 @@ static struct btrfs_block_group_cache *block_group_cache_tree_search(
>  			break;
>  		}
>  	}
> +
>  	return ret;
>  }
>  
> 




* Re: [PATCH V2 05/10] btrfs-progs: adjust ported block group lookup functions in kernel version
  2019-12-18  5:18 ` [PATCH V2 05/10] btrfs-progs: adjust ported block group lookup functions in kernel version damenly.su
@ 2019-12-18  9:52   ` Qu Wenruo
  2019-12-18 11:01     ` Su Yue
  0 siblings, 1 reply; 15+ messages in thread
From: Qu Wenruo @ 2019-12-18  9:52 UTC (permalink / raw)
  To: damenly.su, linux-btrfs; +Cc: Su Yue





On 2019/12/18 1:18 PM, damenly.su@gmail.com wrote:
> From: Su Yue <Damenly_Su@gmx.com>
> 
> btrfs_lookup_first_block_group() and
> btrfs_lookup_first_block_group_kernel() behave differently.
> There are many places calling the lookup functions, including the
> extent allocation code. It's too complicated to check and change
> them all, and it would influence many functionalities in progs.
> 
> So here, just make the kernel version lookup functions follow the
> progs behavior.
> 
> Signed-off-by: Su Yue <Damenly_Su@gmx.com>

It should be folded into previous commit, or this will break bisect.

Thanks,
Qu

> ---
>  extent-tree.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/extent-tree.c b/extent-tree.c
> index fdfa29a2409f..3f7b82dc88a2 100644
> --- a/extent-tree.c
> +++ b/extent-tree.c
> @@ -238,12 +238,13 @@ static struct btrfs_block_group_cache *block_group_cache_tree_search(
>  }
>  
>  /*
> - * Return the block group that starts at or after bytenr
> + * Return the block group that contains @bytenr, otherwise return the next one
> + * that starts after @bytenr
>   */
>  struct btrfs_block_group_cache *btrfs_lookup_first_block_group_kernel(
>  		struct btrfs_fs_info *info, u64 bytenr)
>  {
> -	return block_group_cache_tree_search(info, bytenr, 0);
> +	return block_group_cache_tree_search(info, bytenr, 1);
>  }
>  
>  /*
> @@ -252,7 +253,7 @@ struct btrfs_block_group_cache *btrfs_lookup_first_block_group_kernel(
>  struct btrfs_block_group_cache *btrfs_lookup_block_group_kernel(
>  		struct btrfs_fs_info *info, u64 bytenr)
>  {
> -	return block_group_cache_tree_search(info, bytenr, 1);
> +	return block_group_cache_tree_search(info, bytenr, 0);
>  }
>  
>  /*
> 




* Re: [PATCH V2 05/10] btrfs-progs: adjust ported block group lookup functions in kernel version
  2019-12-18  9:52   ` Qu Wenruo
@ 2019-12-18 11:01     ` Su Yue
  0 siblings, 0 replies; 15+ messages in thread
From: Su Yue @ 2019-12-18 11:01 UTC (permalink / raw)
  To: Qu Wenruo, damenly.su, linux-btrfs

On 2019/12/18 5:52 PM, Qu Wenruo wrote:
>
>
> On 2019/12/18 1:18 PM, damenly.su@gmail.com wrote:
>> From: Su Yue <Damenly_Su@gmx.com>
>>
>> btrfs_lookup_first_block_group() and
>> btrfs_lookup_first_block_group_kernel() behave differently.
>> There are many places calling the lookup functions, including the
>> extent allocation code. It's too complicated to check and change
>> them all, and it would influence many functionalities in progs.
>>
>> So here, just make the kernel version lookup functions follow the
>> progs behavior.
>>
>> Signed-off-by: Su Yue <Damenly_Su@gmx.com>
>
> It should be folded into previous commit, or this will break bisect.
>

Oh, will do.

Thanks for your review.


> Thanks,
> Qu
>
>> ---
>>   extent-tree.c | 7 ++++---
>>   1 file changed, 4 insertions(+), 3 deletions(-)
>>
>> diff --git a/extent-tree.c b/extent-tree.c
>> index fdfa29a2409f..3f7b82dc88a2 100644
>> --- a/extent-tree.c
>> +++ b/extent-tree.c
>> @@ -238,12 +238,13 @@ static struct btrfs_block_group_cache *block_group_cache_tree_search(
>>   }
>>
>>   /*
>> - * Return the block group that starts at or after bytenr
>> + * Return the block group that contains @bytenr, otherwise return the next one
>> + * that starts after @bytenr
>>    */
>>   struct btrfs_block_group_cache *btrfs_lookup_first_block_group_kernel(
>>   		struct btrfs_fs_info *info, u64 bytenr)
>>   {
>> -	return block_group_cache_tree_search(info, bytenr, 0);
>> +	return block_group_cache_tree_search(info, bytenr, 1);
>>   }
>>
>>   /*
>> @@ -252,7 +253,7 @@ struct btrfs_block_group_cache *btrfs_lookup_first_block_group_kernel(
>>   struct btrfs_block_group_cache *btrfs_lookup_block_group_kernel(
>>   		struct btrfs_fs_info *info, u64 bytenr)
>>   {
>> -	return block_group_cache_tree_search(info, bytenr, 1);
>> +	return block_group_cache_tree_search(info, bytenr, 0);
>>   }
>>
>>   /*
>>
>



* Re: [PATCH V2 00/10] unify organization structure of block group cache
  2019-12-18  5:18 [PATCH V2 00/10] unify organization structure of block group cache damenly.su
                   ` (9 preceding siblings ...)
  2019-12-18  5:18 ` [PATCH V2 10/10] btrfs-progs: cleanups after block group cache reform damenly.su
@ 2020-01-22 17:52 ` David Sterba
  10 siblings, 0 replies; 15+ messages in thread
From: David Sterba @ 2020-01-22 17:52 UTC (permalink / raw)
  To: damenly.su; +Cc: linux-btrfs, Su Yue

On Wed, Dec 18, 2019 at 01:18:39PM +0800, damenly.su@gmail.com wrote:
> From: Su Yue <Damenly_Su@gmx.com>
> 
> In progs, block group caches are stored in btrfs_fs_info::block_group_cache,
> whose type is extent_io_tree. All block group cache adding/finding/freeing
> is done in the misleading set/clear_extent_bits way. However, the kernel
> side uses a red-black tree in btrfs_fs_info directly. The
> latter structure is more reasonable and intuitive.
> 
> This patchset transforms the structure of block group caches from the
> extent_io_tree cache to a red-black tree and a list.
> 
> patch[1] handles an error to avoid a warning after the reform.
> patch[2-6] prepare the rb tree reform.
> patch[7-8] prepare linking dirty block groups into the transaction.
> patch[9] does the actual replacement.
> patch[10] does the cleanup.
> 
> This patchset passed the progs tests and did not cause any regression.
> 
> ---
> Changelog:
> v2:
>    Adjust block group cache tree search and lookup functions to
>    progs behavior.
>    Use rbtree_postorder_for_each_entry_safe() in patch[9] (Qu Wenruo).
>    Add reviewed-by tags.
> 
> Su Yue (10):
>   btrfs-progs: handle error if btrfs_write_one_block_group() failed
>   btrfs-progs: block_group: add rb tree related members
>   btrfs-progs: port block group cache tree insertion and lookup
>     functions
>   btrfs-progs: reform the function block_group_cache_tree_search()
>   btrfs-progs: adjust ported block group lookup functions in kernel
>     version
>   btrfs-progs: abstract function btrfs_add_block_group_cache()
>   btrfs-progs: block_group: add dirty_bgs list related members
>   btrfs-progs: pass @trans to functions touching dirty block groups
>   btrfs-progs: reform block groups caches structure
>   btrfs-progs: cleanups after block group cache reform

As the patches were reviewed by Qu, I've added them to devel. I've
folded patch 5 into patch 4 as suggested. Thanks.

