linux-btrfs.vger.kernel.org archive mirror
* [PATCH 00/15] RAID/volumes code cleanups
@ 2019-05-17  9:43 David Sterba
  2019-05-17  9:43 ` [PATCH 01/15] btrfs: fix minimum number of chunk errors for DUP David Sterba
                   ` (14 more replies)
  0 siblings, 15 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

This is preparatory work for RAID1C3, making use of the raid_attr table
that replaces the hand-crafted if-else-if sequences and bit mask checks.
Plugging in a new bg profile is easy on top of that, though there are
still some possible cleanups left.
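
For illustration, here's a minimal userspace sketch of the table-driven
pattern the series moves to (field and index names mirror the kernel's
btrfs_raid_array but are trimmed down; this is a sketch, not the kernel
code itself):

  #include <stdio.h>
  #include <stdint.h>

  enum raid_index { RAID_SINGLE, RAID_DUP, RAID_RAID1, NR_RAID_TYPES };

  struct raid_attr {
          uint8_t ncopies;            /* copies of the data */
          uint8_t tolerated_failures; /* devices that may go missing */
  };

  static const struct raid_attr raid_array[NR_RAID_TYPES] = {
          [RAID_SINGLE] = { .ncopies = 1, .tolerated_failures = 0 },
          [RAID_DUP]    = { .ncopies = 2, .tolerated_failures = 0 },
          [RAID_RAID1]  = { .ncopies = 2, .tolerated_failures = 1 },
  };

  int main(void)
  {
          /* one table lookup instead of an if-else-if chain */
          enum raid_index index = RAID_DUP;
          printf("max errors: %d\n", raid_array[index].tolerated_failures);
          return 0;
  }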

There's one user-visible change, patch 2/15, where the balance filters
allow conversion to the RAID56 profiles with the minimum number of
devices. This is for consistency with mkfs/mount.

So this will work:

 $ mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
 $ mount /dev/sda /mnt
 $ btrfs balance start -dconvert=raid5 -mconvert=raid5 /mnt

David Sterba (15):
  btrfs: fix minimum number of chunk errors for DUP
  btrfs: raid56: allow the exact minimum number of devices for balance
    convert
  btrfs: remove mapping tree structures indirection
  btrfs: use raid_attr table in get_profile_num_devs
  btrfs: use raid_attr in btrfs_chunk_max_errors
  btrfs: use raid_attr table in calc_stripe_length for nparity
  btrfs: use raid_attr to get allowed profiles for balance conversion
  btrfs: use raid_attr table to find profiles for integrity lowering
  btrfs: use raid_attr table for btrfs_bg_type_to_factor
  btrfs: factor out helper for counting data stripes
  btrfs: use u8 for raid_array members
  btrfs: factor out devs_max setting in __btrfs_alloc_chunk
  btrfs: refactor helper for bg flags to name conversion
  btrfs: constify map parameter for nr_parity_stripes and
    nr_data_stripes
  btrfs: read number of data stripes from map only once

 fs/btrfs/ctree.h            |   6 +-
 fs/btrfs/dev-replace.c      |   2 +-
 fs/btrfs/disk-io.c          |   6 +-
 fs/btrfs/extent-tree.c      |  28 ++---
 fs/btrfs/free-space-cache.c |   2 +-
 fs/btrfs/raid56.h           |   4 +-
 fs/btrfs/scrub.c            |  16 +--
 fs/btrfs/volumes.c          | 202 ++++++++++++++++--------------------
 fs/btrfs/volumes.h          |  24 ++---
 9 files changed, 125 insertions(+), 165 deletions(-)

-- 
2.21.0



* [PATCH 01/15] btrfs: fix minimum number of chunk errors for DUP
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17 14:05   ` Qu Wenruo
  2019-05-17  9:43 ` [PATCH 02/15] btrfs: raid56: allow the exact minimum number of devices for balance convert David Sterba
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba, Qu Wenruo

The list of profiles in btrfs_chunk_max_errors lists DUP as a profile
able to tolerate 1 missing device. Though this profile is special with
2 copies, both copies live on the same device, so it still needs that
device, unlike the others.
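
To spell out the difference between RAID1 and DUP (values as in the
raid_attr table):

  profile  copies  devices holding them  missing devices tolerated
  RAID1    2       2                     1
  DUP      2       1                     0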

Looking at the history of changes, there's no clear reason why DUP is
there; functions were refactored and blocks of code merged into one
helper.

d20983b40e828 Btrfs: fix writing data into the seed filesystem
  - factor code to a helper

de11cc12df173 Btrfs: don't pre-allocate btrfs bio
  - unrelated change, DUP still in the list with max errors 1

a236aed14ccb0 Btrfs: Deal with failed writes in mirrored configurations
  - introduced the max errors, leaves DUP and RAID1 in the same group

CC: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/volumes.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 1c2a6e4b39da..8508f6028c8d 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5328,8 +5328,7 @@ static inline int btrfs_chunk_max_errors(struct map_lookup *map)
 
 	if (map->type & (BTRFS_BLOCK_GROUP_RAID1 |
 			 BTRFS_BLOCK_GROUP_RAID10 |
-			 BTRFS_BLOCK_GROUP_RAID5 |
-			 BTRFS_BLOCK_GROUP_DUP)) {
+			 BTRFS_BLOCK_GROUP_RAID5)) {
 		max_errors = 1;
 	} else if (map->type & BTRFS_BLOCK_GROUP_RAID6) {
 		max_errors = 2;
-- 
2.21.0



* [PATCH 02/15] btrfs: raid56: allow the exact minimum number of devices for balance convert
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
  2019-05-17  9:43 ` [PATCH 01/15] btrfs: fix minimum number of chunk errors for DUP David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 03/15] btrfs: remove mapping tree structures indirection David Sterba
                   ` (12 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The minimum number of devices for RAID5 is 2, though this is in effect
just a more expensive RAID1, and for RAID6 it's 3, which on exactly 3
devices amounts to a triple copy.

mkfs.btrfs allows that and mounting such a filesystem also works, so the
conversion via balance filters is inconsistent with the others and we
should not prevent it.
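
With the new thresholds the set of allowed conversion targets grows like
this (single and dup are always allowed; derived from the hunk below):

  num_devices = 2  ->  adds raid0, raid1, raid5
  num_devices = 3  ->  adds raid6
  num_devices = 4  ->  adds raid10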

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/volumes.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 8508f6028c8d..10f7de0cc7e6 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -4080,11 +4080,12 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
 	allowed = BTRFS_AVAIL_ALLOC_BIT_SINGLE | BTRFS_BLOCK_GROUP_DUP;
 	if (num_devices > 1)
 		allowed |= (BTRFS_BLOCK_GROUP_RAID0 | BTRFS_BLOCK_GROUP_RAID1);
-	if (num_devices > 2)
+	if (num_devices >= 2)
 		allowed |= BTRFS_BLOCK_GROUP_RAID5;
+	if (num_devices >= 3)
+		allowed |= BTRFS_BLOCK_GROUP_RAID6;
 	if (num_devices > 3)
-		allowed |= (BTRFS_BLOCK_GROUP_RAID10 |
-			    BTRFS_BLOCK_GROUP_RAID6);
+		allowed |= BTRFS_BLOCK_GROUP_RAID10;
 	if (validate_convert_profile(&bctl->data, allowed)) {
 		int index = btrfs_bg_flags_to_raid_index(bctl->data.target);
 
-- 
2.21.0



* [PATCH 03/15] btrfs: remove mapping tree structures indirection
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
  2019-05-17  9:43 ` [PATCH 01/15] btrfs: fix minimum number of chunk errors for DUP David Sterba
  2019-05-17  9:43 ` [PATCH 02/15] btrfs: raid56: allow the exact minimum number of devices for balance convert David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 04/15] btrfs: use raid_attr table in get_profile_num_devs David Sterba
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

fs_info::mapping_tree is the physical<->logical mapping tree and uses
the same underlying structure as extent maps, but is embedded in another
structure. There are no other members and this indirection is useless.
No functional change.
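
The shape of the change, in short (a sketch distilled from the diff
below):

  /* before */
  struct btrfs_mapping_tree { struct extent_map_tree map_tree; };
  read_lock(&fs_info->mapping_tree.map_tree.lock);

  /* after */
  read_lock(&fs_info->mapping_tree.lock);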

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/ctree.h            |  6 +----
 fs/btrfs/dev-replace.c      |  2 +-
 fs/btrfs/disk-io.c          |  2 +-
 fs/btrfs/extent-tree.c      | 14 +++++-----
 fs/btrfs/free-space-cache.c |  2 +-
 fs/btrfs/scrub.c            |  8 +++---
 fs/btrfs/volumes.c          | 53 +++++++++++++++++--------------------
 fs/btrfs/volumes.h          |  3 +--
 8 files changed, 40 insertions(+), 50 deletions(-)

diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
index 0a61dff27f57..a2130cf0e03b 100644
--- a/fs/btrfs/ctree.h
+++ b/fs/btrfs/ctree.h
@@ -99,10 +99,6 @@ static inline u32 count_max_extents(u64 size)
 	return div_u64(size + BTRFS_MAX_EXTENT_SIZE - 1, BTRFS_MAX_EXTENT_SIZE);
 }
 
-struct btrfs_mapping_tree {
-	struct extent_map_tree map_tree;
-};
-
 static inline unsigned long btrfs_chunk_item_size(int num_stripes)
 {
 	BUG_ON(num_stripes == 0);
@@ -824,7 +820,7 @@ struct btrfs_fs_info {
 	struct extent_io_tree *pinned_extents;
 
 	/* logical->physical extent mapping */
-	struct btrfs_mapping_tree mapping_tree;
+	struct extent_map_tree mapping_tree;
 
 	/*
 	 * block reservation for extent, checksum, root tree and
diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
index 55c15f31d00d..6cda52203171 100644
--- a/fs/btrfs/dev-replace.c
+++ b/fs/btrfs/dev-replace.c
@@ -713,7 +713,7 @@ static void btrfs_dev_replace_update_device_in_mapping_tree(
 						struct btrfs_device *srcdev,
 						struct btrfs_device *tgtdev)
 {
-	struct extent_map_tree *em_tree = &fs_info->mapping_tree.map_tree;
+	struct extent_map_tree *em_tree = &fs_info->mapping_tree;
 	struct extent_map *em;
 	struct map_lookup *map;
 	u64 start = 0;
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index deb74a8c191a..cdd6e7ee76b6 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2689,7 +2689,7 @@ int open_ctree(struct super_block *sb,
 	INIT_LIST_HEAD(&fs_info->space_info);
 	INIT_LIST_HEAD(&fs_info->tree_mod_seq_list);
 	INIT_LIST_HEAD(&fs_info->unused_bgs);
-	btrfs_mapping_init(&fs_info->mapping_tree);
+	extent_map_tree_init(&fs_info->mapping_tree);
 	btrfs_init_block_rsv(&fs_info->global_block_rsv,
 			     BTRFS_BLOCK_RSV_GLOBAL);
 	btrfs_init_block_rsv(&fs_info->trans_block_rsv, BTRFS_BLOCK_RSV_TRANS);
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index f79e477a378e..12889a7a1bb1 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -9948,7 +9948,7 @@ static int find_first_block_group(struct btrfs_fs_info *fs_info,
 			struct extent_map_tree *em_tree;
 			struct extent_map *em;
 
-			em_tree = &root->fs_info->mapping_tree.map_tree;
+			em_tree = &root->fs_info->mapping_tree;
 			read_lock(&em_tree->lock);
 			em = lookup_extent_mapping(em_tree, found_key.objectid,
 						   found_key.offset);
@@ -10242,21 +10242,21 @@ btrfs_create_block_group_cache(struct btrfs_fs_info *fs_info,
  */
 static int check_chunk_block_group_mappings(struct btrfs_fs_info *fs_info)
 {
-	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
+	struct extent_map_tree *map_tree = &fs_info->mapping_tree;
 	struct extent_map *em;
 	struct btrfs_block_group_cache *bg;
 	u64 start = 0;
 	int ret = 0;
 
 	while (1) {
-		read_lock(&map_tree->map_tree.lock);
+		read_lock(&map_tree->lock);
 		/*
 		 * lookup_extent_mapping will return the first extent map
 		 * intersecting the range, so setting @len to 1 is enough to
 		 * get the first chunk.
 		 */
-		em = lookup_extent_mapping(&map_tree->map_tree, start, 1);
-		read_unlock(&map_tree->map_tree.lock);
+		em = lookup_extent_mapping(map_tree, start, 1);
+		read_unlock(&map_tree->lock);
 		if (!em)
 			break;
 
@@ -10833,7 +10833,7 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
 	if (remove_em) {
 		struct extent_map_tree *em_tree;
 
-		em_tree = &fs_info->mapping_tree.map_tree;
+		em_tree = &fs_info->mapping_tree;
 		write_lock(&em_tree->lock);
 		remove_extent_mapping(em_tree, em);
 		write_unlock(&em_tree->lock);
@@ -10868,7 +10868,7 @@ struct btrfs_trans_handle *
 btrfs_start_trans_remove_block_group(struct btrfs_fs_info *fs_info,
 				     const u64 chunk_offset)
 {
-	struct extent_map_tree *em_tree = &fs_info->mapping_tree.map_tree;
+	struct extent_map_tree *em_tree = &fs_info->mapping_tree;
 	struct extent_map *em;
 	struct map_lookup *map;
 	unsigned int num_items;
diff --git a/fs/btrfs/free-space-cache.c b/fs/btrfs/free-space-cache.c
index f74dc259307b..c2f6ea14d74a 100644
--- a/fs/btrfs/free-space-cache.c
+++ b/fs/btrfs/free-space-cache.c
@@ -3358,7 +3358,7 @@ void btrfs_put_block_group_trimming(struct btrfs_block_group_cache *block_group)
 
 	if (cleanup) {
 		mutex_lock(&fs_info->chunk_mutex);
-		em_tree = &fs_info->mapping_tree.map_tree;
+		em_tree = &fs_info->mapping_tree;
 		write_lock(&em_tree->lock);
 		em = lookup_extent_mapping(em_tree, block_group->key.objectid,
 					   1);
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index f7b29f9db5e2..0827bdf4faf1 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3410,15 +3410,15 @@ static noinline_for_stack int scrub_chunk(struct scrub_ctx *sctx,
 					  struct btrfs_block_group_cache *cache)
 {
 	struct btrfs_fs_info *fs_info = sctx->fs_info;
-	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
+	struct extent_map_tree *map_tree = &fs_info->mapping_tree;
 	struct map_lookup *map;
 	struct extent_map *em;
 	int i;
 	int ret = 0;
 
-	read_lock(&map_tree->map_tree.lock);
-	em = lookup_extent_mapping(&map_tree->map_tree, chunk_offset, 1);
-	read_unlock(&map_tree->map_tree.lock);
+	read_lock(&map_tree->lock);
+	em = lookup_extent_mapping(map_tree, chunk_offset, 1);
+	read_unlock(&map_tree->lock);
 
 	if (!em) {
 		/*
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 10f7de0cc7e6..a3fa741c8534 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -1818,7 +1818,7 @@ static u64 find_next_chunk(struct btrfs_fs_info *fs_info)
 	struct rb_node *n;
 	u64 ret = 0;
 
-	em_tree = &fs_info->mapping_tree.map_tree;
+	em_tree = &fs_info->mapping_tree;
 	read_lock(&em_tree->lock);
 	n = rb_last(&em_tree->map.rb_root);
 	if (n) {
@@ -2941,7 +2941,7 @@ struct extent_map *btrfs_get_chunk_map(struct btrfs_fs_info *fs_info,
 	struct extent_map_tree *em_tree;
 	struct extent_map *em;
 
-	em_tree = &fs_info->mapping_tree.map_tree;
+	em_tree = &fs_info->mapping_tree;
 	read_lock(&em_tree->lock);
 	em = lookup_extent_mapping(em_tree, logical, length);
 	read_unlock(&em_tree->lock);
@@ -5144,7 +5144,7 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 	em->block_len = em->len;
 	em->orig_block_len = stripe_size;
 
-	em_tree = &info->mapping_tree.map_tree;
+	em_tree = &info->mapping_tree;
 	write_lock(&em_tree->lock);
 	ret = add_extent_mapping(em_tree, em, 0);
 	if (ret) {
@@ -5378,21 +5378,16 @@ int btrfs_chunk_readonly(struct btrfs_fs_info *fs_info, u64 chunk_offset)
 	return readonly;
 }
 
-void btrfs_mapping_init(struct btrfs_mapping_tree *tree)
-{
-	extent_map_tree_init(&tree->map_tree);
-}
-
-void btrfs_mapping_tree_free(struct btrfs_mapping_tree *tree)
+void btrfs_mapping_tree_free(struct extent_map_tree *tree)
 {
 	struct extent_map *em;
 
 	while (1) {
-		write_lock(&tree->map_tree.lock);
-		em = lookup_extent_mapping(&tree->map_tree, 0, (u64)-1);
+		write_lock(&tree->lock);
+		em = lookup_extent_mapping(tree, 0, (u64)-1);
 		if (em)
-			remove_extent_mapping(&tree->map_tree, em);
-		write_unlock(&tree->map_tree.lock);
+			remove_extent_mapping(tree, em);
+		write_unlock(&tree->lock);
 		if (!em)
 			break;
 		/* once for us */
@@ -6687,7 +6682,7 @@ static int read_one_chunk(struct btrfs_key *key, struct extent_buffer *leaf,
 			  struct btrfs_chunk *chunk)
 {
 	struct btrfs_fs_info *fs_info = leaf->fs_info;
-	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
+	struct extent_map_tree *map_tree = &fs_info->mapping_tree;
 	struct map_lookup *map;
 	struct extent_map *em;
 	u64 logical;
@@ -6712,9 +6707,9 @@ static int read_one_chunk(struct btrfs_key *key, struct extent_buffer *leaf,
 			return ret;
 	}
 
-	read_lock(&map_tree->map_tree.lock);
-	em = lookup_extent_mapping(&map_tree->map_tree, logical, 1);
-	read_unlock(&map_tree->map_tree.lock);
+	read_lock(&map_tree->lock);
+	em = lookup_extent_mapping(map_tree, logical, 1);
+	read_unlock(&map_tree->lock);
 
 	/* already mapped? */
 	if (em && em->start <= logical && em->start + em->len > logical) {
@@ -6783,9 +6778,9 @@ static int read_one_chunk(struct btrfs_key *key, struct extent_buffer *leaf,
 
 	}
 
-	write_lock(&map_tree->map_tree.lock);
-	ret = add_extent_mapping(&map_tree->map_tree, em, 0);
-	write_unlock(&map_tree->map_tree.lock);
+	write_lock(&map_tree->lock);
+	ret = add_extent_mapping(map_tree, em, 0);
+	write_unlock(&map_tree->lock);
 	if (ret < 0) {
 		btrfs_err(fs_info,
 			  "failed to add chunk map, start=%llu len=%llu: %d",
@@ -7103,14 +7098,14 @@ int btrfs_read_sys_array(struct btrfs_fs_info *fs_info)
 bool btrfs_check_rw_degradable(struct btrfs_fs_info *fs_info,
 					struct btrfs_device *failing_dev)
 {
-	struct btrfs_mapping_tree *map_tree = &fs_info->mapping_tree;
+	struct extent_map_tree *map_tree = &fs_info->mapping_tree;
 	struct extent_map *em;
 	u64 next_start = 0;
 	bool ret = true;
 
-	read_lock(&map_tree->map_tree.lock);
-	em = lookup_extent_mapping(&map_tree->map_tree, 0, (u64)-1);
-	read_unlock(&map_tree->map_tree.lock);
+	read_lock(&map_tree->lock);
+	em = lookup_extent_mapping(map_tree, 0, (u64)-1);
+	read_unlock(&map_tree->lock);
 	/* No chunk at all? Return false anyway */
 	if (!em) {
 		ret = false;
@@ -7148,10 +7143,10 @@ bool btrfs_check_rw_degradable(struct btrfs_fs_info *fs_info,
 		next_start = extent_map_end(em);
 		free_extent_map(em);
 
-		read_lock(&map_tree->map_tree.lock);
-		em = lookup_extent_mapping(&map_tree->map_tree, next_start,
+		read_lock(&map_tree->lock);
+		em = lookup_extent_mapping(map_tree, next_start,
 					   (u64)(-1) - next_start);
-		read_unlock(&map_tree->map_tree.lock);
+		read_unlock(&map_tree->lock);
 	}
 out:
 	return ret;
@@ -7612,7 +7607,7 @@ static int verify_one_dev_extent(struct btrfs_fs_info *fs_info,
 				 u64 chunk_offset, u64 devid,
 				 u64 physical_offset, u64 physical_len)
 {
-	struct extent_map_tree *em_tree = &fs_info->mapping_tree.map_tree;
+	struct extent_map_tree *em_tree = &fs_info->mapping_tree;
 	struct extent_map *em;
 	struct map_lookup *map;
 	struct btrfs_device *dev;
@@ -7701,7 +7696,7 @@ static int verify_one_dev_extent(struct btrfs_fs_info *fs_info,
 
 static int verify_chunk_dev_extent_mapping(struct btrfs_fs_info *fs_info)
 {
-	struct extent_map_tree *em_tree = &fs_info->mapping_tree.map_tree;
+	struct extent_map_tree *em_tree = &fs_info->mapping_tree;
 	struct extent_map *em;
 	struct rb_node *node;
 	int ret = 0;
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index 136a3eb64604..07156d974ac4 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -413,8 +413,7 @@ int btrfs_rmap_block(struct btrfs_fs_info *fs_info, u64 chunk_start,
 int btrfs_read_sys_array(struct btrfs_fs_info *fs_info);
 int btrfs_read_chunk_tree(struct btrfs_fs_info *fs_info);
 int btrfs_alloc_chunk(struct btrfs_trans_handle *trans, u64 type);
-void btrfs_mapping_init(struct btrfs_mapping_tree *tree);
-void btrfs_mapping_tree_free(struct btrfs_mapping_tree *tree);
+void btrfs_mapping_tree_free(struct extent_map_tree *tree);
 blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,
 			   int mirror_num, int async_submit);
 int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,
-- 
2.21.0



* [PATCH 04/15] btrfs: use raid_attr table in get_profile_num_devs
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (2 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 03/15] btrfs: remove mapping tree structures indirection David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 05/15] btrfs: use raid_attr in btrfs_chunk_max_errors David Sterba
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The dev_max constraints are defined in the raid_attr table, use it
instead of open-coding it.
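
A devs_max of 0 in the table means there is no limit, in which case the
number of currently writeable devices is used, e.g. (table values at
this point in the series):

  RAID1  ->  devs_max = 2  ->  num_dev = 2
  RAID0  ->  devs_max = 0  ->  num_dev = fs_devices->rw_devices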

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/extent-tree.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 12889a7a1bb1..37d7e5261079 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -4324,15 +4324,9 @@ static u64 get_profile_num_devs(struct btrfs_fs_info *fs_info, u64 type)
 {
 	u64 num_dev;
 
-	if (type & (BTRFS_BLOCK_GROUP_RAID10 |
-		    BTRFS_BLOCK_GROUP_RAID0 |
-		    BTRFS_BLOCK_GROUP_RAID5 |
-		    BTRFS_BLOCK_GROUP_RAID6))
+	num_dev = btrfs_raid_array[btrfs_bg_flags_to_raid_index(type)].devs_max;
+	if (!num_dev)
 		num_dev = fs_info->fs_devices->rw_devices;
-	else if (type & BTRFS_BLOCK_GROUP_RAID1)
-		num_dev = 2;
-	else
-		num_dev = 1;	/* DUP or single */
 
 	return num_dev;
 }
-- 
2.21.0



* [PATCH 05/15] btrfs: use raid_attr in btrfs_chunk_max_errors
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (3 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 04/15] btrfs: use raid_attr table in get_profile_num_devs David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 06/15] btrfs: use raid_attr table in calc_stripe_length for nparity David Sterba
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The number of tolerated failures is stored in the raid_attr table, use
it.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/volumes.c | 14 ++------------
 1 file changed, 2 insertions(+), 12 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a3fa741c8534..995a15a816f2 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5325,19 +5325,9 @@ static noinline int init_first_rw_device(struct btrfs_trans_handle *trans)
 
 static inline int btrfs_chunk_max_errors(struct map_lookup *map)
 {
-	int max_errors;
-
-	if (map->type & (BTRFS_BLOCK_GROUP_RAID1 |
-			 BTRFS_BLOCK_GROUP_RAID10 |
-			 BTRFS_BLOCK_GROUP_RAID5)) {
-		max_errors = 1;
-	} else if (map->type & BTRFS_BLOCK_GROUP_RAID6) {
-		max_errors = 2;
-	} else {
-		max_errors = 0;
-	}
+	const int index = btrfs_bg_flags_to_raid_index(map->type);
 
-	return max_errors;
+	return btrfs_raid_array[index].tolerated_failures;
 }
 
 int btrfs_chunk_readonly(struct btrfs_fs_info *fs_info, u64 chunk_offset)
-- 
2.21.0



* [PATCH 06/15] btrfs: use raid_attr table in calc_stripe_length for nparity
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (4 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 05/15] btrfs: use raid_attr in btrfs_chunk_max_errors David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17 10:06   ` Hans van Kranenburg
  2019-05-17  9:43 ` [PATCH 07/15] btrfs: use raid_attr to get allowed profiles for balance conversion David Sterba
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The table is already used for ncopies; replace the open-coded stripe
counts with the raid_attr nparity value.
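
For example, with the table values at this point (raid5: nparity = 1,
raid6: nparity = 2, raid1: nparity = 0 and ncopies = 2):

  raid5, 4 stripes  ->  data_stripes = 4 - 1 = 3
  raid6, 5 stripes  ->  data_stripes = 5 - 2 = 3
  raid1, 2 stripes  ->  data_stripes = 2 / 2 = 1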

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/volumes.c | 15 +++++----------
 1 file changed, 5 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 995a15a816f2..743ed1f0b2a6 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -6652,19 +6652,14 @@ static u64 calc_stripe_length(u64 type, u64 chunk_len, int num_stripes)
 {
 	int index = btrfs_bg_flags_to_raid_index(type);
 	int ncopies = btrfs_raid_array[index].ncopies;
+	int nparity = btrfs_raid_array[index].nparity;
 	int data_stripes;
 
-	switch (type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
-	case BTRFS_BLOCK_GROUP_RAID5:
-		data_stripes = num_stripes - 1;
-		break;
-	case BTRFS_BLOCK_GROUP_RAID6:
-		data_stripes = num_stripes - 2;
-		break;
-	default:
+	if (nparity)
+		data_stripes = num_stripes - nparity;
+	else
 		data_stripes = num_stripes / ncopies;
-		break;
-	}
+
 	return div_u64(chunk_len, data_stripes);
 }
 
-- 
2.21.0



* [PATCH 07/15] btrfs: use raid_attr to get allowed profiles for balance conversion
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (5 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 06/15] btrfs: use raid_attr table in calc_stripe_length for nparity David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 08/15] btrfs: use raid_attr table to find profiles for integrity lowering David Sterba
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Iterate over the table and gather all allowed profiles for a given
number of devices, instead of open coding the list.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/volumes.c | 14 +++++---------
 1 file changed, 5 insertions(+), 9 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 743ed1f0b2a6..34e4d2269802 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -4047,6 +4047,7 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
 	u64 num_devices;
 	unsigned seq;
 	bool reducing_integrity;
+	int i;
 
 	if (btrfs_fs_closing(fs_info) ||
 	    atomic_read(&fs_info->balance_pause_req) ||
@@ -4076,16 +4077,11 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
 	}
 
 	num_devices = btrfs_num_devices(fs_info);
+	allowed = 0;
+	for (i = 0; i < ARRAY_SIZE(btrfs_raid_array); i++)
+		if (num_devices >= btrfs_raid_array[i].devs_min)
+			allowed |= btrfs_raid_array[i].bg_flag;
 
-	allowed = BTRFS_AVAIL_ALLOC_BIT_SINGLE | BTRFS_BLOCK_GROUP_DUP;
-	if (num_devices > 1)
-		allowed |= (BTRFS_BLOCK_GROUP_RAID0 | BTRFS_BLOCK_GROUP_RAID1);
-	if (num_devices >= 2)
-		allowed |= BTRFS_BLOCK_GROUP_RAID5;
-	if (num_devices >= 3)
-		allowed |= BTRFS_BLOCK_GROUP_RAID6;
-	if (num_devices > 3)
-		allowed |= BTRFS_BLOCK_GROUP_RAID10;
 	if (validate_convert_profile(&bctl->data, allowed)) {
 		int index = btrfs_bg_flags_to_raid_index(bctl->data.target);
 
-- 
2.21.0



* [PATCH 08/15] btrfs: use raid_attr table to find profiles for integrity lowering
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (6 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 07/15] btrfs: use raid_attr to get allowed profiles for balance conversion David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 09/15] btrfs: use raid_attr table for btrfs_bg_type_to_factor David Sterba
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Replace the open-coded list of profiles by selecting them from the
raid_attr table. The criteria are now more explicit: we need profiles
that have more than 1 copy of the data or that can reconstruct the data
with a missing device.
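
Applied to the current table this selects the same profiles as the
open-coded list:

  dup/raid1/raid10:  ncopies = 2             ->  allowed
  raid5:             tolerated_failures = 1  ->  allowed
  raid6:             tolerated_failures = 2  ->  allowed
  single/raid0:      1 copy, no tolerance    ->  not allowed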

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/volumes.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 34e4d2269802..9bcda2d76a33 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -4110,11 +4110,16 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
 		goto out;
 	}
 
-	/* allow to reduce meta or sys integrity only if force set */
-	allowed = BTRFS_BLOCK_GROUP_DUP | BTRFS_BLOCK_GROUP_RAID1 |
-			BTRFS_BLOCK_GROUP_RAID10 |
-			BTRFS_BLOCK_GROUP_RAID5 |
-			BTRFS_BLOCK_GROUP_RAID6;
+	/*
+	 * Allow to reduce metadata or system integrity only if force set for
+	 * profiles with redundancy (copies, parity)
+	 */
+	allowed = 0;
+	for (i = 0; i < ARRAY_SIZE(btrfs_raid_array); i++) {
+		if (btrfs_raid_array[i].ncopies >= 2 ||
+		    btrfs_raid_array[i].tolerated_failures >= 1)
+			allowed |= btrfs_raid_array[i].bg_flag;
+	}
 	do {
 		seq = read_seqbegin(&fs_info->profiles_lock);
 
-- 
2.21.0



* [PATCH 09/15] btrfs: use raid_attr table for btrfs_bg_type_to_factor
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (7 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 08/15] btrfs: use raid_attr table to find profiles for integrity lowering David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 10/15] btrfs: factor out helper for counting data stripes David Sterba
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The factor is the number of copies, which is exactly what the raid_attr
table stores as ncopies, so use it. For the current profiles this
returns the same values as the open-coded check: 2 for DUP/RAID1/RAID10
and 1 for everything else.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/volumes.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 9bcda2d76a33..3d65fdf7884c 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -7581,10 +7581,9 @@ void btrfs_reset_fs_info_ptr(struct btrfs_fs_info *fs_info)
  */
 int btrfs_bg_type_to_factor(u64 flags)
 {
-	if (flags & (BTRFS_BLOCK_GROUP_DUP | BTRFS_BLOCK_GROUP_RAID1 |
-		     BTRFS_BLOCK_GROUP_RAID10))
-		return 2;
-	return 1;
+	const int index = btrfs_bg_flags_to_raid_index(flags);
+
+	return btrfs_raid_array[index].ncopies;
 }
 
 
-- 
2.21.0



* [PATCH 10/15] btrfs: factor out helper for counting data stripes
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (8 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 09/15] btrfs: use raid_attr table for btrfs_bg_type_to_factor David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 11/15] btrfs: use u8 for raid_array members David Sterba
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Factor the sequence of ifs into a helper; 'data stripes' here means the
number of stripes without the redundant copies and parity.
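
For example: a raid10 chunk with 4 stripes has 4 / 2 = 2 data stripes,
a raid6 chunk with 5 stripes has 5 - 2 = 3, and raid0 keeps all of its
stripes as data stripes.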

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/volumes.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 3d65fdf7884c..3464bf1f0c48 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -3474,6 +3474,18 @@ static int chunk_devid_filter(struct extent_buffer *leaf,
 	return 1;
 }
 
+static u64 calc_data_stripes(u64 type, int num_stripes)
+{
+	const int index = btrfs_bg_flags_to_raid_index(type);
+	const int ncopies = btrfs_raid_array[index].ncopies;
+	const int nparity = btrfs_raid_array[index].nparity;
+
+	if (nparity)
+		return num_stripes - nparity;
+	else
+		return num_stripes / ncopies;
+}
+
 /* [pstart, pend) */
 static int chunk_drange_filter(struct extent_buffer *leaf,
 			       struct btrfs_chunk *chunk,
@@ -3483,22 +3495,15 @@ static int chunk_drange_filter(struct extent_buffer *leaf,
 	int num_stripes = btrfs_chunk_num_stripes(leaf, chunk);
 	u64 stripe_offset;
 	u64 stripe_length;
+	u64 type;
 	int factor;
 	int i;
 
 	if (!(bargs->flags & BTRFS_BALANCE_ARGS_DEVID))
 		return 0;
 
-	if (btrfs_chunk_type(leaf, chunk) & (BTRFS_BLOCK_GROUP_DUP |
-	     BTRFS_BLOCK_GROUP_RAID1 | BTRFS_BLOCK_GROUP_RAID10)) {
-		factor = num_stripes / 2;
-	} else if (btrfs_chunk_type(leaf, chunk) & BTRFS_BLOCK_GROUP_RAID5) {
-		factor = num_stripes - 1;
-	} else if (btrfs_chunk_type(leaf, chunk) & BTRFS_BLOCK_GROUP_RAID6) {
-		factor = num_stripes - 2;
-	} else {
-		factor = num_stripes;
-	}
+	type = btrfs_chunk_type(leaf, chunk);
+	factor = calc_data_stripes(type, num_stripes);
 
 	for (i = 0; i < num_stripes; i++) {
 		stripe = btrfs_stripe_nr(chunk, i);
-- 
2.21.0



* [PATCH 11/15] btrfs: use u8 for raid_array members
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (9 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 10/15] btrfs: factor out helper for counting data stripes David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 12/15] btrfs: factor out devs_max setting in __btrfs_alloc_chunk David Sterba
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The raid_attr table is now 7 * 56 = 392 bytes long, consisting of just
small numbers, so we don't have to use ints. The new size is 7 * 32 =
224 bytes, saving 3 cachelines. As the members become narrower than int,
the min() calls in btrfs_get_num_tolerated_disk_barrier_failures are
switched to min_t so the types of the comparison match.
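
A quick userspace check of the two entry layouts (a sketch that assumes
the same member order as the kernel struct and the usual x86_64
alignment rules, where the u64 bg_flag forces 8-byte alignment):

  #include <stdio.h>
  #include <stdint.h>

  struct raid_attr_old {
          int sub_stripes, dev_stripes, devs_max, devs_min;
          int tolerated_failures, devs_increment, ncopies, nparity;
          int mindev_error;
          char raid_name[8];
          uint64_t bg_flag;
  };

  struct raid_attr_new {
          uint8_t sub_stripes, dev_stripes, devs_max, devs_min;
          uint8_t tolerated_failures, devs_increment, ncopies, nparity;
          uint8_t mindev_error;
          char raid_name[8];
          uint64_t bg_flag;
  };

  int main(void)
  {
          /* prints "old entry: 56, new entry: 32" */
          printf("old entry: %zu, new entry: %zu\n",
                 sizeof(struct raid_attr_old),
                 sizeof(struct raid_attr_new));
          return 0;
  }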

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/disk-io.c |  4 ++--
 fs/btrfs/volumes.h | 18 +++++++++---------
 2 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index cdd6e7ee76b6..c4a4e6c42456 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3709,7 +3709,7 @@ int btrfs_get_num_tolerated_disk_barrier_failures(u64 flags)
 
 	if ((flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 ||
 	    (flags & BTRFS_AVAIL_ALLOC_BIT_SINGLE))
-		min_tolerated = min(min_tolerated,
+		min_tolerated = min_t(int, min_tolerated,
 				    btrfs_raid_array[BTRFS_RAID_SINGLE].
 				    tolerated_failures);
 
@@ -3718,7 +3718,7 @@ int btrfs_get_num_tolerated_disk_barrier_failures(u64 flags)
 			continue;
 		if (!(flags & btrfs_raid_array[raid_type].bg_flag))
 			continue;
-		min_tolerated = min(min_tolerated,
+		min_tolerated = min_t(int, min_tolerated,
 				    btrfs_raid_array[raid_type].
 				    tolerated_failures);
 	}
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index 07156d974ac4..73520a6ed90a 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -336,16 +336,16 @@ struct btrfs_device_info {
 };
 
 struct btrfs_raid_attr {
-	int sub_stripes;	/* sub_stripes info for map */
-	int dev_stripes;	/* stripes per dev */
-	int devs_max;		/* max devs to use */
-	int devs_min;		/* min devs needed */
-	int tolerated_failures; /* max tolerated fail devs */
-	int devs_increment;	/* ndevs has to be a multiple of this */
-	int ncopies;		/* how many copies to data has */
-	int nparity;		/* number of stripes worth of bytes to store
+	u8 sub_stripes;		/* sub_stripes info for map */
+	u8 dev_stripes;		/* stripes per dev */
+	u8 devs_max;		/* max devs to use */
+	u8 devs_min;		/* min devs needed */
+	u8 tolerated_failures;	/* max tolerated fail devs */
+	u8 devs_increment;	/* ndevs has to be a multiple of this */
+	u8 ncopies;		/* how many copies to data has */
+	u8 nparity;		/* number of stripes worth of bytes to store
 				 * parity information */
-	int mindev_error;	/* error code if min devs requisite is unmet */
+	u8 mindev_error;	/* error code if min devs requisite is unmet */
 	const char raid_name[8]; /* name of the raid */
 	u64 bg_flag;		/* block group flag of the raid */
 };
-- 
2.21.0



* [PATCH 12/15] btrfs: factor out devs_max setting in __btrfs_alloc_chunk
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (10 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 11/15] btrfs: use u8 for raid_array members David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 13/15] btrfs: refactor helper for bg flags to name conversion David Sterba
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Merge the repeated code before the if-else block.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/volumes.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 3464bf1f0c48..9ee35fe9ee0f 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -4956,6 +4956,8 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 	sub_stripes = btrfs_raid_array[index].sub_stripes;
 	dev_stripes = btrfs_raid_array[index].dev_stripes;
 	devs_max = btrfs_raid_array[index].devs_max;
+	if (!devs_max)
+		devs_max = BTRFS_MAX_DEVS(info);
 	devs_min = btrfs_raid_array[index].devs_min;
 	devs_increment = btrfs_raid_array[index].devs_increment;
 	ncopies = btrfs_raid_array[index].ncopies;
@@ -4964,8 +4966,6 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 	if (type & BTRFS_BLOCK_GROUP_DATA) {
 		max_stripe_size = SZ_1G;
 		max_chunk_size = BTRFS_MAX_DATA_CHUNK_SIZE;
-		if (!devs_max)
-			devs_max = BTRFS_MAX_DEVS(info);
 	} else if (type & BTRFS_BLOCK_GROUP_METADATA) {
 		/* for larger filesystems, use larger metadata chunks */
 		if (fs_devices->total_rw_bytes > 50ULL * SZ_1G)
@@ -4973,13 +4973,9 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
 		else
 			max_stripe_size = SZ_256M;
 		max_chunk_size = max_stripe_size;
-		if (!devs_max)
-			devs_max = BTRFS_MAX_DEVS(info);
 	} else if (type & BTRFS_BLOCK_GROUP_SYSTEM) {
 		max_stripe_size = SZ_32M;
 		max_chunk_size = 2 * max_stripe_size;
-		if (!devs_max)
-			devs_max = BTRFS_MAX_DEVS_SYS_CHUNK;
 	} else {
 		btrfs_err(info, "invalid chunk type 0x%llx requested",
 		       type);
-- 
2.21.0



* [PATCH 13/15] btrfs: refactor helper for bg flags to name conversion
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (11 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 12/15] btrfs: factor out devs_max setting in __btrfs_alloc_chunk David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 14/15] btrfs: constify map parameter for nr_parity_stripes and nr_data_stripes David Sterba
  2019-05-17  9:43 ` [PATCH 15/15] btrfs: read number of data stripes from map only once David Sterba
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

The helper lacked the btrfs_ prefix; add it and make the parameter the
raw block group type, so that none of the callers has to do the
flags -> index conversion themselves.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/extent-tree.c |  4 +---
 fs/btrfs/volumes.c     | 34 +++++++++++++---------------------
 fs/btrfs/volumes.h     |  3 +--
 3 files changed, 15 insertions(+), 26 deletions(-)

diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 37d7e5261079..436c53a105a5 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -10134,7 +10134,6 @@ void btrfs_add_raid_kobjects(struct btrfs_fs_info *fs_info)
 	struct btrfs_space_info *space_info;
 	struct raid_kobject *rkobj;
 	LIST_HEAD(list);
-	int index;
 	int ret = 0;
 
 	spin_lock(&fs_info->pending_raid_kobjs_lock);
@@ -10143,10 +10142,9 @@ void btrfs_add_raid_kobjects(struct btrfs_fs_info *fs_info)
 
 	list_for_each_entry(rkobj, &list, list) {
 		space_info = __find_space_info(fs_info, rkobj->flags);
-		index = btrfs_bg_flags_to_raid_index(rkobj->flags);
 
 		ret = kobject_add(&rkobj->kobj, &space_info->kobj,
-				  "%s", get_raid_name(index));
+				"%s", btrfs_bg_type_to_raid_name(rkobj->flags));
 		if (ret) {
 			kobject_put(&rkobj->kobj);
 			break;
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 9ee35fe9ee0f..a8bf76d5f8e6 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -123,12 +123,14 @@ const struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = {
 	},
 };
 
-const char *get_raid_name(enum btrfs_raid_types type)
+const char *btrfs_bg_type_to_raid_name(u64 flags)
 {
-	if (type >= BTRFS_NR_RAID_TYPES)
+	const int index = btrfs_bg_flags_to_raid_index(flags);
+
+	if (index >= BTRFS_NR_RAID_TYPES)
 		return NULL;
 
-	return btrfs_raid_array[type].raid_name;
+	return btrfs_raid_array[index].raid_name;
 }
 
 /*
@@ -3926,11 +3928,9 @@ static void describe_balance_args(struct btrfs_balance_args *bargs, char *buf,
 		bp += ret;						\
 	} while (0)
 
-	if (flags & BTRFS_BALANCE_ARGS_CONVERT) {
-		int index = btrfs_bg_flags_to_raid_index(bargs->target);
-
-		CHECK_APPEND_1ARG("convert=%s,", get_raid_name(index));
-	}
+	if (flags & BTRFS_BALANCE_ARGS_CONVERT)
+		CHECK_APPEND_1ARG("convert=%s,",
+				  btrfs_bg_type_to_raid_name(bargs->target));
 
 	if (flags & BTRFS_BALANCE_ARGS_SOFT)
 		CHECK_APPEND_NOARG("soft,");
@@ -4088,29 +4088,23 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
 			allowed |= btrfs_raid_array[i].bg_flag;
 
 	if (validate_convert_profile(&bctl->data, allowed)) {
-		int index = btrfs_bg_flags_to_raid_index(bctl->data.target);
-
 		btrfs_err(fs_info,
 			  "balance: invalid convert data profile %s",
-			  get_raid_name(index));
+			  btrfs_bg_type_to_raid_name(bctl->data.target));
 		ret = -EINVAL;
 		goto out;
 	}
 	if (validate_convert_profile(&bctl->meta, allowed)) {
-		int index = btrfs_bg_flags_to_raid_index(bctl->meta.target);
-
 		btrfs_err(fs_info,
 			  "balance: invalid convert metadata profile %s",
-			  get_raid_name(index));
+			  btrfs_bg_type_to_raid_name(bctl->meta.target));
 		ret = -EINVAL;
 		goto out;
 	}
 	if (validate_convert_profile(&bctl->sys, allowed)) {
-		int index = btrfs_bg_flags_to_raid_index(bctl->sys.target);
-
 		btrfs_err(fs_info,
 			  "balance: invalid convert system profile %s",
-			  get_raid_name(index));
+			  btrfs_bg_type_to_raid_name(bctl->sys.target));
 		ret = -EINVAL;
 		goto out;
 	}
@@ -4159,12 +4153,10 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
 
 	if (btrfs_get_num_tolerated_disk_barrier_failures(meta_target) <
 		btrfs_get_num_tolerated_disk_barrier_failures(data_target)) {
-		int meta_index = btrfs_bg_flags_to_raid_index(meta_target);
-		int data_index = btrfs_bg_flags_to_raid_index(data_target);
-
 		btrfs_warn(fs_info,
 	"balance: metadata profile %s has lower redundancy than data profile %s",
-			   get_raid_name(meta_index), get_raid_name(data_index));
+				btrfs_bg_type_to_raid_name(meta_target),
+				btrfs_bg_type_to_raid_name(data_target));
 	}
 
 	ret = insert_balance_item(fs_info, bctl);
diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
index 73520a6ed90a..4a7a4d90ded8 100644
--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@ -556,8 +556,6 @@ static inline enum btrfs_raid_types btrfs_bg_flags_to_raid_index(u64 flags)
 	return BTRFS_RAID_SINGLE; /* BTRFS_BLOCK_GROUP_SINGLE */
 }
 
-const char *get_raid_name(enum btrfs_raid_types type);
-
 void btrfs_commit_device_sizes(struct btrfs_transaction *trans);
 
 struct list_head *btrfs_get_fs_uuids(void);
@@ -567,6 +565,7 @@ bool btrfs_check_rw_degradable(struct btrfs_fs_info *fs_info,
 					struct btrfs_device *failing_dev);
 
 int btrfs_bg_type_to_factor(u64 flags);
+const char *btrfs_bg_type_to_raid_name(u64 flags);
 int btrfs_verify_dev_extents(struct btrfs_fs_info *fs_info);
 
 #endif
-- 
2.21.0



* [PATCH 14/15] btrfs: constify map parameter for nr_parity_stripes and nr_data_stripes
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (12 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 13/15] btrfs: refactor helper for bg flags to name conversion David Sterba
@ 2019-05-17  9:43 ` David Sterba
  2019-05-17  9:43 ` [PATCH 15/15] btrfs: read number of data stripes from map only once David Sterba
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/raid56.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/btrfs/raid56.h b/fs/btrfs/raid56.h
index f5d4c13a8dbc..2503485db859 100644
--- a/fs/btrfs/raid56.h
+++ b/fs/btrfs/raid56.h
@@ -7,7 +7,7 @@
 #ifndef BTRFS_RAID56_H
 #define BTRFS_RAID56_H
 
-static inline int nr_parity_stripes(struct map_lookup *map)
+static inline int nr_parity_stripes(const struct map_lookup *map)
 {
 	if (map->type & BTRFS_BLOCK_GROUP_RAID5)
 		return 1;
@@ -17,7 +17,7 @@ static inline int nr_parity_stripes(struct map_lookup *map)
 		return 0;
 }
 
-static inline int nr_data_stripes(struct map_lookup *map)
+static inline int nr_data_stripes(const struct map_lookup *map)
 {
 	return map->num_stripes - nr_parity_stripes(map);
 }
-- 
2.21.0



* [PATCH 15/15] btrfs: read number of data stripes from map only once
  2019-05-17  9:43 [PATCH 00/15] RAID/volumes code cleanups David Sterba
                   ` (13 preceding siblings ...)
  2019-05-17  9:43 ` [PATCH 14/15] btrfs: constify map parameter for nr_parity_stripes and nr_data_stripes David Sterba
@ 2019-05-17  9:43 ` David Sterba
  14 siblings, 0 replies; 20+ messages in thread
From: David Sterba @ 2019-05-17  9:43 UTC (permalink / raw)
  To: linux-btrfs; +Cc: David Sterba

There are several places that call nr_data_stripes on the same map, but
the value does not change while the map is in use, so read it into a
local variable once.

Signed-off-by: David Sterba <dsterba@suse.com>
---
 fs/btrfs/scrub.c   |  8 ++++----
 fs/btrfs/volumes.c | 17 +++++++++--------
 2 files changed, 13 insertions(+), 12 deletions(-)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 0827bdf4faf1..e51929a55af4 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -2660,18 +2660,18 @@ static int get_raid56_logic_offset(u64 physical, int num,
 	u64 last_offset;
 	u32 stripe_index;
 	u32 rot;
+	const int data_stripes = nr_data_stripes(map);
 
-	last_offset = (physical - map->stripes[num].physical) *
-		      nr_data_stripes(map);
+	last_offset = (physical - map->stripes[num].physical) * data_stripes;
 	if (stripe_start)
 		*stripe_start = last_offset;
 
 	*offset = last_offset;
-	for (i = 0; i < nr_data_stripes(map); i++) {
+	for (i = 0; i < data_stripes; i++) {
 		*offset = last_offset + i * map->stripe_len;
 
 		stripe_nr = div64_u64(*offset, map->stripe_len);
-		stripe_nr = div_u64(stripe_nr, nr_data_stripes(map));
+		stripe_nr = div_u64(stripe_nr, data_stripes);
 
 		/* Work out the disk rotation on this stripe-set */
 		stripe_nr = div_u64_rem(stripe_nr, map->num_stripes, &rot);
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index a8bf76d5f8e6..77a9dcfe3087 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5918,6 +5918,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 	u64 stripe_nr;
 	u64 stripe_len;
 	u32 stripe_index;
+	int data_stripes;
 	int i;
 	int ret = 0;
 	int num_stripes;
@@ -5949,6 +5950,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 	 * to get to this block
 	 */
 	stripe_nr = div64_u64(stripe_nr, stripe_len);
+	data_stripes = nr_data_stripes(map);
 
 	stripe_offset = stripe_nr * stripe_len;
 	if (offset < stripe_offset) {
@@ -5965,7 +5967,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 
 	/* if we're here for raid56, we need to know the stripe aligned start */
 	if (map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) {
-		unsigned long full_stripe_len = stripe_len * nr_data_stripes(map);
+		unsigned long full_stripe_len = stripe_len * data_stripes;
 		raid56_full_stripe_start = offset;
 
 		/* allow a write of a full stripe, but make sure we don't
@@ -5983,7 +5985,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 		   stripe (on a single disk). */
 		if ((map->type & BTRFS_BLOCK_GROUP_RAID56_MASK) &&
 		    (op == BTRFS_MAP_WRITE)) {
-			max_len = stripe_len * nr_data_stripes(map) -
+			max_len = stripe_len * data_stripes -
 				(offset - raid56_full_stripe_start);
 		} else {
 			/* we limit the length of each bio to what fits in a stripe */
@@ -6073,7 +6075,7 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 		if (need_raid_map && (need_full_stripe(op) || mirror_num > 1)) {
 			/* push stripe_nr back to the start of the full stripe */
 			stripe_nr = div64_u64(raid56_full_stripe_start,
-					stripe_len * nr_data_stripes(map));
+					stripe_len * data_stripes);
 
 			/* RAID[56] write or recovery. Return all stripes */
 			num_stripes = map->num_stripes;
@@ -6089,10 +6091,9 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 			 * Mirror #3 is RAID6 Q block.
 			 */
 			stripe_nr = div_u64_rem(stripe_nr,
-					nr_data_stripes(map), &stripe_index);
+					data_stripes, &stripe_index);
 			if (mirror_num > 1)
-				stripe_index = nr_data_stripes(map) +
-						mirror_num - 2;
+				stripe_index = data_stripes + mirror_num - 2;
 
 			/* We distribute the parity blocks across stripes */
 			div_u64_rem(stripe_nr + stripe_index, map->num_stripes,
@@ -6150,8 +6151,8 @@ static int __btrfs_map_block(struct btrfs_fs_info *fs_info,
 		div_u64_rem(stripe_nr, num_stripes, &rot);
 
 		/* Fill in the logical address of each stripe */
-		tmp = stripe_nr * nr_data_stripes(map);
-		for (i = 0; i < nr_data_stripes(map); i++)
+		tmp = stripe_nr * data_stripes;
+		for (i = 0; i < data_stripes; i++)
 			bbio->raid_map[(i+rot) % num_stripes] =
 				em->start + (tmp + i) * map->stripe_len;
 
-- 
2.21.0



* Re: [PATCH 06/15] btrfs: use raid_attr table in calc_stripe_length for nparity
  2019-05-17  9:43 ` [PATCH 06/15] btrfs: use raid_attr table in calc_stripe_length for nparity David Sterba
@ 2019-05-17 10:06   ` Hans van Kranenburg
  2019-05-17 12:54     ` David Sterba
  0 siblings, 1 reply; 20+ messages in thread
From: Hans van Kranenburg @ 2019-05-17 10:06 UTC (permalink / raw)
  To: David Sterba, linux-btrfs

Hi,

Great cleanup series!

On 5/17/19 11:43 AM, David Sterba wrote:
> The table is already used for ncopies; replace the open-coded stripe
> counts with the raid_attr nparity value.
> 
> Signed-off-by: David Sterba <dsterba@suse.com>
> ---
>  fs/btrfs/volumes.c | 15 +++++----------
>  1 file changed, 5 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> index 995a15a816f2..743ed1f0b2a6 100644
> --- a/fs/btrfs/volumes.c
> +++ b/fs/btrfs/volumes.c
> @@ -6652,19 +6652,14 @@ static u64 calc_stripe_length(u64 type, u64 chunk_len, int num_stripes)
>  {
>  	int index = btrfs_bg_flags_to_raid_index(type);
>  	int ncopies = btrfs_raid_array[index].ncopies;
> +	int nparity = btrfs_raid_array[index].nparity;
>  	int data_stripes;
>  
> -	switch (type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
> -	case BTRFS_BLOCK_GROUP_RAID5:
> -		data_stripes = num_stripes - 1;
> -		break;
> -	case BTRFS_BLOCK_GROUP_RAID6:
> -		data_stripes = num_stripes - 2;
> -		break;
> -	default:
> +	if (nparity)
> +		data_stripes = num_stripes - nparity;
> +	else
>  		data_stripes = num_stripes / ncopies;
> -		break;
> -	}

A few lines earlier in that file we have this:

        /*
         * this will have to be fixed for RAID1 and RAID10 over
         * more drives
         */
        data_stripes = (num_stripes - nparity) / ncopies;

1) I changed the calculation in b50836edf9 and did it in one statement;
I see you use an extra if here. Which one do you prefer and why?

2) Back then I wanted to get rid of that comment, because I don't
understand it. "this will have to be fixed" does not tell me what should
be fixed, so I left it there. Maybe now is the time? Do you know what
this comment/warning means and if it can be removed? I mean, even with
raid1c3 the calculation would be correct. There's no parity and three
copies.

> +
>  	return div_u64(chunk_len, data_stripes);
>  }
>  
> 

Hans


* Re: [PATCH 06/15] btrfs: use raid_attr table in calc_stripe_length for nparity
  2019-05-17 10:06   ` Hans van Kranenburg
@ 2019-05-17 12:54     ` David Sterba
  2019-05-17 13:06       ` Hans van Kranenburg
  0 siblings, 1 reply; 20+ messages in thread
From: David Sterba @ 2019-05-17 12:54 UTC (permalink / raw)
  To: Hans van Kranenburg; +Cc: David Sterba, linux-btrfs

On Fri, May 17, 2019 at 12:06:05PM +0200, Hans van Kranenburg wrote:
> > -	default:
> > +	if (nparity)
> > +		data_stripes = num_stripes - nparity;
> > +	else
> >  		data_stripes = num_stripes / ncopies;
> > -		break;
> > -	}
> 
> A few lines earlier in that file we have this:
> 
>         /*
>          * this will have to be fixed for RAID1 and RAID10 over
>          * more drives
>          */
>         data_stripes = (num_stripes - nparity) / ncopies;
> 
> 1) I changed the calculation in b50836edf9 and did it in one statement;
> I see you use an extra if here. Which one do you prefer and why?

I did the cleanup only in the function and was not aware of the above,
but the ifs did not feel right so I'm glad you pointed that out.

And actually I think there must be an ultimate formula that also
includes the sub_stripes (raid10) and devs_increment (dup), which could
clean up the rest of the special cases.
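
For reference, plugging the current table values (as of this series)
into the single-statement form quoted above already covers every
existing profile:

  data_stripes = (num_stripes - nparity) / ncopies

  raid5 (n stripes):   (n - 1) / 1 = n - 1
  raid6 (n stripes):   (n - 2) / 1 = n - 2
  raid1:               (2 - 0) / 2 = 1
  dup:                 (2 - 0) / 2 = 1
  raid10 (n stripes):  (n - 0) / 2 = n / 2
  single/raid0 (n):    (n - 0) / 1 = n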

> 2) Back then I wanted to get rid of that comment, because I don't
> understand it. "this will have to be fixed" does not tell me what should
> be fixed, so I left it there. Maybe now is the time? Do you know what
> this comment/warning means and if it can be removed? I mean, even with
> raid1c3 the calculation would be correct. There's no parity and three
> copies.

Yeah, the comment does not help much; it was introduced by the monster
raid56 commit, but I don't think there's anything to be fixed regarding
raid1 or raid10.


* Re: [PATCH 06/15] btrfs: use raid_attr table in calc_stripe_length for nparity
  2019-05-17 12:54     ` David Sterba
@ 2019-05-17 13:06       ` Hans van Kranenburg
  0 siblings, 0 replies; 20+ messages in thread
From: Hans van Kranenburg @ 2019-05-17 13:06 UTC (permalink / raw)
  To: dsterba, David Sterba, linux-btrfs

On 5/17/19 2:54 PM, David Sterba wrote:
> On Fri, May 17, 2019 at 12:06:05PM +0200, Hans van Kranenburg wrote:
>>> -	default:
>>> +	if (nparity)
>>> +		data_stripes = num_stripes - nparity;
>>> +	else
>>>  		data_stripes = num_stripes / ncopies;
>>> -		break;
>>> -	}
>>
>> A few lines earlier in that file we have this:
>>
>>         /*
>>          * this will have to be fixed for RAID1 and RAID10 over
>>          * more drives
>>          */
>>         data_stripes = (num_stripes - nparity) / ncopies;
>>
>> 1) I changed the calculation in b50836edf9 and did it in one statement;
>> I see you use an extra if here. Which one do you prefer and why?
> 
> I did the cleanup only in the function and was not aware of the above,
> but the ifs did not feel right so I'm glad you pointed that out.
> 
> And actually I think there must be an ultimate formula that also
> includes the sub_stripes (raid10) and devs_increment (dup), this could
> clean up the rest of the special cases.

Yeah. It would make sense to have a few helper functions to do those
calculations. I did that in python-btrfs already, and it's pretty useful:

https://github.com/knorrie/python-btrfs/blob/develop/btrfs/volumes.py
(line 155 and further)

Feel free to Cify those and add them, and then replace the individual
calculations all over the place with function calls with nice names
which make the code even more understandable.

I did this because I added a pythonified copy of the chunk allocator
code, which is used for the detailed free space calculations in the
usage report code:

https://github.com/knorrie/python-btrfs/blob/develop/btrfs/fs_usage.py#L648

>> 2) Back then I wanted to get rid of that comment, because I don't
>> understand it. "this will have to be fixed" does not tell me what should
>> be fixed, so I left it there. Maybe now is the time? Do you know what
>> this comment/warning means and if it can be removed? I mean, even with
>> raid1c3 the calculation would be correct. There's no parity and three
>> copies.
> 
> Yeah the comment does not help much, it was introduced by the monster
> raid56 commit but I don't think there's anything to be fixed, regarding
> raid1 or raid10.

Ok.

Hans



* Re: [PATCH 01/15] btrfs: fix minimum number of chunk errors for DUP
  2019-05-17  9:43 ` [PATCH 01/15] btrfs: fix minimum number of chunk errors for DUP David Sterba
@ 2019-05-17 14:05   ` Qu Wenruo
  0 siblings, 0 replies; 20+ messages in thread
From: Qu Wenruo @ 2019-05-17 14:05 UTC (permalink / raw)
  To: David Sterba, linux-btrfs; +Cc: Qu Wenruo





On 2019/5/17 5:43 PM, David Sterba wrote:
> The list of profiles in btrfs_chunk_max_errors lists DUP as a profile
> able to tolerate 1 missing device. Though this profile is special with
> 2 copies, both copies live on the same device, so it still needs that
> device, unlike the others.
> 
> Looking at the history of changes, there's no clear reason why DUP is
> there; functions were refactored and blocks of code merged into one
> helper.
> 
> d20983b40e828 Btrfs: fix writing data into the seed filesystem
>   - factor code to a helper
> 
> de11cc12df173 Btrfs: don't pre-allocate btrfs bio
>   - unrelated change, DUP still in the list with max errors 1
> 
> a236aed14ccb0 Btrfs: Deal with failed writes in mirrored configurations
>   - introduced the max errors, leaves DUP and RAID1 in the same group
> 
> CC: Qu Wenruo <wqu@suse.com>
> Signed-off-by: David Sterba <dsterba@suse.com>

Reviewed-by: Qu Wenruo <wqu@suse.com>

Just an extra hint about the tolerance of the DUP profile.

In case of DUP, either all stripes are missing, or all stripes exist.

So no matter whether the tolerance is 0 or 1, it will always work.
But indeed, setting it to 0 is more accurate.

Thanks,
Qu
> ---
>  fs/btrfs/volumes.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
> index 1c2a6e4b39da..8508f6028c8d 100644
> --- a/fs/btrfs/volumes.c
> +++ b/fs/btrfs/volumes.c
> @@ -5328,8 +5328,7 @@ static inline int btrfs_chunk_max_errors(struct map_lookup *map)
>  
>  	if (map->type & (BTRFS_BLOCK_GROUP_RAID1 |
>  			 BTRFS_BLOCK_GROUP_RAID10 |
> -			 BTRFS_BLOCK_GROUP_RAID5 |
> -			 BTRFS_BLOCK_GROUP_DUP)) {
> +			 BTRFS_BLOCK_GROUP_RAID5)) {
>  		max_errors = 1;
>  	} else if (map->type & BTRFS_BLOCK_GROUP_RAID6) {
>  		max_errors = 2;
> 



