linux-btrfs.vger.kernel.org archive mirror
* [RFC PATCH] btrfs: zoned: auto reclaim low mostly full block-groups first
@ 2021-09-20 15:11 Johannes Thumshirn
  2021-09-20 15:50 ` David Sterba
  0 siblings, 1 reply; 3+ messages in thread
From: Johannes Thumshirn @ 2021-09-20 15:11 UTC (permalink / raw)
  To: David Sterba; +Cc: Johannes Thumshirn, linux-btrfs, Josef Bacik

Currently auto reclaim of unusable zones reclaims the block-groups in the
order they have been added to the reclaim list.

Sort the list so the block-groups with the fewest bytes to preserve are at
the beginning before starting the garbage collection loop.

Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 fs/btrfs/block-group.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 46fdef7bbe20..d90297fb99e1 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
 
+#include <linux/list_sort.h>
+
 #include "misc.h"
 #include "ctree.h"
 #include "block-group.h"
@@ -1486,6 +1488,21 @@ void btrfs_mark_bg_unused(struct btrfs_block_group *bg)
 	spin_unlock(&fs_info->unused_bgs_lock);
 }
 
+/*
+ * We want block groups with a low number of used bytes to be in the beginning
+ * of the list, so they will get reclaimed first.
+ */
+static int reclaim_bgs_cmp(void *unused, const struct list_head *a,
+			   const struct list_head *b)
+{
+	const struct btrfs_block_group *bg1, *bg2;
+
+	bg1 = list_entry(a, struct btrfs_block_group, bg_list);
+	bg2 = list_entry(b, struct btrfs_block_group, bg_list);
+
+	return bg1->used - bg2->used;
+}
+
 void btrfs_reclaim_bgs_work(struct work_struct *work)
 {
 	struct btrfs_fs_info *fs_info =
@@ -1510,6 +1527,7 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
 	}
 
 	spin_lock(&fs_info->unused_bgs_lock);
+	list_sort(NULL, &fs_info->reclaim_bgs, reclaim_bgs_cmp);
 	while (!list_empty(&fs_info->reclaim_bgs)) {
 		u64 zone_unusable;
 		int ret = 0;
-- 
2.32.0
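
A note on the comparator: list_sort() from <linux/list_sort.h> only checks
whether the return value is greater than zero ("a sorts after b"); anything
else means "a stays before b", so returning 0 for equal elements preserves
their original order. Since btrfs_block_group::used is a u64, an illustrative
alternative that avoids truncating the difference to an int would be:

  static int reclaim_bgs_cmp(void *unused, const struct list_head *a,
                             const struct list_head *b)
  {
          const struct btrfs_block_group *bg1, *bg2;

          bg1 = list_entry(a, struct btrfs_block_group, bg_list);
          bg2 = list_entry(b, struct btrfs_block_group, bg_list);

          /*
           * Only the sign of the result matters to list_sort(), so
           * compare the u64 counters directly instead of returning
           * their difference as an int.
           */
          return bg1->used > bg2->used;
  }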



* Re: [RFC PATCH] btrfs: zoned: auto reclaim low mostly full block-groups first
  2021-09-20 15:11 [RFC PATCH] btrfs: zoned: auto reclaim low mostly full block-groups first Johannes Thumshirn
@ 2021-09-20 15:50 ` David Sterba
  2021-09-21  7:31   ` Johannes Thumshirn
  0 siblings, 1 reply; 3+ messages in thread
From: David Sterba @ 2021-09-20 15:50 UTC (permalink / raw)
  To: Johannes Thumshirn; +Cc: David Sterba, linux-btrfs, Josef Bacik

On Tue, Sep 21, 2021 at 12:11:01AM +0900, Johannes Thumshirn wrote:
> Currently auto reclaim of unusable zones reclaims the block-groups in the
> order they have been added to the reclaim list.
> 
> Sort the list so the block-groups with the fewest bytes to preserve are at
> the beginning before starting the garbage collection loop.

Makes sense as an optimization.

> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
> ---
>  fs/btrfs/block-group.c | 18 ++++++++++++++++++
>  1 file changed, 18 insertions(+)
> 
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index 46fdef7bbe20..d90297fb99e1 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -1,5 +1,7 @@
>  // SPDX-License-Identifier: GPL-2.0
>  
> +#include <linux/list_sort.h>
> +
>  #include "misc.h"
>  #include "ctree.h"
>  #include "block-group.h"
> @@ -1486,6 +1488,21 @@ void btrfs_mark_bg_unused(struct btrfs_block_group *bg)
>  	spin_unlock(&fs_info->unused_bgs_lock);
>  }
>  
> +/*
> + * We want block groups with a low number of used bytes to be in the beginning
> + * of the list, so they will get reclaimed first.
> + */
> +static int reclaim_bgs_cmp(void *unused, const struct list_head *a,
> +			   const struct list_head *b)
> +{
> +	const struct btrfs_block_group *bg1, *bg2;
> +
> +	bg1 = list_entry(a, struct btrfs_block_group, bg_list);
> +	bg2 = list_entry(b, struct btrfs_block_group, bg_list);
> +
> +	return bg1->used - bg2->used;
> +}
> +
>  void btrfs_reclaim_bgs_work(struct work_struct *work)
>  {
>  	struct btrfs_fs_info *fs_info =
> @@ -1510,6 +1527,7 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
>  	}
>  
>  	spin_lock(&fs_info->unused_bgs_lock);
> +	list_sort(NULL, &fs_info->reclaim_bgs, reclaim_bgs_cmp);

The sort is under a spinlock; though it's probably not a highly
contended lock, I think we should try to move it outside. Something like

  lock()
  list_splice_init(&reclaim_bgs, &splice)
  unlock()

  list_sort(&splice);

  while (!list_empty(&splice)) {
  }

We already use splice in the again_list so it could build on top of it.

OTOH, it may not be absolutely necessary to do the sort outside of the
lock; it's rather a matter of good programming hygiene not to introduce
unnecessary delays due to a contended lock here and there that could
potentially cascade further.
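
As a rough, untested sketch of that shape (reusing reclaim_bgs_cmp from the
patch; the local list name is only illustrative):

  void btrfs_reclaim_bgs_work(struct work_struct *work)
  {
          struct btrfs_fs_info *fs_info =
                  container_of(work, struct btrfs_fs_info, reclaim_bgs_work);
          struct btrfs_block_group *bg;
          LIST_HEAD(splice);

          ...

          /* Detach the whole list while holding the lock ... */
          spin_lock(&fs_info->unused_bgs_lock);
          list_splice_init(&fs_info->reclaim_bgs, &splice);
          spin_unlock(&fs_info->unused_bgs_lock);

          /* ... then sort the private copy outside the spinlock. */
          list_sort(NULL, &splice, reclaim_bgs_cmp);

          while (!list_empty(&splice)) {
                  bg = list_first_entry(&splice, struct btrfs_block_group,
                                        bg_list);
                  list_del_init(&bg->bg_list);
                  /* ... reclaim bg as the existing loop body does ... */
          }
          ...
  }

The loop then walks the private list, so unused_bgs_lock is only held for the
splice itself.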


* Re: [RFC PATCH] btrfs: zoned: auto reclaim low mostly full block-groups first
  2021-09-20 15:50 ` David Sterba
@ 2021-09-21  7:31   ` Johannes Thumshirn
  0 siblings, 0 replies; 3+ messages in thread
From: Johannes Thumshirn @ 2021-09-21  7:31 UTC (permalink / raw)
  To: dsterba; +Cc: David Sterba, linux-btrfs, Josef Bacik

On 20/09/2021 17:50, David Sterba wrote:
> On Tue, Sep 21, 2021 at 12:11:01AM +0900, Johannes Thumshirn wrote:
>> Currently auto reclaim of unusable zones reclaims the block-groups in the
>> order they have been added to the reclaim list.
>>
>> Sort the list so the block-groups with the fewest bytes to preserve are at
>> the beginning before starting the garbage collection loop.
> 
> Makes sense as an optimization.
> 
>> Signed-off-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
>> ---
>>  fs/btrfs/block-group.c | 18 ++++++++++++++++++
>>  1 file changed, 18 insertions(+)
>>
>> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
>> index 46fdef7bbe20..d90297fb99e1 100644
>> --- a/fs/btrfs/block-group.c
>> +++ b/fs/btrfs/block-group.c
>> @@ -1,5 +1,7 @@
>>  // SPDX-License-Identifier: GPL-2.0
>>  
>> +#include <linux/list_sort.h>
>> +
>>  #include "misc.h"
>>  #include "ctree.h"
>>  #include "block-group.h"
>> @@ -1486,6 +1488,21 @@ void btrfs_mark_bg_unused(struct btrfs_block_group *bg)
>>  	spin_unlock(&fs_info->unused_bgs_lock);
>>  }
>>  
>> +/*
>> + * We want block groups with a low number of used bytes to be in the beginning
>> + * of the list, so they will get reclaimed first.
>> + */
>> +static int reclaim_bgs_cmp(void *unused, const struct list_head *a,
>> +			   const struct list_head *b)
>> +{
>> +	const struct btrfs_block_group *bg1, *bg2;
>> +
>> +	bg1 = list_entry(a, struct btrfs_block_group, bg_list);
>> +	bg2 = list_entry(b, struct btrfs_block_group, bg_list);
>> +
>> +	return bg1->used - bg2->used;
>> +}
>> +
>>  void btrfs_reclaim_bgs_work(struct work_struct *work)
>>  {
>>  	struct btrfs_fs_info *fs_info =
>> @@ -1510,6 +1527,7 @@ void btrfs_reclaim_bgs_work(struct work_struct *work)
>>  	}
>>  
>>  	spin_lock(&fs_info->unused_bgs_lock);
>> +	list_sort(NULL, &fs_info->reclaim_bgs, reclaim_bgs_cmp);
> 
> The sort is under a spinlock; though it's probably not a highly
> contended lock, I think we should try to move it outside. Something like
> 
>   lock()
>   list_splice_init(&reclaim_bgs, &splice)
>   unlock()
> 
>   list_sort(&splice);
> 
>   while (!list_empty(&splice)) {
>   }
> 
> We already use splice in the again_list so it could build on top of it.
> 
> OTOH, it may not be absolutely necessary to do the sort outside of the
> lock; it's rather a matter of good programming hygiene not to introduce
> unnecessary delays due to a contended lock here and there that could
> potentially cascade further.
> 

I'm expecting the number of entries in the list to be in the single- or
two-digit range, so the sort should be rather quick.

But I agree that using a spliced list looks more future-proof.

