linux-btrfs.vger.kernel.org archive mirror
* [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO
@ 2019-11-15  2:09 Qu Wenruo
  2019-11-15 12:51 ` Filipe Manana
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Qu Wenruo @ 2019-11-15  2:09 UTC (permalink / raw)
  To: linux-btrfs

[BUG]
When running btrfs/072 with only one online CPU, it has a pretty high
chance of failing:

  btrfs/072 12s ... _check_dmesg: something found in dmesg (see xfstests-dev/results//btrfs/072.dmesg)
  - output mismatch (see xfstests-dev/results//btrfs/072.out.bad)
      --- tests/btrfs/072.out     2019-10-22 15:18:14.008965340 +0800
      +++ /xfstests-dev/results//btrfs/072.out.bad      2019-11-14 15:56:45.877152240 +0800
      @@ -1,2 +1,3 @@
       QA output created by 072
       Silence is golden
      +Scrub find errors in "-m dup -d single" test
      ...

And with the following call trace:
  BTRFS info (device dm-5): scrub: started on devid 1
  ------------[ cut here ]------------
  BTRFS: Transaction aborted (error -27)
  WARNING: CPU: 0 PID: 55087 at fs/btrfs/block-group.c:1890 btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
  CPU: 0 PID: 55087 Comm: btrfs Tainted: G        W  O      5.4.0-rc1-custom+ #13
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
  RIP: 0010:btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
  Call Trace:
   __btrfs_end_transaction+0xdb/0x310 [btrfs]
   btrfs_end_transaction+0x10/0x20 [btrfs]
   btrfs_inc_block_group_ro+0x1c9/0x210 [btrfs]
   scrub_enumerate_chunks+0x264/0x940 [btrfs]
   btrfs_scrub_dev+0x45c/0x8f0 [btrfs]
   btrfs_ioctl+0x31a1/0x3fb0 [btrfs]
   do_vfs_ioctl+0x636/0xaa0
   ksys_ioctl+0x67/0x90
   __x64_sys_ioctl+0x43/0x50
   do_syscall_64+0x79/0xe0
   entry_SYSCALL_64_after_hwframe+0x49/0xbe
  ---[ end trace 166c865cec7688e7 ]---

[CAUSE]
The error number -27 is -EFBIG, returned from the following call chain:
btrfs_end_transaction()
|- __btrfs_end_transaction()
   |- btrfs_create_pending_block_groups()
      |- btrfs_finish_chunk_alloc()
         |- btrfs_add_system_chunk()

This happens because we have used up all space of
btrfs_super_block::sys_chunk_array.
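
The exhaustion is easy to model outside the kernel. Below is a minimal
userspace sketch of the failure mode, not the kernel code: the real array is
BTRFS_SYSTEM_CHUNK_ARRAY_SIZE (2048) bytes embedded in the superblock, while
the entry size used in the test is an arbitrary illustrative value, not the
real (key + chunk item) layout.

```c
#include <errno.h>
#include <string.h>

/* Simplified stand-in for btrfs_super_block::sys_chunk_array. */
#define SYS_CHUNK_ARRAY_SIZE 2048

struct super_model {
	unsigned char sys_chunk_array[SYS_CHUNK_ARRAY_SIZE];
	unsigned int sys_chunk_array_size;	/* bytes currently used */
};

/*
 * Models the failure mode of btrfs_add_system_chunk(): one entry is
 * appended per SYSTEM chunk, and once an entry no longer fits the
 * function fails with -EFBIG.
 */
static int add_system_chunk(struct super_model *sb, const void *entry,
			    unsigned int entry_len)
{
	if (sb->sys_chunk_array_size + entry_len > SYS_CHUNK_ARRAY_SIZE)
		return -EFBIG;	/* -27, the error in the trace above */
	memcpy(sb->sys_chunk_array + sb->sys_chunk_array_size, entry,
	       entry_len);
	sb->sys_chunk_array_size += entry_len;
	return 0;
}
```

With an entry of roughly 100 bytes, only about 20 SYSTEM chunks fit, so a
loop that keeps creating empty SYSTEM chunks exhausts the array quickly and
every further allocation aborts the transaction.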

The root cause is the following bad loop that creates tons of system
chunks:
1. The only SYSTEM chunk is being scrubbed
   It's very common to have only one SYSTEM chunk.
2. A new SYSTEM bg gets allocated
   btrfs_inc_block_group_ro() checks whether we still have enough space
   after marking the current bg RO. If not, it allocates a new chunk.
3. The new SYSTEM bg is still empty, so it will be reclaimed
   During the reclaim, we mark it RO again.
4. That newly allocated empty SYSTEM bg gets scrubbed
   We go back to step 2, as the bg is already marked RO but not cleaned
   up yet.

If the cleaner kthread doesn't get executed fast enough (e.g. only one
CPU), then we will get more and more empty SYSTEM chunks, using up all
the space of btrfs_super_block::sys_chunk_array.
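
The four steps above can be sketched as a toy simulation (purely
illustrative, not the kernel logic): with chunk pre-allocation on every
RO-mark and a cleaner that never gets to run, every scrubbed SYSTEM bg
spawns another empty one to scrub, so the bg count grows by one per
iteration; without pre-allocation the count stays at one.

```c
/*
 * Toy model of the runaway loop. do_chunk_alloc mirrors the scrub path
 * before (1) and after (0) this patch.
 */
static int simulate_scrub(int max_iterations, int do_chunk_alloc)
{
	int nr_system_bgs = 1;	/* step 1: a single SYSTEM chunk is common */
	int next_to_scrub = 0;

	while (next_to_scrub < nr_system_bgs && max_iterations-- > 0) {
		/* scrub marks the bg RO ... */
		if (do_chunk_alloc)
			nr_system_bgs++;	/* step 2: new empty SYSTEM bg */
		/* step 3: cleaner never runs, the empty bg stays around */
		next_to_scrub++;		/* step 4: scrub it next */
	}
	return nr_system_bgs;
}
```

In this model simulate_scrub(n, 1) returns n + 1 empty SYSTEM bgs after n
iterations, while simulate_scrub(n, 0) always returns 1.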

[FIX]
Since scrub/dev-replace doesn't always need to allocate new extents,
especially chunk tree extents, we don't really need to do chunk
pre-allocation there.

To break the above spiral, introduce a new parameter to
btrfs_inc_block_group_ro(), @do_chunk_alloc, which indicates whether we
need extra chunk pre-allocation.

For relocation, pass @do_chunk_alloc=true, while for scrub, pass
@do_chunk_alloc=false.
This should keep unnecessary empty chunks from popping up for scrub.

Also, since btrfs_inc_block_group_ro() now takes two parameters, add a
comment documenting them.

Signed-off-by: Qu Wenruo <wqu@suse.com>

---
This version is based on v5.4-rc1, for a commonly known commit hash.
It will cause conflicts due to the
btrfs_block_group_cache -> btrfs_block_group refactor.

The conflicts should be easy to solve.

Changelog:
v2:
- Fix a bug where the previous version never did chunk pre-allocation.
- Avoid chunk pre-allocation from check_system_chunk()
- Use an extra parameter @do_chunk_alloc instead of a new function with
  a "_force" suffix.
- Skip the unnecessary update_block_group_flags() call completely for
  the do_chunk_alloc=false case.
---
 fs/btrfs/block-group.c | 49 +++++++++++++++++++++++++++---------------
 fs/btrfs/block-group.h |  3 ++-
 fs/btrfs/relocation.c  |  2 +-
 fs/btrfs/scrub.c       | 21 +++++++++++++++++-
 4 files changed, 55 insertions(+), 20 deletions(-)

diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index bf7e3f23bba7..ae23319e4233 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -2021,8 +2021,17 @@ static u64 update_block_group_flags(struct btrfs_fs_info *fs_info, u64 flags)
 	return flags;
 }
 
-int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache)
-
+/*
+ * Mark one block group ro, can be called several times for the same block
+ * group.
+ *
+ * @cache:		The destination block group
+ * @do_chunk_alloc:	Whether need to do chunk pre-allocation, this is to
+ * 			ensure we still have some free space after marking this
+ * 			block group RO.
+ */
+int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache,
+			     bool do_chunk_alloc)
 {
 	struct btrfs_fs_info *fs_info = cache->fs_info;
 	struct btrfs_trans_handle *trans;
@@ -2052,25 +2061,30 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache)
 		goto again;
 	}
 
-	/*
-	 * if we are changing raid levels, try to allocate a corresponding
-	 * block group with the new raid level.
-	 */
-	alloc_flags = update_block_group_flags(fs_info, cache->flags);
-	if (alloc_flags != cache->flags) {
-		ret = btrfs_chunk_alloc(trans, alloc_flags, CHUNK_ALLOC_FORCE);
+	if (do_chunk_alloc) {
 		/*
-		 * ENOSPC is allowed here, we may have enough space
-		 * already allocated at the new raid level to
-		 * carry on
+		 * if we are changing raid levels, try to allocate a
+		 * corresponding block group with the new raid level.
 		 */
-		if (ret == -ENOSPC)
-			ret = 0;
-		if (ret < 0)
-			goto out;
+		alloc_flags = update_block_group_flags(fs_info, cache->flags);
+		if (alloc_flags != cache->flags) {
+			ret = btrfs_chunk_alloc(trans, alloc_flags,
+						CHUNK_ALLOC_FORCE);
+			/*
+			 * ENOSPC is allowed here, we may have enough space
+			 * already allocated at the new raid level to
+			 * carry on
+			 */
+			if (ret == -ENOSPC)
+				ret = 0;
+			if (ret < 0)
+				goto out;
+		}
 	}
 
-	ret = inc_block_group_ro(cache, 0);
+	ret = inc_block_group_ro(cache, !do_chunk_alloc);
+	if (!do_chunk_alloc)
+		goto unlock_out;
 	if (!ret)
 		goto out;
 	alloc_flags = btrfs_get_alloc_profile(fs_info, cache->space_info->flags);
@@ -2085,6 +2099,7 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache)
 		check_system_chunk(trans, alloc_flags);
 		mutex_unlock(&fs_info->chunk_mutex);
 	}
+unlock_out:
 	mutex_unlock(&fs_info->ro_block_group_mutex);
 
 	btrfs_end_transaction(trans);
diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
index c391800388dd..0758e6d52acb 100644
--- a/fs/btrfs/block-group.h
+++ b/fs/btrfs/block-group.h
@@ -205,7 +205,8 @@ int btrfs_read_block_groups(struct btrfs_fs_info *info);
 int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
 			   u64 type, u64 chunk_offset, u64 size);
 void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans);
-int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache);
+int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache,
+			     bool do_chunk_alloc);
 void btrfs_dec_block_group_ro(struct btrfs_block_group_cache *cache);
 int btrfs_start_dirty_block_groups(struct btrfs_trans_handle *trans);
 int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans);
diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
index 00504657b602..fd6df6dd5421 100644
--- a/fs/btrfs/relocation.c
+++ b/fs/btrfs/relocation.c
@@ -4325,7 +4325,7 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
 	rc->extent_root = extent_root;
 	rc->block_group = bg;
 
-	ret = btrfs_inc_block_group_ro(rc->block_group);
+	ret = btrfs_inc_block_group_ro(rc->block_group, true);
 	if (ret) {
 		err = ret;
 		goto out;
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index f7d4e03f4c5d..7e48d65bbdf6 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3563,7 +3563,26 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
 		 * -> btrfs_scrub_pause()
 		 */
 		scrub_pause_on(fs_info);
-		ret = btrfs_inc_block_group_ro(cache);
+
+		/*
+		 * Don't do chunk preallocation for scrub.
+		 *
+		 * This is especially important for SYSTEM bgs, or we can hit
+		 * -EFBIG from btrfs_finish_chunk_alloc() like:
+		 * 1. The only SYSTEM bg is marked RO.
+		 *    Since SYSTEM bg is small, that's pretty common.
+		 * 2. New SYSTEM bg will be allocated
+		 *    Due to regular version will allocate new chunk.
+		 * 3. New SYSTEM bg is empty and will get cleaned up
+		 *    Before cleanup really happens, it's marked RO again.
+		 * 4. Empty SYSTEM bg get scrubbed
+		 *    We go back to 2.
+		 *
+		 * This can easily boost the amount of SYSTEM chunks if cleaner
+		 * thread can't be triggered fast enough, and use up all space
+		 * of btrfs_super_block::sys_chunk_array
+		 */
+		ret = btrfs_inc_block_group_ro(cache, false);
 		if (!ret && sctx->is_dev_replace) {
 			/*
 			 * If we are doing a device replace wait for any tasks
-- 
2.24.0


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO
  2019-11-15  2:09 [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO Qu Wenruo
@ 2019-11-15 12:51 ` Filipe Manana
  2019-11-18 15:14 ` David Sterba
  2020-01-17 17:59 ` Filipe Manana
  2 siblings, 0 replies; 7+ messages in thread
From: Filipe Manana @ 2019-11-15 12:51 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Fri, Nov 15, 2019 at 2:11 AM Qu Wenruo <wqu@suse.com> wrote:
...
> Signed-off-by: Qu Wenruo <wqu@suse.com>

Reviewed-by: Filipe Manana <fdmanana@suse.com>

Now it looks good to me, thanks.



-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”


* Re: [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO
  2019-11-15  2:09 [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO Qu Wenruo
  2019-11-15 12:51 ` Filipe Manana
@ 2019-11-18 15:14 ` David Sterba
  2020-01-17 17:59 ` Filipe Manana
  2 siblings, 0 replies; 7+ messages in thread
From: David Sterba @ 2019-11-18 15:14 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Fri, Nov 15, 2019 at 10:09:00AM +0800, Qu Wenruo wrote:
...
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> 
> ---
> This version is based on v5.4-rc1, for a commonly known commit hash.
> It would cause conflicts due to
> btrfs_block_group_cache -> btrfs_block_group refactor.
> 
> The conflicts should be easy to solve.
> 
> Changelog:
> v2:
> - Fix a bug that previous verion never do chunk pre-allocation.
> - Avoid chunk pre-allocation from check_system_chunk()
> - Use extra parameter @do_chunk_alloc other than new function with
>   "_force" suffix.
> - Skip unnecessary update_block_group_flags() call completely for
>   do_chunk_alloc=false case.

Added to misc-next, thanks.


* Re: [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO
  2019-11-15  2:09 [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO Qu Wenruo
  2019-11-15 12:51 ` Filipe Manana
  2019-11-18 15:14 ` David Sterba
@ 2020-01-17 17:59 ` Filipe Manana
  2020-01-18  1:16   ` Qu Wenruo
  2 siblings, 1 reply; 7+ messages in thread
From: Filipe Manana @ 2020-01-17 17:59 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs, Josef Bacik

On Fri, Nov 15, 2019 at 2:11 AM Qu Wenruo <wqu@suse.com> wrote:
...
> Signed-off-by: Qu Wenruo <wqu@suse.com>

Qu,

Strangely, this has caused some unexpected failures on test btrfs/071
(fsstress + device replace + remount followed by scrub).

Since I hadn't seen the issue in my 5.4 (and older) based branches,
and only started to observe the failure in 5.5-rc2+, I left a VM
bisecting it since last week, after coming back from vacation.
The bisection points to this change. Going back to 5.5-rc5 and
reverting this change, or just doing:

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 21de630b0730..87478654a3e1 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -3578,7 +3578,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
                 * thread can't be triggered fast enough, and use up all space
                 * of btrfs_super_block::sys_chunk_array
                 */
-               ret = btrfs_inc_block_group_ro(cache, false);
+               ret = btrfs_inc_block_group_ro(cache, true);
                scrub_pause_off(fs_info);

                if (ret == 0) {

which is simpler than reverting due to conflicts, confirms this patch
is what causes btrfs/071 to fail like this:

$ cat results/btrfs/071.out.bad
QA output created by 071
Silence is golden
Scrub find errors in "-m raid0 -d raid0" test

$ cat results/btrfs/071.full
(...)
Test -m raid0 -d raid0
Run fsstress  -p 20 -n 100 -d
/home/fdmanana/btrfs-tests/scratch_1/stressdir -f rexchange=0 -f
rwhiteout=0
Start replace worker: 17813
Wait for fsstress to exit and kill all background workers
seed = 1579455326
dev_pool=/dev/sdd /dev/sde /dev/sdf
free_dev=/dev/sdg, src_dev=/dev/sdd
Replacing /dev/sdd with /dev/sdg
Replacing /dev/sde with /dev/sdd
Replacing /dev/sdf with /dev/sde
Replacing /dev/sdg with /dev/sdf
Replacing /dev/sdd with /dev/sdg
Replacing /dev/sde with /dev/sdd
Replacing /dev/sdf with /dev/sde
Replacing /dev/sdg with /dev/sdf
Replacing /dev/sdd with /dev/sdg
Replacing /dev/sde with /dev/sdd
Scrub the filesystem
ERROR: there are uncorrectable errors
scrub done for 0f63c9b5-5618-4484-b6f2-0b7d3294cf05
Scrub started:    Fri Jan 17 12:31:35 2020
Status:           finished
Duration:         0:00:00
Total to scrub:   5.02GiB
Rate:             0.00B/s
Error summary:    csum=1
  Corrected:      0
  Uncorrectable:  1
  Unverified:     0
Scrub find errors in "-m raid0 -d raid0" test
(...)

And in syslog:

(...)
Jan  9 13:14:15 debian5 kernel: [1739740.727952] BTRFS info (device
sdc): dev_replace from /dev/sde (devid 4) to /dev/sdd started
Jan  9 13:14:15 debian5 kernel: [1739740.752226]
scrub_handle_errored_block: 8 callbacks suppressed
Jan  9 13:14:15 debian5 kernel: [1739740.752228] BTRFS warning (device
sdc): checksum error at logical 1129050112 on dev /dev/sde, physical
277803008, root 5, inode 405, offset 1540096, length 4096, links 1
(path: stressdir/pa/d2/d5/fa)
Jan  9 13:14:15 debian5 kernel: [1739740.752230]
btrfs_dev_stat_print_on_error: 8 callbacks suppressed
Jan  9 13:14:15 debian5 kernel: [1739740.758600] BTRFS error (device
sdc): bdev /dev/sde errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
Jan  9 13:14:15 debian5 kernel: [1739740.908263]
scrub_handle_errored_block: 2 callbacks suppressed
Jan  9 13:14:15 debian5 kernel: [1739740.929814] BTRFS error (device
sdc): unable to fixup (regular) error at logical 1129050112 on dev
/dev/sde
Jan  9 13:14:15 debian5 kernel: [1739740.929817] BTRFS info (device
sdc): dev_replace from /dev/sde (devid 4) to /dev/sdd finished
(...)

From a quick look I don't have an idea how this patch causes such a
regression; it doesn't seem to make any sense.

I changed btrfs/071 to comment out the remount loop
(https://pastebin.com/X5LiJtpS, the fsstress arguments are to avoid an
infinite loop during fsync, for which there is a fix already) to see
if that had any influence - it does not.
So it seems the problem is device replace and/or scrub only.

I can trigger the failure somewhat easily, 1000 iterations of
btrfs/071 on 5.5-rc5 produced 94 failures like that.
With this patch reverted (or passing true to
btrfs_inc_block_group_ro() from scrub), 20108 iterations succeeded (no
failures at all).

I haven't tried Josef's recent changes to inc_block_group_ro(), so I
can't say yet if they fix the problem.

I can take a closer look next week to see if I can figure out why this
causes such weird failures, unless you already have an idea.

Thanks.

>         if (!ret)
>                 goto out;
>         alloc_flags = btrfs_get_alloc_profile(fs_info, cache->space_info->flags);
> @@ -2085,6 +2099,7 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache)
>                 check_system_chunk(trans, alloc_flags);
>                 mutex_unlock(&fs_info->chunk_mutex);
>         }
> +unlock_out:
>         mutex_unlock(&fs_info->ro_block_group_mutex);
>
>         btrfs_end_transaction(trans);
> diff --git a/fs/btrfs/block-group.h b/fs/btrfs/block-group.h
> index c391800388dd..0758e6d52acb 100644
> --- a/fs/btrfs/block-group.h
> +++ b/fs/btrfs/block-group.h
> @@ -205,7 +205,8 @@ int btrfs_read_block_groups(struct btrfs_fs_info *info);
>  int btrfs_make_block_group(struct btrfs_trans_handle *trans, u64 bytes_used,
>                            u64 type, u64 chunk_offset, u64 size);
>  void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans);
> -int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache);
> +int btrfs_inc_block_group_ro(struct btrfs_block_group_cache *cache,
> +                            bool do_chunk_alloc);
>  void btrfs_dec_block_group_ro(struct btrfs_block_group_cache *cache);
>  int btrfs_start_dirty_block_groups(struct btrfs_trans_handle *trans);
>  int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans);
> diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
> index 00504657b602..fd6df6dd5421 100644
> --- a/fs/btrfs/relocation.c
> +++ b/fs/btrfs/relocation.c
> @@ -4325,7 +4325,7 @@ int btrfs_relocate_block_group(struct btrfs_fs_info *fs_info, u64 group_start)
>         rc->extent_root = extent_root;
>         rc->block_group = bg;
>
> -       ret = btrfs_inc_block_group_ro(rc->block_group);
> +       ret = btrfs_inc_block_group_ro(rc->block_group, true);
>         if (ret) {
>                 err = ret;
>                 goto out;
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index f7d4e03f4c5d..7e48d65bbdf6 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -3563,7 +3563,26 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
>                  * -> btrfs_scrub_pause()
>                  */
>                 scrub_pause_on(fs_info);
> -               ret = btrfs_inc_block_group_ro(cache);
> +
> +               /*
> +                * Don't do chunk preallocation for scrub.
> +                *
> +                * This is especially important for SYSTEM bgs, or we can hit
> +                * -EFBIG from btrfs_finish_chunk_alloc() like:
> +                * 1. The only SYSTEM bg is marked RO.
> +                *    Since SYSTEM bg is small, that's pretty common.
> +                * 2. New SYSTEM bg will be allocated
> +                *    Due to regular version will allocate new chunk.
> +                * 3. New SYSTEM bg is empty and will get cleaned up
> +                *    Before cleanup really happens, it's marked RO again.
> +                * 4. Empty SYSTEM bg get scrubbed
> +                *    We go back to 2.
> +                *
> +                * This can easily boost the amount of SYSTEM chunks if cleaner
> +                * thread can't be triggered fast enough, and use up all space
> +                * of btrfs_super_block::sys_chunk_array
> +                */
> +               ret = btrfs_inc_block_group_ro(cache, false);
>                 if (!ret && sctx->is_dev_replace) {
>                         /*
>                          * If we are doing a device replace wait for any tasks
> --
> 2.24.0
>


-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”

^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO
  2020-01-17 17:59 ` Filipe Manana
@ 2020-01-18  1:16   ` Qu Wenruo
  2020-01-20  9:43     ` Qu Wenruo
  0 siblings, 1 reply; 7+ messages in thread
From: Qu Wenruo @ 2020-01-18  1:16 UTC (permalink / raw)
  To: fdmanana, Qu Wenruo; +Cc: linux-btrfs, Josef Bacik


[-- Attachment #1.1: Type: text/plain, Size: 16844 bytes --]



On 2020/1/18 1:59 AM, Filipe Manana wrote:
> On Fri, Nov 15, 2019 at 2:11 AM Qu Wenruo <wqu@suse.com> wrote:
>>
>> [BUG]
>> When running btrfs/072 with only one online CPU, it has a pretty high
>> chance to fail:
>>
>>   btrfs/072 12s ... _check_dmesg: something found in dmesg (see xfstests-dev/results//btrfs/072.dmesg)
>>   - output mismatch (see xfstests-dev/results//btrfs/072.out.bad)
>>       --- tests/btrfs/072.out     2019-10-22 15:18:14.008965340 +0800
>>       +++ /xfstests-dev/results//btrfs/072.out.bad      2019-11-14 15:56:45.877152240 +0800
>>       @@ -1,2 +1,3 @@
>>        QA output created by 072
>>        Silence is golden
>>       +Scrub find errors in "-m dup -d single" test
>>       ...
>>
>> And with the following call trace:
>>   BTRFS info (device dm-5): scrub: started on devid 1
>>   ------------[ cut here ]------------
>>   BTRFS: Transaction aborted (error -27)
>>   WARNING: CPU: 0 PID: 55087 at fs/btrfs/block-group.c:1890 btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
>>   CPU: 0 PID: 55087 Comm: btrfs Tainted: G        W  O      5.4.0-rc1-custom+ #13
>>   Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
>>   RIP: 0010:btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
>>   Call Trace:
>>    __btrfs_end_transaction+0xdb/0x310 [btrfs]
>>    btrfs_end_transaction+0x10/0x20 [btrfs]
>>    btrfs_inc_block_group_ro+0x1c9/0x210 [btrfs]
>>    scrub_enumerate_chunks+0x264/0x940 [btrfs]
>>    btrfs_scrub_dev+0x45c/0x8f0 [btrfs]
>>    btrfs_ioctl+0x31a1/0x3fb0 [btrfs]
>>    do_vfs_ioctl+0x636/0xaa0
>>    ksys_ioctl+0x67/0x90
>>    __x64_sys_ioctl+0x43/0x50
>>    do_syscall_64+0x79/0xe0
>>    entry_SYSCALL_64_after_hwframe+0x49/0xbe
>>   ---[ end trace 166c865cec7688e7 ]---
>>
>> [CAUSE]
>> The error number -27 is -EFBIG, returned from the following call chain:
>> btrfs_end_transaction()
>> |- __btrfs_end_transaction()
>>    |- btrfs_create_pending_block_groups()
>>       |- btrfs_finish_chunk_alloc()
>>          |- btrfs_add_system_chunk()
>>
>> This happens because we have used up all space of
>> btrfs_super_block::sys_chunk_array.
>>
>> The root cause is, we have the following bad loop of creating tons of
>> system chunks:
>> 1. The only SYSTEM chunk is being scrubbed
>>    It's very common to have only one SYSTEM chunk.
>> 2. New SYSTEM bg will be allocated
>>    As btrfs_inc_block_group_ro() will check if we have enough space
>>    after marking current bg RO. If not, then allocate a new chunk.
>> 3. New SYSTEM bg is still empty, will be reclaimed
>>    During the reclaim, we will mark it RO again.
>> 4. That newly allocated empty SYSTEM bg gets scrubbed
>>    We go back to step 2, as the bg is already marked RO but still not
>>    cleaned up yet.
>>
>> If the cleaner kthread doesn't get executed fast enough (e.g. only one
>> CPU), then we will get more and more empty SYSTEM chunks, using up all
>> the space of btrfs_super_block::sys_chunk_array.
>>
>> [FIX]
>> Since scrub/dev-replace doesn't always need to allocate new extents,
>> especially chunk tree extents, we don't really need to do chunk
>> pre-allocation.
>>
>> To break the above spiral, introduce a new parameter to
>> btrfs_inc_block_group_ro(), @do_chunk_alloc, which indicates whether we
>> need extra chunk pre-allocation.
>>
>> For relocation, we pass @do_chunk_alloc=true, while for scrub, we pass
>> @do_chunk_alloc=false.
>> This should keep unnecessary empty chunks from popping up for scrub.
>>
>> Also, since there are two parameters for btrfs_inc_block_group_ro(),
>> add more comment for it.
>>
>> Signed-off-by: Qu Wenruo <wqu@suse.com>
> 
> Qu,
> 
> Strangely, this has caused some unexpected failures on test btrfs/071
> (fsstress + device replace + remount followed by scrub).

How reproducible?

I also hit rare csum corruptions in btrfs/06[45] and btrfs/071.
That was with v5.5-rc6 and misc-next.

In my runs, it reproduces about once every 20 to 50 runs.

> 
> Since I hadn't seen the issue in my 5.4 (and older) based branches,
> and only started to observe the failure in 5.5-rc2+, I left a vm
> bisecting since last week after coming back from vacations.
> The bisection points out to this change. And going to 5.5-rc5 and
> reverting this change, or just doing:
> 
> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> index 21de630b0730..87478654a3e1 100644
> --- a/fs/btrfs/scrub.c
> +++ b/fs/btrfs/scrub.c
> @@ -3578,7 +3578,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
>                  * thread can't be triggered fast enough, and use up all space
>                  * of btrfs_super_block::sys_chunk_array
>                  */
> -               ret = btrfs_inc_block_group_ro(cache, false);
> +               ret = btrfs_inc_block_group_ro(cache, true);
>                 scrub_pause_off(fs_info);
> 
>                 if (ret == 0) {
> 
> which is simpler than reverting due to conflicts, confirms this patch
> is what causes btrfs/071 to fail like this:
> 
> $ cat results/btrfs/071.out.bad
> QA output created by 071
> Silence is golden
> Scrub find errors in "-m raid0 -d raid0" test

In my case, it happens not only with raid0/raid0, but also with
single/single.

> 
> $ cat results/btrfs/071.full
> (...)
> Test -m raid0 -d raid0
> Run fsstress  -p 20 -n 100 -d
> /home/fdmanana/btrfs-tests/scratch_1/stressdir -f rexchange=0 -f
> rwhiteout=0
> Start replace worker: 17813
> Wait for fsstress to exit and kill all background workers
> seed = 1579455326
> dev_pool=/dev/sdd /dev/sde /dev/sdf
> free_dev=/dev/sdg, src_dev=/dev/sdd
> Replacing /dev/sdd with /dev/sdg
> Replacing /dev/sde with /dev/sdd
> Replacing /dev/sdf with /dev/sde
> Replacing /dev/sdg with /dev/sdf
> Replacing /dev/sdd with /dev/sdg
> Replacing /dev/sde with /dev/sdd
> Replacing /dev/sdf with /dev/sde
> Replacing /dev/sdg with /dev/sdf
> Replacing /dev/sdd with /dev/sdg
> Replacing /dev/sde with /dev/sdd
> Scrub the filesystem
> ERROR: there are uncorrectable errors
> scrub done for 0f63c9b5-5618-4484-b6f2-0b7d3294cf05
> Scrub started:    Fri Jan 17 12:31:35 2020
> Status:           finished
> Duration:         0:00:00
> Total to scrub:   5.02GiB
> Rate:             0.00B/s
> Error summary:    csum=1
>   Corrected:      0
>   Uncorrectable:  1
>   Unverified:     0
> Scrub find errors in "-m raid0 -d raid0" test
> (...)
> 
> And in syslog:
> 
> (...)
> Jan  9 13:14:15 debian5 kernel: [1739740.727952] BTRFS info (device
> sdc): dev_replace from /dev/sde (devid 4) to /dev/sdd started
> Jan  9 13:14:15 debian5 kernel: [1739740.752226]
> scrub_handle_errored_block: 8 callbacks suppressed
> Jan  9 13:14:15 debian5 kernel: [1739740.752228] BTRFS warning (device
> sdc): checksum error at logical 1129050112 on dev /dev/sde, physical
> 277803008, root 5, inode 405, offset 1540096, length 4096, links 1
> (path: stressdir/pa/d2/d5/fa)

Exactly one of my observed symptoms.

> Jan  9 13:14:15 debian5 kernel: [1739740.752230]
> btrfs_dev_stat_print_on_error: 8 callbacks suppressed
> Jan  9 13:14:15 debian5 kernel: [1739740.758600] BTRFS error (device
> sdc): bdev /dev/sde errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
> Jan  9 13:14:15 debian5 kernel: [1739740.908263]
> scrub_handle_errored_block: 2 callbacks suppressed
> Jan  9 13:14:15 debian5 kernel: [1739740.929814] BTRFS error (device
> sdc): unable to fixup (regular) error at logical 1129050112 on dev
> /dev/sde
> Jan  9 13:14:15 debian5 kernel: [1739740.929817] BTRFS info (device
> sdc): dev_replace from /dev/sde (devid 4) to /dev/sdd finished
> (...)
> 
> From a quick look I don't have an idea how this patch causes such a
> regression, it doesn't seem to make any sense.
> 
> I changed btrfs/071 to comment out the remount loop
> (https://pastebin.com/X5LiJtpS, the fsstress arguments are to avoid an
> infinite loop during fsync, for which there is a fix already) to see
> if that had any influence - it does not.
> So it seems the problem is device replace and/or scrub only.
> 
> I can trigger the failure somewhat easily, 1000 iterations of
> btrfs/071 on 5.5-rc5 produced 94 failures like that.

OK, that's more easily reproduced than in my environment.

> With this patch reverted (or passing true to
> btrfs_inc_block_group_ro() from scrub), 20108 iterations succeeded (no
> failures at all).

This is very interesting. The iteration count is very convincing, so
the revert is a real fix, not just random luck.

> 
> I haven't tried Josef's recent changes to inc_block_group_ro(), so I
> can't say yet if they fix the problem.

I guess his fix would behave similarly, but giving it an extra try
would always be good.

> 
> I can take a closer look next week to see if I can figure out why this
> causes such weird failures, unless you already have an idea.

No idea at all. The only observed clue is that the csum mismatches
always happen in data bgs, no tree blocks so far.

Maybe it's related to data chunk pre-allocation? Then doing the chunk
pre-alloc only for data chunks might give the same result?

Anyway, thanks for confirming the number of iterations needed to
trigger it, and for the possible fix.

Thanks,
Qu

> 
> Thanks.
> 
>> [... v2 changelog and full patch snipped, quoted in full earlier in the thread ...]
> 
> 


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO
  2020-01-18  1:16   ` Qu Wenruo
@ 2020-01-20  9:43     ` Qu Wenruo
  2020-01-20 10:45       ` Filipe Manana
  0 siblings, 1 reply; 7+ messages in thread
From: Qu Wenruo @ 2020-01-20  9:43 UTC (permalink / raw)
  To: fdmanana, Qu Wenruo; +Cc: linux-btrfs, Josef Bacik


[-- Attachment #1.1: Type: text/plain, Size: 7618 bytes --]



On 2020/1/18 9:16 AM, Qu Wenruo wrote:
> 
> 
> On 2020/1/18 1:59 AM, Filipe Manana wrote:
>> On Fri, Nov 15, 2019 at 2:11 AM Qu Wenruo <wqu@suse.com> wrote:
>>>
>>> [... original patch description snipped, quoted in full earlier in the thread ...]
>>
>> Qu,
>>
>> Strangely, this has caused some unexpected failures on test btrfs/071
>> (fsstress + device replace + remount followed by scrub).
> 
> How reproducible?
> 
> I also hit rare csum corruptions in btrfs/06[45] and btrfs/071.
> That from v5.5-rc6 and misc-next.
> 
> In my runs, the reproducibility comes around 1/20 to 1/50.
> 
>>
>> Since I hadn't seen the issue in my 5.4 (and older) based branches,
>> and only started to observe the failure in 5.5-rc2+, I left a vm
>> bisecting since last week after coming back from vacations.
>> The bisection points out to this change. And going to 5.5-rc5 and
>> reverting this change, or just doing:
>>
>> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
>> index 21de630b0730..87478654a3e1 100644
>> --- a/fs/btrfs/scrub.c
>> +++ b/fs/btrfs/scrub.c
>> @@ -3578,7 +3578,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
>>                  * thread can't be triggered fast enough, and use up all space
>>                  * of btrfs_super_block::sys_chunk_array
>>                  */
>> -               ret = btrfs_inc_block_group_ro(cache, false);
>> +               ret = btrfs_inc_block_group_ro(cache, true);
>>                 scrub_pause_off(fs_info);
>>
>>                 if (ret == 0) {
>>
>> which is simpler than reverting due to conflicts, confirms this patch
>> is what causes btrfs/071 to fail like this:
>>
>> $ cat results/btrfs/071.out.bad
>> QA output created by 071
>> Silence is golden
>> Scrub find errors in "-m raid0 -d raid0" test
> 
> In my case, not only raid0 raid0, but also simple simple.
> 
>>
>> $ cat results/btrfs/071.full
>> (...)
>> Test -m raid0 -d raid0
>> Run fsstress  -p 20 -n 100 -d
>> /home/fdmanana/btrfs-tests/scratch_1/stressdir -f rexchange=0 -f
>> rwhiteout=0
>> Start replace worker: 17813
>> Wait for fsstress to exit and kill all background workers
>> seed = 1579455326
>> dev_pool=/dev/sdd /dev/sde /dev/sdf
>> free_dev=/dev/sdg, src_dev=/dev/sdd
>> Replacing /dev/sdd with /dev/sdg
>> Replacing /dev/sde with /dev/sdd
>> Replacing /dev/sdf with /dev/sde
>> Replacing /dev/sdg with /dev/sdf
>> Replacing /dev/sdd with /dev/sdg
>> Replacing /dev/sde with /dev/sdd
>> Replacing /dev/sdf with /dev/sde
>> Replacing /dev/sdg with /dev/sdf
>> Replacing /dev/sdd with /dev/sdg
>> Replacing /dev/sde with /dev/sdd
>> Scrub the filesystem
>> ERROR: there are uncorrectable errors
>> scrub done for 0f63c9b5-5618-4484-b6f2-0b7d3294cf05
>> Scrub started:    Fri Jan 17 12:31:35 2020
>> Status:           finished
>> Duration:         0:00:00
>> Total to scrub:   5.02GiB
>> Rate:             0.00B/s
>> Error summary:    csum=1
>>   Corrected:      0
>>   Uncorrectable:  1
>>   Unverified:     0
>> Scrub find errors in "-m raid0 -d raid0" test
>> (...)
>>
>> And in syslog:
>>
>> (...)
>> Jan  9 13:14:15 debian5 kernel: [1739740.727952] BTRFS info (device
>> sdc): dev_replace from /dev/sde (devid 4) to /dev/sdd started
>> Jan  9 13:14:15 debian5 kernel: [1739740.752226]
>> scrub_handle_errored_block: 8 callbacks suppressed
>> Jan  9 13:14:15 debian5 kernel: [1739740.752228] BTRFS warning (device
>> sdc): checksum error at logical 1129050112 on dev /dev/sde, physical
>> 277803008, root 5, inode 405, offset 1540096, length 4096, links 1
>> (path: stressdir/pa/d2/d5/fa)
> 
Since I have no clue why this patch is causing the problem, I'm just
poking around to see how it's related.

- It's on-disk data corruption.
  btrfs check --check-data-csum also reports a similar error on the fs,
  so it's not some false alert from scrub.

Considering the impact, I guess it may be worth using your quick fix,
or at least doing chunk pre-allocation for data chunks.

Thanks,
Qu


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO
  2020-01-20  9:43     ` Qu Wenruo
@ 2020-01-20 10:45       ` Filipe Manana
  0 siblings, 0 replies; 7+ messages in thread
From: Filipe Manana @ 2020-01-20 10:45 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Qu Wenruo, linux-btrfs, Josef Bacik

On Mon, Jan 20, 2020 at 9:43 AM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> On 2020/1/18 9:16 AM, Qu Wenruo wrote:
> >
> >
> > On 2020/1/18 1:59 AM, Filipe Manana wrote:
> >> On Fri, Nov 15, 2019 at 2:11 AM Qu Wenruo <wqu@suse.com> wrote:
> >>>
> >>> [BUG]
> >>> When running btrfs/072 with only one online CPU, it has a pretty high
> >>> chance to fail:
> >>>
> >>>   btrfs/072 12s ... _check_dmesg: something found in dmesg (see xfstests-dev/results//btrfs/072.dmesg)
> >>>   - output mismatch (see xfstests-dev/results//btrfs/072.out.bad)
> >>>       --- tests/btrfs/072.out     2019-10-22 15:18:14.008965340 +0800
> >>>       +++ /xfstests-dev/results//btrfs/072.out.bad      2019-11-14 15:56:45.877152240 +0800
> >>>       @@ -1,2 +1,3 @@
> >>>        QA output created by 072
> >>>        Silence is golden
> >>>       +Scrub find errors in "-m dup -d single" test
> >>>       ...
> >>>
> >>> And with the following call trace:
> >>>   BTRFS info (device dm-5): scrub: started on devid 1
> >>>   ------------[ cut here ]------------
> >>>   BTRFS: Transaction aborted (error -27)
> >>>   WARNING: CPU: 0 PID: 55087 at fs/btrfs/block-group.c:1890 btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
> >>>   CPU: 0 PID: 55087 Comm: btrfs Tainted: G        W  O      5.4.0-rc1-custom+ #13
> >>>   Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
> >>>   RIP: 0010:btrfs_create_pending_block_groups+0x3e6/0x470 [btrfs]
> >>>   Call Trace:
> >>>    __btrfs_end_transaction+0xdb/0x310 [btrfs]
> >>>    btrfs_end_transaction+0x10/0x20 [btrfs]
> >>>    btrfs_inc_block_group_ro+0x1c9/0x210 [btrfs]
> >>>    scrub_enumerate_chunks+0x264/0x940 [btrfs]
> >>>    btrfs_scrub_dev+0x45c/0x8f0 [btrfs]
> >>>    btrfs_ioctl+0x31a1/0x3fb0 [btrfs]
> >>>    do_vfs_ioctl+0x636/0xaa0
> >>>    ksys_ioctl+0x67/0x90
> >>>    __x64_sys_ioctl+0x43/0x50
> >>>    do_syscall_64+0x79/0xe0
> >>>    entry_SYSCALL_64_after_hwframe+0x49/0xbe
> >>>   ---[ end trace 166c865cec7688e7 ]---
> >>>
> >>> [CAUSE]
> >>> The error number -27 is -EFBIG, returned from the following call chain:
> >>> btrfs_end_transaction()
> >>> |- __btrfs_end_transaction()
> >>>    |- btrfs_create_pending_block_groups()
> >>>       |- btrfs_finish_chunk_alloc()
> >>>          |- btrfs_add_system_chunk()
> >>>
> >>> This happens because we have used up all space of
> >>> btrfs_super_block::sys_chunk_array.
> >>>
> >>> The root cause is, we have the following bad loop of creating tons of
> >>> system chunks:
> >>> 1. The only SYSTEM chunk is being scrubbed
> >>>    It's very common to have only one SYSTEM chunk.
> >>> 2. A new SYSTEM bg will be allocated
> >>>    btrfs_inc_block_group_ro() checks if we have enough space after
> >>>    marking the current bg RO. If not, it allocates a new chunk.
> >>> 3. The new SYSTEM bg is still empty, so it will be reclaimed
> >>>    During the reclaim, we mark it RO again.
> >>> 4. That newly allocated empty SYSTEM bg gets scrubbed
> >>>    We go back to step 2, as the bg is already marked RO but not yet
> >>>    cleaned up.
> >>>
> >>> If the cleaner kthread doesn't get executed fast enough (e.g. only one
> >>> CPU), then we will get more and more empty SYSTEM chunks, using up all
> >>> the space of btrfs_super_block::sys_chunk_array.
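The runaway allocation described above can be modeled with a small standalone C snippet. This is an illustrative model only, not kernel code: the 2048-byte array size matches BTRFS_SYSTEM_CHUNK_ARRAY_SIZE, but the per-entry size and the function names here are simplified assumptions.

```c
#include <assert.h>
#include <errno.h>

/*
 * Illustrative model of why the loop exhausts
 * btrfs_super_block::sys_chunk_array: every SYSTEM chunk costs a
 * fixed slot in a bounded array, and -EFBIG is returned once full.
 */
#define SYS_CHUNK_ARRAY_SIZE 2048
#define SYS_CHUNK_ENTRY_SIZE 97   /* hypothetical key + chunk item size */

static int sys_array_used;

/* Model of btrfs_add_system_chunk(): fails with -EFBIG when full. */
static int add_system_chunk(void)
{
    if (sys_array_used + SYS_CHUNK_ENTRY_SIZE > SYS_CHUNK_ARRAY_SIZE)
        return -EFBIG;
    sys_array_used += SYS_CHUNK_ENTRY_SIZE;
    return 0;
}

/*
 * The runaway loop: each time scrub marks a SYSTEM bg RO, a new
 * (empty) SYSTEM chunk is pre-allocated; if the cleaner kthread
 * never runs, the array fills up and the transaction aborts.
 */
static int scrub_loop_without_cleaner(int iterations)
{
    for (int i = 0; i < iterations; i++) {
        int ret = add_system_chunk();
        if (ret)
            return ret;  /* transaction abort path */
    }
    return 0;
}
```

With a fast cleaner each empty chunk would be freed before the next iteration, so the array never fills; on a single CPU the cleaner loses the race, which is exactly the reported failure mode.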
> >>>
> >>> [FIX]
> >>> Since scrub/dev-replace doesn't always need to allocate new extents,
> >>> especially chunk tree extents, we don't really need to do chunk
> >>> pre-allocation.
> >>>
> >>> To break the above spiral, introduce a new parameter to
> >>> btrfs_inc_block_group_ro(), @do_chunk_alloc, which indicates whether
> >>> we need extra chunk pre-allocation.
> >>>
> >>> For relocation, we pass @do_chunk_alloc=true, while for scrub, we pass
> >>> @do_chunk_alloc=false.
> >>> This should keep unnecessary empty chunks from popping up for scrub.
> >>>
> >>> Also, since btrfs_inc_block_group_ro() now takes two parameters,
> >>> add more comments for it.
> >>>
> >>> Signed-off-by: Qu Wenruo <wqu@suse.com>
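The shape of the fix can be sketched as a minimal standalone model. This is an assumed simplification, not the real btrfs_inc_block_group_ro(): it only captures the @do_chunk_alloc distinction between the relocation and scrub callers.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Minimal model of the fix: btrfs_inc_block_group_ro() gains a
 * @do_chunk_alloc parameter. Relocation passes true (it will write
 * new extents, so spare space must exist); scrub passes false (it
 * only reads and repairs in place), which breaks the loop of
 * allocating empty SYSTEM chunks.
 */
static int chunks_allocated;   /* counts chunk pre-allocations */

static int inc_block_group_ro(bool do_chunk_alloc)
{
    if (do_chunk_alloc) {
        /* relocation path: ensure spare space before going RO */
        chunks_allocated++;
    }
    /* ... mark the block group read-only ... */
    return 0;
}
```

A scrub caller (false) marks the bg RO without touching the chunk count, while a relocation caller (true) still triggers the pre-allocation.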
> >>
> >> Qu,
> >>
> >> Strangely, this has caused some unexpected failures on test btrfs/071
> >> (fsstress + device replace + remount followed by scrub).
> >
> > How reproducible?
> >
> > I also hit rare csum corruptions in btrfs/06[45] and btrfs/071.
> > That's on both v5.5-rc6 and misc-next.
> >
> > In my runs, the reproducibility is around 1/20 to 1/50.
> >
> >>
> >> Since I hadn't seen the issue in my 5.4 (and older) based branches,
> >> and only started to observe the failure in 5.5-rc2+, I left a vm
> >> bisecting since last week after coming back from vacations.
> >> The bisection points out to this change. And going to 5.5-rc5 and
> >> reverting this change, or just doing:
> >>
> >> diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
> >> index 21de630b0730..87478654a3e1 100644
> >> --- a/fs/btrfs/scrub.c
> >> +++ b/fs/btrfs/scrub.c
> >> @@ -3578,7 +3578,7 @@ int scrub_enumerate_chunks(struct scrub_ctx *sctx,
> >>                  * thread can't be triggered fast enough, and use up all space
> >>                  * of btrfs_super_block::sys_chunk_array
> >>                  */
> >> -               ret = btrfs_inc_block_group_ro(cache, false);
> >> +               ret = btrfs_inc_block_group_ro(cache, true);
> >>                 scrub_pause_off(fs_info);
> >>
> >>                 if (ret == 0) {
> >>
> >> which is simpler than reverting due to conflicts, confirms this patch
> >> is what causes btrfs/071 to fail like this:
> >>
> >> $ cat results/btrfs/071.out.bad
> >> QA output created by 071
> >> Silence is golden
> >> Scrub find errors in "-m raid0 -d raid0" test
> >
> > In my case, not only raid0 raid0, but also simple simple.
> >
> >>
> >> $ cat results/btrfs/071.full
> >> (...)
> >> Test -m raid0 -d raid0
> >> Run fsstress  -p 20 -n 100 -d
> >> /home/fdmanana/btrfs-tests/scratch_1/stressdir -f rexchange=0 -f
> >> rwhiteout=0
> >> Start replace worker: 17813
> >> Wait for fsstress to exit and kill all background workers
> >> seed = 1579455326
> >> dev_pool=/dev/sdd /dev/sde /dev/sdf
> >> free_dev=/dev/sdg, src_dev=/dev/sdd
> >> Replacing /dev/sdd with /dev/sdg
> >> Replacing /dev/sde with /dev/sdd
> >> Replacing /dev/sdf with /dev/sde
> >> Replacing /dev/sdg with /dev/sdf
> >> Replacing /dev/sdd with /dev/sdg
> >> Replacing /dev/sde with /dev/sdd
> >> Replacing /dev/sdf with /dev/sde
> >> Replacing /dev/sdg with /dev/sdf
> >> Replacing /dev/sdd with /dev/sdg
> >> Replacing /dev/sde with /dev/sdd
> >> Scrub the filesystem
> >> ERROR: there are uncorrectable errors
> >> scrub done for 0f63c9b5-5618-4484-b6f2-0b7d3294cf05
> >> Scrub started:    Fri Jan 17 12:31:35 2020
> >> Status:           finished
> >> Duration:         0:00:00
> >> Total to scrub:   5.02GiB
> >> Rate:             0.00B/s
> >> Error summary:    csum=1
> >>   Corrected:      0
> >>   Uncorrectable:  1
> >>   Unverified:     0
> >> Scrub find errors in "-m raid0 -d raid0" test
> >> (...)
> >>
> >> And in syslog:
> >>
> >> (...)
> >> Jan  9 13:14:15 debian5 kernel: [1739740.727952] BTRFS info (device
> >> sdc): dev_replace from /dev/sde (devid 4) to /dev/sdd started
> >> Jan  9 13:14:15 debian5 kernel: [1739740.752226]
> >> scrub_handle_errored_block: 8 callbacks suppressed
> >> Jan  9 13:14:15 debian5 kernel: [1739740.752228] BTRFS warning (device
> >> sdc): checksum error at logical 1129050112 on dev /dev/sde, physical
> >> 277803008, root 5, inode 405, offset 1540096, length 4096, links 1
> >> (path: stressdir/pa/d2/d5/fa)
> >
> Since I have no clue why this patch is causing the problem, I'm just
> poking around to see how it's related.
>
> - It's on-disk data corruption.
>   btrfs check --check-data-csum also reports a similar error on the fs,
>   so it's not a false alert from scrub.

Yes, I figured that later as well.
The problem is really with device replace: passing false to
btrfs_inc_block_group_ro() during scrub is fine, but passing false
during device replace is not.
I haven't checked (due to lack of time) whether the problem is
incorrect data being written, or whether the checksum that ends up in
the csum tree is incorrect or stale.

>
> Considering the effect, I guess it may be worth using your quick fix,
> or at least doing chunk pre-allocation for data chunks.
>
> Thanks,
> Qu
>


-- 
Filipe David Manana,

“Whether you think you can, or you think you can't — you're right.”


end of thread, other threads:[~2020-01-20 10:45 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-15  2:09 [PATCH v2] btrfs: scrub: Don't check free space before marking a block group RO Qu Wenruo
2019-11-15 12:51 ` Filipe Manana
2019-11-18 15:14 ` David Sterba
2020-01-17 17:59 ` Filipe Manana
2020-01-18  1:16   ` Qu Wenruo
2020-01-20  9:43     ` Qu Wenruo
2020-01-20 10:45       ` Filipe Manana
