On 2019/11/25 10:40 PM, Josef Bacik wrote:
> For some reason we've translated the do_chunk_alloc that goes into
> btrfs_inc_block_group_ro to force in inc_block_group_ro, but these are
> two different things.
>
> force for inc_block_group_ro is used when we are forcing the block group
> read only no matter what, for example when the underlying chunk is
> marked read only.  We need to not do the space check here as this block
> group needs to be read only.
>
> btrfs_inc_block_group_ro() has a do_chunk_alloc flag that indicates that
> we need to pre-allocate a chunk before marking the block group read
> only.  This has nothing to do with forcing, and in fact we _always_ want
> to do the space check in this case, so unconditionally pass false for
> force in this case.

I think the patch order makes things a little hard to grasp here.
Without the last patch, the idea itself is not correct.

The reason we force ro is that we want to avoid allocating an empty
chunk, especially for the scrub case.

If you put the last patch before this one, it becomes clearer: since we
can then accept over-commit, we won't return a false ENOSPC and no
empty chunk gets created.

BTW, with the last patch applied, we can remove the @force parameter
from inc_block_group_ro() completely (a rough sketch of what that could
look like is at the end of this mail).

Thanks,
Qu

>
> Then fixup inc_block_group_ro to honor force as it's expected and
> documented to do.
>
> Signed-off-by: Josef Bacik
> ---
>  fs/btrfs/block-group.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index db539bfc5a52..3ffbc2e0af21 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -1190,8 +1190,10 @@ static int inc_block_group_ro(struct btrfs_block_group *cache, int force)
>  	spin_lock(&sinfo->lock);
>  	spin_lock(&cache->lock);
>
> -	if (cache->ro) {
> +	if (cache->ro || force) {
>  		cache->ro++;
> +		if (list_empty(&cache->ro_list))
> +			list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
>  		ret = 0;
>  		goto out;
>  	}
> @@ -2063,7 +2065,7 @@ int btrfs_inc_block_group_ro(struct btrfs_block_group *cache,
>  		}
>  	}
>
> -	ret = inc_block_group_ro(cache, !do_chunk_alloc);
> +	ret = inc_block_group_ro(cache, false);
>  	if (!do_chunk_alloc)
>  		goto unlock_out;
>  	if (!ret)
>
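
To make the @force-removal suggestion above concrete, here is a rough,
untested sketch of how inc_block_group_ro() could look once the last
patch lands and over-commit is accepted.  The locking and the ro_list
handling are taken from the quoted hunk; the space check is collapsed
into a hypothetical helper (have_room_or_can_overcommit()) just to show
the shape of the function, it is not an existing btrfs function:

	/*
	 * Sketch only: with over-commit accepted, @force is no longer
	 * needed, the only remaining behaviour is "bump ->ro if the
	 * space check passes (or it is already ro)".
	 */
	static int inc_block_group_ro(struct btrfs_block_group *cache)
	{
		struct btrfs_space_info *sinfo = cache->space_info;
		int ret = -ENOSPC;

		spin_lock(&sinfo->lock);
		spin_lock(&cache->lock);

		if (cache->ro) {
			/* Already read-only, just take another reference. */
			cache->ro++;
			ret = 0;
			goto out;
		}

		/*
		 * Space check; with over-commit accepted this should no
		 * longer report a false ENOSPC for a mostly empty group.
		 * have_room_or_can_overcommit() is a placeholder name.
		 */
		if (have_room_or_can_overcommit(cache, sinfo)) {
			cache->ro++;
			if (list_empty(&cache->ro_list))
				list_add_tail(&cache->ro_list, &sinfo->ro_bgs);
			ret = 0;
		}
	out:
		spin_unlock(&cache->lock);
		spin_unlock(&sinfo->lock);
		return ret;
	}

Then btrfs_inc_block_group_ro() (and the relocation/scrub paths) would
simply call inc_block_group_ro(cache) with no second argument, and the
force-vs-do_chunk_alloc confusion goes away entirely.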