linux-btrfs.vger.kernel.org archive mirror
From: Josef Bacik <josef@toxicpanda.com>
To: dsterba@suse.cz, linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 1/5] btrfs: check rw_devices, not num_devices for restriping
Date: Tue, 14 Jan 2020 13:07:22 -0800	[thread overview]
Message-ID: <801709ca-22cd-f6ed-4e39-622a6aa1a1e6@toxicpanda.com> (raw)
In-Reply-To: <20200114205609.GL3929@twin.jikos.cz>

On 1/14/20 12:56 PM, David Sterba wrote:
> On Fri, Jan 10, 2020 at 11:11:24AM -0500, Josef Bacik wrote:
>> While running xfstests with compression on I noticed I was panicking on
>> btrfs/154.  I bisected this down to my inc_block_group_ro patches, which
>> was strange.
> 
> Do you have stacktrace of the panic?
> 

I don't have it with me; I can reproduce it when I get back.  But it's a 
BUG_ON(ret) in init_reloc_root when we do the copy_root, because we get an 
ENOSPC when trying to allocate the tree block.


>> What was happening is with my patches we now use btrfs_can_overcommit()
>> to see if we can flip a block group read only.  Before this would fail
>> because we weren't taking into account the usable un-allocated space for
>> allocating chunks.  With my patches we were allowed to do the balance,
>> which is technically correct.
> 
> What patches does "my patches" mean?
> 

The ones that convert the inc_block_group_ro() to use btrfs_can_overcommit().

>> However this test is testing restriping with a degraded mount, something
>> that isn't working right because Anand's fix for the test was never
>> actually merged.
> 
> Which patch is that?

It says in the header of btrfs/154.  I don't have xfstests in front of me right now.

> 
>> So now we're trying to allocate a chunk and cannot because we want to
>> allocate a RAID1 chunk, but there's only 1 device that's available for
>> usage.  This results in an ENOSPC in one of the BUG_ON(ret) paths in
>> relocation (and a tricky path that is going to take many more patches to
>> fix.)
>>
>> But we shouldn't even be making it this far, we don't have enough
>> devices to restripe.  The problem is we're using btrfs_num_devices(),
>> which for some reason includes missing devices.  That's not actually
>> what we want, we want the rw_devices.
> 
> The wrapper btrfs_num_devices takes into account an ongoing replace that
> temporarily increases num_devices, so the result returned to balance is
> adjusted.
> 
> That we need to know the correct number of writable devices at this
> point is right. With btrfs_num_devices we'd have to subtract missing
> devices, but in the end we can't use more than rw_devices.
> 
>> Fix this by getting the rw_devices.  With this patch we're no longer
>> panicking with my other patches applied, and we're in fact erroring out
>> at the correct spot instead of at inc_block_group_ro.  The fact that
>> this was working before was just sheer dumb luck.
>>
>> Fixes: e4d8ec0f65b9 ("Btrfs: implement online profile changing")
>> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
>> ---
>>   fs/btrfs/volumes.c | 9 ++++++++-
>>   1 file changed, 8 insertions(+), 1 deletion(-)
>>
>> diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
>> index 7483521a928b..a92059555754 100644
>> --- a/fs/btrfs/volumes.c
>> +++ b/fs/btrfs/volumes.c
>> @@ -3881,7 +3881,14 @@ int btrfs_balance(struct btrfs_fs_info *fs_info,
>>   		}
>>   	}
>>   
>> -	num_devices = btrfs_num_devices(fs_info);
>> +	/*
>> +	 * rw_devices can be messed with by rm_device and device replace, so
>> +	 * take the chunk_mutex to make sure we have a relatively consistent
>> +	 * view of the fs at this point.
> 
> Well, what does 'relatively consistent' mean here? There are enough
> locks and exclusion that device remove or replace should not change the
> value until btrfs_balance ends, no?
> 

Again, I don't have the code in front of me, but there's nothing at this point 
to stop us from running in at the tail end of a device replace or device 
removal.  The mutex keeps us from reading a transiently inflated value while 
device replace increments and then decrements the count at the end, but 
there's nothing (that I can remember) stopping rw_devices from changing right 
after we check it; hence "relatively".  Thanks,

Josef


Thread overview: 13+ messages
2020-01-10 16:11 [PATCH 0/5][v3] clean up how we mark block groups read only Josef Bacik
2020-01-10 16:11 ` [PATCH 1/5] btrfs: check rw_devices, not num_devices for restriping Josef Bacik
2020-01-11  9:24   ` Qu Wenruo
2020-01-14 20:56   ` David Sterba
2020-01-14 21:07     ` Josef Bacik [this message]
2020-01-16 15:59       ` David Sterba
2020-01-16 16:25         ` Josef Bacik
2020-01-10 16:11 ` [PATCH 2/5] btrfs: don't pass system_chunk into can_overcommit Josef Bacik
2020-01-14 19:56   ` David Sterba
2020-01-10 16:11 ` [PATCH 3/5] btrfs: kill min_allocable_bytes in inc_block_group_ro Josef Bacik
2020-01-10 16:11 ` [PATCH 4/5] btrfs: fix force usage " Josef Bacik
2020-01-11  6:15   ` Qu Wenruo
2020-01-10 16:11 ` [PATCH 5/5] btrfs: use btrfs_can_overcommit " Josef Bacik
