* [PATCH 1/2] btrfs: free fs roots on failed mount
@ 2020-07-22 16:07 Josef Bacik
  2020-07-22 16:07 ` [PATCH 2/2] btrfs: fix root leak printk to use %lld instead of %llu Josef Bacik
  2020-07-27 14:19 ` [PATCH 1/2] btrfs: free fs roots on failed mount David Sterba
  0 siblings, 2 replies; 12+ messages in thread
From: Josef Bacik @ 2020-07-22 16:07 UTC
  To: linux-btrfs, kernel-team

While testing a weird problem with -o degraded, I noticed I was getting
leaked root errors:

BTRFS warning (device loop0): writable mount is not allowed due to too many missing devices
BTRFS error (device loop0): open_ctree failed
BTRFS error (device loop0): leaked root -9-0 refcount 1

This is the DATA_RELOC root, which gets read before the other fs roots
but is included in the fs roots radix tree, and thus is freed by
btrfs_free_fs_roots().  However, this failure path bails out of
open_ctree() before the existing btrfs_free_fs_roots() call is reached,
so the root leaks.  Fix this by moving the call to the fail_tree_roots:
label in open_ctree().  With this fix we no longer leak that root on
mount failure.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/disk-io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index c850d7f44fbe..f1fdbdd44c02 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3421,7 +3421,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
 fail_trans_kthread:
 	kthread_stop(fs_info->transaction_kthread);
 	btrfs_cleanup_transaction(fs_info);
-	btrfs_free_fs_roots(fs_info);
 fail_cleaner:
 	kthread_stop(fs_info->cleaner_kthread);
 
@@ -3441,6 +3440,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
 	btrfs_put_block_group_cache(fs_info);
 
 fail_tree_roots:
+	btrfs_free_fs_roots(fs_info);
 	free_root_pointers(fs_info, true);
 	invalidate_inode_pages2(fs_info->btree_inode->i_mapping);
 
-- 
2.24.1
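
To make the label ordering concrete, here is a minimal, self-contained toy
model of the cascading error labels in open_ctree().  All names and the
exact control flow are invented for illustration; this is not the kernel
code.  It shows why a failure that jumps below the old fail_trans_kthread
location never reached btrfs_free_fs_roots(), while the fail_tree_roots
location is still fallen through.

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-ins; every name here is hypothetical. */
static bool read_tree_roots(void)   { puts("roots read into radix tree"); return true;  }
static bool check_rw_degraded(void) { puts("too many missing devices");   return false; }
static bool start_kthreads(void)    { puts("kthreads started");           return true;  }
static bool late_setup(void)        { return true; }

static int open_ctree_like(void)
{
	if (!read_tree_roots())
		goto fail;
	if (!check_rw_degraded())
		goto fail_tree_roots;	/* the -o degraded failure in the log above */
	if (!start_kthreads())
		goto fail_tree_roots;
	if (!late_setup())
		goto fail_trans_kthread;
	return 0;

	/* Labels cascade: a goto falls through every label below its target. */
fail_trans_kthread:
	puts("stop kthreads");
	/* old home of btrfs_free_fs_roots(): skipped by the degraded failure */
fail_tree_roots:
	puts("free radix-tree roots here (where the patch moves the call)");
	puts("free root pointers");
fail:
	return -1;
}

int main(void)
{
	return open_ctree_like() ? 1 : 0;
}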



* [PATCH 2/2] btrfs: fix root leak printk to use %lld instead of %llu
  2020-07-22 16:07 [PATCH 1/2] btrfs: free fs roots on failed mount Josef Bacik
@ 2020-07-22 16:07 ` Josef Bacik
  2020-07-23 14:20   ` David Sterba
  2020-07-27 14:19 ` [PATCH 1/2] btrfs: free fs roots on failed mount David Sterba
  1 sibling, 1 reply; 12+ messages in thread
From: Josef Bacik @ 2020-07-22 16:07 UTC
  To: linux-btrfs, kernel-team

I'm an actual human being, so I'm incapable of converting u64 to s64 in
my head.  Use %lld so we can see the negative number and identify which
of our special roots we leaked.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/disk-io.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index f1fdbdd44c02..cc4081a1c7f9 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -1508,7 +1508,7 @@ void btrfs_check_leaked_roots(struct btrfs_fs_info *fs_info)
 	while (!list_empty(&fs_info->allocated_roots)) {
 		root = list_first_entry(&fs_info->allocated_roots,
 					struct btrfs_root, leak_list);
-		btrfs_err(fs_info, "leaked root %llu-%llu refcount %d",
+		btrfs_err(fs_info, "leaked root %lld-%llu refcount %d",
 			  root->root_key.objectid, root->root_key.offset,
 			  refcount_read(&root->refs));
 		while (refcount_read(&root->refs) > 1)
-- 
2.24.1
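
For concreteness, a tiny standalone demo of what the one-character change
buys, assuming the DATA_RELOC tree objectid is the usual -9ULL (i.e.
2^64 - 9) stored in a u64:

#include <stdio.h>

int main(void)
{
	/* DATA_RELOC tree objectid, stored as an unsigned 64-bit value. */
	unsigned long long objectid = -9ULL;

	printf("%llu\n", objectid);             /* 18446744073709551607 */
	printf("%lld\n", (long long)objectid);  /* -9, matching the leak message */
	return 0;
}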



* Re: [PATCH 2/2] btrfs: fix root leak printk to use %lld instead of %llu
  2020-07-22 16:07 ` [PATCH 2/2] btrfs: fix root leak printk to use %lld instead of %llu Josef Bacik
@ 2020-07-23 14:20   ` David Sterba
  2020-07-24  0:40     ` Qu Wenruo
  0 siblings, 1 reply; 12+ messages in thread
From: David Sterba @ 2020-07-23 14:20 UTC
  To: Josef Bacik; +Cc: linux-btrfs, kernel-team

On Wed, Jul 22, 2020 at 12:07:22PM -0400, Josef Bacik wrote:
> I'm a actual human being so am incapable of converting u64 to s64 in my
> head, use %lld so we can see the negative number in order to identify
> which of our special roots we leaked.
> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> ---
>  fs/btrfs/disk-io.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index f1fdbdd44c02..cc4081a1c7f9 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -1508,7 +1508,7 @@ void btrfs_check_leaked_roots(struct btrfs_fs_info *fs_info)
>  	while (!list_empty(&fs_info->allocated_roots)) {
>  		root = list_first_entry(&fs_info->allocated_roots,
>  					struct btrfs_root, leak_list);
> -		btrfs_err(fs_info, "leaked root %llu-%llu refcount %d",
> +		btrfs_err(fs_info, "leaked root %lld-%llu refcount %d",

But this is wrong in another way, roots with high numbers will appear as
negative numbers.


* Re: [PATCH 2/2] btrfs: fix root leak printk to use %lld instead of %llu
  2020-07-23 14:20   ` David Sterba
@ 2020-07-24  0:40     ` Qu Wenruo
  2020-07-27 14:03       ` David Sterba
  0 siblings, 1 reply; 12+ messages in thread
From: Qu Wenruo @ 2020-07-24  0:40 UTC
  To: dsterba, Josef Bacik, linux-btrfs, kernel-team


On 2020/7/23 at 10:20 PM, David Sterba wrote:
> On Wed, Jul 22, 2020 at 12:07:22PM -0400, Josef Bacik wrote:
>> I'm a actual human being so am incapable of converting u64 to s64 in my
>> head, use %lld so we can see the negative number in order to identify
>> which of our special roots we leaked.
>>
>> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
>> ---
>>  fs/btrfs/disk-io.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
>> index f1fdbdd44c02..cc4081a1c7f9 100644
>> --- a/fs/btrfs/disk-io.c
>> +++ b/fs/btrfs/disk-io.c
>> @@ -1508,7 +1508,7 @@ void btrfs_check_leaked_roots(struct btrfs_fs_info *fs_info)
>>  	while (!list_empty(&fs_info->allocated_roots)) {
>>  		root = list_first_entry(&fs_info->allocated_roots,
>>  					struct btrfs_root, leak_list);
>> -		btrfs_err(fs_info, "leaked root %llu-%llu refcount %d",
>> +		btrfs_err(fs_info, "leaked root %lld-%llu refcount %d",
> 
> But this is wrong in another way, roots with high numbers will appear as
> negative numbers.
> 

Nope. We won't have that many roots.

In fact, for subvolumes, the highest id is only 2^48, a special limit
introduced for qgroups.

So we won't have subvolume ids high enough to appear negative, only the
special trees will.

Thanks,
Qu



* Re: [PATCH 2/2] btrfs: fix root leak printk to use %lld instead of %llu
  2020-07-24  0:40     ` Qu Wenruo
@ 2020-07-27 14:03       ` David Sterba
  2020-07-27 14:07         ` David Sterba
  2020-07-27 23:47         ` Qu Wenruo
  0 siblings, 2 replies; 12+ messages in thread
From: David Sterba @ 2020-07-27 14:03 UTC
  To: Qu Wenruo; +Cc: dsterba, Josef Bacik, linux-btrfs, kernel-team

On Fri, Jul 24, 2020 at 08:40:17AM +0800, Qu Wenruo wrote:
> 
> 
> On 2020/7/23 at 10:20 PM, David Sterba wrote:
> > On Wed, Jul 22, 2020 at 12:07:22PM -0400, Josef Bacik wrote:
> >> I'm a actual human being so am incapable of converting u64 to s64 in my
> >> head, use %lld so we can see the negative number in order to identify
> >> which of our special roots we leaked.
> >>
> >> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> >> ---
> >>  fs/btrfs/disk-io.c | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> >> index f1fdbdd44c02..cc4081a1c7f9 100644
> >> --- a/fs/btrfs/disk-io.c
> >> +++ b/fs/btrfs/disk-io.c
> >> @@ -1508,7 +1508,7 @@ void btrfs_check_leaked_roots(struct btrfs_fs_info *fs_info)
> >>  	while (!list_empty(&fs_info->allocated_roots)) {
> >>  		root = list_first_entry(&fs_info->allocated_roots,
> >>  					struct btrfs_root, leak_list);
> >> -		btrfs_err(fs_info, "leaked root %llu-%llu refcount %d",
> >> +		btrfs_err(fs_info, "leaked root %lld-%llu refcount %d",
> > 
> > But this is wrong in another way, roots with high numbers will appear as
> > negative numbers.
> > 
> 
> Nope. We won't have that many roots.
> 
> In fact, for subvolumes, the highest id is only 2 ^ 48, an special limit
> introduced for qgroup.

It's not a hard limit and certainly can have subvolumes with numbers
that high. That qgoups interpret the qgroup in some way is not a
limitation on subvolumes. We'll have to start reusing the subvolume ids
eventually, with qgroups we can on.

Also the negativer numbers start to appear with 2^32 so that's still
below the percieved limit of 2^48.

> So we won't have high enough subvolume ids to be negative, but only
> special trees.

For the internal trees we have e.g. a pretty-printer in progs, so the
kernel could reuse that.
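
A minimal sketch of the kind of pretty-printer being suggested here; the
helper name is hypothetical and the mapping only lists a few of the special
objectids (which btrfs defines as small negative u64 values):

#include <stdio.h>

/* Hypothetical helper, not an existing kernel or btrfs-progs function. */
static const char *root_objectid_name(unsigned long long objectid)
{
	switch (objectid) {
	case 5ULL:  return "FS_TREE";
	case -8ULL: return "TREE_RELOC";
	case -9ULL: return "DATA_RELOC_TREE";
	default:    return NULL;	/* an ordinary subvolume id */
	}
}

int main(void)
{
	unsigned long long leaked = -9ULL;
	const char *name = root_objectid_name(leaked);

	if (name)
		printf("leaked root %s refcount 1\n", name);
	else
		printf("leaked root %lld refcount 1\n", (long long)leaked);
	return 0;
}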


* Re: [PATCH 2/2] btrfs: fix root leak printk to use %lld instead of %llu
  2020-07-27 14:03       ` David Sterba
@ 2020-07-27 14:07         ` David Sterba
  2020-07-27 23:47         ` Qu Wenruo
  1 sibling, 0 replies; 12+ messages in thread
From: David Sterba @ 2020-07-27 14:07 UTC
  To: dsterba, Qu Wenruo, Josef Bacik, linux-btrfs, kernel-team

On Mon, Jul 27, 2020 at 04:03:14PM +0200, David Sterba wrote:
> On Fri, Jul 24, 2020 at 08:40:17AM +0800, Qu Wenruo wrote:
> > Nope. We won't have that many roots.
> > 
> > In fact, for subvolumes, the highest id is only 2 ^ 48, an special limit
> > introduced for qgroup.
> 
> It's not a hard limit and certainly can have subvolumes with numbers
> that high. That qgoups interpret the qgroup in some way is not a
> limitation on subvolumes. We'll have to start reusing the subvolume ids
> eventually, with qgroups we can on.

Let me rephrase without the typos:

It's not a hard limit and we certainly can have subvolumes with numbers
that high. That qgroups interpret the qgroup id in some way is not a
limitation on subvolumes. We'll have to start reusing the subvolume ids
eventually, when qgroups are turned on.

(The comment about 2^32/2^48 was incorrect.)


* Re: [PATCH 1/2] btrfs: free fs roots on failed mount
  2020-07-22 16:07 [PATCH 1/2] btrfs: free fs roots on failed mount Josef Bacik
  2020-07-22 16:07 ` [PATCH 2/2] btrfs: fix root leak printk to use %lld instead of %llu Josef Bacik
@ 2020-07-27 14:19 ` David Sterba
  2020-07-27 14:33   ` Josef Bacik
  1 sibling, 1 reply; 12+ messages in thread
From: David Sterba @ 2020-07-27 14:19 UTC
  To: Josef Bacik; +Cc: linux-btrfs, kernel-team

On Wed, Jul 22, 2020 at 12:07:21PM -0400, Josef Bacik wrote:
> While testing a weird problem with -o degraded, I noticed I was getting
> leaked root errors
> 
> BTRFS warning (device loop0): writable mount is not allowed due to too many missing devices
> BTRFS error (device loop0): open_ctree failed
> BTRFS error (device loop0): leaked root -9-0 refcount 1
> 
> This is the DATA_RELOC root, which gets read before the other fs roots,
> but is included in the fs roots radix tree, and thus gets freed by
> btrfs_free_fs_roots.  Fix this by moving the call into fail_tree_roots:
> in open_ctree.  With this fix we no longer leak that root on mount
> failure.
> 
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> ---
>  fs/btrfs/disk-io.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index c850d7f44fbe..f1fdbdd44c02 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -3421,7 +3421,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
>  fail_trans_kthread:
>  	kthread_stop(fs_info->transaction_kthread);
>  	btrfs_cleanup_transaction(fs_info);
> -	btrfs_free_fs_roots(fs_info);
>  fail_cleaner:
>  	kthread_stop(fs_info->cleaner_kthread);
>  
> @@ -3441,6 +3440,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
>  	btrfs_put_block_group_cache(fs_info);
>  
>  fail_tree_roots:
> +	btrfs_free_fs_roots(fs_info);
>  	free_root_pointers(fs_info, true);

The data reloc tree is freed inside free_root_pointers; that it's also
in the radix tree is for convenience, so I'd rather fix this inside
free_root_pointers and not reorder btrfs_free_fs_roots.

>  	invalidate_inode_pages2(fs_info->btree_inode->i_mapping);
>  
> -- 
> 2.24.1


* Re: [PATCH 1/2] btrfs: free fs roots on failed mount
  2020-07-27 14:19 ` [PATCH 1/2] btrfs: free fs roots on failed mount David Sterba
@ 2020-07-27 14:33   ` Josef Bacik
  2020-07-27 15:17     ` David Sterba
  0 siblings, 1 reply; 12+ messages in thread
From: Josef Bacik @ 2020-07-27 14:33 UTC
  To: dsterba, linux-btrfs, kernel-team

On 7/27/20 10:19 AM, David Sterba wrote:
> On Wed, Jul 22, 2020 at 12:07:21PM -0400, Josef Bacik wrote:
>> While testing a weird problem with -o degraded, I noticed I was getting
>> leaked root errors
>>
>> BTRFS warning (device loop0): writable mount is not allowed due to too many missing devices
>> BTRFS error (device loop0): open_ctree failed
>> BTRFS error (device loop0): leaked root -9-0 refcount 1
>>
>> This is the DATA_RELOC root, which gets read before the other fs roots,
>> but is included in the fs roots radix tree, and thus gets freed by
>> btrfs_free_fs_roots.  Fix this by moving the call into fail_tree_roots:
>> in open_ctree.  With this fix we no longer leak that root on mount
>> failure.
>>
>> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
>> ---
>>   fs/btrfs/disk-io.c | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
>> index c850d7f44fbe..f1fdbdd44c02 100644
>> --- a/fs/btrfs/disk-io.c
>> +++ b/fs/btrfs/disk-io.c
>> @@ -3421,7 +3421,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
>>   fail_trans_kthread:
>>   	kthread_stop(fs_info->transaction_kthread);
>>   	btrfs_cleanup_transaction(fs_info);
>> -	btrfs_free_fs_roots(fs_info);
>>   fail_cleaner:
>>   	kthread_stop(fs_info->cleaner_kthread);
>>   
>> @@ -3441,6 +3440,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
>>   	btrfs_put_block_group_cache(fs_info);
>>   
>>   fail_tree_roots:
>> +	btrfs_free_fs_roots(fs_info);
>>   	free_root_pointers(fs_info, true);
> 
> The data reloc tree is freed inside free_root_pointers, that it's also
> in the radix tree is for convenience so I'd rather fix it inside
> free_root_pointers and not reorder btrfs_free_fs_roots.
> 

We can't do that, because free_root_pointers() is called to drop the extent
buffers when we're looping through the backup roots.  btrfs_free_fs_roots() is
the correct thing to do here; it goes through anything that ended up in the
radix tree.  Thanks,

Josef


* Re: [PATCH 1/2] btrfs: free fs roots on failed mount
  2020-07-27 14:33   ` Josef Bacik
@ 2020-07-27 15:17     ` David Sterba
  2020-07-27 15:37       ` Josef Bacik
  0 siblings, 1 reply; 12+ messages in thread
From: David Sterba @ 2020-07-27 15:17 UTC
  To: Josef Bacik; +Cc: dsterba, linux-btrfs, kernel-team

On Mon, Jul 27, 2020 at 10:33:32AM -0400, Josef Bacik wrote:
> >> @@ -3441,6 +3440,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
> >>   	btrfs_put_block_group_cache(fs_info);
> >>   
> >>   fail_tree_roots:
> >> +	btrfs_free_fs_roots(fs_info);
> >>   	free_root_pointers(fs_info, true);
> > 
> > The data reloc tree is freed inside free_root_pointers, that it's also
> > in the radix tree is for convenience so I'd rather fix it inside
> > free_root_pointers and not reorder btrfs_free_fs_roots.
> 
> We can't do that, because free_root_pointers() is called to drop the extent 
> buffers when we're looping through the backup roots.  btrfs_free_fs_roots() is 
> the correct thing to do here, it goes through anything that ended up in the 
> radix tree.  Thanks,

I see, free_root_pointers is used elsewhere. I'm concerned about the
reordering because the roots are now freed only after several other
functions have run.

(before)	btrfs_free_fs_roots(fs_info);

		kthread_stop(fs_info->cleaner_kthread);
		filemap_write_and_wait(fs_info->btree_inode->i_mapping);
		btrfs_sysfs_remove_mounted(fs_info);
		btrfs_sysfs_remove_fsid(fs_info->fs_devices);
		btrfs_put_block_group_cache(fs_info);

(after)		btrfs_free_fs_roots(fs_info);

If something called by btrfs_free_fs_roots depends on structures
removed/cleaned by the other functions, e.g. btrfs_put_block_group_cache,
it could be a problem.

I haven't spotted anything so far; the functions are called on a failure
still during mount, which is easier than the shutdown sequence after a
full mount.


* Re: [PATCH 1/2] btrfs: free fs roots on failed mount
  2020-07-27 15:17     ` David Sterba
@ 2020-07-27 15:37       ` Josef Bacik
  0 siblings, 0 replies; 12+ messages in thread
From: Josef Bacik @ 2020-07-27 15:37 UTC
  To: dsterba, linux-btrfs, kernel-team

On 7/27/20 11:17 AM, David Sterba wrote:
> On Mon, Jul 27, 2020 at 10:33:32AM -0400, Josef Bacik wrote:
>>>> @@ -3441,6 +3440,7 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
>>>>    	btrfs_put_block_group_cache(fs_info);
>>>>    
>>>>    fail_tree_roots:
>>>> +	btrfs_free_fs_roots(fs_info);
>>>>    	free_root_pointers(fs_info, true);
>>>
>>> The data reloc tree is freed inside free_root_pointers, that it's also
>>> in the radix tree is for convenience so I'd rather fix it inside
>>> free_root_pointers and not reorder btrfs_free_fs_roots.
>>
>> We can't do that, because free_root_pointers() is called to drop the extent
>> buffers when we're looping through the backup roots.  btrfs_free_fs_roots() is
>> the correct thing to do here, it goes through anything that ended up in the
>> radix tree.  Thanks,
> 
> I see, free_root_pointers is used elsewhere. I'm concerned about the
> reordeing because there are several functions that are now called after
> the roots are freed.
> 
> (before)	btrfs_free_fs_roots(fs_info);
> 
> 		kthread_stop(fs_info->cleaner_kthread);
> 		filemap_write_and_wait(fs_info->btree_inode->i_mapping);
> 		btrfs_sysfs_remove_mounted(fs_info);
> 		btrfs_sysfs_remove_fsid(fs_info->fs_devices);
> 		btrfs_put_block_group_cache(fs_info);
> 
> (after)		btrfs_free_fs_roots(fs_info);
> 
> If something called by btrfs_free_fs_roots depends on structures
> removed/cleaned by the other functions, eg. btrfs_put_block_group_cache
> ti could be a problem.
> 
> I haven't spotted anything so far, the functions are called after
> failure still during mount, this is easier than shutdown sequence after
> a full mount.
> 

Yeah, I'm always loath to move this stuff around.  I actually think we can
end up with the case described in close_ctree if we get an error during log
replay on mount.  I'll rework this some more.  Thanks,

Josef


* Re: [PATCH 2/2] btrfs: fix root leak printk to use %lld instead of %llu
  2020-07-27 14:03       ` David Sterba
  2020-07-27 14:07         ` David Sterba
@ 2020-07-27 23:47         ` Qu Wenruo
  2020-07-28 15:09           ` David Sterba
  1 sibling, 1 reply; 12+ messages in thread
From: Qu Wenruo @ 2020-07-27 23:47 UTC
  To: dsterba, Josef Bacik, linux-btrfs, kernel-team


On 2020/7/27 at 10:03 PM, David Sterba wrote:
> On Fri, Jul 24, 2020 at 08:40:17AM +0800, Qu Wenruo wrote:
>>
>>
>> On 2020/7/23 at 10:20 PM, David Sterba wrote:
>>> On Wed, Jul 22, 2020 at 12:07:22PM -0400, Josef Bacik wrote:
>>>> I'm a actual human being so am incapable of converting u64 to s64 in my
>>>> head, use %lld so we can see the negative number in order to identify
>>>> which of our special roots we leaked.
>>>>
>>>> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
>>>> ---
>>>>  fs/btrfs/disk-io.c | 2 +-
>>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>>
>>>> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
>>>> index f1fdbdd44c02..cc4081a1c7f9 100644
>>>> --- a/fs/btrfs/disk-io.c
>>>> +++ b/fs/btrfs/disk-io.c
>>>> @@ -1508,7 +1508,7 @@ void btrfs_check_leaked_roots(struct btrfs_fs_info *fs_info)
>>>>  	while (!list_empty(&fs_info->allocated_roots)) {
>>>>  		root = list_first_entry(&fs_info->allocated_roots,
>>>>  					struct btrfs_root, leak_list);
>>>> -		btrfs_err(fs_info, "leaked root %llu-%llu refcount %d",
>>>> +		btrfs_err(fs_info, "leaked root %lld-%llu refcount %d",
>>>
>>> But this is wrong in another way, roots with high numbers will appear as
>>> negative numbers.
>>>
>>
>> Nope. We won't have that many roots.
>>
>> In fact, for subvolumes, the highest id is only 2 ^ 48, an special limit
>> introduced for qgroup.
> 
> It's not a hard limit and certainly can have subvolumes with numbers
> that high. That qgoups interpret the qgroup in some way is not a
> limitation on subvolumes. We'll have to start reusing the subvolume ids
> eventually, with qgroups we can on.

Strange...

I still remember that we put that limit as a hard limit for subvolume
creation.
Did we change that behavior in recent releases?

> 
> Also the negativer numbers start to appear with 2^32 so that's still
> below the percieved limit of 2^48.

Nope. For signed 64-bit it's -2^63 ~ 2^63 - 1, so the values only turn
negative at 2^63, not at 2^32.

> 
>> So we won't have high enough subvolume ids to be negative, but only
>> special trees.
> 
> For the internal trees we eg. have pretty-printer in progs so kernel can
> reuse that.
> 

That's true. A pretty-printed tree name is much easier for humans to read.

Thanks,
Qu
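
A quick standalone check of the arithmetic in this exchange (the usual
two's-complement wrap assumed): the qgroup-imposed ceiling of 2^48 is far
below 2^63, the first point at which %lld renders a u64 as negative, so
only the special tree objectids near 2^64 ever print negative.

#include <stdio.h>

int main(void)
{
	unsigned long long max_subvol = 1ULL << 48;	/* qgroup-imposed ceiling */
	unsigned long long sign_flip  = 1ULL << 63;	/* first value %lld shows as negative */
	unsigned long long data_reloc = -9ULL;		/* special tree objectid */

	printf("%lld\n", (long long)max_subvol);	/* 281474976710656: still positive */
	printf("%lld\n", (long long)sign_flip);		/* -9223372036854775808 */
	printf("%lld\n", (long long)data_reloc);	/* -9 */
	return 0;
}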



* Re: [PATCH 2/2] btrfs: fix root leak printk to use %lld instead of %llu
  2020-07-27 23:47         ` Qu Wenruo
@ 2020-07-28 15:09           ` David Sterba
  0 siblings, 0 replies; 12+ messages in thread
From: David Sterba @ 2020-07-28 15:09 UTC
  To: Qu Wenruo; +Cc: dsterba, Josef Bacik, linux-btrfs, kernel-team

On Tue, Jul 28, 2020 at 07:47:00AM +0800, Qu Wenruo wrote:
> I still remember that we put that limit as a hard limit for subvolume
> creation.

Ok, I found it, added by your patch e09fe2d2119800e6 in 2015. It's not
documented anywhere, e.g. the section 5 manual page states the limit is
2^64.

2^48 should be enough for everyone, and we probably can't change it
right now as it's part of the API and ABI. Oh well.
