* 5.11.0: open ctree failed: device total_bytes should be at most X but found Y
@ 2021-02-22 19:38 Steven Davies
2021-02-23 9:11 ` Johannes Thumshirn
From: Steven Davies @ 2021-02-22 19:38 UTC (permalink / raw)
To: linux-btrfs
Booted my system with kernel 5.11.0 vanilla for the first time and received this:
BTRFS info (device nvme0n1p2): has skinny extents
BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
964770336768
BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
Booting with 5.10.12 has no issues.
# btrfs filesystem usage /
Overall:
Device size: 898.51GiB
Device allocated: 620.06GiB
Device unallocated: 278.45GiB
Device missing: 0.00B
Used: 616.58GiB
Free (estimated): 279.94GiB (min: 140.72GiB)
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 512.00MiB (used: 0.00B)
Data,single: Size:568.00GiB, Used:566.51GiB (99.74%)
/dev/nvme0n1p2 568.00GiB
Metadata,DUP: Size:26.00GiB, Used:25.03GiB (96.29%)
/dev/nvme0n1p2 52.00GiB
System,DUP: Size:32.00MiB, Used:80.00KiB (0.24%)
/dev/nvme0n1p2 64.00MiB
Unallocated:
/dev/nvme0n1p2 278.45GiB
# parted -l
Model: Sabrent Rocket Q (nvme)
Disk /dev/nvme0n1: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 1075MB 1074MB fat32 boot, esp
2 1075MB 966GB 965GB btrfs
3 966GB 1000GB 34.4GB linux-swap(v1) swap
What has changed in 5.11 which might cause this?
* Re: 5.11.0: open ctree failed: device total_bytes should be at most X but found Y
2021-02-22 19:38 5.11.0: open ctree failed: device total_bytes should be at most X but found Y Steven Davies
@ 2021-02-23 9:11 ` Johannes Thumshirn
2021-02-23 9:43 ` Johannes Thumshirn
From: Johannes Thumshirn @ 2021-02-23 9:11 UTC (permalink / raw)
To: Steven Davies, linux-btrfs, Anand Jain
On 22/02/2021 21:07, Steven Davies wrote:
[+CC Anand ]
> Booted my system with kernel 5.11.0 vanilla for the first time and received this:
>
> BTRFS info (device nvme0n1p2): has skinny extents
> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
> 964770336768
> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>
> Booting with 5.10.12 has no issues.
>
> # btrfs filesystem usage /
> Overall:
> Device size: 898.51GiB
> Device allocated: 620.06GiB
> Device unallocated: 278.45GiB
> Device missing: 0.00B
> Used: 616.58GiB
> Free (estimated): 279.94GiB (min: 140.72GiB)
> Data ratio: 1.00
> Metadata ratio: 2.00
> Global reserve: 512.00MiB (used: 0.00B)
>
> Data,single: Size:568.00GiB, Used:566.51GiB (99.74%)
> /dev/nvme0n1p2 568.00GiB
>
> Metadata,DUP: Size:26.00GiB, Used:25.03GiB (96.29%)
> /dev/nvme0n1p2 52.00GiB
>
> System,DUP: Size:32.00MiB, Used:80.00KiB (0.24%)
> /dev/nvme0n1p2 64.00MiB
>
> Unallocated:
> /dev/nvme0n1p2 278.45GiB
>
> # parted -l
> Model: Sabrent Rocket Q (nvme)
> Disk /dev/nvme0n1: 1000GB
> Sector size (logical/physical): 512B/512B
> Partition Table: gpt
> Disk Flags:
>
> Number Start End Size File system Name Flags
> 1 1049kB 1075MB 1074MB fat32 boot, esp
> 2 1075MB 966GB 965GB btrfs
> 3 966GB 1000GB 34.4GB linux-swap(v1) swap
>
> What has changed in 5.11 which might cause this?
>
>
This line:
> BTRFS info (device nvme0n1p2): has skinny extents
> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
> 964770336768
> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
comes from 3a160a933111 ("btrfs: drop never met disk total bytes check in verify_one_dev_extent")
which went into v5.11-rc1.
IIUIC the device item's total_bytes and the block device inode's size are off by 12M, so the check
introduced in the above commit refuses to mount the FS.
Anand any idea?
Byte,
Johannes
* Re: 5.11.0: open ctree failed: device total_bytes should be at most X but found Y
2021-02-23 9:11 ` Johannes Thumshirn
@ 2021-02-23 9:43 ` Johannes Thumshirn
2021-02-23 14:30 ` David Sterba
From: Johannes Thumshirn @ 2021-02-23 9:43 UTC (permalink / raw)
To: Steven Davies, linux-btrfs, Anand Jain
On 23/02/2021 10:13, Johannes Thumshirn wrote:
> On 22/02/2021 21:07, Steven Davies wrote:
>
> [+CC Anand ]
>
>> Booted my system with kernel 5.11.0 vanilla for the first time and received this:
>>
>> BTRFS info (device nvme0n1p2): has skinny extents
>> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
>> 964770336768
>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>>
>> Booting with 5.10.12 has no issues.
>>
>> # btrfs filesystem usage /
>> Overall:
>> Device size: 898.51GiB
>> Device allocated: 620.06GiB
>> Device unallocated: 278.45GiB
>> Device missing: 0.00B
>> Used: 616.58GiB
>> Free (estimated): 279.94GiB (min: 140.72GiB)
>> Data ratio: 1.00
>> Metadata ratio: 2.00
>> Global reserve: 512.00MiB (used: 0.00B)
>>
>> Data,single: Size:568.00GiB, Used:566.51GiB (99.74%)
>> /dev/nvme0n1p2 568.00GiB
>>
>> Metadata,DUP: Size:26.00GiB, Used:25.03GiB (96.29%)
>> /dev/nvme0n1p2 52.00GiB
>>
>> System,DUP: Size:32.00MiB, Used:80.00KiB (0.24%)
>> /dev/nvme0n1p2 64.00MiB
>>
>> Unallocated:
>> /dev/nvme0n1p2 278.45GiB
>>
>> # parted -l
>> Model: Sabrent Rocket Q (nvme)
>> Disk /dev/nvme0n1: 1000GB
>> Sector size (logical/physical): 512B/512B
>> Partition Table: gpt
>> Disk Flags:
>>
>> Number Start End Size File system Name Flags
>> 1 1049kB 1075MB 1074MB fat32 boot, esp
>> 2 1075MB 966GB 965GB btrfs
>> 3 966GB 1000GB 34.4GB linux-swap(v1) swap
>>
>> What has changed in 5.11 which might cause this?
>>
>>
>
> This line:
>> BTRFS info (device nvme0n1p2): has skinny extents
>> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
>> 964770336768
>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>
> comes from 3a160a933111 ("btrfs: drop never met disk total bytes check in verify_one_dev_extent")
> which went into v5.11-rc1.
>
> IIUIC the device item's total_bytes and the block device inode's size are off by 12M, so the check
> introduced in the above commit refuses to mount the FS.
>
> Anand any idea?
OK this is getting interesting:
btrfs-progs sets the device's total_bytes at mkfs time and obtains it from ioctl(..., BLKGETSIZE64, ...);
BLKGETSIZE64 does:
return put_u64(argp, i_size_read(bdev->bd_inode));
The new check in read_one_dev() does:
u64 max_total_bytes = i_size_read(device->bdev->bd_inode);
if (device->total_bytes > max_total_bytes) {
btrfs_err(fs_info,
"device total_bytes should be at most %llu but found %llu",
max_total_bytes, device->total_bytes);
return -EINVAL;
So the bdev inode's i_size must have changed between mkfs and mount.
Steven, can you please run:
blockdev --getsize64 /dev/nvme0n1p2
* Re: 5.11.0: open ctree failed: device total_bytes should be at most X but found Y
2021-02-23 9:43 ` Johannes Thumshirn
@ 2021-02-23 14:30 ` David Sterba
2021-02-23 17:19 ` Steven Davies
From: David Sterba @ 2021-02-23 14:30 UTC (permalink / raw)
To: Johannes Thumshirn; +Cc: Steven Davies, linux-btrfs, Anand Jain
On Tue, Feb 23, 2021 at 09:43:04AM +0000, Johannes Thumshirn wrote:
> On 23/02/2021 10:13, Johannes Thumshirn wrote:
> > On 22/02/2021 21:07, Steven Davies wrote:
> >
> > [+CC Anand ]
> >
> >> Booted my system with kernel 5.11.0 vanilla for the first time and received this:
> >>
> >> BTRFS info (device nvme0n1p2): has skinny extents
> >> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
> >> 964770336768
> >> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
> >>
> >> Booting with 5.10.12 has no issues.
> >>
> >> # btrfs filesystem usage /
> >> Overall:
> >> Device size: 898.51GiB
> >> Device allocated: 620.06GiB
> >> Device unallocated: 278.45GiB
> >> Device missing: 0.00B
> >> Used: 616.58GiB
> >> Free (estimated): 279.94GiB (min: 140.72GiB)
> >> Data ratio: 1.00
> >> Metadata ratio: 2.00
> >> Global reserve: 512.00MiB (used: 0.00B)
> >>
> >> Data,single: Size:568.00GiB, Used:566.51GiB (99.74%)
> >> /dev/nvme0n1p2 568.00GiB
> >>
> >> Metadata,DUP: Size:26.00GiB, Used:25.03GiB (96.29%)
> >> /dev/nvme0n1p2 52.00GiB
> >>
> >> System,DUP: Size:32.00MiB, Used:80.00KiB (0.24%)
> >> /dev/nvme0n1p2 64.00MiB
> >>
> >> Unallocated:
> >> /dev/nvme0n1p2 278.45GiB
> >>
> >> # parted -l
> >> Model: Sabrent Rocket Q (nvme)
> >> Disk /dev/nvme0n1: 1000GB
> >> Sector size (logical/physical): 512B/512B
> >> Partition Table: gpt
> >> Disk Flags:
> >>
> >> Number Start End Size File system Name Flags
> >> 1 1049kB 1075MB 1074MB fat32 boot, esp
> >> 2 1075MB 966GB 965GB btrfs
> >> 3 966GB 1000GB 34.4GB linux-swap(v1) swap
> >>
> >> What has changed in 5.11 which might cause this?
> >>
> >>
> >
> > This line:
> >> BTRFS info (device nvme0n1p2): has skinny extents
> >> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
> >> 964770336768
> >> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
> >
> > comes from 3a160a933111 ("btrfs: drop never met disk total bytes check in verify_one_dev_extent")
> > which went into v5.11-rc1.
> >
> > IIUIC the device item's total_bytes and the block device inode's size are off by 12M, so the check
> > introduced in the above commit refuses to mount the FS.
> >
> > Anand any idea?
>
> OK this is getting interesting:
> btrfs-progs sets the device's total_bytes at mkfs time and obtains it from ioctl(..., BLKGETSIZE64, ...);
>
> BLKGETSIZE64 does:
> return put_u64(argp, i_size_read(bdev->bd_inode));
>
> The new check in read_one_dev() does:
>
> u64 max_total_bytes = i_size_read(device->bdev->bd_inode);
>
> if (device->total_bytes > max_total_bytes) {
> btrfs_err(fs_info,
> "device total_bytes should be at most %llu but found %llu",
> max_total_bytes, device->total_bytes);
> return -EINVAL;
>
>
> So the bdev inode's i_size must have changed between mkfs and mount.
The kernel side verifies that the physical device size is not smaller
than the size recorded in the device item, so that makes sense. I was a
bit doubtful about the check but it can detect real problems or point
out some weirdness.
The 12M delta is not big, but I'd expect that for a physical device it
should not change. Another possibility would be some kind of rounding to
a reasonable number, like 16M.
* Re: 5.11.0: open ctree failed: device total_bytes should be at most X but found Y
2021-02-23 14:30 ` David Sterba
@ 2021-02-23 17:19 ` Steven Davies
2021-02-23 17:35 ` Johannes Thumshirn
From: Steven Davies @ 2021-02-23 17:19 UTC (permalink / raw)
To: dsterba, Johannes Thumshirn, Steven Davies, linux-btrfs, Anand Jain
On 2021-02-23 14:30, David Sterba wrote:
> On Tue, Feb 23, 2021 at 09:43:04AM +0000, Johannes Thumshirn wrote:
>> On 23/02/2021 10:13, Johannes Thumshirn wrote:
>> > On 22/02/2021 21:07, Steven Davies wrote:
>> >
>> > [+CC Anand ]
>> >
>> >> Booted my system with kernel 5.11.0 vanilla for the first time and received this:
>> >>
>> >> BTRFS info (device nvme0n1p2): has skinny extents
>> >> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
>> >> 964770336768
>> >> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>> >>
>> >> Booting with 5.10.12 has no issues.
>> >>
>> >> # btrfs filesystem usage /
>> >> Overall:
>> >> Device size: 898.51GiB
>> >> Device allocated: 620.06GiB
>> >> Device unallocated: 278.45GiB
>> >> Device missing: 0.00B
>> >> Used: 616.58GiB
>> >> Free (estimated): 279.94GiB (min: 140.72GiB)
>> >> Data ratio: 1.00
>> >> Metadata ratio: 2.00
>> >> Global reserve: 512.00MiB (used: 0.00B)
>> >>
>> >> Data,single: Size:568.00GiB, Used:566.51GiB (99.74%)
>> >> /dev/nvme0n1p2 568.00GiB
>> >>
>> >> Metadata,DUP: Size:26.00GiB, Used:25.03GiB (96.29%)
>> >> /dev/nvme0n1p2 52.00GiB
>> >>
>> >> System,DUP: Size:32.00MiB, Used:80.00KiB (0.24%)
>> >> /dev/nvme0n1p2 64.00MiB
>> >>
>> >> Unallocated:
>> >> /dev/nvme0n1p2 278.45GiB
>> >>
>> >> # parted -l
>> >> Model: Sabrent Rocket Q (nvme)
>> >> Disk /dev/nvme0n1: 1000GB
>> >> Sector size (logical/physical): 512B/512B
>> >> Partition Table: gpt
>> >> Disk Flags:
>> >>
>> >> Number Start End Size File system Name Flags
>> >> 1 1049kB 1075MB 1074MB fat32 boot, esp
>> >> 2 1075MB 966GB 965GB btrfs
>> >> 3 966GB 1000GB 34.4GB linux-swap(v1) swap
>> >>
>> >> What has changed in 5.11 which might cause this?
>> >>
>> >>
>> >
>> > This line:
>> >> BTRFS info (device nvme0n1p2): has skinny extents
>> >> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
>> >> 964770336768
>> >> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>> >
>> > comes from 3a160a933111 ("btrfs: drop never met disk total bytes check in verify_one_dev_extent")
>> > which went into v5.11-rc1.
>> >
>> > IIUIC the device item's total_bytes and the block device inode's size are off by 12M, so the check
>> > introduced in the above commit refuses to mount the FS.
>> >
>> > Anand any idea?
>>
>> OK this is getting interesting:
>> btrfs-progs sets the device's total_bytes at mkfs time and obtains it
>> from ioctl(..., BLKGETSIZE64, ...);
>>
>> BLKGETSIZE64 does:
>> return put_u64(argp, i_size_read(bdev->bd_inode));
>>
>> The new check in read_one_dev() does:
>>
>> u64 max_total_bytes =
>> i_size_read(device->bdev->bd_inode);
>>
>> if (device->total_bytes > max_total_bytes) {
>> btrfs_err(fs_info,
>> "device total_bytes should be at most %llu but
>> found %llu",
>> max_total_bytes,
>> device->total_bytes);
>> return -EINVAL;
>>
>>
>> So the bdev inode's i_size must have changed between mkfs and mount.
That's likely; this is my development/testing machine and I've changed
partitions (and btrfs RAID levels) around more than once since mkfs
time. I can't remember if or how I've modified the fs to take account of
this.
>> Steven, can you please run:
>> blockdev --getsize64 /dev/nvme0n1p2
# blockdev --getsize64 /dev/nvme0n1p2
964757028864
>
> The kernel side verifies that the physical device size is not smaller
> than the size recorded in the device item, so that makes sense. I was a
> bit doubtful about the check but it can detect real problems or point
> out some weirdness.
Agreed. It's useful, but somewhat painful when it refuses to mount a
root device after reboot.
> The 12M delta is not big, but I'd expect that for a physical device it
> should not change. Another possibility would be some kind of rounding
> to
> a reasonable number, like 16M.
Is there a simple way to fix this partition so that btrfs and the
partition table agree on its size?
--
Steven Davies
* Re: 5.11.0: open ctree failed: device total_bytes should be at most X but found Y
2021-02-23 17:19 ` Steven Davies
@ 2021-02-23 17:35 ` Johannes Thumshirn
2021-02-24 1:20 ` Anand Jain
From: Johannes Thumshirn @ 2021-02-23 17:35 UTC (permalink / raw)
To: Steven Davies, dsterba, linux-btrfs, Anand Jain
On 23/02/2021 18:20, Steven Davies wrote:
> On 2021-02-23 14:30, David Sterba wrote:
>> On Tue, Feb 23, 2021 at 09:43:04AM +0000, Johannes Thumshirn wrote:
>>> On 23/02/2021 10:13, Johannes Thumshirn wrote:
>>>> On 22/02/2021 21:07, Steven Davies wrote:
>>>>
>>>> [+CC Anand ]
>>>>
>>>>> Booted my system with kernel 5.11.0 vanilla for the first time and received this:
>>>>>
>>>>> BTRFS info (device nvme0n1p2): has skinny extents
>>>>> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
>>>>> 964770336768
>>>>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>>>>>
>>>>> Booting with 5.10.12 has no issues.
>>>>>
>>>>> # btrfs filesystem usage /
>>>>> Overall:
>>>>> Device size: 898.51GiB
>>>>> Device allocated: 620.06GiB
>>>>> Device unallocated: 278.45GiB
>>>>> Device missing: 0.00B
>>>>> Used: 616.58GiB
>>>>> Free (estimated): 279.94GiB (min: 140.72GiB)
>>>>> Data ratio: 1.00
>>>>> Metadata ratio: 2.00
>>>>> Global reserve: 512.00MiB (used: 0.00B)
>>>>>
>>>>> Data,single: Size:568.00GiB, Used:566.51GiB (99.74%)
>>>>> /dev/nvme0n1p2 568.00GiB
>>>>>
>>>>> Metadata,DUP: Size:26.00GiB, Used:25.03GiB (96.29%)
>>>>> /dev/nvme0n1p2 52.00GiB
>>>>>
>>>>> System,DUP: Size:32.00MiB, Used:80.00KiB (0.24%)
>>>>> /dev/nvme0n1p2 64.00MiB
>>>>>
>>>>> Unallocated:
>>>>> /dev/nvme0n1p2 278.45GiB
>>>>>
>>>>> # parted -l
>>>>> Model: Sabrent Rocket Q (nvme)
>>>>> Disk /dev/nvme0n1: 1000GB
>>>>> Sector size (logical/physical): 512B/512B
>>>>> Partition Table: gpt
>>>>> Disk Flags:
>>>>>
>>>>> Number Start End Size File system Name Flags
>>>>> 1 1049kB 1075MB 1074MB fat32 boot, esp
>>>>> 2 1075MB 966GB 965GB btrfs
>>>>> 3 966GB 1000GB 34.4GB linux-swap(v1) swap
>>>>>
>>>>> What has changed in 5.11 which might cause this?
>>>>>
>>>>>
>>>>
>>>> This line:
>>>>> BTRFS info (device nvme0n1p2): has skinny extents
>>>>> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
>>>>> 964770336768
>>>>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>>>>
>>>> comes from 3a160a933111 ("btrfs: drop never met disk total bytes check in verify_one_dev_extent")
>>>> which went into v5.11-rc1.
>>>>
>>>> IIUIC the device item's total_bytes and the block device inode's size are off by 12M, so the check
>>>> introduced in the above commit refuses to mount the FS.
>>>>
>>>> Anand any idea?
>>>
>>> OK this is getting interesting:
>>> btrfs-progs sets the device's total_bytes at mkfs time and obtains it
>>> from ioctl(..., BLKGETSIZE64, ...);
>>>
>>> BLKGETSIZE64 does:
>>> return put_u64(argp, i_size_read(bdev->bd_inode));
>>>
>>> The new check in read_one_dev() does:
>>>
>>> u64 max_total_bytes =
>>> i_size_read(device->bdev->bd_inode);
>>>
>>> if (device->total_bytes > max_total_bytes) {
>>> btrfs_err(fs_info,
>>> "device total_bytes should be at most %llu but
>>> found %llu",
>>> max_total_bytes,
>>> device->total_bytes);
>>> return -EINVAL;
>>>
>>>
>>> So the bdev inode's i_size must have changed between mkfs and mount.
>
>> That's likely; this is my development/testing machine and I've changed
> partitions (and btrfs RAID levels) around more than once since mkfs
> time. I can't remember if or how I've modified the fs to take account of
> this.
>
>>> Steven, can you please run:
>>> blockdev --getsize64 /dev/nvme0n1p2
>
> # blockdev --getsize64 /dev/nvme0n1p2
> 964757028864
>
>>
>> The kernel side verifies that the physical device size is not smaller
>> than the size recorded in the device item, so that makes sense. I was a
>> bit doubtful about the check but it can detect real problems or point
>> out some weirdness.
>
> Agreed. It's useful, but somewhat painful when it refuses to mount a
> root device after reboot.
>
>> The 12M delta is not big, but I'd expect that for a physical device it
>> should not change. Another possibility would be some kind of rounding
>> to
>> a reasonable number, like 16M.
>
> Is there a simple way to fix this partition so that btrfs and the
> partition table agree on its size?
>
Unless someone's yelling at me that this is bad advice (David, Anand?),
I'd go for:
btrfs filesystem resize max /
I've personally never shrunk a device, but looking at the code it will
write the block device inode's i_size to the device extents, and possibly
relocate data.
Hope I didn't give bad advice,
Johannes
* Re: 5.11.0: open ctree failed: device total_bytes should be at most X but found Y
2021-02-23 17:35 ` Johannes Thumshirn
@ 2021-02-24 1:20 ` Anand Jain
2021-02-24 17:19 ` Steven Davies
From: Anand Jain @ 2021-02-24 1:20 UTC (permalink / raw)
To: Johannes Thumshirn, Steven Davies, dsterba, linux-btrfs
On 24/02/2021 01:35, Johannes Thumshirn wrote:
> On 23/02/2021 18:20, Steven Davies wrote:
>> On 2021-02-23 14:30, David Sterba wrote:
>>> On Tue, Feb 23, 2021 at 09:43:04AM +0000, Johannes Thumshirn wrote:
>>>> On 23/02/2021 10:13, Johannes Thumshirn wrote:
>>>>> On 22/02/2021 21:07, Steven Davies wrote:
>>>>>
>>>>> [+CC Anand ]
>>>>>
>>>>>> Booted my system with kernel 5.11.0 vanilla for the first time and received this:
>>>>>>
>>>>>> BTRFS info (device nvme0n1p2): has skinny extents
>>>>>> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
>>>>>> 964770336768
>>>>>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>>>>>>
>>>>>> Booting with 5.10.12 has no issues.
>>>>>>
>>>>>> # btrfs filesystem usage /
>>>>>> Overall:
>>>>>> Device size: 898.51GiB
>>>>>> Device allocated: 620.06GiB
>>>>>> Device unallocated: 278.45GiB
>>>>>> Device missing: 0.00B
>>>>>> Used: 616.58GiB
>>>>>> Free (estimated): 279.94GiB (min: 140.72GiB)
>>>>>> Data ratio: 1.00
>>>>>> Metadata ratio: 2.00
>>>>>> Global reserve: 512.00MiB (used: 0.00B)
>>>>>>
>>>>>> Data,single: Size:568.00GiB, Used:566.51GiB (99.74%)
>>>>>> /dev/nvme0n1p2 568.00GiB
>>>>>>
>>>>>> Metadata,DUP: Size:26.00GiB, Used:25.03GiB (96.29%)
>>>>>> /dev/nvme0n1p2 52.00GiB
>>>>>>
>>>>>> System,DUP: Size:32.00MiB, Used:80.00KiB (0.24%)
>>>>>> /dev/nvme0n1p2 64.00MiB
>>>>>>
>>>>>> Unallocated:
>>>>>> /dev/nvme0n1p2 278.45GiB
>>>>>>
>>>>>> # parted -l
>>>>>> Model: Sabrent Rocket Q (nvme)
>>>>>> Disk /dev/nvme0n1: 1000GB
>>>>>> Sector size (logical/physical): 512B/512B
>>>>>> Partition Table: gpt
>>>>>> Disk Flags:
>>>>>>
>>>>>> Number Start End Size File system Name Flags
>>>>>> 1 1049kB 1075MB 1074MB fat32 boot, esp
>>>>>> 2 1075MB 966GB 965GB btrfs
>>>>>> 3 966GB 1000GB 34.4GB linux-swap(v1) swap
>>>>>>
>>>>>> What has changed in 5.11 which might cause this?
>>>>>>
>>>>>>
>>>>>
>>>>> This line:
>>>>>> BTRFS info (device nvme0n1p2): has skinny extents
>>>>>> BTRFS error (device nvme0n1p2): device total_bytes should be at most 964757028864 but found
>>>>>> 964770336768
>>>>>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>>>>>
>>>>> comes from 3a160a933111 ("btrfs: drop never met disk total bytes check in verify_one_dev_extent")
>>>>> which went into v5.11-rc1.
>>>>>
>>>>> IIUIC the device item's total_bytes and the block device inode's size are off by 12M, so the check
>>>>> introduced in the above commit refuses to mount the FS.
>>>>>
>>>>> Anand any idea?
>>>>
>>>> OK this is getting interesting:
>>>> btrfs-progs sets the device's total_bytes at mkfs time and obtains it
>>>> from ioctl(..., BLKGETSIZE64, ...);
>>>>
>>>> BLKGETSIZE64 does:
>>>> return put_u64(argp, i_size_read(bdev->bd_inode));
>>>>
>>>> The new check in read_one_dev() does:
>>>>
>>>> u64 max_total_bytes =
>>>> i_size_read(device->bdev->bd_inode);
>>>>
>>>> if (device->total_bytes > max_total_bytes) {
>>>> btrfs_err(fs_info,
>>>> "device total_bytes should be at most %llu but
>>>> found %llu",
>>>> max_total_bytes,
>>>> device->total_bytes);
>>>> return -EINVAL;
>>>>
>>>>
>>>> So the bdev inode's i_size must have changed between mkfs and mount.
>>
>> That's likely; this is my development/testing machine and I've changed
>> partitions (and btrfs RAID levels) around more than once since mkfs
>> time. I can't remember if or how I've modified the fs to take account of
>> this.
>>
What you say matches the kernel logs.
>>>> Steven, can you please run:
>>>> blockdev --getsize64 /dev/nvme0n1p2
>>
>> # blockdev --getsize64 /dev/nvme0n1p2
>> 964757028864
Size at the time of mkfs was 964770336768. Now it is 964757028864.
>>
>>>
>>> The kernel side verifies that the physical device size is not smaller
>>> than the size recorded in the device item, so that makes sense. I was a
>>> bit doubtful about the check but it can detect real problems or point
>>> out some weirdness.
>>
>> Agreed. It's useful, but somewhat painful when it refuses to mount a
>> root device after reboot.
>>
>>> The 12M delta is not big, but I'd expect that for a physical device it
>>> should not change. Another possibility would be some kind of rounding
>>> to
>>> a reasonable number, like 16M.
>>
>> Is there a simple way to fix this partition so that btrfs and the
>> partition table agree on its size?
>>
>
> Unless someone's yelling at me that this is bad advice (David, Anand?),
> I'd go for:
> btrfs filesystem resize max /
I was thinking about the same step when I was reading above.
>> I've personally never shrunk a device, but looking at the code it
> write the blockdevice's inode i_size to the device extents, and possibly
> relocate data.
Shrink works. I have tested it before.
I hope shrink helps here too. Please let us know.
Thanks, Anand
>
> Hope I didn't give bad advice,
> Johannes
>
* Re: 5.11.0: open ctree failed: device total_bytes should be at most X but found Y
2021-02-24 1:20 ` Anand Jain
@ 2021-02-24 17:19 ` Steven Davies
From: Steven Davies @ 2021-02-24 17:19 UTC (permalink / raw)
To: Anand Jain; +Cc: Johannes Thumshirn, dsterba, linux-btrfs
On 2021-02-24 01:20, Anand Jain wrote:
> On 24/02/2021 01:35, Johannes Thumshirn wrote:
>> On 23/02/2021 18:20, Steven Davies wrote:
>>> On 2021-02-23 14:30, David Sterba wrote:
>>>> On Tue, Feb 23, 2021 at 09:43:04AM +0000, Johannes Thumshirn wrote:
>>>>> On 23/02/2021 10:13, Johannes Thumshirn wrote:
>>>>>> On 22/02/2021 21:07, Steven Davies wrote:
>>>>>>> Booted my system with kernel 5.11.0 vanilla for the first time
>>>>>>> and received this:
>>>>>>>
>>>>>>> BTRFS info (device nvme0n1p2): has skinny extents
>>>>>>> BTRFS error (device nvme0n1p2): device total_bytes should be at
>>>>>>> most 964757028864 but found
>>>>>>> 964770336768
>>>>>>> BTRFS error (device nvme0n1p2): failed to read chunk tree: -22
>>>>>>>
>>>>>>> Booting with 5.10.12 has no issues.
>>>>> So the bdev inode's i_size must have changed between mkfs and
>>>>> mount.
>>>
>
>
>>> That's likely; this is my development/testing machine and I've
>>> changed
>>> partitions (and btrfs RAID levels) around more than once since mkfs
>>> time. I can't remember if or how I've modified the fs to take account
>>> of
>>> this.
>>>
>
> What you say matches the kernel logs.
>
>>>>> Steven, can you please run:
>>>>> blockdev --getsize64 /dev/nvme0n1p2
>>>
>>> # blockdev --getsize64 /dev/nvme0n1p2
>>> 964757028864
>
>
> Size at the time of mkfs was 964770336768. Now it is 964757028864.
>
>
>>> Is there a simple way to fix this partition so that btrfs and the
>>> partition table agree on its size?
>>>
>>
>> Unless someone's yelling at me that this is bad advice (David,
>> Anand?),
>
>
>> I'd go for:
>> btrfs filesystem resize max /
>
> I was thinking about the same step when I was reading above.
>
>> I've personally never shrunk a device, but looking at the code it
>> will
>> write the blockdevice's inode i_size to the device extents, and
>> possibly
>> relocate data.
>
>
> Shrink works. I have tested it before.
> I hope shrink helps here too. Please let us know.
>
> Thanks, Anand
Yes, this worked - at least there's no panic on boot (albeit this
single-device fs is devid 3 now, so I had to use 3:max).
--
Steven Davies
Thread overview: 8+ messages
2021-02-22 19:38 5.11.0: open ctree failed: device total_bytes should be at most X but found Y Steven Davies
2021-02-23 9:11 ` Johannes Thumshirn
2021-02-23 9:43 ` Johannes Thumshirn
2021-02-23 14:30 ` David Sterba
2021-02-23 17:19 ` Steven Davies
2021-02-23 17:35 ` Johannes Thumshirn
2021-02-24 1:20 ` Anand Jain
2021-02-24 17:19 ` Steven Davies