* corrupt leaf
@ 2020-02-27  5:59 4e868df3
  2020-02-27  6:30 ` Chris Murphy
  2020-02-27  8:25 ` Qu Wenruo
  0 siblings, 2 replies; 12+ messages in thread
From: 4e868df3 @ 2020-02-27  5:59 UTC (permalink / raw)
  To: linux-btrfs

[-- Attachment #1: Type: text/plain, Size: 455 bytes --]

I updated kernels recently and now am getting a corrupt leaf error.
The drives decrypt and mount, and I can touch a file briefly until the
mount switches over to read-only mode. Extended SMART tests show all 6
of my drives have a healthy status. I have a backup of the data. The
array is configured as RAID10. As the BTRFS filesystem remains
accessible / read-only, I am able to take an additional backup. What
is the best way to recover from this error?

[-- Attachment #2: info.txt --]
[-- Type: text/plain, Size: 4178 bytes --]

layout: proxmox with direct /dev passthrough to VMs

$ uname -a
VM: Linux server0 5.5.6-arch1-1 #1 SMP PREEMPT Mon, 24 Feb 2020 12:20:16 +0000 x86_64 GNU/Linux
proxmox: Linux pxe 4.15.18-26-pve #1 SMP PVE 4.15.18-54 (Sat, 15 Feb 2020 15:34:24 +0100) x86_64 GNU/Linux

$ btrfs --version (VM)
btrfs-progs v5.4

$ btrfs fi show
Label: none  uuid: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
        Total devices 6 FS bytes used 2.88TiB
        devid    1 size 2.73TiB used 1.02TiB path /dev/mapper/luks0
        devid    2 size 2.73TiB used 1.02TiB path /dev/mapper/luks1
        devid    3 size 2.73TiB used 1.02TiB path /dev/mapper/luks2
        devid    4 size 2.73TiB used 1.02TiB path /dev/mapper/luks3
        devid    5 size 2.73TiB used 1.02TiB path /dev/mapper/luks4
        devid    6 size 2.73TiB used 1.02TiB path /dev/mapper/luks5

$ btrfs fi df /mnt/raid  
Data, RAID10: total=3.05TiB, used=2.87TiB
System, RAID10: total=103.88MiB, used=320.00KiB
Metadata, RAID10: total=6.09GiB, used=4.46GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

$ dmesg | grep BTRFS
[   19.060581] BTRFS: device fsid 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00 devid 5 transid 361687 /dev/dm-5 scanned by systemd-udevd (553)
[   19.061232] BTRFS: device fsid 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00 devid 1 transid 361687 /dev/dm-0 scanned by systemd-udevd (526)
[   19.062756] BTRFS: device fsid 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00 devid 2 transid 361687 /dev/dm-3 scanned by systemd-udevd (538)
[   19.063265] BTRFS: device fsid 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00 devid 4 transid 361687 /dev/dm-2 scanned by systemd-udevd (545)
[   19.071525] BTRFS: device fsid 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00 devid 6 transid 361687 /dev/dm-1 scanned by systemd-udevd (557)
[   19.073708] BTRFS: device fsid 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00 devid 3 transid 361687 /dev/dm-4 scanned by systemd-udevd (533)
[   19.190159] BTRFS info (device dm-0): enabling auto defrag
[   19.190172] BTRFS info (device dm-0): disk space caching is enabled
[   19.190174] BTRFS info (device dm-0): has skinny extents
[   19.448971] BTRFS info (device dm-0): bdev /dev/mapper/luks0 errs: wr 13790, rd 387, flush 0, corrupt 3532, gen 578
[   19.448977] BTRFS info (device dm-0): bdev /dev/mapper/luks5 errs: wr 13673, rd 207, flush 0, corrupt 3540, gen 705
[  130.172956] BTRFS info (device dm-0): the free space cache file (9692905472) is invalid, skip it
[  130.206490] BTRFS info (device dm-0): the free space cache file (32241483776) is invalid, skip it
[  130.221862] BTRFS info (device dm-0): the free space cache file (38683934720) is invalid, skip it
[  130.254926] BTRFS info (device dm-0): the free space cache file (54790062080) is invalid, skip it
[  130.256586] BTRFS info (device dm-0): the free space cache file (58011287552) is invalid, skip it
[  130.261085] BTRFS info (device dm-0): the free space cache file (61232513024) is invalid, skip it
[  130.261771] BTRFS info (device dm-0): the free space cache file (67674963968) is invalid, skip it
[  130.395696] BTRFS critical (device dm-0): corrupt leaf: root=7 block=2533706842112 slot=5, csum end range (68761223168) goes beyond the start range (68761178112) of the next csum item
[  130.395829] BTRFS error (device dm-0): block=2533706842112 read time tree block corruption detected
[  130.406624] BTRFS critical (device dm-0): corrupt leaf: root=7 block=2533706842112 slot=5, csum end range (68761223168) goes beyond the start range (68761178112) of the next csum item
[  130.406803] BTRFS error (device dm-0): block=2533706842112 read time tree block corruption detected
[  130.412343] BTRFS critical (device dm-0): corrupt leaf: root=7 block=2533706842112 slot=5, csum end range (68761223168) goes beyond the start range (68761178112) of the next csum item
[  130.412526] BTRFS error (device dm-0): block=2533706842112 read time tree block corruption detected
[  130.414847] BTRFS critical (device dm-0): corrupt leaf: root=7 block=2533706842112 slot=5, csum end range (68761223168) goes beyond the start range (68761178112) of the next csum item
[  130.415056] BTRFS error (device dm-0): block=2533706842112 read time tree block corruption detected
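For reference, the invariant the read-time tree checker is rejecting here is that consecutive csum items in a leaf must not overlap. Plugging in the two offsets from the log above (a quick sketch; the 4096-byte sector size is assumed from the mkfs default, it is not stated in the thread):

```python
# Offsets copied from the "corrupt leaf" dmesg line above.
slot5_csum_end = 68761223168   # end of the range covered by the slot-5 csum item
slot6_key_start = 68761178112  # key offset (range start) of the next csum item

# A valid leaf requires slot5_csum_end <= slot6_key_start; here it isn't:
overlap = slot5_csum_end - slot6_key_start
assert overlap == 45056        # 11 sectors are claimed by both items
assert overlap == 11 * 4096
```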

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: corrupt leaf
  2020-02-27  5:59 corrupt leaf 4e868df3
@ 2020-02-27  6:30 ` Chris Murphy
  2020-02-27  7:23   ` 4e868df3
  2020-02-27  8:25 ` Qu Wenruo
  1 sibling, 1 reply; 12+ messages in thread
From: Chris Murphy @ 2020-02-27  6:30 UTC (permalink / raw)
  To: 4e868df3; +Cc: Btrfs BTRFS

On Wed, Feb 26, 2020 at 11:00 PM 4e868df3 <4e868df3@gmail.com> wrote:
>
> I updated kernels recently and now am getting a corrupt leaf error.
> The drives decrypt and mount, and I can touch a file briefly until the
> mount switches over to read-only mode. Extended SMART tests show all 6
> of my drives have a healthy status. I have a backup of the data. The
> array is configured as RAID10. As the BTRFS filesystem remains
> accessible / read-only, I am able to take an additional backup. What
> is the best way to recover from this error?


>$ uname -a
>VM: Linux server0 5.5.6-arch1-1 #1 SMP PREEMPT Mon, 24 Feb 2020 12:20:16 +0000 x86_64 GNU/Linux
>proxmox: Linux pxe 4.15.18-26-pve #1 SMP PVE 4.15.18-54 (Sat, 15 Feb 2020 15:34:24 +0100) x86_64 GNU/Linux

Which kernel is reporting the errors?

> [   19.448971] BTRFS info (device dm-0): bdev /dev/mapper/luks0 errs: wr 13790, rd 387, flush 0, corrupt 3532, gen 578
> [   19.448977] BTRFS info (device dm-0): bdev /dev/mapper/luks5 errs: wr 13673, rd 207, flush 0, corrupt 3540, gen 705

Btrfs reports at mount time a significant number of dropped writes and
other issues for 2 of 6 drives. These are problems that have already
happened; the statistics are recorded in filesystem metadata. What's
the history that might explain this? Any power failures or crashes?
When was the last time it was scrubbed?

>[  130.415056] BTRFS error (device dm-0): block=2533706842112 read time tree block corruption detected

What happens after this line? File ends here.

What do you get for
btrfs check /dev/

This is readonly, and repair isn't recommended unless a dev advises
it. The check only needs to be run on one device.

-- 
Chris Murphy


* Re: corrupt leaf
  2020-02-27  6:30 ` Chris Murphy
@ 2020-02-27  7:23   ` 4e868df3
  2020-02-27  8:03     ` Chris Murphy
  0 siblings, 1 reply; 12+ messages in thread
From: 4e868df3 @ 2020-02-27  7:23 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Btrfs BTRFS

The VM is reporting the errors. A power failure or crash is likely to
blame. Scrubs happen monthly (the last was 3 weeks ago). No other
BTRFS messages in dmesg to report, just a bunch of sudo audit messages.

$ sudo btrfs scrub status /mnt/raid
UUID:             8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
Scrub started:    Thu Feb 27 00:16:58 2020
Status:           aborted
Duration:         0:02:03
Total to scrub:   5.76TiB
Rate:             853.91MiB/s
Error summary:    csum=278
  Corrected:      0
  Uncorrectable:  278
  Unverified:     0


On Wed, Feb 26, 2020 at 11:30 PM Chris Murphy <lists@colorremedies.com> wrote:
>
> On Wed, Feb 26, 2020 at 11:00 PM 4e868df3 <4e868df3@gmail.com> wrote:
> >
> > I updated kernels recently and now am getting a corrupt leaf error.
> > The drives decrypt and mount, and I can touch a file briefly until the
> > mount switches over to read-only mode. Extended SMART tests show all 6
> > of my drives have a healthy status. I have a backup of the data. The
> > array is configured as RAID10. As the BTRFS filesystem remains
> > accessible / read-only, I am able to take an additional backup. What
> > is the best way to recover from this error?
>
>
> >$ uname -a
> >VM: Linux server0 5.5.6-arch1-1 #1 SMP PREEMPT Mon, 24 Feb 2020 12:20:16 +0000 x86_64 GNU/Linux
> >proxmox: Linux pxe 4.15.18-26-pve #1 SMP PVE 4.15.18-54 (Sat, 15 Feb 2020 15:34:24 +0100) x86_64 GNU/Linux
>
> Which kernel is reporting the errors?
>
> > [   19.448971] BTRFS info (device dm-0): bdev /dev/mapper/luks0 errs: wr 13790, rd 387, flush 0, corrupt 3532, gen 578
> > [   19.448977] BTRFS info (device dm-0): bdev /dev/mapper/luks5 errs: wr 13673, rd 207, flush 0, corrupt 3540, gen 705
>
> Btrfs reports at mount time a significant number of dropped writes and
> other issues for 2 of 6 drives. These are problems that have already
> happened; the statistics are recorded in filesystem metadata. What's
> the history that might explain this? Any power failures or crashes?
> When was the last time it was scrubbed?
>
> >[  130.415056] BTRFS error (device dm-0): block=2533706842112 read time tree block corruption detected
>
> What happens after this line? File ends here.
>
> What do you get for
> btrfs check /dev/
>
> This is readonly, and repair isn't recommended unless a dev advises
> it. The check only needs to be run on one device.
>
> --
> Chris Murphy


* Re: corrupt leaf
  2020-02-27  7:23   ` 4e868df3
@ 2020-02-27  8:03     ` Chris Murphy
  0 siblings, 0 replies; 12+ messages in thread
From: Chris Murphy @ 2020-02-27  8:03 UTC (permalink / raw)
  To: 4e868df3; +Cc: Chris Murphy, Btrfs BTRFS

On Thu, Feb 27, 2020 at 12:24 AM 4e868df3 <4e868df3@gmail.com> wrote:
>
> The VM is reporting the errors.

What are the details of the storage stack? What are the Btrfs devices
backed by on the host? Physical partitions, or qcow2, or raw files? If
they are files, is chattr +C set?

And what is the cache method being used for each device?

> A power failure or crash is likely to
> blame.

The host or the VM? A VM crash is fairly safe; I even use unsafe
cache on all my VMs (it's way faster) and force power off VMs all the
time with Btrfs and never have a problem. But it's a different story
if the host crashes.

It's a little strange that only two devices are affected.

> Scrubs happen monthly (the last was 3 weeks ago). No other BTRFS
> messages in dmesg to report, just a bunch of sudo audit messages.
>
> $ sudo btrfs scrub status /mnt/raid
> UUID:             8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> Scrub started:    Thu Feb 27 00:16:58 2020
> Status:           aborted
> Duration:         0:02:03
> Total to scrub:   5.76TiB
> Rate:             853.91MiB/s
> Error summary:    csum=278
>   Corrected:      0
>   Uncorrectable:  278
>   Unverified:     0

Btrfs check might take a while on 5.7T. It depends on how much
metadata there is.

-- 
Chris Murphy


* Re: corrupt leaf
  2020-02-27  5:59 corrupt leaf 4e868df3
  2020-02-27  6:30 ` Chris Murphy
@ 2020-02-27  8:25 ` Qu Wenruo
  2020-02-28  2:28   ` 4e868df3
  1 sibling, 1 reply; 12+ messages in thread
From: Qu Wenruo @ 2020-02-27  8:25 UTC (permalink / raw)
  To: 4e868df3, linux-btrfs


[-- Attachment #1.1: Type: text/plain, Size: 1291 bytes --]



On 2020/2/27 1:59 PM, 4e868df3 wrote:
> I updated kernels recently and now am getting a corrupt leaf error.
> The drives decrypt and mount, and I can touch a file briefly until the
> mount switches over to read-only mode. Extended SMART tests show all 6
> of my drives have a healthy status. I have a backup of the data. The
> array is configured as RAID10. As the BTRFS filesystem remains
> accessible / read-only, I am able to take an additional backup. What
> is the best way to recover from this error?
> 

> [  130.395696] BTRFS critical (device dm-0): corrupt leaf: root=7 block=2533706842112 slot=5, csum end range (68761223168) goes beyond the start range (68761178112) of the next csum item
> [  130.395829] BTRFS error (device dm-0): block=2533706842112 read time tree block corruption detected

This is not something caused by a power loss; it's more likely an
older kernel or memory corruption.

Please provide the following command dump:
# btrfs ins dump-tree -b 2533706842112 /dev/dm-0

Furthermore, btrfs check output would be appreciated to ensure that's
the only corruption.

But from the generation mismatch, it looks like there are transid
mismatch errors, which could be a bigger problem than your corrupted
csum tree.

Thanks,
Qu



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]


* Re: corrupt leaf
  2020-02-27  8:25 ` Qu Wenruo
@ 2020-02-28  2:28   ` 4e868df3
  2020-02-28  3:01     ` Qu Wenruo
  0 siblings, 1 reply; 12+ messages in thread
From: 4e868df3 @ 2020-02-28  2:28 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS

> What are the details of the storage stack? What are the Btrfs devices backed by on the host? Physical partitions, or qcow2, or raw files? If they are files, is chattr +C set?
6x 2TB SCSI drives. Raw physical partitions, working through LUKS.
Proxmox passes the devices directly through to the VM without touching
them.

> btrfs ins dump-tree -b 2533706842112 /dev/dm-0
btrfs-progs v5.4
leaf 2533706842112 items 9 free space 5914 generation 360253 owner CSUM_TREE
leaf 2533706842112 flags 0x1(WRITTEN) backref revision 1
fs uuid 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
chunk uuid c3b187d2-64c1-46f0-a83f-d0aeb0e37fe4
        item 0 key (EXTENT_CSUM EXTENT_CSUM 68754231296) itemoff 16279 itemsize 4
                range start 68754231296 end 68754235392 length 4096
        item 1 key (EXTENT_CSUM EXTENT_CSUM 68754235392) itemoff 13167 itemsize 3112
                range start 68754235392 end 68757422080 length 3186688
        item 2 key (EXTENT_CSUM EXTENT_CSUM 68757422080) itemoff 12891 itemsize 276
                range start 68757422080 end 68757704704 length 282624
        item 3 key (EXTENT_CSUM EXTENT_CSUM 68757819392) itemoff 12767 itemsize 124
                range start 68757819392 end 68757946368 length 126976
        item 4 key (EXTENT_CSUM EXTENT_CSUM 68757946368) itemoff 11359 itemsize 1408
                range start 68757946368 end 68759388160 length 1441792
        item 5 key (EXTENT_CSUM EXTENT_CSUM 68759388160) itemoff 9567 itemsize 1792
                range start 68759388160 end 68761223168 length 1835008
        item 6 key (EXTENT_CSUM EXTENT_CSUM 68761178112) itemoff 9363 itemsize 204
                range start 68761178112 end 68761387008 length 208896
        item 7 key (EXTENT_CSUM EXTENT_CSUM 68761387008) itemoff 7739 itemsize 1624
                range start 68761387008 end 68763049984 length 1662976
        item 8 key (EXTENT_CSUM EXTENT_CSUM 68763049984) itemoff 6139 itemsize 1600
                range start 68763049984 end 68764688384 length 1638400

> btrfs check --force /dev/mapper/luks0
Opening filesystem to check...
WARNING: filesystem mounted, continuing because of --force
Checking filesystem on /dev/mapper/luks0
UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
there are no extents for csum range 68757573632-68757704704
Right section didn't have a record
there are no extents for csum range 68754427904-68757704704
csum exists for 68750639104-68757704704 but there is no extent record
there are no extents for csum range 68760719360-68761223168
Right section didn't have a record
there are no extents for csum range 68757819392-68761223168
csum exists for 68757819392-68761223168 but there is no extent record
there are no extents for csum range 68761362432-68761378816
Right section didn't have a record
there are no extents for csum range 68761178112-68836831232
csum exists for 68761178112-68836831232 but there is no extent record
there are no extents for csum range 1168638763008-1168638803968
csum exists for 1168638763008-1168645861376 but there is no extent record
ERROR: errors found in csum tree
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 3165125918720 bytes used, error(s) found
total csum bytes: 3085473228
total tree bytes: 4791877632
total fs tree bytes: 1177714688
total extent tree bytes: 94617600
btree space waste bytes: 492319296
file data blocks allocated: 3160334041088
 referenced 3157401378816


* Re: corrupt leaf
  2020-02-28  2:28   ` 4e868df3
@ 2020-02-28  3:01     ` Qu Wenruo
  2020-02-29 15:47       ` 4e868df3
  0 siblings, 1 reply; 12+ messages in thread
From: Qu Wenruo @ 2020-02-28  3:01 UTC (permalink / raw)
  To: 4e868df3; +Cc: Btrfs BTRFS


[-- Attachment #1.1: Type: text/plain, Size: 4104 bytes --]



On 2020/2/28 10:28 AM, 4e868df3 wrote:
>> What are the details of the storage stack? What are the Btrfs devices backed by on the host? Physical partitions, or qcow2, or raw files? If they are files, is chattr +C set?
> 6x 2TB SCSI drives. Raw physical partitions, working through LUKS.
> Proxmox passes the devices directly through to the VM without touching
> them.
> 
>> btrfs ins dump-tree -b 2533706842112 /dev/dm-0
> btrfs-progs v5.4
> leaf 2533706842112 items 9 free space 5914 generation 360253 owner CSUM_TREE
> leaf 2533706842112 flags 0x1(WRITTEN) backref revision 1
> fs uuid 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> chunk uuid c3b187d2-64c1-46f0-a83f-d0aeb0e37fe4
>         item 0 key (EXTENT_CSUM EXTENT_CSUM 68754231296) itemoff 16279 itemsize 4
>                 range start 68754231296 end 68754235392 length 4096
>         item 1 key (EXTENT_CSUM EXTENT_CSUM 68754235392) itemoff 13167 itemsize 3112
>                 range start 68754235392 end 68757422080 length 3186688
>         item 2 key (EXTENT_CSUM EXTENT_CSUM 68757422080) itemoff 12891 itemsize 276
>                 range start 68757422080 end 68757704704 length 282624
>         item 3 key (EXTENT_CSUM EXTENT_CSUM 68757819392) itemoff 12767 itemsize 124
>                 range start 68757819392 end 68757946368 length 126976
>         item 4 key (EXTENT_CSUM EXTENT_CSUM 68757946368) itemoff 11359 itemsize 1408
>                 range start 68757946368 end 68759388160 length 1441792
>         item 5 key (EXTENT_CSUM EXTENT_CSUM 68759388160) itemoff 9567 itemsize 1792
>                 range start 68759388160 end 68761223168 length 1835008

This csum item is too large, overlapping the next item.
It doesn't look like a bit flip, as the item size still matches the length.
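That can be checked from the dumped numbers. Assuming the default 4-byte crc32c checksums over 4 KiB sectors (an assumption; the thread doesn't state the csum type), item 5's size, end, and length are mutually consistent, so the problem is the overlap with item 6's key rather than a single corrupted field (a quick sketch):

```python
CSUM_SIZE = 4        # bytes per crc32c checksum (the default csum type)
SECTORSIZE = 4096    # data bytes covered by each checksum

def covered(itemsize: int) -> int:
    """Data bytes covered by a csum item of the given item size."""
    return itemsize // CSUM_SIZE * SECTORSIZE

# item 5 from the dump: key offset 68759388160, itemsize 1792
assert covered(1792) == 1835008                    # matches the dumped length
assert 68759388160 + covered(1792) == 68761223168  # matches the dumped end
# ...so the item is internally consistent, but runs past item 6's key:
assert 68761223168 - 68761178112 == 45056          # 11 sectors of overlap
```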

>         item 6 key (EXTENT_CSUM EXTENT_CSUM 68761178112) itemoff 9363 itemsize 204
>                 range start 68761178112 end 68761387008 length 208896
>         item 7 key (EXTENT_CSUM EXTENT_CSUM 68761387008) itemoff 7739 itemsize 1624
>                 range start 68761387008 end 68763049984 length 1662976
>         item 8 key (EXTENT_CSUM EXTENT_CSUM 68763049984) itemoff 6139 itemsize 1600
>                 range start 68763049984 end 68764688384 length 1638400
> 
>> btrfs check --force /dev/mapper/luks0
> Opening filesystem to check...
> WARNING: filesystem mounted, continuing because of --force
> Checking filesystem on /dev/mapper/luks0
> UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> [1/7] checking root items
> [2/7] checking extents
> [3/7] checking free space cache
> [4/7] checking fs roots
> [5/7] checking only csums items (without verifying data)
> there are no extents for csum range 68757573632-68757704704
> Right section didn't have a record
> there are no extents for csum range 68754427904-68757704704
> csum exists for 68750639104-68757704704 but there is no extent record
> there are no extents for csum range 68760719360-68761223168
> Right section didn't have a record
> there are no extents for csum range 68757819392-68761223168
> csum exists for 68757819392-68761223168 but there is no extent record
> there are no extents for csum range 68761362432-68761378816
> Right section didn't have a record
> there are no extents for csum range 68761178112-68836831232
> csum exists for 68761178112-68836831232 but there is no extent record
> there are no extents for csum range 1168638763008-1168638803968
> csum exists for 1168638763008-1168645861376 but there is no extent record
> ERROR: errors found in csum tree

Since the csum tree errors are the only errors found, you can fix them
with the --init-csum-tree option of btrfs check.

Thanks,
Qu

> [6/7] checking root refs
> [7/7] checking quota groups skipped (not enabled on this FS)
> found 3165125918720 bytes used, error(s) found
> total csum bytes: 3085473228
> total tree bytes: 4791877632
> total fs tree bytes: 1177714688
> total extent tree bytes: 94617600
> btree space waste bytes: 492319296
> file data blocks allocated: 3160334041088
>  referenced 3157401378816
> 


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]


* Re: corrupt leaf
  2020-02-28  3:01     ` Qu Wenruo
@ 2020-02-29 15:47       ` 4e868df3
  2020-03-01  0:41         ` Qu Wenruo
  2020-03-02  6:35         ` Chris Murphy
  0 siblings, 2 replies; 12+ messages in thread
From: 4e868df3 @ 2020-02-29 15:47 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS

It came up with some kind of `840 abort`. Then I reran btrfs check and
tried again.

$ btrfs check --init-csum-tree /dev/mapper/luks0
Creating a new CRC tree
WARNING:

        Do not use --repair unless you are advised to do so by a developer
        or an experienced user, and then only after having accepted that no
        fsck can successfully repair all types of filesystem corruption. Eg.
        some software or hardware bugs can fatally damage a volume.
        The operation will start in 10 seconds.
        Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting repair.
Opening filesystem to check...
Checking filesystem on /dev/mapper/luks0
UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
Reinitialize checksum tree
Unable to find block group for 0
Unable to find block group for 0
Unable to find block group for 0
ctree.c:2272: split_leaf: BUG_ON `1` triggered, value 1
btrfs(+0x71e09)[0x564eef35ee09]
btrfs(btrfs_search_slot+0xfb1)[0x564eef360431]
btrfs(btrfs_csum_file_block+0x442)[0x564eef37c412]
btrfs(+0x35bde)[0x564eef322bde]
btrfs(+0x47ce4)[0x564eef334ce4]
btrfs(main+0x94)[0x564eef3020c4]
/usr/lib/libc.so.6(__libc_start_main+0xf3)[0x7ff12a43e023]
btrfs(_start+0x2e)[0x564eef30235e]
[1]    840 abort      sudo btrfs check --init-csum-tree /dev/mapper/luks0

$ btrfs check /dev/mapper/luks0
Opening filesystem to check...
Checking filesystem on /dev/mapper/luks0
UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
there are no extents for csum range 68757573632-68757704704
Right section didn't have a record
there are no extents for csum range 68754427904-68757704704
csum exists for 68750639104-68757704704 but there is no extent record
there are no extents for csum range 68760719360-68761223168
Right section didn't have a record
there are no extents for csum range 68757819392-68761223168
csum exists for 68757819392-68761223168 but there is no extent record
there are no extents for csum range 68761362432-68761378816
Right section didn't have a record
there are no extents for csum range 68761178112-68836831232
csum exists for 68761178112-68836831232 but there is no extent record
there are no extents for csum range 1168638763008-1168638803968
csum exists for 1168638763008-1168645861376 but there is no extent record
ERROR: errors found in csum tree
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 3165125918720 bytes used, error(s) found
total csum bytes: 3085473228
total tree bytes: 4791877632
total fs tree bytes: 1177714688
total extent tree bytes: 94617600
btree space waste bytes: 492319296
file data blocks allocated: 3160334041088
 referenced 3157401378816

$ btrfs check --init-csum-tree /dev/mapper/luks0
Creating a new CRC tree
WARNING:

        Do not use --repair unless you are advised to do so by a developer
        or an experienced user, and then only after having accepted that no
        fsck can successfully repair all types of filesystem corruption. Eg.
        some software or hardware bugs can fatally damage a volume.
        The operation will start in 10 seconds.
        Use Ctrl-C to stop it.
10 9 8 7 6 5 4 3 2 1
Starting repair.
Opening filesystem to check...
Checking filesystem on /dev/mapper/luks0
UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
Reinitialize checksum tree
Unable to find block group for 0
Unable to find block group for 0
Unable to find block group for 0
ctree.c:2272: split_leaf: BUG_ON `1` triggered, value 1
btrfs(+0x71e09)[0x559260a6de09]
btrfs(btrfs_search_slot+0xfb1)[0x559260a6f431]
btrfs(btrfs_csum_file_block+0x442)[0x559260a8b412]
btrfs(+0x35bde)[0x559260a31bde]
btrfs(+0x47ce4)[0x559260a43ce4]
btrfs(main+0x94)[0x559260a110c4]
/usr/lib/libc.so.6(__libc_start_main+0xf3)[0x7f212eb1f023]
btrfs(_start+0x2e)[0x559260a1135e]
[1]    848 abort      sudo btrfs check --init-csum-tree /dev/mapper/luks0


* Re: corrupt leaf
  2020-02-29 15:47       ` 4e868df3
@ 2020-03-01  0:41         ` Qu Wenruo
  2020-03-01  6:11           ` 4e868df3
  2020-03-02  6:35         ` Chris Murphy
  1 sibling, 1 reply; 12+ messages in thread
From: Qu Wenruo @ 2020-03-01  0:41 UTC (permalink / raw)
  To: 4e868df3; +Cc: Btrfs BTRFS


[-- Attachment #1.1: Type: text/plain, Size: 4486 bytes --]



On 2020/2/29 11:47 PM, 4e868df3 wrote:
> It came up with some kind of `840 abort`. Then I reran btrfs check and
> tried again.
> 
> $ btrfs check --init-csum-tree /dev/mapper/luks0
> Creating a new CRC tree
> WARNING:
> 
>         Do not use --repair unless you are advised to do so by a developer
>         or an experienced user, and then only after having accepted that no
>         fsck can successfully repair all types of filesystem corruption. Eg.
>         some software or hardware bugs can fatally damage a volume.
>         The operation will start in 10 seconds.
>         Use Ctrl-C to stop it.
> 10 9 8 7 6 5 4 3 2 1
> Starting repair.
> Opening filesystem to check...
> Checking filesystem on /dev/mapper/luks0
> UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> Reinitialize checksum tree
> Unable to find block group for 0
> Unable to find block group for 0
> Unable to find block group for 0

This means the metadata space is used up.

Which btrfs-progs version are you using?
Some older btrfs-progs have a bug in space reservation.

Thanks,
Qu
> ctree.c:2272: split_leaf: BUG_ON `1` triggered, value 1
> btrfs(+0x71e09)[0x564eef35ee09]
> btrfs(btrfs_search_slot+0xfb1)[0x564eef360431]
> btrfs(btrfs_csum_file_block+0x442)[0x564eef37c412]
> btrfs(+0x35bde)[0x564eef322bde]
> btrfs(+0x47ce4)[0x564eef334ce4]
> btrfs(main+0x94)[0x564eef3020c4]
> /usr/lib/libc.so.6(__libc_start_main+0xf3)[0x7ff12a43e023]
> btrfs(_start+0x2e)[0x564eef30235e]
> [1]    840 abort      sudo btrfs check --init-csum-tree /dev/mapper/luks0
> 
> $ btrfs check /dev/mapper/luks0
> Opening filesystem to check...
> Checking filesystem on /dev/mapper/luks0
> UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> [1/7] checking root items
> [2/7] checking extents
> [3/7] checking free space cache
> [4/7] checking fs roots
> [5/7] checking only csums items (without verifying data)
> there are no extents for csum range 68757573632-68757704704
> Right section didn't have a record
> there are no extents for csum range 68754427904-68757704704
> csum exists for 68750639104-68757704704 but there is no extent record
> there are no extents for csum range 68760719360-68761223168
> Right section didn't have a record
> there are no extents for csum range 68757819392-68761223168
> csum exists for 68757819392-68761223168 but there is no extent record
> there are no extents for csum range 68761362432-68761378816
> Right section didn't have a record
> there are no extents for csum range 68761178112-68836831232
> csum exists for 68761178112-68836831232 but there is no extent record
> there are no extents for csum range 1168638763008-1168638803968
> csum exists for 1168638763008-1168645861376 but there is no extent record
> ERROR: errors found in csum tree
> [6/7] checking root refs
> [7/7] checking quota groups skipped (not enabled on this FS)
> found 3165125918720 bytes used, error(s) found
> total csum bytes: 3085473228
> total tree bytes: 4791877632
> total fs tree bytes: 1177714688
> total extent tree bytes: 94617600
> btree space waste bytes: 492319296
> file data blocks allocated: 3160334041088
>  referenced 3157401378816
> 
> $ btrfs check --init-csum-tree /dev/mapper/luks0
> Creating a new CRC tree
> WARNING:
> 
>         Do not use --repair unless you are advised to do so by a developer
>         or an experienced user, and then only after having accepted that no
>         fsck can successfully repair all types of filesystem corruption. Eg.
>         some software or hardware bugs can fatally damage a volume.
>         The operation will start in 10 seconds.
>         Use Ctrl-C to stop it.
> 10 9 8 7 6 5 4 3 2 1
> Starting repair.
> Opening filesystem to check...
> Checking filesystem on /dev/mapper/luks0
> UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> Reinitialize checksum tree
> Unable to find block group for 0
> Unable to find block group for 0
> Unable to find block group for 0
> ctree.c:2272: split_leaf: BUG_ON `1` triggered, value 1
> btrfs(+0x71e09)[0x559260a6de09]
> btrfs(btrfs_search_slot+0xfb1)[0x559260a6f431]
> btrfs(btrfs_csum_file_block+0x442)[0x559260a8b412]
> btrfs(+0x35bde)[0x559260a31bde]
> btrfs(+0x47ce4)[0x559260a43ce4]
> btrfs(main+0x94)[0x559260a110c4]
> /usr/lib/libc.so.6(__libc_start_main+0xf3)[0x7f212eb1f023]
> btrfs(_start+0x2e)[0x559260a1135e]
> [1]    848 abort      sudo btrfs check --init-csum-tree /dev/mapper/luks0
> 


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]


* Re: corrupt leaf
  2020-03-01  0:41         ` Qu Wenruo
@ 2020-03-01  6:11           ` 4e868df3
  2020-03-01 11:40             ` Qu Wenruo
  0 siblings, 1 reply; 12+ messages in thread
From: 4e868df3 @ 2020-03-01  6:11 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Btrfs BTRFS

It's possible a pacman upgrade triggered this BTRFS event. I don't
know what was previously installed. Here is what is installed now.

$ btrfs version
btrfs-progs v5.4

On Sat, Feb 29, 2020 at 5:41 PM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> On 2020/2/29 11:47 PM, 4e868df3 wrote:
> > It came up with some kind of `840 abort`. Then I reran btrfs check and
> > tried again.
> >
> > $ btrfs check --init-csum-tree /dev/mapper/luks0
> > Creating a new CRC tree
> > WARNING:
> >
> >         Do not use --repair unless you are advised to do so by a developer
> >         or an experienced user, and then only after having accepted that no
> >         fsck can successfully repair all types of filesystem corruption. Eg.
> >         some software or hardware bugs can fatally damage a volume.
> >         The operation will start in 10 seconds.
> >         Use Ctrl-C to stop it.
> > 10 9 8 7 6 5 4 3 2 1
> > Starting repair.
> > Opening filesystem to check...
> > Checking filesystem on /dev/mapper/luks0
> > UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> > Reinitialize checksum tree
> > Unable to find block group for 0
> > Unable to find block group for 0
> > Unable to find block group for 0
>
> This means the metadata space is used up.
>
> Which btrfs-progs version are you using?
> Some older btrfs-progs have a bug in space reservation.
>
> Thanks,
> Qu
> > ctree.c:2272: split_leaf: BUG_ON `1` triggered, value 1
> > btrfs(+0x71e09)[0x564eef35ee09]
> > btrfs(btrfs_search_slot+0xfb1)[0x564eef360431]
> > btrfs(btrfs_csum_file_block+0x442)[0x564eef37c412]
> > btrfs(+0x35bde)[0x564eef322bde]
> > btrfs(+0x47ce4)[0x564eef334ce4]
> > btrfs(main+0x94)[0x564eef3020c4]
> > /usr/lib/libc.so.6(__libc_start_main+0xf3)[0x7ff12a43e023]
> > btrfs(_start+0x2e)[0x564eef30235e]
> > [1]    840 abort      sudo btrfs check --init-csum-tree /dev/mapper/luks0
> >
> > $ btrfs check /dev/mapper/luks0
> > Opening filesystem to check...
> > Checking filesystem on /dev/mapper/luks0
> > UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> > [1/7] checking root items
> > [2/7] checking extents
> > [3/7] checking free space cache
> > [4/7] checking fs roots
> > [5/7] checking only csums items (without verifying data)
> > there are no extents for csum range 68757573632-68757704704
> > Right section didn't have a record
> > there are no extents for csum range 68754427904-68757704704
> > csum exists for 68750639104-68757704704 but there is no extent record
> > there are no extents for csum range 68760719360-68761223168
> > Right section didn't have a record
> > there are no extents for csum range 68757819392-68761223168
> > csum exists for 68757819392-68761223168 but there is no extent record
> > there are no extents for csum range 68761362432-68761378816
> > Right section didn't have a record
> > there are no extents for csum range 68761178112-68836831232
> > csum exists for 68761178112-68836831232 but there is no extent record
> > there are no extents for csum range 1168638763008-1168638803968
> > csum exists for 1168638763008-1168645861376 but there is no extent
> > record
> > ERROR: errors found in csum tree
> > [6/7] checking root refs
> > [7/7] checking quota groups skipped (not enabled on this FS)
> > found 3165125918720 bytes used, error(s) found
> > total csum bytes: 3085473228
> > total tree bytes: 4791877632
> > total fs tree bytes: 1177714688
> > total extent tree bytes: 94617600
> > btree space waste bytes: 492319296
> > file data blocks allocated: 3160334041088
> >  referenced 3157401378816
> >
> > $ btrfs check --init-csum-tree /dev/mapper/luks0
> > Creating a new CRC tree
> > WARNING:
> >
> >         Do not use --repair unless you are advised to do so by a developer
> >         or an experienced user, and then only after having accepted that no
> >         fsck can successfully repair all types of filesystem corruption. Eg.
> >         some software or hardware bugs can fatally damage a volume.
> >         The operation will start in 10 seconds.
> >         Use Ctrl-C to stop it.
> > 10 9 8 7 6 5 4 3 2 1
> > Starting repair.
> > Opening filesystem to check...
> > Checking filesystem on /dev/mapper/luks0
> > UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> > Reinitialize checksum tree
> > Unable to find block group for 0
> > Unable to find block group for 0
> > Unable to find block group for 0
> > ctree.c:2272: split_leaf: BUG_ON `1` triggered, value 1
> > btrfs(+0x71e09)[0x559260a6de09]
> > btrfs(btrfs_search_slot+0xfb1)[0x559260a6f431]
> > btrfs(btrfs_csum_file_block+0x442)[0x559260a8b412]
> > btrfs(+0x35bde)[0x559260a31bde]
> > btrfs(+0x47ce4)[0x559260a43ce4]
> > btrfs(main+0x94)[0x559260a110c4]
> > /usr/lib/libc.so.6(__libc_start_main+0xf3)[0x7f212eb1f023]
> > btrfs(_start+0x2e)[0x559260a1135e]
> > [1]    848 abort      sudo btrfs check --init-csum-tree /dev/mapper/luks0
> >
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: corrupt leaf
  2020-03-01  6:11           ` 4e868df3
@ 2020-03-01 11:40             ` Qu Wenruo
  0 siblings, 0 replies; 12+ messages in thread
From: Qu Wenruo @ 2020-03-01 11:40 UTC (permalink / raw)
  To: 4e868df3; +Cc: Btrfs BTRFS


[-- Attachment #1.1: Type: text/plain, Size: 5501 bytes --]



On 2020/3/1 下午2:11, 4e868df3 wrote:
> It's possible a pacman upgrade triggered this BTRFS event. I don't
> know what was previously installed. Here is what is installed now.
> 
> $ btrfs version
> btrfs-progs v5.4

Then it's a bug in btrfs-progs. I need to find some time to reproduce
the problem and fix it.

`btrfs fi df` output may help in this case.


For now, the only possible workaround I have is to delete the
offending files that the csums belong to.

You can try the following command to locate the offending files:

# btrfs ins logical 68761178112 -s 45056 <mnt>

Then delete the related files and hope the kernel finds its way to
delete the offending csums.
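(Editorial note: `btrfs ins logical` above is shorthand for `btrfs
inspect-internal logical-resolve`. A minimal, hedged sketch of how one
might automate Qu's suggestion — it parses the csum ranges quoted from
the `btrfs check` output earlier in this thread and prints one resolve
command per range; `<mnt>` is left as the mount-point placeholder from
Qu's message, and the exact lines fed in are assumptions taken from the
quoted log.)

```python
import re

# Sample lines copied from the `btrfs check` output quoted in this
# thread; each names a csum range with no matching extent record.
check_output = """\
csum exists for 68750639104-68757704704 but there is no extent record
csum exists for 68757819392-68761223168 but there is no extent record
csum exists for 68761178112-68836831232 but there is no extent record
csum exists for 1168638763008-1168645861376 but there is no extent record
"""

pattern = re.compile(
    r"csum exists for (\d+)-(\d+) but there is no extent record"
)

for match in pattern.finditer(check_output):
    start, end = int(match.group(1)), int(match.group(2))
    # Emit a resolve command for each orphaned csum range. Running it
    # against the (read-only) mount should list the paths that
    # reference this logical address.
    print(
        f"btrfs inspect-internal logical-resolve {start} <mnt>"
        f"  # range length: {end - start} bytes"
    )
```

Deleting the files those commands report is the workaround Qu
describes; whether the kernel then drops the stale csum items is, per
his message, not guaranteed.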

Thanks,
Qu
> 
> On Sat, Feb 29, 2020 at 5:41 PM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>>
>> On 2020/2/29 下午11:47, 4e868df3 wrote:
>>> It came up with some kind of `840 abort`. Then I reran btrfs check and
>>> tried again.
>>>
>>> $ btrfs check --init-csum-tree /dev/mapper/luks0
>>> Creating a new CRC tree
>>> WARNING:
>>>
>>>         Do not use --repair unless you are advised to do so by a developer
>>>         or an experienced user, and then only after having accepted that no
>>>         fsck can successfully repair all types of filesystem corruption. Eg.
>>>         some software or hardware bugs can fatally damage a volume.
>>>         The operation will start in 10 seconds.
>>>         Use Ctrl-C to stop it.
>>> 10 9 8 7 6 5 4 3 2 1
>>> Starting repair.
>>> Opening filesystem to check...
>>> Checking filesystem on /dev/mapper/luks0
>>> UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
>>> Reinitialize checksum tree
>>> Unable to find block group for 0
>>> Unable to find block group for 0
>>> Unable to find block group for 0
>>
>> This means the metadata space is used up.
>>
>> Which btrfs-progs version are you using?
>> Some older btrfs-progs have a bug in space reservation.
>>
>> Thanks,
>> Qu
>>> ctree.c:2272: split_leaf: BUG_ON `1` triggered, value 1
>>> btrfs(+0x71e09)[0x564eef35ee09]
>>> btrfs(btrfs_search_slot+0xfb1)[0x564eef360431]
>>> btrfs(btrfs_csum_file_block+0x442)[0x564eef37c412]
>>> btrfs(+0x35bde)[0x564eef322bde]
>>> btrfs(+0x47ce4)[0x564eef334ce4]
>>> btrfs(main+0x94)[0x564eef3020c4]
>>> /usr/lib/libc.so.6(__libc_start_main+0xf3)[0x7ff12a43e023]
>>> btrfs(_start+0x2e)[0x564eef30235e]
>>> [1]    840 abort      sudo btrfs check --init-csum-tree /dev/mapper/luks0
>>>
>>> $ btrfs check /dev/mapper/luks0
>>> Opening filesystem to check...
>>> Checking filesystem on /dev/mapper/luks0
>>> UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
>>> [1/7] checking root items
>>> [2/7] checking extents
>>> [3/7] checking free space cache
>>> [4/7] checking fs roots
>>> [5/7] checking only csums items (without verifying data)
>>> there are no extents for csum range 68757573632-68757704704
>>> Right section didn't have a record
>>> there are no extents for csum range 68754427904-68757704704
>>> csum exists for 68750639104-68757704704 but there is no extent record
>>> there are no extents for csum range 68760719360-68761223168
>>> Right section didn't have a record
>>> there are no extents for csum range 68757819392-68761223168
>>> csum exists for 68757819392-68761223168 but there is no extent record
>>> there are no extents for csum range 68761362432-68761378816
>>> Right section didn't have a record
>>> there are no extents for csum range 68761178112-68836831232
>>> csum exists for 68761178112-68836831232 but there is no extent record
>>> there are no extents for csum range 1168638763008-1168638803968
>>> csum exists for 1168638763008-1168645861376 but there is no extent
>>> record
>>> ERROR: errors found in csum tree
>>> [6/7] checking root refs
>>> [7/7] checking quota groups skipped (not enabled on this FS)
>>> found 3165125918720 bytes used, error(s) found
>>> total csum bytes: 3085473228
>>> total tree bytes: 4791877632
>>> total fs tree bytes: 1177714688
>>> total extent tree bytes: 94617600
>>> btree space waste bytes: 492319296
>>> file data blocks allocated: 3160334041088
>>>  referenced 3157401378816
>>>
>>> $ btrfs check --init-csum-tree /dev/mapper/luks0
>>> Creating a new CRC tree
>>> WARNING:
>>>
>>>         Do not use --repair unless you are advised to do so by a developer
>>>         or an experienced user, and then only after having accepted that no
>>>         fsck can successfully repair all types of filesystem corruption. Eg.
>>>         some software or hardware bugs can fatally damage a volume.
>>>         The operation will start in 10 seconds.
>>>         Use Ctrl-C to stop it.
>>> 10 9 8 7 6 5 4 3 2 1
>>> Starting repair.
>>> Opening filesystem to check...
>>> Checking filesystem on /dev/mapper/luks0
>>> UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
>>> Reinitialize checksum tree
>>> Unable to find block group for 0
>>> Unable to find block group for 0
>>> Unable to find block group for 0
>>> ctree.c:2272: split_leaf: BUG_ON `1` triggered, value 1
>>> btrfs(+0x71e09)[0x559260a6de09]
>>> btrfs(btrfs_search_slot+0xfb1)[0x559260a6f431]
>>> btrfs(btrfs_csum_file_block+0x442)[0x559260a8b412]
>>> btrfs(+0x35bde)[0x559260a31bde]
>>> btrfs(+0x47ce4)[0x559260a43ce4]
>>> btrfs(main+0x94)[0x559260a110c4]
>>> /usr/lib/libc.so.6(__libc_start_main+0xf3)[0x7f212eb1f023]
>>> btrfs(_start+0x2e)[0x559260a1135e]
>>> [1]    848 abort      sudo btrfs check --init-csum-tree /dev/mapper/luks0
>>>
>>


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: corrupt leaf
  2020-02-29 15:47       ` 4e868df3
  2020-03-01  0:41         ` Qu Wenruo
@ 2020-03-02  6:35         ` Chris Murphy
  1 sibling, 0 replies; 12+ messages in thread
From: Chris Murphy @ 2020-03-02  6:35 UTC (permalink / raw)
  To: 4e868df3; +Cc: Qu Wenruo, Btrfs BTRFS

On Sat, Feb 29, 2020 at 8:48 AM 4e868df3 <4e868df3@gmail.com> wrote:
>
> It came up with some kind of `840 abort`. Then I reran btrfs check and
> tried again.
>
> $ btrfs check --init-csum-tree /dev/mapper/luks0
> Creating a new CRC tree
> WARNING:
>
>         Do not use --repair unless you are advised to do so by a developer
>         or an experienced user, and then only after having accepted that no
>         fsck can successfully repair all types of filesystem corruption. Eg.
>         some software or hardware bugs can fatally damage a volume.
>         The operation will start in 10 seconds.
>         Use Ctrl-C to stop it.
> 10 9 8 7 6 5 4 3 2 1
> Starting repair.
> Opening filesystem to check...
> Checking filesystem on /dev/mapper/luks0
> UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> Reinitialize checksum tree
> Unable to find block group for 0
> Unable to find block group for 0
> Unable to find block group for 0
> ctree.c:2272: split_leaf: BUG_ON `1` triggered, value 1
> btrfs(+0x71e09)[0x564eef35ee09]
> btrfs(btrfs_search_slot+0xfb1)[0x564eef360431]
> btrfs(btrfs_csum_file_block+0x442)[0x564eef37c412]
> btrfs(+0x35bde)[0x564eef322bde]
> btrfs(+0x47ce4)[0x564eef334ce4]
> btrfs(main+0x94)[0x564eef3020c4]
> /usr/lib/libc.so.6(__libc_start_main+0xf3)[0x7ff12a43e023]
> btrfs(_start+0x2e)[0x564eef30235e]
> [1]    840 abort      sudo btrfs check --init-csum-tree /dev/mapper/luks0
>
> $ btrfs check /dev/mapper/luks0
> Opening filesystem to check...
> Checking filesystem on /dev/mapper/luks0
> UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> [1/7] checking root items
> [2/7] checking extents
> [3/7] checking free space cache
> [4/7] checking fs roots
> [5/7] checking only csums items (without verifying data)
> there are no extents for csum range 68757573632-68757704704
> Right section didn't have a record
> there are no extents for csum range 68754427904-68757704704
> csum exists for 68750639104-68757704704 but there is no extent record
> there are no extents for csum range 68760719360-68761223168
> Right section didn't have a record
> there are no extents for csum range 68757819392-68761223168
> csum exists for 68757819392-68761223168 but there is no extent record
> there are no extents for csum range 68761362432-68761378816
> Right section didn't have a record
> there are no extents for csum range 68761178112-68836831232
> csum exists for 68761178112-68836831232 but there is no extent record
> there are no extents for csum range 1168638763008-1168638803968
> csum exists for 1168638763008-1168645861376 but there is no extent
> record
> ERROR: errors found in csum tree
> [6/7] checking root refs
> [7/7] checking quota groups skipped (not enabled on this FS)
> found 3165125918720 bytes used, error(s) found
> total csum bytes: 3085473228
> total tree bytes: 4791877632
> total fs tree bytes: 1177714688
> total extent tree bytes: 94617600
> btree space waste bytes: 492319296
> file data blocks allocated: 3160334041088
>  referenced 3157401378816
>
> $ btrfs check --init-csum-tree /dev/mapper/luks0
> Creating a new CRC tree
> WARNING:
>
>         Do not use --repair unless you are advised to do so by a developer
>         or an experienced user, and then only after having accepted that no
>         fsck can successfully repair all types of filesystem corruption. Eg.
>         some software or hardware bugs can fatally damage a volume.
>         The operation will start in 10 seconds.
>         Use Ctrl-C to stop it.
> 10 9 8 7 6 5 4 3 2 1
> Starting repair.
> Opening filesystem to check...
> Checking filesystem on /dev/mapper/luks0
> UUID: 8c1dea88-fa40-4e6e-a1a1-214ea6bcdb00
> Reinitialize checksum tree
> Unable to find block group for 0
> Unable to find block group for 0
> Unable to find block group for 0
> ctree.c:2272: split_leaf: BUG_ON `1` triggered, value 1
> btrfs(+0x71e09)[0x559260a6de09]
> btrfs(btrfs_search_slot+0xfb1)[0x559260a6f431]
> btrfs(btrfs_csum_file_block+0x442)[0x559260a8b412]
> btrfs(+0x35bde)[0x559260a31bde]
> btrfs(+0x47ce4)[0x559260a43ce4]
> btrfs(main+0x94)[0x559260a110c4]
> /usr/lib/libc.so.6(__libc_start_main+0xf3)[0x7f212eb1f023]
> btrfs(_start+0x2e)[0x559260a1135e]
> [1]    848 abort      sudo btrfs check --init-csum-tree /dev/mapper/luks0


A crash is a bug, but at least it didn't make the problem worse. I'm
not sure if 5.4.1 can do any better, but it's the current version.



-- 
Chris Murphy

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2020-03-02  6:35 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-27  5:59 corrupt leaf 4e868df3
2020-02-27  6:30 ` Chris Murphy
2020-02-27  7:23   ` 4e868df3
2020-02-27  8:03     ` Chris Murphy
2020-02-27  8:25 ` Qu Wenruo
2020-02-28  2:28   ` 4e868df3
2020-02-28  3:01     ` Qu Wenruo
2020-02-29 15:47       ` 4e868df3
2020-03-01  0:41         ` Qu Wenruo
2020-03-01  6:11           ` 4e868df3
2020-03-01 11:40             ` Qu Wenruo
2020-03-02  6:35         ` Chris Murphy

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.