* Need help recovering broken RAID5 array (parent transid verify failed)
@ 2020-05-15  6:03 Emil Heimpel
  2020-05-15 21:46 ` Chris Murphy
  0 siblings, 1 reply; 8+ messages in thread
From: Emil Heimpel @ 2020-05-15  6:03 UTC (permalink / raw)
  To: linux-btrfs


Hi,

I hope this is the right place to ask for help. I am unable to mount my BTRFS array and wanted to know if it is possible to recover (some) data from it.

I have a RAID1-Metadata/RAID5-Data array consisting of 6 drives: 2x8TB, 5TB, 4TB and 2x3TB. It had been running fine for the last 3 months. Because I expanded it drive by drive, I wanted to do a full balance the other day. After around 40% completion (ca. 1.5 days) I noticed that one drive was missing from the array (if I remember correctly, it was the 5TB one). I tried to cancel the balance, but even after a few hours it hadn't cancelled, so I tried to reboot. That didn't work either, so I did a hard reset. Probably not the best idea, I know...
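For reference, the cancel attempt looked roughly like this (the mount point /mnt/array here is just a placeholder for wherever the array was mounted):

sudo btrfs balance status /mnt/array
sudo btrfs balance cancel /mnt/array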

My array looks like this:

[bluemond@BlueQ btrfslogs]$ sudo btrfs fi show
Label: none  uuid: 19b4f289-a87f-4ed8-8882-b0d03e014104
        Total devices 6 FS bytes used 15.47TiB
        devid    1 size 7.28TiB used 5.83TiB path /dev/sdc1
        devid    2 size 4.55TiB used 4.39TiB path /dev/sdg1
        devid    3 size 3.64TiB used 3.63TiB path /dev/sdf1
        devid    4 size 7.28TiB used 3.03TiB path /dev/sda1
        devid    5 size 2.73TiB used 2.22TiB path /dev/sde1
        devid    6 size 2.73TiB used 2.22TiB path /dev/sdd1

After the reboot all drives appeared again, but now I can't mount the array anymore. It gives me the following errors in dmesg:

[  858.554594] BTRFS info (device sdc1): disk space caching is enabled
[  858.554596] BTRFS info (device sdc1): has skinny extents
[  858.556165] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443 found 116484
[  858.556516] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443  found 116484
[  858.556527] BTRFS error (device sdc1): failed to read chunk root
[  858.588332] BTRFS error (device sdc1): open_ctree failed

Mounting with the usebackuproot option isn't working either:

[  793.730875] BTRFS info (device sdc1): trying to use backup root at mount time
[  793.730879] BTRFS info (device sdc1): disk space caching is enabled
[  793.730880] BTRFS info (device sdc1): has skinny extents
[  793.732479] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443 found 116484
[  793.732775] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443 found 116484
[  793.732785] BTRFS error (device sdc1): failed to read chunk root
[  793.756693] BTRFS error (device sdc1): open_ctree failed
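
For reference, the mount attempt was roughly the following (the mount point is just a placeholder):

sudo mount -o ro,usebackuproot /dev/sdc1 /mnt/array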

Btrfs restore isn't finding any data either:

[bluemond@BlueQ ~]$ sudo btrfs restore -xmSivD /dev/sda1 /btrfs/
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
Ignoring transid failure
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
Ignoring transid failure
This is a dry-run, no files are going to be restored
Done searching

Btrfs checks of each drive produce the following output:

[bluemond@BlueQ btrfslogs]$ sudo btrfs check /dev/sda1
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
Ignoring transid failure
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
Ignoring transid failure
[1/7] checking root items
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
Ignoring transid failure
leaf parent key incorrect 30122546839552
ERROR: failed to repair root items: Operation not permitted

[bluemond@BlueQ btrfslogs]$ sudo btrfs check /dev/sdc1
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
Ignoring transid failure
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
Ignoring transid failure
[1/7] checking root items
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
Ignoring transid failure
leaf parent key incorrect 30122546839552
ERROR: failed to repair root items: Operation not permitted

[bluemond@BlueQ btrfslogs]$ sudo btrfs check /dev/sdd1
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
Ignoring transid failure
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
Ignoring transid failure
[1/7] checking root items
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
Ignoring transid failure
leaf parent key incorrect 30122546839552
ERROR: failed to repair root items: Operation not permitted

[bluemond@BlueQ btrfslogs]$ sudo btrfs check /dev/sde1
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
Ignoring transid failure
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
Ignoring transid failure
[1/7] checking root items
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
Ignoring transid failure
leaf parent key incorrect 30122546839552
ERROR: failed to repair root items: Operation not permitted

[bluemond@BlueQ btrfslogs]$ sudo btrfs check /dev/sdf1
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
Ignoring transid failure
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
Ignoring transid failure
[1/7] checking root items
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
Ignoring transid failure
leaf parent key incorrect 30122546839552
ERROR: failed to repair root items: Operation not permitted

[bluemond@BlueQ btrfslogs]$ sudo btrfs check /dev/sdg1
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
Ignoring transid failure
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
Ignoring transid failure
[1/7] checking root items
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
parent transid verify failed on 30122546839552 wanted 116438 found 116458
Ignoring transid failure
leaf parent key incorrect 30122546839552
ERROR: failed to repair root items: Operation not permitted

I tried to read up on the issue, but I only found it mentioned on the gotchas page of the wiki, in Marc's blog, and in a thread about the stability of RAID56 on this mailing list, where it was stated that the recoverability ranges from 0 to 100%, but nowhere was it mentioned what to do when you encounter that problem. Is there anything I can do to at least recover some of my data from the array?

And how can I prevent it from happening again? Would using the new multi-parity raid1 for Metadata help?

Some info on my system:
I'm running arch on an ssd.
[bluemond@BlueQ btrfslogs]$ uname -a
Linux BlueQ 5.6.12-arch1-1 #1 SMP PREEMPT Sun, 10 May 2020 10:43:42 +0000 x86_64 GNU/Linux

[bluemond@BlueQ btrfslogs]$ btrfs --version
btrfs-progs v5.6

I'm not very familiar with mailing lists, so pardon me if I have done anything wrong!
Hope someone can give me more information about what to do now.

Thanks,
Emil



* Re: Need help recovering broken RAID5 array (parent transid verify failed)
  2020-05-15  6:03 Need help recovering broken RAID5 array (parent transid verify failed) Emil Heimpel
@ 2020-05-15 21:46 ` Chris Murphy
  2020-05-16  1:44   ` Emil Heimpel
  0 siblings, 1 reply; 8+ messages in thread
From: Chris Murphy @ 2020-05-15 21:46 UTC (permalink / raw)
  To: Emil Heimpel, Qu Wenruo; +Cc: Btrfs BTRFS

On Fri, May 15, 2020 at 12:03 AM Emil Heimpel
<broetchenrackete@gmail.com> wrote:
>
>
> Hi,
>
> I hope this is the right place to ask for help. I am unable to mount my BTRFS array and wanted to know if it is possible to recover (some) data from it.

Hi, yes it is!


>
> I have a RAID1-Metadata/RAID5-Data array consisting of 6 drives: 2x8TB, 5TB, 4TB and 2x3TB. It had been running fine for the last 3 months. Because I expanded it drive by drive, I wanted to do a full balance the other day. After around 40% completion (ca. 1.5 days) I noticed that one drive was missing from the array (if I remember correctly, it was the 5TB one). I tried to cancel the balance, but even after a few hours it hadn't cancelled, so I tried to reboot. That didn't work either, so I did a hard reset. Probably not the best idea, I know...

The file system should be power-fail safe (with some limited data
loss), but the hardware can betray everything. Your configuration is
better off thanks to the raid1 metadata.
>
> After the reboot all drives appeared again, but now I can't mount the array anymore. It gives me the following errors in dmesg:
>
> [  858.554594] BTRFS info (device sdc1): disk space caching is enabled
> [  858.554596] BTRFS info (device sdc1): has skinny extents
> [  858.556165] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443 found 116484
> [  858.556516] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443  found 116484
> [  858.556527] BTRFS error (device sdc1): failed to read chunk root
> [  858.588332] BTRFS error (device sdc1): open_ctree failed

The extent tree is damaged, but it's unexpected that a newer transid is
found than is wanted. Something happened out of order, and it affects both copies.

What do you get for:
# btrfs rescue super -v /dev/anydevice
# btrfs insp dump-s -fa /dev/anydevice
# btrfs insp dump-t -b 30122546839552 /dev/anydevice
# mount -o ro,nologreplay,degraded /dev/anydevice /mnt



>
> [bluemond@BlueQ btrfslogs]$ sudo btrfs check /dev/sdd1

For what it's worth, btrfs check does find all member devices, so you
only have to run check on any one of them. However, scrub is
different: you can run it individually per block device to work
around some performance problems raid56 has when it is run against the
volume's mount point.
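
For example, once the volume mounts again, per-device scrubs would look
something like this (device names taken from your fi show output; -B
just keeps the scrub in the foreground):

# btrfs scrub start -B /dev/sdc1
# btrfs scrub status /dev/sdc1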

> And how can I prevent it from happening again? Would using the new multi-parity raid1 for Metadata help?

Difficult to know yet what went wrong. Do you have dmesg/journalctl -k
for the time period from when the problem drive dropped out all the way
to the forced power off? It might give a hint. Before doing a forced
poweroff while writes are happening it might help to disable the write
cache on all the drives; or alternatively, always keep it disabled.
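
For example, something along these lines per drive, with hdparm as one
possibility (whether the setting survives a power cycle depends on the drive):

# hdparm -W 0 /dev/sdX   (disable the volatile write cache)
# hdparm -W /dev/sdX     (query the current setting)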

> I'm running arch on an ssd.
> [bluemond@BlueQ btrfslogs]$ uname -a
> Linux BlueQ 5.6.12-arch1-1 #1 SMP PREEMPT Sun, 10 May 2020 10:43:42 +0000 x86_64 GNU/Linux
>
> [bluemond@BlueQ btrfslogs]$ btrfs --version
> btrfs-progs v5.6

5.6.1 is current but I don't think there's anything in the minor
update that applies here.

Post that info and maybe a dev will have time to take a look. If it
does mount ro,degraded, take the chance to update backups, just in
case. Yeah, ~21TB will be really inconvenient to lose. Also, since
it's the weekend and there's some time, it might be useful to
have a btrfs image:

btrfs-image -ss -c9 -t4 /dev/anydevice ~/problemvolume.btrfs.bin

This file will be roughly half the size of the file system's metadata. I
guess you could have around 140G of metadata, depending on the nodesize
chosen at mkfs time and how many small files this filesystem has.

Still another option that might make it possible to mount, if the above
doesn't work: build the kernel with this patch
https://patchwork.kernel.org/project/linux-btrfs/list/?series=170715

Mount using -o ro,nologreplay,rescue=skipbg

This also doesn't actually fix the problem; it just might make it
possible to mount the file system, mainly for updating backups in case
it's not possible to fix.


--
Chris Murphy


* Re: Need help recovering broken RAID5 array (parent transid verify failed)
  2020-05-15 21:46 ` Chris Murphy
@ 2020-05-16  1:44   ` Emil Heimpel
  2020-05-20 11:56     ` Emil Heimpel
  0 siblings, 1 reply; 8+ messages in thread
From: Emil Heimpel @ 2020-05-16  1:44 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Qu Wenruo, Btrfs BTRFS

[-- Attachment #1: Type: text/plain, Size: 5722 bytes --]

Hi,

Thanks for the answer. I attached the output of the commands you requested as a txt file. Unfortunately mounting didn't work, even with the kernel patch and skipbg option.
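
The mount attempts were along these lines (the mount point is a placeholder):

sudo mount -o ro,nologreplay,degraded /dev/sdc1 /mnt/array
sudo mount -o ro,nologreplay,rescue=skipbg /dev/sdc1 /mnt/array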

I will try to find the journalctl from when it happened.

Emil

May 15, 2020 23:46:55 Chris Murphy <lists@colorremedies.com>:

> On Fri, May 15, 2020 at 12:03 AM Emil Heimpel
> <broetchenrackete@gmail.com> wrote:
> >
> > Hi,
> >
> > I hope this is the right place to ask for help. I am unable to mount my BTRFS array and wanted to know if it is possible to recover (some) data from it.
>
> Hi, yes it is!
>
> > I have a RAID1-Metadata/RAID5-Data array consisting of 6 drives: 2x8TB, 5TB, 4TB and 2x3TB. It had been running fine for the last 3 months. Because I expanded it drive by drive, I wanted to do a full balance the other day. After around 40% completion (ca. 1.5 days) I noticed that one drive was missing from the array (if I remember correctly, it was the 5TB one). I tried to cancel the balance, but even after a few hours it hadn't cancelled, so I tried to reboot. That didn't work either, so I did a hard reset. Probably not the best idea, I know...
>
> The file system should be power-fail safe (with some limited data
> loss), but the hardware can betray everything. Your configuration is
> better off thanks to the raid1 metadata.
>
> > After the reboot all drives appeared again, but now I can't mount the array anymore. It gives me the following errors in dmesg:
> >
> > [  858.554594] BTRFS info (device sdc1): disk space caching is enabled
> > [  858.554596] BTRFS info (device sdc1): has skinny extents
> > [  858.556165] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443 found 116484
> > [  858.556516] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443  found 116484
> > [  858.556527] BTRFS error (device sdc1): failed to read chunk root
> > [  858.588332] BTRFS error (device sdc1): open_ctree failed
>
> The extent tree is damaged, but it's unexpected that a newer transid is
> found than is wanted. Something happened out of order, and it affects both copies.
>
> What do you get for:
> # btrfs rescue super -v /dev/anydevice
> # btrfs insp dump-s -fa /dev/anydevice
> # btrfs insp dump-t -b 30122546839552 /dev/anydevice
> # mount -o ro,nologreplay,degraded /dev/anydevice /mnt
>
> > [bluemond@BlueQ btrfslogs]$ sudo btrfs check /dev/sdd1
>
> For what it's worth, btrfs check does find all member devices, so you
> only have to run check on any one of them. However, scrub is
> different: you can run it individually per block device to work
> around some performance problems raid56 has when it is run against the
> volume's mount point.
>
> > And how can I prevent it from happening again? Would using the new multi-parity raid1 for Metadata help?
>
> Difficult to know yet what went wrong. Do you have dmesg/journalctl -k
> for the time period from when the problem drive dropped out all the way
> to the forced power off? It might give a hint. Before doing a forced
> poweroff while writes are happening it might help to disable the write
> cache on all the drives; or alternatively, always keep it disabled.
>
> > I'm running arch on an ssd.
> > [bluemond@BlueQ btrfslogs]$ uname -a
> > Linux BlueQ 5.6.12-arch1-1 #1 SMP PREEMPT Sun, 10 May 2020 10:43:42 +0000 x86_64 GNU/Linux
> >
> > [bluemond@BlueQ btrfslogs]$ btrfs --version
> > btrfs-progs v5.6
>
> 5.6.1 is current but I don't think there's anything in the minor
> update that applies here.
>
> Post that info and maybe a dev will have time to take a look. If it
> does mount ro,degraded, take the chance to update backups, just in
> case. Yeah, ~21TB will be really inconvenient to lose. Also, since
> it's the weekend and there's some time, it might be useful to
> have a btrfs image:
>
> btrfs-image -ss -c9 -t4 /dev/anydevice ~/problemvolume.btrfs.bin
>
> This file will be roughly half the size of the file system's metadata. I
> guess you could have around 140G of metadata, depending on the nodesize
> chosen at mkfs time and how many small files this filesystem has.
>
> Still another option that might make it possible to mount, if the above
> doesn't work: build the kernel with this patch
> https://patchwork.kernel.org/project/linux-btrfs/list/?series=170715
>
> Mount using -o ro,nologreplay,rescue=skipbg
>
> This also doesn't actually fix the problem; it just might make it
> possible to mount the file system, mainly for updating backups in case
> it's not possible to fix.
>
> --
> Chris Murphy

[-- Attachment #2: btrfslog.txt --]
[-- Type: text/plain, Size: 62053 bytes --]

Last login: Fri May 15 05:05:01 2020 from 192.168.1.35
[bluemond@BlueQ ~]$ sudo btrfs rescue super -v /dev/sda1
All Devices:
        Device: id = 2, name = /dev/sdg1
        Device: id = 3, name = /dev/sdf1
        Device: id = 6, name = /dev/sdd1
        Device: id = 5, name = /dev/sde1
        Device: id = 1, name = /dev/sdc1
        Device: id = 4, name = /dev/sda1

Before Recovering:
        [All good supers]:
                device name = /dev/sdg1
                superblock bytenr = 65536

                device name = /dev/sdg1
                superblock bytenr = 67108864

                device name = /dev/sdg1
                superblock bytenr = 274877906944

                device name = /dev/sdf1
                superblock bytenr = 65536

                device name = /dev/sdf1
                superblock bytenr = 67108864

                device name = /dev/sdf1
                superblock bytenr = 274877906944

                device name = /dev/sdd1
                superblock bytenr = 65536

                device name = /dev/sdd1
                superblock bytenr = 67108864

                device name = /dev/sdd1
                superblock bytenr = 274877906944

                device name = /dev/sde1
                superblock bytenr = 65536

                device name = /dev/sde1
                superblock bytenr = 67108864

                device name = /dev/sde1
                superblock bytenr = 274877906944

                device name = /dev/sdc1
                superblock bytenr = 65536

                device name = /dev/sdc1
                superblock bytenr = 67108864

                device name = /dev/sdc1
                superblock bytenr = 274877906944

                device name = /dev/sda1
                superblock bytenr = 65536

                device name = /dev/sda1
                superblock bytenr = 67108864

                device name = /dev/sda1
                superblock bytenr = 274877906944

        [All bad supers]:

All supers are valid, no need to recover
[bluemond@BlueQ ~]$ sudo btrfs insp dump-s -fa /dev/sda1
superblock: bytenr=65536, device=/dev/sda1
---------------------------------------------------------
csum_type               0 (crc32c)
csum_size               4
csum                    0x0774dbf0 [match]
bytenr                  65536
flags                   0x1
                        ( WRITTEN )
magic                   _BHRfS_M [match]
fsid                    19b4f289-a87f-4ed8-8882-b0d03e014104
metadata_uuid           19b4f289-a87f-4ed8-8882-b0d03e014104
label
generation              116443
root                    30122565173248
sys_array_size          129
chunk_root_generation   116443
root_level              1
chunk_root              23219912048640
chunk_root_level        1
log_root                0
log_root_transid        0
log_root_level          0
total_bytes             31006074101760
bytes_used              17011224203264
sectorsize              4096
nodesize                16384
leafsize (deprecated)   16384
stripesize              4096
root_dir                6
num_devices             6
compat_flags            0x0
compat_ro_flags         0x0
incompat_flags          0x1e1
                        ( MIXED_BACKREF |
                          BIG_METADATA |
                          EXTENDED_IREF |
                          RAID56 |
                          SKINNY_METADATA )
cache_generation        116443
uuid_tree_generation    116443
dev_item.uuid           c2b6d51b-f1bc-4e8a-ace5-9acb5b61e1b2
dev_item.fsid           19b4f289-a87f-4ed8-8882-b0d03e014104 [match]
dev_item.type           0
dev_item.total_bytes    8001562152960
dev_item.bytes_used     3334556811264
dev_item.io_align       4096
dev_item.io_width       4096
dev_item.sector_size    4096
dev_item.devid          4
dev_item.dev_group      0
dev_item.seek_speed     0
dev_item.bandwidth      0
dev_item.generation     0
sys_chunk_array[2048]:
        item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 23219912048640)
                length 33554432 owner 2 stripe_len 65536 type SYSTEM|RAID1
                io_align 65536 io_width 65536 sector_size 4096
                num_stripes 2 sub_stripes 1
                        stripe 0 devid 4 offset 1346487779328
                        dev_uuid c2b6d51b-f1bc-4e8a-ace5-9acb5b61e1b2
                        stripe 1 devid 1 offset 4429221527552
                        dev_uuid 6ce59a89-dacb-44dc-9823-8f6a16edd335
backup_roots[4]:
        backup 0:
                backup_tree_root:       30122549657600  gen: 116441     level: 1
                backup_chunk_root:      23219912065024  gen: 116441     level: 1
                backup_extent_root:     30122549706752  gen: 116441     level: 2
                backup_fs_root:         30122471063552  gen: 116437     level: 2
                backup_dev_root:        30122549575680  gen: 116441     level: 1
                backup_csum_root:       30122549870592  gen: 116441     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17009168805888
                backup_num_devices:     6

        backup 1:
                backup_tree_root:       30122552590336  gen: 116442     level: 1
                backup_chunk_root:      23219912065024  gen: 116441     level: 1
                backup_extent_root:     30122554408960  gen: 116442     level: 2
                backup_fs_root:         30122471063552  gen: 116437     level: 2
                backup_dev_root:        30122549575680  gen: 116441     level: 1
                backup_csum_root:       30122555932672  gen: 116443     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17010697097216
                backup_num_devices:     6

        backup 2:
                backup_tree_root:       30122565173248  gen: 116443     level: 1
                backup_chunk_root:      23219912048640  gen: 116484     level: 1
                backup_extent_root:     30122559078400  gen: 116492     level: 0
                backup_fs_root:         30122471063552  gen: 116492     level: 0
                backup_dev_root:        30122559127552  gen: 116492     level: 0
                backup_csum_root:       30122555932672  gen: 116443     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17011224203264
                backup_num_devices:     6

        backup 3:
                backup_tree_root:       30122548707328  gen: 116440     level: 1
                backup_chunk_root:      23219912048640  gen: 116431     level: 1
                backup_extent_root:     30122548723712  gen: 116440     level: 2
                backup_fs_root:         30122471063552  gen: 116437     level: 2
                backup_dev_root:        30122294935552  gen: 116431     level: 1
                backup_csum_root:       30122548822016  gen: 116440     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17009170755584
                backup_num_devices:     6


superblock: bytenr=67108864, device=/dev/sda1
---------------------------------------------------------
csum_type               0 (crc32c)
csum_size               4
csum                    0xa715f33e [match]
bytenr                  67108864
flags                   0x1
                        ( WRITTEN )
magic                   _BHRfS_M [match]
fsid                    19b4f289-a87f-4ed8-8882-b0d03e014104
metadata_uuid           19b4f289-a87f-4ed8-8882-b0d03e014104
label
generation              116443
root                    30122565173248
sys_array_size          129
chunk_root_generation   116443
root_level              1
chunk_root              23219912048640
chunk_root_level        1
log_root                0
log_root_transid        0
log_root_level          0
total_bytes             31006074101760
bytes_used              17011224203264
sectorsize              4096
nodesize                16384
leafsize (deprecated)   16384
stripesize              4096
root_dir                6
num_devices             6
compat_flags            0x0
compat_ro_flags         0x0
incompat_flags          0x1e1
                        ( MIXED_BACKREF |
                          BIG_METADATA |
                          EXTENDED_IREF |
                          RAID56 |
                          SKINNY_METADATA )
cache_generation        116443
uuid_tree_generation    116443
dev_item.uuid           c2b6d51b-f1bc-4e8a-ace5-9acb5b61e1b2
dev_item.fsid           19b4f289-a87f-4ed8-8882-b0d03e014104 [match]
dev_item.type           0
dev_item.total_bytes    8001562152960
dev_item.bytes_used     3334556811264
dev_item.io_align       4096
dev_item.io_width       4096
dev_item.sector_size    4096
dev_item.devid          4
dev_item.dev_group      0
dev_item.seek_speed     0
dev_item.bandwidth      0
dev_item.generation     0
sys_chunk_array[2048]:
        item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 23219912048640)
                length 33554432 owner 2 stripe_len 65536 type SYSTEM|RAID1
                io_align 65536 io_width 65536 sector_size 4096
                num_stripes 2 sub_stripes 1
                        stripe 0 devid 4 offset 1346487779328
                        dev_uuid c2b6d51b-f1bc-4e8a-ace5-9acb5b61e1b2
                        stripe 1 devid 1 offset 4429221527552
                        dev_uuid 6ce59a89-dacb-44dc-9823-8f6a16edd335
backup_roots[4]:
        backup 0:
                backup_tree_root:       30122549657600  gen: 116441     level: 1
                backup_chunk_root:      23219912065024  gen: 116441     level: 1
                backup_extent_root:     30122549706752  gen: 116441     level: 2
                backup_fs_root:         30122471063552  gen: 116437     level: 2
                backup_dev_root:        30122549575680  gen: 116441     level: 1
                backup_csum_root:       30122549870592  gen: 116441     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17009168805888
                backup_num_devices:     6

        backup 1:
                backup_tree_root:       30122552590336  gen: 116442     level: 1
                backup_chunk_root:      23219912065024  gen: 116441     level: 1
                backup_extent_root:     30122554408960  gen: 116442     level: 2
                backup_fs_root:         30122471063552  gen: 116437     level: 2
                backup_dev_root:        30122549575680  gen: 116441     level: 1
                backup_csum_root:       30122555932672  gen: 116443     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17010697097216
                backup_num_devices:     6

        backup 2:
                backup_tree_root:       30122565173248  gen: 116443     level: 1
                backup_chunk_root:      23219912048640  gen: 116484     level: 1
                backup_extent_root:     30122559078400  gen: 116492     level: 0
                backup_fs_root:         30122471063552  gen: 116492     level: 0
                backup_dev_root:        30122559127552  gen: 116492     level: 0
                backup_csum_root:       30122555932672  gen: 116443     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17011224203264
                backup_num_devices:     6

        backup 3:
                backup_tree_root:       30122548707328  gen: 116440     level: 1
                backup_chunk_root:      23219912048640  gen: 116431     level: 1
                backup_extent_root:     30122548723712  gen: 116440     level: 2
                backup_fs_root:         30122471063552  gen: 116437     level: 2
                backup_dev_root:        30122294935552  gen: 116431     level: 1
                backup_csum_root:       30122548822016  gen: 116440     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17009170755584
                backup_num_devices:     6


superblock: bytenr=274877906944, device=/dev/sda1
---------------------------------------------------------
csum_type               0 (crc32c)
csum_size               4
csum                    0x5a92a50f [match]
bytenr                  274877906944
flags                   0x1
                        ( WRITTEN )
magic                   _BHRfS_M [match]
fsid                    19b4f289-a87f-4ed8-8882-b0d03e014104
metadata_uuid           19b4f289-a87f-4ed8-8882-b0d03e014104
label
generation              116443
root                    30122565173248
sys_array_size          129
chunk_root_generation   116443
root_level              1
chunk_root              23219912048640
chunk_root_level        1
log_root                0
log_root_transid        0
log_root_level          0
total_bytes             31006074101760
bytes_used              17011224203264
sectorsize              4096
nodesize                16384
leafsize (deprecated)   16384
stripesize              4096
root_dir                6
num_devices             6
compat_flags            0x0
compat_ro_flags         0x0
incompat_flags          0x1e1
                        ( MIXED_BACKREF |
                          BIG_METADATA |
                          EXTENDED_IREF |
                          RAID56 |
                          SKINNY_METADATA )
cache_generation        116443
uuid_tree_generation    116443
dev_item.uuid           c2b6d51b-f1bc-4e8a-ace5-9acb5b61e1b2
dev_item.fsid           19b4f289-a87f-4ed8-8882-b0d03e014104 [match]
dev_item.type           0
dev_item.total_bytes    8001562152960
dev_item.bytes_used     3334556811264
dev_item.io_align       4096
dev_item.io_width       4096
dev_item.sector_size    4096
dev_item.devid          4
dev_item.dev_group      0
dev_item.seek_speed     0
dev_item.bandwidth      0
dev_item.generation     0
sys_chunk_array[2048]:
        item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 23219912048640)
                length 33554432 owner 2 stripe_len 65536 type SYSTEM|RAID1
                io_align 65536 io_width 65536 sector_size 4096
                num_stripes 2 sub_stripes 1
                        stripe 0 devid 4 offset 1346487779328
                        dev_uuid c2b6d51b-f1bc-4e8a-ace5-9acb5b61e1b2
                        stripe 1 devid 1 offset 4429221527552
                        dev_uuid 6ce59a89-dacb-44dc-9823-8f6a16edd335
backup_roots[4]:
        backup 0:
                backup_tree_root:       30122549657600  gen: 116441     level: 1
                backup_chunk_root:      23219912065024  gen: 116441     level: 1
                backup_extent_root:     30122549706752  gen: 116441     level: 2
                backup_fs_root:         30122471063552  gen: 116437     level: 2
                backup_dev_root:        30122549575680  gen: 116441     level: 1
                backup_csum_root:       30122549870592  gen: 116441     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17009168805888
                backup_num_devices:     6

        backup 1:
                backup_tree_root:       30122552590336  gen: 116442     level: 1
                backup_chunk_root:      23219912065024  gen: 116441     level: 1
                backup_extent_root:     30122554408960  gen: 116442     level: 2
                backup_fs_root:         30122471063552  gen: 116437     level: 2
                backup_dev_root:        30122549575680  gen: 116441     level: 1
                backup_csum_root:       30122555932672  gen: 116443     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17010697097216
                backup_num_devices:     6

        backup 2:
                backup_tree_root:       30122565173248  gen: 116443     level: 1
                backup_chunk_root:      23219912048640  gen: 116484     level: 1
                backup_extent_root:     30122559078400  gen: 116492     level: 0
                backup_fs_root:         30122471063552  gen: 116492     level: 0
                backup_dev_root:        30122559127552  gen: 116492     level: 0
                backup_csum_root:       30122555932672  gen: 116443     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17011224203264
                backup_num_devices:     6

        backup 3:
                backup_tree_root:       30122548707328  gen: 116440     level: 1
                backup_chunk_root:      23219912048640  gen: 116431     level: 1
                backup_extent_root:     30122548723712  gen: 116440     level: 2
                backup_fs_root:         30122471063552  gen: 116437     level: 2
                backup_dev_root:        30122294935552  gen: 116431     level: 1
                backup_csum_root:       30122548822016  gen: 116440     level: 3
                backup_total_bytes:     31006074101760
                backup_bytes_used:      17009170755584
                backup_num_devices:     6



[bluemond@BlueQ ~]$ sudo btrfs insp dump-t -b 30122546839552 /dev/sda1
btrfs-progs v5.6
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
parent transid verify failed on 23219912048640 wanted 116443 found 116484
Ignoring transid failure
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
parent transid verify failed on 30122559078400 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
parent transid verify failed on 30122559127552 wanted 116443 found 116492
Ignoring transid failure
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
parent transid verify failed on 30122471063552 wanted 116437 found 116492
Ignoring transid failure
leaf 30122546839552 items 220 free space 2601 generation 116458 owner EXTENT_TREE
leaf 30122546839552 flags 0x1(WRITTEN) backref revision 1
fs uuid 19b4f289-a87f-4ed8-8882-b0d03e014104
chunk uuid 3115c997-bf8b-4c75-b5a3-40d82169d3f0
        item 0 key (27674016481280 EXTENT_ITEM 6647808) itemoff 16246 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020555120640 count 1
        item 1 key (27674023129088 EXTENT_ITEM 167936) itemoff 16209 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 4923990802432 count 1
        item 2 key (27674023297024 EXTENT_ITEM 7057408) itemoff 16172 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020555120640 count 1
        item 3 key (27674030374912 EXTENT_ITEM 7077888) itemoff 16135 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020555120640 count 1
        item 4 key (27674037452800 EXTENT_ITEM 8818688) itemoff 16098 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020555120640 count 1
        item 5 key (27674046365696 EXTENT_ITEM 4423680) itemoff 16061 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020555120640 count 1
        item 6 key (27674050822144 EXTENT_ITEM 5312512) itemoff 16024 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020555120640 count 1
        item 7 key (27674056134656 EXTENT_ITEM 192512) itemoff 15987 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122676994048 count 1
        item 8 key (27674056327168 EXTENT_ITEM 9228288) itemoff 15950 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 9 key (27674065555456 EXTENT_ITEM 208896) itemoff 15913 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280214827008 count 1
        item 10 key (27674065764352 EXTENT_ITEM 6467584) itemoff 15876 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 11 key (27674072317952 EXTENT_ITEM 8048640) itemoff 15839 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 12 key (27674080444416 EXTENT_ITEM 8077312) itemoff 15802 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 13 key (27674088570880 EXTENT_ITEM 6778880) itemoff 15765 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 14 key (27674095386624 EXTENT_ITEM 6500352) itemoff 15728 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 15 key (27674101940224 EXTENT_ITEM 5480448) itemoff 15691 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 16 key (27674107445248 EXTENT_ITEM 4218880) itemoff 15654 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 17 key (27674111664128 EXTENT_ITEM 217088) itemoff 15617 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122159407104 count 1
        item 18 key (27674111901696 EXTENT_ITEM 6696960) itemoff 15580 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 19 key (27674118598656 EXTENT_ITEM 65536) itemoff 15543 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279284379648 count 1
        item 20 key (27674118717440 EXTENT_ITEM 6647808) itemoff 15506 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 21 key (27674125365248 EXTENT_ITEM 167936) itemoff 15469 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279096455168 count 1
        item 22 key (27674125533184 EXTENT_ITEM 6176768) itemoff 15432 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 23 key (27674131824640 EXTENT_ITEM 6995968) itemoff 15395 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 24 key (27674138902528 EXTENT_ITEM 8138752) itemoff 15358 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 25 key (27674147041280 EXTENT_ITEM 249856) itemoff 15321 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122234511360 count 1
        item 26 key (27674147291136 EXTENT_ITEM 7839744) itemoff 15284 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 27 key (27674155155456 EXTENT_ITEM 6496256) itemoff 15247 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 28 key (27674161709056 EXTENT_ITEM 5644288) itemoff 15210 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 29 key (27674167476224 EXTENT_ITEM 4452352) itemoff 15173 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 30 key (27674171932672 EXTENT_ITEM 3112960) itemoff 15136 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 31 key (27674175078400 EXTENT_ITEM 8372224) itemoff 15099 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 32 key (27674183467008 EXTENT_ITEM 7749632) itemoff 15062 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 33 key (27674191331328 EXTENT_ITEM 6160384) itemoff 15025 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 34 key (27674197491712 EXTENT_ITEM 131072) itemoff 14988 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279284379648 count 1
        item 35 key (27674197622784 EXTENT_ITEM 5574656) itemoff 14951 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 36 key (27674203197440 EXTENT_ITEM 192512) itemoff 14914 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122676764672 count 1
        item 37 key (27674203389952 EXTENT_ITEM 6529024) itemoff 14877 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 38 key (27674209943552 EXTENT_ITEM 6590464) itemoff 14840 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 39 key (27674216534016 EXTENT_ITEM 225280) itemoff 14803 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122163453952 count 1
        item 40 key (27674216759296 EXTENT_ITEM 5660672) itemoff 14766 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020394491904 count 1
        item 41 key (27674222526464 EXTENT_ITEM 4128768) itemoff 14729 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 42 key (27674226720768 EXTENT_ITEM 3756032) itemoff 14692 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 43 key (27674230476800 EXTENT_ITEM 176128) itemoff 14655 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280135282688 count 1
        item 44 key (27674230652928 EXTENT_ITEM 5263360) itemoff 14618 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 45 key (27674235916288 EXTENT_ITEM 241664) itemoff 14581 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122159407104 count 1
        item 46 key (27674236157952 EXTENT_ITEM 4960256) itemoff 14544 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 47 key (27674241138688 EXTENT_ITEM 4894720) itemoff 14507 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 48 key (27674246119424 EXTENT_ITEM 5013504) itemoff 14470 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 49 key (27674251132928 EXTENT_ITEM 221184) itemoff 14433 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122163453952 count 1
        item 50 key (27674251362304 EXTENT_ITEM 4431872) itemoff 14396 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 51 key (27674255818752 EXTENT_ITEM 4894720) itemoff 14359 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 52 key (27674260799488 EXTENT_ITEM 4292608) itemoff 14322 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 53 key (27674265092096 EXTENT_ITEM 163840) itemoff 14285 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020396523520 count 1
        item 54 key (27674265255936 EXTENT_ITEM 6565888) itemoff 14248 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 55 key (27674271821824 EXTENT_ITEM 229376) itemoff 14211 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280209354752 count 1
        item 56 key (27674272071680 EXTENT_ITEM 5505024) itemoff 14174 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 57 key (27674277576704 EXTENT_ITEM 5976064) itemoff 14137 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 58 key (27674283606016 EXTENT_ITEM 4964352) itemoff 14100 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 59 key (27674288586752 EXTENT_ITEM 5480448) itemoff 14063 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 60 key (27674294091776 EXTENT_ITEM 4526080) itemoff 14026 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 61 key (27674298617856 EXTENT_ITEM 192512) itemoff 13989 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020441022464 count 1
        item 62 key (27674298810368 EXTENT_ITEM 6496256) itemoff 13952 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 63 key (27674305363968 EXTENT_ITEM 16154624) itemoff 13915 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279957565440 count 1
        item 64 key (27674321616896 EXTENT_ITEM 9887744) itemoff 13878 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 65 key (27674331578368 EXTENT_ITEM 7372800) itemoff 13841 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 66 key (27674338951168 EXTENT_ITEM 225280) itemoff 13804 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280209354752 count 1
        item 67 key (27674339180544 EXTENT_ITEM 8847360) itemoff 13767 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 68 key (27674348093440 EXTENT_ITEM 7811072) itemoff 13730 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 69 key (27674355957760 EXTENT_ITEM 8171520) itemoff 13693 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 70 key (27674364129280 EXTENT_ITEM 217088) itemoff 13656 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122233790464 count 1
        item 71 key (27674364346368 EXTENT_ITEM 7016448) itemoff 13619 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 72 key (27674371424256 EXTENT_ITEM 10039296) itemoff 13582 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 73 key (27674381463552 EXTENT_ITEM 184320) itemoff 13545 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 7560141701120 count 1
        item 74 key (27674381647872 EXTENT_ITEM 8077312) itemoff 13508 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 75 key (27674389725184 EXTENT_ITEM 36864) itemoff 13471 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020197015552 count 1
        item 76 key (27674389774336 EXTENT_ITEM 6795264) itemoff 13434 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 77 key (27674396590080 EXTENT_ITEM 7417856) itemoff 13397 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 78 key (27674404007936 EXTENT_ITEM 184320) itemoff 13360 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 7560141701120 count 1
        item 79 key (27674404192256 EXTENT_ITEM 7131136) itemoff 13323 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 80 key (27674411323392 EXTENT_ITEM 200704) itemoff 13286 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280220053504 count 1
        item 81 key (27674411532288 EXTENT_ITEM 8908800) itemoff 13249 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 82 key (27674420445184 EXTENT_ITEM 9134080) itemoff 13212 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 83 key (27674429620224 EXTENT_ITEM 7712768) itemoff 13175 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 84 key (27674437332992 EXTENT_ITEM 151552) itemoff 13138 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28019644203008 count 1
        item 85 key (27674437484544 EXTENT_ITEM 8093696) itemoff 13101 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 86 key (27674445611008 EXTENT_ITEM 4902912) itemoff 13064 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 87 key (27674450591744 EXTENT_ITEM 5885952) itemoff 13027 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 88 key (27674456477696 EXTENT_ITEM 139264) itemoff 12990 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020197015552 count 1
        item 89 key (27674456621056 EXTENT_ITEM 7286784) itemoff 12953 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 90 key (27674463961088 EXTENT_ITEM 9035776) itemoff 12916 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 91 key (27674472996864 EXTENT_ITEM 139264) itemoff 12879 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020225720320 count 1
        item 92 key (27674473136128 EXTENT_ITEM 7618560) itemoff 12842 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 93 key (27674480754688 EXTENT_ITEM 245760) itemoff 12805 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280209354752 count 1
        item 94 key (27674481000448 EXTENT_ITEM 9031680) itemoff 12768 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 95 key (27674490032128 EXTENT_ITEM 143360) itemoff 12731 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280506150912 count 1
        item 96 key (27674490175488 EXTENT_ITEM 8699904) itemoff 12694 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 97 key (27674498875392 EXTENT_ITEM 212992) itemoff 12657 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122233921536 count 1
        item 98 key (27674499088384 EXTENT_ITEM 8237056) itemoff 12620 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 99 key (27674507325440 EXTENT_ITEM 151552) itemoff 12583 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 24310685941760 count 1
        item 100 key (27674507476992 EXTENT_ITEM 8601600) itemoff 12546 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 101 key (27674516127744 EXTENT_ITEM 6819840) itemoff 12509 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280504414208 count 1
        item 102 key (27674522947584 EXTENT_ITEM 258048) itemoff 12472 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279725469696 count 1
        item 103 key (27674523205632 EXTENT_ITEM 10412032) itemoff 12435 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280516390912 count 1
        item 104 key (27674533691392 EXTENT_ITEM 12234752) itemoff 12398 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280516390912 count 1
        item 105 key (27674546012160 EXTENT_ITEM 9891840) itemoff 12361 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280516390912 count 1
        item 106 key (27674555973632 EXTENT_ITEM 8740864) itemoff 12324 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280516390912 count 1
        item 107 key (27674564714496 EXTENT_ITEM 172032) itemoff 12287 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279428198400 count 1
        item 108 key (27674564886528 EXTENT_ITEM 8663040) itemoff 12250 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280516390912 count 1
        item 109 key (27674573549568 EXTENT_ITEM 249856) itemoff 12213 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280209354752 count 1
        item 110 key (27674573799424 EXTENT_ITEM 10661888) itemoff 12176 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280516390912 count 1
        item 111 key (27674584547328 EXTENT_ITEM 9232384) itemoff 12139 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280516390912 count 1
        item 112 key (27674593779712 EXTENT_ITEM 204800) itemoff 12102 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280986841088 count 1
        item 113 key (27674593984512 EXTENT_ITEM 8929280) itemoff 12065 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280516390912 count 1
        item 114 key (27674602913792 EXTENT_ITEM 237568) itemoff 12028 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 5863440744448 count 1
        item 115 key (27674603159552 EXTENT_ITEM 7999488) itemoff 11991 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280516390912 count 1
        item 116 key (27674611286016 EXTENT_ITEM 7749632) itemoff 11954 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 117 key (27674619150336 EXTENT_ITEM 6713344) itemoff 11917 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 118 key (27674625966080 EXTENT_ITEM 7634944) itemoff 11880 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 119 key (27674633601024 EXTENT_ITEM 225280) itemoff 11843 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 5863440744448 count 1
        item 120 key (27674633830400 EXTENT_ITEM 8347648) itemoff 11806 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 121 key (27674642219008 EXTENT_ITEM 8822784) itemoff 11769 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 122 key (27674651131904 EXTENT_ITEM 7557120) itemoff 11732 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 123 key (27674658734080 EXTENT_ITEM 7606272) itemoff 11695 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 124 key (27674666340352 EXTENT_ITEM 258048) itemoff 11658 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280625754112 count 1
        item 125 key (27674666598400 EXTENT_ITEM 6127616) itemoff 11621 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 126 key (27674672726016 EXTENT_ITEM 163840) itemoff 11584 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28019846856704 count 1
        item 127 key (27674672889856 EXTENT_ITEM 7094272) itemoff 11547 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 128 key (27674679984128 EXTENT_ITEM 245760) itemoff 11510 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122233790464 count 1
        item 129 key (27674680229888 EXTENT_ITEM 11227136) itemoff 11473 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 130 key (27674691502080 EXTENT_ITEM 8495104) itemoff 11436 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 131 key (27674699997184 EXTENT_ITEM 155648) itemoff 11399 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280529727488 count 1
        item 132 key (27674700152832 EXTENT_ITEM 9191424) itemoff 11362 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 133 key (27674709344256 EXTENT_ITEM 221184) itemoff 11325 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122233790464 count 1
        item 134 key (27674709590016 EXTENT_ITEM 10739712) itemoff 11288 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 135 key (27674720337920 EXTENT_ITEM 9732096) itemoff 11251 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 136 key (27674730070016 EXTENT_ITEM 221184) itemoff 11214 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122233790464 count 1
        item 137 key (27674730299392 EXTENT_ITEM 9375744) itemoff 11177 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 138 key (27674739736576 EXTENT_ITEM 11051008) itemoff 11140 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 139 key (27674750787584 EXTENT_ITEM 217088) itemoff 11103 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122234118144 count 1
        item 140 key (27674751008768 EXTENT_ITEM 12079104) itemoff 11066 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 141 key (27674763087872 EXTENT_ITEM 241664) itemoff 11029 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122167648256 count 1
        item 142 key (27674763329536 EXTENT_ITEM 7606272) itemoff 10992 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 143 key (27674770935808 EXTENT_ITEM 258048) itemoff 10955 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29279703105536 count 1
        item 144 key (27674771193856 EXTENT_ITEM 12492800) itemoff 10918 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 145 key (27674783776768 EXTENT_ITEM 9842688) itemoff 10881 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280428163072 count 1
        item 146 key (27674793738240 EXTENT_ITEM 737280) itemoff 10844 itemsize 37
                refs 1 gen 107109 flags DATA
                shared data backref parent 29279105941504 count 1
        item 147 key (27674794524672 EXTENT_ITEM 245760) itemoff 10807 itemsize 37
                refs 1 gen 106489 flags DATA
                shared data backref parent 30122147692544 count 1
        item 148 key (27674794786816 EXTENT_ITEM 8060928) itemoff 10770 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280146391040 count 1
        item 149 key (27674802913280 EXTENT_ITEM 1736704) itemoff 10733 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 28019641319424 count 1
        item 150 key (27674804649984 EXTENT_ITEM 167936) itemoff 10696 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 23219986300928 count 1
        item 151 key (27674804817920 EXTENT_ITEM 176128) itemoff 10659 itemsize 37
                refs 1 gen 104882 flags DATA
                shared data backref parent 28020240449536 count 1
        item 152 key (27674805010432 EXTENT_ITEM 9310208) itemoff 10622 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280146391040 count 1
        item 153 key (27674814447616 EXTENT_ITEM 1048576) itemoff 10585 itemsize 37
                refs 1 gen 104365 flags DATA
                shared data backref parent 23220186726400 count 1
        item 154 key (27674815496192 EXTENT_ITEM 7536640) itemoff 10548 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280146391040 count 1
        item 155 key (27674823098368 EXTENT_ITEM 9768960) itemoff 10511 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280146391040 count 1
        item 156 key (27674832867328 EXTENT_ITEM 192512) itemoff 10474 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 28020528283648 count 1
        item 157 key (27674833059840 EXTENT_ITEM 21815296) itemoff 10437 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280146391040 count 1
        item 158 key (27674854875136 EXTENT_ITEM 200704) itemoff 10400 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280986841088 count 1
        item 159 key (27674855079936 EXTENT_ITEM 1667072) itemoff 10363 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 28019641319424 count 1
        item 160 key (27674856747008 EXTENT_ITEM 200704) itemoff 10326 itemsize 37
                refs 1 gen 104882 flags DATA
                shared data backref parent 23220439678976 count 1
        item 161 key (27674856947712 EXTENT_ITEM 208896) itemoff 10289 itemsize 37
                refs 1 gen 104882 flags DATA
                shared data backref parent 23220439678976 count 1
        item 162 key (27674857177088 EXTENT_ITEM 25477120) itemoff 10252 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280146391040 count 1
        item 163 key (27674882654208 EXTENT_ITEM 212992) itemoff 10215 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 30122234445824 count 1
        item 164 key (27674882867200 EXTENT_ITEM 61267968) itemoff 10178 itemsize 37
                refs 1 gen 104112 flags DATA
                shared data backref parent 29280146391040 count 1
        item 165 key (27674944135168 EXTENT_ITEM 151552) itemoff 10141 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 28020395229184 count 1
        item 166 key (27674944286720 EXTENT_ITEM 143360) itemoff 10104 itemsize 37
                refs 1 gen 106489 flags DATA
                shared data backref parent 23219972423680 count 1
        item 167 key (27674944471040 EXTENT_ITEM 937984) itemoff 10067 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 7559560200192 count 1
        item 168 key (27674945409024 EXTENT_ITEM 258048) itemoff 10030 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 28019635044352 count 1
        item 169 key (27674945667072 EXTENT_ITEM 221184) itemoff 9993 itemsize 37
                refs 1 gen 104162 flags DATA
                shared data backref parent 1959273693184 count 1
        item 170 key (27674945888256 EXTENT_ITEM 258048) itemoff 9956 itemsize 37
                refs 1 gen 104867 flags DATA
                shared data backref parent 2799766552576 count 1
        item 171 key (27674946306048 EXTENT_ITEM 1048576) itemoff 9903 itemsize 53
                refs 1 gen 104112 flags DATA
                extent data backref root ROOT_TREE objectid 9654 offset 0 count 1
        item 172 key (27674947354624 EXTENT_ITEM 3362816) itemoff 9866 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 173 key (27674950762496 EXTENT_ITEM 1609728) itemoff 9829 itemsize 37
                refs 1 gen 104162 flags DATA
                shared data backref parent 28019666698240 count 1
        item 174 key (27674952372224 EXTENT_ITEM 221184) itemoff 9792 itemsize 37
                refs 1 gen 104867 flags DATA
                shared data backref parent 2799766552576 count 1
        item 175 key (27674952597504 EXTENT_ITEM 7790592) itemoff 9755 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 176 key (27674960461824 EXTENT_ITEM 3256320) itemoff 9718 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 177 key (27674963869696 EXTENT_ITEM 3149824) itemoff 9681 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 23220306837504 count 1
        item 178 key (27674967019520 EXTENT_ITEM 258048) itemoff 9644 itemsize 37
                refs 1 gen 104867 flags DATA
                shared data backref parent 2799766552576 count 1
        item 179 key (27674967277568 EXTENT_ITEM 31870976) itemoff 9607 itemsize 37
                refs 1 gen 104113 flags DATA
                shared data backref parent 23220619411456 count 1
        item 180 key (27674999259136 EXTENT_ITEM 21942272) itemoff 9570 itemsize 37
                refs 1 gen 104113 flags DATA
                shared data backref parent 23220619411456 count 1
        item 181 key (27675021279232 EXTENT_ITEM 29499392) itemoff 9533 itemsize 37
                refs 1 gen 104113 flags DATA
                shared data backref parent 23220619411456 count 1
        item 182 key (27675050901504 EXTENT_ITEM 24645632) itemoff 9496 itemsize 37
                refs 1 gen 104113 flags DATA
                shared data backref parent 23220619411456 count 1
        item 183 key (27675075547136 EXTENT_ITEM 258048) itemoff 9459 itemsize 37
                refs 1 gen 104113 flags DATA
                shared data backref parent 29280721780736 count 1
        item 184 key (27675075805184 EXTENT_ITEM 3948544) itemoff 9422 itemsize 37
                refs 1 gen 104113 flags DATA
                shared data backref parent 23220619411456 count 1
        item 185 key (27675079753728 EXTENT_ITEM 229376) itemoff 9385 itemsize 37
                refs 1 gen 104113 flags DATA
                shared data backref parent 30122167648256 count 1
        item 186 key (27675079999488 EXTENT_ITEM 1048576) itemoff 9335 itemsize 50
                refs 2 gen 104113 flags DATA
                shared data backref parent 29280856276992 count 1
                shared data backref parent 26386441879552 count 1
        item 187 key (27675081048064 EXTENT_ITEM 262144) itemoff 9285 itemsize 50
                refs 2 gen 104113 flags DATA
                shared data backref parent 30122580606976 count 1
                shared data backref parent 30122293886976 count 1
        item 188 key (27675081310208 EXTENT_ITEM 20791296) itemoff 9248 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 189 key (27675102101504 EXTENT_ITEM 180224) itemoff 9211 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 30122676797440 count 1
        item 190 key (27675102281728 EXTENT_ITEM 16146432) itemoff 9174 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 191 key (27675118534656 EXTENT_ITEM 18743296) itemoff 9137 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 192 key (27675137277952 EXTENT_ITEM 131072) itemoff 9100 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 29279284330496 count 1
        item 193 key (27675137409024 EXTENT_ITEM 20750336) itemoff 9063 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 194 key (27675158159360 EXTENT_ITEM 217088) itemoff 9026 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 29281025654784 count 1
        item 195 key (27675158380544 EXTENT_ITEM 19812352) itemoff 8989 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 196 key (27675178303488 EXTENT_ITEM 35991552) itemoff 8952 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 197 key (27675214295040 EXTENT_ITEM 184320) itemoff 8915 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 30122506977280 count 1
        item 198 key (27675214479360 EXTENT_ITEM 28688384) itemoff 8878 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 199 key (27675243167744 EXTENT_ITEM 147456) itemoff 8841 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 24310792142848 count 1
        item 200 key (27675243315200 EXTENT_ITEM 27705344) itemoff 8804 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 201 key (27675271102464 EXTENT_ITEM 28364800) itemoff 8767 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 202 key (27675299467264 EXTENT_ITEM 208896) itemoff 8730 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 30122191028224 count 1
        item 203 key (27675299676160 EXTENT_ITEM 12423168) itemoff 8693 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 204 key (27675312099328 EXTENT_ITEM 159744) itemoff 8656 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 28019609403392 count 1
        item 205 key (27675312259072 EXTENT_ITEM 24555520) itemoff 8619 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 206 key (27675336900608 EXTENT_ITEM 45821952) itemoff 8582 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 207 key (27675382775808 EXTENT_ITEM 9080832) itemoff 8545 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 208 key (27675391950848 EXTENT_ITEM 7606272) itemoff 8508 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 209 key (27675399557120 EXTENT_ITEM 258048) itemoff 8471 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 25310294720512 count 1
        item 210 key (27675399815168 EXTENT_ITEM 10326016) itemoff 8434 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 211 key (27675410141184 EXTENT_ITEM 159744) itemoff 8397 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 28019836665856 count 1
        item 212 key (27675410300928 EXTENT_ITEM 8400896) itemoff 8360 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 213 key (27675418701824 EXTENT_ITEM 241664) itemoff 8323 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 29279774638080 count 1
        item 214 key (27675418951680 EXTENT_ITEM 10158080) itemoff 8286 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 215 key (27675429175296 EXTENT_ITEM 7802880) itemoff 8249 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015907328 count 1
        item 216 key (27675437039616 EXTENT_ITEM 5058560) itemoff 8212 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 28019618988032 count 1
        item 217 key (27675442098176 EXTENT_ITEM 184320) itemoff 8175 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 29279343558656 count 1
        item 218 key (27675442282496 EXTENT_ITEM 1888256) itemoff 8138 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 1959015874560 count 1
        item 219 key (27675444170752 EXTENT_ITEM 208896) itemoff 8101 itemsize 37
                refs 1 gen 104122 flags DATA
                shared data backref parent 29280970424320 count 1
[bluemond@BlueQ ~]$



[  113.797333] BTRFS warning (device sdc1): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
[  113.797336] BTRFS info (device sdc1): disabling log replay at mount time
[  113.797342] BTRFS info (device sdc1): skip mount time block group searching
[  113.797344] BTRFS info (device sdc1): disk space caching is enabled
[  113.797345] BTRFS info (device sdc1): has skinny extents
[  113.798953] verify_parent_transid: 4 callbacks suppressed
[  113.798955] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443 found 116484
[  113.799374] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443 found 116484
[  113.799380] BTRFS error (device sdc1): failed to read chunk root
[  114.014754] BTRFS error (device sdc1): open_ctree failed 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Need help recovering broken RAID5 array (parent transid verify failed)
  2020-05-16  1:44   ` Emil Heimpel
@ 2020-05-20 11:56     ` Emil Heimpel
  2020-05-20 19:01       ` Chris Murphy
  0 siblings, 1 reply; 8+ messages in thread
From: Emil Heimpel @ 2020-05-20 11:56 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Qu Wenruo, Btrfs BTRFS

[-- Attachment #1: Type: text/plain, Size: 9705 bytes --]

Hi again,

I ran btrfs-find-root, and using the first root it found (one that is not in the superblock), btrfs restore seems to be finding data (I only did a dry run, because I don't have the space at the moment to do a full restore). At least I got warnings about folders where it stopped looping, and I recognized those folders. It still isn't showing any files, but maybe I misunderstood what the dry-run option is supposed to do.

Because the generation of the root is higher than expected, I don't know which root would be the best one to choose: the one closest to the root the superblock thinks is correct (e.g. 30122555883520 (gen: 116442, level: 0)), or the one with the highest generation (30122107502592 (gen: 116502, level: 1))? To be honest, I don't think I quite understand generations and levels :)
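
If I read the btrfs-restore man page right, the next step would be to point restore at one of those candidate roots with -t while keeping the dry-run flag; a rough sketch (the bytenr is just the highest-generation candidate from the attached find-root output, and /path/to/restore/ is a placeholder, I'm not sure yet which candidate is the right one to use):

# btrfs restore -t 30122107502592 -xmSivD /dev/anydevice /path/to/restore/

Dropping -D should then do the actual restore once I have enough space for it.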

My plan would be to restore as much as possible, then try to repair the fs, run a scrub to see if it manages to fix more data, and take a backup of it. Then I would recreate the fs from scratch and restore from the backup :)
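
Roughly, in commands (device names and the bytenr are placeholders, and I still need to check which repair options are actually safe before running anything destructive):

# btrfs restore -t <bytenr> -xmSiv /dev/anydevice /path/to/restore/   (copy off what I can first)
# btrfs check /dev/anydevice                                          (then attempt a repair once I know the recommended options)
# btrfs scrub start -B /dev/sdc1                                      (per member device, if the fs mounts again)
# mkfs.btrfs -m raid1 -d raid5 ...                                    (recreate from scratch and restore the backup)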

You can find the output of find-root in the txt attached.

Greetings,
Emil

May 16, 2020 03:44:17 Emil Heimpel <broetchenrackete@gmail.com>:

> Hi,
>
> Thanks for the answer. I attached the output of the commands you requested as a txt file. Unfortunately mounting didn't work, even with the kernel patch and skipbg option.
>
> I will try to find the journalctl from when it happened.
>
> Emil
>
> May 15, 2020 23:46:55 Chris Murphy <lists@colorremedies.com>:
>
> > On Fri, May 15, 2020 at 12:03 AM Emil Heimpel
> > <broetchenrackete@gmail.com> wrote:
> > >
> > > Hi,
> > >
> > > I hope this is the right place to ask for help. I am unable to mount my BTRFS array and wanted to know, if it is possible to recover (some) data from it.
> >
> > Hi, yes it is!
> >
> > > I have a RAID1-Metadata/RAID5-Data array consisting of 6 drives, 2x8TB, 5TB, 4TB and 2x3TB. It was running fine for the last 3 months. Because I expanded it drive by drive I wanted to do a full balance the other day, when after around 40% completion (ca 1.5 days) I noticed, that one drive was missing from the array (If I remember correctly, it was the 5TB one). I tried to cancel the balance, but even after a few hours it didn't cancel, so I tried to do a reboot. That didn't work either, so I did a hard reset. Probably not the best idea, I know....
> >
> > The file system should be power fail safe (with some limited data
> > loss), but the hardware can betray everything. Your configuration is
> > better due to raid1 metadata.
> >
> > > After the reboot all drives appeared again but now I can't mount the array anymore, it gives me the following error in dmesg:
> > >
> > > [  858.554594] BTRFS info (device sdc1): disk space caching is enabled
> > > [  858.554596] BTRFS info (device sdc1): has skinny extents
> > > [  858.556165] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443 found 116484
> > > [  858.556516] BTRFS error (device sdc1): parent transid verify failed on 23219912048640 wanted 116443 found 116484
> > > [  858.556527] BTRFS error (device sdc1): failed to read chunk root
> > > [  858.588332] BTRFS error (device sdc1): open_ctree failed
> >
> > Extent tree is damaged, but it's unexpected that a newer transid is
> > found than is wanted. Something happened out of order. Both copies.
> >
> > What do you get for:
> > # btrfs rescue super -v /dev/anydevice
> > # btrfs insp dump-s -fa /dev/anydevice
> > # btrfs insp dump-t -b 30122546839552 /dev/anydevice
> > # mount -o ro,nologreplay,degraded /dev/anydevice
> >
> > > [bluemond@BlueQ btrfslogs]$ sudo btrfs check /dev/sdd1
> >
> > For what it's worth, btrfs check does find all member devices, so you
> > only have to run check on any one of them. However, scrub is
> > different, you can run that individually per block device to work
> > around some performance problems with raid56, when running it on the
> > volume's mount point.
> >
> > > And how can I prevent it from happening again? Would using the new multi-parity raid1 for Metadata help?
> >
> > Difficult to know yet what went wrong. Do you have dmesg/journalctl -k
> > for the time period the problem drive began all the way to the forced
> > power off? It might give a hint. Before doing a forced poweroff while
> > writes are happening it might help to disable the write cache on all
> > the drives; or alternatively always disable them.
> >
> > > I'm running arch on an ssd.
> > >
> > > [bluemond@BlueQ btrfslogs]$ uname -a
> > > Linux BlueQ 5.6.12-arch1-1 #1 SMP PREEMPT Sun, 10 May 2020 10:43:42 +0000 x86_64 GNU/Linux
> > >
> > > [bluemond@BlueQ btrfslogs]$ btrfs --version
> > > btrfs-progs v5.6
> >
> > 5.6.1 is current but I don't think there's anything in the minor
> > update that applies here.
> >
> > Post that info and maybe a dev will have time to take a look. If it
> > does mount ro,degraded, take the chance to update backups, just in
> > case. Yeah, ~21TB will be really inconvenient to lose. Also, since
> > it's over the weekend, and there's some time, it might be useful to
> > have a btrfs image:
> >
> > btrfs-image -ss -c9 -t4 /dev/anydevice ~/problemvolume.btrfs.bin
> >
> > This file will be roughly 1/2 the size of file system metadata. I
> > guess you could have around 140G of metadata depending on nodesize
> > chosen at mkfs time, and how many small files this filesystem has.
> >
> > Still another option that might make it possible to mount, if above
> > doesn't work; build the kernel with this patch
> > https://patchwork.kernel.org/project/linux-btrfs/list/?series=170715
> >
> > Mount using -o ro,nologreplay,rescue=skipbg
> >
> > This also doesn't actually fix the problem, it just might make it
> > possible to mount the file system, mainly for updating backups in case
> > it's not possible to fix.
> >
> > --
> > Chris Murphy

[-- Attachment #2: btrsf.findroot.txt --]
[-- Type: text/plain, Size: 18992 bytes --]

Superblock thinks the generation is 116443
Superblock thinks the level is 1
Found tree root at 30122565173248 gen 116443 level 1
Well block 30122107502592(gen: 116502 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122107666432(gen: 116501 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122106765312(gen: 116500 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122121658368(gen: 116497 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122102996992(gen: 116496 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122101309440(gen: 116495 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122100391936(gen: 116494 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122099458048(gen: 116493 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122319429632(gen: 116492 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122137387008(gen: 116491 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122497638400(gen: 116490 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122473930752(gen: 116489 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122315579392(gen: 116486 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122315563008(gen: 116486 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122304487424(gen: 116485 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122304471040(gen: 116485 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122298032128(gen: 116479 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468933632(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468917248(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468900864(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468884480(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468868096(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468802560(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468786176(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468769792(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468605952(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468589568(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468376576(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468343808(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468311040(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468294656(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468261888(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468229120(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122468212736(gen: 116469 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122293592064(gen: 116458 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122555883520(gen: 116442 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122555867136(gen: 116442 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122555850752(gen: 116442 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122555621376(gen: 116442 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122555588608(gen: 116442 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122555555840(gen: 116442 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122555523072(gen: 116442 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122555473920(gen: 116442 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122549657600(gen: 116441 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122546806784(gen: 116438 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122308403200(gen: 116433 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122308386816(gen: 116433 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122482024448(gen: 116408 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122481991680(gen: 116408 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122481958912(gen: 116408 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122491805696(gen: 116342 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122491789312(gen: 116342 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122486235136(gen: 116342 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122485547008(gen: 116342 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122482909184(gen: 116342 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122482827264(gen: 116342 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122482810880(gen: 116342 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122482778112(gen: 116342 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122482745344(gen: 116342 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 30122482401280(gen: 116342 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280847659008(gen: 116336 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280840925184(gen: 116335 level: 1) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280834338816(gen: 116332 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280834289664(gen: 116332 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280831553536(gen: 116332 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280825786368(gen: 116332 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280797081600(gen: 116331 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280796393472(gen: 116331 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280795017216(gen: 116331 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280776388608(gen: 116330 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280757891072(gen: 116330 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28019932790784(gen: 116327 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28019932774400(gen: 116327 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28019893288960(gen: 116327 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28019857506304(gen: 116327 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28019857113088(gen: 116327 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28019888455680(gen: 116326 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 26387051118592(gen: 116326 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 25310236639232(gen: 116326 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 7559563935744(gen: 116326 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 7559367458816(gen: 116325 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 6692570562560(gen: 116325 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29281085046784(gen: 110860 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29281063141376(gen: 110860 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29281140408320(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280909410304(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280871923712(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280697090048(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280682033152(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279853658112(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279850364928(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279718981632(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279684329472(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279553667072(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279550570496(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279419121664(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28019801653248(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 6692393353216(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 6692219633664(gen: 110859 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29281095565312(gen: 110551 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29281039679488(gen: 110468 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28019988627456(gen: 110337 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28019975077888(gen: 110337 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280952942592(gen: 109937 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280627884032(gen: 109388 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280627326976(gen: 109388 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29281210564608(gen: 109385 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279846973440(gen: 109375 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279843237888(gen: 109375 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280196804608(gen: 108637 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29281185710080(gen: 108633 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280468254720(gen: 108527 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280300072960(gen: 108526 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29280017432576(gen: 108526 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279680036864(gen: 108526 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279552847872(gen: 108526 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 4923159674880(gen: 108526 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 4923153940480(gen: 108526 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279432982528(gen: 108520 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279380799488(gen: 108520 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279363055616(gen: 108520 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279355617280(gen: 108520 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 29279175802880(gen: 108520 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 5863619493888(gen: 108519 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 26386847678464(gen: 108383 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 26386468356096(gen: 108380 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 2799412903936(gen: 108325 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28020530233344(gen: 108227 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 26386416762880(gen: 108220 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 5863448657920(gen: 107686 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28020194623488(gen: 107672 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 1075667271680(gen: 107620 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28020127137792(gen: 107614 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 28020388724736(gen: 107386 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 24309878816768(gen: 107376 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 6691901718528(gen: 106867 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 24310098575360(gen: 106811 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 26387216056320(gen: 103210 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 23220080590848(gen: 102044 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 24310357229568(gen: 100795 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 24310530441216(gen: 96307 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 24310298656768(gen: 96271 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 24310293741568(gen: 96271 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 23237383012352(gen: 92298 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 23220814233600(gen: 92237 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 23220295778304(gen: 92219 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 6692060151808(gen: 91345 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 22279392444416(gen: 89503 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 6691836723200(gen: 2484 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 6691836674048(gen: 2484 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 1075344343040(gen: 505 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 1075872677888(gen: 429 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 1075771080704(gen: 403 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 1075652542464(gen: 380 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1
Well block 1075554172928(gen: 365 level: 0) seems good, but generation/level doesn't match, want gen: 116443 level: 1

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Need help recovering broken RAID5 array (parent transid verify failed)
  2020-05-20 11:56     ` Emil Heimpel
@ 2020-05-20 19:01       ` Chris Murphy
       [not found]         ` <21913a92-5059-405f-b2d4-91e785ab77bd@gmail.com>
  0 siblings, 1 reply; 8+ messages in thread
From: Chris Murphy @ 2020-05-20 19:01 UTC (permalink / raw)
  To: Emil Heimpel; +Cc: Chris Murphy, Qu Wenruo, Btrfs BTRFS

On Wed, May 20, 2020 at 5:56 AM Emil Heimpel <broetchenrackete@gmail.com> wrote:
>
> Hi again,
>
> I ran find-root and using the first found root (that is not in the superblock) seems to be finding data with btrfs-restore (only did a dry-run, because I don't have the space at the moment to do a full restore). At least I got warnings about folders where it stopped looping and I recognized the folders. It is still not showing any files, but maybe I misunderstood what the dry-run option is supposed to be doing.
>
> Because the generation of the root is higher than expected, I don't know which root is expected to be the best option to choose from. One that is closest to the root the super thinks is the correct one (fe 30122555883520(gen: 116442 level: 0)) or the one with the highest generation (30122107502592(gen: 116502 level: 1))? To be honest I don't think I quite understand generations and levels :)

Yeah it's confusing.

I think there's extent tree corruption and I'm not sure it can be
repaired. I suggest 'btrfs restore' until you're satisfied, and then
you can try 'btrfs check --init-extent-tree' and see if it can fix the
extent tree. It's maybe a 50/50 chance, hard to say. If it completes,
follow it up with 'btrfs check' without options, and see if it
complains about anything else.
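
Roughly, something like this (device path and target directory are just
examples; use any member device and a destination with enough free space):

  btrfs restore -v /dev/sdc1 /mnt/recovery   # pull out whatever is readable first
  btrfs check --init-extent-tree /dev/sdc1   # then attempt the extent tree rebuild
  btrfs check /dev/sdc1                      # plain read-only check afterwards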

One thing that's important to consider is using space_cache v2. The
default space_cache v1 puts free space metadata into data chunks,
subjecting them to raid56, which is not great. Since you went to the
effort to use raid1 metadata, best to also use space_cache=v2 at first
mount, putting free space metadata into metadata chunks. It's expected
to be the default soon, I guess, but I'm not sure what the time frame
is.
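
It's just a mount option on that first mount, e.g. (mount point is only
an example):

  mount -o space_cache=v2 /dev/sdc1 /mnt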

Also consider using hdparm -W (capital W not lower case, see man page)
to disable the write cache on all drives if you're not certain they
consistently honor FUA or fsync.
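
For example (run it for each member device, sdX being a placeholder):

  hdparm -W 0 /dev/sdX   # disable the write cache
  hdparm -W /dev/sdX     # query the current setting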


-- 
Chris Murphy

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Need help recovering broken RAID5 array (parent transid verify failed)
       [not found]         ` <21913a92-5059-405f-b2d4-91e785ab77bd@gmail.com>
@ 2020-06-04 22:55           ` Emil Heimpel
  2020-09-28 13:04             ` Dan van der Ster
  2020-10-01  4:49             ` Zygo Blaxell
  0 siblings, 2 replies; 8+ messages in thread
From: Emil Heimpel @ 2020-06-04 22:55 UTC (permalink / raw)
  To: Btrfs BTRFS; +Cc: Chris Murphy, Qu Wenruo

Hi,

I checked the SMART values for all drives, including short self-tests, and all seem fine. I found these entries in journalctl; they must have happened during the balance:

May 08 08:26:10 BlueQ kernel: sd 11:0:3:0: [sdg] tag#2446 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
May 08 08:26:10 BlueQ kernel: sd 11:0:3:0: [sdg] tag#2446 CDB: Read(16) 88 00 00 00 00 00 42 84 13 18 00 00 00 08 00 00
May 08 08:26:10 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 1115951896 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0

...

May 08 10:53:27 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2455 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
May 08 10:53:27 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2455 CDB: Read(16) 88 00 00 00 00 00 42 60 db 10 00 00 00 08 00 00
May 08 10:53:27 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 1113643792 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0

...

May 08 12:55:14 BlueQ kernel: sd 11:0:2:0: [sdf] tag#3311 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
May 08 12:55:14 BlueQ kernel: sd 11:0:2:0: [sdf] tag#3311 CDB: Read(16) 88 00 00 00 00 00 42 60 7b 38 00 00 00 30 00 00
May 08 12:55:14 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 1113619256 op 0x0:(READ) flags 0x80700 phys_seg 6 prio class 0
May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 0 (dev /dev/sdf1 sector 1113617208)
May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 4096 (dev /dev/sdf1 sector 1113617216)
May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 8192 (dev /dev/sdf1 sector 1113617224)
May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 12288 (dev /dev/sdf1 sector 1113617232)
May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 16384 (dev /dev/sdf1 sector 1113617240)
May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 20480 (dev /dev/sdf1 sector 1113617248)

...

May 08 13:51:51 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2470 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
May 08 13:51:51 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2470 CDB: Read(16) 88 00 00 00 00 00 42 64 19 a0 00 00 00 10 00 00
May 08 13:51:51 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 1113856416 op 0x0:(READ) flags 0x80700 phys_seg 2 prio class 0
May 08 13:51:51 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1266969 off 0 (dev /dev/sdf1 sector 1113854368)
May 08 13:51:51 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1266969 off 4096 (dev /dev/sdf1 sector 1113854376)

...

May 08 23:09:19 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2480 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
May 08 23:09:19 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2480 CDB: Read(16) 88 00 00 00 00 00 ab 00 30 80 00 00 01 00 00 00
May 08 23:09:19 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 2868916352 op 0x0:(READ) flags 0x80700 phys_seg 16 prio class 0
May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 196608 (dev /dev/sdf1 sector 2868914304)
May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 200704 (dev /dev/sdf1 sector 2868914312)
May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 204800 (dev /dev/sdf1 sector 2868914320)
May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 208896 (dev /dev/sdf1 sector 2868914328)
May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 212992 (dev /dev/sdf1 sector 2868914336)
May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 217088 (dev /dev/sdf1 sector 2868914344)
May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 221184 (dev /dev/sdf1 sector 2868914352)
May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 225280 (dev /dev/sdf1 sector 2868914360)
May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 229376 (dev /dev/sdf1 sector 2868914368)
May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 233472 (dev /dev/sdf1 sector 2868914376)

...#btrfs balance started probably

May 09 04:34:52 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: move data extents
May 09 04:34:53 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: update data pointers
May 09 04:34:53 BlueQ kernel: BTRFS info (device sdc1): relocating block group 21793982906368 flags data|raid5
May 09 04:35:26 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: move data extents
May 09 04:35:27 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: update data pointers
May 09 04:35:28 BlueQ kernel: BTRFS info (device sdc1): relocating block group 21790761680896 flags data|raid5
#repeating a lot

...

May 09 05:11:52 BlueQ kernel: BTRFS info (device sdc1): found 29 extents, stage: move data extents
May 09 05:11:53 BlueQ kernel: BTRFS info (device sdc1): found 29 extents, stage: update data pointers
May 09 05:11:54 BlueQ kernel: BTRFS info (device sdc1): relocating block group 21555612221440 flags data|raid5
May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440291328 csum 0x2ac15d26 expected csum 0xd26a9dcb mirror 1
May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440295424 csum 0x2ac15d26 expected csum 0x85d5d3bb mirror 1
May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440299520 csum 0x2ac15d26 expected csum 0x20cd77c6 mirror 1
May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440303616 csum 0x2ac15d26 expected csum 0x67d2b42b mirror 1
May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440307712 csum 0x2ac15d26 expected csum 0xc77fc7cd mirror 1
May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440311808 csum 0x2ac15d26 expected csum 0xe4409fd6 mirror 1
May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440315904 csum 0x2ac15d26 expected csum 0x99156670 mirror 1
May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440320000 csum 0x2ac15d26 expected csum 0xfd4f65c0 mirror 1
May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440324096 csum 0x2ac15d26 expected csum 0xbc27383b mirror 1
May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440328192 csum 0x2ac15d26 expected csum 0x84fb6b1f mirror 1
May 09 05:12:05 BlueQ kernel: repair_io_failure: 6 callbacks suppressed
May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440291328 (dev /dev/sda1 sector 6697578792)
May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440295424 (dev /dev/sda1 sector 6697578800)
May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440303616 (dev /dev/sda1 sector 6697578816)
May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440299520 (dev /dev/sda1 sector 6697578808)
May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440307712 (dev /dev/sda1 sector 6697578824)
May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440311808 (dev /dev/sda1 sector 6697578832)
May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440315904 (dev /dev/sda1 sector 6697578840)
May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440320000 (dev /dev/sda1 sector 6697578848)
May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440324096 (dev /dev/sda1 sector 6697578856)
May 09 05:12:06 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440328192 (dev /dev/sda1 sector 6697578864)
May 09 05:12:36 BlueQ kernel: btrfs_print_data_csum_error: 349 callbacks suppressed
May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137126400 csum 0x2ac15d26 expected csum 0xde18d96f m>
May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137130496 csum 0x2ac15d26 expected csum 0xda0ff7db m>
May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137134592 csum 0x2ac15d26 expected csum 0xf76a890c m>
May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137138688 csum 0x2ac15d26 expected csum 0x228317a4 m>
May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138387968 csum 0x2ac15d26 expected csum 0xcf6b7db7 m>
May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138519040 csum 0x2ac15d26 expected csum 0xa992d2c0 m>
May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138650112 csum 0x2ac15d26 expected csum 0xfeae0823 m>
May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138523136 csum 0x2ac15d26 expected csum 0xf05799e5 m>
May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138527232 csum 0x2ac15d26 expected csum 0x41210896 m>
May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138531328 csum 0x2ac15d26 expected csum 0x8ff1d037 m>
May 09 05:12:37 BlueQ kernel: repair_io_failure: 350 callbacks suppressed

... #Happily balancing for over 24h without warnings or errors...

May 10 08:32:41 BlueQ kernel: BTRFS info (device sdc1): relocating block group 10412162809856 flags data|raid5
May 10 08:33:17 BlueQ kernel: sd 11:0:3:0: attempting task abort!scmd(0x00000000931cd1e4), outstanding for 7174 ms & timeout 7000 ms
May 10 08:33:17 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1340 CDB: ATA command pass through(16) 85 06 20 00 00 00 00 00 00 00 00 00 00 00 e5 00
May 10 08:33:17 BlueQ kernel: scsi target11:0:3: handle(0x000c), sas_address(0x4433221107000000), phy(7)
May 10 08:33:17 BlueQ kernel: scsi target11:0:3: enclosure logical id(0x590b11c022f3fb00), slot(4)
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: task abort: SUCCESS scmd(0x00000000931cd1e4)
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=14s
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 Sense Key : Not Ready [current]
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 Add. Sense: Logical unit not ready, cause not reportable
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 0, rd 0, flush 1, corrupt 0, gen 0
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=14s
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 Sense Key : Not Ready [current]
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 Add. Sense: Logical unit not ready, cause not reportable
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 CDB: Write(16) 8a 00 00 00 00 02 0a 9a a0 80 00 00 0a 00 00 00
May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 8767840384 op 0x1:(WRITE) flags 0x0 phys_seg 61 prio class 0
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=14s
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 Sense Key : Not Ready [current]
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 Add. Sense: Logical unit not ready, cause not reportable
May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 CDB: Write(16) 8a 00 00 00 00 02 0a 9a aa 80 00 00 0a 00 00 00
May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 8767842944 op 0x1:(WRITE) flags 0x0 phys_seg 65 prio class 0
May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 8767855488 op 0x1:(WRITE) flags 0x4000 phys_seg 37 prio class 0
May 10 08:33:21 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 1, rd 0, flush 1, corrupt 0, gen 0
May 10 08:33:21 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 2, rd 0, flush 1, corrupt 0, gen 0
May 10 08:33:21 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 3, rd 0, flush 1, corrupt 0, gen 0
May 10 08:33:21 BlueQ udisksd[3593]: Error performing housekeeping for drive /org/freedesktop/UDisks2/drives/ST5000DM000_1FK178_W4J10239: Error >
0000: 00 00 00 00  00 00 00 00  00 00 00 00  00 00 00 00    ................
0010: 00 00 00 00  00 00 00 00  00 00 00 00  00 00 00 00    ................
(g-io-error-quark, 0)
May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): error writing primary super block to device 2
May 10 08:33:23 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 3, rd 0, flush 2, corrupt 0, gen 0
May 10 08:33:23 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
May 10 08:33:23 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 4, rd 0, flush 2, corrupt 0, gen 0
May 10 08:33:23 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1

Do I need to worry about the hdds?

Emil
P. S.: Not sure if my previous email reached the ML....?

Jun 3, 2020 10:44:49 Emil Heimpel <broetchenrackete@gmail.com>:

> Hi again.
> 
> I think I managed to restore all data to a new backup except one old Systembackup image from a laptop. Of course there could be files that weren't found at all, but I didn't notice any.
> 
> I tried init-extent-tree with and without the alternate root tree block, but both failed. Both seemed to crash with a segmentation fault, see attached logs and dmesg-snippets for more information. I did disable write cache on all drives with hdparm as suggested.
> 
> Now I'm not sure what the best way to go forward is. If you have further suggestions I could try to repair the array, I would try them today. Otherwise I would format the drives and create a new array (Metadata raid1(C3?), data raid5, checksum maybe sha or blake2, maybe zstd compression, space_cache v2). If you have any suggestions for the new array feel free to tell me!
> 
> Thank you for the help so far!
> 
> Emil
> 
> dmesg logs:
> 
> "btrfs check --init-extent-tree -p /dev/sda1
> [1534223.372937] btrfs[181698]: segfault at 10 ip 00007f3ef8358d77 sp 00007ffd4c006ee0 error 4 in libc-2.31.so[7f3ef82f6000+14d000]
> [1534223.372949] Code: 88 08 00 00 0f 86 39 04 00 00 8b 35 b7 bf 13 00 85 f6 0f 85 ab 05 00 00 41 f6 44 24 08 01 75 24 49 8b 04 24 49 29 c4 48 01 c3 <49> 8b 54 24 08 48 83 e2 f8 48 39 c2 0f 85 09 06 00 00 4c 89 e7 e8
> [1534223.373107] audit: type=1701 audit(1591128122.557:1822): auid=1000 uid=0 gid=0 ses=39 pid=181698 comm="btrfs" exe="/usr/bin/btrfs" sig=11 res=1
> 
> btrfs check --init-extent-tree -r 30122107502592 -p /dev/sda1
> [1535246.991899] sd 11:0:3:0: [sdg] tag#46 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
> [1535246.991905] sd 11:0:3:0: [sdg] tag#46 CDB: Read(16) 88 00 00 00 00 02 46 30 d9 00 00 00 00 08 00 00
> [1535246.991909] blk_update_request: I/O error, dev sdg, sector 9767540992 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
> [1535251.466041] sd 11:0:2:0: [sdf] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> [1535251.466047] sd 11:0:2:0: [sdf] tag#11 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
> [1535251.466051] blk_update_request: I/O error, dev sdf, sector 7814036992 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
> [1535328.853062] btrfs[181874]: segfault at 10 ip 00007f6c9c447d77 sp 00007ffc666cc940 error 4 in libc-2.31.so[7f6c9c3e5000+14d000]
> [1535328.853069] Code: 88 08 00 00 0f 86 39 04 00 00 8b 35 b7 bf 13 00 85 f6 0f 85 ab 05 00 00 41 f6 44 24 08 01 75 24 49 8b 04 24 49 29 c4 48 01 c3 <49> 8b 54 24 08 48 83 e2 f8 48 39 c2 0f 85 09 06 00 00 4c 89 e7 e8
> [1535328.853097] audit: type=1701 audit(1591129228.050:1845): auid=1000 uid=0 gid=0 ses=39 pid=181874 comm="btrfs" exe="/usr/bin/btrfs" sig=11 res=1"
> 
> Log from failed restore:
> ERROR: exhausted mirros trying to read (3 > 2)
> Error copying data for /path/to/file/xxxxxxxxxxxxxx.vhdx
> 
> May 20, 2020 21:01:45 Chris Murphy <lists@colorremedies.com>:
> 
>> On Wed, May 20, 2020 at 5:56 AM Emil Heimpel <broetchenrackete@gmail.com> wrote:
>>> 
>>> Hi again,
>>> 
>>> I ran find-root and using the first found root (that is not in the superblock) seems to be finding data with btrfs-restore (only did a dry-run, because I don't have the space at the moment to do a full restore). At least I got warnings about folders where it stopped looping and I recognized the folders. It is still not showing any files, but maybe I misunderstood what the dry-run option is suppose to be doing.
>>> 
>>> Because the generation of the root is higher than expected, I don't know which root is expected to be the best option to choose from. One that is closest to the root the super thinks is the correct one (fe 30122555883520(gen: 116442 level: 0)) or the one with the highest generation (30122107502592(gen: 116502 level: 1))? To be honest I don't think I quite understand generations and levels :)
>> 
>> Yeah it's confusing.
>> 
>> I think there's extent tree corruption and I'm not sure it can be
>> repaired. I suggest 'btrfs restore' until you're satisfied, and then
>> you can try 'btrfs check --init-extent-tree' and see if it can fix the
>> extent tree. It's maybe a 50/50 chance, hard to say. If it completes,
>> follow it up with 'btrfs check' without options, and see if it
>> complains about anything else.
>> 
>> One thing that's important to consider is using space_cache v2. The
>> default space_cache v1 puts free space metadata into data chunks,
>> subjecting them to raid56, which is not great. Since you went to the
>> effort to use raid1 metadata, best to also use space_cache=v2 at first
>> mount, putting free space metadata into metadata chunks. It's expected
>> to be the default soon, I guess, but I'm not sure what the time frame
>> is.
>> 
>> Also consider using hdparm -W (capital W not lower case, see man page)
>> to disable the write cache on all drives if you're not certain they
>> consistently honor FUA or fsync.
>> 
>> -- 
>> Chris Murphy
>> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Need help recovering broken RAID5 array (parent transid verify failed)
  2020-06-04 22:55           ` Emil Heimpel
@ 2020-09-28 13:04             ` Dan van der Ster
  2020-10-01  4:49             ` Zygo Blaxell
  1 sibling, 0 replies; 8+ messages in thread
From: Dan van der Ster @ 2020-09-28 13:04 UTC (permalink / raw)
  To: Emil Heimpel; +Cc: Btrfs BTRFS, Chris Murphy, Qu Wenruo

Is this a ST8000DM004 by chance? 0x2ac15d26 seems to be a magic csum
error value for these drives.
I get the same on one (of my three) ST8000DM004's in a RAID1, and
here's another case in the wild [1].
In my case the corruption is always on newly written data -- I scrub
weekly and it always finds a few hundred new corruptions. (scrub
doesn't print the csum value, but if I manually do some IO to generate
the corrected read errors, the failing csum is always 0x2ac15d26).
I wonder if this is the crypto_shash_digest for all zeros or something
like that.
Maybe if someone from Seagate is listening they already know what the
issue is...
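
For reference, the weekly check is nothing fancier than something like
this (with /mnt standing in for the mount point):

  btrfs scrub start -Bd /mnt    # foreground scrub with per-device stats
  btrfs dev stats /mnt          # cumulative per-device error counters
  dmesg | grep 'csum failed'    # where the 0x2ac15d26 csums show up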

On my side, I'm replacing these drives.

Cheers, Dan

[1] https://forums.unraid.net/topic/85305-btrfs-drive-problems-etc/

On Fri, Jun 5, 2020 at 12:55 AM Emil Heimpel <broetchenrackete@gmail.com> wrote:
>
> Hi,
>
> I checked the smart values for all drives including short tests and all seem fine. I found these in journalctl and they must have happened during the balance:
>
> May 08 08:26:10 BlueQ kernel: sd 11:0:3:0: [sdg] tag#2446 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
> May 08 08:26:10 BlueQ kernel: sd 11:0:3:0: [sdg] tag#2446 CDB: Read(16) 88 00 00 00 00 00 42 84 13 18 00 00 00 08 00 00
> May 08 08:26:10 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 1115951896 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
>
> ...
>
> May 08 10:53:27 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2455 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> May 08 10:53:27 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2455 CDB: Read(16) 88 00 00 00 00 00 42 60 db 10 00 00 00 08 00 00
> May 08 10:53:27 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 1113643792 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
>
> ...
>
> May 08 12:55:14 BlueQ kernel: sd 11:0:2:0: [sdf] tag#3311 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> May 08 12:55:14 BlueQ kernel: sd 11:0:2:0: [sdf] tag#3311 CDB: Read(16) 88 00 00 00 00 00 42 60 7b 38 00 00 00 30 00 00
> May 08 12:55:14 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 1113619256 op 0x0:(READ) flags 0x80700 phys_seg 6 prio class 0
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 0 (dev /dev/sdf1 sector 1113617208)
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 4096 (dev /dev/sdf1 sector 1113617216)
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 8192 (dev /dev/sdf1 sector 1113617224)
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 12288 (dev /dev/sdf1 sector 1113617232)
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 16384 (dev /dev/sdf1 sector 1113617240)
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 20480 (dev /dev/sdf1 sector 1113617248)
>
> ...
>
> May 08 13:51:51 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2470 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> May 08 13:51:51 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2470 CDB: Read(16) 88 00 00 00 00 00 42 64 19 a0 00 00 00 10 00 00
> May 08 13:51:51 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 1113856416 op 0x0:(READ) flags 0x80700 phys_seg 2 prio class 0
> May 08 13:51:51 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1266969 off 0 (dev /dev/sdf1 sector 1113854368)
> May 08 13:51:51 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1266969 off 4096 (dev /dev/sdf1 sector 1113854376)
>
> ...
>
> May 08 23:09:19 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2480 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> May 08 23:09:19 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2480 CDB: Read(16) 88 00 00 00 00 00 ab 00 30 80 00 00 01 00 00 00
> May 08 23:09:19 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 2868916352 op 0x0:(READ) flags 0x80700 phys_seg 16 prio class 0
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 196608 (dev /dev/sdf1 sector 2868914304)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 200704 (dev /dev/sdf1 sector 2868914312)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 204800 (dev /dev/sdf1 sector 2868914320)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 208896 (dev /dev/sdf1 sector 2868914328)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 212992 (dev /dev/sdf1 sector 2868914336)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 217088 (dev /dev/sdf1 sector 2868914344)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 221184 (dev /dev/sdf1 sector 2868914352)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 225280 (dev /dev/sdf1 sector 2868914360)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 229376 (dev /dev/sdf1 sector 2868914368)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 233472 (dev /dev/sdf1 sector 2868914376)
>
> ...#btrfs balance started probably
>
> May 09 04:34:52 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: move data extents
> May 09 04:34:53 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: update data pointers
> May 09 04:34:53 BlueQ kernel: BTRFS info (device sdc1): relocating block group 21793982906368 flags data|raid5
> May 09 04:35:26 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: move data extents
> May 09 04:35:27 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: update data pointers
> May 09 04:35:28 BlueQ kernel: BTRFS info (device sdc1): relocating block group 21790761680896 flags data|raid5
> #repeating a lot
>
> ...
>
> May 09 05:11:52 BlueQ kernel: BTRFS info (device sdc1): found 29 extents, stage: move data extents
> May 09 05:11:53 BlueQ kernel: BTRFS info (device sdc1): found 29 extents, stage: update data pointers
> May 09 05:11:54 BlueQ kernel: BTRFS info (device sdc1): relocating block group 21555612221440 flags data|raid5
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440291328 csum 0x2ac15d26 expected csum 0xd26a9dcb mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440295424 csum 0x2ac15d26 expected csum 0x85d5d3bb mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440299520 csum 0x2ac15d26 expected csum 0x20cd77c6 mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440303616 csum 0x2ac15d26 expected csum 0x67d2b42b mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440307712 csum 0x2ac15d26 expected csum 0xc77fc7cd mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440311808 csum 0x2ac15d26 expected csum 0xe4409fd6 mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440315904 csum 0x2ac15d26 expected csum 0x99156670 mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440320000 csum 0x2ac15d26 expected csum 0xfd4f65c0 mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440324096 csum 0x2ac15d26 expected csum 0xbc27383b mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440328192 csum 0x2ac15d26 expected csum 0x84fb6b1f mirror 1
> May 09 05:12:05 BlueQ kernel: repair_io_failure: 6 callbacks suppressed
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440291328 (dev /dev/sda1 sector 6697578792)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440295424 (dev /dev/sda1 sector 6697578800)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440303616 (dev /dev/sda1 sector 6697578816)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440299520 (dev /dev/sda1 sector 6697578808)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440307712 (dev /dev/sda1 sector 6697578824)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440311808 (dev /dev/sda1 sector 6697578832)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440315904 (dev /dev/sda1 sector 6697578840)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440320000 (dev /dev/sda1 sector 6697578848)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440324096 (dev /dev/sda1 sector 6697578856)
> May 09 05:12:06 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440328192 (dev /dev/sda1 sector 6697578864)
> May 09 05:12:36 BlueQ kernel: btrfs_print_data_csum_error: 349 callbacks suppressed
> May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137126400 csum 0x2ac15d26 expected csum 0xde18d96f m>
> May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137130496 csum 0x2ac15d26 expected csum 0xda0ff7db m>
> May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137134592 csum 0x2ac15d26 expected csum 0xf76a890c m>
> May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137138688 csum 0x2ac15d26 expected csum 0x228317a4 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138387968 csum 0x2ac15d26 expected csum 0xcf6b7db7 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138519040 csum 0x2ac15d26 expected csum 0xa992d2c0 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138650112 csum 0x2ac15d26 expected csum 0xfeae0823 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138523136 csum 0x2ac15d26 expected csum 0xf05799e5 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138527232 csum 0x2ac15d26 expected csum 0x41210896 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138531328 csum 0x2ac15d26 expected csum 0x8ff1d037 m>
> May 09 05:12:37 BlueQ kernel: repair_io_failure: 350 callbacks suppressed
>
> ... #Happily balancing for over 24h without warnings or errors...
>
> May 10 08:32:41 BlueQ kernel: BTRFS info (device sdc1): relocating block group 10412162809856 flags data|raid5
> May 10 08:33:17 BlueQ kernel: sd 11:0:3:0: attempting task abort!scmd(0x00000000931cd1e4), outstanding for 7174 ms & timeout 7000 ms
> May 10 08:33:17 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1340 CDB: ATA command pass through(16) 85 06 20 00 00 00 00 00 00 00 00 00 00 00 e5 00
> May 10 08:33:17 BlueQ kernel: scsi target11:0:3: handle(0x000c), sas_address(0x4433221107000000), phy(7)
> May 10 08:33:17 BlueQ kernel: scsi target11:0:3: enclosure logical id(0x590b11c022f3fb00), slot(4)
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: task abort: SUCCESS scmd(0x00000000931cd1e4)
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=14s
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 Sense Key : Not Ready [current]
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 Add. Sense: Logical unit not ready, cause not reportable
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
> May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
> May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 0, rd 0, flush 1, corrupt 0, gen 0
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=14s
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 Sense Key : Not Ready [current]
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 Add. Sense: Logical unit not ready, cause not reportable
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 CDB: Write(16) 8a 00 00 00 00 02 0a 9a a0 80 00 00 0a 00 00 00
> May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 8767840384 op 0x1:(WRITE) flags 0x0 phys_seg 61 prio class 0
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=14s
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 Sense Key : Not Ready [current]
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 Add. Sense: Logical unit not ready, cause not reportable
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 CDB: Write(16) 8a 00 00 00 00 02 0a 9a aa 80 00 00 0a 00 00 00
> May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 8767842944 op 0x1:(WRITE) flags 0x0 phys_seg 65 prio class 0
> May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 8767855488 op 0x1:(WRITE) flags 0x4000 phys_seg 37 prio class 0
> May 10 08:33:21 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
> May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 1, rd 0, flush 1, corrupt 0, gen 0
> May 10 08:33:21 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
> May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 2, rd 0, flush 1, corrupt 0, gen 0
> May 10 08:33:21 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
> May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 3, rd 0, flush 1, corrupt 0, gen 0
> May 10 08:33:21 BlueQ udisksd[3593]: Error performing housekeeping for drive /org/freedesktop/UDisks2/drives/ST5000DM000_1FK178_W4J10239: Error >
> 0000: 00 00 00 00  00 00 00 00  00 00 00 00  00 00 00 00    ................
> 0010: 00 00 00 00  00 00 00 00  00 00 00 00  00 00 00 00    ................
> (g-io-error-quark, 0)
> May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): error writing primary super block to device 2
> May 10 08:33:23 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 3, rd 0, flush 2, corrupt 0, gen 0
> May 10 08:33:23 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
> May 10 08:33:23 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 4, rd 0, flush 2, corrupt 0, gen 0
> May 10 08:33:23 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
>
> Do I need to worry about the hdds?
>
> Emil
> P. S.: Not sure if my previous email reached the ML....?
>
> Jun 3, 2020 10:44:49 Emil Heimpel <broetchenrackete@gmail.com>:
>
> > Hi again.
> >
> > I think I managed to restore all data to a new backup except one old Systembackup image from a laptop. Of course there could be files that weren't found at all, but I didn't notice any.
> >
> > I tried init-extent-tree with and without the alternate root tree block, but both failed. Both seemed to crash with a segmentation fault, see attached logs and dmesg-snippets for more information. I did disable write cache on all drives with hdparm as suggested.
> >
> > Now I'm not sure what the best way to go forward is. If you have further suggestions I could try to repair the array, I would try them today. Otherwise I would format the drives and create a new array (Metadata raid1(C3?), data raid5, checksum maybe sha or blake2, maybe zstd compression, space_cache v2). If you have any suggestions for the new array feel free to tell me!
> >
> > Thank you for the help so far!
> >
> > Emil
> >
> > dmesg logs:
> >
> > "btrfs check --init-extent-tree -p /dev/sda1
> > [1534223.372937] btrfs[181698]: segfault at 10 ip 00007f3ef8358d77 sp 00007ffd4c006ee0 error 4 in libc-2.31.so[7f3ef82f6000+14d000]
> > [1534223.372949] Code: 88 08 00 00 0f 86 39 04 00 00 8b 35 b7 bf 13 00 85 f6 0f 85 ab 05 00 00 41 f6 44 24 08 01 75 24 49 8b 04 24 49 29 c4 48 01 c3 <49> 8b 54 24 08 48 83 e2 f8 48 39 c2 0f 85 09 06 00 00 4c 89 e7 e8
> > [1534223.373107] audit: type=1701 audit(1591128122.557:1822): auid=1000 uid=0 gid=0 ses=39 pid=181698 comm="btrfs" exe="/usr/bin/btrfs" sig=11 res=1
> >
> > btrfs check --init-extent-tree -r 30122107502592 -p /dev/sda1
> > [1535246.991899] sd 11:0:3:0: [sdg] tag#46 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
> > [1535246.991905] sd 11:0:3:0: [sdg] tag#46 CDB: Read(16) 88 00 00 00 00 02 46 30 d9 00 00 00 00 08 00 00
> > [1535246.991909] blk_update_request: I/O error, dev sdg, sector 9767540992 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
> > [1535251.466041] sd 11:0:2:0: [sdf] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> > [1535251.466047] sd 11:0:2:0: [sdf] tag#11 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
> > [1535251.466051] blk_update_request: I/O error, dev sdf, sector 7814036992 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
> > [1535328.853062] btrfs[181874]: segfault at 10 ip 00007f6c9c447d77 sp 00007ffc666cc940 error 4 in libc-2.31.so[7f6c9c3e5000+14d000]
> > [1535328.853069] Code: 88 08 00 00 0f 86 39 04 00 00 8b 35 b7 bf 13 00 85 f6 0f 85 ab 05 00 00 41 f6 44 24 08 01 75 24 49 8b 04 24 49 29 c4 48 01 c3 <49> 8b 54 24 08 48 83 e2 f8 48 39 c2 0f 85 09 06 00 00 4c 89 e7 e8
> > [1535328.853097] audit: type=1701 audit(1591129228.050:1845): auid=1000 uid=0 gid=0 ses=39 pid=181874 comm="btrfs" exe="/usr/bin/btrfs" sig=11 res=1"
> >
> > Log from failed restore:
> > ERROR: exhausted mirros trying to read (3 > 2)
> > Error copying data for /path/to/file/xxxxxxxxxxxxxx.vhdx
> >
> > May 20, 2020 21:01:45 Chris Murphy <lists@colorremedies.com>:
> >
> >> On Wed, May 20, 2020 at 5:56 AM Emil Heimpel <broetchenrackete@gmail.com> wrote:
> >>>
> >>> Hi again,
> >>>
> >>> I ran find-root and using the first found root (that is not in the superblock) seems to be finding data with btrfs-restore (only did a dry-run, because I don't have the space at the moment to do a full restore). At least I got warnings about folders where it stopped looping and I recognized the folders. It is still not showing any files, but maybe I misunderstood what the dry-run option is suppose to be doing.
> >>>
> >>> Because the generation of the root is higher than expected, I don't know which root is expected to be the best option to choose from. One that is closest to the root the super thinks is the correct one (fe 30122555883520(gen: 116442 level: 0)) or the one with the highest generation (30122107502592(gen: 116502 level: 1))? To be honest I don't think I quite understand generations and levels :)
> >>
> >> Yeah it's confusing.
> >>
> >> I think there's extent tree corruption and I'm not sure it can be
> >> repaired. I suggest 'btrfs restore' until you're satisfied, and then
> >> you can try 'btrfs check --init-extent-tree' and see if it can fix the
> >> extent tree. It's maybe a 50/50 chance, hard to say. If it completes,
> >> follow it up with 'btrfs check' without options, and see if it
> >> complains about anything else.
> >>
> >> One thing that's important to consider is using space_cache v2. The
> >> default space_cache v1 puts free space metadata into data chunks,
> >> subjecting them to raid56, which is not great. Since you went to the
> >> effort to use raid1 metadata, best to also use space_cache=v2 at first
> >> mount, putting free space metadata into metadata chunks. It's expected
> >> to be the default soon, I guess, but I'm not sure what the time frame
> >> is.
> >>
> >> Also consider using hdparm -W (capital W not lower case, see man page)
> >> to disable the write cache on all drives if you're not certain they
> >> consistently honor FUA or fsync.
> >>
> >> --
> >> Chris Murphy
> >>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: Need help recovering broken RAID5 array (parent transid verify failed)
  2020-06-04 22:55           ` Emil Heimpel
  2020-09-28 13:04             ` Dan van der Ster
@ 2020-10-01  4:49             ` Zygo Blaxell
  1 sibling, 0 replies; 8+ messages in thread
From: Zygo Blaxell @ 2020-10-01  4:49 UTC (permalink / raw)
  To: Emil Heimpel; +Cc: Btrfs BTRFS, Chris Murphy, Qu Wenruo

On Thu, Jun 04, 2020 at 10:55:19PM +0000, Emil Heimpel wrote:
> Hi,
> 
> I checked the smart values for all drives including short tests and
> all seem fine. I found these in journalctl and they must have happened
> during the balance:
> 
> May 08 08:26:10 BlueQ kernel: sd 11:0:3:0: [sdg] tag#2446 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
> May 08 08:26:10 BlueQ kernel: sd 11:0:3:0: [sdg] tag#2446 CDB: Read(16) 88 00 00 00 00 00 42 84 13 18 00 00 00 08 00 00
> May 08 08:26:10 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 1115951896 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
> 
> ...
> 
> May 08 10:53:27 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2455 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> May 08 10:53:27 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2455 CDB: Read(16) 88 00 00 00 00 00 42 60 db 10 00 00 00 08 00 00
> May 08 10:53:27 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 1113643792 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0

These are SCSI-level read errors from sdf and sdg.  If you have read
errors from the sd layer, that indicates hardware failure, regardless
of what SMART says.

Start by replacing that hardware.  Note that the failing component
may not be the hard drive: SATA cables, power cables, power supplies,
and even HCI (controller) chips can fail too.  If multiple disks are
intermittently failing it's usually the power supply or some other
device putting excess load on a shared cable.  Power supplies and cables
are cheap--swap them out and see what happens.

If you have multiple devices failing in the same raid5 filesystem,
successful recovery is not guaranteed.

> ...
> 
> May 08 12:55:14 BlueQ kernel: sd 11:0:2:0: [sdf] tag#3311 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> May 08 12:55:14 BlueQ kernel: sd 11:0:2:0: [sdf] tag#3311 CDB: Read(16) 88 00 00 00 00 00 42 60 7b 38 00 00 00 30 00 00
> May 08 12:55:14 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 1113619256 op 0x0:(READ) flags 0x80700 phys_seg 6 prio class 0
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 0 (dev /dev/sdf1 sector 1113617208)
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 4096 (dev /dev/sdf1 sector 1113617216)
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 8192 (dev /dev/sdf1 sector 1113617224)
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 12288 (dev /dev/sdf1 sector 1113617232)
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 16384 (dev /dev/sdf1 sector 1113617240)
> May 08 12:55:23 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1254360 off 20480 (dev /dev/sdf1 sector 1113617248)
> 
> ...
> 
> May 08 13:51:51 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2470 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> May 08 13:51:51 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2470 CDB: Read(16) 88 00 00 00 00 00 42 64 19 a0 00 00 00 10 00 00
> May 08 13:51:51 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 1113856416 op 0x0:(READ) flags 0x80700 phys_seg 2 prio class 0
> May 08 13:51:51 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1266969 off 0 (dev /dev/sdf1 sector 1113854368)
> May 08 13:51:51 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 1266969 off 4096 (dev /dev/sdf1 sector 1113854376)
> 
> ...
> 
> May 08 23:09:19 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2480 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> May 08 23:09:19 BlueQ kernel: sd 11:0:2:0: [sdf] tag#2480 CDB: Read(16) 88 00 00 00 00 00 ab 00 30 80 00 00 01 00 00 00
> May 08 23:09:19 BlueQ kernel: blk_update_request: I/O error, dev sdf, sector 2868916352 op 0x0:(READ) flags 0x80700 phys_seg 16 prio class 0
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 196608 (dev /dev/sdf1 sector 2868914304)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 200704 (dev /dev/sdf1 sector 2868914312)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 204800 (dev /dev/sdf1 sector 2868914320)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 208896 (dev /dev/sdf1 sector 2868914328)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 212992 (dev /dev/sdf1 sector 2868914336)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 217088 (dev /dev/sdf1 sector 2868914344)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 221184 (dev /dev/sdf1 sector 2868914352)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 225280 (dev /dev/sdf1 sector 2868914360)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 229376 (dev /dev/sdf1 sector 2868914368)
> May 08 23:09:19 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 5126 off 233472 (dev /dev/sdf1 sector 2868914376)
> 
> ...#btrfs balance started probably
> 
> May 09 04:34:52 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: move data extents
> May 09 04:34:53 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: update data pointers
> May 09 04:34:53 BlueQ kernel: BTRFS info (device sdc1): relocating block group 21793982906368 flags data|raid5
> May 09 04:35:26 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: move data extents
> May 09 04:35:27 BlueQ kernel: BTRFS info (device sdc1): found 26 extents, stage: update data pointers
> May 09 04:35:28 BlueQ kernel: BTRFS info (device sdc1): relocating block group 21790761680896 flags data|raid5
> #repeating a lot
> 
> ...
> 
> May 09 05:11:52 BlueQ kernel: BTRFS info (device sdc1): found 29 extents, stage: move data extents
> May 09 05:11:53 BlueQ kernel: BTRFS info (device sdc1): found 29 extents, stage: update data pointers
> May 09 05:11:54 BlueQ kernel: BTRFS info (device sdc1): relocating block group 21555612221440 flags data|raid5
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440291328 csum 0x2ac15d26 expected csum 0xd26a9dcb mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440295424 csum 0x2ac15d26 expected csum 0x85d5d3bb mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440299520 csum 0x2ac15d26 expected csum 0x20cd77c6 mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440303616 csum 0x2ac15d26 expected csum 0x67d2b42b mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440307712 csum 0x2ac15d26 expected csum 0xc77fc7cd mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440311808 csum 0x2ac15d26 expected csum 0xe4409fd6 mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440315904 csum 0x2ac15d26 expected csum 0x99156670 mirror 1

root -9 is the relocation tree, which does not exist on disk (it is always
converted to a real tree on transaction commit).  You are probably hitting
a known raid5 bug where parity reconstruction fails in degraded mode
(or at other times when a disk block cannot be read).  This will give
you a lot of false read errors with fictional csum values.  The errors
on other roots could be either a disk failure or a btrfs raid5 bug, but
root -9 errors are almost exclusively the btrfs raid5 bug.

Ignore all of this for now.  Find and replace the broken hardware first.
If you determine that the problem is a failing disk, use 'btrfs replace'
to replace the disk.  If it's just cable or power supply and the disk
is OK, then 'btrfs scrub' should suffice.  Be sure to run scrub on each
disk separately, one at a time, to avoid hitting other btrfs raid5 bugs.
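
Something along these lines (devid, device names and mount point are
only examples):

  btrfs replace start <devid> /dev/sdNEW /mnt   # if a disk really is failing
  btrfs scrub start -B /dev/sdf1                # otherwise scrub, one device at a time
  btrfs scrub status /dev/sdf1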

Do not use 'balance', 'dev remove' or 'dev add' in the meantime.
They will just keep failing while the hardware issues are present.
If the drive is failing then the extra seeking will speed up
that failure.

> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440320000 csum 0x2ac15d26 expected csum 0xfd4f65c0 mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440324096 csum 0x2ac15d26 expected csum 0xbc27383b mirror 1
> May 09 05:12:04 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 440328192 csum 0x2ac15d26 expected csum 0x84fb6b1f mirror 1
> May 09 05:12:05 BlueQ kernel: repair_io_failure: 6 callbacks suppressed
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440291328 (dev /dev/sda1 sector 6697578792)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440295424 (dev /dev/sda1 sector 6697578800)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440303616 (dev /dev/sda1 sector 6697578816)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440299520 (dev /dev/sda1 sector 6697578808)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440307712 (dev /dev/sda1 sector 6697578824)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440311808 (dev /dev/sda1 sector 6697578832)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440315904 (dev /dev/sda1 sector 6697578840)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440320000 (dev /dev/sda1 sector 6697578848)
> May 09 05:12:05 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440324096 (dev /dev/sda1 sector 6697578856)
> May 09 05:12:06 BlueQ kernel: BTRFS info (device sdc1): read error corrected: ino 382 off 440328192 (dev /dev/sda1 sector 6697578864)
> May 09 05:12:36 BlueQ kernel: btrfs_print_data_csum_error: 349 callbacks suppressed
> May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137126400 csum 0x2ac15d26 expected csum 0xde18d96f m>
> May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137130496 csum 0x2ac15d26 expected csum 0xda0ff7db m>
> May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137134592 csum 0x2ac15d26 expected csum 0xf76a890c m>
> May 09 05:12:36 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3137138688 csum 0x2ac15d26 expected csum 0x228317a4 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138387968 csum 0x2ac15d26 expected csum 0xcf6b7db7 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138519040 csum 0x2ac15d26 expected csum 0xa992d2c0 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138650112 csum 0x2ac15d26 expected csum 0xfeae0823 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138523136 csum 0x2ac15d26 expected csum 0xf05799e5 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138527232 csum 0x2ac15d26 expected csum 0x41210896 m>
> May 09 05:12:37 BlueQ kernel: BTRFS warning (device sdc1): csum failed root -9 ino 382 off 3138531328 csum 0x2ac15d26 expected csum 0x8ff1d037 m>
> May 09 05:12:37 BlueQ kernel: repair_io_failure: 350 callbacks suppressed
> 
> ... #Happily balancing for over 24h without warnings or errors...
> 
> May 10 08:32:41 BlueQ kernel: BTRFS info (device sdc1): relocating block group 10412162809856 flags data|raid5
> May 10 08:33:17 BlueQ kernel: sd 11:0:3:0: attempting task abort!scmd(0x00000000931cd1e4), outstanding for 7174 ms & timeout 7000 ms
> May 10 08:33:17 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1340 CDB: ATA command pass through(16) 85 06 20 00 00 00 00 00 00 00 00 00 00 00 e5 00
> May 10 08:33:17 BlueQ kernel: scsi target11:0:3: handle(0x000c), sas_address(0x4433221107000000), phy(7)
> May 10 08:33:17 BlueQ kernel: scsi target11:0:3: enclosure logical id(0x590b11c022f3fb00), slot(4)
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: task abort: SUCCESS scmd(0x00000000931cd1e4)
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=14s
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 Sense Key : Not Ready [current]
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 Add. Sense: Logical unit not ready, cause not reportable
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1342 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
> May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
> May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 0, rd 0, flush 1, corrupt 0, gen 0
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=14s
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 Sense Key : Not Ready [current]
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 Add. Sense: Logical unit not ready, cause not reportable
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1343 CDB: Write(16) 8a 00 00 00 00 02 0a 9a a0 80 00 00 0a 00 00 00
> May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 8767840384 op 0x1:(WRITE) flags 0x0 phys_seg 61 prio class 0
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=14s
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 Sense Key : Not Ready [current]
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 Add. Sense: Logical unit not ready, cause not reportable
> May 10 08:33:21 BlueQ kernel: sd 11:0:3:0: [sdg] tag#1280 CDB: Write(16) 8a 00 00 00 00 02 0a 9a aa 80 00 00 0a 00 00 00
> May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 8767842944 op 0x1:(WRITE) flags 0x0 phys_seg 65 prio class 0
> May 10 08:33:21 BlueQ kernel: blk_update_request: I/O error, dev sdg, sector 8767855488 op 0x1:(WRITE) flags 0x4000 phys_seg 37 prio class 0
> May 10 08:33:21 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
> May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 1, rd 0, flush 1, corrupt 0, gen 0
> May 10 08:33:21 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1

sdg is definitely failing here.

> May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 2, rd 0, flush 1, corrupt 0, gen 0
> May 10 08:33:21 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
> May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 3, rd 0, flush 1, corrupt 0, gen 0
> May 10 08:33:21 BlueQ udisksd[3593]: Error performing housekeeping for drive /org/freedesktop/UDisks2/drives/ST5000DM000_1FK178_W4J10239: Error >

OK, forget all that stuff I said above about power and SATA cables.
That is a Seagate Barracuda.  It's almost certainly the drive.
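
If you want to confirm that before buying a replacement, the drive's own
error log and a long self-test are a quick sanity check (a sketch, assuming
smartmontools is installed; /dev/sdg as in the log above):

    smartctl -x /dev/sdg        # all SMART attributes plus the device error log
    smartctl -t long /dev/sdg   # start an extended self-test; read the result
                                # later with 'smartctl -a /dev/sdg'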

> 0000: 00 00 00 00  00 00 00 00  00 00 00 00  00 00 00 00    ................
> 0010: 00 00 00 00  00 00 00 00  00 00 00 00  00 00 00 00    ................
> (g-io-error-quark, 0)
> May 10 08:33:21 BlueQ kernel: BTRFS error (device sdc1): error writing primary super block to device 2
> May 10 08:33:23 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 3, rd 0, flush 2, corrupt 0, gen 0
> May 10 08:33:23 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
> May 10 08:33:23 BlueQ kernel: BTRFS error (device sdc1): bdev /dev/sdg1 errs: wr 4, rd 0, flush 2, corrupt 0, gen 0
> May 10 08:33:23 BlueQ kernel: BTRFS warning (device sdc1): lost page write due to IO error on /dev/sdg1
> 
> Do I need to worry about the hdds?

It looks like sdg just gave up.
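
The wr/rd/flush/corrupt/gen counters in those messages are persistent, so
you can read them back later to see whether any device keeps accumulating
errors (a sketch; the mount point is a placeholder):

    btrfs device stats /mnt      # per-device write/read/flush/corruption/generation counters
    btrfs device stats -z /mnt   # same, but reset the counters to zero afterwards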

> Emil
> P. S.: Not sure if my previous email reached the ML....?
> 
> Jun 3, 2020 10:44:49 Emil Heimpel <broetchenrackete@gmail.com>:
> 
> > Hi again.
> > 
> > I think I managed to restore all of the data to a new backup, except for one old system backup image from a laptop.  Of course there could be files that were never found at all, but I didn't notice any missing.
> > 
> > I tried init-extent-tree with and without the alternate root tree block, but both runs failed, apparently crashing with a segmentation fault; see the attached logs and dmesg snippets for details.  I did disable the write cache on all drives with hdparm as suggested.
> > 
> > Now I'm not sure what the best way forward is.  If you have further suggestions for repairing the array, I would try them today.  Otherwise I will wipe the drives and create a new array (metadata raid1 (maybe raid1c3?), data raid5, checksum maybe sha256 or blake2, possibly zstd compression, space_cache v2).  If you have any suggestions for the new array, feel free to tell me!
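
For reference, a rough sketch of what that layout would look like (device
names are placeholders; raid1c3 and the sha256/blake2 checksums need
kernel and btrfs-progs 5.5 or newer):

    # metadata raid1c3, data raid5, blake2 checksums
    mkfs.btrfs -m raid1c3 -d raid5 --csum blake2 /dev/sdX1 /dev/sdY1 /dev/sdZ1
    # first mount: zstd compression and the v2 free-space cache
    mount -o compress=zstd,space_cache=v2 /dev/sdX1 /mnt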
> > 
> > Thank you for the help so far!
> > 
> > Emil
> > 
> > dmesg logs:
> > 
> > "btrfs check --init-extent-tree -p /dev/sda1
> > [1534223.372937] btrfs[181698]: segfault at 10 ip 00007f3ef8358d77 sp 00007ffd4c006ee0 error 4 in libc-2.31.so[7f3ef82f6000+14d000]
> > [1534223.372949] Code: 88 08 00 00 0f 86 39 04 00 00 8b 35 b7 bf 13 00 85 f6 0f 85 ab 05 00 00 41 f6 44 24 08 01 75 24 49 8b 04 24 49 29 c4 48 01 c3 <49> 8b 54 24 08 48 83 e2 f8 48 39 c2 0f 85 09 06 00 00 4c 89 e7 e8
> > [1534223.373107] audit: type=1701 audit(1591128122.557:1822): auid=1000 uid=0 gid=0 ses=39 pid=181698 comm="btrfs" exe="/usr/bin/btrfs" sig=11 res=1
> > 
> > btrfs check --init-extent-tree -r 30122107502592 -p /dev/sda1
> > [1535246.991899] sd 11:0:3:0: [sdg] tag#46 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=9s
> > [1535246.991905] sd 11:0:3:0: [sdg] tag#46 CDB: Read(16) 88 00 00 00 00 02 46 30 d9 00 00 00 00 08 00 00
> > [1535246.991909] blk_update_request: I/O error, dev sdg, sector 9767540992 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
> > [1535251.466041] sd 11:0:2:0: [sdf] tag#11 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=4s
> > [1535251.466047] sd 11:0:2:0: [sdf] tag#11 CDB: Read(16) 88 00 00 00 00 01 d1 c0 be 00 00 00 00 08 00 00
> > [1535251.466051] blk_update_request: I/O error, dev sdf, sector 7814036992 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
> > [1535328.853062] btrfs[181874]: segfault at 10 ip 00007f6c9c447d77 sp 00007ffc666cc940 error 4 in libc-2.31.so[7f6c9c3e5000+14d000]
> > [1535328.853069] Code: 88 08 00 00 0f 86 39 04 00 00 8b 35 b7 bf 13 00 85 f6 0f 85 ab 05 00 00 41 f6 44 24 08 01 75 24 49 8b 04 24 49 29 c4 48 01 c3 <49> 8b 54 24 08 48 83 e2 f8 48 39 c2 0f 85 09 06 00 00 4c 89 e7 e8
> > [1535328.853097] audit: type=1701 audit(1591129228.050:1845): auid=1000 uid=0 gid=0 ses=39 pid=181874 comm="btrfs" exe="/usr/bin/btrfs" sig=11 res=1"
> > 
> > Log from failed restore:
> > ERROR: exhausted mirros trying to read (3 > 2)
> > Error copying data for /path/to/file/xxxxxxxxxxxxxx.vhdx
> > 
> > May 20, 2020 21:01:45 Chris Murphy <lists@colorremedies.com>:
> > 
> >> On Wed, May 20, 2020 at 5:56 AM Emil Heimpel <broetchenrackete@gmail.com> wrote:
> >>> 
> >>> Hi again,
> >>> 
> >>> I ran find-root, and using the first root it found (one that is not in the superblock), btrfs restore does seem to find data (I only did a dry run, because I don't currently have the space for a full restore).  At least I got warnings about folders where it stopped looping, and I recognized those folders.  It still doesn't show any files, but maybe I misunderstood what the dry-run option is supposed to do.
> >>> 
> >>> Because the generation of the found root is higher than expected, I don't know which root is the best one to choose: the one closest to the root the superblock thinks is correct (e.g. 30122555883520 (gen 116442, level 0)), or the one with the highest generation (30122107502592 (gen 116502, level 1))?  To be honest, I don't think I quite understand generations and levels :)
> >> 
> >> Yeah it's confusing.
> >> 
> >> I think there's extent tree corruption and I'm not sure it can be
> >> repaired. I suggest 'btrfs restore' until you're satisfied, and then
> >> you can try 'btrfs check --init-extent-tree' and see if it can fix the
> >> extent tree. It's maybe a 50/50 chance, hard to say. If it completes,
> >> follow it up with 'btrfs check' without options, and see if it
> >> complains about anything else.
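
(In command form, that sequence is roughly the following; the device and
target paths are placeholders:)

    btrfs restore -iv /dev/sdX1 /mnt/recovery-target/   # copy out what is readable
    btrfs check --init-extent-tree /dev/sdX1            # try to rebuild the extent tree
    btrfs check /dev/sdX1                               # then re-check with no options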
> >> 
> >> One thing that's important to consider is using space_cache v2. The
> >> default space_cache v1 puts free space metadata into data chunks,
> >> subjecting them to raid56, which is not great. Since you went to the
> >> effort to use raid1 metadata, best to also use space_cache=v2 at first
> >> mount, putting free space metadata into metadata chunks. It's expected
> >> to be the default soon, I guess, but I'm not sure what the time frame
> >> is.
> >> 
> >> Also consider using hdparm -W (capital W not lower case, see man page)
> >> to disable the write cache on all drives if you're not certain they
> >> consistently honor FUA or fsync.
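
(A minimal sketch, with placeholder device names; -W 0 disables the
volatile write cache, -W 1 re-enables it, and -W alone reports the
current setting:)

    for dev in /dev/sdX /dev/sdY /dev/sdZ; do
        hdparm -W 0 "$dev"
    done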
> >> 
> >> -- 
> >> Chris Murphy
> >> 


end of thread

Thread overview: 8+ messages
2020-05-15  6:03 Need help recovering broken RAID5 array (parent transid verify failed) Emil Heimpel
2020-05-15 21:46 ` Chris Murphy
2020-05-16  1:44   ` Emil Heimpel
2020-05-20 11:56     ` Emil Heimpel
2020-05-20 19:01       ` Chris Murphy
     [not found]         ` <21913a92-5059-405f-b2d4-91e785ab77bd@gmail.com>
2020-06-04 22:55           ` Emil Heimpel
2020-09-28 13:04             ` Dan van der Ster
2020-10-01  4:49             ` Zygo Blaxell
