All of lore.kernel.org
* Hand Patching a BTRFS Superblock?
@ 2017-12-28  1:46 Stirling Westrup
  2017-12-28  1:53 ` Qu Wenruo
  0 siblings, 1 reply; 14+ messages in thread
From: Stirling Westrup @ 2017-12-28  1:46 UTC (permalink / raw)
  To: linux-btrfs

Here's my situation: I have a network file server containing a 12TB
BTRFS spread out over four devices (sda-sdd) which I am trying to
recover. I do have a backup, but it's about 3 months old, and while I
could certainly rebuild everything from that if I really had to, I
would far rather not have to rerip my latest DVDs. So, I am willing to
experiment if it might save me a few hundred hours of reconstruction.
I don't currently have another 12 TB of space anywhere for making a
scratch copy.

A few days ago sdb developed hard errors and I can no longer mount the
filesystem. sdb is no longer even recognized as a valid btrfs drive.
However, when I ran ddrescue over the drive I managed to make a clone
(sdf) which contains all but 12K of the original drive. However, those
missing 12K are all in the various superblocks, so the cloned drive is
still unreadable.
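For reference, the clone pass was along these lines (device paths and map-file name are illustrative; the commands are echoed as a dry-run here, since they write to a raw device):

```shell
# Dry-run sketch of the ddrescue clone (echoed, not executed):
run() { echo "$@"; }
run ddrescue -f -n /dev/sdb /dev/sdf sdb.map    # fast first pass, skip scraping
run ddrescue -f -r3 /dev/sdb /dev/sdf sdb.map   # retry the bad areas up to 3 times
```

The map file is what lets ddrescue resume and retry only the unread sectors.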

In the hope that I was only missing a few bits of the superblocks, I
started by dd-ing the first 4M of sdd into sdf before the rescue, so
that ddrescue would overwrite most of those superblocks with sdb's data
and the final bits left over from sdd would make things usable.
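A sketch of that 4M copy (echoed as a dry-run; device paths as in this message):

```shell
run() { echo "$@"; }
# Copy the first 4 MiB (which holds the primary superblock at 64KiB)
# from the healthy sdd onto the clone target sdf:
run dd if=/dev/sdd of=/dev/sdf bs=1M count=4
```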

No such luck. I now have a drive sdf which claims to be identical to
sdd but which is a clone of sdb. In case it matters, sda and sdc are
each 4TB while sdb and sdd are each 2TB drives; sde is my boot drive
and sdf is a 2TB clone of sdb.

What I need to do is to somehow patch sdf's primary superblock so it
contains the correct device number and UUID_SUB for sdb, so that I can
attempt some sort of recovery. Right now my Linux system is
(understandably) quite confused by the situation:

videon:~ # uname -a
Linux videon 4.4.103-18.41-default #1 SMP Wed Dec 13 14:06:33 UTC 2017
(f66c68c) x86_64 x86_64 x86_64 GNU/Linux

videon:~ # btrfs --version
btrfs-progs v4.5.3+20160729

videon:~ # btrfs fi show
Label: 'Storage'  uuid: 33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
        Total devices 4 FS bytes used 10.69TiB
        devid    1 size 1.82TiB used 1.82TiB path /dev/sdd
        devid    3 size 3.64TiB used 3.54TiB path /dev/sdc
        devid    4 size 3.64TiB used 3.54TiB path /dev/sda
        *** Some devices missing

Any suggestions on how to proceed would be appreciated.

-- 
Stirling Westrup
Programmer, Entrepreneur.

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Hand Patching a BTRFS Superblock?
  2017-12-28  1:46 Hand Patching a BTRFS Superblock? Stirling Westrup
@ 2017-12-28  1:53 ` Qu Wenruo
  2017-12-28 11:41   ` Nikolay Borisov
  0 siblings, 1 reply; 14+ messages in thread
From: Qu Wenruo @ 2017-12-28  1:53 UTC (permalink / raw)
  To: swestrup, linux-btrfs





On 2017年12月28日 09:46, Stirling Westrup wrote:
> Here's my situation: I have a network file server containing a 12TB
> BTRFS spread out over four devices (sda-sdd) which I am trying to
> recover. I do have a backup, but it's about 3 months old, and while I
> could certainly rebuild everything from that if I really had to, I
> would far rather not have to rerip my latest DVDs. So, I am willing to
> experiment if it might save me a few hundred hours of reconstruction.
> I don't currently have another 12 TB of space anywhere for making a
> scratch copy.
> 
> A few days ago sdb developed hard errors and I can no longer mount the
> filesystem. sdb is no longer even recognized as a valid btrfs drive.
> However, when I ran ddrescue over the drive I managed to make a clone
> (sdf) which contains all but 12K of the original drive. However, those
> missing 12K are all in the various superblocks, so the cloned drive is
> still unreadable.
> 
> In the hopes that I was only missing a few bits of the superblocks, I
> started out by dd-ing the first 4M of sdd into sdf in the hopes that
> ddrescue would overwrite much of the superblocks, and the final bits
> from sdd would make things usable.
> 
> No such luck. I now have a drive sdf which claims to be identical to
> sdd but which is a clone of sdb. In case it matters, sda and sdc are
> each 4TB while sdb and sdd are each 2TB drives; sde is my boot drive
> and sdf is a 2TB clone of sdb.
> 
> What I need to do is to somehow patch sdf's primary superblock so it
> contains the correct device number and UUID_SUB for sdb, so that I can
> attempt some sort of recovery. Right now my linux is (understandably)
> quite confused by the situation:

Did you try "btrfs rescue super-recover"?

Remember to use the devel branch from git, as there is a small bug
that prevents it from reporting the correct result.

super-recover will try to use the backup superblock to recover the
primary one.
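A sketch of the invocation (echoed as a dry-run since it rewrites the device; the device path is an assumption):

```shell
run() { echo "$@"; }
# super-recover scans the superblock copies and, after confirmation,
# rewrites a damaged primary from a good backup; -v is verbose.
run btrfs rescue super-recover -v /dev/sdf
```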

Thanks,
Qu

> 
> videon:~ # uname -a
> Linux videon 4.4.103-18.41-default #1 SMP Wed Dec 13 14:06:33 UTC 2017
> (f66c68c) x86_64 x86_64 x86_64 GNU/Linux
> 
> videon:~ # btrfs --version
> btrfs-progs v4.5.3+20160729
> 
> videon:~ # btrfs fi show
> Label: 'Storage'  uuid: 33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
>         Total devices 4 FS bytes used 10.69TiB
>         devid    1 size 1.82TiB used 1.82TiB path /dev/sdd
>         devid    3 size 3.64TiB used 3.54TiB path /dev/sdc
>         devid    4 size 3.64TiB used 3.54TiB path /dev/sda
>         *** Some devices missing
> 
> Any suggestions on how to proceed would be appreciated.
> 




* Re: Hand Patching a BTRFS Superblock?
  2017-12-28  1:53 ` Qu Wenruo
@ 2017-12-28 11:41   ` Nikolay Borisov
  2017-12-28 12:11     ` Qu Wenruo
  0 siblings, 1 reply; 14+ messages in thread
From: Nikolay Borisov @ 2017-12-28 11:41 UTC (permalink / raw)
  To: Qu Wenruo, swestrup, linux-btrfs



On 28.12.2017 03:53, Qu Wenruo wrote:
> 
> 
> On 2017年12月28日 09:46, Stirling Westrup wrote:
>> Here's my situation: I have a network file server containing a 12TB
>> BTRFS spread out over four devices (sda-sdd) which I am trying to
>> recover. I do have a backup, but it's about 3 months old, and while I
>> could certainly rebuild everything from that if I really had to, I
>> would far rather not have to rerip my latest DVDs. So, I am willing to
>> experiment if it might save me a few hundred hours of reconstruction.
>> I don't currently have another 12 TB of space anywhere for making a
>> scratch copy.
>>
>> A few days ago sdb developed hard errors and I can no longer mount the
>> filesystem. sdb is no longer even recognized as a valid btrfs drive.
>> However, when I ran ddrescue over the drive I managed to make a clone
>> (sdf) which contains all but 12K of the original drive. However, those
>> missing 12K are all in the various superblocks, so the cloned drive is
>> still unreadable.
>>
>> In the hopes that I was only missing a few bits of the superblocks, I
>> started out by dd-ing the first 4M of sdd into sdf in the hopes that
>> ddrescue would overwrite much of the superblocks, and the final bits
>> from sdd would make things usable.
>>
>> No such luck. I now have a drive sdf which claims to be identical to
>> sdd but which is a clone of sdb. In case it matters, sda and sdc are
>> each 4TB while sdb and sdd are each 2TB drives; sde is my boot drive
>> and sdf is a 2TB clone of sdb.
>>
>> What I need to do is to somehow patch sdf's primary superblock so it
>> contains the correct device number and UUID_SUB for sdb, so that I can
>> attempt some sort of recovery. Right now my linux is (understandably)
>> quite confused by the situation:
> 
> Did you try "btrfs rescue super-recover"?
> 
> Remember to use the devel branch from git, as there is a small bug
> that prevents it from reporting the correct result.

Unfortunately my patchset which fixes super-recover is still not merged,
so he needs to grab the patches from the mailing list and compile the
btrfs tools himself. The patch in question can be found here:

https://patchwork.kernel.org/patch/10092471/
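A build sketch (echoed as a dry-run; the /mbox/ suffix is the usual patchwork download URL, assumed here):

```shell
# Dry-run (echoed, not executed) of building btrfs-progs with the
# unmerged fix applied on top of the devel branch:
run() { echo "$@"; }
run git clone https://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git
run curl -L -o fix.mbox https://patchwork.kernel.org/patch/10092471/mbox/
# Then inside the checkout: git checkout devel && git am ../fix.mbox,
# followed by ./autogen.sh && ./configure && make
```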

> 
> super-recover will try to use the backup superblock to recover the
> primary one.
> 
> Thanks,
> Qu
> 
>>
>> videon:~ # uname -a
>> Linux videon 4.4.103-18.41-default #1 SMP Wed Dec 13 14:06:33 UTC 2017
>> (f66c68c) x86_64 x86_64 x86_64 GNU/Linux
>>
>> videon:~ # btrfs --version
>> btrfs-progs v4.5.3+20160729
>>
>> videon:~ # btrfs fi show
>> Label: 'Storage'  uuid: 33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
>>         Total devices 4 FS bytes used 10.69TiB
>>         devid    1 size 1.82TiB used 1.82TiB path /dev/sdd
>>         devid    3 size 3.64TiB used 3.54TiB path /dev/sdc
>>         devid    4 size 3.64TiB used 3.54TiB path /dev/sda
>>         *** Some devices missing
>>
>> Any suggestions on how to proceed would be appreciated.
>>
> 


* Re: Hand Patching a BTRFS Superblock?
  2017-12-28 11:41   ` Nikolay Borisov
@ 2017-12-28 12:11     ` Qu Wenruo
  2017-12-28 23:09       ` Stirling Westrup
  0 siblings, 1 reply; 14+ messages in thread
From: Qu Wenruo @ 2017-12-28 12:11 UTC (permalink / raw)
  To: Nikolay Borisov, swestrup, linux-btrfs





On 2017年12月28日 19:41, Nikolay Borisov wrote:
> 
> 
> On 28.12.2017 03:53, Qu Wenruo wrote:
>>
>>
>> On 2017年12月28日 09:46, Stirling Westrup wrote:
>>> Here's my situation: I have a network file server containing a 12TB
>>> BTRFS spread out over four devices (sda-sdd) which I am trying to
>>> recover. I do have a backup, but it's about 3 months old, and while I
>>> could certainly rebuild everything from that if I really had to, I
>>> would far rather not have to rerip my latest DVDs. So, I am willing to
>>> experiment if it might save me a few hundred hours of reconstruction.
>>> I don't currently have another 12 TB of space anywhere for making a
>>> scratch copy.
>>>
>>> A few days ago sdb developed hard errors and I can no longer mount the
>>> filesystem. sdb is no longer even recognized as a valid btrfs drive.
>>> However, when I ran ddrescue over the drive I managed to make a clone
>>> (sdf) which contains all but 12K of the original drive. However, those
>>> missing 12K are all in the various superblocks, so the cloned drive is
>>> still unreadable.
>>>
>>> In the hopes that I was only missing a few bits of the superblocks, I
>>> started out by dd-ing the first 4M of sdd into sdf in the hopes that
>>> ddrescue would overwrite much of the superblocks, and the final bits
>>> from sdd would make things usable.
>>>
>>> No such luck. I now have a drive sdf which claims to be identical to
>>> sdd but which is a clone of sdb. In case it matters, sda and sdc are
>>> each 4TB while sdb and sdd are each 2TB drives; sde is my boot drive
>>> and sdf is a 2TB clone of sdb.
>>>
>>> What I need to do is to somehow patch sdf's primary superblock so it
>>> contains the correct device number and UUID_SUB for sdb, so that I can
>>> attempt some sort of recovery. Right now my linux is (understandably)
>>> quite confused by the situation:
>>
>> Did you try "btrfs rescue super-recover"?
>>
>> Remember to use the devel branch from git, as there is a small bug
>> that prevents it from reporting the correct result.
> 
> Unfortunately my patchset which fixes super-recover is still not merged,
> so he needs to grab the patches from the mailing list and compile the
> btrfs tools himself. The patch in question can be found here:
> 
> https://patchwork.kernel.org/patch/10092471/

And just in case, "btrfs insp dump-super -fa" output could greatly help
us check whether the backup superblocks are really good.
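Spelled out (echoed as a dry-run; the device path is an assumption):

```shell
run() { echo "$@"; }
# -f prints the full sys_chunk_array and backup roots; -a dumps every
# superblock copy on the device, not just the primary one.
run btrfs inspect-internal dump-super -fa /dev/sdf
```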

Thanks,
Qu
> 
>>
>> super-recover will try to use the backup superblock to recover the
>> primary one.
>>
>> Thanks,
>> Qu
>>
>>>
>>> videon:~ # uname -a
>>> Linux videon 4.4.103-18.41-default #1 SMP Wed Dec 13 14:06:33 UTC 2017
>>> (f66c68c) x86_64 x86_64 x86_64 GNU/Linux
>>>
>>> videon:~ # btrfs --version
>>> btrfs-progs v4.5.3+20160729
>>>
>>> videon:~ # btrfs fi show
>>> Label: 'Storage'  uuid: 33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
>>>         Total devices 4 FS bytes used 10.69TiB
>>>         devid    1 size 1.82TiB used 1.82TiB path /dev/sdd
>>>         devid    3 size 3.64TiB used 3.54TiB path /dev/sdc
>>>         devid    4 size 3.64TiB used 3.54TiB path /dev/sda
>>>         *** Some devices missing
>>>
>>> Any suggestions on how to proceed would be appreciated.
>>>
>>




* Re: Hand Patching a BTRFS Superblock?
  2017-12-28 12:11     ` Qu Wenruo
@ 2017-12-28 23:09       ` Stirling Westrup
  2017-12-29  0:22         ` Qu Wenruo
  0 siblings, 1 reply; 14+ messages in thread
From: Stirling Westrup @ 2017-12-28 23:09 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Nikolay Borisov, linux-btrfs

Using "btrfs rescue super-recover" is of no use since there are no
valid superblocks on the disk I need to fix. In fact, it's even worse,
because the only even partly valid superblock is a copy of the one
from drive sdd, which is a perfectly valid drive. What I need to do
(as far as I can tell) is:

1) Patch the UUID_SUB and device number of sdf to make it distinct
from sdd. Or just generate an entirely new superblock for sdf which
indicates it is device 2 in the 4-device BTRFS (rather than device 1
which it now thinks it is).

2) Recover (somehow) whatever other information from the superblock
that is missing.
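A sketch of inspecting the region that would need patching (echoed as a dry-run; the 64KiB offset and 4KiB size are the well-known btrfs primary superblock location):

```shell
run() { echo "$@"; }
SB_OFFSET=$((64 * 1024))   # primary superblock lives at byte 65536
SB_SIZE=4096               # the superblock structure is one 4KiB block
run dd if=/dev/sdf bs=$SB_SIZE skip=$((SB_OFFSET / SB_SIZE)) count=1
# Any hand-edited field also requires recomputing the crc32c checksum
# stored at the start of the superblock, or the kernel will reject it.
```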



On Thu, Dec 28, 2017 at 7:11 AM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
> On 2017年12月28日 19:41, Nikolay Borisov wrote:
>>
>>
>> On 28.12.2017 03:53, Qu Wenruo wrote:
>>>
>>>
>>> On 2017年12月28日 09:46, Stirling Westrup wrote:
>>>> Here's my situation: I have a network file server containing a 12TB
>>>> BTRFS spread out over four devices (sda-sdd) which I am trying to
>>>> recover. I do have a backup, but it's about 3 months old, and while I
>>>> could certainly rebuild everything from that if I really had to, I
>>>> would far rather not have to rerip my latest DVDs. So, I am willing to
>>>> experiment if it might save me a few hundred hours of reconstruction.
>>>> I don't currently have another 12 TB of space anywhere for making a
>>>> scratch copy.
>>>>
>>>> A few days ago sdb developed hard errors and I can no longer mount the
>>>> filesystem. sdb is no longer even recognized as a valid btrfs drive.
>>>> However, when I ran ddrescue over the drive I managed to make a clone
>>>> (sdf) which contains all but 12K of the original drive. However, those
>>>> missing 12K are all in the various superblocks, so the cloned drive is
>>>> still unreadable.
>>>>
>>>> In the hopes that I was only missing a few bits of the superblocks, I
>>>> started out by dd-ing the first 4M of sdd into sdf in the hopes that
>>>> ddrescue would overwrite much of the superblocks, and the final bits
>>>> from sdd would make things usable.
>>>>
>>>> No such luck. I now have a drive sdf which claims to be identical to
>>>> sdd but which is a clone of sdb. In case it matters, sda and sdc are
>>>> each 4TB while sdb and sdd are each 2TB drives; sde is my boot drive
>>>> and sdf is a 2TB clone of sdb.
>>>>
>>>> What I need to do is to somehow patch sdf's primary superblock so it
>>>> contains the correct device number and UUID_SUB for sdb, so that I can
>>>> attempt some sort of recovery. Right now my linux is (understandably)
>>>> quite confused by the situation:
>>>
>>> Did you try "btrfs rescue super-recover"?
>>>
>>> Remember to use the devel branch from git, as there is a small bug
>>> that prevents it from reporting the correct result.
>>
>> Unfortunately my patchset which fixes super-recover is still not merged,
>> so he needs to grab the patches from the mailing list and compile the
>> btrfs tools himself. The patch in question can be found here:
>>
>> https://patchwork.kernel.org/patch/10092471/
>
> And just in case, "btrfs insp dump-super -fa" output could greatly help
> us to check if the backup superblocks are really good.
>


-- 
Stirling Westrup
Programmer, Entrepreneur.
https://www.linkedin.com/e/fpf/77228
http://www.linkedin.com/in/swestrup
http://technaut.livejournal.com
http://sourceforge.net/users/stirlingwestrup


* Re: Hand Patching a BTRFS Superblock?
  2017-12-28 23:09       ` Stirling Westrup
@ 2017-12-29  0:22         ` Qu Wenruo
  2017-12-29  1:41           ` Stirling Westrup
  0 siblings, 1 reply; 14+ messages in thread
From: Qu Wenruo @ 2017-12-29  0:22 UTC (permalink / raw)
  To: swestrup; +Cc: Nikolay Borisov, linux-btrfs





On 2017年12月29日 07:09, Stirling Westrup wrote:
> Using "btrfs rescue super-recover" is of no use since there are no
> valid superblocks on the disk I need to fix.

Btrfs normally has 1 primary superblock and 1 or 2 backup superblocks.

super-recover reads the backup superblocks and uses them as the base to
recover the primary superblock.

If super-recover can't even find the backups, then the disk is more
damaged than you expected.
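For reference, the superblock copies sit at fixed byte offsets (the third exists only on sufficiently large devices), and individual copies can be dumped with the -s flag (device path assumed, echoed as a dry-run):

```shell
# Fixed btrfs superblock locations:
PRIMARY=$((64 * 1024))                  # bytenr 65536
BACKUP1=$((64 * 1024 * 1024))           # bytenr 67108864
BACKUP2=$((256 * 1024 * 1024 * 1024))   # bytenr 274877906944
echo "btrfs ins dump-super -s 1 /dev/sdf"   # dump only backup copy 1
```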

> In fact, it's even worse,
> because the only even partly valid superblock is a copy of the one
> from drive sdd, which is a perfectly valid drive. What I need to do
> (as far as I can tell) is:
> 
> 1) Patch the UUID_SUB and device number of sdf to make it distinct
> from sdd. Or just generate an entirely new superblock for sdf which
> indicates it is device 2 in the 4-device BTRFS (rather than device 1
> which it now thinks it is).

You need your device UUID, which can be found in the device tree.
(If you can still mount the fs read-only and degraded, you can read it
out from there.)

You're looking for this part of "btrfs ins dump-super" output:
------
...
cache_generation	8
uuid_tree_generation	8
dev_item.uuid		f1d9b288-7865-463f-a65c-ca8b1fbde09b <<<<<
dev_item.fsid		1dd513fb-45f8-404f-ae23-979e3acb78ad [match]
dev_item.type		0
dev_item.total_bytes	10737418240
...
------

> 
> 2) Recover (somehow) whatever other information from the superblock
> that is missing.
> 

Just as I said, if your backup super is also corrupted, there is little
chance to recover.

To verify if the backups are still alive, please paste the output of
"btrfs ins dump-super -fa".
(Even if you think super-recover is of no use, the output can still help.)

Thanks,
Qu

> 
> 
> On Thu, Dec 28, 2017 at 7:11 AM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2017年12月28日 19:41, Nikolay Borisov wrote:
>>>
>>>
>>> On 28.12.2017 03:53, Qu Wenruo wrote:
>>>>
>>>>
>>>> On 2017年12月28日 09:46, Stirling Westrup wrote:
>>>>> Here's my situation: I have a network file server containing a 12TB
>>>>> BTRFS spread out over four devices (sda-sdd) which I am trying to
>>>>> recover. I do have a backup, but it's about 3 months old, and while I
>>>>> could certainly rebuild everything from that if I really had to, I
>>>>> would far rather not have to rerip my latest DVDs. So, I am willing to
>>>>> experiment if it might save me a few hundred hours of reconstruction.
>>>>> I don't currently have another 12 TB of space anywhere for making a
>>>>> scratch copy.
>>>>>
>>>>> A few days ago sdb developed hard errors and I can no longer mount the
>>>>> filesystem. sdb is no longer even recognized as a valid btrfs drive.
>>>>> However, when I ran ddrescue over the drive I managed to make a clone
>>>>> (sdf) which contains all but 12K of the original drive. However, those
>>>>> missing 12K are all in the various superblocks, so the cloned drive is
>>>>> still unreadable.
>>>>>
>>>>> In the hopes that I was only missing a few bits of the superblocks, I
>>>>> started out by dd-ing the first 4M of sdd into sdf in the hopes that
>>>>> ddrescue would overwrite much of the superblocks, and the final bits
>>>>> from sdd would make things usable.
>>>>>
>>>>> No such luck. I now have a drive sdf which claims to be identical to
>>>>> sdd but which is a clone of sdb. In case it matters, sda and sdc are
>>>>> each 4TB while sdb and sdd are each 2TB drives; sde is my boot drive
>>>>> and sdf is a 2TB clone of sdb.
>>>>>
>>>>> What I need to do is to somehow patch sdf's primary superblock so it
>>>>> contains the correct device number and UUID_SUB for sdb, so that I can
>>>>> attempt some sort of recovery. Right now my linux is (understandably)
>>>>> quite confused by the situation:
>>>>
>>>> Did you try "btrfs rescue super-recover"?
>>>>
>>>> Remember to use the devel branch from git, as there is a small bug
>>>> that prevents it from reporting the correct result.
>>>
>>> Unfortunately my patchset which fixes super-recover is still not merged,
>>> so he needs to grab the patches from the mailing list and compile the
>>> btrfs tools himself. The patch in question can be found here:
>>>
>>> https://patchwork.kernel.org/patch/10092471/
>>
>> And just in case, "btrfs insp dump-super -fa" output could greatly help
>> us to check if the backup superblocks are really good.
>>
> 
> 




* Re: Hand Patching a BTRFS Superblock?
  2017-12-29  0:22         ` Qu Wenruo
@ 2017-12-29  1:41           ` Stirling Westrup
  2017-12-29  2:08             ` Qu Wenruo
  0 siblings, 1 reply; 14+ messages in thread
From: Stirling Westrup @ 2017-12-29  1:41 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Nikolay Borisov, linux-btrfs


Okay, I ran the command 'btrfs ins dump-super -fa' on each of the four
drives of the array, which are currently sda, sdb, sdc, and sde, and
attached the results as log files.

As you'll note, the superblock for sde is an exact copy of the one for
sdc, as I copied the first 4M of sdc to sde before starting the recovery
of the bad drive (sde is as much of that drive as I could copy, which all
my tools claim is close to 99.99% of the original).
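The collection loop was along these lines (echoed as a dry-run; device names as in this message):

```shell
for dev in sda sdb sdc sde; do
  # one dump-super log per member device of the array
  echo "btrfs ins dump-super -fa /dev/$dev > $dev.log"
done
```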


On Thu, Dec 28, 2017 at 7:22 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
> On 2017年12月29日 07:09, Stirling Westrup wrote:
>> Using "btrfs rescue super-recover" is of no use since there are no
>> valid superblocks on the disk I need to fix.
>
> Btrfs normally has 1 primary superblock and 1 or 2 backup superblocks.
>
> super-recover reads the backup superblocks and uses them as the base to
> recover the primary superblock.
>
> If super-recover can't even find the backups, then the disk is more
> damaged than you expected.
>
>> In fact, it's even worse,
>> because the only even partly valid superblock is a copy of the one
>> from drive sdd, which is a perfectly valid drive. What I need to do
>> (as far as I can tell) is:
>>
>> 1) Patch the UUID_SUB and device number of sdf to make it distinct
>> from sdd. Or just generate an entirely new superblock for sdf which
>> indicates it is device 2 in the 4-device BTRFS (rather than device 1
>> which it now thinks it is).
>
> You need your device UUID, which can be found in the device tree.
> (If you can still mount the fs read-only and degraded, you can read it
> out from there.)
>
> You're looking for this part of "btrfs ins dump-super" output:
> ------
> ...
> cache_generation        8
> uuid_tree_generation    8
> dev_item.uuid           f1d9b288-7865-463f-a65c-ca8b1fbde09b <<<<<
> dev_item.fsid           1dd513fb-45f8-404f-ae23-979e3acb78ad [match]
> dev_item.type           0
> dev_item.total_bytes    10737418240
> ...
> ------
>
>>
>> 2) Recover (somehow) whatever other information from the superblock
>> that is missing.
>>
>
> Just as I said, if your backup super is also corrupted, there is little
> chance to recover.
>
> To verify if the backups are still alive, please paste the output of
> "btrfs ins dump-super -fa".
> (Even if you think super-recover is of no use, the output can still help.)
>
> Thanks,
> Qu
>
>>
>>
>> On Thu, Dec 28, 2017 at 7:11 AM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>
>>>
>>> On 2017年12月28日 19:41, Nikolay Borisov wrote:
>>>>
>>>>
>>>> On 28.12.2017 03:53, Qu Wenruo wrote:
>>>>>
>>>>>
>>>>> On 2017年12月28日 09:46, Stirling Westrup wrote:
>>>>>> Here's my situation: I have a network file server containing a 12TB
>>>>>> BTRFS spread out over four devices (sda-sdd) which I am trying to
>>>>>> recover. I do have a backup, but it's about 3 months old, and while I
>>>>>> could certainly rebuild everything from that if I really had to, I
>>>>>> would far rather not have to rerip my latest DVDs. So, I am willing to
>>>>>> experiment if it might save me a few hundred hours of reconstruction.
>>>>>> I don't currently have another 12 TB of space anywhere for making a
>>>>>> scratch copy.
>>>>>>
>>>>>> A few days ago sdb developed hard errors and I can no longer mount the
>>>>>> filesystem. sdb is no longer even recognized as a valid btrfs drive.
>>>>>> However, when I ran ddrescue over the drive I managed to make a clone
>>>>>> (sdf) which contains all but 12K of the original drive. However, those
>>>>>> missing 12K are all in the various superblocks, so the cloned drive is
>>>>>> still unreadable.
>>>>>>
>>>>>> In the hopes that I was only missing a few bits of the superblocks, I
>>>>>> started out by dd-ing the first 4M of sdd into sdf in the hopes that
>>>>>> ddrescue would overwrite much of the superblocks, and the final bits
>>>>>> from sdd would make things usable.
>>>>>>
>>>>>> No such luck. I now have a drive sdf which claims to be identical to
>>>>>> sdd but which is a clone of sdb. In case it matters, sda and sdc are
>>>>>> each 4TB while sdb and sdd are each 2TB drives; sde is my boot drive
>>>>>> and sdf is a 2TB clone of sdb.
>>>>>>
>>>>>> What I need to do is to somehow patch sdf's primary superblock so it
>>>>>> contains the correct device number and UUID_SUB for sdb, so that I can
>>>>>> attempt some sort of recovery. Right now my linux is (understandably)
>>>>>> quite confused by the situation:
>>>>>
>>>>> Did you try "btrfs rescue super-recover"?
>>>>>
>>>>> Remember to use the devel branch from git, as there is a small bug
>>>>> that prevents it from reporting the correct result.
>>>>
>>>> Unfortunately my patchset which fixes super-recover is still not merged,
>>>> so he needs to grab the patches from the mailing list and compile the
>>>> btrfs tools himself. The patch in question can be found here:
>>>>
>>>> https://patchwork.kernel.org/patch/10092471/
>>>
>>> And just in case, "btrfs insp dump-super -fa" output could greatly help
>>> us to check if the backup superblocks are really good.
>>>
>>
>>
>



-- 
Stirling Westrup
Programmer, Entrepreneur.
https://www.linkedin.com/e/fpf/77228
http://www.linkedin.com/in/swestrup
http://technaut.livejournal.com
http://sourceforge.net/users/stirlingwestrup

[-- Attachment #2: a.log --]
[-- Type: text/x-log, Size: 10739 bytes --]

superblock: bytenr=65536, device=/dev/sda
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0xcba5019a [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
label			Storage
generation		30779
root			294191104
sys_array_size		226
chunk_root_generation	30772
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		12002371928064
bytes_used		11754863710208
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	30779
uuid_tree_generation	30779
dev_item.uuid		15eff529-4fb2-4975-9c52-c174254dc4cd
dev_item.fsid		33d2890d-f07d-4ba8-b1fc-7b4f14463b1f [match]
dev_item.type		0
dev_item.total_bytes	4000787030016
dev_item.bytes_used	3888044376064
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		4
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7594ec0c-445d-4f04-b923-510508280eab
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 4 offset 1048576
			dev_uuid 15eff529-4fb2-4975-9c52-c174254dc4cd
			stripe 1 devid 3 offset 1048576
			dev_uuid 12325fca-38c5-4ec9-a4cc-d09b9bc48cae
backup_roots[4]:
	backup 0:
		backup_tree_root:	11418889060352	gen: 30777	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418888880128	gen: 30777	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418889273344	gen: 30777	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	11418890469376	gen: 30778	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418890534912	gen: 30778	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418890764288	gen: 30778	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	294191104	gen: 30779	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	294207488	gen: 30779	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	296386560	gen: 30779	level: 1
		backup_csum_root:	296894464	gen: 30779	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863710208
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	11418886651904	gen: 30776	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418886569984	gen: 30776	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418886815744	gen: 30776	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4


superblock: bytenr=67108864, device=/dev/sda
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x6bc42954 [match]
bytenr			67108864
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
label			Storage
generation		30779
root			294191104
sys_array_size		226
chunk_root_generation	30772
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		12002371928064
bytes_used		11754863710208
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	30779
uuid_tree_generation	30779
dev_item.uuid		15eff529-4fb2-4975-9c52-c174254dc4cd
dev_item.fsid		33d2890d-f07d-4ba8-b1fc-7b4f14463b1f [match]
dev_item.type		0
dev_item.total_bytes	4000787030016
dev_item.bytes_used	3888044376064
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		4
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7594ec0c-445d-4f04-b923-510508280eab
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 4 offset 1048576
			dev_uuid 15eff529-4fb2-4975-9c52-c174254dc4cd
			stripe 1 devid 3 offset 1048576
			dev_uuid 12325fca-38c5-4ec9-a4cc-d09b9bc48cae
backup_roots[4]:
	backup 0:
		backup_tree_root:	11418889060352	gen: 30777	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418888880128	gen: 30777	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418889273344	gen: 30777	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	11418890469376	gen: 30778	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418890534912	gen: 30778	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418890764288	gen: 30778	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	294191104	gen: 30779	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	294207488	gen: 30779	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	296386560	gen: 30779	level: 1
		backup_csum_root:	296894464	gen: 30779	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863710208
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	11418886651904	gen: 30776	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418886569984	gen: 30776	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418886815744	gen: 30776	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4


superblock: bytenr=274877906944, device=/dev/sda
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x96437f65 [match]
bytenr			274877906944
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
label			Storage
generation		30779
root			294191104
sys_array_size		226
chunk_root_generation	30772
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		12002371928064
bytes_used		11754863710208
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	30779
uuid_tree_generation	30779
dev_item.uuid		15eff529-4fb2-4975-9c52-c174254dc4cd
dev_item.fsid		33d2890d-f07d-4ba8-b1fc-7b4f14463b1f [match]
dev_item.type		0
dev_item.total_bytes	4000787030016
dev_item.bytes_used	3888044376064
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		4
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7594ec0c-445d-4f04-b923-510508280eab
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 4 offset 1048576
			dev_uuid 15eff529-4fb2-4975-9c52-c174254dc4cd
			stripe 1 devid 3 offset 1048576
			dev_uuid 12325fca-38c5-4ec9-a4cc-d09b9bc48cae
backup_roots[4]:
	backup 0:
		backup_tree_root:	11418889060352	gen: 30777	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418888880128	gen: 30777	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418889273344	gen: 30777	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	11418890469376	gen: 30778	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418890534912	gen: 30778	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418890764288	gen: 30778	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	294191104	gen: 30779	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	294207488	gen: 30779	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	296386560	gen: 30779	level: 1
		backup_csum_root:	296894464	gen: 30779	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863710208
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	11418886651904	gen: 30776	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418886569984	gen: 30776	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418886815744	gen: 30776	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4



[-- Attachment #3: b.log --]
[-- Type: text/x-log, Size: 10739 bytes --]

superblock: bytenr=65536, device=/dev/sdb
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x3cd2e2cb [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
label			Storage
generation		30779
root			294191104
sys_array_size		226
chunk_root_generation	30772
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		12002371928064
bytes_used		11754863710208
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	30779
uuid_tree_generation	30779
dev_item.uuid		12325fca-38c5-4ec9-a4cc-d09b9bc48cae
dev_item.fsid		33d2890d-f07d-4ba8-b1fc-7b4f14463b1f [match]
dev_item.type		0
dev_item.total_bytes	4000787030016
dev_item.bytes_used	3888044376064
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		3
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7594ec0c-445d-4f04-b923-510508280eab
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 4 offset 1048576
			dev_uuid 15eff529-4fb2-4975-9c52-c174254dc4cd
			stripe 1 devid 3 offset 1048576
			dev_uuid 12325fca-38c5-4ec9-a4cc-d09b9bc48cae
backup_roots[4]:
	backup 0:
		backup_tree_root:	11418889060352	gen: 30777	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418888880128	gen: 30777	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418889273344	gen: 30777	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	11418890469376	gen: 30778	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418890534912	gen: 30778	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418890764288	gen: 30778	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	294191104	gen: 30779	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	294207488	gen: 30779	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	296386560	gen: 30779	level: 1
		backup_csum_root:	296894464	gen: 30779	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863710208
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	11418886651904	gen: 30776	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418886569984	gen: 30776	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418886815744	gen: 30776	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4


superblock: bytenr=67108864, device=/dev/sdb
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x9cb3ca05 [match]
bytenr			67108864
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
label			Storage
generation		30779
root			294191104
sys_array_size		226
chunk_root_generation	30772
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		12002371928064
bytes_used		11754863710208
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	30779
uuid_tree_generation	30779
dev_item.uuid		12325fca-38c5-4ec9-a4cc-d09b9bc48cae
dev_item.fsid		33d2890d-f07d-4ba8-b1fc-7b4f14463b1f [match]
dev_item.type		0
dev_item.total_bytes	4000787030016
dev_item.bytes_used	3888044376064
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		3
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7594ec0c-445d-4f04-b923-510508280eab
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 4 offset 1048576
			dev_uuid 15eff529-4fb2-4975-9c52-c174254dc4cd
			stripe 1 devid 3 offset 1048576
			dev_uuid 12325fca-38c5-4ec9-a4cc-d09b9bc48cae
backup_roots[4]:
	backup 0:
		backup_tree_root:	11418889060352	gen: 30777	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418888880128	gen: 30777	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418889273344	gen: 30777	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	11418890469376	gen: 30778	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418890534912	gen: 30778	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418890764288	gen: 30778	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	294191104	gen: 30779	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	294207488	gen: 30779	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	296386560	gen: 30779	level: 1
		backup_csum_root:	296894464	gen: 30779	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863710208
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	11418886651904	gen: 30776	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418886569984	gen: 30776	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418886815744	gen: 30776	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4


superblock: bytenr=274877906944, device=/dev/sdb
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x61349c34 [match]
bytenr			274877906944
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
label			Storage
generation		30779
root			294191104
sys_array_size		226
chunk_root_generation	30772
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		12002371928064
bytes_used		11754863710208
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	30779
uuid_tree_generation	30779
dev_item.uuid		12325fca-38c5-4ec9-a4cc-d09b9bc48cae
dev_item.fsid		33d2890d-f07d-4ba8-b1fc-7b4f14463b1f [match]
dev_item.type		0
dev_item.total_bytes	4000787030016
dev_item.bytes_used	3888044376064
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		3
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7594ec0c-445d-4f04-b923-510508280eab
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 4 offset 1048576
			dev_uuid 15eff529-4fb2-4975-9c52-c174254dc4cd
			stripe 1 devid 3 offset 1048576
			dev_uuid 12325fca-38c5-4ec9-a4cc-d09b9bc48cae
backup_roots[4]:
	backup 0:
		backup_tree_root:	11418889060352	gen: 30777	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418888880128	gen: 30777	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418889273344	gen: 30777	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	11418890469376	gen: 30778	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418890534912	gen: 30778	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418890764288	gen: 30778	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	294191104	gen: 30779	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	294207488	gen: 30779	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	296386560	gen: 30779	level: 1
		backup_csum_root:	296894464	gen: 30779	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863710208
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	11418886651904	gen: 30776	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418886569984	gen: 30776	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418886815744	gen: 30776	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4



[-- Attachment #4: c.log --]
[-- Type: text/x-log, Size: 10739 bytes --]

superblock: bytenr=65536, device=/dev/sdc
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0xa4ce7564 [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
label			Storage
generation		30779
root			294191104
sys_array_size		226
chunk_root_generation	30772
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		12002371928064
bytes_used		11754863710208
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	30779
uuid_tree_generation	30779
dev_item.uuid		7594ec0c-445d-4f04-b923-510508280eab
dev_item.fsid		33d2890d-f07d-4ba8-b1fc-7b4f14463b1f [match]
dev_item.type		0
dev_item.total_bytes	2000398934016
dev_item.bytes_used	2000398909440
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7594ec0c-445d-4f04-b923-510508280eab
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 4 offset 1048576
			dev_uuid 15eff529-4fb2-4975-9c52-c174254dc4cd
			stripe 1 devid 3 offset 1048576
			dev_uuid 12325fca-38c5-4ec9-a4cc-d09b9bc48cae
backup_roots[4]:
	backup 0:
		backup_tree_root:	11418889060352	gen: 30777	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418888880128	gen: 30777	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418889273344	gen: 30777	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	11418890469376	gen: 30778	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418890534912	gen: 30778	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418890764288	gen: 30778	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	294191104	gen: 30779	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	294207488	gen: 30779	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	296386560	gen: 30779	level: 1
		backup_csum_root:	296894464	gen: 30779	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863710208
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	11418886651904	gen: 30776	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418886569984	gen: 30776	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418886815744	gen: 30776	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4


superblock: bytenr=67108864, device=/dev/sdc
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x04af5daa [match]
bytenr			67108864
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
label			Storage
generation		30779
root			294191104
sys_array_size		226
chunk_root_generation	30772
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		12002371928064
bytes_used		11754863710208
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	30779
uuid_tree_generation	30779
dev_item.uuid		7594ec0c-445d-4f04-b923-510508280eab
dev_item.fsid		33d2890d-f07d-4ba8-b1fc-7b4f14463b1f [match]
dev_item.type		0
dev_item.total_bytes	2000398934016
dev_item.bytes_used	2000398909440
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7594ec0c-445d-4f04-b923-510508280eab
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 4 offset 1048576
			dev_uuid 15eff529-4fb2-4975-9c52-c174254dc4cd
			stripe 1 devid 3 offset 1048576
			dev_uuid 12325fca-38c5-4ec9-a4cc-d09b9bc48cae
backup_roots[4]:
	backup 0:
		backup_tree_root:	11418889060352	gen: 30777	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418888880128	gen: 30777	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418889273344	gen: 30777	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	11418890469376	gen: 30778	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418890534912	gen: 30778	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418890764288	gen: 30778	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	294191104	gen: 30779	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	294207488	gen: 30779	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	296386560	gen: 30779	level: 1
		backup_csum_root:	296894464	gen: 30779	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863710208
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	11418886651904	gen: 30776	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418886569984	gen: 30776	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418886815744	gen: 30776	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4


superblock: bytenr=274877906944, device=/dev/sdc
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0xf9280b9b [match]
bytenr			274877906944
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
label			Storage
generation		30779
root			294191104
sys_array_size		226
chunk_root_generation	30772
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		12002371928064
bytes_used		11754863710208
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	30779
uuid_tree_generation	30779
dev_item.uuid		7594ec0c-445d-4f04-b923-510508280eab
dev_item.fsid		33d2890d-f07d-4ba8-b1fc-7b4f14463b1f [match]
dev_item.type		0
dev_item.total_bytes	2000398934016
dev_item.bytes_used	2000398909440
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7594ec0c-445d-4f04-b923-510508280eab
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 4 offset 1048576
			dev_uuid 15eff529-4fb2-4975-9c52-c174254dc4cd
			stripe 1 devid 3 offset 1048576
			dev_uuid 12325fca-38c5-4ec9-a4cc-d09b9bc48cae
backup_roots[4]:
	backup 0:
		backup_tree_root:	11418889060352	gen: 30777	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418888880128	gen: 30777	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418889273344	gen: 30777	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	11418890469376	gen: 30778	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418890534912	gen: 30778	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418890764288	gen: 30778	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	294191104	gen: 30779	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	294207488	gen: 30779	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	296386560	gen: 30779	level: 1
		backup_csum_root:	296894464	gen: 30779	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863710208
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	11418886651904	gen: 30776	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418886569984	gen: 30776	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418886815744	gen: 30776	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4



[-- Attachment #5: e.log --]
[-- Type: text/x-log, Size: 3785 bytes --]

superblock: bytenr=65536, device=/dev/sde
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0xa4ce7564 [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			33d2890d-f07d-4ba8-b1fc-7b4f14463b1f
label			Storage
generation		30779
root			294191104
sys_array_size		226
chunk_root_generation	30772
root_level		1
chunk_root		20971520
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		12002371928064
bytes_used		11754863710208
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	30779
uuid_tree_generation	30779
dev_item.uuid		7594ec0c-445d-4f04-b923-510508280eab
dev_item.fsid		33d2890d-f07d-4ba8-b1fc-7b4f14463b1f [match]
dev_item.type		0
dev_item.total_bytes	2000398934016
dev_item.bytes_used	2000398909440
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 0)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 0
			dev_uuid 7594ec0c-445d-4f04-b923-510508280eab
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 20971520)
		length 8388608 owner 2 stripe_len 65536 type SYSTEM|RAID1
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 2 sub_stripes 0
			stripe 0 devid 4 offset 1048576
			dev_uuid 15eff529-4fb2-4975-9c52-c174254dc4cd
			stripe 1 devid 3 offset 1048576
			dev_uuid 12325fca-38c5-4ec9-a4cc-d09b9bc48cae
backup_roots[4]:
	backup 0:
		backup_tree_root:	11418889060352	gen: 30777	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418888880128	gen: 30777	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418889273344	gen: 30777	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	11418890469376	gen: 30778	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418890534912	gen: 30778	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418890764288	gen: 30778	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	294191104	gen: 30779	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	294207488	gen: 30779	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	296386560	gen: 30779	level: 1
		backup_csum_root:	296894464	gen: 30779	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863710208
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	11418886651904	gen: 30776	level: 1
		backup_chunk_root:	20971520	gen: 30772	level: 1
		backup_extent_root:	11418886569984	gen: 30776	level: 2
		backup_fs_root:		11418892763136	gen: 30777	level: 2
		backup_dev_root:	11418878574592	gen: 30772	level: 1
		backup_csum_root:	11418886815744	gen: 30776	level: 3
		backup_total_bytes:	12002371928064
		backup_bytes_used:	11754863742976
		backup_num_devices:	4


superblock: bytenr=67108864, device=/dev/sde
---------------------------------------------------------

superblock: bytenr=274877906944, device=/dev/sde
---------------------------------------------------------



* Re: Hand Patching a BTRFS Superblock?
  2017-12-29  1:41           ` Stirling Westrup
@ 2017-12-29  2:08             ` Qu Wenruo
  2017-12-29  3:35               ` Stirling Westrup
  2017-12-30 19:54               ` Stirling Westrup
  0 siblings, 2 replies; 14+ messages in thread
From: Qu Wenruo @ 2017-12-29  2:08 UTC (permalink / raw)
  To: swestrup; +Cc: Nikolay Borisov, linux-btrfs


[-- Attachment #1.1: Type: text/plain, Size: 7188 bytes --]



On 2017-12-29 09:41, Stirling Westrup wrote:
> Okay, I ran the command 'btrfs ins dump-super -fa' on each of the four
> drives of the array, which are currently sda, sdb, sdc, and sde, and
> attached the results as log files.
> 
> As you'll note, the one superblock for sde is an exact copy of the one
> for sdc, as I copied the first 4M of sdc to sde before starting the
> recovery of the bad drive (sde is as much of that drive as I could
> copy, which all my tools claim is close to 99.99% of the original).

Well, from the result of e.log, there are no backup supers at all.

So either there is an offset in the recovered data, or you have lost
most of your data.

The good news is that, according to the valid supers on devid 1/3/4,
your system and metadata profiles are RAID1, so the filesystem should
at least be mountable read-only and degraded.


Yes, this means you could get the needed device UUID and hand-craft a
superblock.
But I really doubt it will succeed.

If you really want to do that, these are the steps:

1) Get device info from your existing fs
   # btrfs ins dump-tree -t chunk </dev/sda>
   and look for the following:
------
       item 1 key (DEV_ITEMS DEV_ITEM 2) itemoff 16185 itemsize 98
                devid 2 total_bytes 10737418240 bytes_used 289406976
                io_align 4096 io_width 4096 sector_size 4096 type 0
                generation 0 start_offset 0 dev_group 0
                seek_speed 0 bandwidth 0
                uuid f1d9b288-7865-463f-a65c-ca8b1fbde09b
                fsid 1dd513fb-45f8-404f-ae23-979e3acb78ad
------
   Look for the key (DEV_ITEMS DEV_ITEM 2) and grab "uuid",
   "total_bytes", and "bytes_used" (the other fields are mostly fixed).

2) Fill those values into the dev_item fields of a good superblock.
   If that is difficult, I can do it if you provide a binary dump of
   any valid superblock along with the tree-dump info above.
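As a sketch of step 2: the byte offsets below (dev_item embedded at 0xC9 in the 4096-byte superblock, the 16-byte per-device uuid 66 bytes into dev_item, and a CRC32C of everything after the 32-byte csum area stored at offset 0) come from the btrfs on-disk format documentation, not from this thread, so verify them against your btrfs-progs version before writing anything back to disk. The example values are the devid 2 numbers from the chunk-tree dump above; your real values must come from your own dump-tree output.

```python
import struct

def crc32c(data: bytes) -> int:
    # bitwise CRC32C (Castagnoli, reflected poly 0x82F63B78) --
    # the checksum btrfs stores for superblocks
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

DEV_ITEM = 0xC9  # byte offset of the embedded dev_item in the superblock

def patch_dev_item(sb: bytearray, devid: int, total_bytes: int,
                   bytes_used: int, dev_uuid: bytes) -> None:
    # devid, total_bytes, bytes_used are three consecutive le64 fields
    # at the start of dev_item
    struct.pack_into("<QQQ", sb, DEV_ITEM, devid, total_bytes, bytes_used)
    # per-device uuid: 16 raw bytes, 66 bytes into dev_item
    sb[DEV_ITEM + 66:DEV_ITEM + 82] = dev_uuid
    # superblock csum: crc32c of everything after the 32-byte csum
    # area, stored little-endian at the start of that area
    sb[0:4] = struct.pack("<I", crc32c(bytes(sb[0x20:])))

# usage sketch (file names hypothetical; values from the dump above):
# sb = bytearray(open("good_super.bin", "rb").read(4096))
# patch_dev_item(sb, 2, 10737418240, 289406976,
#                bytes.fromhex("f1d9b2887865463fa65cca8b1fbde09b"))
# open("patched_super.bin", "wb").write(sb)
```

This only rewrites the dev_item fields and recomputes the csum; it assumes the rest of the donor superblock (fsid, chunk array, etc.) is already correct for this filesystem.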

But as I mentioned before, the disk seems to be heavily damaged, or to
have an unexpected offset.

Recovery by this method can easily lead to csum errors, and most (if
not all) RAID0-based data will be unreadable.

I strongly recommend searching the raw disk for the magic number "5f42
4852 6653 5f4d" to locate the real offset (if it's an offset, and not a
toasted image).

Thanks,
Qu

> 
> 
> On Thu, Dec 28, 2017 at 7:22 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2017-12-29 07:09, Stirling Westrup wrote:
>>> Using "btrfs rescue super-recover" is of no use since there are no
>>> valid superblocks on the disk I need to fix.
>>
>> Btrfs normally has 1 primary superblock and 1 or 2 backup superblocks.
>>
>> super-recover is going to read the backup superblocks and use them as
>> the base to recover primary superblock.
>>
>> If super-recover can't even find the backups, then the disk is more
>> damaged than you have expected.
>>
>>> In fact, it's even worse,
>>> because the only even partly valid superblock is a copy of the one
>>> from drive sdd, which is a perfectly valid drive. What I need to do
>>> (as far as I can tell) is:
>>>
>>> 1) Patch the UUID_SUB and device number of sdf to make it distinct
>>> from sdd. Or just generate an entirely new superblock for sdf which
>>> indicates it is device 2 in the 4-device BTRFS (rather than device 1
>>> which it now thinks it is).
>>
>> You need your device UUID, which can be found in the device tree.
>> (If you can mount the fs RO and degraded, you can still read it
>> out.)
>>
>> You're looking for this part of "btrfs ins dump-super" output:
>> ------
>> ...
>> cache_generation        8
>> uuid_tree_generation    8
>> dev_item.uuid           f1d9b288-7865-463f-a65c-ca8b1fbde09b <<<<<
>> dev_item.fsid           1dd513fb-45f8-404f-ae23-979e3acb78ad [match]
>> dev_item.type           0
>> dev_item.total_bytes    10737418240
>> ...
>> ------
>>
>>>
>>> 2) Recover (somehow) whatever other information from the superblock
>>> that is missing.
>>>
>>
>> Just as I said, if your backup super is also corrupted, there is little
>> chance to recover.
>>
>> To verify if the backups are still alive, please paste the output of
>> "btrfs ins dump-super -fa".
>> (Even if you think super-recover is of no use, the output can still help.)
>>
>> Thanks,
>> Qu
>>
>>>
>>>
>>> On Thu, Dec 28, 2017 at 7:11 AM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>
>>>>
>>>> On 2017-12-28 19:41, Nikolay Borisov wrote:
>>>>>
>>>>>
>>>>> On 28.12.2017 03:53, Qu Wenruo wrote:
>>>>>>
>>>>>>
>>>>>> On 2017-12-28 09:46, Stirling Westrup wrote:
>>>>>>> Here's my situation: I have a network file server containing a 12TB
>>>>>>> BTRFS spread out over four devices (sda-sdd) which I am trying to
>>>>>>> recover. I do have a backup, but it's about 3 months old, and while I
>>>>>>> could certainly rebuild everything from that if I really had to, I
>>>>>>> would far rather not have to rerip my latest DVDs. So, I am willing to
>>>>>>> experiment if it might save me a few hundred hours of reconstruction.
>>>>>>> I don't currently have another 12 TB of space anywhere for making a
>>>>>>> scratch copy.
>>>>>>>
>>>>>>> A few days ago sdb developed hard errors and I can no longer mount the
>>>>>>> filesystem. sdb is no longer even recognized as a valid btrfs drive.
>>>>>>> However, when I ran ddrescue over the drive I managed to make a clone
>>>>>>> (sdf) which contains all but 12K of the original drive. However, those
>>>>>>> missing 12K are all in the various superblocks, so the cloned drive is
>>>>>>> still unreadable.
>>>>>>>
>>>>>>> In the hopes that I was only missing a few bits of the superblocks, I
>>>>>>> started out by dd-ing the first 4M of sdd into sdf in the hopes that
>>>>>>> ddrescue would overwrite much of the superblocks, and the final bits
>>>>>>> from sdd would make things usable.
>>>>>>>
>>>>>>> No such luck. I now have a drive sdf which claims to be identical to
>>>>>>> sdd but which is a clone of sdb. In case it matters, sda and sdc are
>>>>>>> each 4TB while sdb and sdd are each 2TB drives; sde is my boot drive
>>>>>>> and sdf is a 2TB clone of sdb.
>>>>>>>
>>>>>>> What I need to do is to somehow patch sdf's primary superblock so it
>>>>>>> contains the correct device number and UUID_SUB for sdb, so that I can
>>>>>>> attempt some sort of recovery. Right now my linux is (understandably)
>>>>>>> quite confused by the situation:
>>>>>>
>>>>>> Did you try "btrfs rescue super-recover"?
>>>>>>
>>>>>> Remember to use the devel branch from git, as there is a small bug
>>>>>> that prevents it from reporting the correct result.
>>>>>
>>>>> Unfortunately my patchset which fixes super-recover is still not merged,
>>>>> so he needs to grab the patches from the mailing list and compile the
>>>>> btrfs tools himself. The patch in question can be found here:
>>>>>
>>>>> https://patchwork.kernel.org/patch/10092471/
>>>>
>>>> And just in-case, "btrfs insp dump-super -fa" output could greatly help
>>>> us to check if the backup superblocks are really good.
>>>>
>>>
>>>
>>
> 
> 
> 




* Re: Hand Patching a BTRFS Superblock?
  2017-12-29  2:08             ` Qu Wenruo
@ 2017-12-29  3:35               ` Stirling Westrup
  2017-12-29  4:00                 ` Qu Wenruo
  2017-12-30 19:54               ` Stirling Westrup
  1 sibling, 1 reply; 14+ messages in thread
From: Stirling Westrup @ 2017-12-29  3:35 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Nikolay Borisov, linux-btrfs

On Thu, Dec 28, 2017 at 9:08 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>

>
> I strongly recommend to do a binary search for magic number "5f42 4852
> 6653 5f4d" to locate the real offset (if it's offset, not a toasted image)
>
I don't understand, how would I do a binary search for that signature?


* Re: Hand Patching a BTRFS Superblock?
  2017-12-29  3:35               ` Stirling Westrup
@ 2017-12-29  4:00                 ` Qu Wenruo
  2017-12-29 19:30                   ` Stirling Westrup
  0 siblings, 1 reply; 14+ messages in thread
From: Qu Wenruo @ 2017-12-29  4:00 UTC (permalink / raw)
  To: swestrup; +Cc: Nikolay Borisov, linux-btrfs





On 2017-12-29 11:35, Stirling Westrup wrote:
> On Thu, Dec 28, 2017 at 9:08 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
> 
>>
>> I strongly recommend to do a binary search for magic number "5f42 4852
>> 6653 5f4d" to locate the real offset (if it's offset, not a toasted image)
>>
> I don't understand, how would I do a binary search for that signature?
> 
The crudest approach is to use xxd and grep.

Something like:

# xxd /dev/sde | grep 5f42 -C1
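For reference, those 16-bit hex groups are just xxd's word-grouped rendering of the 8-byte btrfs magic string; decoded, they spell "_BHRfS_M". (One caveat with the xxd-plus-grep approach: xxd wraps its output into lines, so a magic string that straddles a line boundary can be missed; searching the raw device bytes directly is more robust.)

```python
# xxd groups bytes into 16-bit words, so "5f42 4852 6653 5f4d" is the
# raw byte sequence 5F 42 48 52 66 53 5F 4D -- the btrfs magic string
magic = bytes.fromhex("5f42485266535f4d")
print(magic.decode("ascii"))  # -> _BHRfS_M
```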




* Re: Hand Patching a BTRFS Superblock?
  2017-12-29  4:00                 ` Qu Wenruo
@ 2017-12-29 19:30                   ` Stirling Westrup
  2017-12-30  3:15                     ` Qu Wenruo
  0 siblings, 1 reply; 14+ messages in thread
From: Stirling Westrup @ 2017-12-29 19:30 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Nikolay Borisov, linux-btrfs

You were right! grep found two more signature blocks! How do I make use of them?

videon:~ # LC_ALL=C grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" /dev/sde
65600:_BHRfS_M
26697111807:_BHRfS_M
26854350428:_BHRfS_M

On Thu, Dec 28, 2017 at 11:00 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>> On 2017-12-29 11:35, Stirling Westrup wrote:
>> On Thu, Dec 28, 2017 at 9:08 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>
>>>
>>
>>>
>>> I strongly recommend to do a binary search for magic number "5f42 4852
>>> 6653 5f4d" to locate the real offset (if it's offset, not a toasted image)
>>>
>> I don't understand, how would I do a binary search for that signature?
>>
> The most stupid idea is to use xxd and grep.
>
> Something like:
>
> # xxd /dev/sde | grep 5f42 -C1
>



-- 
Stirling Westrup
Programmer, Entrepreneur.
https://www.linkedin.com/e/fpf/77228
http://www.linkedin.com/in/swestrup
http://technaut.livejournal.com
http://sourceforge.net/users/stirlingwestrup


* Re: Hand Patching a BTRFS Superblock?
  2017-12-29 19:30                   ` Stirling Westrup
@ 2017-12-30  3:15                     ` Qu Wenruo
  0 siblings, 0 replies; 14+ messages in thread
From: Qu Wenruo @ 2017-12-30  3:15 UTC (permalink / raw)
  To: swestrup; +Cc: Nikolay Borisov, linux-btrfs





On 2017-12-30 03:30, Stirling Westrup wrote:
> You were right! grep found two more signature blocks! How do I make use of them?
> 
> videon:~ # LC_ALL=C grep -obUaP "\x5F\x42\x48\x52\x66\x53\x5F\x4D" /dev/sde
> 65600:_BHRfS_M

This is the correct one.
The offset is 64K + 64: the primary superblock lives at 64K, and the
magic field sits 64 bytes into it.

> 26697111807:_BHRfS_M

This one is trickier.

Btrfs has its superblocks at:
64K (primary)
64M (backup 1)
256G (backup 2)

This one, though, is at about 25G, and it is not 64 bytes past any
expected superblock location (the magic sits 64 bytes inside the
superblock).

Is there any btrfs image inside the fs?

> 26854350428:_BHRfS_M

Much like the previous one.

Despite that, you could try "btrfs inspect-internal dump-super
--bytenr" to check whether it is the super you want.

The bytenr values you could pass are:
26697111743
26854350364
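The arithmetic behind those values: the magic sits 64 bytes into a superblock, so each grep offset minus 64 is a candidate superblock start. A quick sketch (whether the two non-standard candidates are real supers is exactly what dump-super has to confirm):

```python
# grep reported the magic at these byte offsets; the superblock
# starts 64 bytes earlier in each case
for off in (65600, 26697111807, 26854350428):
    print(off - 64)
# prints 65536, 26697111743 and 26854350364; check each with e.g.
#   btrfs inspect-internal dump-super --bytenr 26697111743 /dev/sde
```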

At this point, I would say the chance of recovering data is very low.

Thanks,
Qu
> 
> On Thu, Dec 28, 2017 at 11:00 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>> On 2017-12-29 11:35, Stirling Westrup wrote:
>>> On Thu, Dec 28, 2017 at 9:08 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>>>
>>>>
>>>
>>>>
>>>> I strongly recommend to do a binary search for magic number "5f42 4852
>>>> 6653 5f4d" to locate the real offset (if it's offset, not a toasted image)
>>>>
>>> I don't understand, how would I do a binary search for that signature?
>>>
>> The most stupid idea is to use xxd and grep.
>>
>> Something like:
>>
>> # xxd /dev/sde | grep 5f42 -C1
>>
> 
> 
> 




* Re: Hand Patching a BTRFS Superblock?
  2017-12-29  2:08             ` Qu Wenruo
  2017-12-29  3:35               ` Stirling Westrup
@ 2017-12-30 19:54               ` Stirling Westrup
  2017-12-31  1:00                 ` Qu Wenruo
  1 sibling, 1 reply; 14+ messages in thread
From: Stirling Westrup @ 2017-12-30 19:54 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Nikolay Borisov, linux-btrfs

Okay, so I've gone back to your earlier suggestion of dumping the
chunk tree to get info for rebuilding a superblock by hand, but it's
not working.

/usr/local/bin/btrfs ins dump-tree -t chunk /dev/sdc
btrfs-progs v4.14
bytenr mismatch, want=294191104, have=314114048
Couldn't read tree root
ERROR: unable to open /dev/sdc

I get the exact same error (except for device name) for every valid
device in the FS.



On Thu, Dec 28, 2017 at 9:08 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> Yes, this means you could get the needed device UUID and hand craft a
> superblock.
> But I really doubt about the possibility to success.
>
> If you really want to do that, there is needed steps for you:
>
> 1) Get device info from your existing fs
>    # btrfs ins dump-tree -t chunk </dev/sda>
>    And looking for the following thing:
> ------
>        item 1 key (DEV_ITEMS DEV_ITEM 2) itemoff 16185 itemsize 98
>                 devid 2 total_bytes 10737418240 bytes_used 289406976
>                 io_align 4096 io_width 4096 sector_size 4096 type 0
>                 generation 0 start_offset 0 dev_group 0
>                 seek_speed 0 bandwidth 0
>                 uuid f1d9b288-7865-463f-a65c-ca8b1fbde09b
>                 fsid 1dd513fb-45f8-404f-ae23-979e3acb78ad
> ------
>    Look for the key (DEV_ITEMS DEV_ITEM 2) and grab the "uuid"
>    "total_bytes" "bytes_used" (other fields are mostly fixed)
>
> 2) Fill the fields of dev_item of a good superblock.
>    If you feel it hard, I could help to do it if you provide the binary
>    dump of any valid superblock, with above tree dump info.
>


* Re: Hand Patching a BTRFS Superblock?
  2017-12-30 19:54               ` Stirling Westrup
@ 2017-12-31  1:00                 ` Qu Wenruo
  0 siblings, 0 replies; 14+ messages in thread
From: Qu Wenruo @ 2017-12-31  1:00 UTC (permalink / raw)
  To: swestrup; +Cc: Nikolay Borisov, linux-btrfs





On 2017-12-31 03:54, Stirling Westrup wrote:
> Okay, so I've gone back to your earlier suggestion of dumping the
> chunk tree to get info for rebuiling a superblock by hand, but its not
> working.
> 
> /usr/local/bin/btrfs ins dump-tree -t chunk /dev/sdc
> btrfs-progs v4.14
> bytenr mismatch, want=294191104, have=314114048
> Couldn't read tree root
> ERROR: unable to open /dev/sdc
> 
> I get the exact same error (except for device name) for every valid
> device in the FS.

The filesystem is more damaged than you expected.

The tree root is also corrupted.

Although I could make dump-tree skip the root tree, even with a
hand-crafted super the corrupted tree root is still a big problem.

Thanks,
Qu

> 
> 
> 
> On Thu, Dec 28, 2017 at 9:08 PM, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>
>>
>> Yes, this means you could get the needed device UUID and hand craft a
>> superblock.
>> But I really doubt about the possibility to success.
>>
>> If you really want to do that, there is needed steps for you:
>>
>> 1) Get device info from your existing fs
>>    # btrfs ins dump-tree -t chunk </dev/sda>
>>    And looking for the following thing:
>> ------
>>        item 1 key (DEV_ITEMS DEV_ITEM 2) itemoff 16185 itemsize 98
>>                 devid 2 total_bytes 10737418240 bytes_used 289406976
>>                 io_align 4096 io_width 4096 sector_size 4096 type 0
>>                 generation 0 start_offset 0 dev_group 0
>>                 seek_speed 0 bandwidth 0
>>                 uuid f1d9b288-7865-463f-a65c-ca8b1fbde09b
>>                 fsid 1dd513fb-45f8-404f-ae23-979e3acb78ad
>> ------
>>    Look for the key (DEV_ITEMS DEV_ITEM 2) and grab the "uuid"
>>    "total_bytes" "bytes_used" (other fields are mostly fixed)
>>
>> 2) Fill the fields of dev_item of a good superblock.
>>    If you feel it hard, I could help to do it if you provide the binary
>>    dump of any valid superblock, with above tree dump info.
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 




end of thread, other threads:[~2017-12-31  1:00 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-12-28  1:46 Hand Patching a BTRFS Superblock? Stirling Westrup
2017-12-28  1:53 ` Qu Wenruo
2017-12-28 11:41   ` Nikolay Borisov
2017-12-28 12:11     ` Qu Wenruo
2017-12-28 23:09       ` Stirling Westrup
2017-12-29  0:22         ` Qu Wenruo
2017-12-29  1:41           ` Stirling Westrup
2017-12-29  2:08             ` Qu Wenruo
2017-12-29  3:35               ` Stirling Westrup
2017-12-29  4:00                 ` Qu Wenruo
2017-12-29 19:30                   ` Stirling Westrup
2017-12-30  3:15                     ` Qu Wenruo
2017-12-30 19:54               ` Stirling Westrup
2017-12-31  1:00                 ` Qu Wenruo
