linux-btrfs.vger.kernel.org archive mirror
* Need help with potential ~45TB dataloss
@ 2018-11-30 13:53 Patrick Dijkgraaf
  2018-11-30 23:57 ` Qu Wenruo
  0 siblings, 1 reply; 14+ messages in thread
From: Patrick Dijkgraaf @ 2018-11-30 13:53 UTC (permalink / raw)
  To: linux-btrfs

Hi all,

I have been a happy BTRFS user for quite some time, but now I'm facing
a potential ~45TB data loss... :-(
I hope someone can help!

I have Server A and Server B, each with a 20-device BTRFS RAID6
filesystem. Because of the known RAID5/6 risks, Server B served as a
backup of Server A.
After applying updates to Server B and rebooting, the FS would not
mount anymore. Because it was "just" a backup, I decided to recreate
the FS and perform a new backup. Later, I discovered that the FS was
not actually broken; I had hit this issue:
https://patchwork.kernel.org/patch/10694997/

Anyway, the FS was already recreated, so I needed to do a new backup.
During the backup (using rsync -vah), Server A (the source) encountered
an I/O error and the rsync failed. In an attempt to "quick fix" the
issue, I rebooted Server A, after which its FS would not mount anymore.
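In hindsight, logging the rsync run would have preserved the error context across the reboot. A minimal sketch (source and destination paths are hypothetical placeholders; the command is echoed as a dry run rather than executed):

```shell
#!/bin/sh
# Sketch: an rsync invocation that keeps a persistent log of the failure.
# SRC/DST are placeholders, not the real paths from this thread.
SRC=/mnt/data/
DST=serverB:/mnt/data/
# --log-file survives a reboot; -i itemizes per-file changes for diagnosis
echo rsync -vahi --log-file=/var/log/backup-rsync.log "$SRC" "$DST"
```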

I have documented what I tried below. I have not yet tried anything
beyond what is shown, because I am afraid of causing more harm to
the FS. I hope somebody here can give me advice on how to (hopefully)
retrieve my data...

Thanks in advance!

==========================================

[root@cornelis ~]# btrfs fi show
Label: 'cornelis-btrfs'  uuid: ac643516-670e-40f3-aa4c-f329fc3795fd
	Total devices 1 FS bytes used 463.92GiB
	devid    1 size 800.00GiB used 493.02GiB path
/dev/mapper/cornelis-cornelis--btrfs

Label: 'data'  uuid: 4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
	Total devices 20 FS bytes used 44.85TiB
	devid    1 size 3.64TiB used 3.64TiB path /dev/sdn2
	devid    2 size 3.64TiB used 3.64TiB path /dev/sdp2
	devid    3 size 3.64TiB used 3.64TiB path /dev/sdu2
	devid    4 size 3.64TiB used 3.64TiB path /dev/sdx2
	devid    5 size 3.64TiB used 3.64TiB path /dev/sdh2
	devid    6 size 3.64TiB used 3.64TiB path /dev/sdg2
	devid    7 size 3.64TiB used 3.64TiB path /dev/sdm2
	devid    8 size 3.64TiB used 3.64TiB path /dev/sdw2
	devid    9 size 3.64TiB used 3.64TiB path /dev/sdj2
	devid   10 size 3.64TiB used 3.64TiB path /dev/sdt2
	devid   11 size 3.64TiB used 3.64TiB path /dev/sdk2
	devid   12 size 3.64TiB used 3.64TiB path /dev/sdq2
	devid   13 size 3.64TiB used 3.64TiB path /dev/sds2
	devid   14 size 3.64TiB used 3.64TiB path /dev/sdf2
	devid   15 size 7.28TiB used 588.80GiB path /dev/sdr2
	devid   16 size 7.28TiB used 588.80GiB path /dev/sdo2
	devid   17 size 7.28TiB used 588.80GiB path /dev/sdv2
	devid   18 size 7.28TiB used 588.80GiB path /dev/sdi2
	devid   19 size 7.28TiB used 588.80GiB path /dev/sdl2
	devid   20 size 7.28TiB used 588.80GiB path /dev/sde2

[root@cornelis ~]# mount /dev/sdn2 /mnt/data
mount: /mnt/data: wrong fs type, bad option, bad superblock on
/dev/sdn2, missing codepage or helper program, or other error.

[root@cornelis ~]# btrfs check /dev/sdn2
Opening filesystem to check...
parent transid verify failed on 46451963543552 wanted 114401 found
114173
parent transid verify failed on 46451963543552 wanted 114401 found
114173
checksum verify failed on 46451963543552 found A8F2A769 wanted 4C111ADF
checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
bad tree block 46451963543552, bytenr mismatch, want=46451963543552,
have=75208089814272
Couldn't read tree root
ERROR: cannot open file system

[root@cornelis ~]# btrfs restore /dev/sdn2 /mnt/data/
parent transid verify failed on 46451963543552 wanted 114401 found
114173
parent transid verify failed on 46451963543552 wanted 114401 found
114173
checksum verify failed on 46451963543552 found A8F2A769 wanted 4C111ADF
checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
bad tree block 46451963543552, bytenr mismatch, want=46451963543552,
have=75208089814272
Couldn't read tree root
Could not open root, trying backup super
warning, device 14 is missing
warning, device 13 is missing
warning, device 12 is missing
warning, device 11 is missing
warning, device 10 is missing
warning, device 9 is missing
warning, device 8 is missing
warning, device 7 is missing
warning, device 6 is missing
warning, device 5 is missing
warning, device 4 is missing
warning, device 3 is missing
warning, device 2 is missing
checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
bad tree block 22085632, bytenr mismatch, want=22085632,
have=1147797504
ERROR: cannot read chunk root
Could not open root, trying backup super
warning, device 14 is missing
warning, device 13 is missing
warning, device 12 is missing
warning, device 11 is missing
warning, device 10 is missing
warning, device 9 is missing
warning, device 8 is missing
warning, device 7 is missing
warning, device 6 is missing
warning, device 5 is missing
warning, device 4 is missing
warning, device 3 is missing
warning, device 2 is missing
checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
bad tree block 22085632, bytenr mismatch, want=22085632,
have=1147797504
ERROR: cannot read chunk root
Could not open root, trying backup super

[root@cornelis ~]# uname -r
4.18.16-arch1-1-ARCH

[root@cornelis ~]# btrfs --version
btrfs-progs v4.19

-- 
Cheers,
Patrick




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Need help with potential ~45TB dataloss
  2018-11-30 13:53 Need help with potential ~45TB dataloss Patrick Dijkgraaf
@ 2018-11-30 23:57 ` Qu Wenruo
  2018-12-02  9:03   ` Patrick Dijkgraaf
  0 siblings, 1 reply; 14+ messages in thread
From: Qu Wenruo @ 2018-11-30 23:57 UTC (permalink / raw)
  To: Patrick Dijkgraaf, linux-btrfs


[-- Attachment #1.1: Type: text/plain, Size: 6638 bytes --]



On 2018/11/30 下午9:53, Patrick Dijkgraaf wrote:
> Hi all,
> 
> I have been a happy BTRFS user for quite some time. But now I'm facing
> a potential ~45TB dataloss... :-(
> I hope someone can help!
> 
> I have Server A and Server B. Both having a 20-devices BTRFS RAID6
> filesystem. Because of known RAID5/6 risks, Server B was a backup of
> Server A.
> After applying updates to server B and reboot, the FS would not mount
> anymore. Because it was "just" a backup. I decided to recreate the FS
> and perform a new backup. Later, I discovered that the FS was not
> broken, but I faced this issue: 
> https://patchwork.kernel.org/patch/10694997/

Sorry for the inconvenience.

I didn't realize at the time that the max_chunk_size limit isn't reliable.

> 
> Anyway, the FS was already recreated, so I needed to do a new backup.
> During the backup (using rsync -vah), Server A (the source) encountered
> an I/O error and my rsync failed. In an attempt to "quick fix" the
> issue, I rebooted Server A after which the FS would not mount anymore.

Do you have any dmesg output about that I/O error?

And how was the reboot done? A forced power-off or a normal reboot command?
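For what it's worth, the pre-reboot kernel log can still be recovered on systems where systemd-journald keeps a persistent journal (Storage=persistent in journald.conf); without that, it is gone after reboot. A sketch, echoed as a dry run so it is safe anywhere:

```shell
#!/bin/sh
# Sketch: pull kernel messages from the boot before the current one.
# Requires a persistent journal; otherwise only the current boot exists.
# -k: kernel ring buffer only; -b -1: previous boot
echo journalctl -k -b -1 --no-pager
```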

> 
> I documented what I have tried, below. I have not yet tried anything
> except what is shown, because I am afraid of causing more harm to
> the FS.

Pretty clever; holding off on btrfs check --repair is a good move.

> I hope somebody here can give me advice on how to (hopefully)
> retrieve my data...
> 
> Thanks in advance!
> 
> ==========================================
> 
> [root@cornelis ~]# btrfs fi show
> Label: 'cornelis-btrfs'  uuid: ac643516-670e-40f3-aa4c-f329fc3795fd
> 	Total devices 1 FS bytes used 463.92GiB
> 	devid    1 size 800.00GiB used 493.02GiB path
> /dev/mapper/cornelis-cornelis--btrfs
> 
> Label: 'data'  uuid: 4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
> 	Total devices 20 FS bytes used 44.85TiB
> 	devid    1 size 3.64TiB used 3.64TiB path /dev/sdn2
> 	devid    2 size 3.64TiB used 3.64TiB path /dev/sdp2
> 	devid    3 size 3.64TiB used 3.64TiB path /dev/sdu2
> 	devid    4 size 3.64TiB used 3.64TiB path /dev/sdx2
> 	devid    5 size 3.64TiB used 3.64TiB path /dev/sdh2
> 	devid    6 size 3.64TiB used 3.64TiB path /dev/sdg2
> 	devid    7 size 3.64TiB used 3.64TiB path /dev/sdm2
> 	devid    8 size 3.64TiB used 3.64TiB path /dev/sdw2
> 	devid    9 size 3.64TiB used 3.64TiB path /dev/sdj2
> 	devid   10 size 3.64TiB used 3.64TiB path /dev/sdt2
> 	devid   11 size 3.64TiB used 3.64TiB path /dev/sdk2
> 	devid   12 size 3.64TiB used 3.64TiB path /dev/sdq2
> 	devid   13 size 3.64TiB used 3.64TiB path /dev/sds2
> 	devid   14 size 3.64TiB used 3.64TiB path /dev/sdf2
> 	devid   15 size 7.28TiB used 588.80GiB path /dev/sdr2
> 	devid   16 size 7.28TiB used 588.80GiB path /dev/sdo2
> 	devid   17 size 7.28TiB used 588.80GiB path /dev/sdv2
> 	devid   18 size 7.28TiB used 588.80GiB path /dev/sdi2
> 	devid   19 size 7.28TiB used 588.80GiB path /dev/sdl2
> 	devid   20 size 7.28TiB used 588.80GiB path /dev/sde2
> 
> [root@cornelis ~]# mount /dev/sdn2 /mnt/data
> mount: /mnt/data: wrong fs type, bad option, bad superblock on
> /dev/sdn2, missing codepage or helper program, or other error.

What does dmesg show for the mount failure?

And have you tried -o ro,degraded?
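A minimal sketch of that mount attempt (device and mountpoint taken from the thread; printed as a dry run rather than executed). Adding usebackuproot additionally asks the kernel to fall back to the older tree roots recorded in the superblock:

```shell
#!/bin/sh
# Sketch: read-only degraded mount attempt. ro guarantees no writes,
# degraded tolerates missing devices, usebackuproot tries backup tree roots.
DEV=/dev/sdn2
MNT=/mnt/data
echo mount -o ro,degraded,usebackuproot "$DEV" "$MNT"
```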

> 
> [root@cornelis ~]# btrfs check /dev/sdn2
> Opening filesystem to check...
> parent transid verify failed on 46451963543552 wanted 114401 found
> 114173
> parent transid verify failed on 46451963543552 wanted 114401 found
> 114173
> checksum verify failed on 46451963543552 found A8F2A769 wanted 4C111ADF
> checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
> checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
> bad tree block 46451963543552, bytenr mismatch, want=46451963543552,
> have=75208089814272
> Couldn't read tree root

Would you please also paste the output of "btrfs ins dump-super /dev/sdn2"?

It looks like your tree root (or at least some of its nodes/leaves)
got corrupted.

> ERROR: cannot open file system

And since it's your tree root that is corrupted, you could also try
"btrfs-find-root <device>" to locate a good older copy of it.

But I suspect the corruption happened before you noticed, so an old
tree root may not help much.
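That find-root workflow can be sketched as follows (dry-run echoes only; BYTENR and the output directory are hypothetical placeholders — a real attempt would substitute a byte number reported by btrfs-find-root):

```shell
#!/bin/sh
# Sketch: locate an older tree root, then dry-run a restore against it.
DEV=/dev/sdn2
OUT=/mnt/restore   # placeholder destination with enough free space
BYTENR=12345678    # placeholder; use a bytenr printed by btrfs-find-root
# 1. scan the device for older tree roots that still pass checksum
echo btrfs-find-root "$DEV"
# 2. -t points btrfs restore at that root; -D only lists what would be
#    restored, so nothing is written until -D is dropped
echo btrfs restore -t "$BYTENR" -D "$DEV" "$OUT"
```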

Also, the output of "btrfs ins dump-tree -t root <device>" will help.

Thanks,
Qu
> 
> [root@cornelis ~]# btrfs restore /dev/sdn2 /mnt/data/
> parent transid verify failed on 46451963543552 wanted 114401 found
> 114173
> parent transid verify failed on 46451963543552 wanted 114401 found
> 114173
> checksum verify failed on 46451963543552 found A8F2A769 wanted 4C111ADF
> checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
> checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
> bad tree block 46451963543552, bytenr mismatch, want=46451963543552,
> have=75208089814272
> Couldn't read tree root
> Could not open root, trying backup super
> warning, device 14 is missing
> warning, device 13 is missing
> warning, device 12 is missing
> warning, device 11 is missing
> warning, device 10 is missing
> warning, device 9 is missing
> warning, device 8 is missing
> warning, device 7 is missing
> warning, device 6 is missing
> warning, device 5 is missing
> warning, device 4 is missing
> warning, device 3 is missing
> warning, device 2 is missing
> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> bad tree block 22085632, bytenr mismatch, want=22085632,
> have=1147797504
> ERROR: cannot read chunk root
> Could not open root, trying backup super
> warning, device 14 is missing
> warning, device 13 is missing
> warning, device 12 is missing
> warning, device 11 is missing
> warning, device 10 is missing
> warning, device 9 is missing
> warning, device 8 is missing
> warning, device 7 is missing
> warning, device 6 is missing
> warning, device 5 is missing
> warning, device 4 is missing
> warning, device 3 is missing
> warning, device 2 is missing
> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> bad tree block 22085632, bytenr mismatch, want=22085632,
> have=1147797504
> ERROR: cannot read chunk root
> Could not open root, trying backup super
> 
> [root@cornelis ~]# uname -r
> 4.18.16-arch1-1-ARCH
> 
> [root@cornelis ~]# btrfs --version
> btrfs-progs v4.19
> 


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Need help with potential ~45TB dataloss
  2018-11-30 23:57 ` Qu Wenruo
@ 2018-12-02  9:03   ` Patrick Dijkgraaf
  2018-12-02 20:14     ` Patrick Dijkgraaf
  2018-12-03  0:35     ` Qu Wenruo
  0 siblings, 2 replies; 14+ messages in thread
From: Patrick Dijkgraaf @ 2018-12-02  9:03 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

Hi Qu,

Thanks for helping me!

Please see my responses in-line.
Any suggestions based on this?

Thanks!


On Sat, 2018-12-01 at 07:57 +0800, Qu Wenruo wrote:
> On 2018/11/30 下午9:53, Patrick Dijkgraaf wrote:
> > Hi all,
> > 
> > I have been a happy BTRFS user for quite some time. But now I'm
> > facing
> > a potential ~45TB dataloss... :-(
> > I hope someone can help!
> > 
> > I have Server A and Server B. Both having a 20-devices BTRFS RAID6
> > filesystem. Because of known RAID5/6 risks, Server B was a backup
> > of
> > Server A.
> > After applying updates to server B and reboot, the FS would not
> > mount
> > anymore. Because it was "just" a backup. I decided to recreate the
> > FS
> > and perform a new backup. Later, I discovered that the FS was not
> > broken, but I faced this issue: 
> > https://patchwork.kernel.org/patch/10694997/
> > 
> 
> Sorry for the inconvenience.
> 
> I didn't realize the max_chunk_size limit isn't reliable at that
> timing.

No problem; I should not have jumped to recreating the backup volume.

> > Anyway, the FS was already recreated, so I needed to do a new
> > backup.
> > During the backup (using rsync -vah), Server A (the source)
> > encountered
> > an I/O error and my rsync failed. In an attempt to "quick fix" the
> > issue, I rebooted Server A after which the FS would not mount
> > anymore.
> 
> Did you have any dmesg about that IO error?

Yes, there was, but I didn't capture it... The system has since been
rebooted and I can't retrieve it anymore. :-(

> And how is the reboot scheduled? Forced power off or normal reboot
> command?

The system was rebooted using a normal reboot command.

> > I documented what I have tried, below. I have not yet tried
> > anything
> > except what is shown, because I am afraid of causing more harm to
> > the FS.
> 
> Pretty clever, no btrfs check --repair is a pretty good move.
> 
> > I hope somebody here can give me advice on how to (hopefully)
> > retrieve my data...
> > 
> > Thanks in advance!
> > 
> > ==========================================
> > 
> > [root@cornelis ~]# btrfs fi show
> > Label: 'cornelis-btrfs'  uuid: ac643516-670e-40f3-aa4c-f329fc3795fd
> > 	Total devices 1 FS bytes used 463.92GiB
> > 	devid    1 size 800.00GiB used 493.02GiB path
> > /dev/mapper/cornelis-cornelis--btrfs
> > 
> > Label: 'data'  uuid: 4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
> > 	Total devices 20 FS bytes used 44.85TiB
> > 	devid    1 size 3.64TiB used 3.64TiB path /dev/sdn2
> > 	devid    2 size 3.64TiB used 3.64TiB path /dev/sdp2
> > 	devid    3 size 3.64TiB used 3.64TiB path /dev/sdu2
> > 	devid    4 size 3.64TiB used 3.64TiB path /dev/sdx2
> > 	devid    5 size 3.64TiB used 3.64TiB path /dev/sdh2
> > 	devid    6 size 3.64TiB used 3.64TiB path /dev/sdg2
> > 	devid    7 size 3.64TiB used 3.64TiB path /dev/sdm2
> > 	devid    8 size 3.64TiB used 3.64TiB path /dev/sdw2
> > 	devid    9 size 3.64TiB used 3.64TiB path /dev/sdj2
> > 	devid   10 size 3.64TiB used 3.64TiB path /dev/sdt2
> > 	devid   11 size 3.64TiB used 3.64TiB path /dev/sdk2
> > 	devid   12 size 3.64TiB used 3.64TiB path /dev/sdq2
> > 	devid   13 size 3.64TiB used 3.64TiB path /dev/sds2
> > 	devid   14 size 3.64TiB used 3.64TiB path /dev/sdf2
> > 	devid   15 size 7.28TiB used 588.80GiB path /dev/sdr2
> > 	devid   16 size 7.28TiB used 588.80GiB path /dev/sdo2
> > 	devid   17 size 7.28TiB used 588.80GiB path /dev/sdv2
> > 	devid   18 size 7.28TiB used 588.80GiB path /dev/sdi2
> > 	devid   19 size 7.28TiB used 588.80GiB path /dev/sdl2
> > 	devid   20 size 7.28TiB used 588.80GiB path /dev/sde2
> > 
> > [root@cornelis ~]# mount /dev/sdn2 /mnt/data
> > mount: /mnt/data: wrong fs type, bad option, bad superblock on
> > /dev/sdn2, missing codepage or helper program, or other error.
> 
> What is the dmesg of the mount failure?

[Sun Dec  2 09:41:08 2018] BTRFS info (device sdn2): disk space caching
is enabled
[Sun Dec  2 09:41:08 2018] BTRFS info (device sdn2): has skinny extents
[Sun Dec  2 09:41:08 2018] BTRFS error (device sdn2): parent transid
verify failed on 46451963543552 wanted 114401 found 114173
[Sun Dec  2 09:41:08 2018] BTRFS critical (device sdn2): corrupt leaf:
root=2 block=46451963543552 slot=0, unexpected item end, have
1387359977 expect 16283
[Sun Dec  2 09:41:08 2018] BTRFS warning (device sdn2): failed to read
tree root
[Sun Dec  2 09:41:08 2018] BTRFS error (device sdn2): open_ctree failed

> And have you tried -o ro,degraded ?

Tried it just now; it gives the exact same error.

> > [root@cornelis ~]# btrfs check /dev/sdn2
> > Opening filesystem to check...
> > parent transid verify failed on 46451963543552 wanted 114401 found
> > 114173
> > parent transid verify failed on 46451963543552 wanted 114401 found
> > 114173
> > checksum verify failed on 46451963543552 found A8F2A769 wanted
> > 4C111ADF
> > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > 8B07ABE4
> > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > 8B07ABE4
> > bad tree block 46451963543552, bytenr mismatch,
> > want=46451963543552,
> > have=75208089814272
> > Couldn't read tree root
> 
> Would you please also paste the output of "btrfs ins dump-super
> /dev/sdn2" ?

[root@cornelis ~]# btrfs ins dump-super /dev/sdn2
superblock: bytenr=65536, device=/dev/sdn2
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x51725c39 [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
label			data
generation		114401
root			46451963543552
sys_array_size		513
chunk_root_generation	112769
root_level		1
chunk_root		22085632
chunk_root_level	1
log_root		46451935461376
log_root_transid	0
log_root_level		0
total_bytes		104020314161152
bytes_used		49308554543104
sectorsize		4096
nodesize		16384
leafsize (deprecated)		16384
stripesize		4096
root_dir		6
num_devices		20
compat_flags		0x0
compat_ro_flags		0x0
incompat_flags		0x1e1
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  RAID56 |
			  SKINNY_METADATA )
cache_generation	114401
uuid_tree_generation	114401
dev_item.uuid		c6b44903-e849-4403-98c4-f3ba4d0b3fc3
dev_item.fsid		4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5 [match]
dev_item.type		0
dev_item.total_bytes	4000783007744
dev_item.bytes_used	4000781959168
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0

> It looks like your tree root (or at least some tree root nodes/leaves
> get corrupted)
> 
> > ERROR: cannot open file system
> 
> And since it's your tree root corrupted, you could also try
> "btrfs-find-root <device>" to try to get a good old copy of your tree
> root.

The output is rather long; I pasted it here:
https://pastebin.com/FkyBLgj9
I'm unsure what to look for in this output.

> But I suspect the corruption happens before you noticed, thus the old
> tree root may not help much.
> 
> Also, the output of "btrfs ins dump-tree -t root <device>" will help.

Here it is:

[root@cornelis ~]# btrfs ins dump-tree -t root /dev/sdn2
btrfs-progs v4.19 
parent transid verify failed on 46451963543552 wanted 114401 found
114173
parent transid verify failed on 46451963543552 wanted 114401 found
114173
checksum verify failed on 46451963543552 found A8F2A769 wanted 4C111ADF
checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
bad tree block 46451963543552, bytenr mismatch, want=46451963543552,
have=75208089814272
Couldn't read tree root
ERROR: unable to open /dev/sdn2

> Thanks,
> Qu

No, thank YOU! :-)

> > [root@cornelis ~]# btrfs restore /dev/sdn2 /mnt/data/
> > parent transid verify failed on 46451963543552 wanted 114401 found
> > 114173
> > parent transid verify failed on 46451963543552 wanted 114401 found
> > 114173
> > checksum verify failed on 46451963543552 found A8F2A769 wanted
> > 4C111ADF
> > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > 8B07ABE4
> > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > 8B07ABE4
> > bad tree block 46451963543552, bytenr mismatch,
> > want=46451963543552,
> > have=75208089814272
> > Couldn't read tree root
> > Could not open root, trying backup super
> > warning, device 14 is missing
> > warning, device 13 is missing
> > warning, device 12 is missing
> > warning, device 11 is missing
> > warning, device 10 is missing
> > warning, device 9 is missing
> > warning, device 8 is missing
> > warning, device 7 is missing
> > warning, device 6 is missing
> > warning, device 5 is missing
> > warning, device 4 is missing
> > warning, device 3 is missing
> > warning, device 2 is missing
> > checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> > checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> > bad tree block 22085632, bytenr mismatch, want=22085632,
> > have=1147797504
> > ERROR: cannot read chunk root
> > Could not open root, trying backup super
> > warning, device 14 is missing
> > warning, device 13 is missing
> > warning, device 12 is missing
> > warning, device 11 is missing
> > warning, device 10 is missing
> > warning, device 9 is missing
> > warning, device 8 is missing
> > warning, device 7 is missing
> > warning, device 6 is missing
> > warning, device 5 is missing
> > warning, device 4 is missing
> > warning, device 3 is missing
> > warning, device 2 is missing
> > checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> > checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> > bad tree block 22085632, bytenr mismatch, want=22085632,
> > have=1147797504
> > ERROR: cannot read chunk root
> > Could not open root, trying backup super
> > 
> > [root@cornelis ~]# uname -r
> > 4.18.16-arch1-1-ARCH
> > 
> > [root@cornelis ~]# btrfs --version
> > btrfs-progs v4.19
> > 
-- 
Groet / Cheers,
Patrick Dijkgraaf




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Need help with potential ~45TB dataloss
  2018-12-02  9:03   ` Patrick Dijkgraaf
@ 2018-12-02 20:14     ` Patrick Dijkgraaf
  2018-12-02 20:30       ` Andrei Borzenkov
  2018-12-03  0:35     ` Qu Wenruo
  1 sibling, 1 reply; 14+ messages in thread
From: Patrick Dijkgraaf @ 2018-12-02 20:14 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

I have some additional info.

I found the reason the FS got corrupted: a single failing drive caused
the entire cabinet (containing 7 drives) to reset, so the FS suddenly
lost 7 drives at once.

I have removed the failed drive, so the RAID is now degraded. I hope
the data is still recoverable... ☹
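For confirming which member drive was at fault before discarding it, SMART data is usually enough; a sketch (dry-run echo; the device name is a hypothetical placeholder, not one identified in this thread):

```shell
#!/bin/sh
# Sketch: query SMART health and attributes of a suspect drive.
# -H: overall health verdict; -A: attribute table (reallocated sectors,
# pending sectors, etc.), which usually reveals a dying disk.
DEV=/dev/sde   # placeholder device
echo smartctl -H -A "$DEV"
```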

-- 
Groet / Cheers,
Patrick Dijkgraaf



On Sun, 2018-12-02 at 10:03 +0100, Patrick Dijkgraaf wrote:
> Hi Qu,
> 
> Thanks for helping me!
> 
> Please see the reponses in-line.
> Any suggestions based on this?
> 
> Thanks!
> 
> 
> On Sat, 2018-12-01 at 07:57 +0800, Qu Wenruo wrote:
> > On 2018/11/30 下午9:53, Patrick Dijkgraaf wrote:
> > > Hi all,
> > > 
> > > I have been a happy BTRFS user for quite some time. But now I'm
> > > facing
> > > a potential ~45TB dataloss... :-(
> > > I hope someone can help!
> > > 
> > > I have Server A and Server B. Both having a 20-devices BTRFS
> > > RAID6
> > > filesystem. Because of known RAID5/6 risks, Server B was a backup
> > > of
> > > Server A.
> > > After applying updates to server B and reboot, the FS would not
> > > mount
> > > anymore. Because it was "just" a backup. I decided to recreate
> > > the
> > > FS
> > > and perform a new backup. Later, I discovered that the FS was not
> > > broken, but I faced this issue: 
> > > https://patchwork.kernel.org/patch/10694997/
> > > 
> > > 
> > 
> > Sorry for the inconvenience.
> > 
> > I didn't realize the max_chunk_size limit isn't reliable at that
> > timing.
> 
> No problem, I should not have jumped to the conclusion to recreate
> the
> backup volume.
> 
> > > Anyway, the FS was already recreated, so I needed to do a new
> > > backup.
> > > During the backup (using rsync -vah), Server A (the source)
> > > encountered
> > > an I/O error and my rsync failed. In an attempt to "quick fix"
> > > the
> > > issue, I rebooted Server A after which the FS would not mount
> > > anymore.
> > 
> > Did you have any dmesg about that IO error?
> 
> Yes there was. But I omitted capturing it... The system is now
> rebooted
> and I can't retrieve it anymore. :-(
> 
> > And how is the reboot scheduled? Forced power off or normal reboot
> > command?
> 
> The system was rebooted using a normal reboot command.
> 
> > > I documented what I have tried, below. I have not yet tried
> > > anything
> > > except what is shown, because I am afraid of causing more harm to
> > > the FS.
> > 
> > Pretty clever, no btrfs check --repair is a pretty good move.
> > 
> > > I hope somebody here can give me advice on how to (hopefully)
> > > retrieve my data...
> > > 
> > > Thanks in advance!
> > > 
> > > ==========================================
> > > 
> > > [root@cornelis ~]# btrfs fi show
> > > Label: 'cornelis-btrfs'  uuid: ac643516-670e-40f3-aa4c-
> > > f329fc3795fd
> > > 	Total devices 1 FS bytes used 463.92GiB
> > > 	devid    1 size 800.00GiB used 493.02GiB path
> > > /dev/mapper/cornelis-cornelis--btrfs
> > > 
> > > Label: 'data'  uuid: 4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
> > > 	Total devices 20 FS bytes used 44.85TiB
> > > 	devid    1 size 3.64TiB used 3.64TiB path /dev/sdn2
> > > 	devid    2 size 3.64TiB used 3.64TiB path /dev/sdp2
> > > 	devid    3 size 3.64TiB used 3.64TiB path /dev/sdu2
> > > 	devid    4 size 3.64TiB used 3.64TiB path /dev/sdx2
> > > 	devid    5 size 3.64TiB used 3.64TiB path /dev/sdh2
> > > 	devid    6 size 3.64TiB used 3.64TiB path /dev/sdg2
> > > 	devid    7 size 3.64TiB used 3.64TiB path /dev/sdm2
> > > 	devid    8 size 3.64TiB used 3.64TiB path /dev/sdw2
> > > 	devid    9 size 3.64TiB used 3.64TiB path /dev/sdj2
> > > 	devid   10 size 3.64TiB used 3.64TiB path /dev/sdt2
> > > 	devid   11 size 3.64TiB used 3.64TiB path /dev/sdk2
> > > 	devid   12 size 3.64TiB used 3.64TiB path /dev/sdq2
> > > 	devid   13 size 3.64TiB used 3.64TiB path /dev/sds2
> > > 	devid   14 size 3.64TiB used 3.64TiB path /dev/sdf2
> > > 	devid   15 size 7.28TiB used 588.80GiB path /dev/sdr2
> > > 	devid   16 size 7.28TiB used 588.80GiB path /dev/sdo2
> > > 	devid   17 size 7.28TiB used 588.80GiB path /dev/sdv2
> > > 	devid   18 size 7.28TiB used 588.80GiB path /dev/sdi2
> > > 	devid   19 size 7.28TiB used 588.80GiB path /dev/sdl2
> > > 	devid   20 size 7.28TiB used 588.80GiB path /dev/sde2
> > > 
> > > [root@cornelis ~]# mount /dev/sdn2 /mnt/data
> > > mount: /mnt/data: wrong fs type, bad option, bad superblock on
> > > /dev/sdn2, missing codepage or helper program, or other error.
> > 
> > What is the dmesg of the mount failure?
> 
> [Sun Dec  2 09:41:08 2018] BTRFS info (device sdn2): disk space
> caching
> is enabled
> [Sun Dec  2 09:41:08 2018] BTRFS info (device sdn2): has skinny
> extents
> [Sun Dec  2 09:41:08 2018] BTRFS error (device sdn2): parent transid
> verify failed on 46451963543552 wanted 114401 found 114173
> [Sun Dec  2 09:41:08 2018] BTRFS critical (device sdn2): corrupt
> leaf:
> root=2 block=46451963543552 slot=0, unexpected item end, have
> 1387359977 expect 16283
> [Sun Dec  2 09:41:08 2018] BTRFS warning (device sdn2): failed to
> read
> tree root
> [Sun Dec  2 09:41:08 2018] BTRFS error (device sdn2): open_ctree
> failed
> 
> > And have you tried -o ro,degraded ?
> 
> Tried it just now, gives the exact same error.
> 
> > > [root@cornelis ~]# btrfs check /dev/sdn2
> > > Opening filesystem to check...
> > > parent transid verify failed on 46451963543552 wanted 114401
> > > found
> > > 114173
> > > parent transid verify failed on 46451963543552 wanted 114401
> > > found
> > > 114173
> > > checksum verify failed on 46451963543552 found A8F2A769 wanted
> > > 4C111ADF
> > > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > > 8B07ABE4
> > > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > > 8B07ABE4
> > > bad tree block 46451963543552, bytenr mismatch,
> > > want=46451963543552,
> > > have=75208089814272
> > > Couldn't read tree root
> > 
> > Would you please also paste the output of "btrfs ins dump-super
> > /dev/sdn2" ?
> 
> [root@cornelis ~]# btrfs ins dump-super /dev/sdn2
> superblock: bytenr=65536, device=/dev/sdn2
> ---------------------------------------------------------
> csum_type		0 (crc32c)
> csum_size		4
> csum			0x51725c39 [match]
> bytenr			65536
> flags			0x1
> 			( WRITTEN )
> magic			_BHRfS_M [match]
> fsid			4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
> label			data
> generation		114401
> root			46451963543552
> sys_array_size		513
> chunk_root_generation	112769
> root_level		1
> chunk_root		22085632
> chunk_root_level	1
> log_root		46451935461376
> log_root_transid	0
> log_root_level		0
> total_bytes		104020314161152
> bytes_used		49308554543104
> sectorsize		4096
> nodesize		16384
> leafsize (deprecated)		16384
> stripesize		4096
> root_dir		6
> num_devices		20
> compat_flags		0x0
> compat_ro_flags		0x0
> incompat_flags		0x1e1
> 			( MIXED_BACKREF |
> 			  BIG_METADATA |
> 			  EXTENDED_IREF |
> 			  RAID56 |
> 			  SKINNY_METADATA )
> cache_generation	114401
> uuid_tree_generation	114401
> dev_item.uuid		c6b44903-e849-4403-98c4-f3ba4d0b3fc3
> dev_item.fsid		4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5 [match]
> dev_item.type		0
> dev_item.total_bytes	4000783007744
> dev_item.bytes_used	4000781959168
> dev_item.io_align	4096
> dev_item.io_width	4096
> dev_item.sector_size	4096
> dev_item.devid		1
> dev_item.dev_group	0
> dev_item.seek_speed	0
> dev_item.bandwidth	0
> dev_item.generation	0
> 
> > It looks like your tree root (or at least some tree root
> > nodes/leaves
> > get corrupted)
> > 
> > > ERROR: cannot open file system
> > 
> > And since it's your tree root corrupted, you could also try
> > "btrfs-find-root <device>" to try to get a good old copy of your
> > tree
> > root.
> 
> The output is rather long. I pasted it here: 
> https://pastebin.com/FkyBLgj9
> 
> I'm unsure what to look for in this output?
> 
> > But I suspect the corruption happens before you noticed, thus the
> > old
> > tree root may not help much.
> > 
> > Also, the output of "btrfs ins dump-tree -t root <device>" will
> > help.
> 
> Here it is:
> 
> [root@cornelis ~]# btrfs ins dump-tree -t root /dev/sdn2
> btrfs-progs v4.19 
> parent transid verify failed on 46451963543552 wanted 114401 found
> 114173
> parent transid verify failed on 46451963543552 wanted 114401 found
> 114173
> checksum verify failed on 46451963543552 found A8F2A769 wanted
> 4C111ADF
> checksum verify failed on 46451963543552 found 32153BE8 wanted
> 8B07ABE4
> checksum verify failed on 46451963543552 found 32153BE8 wanted
> 8B07ABE4
> bad tree block 46451963543552, bytenr mismatch, want=46451963543552,
> have=75208089814272
> Couldn't read tree root
> ERROR: unable to open /dev/sdn2
> 
> > Thanks,
> > Qu
> 
> No, thank YOU! :-)
> 
> > > [root@cornelis ~]# btrfs restore /dev/sdn2 /mnt/data/
> > > parent transid verify failed on 46451963543552 wanted 114401
> > > found
> > > 114173
> > > parent transid verify failed on 46451963543552 wanted 114401
> > > found
> > > 114173
> > > checksum verify failed on 46451963543552 found A8F2A769 wanted
> > > 4C111ADF
> > > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > > 8B07ABE4
> > > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > > 8B07ABE4
> > > bad tree block 46451963543552, bytenr mismatch,
> > > want=46451963543552,
> > > have=75208089814272
> > > Couldn't read tree root
> > > Could not open root, trying backup super
> > > warning, device 14 is missing
> > > warning, device 13 is missing
> > > warning, device 12 is missing
> > > warning, device 11 is missing
> > > warning, device 10 is missing
> > > warning, device 9 is missing
> > > warning, device 8 is missing
> > > warning, device 7 is missing
> > > warning, device 6 is missing
> > > warning, device 5 is missing
> > > warning, device 4 is missing
> > > warning, device 3 is missing
> > > warning, device 2 is missing
> > > checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> > > checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> > > bad tree block 22085632, bytenr mismatch, want=22085632,
> > > have=1147797504
> > > ERROR: cannot read chunk root
> > > Could not open root, trying backup super
> > > warning, device 14 is missing
> > > warning, device 13 is missing
> > > warning, device 12 is missing
> > > warning, device 11 is missing
> > > warning, device 10 is missing
> > > warning, device 9 is missing
> > > warning, device 8 is missing
> > > warning, device 7 is missing
> > > warning, device 6 is missing
> > > warning, device 5 is missing
> > > warning, device 4 is missing
> > > warning, device 3 is missing
> > > warning, device 2 is missing
> > > checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> > > checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
> > > bad tree block 22085632, bytenr mismatch, want=22085632,
> > > have=1147797504
> > > ERROR: cannot read chunk root
> > > Could not open root, trying backup super
> > > 
> > > [root@cornelis ~]# uname -r
> > > 4.18.16-arch1-1-ARCH
> > > 
> > > [root@cornelis ~]# btrfs --version
> > > btrfs-progs v4.19
> > > 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Need help with potential ~45TB dataloss
  2018-12-02 20:14     ` Patrick Dijkgraaf
@ 2018-12-02 20:30       ` Andrei Borzenkov
  2018-12-03  5:58         ` Qu Wenruo
  0 siblings, 1 reply; 14+ messages in thread
From: Andrei Borzenkov @ 2018-12-02 20:30 UTC (permalink / raw)
  To: Patrick Dijkgraaf, Qu Wenruo, linux-btrfs

On 02.12.2018 23:14, Patrick Dijkgraaf wrote:
> I have some additional info.
> 
> I found the reason the FS got corrupted. It was a single failing drive,
> which caused the entire cabinet (containing 7 drives) to reset. So the
> FS suddenly lost 7 drives.
> 

This remains a mystery to me. btrfs is marketed as always consistent
on disk - you either have the previous full transaction or the current
full transaction. If the current transaction is interrupted, the
promise is that you are left with the previous valid, consistent
transaction.

Obviously this is not what happens in practice, which nullifies the
main selling point of btrfs.

Unless this is expected behavior, it sounds like some barriers are
missing and summary data is updated before (and without waiting for)
subordinate data. And if it is expected behavior ...

> I have removed the failed drive, so the RAID is now degraded. I hope
> the data is still recoverable... ☹
> 


^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Need help with potential ~45TB dataloss
  2018-12-02  9:03   ` Patrick Dijkgraaf
  2018-12-02 20:14     ` Patrick Dijkgraaf
@ 2018-12-03  0:35     ` Qu Wenruo
  2018-12-03  0:45       ` Qu Wenruo
  2018-12-04  9:58       ` Patrick Dijkgraaf
  1 sibling, 2 replies; 14+ messages in thread
From: Qu Wenruo @ 2018-12-03  0:35 UTC (permalink / raw)
  To: Patrick Dijkgraaf, linux-btrfs


[-- Attachment #1.1: Type: text/plain, Size: 11144 bytes --]



On 2018/12/2 at 5:03 PM, Patrick Dijkgraaf wrote:
> Hi Qu,
> 
> Thanks for helping me!
> 
> Please see the reponses in-line.
> Any suggestions based on this?
> 
> Thanks!
> 
> 
> On Sat, 2018-12-01 at 07:57 +0800, Qu Wenruo wrote:
>> On 2018/11/30 at 9:53 PM, Patrick Dijkgraaf wrote:
>>> Hi all,
>>>
>>> I have been a happy BTRFS user for quite some time. But now I'm
>>> facing
>>> a potential ~45TB dataloss... :-(
>>> I hope someone can help!
>>>
>>> I have Server A and Server B. Both having a 20-devices BTRFS RAID6
>>> filesystem. Because of known RAID5/6 risks, Server B was a backup
>>> of
>>> Server A.
>>> After applying updates to server B and reboot, the FS would not
>>> mount
>>> anymore. Because it was "just" a backup. I decided to recreate the
>>> FS
>>> and perform a new backup. Later, I discovered that the FS was not
>>> broken, but I faced this issue: 
>>> https://patchwork.kernel.org/patch/10694997/
>>>
>>
>> Sorry for the inconvenience.
>>
>> I didn't realize the max_chunk_size limit isn't reliable at that
>> timing.
> 
> No problem, I should not have jumped to the conclusion to recreate the
> backup volume.
> 
>>> Anyway, the FS was already recreated, so I needed to do a new
>>> backup.
>>> During the backup (using rsync -vah), Server A (the source)
>>> encountered
>>> an I/O error and my rsync failed. In an attempt to "quick fix" the
>>> issue, I rebooted Server A after which the FS would not mount
>>> anymore.
>>
>> Did you have any dmesg about that IO error?
> 
> Yes there was. But I omitted capturing it... The system is now rebooted
> and I can't retrieve it anymore. :-(
> 
>> And how is the reboot scheduled? Forced power off or normal reboot
>> command?
> 
> The system was rebooted using a normal reboot command.

Then the problem is pretty serious.

The filesystem was possibly already corrupted before the reboot.

> 
>>> I documented what I have tried, below. I have not yet tried
>>> anything
>>> except what is shown, because I am afraid of causing more harm to
>>> the FS.
>>
>> Pretty clever, no btrfs check --repair is a pretty good move.
>>
>>> I hope somebody here can give me advice on how to (hopefully)
>>> retrieve my data...
>>>
>>> Thanks in advance!
>>>
>>> ==========================================
>>>
>>> [root@cornelis ~]# btrfs fi show
>>> Label: 'cornelis-btrfs'  uuid: ac643516-670e-40f3-aa4c-f329fc3795fd
>>> 	Total devices 1 FS bytes used 463.92GiB
>>> 	devid    1 size 800.00GiB used 493.02GiB path
>>> /dev/mapper/cornelis-cornelis--btrfs
>>>
>>> Label: 'data'  uuid: 4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
>>> 	Total devices 20 FS bytes used 44.85TiB
>>> 	devid    1 size 3.64TiB used 3.64TiB path /dev/sdn2
>>> 	devid    2 size 3.64TiB used 3.64TiB path /dev/sdp2
>>> 	devid    3 size 3.64TiB used 3.64TiB path /dev/sdu2
>>> 	devid    4 size 3.64TiB used 3.64TiB path /dev/sdx2
>>> 	devid    5 size 3.64TiB used 3.64TiB path /dev/sdh2
>>> 	devid    6 size 3.64TiB used 3.64TiB path /dev/sdg2
>>> 	devid    7 size 3.64TiB used 3.64TiB path /dev/sdm2
>>> 	devid    8 size 3.64TiB used 3.64TiB path /dev/sdw2
>>> 	devid    9 size 3.64TiB used 3.64TiB path /dev/sdj2
>>> 	devid   10 size 3.64TiB used 3.64TiB path /dev/sdt2
>>> 	devid   11 size 3.64TiB used 3.64TiB path /dev/sdk2
>>> 	devid   12 size 3.64TiB used 3.64TiB path /dev/sdq2
>>> 	devid   13 size 3.64TiB used 3.64TiB path /dev/sds2
>>> 	devid   14 size 3.64TiB used 3.64TiB path /dev/sdf2
>>> 	devid   15 size 7.28TiB used 588.80GiB path /dev/sdr2
>>> 	devid   16 size 7.28TiB used 588.80GiB path /dev/sdo2
>>> 	devid   17 size 7.28TiB used 588.80GiB path /dev/sdv2
>>> 	devid   18 size 7.28TiB used 588.80GiB path /dev/sdi2
>>> 	devid   19 size 7.28TiB used 588.80GiB path /dev/sdl2
>>> 	devid   20 size 7.28TiB used 588.80GiB path /dev/sde2
>>>
>>> [root@cornelis ~]# mount /dev/sdn2 /mnt/data
>>> mount: /mnt/data: wrong fs type, bad option, bad superblock on
>>> /dev/sdn2, missing codepage or helper program, or other error.
>>
>> What is the dmesg of the mount failure?
> 
> [Sun Dec  2 09:41:08 2018] BTRFS info (device sdn2): disk space caching
> is enabled
> [Sun Dec  2 09:41:08 2018] BTRFS info (device sdn2): has skinny extents
> [Sun Dec  2 09:41:08 2018] BTRFS error (device sdn2): parent transid
> verify failed on 46451963543552 wanted 114401 found 114173
> [Sun Dec  2 09:41:08 2018] BTRFS critical (device sdn2): corrupt leaf:
> root=2 block=46451963543552 slot=0, unexpected item end, have
> 1387359977 expect 16283

OK, this shows that one of the copy has mismatched generation while the
other copy is completely corrupted.

> [Sun Dec  2 09:41:08 2018] BTRFS warning (device sdn2): failed to read
> tree root
> [Sun Dec  2 09:41:08 2018] BTRFS error (device sdn2): open_ctree failed
> 
>> And have you tried -o ro,degraded ?
> 
> Tried it just now, gives the exact same error.
> 
>>> [root@cornelis ~]# btrfs check /dev/sdn2
>>> Opening filesystem to check...
>>> parent transid verify failed on 46451963543552 wanted 114401 found
>>> 114173
>>> parent transid verify failed on 46451963543552 wanted 114401 found
>>> 114173
>>> checksum verify failed on 46451963543552 found A8F2A769 wanted
>>> 4C111ADF
>>> checksum verify failed on 46451963543552 found 32153BE8 wanted
>>> 8B07ABE4
>>> checksum verify failed on 46451963543552 found 32153BE8 wanted
>>> 8B07ABE4
>>> bad tree block 46451963543552, bytenr mismatch,
>>> want=46451963543552,
>>> have=75208089814272
>>> Couldn't read tree root
>>
>> Would you please also paste the output of "btrfs ins dump-super
>> /dev/sdn2" ?
> 
> [root@cornelis ~]# btrfs ins dump-super /dev/sdn2
> superblock: bytenr=65536, device=/dev/sdn2
> ---------------------------------------------------------
> csum_type		0 (crc32c)
> csum_size		4
> csum			0x51725c39 [match]
> bytenr			65536
> flags			0x1
> 			( WRITTEN )
> magic			_BHRfS_M [match]
> fsid			4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
> label			data
> generation		114401
> root			46451963543552

The bytenr matches with the dmesg, so it's tree root node corrupted.

> sys_array_size		513
> chunk_root_generation	112769
> root_level		1
> chunk_root		22085632
> chunk_root_level	1
> log_root		46451935461376
> log_root_transid	0
> log_root_level		0
> total_bytes		104020314161152
> bytes_used		49308554543104
> sectorsize		4096
> nodesize		16384
> leafsize (deprecated)		16384
> stripesize		4096
> root_dir		6
> num_devices		20
> compat_flags		0x0
> compat_ro_flags		0x0
> incompat_flags		0x1e1
> 			( MIXED_BACKREF |
> 			  BIG_METADATA |
> 			  EXTENDED_IREF |
> 			  RAID56 |
> 			  SKINNY_METADATA )
> cache_generation	114401
> uuid_tree_generation	114401
> dev_item.uuid		c6b44903-e849-4403-98c4-f3ba4d0b3fc3
> dev_item.fsid		4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5 [match]
> dev_item.type		0
> dev_item.total_bytes	4000783007744
> dev_item.bytes_used	4000781959168
> dev_item.io_align	4096
> dev_item.io_width	4096
> dev_item.sector_size	4096
> dev_item.devid		1
> dev_item.dev_group	0
> dev_item.seek_speed	0
> dev_item.bandwidth	0
> dev_item.generation	0
> 
>> It looks like your tree root (or at least some tree root nodes/leaves
>> get corrupted)
>>
>>> ERROR: cannot open file system
>>
>> And since it's your tree root corrupted, you could also try
>> "btrfs-find-root <device>" to try to get a good old copy of your tree
>> root.
> 
> The output is rather long. I pasted it here: 
> https://pastebin.com/FkyBLgj9
> I'm unsure what to look for in this output?

This shows all the candidates of the older tree root bytenr.

We could use it to try to recover.

You could then try the following command and see if btrfs check can go
further.

 # btrfs check -r 45462239363072 <device>


And the following dump could also help:

 # btrfs ins dump-tree -b 45462239363072 --follow

Thanks,
Qu

> 
>> But I suspect the corruption happens before you noticed, thus the old
>> tree root may not help much.
>>
>> Also, the output of "btrfs ins dump-tree -t root <device>" will help.
> 
> Here it is:
> 
> [root@cornelis ~]# btrfs ins dump-tree -t root /dev/sdn2
> btrfs-progs v4.19 
> parent transid verify failed on 46451963543552 wanted 114401 found
> 114173
> parent transid verify failed on 46451963543552 wanted 114401 found
> 114173
> checksum verify failed on 46451963543552 found A8F2A769 wanted 4C111ADF
> checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
> checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
> bad tree block 46451963543552, bytenr mismatch, want=46451963543552,
> have=75208089814272
> Couldn't read tree root
> ERROR: unable to open /dev/sdn2
> 
>> Thanks,
>> Qu
> 
> No, thank YOU! :-)
> 
>>> [root@cornelis ~]# btrfs restore /dev/sdn2 /mnt/data/
>>> parent transid verify failed on 46451963543552 wanted 114401 found
>>> 114173
>>> parent transid verify failed on 46451963543552 wanted 114401 found
>>> 114173
>>> checksum verify failed on 46451963543552 found A8F2A769 wanted
>>> 4C111ADF
>>> checksum verify failed on 46451963543552 found 32153BE8 wanted
>>> 8B07ABE4
>>> checksum verify failed on 46451963543552 found 32153BE8 wanted
>>> 8B07ABE4
>>> bad tree block 46451963543552, bytenr mismatch,
>>> want=46451963543552,
>>> have=75208089814272
>>> Couldn't read tree root
>>> Could not open root, trying backup super
>>> warning, device 14 is missing
>>> warning, device 13 is missing
>>> warning, device 12 is missing
>>> warning, device 11 is missing
>>> warning, device 10 is missing
>>> warning, device 9 is missing
>>> warning, device 8 is missing
>>> warning, device 7 is missing
>>> warning, device 6 is missing
>>> warning, device 5 is missing
>>> warning, device 4 is missing
>>> warning, device 3 is missing
>>> warning, device 2 is missing
>>> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
>>> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
>>> bad tree block 22085632, bytenr mismatch, want=22085632,
>>> have=1147797504
>>> ERROR: cannot read chunk root
>>> Could not open root, trying backup super
>>> warning, device 14 is missing
>>> warning, device 13 is missing
>>> warning, device 12 is missing
>>> warning, device 11 is missing
>>> warning, device 10 is missing
>>> warning, device 9 is missing
>>> warning, device 8 is missing
>>> warning, device 7 is missing
>>> warning, device 6 is missing
>>> warning, device 5 is missing
>>> warning, device 4 is missing
>>> warning, device 3 is missing
>>> warning, device 2 is missing
>>> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
>>> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
>>> bad tree block 22085632, bytenr mismatch, want=22085632,
>>> have=1147797504
>>> ERROR: cannot read chunk root
>>> Could not open root, trying backup super
>>>
>>> [root@cornelis ~]# uname -r
>>> 4.18.16-arch1-1-ARCH
>>>
>>> [root@cornelis ~]# btrfs --version
>>> btrfs-progs v4.19
>>>


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Need help with potential ~45TB dataloss
  2018-12-03  0:35     ` Qu Wenruo
@ 2018-12-03  0:45       ` Qu Wenruo
  2018-12-04  9:58       ` Patrick Dijkgraaf
  1 sibling, 0 replies; 14+ messages in thread
From: Qu Wenruo @ 2018-12-03  0:45 UTC (permalink / raw)
  To: Patrick Dijkgraaf, linux-btrfs


[-- Attachment #1.1: Type: text/plain, Size: 12011 bytes --]



On 2018/12/3 at 8:35 AM, Qu Wenruo wrote:
> 
> 
> On 2018/12/2 at 5:03 PM, Patrick Dijkgraaf wrote:
>> Hi Qu,
>>
>> Thanks for helping me!
>>
>> Please see the reponses in-line.
>> Any suggestions based on this?
>>
>> Thanks!
>>
>>
>> On Sat, 2018-12-01 at 07:57 +0800, Qu Wenruo wrote:
>>> On 2018/11/30 at 9:53 PM, Patrick Dijkgraaf wrote:
>>>> Hi all,
>>>>
>>>> I have been a happy BTRFS user for quite some time. But now I'm
>>>> facing
>>>> a potential ~45TB dataloss... :-(
>>>> I hope someone can help!
>>>>
>>>> I have Server A and Server B. Both having a 20-devices BTRFS RAID6
>>>> filesystem.

I forgot one important thing here, especially for RAID6.

If one data device is corrupted, RAID6 will normally try to rebuild it
the RAID5 way, and if another disk is also corrupted, it may not
recover correctly.

The correct way to recover is to try *all* combinations.

IIRC Liu Bo posted such a patch, but it was not merged.

This means the current RAID6 code can only handle two *missing* devices
at its best.
For corruption, it can only be as good as RAID5.
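To make the combinatorics concrete (my own illustration, not code from btrfs): with RAID6's two parity members P and Q, a single lost data block can in principle be rebuilt from any of several survivor pairs, so a recovery path that tries only one pair can miss a valid rebuild.

```python
# Hypothetical illustration: enumerate the survivor pairs that could
# rebuild DATA1 in a 4-member RAID6 stripe (DATA1, DATA2, P, Q).
from itertools import combinations

survivors = ["DATA2", "P", "Q"]          # DATA1 is the lost member
rebuild_paths = list(combinations(survivors, 2))
print(rebuild_paths)
# -> [('DATA2', 'P'), ('DATA2', 'Q'), ('P', 'Q')]  (3 candidate rebuilds)
```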

Thanks,
Qu

> Because of known RAID5/6 risks, Server B was a backup
>>>> of
>>>> Server A.
>>>> After applying updates to server B and reboot, the FS would not
>>>> mount
>>>> anymore. Because it was "just" a backup. I decided to recreate the
>>>> FS
>>>> and perform a new backup. Later, I discovered that the FS was not
>>>> broken, but I faced this issue: 
>>>> https://patchwork.kernel.org/patch/10694997/
>>>>
>>>
>>> Sorry for the inconvenience.
>>>
>>> I didn't realize the max_chunk_size limit isn't reliable at that
>>> timing.
>>
>> No problem, I should not have jumped to the conclusion to recreate the
>> backup volume.
>>
>>>> Anyway, the FS was already recreated, so I needed to do a new
>>>> backup.
>>>> During the backup (using rsync -vah), Server A (the source)
>>>> encountered
>>>> an I/O error and my rsync failed. In an attempt to "quick fix" the
>>>> issue, I rebooted Server A after which the FS would not mount
>>>> anymore.
>>>
>>> Did you have any dmesg about that IO error?
>>
>> Yes there was. But I omitted capturing it... The system is now rebooted
>> and I can't retrieve it anymore. :-(
>>
>>> And how is the reboot scheduled? Forced power off or normal reboot
>>> command?
>>
>> The system was rebooted using a normal reboot command.
> 
> Then the problem is pretty serious.
> 
> Possibly already corrupted before.
> 
>>
>>>> I documented what I have tried, below. I have not yet tried
>>>> anything
>>>> except what is shown, because I am afraid of causing more harm to
>>>> the FS.
>>>
>>> Pretty clever, no btrfs check --repair is a pretty good move.
>>>
>>>> I hope somebody here can give me advice on how to (hopefully)
>>>> retrieve my data...
>>>>
>>>> Thanks in advance!
>>>>
>>>> ==========================================
>>>>
>>>> [root@cornelis ~]# btrfs fi show
>>>> Label: 'cornelis-btrfs'  uuid: ac643516-670e-40f3-aa4c-f329fc3795fd
>>>> 	Total devices 1 FS bytes used 463.92GiB
>>>> 	devid    1 size 800.00GiB used 493.02GiB path
>>>> /dev/mapper/cornelis-cornelis--btrfs
>>>>
>>>> Label: 'data'  uuid: 4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
>>>> 	Total devices 20 FS bytes used 44.85TiB
>>>> 	devid    1 size 3.64TiB used 3.64TiB path /dev/sdn2
>>>> 	devid    2 size 3.64TiB used 3.64TiB path /dev/sdp2
>>>> 	devid    3 size 3.64TiB used 3.64TiB path /dev/sdu2
>>>> 	devid    4 size 3.64TiB used 3.64TiB path /dev/sdx2
>>>> 	devid    5 size 3.64TiB used 3.64TiB path /dev/sdh2
>>>> 	devid    6 size 3.64TiB used 3.64TiB path /dev/sdg2
>>>> 	devid    7 size 3.64TiB used 3.64TiB path /dev/sdm2
>>>> 	devid    8 size 3.64TiB used 3.64TiB path /dev/sdw2
>>>> 	devid    9 size 3.64TiB used 3.64TiB path /dev/sdj2
>>>> 	devid   10 size 3.64TiB used 3.64TiB path /dev/sdt2
>>>> 	devid   11 size 3.64TiB used 3.64TiB path /dev/sdk2
>>>> 	devid   12 size 3.64TiB used 3.64TiB path /dev/sdq2
>>>> 	devid   13 size 3.64TiB used 3.64TiB path /dev/sds2
>>>> 	devid   14 size 3.64TiB used 3.64TiB path /dev/sdf2
>>>> 	devid   15 size 7.28TiB used 588.80GiB path /dev/sdr2
>>>> 	devid   16 size 7.28TiB used 588.80GiB path /dev/sdo2
>>>> 	devid   17 size 7.28TiB used 588.80GiB path /dev/sdv2
>>>> 	devid   18 size 7.28TiB used 588.80GiB path /dev/sdi2
>>>> 	devid   19 size 7.28TiB used 588.80GiB path /dev/sdl2
>>>> 	devid   20 size 7.28TiB used 588.80GiB path /dev/sde2
>>>>
>>>> [root@cornelis ~]# mount /dev/sdn2 /mnt/data
>>>> mount: /mnt/data: wrong fs type, bad option, bad superblock on
>>>> /dev/sdn2, missing codepage or helper program, or other error.
>>>
>>> What is the dmesg of the mount failure?
>>
>> [Sun Dec  2 09:41:08 2018] BTRFS info (device sdn2): disk space caching
>> is enabled
>> [Sun Dec  2 09:41:08 2018] BTRFS info (device sdn2): has skinny extents
>> [Sun Dec  2 09:41:08 2018] BTRFS error (device sdn2): parent transid
>> verify failed on 46451963543552 wanted 114401 found 114173
>> [Sun Dec  2 09:41:08 2018] BTRFS critical (device sdn2): corrupt leaf:
>> root=2 block=46451963543552 slot=0, unexpected item end, have
>> 1387359977 expect 16283
> 
> OK, this shows that one of the copy has mismatched generation while the
> other copy is completely corrupted.
> 
>> [Sun Dec  2 09:41:08 2018] BTRFS warning (device sdn2): failed to read
>> tree root
>> [Sun Dec  2 09:41:08 2018] BTRFS error (device sdn2): open_ctree failed
>>
>>> And have you tried -o ro,degraded ?
>>
>> Tried it just now, gives the exact same error.
>>
>>>> [root@cornelis ~]# btrfs check /dev/sdn2
>>>> Opening filesystem to check...
>>>> parent transid verify failed on 46451963543552 wanted 114401 found
>>>> 114173
>>>> parent transid verify failed on 46451963543552 wanted 114401 found
>>>> 114173
>>>> checksum verify failed on 46451963543552 found A8F2A769 wanted
>>>> 4C111ADF
>>>> checksum verify failed on 46451963543552 found 32153BE8 wanted
>>>> 8B07ABE4
>>>> checksum verify failed on 46451963543552 found 32153BE8 wanted
>>>> 8B07ABE4
>>>> bad tree block 46451963543552, bytenr mismatch,
>>>> want=46451963543552,
>>>> have=75208089814272
>>>> Couldn't read tree root
>>>
>>> Would you please also paste the output of "btrfs ins dump-super
>>> /dev/sdn2" ?
>>
>> [root@cornelis ~]# btrfs ins dump-super /dev/sdn2
>> superblock: bytenr=65536, device=/dev/sdn2
>> ---------------------------------------------------------
>> csum_type		0 (crc32c)
>> csum_size		4
>> csum			0x51725c39 [match]
>> bytenr			65536
>> flags			0x1
>> 			( WRITTEN )
>> magic			_BHRfS_M [match]
>> fsid			4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
>> label			data
>> generation		114401
>> root			46451963543552
> 
> The bytenr matches with the dmesg, so it's tree root node corrupted.
> 
>> sys_array_size		513
>> chunk_root_generation	112769
>> root_level		1
>> chunk_root		22085632
>> chunk_root_level	1
>> log_root		46451935461376
>> log_root_transid	0
>> log_root_level		0
>> total_bytes		104020314161152
>> bytes_used		49308554543104
>> sectorsize		4096
>> nodesize		16384
>> leafsize (deprecated)		16384
>> stripesize		4096
>> root_dir		6
>> num_devices		20
>> compat_flags		0x0
>> compat_ro_flags		0x0
>> incompat_flags		0x1e1
>> 			( MIXED_BACKREF |
>> 			  BIG_METADATA |
>> 			  EXTENDED_IREF |
>> 			  RAID56 |
>> 			  SKINNY_METADATA )
>> cache_generation	114401
>> uuid_tree_generation	114401
>> dev_item.uuid		c6b44903-e849-4403-98c4-f3ba4d0b3fc3
>> dev_item.fsid		4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5 [match]
>> dev_item.type		0
>> dev_item.total_bytes	4000783007744
>> dev_item.bytes_used	4000781959168
>> dev_item.io_align	4096
>> dev_item.io_width	4096
>> dev_item.sector_size	4096
>> dev_item.devid		1
>> dev_item.dev_group	0
>> dev_item.seek_speed	0
>> dev_item.bandwidth	0
>> dev_item.generation	0
>>
>>> It looks like your tree root (or at least some tree root nodes/leaves
>>> get corrupted)
>>>
>>>> ERROR: cannot open file system
>>>
>>> And since it's your tree root corrupted, you could also try
>>> "btrfs-find-root <device>" to try to get a good old copy of your tree
>>> root.
>>
>> The output is rather long. I pasted it here: 
>> https://pastebin.com/FkyBLgj9
>> I'm unsure what to look for in this output?
> 
> This shows all the candidates of the older tree root bytenr.
> 
> We could use it to try to recover.
> 
> You could then try the following command and see if btrfs check can go
> further.
> 
>  # btrfs check -r 45462239363072 <device>
> 
> 
> And the following dump could also help:
> 
>  # btrfs ins dump-tree -b 45462239363072 --follow
> 
> Thanks,
> Qu
> 
>>
>>> But I suspect the corruption happens before you noticed, thus the old
>>> tree root may not help much.
>>>
>>> Also, the output of "btrfs ins dump-tree -t root <device>" will help.
>>
>> Here it is:
>>
>> [root@cornelis ~]# btrfs ins dump-tree -t root /dev/sdn2
>> btrfs-progs v4.19 
>> parent transid verify failed on 46451963543552 wanted 114401 found
>> 114173
>> parent transid verify failed on 46451963543552 wanted 114401 found
>> 114173
>> checksum verify failed on 46451963543552 found A8F2A769 wanted 4C111ADF
>> checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
>> checksum verify failed on 46451963543552 found 32153BE8 wanted 8B07ABE4
>> bad tree block 46451963543552, bytenr mismatch, want=46451963543552,
>> have=75208089814272
>> Couldn't read tree root
>> ERROR: unable to open /dev/sdn2
>>
>>> Thanks,
>>> Qu
>>
>> No, thank YOU! :-)
>>
>>>> [root@cornelis ~]# btrfs restore /dev/sdn2 /mnt/data/
>>>> parent transid verify failed on 46451963543552 wanted 114401 found
>>>> 114173
>>>> parent transid verify failed on 46451963543552 wanted 114401 found
>>>> 114173
>>>> checksum verify failed on 46451963543552 found A8F2A769 wanted
>>>> 4C111ADF
>>>> checksum verify failed on 46451963543552 found 32153BE8 wanted
>>>> 8B07ABE4
>>>> checksum verify failed on 46451963543552 found 32153BE8 wanted
>>>> 8B07ABE4
>>>> bad tree block 46451963543552, bytenr mismatch,
>>>> want=46451963543552,
>>>> have=75208089814272
>>>> Couldn't read tree root
>>>> Could not open root, trying backup super
>>>> warning, device 14 is missing
>>>> warning, device 13 is missing
>>>> warning, device 12 is missing
>>>> warning, device 11 is missing
>>>> warning, device 10 is missing
>>>> warning, device 9 is missing
>>>> warning, device 8 is missing
>>>> warning, device 7 is missing
>>>> warning, device 6 is missing
>>>> warning, device 5 is missing
>>>> warning, device 4 is missing
>>>> warning, device 3 is missing
>>>> warning, device 2 is missing
>>>> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
>>>> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
>>>> bad tree block 22085632, bytenr mismatch, want=22085632,
>>>> have=1147797504
>>>> ERROR: cannot read chunk root
>>>> Could not open root, trying backup super
>>>> warning, device 14 is missing
>>>> warning, device 13 is missing
>>>> warning, device 12 is missing
>>>> warning, device 11 is missing
>>>> warning, device 10 is missing
>>>> warning, device 9 is missing
>>>> warning, device 8 is missing
>>>> warning, device 7 is missing
>>>> warning, device 6 is missing
>>>> warning, device 5 is missing
>>>> warning, device 4 is missing
>>>> warning, device 3 is missing
>>>> warning, device 2 is missing
>>>> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
>>>> checksum verify failed on 22085632 found 5630EA32 wanted 1AA6FFF0
>>>> bad tree block 22085632, bytenr mismatch, want=22085632,
>>>> have=1147797504
>>>> ERROR: cannot read chunk root
>>>> Could not open root, trying backup super
>>>>
>>>> [root@cornelis ~]# uname -r
>>>> 4.18.16-arch1-1-ARCH
>>>>
>>>> [root@cornelis ~]# btrfs --version
>>>> btrfs-progs v4.19
>>>>
> 


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Need help with potential ~45TB dataloss
  2018-12-02 20:30       ` Andrei Borzenkov
@ 2018-12-03  5:58         ` Qu Wenruo
  2018-12-04  3:16           ` Chris Murphy
  0 siblings, 1 reply; 14+ messages in thread
From: Qu Wenruo @ 2018-12-03  5:58 UTC (permalink / raw)
  To: Andrei Borzenkov, Patrick Dijkgraaf, linux-btrfs


[-- Attachment #1.1: Type: text/plain, Size: 2713 bytes --]



On 2018/12/3 at 4:30 AM, Andrei Borzenkov wrote:
> On 02.12.2018 23:14, Patrick Dijkgraaf wrote:
>> I have some additional info.
>>
>> I found the reason the FS got corrupted. It was a single failing drive,
>> which caused the entire cabinet (containing 7 drives) to reset. So the
>> FS suddenly lost 7 drives.
>>
> 
> This remains mystery for me. btrfs is marketed to be always consistent
> on disk - you either have previous full transaction or current full
> transaction. If current transaction was interrupted the promise is you
> are left with previous valid consistent transaction.
> 
> Obviously this is not what happens in practice. Which nullifies the main
> selling point of btrfs.
> 
> Unless this is expected behavior, it sounds like some barriers are
> missing and summary data is updated before (and without waiting for)
> subordinate data. And if it is expected behavior ...

There is one (unfortunately) well-known problem for RAID5/6, and one
special problem for RAID6.

The common problem is the write hole.
For a RAID5 stripe like:
        Disk 1      |        Disk 2        |       Disk 3
---------------------------------------------------------------
        DATA1       |        DATA2         |       PARITY

suppose we have written something into DATA1, but power loss happened
before we updated PARITY on disk 3.
In this case, we can't tolerate the loss of Disk 2, since DATA1 no
longer matches PARITY.

Without the ability to know exactly which blocks we have written, the
write hole problem exists for any parity-based solution, including
BTRFS RAID5/6.

From the guys on the mailing list, other RAID5/6 implementations keep
their own record of which blocks have been updated on disk, and after a
power loss they rebuild the involved stripes.

Since btrfs doesn't have such an ability, we need to scrub the whole fs
to regain the disk-loss tolerance (and hope there will not be another
power loss during it).


The RAID6-specific problem is the missing rebuild-retry logic.
(No longer an issue after the 4.16 kernel, but btrfs-progs support is
still missing.)

For a RAID6 stripe like:
    Disk 1     |    Disk 2      |     Disk 3     |    Disk 4
----------------------------------------------------------------
    DATA1      |    DATA2       |       P        |      Q

if reading DATA1 fails, we have 3 ways to rebuild the data:
1) Using DATA2 and P (just as in RAID5)
2) Using P and Q
3) Using DATA2 and Q

However, until 4.16 the kernel would not retry all possible ways to
rebuild it.
(Thanks to Liu for solving this problem.)
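The retry logic can be sketched generically (a hypothetical sketch, not the actual kernel code; btrfs can validate each candidate against the stored data checksum):

```python
# Hypothetical rebuild-retry loop: attempt every combination and accept
# the first candidate whose checksum matches the expected one.
import zlib

def rebuild_with_retry(strategies, expected_csum):
    """strategies: (name, callable) pairs, each yielding one candidate block."""
    for name, rebuild in strategies:
        candidate = rebuild()
        if zlib.crc32(candidate) == expected_csum:
            return name, candidate
    raise IOError("all rebuild combinations failed")

good_block = b"known-good data"
csum = zlib.crc32(good_block)
strategies = [
    ("DATA2+P", lambda: b"corrupt"),     # RAID5-style rebuild fails (stale P)
    ("P+Q",     lambda: good_block),     # next combination succeeds
    ("DATA2+Q", lambda: good_block),
]
print(rebuild_with_retry(strategies, csum)[0])   # -> P+Q
```

Without the loop, only the first strategy is tried and the read fails even though two valid rebuild paths exist.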

Thanks,
Qu

> 
>> I have removed the failed drive, so the RAID is now degraded. I hope
>> the data is still recoverable... ☹
>>
> 


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: Need help with potential ~45TB dataloss
  2018-12-03  5:58         ` Qu Wenruo
@ 2018-12-04  3:16           ` Chris Murphy
  2018-12-04 10:09             ` Patrick Dijkgraaf
  0 siblings, 1 reply; 14+ messages in thread
From: Chris Murphy @ 2018-12-04  3:16 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Andrei Borzenkov, bolderbast, Btrfs BTRFS

Also useful information for autopsy, perhaps not for fixing, is to
know whether the SCT ERC value for every drive is less than the
kernel's SCSI driver block device command timeout value. It's super
important that the drive reports an explicit read failure before the
read command is considered failed by the kernel. If the drive is still
trying to do a read and the kernel command timer times out, the kernel
just resets the whole link and we lose the outcome of the hanging
command. Only upon an explicit read error can Btrfs, or md RAID, know
which device and physical sector has a problem, and therefore how to
reconstruct the block and fix the bad sector with a write of known
good data.

smartctl -l scterc /device/
and
cat /sys/block/sda/device/timeout

Only if SCT ERC is enabled with a value below 30, or if the kernel
command timer is changed to be well above 30 (like 180, which is
absolutely crazy but a separate conversation), can we be sure that
there haven't just been resets going on for a while, preventing bad
sectors from being fixed up all along, which can contribute to the
problem. This comes up on the linux-raid (mainly md driver) list all
the time, and it contributes to lost RAID all the time. And arguably
it leads to unnecessary data loss in even the single-device
desktop/laptop use case as well.
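
[Editorial illustration: the rule above can be stated as a small
predicate. This is my own paraphrase, not an existing tool; SCT ERC
values are reported by smartctl in tenths of a second, and the kernel
timeout in seconds.]

```python
def timeouts_are_sane(erc_deciseconds, kernel_timeout_s=30):
    """Return True if the drive will give up (and report an explicit
    read error) before the kernel command timer fires and resets the
    link.

    erc_deciseconds: SCT ERC read timeout in tenths of a second,
                     or None if ERC is disabled/unsupported.
    """
    if erc_deciseconds is not None:
        return erc_deciseconds / 10.0 < kernel_timeout_s
    # No ERC: only tolerable if the kernel timer vastly exceeds the
    # drive's worst-case internal retries (e.g. 180s, as noted above).
    return kernel_timeout_s >= 180

assert timeouts_are_sane(70)            # 7s ERC vs 30s timer: fine
assert not timeouts_are_sane(None)      # no ERC, default 30s timer: risky
assert timeouts_are_sane(None, 180)     # no ERC but huge kernel timer
```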


Chris Murphy


* Re: Need help with potential ~45TB dataloss
  2018-12-03  0:35     ` Qu Wenruo
  2018-12-03  0:45       ` Qu Wenruo
@ 2018-12-04  9:58       ` Patrick Dijkgraaf
  2018-12-09  9:32         ` Patrick Dijkgraaf
  1 sibling, 1 reply; 14+ messages in thread
From: Patrick Dijkgraaf @ 2018-12-04  9:58 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

Hi, thanks again.
Please see answers inline.

-- 
Groet / Cheers,
Patrick Dijkgraaf



On Mon, 2018-12-03 at 08:35 +0800, Qu Wenruo wrote:
> 
> On 2018/12/2 下午5:03, Patrick Dijkgraaf wrote:
> > Hi Qu,
> > 
> > Thanks for helping me!
> > 
> > Please see the responses in-line.
> > Any suggestions based on this?
> > 
> > Thanks!
> > 
> > 
> > On Sat, 2018-12-01 at 07:57 +0800, Qu Wenruo wrote:
> > > On 2018/11/30 下午9:53, Patrick Dijkgraaf wrote:
> > > > Hi all,
> > > > 
> > > > I have been a happy BTRFS user for quite some time. But now I'm
> > > > facing
> > > > a potential ~45TB dataloss... :-(
> > > > I hope someone can help!
> > > > 
> > > > I have Server A and Server B. Both having a 20-devices BTRFS
> > > > RAID6
> > > > filesystem. Because of known RAID5/6 risks, Server B was a
> > > > backup
> > > > of
> > > > Server A.
> > > > After applying updates to server B and reboot, the FS would not
> > > > mount
> > > > anymore. Because it was "just" a backup, I decided to recreate
> > > > the FS
> > > > and perform a new backup. Later, I discovered that the FS was
> > > > not
> > > > broken, but I faced this issue: 
> > > > https://patchwork.kernel.org/patch/10694997/
> > > > 
> > > > 
> > > 
> > > Sorry for the inconvenience.
> > > 
> > > I didn't realize the max_chunk_size limit wasn't reliable at that
> > > time.
> > 
> > No problem, I should not have jumped to the conclusion to recreate
> > the
> > backup volume.
> > 
> > > > Anyway, the FS was already recreated, so I needed to do a new
> > > > backup.
> > > > During the backup (using rsync -vah), Server A (the source)
> > > > encountered
> > > > an I/O error and my rsync failed. In an attempt to "quick fix"
> > > > the
> > > > issue, I rebooted Server A after which the FS would not mount
> > > > anymore.
> > > 
> > > Did you have any dmesg about that IO error?
> > 
> > Yes there was. But I omitted capturing it... The system is now
> > rebooted
> > and I can't retrieve it anymore. :-(
> > 
> > > And how is the reboot scheduled? Forced power off or normal
> > > reboot
> > > command?
> > 
> > The system was rebooted using a normal reboot command.
> 
> Then the problem is pretty serious.
> 
> Possibly already corrupted before.
> 
> > > > I documented what I have tried, below. I have not yet tried
> > > > anything
> > > > except what is shown, because I am afraid of causing more harm
> > > > to
> > > > the FS.
> > > 
> > > Pretty clever, no btrfs check --repair is a pretty good move.
> > > 
> > > > I hope somebody here can give me advice on how to (hopefully)
> > > > retrieve my data...
> > > > 
> > > > Thanks in advance!
> > > > 
> > > > ==========================================
> > > > 
> > > > [root@cornelis ~]# btrfs fi show
> > > > Label: 'cornelis-btrfs'  uuid: ac643516-670e-40f3-aa4c-
> > > > f329fc3795fd
> > > > 	Total devices 1 FS bytes used 463.92GiB
> > > > 	devid    1 size 800.00GiB used 493.02GiB path
> > > > /dev/mapper/cornelis-cornelis--btrfs
> > > > 
> > > > Label: 'data'  uuid: 4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
> > > > 	Total devices 20 FS bytes used 44.85TiB
> > > > 	devid    1 size 3.64TiB used 3.64TiB path /dev/sdn2
> > > > 	devid    2 size 3.64TiB used 3.64TiB path /dev/sdp2
> > > > 	devid    3 size 3.64TiB used 3.64TiB path /dev/sdu2
> > > > 	devid    4 size 3.64TiB used 3.64TiB path /dev/sdx2
> > > > 	devid    5 size 3.64TiB used 3.64TiB path /dev/sdh2
> > > > 	devid    6 size 3.64TiB used 3.64TiB path /dev/sdg2
> > > > 	devid    7 size 3.64TiB used 3.64TiB path /dev/sdm2
> > > > 	devid    8 size 3.64TiB used 3.64TiB path /dev/sdw2
> > > > 	devid    9 size 3.64TiB used 3.64TiB path /dev/sdj2
> > > > 	devid   10 size 3.64TiB used 3.64TiB path /dev/sdt2
> > > > 	devid   11 size 3.64TiB used 3.64TiB path /dev/sdk2
> > > > 	devid   12 size 3.64TiB used 3.64TiB path /dev/sdq2
> > > > 	devid   13 size 3.64TiB used 3.64TiB path /dev/sds2
> > > > 	devid   14 size 3.64TiB used 3.64TiB path /dev/sdf2
> > > > 	devid   15 size 7.28TiB used 588.80GiB path /dev/sdr2
> > > > 	devid   16 size 7.28TiB used 588.80GiB path /dev/sdo2
> > > > 	devid   17 size 7.28TiB used 588.80GiB path /dev/sdv2
> > > > 	devid   18 size 7.28TiB used 588.80GiB path /dev/sdi2
> > > > 	devid   19 size 7.28TiB used 588.80GiB path /dev/sdl2
> > > > 	devid   20 size 7.28TiB used 588.80GiB path /dev/sde2
> > > > 
> > > > [root@cornelis ~]# mount /dev/sdn2 /mnt/data
> > > > mount: /mnt/data: wrong fs type, bad option, bad superblock on
> > > > /dev/sdn2, missing codepage or helper program, or other error.
> > > 
> > > What is the dmesg of the mount failure?
> > 
> > [Sun Dec  2 09:41:08 2018] BTRFS info (device sdn2): disk space
> > caching
> > is enabled
> > [Sun Dec  2 09:41:08 2018] BTRFS info (device sdn2): has skinny
> > extents
> > [Sun Dec  2 09:41:08 2018] BTRFS error (device sdn2): parent
> > transid
> > verify failed on 46451963543552 wanted 114401 found 114173
> > [Sun Dec  2 09:41:08 2018] BTRFS critical (device sdn2): corrupt
> > leaf:
> > root=2 block=46451963543552 slot=0, unexpected item end, have
> > 1387359977 expect 16283
> 
> OK, this shows that one of the copy has mismatched generation while
> the
> other copy is completely corrupted.
> 
> > [Sun Dec  2 09:41:08 2018] BTRFS warning (device sdn2): failed to
> > read
> > tree root
> > [Sun Dec  2 09:41:08 2018] BTRFS error (device sdn2): open_ctree
> > failed
> > 
> > > And have you tried -o ro,degraded ?
> > 
> > Tried it just now, gives the exact same error.
> > 
> > > > [root@cornelis ~]# btrfs check /dev/sdn2
> > > > Opening filesystem to check...
> > > > parent transid verify failed on 46451963543552 wanted 114401
> > > > found
> > > > 114173
> > > > parent transid verify failed on 46451963543552 wanted 114401
> > > > found
> > > > 114173
> > > > checksum verify failed on 46451963543552 found A8F2A769 wanted
> > > > 4C111ADF
> > > > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > > > 8B07ABE4
> > > > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > > > 8B07ABE4
> > > > bad tree block 46451963543552, bytenr mismatch,
> > > > want=46451963543552,
> > > > have=75208089814272
> > > > Couldn't read tree root
> > > 
> > > Would you please also paste the output of "btrfs ins dump-super
> > > /dev/sdn2" ?
> > 
> > [root@cornelis ~]# btrfs ins dump-super /dev/sdn2
> > superblock: bytenr=65536, device=/dev/sdn2
> > ---------------------------------------------------------
> > csum_type		0 (crc32c)
> > csum_size		4
> > csum			0x51725c39 [match]
> > bytenr			65536
> > flags			0x1
> > 			( WRITTEN )
> > magic			_BHRfS_M [match]
> > fsid			4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
> > label			data
> > generation		114401
> > root			46451963543552
> 
> The bytenr matches with the dmesg, so it's tree root node corrupted.
> 
> > sys_array_size		513
> > chunk_root_generation	112769
> > root_level		1
> > chunk_root		22085632
> > chunk_root_level	1
> > log_root		46451935461376
> > log_root_transid	0
> > log_root_level		0
> > total_bytes		104020314161152
> > bytes_used		49308554543104
> > sectorsize		4096
> > nodesize		16384
> > leafsize (deprecated)		16384
> > stripesize		4096
> > root_dir		6
> > num_devices		20
> > compat_flags		0x0
> > compat_ro_flags		0x0
> > incompat_flags		0x1e1
> > 			( MIXED_BACKREF |
> > 			  BIG_METADATA |
> > 			  EXTENDED_IREF |
> > 			  RAID56 |
> > 			  SKINNY_METADATA )
> > cache_generation	114401
> > uuid_tree_generation	114401
> > dev_item.uuid		c6b44903-e849-4403-98c4-f3ba4d0b3fc3
> > dev_item.fsid		4c66fa8b-8fc6-4bba-9d83-02a2a1d69ad5
> > [match]
> > dev_item.type		0
> > dev_item.total_bytes	4000783007744
> > dev_item.bytes_used	4000781959168
> > dev_item.io_align	4096
> > dev_item.io_width	4096
> > dev_item.sector_size	4096
> > dev_item.devid		1
> > dev_item.dev_group	0
> > dev_item.seek_speed	0
> > dev_item.bandwidth	0
> > dev_item.generation	0
> > 
> > > It looks like your tree root (or at least some tree root
> > > nodes/leaves
> > > get corrupted)
> > > 
> > > > ERROR: cannot open file system
> > > 
> > > And since it's your tree root corrupted, you could also try
> > > "btrfs-find-root <device>" to try to get a good old copy of your
> > > tree
> > > root.
> > 
> > The output is rather long. I pasted it here: 
> > https://pastebin.com/FkyBLgj9
> > 
> > I'm unsure what to look for in this output?
> 
> This shows all the candidates of the older tree root bytenr.
> 
> We could use it to try to recover.
> 
> You could then try the following command and see if btrfs check can
> go
> further.
> 
>  # btrfs check -r 45462239363072 <device>

This gives the following output (remember, I removed the disk that
caused the IO errors, so the RAID is still degraded):

[root@cornelis ~]# btrfs check -r 45462239363072 /dev/sdn2
Opening filesystem to check...
warning, device 6 is missing
checksum verify failed on 22544384 found ED96FBF2 wanted 09754644
checksum verify failed on 22544384 found 5630EA32 wanted 1AA6FFF0
checksum verify failed on 22544384 found 5630EA32 wanted 1AA6FFF0
bad tree block 22544384, bytenr mismatch, want=22544384,
have=1147797504
Couldn't read chunk tree
ERROR: cannot open file system


> And the following dump could also help:
> 
>  # btrfs ins dump-tree -b 45462239363072 --follow

This outputs:

[root@cornelis ~]# btrfs ins dump-tree -b 45462239363072 --follow
/dev/sdn2
btrfs-progs v4.19 
warning, device 6 is missing
checksum verify failed on 22544384 found ED96FBF2 wanted 09754644
checksum verify failed on 22544384 found 5630EA32 wanted 1AA6FFF0
checksum verify failed on 22544384 found 5630EA32 wanted 1AA6FFF0
bad tree block 22544384, bytenr mismatch, want=22544384,
have=1147797504
Couldn't read chunk tree
ERROR: unable to open /dev/sdn2

> Thanks,
> Qu
> 
> > > But I suspect the corruption happens before you noticed, thus the
> > > old
> > > tree root may not help much.
> > > 
> > > Also, the output of "btrfs ins dump-tree -t root <device>" will
> > > help.
> > 
> > Here it is:
> > 
> > [root@cornelis ~]# btrfs ins dump-tree -t root /dev/sdn2
> > btrfs-progs v4.19 
> > parent transid verify failed on 46451963543552 wanted 114401 found
> > 114173
> > parent transid verify failed on 46451963543552 wanted 114401 found
> > 114173
> > checksum verify failed on 46451963543552 found A8F2A769 wanted
> > 4C111ADF
> > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > 8B07ABE4
> > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > 8B07ABE4
> > bad tree block 46451963543552, bytenr mismatch,
> > want=46451963543552,
> > have=75208089814272
> > Couldn't read tree root
> > ERROR: unable to open /dev/sdn2
> > 
> > > Thanks,
> > > Qu
> > 
> > No, thank YOU! :-)
> > 
> > > > [root@cornelis ~]# btrfs restore /dev/sdn2 /mnt/data/
> > > > parent transid verify failed on 46451963543552 wanted 114401
> > > > found
> > > > 114173
> > > > parent transid verify failed on 46451963543552 wanted 114401
> > > > found
> > > > 114173
> > > > checksum verify failed on 46451963543552 found A8F2A769 wanted
> > > > 4C111ADF
> > > > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > > > 8B07ABE4
> > > > checksum verify failed on 46451963543552 found 32153BE8 wanted
> > > > 8B07ABE4
> > > > bad tree block 46451963543552, bytenr mismatch,
> > > > want=46451963543552,
> > > > have=75208089814272
> > > > Couldn't read tree root
> > > > Could not open root, trying backup super
> > > > warning, device 14 is missing
> > > > warning, device 13 is missing
> > > > warning, device 12 is missing
> > > > warning, device 11 is missing
> > > > warning, device 10 is missing
> > > > warning, device 9 is missing
> > > > warning, device 8 is missing
> > > > warning, device 7 is missing
> > > > warning, device 6 is missing
> > > > warning, device 5 is missing
> > > > warning, device 4 is missing
> > > > warning, device 3 is missing
> > > > warning, device 2 is missing
> > > > checksum verify failed on 22085632 found 5630EA32 wanted
> > > > 1AA6FFF0
> > > > checksum verify failed on 22085632 found 5630EA32 wanted
> > > > 1AA6FFF0
> > > > bad tree block 22085632, bytenr mismatch, want=22085632,
> > > > have=1147797504
> > > > ERROR: cannot read chunk root
> > > > Could not open root, trying backup super
> > > > warning, device 14 is missing
> > > > warning, device 13 is missing
> > > > warning, device 12 is missing
> > > > warning, device 11 is missing
> > > > warning, device 10 is missing
> > > > warning, device 9 is missing
> > > > warning, device 8 is missing
> > > > warning, device 7 is missing
> > > > warning, device 6 is missing
> > > > warning, device 5 is missing
> > > > warning, device 4 is missing
> > > > warning, device 3 is missing
> > > > warning, device 2 is missing
> > > > checksum verify failed on 22085632 found 5630EA32 wanted
> > > > 1AA6FFF0
> > > > checksum verify failed on 22085632 found 5630EA32 wanted
> > > > 1AA6FFF0
> > > > bad tree block 22085632, bytenr mismatch, want=22085632,
> > > > have=1147797504
> > > > ERROR: cannot read chunk root
> > > > Could not open root, trying backup super
> > > > 
> > > > [root@cornelis ~]# uname -r
> > > > 4.18.16-arch1-1-ARCH
> > > > 
> > > > [root@cornelis ~]# btrfs --version
> > > > btrfs-progs v4.19
> > > > 
> 
> 



* Re: Need help with potential ~45TB dataloss
  2018-12-04  3:16           ` Chris Murphy
@ 2018-12-04 10:09             ` Patrick Dijkgraaf
  2018-12-04 19:38               ` Chris Murphy
  0 siblings, 1 reply; 14+ messages in thread
From: Patrick Dijkgraaf @ 2018-12-04 10:09 UTC (permalink / raw)
  To: Chris Murphy, Qu Wenruo; +Cc: Andrei Borzenkov, Btrfs BTRFS

Hi Chris,

See the output below. Any suggestions based on it?
Thanks!

-- 
Groet / Cheers,
Patrick Dijkgraaf



On Mon, 2018-12-03 at 20:16 -0700, Chris Murphy wrote:
> Also useful information for the autopsy, perhaps not for fixing, is
> to know whether the SCT ERC value for every drive is less than the
> kernel's SCSI driver block device command timeout value. It's super
> important that the drive reports an explicit read failure before the
> read command is considered failed by the kernel. If the drive is
> still trying to do a read when the kernel command timer times out,
> the kernel just resets the whole link and we lose the outcome of the
> hanging command. Only upon an explicit read error can Btrfs, or md
> RAID, know which device and physical sector has a problem, and
> therefore how to reconstruct the block and fix the bad sector with a
> write of known good data.
> 
> smartctl -l scterc /device/

Seems to not work:

[root@cornelis ~]# for disk in /dev/sd{e..x}; do echo ${disk}; smartctl
-l scterc ${disk}; done
/dev/sde
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdf
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdg
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdh
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdi
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdj
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdk
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

Smartctl open device: /dev/sdk failed: No such device
/dev/sdl
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdm
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdn
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SCT Error Recovery Control command not supported

/dev/sdo
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdp
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdq
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdr
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sds
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdt
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SCT Error Recovery Control command not supported

/dev/sdu
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdv
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdw
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SMART WRITE LOG does not return COUNT and LBA_LOW register
SCT (Get) Error Recovery Control command failed

/dev/sdx
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.18.16-arch1-1-ARCH]
(local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, 
www.smartmontools.org

SCT Error Recovery Control command not supported

> and
> cat /sys/block/sda/device/timeout

[root@cornelis ~]# cat /sys/block/sd{e..x}/device/timeout
30
30
30
30
30
30
cat: /sys/block/sdk/device/timeout: No such file or directory
30
30
30
30
30
30
30
30
30
30
30
30
30

> Only if SCT ERC is enabled with a value below 30, or if the kernel
> command timer is changed to be well above 30 (like 180, which is
> absolutely crazy but a separate conversation), can we be sure that
> there haven't just been resets going on for a while, preventing bad
> sectors from being fixed up all along, which can contribute to the
> problem. This comes up on the linux-raid (mainly md driver) list all
> the time, and it contributes to lost RAID all the time. And arguably
> it leads to unnecessary data loss in even the single-device
> desktop/laptop use case as well.
> 
> 
> Chris Murphy



* Re: Need help with potential ~45TB dataloss
  2018-12-04 10:09             ` Patrick Dijkgraaf
@ 2018-12-04 19:38               ` Chris Murphy
  2018-12-09  9:28                 ` Patrick Dijkgraaf
  0 siblings, 1 reply; 14+ messages in thread
From: Chris Murphy @ 2018-12-04 19:38 UTC (permalink / raw)
  To: Patrick Dijkgraaf, Btrfs BTRFS

On Tue, Dec 4, 2018 at 3:09 AM Patrick Dijkgraaf
<bolderbast@duckstad.net> wrote:
>
> Hi Chris,
>
> See the output below. Any suggestions based on it?

If they're SATA drives, they may not support SCT ERC; and if they're
SAS, then depending on what controller they're behind, smartctl might
need a hint to properly ask the drive for SCT ERC status. The simplest
way to know is to run 'smartctl -x' on one drive, assuming they're all
the same basic make/model other than size.


-- 
Chris Murphy


* Re: Need help with potential ~45TB dataloss
  2018-12-04 19:38               ` Chris Murphy
@ 2018-12-09  9:28                 ` Patrick Dijkgraaf
  0 siblings, 0 replies; 14+ messages in thread
From: Patrick Dijkgraaf @ 2018-12-09  9:28 UTC (permalink / raw)
  To: Chris Murphy, Btrfs BTRFS

Hi Chris,

they're SATA.
smartctl -x gives:

SCT Error Recovery Control command not supported

So it seems like we can't do anything with it.
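
[Editorial note: when drives don't support SCT ERC, the usual fallback
is to raise the kernel command timer instead, so the drive's internal
retries finish before the link is reset. A udev rule is a common way
to make that persistent; the file name and match pattern below are
illustrative, adjust for your setup.]

```
# /etc/udev/rules.d/60-block-timeout.rules  (illustrative name)
# Raise the SCSI command timeout to 180s for all sd* disks, so the
# kernel doesn't reset the link before a drive gives up on a bad
# sector and reports an explicit read error.
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{device/timeout}="180"
```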

-- 
Groet / Cheers,
Patrick Dijkgraaf



On Tue, 2018-12-04 at 12:38 -0700, Chris Murphy wrote:
> On Tue, Dec 4, 2018 at 3:09 AM Patrick Dijkgraaf
> <
> bolderbast@duckstad.net
> > wrote:
> > Hi Chris,
> > 
> > See the output below. Any suggestions based on it?
> 
> If they're SATA drives, they may not support SCT ERC; and if they're
> SAS, then depending on what controller they're behind, smartctl might
> need a hint to properly ask the drive for SCT ERC status. The
> simplest way to know is to run 'smartctl -x' on one drive, assuming
> they're all the same basic make/model other than size.
> 
> 




* Re: Need help with potential ~45TB dataloss
  2018-12-04  9:58       ` Patrick Dijkgraaf
@ 2018-12-09  9:32         ` Patrick Dijkgraaf
  0 siblings, 0 replies; 14+ messages in thread
From: Patrick Dijkgraaf @ 2018-12-09  9:32 UTC (permalink / raw)
  To: Qu Wenruo, linux-btrfs

So, does anyone have any suggestion on how I might recover some of the
data? If not, I'll cut my losses and create a new array...

Thanks!

-- 
Groet / Cheers,
Patrick Dijkgraaf



On Tue, 2018-12-04 at 10:58 +0100, Patrick Dijkgraaf wrote:
> Hi, thanks again.
> Please see answers inline.
> 
> 



end of thread, other threads:[~2018-12-09  9:32 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-30 13:53 Need help with potential ~45TB dataloss Patrick Dijkgraaf
2018-11-30 23:57 ` Qu Wenruo
2018-12-02  9:03   ` Patrick Dijkgraaf
2018-12-02 20:14     ` Patrick Dijkgraaf
2018-12-02 20:30       ` Andrei Borzenkov
2018-12-03  5:58         ` Qu Wenruo
2018-12-04  3:16           ` Chris Murphy
2018-12-04 10:09             ` Patrick Dijkgraaf
2018-12-04 19:38               ` Chris Murphy
2018-12-09  9:28                 ` Patrick Dijkgraaf
2018-12-03  0:35     ` Qu Wenruo
2018-12-03  0:45       ` Qu Wenruo
2018-12-04  9:58       ` Patrick Dijkgraaf
2018-12-09  9:32         ` Patrick Dijkgraaf

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).