linux-btrfs.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
@ 2019-03-25 23:50 berodual_xyz
  2019-03-26  5:18 ` Qu Wenruo
  0 siblings, 1 reply; 13+ messages in thread
From: berodual_xyz @ 2019-03-25 23:50 UTC (permalink / raw)
  To: linux-btrfs

Dear all, as I have already posted (apologies for the separate mails), I have a corrupt filesystem that it is very important to recover.

Please note I would really appreciate assistance and am willing to PAY for consultation and time.

Kernel 4.20.17
btrfs-progs 4.20.2

The filesystem consists of 4 devices (each a HW RAID6 volume). Each of them shows different superblock backups, but my hope is that backup 2 is "consistent" across all of them.

Is there any way to restore a specific superblock backup and then attempt to mount the filesystem? Running "btrfs restore" did partially recover the data, but the most important part is still missing.
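In other words, is something along these lines supposed to work here, or is there a better approach? (Device and mount point below are just placeholders.)

# mount -o ro,nologreplay,usebackuproot /dev/sdb /mnt/recovery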

See also how "generation" is different between devices 1,2 and 3,4.

Please see below output of the different devices.

My hope is that I can somehow "restore" the superblock at the lowest common generation, 60232, to make the FS at least mountable read-only.


################# SDB ############

# btrfs inspect-internal dump-super -f /dev/sdb
superblock: bytenr=65536, device=/dev/sdb
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0xebea13e2 [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd
metadata_uuid		8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd
label			FS1
generation		60233
root			55432745107456
sys_array_size		194
chunk_root_generation	60232
root_level		1
chunk_root		1146880
chunk_root_level	1
log_root		55432747958272
log_root_transid	0
log_root_level		0
total_bytes		95999901040640
bytes_used		60378125692928
sectorsize		4096
nodesize		16384
leafsize (deprecated)	16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x3
			( FREE_SPACE_TREE |
			  FREE_SPACE_TREE_VALID )
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	18446744073709551615
uuid_tree_generation	60233
dev_item.uuid		7833ce66-ffa3-4baa-a2db-e98d5ec2e369
dev_item.fsid		8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd [match]
dev_item.type		0
dev_item.total_bytes	23999975260160
dev_item.bytes_used	15122592432128
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		1
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 1048576)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 1048576
			dev_uuid 7833ce66-ffa3-4baa-a2db-e98d5ec2e369
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 60713713270784)
		length 33554432 owner 2 stripe_len 65536 type SYSTEM
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 1 sub_stripes 1
			stripe 0 devid 2 offset 15033460326400
			dev_uuid bb5e99fe-3beb-44f3-a6e0-43395ddfcd84
backup_roots[4]:
	backup 0:
		backup_tree_root:	55432745107456	gen: 60233	level: 1
		backup_chunk_root:	1146880	gen: 60232	level: 1
		backup_extent_root:	55432763981824	gen: 60233	level: 3
		backup_fs_root:		55432763850752	gen: 60233	level: 2
		backup_dev_root:	55432753725440	gen: 60232	level: 1
		backup_csum_root:	55432764342272	gen: 60233	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60378125692928
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	55432755216384	gen: 60230	level: 1
		backup_chunk_root:	1048576	gen: 60230	level: 1
		backup_extent_root:	55432754511872	gen: 60230	level: 3
		backup_fs_root:		55432756281344	gen: 60231	level: 2
		backup_dev_root:	55432755625984	gen: 60230	level: 1
		backup_csum_root:	55432750727168	gen: 60230	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60372281278464
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	55432746385408	gen: 60231	level: 1
		backup_chunk_root:	1048576	gen: 60230	level: 1
		backup_extent_root:	55432738242560	gen: 60231	level: 3
		backup_fs_root:		55432756281344	gen: 60231	level: 2
		backup_dev_root:	55432755625984	gen: 60230	level: 1
		backup_csum_root:	55432750727168	gen: 60230	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60372283375616
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	55432763015168	gen: 60232	level: 1
		backup_chunk_root:	1146880	gen: 60232	level: 1
		backup_extent_root:	55432763981824	gen: 60233	level: 3
		backup_fs_root:		55432763850752	gen: 60233	level: 2
		backup_dev_root:	55432753725440	gen: 60232	level: 1
		backup_csum_root:	55432764342272	gen: 60233	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60378125692928
		backup_num_devices:	4

########### SDC ############

btrfs inspect-internal dump-super -f /dev/sdc
superblock: bytenr=65536, device=/dev/sdc
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x310a7e9a [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd
metadata_uuid		8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd
label			FS1
generation		60233
root			55432745107456
sys_array_size		194
chunk_root_generation	60232
root_level		1
chunk_root		1146880
chunk_root_level	1
log_root		55432747958272
log_root_transid	0
log_root_level		0
total_bytes		95999901040640
bytes_used		60378125692928
sectorsize		4096
nodesize		16384
leafsize (deprecated)	16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x3
			( FREE_SPACE_TREE |
			  FREE_SPACE_TREE_VALID )
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	18446744073709551615
uuid_tree_generation	60233
dev_item.uuid		bb5e99fe-3beb-44f3-a6e0-43395ddfcd84
dev_item.fsid		8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd [match]
dev_item.type		0
dev_item.total_bytes	23999975260160
dev_item.bytes_used	15122613403648
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		2
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 1048576)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 1048576
			dev_uuid 7833ce66-ffa3-4baa-a2db-e98d5ec2e369
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 60713713270784)
		length 33554432 owner 2 stripe_len 65536 type SYSTEM
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 1 sub_stripes 1
			stripe 0 devid 2 offset 15033460326400
			dev_uuid bb5e99fe-3beb-44f3-a6e0-43395ddfcd84
backup_roots[4]:
	backup 0:
		backup_tree_root:	55432745107456	gen: 60233	level: 1
		backup_chunk_root:	1146880	gen: 60232	level: 1
		backup_extent_root:	55432763981824	gen: 60233	level: 3
		backup_fs_root:		55432763850752	gen: 60233	level: 2
		backup_dev_root:	55432753725440	gen: 60232	level: 1
		backup_csum_root:	55432764342272	gen: 60233	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60378125692928
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	55432755216384	gen: 60230	level: 1
		backup_chunk_root:	1048576	gen: 60230	level: 1
		backup_extent_root:	55432754511872	gen: 60230	level: 3
		backup_fs_root:		55432756281344	gen: 60231	level: 2
		backup_dev_root:	55432755625984	gen: 60230	level: 1
		backup_csum_root:	55432750727168	gen: 60230	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60372281278464
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	55432746385408	gen: 60231	level: 1
		backup_chunk_root:	1048576	gen: 60230	level: 1
		backup_extent_root:	55432738242560	gen: 60231	level: 3
		backup_fs_root:		55432756281344	gen: 60231	level: 2
		backup_dev_root:	55432755625984	gen: 60230	level: 1
		backup_csum_root:	55432750727168	gen: 60230	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60372283375616
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	55432763015168	gen: 60232	level: 1
		backup_chunk_root:	1146880	gen: 60232	level: 1
		backup_extent_root:	55432763981824	gen: 60233	level: 3
		backup_fs_root:		55432763850752	gen: 60233	level: 2
		backup_dev_root:	55432753725440	gen: 60232	level: 1
		backup_csum_root:	55432764342272	gen: 60233	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60378125692928
		backup_num_devices:	4

########### SDD ##########

btrfs inspect-internal dump-super -f /dev/sdd
superblock: bytenr=65536, device=/dev/sdd
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x56f1abb4 [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd
metadata_uuid		8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd
label			FS1
generation		60234
root			55432766062592
sys_array_size		194
chunk_root_generation	60234
root_level		1
chunk_root		1048576
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		95999901040640
bytes_used		60382615777280
sectorsize		4096
nodesize		16384
leafsize (deprecated)	16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x3
			( FREE_SPACE_TREE |
			  FREE_SPACE_TREE_VALID )
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	18446744073709551615
uuid_tree_generation	60234
dev_item.uuid		cc6e5f1c-081c-441f-9634-6908164e1375
dev_item.fsid		8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd [match]
dev_item.type		0
dev_item.total_bytes	23999975260160
dev_item.bytes_used	15123653591040
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		3
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 1048576)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 1048576
			dev_uuid 7833ce66-ffa3-4baa-a2db-e98d5ec2e369
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 60713713270784)
		length 33554432 owner 2 stripe_len 65536 type SYSTEM
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 1 sub_stripes 1
			stripe 0 devid 2 offset 15033460326400
			dev_uuid bb5e99fe-3beb-44f3-a6e0-43395ddfcd84
backup_roots[4]:
	backup 0:
		backup_tree_root:	55432745107456	gen: 60233	level: 1
		backup_chunk_root:	1146880	gen: 60232	level: 1
		backup_extent_root:	55432763981824	gen: 60233	level: 3
		backup_fs_root:		55432763850752	gen: 60233	level: 2
		backup_dev_root:	55432753725440	gen: 60232	level: 1
		backup_csum_root:	55432764342272	gen: 60233	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60378125692928
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	55432766062592	gen: 60234	level: 1
		backup_chunk_root:	1048576	gen: 60234	level: 1
		backup_extent_root:	55432746352640	gen: 60234	level: 3
		backup_fs_root:		55432763850752	gen: 60233	level: 2
		backup_dev_root:	55432755789824	gen: 60234	level: 1
		backup_csum_root:	55432764342272	gen: 60233	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60382615777280
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	55432746385408	gen: 60231	level: 1
		backup_chunk_root:	1048576	gen: 60230	level: 1
		backup_extent_root:	55432738242560	gen: 60231	level: 3
		backup_fs_root:		55432756281344	gen: 60231	level: 2
		backup_dev_root:	55432755625984	gen: 60230	level: 1
		backup_csum_root:	55432750727168	gen: 60230	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60372283375616
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	55432763015168	gen: 60232	level: 1
		backup_chunk_root:	1146880	gen: 60232	level: 1
		backup_extent_root:	55432763981824	gen: 60233	level: 3
		backup_fs_root:		55432763850752	gen: 60233	level: 2
		backup_dev_root:	55432753725440	gen: 60232	level: 1
		backup_csum_root:	55432764342272	gen: 60233	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60378125692928
		backup_num_devices:	4

####### SDE #######

 btrfs inspect-internal dump-super -f /dev/sde
superblock: bytenr=65536, device=/dev/sde
---------------------------------------------------------
csum_type		0 (crc32c)
csum_size		4
csum			0x6fcc7a25 [match]
bytenr			65536
flags			0x1
			( WRITTEN )
magic			_BHRfS_M [match]
fsid			8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd
metadata_uuid		8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd
label			FS1
generation		60234
root			55432766062592
sys_array_size		194
chunk_root_generation	60234
root_level		1
chunk_root		1048576
chunk_root_level	1
log_root		0
log_root_transid	0
log_root_level		0
total_bytes		95999901040640
bytes_used		60382615777280
sectorsize		4096
nodesize		16384
leafsize (deprecated)	16384
stripesize		4096
root_dir		6
num_devices		4
compat_flags		0x0
compat_ro_flags		0x3
			( FREE_SPACE_TREE |
			  FREE_SPACE_TREE_VALID )
incompat_flags		0x161
			( MIXED_BACKREF |
			  BIG_METADATA |
			  EXTENDED_IREF |
			  SKINNY_METADATA )
cache_generation	18446744073709551615
uuid_tree_generation	60234
dev_item.uuid		d97063ad-078a-44b5-ae47-4f0f9f9755f9
dev_item.fsid		8b19ff46-3f42-4f51-be6b-5fc8a7d8f2cd [match]
dev_item.type		0
dev_item.total_bytes	23999975260160
dev_item.bytes_used	15123653591040
dev_item.io_align	4096
dev_item.io_width	4096
dev_item.sector_size	4096
dev_item.devid		4
dev_item.dev_group	0
dev_item.seek_speed	0
dev_item.bandwidth	0
dev_item.generation	0
sys_chunk_array[2048]:
	item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 1048576)
		length 4194304 owner 2 stripe_len 65536 type SYSTEM
		io_align 4096 io_width 4096 sector_size 4096
		num_stripes 1 sub_stripes 0
			stripe 0 devid 1 offset 1048576
			dev_uuid 7833ce66-ffa3-4baa-a2db-e98d5ec2e369
	item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 60713713270784)
		length 33554432 owner 2 stripe_len 65536 type SYSTEM
		io_align 65536 io_width 65536 sector_size 4096
		num_stripes 1 sub_stripes 1
			stripe 0 devid 2 offset 15033460326400
			dev_uuid bb5e99fe-3beb-44f3-a6e0-43395ddfcd84
backup_roots[4]:
	backup 0:
		backup_tree_root:	55432745107456	gen: 60233	level: 1
		backup_chunk_root:	1146880	gen: 60232	level: 1
		backup_extent_root:	55432763981824	gen: 60233	level: 3
		backup_fs_root:		55432763850752	gen: 60233	level: 2
		backup_dev_root:	55432753725440	gen: 60232	level: 1
		backup_csum_root:	55432764342272	gen: 60233	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60378125692928
		backup_num_devices:	4

	backup 1:
		backup_tree_root:	55432766062592	gen: 60234	level: 1
		backup_chunk_root:	1048576	gen: 60234	level: 1
		backup_extent_root:	55432746352640	gen: 60234	level: 3
		backup_fs_root:		55432763850752	gen: 60233	level: 2
		backup_dev_root:	55432755789824	gen: 60234	level: 1
		backup_csum_root:	55432764342272	gen: 60233	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60382615777280
		backup_num_devices:	4

	backup 2:
		backup_tree_root:	55432746385408	gen: 60231	level: 1
		backup_chunk_root:	1048576	gen: 60230	level: 1
		backup_extent_root:	55432738242560	gen: 60231	level: 3
		backup_fs_root:		55432756281344	gen: 60231	level: 2
		backup_dev_root:	55432755625984	gen: 60230	level: 1
		backup_csum_root:	55432750727168	gen: 60230	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60372283375616
		backup_num_devices:	4

	backup 3:
		backup_tree_root:	55432763015168	gen: 60232	level: 1
		backup_chunk_root:	1146880	gen: 60232	level: 1
		backup_extent_root:	55432763981824	gen: 60233	level: 3
		backup_fs_root:		55432763850752	gen: 60233	level: 2
		backup_dev_root:	55432753725440	gen: 60232	level: 1
		backup_csum_root:	55432764342272	gen: 60233	level: 0
		backup_total_bytes:	95999901040640
		backup_bytes_used:	60378125692928
		backup_num_devices:	4

###########################


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
  2019-03-25 23:50 backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount berodual_xyz
@ 2019-03-26  5:18 ` Qu Wenruo
  2019-03-26  6:36   ` Andrei Borzenkov
  0 siblings, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2019-03-26  5:18 UTC (permalink / raw)
  To: berodual_xyz, linux-btrfs


[-- Attachment #1.1: Type: text/plain, Size: 18446 bytes --]



On 2019/3/26 7:50 AM, berodual_xyz wrote:
> Dear all, I had posted already (excuse for separate mails) that I have a corrupt filesystem that would be very important to get recovered.
> 
> Please note I would really appreciate assistance and am willing to PAY for consultation and time.

That's not how the open source community works; that's how commercial
support works.

> 
> Kernel 4.20.17
> btrfs-progs 4.20.2
> 
> The filesystem consists of 4 (HW raid6) devices, each of them shows different superblock backups but my hope is that backup 2 is "consistent" across all of them.
> 
> Is there any way to restore a specific superblock backup and attempt to mount the filesystem? Running "btrfs restore" did partially recover the data, but the most important bit is still missing.

Your sda and sdb are at gen 60233 while sdd and sde are at gen 60234.

It's possible to make the kernel assemble its device list manually using
the "device=" mount option.

Since you're using RAID6, it's possible to recover using only 2 devices,
but in that case you need the "degraded" mount option.
And to avoid further problems, you should mount with "ro,nologreplay"
too, as sda/sdb/sdc have a dirty log while sde doesn't.

Furthermore, you still have SINGLE system chunks, which means either your
fs is pretty old, or you haven't balanced it in a very long time.

Anyway, that means you have to include sda and sdb.

So the conclusion is: try to mount the fs from sda with the mount options
"device=/dev/sda,device=/dev/sdb,ro,degraded,nologreplay".

If it still fails due to extent tree corruption, then try my
experimental patches:
https://github.com/adam900710/linux/tree/rescue_options

It adds a new mount option, "rescue=skip_bg", to skip the extent tree
completely, acting as an in-kernel counterpart to btrfs restore.
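With those patches applied, the attempt would look roughly like this (mount
point again only a placeholder):

# mount -o ro,nologreplay,rescue=skip_bg /dev/sda /mnt/recovery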

Thanks,
Qu

> 
> See also how "generation" is different between devices 1,2 and 3,4.
> 
> Please see below output of the different devices.
> 
> My hope is that I can somehow "restore" the smallest common generation of superblock on 60232 to make the FS at least read-only mountable.
> 
> [superblock dump output for /dev/sdb, /dev/sdc, /dev/sdd and /dev/sde
> snipped; identical to the dumps quoted in full in the original message
> above]


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
  2019-03-26  5:18 ` Qu Wenruo
@ 2019-03-26  6:36   ` Andrei Borzenkov
       [not found]     ` <QXHERo-tRpJm0-hY7ZotKSeF7CfsqeoH0XAf-wmRRsbzVuNWv6BTb9Q2xLQ_KgLlJNQJNSjGK9xlYcqv6QP3kKpFsvtyN0CBex5bFTCjiNE=@protonmail.com>
  2019-03-26 17:38     ` Chris Murphy
  0 siblings, 2 replies; 13+ messages in thread
From: Andrei Borzenkov @ 2019-03-26  6:36 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: berodual_xyz, linux-btrfs

On Tue, Mar 26, 2019 at 8:19 AM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> >
> > Kernel 4.20.17
> > btrfs-progs 4.20.2
> >
> > The filesystem consists of 4 (HW raid6) devices, each of them shows different superblock backups but my hope is that backup 2 is "consistent" across all of them.
> >
> > Is there any way to restore a specific superblock backup and attempt to mount the filesystem? Running "btrfs restore" did partially recover the data, but the most important bit is still missing.
>
> You sda and sdb are at gen 60233 while sdd and sde are at gen 60234.
>
> It's possible to allow kernel to manually assemble its device list using
> "device=" mount option.
>
> Since you're using RAID6, it's possible to recover using 2 devices only,
> but in that case you need "degraded" mount option.

He has btrfs raid0 profile on top of hardware RAID6 devices.

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
       [not found]     ` <QXHERo-tRpJm0-hY7ZotKSeF7CfsqeoH0XAf-wmRRsbzVuNWv6BTb9Q2xLQ_KgLlJNQJNSjGK9xlYcqv6QP3kKpFsvtyN0CBex5bFTCjiNE=@protonmail.com>
@ 2019-03-26  8:53       ` berodual_xyz
       [not found]       ` <cc557b84-602c-8b2e-33f9-3c6d56578631@gmx.com>
  1 sibling, 0 replies; 13+ messages in thread
From: berodual_xyz @ 2019-03-26  8:53 UTC (permalink / raw)
  To: linux-btrfs

Thank you both for your input.

see below.

> > You sda and sdb are at gen 60233 while sdd and sde are at gen 60234.
> > It's possible to allow kernel to manually assemble its device list using
> > "device=" mount option.
> > Since you're using RAID6, it's possible to recover using 2 devices only,
> > but in that case you need "degraded" mount option.
>
> He has btrfs raid0 profile on top of hardware RAID6 devices.

Correct, my FS is a "raid0" across four hardware RAID6 devices. The underlying disks behind the raid controller are fine, as are the volumes themselves.

The only corruption seems to be on the btrfs side.

Does your tip regarding mounting by explicitly specifying the devices still make sense? Will this figure out automatically which generation to use?

I am at the moment in the process of using "btrfs restore" to pull more data from the filesystem without making any further changes.
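Roughly along these lines (the destination directory is just an example):

# btrfs restore -v /dev/sdb /mnt/restore_target/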

After that I am happy to continue testing, and will gladly try the "skip_bg" patch you mentioned - but if you think there is some other way to mount (just for recovery purposes - read-only is fine!) despite the devices being at different generations, I would highly appreciate it.

Thanks Qu and Andrei!

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
       [not found]       ` <cc557b84-602c-8b2e-33f9-3c6d56578631@gmx.com>
@ 2019-03-26 10:24         ` berodual_xyz
  2019-03-26 13:37           ` Qu Wenruo
  0 siblings, 1 reply; 13+ messages in thread
From: berodual_xyz @ 2019-03-26 10:24 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: Andrei Borzenkov, linux-btrfs

Mount messages below.

Thanks for your input, Qu!

##
[42763.884134] BTRFS info (device sdd): disabling free space tree
[42763.884138] BTRFS info (device sdd): force clearing of disk cache
[42763.884140] BTRFS info (device sdd): has skinny extents
[42763.885207] BTRFS error (device sdd): parent transid verify failed on 1048576 wanted 60234 found 60230
[42763.885263] BTRFS error (device sdd): failed to read chunk root
[42763.900922] BTRFS error (device sdd): open_ctree failed
##




Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Tuesday, 26. March 2019 10:21, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:

> On 2019/3/26 4:52 PM, berodual_xyz wrote:
>
> > Thank you both for your input.
> > see below.
> >
> > > > You sda and sdb are at gen 60233 while sdd and sde are at gen 60234.
> > > > It's possible to allow kernel to manually assemble its device list using
> > > > "device=" mount option.
> > > > Since you're using RAID6, it's possible to recover using 2 devices only,
> > > > but in that case you need "degraded" mount option.
> > >
> > > He has btrfs raid0 profile on top of hardware RAID6 devices.
> >
> > Correct, my FS is a "raid0" across four hardware-raid based raid6 devices. The underlying devices of the raid controller are fine, same as the volumes themselves.
>
> Then there is not much we can do.
>
> The superblocks show that all 4 of your devices are in 2 different states
> (older generation with a dirty log, newer generation without a log).
>
> This means some writes didn't reach all devices.
>
> > Only corruption seems to be on the btrfs side.
>
> Please provide the kernel message when trying to mount the fs.
>
> > Does your tip regarding mounting by explicitly specifying the devices still make sense?
>
> Not really. For RAID0 case, it doesn't make much sense.
>
> > Will this figure out automatically which generation to use?
>
> You could try; since all of those mount options keep btrfs completely RO (no
> log replay), it should be pretty safe.
>
> > I am at the moment in the process of using "btrfs restore" to pull more data from the filesystem without making any further changes.
> > After that I am happy to continue testing, and will happily test your mentioned "skip_bg" patch - but if you think that there is some other way to mount (just for recovery purpose - read only is fine!) while having different gens on the devices, I highly appreciate it.
>
> With mounting failure dmesg, it should be pretty easy to determine
> whether my skip_bg will work.
>
> Thanks,
> Qu
>
> > Thanks Qu and Andrei!


^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
  2019-03-26 10:24         ` berodual_xyz
@ 2019-03-26 13:37           ` Qu Wenruo
  2019-03-27 20:35             ` Martin Raiber
  0 siblings, 1 reply; 13+ messages in thread
From: Qu Wenruo @ 2019-03-26 13:37 UTC (permalink / raw)
  To: berodual_xyz; +Cc: Andrei Borzenkov, linux-btrfs


[-- Attachment #1.1: Type: text/plain, Size: 3311 bytes --]



On 2019/3/26 6:24 PM, berodual_xyz wrote:
> Mount messages below.
> 
> Thanks for your input, Qu!
> 
> ##
> [42763.884134] BTRFS info (device sdd): disabling free space tree
> [42763.884138] BTRFS info (device sdd): force clearing of disk cache
> [42763.884140] BTRFS info (device sdd): has skinny extents
> [42763.885207] BTRFS error (device sdd): parent transid verify failed on 1048576 wanted 60234 found 60230

So btrfs is using the latest superblock while the good one should be the
old superblock.

Btrfs-progs is able to just ignore the transid mismatch, but the kernel
doesn't, and shouldn't.

In fact, we should allow "btrfs rescue super-recover" to use superblocks
from other devices to replace the outdated one.

So my patch won't help at all; the failure happens at the very beginning,
during device list initialization.

BTW, if btrfs restore can't recover certain files, I don't believe any
rescue kernel mount option can do more.

Thanks,
Qu

> [42763.885263] BTRFS error (device sdd): failed to read chunk root
> [42763.900922] BTRFS error (device sdd): open_ctree failed
> ##
> 
> 
> 
> 
> Sent with ProtonMail Secure Email.
> 
> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
> On Tuesday, 26. March 2019 10:21, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> 
>> On 2019/3/26 4:52 PM, berodual_xyz wrote:
>>
>>> Thank you both for your input.
>>> see below.
>>>
>>>>> You sda and sdb are at gen 60233 while sdd and sde are at gen 60234.
>>>>> It's possible to allow kernel to manually assemble its device list using
>>>>> "device=" mount option.
>>>>> Since you're using RAID6, it's possible to recover using 2 devices only,
>>>>> but in that case you need "degraded" mount option.
>>>>
>>>> He has btrfs raid0 profile on top of hardware RAID6 devices.
>>>
>>> Correct, my FS is a "raid0" across four hardware-raid based raid6 devices. The underlying devices of the raid controller are fine, same as the volumes themselves.
>>
>> Then there is not much we can do.
>>
>> The superblocks show that all 4 of your devices are in 2 different states
>> (older generation with a dirty log, newer generation without a log).
>>
>> This means some writes didn't reach all devices.
>>
>>> Only corruption seems to be on the btrfs side.
>>
>> Please provide the kernel message when trying to mount the fs.
>>
>>> Does your tip regarding mounting by explicitly specifying the devices still make sense?
>>
>> Not really. For RAID0 case, it doesn't make much sense.
>>
>>> Will this figure out automatically which generation to use?
>>
>> You could try; since all of those mount options keep btrfs completely RO (no
>> log replay), it should be pretty safe.
>>
>>> I am at the moment in the process of using "btrfs restore" to pull more data from the filesystem without making any further changes.
>>> After that I am happy to continue testing, and will happily test your mentioned "skip_bg" patch - but if you think that there is some other way to mount (just for recovery purpose - read only is fine!) while having different gens on the devices, I highly appreciate it.
>>
>> With mounting failure dmesg, it should be pretty easy to determine
>> whether my skip_bg will work.
>>
>> Thanks,
>> Qu
>>
>>> Thanks Qu and Andrei!
> 


[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 488 bytes --]

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
  2019-03-26  6:36   ` Andrei Borzenkov
       [not found]     ` <QXHERo-tRpJm0-hY7ZotKSeF7CfsqeoH0XAf-wmRRsbzVuNWv6BTb9Q2xLQ_KgLlJNQJNSjGK9xlYcqv6QP3kKpFsvtyN0CBex5bFTCjiNE=@protonmail.com>
@ 2019-03-26 17:38     ` Chris Murphy
  2019-03-26 17:51       ` Chris Murphy
  1 sibling, 1 reply; 13+ messages in thread
From: Chris Murphy @ 2019-03-26 17:38 UTC (permalink / raw)
  To: Andrei Borzenkov; +Cc: Qu Wenruo, berodual_xyz, linux-btrfs

On Tue, Mar 26, 2019 at 12:44 AM Andrei Borzenkov <arvidjaar@gmail.com> wrote:
>
>
> He has btrfs raid0 profile on top of hardware RAID6 devices.

sys_chunk_array[2048]:
        item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 1048576)
                length 4194304 owner 2 stripe_len 65536 type SYSTEM
                io_align 4096 io_width 4096 sector_size 4096
                num_stripes 1

Pretty sure the metadata profile is "single". From the super, I can't
tell what profile the data block groups use.

-- 
Chris Murphy

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
  2019-03-26 17:38     ` Chris Murphy
@ 2019-03-26 17:51       ` Chris Murphy
  2019-03-27 13:03         ` berodual_xyz
  0 siblings, 1 reply; 13+ messages in thread
From: Chris Murphy @ 2019-03-26 17:51 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Andrei Borzenkov, Qu Wenruo, berodual_xyz, linux-btrfs

On Tue, Mar 26, 2019 at 11:38 AM Chris Murphy <lists@colorremedies.com> wrote:
>
> On Tue, Mar 26, 2019 at 12:44 AM Andrei Borzenkov <arvidjaar@gmail.com> wrote:
> >
> >
> > He has btrfs raid0 profile on top of hardware RAID6 devices.
>
> sys_chunk_array[2048]:
>         item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 1048576)
>                 length 4194304 owner 2 stripe_len 65536 type SYSTEM
>                 io_align 4096 io_width 4096 sector_size 4096
>                 num_stripes 1
>
> Pretty sure the metadata profiles is "single". From the super, I can't
> tell what profile the data block groups use.

system chunk is on two devices:
                num_stripes 1 sub_stripes 0
                num_stripes 1 sub_stripes 1

Maybe it is raid0, but I thought dump-super explicitly shows the
profile if it's not single, e.g. SYSTEM|DUP or SYSTEM|RAID1.

Only my single profile file systems lack a profile designation in the
super. But I admit I have no raid0 file systems.

-- 
Chris Murphy

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
  2019-03-26 17:51       ` Chris Murphy
@ 2019-03-27 13:03         ` berodual_xyz
  2019-03-27 18:52           ` Chris Murphy
  0 siblings, 1 reply; 13+ messages in thread
From: berodual_xyz @ 2019-03-27 13:03 UTC (permalink / raw)
  To: Chris Murphy; +Cc: Andrei Borzenkov, Qu Wenruo, linux-btrfs

Dear Chris,

Correct - the metadata profile was set to single (with the idea of confining metadata updates to a smaller subset of disks instead of creating IO overhead between "data" operations and "metadata" updates).

It seems that "-o clear_cache" was used early on in an attempt to fix the root issue of not being able to mount the filesystem (which was potentially a race condition between systemd not having the devices active and the mount process)

I saw the posts regarding clear_cache corrupting filesystems. Could this be the case here?

"btrfs restore" has retrieved a lot of the files (but not all) and unfortunately most of the seem corrupt after about 1G file length. Smaller files seem fine.

My questions now:

* what is the chance of "btrfs rescue" "chunk-recover" / "super-recover" / "zero-log" having a positive effect on the filesystem.

* what is the chance of "btrfs check --init-extent-tree" fixing the described issues?

It would be really important to try to recover as much as possible from the filesystem. The users have learned their lesson regarding backups (they had one, but it was not up to date, so it was worthless), but obviously no one would have expected a filesystem to just go bang like this.

Thanks again everyone reading and replying!

Marcel


Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Tuesday, March 26, 2019 6:51 PM, Chris Murphy <lists@colorremedies.com> wrote:

> On Tue, Mar 26, 2019 at 11:38 AM Chris Murphy lists@colorremedies.com wrote:
>
> > On Tue, Mar 26, 2019 at 12:44 AM Andrei Borzenkov arvidjaar@gmail.com wrote:
> >
> > > He has btrfs raid0 profile on top of hardware RAID6 devices.
> >
> > sys_chunk_array[2048]:
> > item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 1048576)
> > length 4194304 owner 2 stripe_len 65536 type SYSTEM
> > io_align 4096 io_width 4096 sector_size 4096
> > num_stripes 1
> > Pretty sure the metadata profiles is "single". From the super, I can't
> > tell what profile the data block groups use.
>
> system chunk is on two devices:
> num_stripes 1 sub_stripes 0
> num_stripes 1 sub_stripes 1
>
> Maybe it is raid0, but I thought dump super explicitly shows the
> profile if it's not single. e.g. SYSTEM|DUP or SYSTEM|RAID1
>
> Only my single profile file systems lack a profile designation in the
> super. But I admit I have no raid0 file systems.
>
> --
>
> Chris Murphy



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
  2019-03-27 13:03         ` berodual_xyz
@ 2019-03-27 18:52           ` Chris Murphy
  2019-03-27 19:21             ` Chris Murphy
  0 siblings, 1 reply; 13+ messages in thread
From: Chris Murphy @ 2019-03-27 18:52 UTC (permalink / raw)
  To: berodual_xyz; +Cc: Chris Murphy, Andrei Borzenkov, Qu Wenruo, linux-btrfs

On Wed, Mar 27, 2019 at 7:03 AM berodual_xyz
<berodual_xyz@protonmail.com> wrote:
> My questions now:
>
> * what is the chance of "btrfs rescue" "chunk-recover" / "super-recover" / "zero-log" having a positive effect on the filesystem.
>
> * what is the chance of "btrfs check --init-extent-tree" fixing the described issues?

You are using btrfs-progs 4.20.2 and it crashes doing a `btrfs check`.
That's a significant bug.

That tool shouldn't crash, so it already means the file system is in
an inconsistent state that btrfs-progs hasn't seen before, and it gets
so confused that it crashes. I have 0% confidence that any repair that
writes changes to the filesystem will succeed, and better than 99%
confidence that it will crash in the middle of a repair and absolutely
make the file system non-recoverable. All of the repair options you
listed above write changes to the file system, also add `btrfs check
--repair` to that list.

So until you get advice from a developer, you're sitting on that file
system. *shrug*

In the meantime, your best bet is a combination of 'btrfs restore' and
mounting with `-o ro,nologreplay,usebackuproot`. Once those fail to
get you anywhere you're basically stuck, as that's the limit of the
repair tools at this point. You could also in the meantime do a scrub
on all of the raid6 arrays, and whatever other diagnostics or repairs
the hardware raid offers. They *should* recover properly on their own,
but...
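Concretely, something like the following is about the extent of it (mount
point and restore destination are placeholders):

# mount -o ro,nologreplay,usebackuproot /dev/sdb /mnt/recovery
# btrfs restore -v /dev/sdb /path/to/restore/destination/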

And then also in the meantime, prepare for having to rebuild this array.

-- 
Chris Murphy

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
  2019-03-27 18:52           ` Chris Murphy
@ 2019-03-27 19:21             ` Chris Murphy
  2019-03-27 20:10               ` berodual_xyz
  0 siblings, 1 reply; 13+ messages in thread
From: Chris Murphy @ 2019-03-27 19:21 UTC (permalink / raw)
  To: linux-btrfs; +Cc: berodual_xyz

Removing Andrei and Qu.

On Wed, Mar 27, 2019 at 12:52 PM Chris Murphy <lists@colorremedies.com> wrote:
>
> And then also in the meantime, prepare for having to rebuild this array.

a. Check with the manufacturer of the hardware raid for firmware
updates for all the controllers. Also check if the new version is
backward compatible with an array made with the version you have, and
if not, if downgrade is possible. That way you have the option of
pulling the drives you have, putting them on a shelf, buying new
drives, and creating a new array with new hardware raid controller
firmware without having to blow away this broken Btrfs file system
just yet.

b. If you go with Btrfs again, I suggest using metadata raid1. It's
speculation whether that would help recovery in this failure case. But
it probably wouldn't have made it any worse, and wouldn't meaningfully
impact performance. For first mount after mkfs, use mount option
'space_cache=v2' to create the free space tree, it's soon to be the
default anyway, and for large file systems it offers improved
performance and the same reliability. If you don't care about
performance you could just always use `nospace_cache` mount option in
addition to `noatime,notreelog` and optionally a compression option
like `compress=zstd`. I would not use the nodatacow or nodatasum
options. If you're considering those mount options you should just
consider using ext4 or XFS at the next go around.
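As a rough sketch only (device names, label and the exact option set are
placeholders to adjust to your setup):

# mkfs.btrfs -L FS1 -d raid0 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mount -o noatime,space_cache=v2,compress=zstd /dev/sdb /mnt/fs1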

c. If it turns out the current Btrfs can be repaired, of course update
backups ASAP. But then I'd personally consider the file system still
suspect for anything other than short term use, and you'll want to
rebuild it from scratch eventually anyway, which lands you back at a.)
and b.) above. The most recent ext4 and XFS upstream work enables
metadata checksumming so you'd be in the same boat as you were with
Btrfs using nodatacow; there are still some older tools that create
those file systems without metadata checksumming, so I'd watch out for
that. And I'd say it's a coin toss which one to pick; I'm not really
sure off hand which one has a greater chance of surviving a hard reset
with inflight data.

d. Back to the hardware raid6 controller: you should make sure it's
really configured per manufacturer's expectations with respect to
drive write caching. Something got lost in the hard reset. Should the
individual drive write caches be disabled? Possible that the hardware
raid vendor expects this, if they're doing controller caching, and
they ensure the proper flushing to disk in the order expected by the
file system, where individual drive write caches can thwart that
ordering. If the controller has a write cache, is it battery backed?
If not, does the manufacturer recommend disabling write caching?
Something didn't work, and these are just some of the questions to try
and find out the optimal settings to avoid this happening in the
future, because even with a backup, restoring this much data is a
PITA.

-- 
Chris Murphy

^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
  2019-03-27 19:21             ` Chris Murphy
@ 2019-03-27 20:10               ` berodual_xyz
  0 siblings, 0 replies; 13+ messages in thread
From: berodual_xyz @ 2019-03-27 20:10 UTC (permalink / raw)
  To: Chris Murphy; +Cc: linux-btrfs

Thanks for the extensive answer, Chris!


> a. Check with the manufacturer of the hardware raid for firmware
> updates for all the controllers. Also check if the new version is
> backward compatible with an array made with the version you have, and
> if not, if downgrade is possible. That way you have the option of
> pulling the drives you have, putting them on a shelf, buying new
> drives, and creating a new array with new hardware raid controller
> firmware without having to blow away this broken Btrfs file system
> just yet.

Yes, this is what I am preparing to do, and I absolutely agree. The HW controllers are part of the issue, for sure. Analysing the logs shows that, right at the time the system became unavailable from the user's point of view, the following happened:

* HW bus reset by kernel module due to IO timeout towards the controller
* unresponsive scsi devices logged by kernel
* btrfs kernel module logging IO in flight being dropped

A few minutes (!) later, the HW controller dropped a disk from one of the arrays and logged IO timeouts towards this (probably faulty) disk. The raid itself always stayed "consistent" but "degraded". There has still not been any issue with the array, and the rebuild also went fine.

I have extensive experience with exactly this controller / disk / firmware / kernel combo (on other filesystems) and unfortunately have to say that part of the issue seems to be how BTRFS has (not?) handled the IO timeouts / drops from the lower layer.

Happy to provide insight into the timeline and messages of these events if of interest to anyone.
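
(In case anyone wants to reconstruct the timeline themselves, it is
basically just the kernel log filtered around the incident, e.g.:

# journalctl -k -o short-precise | grep -Ei 'btrfs|scsi|reset|timeout'

No special tooling involved.)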

> b. If you go with Btrfs again, I suggest using metadata raid1. It's
> speculation whether that would help recovery in this failure case. But
> it probably wouldn't have made it any worse, and wouldn't meaningfully
> impact performance.

Point taken!

> For first mount after mkfs, use mount option
> 'space_cache=v2' to create the free space tree, it's soon to be the
> default anyway, and for large file systems it offers improved
> performance and the same reliability.

The system was already on v2.
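
(Easy to double-check from the superblock, e.g.:

# btrfs inspect-internal dump-super -f /dev/sdb | grep -A2 compat_ro_flags

FREE_SPACE_TREE / FREE_SPACE_TREE_VALID in compat_ro_flags means the v2 free
space tree is in use.)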

> If you don't care about
> performance you could just always use `nospace_cache` mount option in
> addition to `noatime,notreelog` and optionally a compression option
> like `compress=zstd`. I would not use the nodatacow or nodatasum
> options. If you're considering those mount options you should just
> consider using ext4 or XFS at the next go around.

The reason for these mount options was to prevent fragmentation of large files (due to COW); nodatasum was used because of supposedly existing stability issues with the checksumming. The advantage of BTRFS over XFS for me was the multi-device capability (without LVM), snapshots and directory (subvolume) based quotas.
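
(For a possible next attempt, most of that can be done per directory or per
subvolume instead of globally — a sketch with made-up paths:

# btrfs subvolume create /mnt/scratch
# chattr +C /mnt/scratch          (nodatacow only here, affects new files)
# btrfs quota enable /mnt
# btrfs qgroup limit 10T /mnt/scratch)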

>
> c. If it turns out the current Btrfs can be repaired, of course update
> backups ASAP. But then I'd personally consider the file system still
> suspect for anything other than short term use, and you'll want to
> rebuild it from scratch eventually anyway, which lands you back at a.)
> and b.) above.

Yes, the FS will in any case only be used for recovery and then be rebuilt. Not sure if on BTRFS, to be honest.
But then, I know people who have lost data due to XFS not handling power failures or lower-layer HW issues well. So maybe it is just what it is, and an up-to-date backup is required in any case (what if a user deletes everything accidentally... the best FS won't save you from this).


> The most recent ext4 and XFS upstream work enables
> metadata checksumming so you'd be in the same boat as you were with
> Btrfs using nodatacow; there are still some older tools that create
> those file systems without metadata checksumming, so I'd watch out for
> that. And I'd say it's a coin toss which one to pick; I'm not really
> sure off hand which one has a greater chance of surviving a hard reset
> with inflight data.

True.

> d. Back to the hardware raid6 controller: you should make sure it's
> really configured per manufacturer's expectations with respect to
> drive write caching. Something got lost in the hard reset. Should the
> individual drive write caches be disabled? Possible that the hardware
> raid vendor expects this, if they're doing controller caching, and
> they ensure the proper flushing to disk in the order expected by the
> file system, where individual drive write caches can thwart that
> ordering.

As per above, it is now clear that the issue was in fact triggered when the controller timed out and in-flight data "had to" be dropped by upper layers.

> If the controller has a write cache, is it battery backed?

Yes!


> If not, does the manufacturer recommend disabling write caching?
> Something didn't work, and these are just some of the questions to try
> and find out the optimal settings to avoid this happening in the
> future, because even with a backup, restoring this much data is a
> PITA.


Thank you very much again for taking the time to reply. I really appreciate it!

Kind regards,

Marcel

> --
>
> Chris Murphy



^ permalink raw reply	[flat|nested] 13+ messages in thread

* Re: backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount
  2019-03-26 13:37           ` Qu Wenruo
@ 2019-03-27 20:35             ` Martin Raiber
  0 siblings, 0 replies; 13+ messages in thread
From: Martin Raiber @ 2019-03-27 20:35 UTC (permalink / raw)
  To: berodual_xyz; +Cc: linux-btrfs

On 26.03.2019 14:37 Qu Wenruo wrote:
> On 2019/3/26 6:24 PM, berodual_xyz wrote:
>> Mount messages below.
>>
>> Thanks for your input, Qu!
>>
>> ##
>> [42763.884134] BTRFS info (device sdd): disabling free space tree
>> [42763.884138] BTRFS info (device sdd): force clearing of disk cache
>> [42763.884140] BTRFS info (device sdd): has skinny extents
>> [42763.885207] BTRFS error (device sdd): parent transid verify failed on 1048576 wanted 60234 found 60230
> So btrfs is using the latest superblock while the good one should be the
> old superblock.
>
> Btrfs-progs is able to just ignore the transid mismatch, but kernel
> doesn't and shouldn't.
>
> In fact we should allow btrfs rescue super to use super blocks from
> other device to replace the old one.
>
> So my patch won't help at all, the failure happens at the very beginning
> of the devices list initialization.
>
> BTW, if btrfs restore can't recover certain files, I don't believe any
> rescue kernel mount option can do more.
>
> Thanks,
> Qu

I have made btrfs limp along (till a rebuild) in the past by commenting
out/removing the transid checks. Obviously you should still mount it
read-only (and with no log replay) and it might crash, but there is a
small chance this would work.
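
A minimal sketch of that approach (the check in question is
verify_parent_transid() in fs/btrfs/disk-io.c): with a kernel patched to
ignore the mismatch, the mount attempt would look like

# mount -o ro,nologreplay /dev/sdd /mnt

nologreplay requires ro and skips the log tree entirely.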

>
>> [42763.885263] BTRFS error (device sdd): failed to read chunk root
>> [42763.900922] BTRFS error (device sdd): open_ctree failed
>> ##
>>
>>
>>
>>
>> Sent with ProtonMail Secure Email.
>>
>> ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
>> On Tuesday, 26. March 2019 10:21, Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>>
>>> On 2019/3/26 4:52 PM, berodual_xyz wrote:
>>>
>>>> Thank you both for your input.
>>>> see below.
>>>>
>>>>>> You sda and sdb are at gen 60233 while sdd and sde are at gen 60234.
>>>>>> It's possible to allow kernel to manually assemble its device list using
>>>>>> "device=" mount option.
>>>>>> Since you're using RAID6, it's possible to recover using 2 devices only,
>>>>>> but in that case you need "degraded" mount option.
>>>>> He has btrfs raid0 profile on top of hardware RAID6 devices.
>>>> Correct, my FS is a "raid0" across four hardware-raid based raid6 devices. The underlying devices of the raid controller are fine, same as the volumes themselves.
>>> Then there is not much we can do.
>>>
>>> The super blocks shows all your 4 devices are in 2 different states.
>>> (older generation with dirt log, newer generation without log).
>>>
>>> This means some writes didn't reach all devices.
>>>
>>>> Only corruption seems to be on the btrfs side.
>>> Please provide the kernel message when trying to mount the fs.
>>>
>>>> Does your tip regarding mounting by explicitly specifying the devices still make sense?
>>> Not really. For RAID0 case, it doesn't make much sense.
>>>
>>>> Will this figure out automatically which generation to use?
>>> You could try, as all the mount option is making btrfs completely RO (no
>>> log replay), so it should be pretty safe.
>>>
>>>> I am at the moment in the process of using "btrfs restore" to pull more data from the filesystem without making any further changes.
>>>> After that I am happy to continue testing, and will happily test your mentioned "skip_bg" patch - but if you think that there is some other way to mount (just for recovery purpose - read only is fine!) while having different gens on the devices, I highly appreciate it.
>>> With mounting failure dmesg, it should be pretty easy to determine
>>> whether my skip_bg will work.
>>>
>>> Thanks,
>>> Qu
>>>
>>>> Thanks Qu and Andrei!



^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2019-03-27 20:36 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-03-25 23:50 backup uuid_tree generation not consistent across multi device (raid0) btrfs - won´t mount berodual_xyz
2019-03-26  5:18 ` Qu Wenruo
2019-03-26  6:36   ` Andrei Borzenkov
     [not found]     ` <QXHERo-tRpJm0-hY7ZotKSeF7CfsqeoH0XAf-wmRRsbzVuNWv6BTb9Q2xLQ_KgLlJNQJNSjGK9xlYcqv6QP3kKpFsvtyN0CBex5bFTCjiNE=@protonmail.com>
2019-03-26  8:53       ` berodual_xyz
     [not found]       ` <cc557b84-602c-8b2e-33f9-3c6d56578631@gmx.com>
2019-03-26 10:24         ` berodual_xyz
2019-03-26 13:37           ` Qu Wenruo
2019-03-27 20:35             ` Martin Raiber
2019-03-26 17:38     ` Chris Murphy
2019-03-26 17:51       ` Chris Murphy
2019-03-27 13:03         ` berodual_xyz
2019-03-27 18:52           ` Chris Murphy
2019-03-27 19:21             ` Chris Murphy
2019-03-27 20:10               ` berodual_xyz
