* RAID1 array failed to read chunk root
From: Aaron W. Swenson @ 2024-01-22 14:53 UTC
  To: linux-btrfs

After moving residences, I've finally got my computer set up 
again, only to find that the array fails to mount. When I try to 
mount the RAID array, I get:

	root # mount -o compress=lzo,noatime,degraded /dev/sdc /srv
	mount: /srv: wrong fs type, bad option, bad superblock on
	/dev/sdc, missing codepage or helper program, or other error.
	       dmesg(1) may have more information after failed mount
	       system call.

And in dmesg I see:

	[394680.895543] BTRFS info (device sdd): using crc32c (crc32c-generic) checksum algorithm
	[394680.895555] BTRFS info (device sdd): use lzo compression, level 0
	[394680.895557] BTRFS info (device sdd): allowing degraded mounts
	[394680.895558] BTRFS info (device sdd): disk space caching is enabled
	[394680.895802] BTRFS error (device sdd): failed to read chunk root
	[394680.895903] BTRFS error (device sdd): open_ctree failed
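
Before running chunk-recover (below), I didn't think to compare 
what each member's superblock says about the chunk root. My 
understanding, which may well be wrong, is that this can be read 
without mounting, along these lines (sdd is just the example 
device):

	root # btrfs inspect-internal dump-super /dev/sdd | grep -E 'generation|chunk_root'

I'm happy to post that output for any or all of the devices if it 
would be useful.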

Running the command:

	root # btrfs rescue chunk-recover -v /dev/sdd

takes a few hours (there are eight 4 TB drives in the array). It 
selected three devices from the RAID1 array (I think they were 
sdc, sdd, and sde, but that bit got purged from the scrollback 
buffer) and ultimately resulted in:

	Invalid mapping for 17983143280640-17983143297024, got 23995541880832-23996615622656
	Couldn't map the block 17983143280640
	Couldn't read tree root
	open with broken chunk error
	Chunk tree recovery failed
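
I haven't tried the newer read-only rescue mount options yet, and 
I don't know whether they even apply when it's the chunk root that 
can't be read, but I assume the invocation would look something 
like this (ro so nothing gets written):

	root # mount -o ro,rescue=usebackuproot,degraded /dev/sdc /srv

I'd rather not poke at it further without advice, though.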

Here are some stats about my machine:

# uname -a
Gentoo Linux martineau 6.1.28-gentoo #1 SMP Sat May 27 19:30:38 EDT 2023 x86_64 Intel(R) Core(TM) i3-4160 CPU @ 3.60GHz GenuineIntel GNU/Linux
# btrfs version
btrfs-progs v6.6.3

Here's a table of my drives (it excludes the new Seagate Ironwolf 
4TB that's still in its packaging). All drives in Bay 1 and Bay 2 
are part of the same RAID1 array. The Crucial drive is an SSD 
that's set up in the boring, typical fashion as the root drive. A 
couple of drives have failed in the past, but I was able to mount 
the array degraded and replace the failed drive each time. I 
should note that a couple of days ago it reported /dev/sdg as 
missing (the same experience I've had twice before), which is why 
I have the spare drive. Now it isn't reporting anything about that 
drive.

| ID | Path     | Bay | Slot | Make    | Model                                 | Size  |
|----+----------+-----+------+---------+---------------------------------------+-------|
|  1 | /dev/sda |   0 |    0 | Crucial | BX100 (CT250BX100SSD1)                | 250GB |
| 10 | /dev/sdc |   1 |    1 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |
|  9 | /dev/sdb |   1 |    2 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |
|  7 | /dev/sdi |   1 |    3 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |
|  8 | /dev/sdh |   1 |    4 | Seagate | Ironwolf (ST4000VN008-2DR1)           | 4TB   |
|  5 | /dev/sdf |   2 |    1 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |
|  6 | /dev/sdd |   2 |    2 | Seagate | Ironwolf (ST4000VN008-2DR1)           | 4TB   |
|  4 | /dev/sdg |   2 |    3 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |
|  3 | /dev/sde |   2 |    4 | Seagate | Constellation ES.3 (ST4000NM0033-9ZM) | 4TB   |

It isn't the end of the world if I lose the data, but some of the 
videos and photos are sentimental. There's no time crunch for me, 
so if it takes a long time to work through, I have the time to do 
so.

WKR,
Aaron
