From: Daniel Brunner <daniel@brunner.ninja>
To: linux-btrfs@vger.kernel.org
Subject: corrupted root, doesn't check, repair, or mount
Date: Wed, 25 Nov 2020 22:13:37 +0100
Message-ID: <CAD7Y51gpvZ79nVnkg+i3AuvT-1OiXj0eaq2-aig38pGmBtm-Xw@mail.gmail.com>
Hi all,
I used "btrfs filesystem resize" to shrink the filesystem and then used
mdadm to shrink the backing device.
Sadly, I did not use btrfs for the software RAID itself.
After shrinking the mdadm device, my btrfs filesystem no longer mounts,
and check/repair fails as well.
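For completeness, the sequence I attempted looked roughly like this (a
reconstruction, not a terminal log; the mount point and the 1 GiB safety
margin are placeholders, not my exact values, and the array size is taken
from the mdadm output further down):

```shell
# Rough sketch of the shrink sequence (commands are echoed, not executed
# here; MNT and the 1 GiB margin are placeholders, not my exact values).
MNT=/mnt/komposthaufen                   # hypothetical mount point
NEW_DEV_BYTES=40002767544320             # device size after the 7->6 shrink
MARGIN=$(( 1024 * 1024 * 1024 ))         # leave headroom below the new end
TARGET_FS=$(( NEW_DEV_BYTES - MARGIN ))

# Step 1: shrink the filesystem first, while it is still mounted:
echo "btrfs filesystem resize ${TARGET_FS} ${MNT}"

# Step 2: only afterwards shrink the md array underneath it:
echo "mdadm --grow /dev/md127 --array-size=39065219072"   # KiB, 6-disk size
echo "mdadm --grow /dev/md127 --raid-devices=6"
```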
[ +7,440422] BTRFS info (device dm-0): trying to use backup root at mount time
[ +0,000008] BTRFS info (device dm-0): disabling disk space caching
[ +0,000003] BTRFS info (device dm-0): force clearing of disk cache
[ +0,000003] BTRFS info (device dm-0): has skinny extents
[ +0,002396] BTRFS error (device dm-0): bad tree block start, want 1064960 have 11986489934990110975
[ +0,000125] BTRFS error (device dm-0): failed to read chunk root
[ +0,033922] BTRFS error (device dm-0): open_ctree failed
# btrfs check --repair --force /dev/mapper/bcache0-open
enabling repair mode
Opening filesystem to check...
checksum verify failed on 1064960 found 000000DF wanted 00000007
checksum verify failed on 1064960 found 000000DF wanted 00000007
bad tree block 1064960, bytenr mismatch, want=1064960, have=11986489934990110975
ERROR: cannot read chunk root
ERROR: cannot open file system
# uname -a
Linux flucky-server 5.7.8-arch1-1 #1 SMP PREEMPT Thu, 09 Jul 2020 16:34:01 +0000 x86_64 GNU/Linux
# btrfs --version
btrfs-progs v5.7
# btrfs fi show
Label: none  uuid: 4e970755-09c3-4df2-992d-d2f1e0b7d5e4
	Total devices 1 FS bytes used 16.08TiB
	devid    1 size 26.56TiB used 17.63TiB path /dev/mapper/bcache0-open
# blockdev --getsize64 /dev/mapper/bcache0-open
40002767544320
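Double-checking the numbers myself (my own arithmetic on the values
reported above, nothing new measured): the md array size in KiB times
1024 is slightly larger than what blockdev reports for bcache0-open; the
~16 MiB gap is presumably header overhead from the layers in between
(the "-open" name suggests a dm-crypt mapping on top of bcache, which
is my assumption about this stack, not something stated elsewhere):

```shell
# My own arithmetic on the sizes reported above (nothing new measured).
array_kib=39065219072          # mdadm "Array Size" below, in KiB
dev_bytes=40002767544320       # blockdev --getsize64 on bcache0-open

array_bytes=$(( array_kib * 1024 ))
echo "md array:   ${array_bytes} bytes"
echo "dm device:  ${dev_bytes} bytes"
echo "difference: $(( array_bytes - dev_bytes )) bytes"  # ~16 MiB of headers
```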
# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Fri Oct  4 14:13:05 2019
        Raid Level : raid6
        Array Size : 39065219072 (37255.50 GiB 40002.78 GB)
     Used Dev Size : 9766304768 (9313.87 GiB 10000.70 GB)
      Raid Devices : 6
     Total Devices : 6
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Wed Nov 25 21:22:05 2020
             State : clean
    Active Devices : 6
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

     Delta Devices : -1, (7->6)

              Name : flucky-server:komposthaufen (local to host flucky-server)
              UUID : 45401df9:e05c8464:b34fba5e:48ffad45
            Events : 2385661

    Number   Major   Minor   RaidDevice State
       0       8      112        0      active sync   /dev/sdh
       1       8      128        1      active sync   /dev/sdi
       2       8       48        2      active sync   /dev/sdd
       3       8       16        3      active sync   /dev/sdb
       4       8       96        4      active sync   /dev/sdg
       5       8       80        5      active sync   /dev/sdf
Looking forward to hearing from you :)
BR,
Daniel
Thread overview: 6+ messages
  2020-11-25 21:13 Daniel Brunner [this message]
  2020-11-26 23:55 ` Chris Murphy
  2020-11-30 13:14   ` Daniel Brunner
  2020-11-30 13:15     ` Daniel Brunner
  2020-12-09  0:30       ` Daniel Brunner
  2020-12-09  5:52         ` Chris Murphy