Linux-BTRFS Archive on
From: Chiung-Ming Huang <>
To: Qu Wenruo <>
Cc: Btrfs <>
Subject: Re: How to Fix 'Error: could not find extent items for root 257'?
Date: Thu, 6 Feb 2020 14:50:36 +0800
Message-ID: <> (raw)
In-Reply-To: <>

Qu Wenruo <> wrote on Thu, Feb 6, 2020 at 12:35 PM:
> On 2020/2/6 12:13 PM, Chiung-Ming Huang wrote:
> > Qu Wenruo <> wrote on Thu, Feb 6, 2020 at 9:13 AM:

> Got the attachment.
> The first strange part is that I see several mount failures caused by
> 4 or more devices missing.
> Then it mounted with devid 1 missing.
> After reboot, you got the full fs mounted without anything missing.

That's because /etc/crypttab of the rescue system wasn't set up
correctly. I logged in first and then fixed it.

> So far so good, but I'm not sure how degraded mount affects here.

> Soon after that, there is already problem showing some degraded mount is
> causing problem, where num_devices doesn't match.

Before running `btrfs balance start -f ...` to convert to single, I removed
3 disks from /etc/crypttab of the server system: 1 TB (empty), 2 TB (empty),
and 10 TB (5 TB of data plus metadata). The 10 TB disk holds one of the
RAID1 copies. I formatted the 1 TB and 2 TB disks immediately, but kept the
10 TB disk untouched just in case. Then I started `btrfs balance ...` and
let the server keep receiving data from the internet. My thinking was: the
10 TB disk has old data and metadata, so even if I add it back to the RAID1,
btrfs can figure out which data are new or old and fix them automatically,
and the server can keep working in the meantime. It would just waste some
disk space, which a later `btrfs balance` or `btrfs scrub` would reclaim.
Is that true?
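For reference, the sequence I had in mind looks roughly like this (a sketch
only; /mnt is a placeholder for my actual mount point, and the commands are
wrapped in a function so nothing runs if this snippet is sourced as-is):

```shell
# Sketch of the convert-then-heal sequence described above.
# Assumptions: /mnt is the mounted btrfs; run as root on a live system.
convert_and_scrub() {
    # Convert data and metadata from RAID1 to the single profile.
    # -f is needed because this reduces metadata redundancy.
    btrfs balance start -f -dconvert=single -mconvert=single /mnt

    # Later, once every device is attached again, rebuild any stale
    # copies from the good ones (-B waits, -d prints per-device stats).
    btrfs scrub start -Bd /mnt
}
```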

`btrfs balance ...` suddenly failed after a few hours. The server became
completely unresponsive, including ssh and Ctrl+Alt+F3. After powering off,
I booted into the rescue system, fixed /etc/crypttab, and brought all the
devices back under /dev there.

So I guess `super_num_devices 3 mismatch` refers to these 3 disks. (Not sure.)

> So from the full dmesg, it looks like the abuse of degraded is causing
> the problem.

Given the description I wrote above, is the conclusion still the same?

> Thus degraded for btrfs should really be considered as a last-resort
> method. And manual scrub after all devices go back online is really
> recommended.

Thanks for your analysis and help.

💔 What's done is done. My goal now is to try to fix the btrfs filesystem
and save as much data as possible. Should I unplug the 10 TB disk, one of
the old RAID1 copies, first? Originally the server held about 6 TB of data,
and this 10 TB disk I removed from /etc/crypttab keeps about 5 TB of it.
I'm worried that whatever I do next may destroy that 5 TB. Or, even worse,
it may be gone already.

I tried mounting the btrfs with only this 10 TB disk. It didn't work; dmesg showed:
[Thu Feb  6 14:34:03 2020] BTRFS info (device bcache2): allowing degraded mounts
[Thu Feb  6 14:34:03 2020] BTRFS info (device bcache2): disk space caching is enabled
[Thu Feb  6 14:34:03 2020] BTRFS info (device bcache2): has skinny extents
[Thu Feb  6 14:34:03 2020] BTRFS warning (device bcache2): devid 3 uuid f9b7fe84-d95b-4db5-9e2b-c34a2d4186e9 is missing
[Thu Feb  6 14:34:03 2020] BTRFS warning (device bcache2): devid 5 uuid d442b477-0233-4a4a-aa71-cb24343b83ee is missing
[Thu Feb  6 14:34:03 2020] BTRFS warning (device bcache2): devid 6 uuid d18e3182-a3cc-448b-b15b-0a20dc9c8cbe is missing
[Thu Feb  6 14:34:03 2020] BTRFS warning (device bcache2): devid 7 uuid 991286c4-fa81-417a-876d-a0cb10989ded is missing
[Thu Feb  6 14:34:03 2020] BTRFS warning (device bcache2): failed to read tree root
[Thu Feb  6 14:34:03 2020] BTRFS error (device bcache2): open_ctree failed
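For what it's worth, the missing devids can be pulled out of such dmesg
output with a small pipeline (the sample log is hard-coded below just to
show the idea; on a live system you would pipe `dmesg` itself):

```shell
# Extract the devids that btrfs reports as missing. The here-string
# reproduces the warnings above.
log='BTRFS warning (device bcache2): devid 3 uuid f9b7fe84-d95b-4db5-9e2b-c34a2d4186e9 is missing
BTRFS warning (device bcache2): devid 5 uuid d442b477-0233-4a4a-aa71-cb24343b83ee is missing
BTRFS warning (device bcache2): devid 6 uuid d18e3182-a3cc-448b-b15b-0a20dc9c8cbe is missing
BTRFS warning (device bcache2): devid 7 uuid 991286c4-fa81-417a-876d-a0cb10989ded is missing'

printf '%s\n' "$log" | grep -o 'devid [0-9]*' | awk '{print $2}' | sort -n
```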

Based on your analysis, could you give me some advice on the next steps to
save my btrfs RAID? Would they be:
1) Apply the patch.
2) Run `btrfs check --repair /dev/bcache4`.

Chiung-Ming Huang

Thread overview: 16+ messages
2020-02-05 10:18 Chiung-Ming Huang
2020-02-05 10:29 ` Qu Wenruo
2020-02-05 15:29   ` Chiung-Ming Huang
2020-02-05 19:38     ` Chris Murphy
2020-02-06  3:11       ` Chiung-Ming Huang
     [not found]   ` <>
     [not found]     ` <>
2020-02-06  4:13       ` Chiung-Ming Huang
2020-02-06  4:35         ` Qu Wenruo
2020-02-06  6:50           ` Chiung-Ming Huang [this message]
2020-02-07  3:49           ` Chiung-Ming Huang
2020-02-07  4:00             ` Qu Wenruo
2020-02-07  6:16               ` Chiung-Ming Huang
2020-02-07  7:16                 ` Qu Wenruo
2020-02-10  6:50                   ` Chiung-Ming Huang
2020-02-10  7:03                     ` Qu Wenruo
2020-02-15  3:47                       ` Chiung-Ming Huang
2020-02-15  4:29                         ` Qu Wenruo
