* repeatable(ish) corrupt leaf filesystem splat on 5.1.x
@ 2019-07-04 21:03 Zygo Blaxell
  2019-07-05  0:06 ` Qu Wenruo
  2020-02-04  4:43 ` repeatable(ish) corrupt leaf filesystem splat on 5.1.x - fixed in 5.4.14, 5.5.0 Zygo Blaxell
  0 siblings, 2 replies; 4+ messages in thread
From: Zygo Blaxell @ 2019-07-04 21:03 UTC (permalink / raw)
  To: linux-btrfs

I've seen this twice in 3 days after releasing 5.1.x kernels from the
test lab:

5.1.15 on 2xSATA RAID1 SSD, during a balance:

	[48714.200014][ T3498] BTRFS critical (device dm-21): corrupt leaf: root=2 block=117776711680 slot=57, unexpected item end, have 109534755 expect 12632
	[48714.200381][ T3498] BTRFS critical (device dm-21): corrupt leaf: root=2 block=117776711680 slot=57, unexpected item end, have 109534755 expect 12632
	[48714.200399][ T9749] BTRFS: error (device dm-21) in __btrfs_free_extent:7109: errno=-5 IO failure
	[48714.200401][ T9749] BTRFS info (device dm-21): forced readonly
	[48714.200405][ T9749] BTRFS: error (device dm-21) in btrfs_run_delayed_refs:3008: errno=-5 IO failure
	[48714.200419][ T9749] BTRFS info (device dm-21): found 359 extents
	[48714.200442][ T9749] BTRFS info (device dm-21): 1 enospc errors during balance
	[48714.200445][ T9749] BTRFS info (device dm-21): balance: ended with status: -30

and 5.1.9 on 1xNVME, a few hours after some /proc NULL pointer dereference
bugs:

	[89244.144505][ T7009] BTRFS critical (device dm-4): corrupt leaf: root=2 block=1854946361344 slot=32, unexpected item end, have 1335222703 expect 15056
	[89244.144822][ T7009] BTRFS critical (device dm-4): corrupt leaf: root=2 block=1854946361344 slot=32, unexpected item end, have 1335222703 expect 15056
	[89244.144832][ T2403] BTRFS: error (device dm-4) in btrfs_run_delayed_refs:3008: errno=-5 IO failure
	[89244.144836][ T2403] BTRFS info (device dm-4): forced readonly
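
As far as I can tell, the tree checker complains here because an item's
end (its data offset plus its size) doesn't line up with the previous
item's data offset in the leaf, and "have" values like 109534755 are far
larger than the leaf itself, so it smells like a trashed item header
rather than a small accounting error.  For anyone who wants to poke at
one of these, the reported leaf can be dumped straight off the device,
e.g. (block number and dm node taken from the first splat above):

	btrfs inspect-internal dump-tree -b 117776711680 /dev/dm-21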

Each machine had been running 5.1.x for less than 24 hours after being
upgraded from 5.0.x.

The 5.1.9 machine had crashed (on 5.0.15) before, but a scrub had passed
while running 5.1.9 after the crash.  The filesystem failure occurred
20 hours later.  There were some other NULL pointer dereferences in that
uptime, so maybe 5.1.9 is just a generally buggy kernel that nobody
should ever run.  I expect better from 5.1.15, though, which had no
unusual events reported in the 8 hours between its post-reboot scrub
and the filesystem failure.

I have several other machines running 5.1.x kernels that have not yet had
such a failure--including all of my test machines, which don't seem to hit
this issue after 25+ days of stress-testing.  Most of the test machines
are using rotating disks; a few are running SSD+HDD with lvmcache.

One correlation that may be interesting:  both of the failing filesystems
had 1MB unallocated on all disks; all of the non-failing filesystems have
1GB or more unallocated on all disks.  I was running the balance on the
first filesystem to try to free up some unallocated space.  The second
filesystem died without any help from me.
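
For anyone watching their own filesystems for the same condition: the
unallocated figure shows up in 'btrfs filesystem usage', and a
usage-filtered balance is normally enough to claw back a few block
groups before things get down to the last 1MB.  Something like this
(mount point is just a placeholder):

	btrfs filesystem usage /mnt/testfs
	btrfs balance start -dusage=10 /mnt/testfs

The -dusage=10 filter only relocates data block groups that are at most
10% full, so it finishes much faster than an unfiltered balance.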

It turns out that 'btrfs check --repair' can fix these!  First time
I've ever seen check --repair fix a broken filesystem.  A few files are
damaged, but the filesystem is read-write again and still working so far
(on a 5.0.21 kernel).
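
For the record, check has to run against the unmounted device; the
general shape of it is below (device node borrowed from the second
machine's logs, mount point made up, and the read-only pass is only
there to see what --repair would be getting into first):

	btrfs check --readonly /dev/dm-4
	btrfs check --repair /dev/dm-4
	mount /dev/dm-4 /mnt/testfs
	btrfs scrub start -Bd /mnt/testfs

The scrub at the end is just to flag anything that is still unreadable
after the repair.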
