From: "Swâmi Petaramesh" <swami@petaramesh.org>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>, linux-btrfs@vger.kernel.org
Cc: Christoph Anton Mitterer <calestyo@scientia.net>
Subject: Re: Massive filesystem corruption since kernel 5.2 (ARCH)
Date: Tue, 27 Aug 2019 12:59:36 +0200
Message-ID: <fcd2e070-67e9-4889-f778-748070cc9856@petaramesh.org>
In-Reply-To: <370697f1-24c9-c8bd-01a7-c2885a7ece05@gmx.com>

Hi again,

On 27/08/2019 at 08:21, Qu Wenruo wrote:
> I'd prefer to do a "btrfs check --readonly" anyway (which also checks
> free space cache), then go nospace_cache if you're concerned.

Here's what I did, and here's what I got:

root@PartedMagic:~# uname -r
5.1.5-pmagic64

root@PartedMagic:~# btrfs --version
btrfs-progs v5.1

root@PartedMagic:~# btrfs check --readonly /dev/PPL_VG1/LINUX
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
block group 52677312512 has wrong amount of free space, free space cache has 266551296 block group has 266584064
failed to load free space cache for block group 52677312512
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
Opening filesystem to check...
Checking filesystem on /dev/PPL_VG1/LINUX
UUID: 25fede5a-d8c2-4c7e-9e7e-b19aad319044
found 87804731392 bytes used, no error found
total csum bytes: 79811080
total tree bytes: 2195832832
total fs tree bytes: 1992900608
total extent tree bytes: 101548032
btree space waste bytes: 380803707
file data blocks allocated: 626135830528
 referenced 124465221632
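
(Just noting, this is my own arithmetic rather than tool output: the discrepancy reported above is small, 266584064 - 266551296 = 32768 bytes, i.e. 32 KiB.)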

root@PartedMagic:~# mkdir /hd
root@PartedMagic:~# mount -t btrfs -o noatime,clear_cache /dev/PPL_VG1/LINUX /hd

(Waited for no disk activity and top showing no btrfs processes)

root@PartedMagic:~# umount /hd

root@PartedMagic:~# mount -t btrfs -o noatime /dev/PPL_VG1/LINUX /hd

root@PartedMagic:~# grep btrfs /proc/self/mountinfo
40 31 0:43 / /hd rw,noatime - btrfs /dev/mapper/PPL_VG1-LINUX rw,ssd,space_cache,subvolid=5,subvol=/

root@PartedMagic:~# umount /hd

root@PartedMagic:~# btrfs check --readonly /dev/PPL_VG1/LINUX
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space cache
block group 52677312512 has wrong amount of free space, free space cache has 266551296 block group has 266584064
failed to load free space cache for block group 52677312512
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
Opening filesystem to check...
Checking filesystem on /dev/PPL_VG1/LINUX
UUID: 25fede5a-d8c2-4c7e-9e7e-b19aad319044
found 87804207104 bytes used, no error found
total csum bytes: 79811080
total tree bytes: 2195832832
total fs tree bytes: 1992900608
total extent tree bytes: 101548032
btree space waste bytes: 380804019
file data blocks allocated: 626135306240
 referenced 124464697344
root@PartedMagic:~#


So it seems that mounting with “clear_cache” did not actually clear the
cache or fix the issue?
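
In case it's useful, here is a rough sketch of what I understand could be tried next (I have NOT run either of these yet, so please correct me if they are wrong):

  # With the filesystem unmounted, wipe the v1 free space cache outright
  # (assuming my btrfs-progs v5.1 supports --clear-space-cache):
  btrfs check --clear-space-cache v1 /dev/PPL_VG1/LINUX

  # Or simply mount without the v1 cache at all, as you suggested:
  mount -t btrfs -o noatime,nospace_cache /dev/PPL_VG1/LINUX /hd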

ॐ

-- 
Swâmi Petaramesh <swami@petaramesh.org> PGP 9076E32E

Thread overview: 84+ messages
2019-08-24 17:44 Massive filesystem corruption since kernel 5.2 (ARCH) Christoph Anton Mitterer
2019-08-25 10:00 ` Swâmi Petaramesh
2019-08-27  0:00   ` Christoph Anton Mitterer
2019-08-27  5:06     ` Swâmi Petaramesh
2019-08-27  6:13       ` Swâmi Petaramesh
2019-08-27  6:21         ` Qu Wenruo
2019-08-27  6:34           ` Swâmi Petaramesh
2019-08-27  6:52             ` Qu Wenruo
2019-08-27  9:14               ` Swâmi Petaramesh
2019-08-27 12:40                 ` Hans van Kranenburg
2019-08-29 12:46                   ` Oliver Freyermuth
2019-08-29 13:08                     ` Christoph Anton Mitterer
2019-08-29 13:09                     ` Swâmi Petaramesh
2019-08-29 13:11                     ` Qu Wenruo
2019-08-29 13:17                       ` Oliver Freyermuth
2019-08-29 17:40                         ` Oliver Freyermuth
     [not found]               ` <-z770dp-y45icx-naspi1dhhf7m-b1jjq3853x22lswnef-p5g363n8kd2f-vdijlg-jk4z4q-raec5-em5djr-et1h33i4xib8jxzw1zxyza74-miq3zn-e4azxaaeyo3abtrf6zj8nb18-hbhrrmnr1ww1.1566894946135@email.android.com>
2019-08-27 12:34                 ` Re : " Qu Wenruo
2019-08-27 10:59           ` Swâmi Petaramesh [this message]
2019-08-27 11:11             ` Alberto Bursi
2019-08-27 11:20               ` Swâmi Petaramesh
2019-08-27 11:29                 ` Alberto Bursi
2019-08-27 11:45                   ` Swâmi Petaramesh
2019-08-27 17:49               ` Swâmi Petaramesh
2019-08-27 22:10               ` Chris Murphy
2019-08-27 12:52 ` Michal Soltys
2019-09-12  7:50 ` Filipe Manana
2019-09-12  8:24   ` James Harvey
2019-09-12  9:06     ` Filipe Manana
2019-09-12  9:09     ` Holger Hoffstätte
2019-09-12 10:53     ` Swâmi Petaramesh
2019-09-12 12:58       ` Christoph Anton Mitterer
2019-10-14  4:00         ` Nicholas D Steeves
2019-09-12  8:48   ` Swâmi Petaramesh
2019-09-12 13:09   ` Christoph Anton Mitterer
2019-09-12 14:28     ` Filipe Manana
2019-09-12 14:39       ` Christoph Anton Mitterer
2019-09-12 14:57         ` Swâmi Petaramesh
2019-09-12 16:21           ` Zdenek Kaspar
2019-09-12 18:52             ` Swâmi Petaramesh
2019-09-13 18:50       ` Pete
     [not found]         ` <CACzgC9gvhGwyQAKm5J1smZZjim-ecEix62ZQCY-wwJYVzMmJ3Q@mail.gmail.com>
2019-10-14  2:07           ` Adam Bahe
2019-10-14  2:19             ` Qu Wenruo
2019-10-14 17:54             ` Chris Murphy
  -- strict thread matches above, loose matches on Subject: below --
2019-07-29 12:32 Swâmi Petaramesh
2019-07-29 13:02 ` Swâmi Petaramesh
2019-07-29 13:35   ` Qu Wenruo
2019-07-29 13:42     ` Swâmi Petaramesh
2019-07-29 13:47       ` Qu Wenruo
2019-07-29 13:52         ` Swâmi Petaramesh
2019-07-29 13:59           ` Qu Wenruo
2019-07-29 14:01           ` Swâmi Petaramesh
2019-07-29 14:08             ` Qu Wenruo
2019-07-29 14:21               ` Swâmi Petaramesh
2019-07-29 14:27                 ` Qu Wenruo
2019-07-29 14:34                   ` Swâmi Petaramesh
2019-07-29 14:40                     ` Qu Wenruo
2019-07-29 14:46                       ` Swâmi Petaramesh
2019-07-29 14:51                         ` Qu Wenruo
2019-07-29 14:55                           ` Swâmi Petaramesh
2019-07-29 15:05                             ` Swâmi Petaramesh
2019-07-29 19:20                               ` Chris Murphy
2019-07-30  6:47                                 ` Swâmi Petaramesh
2019-07-29 19:10                       ` Chris Murphy
2019-07-30  8:09                         ` Swâmi Petaramesh
2019-07-30 20:15                           ` Chris Murphy
2019-07-30 22:44                             ` Swâmi Petaramesh
2019-07-30 23:13                               ` Graham Cobb
2019-07-30 23:24                                 ` Chris Murphy
     [not found] ` <f8b08aec-2c43-9545-906e-7e41953d9ed4@bouton.name>
2019-07-29 13:35   ` Swâmi Petaramesh
2019-07-30  8:04     ` Henk Slager
2019-07-30  8:17       ` Swâmi Petaramesh
2019-07-29 13:39   ` Lionel Bouton
2019-07-29 13:45     ` Swâmi Petaramesh
     [not found]       ` <d8c571e4-718e-1241-66ab-176d091d6b48@bouton.name>
2019-07-29 14:04         ` Swâmi Petaramesh
2019-08-01  4:50           ` Anand Jain
2019-08-01  6:07             ` Swâmi Petaramesh
2019-08-01  6:36               ` Qu Wenruo
2019-08-01  8:07                 ` Swâmi Petaramesh
2019-08-01  8:43                   ` Qu Wenruo
2019-08-01 13:46                     ` Anand Jain
2019-08-01 18:56                       ` Swâmi Petaramesh
2019-08-08  8:46                         ` Qu Wenruo
2019-08-08  9:55                           ` Swâmi Petaramesh
2019-08-08 10:12                             ` Qu Wenruo
