From: Oliver Freyermuth <o.freyermuth@googlemail.com>
To: "Qu Wenruo" <quwenruo.btrfs@gmx.com>,
	"Hans van Kranenburg" <hans@knorrie.org>,
	"Swâmi Petaramesh" <swami@petaramesh.org>,
	linux-btrfs@vger.kernel.org
Subject: Re: Massive filesystem corruption since kernel 5.2 (ARCH)
Date: Thu, 29 Aug 2019 15:17:45 +0200
Message-ID: <8f15294a-753f-1325-b46e-7a41824a9841@googlemail.com>
In-Reply-To: <d0a7ec7d-42c8-ebc4-7d54-28bda3d50e5f@gmx.com>

On 29.08.19 at 15:11, Qu Wenruo wrote:
> 
> 
> On 2019/8/29 8:46 PM, Oliver Freyermuth wrote:
>> On 27.08.19 at 14:40, Hans van Kranenburg wrote:
>>> On 8/27/19 11:14 AM, Swâmi Petaramesh wrote:
>>>> On 8/27/19 8:52 AM, Qu Wenruo wrote:
>>>>>> or to use the V2 space
>>>>>> cache generally speaking, on any machine that I use (I had understood it
>>>>>> was useful only on multi-TB filesystems...)
>>>>> 10GiB is enough to create block groups large enough to utilize the free
>>>>> space cache.
>>>>> So you can't really escape the free space cache.
>>>>
>>>> I meant that I had understood that the V2 space cache was preferable to
>>>> V1 only for multi-TB filesystems.
>>>>
>>>> So would you advise using the V2 space cache also for filesystems < 1 TB?
>>>
>>> Yes.
>>>
>>
>> This makes me wonder if it should be the default?
> 
> It will be.
> 
> Just a spoiler: I believe features like no-holes and the v2 space cache
> will become the default in the not-so-distant future.
> 
>>
>> This thread made me check my various BTRFS volumes, and for almost all of them (on different machines) I find cases of
>>  failed to load free space cache for block group XXXX, rebuilding it now
>> at several points over the last months in my syslogs - and that's on machines without broken memory, with disks for which FUA should be working fine,
>> without any unsafe shutdowns over their lifetime, and with histories as short as having only ever seen 5.x kernels.
> 
> That's interesting. In theory that shouldn't happen, especially without
> unsafe shutdowns.

I also forgot to add that, on top of that, on these machines there is no mdraid / dm / LUKS in between (i.e. purely btrfs on the drives).
The messages _seem_ to be more prominent on spinning disks, but after all, my statistics are just 5 devices in total.
So it really "feels" like a bug crawling somewhere. However, the machines seem not to have seen any actual corruption as a consequence.
I'm running "btrfs check --readonly" now to see whether everything is really still fine, but I'm already running kernel 5.2 with the new checks without issues.
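
For reference, this is roughly what I'm doing per volume (a minimal sketch - /dev/sdX and /mnt/data are placeholders for the actual device and mountpoint):

  # umount /mnt/data                  # check wants the filesystem unmounted
  # btrfs check --readonly /dev/sdX   # read-only consistency check, changes nothing

The --readonly mode means nothing is modified even if problems are found, so this should be safe to run everywhere.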

> But please also be aware that there is no concrete proof that a corrupted
> v1 space cache is causing all the problems.
> What I said is just that a corrupted v1 space cache may cause problems; I
> need to at least craft an image to prove my assumption.

I see - that would be useful in any case, to hopefully track down the issue.

> 
>>
>> So if this can cause harmful side effects, happens without a clear origin, and v2 is safer due to being CoW,
>> I guess I should switch all my nodes to v2 (or it should become the default in a future kernel?).
> 
> At least, your experience would definitely help the btrfs community.

Ok, then I will slowly switch the nodes one by one - if I do not come back and cry on the list, that means all is well (but I'm only a small data point, with 5 disks in three machines) ;-).
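
For the record, my plan per node is roughly the following (again a sketch with placeholder device and mountpoint; as far as I understand, v2 only needs to be requested once at mount time):

  # umount /mnt/data
  # btrfs check --clear-space-cache v1 /dev/sdX   # drop the stale v1 cache
  # mount -o space_cache=v2 /dev/sdX /mnt/data    # builds the free space tree (v2)

Subsequent plain mounts should then pick up the free space tree automatically.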

Cheers,
	Oliver

> 
> Thanks,
> Qu
> 
>>
>> Cheers,
>> 	Oliver
>>

