From: Qu Wenruo <wqu@suse.com>
To: Jens Bauer <jens-lists@gpio.dk>, linux-btrfs@vger.kernel.org
Subject: Re: How robust is BTRFS?
Date: Thu, 3 Dec 2020 18:59:35 +0800	[thread overview]
Message-ID: <f51b636b-0e5f-c3d9-916f-f8196dae4ef0@suse.com> (raw)
In-Reply-To: <20201203035311997396.38ae743f@gpio.dk>



On 2020/12/3 10:53 AM, Jens Bauer wrote:
> Hi all.
> 
> The BTRFS developers deserve some credit!
> 
> This is a testimony from a BTRFS-user.
> For a little more than 6 months, I had my server running on BTRFS.
> My setup was several RAID-10 partitions.

You should be proud of not using RAID5/6. :)
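
For anyone who wants to reproduce a setup like that, creating a four-disk
RAID10 filesystem looks roughly like this (device names and the mount point
are just placeholders, adjust them to your own system):

  mkfs.btrfs -d raid10 -m raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
  mount /dev/sda /mnt
  btrfs filesystem usage /mnt    # confirm both data and metadata are raid10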

> As my server was located on a remote island and I was about to leave, I just added two more hard disks, to make sure that the risk of failure would be minimal. Now I had four WD10JFCX drives on the EspressoBIN server running Ubuntu Bionic Beaver.
> 
> Before I left, I *had* noticed some beep-like sounds coming from one of the drives, but it seemed OK, so I didn't bother with it.
> 
> So I left, and 6 months later, I noticed that one of my 'partitions' was failing, so I thought I might go back and replace the failing drive. The journey takes 6 hours.
> 
> When I arrived, I noticed more beep-like sounds than when I left half a year earlier.
> But I was impressed that my server was still running.
> 
> I decided to make a backup and re-format all drives, etc.
> 
> The drives were added in one-by-one, and I noticed that when I added the third drive, again I started hearing that sound I disliked so much.
> 
> After replacing the port-multiplier, I didn't notice any difference.
> 
> "The power supply!" I thought.. Though it's a 3A PSU and should easily handle four 2.5" WS10JFCX drives, it could be that the specs were possibly a little decorated, so I found myself a MeanWell IRM-60-5ST supply and used that instead.
> 
> Still the same noise.
> 
> I then investigated all the cables; lo and behold, silly me had used a cheap China-sourced pigtail for the barrel connector, with wires so incredibly thin that they could not carry the current, so the voltage sagged further with each drive I added.
> 
> I re-did my power cables and then everything worked well.
> 
> ...
> 
> After correcting the problem, I got curious and listed the statistics for each partition.
> I had more than 100000 read/write errors PER DAY for 6 months.
> That's around 18 million read/write-errors, caused by drives turning on/off "randomly".
> 
> AND ALL MY FILES WERE INTACT.
> 
> This borders on being impossible.

I would say, yeah, really impressive, even to a btrfs developer.
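
For the record, those per-device counters are what "btrfs device stats"
prints (btrfs tracks them per device, not per partition). A quick sketch
with a made-up mount point; the sample numbers are purely illustrative:

  btrfs device stats /mnt
  # [/dev/sda].write_io_errs    0
  # [/dev/sda].read_io_errs     0
  # [/dev/sda].flush_io_errs    0
  # [/dev/sda].corruption_errs  0
  # [/dev/sda].generation_errs  0
  btrfs device stats -z /mnt    # reset the counters once the hardware is fixed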

Btrfs RAID10/RAID1 is really good by design, since it has the extra
checksums to keep everything in check. Unlike regular RAID10, which can
only cope with a device dropping out, btrfs knows which copy of the data
is incorrect, retries the good copy, and then repairs the bad one.

Which means btrfs can even handle extreme cases like a 4-device RAID10
where each disk disappears for a while, as long as no two disks are ever
missing at the same time.
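
That self-healing is also what a scrub exercises explicitly: it reads every
copy, verifies the checksums, and rewrites any bad copy from the good
mirror. A rough sketch (the mount point is just an example):

  btrfs scrub start -B /mnt    # -B: stay in the foreground until the scrub finishes
  btrfs scrub status /mnt      # shows bytes scrubbed and how many errors were corrected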

But in your case, you really pushed btrfs failover to its limit.

In fact, I had to check the code just now to make sure that btrfs can
tolerate metadata writeback errors.
My original expectation was no, btrfs should just error out on even a
single device's metadata writeback failure, but it turns out that the
barrier and super block writeback paths really can tolerate errors on
multiple devices.

Anyway, it feels great that btrfs really helped you.
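
And since you mention below that you can add / remove drives on-the-fly:
for completeness, replacing a failing disk and growing or shrinking the
array online look roughly like this (device names and the mount point are
placeholders):

  btrfs replace start /dev/sdd /dev/sde /mnt   # swap a failing disk for a new one, online
  btrfs device add /dev/sdf /mnt               # add another disk
  btrfs device remove /dev/sdc /mnt            # remove one again; data is migrated off first
  btrfs balance start /mnt                     # optionally rebalance across the new layout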

Thanks,
Qu

> 
> I believe that no other file system would be able to survive such conditions.
> -And the developers of this file system really should know what torture it's been through without failing.
> Yes, all files were intact. I tested all those files that I had backed up 6 months earlier against those that were on the drives; there were no differences - they were binary identical.
> 
> Today, my EspressoBIN + JMB575 port multiplier + four WD20JFCX drives are doing well. No read/write errors have occurred since I replaced my power cable. I upgraded to Focal Fossa and the server has become very stable and usable. I will not recommend the EspressoBIN (I bought two of them and one is failing periodically); instead I'll recommend Solid-Run's products, which are top-quality and well-tested before shipping.
> 
> So this testimony will hopefully encourage others to use BTRFS.
> Besides a robust file system, you get a file system that's absolutely rapid (I'm used to HFS+ on a Mac with a much faster CPU - but BTRFS is still a lot faster).
> You also get really good tools for manipulating the file system, and you can add / remove drives on-the-fly.
> 
> Thank you to everyone who worked tirelessly on BTRFS - and also thank you to those who only contributed a correction of a spelling-mistake. Everything counts!
> 
> 
> Love
> Jens
> 

