From: Andrei Borzenkov <arvidjaar@gmail.com>
To: webmaster@zedlx.com, linux-btrfs@vger.kernel.org
Subject: Re: Feature requests: online backup - defrag - change RAID level
Date: Tue, 10 Sep 2019 20:39:45 +0300 [thread overview]
Message-ID: <610e9567-2f17-c7c3-01aa-0e1215be44d0@gmail.com> (raw)
In-Reply-To: <20190909131108.Horde.64FzJYflQ6j0CbjYFLqBEz0@server53.web-hosting.com>
09.09.2019 20:11, webmaster@zedlx.com writes:
...
>>
>> Forgot to mention this part.
>>
>> If your primary objective is to migrate your data to another device
>> online (mounted, without unmounting any of the filesystems).
>
> This is not the primary objective. The primary objective is to produce a
> full, online, easy-to-use, robust backup. But let's say we need to do
> migration...
>>
>> Then I could say, you can still add a new device, then remove the old
>> device to do that.
>
> If the source filesystem already uses RAID1, then, yes, you could do it,
You could do it with any profile.
> but it would be too slow, it would need a lot of user intervention, so
> many commands typed, so many ways to do it wrong, to make a mistake.
>
It requires exactly two commands - one to add the new device, another to
remove the old one.
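For concreteness, the two-command migration looks like this (the device
names and mount point are placeholders):

```shell
# /mnt is the mounted btrfs filesystem; /dev/sdb is the new device.
btrfs device add /dev/sdb /mnt
# Removing the old device relocates all used space onto the remaining
# device(s) before detaching it; this can take a long time.
btrfs device remove /dev/sda /mnt
```

Progress of the relocation can be watched with `btrfs filesystem show`.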
> Too cumbersome. Too wasteful of time and resources.
>
Do you mean your imaginary full backup will not read the full filesystem?
Otherwise, how can it take less time and resources?
>> That would be even more efficient than LVM (not the thin-provisioned
>> kind), as we only move used space.
>
> In fact, you can do this kind of full-online-backup with the help of
> mdadm RAID, or some other RAID solution. It can already be done, no need
> to add 'btrfs backup'.
>
> But, again, too cumbersome, too inflexible, too many problems, and the
> user would have to set up a degraded mdadm RAID in front and run with a
> degraded mdadm RAID all the time (since btrfs RAID would actually be
> protecting the data).
>
>> If your objective is to create a full copy as backup, then I'd say my
>> new btrfs-image data dump patchset may be your best choice.
>
> It should be mountable. It should be performed online. I have never
> heard of btrfs-image; I need the docs to see whether this btrfs-image
> is good enough.
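For reference, the stock btrfs-image tool dumps (and restores) metadata
only - the data dump mode referred to above is the new patchset, not
upstream. A typical invocation looks like this, with placeholder devices:

```shell
# Dump the metadata of an (ideally unmounted) filesystem to a file.
# -c 9: maximum zlib compression of the dump.
btrfs-image -c 9 /dev/sdb metadata.img
# Restore the metadata dump onto another device (-r: restore mode).
btrfs-image -r metadata.img /dev/sdc
```

Since only metadata is captured, the restored image is useful for
debugging but is not a mountable full copy of the data.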
>
>> The only down side is, you need to at least mount the source fs to RO
>> mode.
>
> No. That's not really an online backup. Not good enough.
>
>> The true on-line backup is not that easy, especially since any write
>> can screw up your backup process, so it must be done unmounted.
>
> Nope, I disagree.
>
> First, there is the RAID1-like solution, which is easy to perform (just
> send all new writes to both source and destination). It's the same thing
> that mdadm RAID1 would do (like I mentioned a few paragraphs above).
> But, this solution may have a performance concern, when the destination
> drive is too slow.
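The mdadm mirroring approach sketched above would look roughly like this
(device names are placeholders; note that the degraded array must be
created *before* the filesystem, since `mdadm --create` overwrites the
start of the device):

```shell
# Create a one-legged ("degraded") RAID1 over the disk that will hold
# the filesystem; the second slot is left empty with the keyword
# "missing".
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda missing
# Later, attach the backup disk; mdadm resyncs all blocks to it while
# the filesystem stays mounted and writable.
mdadm --manage /dev/md0 --add /dev/sdb
# Once the resync finishes, detach the copy again.
mdadm --manage /dev/md0 --fail /dev/sdb
mdadm --manage /dev/md0 --remove /dev/sdb
```

This copies every block of the device, used or not, which is part of why
it is wasteful compared with a filesystem-aware backup.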
>
> Fortunately, with btrfs, an online backup is easier than usual. To
> produce a frozen snapshot of the entire filesystem, just create a
> read-only snapshot of every subvolume (this is not 100% consistent, I
> know, but it is good enough).
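As a sketch, freezing the filesystem this way amounts to the following
(paths are placeholders; as noted, the snapshots are taken one by one and
so are not atomic across subvolumes):

```shell
# /mnt is the top-level subvolume; collect the frozen copies under a
# dedicated directory.
mkdir -p /mnt/.frozen
# Snapshot the top level itself (read-only with -r). Nested subvolumes
# appear as empty directories inside a snapshot, hence the loop below.
btrfs subvolume snapshot -r /mnt /mnt/.frozen/top
# Snapshot every subvolume directly below /mnt; the path is the last
# field of each "btrfs subvolume list" output line.
for sv in $(btrfs subvolume list -o /mnt | awk '{print $NF}'); do
    btrfs subvolume snapshot -r "/mnt/$sv" "/mnt/.frozen/$(basename "$sv")"
done
```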
>
> But I'm just repeating myself, I already wrote this in the first email.
>
> So, in conclusion I disagree that true on-line backup is not easy.
>
>> Even btrfs send handles this by forcing the source subvolume to be RO,
>> so I can't find an easy solution to address that.
>
> This is a digression, but I would say that you first make a temporary RO
> snapshot of the source subvolume, then use 'btrfs send' on the temporary
> snapshot, then delete the temporary snapshot.
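A minimal sketch of that sequence, with placeholder paths (/mnt/vol is
the live subvolume, /backup a mounted btrfs filesystem on the
destination):

```shell
# 1. Freeze the live subvolume in a temporary read-only snapshot.
btrfs subvolume snapshot -r /mnt/vol /mnt/vol.ro-tmp
# 2. Stream the frozen snapshot to the destination filesystem.
btrfs send /mnt/vol.ro-tmp | btrfs receive /backup
# 3. Drop the temporary snapshot.
btrfs subvolume delete /mnt/vol.ro-tmp
```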
>
> Oh, my.
>
>