From: email@example.com
To: firstname.lastname@example.org
Subject: Feature requests: online backup - defrag - change RAID level
Date: Sun, 08 Sep 2019 23:12:57 -0400
Message-ID: <20190908231257.Horde.d0Er24HBTrPxDmnxpCc_T0V@server53.web-hosting.com>

Hello everyone!

I have been programming for a long time (over 20 years), and I am quite interested in a lot of low-level stuff, but in reality I have never done anything related to kernels or filesystems. I have, however, done a lot of assembly, C, OS work, etc.

Looking at your project status page (https://btrfs.wiki.kernel.org/index.php/Status), I must say that your priorities don't quite match mine. Of course, opinions usually differ. In my opinion, there are some quite essential features which btrfs is, unfortunately, still missing. So here is a list of features which I would rate as very important for a modern CoW filesystem like btrfs; perhaps you can think about it at least a little bit.

1) Full online backup (or copy, whatever you want to call it)

    btrfs backup <filesystem name> <partition name> [-f]

Backs up the btrfs filesystem given by <filesystem name> to the partition <partition name>, with all subvolumes.

- To be performed by creating a new btrfs filesystem on the destination partition <partition name>, with a new GUID.
- All data from the source filesystem <filesystem name> is then copied to the destination partition, similar to how RAID1 works.
- The size of the destination partition must be sufficient to hold the used data of the source filesystem; otherwise the operation fails. The point is that the destination doesn't have to be as large as the source, just large enough to hold the data (of course, many details and concerns are skipped in this short proposal). A sketch of this sizing precondition is appended at the end of this mail.
- When the operation completes, the destination partition contains a fully featured, mountable and unmountable btrfs filesystem, which is an exact copy of the source filesystem at some point in time, with all the snapshots and subvolumes of the source filesystem.
- There are two possible implementations of this operation, depending on whether the destination drive is slower than the source drive(s) or not (for example, when the destination is an HDD and the source is an SSD). If the source and the destination are of similar speed, then a RAID1-like algorithm can be used: all writes simultaneously go to the source and the destination. This mode can also be used if the user/admin is willing to tolerate a performance hit for some relatively short period of time. The second possible implementation is a bit more complex; it can be done by creating a temporary snapshot, or by buffering all current writes until they can be written to the destination drive. That implementation is of lesser priority (see if you can make the RAID1-like one work first).

2) Sensible defrag

The defrag is currently a joke: if you use defrag, then you had better not use subvolumes/snapshots. That is very hard to tolerate, because defrag is quite a necessary feature. It is an operation that should be performed in many circumstances, and in many cases it is even initiated automatically. Yet btrfs defrag is virtually unusable, and it is unusable exactly where it is most needed, as the presence of subvolumes will, predictably, increase fragmentation by quite a lot.

How to do it (a toy model of the resulting algorithm is appended at the end of this mail):

- The extents must not be unshared, just shuffled a bit. Unsharing the extents is, in most situations, not tolerable.
- The defrag should work by doing a full defrag of one "selected subvolume" (which can be selected by the user, or guessed, since the user probably wants to defrag the currently mounted subvolume or the default subvolume). The other subvolumes should then share data (shared extents) with the selected subvolume as much as possible.
- If you want it even more feature-full and complicated, you could allow the user to specify a list of selected subvolumes, like subvol1, subvol2, subvol3, etc. The defrag algorithm then defrags subvol1 in full; then subvol2 as much as possible while not changing subvol1 and while sharing extents with subvol1; then subvol3 while not changing subvol1 and subvol2; and so on.
- I think it would be wrong to use a general deduplication algorithm for this. Instead, the information about the shared extents should be analyzed given the starting state of the filesystem, and the algorithm should then produce an optimal solution based on the currently shared extents. Deduplication is a different task.

3) Downgrade to 'single' or 'DUP' (also, a general easy way to switch between RAID levels)

Currently, as far as I can gather, the user has to run "btrfs balance start -dconvert=single -mconvert=single <mount>" and then delete a drive, which is a bit of a ridiculous sequence of operations. Can you provide something like "btrfs device delete", but such that it also simultaneously converts to 'single' or some other chosen RAID level? (A sketch of the current two-step sequence, glued into one operation, is appended at the end of this mail.)

I hope that you will consider my suggestions, and I hope that I am being helpful (although, I guess, the short time I spent working with btrfs and writing this mail cannot compare with the amount of work you are putting into it). Perhaps teams sometimes need a different perspective, an outsider's perspective, in order to better understand the situation.

So long!
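As promised above, a minimal sketch of the sizing precondition from point 1, in plain Python rather than anything btrfs-specific. The mount point /mnt/data, the partition /dev/sdb1, and the 5% slack margin are made-up examples, and shutil.disk_usage's 'used' value is only a rough approximation of what btrfs really uses (shared extents and compression skew it):

    import os
    import shutil

    def device_size_bytes(dev_path):
        """Capacity of a block device, found by seeking to its end (Linux)."""
        fd = os.open(dev_path, os.O_RDONLY)
        try:
            return os.lseek(fd, 0, os.SEEK_END)
        finally:
            os.close(fd)

    def check_backup_fits(source_mount, dest_partition, slack=0.05):
        """The destination partition only needs to hold the *used* data of
        the source filesystem, not match the source's full size."""
        used = shutil.disk_usage(source_mount).used   # approximate for btrfs
        capacity = device_size_bytes(dest_partition)
        needed = int(used * (1 + slack))              # margin for new metadata
        if capacity < needed:
            raise SystemExit(
                f"refusing backup: {dest_partition} holds {capacity} bytes, "
                f"~{needed} bytes needed for the used data")
        print(f"ok: {used} used bytes fit on the {capacity}-byte destination")

    if __name__ == "__main__":
        check_backup_fits("/mnt/data", "/dev/sdb1")   # made-up paths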
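Also as promised, a toy model of the defrag ordering from point 2. Subvolumes are reduced to sets of extent ids (pure Python, invented data; real btrfs extents are nothing this simple). The first listed subvolume may relocate everything; each later one may only relocate extents not shared with the already-defragmented subvolumes, so no extent ever gets unshared:

    def plan_defrag(subvols_in_priority_order):
        """Return, per subvolume, the extents it may relocate.

        The first subvolume is defragmented in full. Each following one may
        only move extents not pinned by an earlier ('frozen') subvolume, so
        sharing with already-defragmented subvolumes is preserved."""
        frozen = set()                      # extents that may no longer move
        plan = {}
        for name, extents in subvols_in_priority_order:
            plan[name] = extents - frozen   # safe to rewrite contiguously
            frozen |= extents               # now pinned for later subvolumes
        return plan

    # Made-up example: three snapshots sharing most of their extents.
    subvols = [
        ("subvol1", {1, 2, 3, 4, 5}),
        ("subvol2", {1, 2, 3, 6}),          # shares extents 1-3 with subvol1
        ("subvol3", {1, 2, 7, 8}),
    ]
    for name, movable in plan_defrag(subvols).items():
        print(name, "may relocate extents", sorted(movable))
    # subvol1: everything; subvol2: only 6; subvol3: only 7 and 8.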
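Finally, for point 3: until a one-shot command exists, today's two-step dance can at least be glued together in a script. A sketch follows; the mount point and device are placeholders, while the two commands invoked are the existing btrfs CLI:

    import subprocess

    def remove_device_and_downgrade(mount_point, device, profile="single"):
        """Today's two-step sequence in one call: first rebalance to the
        target profile, then remove the device. A native implementation
        inside btrfs could presumably do both in a single pass."""
        subprocess.run(
            ["btrfs", "balance", "start",
             f"-dconvert={profile}", f"-mconvert={profile}",
             "-f",   # balance requires force when reducing metadata
                     # redundancy, e.g. going from raid1 to single
             mount_point],
            check=True)
        subprocess.run(
            ["btrfs", "device", "remove", device, mount_point],
            check=True)

    if __name__ == "__main__":
        # Made-up example: drop /dev/sdb and fall back to 'single'.
        remove_device_and_downgrade("/mnt/data", "/dev/sdb")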