From: webmaster@zedlx.com
To: Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Feature requests: online backup - defrag - change RAID level
Date: Tue, 10 Sep 2019 19:14:08 -0400
Message-ID: <20190910191408.Horde.APH6UgFmn857ecvizpk_Ijb@server53.web-hosting.com>
In-Reply-To: <3978da3b-bb62-4995-bc46-785446d59265@gmx.com>
Quoting Qu Wenruo <quwenruo.btrfs@gmx.com>:
>>> So here what we could do is: (From easy to hard)
>>> - Introduce an interface to allow defrag not to touch shared extents
>>> it shouldn't be that difficult compared to other work we are going
>>> to do.
>>> At least, user has their choice.
>>
>> That defrag wouldn't accomplish much. You can call it defrag, but it is
>> more like nothing happens.
>
> If one subvolume is not shared by snapshots or reflinks at all, I'd say
> that's exactly what user want.
If one subvolume is not shared by snapshots, the super-duper defrag
would produce the same result concerning that subvolume.
Therefore, it is a waste of time to consider this case separately and
to write code that covers just this case.
>>> - Introduce different levels for defrag
>>> Allow btrfs to do some calculation and space usage policy to
>>> determine if it's a good idea to defrag some shared extents.
>>> E.g. my extreme case, unshare the extent would make it possible to
>>> defrag the other subvolume to free a huge amount of space.
>>> A compromise, let user to choose if they want to sacrifice some space.
>>
>> Meh. You can always defrag one chosen subvolume perfectly, without
>> unsharing any file extents.
>
> If the subvolume is shared by another snapshot, you always need to face
> the decision whether to unshare.
> It's unavoidable.
In my opinion, unsharing is a very bad thing to do. If the user orders
it, then OK, but I think that it is rarely required.
Unsharing can be done manually by just copying the data to another
place (partition). So, if someone really wants to unshare, he can
always easily do it.
When you unshare, it is hard to go back. Unsharing is a one-way road.
When you unshare, you lose free space. Therefore, the defrag should
not unshare.
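A toy calculation, with made-up numbers, illustrates why a defrag that unshares costs free space:

```python
# Hypothetical example: a 1 GiB file that has been snapshotted 4 times.
# While the file and its snapshots share extents, only one copy of the
# data exists on disk. A defrag that unshares must write a private copy
# for the defragmented subvolume, multiplying the on-disk usage.
file_size_gib = 1
snapshots = 4

shared_usage = file_size_gib                      # one shared copy
unshared_usage = file_size_gib * (1 + snapshots)  # every reference gets its own copy

print(shared_usage)    # 1
print(unshared_usage)  # 5
```

And as noted above, there is no cheap way back: re-deduplicating those copies afterwards is far more expensive than never unsharing them.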
In my view, the only real decision that needs to be left to the user
is: what to defrag?
In terms of full or partial defrag:
* Everything
- rarely; waste of time and resources, and it wears out SSDs
- perhaps this shouldn't be allowed at all
* 2% of the most fragmented files (2% of total space used, by size in bytes)
- good idea for daily or weekly defrag
- good default
* Let the user choose between 0.01% and 10% (by size in bytes)
- the best
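The "top N% by size" policy above could be sketched roughly as follows. This is only an illustration of the selection logic, not a real btrfs interface; the tuple layout and the extents-per-byte fragmentation metric are assumptions:

```python
def pick_defrag_targets(files, budget_fraction=0.02):
    """Pick the most fragmented files whose combined size stays within
    budget_fraction (e.g. 2%) of total used space.

    `files` is a list of (path, size_bytes, extent_count) tuples;
    fragmentation is approximated as extents per byte.
    """
    total = sum(size for _, size, _ in files)
    budget = total * budget_fraction
    # Most fragmented first: many extents relative to file size.
    ranked = sorted(files, key=lambda f: f[2] / max(f[1], 1), reverse=True)
    chosen, used = [], 0
    for path, size, extents in ranked:
        if used + size > budget:
            continue  # skip files that would blow the byte budget
        chosen.append(path)
        used += size
    return chosen

files = [("a", 1000, 50), ("b", 5000, 10), ("c", 200, 40), ("d", 100000, 5)]
print(pick_defrag_targets(files))  # ['c', 'a']
```

A daily or weekly run with a small budget like this touches only the worst offenders, which also keeps SSD wear down.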
Options by scope:
- One file (when necessary)
- One subvolume (when necessary)
- A list of subvolumes (with priority from first to last; the first
one on the list would be defragmented best)
- All subvolumes
- All subvolumes, with one exclusion list, and one priority list
- option to include or exclude RO subvolumes - as you said, this is
probably the hardest and implementation should be postponed
Therefore, making a super-duper defrag which can defrag one file
(without unsharing!!!) is a good starting point, instead of wasting
time on your proposal "Introduce different levels for defrag".
>> So, since it can be done perfectly without unsharing, why unshare at all?
>
> No, you can't.
>
> Go check my initial "red-herring" case.
I might check it, but I think that you can't be right. You are
thinking too low-level. If you can split extents, fuse extents, and
create new extents that are shared by multiple files, then what you
are saying is simply not possible. The operations I listed are
sufficient to produce a perfect full defrag. Always.
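The key point can be shown with a toy model (all names are illustrative, nothing here is real btrfs code): splitting an extent that two files share does not unshare anything, because every reference is rewritten to point at the new pieces.

```python
# Toy model: files hold lists of references to Extent objects.
# Splitting a shared extent rewrites all references to the two new
# halves, so the halves remain shared by exactly the same files.

class Extent:
    def __init__(self, length):
        self.length = length

def split(extent, offset, files):
    """Split `extent` at `offset`; every file that referenced it now
    references the two halves instead, preserving sharing."""
    left, right = Extent(offset), Extent(extent.length - offset)
    for f in files:
        f["extents"] = [
            e for ref in f["extents"]
            for e in ([left, right] if ref is extent else [ref])
        ]
    return left, right

shared = Extent(8)
f1 = {"extents": [shared]}
f2 = {"extents": [shared]}
split(shared, 3, [f1, f2])

# Both files still reference the identical extent objects:
print(f1["extents"] == f2["extents"])        # True
print([e.length for e in f1["extents"]])     # [3, 5]
```

With split, fuse, and create-shared available, extents can be rearranged into contiguous runs per file without ever duplicating shared data.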