From: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
To: webmaster@zedlx.com
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Feature requests: online backup - defrag - change RAID level
Date: Wed, 11 Sep 2019 14:19:06 -0400	[thread overview]
Message-ID: <c8da6684-6c16-fc80-8e10-1afc1871d512@gmail.com> (raw)
In-Reply-To: <20190911132053.Horde._wJd24LqxxXx9ujl2r5i7PQ@server53.web-hosting.com>

On 2019-09-11 13:20, webmaster@zedlx.com wrote:
> 
> Quoting "Austin S. Hemmelgarn" <ahferroin7@gmail.com>:
> 
>> On 2019-09-10 19:32, webmaster@zedlx.com wrote:
>>>
>>> Quoting "Austin S. Hemmelgarn" <ahferroin7@gmail.com>:
>>>
> 
>>>
>>> === I CHALLENGE you and anyone else on this mailing list: ===
>>>
>>>  - Show me an example where splitting an extent requires unsharing,
>>> and this split is needed to defrag.
>>>
>>> Make it clear, write it yourself, I don't want any machine-made outputs.
>>>
>> Start with the above comment about all writes unsharing the region 
>> being written to.
>>
>> Now, extrapolating from there:
>>
>> Assume you have two files, A and B, each consisting of 64 filesystem
>> blocks in a single shared extent.  Now assume somebody writes a few
>> bytes to the middle of file B, right around the boundary between 
>> blocks 31 and 32, and that you get similar writes to file A straddling 
>> blocks 14-15 and 47-48.
>>
>> After all of that, file A will be 5 extents:
>>
>> * A reflink to blocks 0-13 of the original extent.
>> * A single isolated extent consisting of the new blocks 14-15
>> * A reflink to blocks 16-46 of the original extent.
>> * A single isolated extent consisting of the new blocks 47-48
>> * A reflink to blocks 49-63 of the original extent.
>>
>> And file B will be 3 extents:
>>
>> * A reflink to blocks 0-30 of the original extent.
>> * A single isolated extent consisting of the new blocks 31-32.
>> * A reflink to blocks 33-63 of the original extent.
>>
>> Note that there are a total of four contiguous sequences of blocks 
>> that are common between both files:
>>
>> * 0-13
>> * 16-30
>> * 33-46
>> * 49-63
>>
>> There is no way to completely defragment either file without splitting 
>> the original extent (which is still there, just not fully referenced 
>> by either file) unless you rewrite the whole file to a new single 
>> extent (which would, of course, completely unshare the whole file).  
>> In fact, if you want to ensure that those shared regions stay 
>> reflinked, there's no way to defragment either file without 
>> _increasing_ the number of extents in that file (either file would 
>> need 7 extents to properly share only those 4 regions), and even then 
>> only one of the files could be fully defragmented.
>>
>> Such a situation generally won't happen if you're just dealing with 
>> read-only snapshots, but is not unusual when dealing with regular 
>> files that are reflinked (which is not an uncommon situation on some 
>> systems, as a lot of people have `cp` aliased to reflink things 
>> whenever possible).
> 
> Well, thank you very much for writing this example. Your example is
> certainly not minimal; one write to file A and one write to file B
> would have been sufficient to prove your point, so there is one extra
> write in the example, but that's OK.
> 
> Your example proves that I was wrong. I admit: it is impossible to 
> perfectly defrag one subvolume (in the way I imagined it should be done).
> Why? Because, as in your example, there can be files within a SINGLE 
> subvolume which share their extents with each other. I didn't consider 
> such a case.
> 
> On the other hand, I judge this issue to be mostly irrelevant. Why? 
> Because most of the file sharing will be between subvolumes, not within 
> a subvolume.
Not necessarily. Even ignoring the case of data deduplication (which 
needs to be considered if you care at all about enterprise usage, and is 
part of the whole point of using a CoW filesystem), there are existing 
applications that actively use reflinks, either directly or indirectly 
(via things like the `copy_file_range` system call), and the number of 
such applications is growing.
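
For illustration, here is a rough Python sketch (purely hypothetical,
with made-up file names, not taken from any real tool) of the direct
case: cloning a file via the FICLONE ioctl, which is essentially what
`cp --reflink=always` does under the hood.

#!/usr/bin/env python3
import fcntl
import os

# Linux FICLONE ioctl number, _IOW(0x94, 9, int); hard-coded here rather
# than relying on the fcntl module exposing it as a constant.
FICLONE = 0x40049409

def clone_file(src_path, dst_path):
    """Make dst_path a reflink copy of src_path (all extents shared)."""
    src_fd = os.open(src_path, os.O_RDONLY)
    dst_fd = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        fcntl.ioctl(dst_fd, FICLONE, src_fd)  # dst now references src's extents
    finally:
        os.close(src_fd)
        os.close(dst_fd)

if __name__ == "__main__":
    clone_file("fileA", "fileB")  # hypothetical file names

Applications using `copy_file_range` get the same sharing indirectly,
since BTRFS will typically satisfy that call by cloning extents.  Any
later write to either file then CoW-unshares only the blocks it touches,
which is exactly how layouts like the A/B example above come about.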

> When a user creates a reflink to a file in the same
> subvolume, he is willingly denying himself the assurance of a perfect
> defrag, because, as your example proves, once there are a few writes to
> BOTH files it becomes impossible to defrag perfectly. So, if the user
> creates such reflinks, it's his own wish and his own fault.
The same argument can be made about snapshots.  It's an invalid argument 
in both cases though because it's not always the user who's creating the 
reflinks or snapshots.
> 
> Such situations will occur only in some specific circumstances:
> a) when the user is reflinking manually
> b) when a file is copied from one subvolume into a different file in a 
> different subvolume.
> 
> The situation a) is unusual in normal use of the filesystem. Even when 
> it occurs, it is the explicit command given by the user, so he should be 
> willing to accept all the consequences, even the bad ones like imperfect 
> defrag.
> 
> The situation b) is possible, but as far as I know copies are currently
> not done that way in btrfs. There should probably be an option to
> reflink-copy files from another subvolume; that would be good.
> 
> But anyway, it doesn't matter, because most of the sharing will be
> between subvolumes, not within a subvolume. So, if there is some
> in-subvolume sharing, the defrag won't be 100% perfect; that's a minor
> point. Unimportant.
You're focusing too much on your own use case here.  Not everybody uses 
snapshots, and there are many people who are using reflinks very 
actively within subvolumes, either for deduplication or because it saves 
time and space when dealing with multiple copies of mostly identical 
trees of files.
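
As a concrete (and purely hypothetical) example of that last case, a
tool that wants a second working copy of a mostly identical tree can do
something like the following sketch (Linux with Python 3.8+ for
os.copy_file_range; directory names are made up, and symlinks/special
files are ignored for brevity).  On BTRFS the copy_file_range calls are
typically satisfied by cloning extents, so the new tree initially shares
all of its data with the old one.

#!/usr/bin/env python3
import os
import sys

def reflink_file(src, dst):
    """Copy src to dst, letting the kernel share extents where it can."""
    size = os.stat(src).st_size
    with open(src, "rb") as fsrc, open(dst, "wb") as fdst:
        copied = 0
        while copied < size:
            # copy_file_range may copy less than requested; loop until done.
            n = os.copy_file_range(fsrc.fileno(), fdst.fileno(), size - copied)
            if n == 0:
                break
            copied += n

def reflink_tree(src_root, dst_root):
    for dirpath, dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target = os.path.normpath(os.path.join(dst_root, rel))
        os.makedirs(target, exist_ok=True)
        for name in filenames:
            reflink_file(os.path.join(dirpath, name),
                         os.path.join(target, name))

if __name__ == "__main__":
    reflink_tree(sys.argv[1], sys.argv[2])  # e.g. ./clone-tree srcdir dstdir

Every file such a tool later modifies gets partially unshared by the
write, and is then in exactly the situation from the example above,
entirely within a single subvolume.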
> 
>>> About merging extents: a defrag should merge extents ONLY when both 
>>> extents are shared by the same files (and when those extents are 
>>> neighbours in both files). In other words, defrag should always merge 
>>> without unsharing. Let's call that operation "fusing extents", so 
>>> that there are no more misunderstandings.
> 
>> And I reiterate: defrag only operates on the file it's passed in.  It 
>> needs to for efficiency reasons (we had a reflink-aware defrag for a
>> while a few years back; it got removed because performance limitations
>> meant it was unusable in the cases where you actually needed it).
>> Defrag doesn't even know that there are reflinks to the extents it's 
>> operating on.
> 
> If the defrag doesn't know about all reflinks, that's bad in my view.
> That is a bad defrag. If you had a reflink-aware defrag, and it was
> slow, maybe that happened because the implementation was bad; I
> don't see any reason why it should be slow. So, you will have to explain
> to me what was causing these performance problems.
> 
>> Given this, defrag isn't willfully unsharing anything, it's just a 
>> side-effect of how it works (since it's rewriting the block layout of 
>> the file in-place).
> 
> The current defrag has to unshare because, as you said, it is
> unaware of the full reflink structure. If it doesn't know about all
> reflinks, it has to unshare; there is no way around that.
> 
>> Now factor in that _any_ write will result in unsharing the region 
>> being written to, rounded to the nearest full filesystem block in both 
>> directions (this is mandatory, it's a side effect of the copy-on-write 
>> nature of BTRFS, and is why files that experience heavy internal 
>> rewrites get fragmented very heavily and very quickly on BTRFS).
> 
> You mean: when defrag performs a write, the new data is unshared because 
> every write is unshared? Really?
> 
> Consider there is an extent E55 shared by two files A and B. The defrag 
> has to move E55 to another location. In order to do that, defrag creates 
> a new extent E70. It makes it belong to file A by changing the reflink 
> of extent E55 in file A to point to E70.
> 
> Now, to retain the original sharing structure, the defrag has to change 
> the reflink of extent E55 in file B to point to E70. You are telling me 
> this is not possible? Bullshit!
> 
> Please explain to me how this 'defrag has to unshare' story of yours 
> isn't an intentional attempt to mislead me.
As mentioned in the previous email, we actually did have a (mostly) 
working reflink-aware defrag a few years back.  It got removed because 
it had serious performance issues.  Note that we're not talking about a 
few seconds of extra time to defrag a full tree here; we're talking 
double-digit _minutes_ of extra time to defrag a moderate-sized (low 
triple-digit GB) subvolume with dozens of snapshots, _if you were lucky_ 
(if you weren't, you could be looking at multiple _hours_ of runtime for 
the defrag).  Performance degraded in proportion to the number of 
reflinks involved and the total amount of data in the subvolume being 
defragmented, and was pretty bad even with only a couple of snapshots.

Ultimately, there are a couple of issues at play here:

* Online defrag has to maintain consistency during operation.  The 
current implementation does this by rewriting the regions being 
defragmented (which usually turns each of them into a single new 
extent).  That avoids a whole lot of otherwise complicated logic 
required to make sure things happen correctly, and also means that only 
the file being operated on is impacted and only the parts being modified 
need to be protected against concurrent writes.  Properly handling 
reflinks means that _every_ file that shares some part of an extent with 
the file being operated on needs to have the reflinked regions locked 
for the defrag operation, which has a huge impact on performance.  Using 
your example, the update to E55 in both files A and B has to happen as 
part of the same commit, and that commit can contain no other writes in 
that region of the file; otherwise you run the risk of losing writes to 
file B that occur while file A is being defragmented.  It's not horrible 
when it's just a small region in two files, but it becomes a big issue 
when dealing with lots of files and/or particularly large extents 
(extents in BTRFS can reach multiple GB in size for really big files).

* Reflinks can reference partial extents.  This means, ultimately, that 
you may end up having to split extents in odd ways during defrag if you 
want to preserve reflinks, and you might have to split extents 
_elsewhere_ that are only tangentially related to the region being 
defragmented.  See the example in my previous email for a case like 
this: keeping the shared regions shared when you defragment either file 
to a single extent requires splitting extents in the other file 
(whichever file you don't defragment to a single extent ends up with 7 
extents if you force the defragmented one to be the canonical version; 
there is a small sketch of that arithmetic after this list).  Once you 
consider that a given extent can have multiple ranges reflinked from 
multiple other locations, it gets even more complicated.

* If you sidestep the above point by simply not letting defrag split 
extents, you put a hard lower limit on the amount of fragmentation that 
will remain in a file if you want to preserve reflinks.  IOW, you can't 
defragment files past a certain point.  If we go this way, neither of 
the two files in the example from my previous email could be 
defragmented any further than they already are, because doing so would 
require splitting extents.

* Determining all the reflinks to a given region of a given extent is 
not a cheap operation, and the information may immediately be stale 
(because an operation right after you fetch the info might change 
things).  We could work around this by locking the extent somehow, but 
doing so would be expensive because you would have to hold the lock for 
the entire defrag operation.
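
To make the extent-splitting arithmetic above concrete, here is a small
stand-alone Python sketch (it only models block ranges; it is not btrfs
code, and the numbers are simply the ones from the two-file example
earlier in the thread):

#!/usr/bin/env python3

def regions(own_writes, other_writes, nblocks=64):
    """Split a 64-block file into maximal runs that are either still
    shared with the other file or private (rewritten locally, or original
    data that the other file no longer references)."""
    cuts = {0, nblocks}
    for lo, hi in own_writes + other_writes:
        cuts.add(lo)      # first block of a rewritten range
        cuts.add(hi + 1)  # first block after it
    bounds = sorted(cuts)
    hit = lambda b, ws: any(lo <= b <= hi for lo, hi in ws)
    out = []
    for lo, hi in zip(bounds, bounds[1:]):
        # A run never straddles a write boundary, so testing its first
        # block is enough to classify the whole run.
        shared = not hit(lo, own_writes) and not hit(lo, other_writes)
        out.append((lo, hi - 1, "shared" if shared else "private"))
    return out

writes_A = [(14, 15), (47, 48)]   # blocks rewritten in file A
writes_B = [(31, 32)]             # blocks rewritten in file B

for name, own, other in (("A", writes_A, writes_B),
                         ("B", writes_B, writes_A)):
    r = regions(own, other)
    print("file", name, "needs", len(r), "extents to keep sharing:", r)

Running it shows that each file needs 7 extents to keep just the 4
shared ranges (0-13, 16-30, 33-46 and 49-63) reflinked, which is exactly
the hard lower limit on remaining fragmentation described above.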
