linux-btrfs.vger.kernel.org archive mirror
From: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
To: "Austin S. Hemmelgarn" <ahferroin7@gmail.com>
Cc: webmaster@zedlx.com, linux-btrfs@vger.kernel.org
Subject: Re: Feature requests: online backup - defrag - change RAID level
Date: Wed, 11 Sep 2019 12:26:05 -0400	[thread overview]
Message-ID: <20190911162605.GA22121@hungrycats.org> (raw)
In-Reply-To: <d958659e-6dc0-fa0a-7da9-2d88df4588f5@gmail.com>

On Wed, Sep 11, 2019 at 08:02:40AM -0400, Austin S. Hemmelgarn wrote:
> On 2019-09-10 19:32, webmaster@zedlx.com wrote:
> > 
> > Quoting "Austin S. Hemmelgarn" <ahferroin7@gmail.com>:
> > 
> > 
> > > > Defrag may break up extents. Defrag may fuse extents. But it
> > > > shouldn't ever unshare extents.
> > 
> > > Actually, splitting or merging extents will unshare them in a large
> > > majority of cases.
> > 
> > Ok, this point seems to be repeated over and over without any proof, and
> > it is illogical to me.
> > 
> > About merging extents: a defrag should merge extents ONLY when both
> > extents are shared by the same files (and when those extents are
> > neighbours in both files). In other words, defrag should always merge
> > without unsharing. Let's call that operation "fusing extents", so that
> > there are no more misunderstandings.
> And I reiterate: defrag only operates on the file it's passed in.  It
> needs to for efficiency reasons (we had a reflink-aware defrag for a
> while a few years back; it was removed because performance limitations
> made it unusable in the cases where you actually needed it).  Defrag
> doesn't even know that there are reflinks to the extents it's
> operating on.
> 
> Now factor in that _any_ write will result in unsharing the region being
> written to, rounded to the nearest full filesystem block in both directions
> (this is mandatory, it's a side effect of the copy-on-write nature of BTRFS,
> and is why files that experience heavy internal rewrites get fragmented very
> heavily and very quickly on BTRFS).
> 
> Given this, defrag isn't willfully unsharing anything; it's just a
> side effect of how it works (since it's rewriting the block layout of
> the file in place).
> > 
> > === I CHALLENGE you and anyone else on this mailing list: ===
> > 
> >   - Show me an example where splitting an extent requires unsharing,
> > and where the split is needed for defragmentation.
> >
> > Make it clear and write it yourself; I don't want any machine-made
> > outputs.
> > 
> Start with the above comment about all writes unsharing the region being
> written to.
> 
> Now, extrapolating from there:
> 
> Assume you have two files, A and B, each consisting of 64 filesystem
> blocks in a single shared extent.  Now assume somebody writes a few
> bytes to the middle of file B, straddling the boundary between blocks
> 31 and 32, and that you get similar writes to file A straddling blocks
> 14-15 and 47-48.
> 
> After all of that, file A will be 5 extents:
> 
> * A reflink to blocks 0-13 of the original extent.
> * A single isolated extent consisting of the new blocks 14-15
> * A reflink to blocks 16-46 of the original extent.
> * A single isolated extent consisting of the new blocks 47-48
> * A reflink to blocks 49-63 of the original extent.
> 
> And file B will be 3 extents:
> 
> * A reflink to blocks 0-30 of the original extent.
> * A single isolated extent consisting of the new blocks 31-32.
> * A reflink to blocks 33-63 of the original extent.
> 
> Note that there are a total of four contiguous sequences of blocks that are
> common between both files:
> 
> * 0-13
> * 16-30
> * 33-46
> * 49-63
> 
> There is no way to completely defragment either file without splitting the
> original extent (which is still there, just not fully referenced by either
> file) unless you rewrite the whole file to a new single extent (which would,
> of course, completely unshare the whole file).  In fact, if you want to
> ensure that those shared regions stay reflinked, there's no way to
> defragment either file without _increasing_ the number of extents in that
> file (either file would need 7 extents to properly share only those 4
> regions), and even then only one of the files could be fully defragmented.
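
(As a cross-check, those four runs fall out mechanically if you
intersect the block ranges of each file that are still backed by the
original extent.  A throwaway sketch, with the ranges hardcoded from
the example above:)

/* Intersect the original-extent-backed block ranges of files A and B
 * from the example above; prints the four runs that remain shared:
 * 0-13, 16-30, 33-46, 49-63. */
#include <stdio.h>

struct range { int lo, hi; };   /* inclusive block numbers */

int main(void)
{
        /* blocks still backed by the original 64-block extent */
        struct range a[] = { { 0, 13 }, { 16, 46 }, { 49, 63 } };
        struct range b[] = { { 0, 30 }, { 33, 63 } };

        for (int i = 0; i < 3; i++)
                for (int j = 0; j < 2; j++) {
                        int lo = a[i].lo > b[j].lo ? a[i].lo : b[j].lo;
                        int hi = a[i].hi < b[j].hi ? a[i].hi : b[j].hi;
                        if (lo <= hi)
                                printf("%d-%d\n", lo, hi);
                }
        return 0;
}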

Arguably, the kernel's defrag ioctl should go ahead and do the extent
relocation and update all the reflinks at once, using the file given
in the argument as the "canonical" block order.  That is: the fd and
offset range you pass in is checked; if it's not physically contiguous,
the extents in the range are copied to a single contiguous extent; then
all the other references to the old extent(s) within the offset range
are rewritten to point to the new extent; then the old extent is
discarded.

It is possible to do this from userspace now using a mix of data copies
and dedupe, but it's much more efficient to use the facilities available
in the kernel:  in particular, the kernel can lock the extent in question
while all of this is going on, and the kernel can update shared snapshot
metadata pages directly instead of duplicating them and doing identical
updates on each copy.
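
A minimal sketch of that userspace approach for one file (file names
and error handling are illustrative; older kernels cap how much a
single dedupe call will process, so a real tool would chunk the range):

/* Defragment argv[1] without unsharing it, from userspace: copy its
 * data into a fresh file, then FIDEDUPERANGE the original range onto
 * the new (hopefully contiguous) extent. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* FIDEDUPERANGE, struct file_dedupe_range */

int main(int argc, char **argv)
{
        int orig = open(argv[1], O_RDWR);      /* fragmented file */
        int tmp  = open(argv[2], O_RDWR | O_CREAT | O_TRUNC, 0600);
        off_t len = lseek(orig, 0, SEEK_END);
        static char buf[1 << 20];

        /* 1. Rewrite the data: one big sequential copy gives the
         *    allocator its best shot at one contiguous extent. */
        for (off_t off = 0; off < len; ) {
                ssize_t n = pread(orig, buf, sizeof(buf), off);
                if (n <= 0)
                        break;
                pwrite(tmp, buf, n, off);
                off += n;
        }
        fsync(tmp);

        /* 2. Dedupe: rewrite orig's range to share tmp's new extent.
         *    The kernel locks and compares both ranges first, so this
         *    is safe against concurrent modification. */
        struct file_dedupe_range *d =
                calloc(1, sizeof(*d) +
                          sizeof(struct file_dedupe_range_info));
        d->src_offset = 0;
        d->src_length = len;
        d->dest_count = 1;
        d->info[0].dest_fd = orig;
        d->info[0].dest_offset = 0;
        if (ioctl(tmp, FIDEDUPERANGE, d) < 0)
                perror("FIDEDUPERANGE");
        else
                printf("deduped %llu bytes, status %d\n",
                       (unsigned long long)d->info[0].bytes_deduped,
                       d->info[0].status);
        return 0;
}

After step 2 the original file is backed by the new extent and the
temporary file can be unlinked; but note every *other* reflink still
points at the old fragmented extents, which is exactly the part that
is cheap to fix inside the kernel and expensive to fix from userspace.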

This sort of extent reference manipulation, particularly of extents
referenced by readonly snapshots, used to break a lot of things (btrfs
send in particular) but the same issues came up again for dedupe,
and they now seem to be fixed as of 5.3 or so.  Maybe it's time to try
shared-extent-aware defrag again.

In practice, such an improved defrag ioctl would probably need some
more limit parameters, e.g.  "just skip over any extent with more than
1000 references" or "do a two-pass algorithm and relocate data only if
every reference to the data is logically contiguous" to avoid getting
bogged down on extents which require more iops to defrag in the present
than can possibly be saved by using the defrag result in the future.
That makes the defrag API even uglier, with even more magic baked-in
behavior to get in the way of users who know what they're doing, but
some stranger on a mailing list requested it, so why not...  :-P
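
To make that concrete, the knobs above could conceivably be expressed
as extra fields on the existing defrag argument struct.  To be clear,
this layout is purely hypothetical; nothing like it exists in any
kernel:

/* HYPOTHETICAL extension of struct btrfs_ioctl_defrag_range_args.
 * The last two fields are invented here to restate the limits
 * described above as a concrete interface; they exist in no kernel. */
#include <linux/types.h>

struct btrfs_ioctl_defrag_range_args_v2 {
        __u64 start;            /* existing: start of the defrag range */
        __u64 len;              /* existing: length of the defrag range */
        __u64 flags;            /* existing: BTRFS_DEFRAG_RANGE_* */
        __u32 extent_thresh;    /* existing: leave larger extents alone */
        __u32 compress_type;    /* existing: recompress while we're here */

        __u32 max_refs;         /* new: skip any extent with more than
                                   this many references (e.g. 1000) */
        __u32 contig_only;      /* new: two-pass mode, relocate data only
                                   if every reference to it is logically
                                   contiguous */
};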

> Such a situation generally won't happen if you're just dealing with
> read-only snapshots, but is not unusual when dealing with regular files that
> are reflinked (which is not an uncommon situation on some systems, as a lot
> of people have `cp` aliased to reflink things whenever possible).
