From: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
To: Dave Chinner <david@fromorbit.com>
Cc: "Darrick J. Wong" <darrick.wong@oracle.com>,
	fdmanana@kernel.org, fstests@vger.kernel.org,
	linux-btrfs@vger.kernel.org, Filipe Manana <fdmanana@suse.com>,
	linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [patch] file dedupe (and maybe clone) data corruption (was Re: [PATCH] generic: test for deduplication between different files)
Date: Thu, 6 Sep 2018 23:53:06 -0400
Message-ID: <20180907035305.GJ26989@hungrycats.org>
In-Reply-To: <20180906083809.GF27618@dastard>


On Thu, Sep 06, 2018 at 06:38:09PM +1000, Dave Chinner wrote:
> On Fri, Aug 31, 2018 at 01:10:45AM -0400, Zygo Blaxell wrote:
> > On Thu, Aug 30, 2018 at 04:27:43PM +1000, Dave Chinner wrote:
> > > On Thu, Aug 23, 2018 at 08:58:49AM -0400, Zygo Blaxell wrote:
> > > > On Mon, Aug 20, 2018 at 08:33:49AM -0700, Darrick J. Wong wrote:
> > > > > On Mon, Aug 20, 2018 at 11:09:32AM +1000, Dave Chinner wrote:
> > > > > > 	- is documenting rejection on request alignment grounds
> > > > > > 	  (i.e. EINVAL) in the man page sufficient for app
> > > > > > 	  developers to understand what is going on here?
> > > > > 
> > > > > I think so.  The manpage says: "The filesystem does not support
> > > > > reflinking the ranges of the given files", which (to my mind) covers
> > > > > this case of not supporting dedupe of EOF blocks.

[snip]

> tl;dr we don't need a new clone or dedupe API

Well, I wouldn't say _that_ yet.

It seems less likely that the API we need will be a minor revision of the
existing clone or dedupe and more likely that it will be a complementary
API for OOB dedupe engines, or a dedupe API that takes (device,
physical, length) tuples instead of (fd, offset, length) tuples.
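
As a strawman, such an API might look something like this (entirely
hypothetical, nothing like it exists in the kernel today):

    /* Hypothetical physical-space dedupe call, for contrast with
     * the (fd, offset, length) pairs of the existing
     * FIDEDUPERANGE/dedupe_file_range interface. */
    int dedupe_phys(uint32_t src_dev, uint64_t src_phys,
                    uint32_t dst_dev, uint64_t dst_phys,
                    uint64_t length);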

> > For future development I've abandoned the entire dedupe_file_range
> > approach.  I need to be able to read and dedupe the data blocks of
> > the filesystem directly without having to deal with details like which
> > files those blocks belong to, especially on filesystems with lots of
> > existing deduped blocks and snapshots. 
> 
> IOWs, your desired OOB dedupe algorithm is:
> 
> 	a) ask the filesystem where all its file data is

Actually, it's "ask the filesystem where all the *new* file data is"
since we don't want to read any unique data twice on subsequent runs.

> 	b) read that used space to build a data hash index
> 	c) on all the data hash collisions find the owners of the
> 	   colliding blocks
> 	d) if the block data is the same dedupe it
> 
> I agree - that's a simple and effective algorithm. It's also the
> obvious solution to an expert in the field.

It's obvious, but if you literally implement that, you end up with a
filesystem shattered into billions of fragments.

At step c) I need to figure out _which_ colliding block I want to keep.
That's where the algorithm needs to inspect the higher-level file
structure to determine the physical and logical situation for each block.
That leads to one of:

	a) store the hashes and block locations in the hash table
	b) relocate the blocks (defrag)
	c) replace some blocks with a reference to other blocks (dedupe)

This sounds a bit like what xfs_fsr does (or will do?).
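
In code, the step-c decision is roughly this sketch (every helper
predicate here is an invented stand-in for the real selection policy):

    #include <stdint.h>

    enum action { STORE, DEFRAG, DEDUPE };

    /* invented stand-ins for the real policy: */
    int same_data(uint64_t a, uint64_t b);
    int badly_fragmented(uint64_t blk);

    /* Given a new block and a previously hashed block with the same
     * hash, pick one of the three outcomes above. */
    static enum action decide(uint64_t old_blk, uint64_t new_blk)
    {
        if (!same_data(old_blk, new_blk))
            return STORE;   /* hash collision only: just record it */
        if (badly_fragmented(old_blk) && badly_fragmented(new_blk))
            return DEFRAG;  /* relocate first, dedupe a later pass */
        return DEDUPE;      /* keep one copy, reference the other */
    }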

Bees also operates under a constant-RAM constraint, so it doesn't operate
in two distinct "collect data" and "act on data collected" passes,
and cannot spend memory to store data about more than a few extents at
any time.  bees dedupes contiguous blocks pairwise as the second duplicate
block is discovered--sometimes keeping the second block, if the selection
algorithm prefers new blocks over blocks previously encountered (e.g. if
a defragmented version of an extent comes along, and we don't want to
undo the defrag operation by deduping non-defragged extents over it).

This does not produce the fastest possible OOB dedupe result--that comes
from an algorithm that does operate in two passes and sorts duplicate
extents by length between the passes.  The fast algorithm produces a
free space vs time curve that starts with a long flat interval followed
by a steep slope that gradually flattens over time, and is suitable
for filesystems that are small enough that the entire filesystem can
be processed in a batch run.  The bees algorithm produces a continuous
straight-ish line that is more suitable for continuous incremental
operation using O(1) memory.
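
A sketch of that incremental loop, reusing decide() from the sketch
above (hash_block, table_*, dedupe and relocate are again invented
stand-ins; the hash table is fixed-size, which is where the O(1)
memory bound comes from):

    #define BLOCK_SIZE 4096

    struct entry { uint64_t blocknr; };         /* one slot per bucket */

    /* more invented stand-ins: */
    uint64_t hash_block(uint64_t blk);
    struct entry *table_lookup(uint64_t hash);  /* fixed-size table */
    void table_insert(uint64_t hash, uint64_t blk);
    int prefer_new(uint64_t a, uint64_t b);
    void dedupe(uint64_t keep, uint64_t kill, uint64_t len);
    void relocate(uint64_t blk);

    void scan(uint64_t first_blk, uint64_t last_blk)
    {
        for (uint64_t blk = first_blk; blk < last_blk; blk++) {
            uint64_t hash = hash_block(blk);
            struct entry *e = table_lookup(hash);

            if (!e) {
                table_insert(hash, blk);    /* first sighting */
                continue;
            }
            switch (decide(e->blocknr, blk)) {
            case DEDUPE:
                if (prefer_new(blk, e->blocknr)) {
                    /* e.g. the new copy is a defragged version */
                    dedupe(blk, e->blocknr, BLOCK_SIZE);
                    e->blocknr = blk;
                } else {
                    dedupe(e->blocknr, blk, BLOCK_SIZE);
                }
                break;
            case DEFRAG:
                relocate(blk);          /* revisit on a later pass */
                break;
            case STORE:
                break;                  /* different data, move on */
            }
        }
    }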

> > The file structure is frankly
> > just noise for dedupe on large filesystems.
> 
> We learnt that lesson back in the late 1990s. xfsdump, xfs_fsr, all
> the SGI^WHPE HSM scanning tools, etc all avoid the directory
> structure because it's so slow. XFS's bulkstat interface, OTOH, can
> scan for target inodes at over a million inodes/sec if you've got
> the IO and CPU to throw at it....

A million inodes/sec?  btrfs TREE_SEARCH_V2 barely manages 2000 inodes/sec
on a 2.2GHz AMD A6--and that's just in the kernel, not counting delivery
of the data to userspace.  I'm clearly using the wrong filesystem here.

> > I'm building a translation
> > layer for bees that does this--i.e. the main dedupe loop works only with
> > raw data blocks, and the translation layer maps read(blocknr, length)
> > and dedupe(block1, block2, length) requests onto the existing kernel
> > read(fd, offset, length) and dedupe(fd1, off1, fd2, off2, length)
> 
> That's FS_IOC_GETFSMAP. :P

No, that's the layer that _uses_ FS_IOC_GETFSMAP.  ;)
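
The shape of that layer, roughly (function names invented here; on
btrfs the implementations sit on top of TREE_SEARCH_V2, LOGICAL_INO
and the dedupe ioctl):

    /* Everything above this interface thinks in physical block
     * addresses; everything below translates those to whatever
     * the kernel actually offers. */
    struct phys_ops {
        /* read 'length' bytes at physical address 'blocknr' */
        ssize_t (*read)(uint64_t blocknr, void *buf, size_t length);
        /* compare-and-dedupe two physical ranges */
        int (*dedupe)(uint64_t keep, uint64_t kill, size_t length);
        /* find the next data extent at or after *blocknr */
        int (*next_extent)(uint64_t *blocknr, uint64_t *length);
    };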

> FYI, in 2012 I came up with a plan for dedupe in XFS:
> 
> 	a) GETFSMAP to query reverse map tree to find file
> 	   data blocks (don't try to dedupe unused space)
> 	b) read the underlying block device in ascending block
> 	   order and hash each block to build a collision tree.
> 	   Trivially parallelised.
> 	c) GETFSMAP to query rmap for the owners of colliding
> 	   blocks
> 	d) For all file data owner maps of a colliding block
> 		- bulkstat the inode numbers returned by GETFSMAP
> 		  and open_by_handle to get fd's
> 		- run dedupe syscall
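
For reference, step a) of that plan against the real FS_IOC_GETFSMAP
interface comes out roughly like this (error handling trimmed;
hash_range() is a stand-in for step b):

    #include <linux/fsmap.h>
    #include <sys/ioctl.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define NR_RECS 1024

    void hash_range(uint32_t dev, uint64_t phys, uint64_t len);

    void walk_file_data(int mount_fd)
    {
        struct fsmap_head *head = calloc(1, fsmap_sizeof(NR_RECS));

        head->fmh_count = NR_RECS;
        /* high key: everything to the end of the filesystem */
        head->fmh_keys[1].fmr_device = UINT32_MAX;
        head->fmh_keys[1].fmr_flags = UINT32_MAX;
        head->fmh_keys[1].fmr_physical = UINT64_MAX;
        head->fmh_keys[1].fmr_owner = UINT64_MAX;
        head->fmh_keys[1].fmr_offset = UINT64_MAX;

        while (ioctl(mount_fd, FS_IOC_GETFSMAP, head) == 0 &&
               head->fmh_entries > 0) {
            for (unsigned i = 0; i < head->fmh_entries; i++) {
                struct fsmap *r = &head->fmh_recs[i];

                /* file data only: skip free space and metadata */
                if (r->fmr_flags & FMR_OF_SPECIAL_OWNER)
                    continue;
                hash_range(r->fmr_device, r->fmr_physical,
                           r->fmr_length);
            }
            if (head->fmh_recs[head->fmh_entries - 1].fmr_flags &
                FMR_OF_LAST)
                break;
            fsmap_advance(head);    /* low key = last record seen */
        }
        free(head);
    }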

I think steps c and d show room for improvement.

I'd like the filesystem to receive a pair of contiguous colliding block
ranges, and have the filesystem locate and update all the owners at once
after performing the comparison (skip updating any owners that have
changed after the comparison).  That effectively merges c and d into
a single syscall.  There are a few advantages to doing it this way,
especially when it comes to requirements for locking (e.g. no need to
lock files against modification during the block compare).
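
Extending the strawman from earlier, the merged call might be shaped
like this (entirely hypothetical, names invented):

    /* One syscall: compare two physical ranges, then repoint every
     * owner of 'kill_physical' at 'keep_physical', skipping owners
     * whose data changed after the compare.  The filesystem already
     * has the rmap/backref data needed to find the owners. */
    struct dedupe_phys_range {
        uint64_t keep_physical;  /* the copy whose data survives */
        uint64_t kill_physical;  /* the copy replaced by references */
        uint64_t length;
        uint64_t flags;          /* reserved, must be zero */
        uint64_t owners_updated; /* out: references repointed */
    };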

With the existing dedupe syscall, the app has to dig up from the (device,
physical) tuple through the filesystem to get (fd, offset) tuples to pass
to dedupe, which will dig down through the filesystem to turn them back
into (device, physical) tuples again for comparison and then dig back
up again to modify the (inode, offset) tuples for any other references.
It seems like the filesystem could skip one leg of this journey if it
starts with the block address.  If the data changed underneath so the
comparison fails, none of the other work is necessary, and most of the
other work requires seeks.

> You've taken a dozen or so failures to realise what was obvious to
> me at the outset

It was obvious to me too; however, I have to deliver a dedupe engine that
doesn't blow up the system it's running on, and calling LOGICAL_INO on
btrfs more than absolutely necessary has historically been a bad idea.

I tried many alternatives to avoid this (basically ordering the dedupe
syscalls to try to converge on the correct result eventually), but some
use cases (especially snapshots) cannot be done sanely any other way,
so I'm pressing ahead with pure physical block addressing and hoping
that the filesystems catch up.

> - your translation layer will essentially hold the
> same information that XFS already maintains on disk in its rmap
> btrees. IIRC btrfs has semi-equivalent reverse mapping functionality
> on disk (back pointers?), so maybe your best bet would be to
> implement FSMAP for btrfs and then you have a dedupe app that works
> efficiently on both XFS and btrfs...

btrfs has LOGICAL_INO_V2 and TREE_SEARCH_V2 ioctls which provide the
same sort of data as GETFSMAP in a differently structured address space.
INO_PATHS lets me implement an equivalent of open_by_handle.
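
For example, finding the owners of a block goes through
LOGICAL_INO_V2 (kernel 4.15+) like this (a trimmed sketch of the
real interface; the 64KiB buffer size is an arbitrary choice):

    #include <linux/btrfs.h>
    #include <sys/ioctl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int print_owners(int fs_fd, uint64_t logical)
    {
        struct btrfs_data_container *refs = calloc(1, 65536);
        struct btrfs_ioctl_logical_ino_args args = {
            .logical = logical,
            .size = 65536,
            .flags = BTRFS_LOGICAL_INO_ARGS_IGNORE_OFFSET,
            .inodes = (uintptr_t)refs,
        };
        int ret = ioctl(fs_fd, BTRFS_IOC_LOGICAL_INO_V2, &args);

        if (ret == 0) {
            /* val[] is a list of (inode, offset, root) triples */
            for (uint32_t i = 0; i < refs->elem_cnt; i += 3)
                printf("root %llu inode %llu offset %llu\n",
                       (unsigned long long)refs->val[i + 2],
                       (unsigned long long)refs->val[i],
                       (unsigned long long)refs->val[i + 1]);
        }
        free(refs);
        return ret;
    }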

The existing translation layer in bees uses the information in the
existing filesystem structures to map logical to physical and back
as required.  The new translation layer operates _only_ in the physical
space at the top, so as far as the core dedupe/defrag engine is concerned,
the filesystem underneath is just an iterable collection of contiguous
data extents that can be read and deduped directly.

The only data bees stores outside of the filesystem is the dedupe block
hash table, the current value of the extent iterator (aka the low key for
the next GETFSMAP call), and the transid of the filesystem the last time
the extent iterator was reset.  btrfs embeds the transid (an indicator
of age) into every data extent record and into metadata page headers, so
bees can rapidly skip old extents in SEARCH_V2 for fast incremental scans.

The translation layer can work as long as every physical block has a
unique address in a data type that has addition, subtraction, less-than
and bisection operators, and the filesystem can map a position in physical
address space to a position in logical address space and back.  I only
need bisection for cases that require backward iteration from a known
key (btrfs csums require this, and the new translation layer enables the
dedupe code to use the existing csums instead of reading and hashing the
data blocks).
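
Spelled out as code, that contract is tiny (a sketch; a plain 64-bit
integer satisfies it on both btrfs and XFS):

    typedef uint64_t phys_t;

    phys_t phys_add(phys_t a, uint64_t n);   /* a + n        */
    phys_t phys_sub(phys_t a, uint64_t n);   /* a - n        */
    int    phys_lt(phys_t a, phys_t b);      /* total order  */
    /* bisect: largest key <= target in a sorted index; this is
     * the backward-iteration case, e.g. finding the csum item
     * that covers a given btrfs block */
    phys_t phys_bisect(const phys_t *sorted, size_t n, phys_t target);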

On btrfs I open the files and read through them instead of scraping the
block device directly--IMHO a reasonable decision, given that btrfs's
RAID and compression make raw block reads from a device very
complicated.  Profiling so far has never shown open() to have
significant overhead compared to the other operations; however, if
btrfs had an "I am root, read this arbitrary data block for me" ioctl,
I'd eliminate even that little overhead.

It looks like "open device, seek to physical, read block" is sufficient
for XFS (no compression or other encoding to worry about).

btrfs uses a 64-bit virtual address space (compatible with FIEMAP),
while GETFSMAP uses a 32-bit device ID and a 64-bit offset.  This
means all
bees DDT entries are immediately 25% bigger on XFS, unless I enumerate the
device IDs and recycle some zero bits in .physical as a device ID index.
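
The recycling trick would look something like this (assuming 4KiB
blocks, so the low 12 bits of a byte address are always zero, and at
most 4096 devices per filesystem):

    #define DEV_BITS 12     /* 4KiB alignment frees 12 low bits */
    #define DEV_MASK ((1ULL << DEV_BITS) - 1)

    /* dev_index is bees's own enumeration of the fmr_device values
     * seen so far, not the raw 32-bit device ID */
    static inline uint64_t pack(uint64_t physical, unsigned dev_index)
    {
        return physical | (dev_index & DEV_MASK);
    }

    static inline uint64_t unpack_phys(uint64_t a) { return a & ~DEV_MASK; }
    static inline unsigned unpack_dev(uint64_t a)  { return a & DEV_MASK; }

That keeps a DDT entry's address field at 64 bits.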

I wonder if a future btrfs implementation of GETFSMAP will use the
device ID and physical offset fields, or just return a constant device
ID with .physical carrying the 64-bit virtual address that the rest of
the filesystem uses.

I wonder how compressed data would work with GETFSMAP?  In btrfs the
LOGICAL_INO and FIEMAP ioctls treat the entire compressed extent as a
single entity with no distinct blocks inside.  Bees needs a unique address
for every block, even the compressed ones, so some of that is handled
by emulation using TREE_SEARCH_V2 in the translation layer (the physical
data type is 64-bit extent base plus an uncompressed block offset).

> Cheers,
> 
> Dave.
> -- 
> Dave Chinner
> david@fromorbit.com
> 


