On 2019/9/9 8:18 PM, Qu Wenruo wrote:
>
> On 2019/9/9 7:25 PM, zedlryqc@server53.web-hosting.com wrote:
>>
>> Quoting Qu Wenruo:
>>>> 1) Full online backup (or copy, whatever you want to call it)
>>>> btrfs backup [-f] <source> <partition>
>>>> - backs up the btrfs filesystem given by <source> to the partition
>>>> <partition> (with all subvolumes).
>>>
>>> Why not just btrfs send?
>>>
>>> Or do you want to keep the whole subvolume structure/layout?
>>
>> Yes, I want to keep the whole subvolume structure/layout. I want to
>> keep everything. Usually, when I want to back up a partition, I want
>> to keep everything, and I suppose most other people have a similar
>> idea.
>>
>>> I'd say the current send/receive is more flexible.
>>
>> Um, 'flexibility' has nothing to do with it. Send/receive is a
>> completely different use case.
>> So, each one has some benefits and some drawbacks, but send/receive
>> cannot replace 'full online backup'.
>>
>> Here is where send/receive is lacking:
>>     - too complicated to do if many subvolumes are involved
>>     - may require recursive subvolume enumeration in order to emulate
>> 'full online backup'
>>     - may require extra storage space
>>     - is not mountable, so it is not easy to browse the backup contents
>>     - not easy to recover just a few selected files from the backup
>> There are probably more areas where send/receive is lacking, but I
>> think I have given a sufficient number of important differences which
>> show that send/receive cannot successfully replace the functionality
>> of 'full online backup'.

Forgot to mention this part.

If your primary objective is to migrate your data to another device
online (mounted, without unmounting the fs), then you can already add a
new device and then remove the old one to do exactly that.

That would be even more efficient than LVM (at least the
non-thin-provisioned kind), as we only move used space.

If your objective is to create a full copy as a backup, then I'd say my
new btrfs-image data dump patchset may be your best choice. The only
downside is that you need to at least mount the source fs read-only.

A true online backup is not that easy: any concurrent write can screw
up the backup process, so it must be done unmounted. Even btrfs send
handles this by forcing the source subvolume to be RO, and I can't find
an easy solution to address that.

Thanks,
Qu
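For concreteness, the add-then-remove migration Qu describes above
boils down to two commands; the device names and mount point below are
hypothetical:

    # Add the new device to the mounted filesystem.
    btrfs device add /dev/sdc /mnt
    # Removing the old device relocates all used extents to the
    # remaining devices before it is released.
    btrfs device remove /dev/sdb /mnt

Because only allocated chunks are relocated, free space is never
copied, which is where the efficiency advantage over a block-level LVM
move comes from.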
>>
>>> And you also need to understand that btrfs also integrates volume
>>> management, thus it's not just <device>: you also need the RAID
>>> level and things like that.
>>
>> This is a minor point, so please, let's not get into too many
>> irrelevant details here.
>>
>> There can be a sensible default of 'single data, DUP metadata', and a
>> way for the user to override this default, but that feature is not so
>> important. If the user wants to change the RAID level, he can easily
>> do it later by mounting the backup.
>>
>>> All of that can already be done by send/receive, although at the
>>> subvolume level.
>>
>> Yeah, maybe I should manually type it all out for all subvolumes, one
>> by one. I must also be careful to do it in the correct order if I
>> want it not to consume extra space.
>> And the backup is not mountable.
>>
>> This proposal (workaround) of yours appears to me as too complicated,
>> too error-prone, and missing important features.
>>
>> But, I just thought, you can actually emulate 'full online backup'
>> with send/receive. Here is how.
>> You write a script which does the following:
>>     - makes a temporary snapshot of every subvolume
>>     - uses 'btrfs send' to send all the temporary snapshots,
>> on-the-fly (maybe via a pipe), in the correct order, to a process
>> running 'btrfs receive', which should then immediately write it all
>> to the destination partition. All the buffers can stay in memory.
>>     - when all the snapshots are received and written to the
>> destination, fixes up the subvol IDs
>>     - deletes the temporary snapshots from the source
>> Of course, this script should then be a part of the standard btrfs
>> tools.
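In rough shell terms, the core of such a script might look like the
sketch below. It is deliberately naive and untested: the mount points
are hypothetical, the subvol-ID fix-up step is omitted, no -p/-c clone
sources are passed (so sharing between snapshots is not preserved,
which is exactly the "correct order" problem mentioned above), and the
received subvolumes keep their temporary names:

    #!/bin/sh
    SRC=/mnt/source    # mounted source filesystem (top level)
    DST=/mnt/backup    # mounted, freshly created destination filesystem

    btrfs subvolume list "$SRC" | awk '{ print $NF }' |
    while read -r sub; do
        # btrfs send requires a read-only snapshot as its source.
        btrfs subvolume snapshot -r "$SRC/$sub" "$SRC/$sub.backup-tmp"
        # Stream it straight into the destination, no intermediate file.
        btrfs send "$SRC/$sub.backup-tmp" | btrfs receive "$DST"
        # Drop the temporary snapshot from the source.
        btrfs subvolume delete "$SRC/$sub.backup-tmp"
    done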
>>
>>> Please check if send/receive is suitable for your use case.
>>
>> No. Absolutely not.
>>
>>
>>>> 2) Sensible defrag
>>>> The defrag is currently a joke.
>>
>>>> How to do it:
>>>> - The extents must not be unshared, but just shuffled a bit.
>>>> Unsharing the extents is, in most situations, not tolerable.
>>
>>> I definitely see cases where unsharing extents makes sense, so at
>>> least we should let the user determine what they want.
>>
>> Maybe there are such cases, but I would say that the vast majority of
>> users (99.99%) in the vast majority of cases (99.99%) don't want the
>> defrag operation to reduce free disk space.
>>
>>> What's wrong with the current file-based defrag?
>>> If you want to defrag a subvolume, just iterate through all files.
>>
>> I repeat: the defrag should not decrease free space. That's the
>> 'normal' expectation.
>
> Since you're talking about btrfs, it's going to do CoW for metadata no
> matter what: as long as you change anything, btrfs will cause extra
> space usage.
> (Although the final result may not use extra disk space, since the
> freed space is as large as the newly allocated space, to maintain CoW
> the newly allocated space can't overlap with the old data.)
>
> Furthermore, talking about snapshots with space wasted by extent
> bookkeeping, it's definitely possible that the user wants to break the
> shared extents:
>
> Subvol 257, inode 257 has the following file extents:
> (257 EXTENT_DATA 0)
> disk bytenr X len 16M
> offset 0 num_bytes 4K  << only 4K of the whole 16M extent is referenced
>
> Subvol 258, inode 257 has the following file extents:
> (257 EXTENT_DATA 0)
> disk bytenr X len 16M
> offset 0 num_bytes 4K  << shared with the one in subvol 257
> (257 EXTENT_DATA 4K)
> disk bytenr Y len 16M
> offset 0 num_bytes 4K  << similar case, only 4K of 16M is used
>
> In that case, the user definitely wants to defrag the file in subvol
> 258: if the extent at bytenr Y can be freed, we free up 16M and
> allocate a new 8K extent for subvol 258, inode 257.
> (And the user will want to defrag the extent in subvol 257, inode 257,
> too.)
>
> That's why knowledge of btrfs technical details can make a difference.
> Sometimes you may find an idea brilliant and wonder why btrfs is not
> implementing it, but if you understand btrfs to some extent, you will
> know the answer by yourself.
>
>>
>>>> - I think it would be wrong to use a general deduplication
>>>> algorithm for this. Instead, the information about the shared
>>>> extents should be analyzed given the starting state of the
>>>> filesystem, and then the algorithm should produce an optimal
>>>> solution based on the currently shared extents.
>>>
>>> Please be more specific, like giving an example for it.
>>
>> Let's say that there is a file FFF with extents e11, e12, e13, e22,
>> e23, e33, e34:
>> - in subvolA the file FFF consists of e11, e12, e13
>> - in subvolB the file FFF consists of e11, e22, e23
>> - in subvolC the file FFF consists of e11, e22, e33, e34
>>
>> After defrag, where the 'selected subvolume' is subvolC, the extents
>> are ordered on disk as follows:
>>
>> e11,e22,e33,e34 - e23 - e12,e13
>
> Inodes named FFF in different subvolumes are different inodes. They
> have no knowledge of other inodes in other subvolumes.
>
> If FFF in subvol C is e11, e22, e33, e34, then that's it.
> I still don't see the point.
>
> And what's the on-disk bytenr of all these extents? Which has the
> larger bytenr and length?
>
> Please provide a better description, like xfs_io -c "fiemap -v" output
> before and after.
>
>>
>> In the list above, the comma denotes neighbouring extents; the dash
>> indicates that there can be a gap.
>> As you can see in the list, the file FFF is fully defragmented in
>> subvolC, since its extents occupy neighbouring disk sectors.
>>
>>
>>>> 3) Downgrade to 'single' or 'DUP' (also, a general easy way to
>>>> switch between RAID levels)
>>>> Currently, as far as I can gather, the user has to do a
>>>> "btrfs balance start -dconvert=single -mconvert=single <mount>",
>>>> then delete a drive, which is a somewhat ridiculous sequence of
>>>> operations.
>>
>>> That's a shortcut for a chunk profile change.
>>> My first thought on this is that it could cause more problems than
>>> benefits.
>>> (It only benefits profile downgrades, thus it only makes sense for
>>> RAID1->SINGLE, DUP->SINGLE, and RAID10->RAID0, nothing else.)
>>
>> Those listed cases are exactly the ones I judge to be most important.
>> Three important cases.
>
> I'd argue that, being a downgrade, it's not that important, as most
> users want to replace the missing/bad device and maintain the RAID
> profile.
>
>>
>>> I still prefer the safer allocate-new-chunk way of converting
>>> chunks, even at the cost of extra IO.
>>
>> I don't mind whether it allocates new chunks or not. It is better, in
>> my opinion, if new chunks are not allocated, but both ways are
>> essentially OK.
>>
>> What I am complaining about is that at one point in time, after
>> issuing the command:
>>     btrfs balance start -dconvert=single -mconvert=single
>> and before issuing 'btrfs device delete', the system could be in too
>> fragile a state, with extents unnecessarily spread out over two
>> drives, which is a completely unnecessary operation, and it also
>> seems to me that it could be dangerous in some situations involving
>> potentially malfunctioning drives.
>
> In that case, you just need to replace the malfunctioning device
> rather than falling back to SINGLE.
>
> Thanks,
> Qu
>
>>
>> Please reconsider.
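For reference, the downgrade sequence being criticized looks roughly
like this today (mount point and device name are hypothetical; -f is
needed because balance refuses to reduce metadata redundancy without
it):

    # Step 1: convert every data and metadata chunk to 'single'.
    btrfs balance start -f -dconvert=single -mconvert=single /mnt
    # Step 2: only now can the second device be removed.
    btrfs device delete /dev/sdb /mnt

The window between the two commands is exactly the fragile state
complained about above: the converted 'single' chunks may sit on either
device until the delete relocates them.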