First of all, as per the instructions on the wiki for the mailing list, everything asked for in the "What information to provide when asking a support question" section is at the end of my email. Look for a row of ## to find where that info starts.

The setup: I've got a system with three drives containing btrfs on them - SSDHome, NotHome, and Photos. Recently, I finally got a server in the basement with two btrfs drives - backups1 and backups2. It will, naturally, be used for backups. The whole reason I'm using btrfs is the awesome ability to take snapshots without them taking up too much space, thanks to COW.

When I did a btrfs send/receive for SSDHome, it took 2.5 days to send ~500GiB over a 1Gbps link to the 3Gbps drive in the server. It also produced this error:

ERROR: failed to clone extents to ermesa/.cache/krunner/bookmarkrunnerfirefoxfavdbfile.sqlite: Invalid argument

and the btrfs process took ~99% CPU according to top. The subvolumes on NotHome, by contrast, took MUCH LESS time for the same amount of data. So I did a bunch of internet research and found a thread on the mailing list saying that reflinks can cause send to take a lot longer. I also noticed I didn't have autodefrag in the fstab mount options for SSDHome, but did have it on the other drives that don't have the problem. So I ran:

btrfs fi defrag -v -r /home/

and it listed 5 failures. But after that, I did send/receive again with SSDHome and it was SO MUCH BETTER! btrfs never (or almost never) took more than 2.5% CPU while it was running, and it took just 18 hours to send the data over the same cables and to the same drive. So I thought I'd solved the problem (although at the cost of my snapshots taking up more space on the SSD). I was then able to do a send/receive with the -p option so that the next one went faster. Life was good. But then...

Here's a situation I pretty consistently run into (but only with SSDHome - it works fine with the other drives so far):
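For clarity, here's roughly what my workflow looks like. The snapshot directory and server paths below are stand-ins, not my exact setup:

```shell
# Take a read-only snapshot (send requires read-only snapshots)
btrfs subvolume snapshot -r /home /home/.snapshots/home-A

# Full (non-incremental) send over the network to the backup server
btrfs send /home/.snapshots/home-A | \
    ssh server "btrfs receive /mnt/backups1/snapshots"

# The defrag that sped things up for me (run on the source filesystem)
btrfs fi defrag -v -r /home/
```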
Let's say snapshot A is a snapshot sent to the server without -p. It sends the entire ~500GiB over 18 hours. Then I take snapshot B and send it with -p - that takes 15 minutes or so, depending on how much data I've added. Then I take snapshot C - and here I always get an error, always something like:

ERROR: link ermesa/.mozilla/firefox/n35gu0fb.default/bookmarkbackups/bookmarks-2019-06-09_679_I1bs5PtgsPwtyXvcvcRdSg==.jsonlz4 -> ermesa/.mozilla/firefox/n35gu0fb.default/bookmarkbackups/bookmarks-2019-06-08_679_I1bs5PtgsPwtyXvcvcRdSg==.jsonlz4 failed: No such file or directory

It always involves either .cache or .mozilla - the kinds of files that are constantly changing. It doesn't matter if I do a defrag followed by the sync command before taking snapshot C. It seems that on SSDHome I can only do one full send and then one incremental (parent) send. Again, so far it's working fine with the other drives, which suggests to me it's probably not my kernel version, btrfs-progs version, or anything like that. I've got the info below, and when I ran a scrub and a btrfs check, everything came back fine.

So any help you can give me here would be awesome, because I'd prefer not to have to always send against snapshot A as the parent (as time passes since A, each incremental snap is going to take longer to send), and it's weird that this only happens with that one drive.
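In case it helps to see the exact pattern, the chain I'm attempting looks roughly like this (snapshot names and paths are illustrative, not my real ones):

```shell
# Snapshot A: full send (works, ~18 hours)
btrfs subvolume snapshot -r /home /home/.snapshots/A
btrfs send /home/.snapshots/A | \
    ssh server "btrfs receive /mnt/backups1/snapshots"

# Snapshot B: incremental send with A as parent (works, ~15 minutes)
btrfs subvolume snapshot -r /home /home/.snapshots/B
btrfs send -p /home/.snapshots/A /home/.snapshots/B | \
    ssh server "btrfs receive /mnt/backups1/snapshots"

# Snapshot C: incremental send with B as parent - this is the step that
# always fails with the "link ... failed: No such file or directory" error
btrfs subvolume snapshot -r /home /home/.snapshots/C
btrfs send -p /home/.snapshots/B /home/.snapshots/C | \
    ssh server "btrfs receive /mnt/backups1/snapshots"
```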
#########################################################

uname -a:
Linux supermario.mushroomkingdom 5.0.9-200.fc29.x86_64 #1 SMP Mon Apr 22 00:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

btrfs --version:
btrfs-progs v4.20.2

btrfs fi show:
Label: 'SSDHome'  uuid: 1080b060-4f1d-4b72-8544-ada43ec3cb54
        Total devices 1 FS bytes used 604.32GiB
        devid    1 size 931.51GiB used 805.05GiB path /dev/sdb1

Label: 'NotHome'  uuid: 09344d53-db1e-43d0-8e43-c41a5884e172
        Total devices 1 FS bytes used 2.57TiB
        devid    1 size 3.64TiB used 2.59TiB path /dev/sdc1

Label: 'Photos'  uuid: 27cc1330-c4e3-404f-98f6-f23becec76b5
        Total devices 2 FS bytes used 2.88TiB
        devid    1 size 5.46TiB used 2.88TiB path /dev/sdd
        devid    2 size 3.64TiB used 2.88TiB path /dev/sda1

btrfs fi df /home:
Data, single: total=788.01GiB, used=601.54GiB
System, single: total=36.00MiB, used=112.00KiB
Metadata, single: total=17.01GiB, used=2.78GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

And dmesg.log is attached.

Plus some additional information:

btrfs scrub status /home/:
scrub status for 1080b060-4f1d-4b72-8544-ada43ec3cb54
        scrub started at Wed Jun 5 18:51:01 2019 and finished after 00:30:30
        total bytes scrubbed: 586.34GiB with 0 errors

btrfs check --force (because even in single-user mode I couldn't unmount /home, even though it's on its own drive):
Checking filesystem on /dev/sdb1
UUID: 1080b060-4f1d-4b72-8544-ada43ec3cb54
found 648822566912 bytes used, no error found
total csum bytes: 630493176
total tree bytes: 2977923072
total fs tree bytes: 2119663616
total extent tree bytes: 132972544
btree space waste bytes: 509661764
file data blocks allocated: 1882978136064
 referenced 1827744911360

And you can see some of the info from my attempts to figure this out in the following reddit posts:
https://www.reddit.com/r/btrfs/comments/byq7gr/list_of_folders_to_exclude_from_btrfs_if_home_is/
https://www.reddit.com/r/btrfs/comments/bwh2wa/extremely_slow_sendreceive_and_errors_upon/

--
Eric Mesa