From: Eric Mesa <eric@ericmesa.com>
To: Chris Murphy <lists@colorremedies.com>
Cc: Btrfs BTRFS <linux-btrfs@vger.kernel.org>
Subject: Re: Issues with btrfs send/receive with parents
Date: Mon, 10 Jun 2019 23:47:11 -0400
Message-ID: <2339083.MuzE7HVQJs@supermario.mushroomkingdom>
In-Reply-To: <CAJCQCtSLVxVtArLHrx0XD6J1oWOowqAnOPzeWVx3J25O7vxFQw@mail.gmail.com>

On Monday, June 10, 2019 9:10:17 PM EDT Chris Murphy wrote:
> On Mon, Jun 10, 2019 at 6:03 PM Eric Mesa <eric@ericmesa.com> wrote:
> > When I did the btrfs send/receive for SSDHome, it took 2.5 days to send
> > ~500GiB over a 1 Gbps (network) link to the 3 Gbps (SATA) drive in the
> > server. It also gave this error:
> > ERROR: failed to clone extents to ermesa/.cache/krunner/bookmarkrunnerfirefoxfavdbfile.sqlite: Invalid argument
> 
> While there are distinct send and receive errors possible, I'm not
> familiar with recognizing them. You can get a better idea of what the
> problem is by passing -vv or -vvv for a more verbose error on the side
> that's having the problem. My guess is this is a send error message.
> 
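Next time it fails I'll rerun with more verbosity and capture stderr from
each side separately; something like this sketch (the log file names are
just placeholders):

# btrfs send -vv -p 2019-06-08-1437/ 2019-06-09-1415/ 2>send.log | \
    ssh root@tanukimario 'btrfs receive -vv /media/backups/backups1/supermario-home' 2>receive.log
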
> > Let's say that snapshot A is a snapshot sent to the server without -p.
> > Sending the entire ~500 GB takes about 18 hours.
> > 
> > Then I take snapshot B and send it with -p; that takes 15 minutes or
> > so, depending on how much data I've added.
> > 
> > Then I do snapshot C - and here I always get an error.
> 
> It's most useful if you show exact commands, because it's not always
> obvious to everyone what the logic should be, and the error handling
> doesn't always stop a user from doing something that doesn't make a lot
> of sense. We need to know the name of the rw subvolume; the command to
> snapshot it; the full send/receive command for that first snapshot; the
> command for a subsequent snapshot; and the command to incrementally
> send/receive it.
> 

OK, here's an example from when it failed. I didn't save the error
output, so I'm just pulling the commands from my shell history for a run
that failed:

# btrfs sub snap -r /home/ /home/.snapshots/2019-06-08-1437
# cd /home/.snapshots/
# btrfs send -p /home/.snapshots/2019-06-05-postdefrag/ 2019-06-08-1437/ | ssh root@tanukimario btrfs receive /media/backups/backups1/supermario-home
# btrfs sub snap -r /home/ /home/.snapshots/2019-06-09-1415
# btrfs send -p 2019-06-08-1437/ 2019-06-09-1415/ | ssh root@tanukimario btrfs receive /media/backups/backups1/supermario-home
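One thing worth checking before the incremental send is that the parent
snapshot still exists on the receiver and was created by receive, i.e.
it carries a Received UUID matching the sender snapshot's UUID. A rough
check (assuming the received snapshot kept the same name on the server):

# btrfs subvolume show /home/.snapshots/2019-06-08-1437 | grep -i uuid
# ssh root@tanukimario "btrfs subvolume show /media/backups/backups1/supermario-home/2019-06-08-1437 | grep -i uuid"

The sender snapshot's UUID should appear as "Received UUID" on the
receiver's copy; if it doesn't, receive won't be able to find the parent.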



> > And the error is always something like:
> > 
> > ERROR: link ermesa/.mozilla/firefox/n35gu0fb.default/bookmarkbackups/bookmarks-2019-06-09_679_I1bs5PtgsPwtyXvcvcRdSg==.jsonlz4 -> ermesa/.mozilla/firefox/n35gu0fb.default/bookmarkbackups/bookmarks-2019-06-08_679_I1bs5PtgsPwtyXvcvcRdSg==.jsonlz4 failed: No such file or directory
> > 
> > 
> > 
> > It always involves either .cache or .mozilla - the types of files that are
> > constantly changing.
> > 
> > It doesn't matter if I do a defrag before snapshot C, followed by the
> > sync command. It seems that for SSDHome I can only do one full send
> > and then one incremental send with a parent.
> 
> I don't actually know the status of snapshot-aware defragmentation. It
> wasn't there, then it was there, then there were problems, and I think
> it was pulled rather than fixed - but I don't really remember. I also
> don't know if there's a difference between manual defragging and
> autodefrag, because I don't use either one. I do use reflinks, and I
> have done deduplication, and I don't have any send/receive failures; I
> do sometimes see slow sections of send/receive.
> 
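To clarify what I meant by "a defrag before snapshot C followed by the
sync command": it was roughly this (a sketch from memory, not the exact
invocation):

# btrfs filesystem defragment -r /home
# btrfs filesystem sync /home
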
> > Again, so far it seems to be working fine with the other drives, which
> > suggests to me that it's probably not the version of my kernel or
> > btrfs-progs or anything else.
> 
> Do you remember the mkfs command for this file system? Also helpful
> would be:
> 
> # btrfs insp dump-s -f /dev/X   ## for both the send- and receive-side
> file systems (only one device from each Btrfs volume is needed); this
> will give us an idea of what the mkfs options were, including feature flags.
> 

for the sender:
btrfs insp dump-s -f /dev/sdb1 
superblock: bytenr=65536, device=/dev/sdb1
---------------------------------------------------------
csum_type               0 (crc32c)
csum_size               4
csum                    0xa7282fe2 [match]
bytenr                  65536
flags                   0x1
                        ( WRITTEN )
magic                   _BHRfS_M [match]
fsid                    1080b060-4f1d-4b72-8544-ada43ec3cb54
metadata_uuid           1080b060-4f1d-4b72-8544-ada43ec3cb54
label                   SSDHome
generation              3034047
root                    373805891584
sys_array_size          194
chunk_root_generation   3008316
root_level              1
chunk_root              1010940559360
chunk_root_level        1
log_root                0
log_root_transid        0
log_root_level          0
total_bytes             1000203091968
bytes_used              648933253120
sectorsize              4096
nodesize                16384
leafsize (deprecated)   16384
stripesize              4096
root_dir                6
num_devices             1
compat_flags            0x0
compat_ro_flags         0x0
incompat_flags          0x161
                        ( MIXED_BACKREF |
                          BIG_METADATA |
                          EXTENDED_IREF |
                          SKINNY_METADATA )
cache_generation        3034047
uuid_tree_generation    3034047
dev_item.uuid           61b1e688-5aad-4dff-95b7-2500c08aefb4
dev_item.fsid           1080b060-4f1d-4b72-8544-ada43ec3cb54 [match]
dev_item.type           0
dev_item.total_bytes    1000203091968
dev_item.bytes_used     864416694272
dev_item.io_align       4096
dev_item.io_width       4096
dev_item.sector_size    4096
dev_item.devid          1
dev_item.dev_group      0
dev_item.seek_speed     0
dev_item.bandwidth      0
dev_item.generation     0
sys_chunk_array[2048]:
        item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 1048576)
                length 4194304 owner 2 stripe_len 65536 type SYSTEM
                io_align 4096 io_width 4096 sector_size 4096
                num_stripes 1 sub_stripes 0
                        stripe 0 devid 1 offset 1048576
                        dev_uuid 61b1e688-5aad-4dff-95b7-2500c08aefb4
        item 1 key (FIRST_CHUNK_TREE CHUNK_ITEM 1010940510208)
                length 33554432 owner 2 stripe_len 65536 type SYSTEM
                io_align 65536 io_width 65536 sector_size 4096
                num_stripes 1 sub_stripes 1
                        stripe 0 devid 1 offset 710839107584
                        dev_uuid 61b1e688-5aad-4dff-95b7-2500c08aefb4
backup_roots[4]:
        backup 0:
                backup_tree_root:       373802975232    gen: 3034046    level: 1
                backup_chunk_root:      1010940559360   gen: 3008316    level: 1
                backup_extent_root:     373801369600    gen: 3034046    level: 2
                backup_fs_root:         982774349824    gen: 2278090    level: 0
                backup_dev_root:        38486016        gen: 3033103    level: 1
                backup_csum_root:       373801058304    gen: 3034046    level: 2
                backup_total_bytes:     1000203091968
                backup_bytes_used:      648933277696
                backup_num_devices:     1

        backup 1:
                backup_tree_root:       373805891584    gen: 3034047    level: 1
                backup_chunk_root:      1010940559360   gen: 3008316    level: 1
                backup_extent_root:     373805121536    gen: 3034047    level: 2
                backup_fs_root:         982774349824    gen: 2278090    level: 0
                backup_dev_root:        38486016        gen: 3033103    level: 1
                backup_csum_root:       373804859392    gen: 3034047    level: 2
                backup_total_bytes:     1000203091968
                backup_bytes_used:      648933253120
                backup_num_devices:     1

        backup 2:
                backup_tree_root:       373804662784    gen: 3034044    level: 1
                backup_chunk_root:      1010940559360   gen: 3008316    level: 1
                backup_extent_root:     373803008000    gen: 3034044    level: 2
                backup_fs_root:         982774349824    gen: 2278090    level: 0
                backup_dev_root:        38486016        gen: 3033103    level: 1
                backup_csum_root:       373796159488    gen: 3034044    level: 2
                backup_total_bytes:     1000203091968
                backup_bytes_used:      648933285888
                backup_num_devices:     1

        backup 3:
                backup_tree_root:       373800550400    gen: 3034045    level: 1
                backup_chunk_root:      1010940559360   gen: 3008316    level: 1
                backup_extent_root:     373800271872    gen: 3034045    level: 2
                backup_fs_root:         982774349824    gen: 2278090    level: 0
                backup_dev_root:        38486016        gen: 3033103    level: 1
                backup_csum_root:       373799960576    gen: 3034045    level: 2
                backup_total_bytes:     1000203091968
                backup_bytes_used:      648933298176
                backup_num_devices:     1


for the receiver (same command, btrfs insp dump-s -f /dev/sdc1):
superblock: bytenr=65536, device=/dev/sdc1
---------------------------------------------------------
csum_type               0 (crc32c)
csum_size               4
csum                    0x47edc62b [match]
bytenr                  65536
flags                   0x1
                        ( WRITTEN )
magic                   _BHRfS_M [match]
fsid                    e1bc05b0-6fad-4f1a-9586-26a461b65d57
metadata_uuid           e1bc05b0-6fad-4f1a-9586-26a461b65d57
label                   Backups1
generation              15013
root                    368023306240
sys_array_size          129
chunk_root_generation   14915
root_level              1
chunk_root              22020096
chunk_root_level        1
log_root                0
log_root_transid        0
log_root_level          0
total_bytes             4000785104896
bytes_used              2350642929664
sectorsize              4096
nodesize                16384
leafsize (deprecated)   16384
stripesize              4096
root_dir                6
num_devices             1
compat_flags            0x0
compat_ro_flags         0x0
incompat_flags          0x161
                        ( MIXED_BACKREF |
                          BIG_METADATA |
                          EXTENDED_IREF |
                          SKINNY_METADATA )
cache_generation        15013
uuid_tree_generation    15013
dev_item.uuid           6640d884-bcf6-4f0c-8d97-a31d72133a76
dev_item.fsid           e1bc05b0-6fad-4f1a-9586-26a461b65d57 [match]
dev_item.type           0
dev_item.total_bytes    4000785104896
dev_item.bytes_used     2509351419904
dev_item.io_align       4096
dev_item.io_width       4096
dev_item.sector_size    4096
dev_item.devid          1
dev_item.dev_group      0
dev_item.seek_speed     0
dev_item.bandwidth      0
dev_item.generation     0
sys_chunk_array[2048]:
        item 0 key (FIRST_CHUNK_TREE CHUNK_ITEM 22020096)
                length 8388608 owner 2 stripe_len 65536 type SYSTEM|DUP
                io_align 65536 io_width 65536 sector_size 4096
                num_stripes 2 sub_stripes 0
                        stripe 0 devid 1 offset 22020096
                        dev_uuid 6640d884-bcf6-4f0c-8d97-a31d72133a76
                        stripe 1 devid 1 offset 30408704
                        dev_uuid 6640d884-bcf6-4f0c-8d97-a31d72133a76
backup_roots[4]:
        backup 0:
                backup_tree_root:       368023109632    gen: 15010      level: 1
                backup_chunk_root:      22020096        gen: 14915      level: 1
                backup_extent_root:     368023158784    gen: 15010      level: 2
                backup_fs_root:         30621696        gen: 7  level: 0
                backup_dev_root:        499089408       gen: 14915      level: 1
                backup_csum_root:       368016621568    gen: 15010      level: 3
                backup_total_bytes:     4000785104896
                backup_bytes_used:      2350286446592
                backup_num_devices:     1

        backup 1:
                backup_tree_root:       368024518656    gen: 15011      level: 1
                backup_chunk_root:      22020096        gen: 14915      level: 1
                backup_extent_root:     368024551424    gen: 15011      level: 2
                backup_fs_root:         30621696        gen: 7  level: 0
                backup_dev_root:        499089408       gen: 14915      level: 1
                backup_csum_root:       368024682496    gen: 15011      level: 3
                backup_total_bytes:     4000785104896
                backup_bytes_used:      2350642929664
                backup_num_devices:     1


> > And dmesg.log is attached
> 
> [    6.949347] BTRFS info (device sdb1): enabling auto defrag
> 
> Could be related. And then also
> 
> [    9.906695] usb-storage 8-1.3:1.0: USB Mass Storage device detected
> [    9.907006] scsi host7: usb-storage 8-1.3:1.0
> [   10.950446] scsi 7:0:0:0: Direct-Access     B&N      NOOK             0322 PQ: 0 ANSI: 2
> [   10.951110] sd 7:0:0:0: Attached scsi generic sg7 type 0
> [   10.951161] sd 7:0:0:0: Power-on or device reset occurred
> [   10.952880] sd 7:0:0:0: [sdg] Attached SCSI removable disk
> [snip]
> [  267.794434] usb 9-1.1: reset SuperSpeed Gen 1 USB device number 3 using xhci_hcd
> [  272.838054] usb 9-1.1: device descriptor read/8, error -110
> [  272.941832] usb 9-1.1: reset SuperSpeed Gen 1 USB device number 3 using xhci_hcd
> [  277.958049] usb 9-1.1: device descriptor read/8, error -110
> [  278.236339] usb 9-1.1: reset SuperSpeed Gen 1 USB device number 3 using xhci_hcd
> 
> 
> USB-enclosed drives can be troublesome with any file system, precisely
> because of these seemingly random resets. I had the same thing in an
> early form of my setup, and it did cause me problems that Btrfs worked
> around. But I considered it untenable and fixed it with a good-quality
> self-powered USB hub (don't rely on bus power), or more specifically
> one that comes with a high-amp power adapter. It needs to be able to
> drive all the drives under their read/write load, which for laptop
> drives is ~0.35 A each.
> 
> You really shouldn't be getting link resets like the above, even though
> I suspect it's unrelated to the current problem report.

The drives in question aren't USB, though. Those resets are from the
USB 3 hub that I use for my SD cards.
--
Eric Mesa
