* Send-receive performance
@ 2016-07-20  9:15 Libor Klepáč
  2016-07-22 12:59 ` Henk Slager
  2016-07-22 13:47 ` Martin Raiber
  0 siblings, 2 replies; 5+ messages in thread
From: Libor Klepáč @ 2016-07-20  9:15 UTC (permalink / raw)
  To: linux-btrfs

Hello,
we use BackupPC to back up our hosting machines.

I have recently migrated it to btrfs, so we can use send-receive for offsite backups of our backups.

I have several btrfs volumes; each hosts an nspawn container, which runs in the /system subvolume and has BackupPC data in the /backuppc subvolume.
I use btrbk to do snapshots and transfer.
The local side is set to keep 5 daily snapshots, the remote side to hold some history (not much yet; I've been using it this way for a few weeks).

If you know BackupPC's behaviour: for every backup (even incremental), it creates the full directory tree of each backed-up machine, even if there are no modified files, and places one small file in each directory, which holds some info for BackupPC.
So after a few days I ran into ENOSPC on one volume, because my metadata grew due to inlining.
I switched metadata from DUP to single (now I see it's possible to change the inline file size, right?).
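
(For the record, the DUP-to-single change itself is just a metadata-profile balance,
something along the lines of
#btrfs balance start -f -mconvert=single /mnt/btrfs/hosting
where -f is needed because the conversion reduces metadata redundancy.)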

My problem is that on some volumes send-receive is relatively fast (rates in MB/s or hundreds of kB/s), but on the biggest volume (biggest in space and in the contained filesystem trees) the rate is just 5-30 kB/s.

Here is the btrbk progress output, copied:
785MiB 47:52:00 [12.9KiB/s] [4.67KiB/s]

i.e. 785 MiB in 48 hours.

The receiver has high I/O wait (90-100%) when I push data using btrbk.
When I run dd over ssh it can do 50-75 MB/s.
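
By "dd over ssh" I mean a raw throughput check along these lines, with host name
and target path as placeholders:
#dd if=/dev/zero bs=1M count=1000 | ssh backup-host "dd of=/tmp/ddtest bs=1M"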

The sending machine is Debian jessie with kernel 4.5.0-0.bpo.2-amd64 (upstream 4.5.3), btrfs-progs 4.4.1. It is a virtual machine running on a volume exported from an MD3420, 4 SAS disks in RAID10.

The receiving machine is Debian jessie on a Dell T20 with 4x3TB disks in MD RAID5; the kernel is 4.4.0-0.bpo.1-amd64 (upstream 4.4.6), btrfs-progs 4.4.1.

The btrfs volumes were created with the versions listed above.

Sender:
---------
#mount | grep hosting
/dev/sdg on /mnt/btrfs/hosting type btrfs (rw,noatime,space_cache,subvolid=5,subvol=/)
/dev/sdg on /var/lib/container/hosting type btrfs (rw,noatime,space_cache,subvolid=259,subvol=/system)
/dev/sdg on /var/lib/container/hosting/var/lib/backuppc type btrfs (rw,noatime,space_cache,subvolid=260,subvol=/backuppc)

#btrfs filesystem usage /mnt/btrfs/hosting
Overall:
    Device size:                 840.00GiB
    Device allocated:            815.03GiB
    Device unallocated:           24.97GiB
    Device missing:                  0.00B
    Used:                        522.76GiB
    Free (estimated):            283.66GiB      (min: 271.18GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,single: Size:710.98GiB, Used:452.29GiB
   /dev/sdg      710.98GiB

Metadata,single: Size:103.98GiB, Used:70.46GiB
   /dev/sdg      103.98GiB

System,DUP: Size:32.00MiB, Used:112.00KiB
   /dev/sdg       64.00MiB

Unallocated:
   /dev/sdg       24.97GiB

# btrfs filesystem show /mnt/btrfs/hosting
Label: 'BackupPC-BcomHosting'  uuid: edecc92a-646a-4585-91a0-9cbb556303e9
        Total devices 1 FS bytes used 522.75GiB
        devid    1 size 840.00GiB used 815.03GiB path /dev/sdg

#Receiver:
#mount | grep hosting
/dev/mapper/vgPecDisk2-lvHostingBackupBtrfs on /mnt/btrfs/hosting type btrfs (rw,noatime,space_cache,subvolid=5,subvol=/)

#btrfs filesystem usage /mnt/btrfs/hosting/
Overall:
    Device size:                 896.00GiB
    Device allocated:            604.07GiB
    Device unallocated:          291.93GiB
    Device missing:                  0.00B
    Used:                        565.98GiB
    Free (estimated):            313.62GiB      (min: 167.65GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              512.00MiB      (used: 55.80MiB)

Data,single: Size:530.01GiB, Used:508.32GiB
   /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs   530.01GiB

Metadata,single: Size:74.00GiB, Used:57.65GiB
   /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs    74.00GiB

System,DUP: Size:32.00MiB, Used:80.00KiB
   /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs    64.00MiB

Unallocated:
   /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs   291.93GiB

#btrfs filesystem show /mnt/btrfs/hosting/
Label: none  uuid: 2d7ea471-8794-42ed-bec2-a6ad83f7b038
        Total devices 1 FS bytes used 564.56GiB
        devid    1 size 896.00GiB used 604.07GiB path /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs



What can I do about it? I tried to defragment the /backuppc subvolume (without the recursive option); should I do it for all snapshots/subvolumes on both sides?
Would an upgrade to a 4.6.x kernel help (there is 4.6.3 in backports)?

Thanks for any answer.

With regards,

Libor





* Re: Send-receive performance
  2016-07-20  9:15 Send-receive performance Libor Klepáč
@ 2016-07-22 12:59 ` Henk Slager
  2016-07-22 13:27   ` Libor Klepáč
  2016-07-22 13:47 ` Martin Raiber
  1 sibling, 1 reply; 5+ messages in thread
From: Henk Slager @ 2016-07-22 12:59 UTC (permalink / raw)
  To: Libor Klepáč; +Cc: linux-btrfs

On Wed, Jul 20, 2016 at 11:15 AM, Libor Klepáč <libor.klepac@bcom.cz> wrote:
> Hello,
> we use backuppc to backup our hosting machines.
>
> I have recently migrated it to btrfs, so we can use send-receive for offsite backups of our backups.
>
> I have several btrfs volumes; each hosts an nspawn container, which runs in the /system subvolume and has BackupPC data in the /backuppc subvolume.
> I use btrbk to do snapshots and transfer.
> The local side is set to keep 5 daily snapshots, the remote side to hold some history (not much yet; I've been using it this way for a few weeks).
>
> If you know BackupPC's behaviour: for every backup (even incremental), it creates the full directory tree of each backed-up machine, even if there are no modified files, and places one small file in each directory, which holds some info for BackupPC.
> So after a few days I ran into ENOSPC on one volume, because my metadata grew due to inlining.
> I switched metadata from DUP to single (now I see it's possible to change the inline file size, right?).

I would try mounting both the send and receive volumes with max_inline=0.
Then, for all small new and changed files, the file data will be
stored in data chunks and not inline in the metadata chunks.
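
Something along these lines, using the sender's mountpoint as the example (a
remount should be enough, or add the option to fstab):
#mount -o remount,max_inline=0 /mnt/btrfs/hosting
#grep max_inline /proc/mounts
Keep in mind that max_inline only affects newly written files.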

That you changed the metadata profile from DUP to single is unrelated in
principle. Single metadata instead of DUP means half the write I/O
for the hard disks, so in that sense it might speed up send actions a
bit. I guess almost all of the time is spent in seeks.

> My problem is that on some volumes send-receive is relatively fast (rates in MB/s or hundreds of kB/s), but on the biggest volume (biggest in space and in the contained filesystem trees) the rate is just 5-30 kB/s.
>
> Here is the btrbk progress output, copied:
> 785MiB 47:52:00 [12.9KiB/s] [4.67KiB/s]
>
> i.e. 785 MiB in 48 hours.
>
> The receiver has high I/O wait (90-100%) when I push data using btrbk.
> When I run dd over ssh it can do 50-75 MB/s.

It looks like the send part is the speed bottleneck. You can test and
isolate it by doing a dummy send, piping it to  | mbuffer > /dev/null,
and seeing what speed you get.
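
For example, with two of the existing btrbk snapshots as parent and child
(the snapshot names here are only placeholders):
#btrfs send -p /mnt/btrfs/hosting/backuppc.OLD /mnt/btrfs/hosting/backuppc.NEW | mbuffer > /dev/null
If that alone is already in the kB/s range, the receiver is not the bottleneck.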

> Sending machine is debian jessie with kernel 4.5.0-0.bpo.2-amd64 (upstream 4.5.3) , btrfsprogs 4.4.1. It is virtual machine running on volume exported from MD3420, 4 SAS disks in RAID10.
>
> Receiving machine is debian jessie on Dell T20 with 4x3TB disks in MD RAID5, kernel is 4.4.0-0.bpo.1-amd64 (upstream 4.4.6), btrfs-progs 4.4.1
>
> BTRFS volumes were created using those listed versions.
>
> Sender:
> ---------
> #mount | grep hosting
> /dev/sdg on /mnt/btrfs/hosting type btrfs (rw,noatime,space_cache,subvolid=5,subvol=/)
> /dev/sdg on /var/lib/container/hosting type btrfs (rw,noatime,space_cache,subvolid=259,subvol=/system)
> /dev/sdg on /var/lib/container/hosting/var/lib/backuppc type btrfs (rw,noatime,space_cache,subvolid=260,subvol=/backuppc)
>
> #btrfs filesystem usage /mnt/btrfs/hosting
> Overall:
>     Device size:                 840.00GiB
>     Device allocated:            815.03GiB
>     Device unallocated:           24.97GiB
>     Device missing:                  0.00B
>     Used:                        522.76GiB
>     Free (estimated):            283.66GiB      (min: 271.18GiB)
>     Data ratio:                       1.00
>     Metadata ratio:                   1.00
>     Global reserve:              512.00MiB      (used: 0.00B)
>
> Data,single: Size:710.98GiB, Used:452.29GiB
>    /dev/sdg      710.98GiB
>
> Metadata,single: Size:103.98GiB, Used:70.46GiB
>    /dev/sdg      103.98GiB

This is a very large metadata/data ratio. Large and scattered
metadata, even on fast rotational media, results in a slow send
operation in my experience (incremental send, about 10G of metadata).
So hopefully, once all the small files and the many directories from
BackupPC are in data chunks and the metadata is significantly smaller,
send will be faster. However, maybe it is just the huge number of
files, and not the inlining of small files, that makes the metadata so big.

I assume incremental send of snapshots is done.

> System,DUP: Size:32.00MiB, Used:112.00KiB
>    /dev/sdg       64.00MiB
>
> Unallocated:
>    /dev/sdg       24.97GiB
>
> # btrfs filesystem show /mnt/btrfs/hosting
> Label: 'BackupPC-BcomHosting'  uuid: edecc92a-646a-4585-91a0-9cbb556303e9
>         Total devices 1 FS bytes used 522.75GiB
>         devid    1 size 840.00GiB used 815.03GiB path /dev/sdg
>
> #Reciever:
> #mount | grep hosting
> /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs on /mnt/btrfs/hosting type btrfs (rw,noatime,space_cache,subvolid=5,subvol=/)
>
> #btrfs filesystem usage /mnt/btrfs/hosting/
> Overall:
>     Device size:                 896.00GiB
>     Device allocated:            604.07GiB
>     Device unallocated:          291.93GiB
>     Device missing:                  0.00B
>     Used:                        565.98GiB
>     Free (estimated):            313.62GiB      (min: 167.65GiB)
>     Data ratio:                       1.00
>     Metadata ratio:                   1.00
>     Global reserve:              512.00MiB      (used: 55.80MiB)
>
> Data,single: Size:530.01GiB, Used:508.32GiB
>    /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs   530.01GiB
>
> Metadata,single: Size:74.00GiB, Used:57.65GiB
>    /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs    74.00GiB
>
> System,DUP: Size:32.00MiB, Used:80.00KiB
>    /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs    64.00MiB
>
> Unallocated:
>    /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs   291.93GiB
>
> #btrfs filesystem show /mnt/btrfs/hosting/
> Label: none  uuid: 2d7ea471-8794-42ed-bec2-a6ad83f7b038
>         Total devices 1 FS bytes used 564.56GiB
>         devid    1 size 896.00GiB used 604.07GiB path /dev/mapper/vgPecDisk2-lvHostingBackupBtrfs
>
>
>
> What can i do about it? I tried to defragment /backuppc subvolume (without recursive option), should i do it for all snapshots/subvolumes on both sides?
> Should upgrade to 4.6.x kernel help (there is 4.6.3 in backports)?

I think defragmenting won't help much in this case; it results in CoW
writes in the metadata, and the files themselves are mostly small, as I
understand it. A 4.6.x kernel and progs also won't help in principle.


* Re: Send-receive performance
  2016-07-22 12:59 ` Henk Slager
@ 2016-07-22 13:27   ` Libor Klepáč
  2016-07-29 12:25     ` Libor Klepáč
  0 siblings, 1 reply; 5+ messages in thread
From: Libor Klepáč @ 2016-07-22 13:27 UTC (permalink / raw)
  To: Henk Slager; +Cc: linux-btrfs


Hello,

On Friday 22 July 2016 14:59:43 CEST, Henk Slager wrote:
> On Wed, Jul 20, 2016 at 11:15 AM, Libor Klepáč <libor.klepac@bcom.cz> wrote:
> > Hello,
> > we use backuppc to backup our hosting machines.
> > 
> > I have recently migrated it to btrfs, so we can use send-receive for
> > offsite backups of our backups.
> > 
> > I have several btrfs volumes, each hosts nspawn container, which runs in
> > /system subvolume and has backuppc data in /backuppc subvolume .
> > I use btrbk to do snapshots and transfer.
> > Local side is set to keep 5 daily snapshots, remote side to hold some
> > history. (not much yet, i'm using it this way for few weeks).
> > 
> > If you know backuppc behaviour: for every backup (even incremental), it
> > creates full directory tree of each backed up machine even if it has no
> > modified files and places one small file in each, which holds some info
> > for backuppc. So after a few days I ran into ENOSPC on one volume,
> > because my metadata grew due to inlining. I switched from mdata=DUP
> > to mdata=single (now I see it's possible to change inline file size,
> > right?).
> I would try mounting both send and receive volumes with max_inline=0
> So then for all small new- and changed files, the filedata will be
> stored in data chunks and not inline in the metadata chunks.

OK, I will try. Is there a way to move existing files from metadata to data
chunks? Something like btrfs balance with a convert filter?

> That you changed metadata profile from dup to single is unrelated in
> principle. single for metadata instead of dup is half the write I/O
> for the harddisks, so in that sense it might speed up send actions a
> bit. I guess almost all time is spend in seeks.

Yes, I just didn't realize that so many files would end up in the metadata
structures, and it caught me by surprise.

> 
> > My problem is that on some volumes send-receive is relatively fast (rate
> > in MB/s or hundreds of kB/s) but on the biggest volume (biggest in space and
> > biggest in contained filesystem trees) the rate is just 5-30kB/s.
> >
> > Here is the btrbk progress output, copied:
> > 785MiB 47:52:00 [12.9KiB/s] [4.67KiB/s]
> >
> > i.e. 785 MiB in 48 hours.
> >
> > The receiver has high I/O wait (90-100%) when I push data using btrbk.
> > When I run dd over ssh it can do 50-75MB/s.
> 
> The send part is the speed bottleneck as it looks like, you can test
> and isolate it by doing a dummy send and pipe it to  | mbuffer >
> /dev/null  and see what speed you get.

I tried it already; I did an incremental send to a file:
#btrfs send -v -p ./backuppc.20160712/  ./backuppc.20160720_1/ | pv > /mnt/data1/send
At subvol ./backuppc.20160720_1/
joining genl thread
18.9GiB 21:14:45 [ 259KiB/s]

Copied it over scp to the receiver at 50.9 MB/s.
Now I will try receive.


> > Sending machine is debian jessie with kernel 4.5.0-0.bpo.2-amd64 (upstream
> > 4.5.3) , btrfsprogs 4.4.1. It is virtual machine running on volume
> > exported from MD3420, 4 SAS disks in RAID10.
> > 
> > Receiving machine is debian jessie on Dell T20 with 4x3TB disks in MD
> > RAID5, kernel is 4.4.0-0.bpo.1-amd64 (upstream 4.4.6), btrfs-progs 4.4.1
> > 
> > BTRFS volumes were created using those listed versions.
> > 
> > Sender:
> > ---------
> > #mount | grep hosting
> > /dev/sdg on /mnt/btrfs/hosting type btrfs
> > (rw,noatime,space_cache,subvolid=5,subvol=/) /dev/sdg on
> > /var/lib/container/hosting type btrfs
> > (rw,noatime,space_cache,subvolid=259,subvol=/system) /dev/sdg on
> > /var/lib/container/hosting/var/lib/backuppc type btrfs
> > (rw,noatime,space_cache,subvolid=260,subvol=/backuppc)
> > 
> > #btrfs filesystem usage /mnt/btrfs/hosting
> > 
> > Overall:
> >     Device size:                 840.00GiB
> >     Device allocated:            815.03GiB
> >     Device unallocated:           24.97GiB
> >     Device missing:                  0.00B
> >     Used:                        522.76GiB
> >     Free (estimated):            283.66GiB      (min: 271.18GiB)
> >     Data ratio:                       1.00
> >     Metadata ratio:                   1.00
> >     Global reserve:              512.00MiB      (used: 0.00B)
> > 
> > Data,single: Size:710.98GiB, Used:452.29GiB
> > 
> >    /dev/sdg      710.98GiB
> > 
> > Metadata,single: Size:103.98GiB, Used:70.46GiB
> > 
> >    /dev/sdg      103.98GiB
> 
> This is a very large metadata/data ratio. Large and scattered
> metadata, even on fast rotational media, results in a slow send
> operation in my experience (incremental send, about 10G of metadata). So
> hopefully, when all the small files and many directories from backuppc
> are in data chunks and metadata is significantly smaller, send will be
> faster. However, maybe it is just the huge amount of files and not
> inlining of small files that makes metadata so big.
BackupPC says
"Pool is 462.30GB comprising 5140707 files and 4369 directories";
that is only the file pool, not counting all the per-server trees.

> 
> I assume incremental send of snapshots is done.

Yes, it was incremental.

Is anyone interested in the btrfs-debug-tree -t 2 output?
It's 2.3GB (187MB with xz -0 compression).

Libor


* Re: Send-receive performance
  2016-07-20  9:15 Send-receive performance Libor Klepáč
  2016-07-22 12:59 ` Henk Slager
@ 2016-07-22 13:47 ` Martin Raiber
  1 sibling, 0 replies; 5+ messages in thread
From: Martin Raiber @ 2016-07-22 13:47 UTC (permalink / raw)
  To: linux-btrfs


On 20.07.2016 11:15 Libor Klepáč wrote:
> Hello,
> we use backuppc to backup our hosting machines.
>
> I have recently migrated it to btrfs, so we can use send-receive for offsite backups of our backups.
>
> I have several btrfs volumes; each hosts an nspawn container, which runs in the /system subvolume and has BackupPC data in the /backuppc subvolume.
> I use btrbk to do snapshots and transfer.
> The local side is set to keep 5 daily snapshots, the remote side to hold some history (not much yet; I've been using it this way for a few weeks).
>
> If you know BackupPC's behaviour: for every backup (even incremental), it creates the full directory tree of each backed-up machine, even if there are no modified files, and places one small file in each directory, which holds some info for BackupPC.
> So after a few days I ran into ENOSPC on one volume, because my metadata grew due to inlining.
> I switched metadata from DUP to single (now I see it's possible to change the inline file size, right?).
I am biased, but UrBackup works like BackupPC, except that it has a client
and, like btrbk, puts every backup into a separate btrfs subvolume, with
snapshotting reducing the metadata workload. You could then create read-only
snapshots from the UrBackup subvolumes and use e.g. buttersink to copy
them to another btrfs.

So maybe try that?
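
The read-only snapshot step would be plain btrfs, for example (paths are
hypothetical):
#btrfs subvolume snapshot -r /backups/urbackup/clientA/160722-1030 /backups/tosync/clientA-160722
buttersink or btrfs send/receive can then pick up the read-only snapshots from
there.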

Regards,
Martin




* Re: Send-receive performance
  2016-07-22 13:27   ` Libor Klepáč
@ 2016-07-29 12:25     ` Libor Klepáč
  0 siblings, 0 replies; 5+ messages in thread
From: Libor Klepáč @ 2016-07-29 12:25 UTC (permalink / raw)
  To: linux-btrfs


On Friday 22 July 2016 13:27:15 CEST, Libor Klepáč wrote:
> Hello,
> 
> On Friday 22 July 2016 14:59:43 CEST, Henk Slager wrote:
> 
> > On Wed, Jul 20, 2016 at 11:15 AM, Libor Klepáč <libor.klepac@bcom.cz>
> > wrote:
> 
> > > Hello,
> > > we use backuppc to backup our hosting machines.
> > > 
> > > I have recently migrated it to btrfs, so we can use send-receive for
> > > offsite backups of our backups.
> > > 
> > > I have several btrfs volumes, each hosts nspawn container, which runs
> > > in
> > > /system subvolume and has backuppc data in /backuppc subvolume .
> > > I use btrbk to do snapshots and transfer.
> > > Local side is set to keep 5 daily snapshots, remote side to hold some
> > > history. (not much yet, i'm using it this way for few weeks).
> > > 
> > > If you know backuppc behaviour: for every backup (even incremental), it
> > > creates full directory tree of each backed up machine even if it has no
> > > modified files and places one small file in each, which holds some info
> > > for backuppc. So after a few days I ran into ENOSPC on one volume,
> > > because my metadata grew due to inlining. I switched from
> > > mdata=DUP
> > > to mdata=single (now I see it's possible to change inline file size,
> > > right?).
> > 
> > I would try mounting both send and receive volumes with max_inline=0
> > So then for all small new- and changed files, the filedata will be
> > stored in data chunks and not inline in the metadata chunks.
> 
> 
> OK, I will try. Is there a way to move existing files from metadata to data
> chunks? Something like btrfs balance with a convert filter?
>
Written on 25.7.2016:
I will recreate on new filesystems and do a new send/receive.

Written on 29.7.2016:
I created new filesystems, or copied the data to new subvolumes after mounting
with max_inline=0.

The difference is remarkable. For example,
before:
------------------
btrfs filesystem usage  /mnt/btrfs/as/
Overall:
    Device size:                 320.00GiB
    Device allocated:            144.06GiB
    Device unallocated:          175.94GiB
    Device missing:                  0.00B
    Used:                        122.22GiB
    Free (estimated):            176.33GiB      (min: 88.36GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              512.00MiB      (used: 40.86MiB)

Data,single: Size:98.00GiB, Used:97.61GiB
   /dev/sdb       98.00GiB

Metadata,single: Size:46.00GiB, Used:24.61GiB
   /dev/sdb       46.00GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/sdb       64.00MiB

Unallocated:
   /dev/sdb      175.94GiB

after:
-----------------------
btrfs filesystem usage  /mnt/btrfs/as/
Overall:
    Device size:                 320.00GiB
    Device allocated:            137.06GiB
    Device unallocated:          182.94GiB
    Device missing:                  0.00B
    Used:                         54.36GiB
    Free (estimated):            225.15GiB      (min: 133.68GiB)
    Data ratio:                       1.00
    Metadata ratio:                   1.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,single: Size:91.00GiB, Used:48.79GiB
   /dev/sdb       91.00GiB

Metadata,single: Size:46.00GiB, Used:5.58GiB
   /dev/sdb       46.00GiB

System,DUP: Size:32.00MiB, Used:16.00KiB
   /dev/sdb       64.00MiB

Unallocated:
   /dev/sdb      182.94GiB


> 
> > That you changed metadata profile from dup to single is unrelated in
> > principle. single for metadata instead of dup is half the write I/O
> > for the harddisks, so in that sense it might speed up send actions a
> > bit. I guess almost all time is spend in seeks.
> 
> 
> Yes, I just didn't realize that so many files would end up in the metadata
> structures, and it caught me by surprise.
> 

 
> > The send part is the speed bottleneck as it looks like, you can test
> > and isolate it by doing a dummy send and pipe it to  | mbuffer >
> > /dev/null  and see what speed you get.
> 
> 
> I tried it already, did incremental send to file 
> #btrfs send -v -p ./backuppc.20160712/  ./backuppc.20160720_1/ | pv > /mnt/
> data1/send
> At subvol ./backuppc.20160720_1/
> joining genl thread
> 18.9GiB 21:14:45 [ 259KiB/s]
> 
> Copied it over scp to the receiver at 50.9 MB/s.
> Now I will try receive.
>

Written on 25.7.2016:
Receive did 1GB of those 19GB over the weekend, so I canceled it...

Written on 29.7.2016:
Even with clean filesystems mounted with max_inline=0, send/receive was still slow.
I tried unmounting all the filesystems, unloading the btrfs module, and then loading it again.
Send/receive was still slow.

Then I set vm.dirty_bytes to 102400,
and afterwards set
vm.dirty_background_ratio = 10
vm.dirty_ratio = 20

And voilà, the speed went up dramatically; it has now transferred about 10GB in
30 minutes!
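
In sysctl terms that corresponds to roughly the following (put the values into
/etc/sysctl.d/ or /etc/sysctl.conf to make them persistent):
#sysctl -w vm.dirty_bytes=102400
#sysctl -w vm.dirty_background_ratio=10
#sysctl -w vm.dirty_ratio=20
Note that vm.dirty_bytes and vm.dirty_ratio are mutually exclusive; setting one
resets the other, so the last two settings effectively replace the first.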

Libor

