linux-lvm.redhat.com archive mirror
* [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
@ 2016-11-07  9:30 Alexander 'Leo' Bergolth
  2016-11-07 10:22 ` Zdenek Kabelac
  0 siblings, 1 reply; 10+ messages in thread
From: Alexander 'Leo' Bergolth @ 2016-11-07  9:30 UTC (permalink / raw)
  To: linux-lvm

Hi!

I am experiencing a dramatic degradation of the sequential write speed
on a raid1 LV that resides on two USB-3 connected harddisks (UAS
enabled), compared to parallel access to both drives without raid or
compared to MD raid:

- parallel sequential writes to LVs on both disks: 140 MB/s per disk
- sequential write to MD raid1 without bitmap: 140 MB/s
- sequential write to MD raid1 with bitmap: 48 MB/s
- sequential write to LVM raid1: 17 MB/s !!

According to the kernel messages, my 30 GB raid1-test-LV gets equipped
with a 61440-bit write-intent bitmap (1 bit per 512 KiB of data?!),
whereas a default MD raid1 bitmap has only 480 bits (1 bit per 64 MB).
Maybe the dramatic slowdown is caused by this much too fine-grained
bitmap and its updates, which are random IO?
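
Just to spell out the arithmetic (plus a sketch of how the region size
can be read back - I'm not sure which lvm2 versions report the
region_size field):

# 30 GiB / 61440 bits = 512 KiB per bit   (lvm2 raid1 default here)
# 30 GiB /   480 bits =  64 MiB per bit   (mdadm internal bitmap default)
lvs -a -o name,size,region_size vg_t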

Is there a way to configure the bitmap size?

Cheers,
--leo


My tests:
---------

# parallel writes to independent LVs on both disks:
dd if=/dev/zero of=/dev/vg_t/lv_traw-d1 bs=1M count=1000 oflag=direct &\
  dd if=/dev/zero of=/dev/vg_t/lv_traw-d2 bs=1M count=1000 oflag=direct
1048576000 bytes (1,0 GB, 1000 MiB) copied, 7,51632 s, 140 MB/s
1048576000 bytes (1,0 GB, 1000 MiB) copied, 7,51926 s, 139 MB/s

# using MD raid1 without a bitmap
mdadm -C /dev/md/t --level=1 --raid-devices=2 \
  /dev/vg_t/lv_md_d1 /dev/vg_t/lv_md_d2
dd if=/dev/zero of=/dev/md/t bs=1M count=1000 oflag=direct
1048576000 bytes (1,0 GB, 1000 MiB) copied, 7,4604 s, 141 MB/s

# using a bitmap:
mdadm --grow --bitmap=internal /dev/md/t
dd if=/dev/zero of=/dev/md/t bs=1M count=1000 oflag=direct
1048576000 bytes (1,0 GB, 1000 MiB) copied, 22,0277 s, 47,6 MB/s

# lvm raid1
dd if=/dev/zero of=/dev/vg_t/lv_raid1 bs=1M count=1000 oflag=direct
1048576000 bytes (1,0 GB, 1000 MiB) copied, 63,7003 s, 16,5 MB/s


# MD raid bitmap
[1781588.277129] md127: bitmap initialized from disk: read 1 pages, set
480 of 480 bits

# LVM-Raid bitmap:
[1776745.608956] mdX: bitmap initialized from disk: read 2 pages, set 0
of 61440 bits


-- 
e-mail   ::: Leo.Bergolth (at) wu.ac.at
fax      ::: +43-1-31336-906050
location ::: IT-Services | Vienna University of Economics | Austria


* Re: [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
  2016-11-07  9:30 [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?) Alexander 'Leo' Bergolth
@ 2016-11-07 10:22 ` Zdenek Kabelac
  2016-11-07 15:58   ` Alexander 'Leo' Bergolth
  0 siblings, 1 reply; 10+ messages in thread
From: Zdenek Kabelac @ 2016-11-07 10:22 UTC (permalink / raw)
  To: LVM general discussion and development

On 11/07/2016 10:30 AM, Alexander 'Leo' Bergolth wrote:
> Hi!
>
> I am experiencing a dramatic degradation of the sequential write speed
> on a raid1 LV that resides on two USB-3 connected harddisks (UAS
> enabled), compared to parallel access to both drives without raid or
> compared to MD raid:
>
> - parallel sequential writes to LVs on both disks: 140 MB/s per disk
> - sequential write to MD raid1 without bitmap: 140 MB/s
> - sequential write to MD raid1 with bitmap: 48 MB/s
> - sequential write to LVM raid1: 17 MB/s !!
>
> According to the kernel messages, my 30 GB raid1-test-LV gets equipped
> with a 61440-bit write-intent bitmap (1 bit per 512 KiB of data?!),
> whereas a default MD raid1 bitmap has only 480 bits (1 bit per 64 MB).
> Maybe the dramatic slowdown is caused by this much too fine-grained
> bitmap and its updates, which are random IO?
>
> Is there a way to configure the bitmap size?
>


Hi


Can you please provide some results with '--regionsize' changes?
While 64MB is quite large for resync, I guess the region size picked
by the current default is likely very, very small in some cases.
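
Something along these lines should give the numbers (just a sketch -
the LV name lv_rs and the list of sizes are only placeholders):

# --nosync avoids the initial resync disturbing the measurement
for rs in 1M 2M 4M 16M 64M 128M 256M; do
    lvcreate --type raid1 -m 1 -L 30G --regionsize $rs --nosync -y -n lv_rs vg_t
    dd if=/dev/zero of=/dev/vg_t/lv_rs bs=1M count=1000 oflag=direct
    lvremove -f vg_t/lv_rs
done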

Regards

Zdenek


* Re: [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
  2016-11-07 10:22 ` Zdenek Kabelac
@ 2016-11-07 15:58   ` Alexander 'Leo' Bergolth
  2016-11-08  9:26     ` Zdenek Kabelac
  0 siblings, 1 reply; 10+ messages in thread
From: Alexander 'Leo' Bergolth @ 2016-11-07 15:58 UTC (permalink / raw)
  To: linux-lvm

On 11/07/2016 11:22 AM, Zdenek Kabelac wrote:
> On 11/07/2016 10:30 AM, Alexander 'Leo' Bergolth wrote:
>> I am experiencing a dramatic degradation of the sequential write speed
>> on a raid1 LV that resides on two USB-3 connected harddisks (UAS
>> enabled), compared to parallel access to both drives without raid or
>> compared to MD raid:
>>
>> - parallel sequential writes to LVs on both disks: 140 MB/s per disk
>> - sequential write to MD raid1 without bitmap: 140 MB/s
>> - sequential write to MD raid1 with bitmap: 48 MB/s
>> - sequential write to LVM raid1: 17 MB/s !!
>>
>> According to the kernel messages, my 30 GB raid1-test-LV gets equipped
>> with a 61440-bit write-intent bitmap (1 bit per 512 KiB of data?!),
>> whereas a default MD raid1 bitmap has only 480 bits (1 bit per 64 MB).
>> Maybe the dramatic slowdown is caused by this much too fine-grained
>> bitmap and its updates, which are random IO?
>>
>> Is there a way to configure the bitmap size?
> 
> Can you please provide some results with '--regionsize' changes?
> While 64MB is quite large for resync, I guess the region size picked
> by the current default is likely very, very small in some cases.

Ah - thanks. Didn't know that --regionsize is also valid for --type raid1.

With --regionsize 64 MB, the bitmap has the same size as the default
bitmap created by mdadm, and write performance is also similar:

*** --regionsize 1M
1048576000 bytes (1,0 GB, 1000 MiB) copied, 63,957 s, 16,4 MB/s
*** --regionsize 2M
1048576000 bytes (1,0 GB, 1000 MiB) copied, 39,1517 s, 26,8 MB/s
*** --regionsize 4M
1048576000 bytes (1,0 GB, 1000 MiB) copied, 32,8275 s, 31,9 MB/s
*** --regionsize 16M
1048576000 bytes (1,0 GB, 1000 MiB) copied, 30,2903 s, 34,6 MB/s
*** --regionsize 32M
1048576000 bytes (1,0 GB, 1000 MiB) copied, 30,1452 s, 34,8 MB/s
*** --regionsize 64M
1048576000 bytes (1,0 GB, 1000 MiB) copied, 21,6208 s, 48,5 MB/s
*** --regionsize 128M
1048576000 bytes (1,0 GB, 1000 MiB) copied, 14,2028 s, 73,8 MB/s
*** --regionsize 256M
1048576000 bytes (1,0 GB, 1000 MiB) copied, 11,6581 s, 89,9 MB/s


Is there a way to change the regionsize for an existing LV?

Cheers,
--leo
-- 
e-mail   ::: Leo.Bergolth (at) wu.ac.at
fax      ::: +43-1-31336-906050
location ::: IT-Services | Vienna University of Economics | Austria


* Re: [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
  2016-11-07 15:58   ` Alexander 'Leo' Bergolth
@ 2016-11-08  9:26     ` Zdenek Kabelac
  2016-11-08 15:15       ` Alexander 'Leo' Bergolth
  0 siblings, 1 reply; 10+ messages in thread
From: Zdenek Kabelac @ 2016-11-08  9:26 UTC (permalink / raw)
  To: LVM general discussion and development

On 11/07/2016 04:58 PM, Alexander 'Leo' Bergolth wrote:
> On 11/07/2016 11:22 AM, Zdenek Kabelac wrote:
>> On 11/07/2016 10:30 AM, Alexander 'Leo' Bergolth wrote:
>>> I am experiencing a dramatic degradation of the sequential write speed
>>> on a raid1 LV that resides on two USB-3 connected harddisks (UAS
>>> enabled), compared to parallel access to both drives without raid or
>>> compared to MD raid:
>>>
>>> - parallel sequential writes to LVs on both disks: 140 MB/s per disk
>>> - sequential write to MD raid1 without bitmap: 140 MB/s
>>> - sequential write to MD raid1 with bitmap: 48 MB/s
>>> - sequential write to LVM raid1: 17 MB/s !!
>>>
>>> According to the kernel messages, my 30 GB raid1-test-LV gets equipped
>>> with a 61440-bit write-intent bitmap (1 bit per 512 KiB of data?!),
>>> whereas a default MD raid1 bitmap has only 480 bits (1 bit per 64 MB).
>>> Maybe the dramatic slowdown is caused by this much too fine-grained
>>> bitmap and its updates, which are random IO?
>>>
>>> Is there a way to configure the bitmap size?
>>
>> Can you please provide some results with '--regionsize' changes?
>> While 64MB is quite large for resync, I guess the region size picked
>> by the current default is likely very, very small in some cases.
>
> Ah - thanks. Didn't know that --regionsize is also valid for --type raid1.
>
> With --regionsize 64 MB, the bitmap has the same size as the default
> bitmap created by mdadm, and write performance is also similar:
>
> *** --regionsize 1M
> 1048576000 bytes (1,0 GB, 1000 MiB) copied, 63,957 s, 16,4 MB/s
> *** --regionsize 2M
> 1048576000 bytes (1,0 GB, 1000 MiB) copied, 39,1517 s, 26,8 MB/s
> *** --regionsize 4M
> 1048576000 bytes (1,0 GB, 1000 MiB) copied, 32,8275 s, 31,9 MB/s
> *** --regionsize 16M
> 1048576000 bytes (1,0 GB, 1000 MiB) copied, 30,2903 s, 34,6 MB/s
> *** --regionsize 32M
> 1048576000 bytes (1,0 GB, 1000 MiB) copied, 30,1452 s, 34,8 MB/s
> *** --regionsize 64M
> 1048576000 bytes (1,0 GB, 1000 MiB) copied, 21,6208 s, 48,5 MB/s
> *** --regionsize 128M
> 1048576000 bytes (1,0 GB, 1000 MiB) copied, 14,2028 s, 73,8 MB/s
> *** --regionsize 256M
> 1048576000 bytes (1,0 GB, 1000 MiB) copied, 11,6581 s, 89,9 MB/s
>
>
> Is there a way to change the regionsize for an existing LV?


I'm afraid there is no support yet for a runtime 'regionsize' change
other than rebuilding the array.

But your numbers really are something to think about.

lvm2 surely should pick a more sensible default value here.

But MD raid still seems to pay too big a price even with 64M - there
is likely some room for improvement here, I'd say...

Regards

Zdenek


* Re: [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
  2016-11-08  9:26     ` Zdenek Kabelac
@ 2016-11-08 15:15       ` Alexander 'Leo' Bergolth
  2016-11-11 14:30         ` Brassow Jonathan
  2016-11-18 10:12         ` Zdenek Kabelac
  0 siblings, 2 replies; 10+ messages in thread
From: Alexander 'Leo' Bergolth @ 2016-11-08 15:15 UTC (permalink / raw)
  To: linux-lvm

On 11/08/2016 10:26 AM, Zdenek Kabelac wrote:
> On 11/07/2016 04:58 PM, Alexander 'Leo' Bergolth wrote:
>> On 11/07/2016 11:22 AM, Zdenek Kabelac wrote:
>> Is there a way to change the regionsize for an existing LV?
>
> I'm afraid there is no support yet for a runtime 'regionsize' change
> other than rebuilding the array.

Unfortunately even rebuilding (converting to linear and back to raid1)
doesn't work.

lvconvert seems to ignore the --regionsize option and use defaults:

lvconvert -m 0 /dev/vg_sys/lv_test
lvconvert --type raid1 -m 1 --regionsize 128M /dev/vg_sys/lv_test

[10881847.012504] mdX: bitmap initialized from disk: read 1 pages, set
4096 of 4096 bits

... which translates to a regionsize of 512k for a 2G volume.
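
(For the record, the region size the kernel really got can also be read
from the dm table; the output line below is only illustrative, the value
is in 512-byte sectors:)

dmsetup table vg_sys-lv_test
  e.g.: 0 4194304 raid raid1 3 0 region_size 1024 2 253:2 253:3 253:4 253:5
        -> region_size 1024 sectors = 512k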

:-(

Cheers,
--leo
-- 
e-mail   ::: Leo.Bergolth (at) wu.ac.at
fax      ::: +43-1-31336-906050
location ::: IT-Services | Vienna University of Economics | Austria


* Re: [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
  2016-11-08 15:15       ` Alexander 'Leo' Bergolth
@ 2016-11-11 14:30         ` Brassow Jonathan
  2016-11-11 23:23           ` Brassow Jonathan
  2016-11-18 10:12         ` Zdenek Kabelac
  1 sibling, 1 reply; 10+ messages in thread
From: Brassow Jonathan @ 2016-11-11 14:30 UTC (permalink / raw)
  To: LVM general discussion and development

I’ll get a bug created for this and we’ll fix it.

thanks,
 brassow

> On Nov 8, 2016, at 9:15 AM, Alexander 'Leo' Bergolth <leo@strike.wu.ac.at> wrote:
> 
> lvconvert seems to ignore the --regionsize option and use defaults:


* Re: [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
  2016-11-11 14:30         ` Brassow Jonathan
@ 2016-11-11 23:23           ` Brassow Jonathan
  0 siblings, 0 replies; 10+ messages in thread
From: Brassow Jonathan @ 2016-11-11 23:23 UTC (permalink / raw)
  To: LVM general discussion and development

https://bugzilla.redhat.com/show_bug.cgi?id=1394427

> On Nov 11, 2016, at 8:30 AM, Brassow Jonathan <jbrassow@redhat.com> wrote:
> 
> I’ll get a bug created for this and we’ll fix it.
> 
> thanks,
> brassow
> 
>> On Nov 8, 2016, at 9:15 AM, Alexander 'Leo' Bergolth <leo@strike.wu.ac.at> wrote:
>> 
>> lvconvert seems to ignore the --regionsize option and use defaults:


* Re: [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
  2016-11-08 15:15       ` Alexander 'Leo' Bergolth
  2016-11-11 14:30         ` Brassow Jonathan
@ 2016-11-18 10:12         ` Zdenek Kabelac
  2016-11-18 11:08           ` Alexander 'Leo' Bergolth
  1 sibling, 1 reply; 10+ messages in thread
From: Zdenek Kabelac @ 2016-11-18 10:12 UTC (permalink / raw)
  To: linux-lvm, leo

On 11/08/2016 04:15 PM, Alexander 'Leo' Bergolth wrote:
> On 11/08/2016 10:26 AM, Zdenek Kabelac wrote:
>> On 11/07/2016 04:58 PM, Alexander 'Leo' Bergolth wrote:
>>> On 11/07/2016 11:22 AM, Zdenek Kabelac wrote:
>>> Is there a way to change the regionsize for an existing LV?
>>
>> I'm afraid there is no support yet for a runtime 'regionsize' change
>> other than rebuilding the array.
>
> Unfortunately even rebuilding (converting to linear and back to raid1)
> doesn't work.
>
> lvconvert seems to ignore the --regionsize option and use defaults:
>
> lvconvert -m 0 /dev/vg_sys/lv_test
> lvconvert --type raid1 -m 1 --regionsize 128M /dev/vg_sys/lv_test
>
> [10881847.012504] mdX: bitmap initialized from disk: read 1 pages, set
> 4096 of 4096 bits
>
> ... which translates to a regionsize of 512k for a 2G volume.



Hi

After doing some simulations here -

What is the actual USB device type used here?

Aren't you trying to deploy some attached SD-Card/USB-Flash  as your secondary 
leg?

Regards

Zdenek


* Re: [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
  2016-11-18 10:12         ` Zdenek Kabelac
@ 2016-11-18 11:08           ` Alexander 'Leo' Bergolth
  2016-11-26 23:21             ` Alexander 'Leo' Bergolth
  0 siblings, 1 reply; 10+ messages in thread
From: Alexander 'Leo' Bergolth @ 2016-11-18 11:08 UTC (permalink / raw)
  To: Zdenek Kabelac, linux-lvm

On 11/18/2016 11:12 AM, Zdenek Kabelac wrote:
> On 11/08/2016 04:15 PM, Alexander 'Leo' Bergolth wrote:
>> On 11/08/2016 10:26 AM, Zdenek Kabelac wrote:
>> On 11/07/2016 04:58 PM, Alexander 'Leo' Bergolth wrote:
>>>> On 11/07/2016 11:22 AM, Zdenek Kabelac wrote:
>>>> Is there a way to change the regionsize for an existing LV?
>>>
>>> I'm afraid there is no support yet for a runtime 'regionsize' change
>>> other than rebuilding the array.
>>
>> Unfortunately even rebuilding (converting to linear and back to raid1)
>> doesn't work.
>>
>> lvconvert seems to ignore the --regionsize option and use defaults:
>>
>> lvconvert -m 0 /dev/vg_sys/lv_test
>> lvconvert --type raid1 -m 1 --regionsize 128M /dev/vg_sys/lv_test
>>
>> [10881847.012504] mdX: bitmap initialized from disk: read 1 pages, set
>> 4096 of 4096 bits
>>
>> ... which translates to a regionsize of 512k for a 2G volume.
> 
>
> After doing some simulations here -
> 
> What is the actual USB device type used here?

I did my tests with two 5k-RPM SATA disks connected to a single USB 3.0
port using a JMS562 USB 3.0 to SATA bridge in JBOD mode. According to
lsusb -t, the uas module is in use, and judging by
/sys/block/sdX/queue/nr_requests, command queuing seems to be active.
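
For reference, these were the checks (device/queue_depth is just an
additional place to look at the per-device UAS/NCQ queue depth):

# lsusb -t
# cat /sys/block/sd[bc]/queue/nr_requests
# cat /sys/block/sd[bc]/device/queue_depth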

I discussed my problems with Heinz Mauelshagen yesterday; he was able
to reproduce the issue using two SATA disks connected to two USB 3.0
ports that share the same USB bus. However, he didn't notice any speed
penalty when the same disks were connected to different USB buses.

So it looks like the problem is USB related...

Cheers,
--leo
-- 
e-mail   ::: Leo.Bergolth (at) wu.ac.at
fax      ::: +43-1-31336-906050
location ::: IT-Services | Vienna University of Economics | Austria


* Re: [linux-lvm] very slow sequential writes on lvm raid1 (bitmap?)
  2016-11-18 11:08           ` Alexander 'Leo' Bergolth
@ 2016-11-26 23:21             ` Alexander 'Leo' Bergolth
  0 siblings, 0 replies; 10+ messages in thread
From: Alexander 'Leo' Bergolth @ 2016-11-26 23:21 UTC (permalink / raw)
  To: linux-lvm

On 2016-11-18 12:08, Alexander 'Leo' Bergolth wrote:
> I did my tests with two 5k-RPM SATA disks connected to a single USB 3.0
> port using a JMS562 USB 3.0 to SATA bridge in JBOD mode. According to
> lsusb -t, the uas module is in use and looking at
> /sys/block/sdX/queue/nr_requests, command queuing seems to be active.
> 
> I've discussed my problems with Heinz Mauelshagen yesterday, who was
> able to reproduce the issue using two SATA disks, connected to two USB
> 3.0 ports that share the same USB bus. However, he didn't notice any
> speed penalties if the same disks are connected to different USB buses.
> 
> So it looks like the problem is USB related...

I did some tests similar to Heinz Mauelshagen's setup and connected my disks to two different USB 3.0 buses. Unfortunately, I cannot confirm that some kind of USB congestion is the problem. I am getting the same results as with just one USB bus: smaller region sizes dramatically slow down the sequential write speed.

The reason why Heinz got different results was the different dd block size in our tests: I ran mine with bs=1M oflag=direct while Heinz used bs=1G oflag=direct. The larger block size leads to far fewer bitmap updates (>1000 vs. 60 for 1 GB of data).

I'd expect that those bitmap updates cause two seeks each. This random IO is, of course, very expensive, especially if slow 5000 RPM disks are used...
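
A rough back-of-envelope, assuming ~10 ms per random bitmap write on these disks: ~1000 region crossings x 2 seeks each x ~10 ms = ~20 s of seek and rotational latency per leg - already far more than the ~7.5 s the 1 GB of sequential payload itself needs.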

I've recorded some tests with blktrace. The results can be downloaded from http://leo.kloburg.at/tmp/lvm-raid1-bitmap/
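
They can be replayed with blkparse, e.g. (assuming your blkparse supports the -D input-directory option):

# blkparse -D raid1-512k-reg-direct-bs-1M -i sdb3 | less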


# lsusb -t
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
    |__ Port 4: Dev 3, If 0, Class=Hub, Driver=hub/4p, 5000M
        |__ Port 1: Dev 9, If 0, Class=Mass Storage, Driver=uas, 5000M
        |__ Port 2: Dev 8, If 0, Class=Mass Storage, Driver=uas, 5000M

# readlink -f /sys/class/block/sd[bc]/device/
/sys/devices/pci0000:00/0000:00:14.0/usb2/2-4/2-4.2/2-4.2:1.0/host2/target2:0:0/2:0:0:0
/sys/devices/pci0000:00/0000:00:14.0/usb2/2-4/2-4.1/2-4.1:1.0/host3/target3:0:0/3:0:0:0

# echo noop > /sys/block/sdb/queue/scheduler
# echo noop > /sys/block/sdc/queue/scheduler
# pvcreate /dev/sdb3 
# pvcreate /dev/sdc3 
# vgcreate vg_t /dev/sd[bc]3

# lvcreate --type raid1 -m 1 -L30G --regionsize=512k --nosync -y -n lv_t vg_t


# ---------- regionsize 512k, dd bs=1M oflag=direct
# blktrace -d /dev/sdb3 -d /dev/sdc3 -d /dev/vg_t/lv_t -D raid1-512k-reg-direct-bs-1M/
# dd if=/dev/zero of=/dev/vg_t/lv_t bs=1M count=1000 oflag=direct
1048576000 bytes (1,0 GB) copied, 55,7425 s, 18,8 MB/s

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb3              0,00     0,00    0,00   54,00     0,00 18504,00   685,33     0,14    2,52    0,00    2,52   1,70   9,20
sdc3              0,00     0,00    0,00   54,00     0,00 18504,00   685,33     0,14    2,52    0,00    2,52   1,67   9,00
dm-9              0,00     0,00    0,00   18,00     0,00 18432,00  2048,00     1,00   54,06    0,00   54,06  55,39  99,70

# ---------- regionsize 512k, dd bs=1G oflag=direct
# blktrace -d /dev/sdb3 -d /dev/sdc3 -d /dev/vg_t/lv_t -D raid1-512k-reg-direct-bs-1G/
# dd if=/dev/zero of=/dev/vg_t/lv_t bs=1G count=1 oflag=direct
1+0 records in
1+0 records out
1073741824 bytes (1,1 GB) copied, 7,3139 s, 147 MB/s

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb3              0,00     0,00    0,00  306,00     0,00 156672,00  1024,00   135,47  441,34    0,00  441,34   3,27 100,00
sdc3              0,00     0,00    0,00  302,00     0,00 154624,00  1024,00   129,46  421,76    0,00  421,76   3,31 100,00
dm-9              0,00     0,00    0,00    0,00     0,00     0,00     0,00   648,81    0,00    0,00    0,00   0,00 100,00


# ---------- regionsize 512k, dd bs=1M conv=fsync
# blktrace -d /dev/sdb3 -d /dev/sdc3 -d /dev/vg_t/lv_t -D raid1-512k-reg-fsync-bs-1M/
# dd if=/dev/zero of=/dev/vg_t/lv_t bs=1M count=1000 conv=fsync
1048576000 bytes (1,0 GB) copied, 7,75605 s, 135 MB/s

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb3              0,00 21971,00    0,00  285,00     0,00 145920,00  1024,00   141,99  540,75    0,00  540,75   3,51 100,00
sdc3              0,00 21971,00    0,00  310,00     0,00 158720,00  1024,00   106,86  429,35    0,00  429,35   3,23 100,00
dm-9              0,00     0,00    0,00    0,00     0,00     0,00     0,00 24561,60    0,00    0,00    0,00   0,00 100,00


Cheers,
--leo
-- 
e-mail   ::: Leo.Bergolth (at) wu.ac.at   
fax      ::: +43-1-31336-906050
location ::: IT-Services | Vienna University of Economics | Austria

