* Re: raid1 round-robin scheduler
@ 2015-02-19 7:23 konstantin
2015-02-19 15:02 ` Heinz Mauelshagen
0 siblings, 1 reply; 16+ messages in thread
From: konstantin @ 2015-02-19 7:23 UTC (permalink / raw)
To: dm-devel
What version of the kernel should I use to get a round-robin read
implementation on LV raid1?
--
WBR
Konstantin V. Krotov
* Re: raid1 round-robin scheduler
2015-02-19 7:23 raid1 round-robin scheduler konstantin
@ 2015-02-19 15:02 ` Heinz Mauelshagen
2015-03-10 11:55 ` konstantin
0 siblings, 1 reply; 16+ messages in thread
From: Heinz Mauelshagen @ 2015-02-19 15:02 UTC (permalink / raw)
To: dm-devel
dm-mirror (i.e. "lvcreate --type mirror" or a respective "dmsetup create
--table ...", which is no longer the recommended raid1 layout) has
provided round-robin reads for a long time.
You'd need an ancient kernel for it not to be supported.
"raid1"/"raid10" (the recommended targets), i.e. the md-raid based
mappings accessible via the dm-raid target, do read optimizations as
well. Use "lvcreate --type raid1/raid10 ..." or a respective dm table to
set those up. The former ("raid1") is the default in modern
distributions and is configurable via setting
'mirror_segtype_default = "raid1"' in /etc/lvm/lvm.conf.
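A minimal sketch of the commands above (the volume group and LV names are hypothetical; adjust sizes and device counts to your setup):

```sh
# md-raid based mirror (recommended); reads are optimized by md:
lvcreate --type raid1 -m 1 -L 10g -n lv_r1 vg_data

# striped mirror variant (needs at least 4 PVs for -i 2 -m 1):
lvcreate --type raid10 -i 2 -m 1 -L 10g -n lv_r10 vg_data

# in /etc/lvm/lvm.conf, make mirrored LV creation default to the
# md-based raid1 segment type:
#   mirror_segtype_default = "raid1"
```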
Heinz
On 02/19/2015 08:23 AM, konstantin wrote:
> What version of the kernel should I use to get a round-robin read
> implementation on LV raid1?
>
* Re: raid1 round-robin scheduler
2015-02-19 15:02 ` Heinz Mauelshagen
@ 2015-03-10 11:55 ` konstantin
2015-03-10 14:22 ` Heinz Mauelshagen
0 siblings, 1 reply; 16+ messages in thread
From: konstantin @ 2015-03-10 11:55 UTC (permalink / raw)
To: dm-devel
19.02.2015 18:02, Heinz Mauelshagen wrote:
> "raid1"/"raid10" (the recommended targets), i.e. the md-raid based
> mappings accessible via the dm-raid target, do read optimizations as
> well. Use "lvcreate --type raid1/raid10 ..." or a respective dm table
> to set those up.
I created a raid1 LV with "lvcreate --type raid1 -m1 -L5G -n r1lv r1vg"
on a VG with two physical devices:

lvs -a -o +devices
  LV              VG   Attr     LSize Pool Origin Data% Move Log Copy%  Convert Devices
  r1lv            r1vg rwi-a-m- 5.00g                            100.00         r1lv_rimage_0(0),r1lv_rimage_1(0)
  [r1lv_rimage_0] r1vg iwi-aor- 5.00g                                           /dev/sda(1)
  [r1lv_rimage_1] r1vg iwi-aor- 5.00g                                           /dev/sdb(1)
  [r1lv_rmeta_0]  r1vg ewi-aor- 4.00m                                           /dev/sda(0)
  [r1lv_rmeta_1]  r1vg ewi-aor- 4.00m                                           /dev/sdb(0)

but reads hit only one of the devices (I can see it in nmon's live disk
utilization). Is there a way to make reads use both devices?
--
WBR
Konstantin V. Krotov
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
* Re: raid1 round-robin scheduler
2015-03-10 11:55 ` konstantin
@ 2015-03-10 14:22 ` Heinz Mauelshagen
2015-03-11 7:22 ` konstantin
0 siblings, 1 reply; 16+ messages in thread
From: Heinz Mauelshagen @ 2015-03-10 14:22 UTC (permalink / raw)
To: dm-devel
On 03/10/2015 12:55 PM, konstantin wrote:
> I created a raid1 LV with "lvcreate --type raid1 -m1 -L5G -n r1lv r1vg"
> on a VG with two physical devices, but reads hit only one of the
> devices (I can see it in nmon's live disk utilization). Is there a way
> to make reads use both devices?
How do you test?
Do you read from multiple threads?
* Re: raid1 round-robin scheduler
2015-03-10 14:22 ` Heinz Mauelshagen
@ 2015-03-11 7:22 ` konstantin
2015-03-11 10:55 ` Heinz Mauelshagen
0 siblings, 1 reply; 16+ messages in thread
From: konstantin @ 2015-03-11 7:22 UTC (permalink / raw)
To: dm-devel
10.03.2015 17:22, Heinz Mauelshagen wrote:
> How do you test?
> Do you read from multiple threads?
Right, I used dd to test with a single thread.
But why can't a single-threaded read be served from both PV devices at
the same time?
--
WBR
Konstantin V. Krotov
* Re: raid1 round-robin scheduler
2015-03-11 7:22 ` konstantin
@ 2015-03-11 10:55 ` Heinz Mauelshagen
2015-03-11 12:44 ` konstantin
0 siblings, 1 reply; 16+ messages in thread
From: Heinz Mauelshagen @ 2015-03-11 10:55 UTC (permalink / raw)
To: dm-devel
On 03/11/2015 08:22 AM, konstantin wrote:
> Right, I used dd to test with a single thread.
That's what I assumed.
> But why can't a single-threaded read be served from both PV devices at
> the same time?
Your dd example causes streaming I/O, which is what spindles handle
best, so it would not make sense to split the I/Os up.
For that kind of single-threaded streaming I/O with a sensible block
size, a striped mapping would do better.

Try running dd/fio/... multiple times in parallel and you should see the
expected effect.
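A sketch of that experiment: several concurrent sequential readers, each starting at a different offset. An ordinary scratch file stands in for the raid1 LV here so the script runs anywhere; on a real LV (the /dev/r1vg/r1lv path from the earlier message) md's read balancer can serve the different streams from different mirror legs. Sizes are arbitrary.

```shell
#!/bin/sh
# Run several sequential readers in parallel, each covering a distinct
# 4 MiB slice of the device. Substitute a real raid1 LV path (e.g.
# /dev/r1vg/r1lv) for the scratch file to observe both legs being read.
parallel_read() {
    dev=$1
    for i in 0 1 2 3; do
        # each background dd reads its own slice
        dd if="$dev" of=/dev/null bs=1M count=4 skip=$((i * 4)) status=none &
    done
    wait    # returns once all four readers have finished
}

scratch=$(mktemp)
dd if=/dev/zero of="$scratch" bs=1M count=16 status=none
parallel_read "$scratch"
echo "4 parallel readers finished"
rm -f "$scratch"
```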
* Re: raid1 round-robin scheduler
2015-03-11 10:55 ` Heinz Mauelshagen
@ 2015-03-11 12:44 ` konstantin
0 siblings, 0 replies; 16+ messages in thread
From: konstantin @ 2015-03-11 12:44 UTC (permalink / raw)
To: dm-devel
11.03.2015 13:55, Heinz Mauelshagen wrote:
> Try running dd/fio/... multiple times in parallel and you should see
> the expected effect.
My VG is based on two PVs on remote disk storage connected over
InfiniBand. I am hitting the performance limit of the InfiniBand ports
on my host, and I would like to spread the load across the two raid1
legs (disk storages), which contain the same data.
--
WBR
Konstantin V. Krotov
* Re: raid1 round-robin scheduler
2014-12-05 22:10 ` Vasiliy Tolstov
@ 2014-12-08 12:23 ` Bryn M. Reeves
0 siblings, 0 replies; 16+ messages in thread
From: Bryn M. Reeves @ 2014-12-08 12:23 UTC (permalink / raw)
To: device-mapper development
On Sat, Dec 06, 2014 at 02:10:10AM +0400, Vasiliy Tolstov wrote:
> 2014-12-05 17:34 GMT+03:00 Bryn M. Reeves <bmr@redhat.com>:
> > As Zdenek says, today the MD-based dm-raid targets are available and
> > those should expose all the features of the MD stack (including read
> > balancing).
>
> Sorry, but I don't understand what that means. Can you explain in more
> detail how I can get what I need?
You can create LVM2 LVs that use the MD RAID personalities by using
the --type switch to lvcreate, e.g.:
# lvcreate --type raid1 -n lv_raid1 -L 10g vg_data
This then uses MD for the mirroring instead of the old dm-mirror
target.
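An existing LV built with the old mirror type can also be switched over in place; assuming a hypothetical vg_data/lv_mirror, something like:

```sh
# convert an old dm-mirror LV to the md-based raid1 segment type
lvconvert --type raid1 vg_data/lv_mirror
```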
Regards,
Bryn.
* Re: raid1 round-robin scheduler
2014-12-05 14:34 ` Bryn M. Reeves
@ 2014-12-05 22:10 ` Vasiliy Tolstov
2014-12-08 12:23 ` Bryn M. Reeves
0 siblings, 1 reply; 16+ messages in thread
From: Vasiliy Tolstov @ 2014-12-05 22:10 UTC (permalink / raw)
To: device-mapper development
2014-12-05 17:34 GMT+03:00 Bryn M. Reeves <bmr@redhat.com>:
> As Zdenek says, today the MD-based dm-raid targets are available and
> those should expose all the features of the MD stack (including read
> balancing).

Sorry, but I don't understand what that means. Can you explain in more
detail how I can get what I need?
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
* Re: raid1 round-robin scheduler
2014-12-05 13:08 ` Vasiliy Tolstov
@ 2014-12-05 14:34 ` Bryn M. Reeves
2014-12-05 22:10 ` Vasiliy Tolstov
0 siblings, 1 reply; 16+ messages in thread
From: Bryn M. Reeves @ 2014-12-05 14:34 UTC (permalink / raw)
To: device-mapper development
On Fri, Dec 05, 2014 at 05:08:59PM +0400, Vasiliy Tolstov wrote:
> 2014-12-05 16:02 GMT+03:00 Bryn M. Reeves <bmr@redhat.com>:
> > iirc the patches showed no measurable performance gain.
> >
> > Regards,
> > Bryn.
>
>
> Yes, in the normal case read speed is unchanged, but when the system
> is fully loaded and one member is fully utilized by I/O, why doesn't
> performance increase in that case?
I don't think this was ever really explained, but the testing that was
done failed to show a benefit from the patches, and as they added more
complexity to the target for no gain, they were not merged.
As Zdenek says, today the MD-based dm-raid targets are available and
those should expose all the features of the MD stack (including read
balancing).
Regards,
Bryn.
* Re: raid1 round-robin scheduler
2014-12-05 13:06 ` Zdenek Kabelac
@ 2014-12-05 13:10 ` Vasiliy Tolstov
0 siblings, 0 replies; 16+ messages in thread
From: Vasiliy Tolstov @ 2014-12-05 13:10 UTC (permalink / raw)
To: Zdenek Kabelac; +Cc: device-mapper development
2014-12-05 16:06 GMT+03:00 Zdenek Kabelac <zkabelac@redhat.com>:
> Are we talking about the rather deprecated old dm-mirror usage, or the
> new raid1 (mdraid)?
>
> I assume the new dm-raid target uses everything that md raid provides.
I'm using raid1 on Linux 3.10.x =). I have a raid1 of two members, each
exported from a dedicated server via SRP/SRPT.
With round-robin I could utilize both devices for reads, but right now I
see one device doing about 10% of the reads and the other about 90%,
because the clients read data sequentially.
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
* Re: raid1 round-robin scheduler
2014-12-05 13:02 ` Bryn M. Reeves
2014-12-05 13:06 ` Zdenek Kabelac
@ 2014-12-05 13:08 ` Vasiliy Tolstov
2014-12-05 14:34 ` Bryn M. Reeves
1 sibling, 1 reply; 16+ messages in thread
From: Vasiliy Tolstov @ 2014-12-05 13:08 UTC (permalink / raw)
To: Bryn M. Reeves; +Cc: device-mapper development
2014-12-05 16:02 GMT+03:00 Bryn M. Reeves <bmr@redhat.com>:
> iirc the patches showed no measurable performance gain.
>
> Regards,
> Bryn.
Yes, in the normal case read speed is unchanged, but when the system is
fully loaded and one member is fully utilized by I/O, why doesn't
performance increase in that case?
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
* Re: raid1 round-robin scheduler
2014-12-05 13:02 ` Bryn M. Reeves
@ 2014-12-05 13:06 ` Zdenek Kabelac
2014-12-05 13:10 ` Vasiliy Tolstov
2014-12-05 13:08 ` Vasiliy Tolstov
1 sibling, 1 reply; 16+ messages in thread
From: Zdenek Kabelac @ 2014-12-05 13:06 UTC (permalink / raw)
To: device-mapper development; +Cc: Vasiliy Tolstov
On 5.12.2014 at 14:02, Bryn M. Reeves wrote:
> On Fri, Dec 05, 2014 at 04:38:56PM +0400, Vasiliy Tolstov wrote:
>> 2014-12-05 15:25 GMT+03:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
>>> Hello. Has somebody created a scheduler for raid1 that adds the
>>> ability to read data from all devices? For example, I need to read
>>> from all devices when I have sequential reads.
>>
>> I found patches from 2012 by Robert Collins, "[dm-devel] Load
>> balancing reads on dmraid1 (and 01) arrays".
>> Why were they not merged? Do the dm-devel devs have plans for
>> round-robin raid1 read operations?
>
> iirc the patches showed no measurable performance gain.
>
Are we talking about the rather deprecated old dm-mirror usage, or the
new raid1 (mdraid)?

I assume the new dm-raid target uses everything that md raid provides.
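One quick way to tell which of the two implementations a given LV actually uses (VG and LV names here are hypothetical):

```sh
# segment type "mirror" = old dm-mirror target, "raid1" = md-based dm-raid
lvs -o lv_name,segtype vg_data

# or inspect the live device-mapper table: target "mirror" vs. "raid"
dmsetup table vg_data-lv_r1
```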
Zdenek
* Re: raid1 round-robin scheduler
2014-12-05 12:38 ` Vasiliy Tolstov
@ 2014-12-05 13:02 ` Bryn M. Reeves
2014-12-05 13:06 ` Zdenek Kabelac
2014-12-05 13:08 ` Vasiliy Tolstov
0 siblings, 2 replies; 16+ messages in thread
From: Bryn M. Reeves @ 2014-12-05 13:02 UTC (permalink / raw)
To: device-mapper development; +Cc: Vasiliy Tolstov
On Fri, Dec 05, 2014 at 04:38:56PM +0400, Vasiliy Tolstov wrote:
> 2014-12-05 15:25 GMT+03:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
> > Hello. Has somebody created a scheduler for raid1 that adds the
> > ability to read data from all devices? For example, I need to read
> > from all devices when I have sequential reads.
>
> I found patches from 2012 by Robert Collins, "[dm-devel] Load
> balancing reads on dmraid1 (and 01) arrays".
> Why were they not merged? Do the dm-devel devs have plans for
> round-robin raid1 read operations?
iirc the patches showed no measurable performance gain.
Regards,
Bryn.
* Re: raid1 round-robin scheduler
2014-12-05 12:25 Vasiliy Tolstov
@ 2014-12-05 12:38 ` Vasiliy Tolstov
2014-12-05 13:02 ` Bryn M. Reeves
0 siblings, 1 reply; 16+ messages in thread
From: Vasiliy Tolstov @ 2014-12-05 12:38 UTC (permalink / raw)
To: Vasiliy Tolstov; +Cc: device-mapper
2014-12-05 15:25 GMT+03:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
> Hello. Has somebody created a scheduler for raid1 that adds the
> ability to read data from all devices? For example, I need to read
> from all devices when I have sequential reads.
I found patches from 2012 by Robert Collins, "[dm-devel] Load balancing
reads on dmraid1 (and 01) arrays".
Why were they not merged? Do the dm-devel devs have plans for
round-robin raid1 read operations?
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
* raid1 round-robin scheduler
@ 2014-12-05 12:25 Vasiliy Tolstov
2014-12-05 12:38 ` Vasiliy Tolstov
0 siblings, 1 reply; 16+ messages in thread
From: Vasiliy Tolstov @ 2014-12-05 12:25 UTC (permalink / raw)
To: device-mapper
Hello. Has somebody created a scheduler for raid1 that adds the ability
to read data from all devices? For example, I need to read from all
devices when I have sequential reads.
--
Vasiliy Tolstov,
e-mail: v.tolstov@selfip.ru
jabber: vase@selfip.ru
Thread overview: 16+ messages
2015-02-19 7:23 raid1 round-robin scheduler konstantin
2015-02-19 15:02 ` Heinz Mauelshagen
2015-03-10 11:55 ` konstantin
2015-03-10 14:22 ` Heinz Mauelshagen
2015-03-11 7:22 ` konstantin
2015-03-11 10:55 ` Heinz Mauelshagen
2015-03-11 12:44 ` konstantin
-- strict thread matches above, loose matches on Subject: below --
2014-12-05 12:25 Vasiliy Tolstov
2014-12-05 12:38 ` Vasiliy Tolstov
2014-12-05 13:02 ` Bryn M. Reeves
2014-12-05 13:06 ` Zdenek Kabelac
2014-12-05 13:10 ` Vasiliy Tolstov
2014-12-05 13:08 ` Vasiliy Tolstov
2014-12-05 14:34 ` Bryn M. Reeves
2014-12-05 22:10 ` Vasiliy Tolstov
2014-12-08 12:23 ` Bryn M. Reeves