* [linux-lvm] raid & its stripes
@ 2017-09-13 13:53 lejeczek
  2017-09-14 14:58 ` Brassow Jonathan
  0 siblings, 1 reply; 9+ messages in thread
From: lejeczek @ 2017-09-13 13:53 UTC (permalink / raw)
  To: LVM general discussion and development

hi boys, girls

The man page reads: -i ... This is equal to the number of physical volumes to scatter the logical volume data ...
I wonder what happens when I do not use -i while creating an LV on 10 physical devices.

$ lvcreate -n raid0.A --type raid0 -I 16 -l 97%pv

a dbench would show:
$ dbench -t 60 20
...
Throughput 112.309 MB/sec  20 clients  20 procs  max_latency=719.409 ms

Yet when I explicitly say: use this many stripes:

$ lvcreate -n raid0.A --type raid0 -I 16 -i 10 -l 97%pv

dbench:
...
Throughput 83.2822 MB/sec  20 clients  20 procs  max_latency=816.027 ms

And though the results vary, on xfs the dbench numbers for the LV created with no -i argument (which LVM then turns into 2 stripes) always look better.
Yet I thought that, as the manual suggests, stripes should always go to all physical devices.
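The resulting layout, for reference, can be double-checked with something like the following (the VG name here is only a placeholder):

$ lvs -a -o +devices,stripes,stripe_size <VG>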

Question - is there some "little" magic LVM does here? And if so, what is it and how does it work?
many thanks, L.

.


* Re: [linux-lvm] raid & its stripes
  2017-09-13 13:53 [linux-lvm] raid & its stripes lejeczek
@ 2017-09-14 14:58 ` Brassow Jonathan
  2017-09-14 15:49   ` lejeczek
  0 siblings, 1 reply; 9+ messages in thread
From: Brassow Jonathan @ 2017-09-14 14:58 UTC (permalink / raw)
  To: LVM general discussion and development

Seems strange on the surface.  Would you mind posting the layout of each?  ‘lvs -a -o +devices’

 brassow

> On Sep 13, 2017, at 8:53 AM, lejeczek <peljasz@yahoo.co.uk> wrote:
> 
> hi boys, girls
> 
> man page reads: -i ...This is equal to the number of physical volumes to scatter the  logical  volume data....
> I wonder, when I do not use -i while creating an LV with 10 phy devs.
> 
> $ lvcreate -n raid0.A --type raid0 -I 16 -l 97%pv
> 
> a dbench would show:
> $ dbench -t 60 20
> ...
> Throughput 112.309 MB/sec  20 clients  20 procs  max_latency=719.409 ms
> 
> Yet when I say: this many stripes:
> 
> $ lvcreate -n raid0.A --type raid0 -I 16 -i 10 -l 97%pv
> 
> dbench:
> ...
> Throughput 83.2822 MB/sec  20 clients  20 procs  max_latency=816.027 ms
> 
> And though the results would vary, xfs, a dbench for LV with no -i as an argument(which LVM chooses then to be 2) would always look better.
> And I thought, as in the manual, always make stripes to go to all phy devices.
> 
> Question - is there some "little" magic LVM does? And if yes then how/what it is?
> many thanks, L.
> 
> .
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


* Re: [linux-lvm] raid & its stripes
  2017-09-14 14:58 ` Brassow Jonathan
@ 2017-09-14 15:49   ` lejeczek
  2017-09-15  2:20     ` Brassow Jonathan
  0 siblings, 1 reply; 9+ messages in thread
From: lejeczek @ 2017-09-14 15:49 UTC (permalink / raw)
  To: LVM general discussion and development



On 14/09/17 15:58, Brassow Jonathan wrote:
> Seems strange on the surface.  Would you mind posting the layout of each?  ‘lvs -a -o +devices’
>
>   brassow
here is the layout for the LV created without -i; both times (with and without -i) I supplied all ten PVs (all that the VG has) as arguments to lvcreate.

$ lvs -a -o +devices,stripes,stripe_size chenbro0.1
   LV                 VG         Attr       LSize  Pool Origin Data% Meta%  Move Log Cpy%Sync Convert Devices                                 #Str Stripe
   raid0.A            chenbro0.1 rwi-aor--- 21.18t raid0.A_rimage_0(0),raid0.A_rimage_1(0)    2 16.00k
   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdak(0)                               1     0
   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdam(0)                               1     0
   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdao(0)                               1     0
   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdaq(0)                               1     0
   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdas(0)                               1     0
   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdau(0)                               1     0
   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdal(0)                               1     0
   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdan(0)                               1     0
   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdap(0)                               1     0
   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdar(0)                               1     0
   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdat(0)                               1     0
   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdav(0)                               1     0

I cannot remove this LV for a while, so I will not be able to recreate it with -i for now, sorry.

>> On Sep 13, 2017, at 8:53 AM, lejeczek <peljasz@yahoo.co.uk> wrote:
>>
>> hi boys, girls
>>
>> man page reads: -i ...This is equal to the number of physical volumes to scatter the  logical  volume data....
>> I wonder, when I do not use -i while creating an LV with 10 phy devs.
>>
>> $ lvcreate -n raid0.A --type raid0 -I 16 -l 97%pv
>>
>> a dbench would show:
>> $ dbench -t 60 20
>> ...
>> Throughput 112.309 MB/sec  20 clients  20 procs  max_latency=719.409 ms
>>
>> Yet when I say: this many stripes:
>>
>> $ lvcreate -n raid0.A --type raid0 -I 16 -i 10 -l 97%pv
>>
>> dbench:
>> ...
>> Throughput 83.2822 MB/sec  20 clients  20 procs  max_latency=816.027 ms
>>
>> And though the results would vary, xfs, a dbench for LV with no -i as an argument(which LVM chooses then to be 2) would always look better.
>> And I thought, as in the manual, always make stripes to go to all phy devices.
>>
>> Question - is there some "little" magic LVM does? And if yes then how/what it is?
>> many thanks, L.
>>
>> .
>>
>> _______________________________________________
>> linux-lvm mailing list
>> linux-lvm@redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-lvm
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


.


* Re: [linux-lvm] raid & its stripes
  2017-09-14 15:49   ` lejeczek
@ 2017-09-15  2:20     ` Brassow Jonathan
  2017-09-15 11:59       ` lejeczek
  0 siblings, 1 reply; 9+ messages in thread
From: Brassow Jonathan @ 2017-09-15  2:20 UTC (permalink / raw)
  To: LVM general discussion and development

There is definitely a difference here.  You have 2 stripes with 5 devices in each stripe.  If you were writing sequentially, you’d be bouncing between the first 2 devices until they are full, then the next 2, and so on.

When using the -i argument, you are creating 10 stripes.  Writing sequentially causes the writes to go from one device to the next until all are written and then starts back at the first.  This is a very different pattern.

I think the result of any benchmark on these two very different layouts would be significantly different.

 brassow

BTW, I could swear that at one point, if you did not provide ‘-i’, it would use all of the devices as stripes, such that your two examples would result in the same thing.  I could be wrong though.
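For comparison, a rough sketch of the two creates being compared here (the VG and device names are placeholders, not taken from your setup):

$ lvcreate --type raid0 -i 2  -I 16 -n raid0.two -l 97%PV vg0 /dev/sd{a..j}   # 2 stripes; each rimage gets concatenated across several PVs
$ lvcreate --type raid0 -i 10 -I 16 -n raid0.ten -l 97%PV vg0 /dev/sd{a..j}   # 10 stripes, one per PV

The first one should roughly reproduce the layout you got without ‘-i’.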

> On Sep 14, 2017, at 10:49 AM, lejeczek <peljasz@yahoo.co.uk> wrote:
> 
> 
> 
> On 14/09/17 15:58, Brassow Jonathan wrote:
>> Seems strange on the surface.  Would you mind posting the layout of each?  ‘lvs -a -o +devices’
>> 
>>  brassow
> here is for LV created without -i, both times with & without I supplied all ten(all that VG has) pvs as arguments to lvcreate.
> 
> $ lvs -a -o +devices,stripes,stripe_size chenbro0.1
>   LV                 VG         Attr       LSize  Pool Origin Data% Meta%  Move Log Cpy%Sync Convert Devices                                 #Str Stripe
>   raid0.A            chenbro0.1 rwi-aor--- 21.18t raid0.A_rimage_0(0),raid0.A_rimage_1(0)    2 16.00k
>   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdak(0)                               1     0
>   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdam(0)                               1     0
>   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdao(0)                               1     0
>   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdaq(0)                               1     0
>   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdas(0)                               1     0
>   [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdau(0)                               1     0
>   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdal(0)                               1     0
>   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdan(0)                               1     0
>   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdap(0)                               1     0
>   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdar(0)                               1     0
>   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdat(0)                               1     0
>   [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdav(0)                               1     0
> 
> I cannot remove this LV for a while thus will not be able to recreate with -i for now, sorry.
> 
>>> On Sep 13, 2017, at 8:53 AM, lejeczek <peljasz@yahoo.co.uk> wrote:
>>> 
>>> hi boys, girls
>>> 
>>> man page reads: -i ...This is equal to the number of physical volumes to scatter the  logical  volume data....
>>> I wonder, when I do not use -i while creating an LV with 10 phy devs.
>>> 
>>> $ lvcreate -n raid0.A --type raid0 -I 16 -l 97%pv
>>> 
>>> a dbench would show:
>>> $ dbench -t 60 20
>>> ...
>>> Throughput 112.309 MB/sec  20 clients  20 procs  max_latency=719.409 ms
>>> 
>>> Yet when I say: this many stripes:
>>> 
>>> $ lvcreate -n raid0.A --type raid0 -I 16 -i 10 -l 97%pv
>>> 
>>> dbench:
>>> ...
>>> Throughput 83.2822 MB/sec  20 clients  20 procs  max_latency=816.027 ms
>>> 
>>> And though the results would vary, xfs, a dbench for LV with no -i as an argument(which LVM chooses then to be 2) would always look better.
>>> And I thought, as in the manual, always make stripes to go to all phy devices.
>>> 
>>> Question - is there some "little" magic LVM does? And if yes then how/what it is?
>>> many thanks, L.
>>> 
>>> .
>>> 
>>> _______________________________________________
>>> linux-lvm mailing list
>>> linux-lvm@redhat.com
>>> https://www.redhat.com/mailman/listinfo/linux-lvm
>>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>> 
>> _______________________________________________
>> linux-lvm mailing list
>> linux-lvm@redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-lvm
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
> 
> 
> .
> 
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


* Re: [linux-lvm] raid & its stripes
  2017-09-15  2:20     ` Brassow Jonathan
@ 2017-09-15 11:59       ` lejeczek
  2017-09-18 16:10         ` Brassow Jonathan
  0 siblings, 1 reply; 9+ messages in thread
From: lejeczek @ 2017-09-15 11:59 UTC (permalink / raw)
  To: LVM general discussion and development



On 15/09/17 03:20, Brassow Jonathan wrote:
> There is definitely a difference here.  You have 2 stripes with 5 devices in each stripe.  If you were writing sequentially, you’d be bouncing between the first 2 devices until they are full, then the next 2, and so on.
>
> When using the -i argument, you are creating 10 stripes.  Writing sequentially causes the writes to go from one device to the next until all are written and then starts back at the first.  This is a very different pattern.
>
> I think the result of any benchmark on these two very different layouts would be significantly different.
>
>   brassow
>
> BTW, I swear at one point that if you did not provide the ‘-i’ it would use all of the devices as a stripe, such that your two examples would result in the same thing.  I could be wrong though.
>
that's what I thought I remembered too.
I guess the big question, from a user/admin perspective, is: are the two stripes LVM decides on (when no -i is given) the best possible choice, arrived at after some elaborate determination (so the stripe count might vary with the RAID type, the number of physical devices and maybe other factors), or is 2 stripes simply a hard-coded default?


>> On Sep 14, 2017, at 10:49 AM, lejeczek <peljasz@yahoo.co.uk> wrote:
>>
>>
>>
>> On 14/09/17 15:58, Brassow Jonathan wrote:
>>> Seems strange on the surface.  Would you mind posting the layout of each?  ‘lvs -a -o +devices’
>>>
>>>   brassow
>> here is for LV created without -i, both times with & without I supplied all ten(all that VG has) pvs as arguments to lvcreate.
>>
>> $ lvs -a -o +devices,stripes,stripe_size chenbro0.1
>>    LV                 VG         Attr       LSize  Pool Origin Data% Meta%  Move Log Cpy%Sync Convert Devices                                 #Str Stripe
>>    raid0.A            chenbro0.1 rwi-aor--- 21.18t raid0.A_rimage_0(0),raid0.A_rimage_1(0)    2 16.00k
>>    [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdak(0)                               1     0
>>    [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdam(0)                               1     0
>>    [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdao(0)                               1     0
>>    [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdaq(0)                               1     0
>>    [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdas(0)                               1     0
>>    [raid0.A_rimage_0] chenbro0.1 iwi-aor--- 10.59t /dev/sdau(0)                               1     0
>>    [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdal(0)                               1     0
>>    [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdan(0)                               1     0
>>    [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdap(0)                               1     0
>>    [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdar(0)                               1     0
>>    [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdat(0)                               1     0
>>    [raid0.A_rimage_1] chenbro0.1 iwi-aor--- 10.59t /dev/sdav(0)                               1     0
>>
>> I cannot remove this LV for a while thus will not be able to recreate with -i for now, sorry.
>>
>>>> On Sep 13, 2017, at 8:53 AM, lejeczek <peljasz@yahoo.co.uk> wrote:
>>>>
>>>> hi boys, girls
>>>>
>>>> man page reads: -i ...This is equal to the number of physical volumes to scatter the  logical  volume data....
>>>> I wonder, when I do not use -i while creating an LV with 10 phy devs.
>>>>
>>>> $ lvcreate -n raid0.A --type raid0 -I 16 -l 97%pv
>>>>
>>>> a dbench would show:
>>>> $ dbench -t 60 20
>>>> ...
>>>> Throughput 112.309 MB/sec  20 clients  20 procs  max_latency=719.409 ms
>>>>
>>>> Yet when I say: this many stripes:
>>>>
>>>> $ lvcreate -n raid0.A --type raid0 -I 16 -i 10 -l 97%pv
>>>>
>>>> dbench:
>>>> ...
>>>> Throughput 83.2822 MB/sec  20 clients  20 procs  max_latency=816.027 ms
>>>>
>>>> And though the results would vary, xfs, a dbench for LV with no -i as an argument(which LVM chooses then to be 2) would always look better.
>>>> And I thought, as in the manual, always make stripes to go to all phy devices.
>>>>
>>>> Question - is there some "little" magic LVM does? And if yes then how/what it is?
>>>> many thanks, L.
>>>>
>>>> .
>>>>
>>>> _______________________________________________
>>>> linux-lvm mailing list
>>>> linux-lvm@redhat.com
>>>> https://www.redhat.com/mailman/listinfo/linux-lvm
>>>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>>> _______________________________________________
>>> linux-lvm mailing list
>>> linux-lvm@redhat.com
>>> https://www.redhat.com/mailman/listinfo/linux-lvm
>>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>>
>> .
>>
>> _______________________________________________
>> linux-lvm mailing list
>> linux-lvm@redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-lvm
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>
> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


.


* Re: [linux-lvm] raid & its stripes
  2017-09-15 11:59       ` lejeczek
@ 2017-09-18 16:10         ` Brassow Jonathan
  2017-09-23 15:31           ` lejeczek
  0 siblings, 1 reply; 9+ messages in thread
From: Brassow Jonathan @ 2017-09-18 16:10 UTC (permalink / raw)
  To: LVM general discussion and development


> On Sep 15, 2017, at 6:59 AM, lejeczek <peljasz@yahoo.co.uk> wrote:
> 
> 
> On 15/09/17 03:20, Brassow Jonathan wrote:
>> There is definitely a difference here.  You have 2 stripes with 5 devices in each stripe.  If you were writing sequentially, you’d be bouncing between the first 2 devices until they are full, then the next 2, and so on.
>> 
>> When using the -i argument, you are creating 10 stripes.  Writing sequentially causes the writes to go from one device to the next until all are written and then starts back at the first.  This is a very different pattern.
>> 
>> I think the result of any benchmark on these two very different layouts would be significantly different.
>> 
>>  brassow
>> 
>> BTW, I swear at one point that if you did not provide the ‘-i’ it would use all of the devices as a stripe, such that your two examples would result in the same thing.  I could be wrong though.
>> 
> that's what I thought I remembered too.
> I guess a big question, from user/admin perspective is: are those two stripes LVM decides on(when no -i) is the best possible choice LVM makes after some elaborative determination so the number of stripes(no -i) would, might vary depending on raid type, phy devices number and maybe some other factors or, 2 stripes are simply hard-coded defaults?

If it is a change in behavior, I’m sure it came as the result of some changes in the RAID handling code in recent updates and is not due to some uber-intelligent agent trying to figure out the best fit.
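If it helps: newer lvm2 releases appear to have an lvm.conf switch for exactly this, allocation/raid_stripe_all_devices; setting it to 1 is supposed to restore the old stripe-across-all-PVs behaviour when no ‘-i’ is given. Whether your build has it can be checked with:

# lvmconfig --typeconfig default allocation/raid_stripe_all_devices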

 brassow


* Re: [linux-lvm] raid & its stripes
  2017-09-18 16:10         ` Brassow Jonathan
@ 2017-09-23 15:31           ` lejeczek
  2017-09-23 15:35             ` lejeczek
  0 siblings, 1 reply; 9+ messages in thread
From: lejeczek @ 2017-09-23 15:31 UTC (permalink / raw)
  To: LVM general discussion and development



On 18/09/17 17:10, Brassow Jonathan wrote:
>> On Sep 15, 2017, at 6:59 AM, lejeczek <peljasz@yahoo.co.uk> wrote:
>>
>>
>> On 15/09/17 03:20, Brassow Jonathan wrote:
>>> There is definitely a difference here.  You have 2 stripes with 5 devices in each stripe.  If you were writing sequentially, you’d be bouncing between the first 2 devices until they are full, then the next 2, and so on.
>>>
>>> When using the -i argument, you are creating 10 stripes.  Writing sequentially causes the writes to go from one device to the next until all are written and then starts back at the first.  This is a very different pattern.
>>>
>>> I think the result of any benchmark on these two very different layouts would be significantly different.
>>>
>>>   brassow
>>>
>>> BTW, I swear at one point that if you did not provide the ‘-i’ it would use all of the devices as a stripe, such that your two examples would result in the same thing.  I could be wrong though.
>>>
>> that's what I thought I remembered too.
>> I guess a big question, from user/admin perspective is: are those two stripes LVM decides on(when no -i) is the best possible choice LVM makes after some elaborative determination so the number of stripes(no -i) would, might vary depending on raid type, phy devices number and maybe some other factors or, 2 stripes are simply hard-coded defaults?
> If it is a change in behavior, I’m sure it came as the result of some changes in the RAID handling code from recent updates and is not due to some uber-intellegent agent that is trying to figure out the best fit.
>
>   brassow
>
but it confuses me; the current state of affairs is confusing. To add to it:

~]# lvcreate --type raid5 -n raid5-0 -l 96%vg caddy-six /dev/sd{a..f}
   Using default stripesize 64.00 KiB.
   Logical volume "raid5-0" created.

~]# lvs -a -o +stripes caddy-six
   LV                 VG        Attr       LSize   Pool Origin Data% Meta%  Move Log Cpy%Sync Convert #Str
   raid5-0            caddy-six rwi-a-r--- 1.75t                                    0.28                3
   [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1
   [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1
   [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1
   [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1
   [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1
   [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1
   [raid5-0_rmeta_0]  caddy-six ewi-aor--- 4.00m                                                          1
   [raid5-0_rmeta_1]  caddy-six ewi-aor--- 4.00m                                                          1
   [raid5-0_rmeta_2]  caddy-six ewi-aor--- 4.00m                                                          1

Upon creation, the VG and LV were told: use 6 PVs.
How can we rely on what lvcreate does when it is left to decide and/or uses defaults?
Is the above raid5 example what LVM is supposed to do? Is it even a correct raid5 layout (six physical disks)?
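As a sketch of what I will try once I can recreate it: asking for the stripes explicitly. Since raid5 adds one parity device on top of -i, -i 5 should spread it over all six disks:

~]# lvcreate --type raid5 -i 5 -n raid5-0 -l 96%vg caddy-six /dev/sd{a..f}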

regards.


> _______________________________________________
> linux-lvm mailing list
> linux-lvm@redhat.com
> https://www.redhat.com/mailman/listinfo/linux-lvm
> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/


.


* Re: [linux-lvm] raid & its stripes
  2017-09-23 15:31           ` lejeczek
@ 2017-09-23 15:35             ` lejeczek
  2017-09-25 21:36               ` Brassow Jonathan
  0 siblings, 1 reply; 9+ messages in thread
From: lejeczek @ 2017-09-23 15:35 UTC (permalink / raw)
  To: LVM general discussion and development



On 23/09/17 16:31, lejeczek wrote:
>
>
> On 18/09/17 17:10, Brassow Jonathan wrote:
>>> On Sep 15, 2017, at 6:59 AM, lejeczek 
>>> <peljasz@yahoo.co.uk> wrote:
>>>
>>>
>>> On 15/09/17 03:20, Brassow Jonathan wrote:
>>>> There is definitely a difference here.  You have 2 
>>>> stripes with 5 devices in each stripe.  If you were 
>>>> writing sequentially, you’d be bouncing between the 
>>>> first 2 devices until they are full, then the next 2, 
>>>> and so on.
>>>>
>>>> When using the -i argument, you are creating 10 
>>>> stripes. Writing sequentially causes the writes to go 
>>>> from one device to the next until all are written and 
>>>> then starts back at the first.  This is a very 
>>>> different pattern.
>>>>
>>>> I think the result of any benchmark on these two very 
>>>> different layouts would be significantly different.
>>>>
>>>>   brassow
>>>>
>>>> BTW, I swear at one point that if you did not provide 
>>>> the ‘-i’ it would use all of the devices as a stripe, 
>>>> such that your two examples would result in the same 
>>>> thing.  I could be wrong though.
>>>>
>>> that's what I thought I remembered too.
>>> I guess a big question, from user/admin perspective is: 
>>> are those two stripes LVM decides on(when no -i) is the 
>>> best possible choice LVM makes after some elaborative 
>>> determination so the number of stripes(no -i) would, 
>>> might vary depending on raid type, phy devices number 
>>> and maybe some other factors or, 2 stripes are simply 
>>> hard-coded defaults?
>> If it is a change in behavior, I’m sure it came as the 
>> result of some changes in the RAID handling code from 
>> recent updates and is not due to some uber-intellegent 
>> agent that is trying to figure out the best fit.
>>
>>   brassow
>>
> but if confuses, current state of affairs is confusing. To 
> add to it:
>
> ~]# lvcreate --type raid5 -n raid5-0 -l 96%vg caddy-six 
> /dev/sd{a..f}
>   Using default stripesize 64.00 KiB.
>   Logical volume "raid5-0" created.
>
> ~]# lvs -a -o +stripes caddy-six
>   LV                 VG        Attr       LSize   Pool 
> Origin Data% Meta%  Move Log Cpy%Sync Convert #Str
>   raid5-0            caddy-six rwi-a-r--- 
> 1.75t                                    
> 0.28                3
>   [raid5-0_rimage_0] caddy-six Iwi-aor--- 
> 894.25g                                                        
> 1
>   [raid5-0_rimage_0] caddy-six Iwi-aor--- 
> 894.25g                                                        
> 1
>   [raid5-0_rimage_1] caddy-six Iwi-aor--- 
> 894.25g                                                        
> 1
>   [raid5-0_rimage_1] caddy-six Iwi-aor--- 
> 894.25g                                                        
> 1
>   [raid5-0_rimage_2] caddy-six Iwi-aor--- 
> 894.25g                                                        
> 1
>   [raid5-0_rimage_2] caddy-six Iwi-aor--- 
> 894.25g                                                        
> 1
>   [raid5-0_rmeta_0]  caddy-six ewi-aor--- 
> 4.00m                                                        
> 1
>   [raid5-0_rmeta_1]  caddy-six ewi-aor--- 
> 4.00m                                                        
> 1
>   [raid5-0_rmeta_2]  caddy-six ewi-aor--- 
> 4.00m                                                        
> 1
>
> VG and LV upon creating was told: use 6 PVs.
> How can we rely on what lvcreate does when left to decide 
> and/or use defaults?
> Is above example with raid5 what LVM is suppose to do? Is 
> it even correct raid5 layout(six phy disks)?
>
> regards.
>
>
  ~]# lvs -a -o +stripes,devices caddy-six
   LV                 VG        Attr       LSize   Pool Origin Data% Meta%  Move Log Cpy%Sync Convert #Str Devices
   raid5-0            caddy-six rwi-a-r--- 1.75t                                    2.36                3 raid5-0_rimage_0(0),raid5-0_rimage_1(0),raid5-0_rimage_2(0)
   [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sda(1)
   [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdd(0)
   [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdb(1)
   [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sde(0)
   [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdc(1)
   [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdf(0)
   [raid5-0_rmeta_0]  caddy-six ewi-aor--- 4.00m                                                          1 /dev/sda(0)
   [raid5-0_rmeta_1]  caddy-six ewi-aor--- 4.00m                                                          1 /dev/sdb(0)
   [raid5-0_rmeta_2]  caddy-six ewi-aor--- 4.00m                                                          1 /dev/sdc(0)
>> _______________________________________________
>> linux-lvm mailing list
>> linux-lvm@redhat.com
>> https://www.redhat.com/mailman/listinfo/linux-lvm
>> read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
>


.


* Re: [linux-lvm] raid & its stripes
  2017-09-23 15:35             ` lejeczek
@ 2017-09-25 21:36               ` Brassow Jonathan
  0 siblings, 0 replies; 9+ messages in thread
From: Brassow Jonathan @ 2017-09-25 21:36 UTC (permalink / raw)
  To: LVM general discussion and development


> On Sep 23, 2017, at 10:35 AM, lejeczek <peljasz@yahoo.co.uk> wrote:
> 
> 
> 
> On 23/09/17 16:31, lejeczek wrote:
>> 
>> 
>> On 18/09/17 17:10, Brassow Jonathan wrote:
>>>> On Sep 15, 2017, at 6:59 AM, lejeczek <peljasz@yahoo.co.uk> wrote:
>>>> 
>>>> 
>>>> On 15/09/17 03:20, Brassow Jonathan wrote:
>>>>> There is definitely a difference here.  You have 2 stripes with 5 devices in each stripe.  If you were writing sequentially, you’d be bouncing between the first 2 devices until they are full, then the next 2, and so on.
>>>>> 
>>>>> When using the -i argument, you are creating 10 stripes. Writing sequentially causes the writes to go from one device to the next until all are written and then starts back at the first.  This is a very different pattern.
>>>>> 
>>>>> I think the result of any benchmark on these two very different layouts would be significantly different.
>>>>> 
>>>>>   brassow
>>>>> 
>>>>> BTW, I swear at one point that if you did not provide the ‘-i’ it would use all of the devices as a stripe, such that your two examples would result in the same thing.  I could be wrong though.
>>>>> 
>>>> that's what I thought I remembered too.
>>>> I guess a big question, from user/admin perspective is: are those two stripes LVM decides on(when no -i) is the best possible choice LVM makes after some elaborative determination so the number of stripes(no -i) would, might vary depending on raid type, phy devices number and maybe some other factors or, 2 stripes are simply hard-coded defaults?
>>> If it is a change in behavior, I’m sure it came as the result of some changes in the RAID handling code from recent updates and is not due to some uber-intellegent agent that is trying to figure out the best fit.
>>> 
>>>   brassow
>>> 
>> but if confuses, current state of affairs is confusing. To add to it:
>> 
>> ~]# lvcreate --type raid5 -n raid5-0 -l 96%vg caddy-six /dev/sd{a..f}
>>   Using default stripesize 64.00 KiB.
>>   Logical volume "raid5-0" created.
>> 
>> ~]# lvs -a -o +stripes caddy-six
>>   LV                 VG        Attr       LSize   Pool Origin Data% Meta%  Move Log Cpy%Sync Convert #Str
>>   raid5-0            caddy-six rwi-a-r--- 1.75t                                    0.28                3
>>   [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1
>>   [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1
>>   [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1
>>   [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1
>>   [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1
>>   [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1
>>   [raid5-0_rmeta_0]  caddy-six ewi-aor--- 4.00m                                                        1
>>   [raid5-0_rmeta_1]  caddy-six ewi-aor--- 4.00m                                                        1
>>   [raid5-0_rmeta_2]  caddy-six ewi-aor--- 4.00m                                                        1
>> 
>> VG and LV upon creating was told: use 6 PVs.
>> How can we rely on what lvcreate does when left to decide and/or use defaults?
>> Is above example with raid5 what LVM is suppose to do? Is it even correct raid5 layout(six phy disks)?
>> 
>> regards.
>> 
>> 
>  ~]# lvs -a -o +stripes,devices caddy-six
>   LV                 VG        Attr       LSize   Pool Origin Data% Meta%  Move Log Cpy%Sync Convert #Str Devices
>   raid5-0            caddy-six rwi-a-r--- 1.75t                                    2.36                3 raid5-0_rimage_0(0),raid5-0_rimage_1(0),raid5-0_rimage_2(0)
>   [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sda(1)
>   [raid5-0_rimage_0] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdd(0)
>   [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdb(1)
>   [raid5-0_rimage_1] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sde(0)
>   [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdc(1)
>   [raid5-0_rimage_2] caddy-six Iwi-aor--- 894.25g                                                        1 /dev/sdf(0)
>   [raid5-0_rmeta_0]  caddy-six ewi-aor--- 4.00m                                                        1 /dev/sda(0)
>   [raid5-0_rmeta_1]  caddy-six ewi-aor--- 4.00m                                                        1 /dev/sdb(0)
>   [raid5-0_rmeta_2]  caddy-six ewi-aor--- 4.00m                                                        1 /dev/sdc(0)

Yeah, looks right to me.  It seems it is picking the minimum viable number of stripes for the particular RAID type.  RAID0 obviously needs at least 2 stripes.  If you give it 8 devices, it will still choose 2 stripes, with 4 devices composing each “leg/image”.  Your RAID5 needs 3 devices (2 for striping and 1 for parity).  Again, given 6 devices it will choose the minimum stripe count plus one parity device.  I suspect RAID 6 would choose at least 3 stripes plus the 2 mandatory parity devices, for a minimum of 5 devices.

Bottom line, if you want a specific number of stripes, use ‘-i’.   Remember, ‘-i' specifies the number of stripes and the parity count is added automatically.
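A couple of sketches of that rule (VG and device names are hypothetical):

# raid6: 3 stripes + 2 parity = the 5-device minimum
$ lvcreate --type raid6 -i 3 -I 64 -n r6 -l 96%VG vg5 /dev/sd{a..e}
# to actually get 8 raid0 stripes out of 8 PVs, ask for them:
$ lvcreate --type raid0 -i 8 -I 16 -n r0 -l 97%PV vg8 /dev/sd{a..h}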

 brassow


end of thread

Thread overview: 9+ messages
2017-09-13 13:53 [linux-lvm] raid & its stripes lejeczek
2017-09-14 14:58 ` Brassow Jonathan
2017-09-14 15:49   ` lejeczek
2017-09-15  2:20     ` Brassow Jonathan
2017-09-15 11:59       ` lejeczek
2017-09-18 16:10         ` Brassow Jonathan
2017-09-23 15:31           ` lejeczek
2017-09-23 15:35             ` lejeczek
2017-09-25 21:36               ` Brassow Jonathan
