All of lore.kernel.org
 help / color / mirror / Atom feed
* LVM and Raid5
@ 2009-09-16  8:22 Linux Raid Study
  2009-09-16  9:42 ` Jon Hardcastle
                   ` (2 more replies)
  0 siblings, 3 replies; 17+ messages in thread
From: Linux Raid Study @ 2009-09-16  8:22 UTC (permalink / raw)
  To: linux-raid; +Cc: linuxraid.study

Hello:

Has anyone experimented with LVM and RAID5 together (on, say, 2.6.27)?
Is there any performance drop when LVM and RAID5 are combined vs. RAID5 alone?

Thanks for your inputs!

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-16  8:22 LVM and Raid5 Linux Raid Study
@ 2009-09-16  9:42 ` Jon Hardcastle
  2009-09-16 10:09 ` Goswin von Brederlow
  2009-09-17 12:37 ` Michal Soltys
  2 siblings, 0 replies; 17+ messages in thread
From: Jon Hardcastle @ 2009-09-16  9:42 UTC (permalink / raw)
  To: linux-raid; +Cc: linuxraid.study




--- On Wed, 16/9/09, Linux Raid Study <linuxraid.study@gmail.com> wrote:

> From: Linux Raid Study <linuxraid.study@gmail.com>
> Subject: LVM and Raid5
> To: linux-raid@vger.kernel.org
> Cc: linuxraid.study@gmail.com
> Date: Wednesday, 16 September, 2009, 9:22 AM
> Hello:
> 
> Has someone experimented with LVM and Raid5 together (on
> say, 2.6.27)?
> Is there any performance drop if LVM/Raid5 are combined vs
> Raid5 alone?
> 
> Thanks for your inputs!
> --

Hi, I can't remember exactly which kernel version I am running, but I have certainly noticed a difference. I have had the RAID array set up for nearly 2 years and have only in the last 6 months become interested in optimising it.

Someone suggested this: http://mbhtech.blogspot.com/2009/09/software-raid-vs-lvm-quick-speed-test_08.html

My tests concur with those results; in hindsight, I feel I would not have bothered with the LVM. BUT it does give me a flexibility I would probably miss otherwise.

I have not looked at optimising stripe sizes though - or any other optimisation really. 

-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
-----------------------


      

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-16  8:22 LVM and Raid5 Linux Raid Study
  2009-09-16  9:42 ` Jon Hardcastle
@ 2009-09-16 10:09 ` Goswin von Brederlow
  2009-09-16 10:20   ` Majed B.
  2009-09-17 12:37 ` Michal Soltys
  2 siblings, 1 reply; 17+ messages in thread
From: Goswin von Brederlow @ 2009-09-16 10:09 UTC (permalink / raw)
  To: Linux Raid Study; +Cc: linux-raid

Linux Raid Study <linuxraid.study@gmail.com> writes:

> Hello:
>
> Has someone experimented with LVM and Raid5 together (on say, 2.6.27)?
> Is there any performance drop if LVM/Raid5 are combined vs Raid5 alone?
>
> Thanks for your inputs!

It has always worked perfectly for me, and I can't say I noticed any
performance change.

MfG
        Goswin

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-16 10:09 ` Goswin von Brederlow
@ 2009-09-16 10:20   ` Majed B.
  2009-09-16 10:33     ` Jon Hardcastle
  2009-09-21 17:34     ` Goswin von Brederlow
  0 siblings, 2 replies; 17+ messages in thread
From: Majed B. @ 2009-09-16 10:20 UTC (permalink / raw)
  To: Goswin von Brederlow; +Cc: Linux Raid Study, linux-raid

Hello,

I'm the one who ran those tests with LVM vs. RAID5, and I think I saw
a speed difference because I have disks of varying speeds (different
models and vendors); I believe the LVM volume gets crippled down to
the speed of the slowest disk.

On Wed, Sep 16, 2009 at 1:09 PM, Goswin von Brederlow <goswin-v-b@web.de> wrote:
> Linux Raid Study <linuxraid.study@gmail.com> writes:
>
>> Hello:
>>
>> Has someone experimented with LVM and Raid5 together (on say, 2.6.27)?
>> Is there any performance drop if LVM/Raid5 are combined vs Raid5 alone?
>>
>> Thanks for your inputs!
>
> Has always worked perfectly for me and i can't say I noticed any
> performance change.
>
> MfG
>        Goswin
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
       Majed B.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-16 10:20   ` Majed B.
@ 2009-09-16 10:33     ` Jon Hardcastle
  2009-09-16 11:00       ` Majed B.
  2009-09-21 17:34     ` Goswin von Brederlow
  1 sibling, 1 reply; 17+ messages in thread
From: Jon Hardcastle @ 2009-09-16 10:33 UTC (permalink / raw)
  To: Goswin von Brederlow, Majed B.; +Cc: Linux Raid Study, linux-raid




--- On Wed, 16/9/09, Majed B. <majedb@gmail.com> wrote:

> From: Majed B. <majedb@gmail.com>
> Subject: Re: LVM and Raid5
> To: "Goswin von Brederlow" <goswin-v-b@web.de>
> Cc: "Linux Raid Study" <linuxraid.study@gmail.com>, linux-raid@vger.kernel.org
> Date: Wednesday, 16 September, 2009, 11:20 AM
> Hello,
> 
> I'm the one who ran those tests with LVM vs. RAID5 and I
> think I have
> faced speed difference because I have disks of varying
> speeds
> (different models and vendors), and I believe that LVM gets
> crippled
> down to the speed of the slowest disk.
> 
> On Wed, Sep 16, 2009 at 1:09 PM, Goswin von Brederlow
> <goswin-v-b@web.de>
> wrote:
> > Linux Raid Study <linuxraid.study@gmail.com>
> writes:
> >
> >> Hello:
> >>
> >> Has someone experimented with LVM and Raid5
> together (on say, 2.6.27)?
> >> Is there any performance drop if LVM/Raid5 are
> combined vs Raid5 alone?
> >>
> >> Thanks for your inputs!
> >
> > Has always worked perfectly for me and i can't say I
> noticed any
> > performance change.
> >
> > MfG
> >        Goswin
> > --
> > To unsubscribe from this list: send the line
> "unsubscribe linux-raid" in
> > the body of a message to majordomo@vger.kernel.org
> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
> >
> 
> 
> 
> -- 
>        Majed B.
> --

Is that the case even if a dd read from, say, md4 is much, much faster than the LVs that are based on md4? Surely if the read from the underlying md is faster, the LVs should be too?

-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
-----------------------


      

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-16 10:33     ` Jon Hardcastle
@ 2009-09-16 11:00       ` Majed B.
  2009-09-16 13:15         ` Chris Webb
  0 siblings, 1 reply; 17+ messages in thread
From: Majed B. @ 2009-09-16 11:00 UTC (permalink / raw)
  To: Jon; +Cc: Goswin von Brederlow, Linux Raid Study, linux-raid

I tested with both dd and hdparm, and in both cases, md0 proved to be
much faster than the LV itself.
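For reference, a read test along those lines might look like this (device names are hypothetical, and O_DIRECT is used so the page cache doesn't skew the comparison):

```shell
# Compare sequential-read throughput of the raw MD array vs. an LV on
# top of it. Device names below are hypothetical; adjust to your setup.
read_mb_per_s() {
    # GNU dd reports throughput on stderr; O_DIRECT bypasses the page cache
    dd if="$1" of=/dev/null bs=1M count=1024 iflag=direct 2>&1 | tail -n 1
}
# read_mb_per_s /dev/md0            # the raw array
# read_mb_per_s /dev/vg0/lv0        # a logical volume on the same array
# hdparm -t /dev/md0 /dev/vg0/lv0   # hdparm's buffered-read cross-check
```

Run as root; a large count keeps short bursts of caching from dominating the numbers.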

On Wed, Sep 16, 2009 at 1:33 PM, Jon Hardcastle <jd_hardcastle@yahoo.com> wrote:
>
>
>
> --- On Wed, 16/9/09, Majed B. <majedb@gmail.com> wrote:
>
>> From: Majed B. <majedb@gmail.com>
>> Subject: Re: LVM and Raid5
>> To: "Goswin von Brederlow" <goswin-v-b@web.de>
>> Cc: "Linux Raid Study" <linuxraid.study@gmail.com>, linux-raid@vger.kernel.org
>> Date: Wednesday, 16 September, 2009, 11:20 AM
>> Hello,
>>
>> I'm the one who ran those tests with LVM vs. RAID5 and I
>> think I have
>> faced speed difference because I have disks of varying
>> speeds
>> (different models and vendors), and I believe that LVM gets
>> crippled
>> down to the speed of the slowest disk.
>>
>> On Wed, Sep 16, 2009 at 1:09 PM, Goswin von Brederlow
>> <goswin-v-b@web.de>
>> wrote:
>> > Linux Raid Study <linuxraid.study@gmail.com>
>> writes:
>> >
>> >> Hello:
>> >>
>> >> Has someone experimented with LVM and Raid5
>> together (on say, 2.6.27)?
>> >> Is there any performance drop if LVM/Raid5 are
>> combined vs Raid5 alone?
>> >>
>> >> Thanks for your inputs!
>> >
>> > Has always worked perfectly for me and i can't say I
>> noticed any
>> > performance change.
>> >
>> > MfG
>> >        Goswin
>> > --
>> > To unsubscribe from this list: send the line
>> "unsubscribe linux-raid" in
>> > the body of a message to majordomo@vger.kernel.org
>> > More majordomo info at  http://vger.kernel.org/majordomo-info.html
>> >
>>
>>
>>
>> --
>>        Majed B.
>> --
>
> Is that the case even if a dd read from say md4 is much, much faster that he lvm's that are based on md4? Surely if the read from the underlying md is faster so should the LV's?
>
> -----------------------
> N: Jon Hardcastle
> E: Jon@eHardcastle.com
> 'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
> -----------------------
>
>
>
>



-- 
       Majed B.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-16 11:00       ` Majed B.
@ 2009-09-16 13:15         ` Chris Webb
  0 siblings, 0 replies; 17+ messages in thread
From: Chris Webb @ 2009-09-16 13:15 UTC (permalink / raw)
  To: Majed B.; +Cc: Jon, Goswin von Brederlow, Linux Raid Study, linux-raid

"Majed B." <majedb@gmail.com> writes:

> I tested with both dd and hdparm, and in both cases, md0 proved to be
> much faster than the LV itself.

This is quite odd, because usually an LVM2 logical volume is just a simple
linear device-mapper target onto the backing device. I could imagine a small
performance change, but a large one really surprises me. Could there be
something going on here with sync vs. non-sync IO?
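One way to see the buffered-vs-direct effect without touching a real array (a sketch using a temporary file as a stand-in; O_DIRECT may be unsupported on some filesystems, such as tmpfs):

```shell
# Read the same data twice: once through the page cache, once with
# O_DIRECT. On a real device the buffered number can be wildly inflated
# by caching, which would make md vs. LV comparisons misleading.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=8 status=none
sync
dd if="$tmp" of=/dev/null bs=1M status=none               # buffered read
dd if="$tmp" of=/dev/null bs=1M iflag=direct status=none 2>/dev/null \
    && echo "O_DIRECT read OK" \
    || echo "O_DIRECT unsupported on this filesystem"
rm -f "$tmp"
```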

Best wishes,

Chris.

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-16  8:22 LVM and Raid5 Linux Raid Study
  2009-09-16  9:42 ` Jon Hardcastle
  2009-09-16 10:09 ` Goswin von Brederlow
@ 2009-09-17 12:37 ` Michal Soltys
  2009-09-21 14:33     ` [linux-lvm] " Mike Snitzer
  2 siblings, 1 reply; 17+ messages in thread
From: Michal Soltys @ 2009-09-17 12:37 UTC (permalink / raw)
  To: Linux Raid Study; +Cc: linux-raid

Linux Raid Study wrote:
> Hello:
> 
> Has someone experimented with LVM and Raid5 together (on say, 2.6.27)?
> Is there any performance drop if LVM/Raid5 are combined vs Raid5 alone?
> 
> Thanks for your inputs!

A few things to consider when setting up LVM on MD RAID:

- readahead set on the LVM device

It defaults to 256 sectors on any LVM device, while MD sets it
according to the number of disks present in the RAID.
If you run tests on a filesystem, you may see significant
differences due to that. YMMV depending on the type of
benchmark(s) used.

- filesystem awareness of the underlying RAID

For example, XFS created directly on top of the RAID will
generally get the parameters right (stripe unit, stripe width),
but XFS on LVM on RAID won't - you will have to provide them
manually.

- alignment between LVM chunks and MD chunks

Make sure the extent area used for the actual logical volumes
starts at a stripe-unit boundary - you can adjust LVM's
metadata size during pvcreate (by default it's 192KiB, so with
a non-default stripe unit it may cause issues, although I
vaguely recall posts that current LVM is MD-aware during
initialization). Of course, LVM must itself start at such a
boundary for that to make any sense (and it doesn't have to be
the case - for example, if you use partitionable MD).

The best case is when the LVM chunk size is a multiple of the
stripe width, since then non-linear logical volumes will always
be split at a stripe-width boundary. But that requires 2^n data
disks, which is not always the case.
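To make the arithmetic concrete, here is a sketch for an assumed geometry (4-disk RAID5 with a 64 KiB chunk; the pvcreate/mkfs.xfs invocations are illustrative, not prescriptive):

```shell
# Worked example of the alignment arithmetic above, for an assumed
# geometry: 4-disk RAID5 with a 64 KiB chunk (adjust for your array).
chunk_kib=64
disks=4
data_disks=$((disks - 1))                  # RAID5 loses one disk to parity
stripe_width_kib=$((chunk_kib * data_disks))
echo "stripe unit:  ${chunk_kib} KiB"
echo "stripe width: ${stripe_width_kib} KiB"

# The default 192 KiB metadata area happens to align here (192 = 3 * 64);
# with a non-default chunk you would pass the alignment explicitly
# (hypothetical device names):
#   pvcreate --dataalignment ${chunk_kib}k /dev/md0
#   mkfs.xfs -d su=${chunk_kib}k,sw=${data_disks} /dev/vg0/lv0
if [ $((192 % chunk_kib)) -eq 0 ]; then
    echo "default 192 KiB metadata area aligns with the stripe unit"
fi
```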

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-17 12:37 ` Michal Soltys
@ 2009-09-21 14:33     ` Mike Snitzer
  0 siblings, 0 replies; 17+ messages in thread
From: Mike Snitzer @ 2009-09-21 14:33 UTC (permalink / raw)
  To: Michal Soltys; +Cc: Linux Raid Study, linux-raid, linux-lvm

On Thu, Sep 17, 2009 at 8:37 AM, Michal Soltys <soltys@ziu.info> wrote:
> Linux Raid Study wrote:
>>
>> Hello:
>>
>> Has someone experimented with LVM and Raid5 together (on say, 2.6.27)?
>> Is there any performance drop if LVM/Raid5 are combined vs Raid5 alone?
>>
>> Thanks for your inputs!
>
> Few things to consider when setting up LVM on MD raid:
>
> - readahead set on lvm device
>
> It defaults to 256 on any LVM device, while MD will set it accordingly to
> the amount of disks present in the raid. If you do tests on a filesystem,
> you may see significant differences due to that. YMMV depending on the type
> of used benchmark(s).
>
> - filesystem awareness of underlying raid
>
> For example, xfs created on top of raid, will generally get the parameters
> right (stripe unit, stripe width), but if it's xfs on lvm on raid, then it
> won't - you will have to provide them manually.
>
> - alignment between LVM chunks and MD chunks
>
> Make sure that extent area used for actual logical volumes start at the
> boundary of stripe unit - you can adjust the LVM's metadata size during
> pvcreate (by default it's 192KiB, so with non-default stripe unit it may
> cause issues, although I vaguely recall posts that current LVM is MD aware
> during initialization). Of course LVM must itself start at the boundary for
> that to make any sense (and it doesn't have to be the case - for example if
> you use partitionable MD).

All of the above have been resolved in recent LVM2 userspace (2.02.51
being the most recent release with all these addressed).  The last
issue you mention (partitionable MD alignment offset) is also resolved
when a recent LVM2 is coupled with Linux 2.6.31 (which provides IO
Topology support).

Mike

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [linux-lvm] Re: LVM and Raid5
@ 2009-09-21 14:33     ` Mike Snitzer
  0 siblings, 0 replies; 17+ messages in thread
From: Mike Snitzer @ 2009-09-21 14:33 UTC (permalink / raw)
  To: Michal Soltys; +Cc: linux-raid, Linux Raid Study, linux-lvm

On Thu, Sep 17, 2009 at 8:37 AM, Michal Soltys <soltys@ziu.info> wrote:
> Linux Raid Study wrote:
>>
>> Hello:
>>
>> Has someone experimented with LVM and Raid5 together (on say, 2.6.27)?
>> Is there any performance drop if LVM/Raid5 are combined vs Raid5 alone?
>>
>> Thanks for your inputs!
>
> Few things to consider when setting up LVM on MD raid:
>
> - readahead set on lvm device
>
> It defaults to 256 on any LVM device, while MD will set it accordingly to
> the amount of disks present in the raid. If you do tests on a filesystem,
> you may see significant differences due to that. YMMV depending on the type
> of used benchmark(s).
>
> - filesystem awareness of underlying raid
>
> For example, xfs created on top of raid, will generally get the parameters
> right (stripe unit, stripe width), but if it's xfs on lvm on raid, then it
> won't - you will have to provide them manually.
>
> - alignment between LVM chunks and MD chunks
>
> Make sure that extent area used for actual logical volumes start at the
> boundary of stripe unit - you can adjust the LVM's metadata size during
> pvcreate (by default it's 192KiB, so with non-default stripe unit it may
> cause issues, although I vaguely recall posts that current LVM is MD aware
> during initialization). Of course LVM must itself start at the boundary for
> that to make any sense (and it doesn't have to be the case - for example if
> you use partitionable MD).

All of the above have been resolved in recent LVM2 userspace (2.02.51
being the most recent release with all these addressed).  The last
issue you mention (partitionable MD alignment offset) is also resolved
when a recent LVM2 is coupled with Linux 2.6.31 (which provides IO
Topology support).

Mike

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-21 14:33     ` [linux-lvm] " Mike Snitzer
@ 2009-09-21 16:30       ` Jon Hardcastle
  -1 siblings, 0 replies; 17+ messages in thread
From: Jon Hardcastle @ 2009-09-21 16:30 UTC (permalink / raw)
  To: Michal Soltys, Mike Snitzer; +Cc: linux-raid, Linux Raid Study, linux-lvm

--- On Mon, 21/9/09, Mike Snitzer <snitzer@gmail.com> wrote:

> From: Mike Snitzer <snitzer@gmail.com>
> Subject: Re: LVM and Raid5
> To: "Michal Soltys" <soltys@ziu.info>
> Cc: "Linux Raid Study" <linuxraid.study@gmail.com>, linux-raid@vger.kernel.org, linux-lvm@redhat.com
> Date: Monday, 21 September, 2009, 3:33 PM
> On Thu, Sep 17, 2009 at 8:37 AM,
> Michal Soltys <soltys@ziu.info>
> wrote:
> > Linux Raid Study wrote:
> >>
> >> Hello:
> >>
> >> Has someone experimented with LVM and Raid5
> together (on say, 2.6.27)?
> >> Is there any performance drop if LVM/Raid5 are
> combined vs Raid5 alone?
> >>
> >> Thanks for your inputs!
> >
> > Few things to consider when setting up LVM on MD
> raid:
> >
> > - readahead set on lvm device
> >
> > It defaults to 256 on any LVM device, while MD will
> set it accordingly to
> > the amount of disks present in the raid. If you do
> tests on a filesystem,
> > you may see significant differences due to that. YMMV
> depending on the type
> > of used benchmark(s).
> >
> > - filesystem awareness of underlying raid
> >
> > For example, xfs created on top of raid, will
> generally get the parameters
> > right (stripe unit, stripe width), but if it's xfs on
> lvm on raid, then it
> > won't - you will have to provide them manually.
> >
> > - alignment between LVM chunks and MD chunks
> >
> > Make sure that extent area used for actual logical
> volumes start at the
> > boundary of stripe unit - you can adjust the LVM's
> metadata size during
> > pvcreate (by default it's 192KiB, so with non-default
> stripe unit it may
> > cause issues, although I vaguely recall posts that
> current LVM is MD aware
> > during initialization). Of course LVM must itself
> start at the boundary for
> > that to make any sense (and it doesn't have to be the
> case - for example if
> > you use partitionable MD).
> 
> All of the above have been resolved in recent LVM2
> userspace (2.02.51
> being the most recent release with all these
> addressed).  The last
> issue you mention (partitionable MD alignment offset) is
> also resolved
> when a recent LVM2 is coupled with Linux 2.6.31 (which
> provides IO
> Topology support).
> 
> Mike
> --

When you say 'resolved', do you mean automatically? If so, at the time the volumes are created, etc.?
-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
-----------------------





      

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/

^ permalink raw reply	[flat|nested] 17+ messages in thread

* [linux-lvm] Re: LVM and Raid5
@ 2009-09-21 16:30       ` Jon Hardcastle
  0 siblings, 0 replies; 17+ messages in thread
From: Jon Hardcastle @ 2009-09-21 16:30 UTC (permalink / raw)
  To: Michal Soltys, Mike Snitzer; +Cc: linux-raid, Linux Raid Study, linux-lvm

--- On Mon, 21/9/09, Mike Snitzer <snitzer@gmail.com> wrote:

> From: Mike Snitzer <snitzer@gmail.com>
> Subject: Re: LVM and Raid5
> To: "Michal Soltys" <soltys@ziu.info>
> Cc: "Linux Raid Study" <linuxraid.study@gmail.com>, linux-raid@vger.kernel.org, linux-lvm@redhat.com
> Date: Monday, 21 September, 2009, 3:33 PM
> On Thu, Sep 17, 2009 at 8:37 AM,
> Michal Soltys <soltys@ziu.info>
> wrote:
> > Linux Raid Study wrote:
> >>
> >> Hello:
> >>
> >> Has someone experimented with LVM and Raid5
> together (on say, 2.6.27)?
> >> Is there any performance drop if LVM/Raid5 are
> combined vs Raid5 alone?
> >>
> >> Thanks for your inputs!
> >
> > Few things to consider when setting up LVM on MD
> raid:
> >
> > - readahead set on lvm device
> >
> > It defaults to 256 on any LVM device, while MD will
> set it accordingly to
> > the amount of disks present in the raid. If you do
> tests on a filesystem,
> > you may see significant differences due to that. YMMV
> depending on the type
> > of used benchmark(s).
> >
> > - filesystem awareness of underlying raid
> >
> > For example, xfs created on top of raid, will
> generally get the parameters
> > right (stripe unit, stripe width), but if it's xfs on
> lvm on raid, then it
> > won't - you will have to provide them manually.
> >
> > - alignment between LVM chunks and MD chunks
> >
> > Make sure that extent area used for actual logical
> volumes start at the
> > boundary of stripe unit - you can adjust the LVM's
> metadata size during
> > pvcreate (by default it's 192KiB, so with non-default
> stripe unit it may
> > cause issues, although I vaguely recall posts that
> current LVM is MD aware
> > during initialization). Of course LVM must itself
> start at the boundary for
> > that to make any sense (and it doesn't have to be the
> case - for example if
> > you use partitionable MD).
> 
> All of the above have been resolved in recent LVM2
> userspace (2.02.51
> being the most recent release with all these
> addressed).  The last
> issue you mention (partitionable MD alignment offset) is
> also resolved
> when a recent LVM2 is coupled with Linux 2.6.31 (which
> provides IO
> Topology support).
> 
> Mike
> --

When you say 'resolved', do you mean automatically? If so, at the time the volumes are created, etc.?
-----------------------
N: Jon Hardcastle
E: Jon@eHardcastle.com
'Do not worry about tomorrow, for tomorrow will bring worries of its own.'
-----------------------





      

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-21 16:30       ` [linux-lvm] " Jon Hardcastle
@ 2009-09-21 17:26         ` Mike Snitzer
  -1 siblings, 0 replies; 17+ messages in thread
From: Mike Snitzer @ 2009-09-21 17:26 UTC (permalink / raw)
  To: Jon, LVM general discussion and development
  Cc: linux-raid, Linux Raid Study, Michal Soltys

On Mon, Sep 21 2009 at 12:30pm -0400,
Jon Hardcastle <jd_hardcastle@yahoo.com> wrote:

> --- On Mon, 21/9/09, Mike Snitzer <snitzer@gmail.com> wrote:
> 
> > From: Mike Snitzer <snitzer@gmail.com>
> > Subject: Re: LVM and Raid5
> > To: "Michal Soltys" <soltys@ziu.info>
> > Cc: "Linux Raid Study" <linuxraid.study@gmail.com>, linux-raid@vger.kernel.org, linux-lvm@redhat.com
> > Date: Monday, 21 September, 2009, 3:33 PM
> > On Thu, Sep 17, 2009 at 8:37 AM,
> > Michal Soltys <soltys@ziu.info>
> > wrote:
> > > Linux Raid Study wrote:
> > >>
> > >> Hello:
> > >>
> > >> Has someone experimented with LVM and Raid5
> > together (on say, 2.6.27)?
> > >> Is there any performance drop if LVM/Raid5 are
> > combined vs Raid5 alone?
> > >>
> > >> Thanks for your inputs!
> > >
> > > Few things to consider when setting up LVM on MD
> > raid:
> > >
> > > - readahead set on lvm device
> > >
> > > It defaults to 256 on any LVM device, while MD will
> > set it accordingly to
> > > the amount of disks present in the raid. If you do
> > tests on a filesystem,
> > > you may see significant differences due to that. YMMV
> > depending on the type
> > > of used benchmark(s).
> > >
> > > - filesystem awareness of underlying raid
> > >
> > > For example, xfs created on top of raid, will
> > generally get the parameters
> > > right (stripe unit, stripe width), but if it's xfs on
> > lvm on raid, then it
> > > won't - you will have to provide them manually.
> > >
> > > - alignment between LVM chunks and MD chunks
> > >
> > > Make sure that extent area used for actual logical
> > volumes start at the
> > > boundary of stripe unit - you can adjust the LVM's
> > metadata size during
> > > pvcreate (by default it's 192KiB, so with non-default
> > stripe unit it may
> > > cause issues, although I vaguely recall posts that
> > current LVM is MD aware
> > > during initialization). Of course LVM must itself
> > start at the boundary for
> > > that to make any sense (and it doesn't have to be the
> > case - for example if
> > > you use partitionable MD).
> > 
> > All of the above have been resolved in recent LVM2
> > userspace (2.02.51
> > being the most recent release with all these
> > addressed).  The last
> > issue you mention (partitionable MD alignment offset) is
> > also resolved
> > when a recent LVM2 is coupled with Linux 2.6.31 (which
> > provides IO
> > Topology support).
> > 
> > Mike
> > --
> 
> When you say 'resolved' are we talking automatically? if so, when the
> volumes are created... etc etc?

Yes, automatically when the volumes are created.

The relevant lvm.conf options (enabled by default) are:

devices/md_chunk_alignment (useful for LVM on MD w/ Linux < 2.6.31)
devices/data_alignment_detection
devices/data_alignment_offset_detection

readahead defaults to "auto" in lvm.conf:
activation/readahead
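For completeness, the readahead difference discussed earlier can be checked by hand (device names are hypothetical; blockdev reports values in 512-byte sectors):

```shell
# Readahead is reported in 512-byte sectors. MD sizes it automatically
# from the member count; a bare device-mapper target defaults to 256.
# Hypothetical device names:
#   blockdev --getra /dev/md0       # typically large on a multi-disk array
#   blockdev --getra /dev/vg0/lv0   # often just 256 without the fixes above
default_ra_sectors=256
echo "default LV readahead: $((default_ra_sectors * 512 / 1024)) KiB"
# To raise it persistently on one LV (illustrative):
#   lvchange -r 4096 vg0/lv0
```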


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [linux-lvm] Re: LVM and Raid5
@ 2009-09-21 17:26         ` Mike Snitzer
  0 siblings, 0 replies; 17+ messages in thread
From: Mike Snitzer @ 2009-09-21 17:26 UTC (permalink / raw)
  To: Jon, LVM general discussion and development
  Cc: linux-raid, Linux Raid Study, Michal Soltys

On Mon, Sep 21 2009 at 12:30pm -0400,
Jon Hardcastle <jd_hardcastle@yahoo.com> wrote:

> --- On Mon, 21/9/09, Mike Snitzer <snitzer@gmail.com> wrote:
> 
> > From: Mike Snitzer <snitzer@gmail.com>
> > Subject: Re: LVM and Raid5
> > To: "Michal Soltys" <soltys@ziu.info>
> > Cc: "Linux Raid Study" <linuxraid.study@gmail.com>, linux-raid@vger.kernel.org, linux-lvm@redhat.com
> > Date: Monday, 21 September, 2009, 3:33 PM
> > On Thu, Sep 17, 2009 at 8:37 AM,
> > Michal Soltys <soltys@ziu.info>
> > wrote:
> > > Linux Raid Study wrote:
> > >>
> > >> Hello:
> > >>
> > >> Has someone experimented with LVM and Raid5
> > together (on say, 2.6.27)?
> > >> Is there any performance drop if LVM/Raid5 are
> > combined vs Raid5 alone?
> > >>
> > >> Thanks for your inputs!
> > >
> > > Few things to consider when setting up LVM on MD
> > raid:
> > >
> > > - readahead set on lvm device
> > >
> > > It defaults to 256 on any LVM device, while MD will
> > set it accordingly to
> > > the amount of disks present in the raid. If you do
> > tests on a filesystem,
> > > you may see significant differences due to that. YMMV
> > depending on the type
> > > of used benchmark(s).
> > >
> > > - filesystem awareness of underlying raid
> > >
> > > For example, xfs created on top of raid, will
> > generally get the parameters
> > > right (stripe unit, stripe width), but if it's xfs on
> > lvm on raid, then it
> > > won't - you will have to provide them manually.
> > >
> > > - alignment between LVM chunks and MD chunks
> > >
> > > Make sure that extent area used for actual logical
> > volumes start at the
> > > boundary of stripe unit - you can adjust the LVM's
> > metadata size during
> > > pvcreate (by default it's 192KiB, so with non-default
> > stripe unit it may
> > > cause issues, although I vaguely recall posts that
> > current LVM is MD aware
> > > during initialization). Of course LVM must itself
> > start at the boundary for
> > > that to make any sense (and it doesn't have to be the
> > case - for example if
> > > you use partitionable MD).
> > 
> > All of the above have been resolved in recent LVM2
> > userspace (2.02.51
> > being the most recent release with all these
> > addressed).  The last
> > issue you mention (partitionable MD alignment offset) is
> > also resolved
> > when a recent LVM2 is coupled with Linux 2.6.31 (which
> > provides IO
> > Topology support).
> > 
> > Mike
> > --
> 
> When you say 'resolved' are we talking automatically? if so, when the
> volumes are created... etc etc?

Yes, automatically when the volumes are created.

The relevant lvm.conf options (enabled by default) are:

devices/md_chunk_alignment (useful for LVM on MD w/ Linux < 2.6.31)
devices/data_alignment_detection
devices/data_alignment_offset_detection

readahead defaults to "auto" in lvm.conf:
activation/readahead

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-16 10:20   ` Majed B.
  2009-09-16 10:33     ` Jon Hardcastle
@ 2009-09-21 17:34     ` Goswin von Brederlow
  1 sibling, 0 replies; 17+ messages in thread
From: Goswin von Brederlow @ 2009-09-21 17:34 UTC (permalink / raw)
  To: Majed B.; +Cc: Goswin von Brederlow, Linux Raid Study, linux-raid

"Majed B." <majedb@gmail.com> writes:

> Hello,
>
> I'm the one who ran those tests with LVM vs. RAID5 and I think I have
> faced speed difference because I have disks of varying speeds
> (different models and vendors), and I believe that LVM gets crippled
> down to the speed of the slowest disk.
>
> On Wed, Sep 16, 2009 at 1:09 PM, Goswin von Brederlow <goswin-v-b@web.de> wrote:
>> Linux Raid Study <linuxraid.study@gmail.com> writes:
>>
>>> Hello:
>>>
>>> Has someone experimented with LVM and Raid5 together (on say, 2.6.27)?
>>> Is there any performance drop if LVM/Raid5 are combined vs Raid5 alone?
>>>
>>> Thanks for your inputs!
>>
>> Has always worked perfectly for me and i can't say I noticed any
>> performance change.
>>
>> MfG
>>        Goswin

But the question was RAID5 vs. LVM+RAID5. The RAID5 gets crippled down
to the slowest disk and speeds up due to striping; the LVM should not
have any noticeable effect on top of that.
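A back-of-the-envelope version of that striping estimate, with assumed numbers:

```shell
# Rough RAID5 sequential-read ceiling: each member is paced by the
# slowest disk, but reads stream from all members in parallel.
# Both figures below are hypothetical.
slowest_mb_s=60   # assumed throughput of the slowest member disk
disks=4
echo "rough sequential-read ceiling: $((slowest_mb_s * disks)) MB/s"
```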

MfG
        Goswin

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-21 14:33     ` [linux-lvm] " Mike Snitzer
  (?)
  (?)
@ 2009-09-21 17:38     ` Linux Raid Study
  2009-09-21 19:14       ` Majed B.
  -1 siblings, 1 reply; 17+ messages in thread
From: Linux Raid Study @ 2009-09-21 17:38 UTC (permalink / raw)
  To: Mike Snitzer; +Cc: Michal Soltys, linux-raid, linux-lvm

Can I use LVM2 with kernel 2.6.27?

Thanks everyone!

On Mon, Sep 21, 2009 at 7:33 AM, Mike Snitzer <snitzer@gmail.com> wrote:
> On Thu, Sep 17, 2009 at 8:37 AM, Michal Soltys <soltys@ziu.info> wrote:
>> Linux Raid Study wrote:
>>>
>>> Hello:
>>>
>>> Has someone experimented with LVM and Raid5 together (on say, 2.6.27)?
>>> Is there any performance drop if LVM/Raid5 are combined vs Raid5 alone?
>>>
>>> Thanks for your inputs!
>>
>> Few things to consider when setting up LVM on MD raid:
>>
>> - readahead set on lvm device
>>
>> It defaults to 256 on any LVM device, while MD will set it accordingly to
>> the amount of disks present in the raid. If you do tests on a filesystem,
>> you may see significant differences due to that. YMMV depending on the type
>> of used benchmark(s).
>>
>> - filesystem awareness of underlying raid
>>
>> For example, xfs created on top of raid, will generally get the parameters
>> right (stripe unit, stripe width), but if it's xfs on lvm on raid, then it
>> won't - you will have to provide them manually.
>>
>> - alignment between LVM chunks and MD chunks
>>
>> Make sure that extent area used for actual logical volumes start at the
>> boundary of stripe unit - you can adjust the LVM's metadata size during
>> pvcreate (by default it's 192KiB, so with non-default stripe unit it may
>> cause issues, although I vaguely recall posts that current LVM is MD aware
>> during initialization). Of course LVM must itself start at the boundary for
>> that to make any sense (and it doesn't have to be the case - for example if
>> you use partitionable MD).
>
> All of the above have been resolved in recent LVM2 userspace (2.02.51
> being the most recent release with all these addressed).  The last
> issue you mention (partitionable MD alignment offset) is also resolved
> when a recent LVM2 is coupled with Linux 2.6.31 (which provides IO
> Topology support).
>
> Mike
>
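
A rough way to tell whether a given setup gets the automatic alignment Mike
describes is to compare the installed LVM2 version against 2.02.51. The
version string below is hard-coded for illustration, not taken from this
thread.

```shell
# Sketch: does this LVM2 handle MD alignment itself, or is it manual?
# HAVE is hard-coded here; on a real system something like this works:
#   HAVE=$(lvm version | awk '/LVM version/ {print $3}')
# Note also that only kernels >= 2.6.31 export the IO topology, under
#   /sys/block/<dev>/queue/optimal_io_size
HAVE="2.02.39"   # illustrative version string
NEED="2.02.51"
if [ "$(printf '%s\n' "$NEED" "$HAVE" | sort -V | head -n 1)" = "$NEED" ]; then
    ALIGN="automatic"   # LVM2 >= 2.02.51: alignment handled for you
else
    ALIGN="manual"      # older LVM2: pass --dataalignment to pvcreate
fi
echo "alignment: $ALIGN"
```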
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: LVM and Raid5
  2009-09-21 17:38     ` Linux Raid Study
@ 2009-09-21 19:14       ` Majed B.
  0 siblings, 0 replies; 17+ messages in thread
From: Majed B. @ 2009-09-21 19:14 UTC (permalink / raw)
  To: Linux Raid Study; +Cc: linux-raid, linux-lvm

On Mon, Sep 21, 2009 at 8:38 PM, Linux Raid Study
<linuxraid.study@gmail.com> wrote:
> Can I use LVM2 with kernel 2.6.27?
>
> Thanks everyone!

Can you be more specific? If you mean in general, then yes. If you
want to use it with RAID, then also yes.

>> On Mon, Sep 21, 2009 at 7:33 AM, Mike Snitzer <snitzer@gmail.com> wrote:
>> On Thu, Sep 17, 2009 at 8:37 AM, Michal Soltys <soltys@ziu.info> wrote:
>>> Linux Raid Study wrote:
>>>>
>>>> Hello:
>>>>
>>>> Has someone experimented with LVM and Raid5 together (on say, 2.6.27)?
>>>> Is there any performance drop if LVM/Raid5 are combined vs Raid5 alone?
>>>>
>>>> Thanks for your inputs!
>>>
>>> Few things to consider when setting up LVM on MD raid:
>>>
>>> - readahead set on lvm device
>>>
>>> It defaults to 256 sectors on any LVM device, while MD sets it according to
>>> the number of disks present in the raid. If you run tests on a filesystem,
>>> you may see significant differences due to that. YMMV depending on the type
>>> of benchmark(s) used.
>>>
>>> - filesystem awareness of underlying raid
>>>
>>> For example, xfs created on top of raid will generally get the parameters
>>> right (stripe unit, stripe width), but if it's xfs on lvm on raid, it
>>> won't - you will have to provide them manually.
>>>
>>> - alignment between LVM chunks and MD chunks
>>>
>>> Make sure that the extent area used for actual logical volumes starts at a
>>> stripe unit boundary - you can adjust the LVM metadata size during
>>> pvcreate (by default it's 192KiB, so with a non-default stripe unit it may
>>> cause issues, although I vaguely recall posts that current LVM is MD-aware
>>> during initialization). Of course, LVM itself must start at the boundary for
>>> that to make any sense (and that doesn't have to be the case - for example,
>>> if you use partitionable MD).
>>
>> All of the above have been resolved in recent LVM2 userspace (2.02.51
>> being the most recent release with all these addressed).  The last
>> issue you mention (partitionable MD alignment offset) is also resolved
>> when a recent LVM2 is coupled with Linux 2.6.31 (which provides IO
>> Topology support).
>>
>> Mike
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>



-- 
       Majed B.
--
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2009-09-21 19:14 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-09-16  8:22 LVM and Raid5 Linux Raid Study
2009-09-16  9:42 ` Jon Hardcastle
2009-09-16 10:09 ` Goswin von Brederlow
2009-09-16 10:20   ` Majed B.
2009-09-16 10:33     ` Jon Hardcastle
2009-09-16 11:00       ` Majed B.
2009-09-16 13:15         ` Chris Webb
2009-09-21 17:34     ` Goswin von Brederlow
2009-09-17 12:37 ` Michal Soltys
2009-09-21 14:33   ` Mike Snitzer
2009-09-21 14:33     ` [linux-lvm] " Mike Snitzer
2009-09-21 16:30     ` Jon Hardcastle
2009-09-21 16:30       ` [linux-lvm] " Jon Hardcastle
2009-09-21 17:26       ` Mike Snitzer
2009-09-21 17:26         ` [linux-lvm] " Mike Snitzer
2009-09-21 17:38     ` Linux Raid Study
2009-09-21 19:14       ` Majed B.

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.