All of lore.kernel.org
 help / color / mirror / Atom feed
* [linux-lvm] is lvmcache really ready?
@ 2016-04-23 16:50 Xen
  2016-04-23 18:10 ` Dax Kelson
  2016-04-25 13:49 ` Joe Thornber
  0 siblings, 2 replies; 6+ messages in thread
From: Xen @ 2016-04-23 16:50 UTC (permalink / raw)
  To: linux-lvm

Attempting to join this list, but the web interface is down.

I am interested in using lvmcache, but I hear the performance is very 
meagre. That was in a thread from 2014 that I was reading, but a recent 
blog post said the same:

https://www.rath.org/ssd-caching-under-linux.html

He first tries lvmcache and sees no noticeable improvement; then he 
uses bcache and it works like a charm.

He was using regular partitions on regular disks, with no RAID 
involved. He tried to improve boot speeds and did not notice anything.

Setting the promote adjustments to zero made no difference.

Is this feature defunct? I mean, is this really a usable and functional 
thing? From what it seems, I think not.

I'll have a small mSATA SSD to test with shortly, but... if this thing 
is so bugged that it keeps reading from the origin device anyway, 
there's not much point to it.

The reason I want to use it at this point is partly to speed up booting 
(although that is unimportant) but mostly to make the system more 
snappy while running, e.g. application startup.

At the same time that would decouple my system from my data, by using 
that small SSD for the system (by way of the cache) so that system 
seeks no longer affect the performance of other operations on the 
device (that also holds that 'data').

I don't mind having writeback for this, because any (small) delay in 
writing to the origin would fit this strategy well.

Some IO is buffered anyway, and you might think that with 8GB of RAM 
you would get some IO buffering normally, but a cache is something 
that persists between reboots, so to speak.

I'm simply trying to see what improvements I can get with LVM.
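For the record, the kind of setup I have in mind would look roughly 
like this. This is only a sketch: "vg", "origin" and /dev/sdb are 
placeholders for my actual volume group, origin LV and SSD, and the 
sizes are illustrative.

```shell
# Create the cache data LV and cache metadata LV on the SSD
lvcreate -n cache0     -L 20G vg /dev/sdb
lvcreate -n cache0meta -L 64M vg /dev/sdb

# Combine them into a cache pool; writeback instead of the default
# writethrough, per the strategy above
lvconvert --type cache-pool --cachemode writeback \
          --poolmetadata vg/cache0meta vg/cache0

# Attach the cache pool to the origin LV
lvconvert --type cache --cachepool vg/cache0 vg/origin

# Tune the mq policy, e.g. set the promote adjustments to zero as the
# blog post tried
lvchange --cachesettings \
    'read_promote_adjustment=0 write_promote_adjustment=0' vg/origin
```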



That aside, I think it is annoying that on Debian, Ubuntu and Kubuntu 
systems, thin-provisioning-tools is still not installed by default, and 
you also need to create an initramfs hook to have the files included 
in the initramfs. Then, Grub2 also does not support thin LVM volumes 
at all: although you can boot fine from a thin root, grub-probe will 
not be able to process these volumes; it complains and exits.
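The hook I mean is something along these lines; the filename and the 
exact set of binaries to copy are my assumptions for a Debian-style 
initramfs-tools setup.

```shell
#!/bin/sh
# /etc/initramfs-tools/hooks/thin-provisioning (hypothetical filename)
PREREQ="lvm2"
prereqs() { echo "$PREREQ"; }
case "$1" in
    prereqs) prereqs; exit 0 ;;
esac

. /usr/share/initramfs-tools/hook-functions
# copy_exec pulls in each binary plus the libraries it links against
copy_exec /usr/sbin/thin_check
copy_exec /usr/sbin/thin_repair
copy_exec /usr/sbin/thin_dump
```

After making it executable, `update-initramfs -u` should rebuild the 
initramfs with the tools included.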

Personally I think snapshotting on thin volumes is much more usable, 
if that's what you want to use: there is no need to allocate 
sufficiently-sized volumes in advance. I tried to ask on #lvm but they 
were not helpful.
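By way of illustration, this is the difference I mean (volume and 
snapshot names are placeholders):

```shell
# Classic snapshot: CoW space must be reserved up front, and you have
# to guess how large it needs to be or the snapshot fills up
lvcreate -s -n root_snap -L 5G vg/root

# Thin snapshot: no size argument at all; space is taken from the
# thin pool on demand as blocks diverge
lvcreate -s -n root_snap vg/thinroot

# Thin snapshots are flagged to be skipped on activation by default;
# -K activates one anyway
lvchange -ay -K vg/root_snap
```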

I don't really see why LVM is so neglected, other than because of the 
popularity of btrfs these days.

LVM is much more modular and normally very easy to work with. The 
commands make sense for the most part, and the only thing missing is a 
nice GUI.

(I was even writing a snapshot-to-incremental-backup script at some 
point.)

LVM is one of the saner things still existing in Linux from my 
perspective, even if it doesn't feel perfect. But that is mostly 
because you are using a software emulation of partitions that in a way 
tries to "avoid" having to do it in "firmware", in the sense that you 
are trying to do what regular partitions can't.

Someone called this "deferred design" I believe.

Because of this I believe it cannot really fully work out (for me), 
particularly when it comes to cross-platform use and encryption; in 
the same way, I prefer a firmware/BIOS environment for managing RAID 
arrays rather than software alone.

At the same time, with UEFI et al., my opinion is just the reverse: 
let the boot loader please be software in some way, even if it's only 
a menu. At least you can adjust your software; about the firmware you 
might not have a say.

Deferred design, when Linux tools do everything a regular computer 
firmware environment should. And then they created UEFI and tried to 
take away things that do belong to the operating system, while not 
creating anything that would work better, except for GPT disks.

I also have a question about thin LVM, but I will ask it in another 
email.

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-lvm] is lvmcache really ready?
  2016-04-23 16:50 [linux-lvm] is lvmcache really ready? Xen
@ 2016-04-23 18:10 ` Dax Kelson
  2016-04-25 13:19   ` Xen
  2016-04-29 13:20   ` Brassow Jonathan
  2016-04-25 13:49 ` Joe Thornber
  1 sibling, 2 replies; 6+ messages in thread
From: Dax Kelson @ 2016-04-23 18:10 UTC (permalink / raw)
  To: LVM general discussion and development


I don't always write emails, but when I do, it's a stream of consciousness.

We use LVM Cache in production in the infrastructure hosting our online
classroom environment. The read requests hitting our origin LV dropped by
90%. We're pretty happy with it.

I wish you could have multiple origin LVs using the same cache pool.

Dax Kelson

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-lvm] is lvmcache really ready?
  2016-04-23 18:10 ` Dax Kelson
@ 2016-04-25 13:19   ` Xen
  2016-04-29 13:20   ` Brassow Jonathan
  1 sibling, 0 replies; 6+ messages in thread
From: Xen @ 2016-04-25 13:19 UTC (permalink / raw)
  To: LVM general discussion and development

Dax Kelson schreef op 23-04-2016 18:10:
> I don't always write emails, but when I do, it's a stream of
> consciousness.

;-).

> We use LVM Cache in production in the infrastructure hosting our
> online classroom environment. The read requests hitting our origin LV
> dropped by 90%. We're pretty happy with it.
> 
> I wish you could have multiple origin LVs using the same cache pool.

Thanks. Yeah, that's what the article said bcache supports. Now you 
may need to cache an LV that is itself an LVM container, i.e. a PV ;-)

Regards.




^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-lvm] is lvmcache really ready?
  2016-04-23 16:50 [linux-lvm] is lvmcache really ready? Xen
  2016-04-23 18:10 ` Dax Kelson
@ 2016-04-25 13:49 ` Joe Thornber
  2016-04-25 15:21   ` Xen
  1 sibling, 1 reply; 6+ messages in thread
From: Joe Thornber @ 2016-04-25 13:49 UTC (permalink / raw)
  To: LVM general discussion and development

On Sat, Apr 23, 2016 at 04:50:01PM +0000, Xen wrote:
> The reason I'm wanting to use it at this point is to speed up
> booting (although unimportant) but mostly to make a more snappy
> system while running, ie. for instance just application startup.

I doubt it'll speed up booting, which occurs infrequently so dm-cache
will not have reason to promote those blocks above others.

- Joe

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-lvm] is lvmcache really ready?
  2016-04-25 13:49 ` Joe Thornber
@ 2016-04-25 15:21   ` Xen
  0 siblings, 0 replies; 6+ messages in thread
From: Xen @ 2016-04-25 15:21 UTC (permalink / raw)
  To: LVM general discussion and development

Joe Thornber schreef op 25-04-2016 13:49:
> On Sat, Apr 23, 2016 at 04:50:01PM +0000, Xen wrote:
>> The reason I'm wanting to use it at this point is to speed up
>> booting (although unimportant) but mostly to make a more snappy
>> system while running, ie. for instance just application startup.
> 
> I doubt it'll speed up booting, which occurs infrequently so dm-cache
> will not have reason to promote those blocks above others.

Thank you. I'm not really concerned with boot speeds; the other person 
was. It depends on how I am going to organize the thing, but it is 
probably going to be used more for runtime workloads.

Especially if the cache fills up, I expect boot-time information not 
to stay in it; you're right.

Thanks. You people do inspire some confidence :) ;-) :).

Regards.

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [linux-lvm] is lvmcache really ready?
  2016-04-23 18:10 ` Dax Kelson
  2016-04-25 13:19   ` Xen
@ 2016-04-29 13:20   ` Brassow Jonathan
  1 sibling, 0 replies; 6+ messages in thread
From: Brassow Jonathan @ 2016-04-29 13:20 UTC (permalink / raw)
  To: LVM general discussion and development


> On Apr 23, 2016, at 1:10 PM, Dax Kelson <dkelson@gurulabs.com> wrote:
> 
> I don't always write emails, but when I do, it's a stream of consciousness.
> 
> We use LVM Cache in production in the infrastructure hosting our online classroom environment. The read requests hitting our origin LV dropped by 90%. We're pretty happy with it.
> 
> I wish you could have multiple origin LVs using the same cache pool.

BTW, if you use thin provisioning, you can.  If you cache the ThinDataLV (see lvmthin(7)), the effect will be that all thin LVs are cached.
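If I read lvmthin(7) and lvmcache(7) right, that would be along these 
lines; all names here are placeholders, and the exact invocation 
depends on your LVM version, so check the man pages.

```shell
# vg/pool is the thin pool; vg/cpool is a cache pool already created
# on the SSD. Converting the thin pool attaches the cache to its data
# sub-LV (pool_tdata), so every thin LV in the pool benefits.
lvconvert --type cache --cachepool vg/cpool vg/pool
```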

 brassow

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2016-04-29 13:20 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-23 16:50 [linux-lvm] is lvmcache really ready? Xen
2016-04-23 18:10 ` Dax Kelson
2016-04-25 13:19   ` Xen
2016-04-29 13:20   ` Brassow Jonathan
2016-04-25 13:49 ` Joe Thornber
2016-04-25 15:21   ` Xen
