From: Zdenek Kabelac <>
To: LVM general discussion and development <>,
	John Stoffel <>
Cc: "Roy Sigurd Karlsbakk" <>, Håkon <>
Subject: Re: [linux-lvm] Looking ahead - tiering with LVM?
Date: Wed, 9 Sep 2020 21:10:13 +0200	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <24409.9033.527504.36789@quad.stoffel.home>

Dne 09. 09. 20 v 20:47 John Stoffel napsal(a):
>>>>>> "Gionatan" == Gionatan Danti <> writes:
> Gionatan> Il 2020-09-09 17:01 Roy Sigurd Karlsbakk ha scritto:
>>> First, filelevel is usually useless. Say you have 50 VMs with Windows
>>> server something. A lot of them are bound to have a ton of equal
>>> storage in the same areas, but the file size and content will vary
>>> over time. With blocklevel tiering, that could work better.
> Gionatan> It really depends on the use case. I applied it to a
> Gionatan> fileserver, so working at file level was the right
> Gionatan> choice. For VMs (or big files) it is useless, I agree.
> This assumes you're tiering whole files, not at the per-block level
> though, right?
>>> This is all known.
> Gionatan> But the only reason to want tiering vs cache is the
> Gionatan> additional space the former provides. If this additional
> Gionatan> space is so small (compared to the combined, total volume
> Gionatan> space), tiering's advantage shrinks to (almost) nothing.
> Do you have numbers?  I'm using DM_CACHE on my home NAS server box,
> and it *does* seem to help, but only in certain cases.   I've got a
> 750gb home directory LV with an 80gb lv_cache writethrough cache
> setup.  So it's not great on write heavy loads, but it's good in read
> heavy ones, such as kernel compiles where it does make a difference.
> So it's not only the caching being per-file or per-block, but how the
> actual cache is done?  writeback is faster, but less reliable if you
> crash.  Writethrough is slower, but much more reliable.


dm-cache (--type cache) is a hotspot cache (it caches the most used areas of the device).

dm-writecache (--type writecache) is great for write-intensive loads (it somewhat
extends your page cache onto your NVMe/SSD/persistent memory).
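
For illustration, a minimal sketch of attaching each cache type with lvconvert(8), per lvmcache(7); the VG/LV names, device path, and sizes here are hypothetical:

```shell
# Assumed setup: VG "vg" holds a slow LV "home", and /dev/nvme0n1
# is a fast PV already added to the same VG.

# dm-cache: hotspot cache backed by a cache volume on the fast device
lvcreate -n fastvol -L 80G vg /dev/nvme0n1
lvconvert --type cache --cachevol fastvol --cachemode writethrough vg/home

# Only one cache can be attached at a time, so detach it first,
# then attach a dm-writecache for write-intensive loads instead:
lvconvert --splitcache vg/home
lvcreate -n wvol -L 20G vg /dev/nvme0n1
lvconvert --type writecache --cachevol wvol vg/home

# Detach again, flushing any dirty blocks back to the origin LV
lvconvert --splitcache vg/home
```

Writethrough (as quoted above) keeps the origin always consistent at the cost of write speed; writeback is faster but risks dirty blocks on a crash of the cache device.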

We were thinking about layering caches above each other - but so far there
was no big demand, and the complexity of solving the problem rises greatly
- i.e. there is no problem in letting users stack a cache on top of another cache
on top of a 3rd cache - but what should happen when one of them starts failing...

AFAIK no one is yet writing a driver for combining e.g. SSD + HDD
into a single drive that would relocate blocks (so you get a total size
approximately equal to the sum of both devices) - but there is dm-zoned, which
solves a somewhat similar problem - though I've no experience with that...



Thread overview: 12+ messages
2020-09-02 18:38 Roy Sigurd Karlsbakk
2020-09-05 11:47 ` Gionatan Danti
2020-09-09 15:01   ` Roy Sigurd Karlsbakk
2020-09-09 18:16     ` Gionatan Danti
2020-09-09 18:47       ` John Stoffel
2020-09-09 19:10         ` Zdenek Kabelac [this message]
2020-09-09 19:21           ` John Stoffel
2020-09-09 19:44         ` Gionatan Danti
2020-09-09 19:53           ` John Stoffel
2020-09-09 20:20             ` Gionatan Danti
2020-09-09 19:41       ` Roy Sigurd Karlsbakk
2020-09-09 19:49         ` Gionatan Danti
