From: "John Stoffel" <>
To: Gionatan Danti <>
Cc: "Roy Sigurd Karlsbakk" <>,
	Håkon <>,
	"LVM general discussion and development" <>
Subject: Re: [linux-lvm] Looking ahead - tiering with LVM?
Date: Wed, 9 Sep 2020 15:53:31 -0400
Message-ID: <24409.12987.382863.95686@quad.stoffel.home>
In-Reply-To: <>

>>>>> "Gionatan" == Gionatan Danti <> writes:

Gionatan> Il 2020-09-09 20:47 John Stoffel ha scritto:
>> This assumes you're tiering whole files, not at the per-block level
>> though, right?

Gionatan> The tiered approach I developed and maintained in the past, yes. For any 
Gionatan> LVM-based tiering, we are speaking about block-level tiering (as LVM 
Gionatan> itself has no "files" concept).

>> Do you have numbers?  I'm using DM_CACHE on my home NAS server box,
>> and it *does* seem to help, but only in certain cases.   I've got a
>> 750gb home directory LV with an 80gb lv_cache writethrough cache
>> setup.  So it's not great on write heavy loads, but it's good in read
>> heavy ones, such as kernel compiles where it does make a difference.

Gionatan> Numbers for available space for tiering vs cache can vary
Gionatan> based on your setup. However, storage tiers generally are at
Gionatan> least 5-10X apart from each other (ie: 1 TB SSD for 10 TB
Gionatan> HDD). Hence my gut feeling that tiering is not drastically
Gionatan> better than lvm cache. But hey - I reserve the right to be
Gionatan> totally wrong ;)

Very true, numbers talk, anecdotes walk... 

>> So it's not only the caching being per-file or per-block, but how the
>> actual cache is done?  writeback is faster, but less reliable if you
>> crash.  Writethrough is slower, but much more reliable.

Gionatan> writeback cache surely is more prone to failure vs
Gionatan> writethrough cache. The golden rule is that writeback cache
Gionatan> should use a mirrored device (with device-level powerloss
Gionatan> protected writeback cache if sync write speed is important).

Even in my case I use mirrored SSDs for my cache LVs.  It's the only
sane thing to do IMHO.
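For anyone following along, a mirrored cache pool like this can be built
with stock lvm2 commands along these lines (a sketch only; the VG name
"vg0", the LV names, and the SSD partitions are hypothetical - substitute
your own):

```shell
# Mirrored (raid1) cache data LV across the two SSDs
lvcreate --type raid1 -m1 -L 80G -n lv_cache vg0 /dev/sda1 /dev/sdb1

# Small mirrored metadata LV on the same SSDs
lvcreate --type raid1 -m1 -L 1G -n lv_cache_meta vg0 /dev/sda1 /dev/sdb1

# Combine data + metadata into a cache pool
lvconvert --type cache-pool --poolmetadata vg0/lv_cache_meta vg0/lv_cache

# Attach the pool to the origin LV in writethrough mode
lvconvert --type cache --cachepool vg0/lv_cache \
          --cachemode writethrough vg0/home
```

With writethrough, every write lands on the HDD origin before completing,
so losing the cache SSDs costs performance, not data - the mirroring is
extra belt-and-suspenders.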

Gionatan> But this is somewhat orthogonal to the original question:
Gionatan> block-level tiering itself increases the chances of data
Gionatan> loss (ie: losing the SSD component will ruin the entire
Gionatan> filesystem), so you should use mirrored (or parity) devices
Gionatan> for tiering also.

It does, you really need to have a solid setup in terms of hardware,
with known failure modes you can handle, before you start trying to
tier blocks.

Though maybe a writethrough block cache would be ok, since the cache
would only be used for reads, not writes.  Which could help if you have
a bunch of VMs (or containers, or whatevers) with a lot of duplicated
data that are all hitting the disk systems at once.


Thread overview: 12+ messages
2020-09-02 18:38 Roy Sigurd Karlsbakk
2020-09-05 11:47 ` Gionatan Danti
2020-09-09 15:01   ` Roy Sigurd Karlsbakk
2020-09-09 18:16     ` Gionatan Danti
2020-09-09 18:47       ` John Stoffel
2020-09-09 19:10         ` Zdenek Kabelac
2020-09-09 19:21           ` John Stoffel
2020-09-09 19:44         ` Gionatan Danti
2020-09-09 19:53           ` John Stoffel [this message]
2020-09-09 20:20             ` Gionatan Danti
2020-09-09 19:41       ` Roy Sigurd Karlsbakk
2020-09-09 19:49         ` Gionatan Danti
