From: Matthias Ferdinand <bcache@mfedv.net>
To: Santiago Castillo Oli <scastillo@aragon.es>
Cc: linux-bcache@vger.kernel.org
Subject: Re: Best strategy for caching VMs storage
Date: Fri, 21 May 2021 14:29:40 +0200	[thread overview]
Message-ID: <YKentDNRwAmEGb8X@xoff> (raw)
In-Reply-To: <08e95aaf-a5e5-fb32-31ea-ca35cc028fac@aragon.es>

On Fri, May 21, 2021 at 01:56:16PM +0200, Santiago Castillo Oli wrote:
> Hi there.
> 
> 
> I have a host running 4 VMs using qcow2 storage on an ext4 fs over HDD. Each
> VM has 3 qcow2 files (system, data and swap). I know I have an I/O
> bottleneck.
> 
> I want to use bcache with an SSD to accelerate disk access, but I'm not sure
> where I should put bcache in the storage stack.
> 
> 
> Should I use bcache on host or in guests?
> 
> Just one bcache backing device for a single (ext4) filesystem with all qcow
> files there, or different bcache and backing devices for each qcow2 file?
> 
> 
> Right now, I prefer qcow2 over thin-lvm for storage, but I could change my
> mind if thin-lvm is a much better combination for bcache.
> 
> 
> What would be the best strategy for caching VM storage?
> 
> Any recommendation, please?


Hi,

not claiming to know "the best" strategy, but I would recommend

  - use a single bcache device on the host

  - either use LVM (thick provisioned) to provide block devices
    to VMs, or put a filesystem on it and store qcow2 files there as
    you did before (a rough command sketch follows below)
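
As a rough sketch of that host-side setup (all device names, sizes and
the mount point below are placeholders, adjust them to your system):

  # SSD as cache, HDD as backing device; formatting both in one
  # make-bcache call also attaches them, giving /dev/bcache0
  make-bcache -C /dev/sdY -B /dev/sdX        # sdY=SSD, sdX=HDD (placeholders)

  # option A: one filesystem holding the qcow2 files, as before
  mkfs.ext4 /dev/bcache0
  mount /dev/bcache0 /var/lib/libvirt/images # path is just an example

  # option B: thick LVM, one LV per VM disk passed to the guest
  pvcreate /dev/bcache0
  vgcreate vg_vms /dev/bcache0
  lvcreate -L 40G -n vm1-system vg_vms       # names/sizes made up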

With lvm-thin you have all the metadata activity for all your VMs in one
place; any error there and you might lose all your VM storage at once.
Of course you should do regular backups of your VMs anyway, but because
of that blast radius I would not start using lvm-thin unless I could keep
the relevant metadata volume on redundant storage.
Just my paranoid 2c :-)
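
If you do end up trying lvm-thin anyway, one way to get the pool
metadata onto redundant storage is to build the pool from explicit data
and metadata LVs, with the metadata LV on a mirrored PV. Rough sketch
only; it assumes a VG (vg_vms, name made up) that contains both the
bcache-backed PV and a RAID1 md PV:

  # data LV on the bcache-backed PV, metadata LV on the RAID1 device
  lvcreate -n thindata -L 500G vg_vms /dev/bcache0
  lvcreate -n thinmeta -L 1G   vg_vms /dev/md0
  # combine them into a thin pool
  lvconvert --type thin-pool --poolmetadata vg_vms/thinmeta vg_vms/thindata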

Speaking of blast radius: adding an SSD to the stack makes your VMs'
storage performance and availability depend on two devices instead of
one, so it may increase your overall failure rate. Choose a high-quality
SSD, preferably datacenter-grade equipment. And of course, do your own
performance tests to see whether the improvement is large enough to
justify the added risk of failure.
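
For those tests, fio against the cached device vs. the bare HDD gives a
quick first impression. Minimal read-only example (device path is a
placeholder; don't point write tests at devices that already hold data):

  fio --name=randread --filename=/dev/bcache0 --readonly \
      --rw=randread --bs=4k --ioengine=libaio --iodepth=32 \
      --direct=1 --runtime=60 --time_based

Keep in mind that bcache bypasses the cache for sequential I/O by
default (sequential_cutoff), so test random I/O patterns that resemble
what your VMs actually do.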

Matthias
