From: Scott Mcdermott <scott@smemsh.net>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] when bringing dm-cache online, consumes all memory and reboots
Date: Mon, 23 Mar 2020 15:02:24 -0700 [thread overview]
Message-ID: <CACRKOwzT2qk_wWNo5m_ZeKa-S6ZoXSxPhJhQVGceNVDzmKu-GQ@mail.gmail.com> (raw)
In-Reply-To: <7931a754-cf8e-eb6c-adf1-d54784dbf73f@redhat.com>
On Mon, Mar 23, 2020 at 2:57 AM Zdenek Kabelac <zkabelac@redhat.com> wrote:
> Dne 23. 03. 20 v 9:26 Joe Thornber napsal(a):
> > On Sun, Mar 22, 2020 at 10:57:35AM -0700, Scott Mcdermott wrote:
> > > have a 931.5 GiB SSD pair in raid1 (mdraid) as cache LV for a
> > > data LV on a 1.8 TiB raid1 (mdraid) pair of larger spinning disks.
>
> Users should be performing some benchmarking to find the useful size of
> the hotspot area - using nearly 1T of cache for 1.8T of origin doesn't
> look like the right ratio for caching.
> (i.e. as if your CPU cache were half the size of your DRAM)
the 1.8T origin will be upgraded over time with larger/more spinning
disks, but the cache will remain as it is. hopefully it can perform
well whether the cache:data ratio is 1:2 as now, or 1:10+ later.
> Too big a 'cache size' usually leads to way too big caching chunks
> (since we try to limit the number of 'chunks' in cache to 1 million - you
> can raise this limit - but it will consume a lot of your RAM space as well).
> So IMHO I'd recommend using at most 512K chunks - which gives you
> about 256GiB of cache size - but still users should benchmark what is
> best for them...)
how do I raise this limit? since I'm low on RAM this is a problem, but
why are large chunks an issue, besides memory usage? are they causing
unnecessary I/O through an amplification effect? if my system doesn't
have enough memory for this job I will have to find a host board with
more RAM.
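(for the archives: my current guess is that the limit lives in
lvm.conf - the option name below is from memory and may differ between
lvm2 versions, so check `man lvm.conf` before trusting it:

```
# /etc/lvm/lvm.conf fragment -- assumed knob name, verify for your lvm2
allocation {
	# 0 means use the built-in default limit (~1 million chunks)
	cache_pool_max_chunks = 2000000
}
```

corrections welcome if that's the wrong knob.)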
> Another hint - lvm2 introduced support for new dm-writecache target as well.
this won't work for me since a lot of my workload is reads, and I'm
low on memory with large numbers of files. rsync of large trees is the
main workload; the existing algorithm is not working fantastically
well, but nonetheless gives a nice boost to my rsync completion times
over the uncached times.
Thread overview: 11+ messages
2020-03-22 17:57 [linux-lvm] when bringing dm-cache online, consumes all memory and reboots Scott Mcdermott
2020-03-23 8:26 ` Joe Thornber
2020-03-23 9:57 ` Zdenek Kabelac
2020-03-23 16:26 ` John Stoffel
2020-03-23 22:02 ` Scott Mcdermott [this message]
2020-03-24 9:43 ` Zdenek Kabelac
2020-03-24 11:37 ` Gionatan Danti
2020-03-24 15:09 ` Zdenek Kabelac
2020-03-24 22:35 ` Gionatan Danti
2020-03-25 8:55 ` Zdenek Kabelac
2020-03-23 21:35 ` Scott Mcdermott