From: "John Stoffel" <john@stoffel.org>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] when bringing dm-cache online, consumes all memory and reboots
Date: Mon, 23 Mar 2020 12:26:47 -0400
Message-ID: <24184.58183.680172.804079@quad.stoffel.home>
In-Reply-To: <7931a754-cf8e-eb6c-adf1-d54784dbf73f@redhat.com>

>>>>> "Zdenek" == Zdenek Kabelac <zkabelac@redhat.com> writes:

Zdenek> On 23. 03. 2020 at 9:26, Joe Thornber wrote:
>> On Sun, Mar 22, 2020 at 10:57:35AM -0700, Scott Mcdermott wrote:
>>> have a 931.5 GiB SSD pair in RAID1 (mdraid) as a cache LV for a
>>> data LV on a 1.8 TiB RAID1 (mdraid) pair of larger spinning disks.
>>> these disks are hosted by a small 4GB big.LITTLE ARM system
>>> running 4.4.192-rk3399 (Armbian 5.98 bionic).  parameters were set
>>> with: lvconvert --type cache --cachemode writeback --cachepolicy smq
>>> --cachesettings migration_threshold=10000000
>> 
>> If you crash then the cache assumes all blocks are dirty and performs
>> a full writeback.  You have set the migration_threshold extremely high
>> so I think this writeback process is just submitting far too much io at once.
>> 
>> Bring it down to around 2048 and try again.
>> 
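
(If I'm reading the docs right, migration_threshold is counted in
512-byte sectors, so the difference is roughly:

    10,000,000 sectors * 512 bytes ~= 4.8 GiB of migration I/O allowed
         2,048 sectors * 512 bytes  =   1 MiB

which would go a long way toward explaining a 4GB ARM box falling over.)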

Zdenek> Hi

Zdenek> Users should do some benchmarking to find a 'useful' size for
Zdenek> their hotspot areas - using nearly 1T of cache for 1.8T of
Zdenek> origin doesn't look like the right ratio for caching.
Zdenek> (i.e. as if your CPU cache were half the size of your DRAM)

Zdenek> Too big a cache size usually leads to way too big caching
Zdenek> chunks (since we try to limit the number of chunks in the
Zdenek> cache to 1 million - you can raise this limit, but it will
Zdenek> consume a lot of your RAM as well).  So IMHO I'd recommend
Zdenek> using at most 512K chunks - which gives you about 256GiB of
Zdenek> cache size - but users should still benchmark what is best
Zdenek> for them...
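
(Just to put numbers on that, assuming the chunk count is simply cache
size divided by chunk size:

    931.5 GiB /   1 MiB chunks ~=   954,000 chunks  (under the 1M limit)
    931.5 GiB / 512 KiB chunks ~= 1,900,000 chunks  (well over it)

'lvs -o+chunk_size' should show what chunk size lvm actually picked,
if I have the field name right.)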

I think dm-cache should be smarter as well, and not let users bring
the system to its knees with outrageous numbers.  When a user sets a
migration_threshold that high, there should be a safety check so the
system doesn't let them consume too much memory, and it should respond
to memory pressure instead.

Also, can you change the migration_threshold without activating the
LV?  Or while it is active?
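
(I would have hoped something like this works, with vg/cachedlv
standing in for the real names:

    lvchange --cachesettings migration_threshold=2048 vg/cachedlv

but I don't know whether that needs the LV active, or whether it only
takes effect on the next activation.)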

John

Thread overview: 11+ messages
2020-03-22 17:57 [linux-lvm] when bringing dm-cache online, consumes all memory and reboots Scott Mcdermott
2020-03-23  8:26 ` Joe Thornber
2020-03-23  9:57   ` Zdenek Kabelac
2020-03-23 16:26     ` John Stoffel [this message]
2020-03-23 22:02     ` Scott Mcdermott
2020-03-24  9:43       ` Zdenek Kabelac
2020-03-24 11:37         ` Gionatan Danti
2020-03-24 15:09           ` Zdenek Kabelac
2020-03-24 22:35             ` Gionatan Danti
2020-03-25  8:55               ` Zdenek Kabelac
2020-03-23 21:35   ` Scott Mcdermott
