linux-lvm.redhat.com archive mirror
From: Scott Mcdermott <scott@smemsh.net>
To: LVM general discussion and development <linux-lvm@redhat.com>
Subject: Re: [linux-lvm] when bringing dm-cache online, consumes all memory and reboots
Date: Mon, 23 Mar 2020 14:35:45 -0700	[thread overview]
Message-ID: <CACRKOwyuDawzP3f9XL1N2PreixCmS8-80UqEOEdKoCS4=x2UpQ@mail.gmail.com> (raw)
In-Reply-To: <20200323082608.7i6wzq2t3k24hzun@reti>

On Mon, Mar 23, 2020 at 1:26 AM Joe Thornber <thornber@redhat.com> wrote:
> On Sun, Mar 22, 2020 at 10:57:35AM -0700, Scott Mcdermott wrote:
> > [system crashed, uses all memory when brought online...]
> > parameters were set with: lvconvert --type cache
> > --cachemode writeback --cachepolicy smq
> > --cachesettings migration_threshold=10000000
>
> If you crash then the cache assumes all blocks are dirty and performs
> a full writeback.  You have set the migration_threshold extremely high
> so I think this writeback process is just submitting far too much io at once.
>
> Bring it down to around 2048 and try again.

the device wasn't visible in "dmsetup table" prior to activation, so I tried:

  lvchange -ay raidbak4/bakvol4
  dmsetup message raidbak4-bakvol4 0 migration_threshold 204800

but this continued to crash; apparently the value in effect at
activation time is enough on its own to take the system down.
instead using:

  lvchange --cachesettings migration_threshold=204800 raidbak4/bakvol4
  lvchange -ay raidbak4/bakvol4

this worked, and disk bandwidth usage was much lower (lower than I
want it to be, but a functioning system is a prerequisite for any of
it to work).  after some time doing heavy I/O it went quiet, so the
writeback has presumably finished; everything seems to be in working
order, thanks.
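for anyone hitting the same thing, here is a sketch of how to confirm
which settings actually took effect and watch the writeback drain
(field names taken from lvm2's reporting fields per `lvs -o help`; the
vg/lv names are from my setup, adjust for yours):

```shell
# settings lvm will apply at activation vs. what the kernel target
# currently has loaded -- these can differ, which is what bit me
lvs -o lv_name,cache_policy,cache_settings,kernel_cache_settings raidbak4/bakvol4

# writeback progress: dirty block count should fall toward 0
lvs -o lv_name,cache_dirty_blocks,cache_total_blocks raidbak4/bakvol4
```

the same dirty-block counts also appear in `dmsetup status` output for
the cache device, if you prefer reading the raw target status line.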

so I have to experiment to find the highest migration_threshold value
that won't OOM-crash my system? I don't want any cache bandwidth
restriction: it should saturate, using all available bandwidth to
promote aggressively (in my frequent case the working set would
actually fit entirely in cache, but it's ok if the cache only learns
this slowly from usage).

seems like there should be a value that means "use all available
bandwidth" without taking down my system with OOM.  even if I tune the
value by experiment, some pathological circumstance might push load
beyond what I tested and crash the system again.  is there some safe
calculation I can use to determine the maximum?
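for scale: assuming migration_threshold is counted in 512-byte sectors
(as the dm-cache kernel documentation describes it), the values in
this thread translate to quite different amounts of migration I/O the
target is allowed to have in flight:

```shell
# convert a migration_threshold value (512-byte sectors) into MiB of
# migration I/O the cache target may issue at once
sectors_to_mib() { echo $(( $1 * 512 / 1024 / 1024 )); }

sectors_to_mib 10000000   # my original setting: ~4882 MiB
sectors_to_mib 204800     # the setting that worked: 100 MiB
sectors_to_mib 2048       # Joe's suggestion: 1 MiB
```

which at least makes it plausible that the original value let the
writeback queue up several GiB of I/O at activation time.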

Thread overview: 11+ messages
2020-03-22 17:57 Scott Mcdermott
2020-03-23  8:26 ` Joe Thornber
2020-03-23  9:57   ` Zdenek Kabelac
2020-03-23 16:26     ` John Stoffel
2020-03-23 22:02     ` Scott Mcdermott
2020-03-24  9:43       ` Zdenek Kabelac
2020-03-24 11:37         ` Gionatan Danti
2020-03-24 15:09           ` Zdenek Kabelac
2020-03-24 22:35             ` Gionatan Danti
2020-03-25  8:55               ` Zdenek Kabelac
2020-03-23 21:35   ` Scott Mcdermott [this message]
