From: Josef Bacik <firstname.lastname@example.org>
To: Andrew Morton <email@example.com>
Cc: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org,
email@example.com, firstname.lastname@example.org, email@example.com,
firstname.lastname@example.org, Josef Bacik <email@example.com>
Subject: Re: [PATCH 1/2] mm: use sc->priority for slab shrink targets
Date: Fri, 28 Jul 2017 23:52:50 +0000 [thread overview]
Message-ID: <20170728235248.GA27897@li70-116.members.linode.com>
On Thu, Jul 27, 2017 at 04:53:48PM -0700, Andrew Morton wrote:
> On Thu, 20 Jul 2017 14:45:30 -0400 firstname.lastname@example.org wrote:
> > From: Josef Bacik <email@example.com>
> > Previously we were using the ratio of the number of lru pages scanned to
> > the number of eligible lru pages to determine the number of slab objects
> > to scan. The problem with this is that these two things have nothing to
> > do with each other,
> > so in slab-heavy workloads where there is little to
> > no page cache, the number of pages scanned can end up being very
> > low.
> In this case the "number of eligible lru pages" will also be low, so
> these things do have something to do with each other?
The problem is that "scanned" isn't the scan target we calculate, but rather the
number of pages we were actually able to scan. With almost no page cache we end
up with a really low scanned count against a relatively high eligible LRU count,
which makes the ratio tiny. Anecdotally we would have 10 million inodes in
cache, but the ratios were such that our scan target was around 8k.
> > This means that we reclaim next to no slab pages and waste a
> > lot of time reclaiming small amounts of space.
> > Instead use sc->priority in the same way we use it to determine scan
> > amounts for the lru's.
> That sounds like a good idea.
> Alternatively did you consider hooking into the vmpressure code (or
> hannes's new memdelay code) to determine how hard to scan slab?
Vmpressure requires memcg to be enabled. As for memdelay, that might be a good
direction in the future, but right now it's per-task only. We could probably
use it for direct reclaim, but I really want this to make kswapd better so we
avoid direct reclaim altogether. If it were expanded to be system-wide, so we
had an idea of the effect of memory reclaim on the whole system, it would tie
in nicely here. But for now I think staying consistent with everything else is
good enough. Thanks,
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to firstname.lastname@example.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Thread overview: 13+ messages
2017-07-20 18:45 [PATCH 0/2][V3] slab and general reclaim improvements josef
2017-07-20 18:45 ` [PATCH 1/2] mm: use sc->priority for slab shrink targets josef
2017-07-27 23:53 ` Andrew Morton
2017-07-28 23:52 ` Josef Bacik [this message]
2017-07-20 18:45 ` [PATCH 2/2] mm: make kswapd try harder to keep active pages in cache josef
2017-07-27 23:55 ` [PATCH 0/2][V3] slab and general reclaim improvements Andrew Morton
2017-08-22 19:35 [PATCH 1/2] mm: use sc->priority for slab shrink targets josef
2017-08-24 7:08 ` Minchan Kim
2017-08-24 14:29 ` Andrey Ryabinin
2017-08-24 14:49 ` Josef Bacik
2017-08-24 22:15 ` Dave Chinner
2017-08-24 22:45 ` Josef Bacik
2017-08-25 1:40 ` Dave Chinner