From: Minchan Kim <>
	Josef Bacik <>
Subject: Re: [PATCH 2/2][v2] mm: make kswapd try harder to keep active pages in cache
Date: Thu, 24 Aug 2017 16:22:20 +0900	[thread overview]
Message-ID: <20170824072220.GB20463@bgram> (raw)
In-Reply-To: <>

On Tue, Aug 22, 2017 at 03:35:39PM -0400, Josef Bacik wrote:
> From: Josef Bacik <>
> While testing slab reclaim I noticed that if we were running a workload
> that used most of the system memory for its working set and we start
> putting a lot of reclaimable slab pressure on the system (think find /,
> or some other silliness), we will happily evict the active pages over
> the slab cache.  This is kind of backwards as we want to do all that we
> can to keep the active working set in memory, and instead evict these
> short lived objects.  The same thing occurs when say you do a yum
> update of a few packages while your working set takes up most of RAM,
> you end up with inactive lists being relatively small and so we reclaim
> active pages even though we could reclaim these short lived inactive
> pages.

The fundamental problem is that we cannot identify the working set and
short-lived objects in advance without enough aging, so such a workload
transition in a short time is really hard to catch up with.

An idea in my mind is to create a two-level list (active and inactive
lists) for slab objects, like LRU pages. Objects would start on the
inactive list and would not be promoted to the active list unless they
are touched again.
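To make the promotion rule concrete, here is a hypothetical userspace
sketch (not kernel code; the names obj_add, obj_touch and scan_inactive
are invented for illustration). New objects enter the inactive list, an
access only sets a referenced bit, and the scanner promotes referenced
objects while counting untouched ones as reclaimable:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical object for illustration; not a real kernel structure. */
struct obj {
	struct obj *next;
	bool referenced;	/* set on access, cleared by the scanner */
	bool on_active;
};

struct two_level {
	struct obj *inactive;	/* new objects start here */
	struct obj *active;	/* only promoted objects live here */
};

static void push(struct obj **list, struct obj *o)
{
	o->next = *list;
	*list = o;
}

/* New objects always enter the inactive list. */
static void obj_add(struct two_level *l, struct obj *o)
{
	o->referenced = false;
	o->on_active = false;
	push(&l->inactive, o);
}

/* An access only marks the object; promotion is deferred to the scanner. */
static void obj_touch(struct obj *o)
{
	o->referenced = true;
}

/*
 * Scan the inactive list: promote objects touched since the last scan,
 * count the rest as reclaimable (a real shrinker would free them here).
 * Returns the number of reclaimable objects.
 */
static int scan_inactive(struct two_level *l)
{
	struct obj *o = l->inactive, *next;
	int reclaimable = 0;

	l->inactive = NULL;
	for (; o; o = next) {
		next = o->next;
		if (o->referenced) {
			o->referenced = false;
			o->on_active = true;
			push(&l->active, o);
		} else {
			push(&l->inactive, o);	/* kept for the caller to free */
			reclaimable++;
		}
	}
	return reclaimable;
}
```

So a burst of short-lived objects never displaces objects that earned a
second touch, which is exactly the aging the find / workload lacks today.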

Once we see refaults of the page cache, that would be a good signal to
accelerate slab shrinking. Or, reclaim the shrinker's inactive list
first, before shrinking the page cache's active list.
The same approach has been used for the page cache's inactive list to
prevent anonymous page reclaim. See get_scan_count.
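As a rough sketch of that signal (purely hypothetical; the function name
and the boost factor are invented, and real code would derive the target
the way get_scan_count derives its ratios), rising page-cache refaults
between reclaim cycles could simply boost the slab scan target instead
of scanning the active file list harder:

```c
/*
 * Hypothetical feedback: if page-cache refaults rose since the last
 * reclaim cycle, the working set is being evicted, so shift pressure
 * onto slab.  The factor of 2 is an arbitrary example, not a tuned value.
 */
static unsigned long slab_scan_target(unsigned long base,
				      unsigned long refaults_this_cycle,
				      unsigned long refaults_last_cycle)
{
	if (refaults_this_cycle > refaults_last_cycle)
		return base * 2;
	return base;
}
```

The point is only that the boost is driven by an observed cost
(refaults), not by allocation counts.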

It's non-trivial, but worth trying if systems with heavy slab object
usage become popular, IMHO.

> My approach here is twofold.  First, keep track of the difference in
> inactive and slab pages since the last time kswapd ran.  In the first
> run this will just be the overall counts of inactive and slab, but for
> each subsequent run we'll have a good idea of where the memory pressure
> is coming from.  Then we use this information to put pressure on either
> the inactive lists or the slab caches, depending on where the pressure
> is coming from.

I don't like this idea.

The pressure should be fair if possible, and the victim decision should
come from aging. If we want to put more pressure on one side, it should
come from some feedback loop, and I don't think the diff of allocations
would be a good factor for that.


  reply	other threads:[~2017-08-24  7:22 UTC|newest]

Thread overview: 9+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2017-08-22 19:35 [PATCH 1/2] mm: use sc->priority for slab shrink targets josef
2017-08-22 19:35 ` [PATCH 2/2][v2] mm: make kswapd try harder to keep active pages in cache josef
2017-08-24  7:22   ` Minchan Kim [this message]
2017-08-24  7:08 ` [PATCH 1/2] mm: use sc->priority for slab shrink targets Minchan Kim
2017-08-24 14:29 ` Andrey Ryabinin
2017-08-24 14:49   ` Josef Bacik
2017-08-24 22:15     ` Dave Chinner
2017-08-24 22:45       ` Josef Bacik
2017-08-25  1:40         ` Dave Chinner
