From: Dave Chinner <david@fromorbit.com>
To: Brian Foster <bfoster@redhat.com>
Cc: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 15/28] mm: back off direct reclaim on excessive shrinker deferral
Date: Fri, 15 Nov 2019 08:28:46 +1100
Message-ID: <20191114212846.GF4614@dread.disaster.area>
In-Reply-To: <20191104195822.GF10665@bfoster>

On Mon, Nov 04, 2019 at 02:58:22PM -0500, Brian Foster wrote:
> On Fri, Nov 01, 2019 at 10:46:05AM +1100, Dave Chinner wrote:
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 967e3d3c7748..13c11e10c9c5 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -570,6 +570,8 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >  		deferred_count = min(deferred_count, freeable_objects * 2);
> >  
> >  	}
> > +	if (current->reclaim_state)
> > +		current->reclaim_state->scanned_objects += scanned_objects;
> 
> Looks like scanned_objects is always zero here.

Yeah, that was a rebase mis-merge. It should be after the scan loop.
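
Something along these lines, i.e. the accumulation lands at the
bottom of the scan loop once scanned_objects has actually been summed
up (sketch only; loop body abbreviated from do_shrink_slab(), names
as in the quoted hunks):

	while (scan_count >= batch_size || scan_count >= freeable_objects) {
		unsigned long nr_to_scan = min(batch_size, scan_count);
		unsigned long ret;

		shrinkctl->nr_to_scan = nr_to_scan;
		shrinkctl->nr_scanned = nr_to_scan;
		ret = shrinker->scan_objects(shrinker, shrinkctl);
		if (ret == SHRINK_STOP)
			break;
		freed += ret;

		scan_count -= shrinkctl->nr_scanned;
		scanned_objects += shrinkctl->nr_scanned;
		cond_resched();
	}

	/* moved here from before the loop, where it was always adding 0 */
	if (current->reclaim_state)
		current->reclaim_state->scanned_objects += scanned_objects;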

> >  	/*
> >  	 * Avoid risking looping forever due to too large nr value:
> > @@ -585,8 +587,11 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
> >  	 * If the shrinker can't run (e.g. due to gfp_mask constraints), then
> >  	 * defer the work to a context that can scan the cache.
> >  	 */
> > -	if (shrinkctl->defer_work)
> > +	if (shrinkctl->defer_work) {
> > +		if (current->reclaim_state)
> > +			current->reclaim_state->deferred_objects += scan_count;
> >  		goto done;
> > +	}
> >  
> >  	/*
> >  	 * Normally, we should not scan less than batch_size objects in one
> > @@ -2871,7 +2876,30 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> >  
> >  		if (reclaim_state) {
> >  			sc->nr_reclaimed += reclaim_state->reclaimed_pages;
> > +
> > +			/*
> > +			 * If we are deferring more work than we are actually
> > +			 * doing in the shrinkers, and we are scanning more
> > +			 * objects than we are pages, then we have a large amount
> > +			 * of slab caches we are deferring work to kswapd for.
> > +			 * We better back off here for a while, otherwise
> > +			 * we risk priority windup, swap storms and OOM kills
> > +			 * once we empty the page lists but still can't make
> > +			 * progress on the shrinker memory.
> > +			 *
> > +			 * kswapd won't ever defer work as it's run under a
> > +			 * GFP_KERNEL context and can always do work.
> > +			 */
> > +			if ((reclaim_state->deferred_objects >
> > +					sc->nr_scanned - nr_scanned) &&
> 
> Out of curiosity, what's the reasoning behind the direct comparison
> between ->deferred_objects and pages? Shouldn't we generally expect more
> slab objects to exist than pages by the nature of slab?

No, we can't make any assumptions about the amount of memory a
reclaimed object pins. e.g. the xfs buf shrinker frees objects that
might have many pages attached to them (e.g. 64k dir buffer, 16k
inode cluster), the GEM/TTM shrinkers track and free pages, the
ashmem shrinker tracks pages, etc.

What we try to do is balance the cost of reinstantiating objects in
memory against each other. Reading in a page generally takes two
IOs; instantiating a new inode generally requires two IOs (dir read,
inode read); and so on. That's what shrinker->seeks encodes, and it's
an attempt to balance the object counts of the different caches in a
predictable manner.
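
For reference, this is roughly how seeks feeds into the scan target
in do_shrink_slab() (simplified excerpt; names follow the quoted
patch, and with DEFAULT_SEEKS being 2 the result is about
2 * (freeable_objects >> priority)):

	/*
	 * Scale the per-pass scan target down by shrinker->seeks: caches
	 * whose objects are cheap to reinstantiate get scanned harder than
	 * those whose objects cost multiple IOs to rebuild.
	 */
	delta = freeable_objects >> priority;
	delta *= 4;
	do_div(delta, shrinker->seeks);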


> Also, the comment says "if we are scanning more objects than we are
> pages," yet the code is checking whether we defer more objects than
> scanned pages. Which is more accurate?

Both. :)

If reclaim_state->deferred_objects is larger than the page scan
count, then we either have a very small page cache or we are
deferring a lot of shrinker work.

If we have a small page cache and shrinker reclaim is not making
good progress (i.e. defer more than scan), then we want to back off
for a while rather than rapidly ramp up the reclaim priority, to give
the shrinker owner a chance to make progress. The current XFS inode
shrinker does this internally by blocking on IO, but we're getting
rid of that backoff, so we need some other way to throttle reclaim
when we have lots of deferral going on. This reduces the pressure on
the page reclaim code, and goes some way to preventing swap storms
(caused by winding up the reclaim priority on an LRU with no file
pages left on it) when we have pure slab cache memory pressure.
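
In code form the backoff amounts to something like this (condensed
from the hunk quoted above plus the "defer more than scan" condition
described here; the throttle call is purely illustrative, not a claim
about the exact primitive the patch uses):

	if (reclaim_state->deferred_objects > sc->nr_scanned - nr_scanned &&
	    reclaim_state->deferred_objects > reclaim_state->scanned_objects) {
		/*
		 * Deferring more shrinker work than we are completing and
		 * more than we are scanning pages: back off briefly rather
		 * than winding up sc->priority.
		 */
		congestion_wait(BLK_RW_ASYNC, HZ / 50);
	}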

-Dave.
-- 
Dave Chinner
david@fromorbit.com
