From: Brian Foster <bfoster@redhat.com>
To: Dave Chinner <david@fromorbit.com>
Cc: linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 11/28] mm: factor shrinker work calculations
Date: Mon, 4 Nov 2019 10:29:39 -0500	[thread overview]
Message-ID: <20191104152939.GB10665@bfoster> (raw)
In-Reply-To: <20191031234618.15403-12-david@fromorbit.com>

On Fri, Nov 01, 2019 at 10:46:01AM +1100, Dave Chinner wrote:
> From: Dave Chinner <dchinner@redhat.com>
> 
> Start to clean up the shrinker code by factoring out the calculation
> that determines how much work to do. This separates the calculation
> from clamping and other adjustments that are done before the
> shrinker work is run. Document the scan batch size calculation
> better while we are there.
> 
> Also convert the calculation for the amount of work to be done to
> use 64 bit logic so we don't have to keep jumping through hoops to
> keep calculations within 32 bits on 32 bit systems.
> 
> Signed-off-by: Dave Chinner <dchinner@redhat.com>
> ---

I assume the kbuild warning thing will be fixed up...

>  mm/vmscan.c | 97 ++++++++++++++++++++++++++++++++++++++---------------
>  1 file changed, 70 insertions(+), 27 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a215d71d9d4b..2d39ec37c04d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -459,13 +459,68 @@ EXPORT_SYMBOL(unregister_shrinker);
>  
>  #define SHRINK_BATCH 128
>  
> +/*
> + * Calculate the number of new objects to scan this time around and
> + * return it as the work to be done. If there are freeable objects,
> + * also return their count in @freeable_objects.
> + */
> +static int64_t shrink_scan_count(struct shrink_control *shrinkctl,
> +			    struct shrinker *shrinker, int priority,
> +			    int64_t *freeable_objects)
> +{
> +	int64_t delta;
> +	int64_t freeable;
> +
> +	freeable = shrinker->count_objects(shrinker, shrinkctl);
> +	if (freeable == 0 || freeable == SHRINK_EMPTY)
> +		return freeable;
> +
> +	if (shrinker->seeks) {
> +		/*
> +		 * shrinker->seeks is a measure of how much IO is required to
> +		 * reinstantiate the object in memory. The default value is 2
> +		 * which is typical for a cold inode requiring a directory read
> +		 * and an inode read to reinstantiate.
> +		 *
> +		 * The scan batch size is defined by the shrinker priority, but
> +		 * to be able to bias the reclaim we increase the default batch
> +		 * size by 4. Hence we end up with a scan batch multiplier that
> +		 * scales like so:
> +		 *
> +		 * ->seeks	scan batch multiplier
> +		 *    1		      4.00x
> +		 *    2               2.00x
> +		 *    3               1.33x
> +		 *    4               1.00x
> +		 *    8               0.50x
> +		 *
> +		 * IOWs, the more seeks it takes to pull the item into cache,
> +		 * the smaller the reclaim scan batch. Hence we put more reclaim
> +		 * pressure on caches that are fast to repopulate, keeping a
> +		 * rough balance between caches with different repopulation costs.
> +		 */
> +		delta = freeable >> (priority - 2);

Does anything prevent priority < 2 here?
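
E.g., a quick sketch of the concern (untested, numbers are mine). With
the usual DEF_PRIORITY countdown, priority walks from 12 toward 0 under
increasing pressure, so:

	int64_t freeable = 1024;
	int priority = 1;			/* deep in reclaim */
	int64_t delta = freeable >> (priority - 2);	/* shift by -1: UB in C */

Something like "freeable >> (priority > 2 ? priority - 2 : 0)" would
avoid the negative shift count, though I haven't thought through what
multiplier we'd actually want at those low priorities.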

> +		do_div(delta, shrinker->seeks);
> +	} else {
> +		/*
> +		 * These objects don't require any IO to create. Trim them
> +		 * aggressively under memory pressure to keep them from causing
> +		 * refetches in the IO caches.
> +		 */
> +		delta = freeable / 2;
> +	}
> +
> +	*freeable_objects = freeable;
> +	return delta > 0 ? delta : 0;
> +}
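
(Sanity-checking the multiplier table above with made-up numbers: at
priority = 12, freeable = 4096 gives delta = (4096 >> 10) / seeks =
4 / seeks, i.e. the old (4096 >> 12) * 4 / seeks. So seeks=1 scans 4
objects, seeks=2 scans 2 and seeks=4 scans 1, matching the
4.00x/2.00x/1.00x rows. Just convincing myself the shift rework is
equivalent to the old "delta *= 4".)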
> +
>  static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  				    struct shrinker *shrinker, int priority)
>  {
>  	unsigned long freed = 0;
> -	unsigned long long delta;
>  	long total_scan;
> -	long freeable;
> +	int64_t freeable_objects = 0;
> +	int64_t scan_count;
>  	long nr;
>  	long new_nr;
>  	int nid = shrinkctl->nid;
...
> @@ -487,25 +543,11 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	 */
>  	nr = atomic_long_xchg(&shrinker->nr_deferred[nid], 0);
>  
> -	total_scan = nr;
> -	if (shrinker->seeks) {
> -		delta = freeable >> priority;
> -		delta *= 4;
> -		do_div(delta, shrinker->seeks);
> -	} else {
> -		/*
> -		 * These objects don't require any IO to create. Trim
> -		 * them aggressively under memory pressure to keep
> -		 * them from causing refetches in the IO caches.
> -		 */
> -		delta = freeable / 2;
> -	}
> -
> -	total_scan += delta;
> +	total_scan = nr + scan_count;
>  	if (total_scan < 0) {
>  		pr_err("shrink_slab: %pS negative objects to delete nr=%ld\n",
>  		       shrinker->scan_objects, total_scan);
> -		total_scan = freeable;
> +		total_scan = scan_count;

Same question as before: why the change in assignment? freeable was the
->count_objects() return value, which is now stored in freeable_objects.

FWIW, the change seems to make sense in that it just factors out the
deferred count, but it's not clear if it's intentional...
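
E.g., with made-up numbers to illustrate the behavioural difference:

	nr		 = (bogus, hugely negative deferred count)
	scan_count	 = 250		/* delta computed for this pass */
	freeable_objects = 1000
	total_scan	 = nr + 250	/* wraps negative, trips the pr_err */

	old:  total_scan = 1000		/* retry the full freeable count */
	new:  total_scan = 250		/* retry only this pass's delta */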

Brian

>  		next_deferred = nr;
>  	} else
>  		next_deferred = total_scan;
> @@ -522,19 +564,20 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	 * Hence only allow the shrinker to scan the entire cache when
>  	 * a large delta change is calculated directly.
>  	 */
> -	if (delta < freeable / 4)
> -		total_scan = min(total_scan, freeable / 2);
> +	if (scan_count < freeable_objects / 4)
> +		total_scan = min_t(long, total_scan, freeable_objects / 2);
>  
>  	/*
>  	 * Avoid risking looping forever due to too large nr value:
>  	 * never try to free more than twice the estimate number of
>  	 * freeable entries.
>  	 */
> -	if (total_scan > freeable * 2)
> -		total_scan = freeable * 2;
> +	if (total_scan > freeable_objects * 2)
> +		total_scan = freeable_objects * 2;
>  
>  	trace_mm_shrink_slab_start(shrinker, shrinkctl, nr,
> -				   freeable, delta, total_scan, priority);
> +				   freeable_objects, scan_count,
> +				   total_scan, priority);
>  
>  	/*
>  	 * If the shrinker can't run (e.g. due to gfp_mask constraints), then
> @@ -559,7 +602,7 @@ static unsigned long do_shrink_slab(struct shrink_control *shrinkctl,
>  	 * possible.
>  	 */
>  	while (total_scan >= batch_size ||
> -	       total_scan >= freeable) {
> +	       total_scan >= freeable_objects) {
>  		unsigned long ret;
>  		unsigned long nr_to_scan = min(batch_size, total_scan);
>  
> -- 
> 2.24.0.rc0
> 

