linux-mm.kvack.org archive mirror
From: Shakeel Butt <shakeelb@google.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	 Suren Baghdasaryan <surenb@google.com>,
	Michal Hocko <mhocko@suse.com>, Linux MM <linux-mm@kvack.org>,
	 Cgroups <cgroups@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	 Kernel Team <kernel-team@fb.com>
Subject: Re: [PATCH 09/11] mm: vmscan: move file exhaustion detection to the node level
Date: Wed, 6 Nov 2019 18:52:23 -0800	[thread overview]
Message-ID: <CALvZod6rYcJ4jimyaTTKUvXVZ4a3-G-+wqnRkV875u5npUUeAQ@mail.gmail.com> (raw)
In-Reply-To: <20190603210746.15800-10-hannes@cmpxchg.org>

On Mon, Jun 3, 2019 at 3:08 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> When the file pages on a node drop below the watermark, we try to
> force-scan anonymous pages to counteract the balancing algorithm's
> preference for new file pages when those pages are likely thrashing.
> This is a node-level decision, but it's currently made each time we
> look at an lruvec. That is unnecessarily expensive and also a
> layering violation that makes the code harder to understand.
>
> Clean this up by making the check once per node and setting a flag in
> the scan_control.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>

Reviewed-by: Shakeel Butt <shakeelb@google.com>


> ---
>  mm/vmscan.c | 80 ++++++++++++++++++++++++++++-------------------------
>  1 file changed, 42 insertions(+), 38 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index eb535c572733..cabf94dfa92d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -104,6 +104,9 @@ struct scan_control {
>         /* One of the zones is ready for compaction */
>         unsigned int compaction_ready:1;
>
> +       /* The file pages on the current node are dangerously low */
> +       unsigned int file_is_tiny:1;
> +
>         /* Allocation order */
>         s8 order;
>
> @@ -2219,45 +2222,16 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>         }
>

Unrelated to this patch: I think we need to revisit all the heuristics
that have been added here over the years. get_scan_count() has become
really complicated and hard to follow.
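
For illustration, here is a minimal standalone sketch of how those
accumulated special cases stack up. The enum matches mm/vmscan.c, but
the boolean parameter names are placeholders I made up for readability,
and the ordering is simplified rather than authoritative:

#include <stdbool.h>

enum scan_balance { SCAN_EQUAL, SCAN_FRACT, SCAN_ANON, SCAN_FILE };

/* Condensed decision tree of get_scan_count(); each boolean
 * parameter stands in for a multi-line check in the real code. */
static enum scan_balance pick_balance(bool can_swap, bool memcg_no_swappiness,
				      bool near_oom, bool file_tiny,
				      bool plenty_of_inactive_file)
{
	if (!can_swap)
		return SCAN_FILE;	/* no swap space, or !sc->may_swap */
	if (memcg_no_swappiness)
		return SCAN_FILE;	/* cgroup set swappiness to 0 */
	if (near_oom)
		return SCAN_EQUAL;	/* sc->priority == 0: scan everything */
	if (file_tiny)
		return SCAN_ANON;	/* the sc->file_is_tiny flag from here */
	if (plenty_of_inactive_file)
		return SCAN_FILE;	/* plenty of cheap cache to reclaim */
	return SCAN_FRACT;		/* proportional split by swappiness */
}

Each branch short-circuits everything below it, which is part of why
reasoning about a change to any single heuristic has become so hard.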

>         /*
> -        * Prevent the reclaimer from falling into the cache trap: as
> -        * cache pages start out inactive, every cache fault will tip
> -        * the scan balance towards the file LRU.  And as the file LRU
> -        * shrinks, so does the window for rotation from references.
> -        * This means we have a runaway feedback loop where a tiny
> -        * thrashing file LRU becomes infinitely more attractive than
> -        * anon pages.  Try to detect this based on file LRU size.
> +        * If the system is almost out of file pages, force-scan anon.
> +        * But only if there are enough inactive anonymous pages on
> +        * the LRU. Otherwise, the small LRU gets thrashed.
>          */
> -       if (!cgroup_reclaim(sc)) {
> -               unsigned long pgdatfile;
> -               unsigned long pgdatfree;
> -               int z;
> -               unsigned long total_high_wmark = 0;
> -
> -               pgdatfree = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
> -               pgdatfile = node_page_state(pgdat, NR_ACTIVE_FILE) +
> -                          node_page_state(pgdat, NR_INACTIVE_FILE);
> -
> -               for (z = 0; z < MAX_NR_ZONES; z++) {
> -                       struct zone *zone = &pgdat->node_zones[z];
> -                       if (!managed_zone(zone))
> -                               continue;
> -
> -                       total_high_wmark += high_wmark_pages(zone);
> -               }
> -
> -               if (unlikely(pgdatfile + pgdatfree <= total_high_wmark)) {
> -                       /*
> -                        * Force SCAN_ANON if there are enough inactive
> -                        * anonymous pages on the LRU in eligible zones.
> -                        * Otherwise, the small LRU gets thrashed.
> -                        */
> -                       if (!inactive_list_is_low(lruvec, false, sc, false) &&
> -                           lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx)
> -                                       >> sc->priority) {
> -                               scan_balance = SCAN_ANON;
> -                               goto out;
> -                       }
> -               }
> +       if (sc->file_is_tiny &&
> +           !inactive_list_is_low(lruvec, false, sc, false) &&
> +           lruvec_lru_size(lruvec, LRU_INACTIVE_ANON,
> +                           sc->reclaim_idx) >> sc->priority) {
> +               scan_balance = SCAN_ANON;
> +               goto out;
>         }
>
>         /*
> @@ -2718,6 +2692,36 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>         nr_reclaimed = sc->nr_reclaimed;
>         nr_scanned = sc->nr_scanned;
>
> +       /*
> +        * Prevent the reclaimer from falling into the cache trap: as
> +        * cache pages start out inactive, every cache fault will tip
> +        * the scan balance towards the file LRU.  And as the file LRU
> +        * shrinks, so does the window for rotation from references.
> +        * This means we have a runaway feedback loop where a tiny
> +        * thrashing file LRU becomes infinitely more attractive than
> +        * anon pages.  Try to detect this based on file LRU size.
> +        */
> +       if (!cgroup_reclaim(sc)) {
> +               unsigned long file;
> +               unsigned long free;
> +               int z;
> +               unsigned long total_high_wmark = 0;
> +
> +               free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
> +               file = node_page_state(pgdat, NR_ACTIVE_FILE) +
> +                          node_page_state(pgdat, NR_INACTIVE_FILE);
> +
> +               for (z = 0; z < MAX_NR_ZONES; z++) {
> +                       struct zone *zone = &pgdat->node_zones[z];
> +                       if (!managed_zone(zone))
> +                               continue;
> +
> +                       total_high_wmark += high_wmark_pages(zone);
> +               }
> +
> +               sc->file_is_tiny = file + free <= total_high_wmark;
> +       }
> +
>         shrink_node_memcgs(pgdat, sc);
>
>         if (reclaim_state) {
> --
> 2.21.0
>
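
The node-level condition itself boils down to one comparison: the file
LRU counts as exhausted when reclaiming every remaining file page still
could not refill the zones' high watermarks. A minimal userspace sketch
of when sc->file_is_tiny fires (the variable names mirror the patch; the
sample numbers are made up):

#include <stdbool.h>
#include <stdio.h>

/* Mirrors "sc->file_is_tiny = file + free <= total_high_wmark". */
static bool file_is_tiny(unsigned long file, unsigned long free,
			 unsigned long total_high_wmark)
{
	return file + free <= total_high_wmark;
}

int main(void)
{
	/* 2000 file pages + 1000 free against a 4000-page watermark
	 * sum: file cache is effectively exhausted, force-scan anon. */
	printf("%d\n", file_is_tiny(2000, 1000, 4000));	/* prints 1 */

	/* With ample file cache, the normal balancing applies. */
	printf("%d\n", file_is_tiny(8000, 1000, 4000));	/* prints 0 */
	return 0;
}

Since every input to that comparison is per-node, computing it once in
shrink_node() rather than per-lruvec is the natural place for it.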


