Linux-mm Archive on lore.kernel.org
From: Khadarnimcaan Khadarnimcaan <khadarnimcaan111@gmail.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: cgroups@vger.kernel.org, linux-mm@kvack.org,
	 Michal Hocko <mhocko@suse.com>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	 Rik van Riel <riel@surriel.com>,
	kernel-team@fb.com, linux-kernel@vger.kernel.org,
	 Andrew Morton <akpm@linux-foundation.org>,
	Suren Baghdasaryan <surenb@google.com>,
	 Shakeel Butt <shakeelb@google.com>
Subject: Re: [PATCH 1/3] mm: vmscan: move file exhaustion detection to the node level
Date: Mon, 11 Nov 2019 01:09:39 +0300
Message-ID: <CAP_gnDxx_=+fc1Zonj9kUhh9aXtW6FmnAJzmGkhBA3cZuyc+JA@mail.gmail.com> (raw)
In-Reply-To: <20191107205334.158354-2-hannes@cmpxchg.org>

On Nov 7, 2019 11:54 PM, "Johannes Weiner" <hannes@cmpxchg.org> wrote:

> When file pages are lower than the watermark on a node, we try to
> force scan anonymous pages to counteract the balancing algorithm's
> preference for new file pages when they are likely thrashing. This is
> a node-level decision, but it's currently made each time we look at an
> lruvec. This is unnecessarily expensive and also a layering violation
> that makes the code harder to understand.
>
> Clean this up by making the check once per node and setting a flag in
> the scan_control.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Reviewed-by: Shakeel Butt <shakeelb@google.com>
> ---
>  mm/vmscan.c | 80 ++++++++++++++++++++++++++++-------------------------
>  1 file changed, 42 insertions(+), 38 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index d97985262dda..e8dd601e1fad 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -101,6 +101,9 @@ struct scan_control {
>         /* One of the zones is ready for compaction */
>         unsigned int compaction_ready:1;
>
> +       /* The file pages on the current node are dangerously low */
> +       unsigned int file_is_tiny:1;
> +
>         /* Allocation order */
>         s8 order;
>
> @@ -2289,45 +2292,16 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
>         }
>
>         /*
> -        * Prevent the reclaimer from falling into the cache trap: as
> -        * cache pages start out inactive, every cache fault will tip
> -        * the scan balance towards the file LRU.  And as the file LRU
> -        * shrinks, so does the window for rotation from references.
> -        * This means we have a runaway feedback loop where a tiny
> -        * thrashing file LRU becomes infinitely more attractive than
> -        * anon pages.  Try to detect this based on file LRU size.
> +        * If the system is almost out of file pages, force-scan anon.
> +        * But only if there are enough inactive anonymous pages on
> +        * the LRU. Otherwise, the small LRU gets thrashed.
>          */
> -       if (!cgroup_reclaim(sc)) {
> -               unsigned long pgdatfile;
> -               unsigned long pgdatfree;
> -               int z;
> -               unsigned long total_high_wmark = 0;
> -
> -               pgdatfree = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
> -               pgdatfile = node_page_state(pgdat, NR_ACTIVE_FILE) +
> -                          node_page_state(pgdat, NR_INACTIVE_FILE);
> -
> -               for (z = 0; z < MAX_NR_ZONES; z++) {
> -                       struct zone *zone = &pgdat->node_zones[z];
> -                       if (!managed_zone(zone))
> -                               continue;
> -
> -                       total_high_wmark += high_wmark_pages(zone);
> -               }
> -
> -               if (unlikely(pgdatfile + pgdatfree <= total_high_wmark)) {
> -                       /*
> -                        * Force SCAN_ANON if there are enough inactive
> -                        * anonymous pages on the LRU in eligible zones.
> -                        * Otherwise, the small LRU gets thrashed.
> -                        */
> -                       if (!inactive_list_is_low(lruvec, false, sc, false) &&
> -                           lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx)
> -                                       >> sc->priority) {
> -                               scan_balance = SCAN_ANON;
> -                               goto out;
> -                       }
> -               }
> +       if (sc->file_is_tiny &&
> +           !inactive_list_is_low(lruvec, false, sc, false) &&
> +           lruvec_lru_size(lruvec, LRU_INACTIVE_ANON,
> +                           sc->reclaim_idx) >> sc->priority) {
> +               scan_balance = SCAN_ANON;
> +               goto out;
>         }
>
>         /*
> @@ -2754,6 +2728,36 @@ static bool shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>         nr_reclaimed = sc->nr_reclaimed;
>         nr_scanned = sc->nr_scanned;
>
> +       /*
> +        * Prevent the reclaimer from falling into the cache trap: as
> +        * cache pages start out inactive, every cache fault will tip
> +        * the scan balance towards the file LRU.  And as the file LRU
> +        * shrinks, so does the window for rotation from references.
> +        * This means we have a runaway feedback loop where a tiny
> +        * thrashing file LRU becomes infinitely more attractive than
> +        * anon pages.  Try to detect this based on file LRU size.
> +        */
> +       if (!cgroup_reclaim(sc)) {
> +               unsigned long file;
> +               unsigned long free;
> +               int z;
> +               unsigned long total_high_wmark = 0;
> +
> +               free = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
> +               file = node_page_state(pgdat, NR_ACTIVE_FILE) +
> +                          node_page_state(pgdat, NR_INACTIVE_FILE);
> +
> +               for (z = 0; z < MAX_NR_ZONES; z++) {
> +                       struct zone *zone = &pgdat->node_zones[z];
> +                       if (!managed_zone(zone))
> +                               continue;
> +
> +                       total_high_wmark += high_wmark_pages(zone);
> +               }
> +
> +               sc->file_is_tiny = file + free <= total_high_wmark;
> +       }
> +
>         shrink_node_memcgs(pgdat, sc);
>
>         if (reclaim_state) {
> --
> 2.24.0
>
>
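The once-per-node check reads much simpler than the per-lruvec version. For my own
understanding, here is a small userspace sketch of the arithmetic that shrink_node()
now does once per node to set sc->file_is_tiny. The struct names and all page counts
are invented stand-ins for illustration only, not the kernel types or real values:

/* Standalone sketch of the node-level "file_is_tiny" arithmetic.
 * Structs and numbers are made up; they only mimic the patch's logic.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_NR_ZONES 3

struct fake_zone {
	unsigned long managed_pages;	/* 0 plays the role of !managed_zone() */
	unsigned long high_wmark;	/* high_wmark_pages(zone) */
};

struct fake_node {
	struct fake_zone zones[MAX_NR_ZONES];
	unsigned long nr_free;		/* NR_FREE_PAGES */
	unsigned long nr_file;		/* NR_ACTIVE_FILE + NR_INACTIVE_FILE */
};

/* Mirrors what shrink_node() now computes once per node. */
static bool node_file_is_tiny(const struct fake_node *node)
{
	unsigned long total_high_wmark = 0;
	int z;

	for (z = 0; z < MAX_NR_ZONES; z++) {
		if (!node->zones[z].managed_pages)
			continue;
		total_high_wmark += node->zones[z].high_wmark;
	}

	/* file + free <= sum of high watermarks: cache is dangerously low */
	return node->nr_file + node->nr_free <= total_high_wmark;
}

int main(void)
{
	/* Hypothetical node: two managed zones, very little page cache left. */
	struct fake_node node = {
		.zones	 = { { 262144, 4096 }, { 1048576, 16384 }, { 0, 0 } },
		.nr_free = 9000,
		.nr_file = 8000,
	};

	printf("file_is_tiny = %d\n", node_file_is_tiny(&node));
	return 0;
}

With these made-up numbers (8000 file + 9000 free pages against a 20480-page
watermark sum) the flag comes out true; get_scan_count() then still only switches
to SCAN_ANON when the inactive anon list is not low and the shifted
lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, sc->reclaim_idx) >> sc->priority is
non-zero, exactly as in the hunk above.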


Thread overview: 15+ messages
2019-11-07 20:53 [PATCH 0/3] mm: fix page aging across multiple cgroups Johannes Weiner
2019-11-07 20:53 ` [PATCH 1/3] mm: vmscan: move file exhaustion detection to the node level Johannes Weiner
2019-11-10 22:02   ` Suren Baghdasaryan
2019-11-10 22:09   ` Khadarnimcaan Khadarnimcaan [this message]
2019-11-07 20:53 ` [PATCH 2/3] mm: vmscan: detect file thrashing at the reclaim root Johannes Weiner
2019-11-11  2:01   ` Suren Baghdasaryan
2019-11-12 17:45     ` Johannes Weiner
2019-11-12 18:45       ` Suren Baghdasaryan
2019-11-12 18:59         ` Johannes Weiner
2019-11-12 20:35           ` Suren Baghdasaryan
2019-11-07 20:53 ` [PATCH 3/3] mm: vmscan: enforce inactive:active ratio " Johannes Weiner
2019-11-11  2:15   ` Suren Baghdasaryan
2019-11-12 18:00     ` Johannes Weiner
2019-11-12 19:13       ` Suren Baghdasaryan
2019-11-12 20:34         ` Suren Baghdasaryan

