From: Johannes Weiner <hannes@cmpxchg.org>
To: Qian Cai <cai@lca.pw>
Cc: linux-mm@kvack.org, Rik van Riel <riel@surriel.com>,
Minchan Kim <minchan.kim@gmail.com>,
Michal Hocko <mhocko@suse.com>,
Andrew Morton <akpm@linux-foundation.org>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 09/14] mm: deactivations shouldn't bias the LRU balance
Date: Tue, 26 May 2020 11:55:49 -0400
Message-ID: <20200526155549.GB850116@cmpxchg.org>
In-Reply-To: <20200522133335.GA624@Qians-MacBook-Air.local>

On Fri, May 22, 2020 at 09:33:35AM -0400, Qian Cai wrote:
> On Wed, May 20, 2020 at 07:25:20PM -0400, Johannes Weiner wrote:
> > Operations like MADV_FREE, FADV_DONTNEED etc. currently move any
> > affected active pages to the inactive list to accelerate their reclaim
> > (good) but also steer page reclaim toward that LRU type, or away from
> > the other (bad).
> >
> > The reason why this is undesirable is that such operations are not
> > part of the regular page aging cycle, but rather a fluke that doesn't
> > say much about the remaining pages on that list; they might all be in
> > heavy use, and once the chunk of easy victims has been purged, the VM
> > continues to apply elevated pressure on those remaining hot pages. The
> > other LRU, meanwhile, might have easily reclaimable pages, and there
> > was never a need to steer away from it in the first place.
> >
> > As the previous patch outlined, we should focus on recording actually
> > observed cost to steer the balance rather than speculating about the
> > potential value of one LRU list over the other. In that spirit, leave
> > explicitly deactivated pages to the LRU algorithm to pick up, and let
> > rotations decide which list is the easiest to reclaim.
> >
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > Acked-by: Minchan Kim <minchan@kernel.org>
> > Acked-by: Michal Hocko <mhocko@suse.com>
> > ---
> > mm/swap.c | 4 ----
> > 1 file changed, 4 deletions(-)
> >
> > diff --git a/mm/swap.c b/mm/swap.c
> > index 5d62c5a0c651..d7912bfb597f 100644
> > --- a/mm/swap.c
> > +++ b/mm/swap.c
> > @@ -515,14 +515,12 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
> >
> > if (active)
> > __count_vm_event(PGDEACTIVATE);
> > - lru_note_cost(lruvec, !file, hpage_nr_pages(page));
> > }
> >
> []
>
> mm/swap.c: In function 'lru_deactivate_file_fn':
> mm/swap.c:504:11: warning: variable 'file' set but not used
> [-Wunused-but-set-variable]
> int lru, file;
> ^~~~

Oops, my gcc doesn't warn about that, but yes, it's clearly dead code.

$ make mm/swap.o
GEN Makefile
CALL /home/hannes/src/linux/linux/scripts/checksyscalls.sh
CALL /home/hannes/src/linux/linux/scripts/atomic/check-atomics.sh
DESCEND objtool
CC mm/swap.o
$
> This?
>
> diff --git a/mm/swap.c b/mm/swap.c
> index fedf5847dfdb..9c38c1b545af 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -501,7 +501,7 @@ void lru_cache_add_active_or_unevictable(struct page *page,
> static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
> void *arg)
> {
> - int lru, file;
> + int lru;
> bool active;
>
> if (!PageLRU(page))
> @@ -515,7 +515,6 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec,
> return;
>
> active = PageActive(page);
> - file = page_is_file_lru(page);
> lru = page_lru_base_type(page);
>
> del_page_from_lru_list(page, lruvec, lru + active);

Looks good, and it appears Andrew has already queued it. Would you
mind providing the Signed-off-by: for it?