From: Dave Hansen <dave.hansen@linux.intel.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Dave Hansen <dave.hansen@linux.intel.com>, kbusch@kernel.org, vishal.l.verma@intel.com, yang.shi@linux.alibaba.com, rientjes@google.com, ying.huang@intel.com, dan.j.williams@intel.com
Subject: [RFC][PATCH 6/8] mm/vmscan: Consider anonymous pages without swap
Date: Mon, 29 Jun 2020 16:45:14 -0700
Message-ID: <20200629234514.CE5BA063@viggo.jf.intel.com>
In-Reply-To: <20200629234503.749E5340@viggo.jf.intel.com>

From: Keith Busch <keith.busch@intel.com>

Age and reclaim anonymous pages if a migration path is available. The
node has other recourses for inactive anonymous pages beyond swap.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Cc: Keith Busch <kbusch@kernel.org>
[vishal: fixup the migration->demotion rename]
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>

--

Changes from Dave 06/2020:
 * rename reclaim_anon_pages()->can_reclaim_anon_pages()

---

 b/include/linux/node.h |    9 +++++++++
 b/mm/vmscan.c          |   32 +++++++++++++++++++++++++++-----
 2 files changed, 36 insertions(+), 5 deletions(-)

diff -puN include/linux/node.h~0009-mm-vmscan-Consider-anonymous-pages-without-swap include/linux/node.h
--- a/include/linux/node.h~0009-mm-vmscan-Consider-anonymous-pages-without-swap	2020-06-29 16:34:42.861312594 -0700
+++ b/include/linux/node.h	2020-06-29 16:34:42.867312594 -0700
@@ -180,4 +180,13 @@ static inline void register_hugetlbfs_wi
 
 #define to_node(device) container_of(device, struct node, dev)
 
+#ifdef CONFIG_MIGRATION
+extern int next_demotion_node(int node);
+#else
+static inline int next_demotion_node(int node)
+{
+	return NUMA_NO_NODE;
+}
+#endif
+
 #endif /* _LINUX_NODE_H_ */
diff -puN mm/vmscan.c~0009-mm-vmscan-Consider-anonymous-pages-without-swap mm/vmscan.c
--- a/mm/vmscan.c~0009-mm-vmscan-Consider-anonymous-pages-without-swap	2020-06-29 16:34:42.863312594 -0700
+++ b/mm/vmscan.c	2020-06-29 16:34:42.868312594 -0700
@@ -288,6 +288,26 @@ static bool writeback_throttling_sane(st
 }
 #endif
 
+static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
+					  int node_id)
+{
+	/* Always age anon pages when we have swap */
+	if (memcg == NULL) {
+		if (get_nr_swap_pages() > 0)
+			return true;
+	} else {
+		if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
+			return true;
+	}
+
+	/* Also age anon pages if we can auto-migrate them */
+	if (next_demotion_node(node_id) >= 0)
+		return true;
+
+	/* No way to reclaim anon pages */
+	return false;
+}
+
 /*
  * This misses isolated pages which are not accounted for to save counters.
  * As the data only determines if reclaim or compaction continues, it is
@@ -299,7 +319,7 @@ unsigned long zone_reclaimable_pages(str
 	nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
 		zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
-	if (get_nr_swap_pages() > 0)
+	if (can_reclaim_anon_pages(NULL, zone_to_nid(zone)))
 		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
 			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
 
@@ -2267,7 +2287,7 @@ static void get_scan_count(struct lruvec
 	enum lru_list lru;
 
 	/* If we have no swap space, do not bother scanning anon pages. */
-	if (!sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0) {
+	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) {
 		scan_balance = SCAN_FILE;
 		goto out;
 	}
@@ -2572,7 +2592,9 @@ static void shrink_lruvec(struct lruvec
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (total_swap_pages && inactive_is_low(lruvec, LRU_INACTIVE_ANON))
+	if (can_reclaim_anon_pages(lruvec_memcg(lruvec),
+				   lruvec_pgdat(lruvec)->node_id) &&
+	    inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
 }
@@ -2642,7 +2664,7 @@ static inline bool should_continue_recla
 	 */
 	pages_for_compaction = compact_gap(sc->order);
 	inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
-	if (get_nr_swap_pages() > 0)
+	if (can_reclaim_anon_pages(NULL, pgdat->node_id))
 		inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);
 
 	return inactive_lru_pages > pages_for_compaction;
@@ -3395,7 +3417,7 @@ static void age_active_anon(struct pglis
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
-	if (!total_swap_pages)
+	if (!can_reclaim_anon_pages(NULL, pgdat->node_id))
 		return;
 
 	lruvec = mem_cgroup_lruvec(NULL, pgdat);
_