* [PATCH v1 0/2] mm: vm_total_pages and build_all_zonelists() cleanup @ 2020-06-19 13:24 David Hildenbrand 2020-06-19 13:24 ` [PATCH v1 1/2] mm: drop vm_total_pages David Hildenbrand 2020-06-19 13:24 ` [PATCH v1 2/2] mm/page_alloc: drop nr_free_pagecache_pages() David Hildenbrand 0 siblings, 2 replies; 12+ messages in thread From: David Hildenbrand @ 2020-06-19 13:24 UTC (permalink / raw) To: linux-kernel Cc: linux-mm, David Hildenbrand, Andrew Morton, Huang Ying, Johannes Weiner, Michal Hocko, Minchan Kim, Wei Yang Let's drop vm_total_pages and inline nr_free_pagecache_pages() into build_all_zonelists(). David Hildenbrand (2): mm: drop vm_total_pages mm/page_alloc: drop nr_free_pagecache_pages() include/linux/swap.h | 2 -- mm/memory_hotplug.c | 3 --- mm/page-writeback.c | 6 ++---- mm/page_alloc.c | 18 ++++-------------- mm/vmscan.c | 5 ----- 5 files changed, 6 insertions(+), 28 deletions(-) -- 2.26.2 ^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v1 1/2] mm: drop vm_total_pages 2020-06-19 13:24 [PATCH v1 0/2] mm: vm_total_pages and build_all_zonelists() cleanup David Hildenbrand @ 2020-06-19 13:24 ` David Hildenbrand 2020-06-19 13:47 ` Wei Yang ` (3 more replies) 2020-06-19 13:24 ` [PATCH v1 2/2] mm/page_alloc: drop nr_free_pagecache_pages() David Hildenbrand 1 sibling, 4 replies; 12+ messages in thread From: David Hildenbrand @ 2020-06-19 13:24 UTC (permalink / raw) To: linux-kernel Cc: linux-mm, David Hildenbrand, Andrew Morton, Johannes Weiner, Michal Hocko, Huang Ying, Minchan Kim, Wei Yang The global variable "vm_total_pages" is a relict from older days. There is only a single user that reads the variable - build_all_zonelists() - and the first thing it does is updating it. Use a local variable in build_all_zonelists() instead and drop the local variable. Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Huang Ying <ying.huang@intel.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Wei Yang <richard.weiyang@gmail.com> Signed-off-by: David Hildenbrand <david@redhat.com> --- include/linux/swap.h | 1 - mm/memory_hotplug.c | 3 --- mm/page-writeback.c | 6 ++---- mm/page_alloc.c | 2 ++ mm/vmscan.c | 5 ----- 5 files changed, 4 insertions(+), 13 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 4c5974bb9ba94..124261acd5d0a 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -371,7 +371,6 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem, extern unsigned long shrink_all_memory(unsigned long nr_pages); extern int vm_swappiness; extern int remove_mapping(struct address_space *mapping, struct page *page); -extern unsigned long vm_total_pages; extern unsigned long reclaim_pages(struct list_head *page_list); #ifdef CONFIG_NUMA diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c index 9b34e03e730a4..d682781cce48d 100644 --- a/mm/memory_hotplug.c +++ b/mm/memory_hotplug.c @@ 
-835,8 +835,6 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, kswapd_run(nid); kcompactd_run(nid); - vm_total_pages = nr_free_pagecache_pages(); - writeback_set_ratelimit(); memory_notify(MEM_ONLINE, &arg); @@ -1586,7 +1584,6 @@ static int __ref __offline_pages(unsigned long start_pfn, kcompactd_stop(node); } - vm_total_pages = nr_free_pagecache_pages(); writeback_set_ratelimit(); memory_notify(MEM_OFFLINE, &arg); diff --git a/mm/page-writeback.c b/mm/page-writeback.c index 28b3e7a675657..4e4ddd67b71e5 100644 --- a/mm/page-writeback.c +++ b/mm/page-writeback.c @@ -2076,13 +2076,11 @@ static int page_writeback_cpu_online(unsigned int cpu) * Called early on to tune the page writeback dirty limits. * * We used to scale dirty pages according to how total memory - * related to pages that could be allocated for buffers (by - * comparing nr_free_buffer_pages() to vm_total_pages. + * related to pages that could be allocated for buffers. * * However, that was when we used "dirty_ratio" to scale with * all memory, and we don't do that any more. "dirty_ratio" - * is now applied to total non-HIGHPAGE memory (by subtracting - * totalhigh_pages from vm_total_pages), and as such we can't + * is now applied to total non-HIGHPAGE memory, and as such we can't * get into the old insane situation any more where we had * large amounts of dirty pages compared to a small amount of * non-HIGHMEM memory. diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 0c435b2ed665c..7b0dde69748c1 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5903,6 +5903,8 @@ build_all_zonelists_init(void) */ void __ref build_all_zonelists(pg_data_t *pgdat) { + unsigned long vm_total_pages; + if (system_state == SYSTEM_BOOTING) { build_all_zonelists_init(); } else { diff --git a/mm/vmscan.c b/mm/vmscan.c index b6d84326bdf2d..0010859747df2 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -170,11 +170,6 @@ struct scan_control { * From 0 .. 200. Higher means more swappy. 
*/ int vm_swappiness = 60; -/* - * The total number of pages which are beyond the high watermark within all - * zones. - */ -unsigned long vm_total_pages; static void set_task_reclaim_state(struct task_struct *task, struct reclaim_state *rs) -- 2.26.2 ^ permalink raw reply related [flat|nested] 12+ messages in thread
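The pattern patch 1 applies can be sketched in plain userspace C: a global variable whose only reader is also the function that writes it on every call carries no real cross-file state, so it can be demoted to a local. All names below are illustrative stand-ins, not the kernel's actual symbols.

```c
#include <assert.h>

/* Stand-in for nr_free_pagecache_pages(); returns a fixed value here. */
static unsigned long fake_nr_free_pages(void)
{
	return 1024;
}

/* Before: the value leaks into a global that nothing else ever reads. */
static unsigned long total_pages_global;

static unsigned long rebuild_with_global(void)
{
	total_pages_global = fake_nr_free_pages();
	return total_pages_global;
}

/* After: a local variable keeps the value exactly where it is used,
 * and the extern declaration and global definition can be dropped. */
static unsigned long rebuild_with_local(void)
{
	unsigned long total_pages = fake_nr_free_pages();

	return total_pages;
}
```

Since the writer recomputes the value unconditionally before using it, the two versions are behaviorally identical; the local-variable form simply shrinks the symbol's scope.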
* Re: [PATCH v1 1/2] mm: drop vm_total_pages 2020-06-19 13:24 ` [PATCH v1 1/2] mm: drop vm_total_pages David Hildenbrand @ 2020-06-19 13:47 ` Wei Yang 2020-06-21 14:46 ` Mike Rapoport ` (2 subsequent siblings) 3 siblings, 0 replies; 12+ messages in thread From: Wei Yang @ 2020-06-19 13:47 UTC (permalink / raw) To: David Hildenbrand Cc: linux-kernel, linux-mm, Andrew Morton, Johannes Weiner, Michal Hocko, Huang Ying, Minchan Kim, Wei Yang On Fri, Jun 19, 2020 at 03:24:09PM +0200, David Hildenbrand wrote: >The global variable "vm_total_pages" is a relict from older days. There >is only a single user that reads the variable - build_all_zonelists() - >and the first thing it does is updating it. Use a local variable in >build_all_zonelists() instead and drop the local variable. > >Cc: Andrew Morton <akpm@linux-foundation.org> >Cc: Johannes Weiner <hannes@cmpxchg.org> >Cc: Michal Hocko <mhocko@suse.com> >Cc: Huang Ying <ying.huang@intel.com> >Cc: Minchan Kim <minchan@kernel.org> >Cc: Wei Yang <richard.weiyang@gmail.com> >Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Wei Yang <richard.weiyang@gmail.com> >--- > include/linux/swap.h | 1 - > mm/memory_hotplug.c | 3 --- > mm/page-writeback.c | 6 ++---- > mm/page_alloc.c | 2 ++ > mm/vmscan.c | 5 ----- > 5 files changed, 4 insertions(+), 13 deletions(-) > >diff --git a/include/linux/swap.h b/include/linux/swap.h >index 4c5974bb9ba94..124261acd5d0a 100644 >--- a/include/linux/swap.h >+++ b/include/linux/swap.h >@@ -371,7 +371,6 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem, > extern unsigned long shrink_all_memory(unsigned long nr_pages); > extern int vm_swappiness; > extern int remove_mapping(struct address_space *mapping, struct page *page); >-extern unsigned long vm_total_pages; > > extern unsigned long reclaim_pages(struct list_head *page_list); > #ifdef CONFIG_NUMA >diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c >index 9b34e03e730a4..d682781cce48d 100644 >--- 
a/mm/memory_hotplug.c >+++ b/mm/memory_hotplug.c >@@ -835,8 +835,6 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, > kswapd_run(nid); > kcompactd_run(nid); > >- vm_total_pages = nr_free_pagecache_pages(); >- > writeback_set_ratelimit(); > > memory_notify(MEM_ONLINE, &arg); >@@ -1586,7 +1584,6 @@ static int __ref __offline_pages(unsigned long start_pfn, > kcompactd_stop(node); > } > >- vm_total_pages = nr_free_pagecache_pages(); > writeback_set_ratelimit(); > > memory_notify(MEM_OFFLINE, &arg); >diff --git a/mm/page-writeback.c b/mm/page-writeback.c >index 28b3e7a675657..4e4ddd67b71e5 100644 >--- a/mm/page-writeback.c >+++ b/mm/page-writeback.c >@@ -2076,13 +2076,11 @@ static int page_writeback_cpu_online(unsigned int cpu) > * Called early on to tune the page writeback dirty limits. > * > * We used to scale dirty pages according to how total memory >- * related to pages that could be allocated for buffers (by >- * comparing nr_free_buffer_pages() to vm_total_pages. >+ * related to pages that could be allocated for buffers. > * > * However, that was when we used "dirty_ratio" to scale with > * all memory, and we don't do that any more. "dirty_ratio" >- * is now applied to total non-HIGHPAGE memory (by subtracting >- * totalhigh_pages from vm_total_pages), and as such we can't >+ * is now applied to total non-HIGHPAGE memory, and as such we can't > * get into the old insane situation any more where we had > * large amounts of dirty pages compared to a small amount of > * non-HIGHMEM memory. 
>diff --git a/mm/page_alloc.c b/mm/page_alloc.c >index 0c435b2ed665c..7b0dde69748c1 100644 >--- a/mm/page_alloc.c >+++ b/mm/page_alloc.c >@@ -5903,6 +5903,8 @@ build_all_zonelists_init(void) > */ > void __ref build_all_zonelists(pg_data_t *pgdat) > { >+ unsigned long vm_total_pages; >+ > if (system_state == SYSTEM_BOOTING) { > build_all_zonelists_init(); > } else { >diff --git a/mm/vmscan.c b/mm/vmscan.c >index b6d84326bdf2d..0010859747df2 100644 >--- a/mm/vmscan.c >+++ b/mm/vmscan.c >@@ -170,11 +170,6 @@ struct scan_control { > * From 0 .. 200. Higher means more swappy. > */ > int vm_swappiness = 60; >-/* >- * The total number of pages which are beyond the high watermark within all >- * zones. >- */ >-unsigned long vm_total_pages; > > static void set_task_reclaim_state(struct task_struct *task, > struct reclaim_state *rs) >-- >2.26.2 -- Wei Yang Help you, Help me ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v1 1/2] mm: drop vm_total_pages 2020-06-19 13:24 ` [PATCH v1 1/2] mm: drop vm_total_pages David Hildenbrand 2020-06-19 13:47 ` Wei Yang @ 2020-06-21 14:46 ` Mike Rapoport 2020-06-22 7:03 ` David Hildenbrand 2020-06-21 19:56 ` Pankaj Gupta 2020-06-23 12:59 ` Michal Hocko 3 siblings, 1 reply; 12+ messages in thread From: Mike Rapoport @ 2020-06-21 14:46 UTC (permalink / raw) To: David Hildenbrand Cc: linux-kernel, linux-mm, Andrew Morton, Johannes Weiner, Michal Hocko, Huang Ying, Minchan Kim, Wei Yang On Fri, Jun 19, 2020 at 03:24:09PM +0200, David Hildenbrand wrote: > The global variable "vm_total_pages" is a relict from older days. There > is only a single user that reads the variable - build_all_zonelists() - > and the first thing it does is updating it. Use a local variable in > build_all_zonelists() instead and drop the local variable. Nit: ^ global > Cc: Andrew Morton <akpm@linux-foundation.org> > Cc: Johannes Weiner <hannes@cmpxchg.org> > Cc: Michal Hocko <mhocko@suse.com> > Cc: Huang Ying <ying.huang@intel.com> > Cc: Minchan Kim <minchan@kernel.org> > Cc: Wei Yang <richard.weiyang@gmail.com> > Signed-off-by: David Hildenbrand <david@redhat.com> Except the nit above Reviewed-by: Mike Rapoport <rppt@linux.ibm.com> > --- > include/linux/swap.h | 1 - > mm/memory_hotplug.c | 3 --- > mm/page-writeback.c | 6 ++---- > mm/page_alloc.c | 2 ++ > mm/vmscan.c | 5 ----- > 5 files changed, 4 insertions(+), 13 deletions(-) > > diff --git a/include/linux/swap.h b/include/linux/swap.h > index 4c5974bb9ba94..124261acd5d0a 100644 > --- a/include/linux/swap.h > +++ b/include/linux/swap.h > @@ -371,7 +371,6 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem, > extern unsigned long shrink_all_memory(unsigned long nr_pages); > extern int vm_swappiness; > extern int remove_mapping(struct address_space *mapping, struct page *page); > -extern unsigned long vm_total_pages; > > extern unsigned long reclaim_pages(struct list_head *page_list); > #ifdef 
CONFIG_NUMA > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c > index 9b34e03e730a4..d682781cce48d 100644 > --- a/mm/memory_hotplug.c > +++ b/mm/memory_hotplug.c > @@ -835,8 +835,6 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, > kswapd_run(nid); > kcompactd_run(nid); > > - vm_total_pages = nr_free_pagecache_pages(); > - > writeback_set_ratelimit(); > > memory_notify(MEM_ONLINE, &arg); > @@ -1586,7 +1584,6 @@ static int __ref __offline_pages(unsigned long start_pfn, > kcompactd_stop(node); > } > > - vm_total_pages = nr_free_pagecache_pages(); > writeback_set_ratelimit(); > > memory_notify(MEM_OFFLINE, &arg); > diff --git a/mm/page-writeback.c b/mm/page-writeback.c > index 28b3e7a675657..4e4ddd67b71e5 100644 > --- a/mm/page-writeback.c > +++ b/mm/page-writeback.c > @@ -2076,13 +2076,11 @@ static int page_writeback_cpu_online(unsigned int cpu) > * Called early on to tune the page writeback dirty limits. > * > * We used to scale dirty pages according to how total memory > - * related to pages that could be allocated for buffers (by > - * comparing nr_free_buffer_pages() to vm_total_pages. > + * related to pages that could be allocated for buffers. > * > * However, that was when we used "dirty_ratio" to scale with > * all memory, and we don't do that any more. "dirty_ratio" > - * is now applied to total non-HIGHPAGE memory (by subtracting > - * totalhigh_pages from vm_total_pages), and as such we can't > + * is now applied to total non-HIGHPAGE memory, and as such we can't > * get into the old insane situation any more where we had > * large amounts of dirty pages compared to a small amount of > * non-HIGHMEM memory. 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c > index 0c435b2ed665c..7b0dde69748c1 100644 > --- a/mm/page_alloc.c > +++ b/mm/page_alloc.c > @@ -5903,6 +5903,8 @@ build_all_zonelists_init(void) > */ > void __ref build_all_zonelists(pg_data_t *pgdat) > { > + unsigned long vm_total_pages; > + > if (system_state == SYSTEM_BOOTING) { > build_all_zonelists_init(); > } else { > diff --git a/mm/vmscan.c b/mm/vmscan.c > index b6d84326bdf2d..0010859747df2 100644 > --- a/mm/vmscan.c > +++ b/mm/vmscan.c > @@ -170,11 +170,6 @@ struct scan_control { > * From 0 .. 200. Higher means more swappy. > */ > int vm_swappiness = 60; > -/* > - * The total number of pages which are beyond the high watermark within all > - * zones. > - */ > -unsigned long vm_total_pages; > > static void set_task_reclaim_state(struct task_struct *task, > struct reclaim_state *rs) > -- > 2.26.2 > > -- Sincerely yours, Mike. ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v1 1/2] mm: drop vm_total_pages 2020-06-21 14:46 ` Mike Rapoport @ 2020-06-22 7:03 ` David Hildenbrand 0 siblings, 0 replies; 12+ messages in thread From: David Hildenbrand @ 2020-06-22 7:03 UTC (permalink / raw) To: Mike Rapoport, Andrew Morton Cc: linux-kernel, linux-mm, Johannes Weiner, Michal Hocko, Huang Ying, Minchan Kim, Wei Yang On 21.06.20 16:46, Mike Rapoport wrote: > On Fri, Jun 19, 2020 at 03:24:09PM +0200, David Hildenbrand wrote: >> The global variable "vm_total_pages" is a relict from older days. There >> is only a single user that reads the variable - build_all_zonelists() - >> and the first thing it does is updating it. Use a local variable in >> build_all_zonelists() instead and drop the local variable. > > Nit: ^ global Indeed, @Andrew, can you fix this up? Thanks! -- Thanks, David / dhildenb ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v1 1/2] mm: drop vm_total_pages 2020-06-19 13:24 ` [PATCH v1 1/2] mm: drop vm_total_pages David Hildenbrand 2020-06-19 13:47 ` Wei Yang 2020-06-21 14:46 ` Mike Rapoport @ 2020-06-21 19:56 ` Pankaj Gupta 2020-06-23 12:59 ` Michal Hocko 3 siblings, 0 replies; 12+ messages in thread From: Pankaj Gupta @ 2020-06-21 19:56 UTC (permalink / raw) To: David Hildenbrand Cc: LKML, Linux MM, Andrew Morton, Johannes Weiner, Michal Hocko, Huang Ying, Minchan Kim, Wei Yang > The global variable "vm_total_pages" is a relict from older days. There > is only a single user that reads the variable - build_all_zonelists() - > and the first thing it does is updating it. Use a local variable in > build_all_zonelists() instead and drop the local variable. > > Cc: Andrew Morton <akpm@linux-foundation.org> > Cc: Johannes Weiner <hannes@cmpxchg.org> > Cc: Michal Hocko <mhocko@suse.com> > Cc: Huang Ying <ying.huang@intel.com> > Cc: Minchan Kim <minchan@kernel.org> > Cc: Wei Yang <richard.weiyang@gmail.com> > Signed-off-by: David Hildenbrand <david@redhat.com> > --- > include/linux/swap.h | 1 - > mm/memory_hotplug.c | 3 --- > mm/page-writeback.c | 6 ++---- > mm/page_alloc.c | 2 ++ > mm/vmscan.c | 5 ----- > 5 files changed, 4 insertions(+), 13 deletions(-) > > diff --git a/include/linux/swap.h b/include/linux/swap.h > index 4c5974bb9ba94..124261acd5d0a 100644 > --- a/include/linux/swap.h > +++ b/include/linux/swap.h > @@ -371,7 +371,6 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem, > extern unsigned long shrink_all_memory(unsigned long nr_pages); > extern int vm_swappiness; > extern int remove_mapping(struct address_space *mapping, struct page *page); > -extern unsigned long vm_total_pages; > > extern unsigned long reclaim_pages(struct list_head *page_list); > #ifdef CONFIG_NUMA > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c > index 9b34e03e730a4..d682781cce48d 100644 > --- a/mm/memory_hotplug.c > +++ b/mm/memory_hotplug.c > @@ -835,8 +835,6 @@ int 
__ref online_pages(unsigned long pfn, unsigned long nr_pages, > kswapd_run(nid); > kcompactd_run(nid); > > - vm_total_pages = nr_free_pagecache_pages(); > - > writeback_set_ratelimit(); > > memory_notify(MEM_ONLINE, &arg); > @@ -1586,7 +1584,6 @@ static int __ref __offline_pages(unsigned long start_pfn, > kcompactd_stop(node); > } > > - vm_total_pages = nr_free_pagecache_pages(); > writeback_set_ratelimit(); > > memory_notify(MEM_OFFLINE, &arg); > diff --git a/mm/page-writeback.c b/mm/page-writeback.c > index 28b3e7a675657..4e4ddd67b71e5 100644 > --- a/mm/page-writeback.c > +++ b/mm/page-writeback.c > @@ -2076,13 +2076,11 @@ static int page_writeback_cpu_online(unsigned int cpu) > * Called early on to tune the page writeback dirty limits. > * > * We used to scale dirty pages according to how total memory > - * related to pages that could be allocated for buffers (by > - * comparing nr_free_buffer_pages() to vm_total_pages. > + * related to pages that could be allocated for buffers. > * > * However, that was when we used "dirty_ratio" to scale with > * all memory, and we don't do that any more. "dirty_ratio" > - * is now applied to total non-HIGHPAGE memory (by subtracting > - * totalhigh_pages from vm_total_pages), and as such we can't > + * is now applied to total non-HIGHPAGE memory, and as such we can't > * get into the old insane situation any more where we had > * large amounts of dirty pages compared to a small amount of > * non-HIGHMEM memory. 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c > index 0c435b2ed665c..7b0dde69748c1 100644 > --- a/mm/page_alloc.c > +++ b/mm/page_alloc.c > @@ -5903,6 +5903,8 @@ build_all_zonelists_init(void) > */ > void __ref build_all_zonelists(pg_data_t *pgdat) > { > + unsigned long vm_total_pages; > + > if (system_state == SYSTEM_BOOTING) { > build_all_zonelists_init(); > } else { > diff --git a/mm/vmscan.c b/mm/vmscan.c > index b6d84326bdf2d..0010859747df2 100644 > --- a/mm/vmscan.c > +++ b/mm/vmscan.c > @@ -170,11 +170,6 @@ struct scan_control { > * From 0 .. 200. Higher means more swappy. > */ > int vm_swappiness = 60; > -/* > - * The total number of pages which are beyond the high watermark within all > - * zones. > - */ > -unsigned long vm_total_pages; > > static void set_task_reclaim_state(struct task_struct *task, > struct reclaim_state *rs) Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com> ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v1 1/2] mm: drop vm_total_pages 2020-06-19 13:24 ` [PATCH v1 1/2] mm: drop vm_total_pages David Hildenbrand ` (2 preceding siblings ...) 2020-06-21 19:56 ` Pankaj Gupta @ 2020-06-23 12:59 ` Michal Hocko 3 siblings, 0 replies; 12+ messages in thread From: Michal Hocko @ 2020-06-23 12:59 UTC (permalink / raw) To: David Hildenbrand Cc: linux-kernel, linux-mm, Andrew Morton, Johannes Weiner, Huang Ying, Minchan Kim, Wei Yang On Fri 19-06-20 15:24:09, David Hildenbrand wrote: > The global variable "vm_total_pages" is a relict from older days. There > is only a single user that reads the variable - build_all_zonelists() - > and the first thing it does is updating it. Use a local variable in > build_all_zonelists() instead and drop the local variable. > > Cc: Andrew Morton <akpm@linux-foundation.org> > Cc: Johannes Weiner <hannes@cmpxchg.org> > Cc: Michal Hocko <mhocko@suse.com> > Cc: Huang Ying <ying.huang@intel.com> > Cc: Minchan Kim <minchan@kernel.org> > Cc: Wei Yang <richard.weiyang@gmail.com> > Signed-off-by: David Hildenbrand <david@redhat.com> Acked-by: Michal Hocko <mhocko@suse.com> > --- > include/linux/swap.h | 1 - > mm/memory_hotplug.c | 3 --- > mm/page-writeback.c | 6 ++---- > mm/page_alloc.c | 2 ++ > mm/vmscan.c | 5 ----- > 5 files changed, 4 insertions(+), 13 deletions(-) > > diff --git a/include/linux/swap.h b/include/linux/swap.h > index 4c5974bb9ba94..124261acd5d0a 100644 > --- a/include/linux/swap.h > +++ b/include/linux/swap.h > @@ -371,7 +371,6 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem, > extern unsigned long shrink_all_memory(unsigned long nr_pages); > extern int vm_swappiness; > extern int remove_mapping(struct address_space *mapping, struct page *page); > -extern unsigned long vm_total_pages; > > extern unsigned long reclaim_pages(struct list_head *page_list); > #ifdef CONFIG_NUMA > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c > index 9b34e03e730a4..d682781cce48d 100644 > --- a/mm/memory_hotplug.c 
> +++ b/mm/memory_hotplug.c > @@ -835,8 +835,6 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages, > kswapd_run(nid); > kcompactd_run(nid); > > - vm_total_pages = nr_free_pagecache_pages(); > - > writeback_set_ratelimit(); > > memory_notify(MEM_ONLINE, &arg); > @@ -1586,7 +1584,6 @@ static int __ref __offline_pages(unsigned long start_pfn, > kcompactd_stop(node); > } > > - vm_total_pages = nr_free_pagecache_pages(); > writeback_set_ratelimit(); > > memory_notify(MEM_OFFLINE, &arg); > diff --git a/mm/page-writeback.c b/mm/page-writeback.c > index 28b3e7a675657..4e4ddd67b71e5 100644 > --- a/mm/page-writeback.c > +++ b/mm/page-writeback.c > @@ -2076,13 +2076,11 @@ static int page_writeback_cpu_online(unsigned int cpu) > * Called early on to tune the page writeback dirty limits. > * > * We used to scale dirty pages according to how total memory > - * related to pages that could be allocated for buffers (by > - * comparing nr_free_buffer_pages() to vm_total_pages. > + * related to pages that could be allocated for buffers. > * > * However, that was when we used "dirty_ratio" to scale with > * all memory, and we don't do that any more. "dirty_ratio" > - * is now applied to total non-HIGHPAGE memory (by subtracting > - * totalhigh_pages from vm_total_pages), and as such we can't > + * is now applied to total non-HIGHPAGE memory, and as such we can't > * get into the old insane situation any more where we had > * large amounts of dirty pages compared to a small amount of > * non-HIGHMEM memory. 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c > index 0c435b2ed665c..7b0dde69748c1 100644 > --- a/mm/page_alloc.c > +++ b/mm/page_alloc.c > @@ -5903,6 +5903,8 @@ build_all_zonelists_init(void) > */ > void __ref build_all_zonelists(pg_data_t *pgdat) > { > + unsigned long vm_total_pages; > + > if (system_state == SYSTEM_BOOTING) { > build_all_zonelists_init(); > } else { > diff --git a/mm/vmscan.c b/mm/vmscan.c > index b6d84326bdf2d..0010859747df2 100644 > --- a/mm/vmscan.c > +++ b/mm/vmscan.c > @@ -170,11 +170,6 @@ struct scan_control { > * From 0 .. 200. Higher means more swappy. > */ > int vm_swappiness = 60; > -/* > - * The total number of pages which are beyond the high watermark within all > - * zones. > - */ > -unsigned long vm_total_pages; > > static void set_task_reclaim_state(struct task_struct *task, > struct reclaim_state *rs) > -- > 2.26.2 -- Michal Hocko SUSE Labs ^ permalink raw reply [flat|nested] 12+ messages in thread
* [PATCH v1 2/2] mm/page_alloc: drop nr_free_pagecache_pages() 2020-06-19 13:24 [PATCH v1 0/2] mm: vm_total_pages and build_all_zonelists() cleanup David Hildenbrand 2020-06-19 13:24 ` [PATCH v1 1/2] mm: drop vm_total_pages David Hildenbrand @ 2020-06-19 13:24 ` David Hildenbrand 2020-06-19 13:48 ` Wei Yang ` (3 more replies) 1 sibling, 4 replies; 12+ messages in thread From: David Hildenbrand @ 2020-06-19 13:24 UTC (permalink / raw) To: linux-kernel Cc: linux-mm, David Hildenbrand, Andrew Morton, Johannes Weiner, Michal Hocko, Minchan Kim, Huang Ying, Wei Yang nr_free_pagecache_pages() isn't used outside page_alloc.c anymore - and the name does not really help to understand what's going on. Let's inline it instead and add a comment. Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Huang Ying <ying.huang@intel.com> Cc: Wei Yang <richard.weiyang@gmail.com> Signed-off-by: David Hildenbrand <david@redhat.com> --- include/linux/swap.h | 1 - mm/page_alloc.c | 16 ++-------------- 2 files changed, 2 insertions(+), 15 deletions(-) diff --git a/include/linux/swap.h b/include/linux/swap.h index 124261acd5d0a..9bde6c6b2c045 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -327,7 +327,6 @@ void workingset_update_node(struct xa_node *node); /* linux/mm/page_alloc.c */ extern unsigned long totalreserve_pages; extern unsigned long nr_free_buffer_pages(void); -extern unsigned long nr_free_pagecache_pages(void); /* Definition of global_zone_page_state not available yet */ #define nr_free_pages() global_zone_page_state(NR_FREE_PAGES) diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 7b0dde69748c1..c38903d1b3b4d 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -5177,19 +5177,6 @@ unsigned long nr_free_buffer_pages(void) } EXPORT_SYMBOL_GPL(nr_free_buffer_pages); -/** - * nr_free_pagecache_pages - count number of pages beyond high watermark - * - * 
nr_free_pagecache_pages() counts the number of pages which are beyond the - * high watermark within all zones. - * - * Return: number of pages beyond high watermark within all zones. - */ -unsigned long nr_free_pagecache_pages(void) -{ - return nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE)); -} - static inline void show_node(struct zone *zone) { if (IS_ENABLED(CONFIG_NUMA)) @@ -5911,7 +5898,8 @@ void __ref build_all_zonelists(pg_data_t *pgdat) __build_all_zonelists(pgdat); /* cpuset refresh routine should be here */ } - vm_total_pages = nr_free_pagecache_pages(); + /* Get the number of free pages beyond high watermark in all zones. */ + vm_total_pages = nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE)); /* * Disable grouping by mobility if the number of pages in the * system is too low to allow the mechanism to work. It would be -- 2.26.2 ^ permalink raw reply related [flat|nested] 12+ messages in thread
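The second patch's change can be sketched the same way: a one-line wrapper with a single caller is replaced by its body plus a comment at the call site. The zone layout and page counts below are simplified assumptions for illustration, not the kernel's real zone machinery.

```c
#include <assert.h>

/* Simplified zone model: two zones, each with a count of free pages
 * beyond its high watermark (values are arbitrary for the sketch). */
enum zone_type { ZONE_NORMAL, ZONE_MOVABLE, MAX_NR_ZONES };

static unsigned long zone_pages_beyond_high[MAX_NR_ZONES] = { 300, 200 };

/* Count free pages beyond the high watermark in all zones up to @highest. */
static unsigned long nr_free_zone_pages(enum zone_type highest)
{
	unsigned long sum = 0;
	int i;

	for (i = 0; i <= highest; i++)
		sum += zone_pages_beyond_high[i];
	return sum;
}

/* After the patch: the dropped wrapper's meaning survives as a comment
 * at the single remaining call site. */
static unsigned long build_all_zonelists_sketch(void)
{
	/* Get the number of free pages beyond high watermark in all zones. */
	unsigned long vm_total_pages = nr_free_zone_pages(ZONE_MOVABLE);

	return vm_total_pages;
}
```

The trade-off discussed in the thread is readability: the wrapper's name (`nr_free_pagecache_pages`) no longer described what it computed, so a comment next to the direct call documents the intent better than the indirection did.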
* Re: [PATCH v1 2/2] mm/page_alloc: drop nr_free_pagecache_pages() 2020-06-19 13:24 ` [PATCH v1 2/2] mm/page_alloc: drop nr_free_pagecache_pages() David Hildenbrand @ 2020-06-19 13:48 ` Wei Yang 2020-06-21 14:47 ` Mike Rapoport ` (2 subsequent siblings) 3 siblings, 0 replies; 12+ messages in thread From: Wei Yang @ 2020-06-19 13:48 UTC (permalink / raw) To: David Hildenbrand Cc: linux-kernel, linux-mm, Andrew Morton, Johannes Weiner, Michal Hocko, Minchan Kim, Huang Ying, Wei Yang On Fri, Jun 19, 2020 at 03:24:10PM +0200, David Hildenbrand wrote: >nr_free_pagecache_pages() isn't used outside page_alloc.c anymore - and >the name does not really help to understand what's going on. Let's inline >it instead and add a comment. Not sure "inline it" is the proper word for this. > >Cc: Andrew Morton <akpm@linux-foundation.org> >Cc: Johannes Weiner <hannes@cmpxchg.org> >Cc: Michal Hocko <mhocko@suse.com> >Cc: Minchan Kim <minchan@kernel.org> >Cc: Huang Ying <ying.huang@intel.com> >Cc: Wei Yang <richard.weiyang@gmail.com> >Signed-off-by: David Hildenbrand <david@redhat.com> Besides: Reviewed-by: Wei Yang <richard.weiyang@gmail.com> >--- > include/linux/swap.h | 1 - > mm/page_alloc.c | 16 ++-------------- > 2 files changed, 2 insertions(+), 15 deletions(-) > >diff --git a/include/linux/swap.h b/include/linux/swap.h >index 124261acd5d0a..9bde6c6b2c045 100644 >--- a/include/linux/swap.h >+++ b/include/linux/swap.h >@@ -327,7 +327,6 @@ void workingset_update_node(struct xa_node *node); > /* linux/mm/page_alloc.c */ > extern unsigned long totalreserve_pages; > extern unsigned long nr_free_buffer_pages(void); >-extern unsigned long nr_free_pagecache_pages(void); > > /* Definition of global_zone_page_state not available yet */ > #define nr_free_pages() global_zone_page_state(NR_FREE_PAGES) >diff --git a/mm/page_alloc.c b/mm/page_alloc.c >index 7b0dde69748c1..c38903d1b3b4d 100644 >--- a/mm/page_alloc.c >+++ b/mm/page_alloc.c >@@ -5177,19 +5177,6 @@ unsigned long 
nr_free_buffer_pages(void) > } > EXPORT_SYMBOL_GPL(nr_free_buffer_pages); > >-/** >- * nr_free_pagecache_pages - count number of pages beyond high watermark >- * >- * nr_free_pagecache_pages() counts the number of pages which are beyond the >- * high watermark within all zones. >- * >- * Return: number of pages beyond high watermark within all zones. >- */ >-unsigned long nr_free_pagecache_pages(void) >-{ >- return nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE)); >-} >- > static inline void show_node(struct zone *zone) > { > if (IS_ENABLED(CONFIG_NUMA)) >@@ -5911,7 +5898,8 @@ void __ref build_all_zonelists(pg_data_t *pgdat) > __build_all_zonelists(pgdat); > /* cpuset refresh routine should be here */ > } >- vm_total_pages = nr_free_pagecache_pages(); >+ /* Get the number of free pages beyond high watermark in all zones. */ >+ vm_total_pages = nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE)); > /* > * Disable grouping by mobility if the number of pages in the > * system is too low to allow the mechanism to work. It would be >-- >2.26.2 -- Wei Yang Help you, Help me ^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: [PATCH v1 2/2] mm/page_alloc: drop nr_free_pagecache_pages()

From: Mike Rapoport @ 2020-06-21 14:47 UTC
To: David Hildenbrand
Cc: linux-kernel, linux-mm, Andrew Morton, Johannes Weiner, Michal Hocko,
	Minchan Kim, Huang Ying, Wei Yang

On Fri, Jun 19, 2020 at 03:24:10PM +0200, David Hildenbrand wrote:
> nr_free_pagecache_pages() isn't used outside page_alloc.c anymore - and
> the name does not really help to understand what's going on. Let's inline
> it instead and add a comment.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Wei Yang <richard.weiyang@gmail.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>

> ---
>  include/linux/swap.h |  1 -
>  mm/page_alloc.c      | 16 ++--------------
>  2 files changed, 2 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 124261acd5d0a..9bde6c6b2c045 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -327,7 +327,6 @@ void workingset_update_node(struct xa_node *node);
>  /* linux/mm/page_alloc.c */
>  extern unsigned long totalreserve_pages;
>  extern unsigned long nr_free_buffer_pages(void);
> -extern unsigned long nr_free_pagecache_pages(void);
>
>  /* Definition of global_zone_page_state not available yet */
>  #define nr_free_pages() global_zone_page_state(NR_FREE_PAGES)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 7b0dde69748c1..c38903d1b3b4d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5177,19 +5177,6 @@ unsigned long nr_free_buffer_pages(void)
>  }
>  EXPORT_SYMBOL_GPL(nr_free_buffer_pages);
>
> -/**
> - * nr_free_pagecache_pages - count number of pages beyond high watermark
> - *
> - * nr_free_pagecache_pages() counts the number of pages which are beyond the
> - * high watermark within all zones.
> - *
> - * Return: number of pages beyond high watermark within all zones.
> - */
> -unsigned long nr_free_pagecache_pages(void)
> -{
> -	return nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE));
> -}
> -
>  static inline void show_node(struct zone *zone)
>  {
>  	if (IS_ENABLED(CONFIG_NUMA))
> @@ -5911,7 +5898,8 @@ void __ref build_all_zonelists(pg_data_t *pgdat)
>  		__build_all_zonelists(pgdat);
>  		/* cpuset refresh routine should be here */
>  	}
> -	vm_total_pages = nr_free_pagecache_pages();
> +	/* Get the number of free pages beyond high watermark in all zones. */
> +	vm_total_pages = nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE));
>  	/*
>  	 * Disable grouping by mobility if the number of pages in the
>  	 * system is too low to allow the mechanism to work. It would be
> --
> 2.26.2
>

-- 
Sincerely yours,
Mike.
* Re: [PATCH v1 2/2] mm/page_alloc: drop nr_free_pagecache_pages()

From: Pankaj Gupta @ 2020-06-21 19:57 UTC
To: David Hildenbrand
Cc: LKML, Linux MM, Andrew Morton, Johannes Weiner, Michal Hocko,
	Minchan Kim, Huang Ying, Wei Yang

> nr_free_pagecache_pages() isn't used outside page_alloc.c anymore - and
> the name does not really help to understand what's going on. Let's inline
> it instead and add a comment.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Wei Yang <richard.weiyang@gmail.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  include/linux/swap.h |  1 -
>  mm/page_alloc.c      | 16 ++--------------
>  2 files changed, 2 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 124261acd5d0a..9bde6c6b2c045 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -327,7 +327,6 @@ void workingset_update_node(struct xa_node *node);
>  /* linux/mm/page_alloc.c */
>  extern unsigned long totalreserve_pages;
>  extern unsigned long nr_free_buffer_pages(void);
> -extern unsigned long nr_free_pagecache_pages(void);
>
>  /* Definition of global_zone_page_state not available yet */
>  #define nr_free_pages() global_zone_page_state(NR_FREE_PAGES)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 7b0dde69748c1..c38903d1b3b4d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5177,19 +5177,6 @@ unsigned long nr_free_buffer_pages(void)
>  }
>  EXPORT_SYMBOL_GPL(nr_free_buffer_pages);
>
> -/**
> - * nr_free_pagecache_pages - count number of pages beyond high watermark
> - *
> - * nr_free_pagecache_pages() counts the number of pages which are beyond the
> - * high watermark within all zones.
> - *
> - * Return: number of pages beyond high watermark within all zones.
> - */
> -unsigned long nr_free_pagecache_pages(void)
> -{
> -	return nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE));
> -}
> -
>  static inline void show_node(struct zone *zone)
>  {
>  	if (IS_ENABLED(CONFIG_NUMA))
> @@ -5911,7 +5898,8 @@ void __ref build_all_zonelists(pg_data_t *pgdat)
>  		__build_all_zonelists(pgdat);
>  		/* cpuset refresh routine should be here */
>  	}
> -	vm_total_pages = nr_free_pagecache_pages();
> +	/* Get the number of free pages beyond high watermark in all zones. */
> +	vm_total_pages = nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE));
>  	/*
>  	 * Disable grouping by mobility if the number of pages in the
>  	 * system is too low to allow the mechanism to work. It would be

Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
* Re: [PATCH v1 2/2] mm/page_alloc: drop nr_free_pagecache_pages()

From: Michal Hocko @ 2020-06-23 13:02 UTC
To: David Hildenbrand
Cc: linux-kernel, linux-mm, Andrew Morton, Johannes Weiner,
	Minchan Kim, Huang Ying, Wei Yang

On Fri 19-06-20 15:24:10, David Hildenbrand wrote:
> nr_free_pagecache_pages() isn't used outside page_alloc.c anymore - and
> the name does not really help to understand what's going on. Let's inline
> it instead and add a comment.
>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Huang Ying <ying.huang@intel.com>
> Cc: Wei Yang <richard.weiyang@gmail.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>

nr_free_pagecache_pages was an awkward name which kind of makes sense but
it is terribly confusing (e.g. there are pagecache consumers restricted to
lowmem zones only).

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  include/linux/swap.h |  1 -
>  mm/page_alloc.c      | 16 ++--------------
>  2 files changed, 2 insertions(+), 15 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 124261acd5d0a..9bde6c6b2c045 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -327,7 +327,6 @@ void workingset_update_node(struct xa_node *node);
>  /* linux/mm/page_alloc.c */
>  extern unsigned long totalreserve_pages;
>  extern unsigned long nr_free_buffer_pages(void);
> -extern unsigned long nr_free_pagecache_pages(void);
>
>  /* Definition of global_zone_page_state not available yet */
>  #define nr_free_pages() global_zone_page_state(NR_FREE_PAGES)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 7b0dde69748c1..c38903d1b3b4d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -5177,19 +5177,6 @@ unsigned long nr_free_buffer_pages(void)
>  }
>  EXPORT_SYMBOL_GPL(nr_free_buffer_pages);
>
> -/**
> - * nr_free_pagecache_pages - count number of pages beyond high watermark
> - *
> - * nr_free_pagecache_pages() counts the number of pages which are beyond the
> - * high watermark within all zones.
> - *
> - * Return: number of pages beyond high watermark within all zones.
> - */
> -unsigned long nr_free_pagecache_pages(void)
> -{
> -	return nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE));
> -}
> -
>  static inline void show_node(struct zone *zone)
>  {
>  	if (IS_ENABLED(CONFIG_NUMA))
> @@ -5911,7 +5898,8 @@ void __ref build_all_zonelists(pg_data_t *pgdat)
>  		__build_all_zonelists(pgdat);
>  		/* cpuset refresh routine should be here */
>  	}
> -	vm_total_pages = nr_free_pagecache_pages();
> +	/* Get the number of free pages beyond high watermark in all zones. */
> +	vm_total_pages = nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE));
>  	/*
>  	 * Disable grouping by mobility if the number of pages in the
>  	 * system is too low to allow the mechanism to work. It would be
> --
> 2.26.2

-- 
Michal Hocko
SUSE Labs
Thread overview: 12+ messages
  2020-06-19 13:24 [PATCH v1 0/2] mm: vm_total_pages and build_all_zonelists() cleanup David Hildenbrand
  2020-06-19 13:24 ` [PATCH v1 1/2] mm: drop vm_total_pages David Hildenbrand
  2020-06-19 13:47   ` Wei Yang
  2020-06-21 14:46   ` Mike Rapoport
  2020-06-22  7:03     ` David Hildenbrand
  2020-06-21 19:56   ` Pankaj Gupta
  2020-06-23 12:59   ` Michal Hocko
  2020-06-19 13:24 ` [PATCH v1 2/2] mm/page_alloc: drop nr_free_pagecache_pages() David Hildenbrand
  2020-06-19 13:48   ` Wei Yang
  2020-06-21 14:47   ` Mike Rapoport
  2020-06-21 19:57   ` Pankaj Gupta
  2020-06-23 13:02   ` Michal Hocko