linux-kernel.vger.kernel.org archive mirror
* [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state
@ 2016-06-23 10:05 Arnd Bergmann
  2016-06-23 10:05 ` [RFC, DEBUGGING 2/2] mm: add type checking for page state functions Arnd Bergmann
  2016-06-23 10:41 ` [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Mel Gorman
  0 siblings, 2 replies; 9+ messages in thread
From: Arnd Bergmann @ 2016-06-23 10:05 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
	linux-mm, linux-kernel, Arnd Bergmann

I see some new warnings from a recent mm change:

mm/filemap.c: In function '__delete_from_page_cache':
include/linux/vmstat.h:116:2: error: array subscript is above array bounds [-Werror=array-bounds]
  atomic_long_add(x, &zone->vm_stat[item]);
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
  atomic_long_add(x, &zone->vm_stat[item]);
                      ~~~~~~~~~~~~~^~~~~~
include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
include/linux/vmstat.h:117:2: error: array subscript is above array bounds [-Werror=array-bounds]

Looking deeper into it, I find that we pass the wrong enum
into some functions after the type of the symbol has changed.

This changes the callers that use the incorrect type over to the
corresponding function. I've done this blindly, going only by the
warnings from a debug patch I wrote for this, so it's likely that
some cases are more subtle and need a different change; please
treat this as a bug report rather than a patch to be applied.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: e426f7b4ade5 ("mm: move most file-based accounting to the node")
---
 mm/filemap.c    |  4 ++--
 mm/page_alloc.c | 15 ++++++++-------
 mm/rmap.c       |  4 ++--
 mm/shmem.c      |  4 ++--
 mm/vmscan.c     |  2 +-
 5 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 6cb19e012887..77e902bf04f4 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -218,9 +218,9 @@ void __delete_from_page_cache(struct page *page, void *shadow)
 
 	/* hugetlb pages do not participate in page cache accounting. */
 	if (!PageHuge(page))
-		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, -nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_PAGES, -nr);
 	if (PageSwapBacked(page)) {
-		__mod_zone_page_state(page_zone(page), NR_SHMEM, -nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_SHMEM, -nr);
 		if (PageTransHuge(page))
 			__dec_zone_page_state(page, NR_SHMEM_THPS);
 	} else {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23b5044f5ced..d5287011ed27 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3484,9 +3484,10 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 				unsigned long writeback;
 				unsigned long dirty;
 
-				writeback = zone_page_state_snapshot(zone,
+				writeback = node_page_state_snapshot(zone->zone_pgdat,
 								     NR_WRITEBACK);
-				dirty = zone_page_state_snapshot(zone, NR_FILE_DIRTY);
+				dirty = node_page_state_snapshot(zone->zone_pgdat,
+								 NR_FILE_DIRTY);
 
 				if (2*(writeback + dirty) > reclaimable) {
 					congestion_wait(BLK_RW_ASYNC, HZ/10);
@@ -4396,9 +4397,9 @@ void show_free_areas(unsigned int filter)
 			K(zone->present_pages),
 			K(zone->managed_pages),
 			K(zone_page_state(zone, NR_MLOCK)),
-			K(zone_page_state(zone, NR_FILE_DIRTY)),
-			K(zone_page_state(zone, NR_WRITEBACK)),
-			K(zone_page_state(zone, NR_SHMEM)),
+			K(node_page_state(zone->zone_pgdat, NR_FILE_DIRTY)),
+			K(node_page_state(zone->zone_pgdat, NR_WRITEBACK)),
+			K(node_page_state(zone->zone_pgdat, NR_SHMEM)),
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			K(zone_page_state(zone, NR_SHMEM_THPS) * HPAGE_PMD_NR),
 			K(zone_page_state(zone, NR_SHMEM_PMDMAPPED)
@@ -4410,12 +4411,12 @@ void show_free_areas(unsigned int filter)
 			zone_page_state(zone, NR_KERNEL_STACK) *
 				THREAD_SIZE / 1024,
 			K(zone_page_state(zone, NR_PAGETABLE)),
-			K(zone_page_state(zone, NR_UNSTABLE_NFS)),
+			K(node_page_state(zone->zone_pgdat, NR_UNSTABLE_NFS)),
 			K(zone_page_state(zone, NR_BOUNCE)),
 			K(free_pcp),
 			K(this_cpu_read(zone->pageset->pcp.count)),
 			K(zone_page_state(zone, NR_FREE_CMA_PAGES)),
-			K(zone_page_state(zone, NR_WRITEBACK_TEMP)),
+			K(node_page_state(zone->zone_pgdat, NR_WRITEBACK_TEMP)),
 			K(node_page_state(zone->zone_pgdat, NR_PAGES_SCANNED)));
 		printk("lowmem_reserve[]:");
 		for (i = 0; i < MAX_NR_ZONES; i++)
diff --git a/mm/rmap.c b/mm/rmap.c
index 4deff963ea8a..898b2b7806ca 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1296,7 +1296,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 		if (!atomic_inc_and_test(&page->_mapcount))
 			goto out;
 	}
-	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, nr);
+	__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_MAPPED, nr);
 	mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 out:
 	unlock_page_memcg(page);
@@ -1336,7 +1336,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 	 * these counters are not modified in interrupt context, and
 	 * pte lock(a spinlock) is held, which implies preemption disabled.
 	 */
-	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, -nr);
+	__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_MAPPED, -nr);
 	mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 
 	if (unlikely(PageMlocked(page)))
diff --git a/mm/shmem.c b/mm/shmem.c
index e5c50fb0d4a4..99dcb8e5642d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -576,8 +576,8 @@ static int shmem_add_to_page_cache(struct page *page,
 		mapping->nrpages += nr;
 		if (PageTransHuge(page))
 			__inc_zone_page_state(page, NR_SHMEM_THPS);
-		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, nr);
-		__mod_zone_page_state(page_zone(page), NR_SHMEM, nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_FILE_PAGES, nr);
+		__mod_node_page_state(page_zone(page)->zone_pgdat, NR_SHMEM, nr);
 		spin_unlock_irq(&mapping->tree_lock);
 	} else {
 		page->mapping = NULL;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 07e17dac1793..4702069cc80b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2079,7 +2079,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 		int z;
 		unsigned long total_high_wmark = 0;
 
-		pgdatfree = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+		pgdatfree = global_page_state(NR_FREE_PAGES);
 		pgdatfile = node_page_state(pgdat, NR_ACTIVE_FILE) +
 			   node_page_state(pgdat, NR_INACTIVE_FILE);
 
-- 
2.9.0

* [RFC, DEBUGGING 2/2] mm: add type checking for page state functions
  2016-06-23 10:05 [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Arnd Bergmann
@ 2016-06-23 10:05 ` Arnd Bergmann
  2016-06-23 10:41 ` [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Mel Gorman
  1 sibling, 0 replies; 9+ messages in thread
From: Arnd Bergmann @ 2016-06-23 10:05 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
	linux-mm, linux-kernel, Arnd Bergmann

We had a couple of bugs where we passed the incorrect 'enum' into
one of the statistics functions, and unfortunately gcc can only
warn about comparing distinct enum types, not about passing an
enum of the wrong type into a function.

This wraps all the stats calls in macros that add type checking
through a comparison. This is a fairly crude method, but it
helped uncover some issues for me.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 include/linux/vmstat.h | 36 +++++++++++++++++++++++++++++++++++-
 1 file changed, 35 insertions(+), 1 deletion(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index c799073fe1c4..0328858894a5 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -116,6 +116,8 @@ static inline void zone_page_state_add(long x, struct zone *zone,
 	atomic_long_add(x, &zone->vm_stat[item]);
 	atomic_long_add(x, &vm_zone_stat[item]);
 }
+#define zone_page_state_add(x, zone, item) \
+	zone_page_state_add(x, zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
 
 static inline void node_page_state_add(long x, struct pglist_data *pgdat,
 				 enum node_stat_item item)
@@ -123,6 +125,8 @@ static inline void node_page_state_add(long x, struct pglist_data *pgdat,
 	atomic_long_add(x, &pgdat->vm_stat[item]);
 	atomic_long_add(x, &vm_node_stat[item]);
 }
+#define node_page_state_add(x, node, item) \
+	node_page_state_add(x, node, ((item) == (enum node_stat_item )0) ? (item) : (item))
 
 static inline unsigned long global_page_state(enum zone_stat_item item)
 {
@@ -133,6 +137,8 @@ static inline unsigned long global_page_state(enum zone_stat_item item)
 #endif
 	return x;
 }
+#define global_page_state(item) \
+	global_page_state(((item) == (enum zone_stat_item )0) ? (item) : (item))
 
 static inline unsigned long global_node_page_state(enum node_stat_item item)
 {
@@ -143,6 +149,8 @@ static inline unsigned long global_node_page_state(enum node_stat_item item)
 #endif
 	return x;
 }
+#define global_node_page_state(item) \
+	global_node_page_state(((item) == (enum node_stat_item )0) ? (item) : (item))
 
 static inline unsigned long zone_page_state(struct zone *zone,
 					enum zone_stat_item item)
@@ -154,6 +162,8 @@ static inline unsigned long zone_page_state(struct zone *zone,
 #endif
 	return x;
 }
+#define zone_page_state(zone, item) \
+	zone_page_state(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
 
 /*
  * More accurate version that also considers the currently pending
@@ -176,6 +186,8 @@ static inline unsigned long zone_page_state_snapshot(struct zone *zone,
 #endif
 	return x;
 }
+#define zone_page_state_snapshot(zone, item) \
+	zone_page_state_snapshot(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
 
 static inline unsigned long node_page_state_snapshot(pg_data_t *pgdat,
 					enum zone_stat_item item)
@@ -192,7 +204,8 @@ static inline unsigned long node_page_state_snapshot(pg_data_t *pgdat,
 #endif
 	return x;
 }
-
+#define node_page_state_snapshot(zone, item) \
+	node_page_state_snapshot(zone, ((item) == (enum node_stat_item )0) ? (item) : (item))
 
 #ifdef CONFIG_NUMA
 extern unsigned long sum_zone_node_page_state(int node,
@@ -341,6 +354,27 @@ static inline void drain_zonestat(struct zone *zone,
 			struct per_cpu_pageset *pset) { }
 #endif		/* CONFIG_SMP */
 
+#define __mod_zone_page_state(zone, item, delta) \
+	__mod_zone_page_state(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item), delta)
+#define __mod_node_page_state(pgdat, item, delta) \
+	__mod_node_page_state(pgdat, ((item) == (enum node_stat_item )0) ? (item) : (item), delta)
+#define __inc_zone_state(zone, item) \
+	__inc_zone_state(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+#define __inc_node_state(pgdat, item) \
+	__inc_node_state(pgdat, ((item) == (enum node_stat_item )0) ? (item) : (item))
+#define __dec_zone_state(zone, item) \
+	__dec_zone_state(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+#define __dec_node_state(pgdat, item) \
+	__dec_node_state(pgdat, ((item) == (enum node_stat_item )0) ? (item) : (item))
+#define __inc_zone_page_state(page, item) \
+	__inc_zone_page_state(page, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+#define __inc_node_page_state(page, item) \
+	__inc_node_page_state(page, ((item) == (enum node_stat_item )0) ? (item) : (item))
+#define __dec_zone_page_state(page, item) \
+	__dec_zone_page_state(page, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+#define __dec_node_page_state(page, item) \
+	__dec_node_page_state(page, ((item) == (enum node_stat_item )0) ? (item) : (item))
+
 static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
 					     int migratetype)
 {
-- 
2.9.0

* Re: [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state
  2016-06-23 10:05 [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Arnd Bergmann
  2016-06-23 10:05 ` [RFC, DEBUGGING 2/2] mm: add type checking for page state functions Arnd Bergmann
@ 2016-06-23 10:41 ` Mel Gorman
  2016-06-23 13:17   ` Arnd Bergmann
  1 sibling, 1 reply; 9+ messages in thread
From: Mel Gorman @ 2016-06-23 10:41 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
	linux-mm, linux-kernel

On Thu, Jun 23, 2016 at 12:05:17PM +0200, Arnd Bergmann wrote:
> I see some new warnings from a recent mm change:
> 
> mm/filemap.c: In function '__delete_from_page_cache':
> include/linux/vmstat.h:116:2: error: array subscript is above array bounds [-Werror=array-bounds]
>   atomic_long_add(x, &zone->vm_stat[item]);
>   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
>   atomic_long_add(x, &zone->vm_stat[item]);
>                       ~~~~~~~~~~~~~^~~~~~
> include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
> include/linux/vmstat.h:117:2: error: array subscript is above array bounds [-Werror=array-bounds]
> 
> Looking deeper into it, I find that we pass the wrong enum
> into some functions after the type of the symbol has changed.
> 
> This changes the callers that use the incorrect type over to the
> corresponding function. I've done this blindly, going only by the
> warnings from a debug patch I wrote for this, so it's likely that
> some cases are more subtle and need a different change; please
> treat this as a bug report rather than a patch to be applied.
> 

I have an alternative fix for this in a private tree. For now, I've asked
Andrew to withdraw the series entirely as there are non-trivial collisions
with OOM detection rework and huge page support for tmpfs.  It'll be easier
and safer to resolve this outside of mmotm as it'll require a full round
of testing which takes 3-4 days.

Thanks.

-- 
Mel Gorman
SUSE Labs

* Re: [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state
  2016-06-23 10:41 ` [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Mel Gorman
@ 2016-06-23 13:17   ` Arnd Bergmann
  2016-06-23 13:18     ` [RFC, DEBUGGING v2 " Arnd Bergmann
  2016-06-23 13:51     ` [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Mel Gorman
  0 siblings, 2 replies; 9+ messages in thread
From: Arnd Bergmann @ 2016-06-23 13:17 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
	linux-mm, linux-kernel

On Thursday, June 23, 2016 11:41:24 AM CEST Mel Gorman wrote:
> On Thu, Jun 23, 2016 at 12:05:17PM +0200, Arnd Bergmann wrote:
> > I see some new warnings from a recent mm change:
> > 
> > mm/filemap.c: In function '__delete_from_page_cache':
> > include/linux/vmstat.h:116:2: error: array subscript is above array bounds [-Werror=array-bounds]
> >   atomic_long_add(x, &zone->vm_stat[item]);
> >   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> > include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
> >   atomic_long_add(x, &zone->vm_stat[item]);
> >                       ~~~~~~~~~~~~~^~~~~~
> > include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
> > include/linux/vmstat.h:117:2: error: array subscript is above array bounds [-Werror=array-bounds]
> > 
> > Looking deeper into it, I find that we pass the wrong enum
> > into some functions after the type of the symbol has changed.
> > 
> > This changes the callers that use the incorrect type over to the
> > corresponding function. I've done this blindly, going only by the
> > warnings from a debug patch I wrote for this, so it's likely that
> > some cases are more subtle and need a different change; please
> > treat this as a bug report rather than a patch to be applied.
> > 
> 
> I have an alternative fix for this in a private tree. For now, I've asked
> Andrew to withdraw the series entirely as there are non-trivial collisions
> with OOM detection rework and huge page support for tmpfs.  It'll be easier
> and safer to resolve this outside of mmotm as it'll require a full round
> of testing which takes 3-4 days.

Ok. I've done a new version of my debug patch now, will follow up here
so you can do some testing on top of that as well if you like. We probably
don't want to apply my patch for the type checking, but you might find it
useful for your own testing.

	Arnd

* [RFC, DEBUGGING v2 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state
  2016-06-23 13:17   ` Arnd Bergmann
@ 2016-06-23 13:18     ` Arnd Bergmann
  2016-06-23 13:18       ` [RFC, DEBUGGING v2 2/2] mm: add type checking for page state functions Arnd Bergmann
  2016-06-23 13:51     ` [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Mel Gorman
  1 sibling, 1 reply; 9+ messages in thread
From: Arnd Bergmann @ 2016-06-23 13:18 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
	linux-mm, linux-kernel, Arnd Bergmann

I see some new warnings from a recent mm change:

mm/filemap.c: In function '__delete_from_page_cache':
include/linux/vmstat.h:116:2: error: array subscript is above array bounds [-Werror=array-bounds]
  atomic_long_add(x, &zone->vm_stat[item]);
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
  atomic_long_add(x, &zone->vm_stat[item]);
                      ~~~~~~~~~~~~~^~~~~~
include/linux/vmstat.h:116:35: error: array subscript is above array bounds [-Werror=array-bounds]
include/linux/vmstat.h:117:2: error: array subscript is above array bounds [-Werror=array-bounds]

Looking deeper into it, I find that we pass the wrong enum
into some functions after the type of the symbol has changed.

This changes the callers that use the incorrect type over to the
corresponding function. I've done this blindly, going only by the
warnings from a debug patch I wrote for this, so it's likely that
some cases are more subtle and need a different change; please
treat this as a bug report rather than a patch to be applied.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: e426f7b4ade5 ("mm: move most file-based accounting to the node")
---
 mm/filemap.c    |  4 ++--
 mm/khugepaged.c |  4 ++--
 mm/page_alloc.c | 15 ++++++++-------
 mm/rmap.c       |  4 ++--
 mm/shmem.c      |  4 ++--
 mm/vmscan.c     |  2 +-
 6 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 6cb19e012887..e0fe47e9ea44 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -218,9 +218,9 @@ void __delete_from_page_cache(struct page *page, void *shadow)
 
 	/* hugetlb pages do not participate in page cache accounting. */
 	if (!PageHuge(page))
-		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, -nr);
+		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, -nr);
 	if (PageSwapBacked(page)) {
-		__mod_zone_page_state(page_zone(page), NR_SHMEM, -nr);
+		__mod_node_page_state(page_pgdat(page), NR_SHMEM, -nr);
 		if (PageTransHuge(page))
 			__dec_zone_page_state(page, NR_SHMEM_THPS);
 	} else {
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index af256d599080..0efda0345aed 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1476,8 +1476,8 @@ tree_unlocked:
 		local_irq_save(flags);
 		__inc_zone_page_state(new_page, NR_SHMEM_THPS);
 		if (nr_none) {
-			__mod_zone_page_state(zone, NR_FILE_PAGES, nr_none);
-			__mod_zone_page_state(zone, NR_SHMEM, nr_none);
+			__mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none);
+			__mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none);
 		}
 		local_irq_restore(flags);
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 23b5044f5ced..277dc0cbe780 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3484,9 +3484,10 @@ should_reclaim_retry(gfp_t gfp_mask, unsigned order,
 				unsigned long writeback;
 				unsigned long dirty;
 
-				writeback = zone_page_state_snapshot(zone,
+				writeback = node_page_state_snapshot(zone->zone_pgdat,
 								     NR_WRITEBACK);
-				dirty = zone_page_state_snapshot(zone, NR_FILE_DIRTY);
+				dirty = node_page_state_snapshot(zone->zone_pgdat,
+								 NR_FILE_DIRTY);
 
 				if (2*(writeback + dirty) > reclaimable) {
 					congestion_wait(BLK_RW_ASYNC, HZ/10);
@@ -4396,9 +4397,9 @@ void show_free_areas(unsigned int filter)
 			K(zone->present_pages),
 			K(zone->managed_pages),
 			K(zone_page_state(zone, NR_MLOCK)),
-			K(zone_page_state(zone, NR_FILE_DIRTY)),
-			K(zone_page_state(zone, NR_WRITEBACK)),
-			K(zone_page_state(zone, NR_SHMEM)),
+			K(node_page_state(zone->zone_pgdat, NR_FILE_DIRTY)),
+			K(node_page_state(zone->zone_pgdat, NR_WRITEBACK)),
+			K(node_page_state(zone->zone_pgdat, NR_SHMEM)),
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			K(zone_page_state(zone, NR_SHMEM_THPS) * HPAGE_PMD_NR),
 			K(zone_page_state(zone, NR_SHMEM_PMDMAPPED)
@@ -4410,12 +4411,12 @@ void show_free_areas(unsigned int filter)
 			zone_page_state(zone, NR_KERNEL_STACK) *
 				THREAD_SIZE / 1024,
 			K(zone_page_state(zone, NR_PAGETABLE)),
-			K(zone_page_state(zone, NR_UNSTABLE_NFS)),
+			K(node_page_state(zone->zone_pgdat, NR_UNSTABLE_NFS)),
 			K(zone_page_state(zone, NR_BOUNCE)),
 			K(free_pcp),
 			K(this_cpu_read(zone->pageset->pcp.count)),
 			K(zone_page_state(zone, NR_FREE_CMA_PAGES)),
-			K(zone_page_state(zone, NR_WRITEBACK_TEMP)),
+			K(node_page_state(zone->zone_pgdat, NR_WRITEBACK_TEMP)),
 			K(node_page_state(zone->zone_pgdat, NR_PAGES_SCANNED)));
 		printk("lowmem_reserve[]:");
 		for (i = 0; i < MAX_NR_ZONES; i++)
diff --git a/mm/rmap.c b/mm/rmap.c
index 4deff963ea8a..a66f80bc8703 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1296,7 +1296,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 		if (!atomic_inc_and_test(&page->_mapcount))
 			goto out;
 	}
-	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, nr);
+	__mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, nr);
 	mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 out:
 	unlock_page_memcg(page);
@@ -1336,7 +1336,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
 	 * these counters are not modified in interrupt context, and
 	 * pte lock(a spinlock) is held, which implies preemption disabled.
 	 */
-	__mod_zone_page_state(page_zone(page), NR_FILE_MAPPED, -nr);
+	__mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, -nr);
 	mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
 
 	if (unlikely(PageMlocked(page)))
diff --git a/mm/shmem.c b/mm/shmem.c
index e5c50fb0d4a4..a03c087f71fe 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -576,8 +576,8 @@ static int shmem_add_to_page_cache(struct page *page,
 		mapping->nrpages += nr;
 		if (PageTransHuge(page))
 			__inc_zone_page_state(page, NR_SHMEM_THPS);
-		__mod_zone_page_state(page_zone(page), NR_FILE_PAGES, nr);
-		__mod_zone_page_state(page_zone(page), NR_SHMEM, nr);
+		__mod_node_page_state(page_pgdat(page), NR_FILE_PAGES, nr);
+		__mod_node_page_state(page_pgdat(page), NR_SHMEM, nr);
 		spin_unlock_irq(&mapping->tree_lock);
 	} else {
 		page->mapping = NULL;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 07e17dac1793..4702069cc80b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2079,7 +2079,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 		int z;
 		unsigned long total_high_wmark = 0;
 
-		pgdatfree = sum_zone_node_page_state(pgdat->node_id, NR_FREE_PAGES);
+		pgdatfree = global_page_state(NR_FREE_PAGES);
 		pgdatfile = node_page_state(pgdat, NR_ACTIVE_FILE) +
 			   node_page_state(pgdat, NR_INACTIVE_FILE);
 
-- 
2.9.0

* [RFC, DEBUGGING v2 2/2] mm: add type checking for page state functions
  2016-06-23 13:18     ` [RFC, DEBUGGING v2 " Arnd Bergmann
@ 2016-06-23 13:18       ` Arnd Bergmann
  0 siblings, 0 replies; 9+ messages in thread
From: Arnd Bergmann @ 2016-06-23 13:18 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
	linux-mm, linux-kernel, Arnd Bergmann

We had a couple of bugs where we passed the incorrect 'enum' into
one of the statistics functions, and unfortunately gcc can only
warn about comparing distinct enum types, not about passing an
enum of the wrong type into a function.

This wraps all the stats calls in macros that add type checking
through a comparison.

This second version is better compile-tested, but it's also really ugly.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 include/linux/vmstat.h | 158 ++++++++++++++++++++++++++++++++++---------------
 mm/vmstat.c            | 136 +++++++++++++++++++++---------------------
 2 files changed, 178 insertions(+), 116 deletions(-)

diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
index c799073fe1c4..390b7ae3efb2 100644
--- a/include/linux/vmstat.h
+++ b/include/linux/vmstat.h
@@ -110,21 +110,25 @@ static inline void vm_events_fold_cpu(int cpu)
 extern atomic_long_t vm_zone_stat[NR_VM_ZONE_STAT_ITEMS];
 extern atomic_long_t vm_node_stat[NR_VM_NODE_STAT_ITEMS];
 
-static inline void zone_page_state_add(long x, struct zone *zone,
+static inline void zone_page_state_add_check(long x, struct zone *zone,
 				 enum zone_stat_item item)
 {
 	atomic_long_add(x, &zone->vm_stat[item]);
 	atomic_long_add(x, &vm_zone_stat[item]);
 }
+#define zone_page_state_add(x, zone, item) \
+	zone_page_state_add_check(x, zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
 
-static inline void node_page_state_add(long x, struct pglist_data *pgdat,
+static inline void node_page_state_add_check(long x, struct pglist_data *pgdat,
 				 enum node_stat_item item)
 {
 	atomic_long_add(x, &pgdat->vm_stat[item]);
 	atomic_long_add(x, &vm_node_stat[item]);
 }
+#define node_page_state_add(x, node, item) \
+	node_page_state_add_check(x, node, ((item) == (enum node_stat_item )0) ? (item) : (item))
 
-static inline unsigned long global_page_state(enum zone_stat_item item)
+static inline unsigned long global_page_state_check(enum zone_stat_item item)
 {
 	long x = atomic_long_read(&vm_zone_stat[item]);
 #ifdef CONFIG_SMP
@@ -133,8 +137,10 @@ static inline unsigned long global_page_state(enum zone_stat_item item)
 #endif
 	return x;
 }
+#define global_page_state(item) \
+	global_page_state_check(((item) == (enum zone_stat_item )0) ? (item) : (item))
 
-static inline unsigned long global_node_page_state(enum node_stat_item item)
+static inline unsigned long global_node_page_state_check(enum node_stat_item item)
 {
 	long x = atomic_long_read(&vm_node_stat[item]);
 #ifdef CONFIG_SMP
@@ -143,8 +149,10 @@ static inline unsigned long global_node_page_state(enum node_stat_item item)
 #endif
 	return x;
 }
+#define global_node_page_state(item) \
+	global_node_page_state_check(((item) == (enum node_stat_item )0) ? (item) : (item))
 
-static inline unsigned long zone_page_state(struct zone *zone,
+static inline unsigned long zone_page_state_check(struct zone *zone,
 					enum zone_stat_item item)
 {
 	long x = atomic_long_read(&zone->vm_stat[item]);
@@ -154,6 +162,8 @@ static inline unsigned long zone_page_state(struct zone *zone,
 #endif
 	return x;
 }
+#define zone_page_state(zone, item) \
+	zone_page_state_check(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
 
 /*
  * More accurate version that also considers the currently pending
@@ -161,7 +171,7 @@ static inline unsigned long zone_page_state(struct zone *zone,
  * deltas. There is no synchronization so the result cannot be
  * exactly accurate either.
  */
-static inline unsigned long zone_page_state_snapshot(struct zone *zone,
+static inline unsigned long zone_page_state_snapshot_check(struct zone *zone,
 					enum zone_stat_item item)
 {
 	long x = atomic_long_read(&zone->vm_stat[item]);
@@ -176,8 +186,10 @@ static inline unsigned long zone_page_state_snapshot(struct zone *zone,
 #endif
 	return x;
 }
+#define zone_page_state_snapshot(zone, item) \
+	zone_page_state_snapshot_check(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
 
-static inline unsigned long node_page_state_snapshot(pg_data_t *pgdat,
+static inline unsigned long node_page_state_snapshot_check(pg_data_t *pgdat,
 					enum zone_stat_item item)
 {
 	long x = atomic_long_read(&pgdat->vm_stat[item]);
@@ -192,13 +204,18 @@ static inline unsigned long node_page_state_snapshot(pg_data_t *pgdat,
 #endif
 	return x;
 }
-
+#define node_page_state_snapshot(zone, item) \
+	node_page_state_snapshot_check(zone, ((item) == (enum node_stat_item )0) ? (item) : (item))
 
 #ifdef CONFIG_NUMA
-extern unsigned long sum_zone_node_page_state(int node,
+extern unsigned long sum_zone_node_page_state_check(int node,
 						enum zone_stat_item item);
-extern unsigned long node_page_state(struct pglist_data *pgdat,
+#define sum_zone_node_page_state(node, item) \
+	sum_zone_node_page_state_check(node, ((item) == (enum node_stat_item )0) ? (item) : (item))
+extern unsigned long node_page_state_check(struct pglist_data *pgdat,
 						enum node_stat_item item);
+#define node_page_state(pgdat, item) \
+	node_page_state_check(pgdat, ((item) == (enum node_stat_item )0) ? (item) : (item))
 #else
 #define sum_zone_node_page_state(node, item) global_node_page_state(item)
 #define node_page_state(node, item) global_node_page_state(item)
@@ -210,28 +227,52 @@ extern unsigned long node_page_state(struct pglist_data *pgdat,
 #define sub_node_page_state(__p, __i, __d) mod_node_page_state(__p, __i, -(__d))
 
 #ifdef CONFIG_SMP
-void __mod_zone_page_state(struct zone *, enum zone_stat_item item, long);
-void __inc_zone_page_state(struct page *, enum zone_stat_item);
-void __dec_zone_page_state(struct page *, enum zone_stat_item);
-
-void __mod_node_page_state(struct pglist_data *, enum node_stat_item item, long);
-void __inc_node_page_state(struct page *, enum node_stat_item);
-void __dec_node_page_state(struct page *, enum node_stat_item);
-
-void mod_zone_page_state(struct zone *, enum zone_stat_item, long);
-void inc_zone_page_state(struct page *, enum zone_stat_item);
-void dec_zone_page_state(struct page *, enum zone_stat_item);
-
-void mod_node_page_state(struct pglist_data *, enum node_stat_item, long);
-void inc_node_page_state(struct page *, enum node_stat_item);
-void dec_node_page_state(struct page *, enum node_stat_item);
-
-extern void inc_node_state(struct pglist_data *, enum node_stat_item);
-extern void __inc_zone_state(struct zone *, enum zone_stat_item);
-extern void __inc_node_state(struct pglist_data *, enum node_stat_item);
-extern void dec_zone_state(struct zone *, enum zone_stat_item);
-extern void __dec_zone_state(struct zone *, enum zone_stat_item);
-extern void __dec_node_state(struct pglist_data *, enum node_stat_item);
+void __mod_zone_page_state_check(struct zone *, enum zone_stat_item item, long);
+void __inc_zone_page_state_check(struct page *, enum zone_stat_item);
+void __dec_zone_page_state_check(struct page *, enum zone_stat_item);
+
+void __mod_node_page_state_check(struct pglist_data *, enum node_stat_item item, long);
+void __inc_node_page_state_check(struct page *, enum node_stat_item);
+void __dec_node_page_state_check(struct page *, enum node_stat_item);
+
+void mod_zone_page_state_check(struct zone *, enum zone_stat_item, long);
+void inc_zone_page_state_check(struct page *, enum zone_stat_item);
+void dec_zone_page_state_check(struct page *, enum zone_stat_item);
+
+#define mod_zone_page_state(zone, item, delta) \
+	mod_zone_page_state_check(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item), delta)
+#define inc_zone_page_state(page, item) \
+	inc_zone_page_state_check(page, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+#define dec_zone_page_state(page, item) \
+	dec_zone_page_state_check(page, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+
+void mod_node_page_state_check(struct pglist_data *, enum node_stat_item, long);
+void inc_node_page_state_check(struct page *, enum node_stat_item);
+void dec_node_page_state_check(struct page *, enum node_stat_item);
+
+#define mod_node_page_state(pgdat, item, delta) \
+	mod_node_page_state_check(pgdat, ((item) == (enum node_stat_item )0) ? (item) : (item), delta)
+#define inc_node_page_state(page, item) \
+	inc_node_page_state_check(page, ((item) == (enum node_stat_item )0) ? (item) : (item))
+#define dec_node_page_state(page, item) \
+	dec_node_page_state_check(page, ((item) == (enum node_stat_item )0) ? (item) : (item))
+
+extern void inc_node_state_check(struct pglist_data *, enum node_stat_item);
+extern void __inc_zone_state_check(struct zone *, enum zone_stat_item);
+extern void __inc_node_state_check(struct pglist_data *, enum node_stat_item);
+extern void dec_zone_state_check(struct zone *, enum zone_stat_item);
+extern void __dec_zone_state_check(struct zone *, enum zone_stat_item);
+extern void __dec_node_state_check(struct pglist_data *, enum node_stat_item);
+
+#define inc_node_state(pgdat, item) \
+	inc_node_state_check(pgdat, ((item) == (enum node_stat_item )0) ? (item) : (item))
+#define dec_node_state(pgdat, item) \
+	dec_node_state_check(pgdat, ((item) == (enum node_stat_item )0) ? (item) : (item))
+
+#define inc_zone_state(zone, item) \
+	inc_zone_state_check(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+#define dec_zone_state(zone, item) \
+	dec_zone_state_check(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
 
 void quiet_vmstat(void);
 void cpu_vm_stats_fold(int cpu);
@@ -253,65 +294,65 @@ void set_pgdat_percpu_threshold(pg_data_t *pgdat,
  * We do not maintain differentials in a single processor configuration.
  * The functions directly modify the zone and global counters.
  */
-static inline void __mod_zone_page_state(struct zone *zone,
+static inline void __mod_zone_page_state_check(struct zone *zone,
 			enum zone_stat_item item, long delta)
 {
-	zone_page_state_add(delta, zone, item);
+	zone_page_state_add_check(delta, zone, item);
 }
 
-static inline void __mod_node_page_state(struct pglist_data *pgdat,
+static inline void __mod_node_page_state_check(struct pglist_data *pgdat,
 			enum node_stat_item item, int delta)
 {
-	node_page_state_add(delta, pgdat, item);
+	node_page_state_add_check(delta, pgdat, item);
 }
 
-static inline void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
+static inline void __inc_zone_state_check(struct zone *zone, enum zone_stat_item item)
 {
 	atomic_long_inc(&zone->vm_stat[item]);
 	atomic_long_inc(&vm_zone_stat[item]);
 }
 
-static inline void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+static inline void __inc_node_state_check(struct pglist_data *pgdat, enum node_stat_item item)
 {
 	atomic_long_inc(&pgdat->vm_stat[item]);
 	atomic_long_inc(&vm_node_stat[item]);
 }
 
-static inline void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
+static inline void __dec_zone_state_check(struct zone *zone, enum zone_stat_item item)
 {
 	atomic_long_dec(&zone->vm_stat[item]);
 	atomic_long_dec(&vm_zone_stat[item]);
 }
 
-static inline void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+static inline void __dec_node_state_check(struct pglist_data *pgdat, enum node_stat_item item)
 {
 	atomic_long_dec(&pgdat->vm_stat[item]);
 	atomic_long_dec(&vm_node_stat[item]);
 }
 
-static inline void __inc_zone_page_state(struct page *page,
+static inline void __inc_zone_page_state_check(struct page *page,
 			enum zone_stat_item item)
 {
-	__inc_zone_state(page_zone(page), item);
+	__inc_zone_state_check(page_zone(page), item);
 }
 
-static inline void __inc_node_page_state(struct page *page,
+static inline void __inc_node_page_state_check(struct page *page,
 			enum node_stat_item item)
 {
-	__inc_node_state(page_pgdat(page), item);
+	__inc_node_state_check(page_pgdat(page), item);
 }
 
 
-static inline void __dec_zone_page_state(struct page *page,
+static inline void __dec_zone_page_state_check(struct page *page,
 			enum zone_stat_item item)
 {
-	__dec_zone_state(page_zone(page), item);
+	__dec_zone_state_check(page_zone(page), item);
 }
 
-static inline void __dec_node_page_state(struct page *page,
+static inline void __dec_node_page_state_check(struct page *page,
 			enum node_stat_item item)
 {
-	__dec_node_state(page_pgdat(page), item);
+	__dec_node_state_check(page_pgdat(page), item);
 }
 
 
@@ -341,6 +382,27 @@ static inline void drain_zonestat(struct zone *zone,
 			struct per_cpu_pageset *pset) { }
 #endif		/* CONFIG_SMP */
 
+#define __mod_zone_page_state(zone, item, delta) \
+	__mod_zone_page_state_check(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item), delta)
+#define __mod_node_page_state(pgdat, item, delta) \
+	__mod_node_page_state_check(pgdat, ((item) == (enum node_stat_item )0) ? (item) : (item), delta)
+#define __inc_zone_state(zone, item) \
+	__inc_zone_state_check(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+#define __inc_node_state(pgdat, item) \
+	__inc_node_state_check(pgdat, ((item) == (enum node_stat_item )0) ? (item) : (item))
+#define __dec_zone_state(zone, item) \
+	__dec_zone_state_check(zone, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+#define __dec_node_state(pgdat, item) \
+	__dec_node_state_check(pgdat, ((item) == (enum node_stat_item )0) ? (item) : (item))
+#define __inc_zone_page_state(page, item) \
+	__inc_zone_page_state_check(page, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+#define __inc_node_page_state(page, item) \
+	__inc_node_page_state_check(page, ((item) == (enum node_stat_item )0) ? (item) : (item))
+#define __dec_zone_page_state(page, item) \
+	__dec_zone_page_state_check(page, ((item) == (enum zone_stat_item )0) ? (item) : (item))
+#define __dec_node_page_state(page, item) \
+	__dec_node_page_state_check(page, ((item) == (enum node_stat_item )0) ? (item) : (item))
+
 static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
 					     int migratetype)
 {
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 78c682ade326..c309a701d953 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -224,7 +224,7 @@ void set_pgdat_percpu_threshold(pg_data_t *pgdat,
  * or when we know that preemption is disabled and that
  * particular counter cannot be updated from interrupt context.
  */
-void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+void __mod_zone_page_state_check(struct zone *zone, enum zone_stat_item item,
 			   long delta)
 {
 	struct per_cpu_pageset __percpu *pcp = zone->pageset;
@@ -242,9 +242,9 @@ void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
 	}
 	__this_cpu_write(*p, x);
 }
-EXPORT_SYMBOL(__mod_zone_page_state);
+EXPORT_SYMBOL(__mod_zone_page_state_check);
 
-void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+void __mod_node_page_state_check(struct pglist_data *pgdat, enum node_stat_item item,
 				long delta)
 {
 	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
@@ -262,7 +262,7 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
 	}
 	__this_cpu_write(*p, x);
 }
-EXPORT_SYMBOL(__mod_node_page_state);
+EXPORT_SYMBOL(__mod_node_page_state_check);
 
 /*
  * Optimized increment and decrement functions.
@@ -287,7 +287,7 @@ EXPORT_SYMBOL(__mod_node_page_state);
  * in between and therefore the atomicity vs. interrupt cannot be exploited
  * in a useful way here.
  */
-void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
+void __inc_zone_state_check(struct zone *zone, enum zone_stat_item item)
 {
 	struct per_cpu_pageset __percpu *pcp = zone->pageset;
 	s8 __percpu *p = pcp->vm_stat_diff + item;
@@ -303,7 +303,7 @@ void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
 	}
 }
 
-void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+void __inc_node_state_check(struct pglist_data *pgdat, enum node_stat_item item)
 {
 	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
 	s8 __percpu *p = pcp->vm_node_stat_diff + item;
@@ -314,24 +314,24 @@ void __inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 	if (unlikely(v > t)) {
 		s8 overstep = t >> 1;
 
-		node_page_state_add(v + overstep, pgdat, item);
+		node_page_state_add_check(v + overstep, pgdat, item);
 		__this_cpu_write(*p, -overstep);
 	}
 }
 
-void __inc_zone_page_state(struct page *page, enum zone_stat_item item)
+void __inc_zone_page_state_check(struct page *page, enum zone_stat_item item)
 {
-	__inc_zone_state(page_zone(page), item);
+	__inc_zone_state_check(page_zone(page), item);
 }
-EXPORT_SYMBOL(__inc_zone_page_state);
+EXPORT_SYMBOL(__inc_zone_page_state_check);
 
-void __inc_node_page_state(struct page *page, enum node_stat_item item)
+void __inc_node_page_state_check(struct page *page, enum node_stat_item item)
 {
-	__inc_node_state(page_pgdat(page), item);
+	__inc_node_state_check(page_pgdat(page), item);
 }
-EXPORT_SYMBOL(__inc_node_page_state);
+EXPORT_SYMBOL(__inc_node_page_state_check);
 
-void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
+void __dec_zone_state_check(struct zone *zone, enum zone_stat_item item)
 {
 	struct per_cpu_pageset __percpu *pcp = zone->pageset;
 	s8 __percpu *p = pcp->vm_stat_diff + item;
@@ -342,12 +342,12 @@ void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
 	if (unlikely(v < - t)) {
 		s8 overstep = t >> 1;
 
-		zone_page_state_add(v - overstep, zone, item);
+		zone_page_state_add_check(v - overstep, zone, item);
 		__this_cpu_write(*p, overstep);
 	}
 }
 
-void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+void __dec_node_state_check(struct pglist_data *pgdat, enum node_stat_item item)
 {
 	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
 	s8 __percpu *p = pcp->vm_node_stat_diff + item;
@@ -358,22 +358,22 @@ void __dec_node_state(struct pglist_data *pgdat, enum node_stat_item item)
 	if (unlikely(v < - t)) {
 		s8 overstep = t >> 1;
 
-		node_page_state_add(v - overstep, pgdat, item);
+		node_page_state_add_check(v - overstep, pgdat, item);
 		__this_cpu_write(*p, overstep);
 	}
 }
 
-void __dec_zone_page_state(struct page *page, enum zone_stat_item item)
+void __dec_zone_page_state_check(struct page *page, enum zone_stat_item item)
 {
-	__dec_zone_state(page_zone(page), item);
+	__dec_zone_state_check(page_zone(page), item);
 }
-EXPORT_SYMBOL(__dec_zone_page_state);
+EXPORT_SYMBOL(__dec_zone_page_state_check);
 
-void __dec_node_page_state(struct page *page, enum node_stat_item item)
+void __dec_node_page_state_check(struct page *page, enum node_stat_item item)
 {
-	__dec_node_state(page_pgdat(page), item);
+	__dec_node_state_check(page_pgdat(page), item);
 }
-EXPORT_SYMBOL(__dec_node_page_state);
+EXPORT_SYMBOL(__dec_node_page_state_check);
 
 #ifdef CONFIG_HAVE_CMPXCHG_LOCAL
 /*
@@ -426,26 +426,26 @@ static inline void mod_zone_state(struct zone *zone,
 		zone_page_state_add(z, zone, item);
 }
 
-void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+void mod_zone_page_state_check(struct zone *zone, enum zone_stat_item item,
 			 long delta)
 {
-	mod_zone_state(zone, item, delta, 0);
+	mod_zone_state_check(zone, item, delta, 0);
 }
-EXPORT_SYMBOL(mod_zone_page_state);
+EXPORT_SYMBOL(mod_zone_page_state_check);
 
-void inc_zone_page_state(struct page *page, enum zone_stat_item item)
+void inc_zone_page_state_check(struct page *page, enum zone_stat_item item)
 {
-	mod_zone_state(page_zone(page), item, 1, 1);
+	mod_zone_state_check(page_zone(page), item, 1, 1);
 }
-EXPORT_SYMBOL(inc_zone_page_state);
+EXPORT_SYMBOL(inc_zone_page_state_check);
 
-void dec_zone_page_state(struct page *page, enum zone_stat_item item)
+void dec_zone_page_state_check(struct page *page, enum zone_stat_item item)
 {
-	mod_zone_state(page_zone(page), item, -1, -1);
+	mod_zone_state_check(page_zone(page), item, -1, -1);
 }
-EXPORT_SYMBOL(dec_zone_page_state);
+EXPORT_SYMBOL(dec_zone_page_state_check);
 
-static inline void mod_node_state(struct pglist_data *pgdat,
+static inline void mod_node_state_check(struct pglist_data *pgdat,
        enum node_stat_item item, int delta, int overstep_mode)
 {
 	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
@@ -480,111 +480,111 @@ static inline void mod_node_state(struct pglist_data *pgdat,
 	} while (this_cpu_cmpxchg(*p, o, n) != o);
 
 	if (z)
-		node_page_state_add(z, pgdat, item);
+		node_page_state_add_check(z, pgdat, item);
 }
 
-void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+void mod_node_page_state_check(struct pglist_data *pgdat, enum node_stat_item item,
 					long delta)
 {
-	mod_node_state(pgdat, item, delta, 0);
+	mod_node_state_check(pgdat, item, delta, 0);
 }
-EXPORT_SYMBOL(mod_node_page_state);
+EXPORT_SYMBOL(mod_node_page_state_check);
 
-void inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+void inc_node_state_check(struct pglist_data *pgdat, enum node_stat_item item)
 {
-	mod_node_state(pgdat, item, 1, 1);
+	mod_node_state_check(pgdat, item, 1, 1);
 }
 
-void inc_node_page_state(struct page *page, enum node_stat_item item)
+void inc_node_page_state_check(struct page *page, enum node_stat_item item)
 {
-	mod_node_state(page_pgdat(page), item, 1, 1);
+	mod_node_state_check(page_pgdat(page), item, 1, 1);
 }
-EXPORT_SYMBOL(inc_node_page_state);
+EXPORT_SYMBOL(inc_node_page_state_check);
 
-void dec_node_page_state(struct page *page, enum node_stat_item item)
+void dec_node_page_state_check(struct page *page, enum node_stat_item item)
 {
-	mod_node_state(page_pgdat(page), item, -1, -1);
+	mod_node_state_check(page_pgdat(page), item, -1, -1);
 }
-EXPORT_SYMBOL(dec_node_page_state);
+EXPORT_SYMBOL(dec_node_page_state_check);
 #else
 /*
  * Use interrupt disable to serialize counter updates
  */
-void mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
+void mod_zone_page_state_check(struct zone *zone, enum zone_stat_item item,
 			 long delta)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
-	__mod_zone_page_state(zone, item, delta);
+	__mod_zone_page_state_check(zone, item, delta);
 	local_irq_restore(flags);
 }
-EXPORT_SYMBOL(mod_zone_page_state);
+EXPORT_SYMBOL(mod_zone_page_state_check);
 
-void inc_zone_page_state(struct page *page, enum zone_stat_item item)
+void inc_zone_page_state_check(struct page *page, enum zone_stat_item item)
 {
 	unsigned long flags;
 	struct zone *zone;
 
 	zone = page_zone(page);
 	local_irq_save(flags);
-	__inc_zone_state(zone, item);
+	__inc_zone_state_check(zone, item);
 	local_irq_restore(flags);
 }
-EXPORT_SYMBOL(inc_zone_page_state);
+EXPORT_SYMBOL(inc_zone_page_state_check);
 
-void dec_zone_page_state(struct page *page, enum zone_stat_item item)
+void dec_zone_page_state_check(struct page *page, enum zone_stat_item item)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
-	__dec_zone_page_state(page, item);
+	__dec_zone_page_state_check(page, item);
 	local_irq_restore(flags);
 }
-EXPORT_SYMBOL(dec_zone_page_state);
+EXPORT_SYMBOL(dec_zone_page_state_check);
 
-void inc_node_state(struct pglist_data *pgdat, enum node_stat_item item)
+void inc_node_state_check(struct pglist_data *pgdat, enum node_stat_item item)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
-	__inc_node_state(pgdat, item);
+	__inc_node_state_check(pgdat, item);
 	local_irq_restore(flags);
 }
-EXPORT_SYMBOL(inc_node_state);
+EXPORT_SYMBOL(inc_node_state_check);
 
-void mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
+void mod_node_page_state_check(struct pglist_data *pgdat, enum node_stat_item item,
 					long delta)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
-	__mod_node_page_state(pgdat, item, delta);
+	__mod_node_page_state_check(pgdat, item, delta);
 	local_irq_restore(flags);
 }
-EXPORT_SYMBOL(mod_node_page_state);
+EXPORT_SYMBOL(mod_node_page_state_check);
 
-void inc_node_page_state(struct page *page, enum node_stat_item item)
+void inc_node_page_state_check(struct page *page, enum node_stat_item item)
 {
 	unsigned long flags;
 	struct pglist_data *pgdat;
 
 	pgdat = page_pgdat(page);
 	local_irq_save(flags);
-	__inc_node_state(pgdat, item);
+	__inc_node_state_check(pgdat, item);
 	local_irq_restore(flags);
 }
-EXPORT_SYMBOL(inc_node_page_state);
+EXPORT_SYMBOL(inc_node_page_state_check);
 
-void dec_node_page_state(struct page *page, enum node_stat_item item)
+void dec_node_page_state_check(struct page *page, enum node_stat_item item)
 {
 	unsigned long flags;
 
 	local_irq_save(flags);
-	__dec_node_page_state(page, item);
+	__dec_node_page_state_check(page, item);
 	local_irq_restore(flags);
 }
-EXPORT_SYMBOL(dec_node_page_state);
+EXPORT_SYMBOL(dec_node_page_state_check);
 #endif
 
 /*
@@ -775,7 +775,7 @@ void drain_zonestat(struct zone *zone, struct per_cpu_pageset *pset)
  * is called frequently in a NUMA machine, so try to be as
  * frugal as possible.
  */
-unsigned long sum_zone_node_page_state(int node,
+unsigned long sum_zone_node_page_state_check(int node,
 				 enum zone_stat_item item)
 {
 	struct zone *zones = NODE_DATA(node)->node_zones;
@@ -791,7 +791,7 @@ unsigned long sum_zone_node_page_state(int node,
 /*
  * Determine the per node value of a stat item.
  */
-unsigned long node_page_state(struct pglist_data *pgdat,
+unsigned long node_page_state_check(struct pglist_data *pgdat,
 				enum node_stat_item item)
 {
 	long x = atomic_long_read(&pgdat->vm_stat[item]);
-- 
2.9.0


* Re: [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state
  2016-06-23 13:17   ` Arnd Bergmann
  2016-06-23 13:18     ` [RFC, DEBUGGING v2 " Arnd Bergmann
@ 2016-06-23 13:51     ` Mel Gorman
  2016-06-23 15:56       ` Arnd Bergmann
  1 sibling, 1 reply; 9+ messages in thread
From: Mel Gorman @ 2016-06-23 13:51 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
	linux-mm, linux-kernel

On Thu, Jun 23, 2016 at 03:17:43PM +0200, Arnd Bergmann wrote:
> > I have an alternative fix for this in a private tree. For now, I've asked
> > Andrew to withdraw the series entirely as there are non-trivial collisions
> > with OOM detection rework and huge page support for tmpfs.  It'll be easier
> > and safer to resolve this outside of mmotm as it'll require a full round
> > of testing which takes 3-4 days.
> 
> Ok. I've done a new version of my debug patch now, will follow up here
> so you can do some testing on top of that as well if you like. We probably
> don't want to apply my patch for the type checking, but you might find it
> useful for your own testing.
> 

It is useful. After fixing up a bunch of problems manually, it
identified two more errors. I probably won't merge it but I'll hang on
to it during development.

Thanks!

-- 
Mel Gorman
SUSE Labs


* Re: [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state
  2016-06-23 13:51     ` [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Mel Gorman
@ 2016-06-23 15:56       ` Arnd Bergmann
  2016-06-23 16:39         ` Mel Gorman
  0 siblings, 1 reply; 9+ messages in thread
From: Arnd Bergmann @ 2016-06-23 15:56 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
	linux-mm, linux-kernel

On Thursday, June 23, 2016 2:51:11 PM CEST Mel Gorman wrote:
> On Thu, Jun 23, 2016 at 03:17:43PM +0200, Arnd Bergmann wrote:
> > > I have an alternative fix for this in a private tree. For now, I've asked
> > > Andrew to withdraw the series entirely as there are non-trivial collisions
> > > with OOM detection rework and huge page support for tmpfs.  It'll be easier
> > > and safer to resolve this outside of mmotm as it'll require a full round
> > > of testing which takes 3-4 days.
> > 
> > Ok. I've done a new version of my debug patch now, will follow up here
> > so you can do some testing on top of that as well if you like. We probably
> > don't want to apply my patch for the type checking, but you might find it
> > useful for your own testing.
> > 
> 
> It is useful. After fixing up a bunch of problems manually, it
> identified two more errors. I probably won't merge it but I'll hang on
> to it during development.

I'm glad it helps. On my randconfig build machine, I've also now run
into yet another finding that I originally didn't catch; I'm not sure
whether you've found this one already:

In file included from ../include/linux/mm.h:999:0,
                 from ../include/linux/highmem.h:7,
                 from ../drivers/staging/lustre/lustre/osc/../../include/linux/libcfs/linux/libcfs.h:46,
                 from ../drivers/staging/lustre/lustre/osc/../../include/linux/libcfs/libcfs.h:36,
                 from ../drivers/staging/lustre/lustre/osc/osc_cl_internal.h:45,
                 from ../drivers/staging/lustre/lustre/osc/osc_cache.c:40:
../drivers/staging/lustre/lustre/osc/osc_cache.c: In function 'osc_dec_unstable_pages':
../include/linux/vmstat.h:247:42: error: comparison between 'enum node_stat_item' and 'enum zone_stat_item' [-Werror=enum-compare]
  dec_zone_page_state_check(page, ((item) == (enum zone_stat_item )0) ? (item) : (item))
                                          ^
../drivers/staging/lustre/lustre/osc/osc_cache.c:1867:3: note: in expansion of macro 'dec_zone_page_state'
   dec_zone_page_state(desc->bd_iov[i].kiov_page, NR_UNSTABLE_NFS);
   ^~~~~~~~~~~~~~~~~~~
../drivers/staging/lustre/lustre/osc/osc_cache.c: In function 'osc_inc_unstable_pages':
../include/linux/vmstat.h:245:42: error: comparison between 'enum node_stat_item' and 'enum zone_stat_item' [-Werror=enum-compare]
  inc_zone_page_state_check(page, ((item) == (enum zone_stat_item )0) ? (item) : (item))
                                          ^
../drivers/staging/lustre/lustre/osc/osc_cache.c:1901:3: note: in expansion of macro 'inc_zone_page_state'
   inc_zone_page_state(desc->bd_iov[i].kiov_page, NR_UNSTABLE_NFS);
   ^~~~~~~~~~~~~~~~~~~

	Arnd


* Re: [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state
  2016-06-23 15:56       ` Arnd Bergmann
@ 2016-06-23 16:39         ` Mel Gorman
  0 siblings, 0 replies; 9+ messages in thread
From: Mel Gorman @ 2016-06-23 16:39 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Vlastimil Babka, Johannes Weiner, Rik van Riel, Andrew Morton,
	linux-mm, linux-kernel

On Thu, Jun 23, 2016 at 05:56:57PM +0200, Arnd Bergmann wrote:
> On Thursday, June 23, 2016 2:51:11 PM CEST Mel Gorman wrote:
> > On Thu, Jun 23, 2016 at 03:17:43PM +0200, Arnd Bergmann wrote:
> > > > I have an alternative fix for this in a private tree. For now, I've asked
> > > > Andrew to withdraw the series entirely as there are non-trivial collisions
> > > > with OOM detection rework and huge page support for tmpfs.  It'll be easier
> > > > and safer to resolve this outside of mmotm as it'll require a full round
> > > > of testing which takes 3-4 days.
> > > 
> > > Ok. I've done a new version of my debug patch now, will follow up here
> > > so you can do some testing on top of that as well if you like. We probably
> > > don't want to apply my patch for the type checking, but you might find it
> > > useful for your own testing.
> > > 
> > 
> > It is useful. After fixing up a bunch of problems manually, it
> > identified two more errors. I probably won't merge it but I'll hang on
> > to it during development.
> 
> I'm glad it helps. On my randconfig build machine, I've also now run
> into yet another finding that I originally didn't catch; I'm not sure
> whether you've found this one already:
> 

It's corrected in my current working tree. Thanks for continuing to
check.

-- 
Mel Gorman
SUSE Labs



Thread overview: 9+ messages
2016-06-23 10:05 [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Arnd Bergmann
2016-06-23 10:05 ` [RFC, DEBUGGING 2/2] mm: add type checking for page state functions Arnd Bergmann
2016-06-23 10:41 ` [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Mel Gorman
2016-06-23 13:17   ` Arnd Bergmann
2016-06-23 13:18     ` [RFC, DEBUGGING v2 " Arnd Bergmann
2016-06-23 13:18       ` [RFC, DEBUGGING v2 2/2] mm: add type checking for page state functions Arnd Bergmann
2016-06-23 13:51     ` [RFC, DEBUGGING 1/2] mm: pass NR_FILE_PAGES/NR_SHMEM into node_page_state Mel Gorman
2016-06-23 15:56       ` Arnd Bergmann
2016-06-23 16:39         ` Mel Gorman
