* [PATCH v2 0/9] mm: zone & pgdat accessors plus some cleanup
From: Cody P Schafer @ 2013-01-17 22:52 UTC (permalink / raw)
  To: Linux MM, David Hansen; +Cc: LKML, Andrew Morton, Catalin Marinas

Summaries:
1 - avoid repeating the config-option check for whether the section number is
    stored in page flags by adding a define.
2 - add zone_end_pfn() and zone_spans_pfn() and switch users over to them.
3 - add zone_is_initialized() & zone_is_empty().
4 - add a VM_BUG_ON using zone_is_initialized() in __free_one_page().
5 - add pgdat_end_pfn() and pgdat_is_empty().
6 - add a debugging message to the VM_BUG_ON check.
7 - add ensure_zone_is_initialized() (for memory_hotplug).
8 - use the above addition in memory_hotplug.
9 - use pgdat_end_pfn().

As a general concern: spanned_pages & start_pfn (in pgdat & zone) are supposed
to be read under a seqlock (memory_hotplug can change them), but very few (only
one?) of their readers appear to actually take it.
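
For reference, a minimal sketch of what a locked reader looks like, using the
zone_span_seqbegin()/zone_span_seqretry() helpers from
include/linux/memory_hotplug.h; this is the same pattern
page_outside_zone_boundaries() in mm/page_alloc.c already uses, and the
function name here is made up for illustration:

	/*
	 * Illustrative only - not part of this series. Reads the zone span
	 * under zone->span_seqlock so memory_hotplug can't resize the zone
	 * mid-read.
	 */
	static bool zone_contains_pfn_locked(struct zone *zone, unsigned long pfn)
	{
		unsigned long start_pfn, nr_pages;
		unsigned seq;

		do {
			seq = zone_span_seqbegin(zone);
			start_pfn = zone->zone_start_pfn;
			nr_pages = zone->spanned_pages;
		} while (zone_span_seqretry(zone, seq));

		return start_pfn <= pfn && pfn < start_pfn + nr_pages;
	}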

--

Since v1:
 - drop zone+pgdat growth factoring (I use this in some WIP code to reassign
   the NUMA node a page belongs to; it will be sent with that patchset)
 - merge zone_end_pfn() & zone_spans_pfn() introduction & usage
 - split zone_is_initialized() & zone_is_empty() out from the above.
 - add a missing semicolon

 include/linux/mm.h     |  8 ++++++--
 include/linux/mmzone.h | 34 +++++++++++++++++++++++++++++----
 mm/compaction.c        | 10 +++++-----
 mm/kmemleak.c          |  5 ++---
 mm/memory_hotplug.c    | 52 ++++++++++++++++++++++++++------------------------
 mm/page_alloc.c        | 31 +++++++++++++++++-------------
 mm/vmstat.c            |  2 +-
 7 files changed, 89 insertions(+), 53 deletions(-)


* [PATCH 1/9] mm: add SECTION_IN_PAGE_FLAGS
From: Cody P Schafer @ 2013-01-17 22:52 UTC (permalink / raw)
  To: Linux MM, David Hansen
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Instead of directly testing a combination of config options to determine
whether the section number is stored in page->flags, add a macro that names
the condition.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 include/linux/mm.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 66e2f7c..ef69564 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -625,6 +625,10 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
 #define NODE_NOT_IN_PAGE_FLAGS
 #endif
 
+#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
+#define SECTION_IN_PAGE_FLAGS
+#endif
+
 /*
  * Define the bit shifts to access each section.  For non-existent
  * sections we define the shift as 0; that plus a 0 mask ensures
@@ -727,7 +731,7 @@ static inline struct zone *page_zone(const struct page *page)
 	return &NODE_DATA(page_to_nid(page))->node_zones[page_zonenum(page)];
 }
 
-#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
+#ifdef SECTION_IN_PAGE_FLAGS
 static inline void set_page_section(struct page *page, unsigned long section)
 {
 	page->flags &= ~(SECTIONS_MASK << SECTIONS_PGSHIFT);
@@ -757,7 +761,7 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
 {
 	set_page_zone(page, zone);
 	set_page_node(page, node);
-#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
+#ifdef SECTION_IN_PAGE_FLAGS
 	set_page_section(page, pfn_to_section_nr(pfn));
 #endif
 }
-- 
1.8.0.3


* [PATCH 2/9] mm: add & use zone_end_pfn() and zone_spans_pfn()
From: Cody P Schafer @ 2013-01-17 22:52 UTC (permalink / raw)
  To: Linux MM, David Hansen
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Add 2 helpers (zone_end_pfn() and zone_spans_pfn()) to reduce code
duplication.

This also switches to using them in compaction (where an additional
variable needed to be renamed), page_alloc, vmstat, memory_hotplug,
and kmemleak.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---

Note that in compaction.c I avoid calling zone_end_pfn() repeatedly because I
expect that at some point the synchronization issues with start_pfn &
spanned_pages will need fixing, either by actually using the seqlock or by
clever memory barrier usage.
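
To illustrate the intended calling convention (zone_end_pfn() returns the first
pfn past the zone, i.e. the end is exclusive), here is a minimal iteration
sketch; it mirrors the mark_free_pages() loop converted below:

	unsigned long pfn;
	unsigned long end_pfn = zone_end_pfn(zone);	/* exclusive */

	for (pfn = zone->zone_start_pfn; pfn < end_pfn; pfn++) {
		if (!pfn_valid(pfn))
			continue;
		/* every valid pfn in [zone_start_pfn, end_pfn) is spanned */
		VM_BUG_ON(!zone_spans_pfn(zone, pfn));
		/* ... operate on pfn_to_page(pfn) ... */
	}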

 include/linux/mmzone.h | 10 ++++++++++
 mm/compaction.c        | 10 +++++-----
 mm/kmemleak.c          |  5 ++---
 mm/memory_hotplug.c    | 10 +++++-----
 mm/page_alloc.c        | 22 +++++++++-------------
 mm/vmstat.c            |  2 +-
 6 files changed, 32 insertions(+), 27 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 73b64a3..d91d964 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -543,6 +543,16 @@ static inline int zone_is_oom_locked(const struct zone *zone)
 	return test_bit(ZONE_OOM_LOCKED, &zone->flags);
 }
 
+static inline unsigned long zone_end_pfn(const struct zone *zone)
+{
+	return zone->zone_start_pfn + zone->spanned_pages;
+}
+
+static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
+{
+	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
+}
+
 /*
  * The "priority" of VM scanning is how much of the queues we will scan in one
  * go. A value of 12 for DEF_PRIORITY implies that we will scan 1/4096th of the
diff --git a/mm/compaction.c b/mm/compaction.c
index c62bd06..ea66be3 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -85,7 +85,7 @@ static inline bool isolation_suitable(struct compact_control *cc,
 static void __reset_isolation_suitable(struct zone *zone)
 {
 	unsigned long start_pfn = zone->zone_start_pfn;
-	unsigned long end_pfn = zone->zone_start_pfn + zone->spanned_pages;
+	unsigned long end_pfn = zone_end_pfn(zone);
 	unsigned long pfn;
 
 	zone->compact_cached_migrate_pfn = start_pfn;
@@ -644,7 +644,7 @@ static void isolate_freepages(struct zone *zone,
 				struct compact_control *cc)
 {
 	struct page *page;
-	unsigned long high_pfn, low_pfn, pfn, zone_end_pfn, end_pfn;
+	unsigned long high_pfn, low_pfn, pfn, z_end_pfn, end_pfn;
 	int nr_freepages = cc->nr_freepages;
 	struct list_head *freelist = &cc->freepages;
 
@@ -663,7 +663,7 @@ static void isolate_freepages(struct zone *zone,
 	 */
 	high_pfn = min(low_pfn, pfn);
 
-	zone_end_pfn = zone->zone_start_pfn + zone->spanned_pages;
+	z_end_pfn = zone_end_pfn(zone);
 
 	/*
 	 * Isolate free pages until enough are available to migrate the
@@ -706,7 +706,7 @@ static void isolate_freepages(struct zone *zone,
 		 * only scans within a pageblock
 		 */
 		end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
-		end_pfn = min(end_pfn, zone_end_pfn);
+		end_pfn = min(end_pfn, z_end_pfn);
 		isolated = isolate_freepages_block(cc, pfn, end_pfn,
 						   freelist, false);
 		nr_freepages += isolated;
@@ -920,7 +920,7 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
 {
 	int ret;
 	unsigned long start_pfn = zone->zone_start_pfn;
-	unsigned long end_pfn = zone->zone_start_pfn + zone->spanned_pages;
+	unsigned long end_pfn = zone_end_pfn(zone);
 
 	ret = compaction_suitable(zone, cc->order);
 	switch (ret) {
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 752a705..83dd5fb 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1300,9 +1300,8 @@ static void kmemleak_scan(void)
 	 */
 	lock_memory_hotplug();
 	for_each_online_node(i) {
-		pg_data_t *pgdat = NODE_DATA(i);
-		unsigned long start_pfn = pgdat->node_start_pfn;
-		unsigned long end_pfn = start_pfn + pgdat->node_spanned_pages;
+		unsigned long start_pfn = node_start_pfn(i);
+		unsigned long end_pfn = node_end_pfn(i);
 		unsigned long pfn;
 
 		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index d04ed87..c62bcca 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -270,7 +270,7 @@ static int __meminit move_pfn_range_left(struct zone *z1, struct zone *z2,
 	pgdat_resize_lock(z1->zone_pgdat, &flags);
 
 	/* can't move pfns which are higher than @z2 */
-	if (end_pfn > z2->zone_start_pfn + z2->spanned_pages)
+	if (end_pfn > zone_end_pfn(z2))
 		goto out_fail;
 	/* the move out part mast at the left most of @z2 */
 	if (start_pfn > z2->zone_start_pfn)
@@ -286,7 +286,7 @@ static int __meminit move_pfn_range_left(struct zone *z1, struct zone *z2,
 		z1_start_pfn = start_pfn;
 
 	resize_zone(z1, z1_start_pfn, end_pfn);
-	resize_zone(z2, end_pfn, z2->zone_start_pfn + z2->spanned_pages);
+	resize_zone(z2, end_pfn, zone_end_pfn(z2));
 
 	pgdat_resize_unlock(z1->zone_pgdat, &flags);
 
@@ -318,15 +318,15 @@ static int __meminit move_pfn_range_right(struct zone *z1, struct zone *z2,
 	if (z1->zone_start_pfn > start_pfn)
 		goto out_fail;
 	/* the move out part mast at the right most of @z1 */
-	if (z1->zone_start_pfn + z1->spanned_pages >  end_pfn)
+	if (zone_end_pfn(z1) >  end_pfn)
 		goto out_fail;
 	/* must included/overlap */
-	if (start_pfn >= z1->zone_start_pfn + z1->spanned_pages)
+	if (start_pfn >= zone_end_pfn(z1))
 		goto out_fail;
 
 	/* use end_pfn for z2's end_pfn if z2 is empty */
 	if (z2->spanned_pages)
-		z2_end_pfn = z2->zone_start_pfn + z2->spanned_pages;
+		z2_end_pfn = zone_end_pfn(z2);
 	else
 		z2_end_pfn = end_pfn;
 
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df2022f..e2574ea 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -242,9 +242,7 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 
 	do {
 		seq = zone_span_seqbegin(zone);
-		if (pfn >= zone->zone_start_pfn + zone->spanned_pages)
-			ret = 1;
-		else if (pfn < zone->zone_start_pfn)
+		if (!zone_spans_pfn(zone, pfn))
 			ret = 1;
 	} while (zone_span_seqretry(zone, seq));
 
@@ -976,9 +974,9 @@ int move_freepages_block(struct zone *zone, struct page *page,
 	end_pfn = start_pfn + pageblock_nr_pages - 1;
 
 	/* Do not cross zone boundaries */
-	if (start_pfn < zone->zone_start_pfn)
+	if (!zone_spans_pfn(zone, start_pfn))
 		start_page = page;
-	if (end_pfn >= zone->zone_start_pfn + zone->spanned_pages)
+	if (!zone_spans_pfn(zone, end_pfn))
 		return 0;
 
 	return move_freepages(zone, start_page, end_page, migratetype);
@@ -1272,7 +1270,7 @@ void mark_free_pages(struct zone *zone)
 
 	spin_lock_irqsave(&zone->lock, flags);
 
-	max_zone_pfn = zone->zone_start_pfn + zone->spanned_pages;
+	max_zone_pfn = zone_end_pfn(zone);
 	for (pfn = zone->zone_start_pfn; pfn < max_zone_pfn; pfn++)
 		if (pfn_valid(pfn)) {
 			struct page *page = pfn_to_page(pfn);
@@ -3775,7 +3773,7 @@ static void setup_zone_migrate_reserve(struct zone *zone)
 	 * the block.
 	 */
 	start_pfn = zone->zone_start_pfn;
-	end_pfn = start_pfn + zone->spanned_pages;
+	end_pfn = zone_end_pfn(zone);
 	start_pfn = roundup(start_pfn, pageblock_nr_pages);
 	reserve = roundup(min_wmark_pages(zone), pageblock_nr_pages) >>
 							pageblock_order;
@@ -3889,7 +3887,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
 		 * pfn out of zone.
 		 */
 		if ((z->zone_start_pfn <= pfn)
-		    && (pfn < z->zone_start_pfn + z->spanned_pages)
+		    && (pfn < zone_end_pfn(z))
 		    && !(pfn & (pageblock_nr_pages - 1)))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
 
@@ -4617,7 +4615,7 @@ static void __init_refok alloc_node_mem_map(struct pglist_data *pgdat)
 		 * for the buddy allocator to function correctly.
 		 */
 		start = pgdat->node_start_pfn & ~(MAX_ORDER_NR_PAGES - 1);
-		end = pgdat->node_start_pfn + pgdat->node_spanned_pages;
+		end = pgdat_end_pfn(pgdat);
 		end = ALIGN(end, MAX_ORDER_NR_PAGES);
 		size =  (end - start) * sizeof(struct page);
 		map = alloc_remap(pgdat->node_id, size);
@@ -5637,8 +5635,7 @@ void set_pageblock_flags_group(struct page *page, unsigned long flags,
 	pfn = page_to_pfn(page);
 	bitmap = get_pageblock_bitmap(zone, pfn);
 	bitidx = pfn_to_bitidx(zone, pfn);
-	VM_BUG_ON(pfn < zone->zone_start_pfn);
-	VM_BUG_ON(pfn >= zone->zone_start_pfn + zone->spanned_pages);
+	VM_BUG_ON(!zone_spans_pfn(zone, pfn));
 
 	for (; start_bitidx <= end_bitidx; start_bitidx++, value <<= 1)
 		if (flags & value)
@@ -5736,8 +5733,7 @@ bool is_pageblock_removable_nolock(struct page *page)
 
 	zone = page_zone(page);
 	pfn = page_to_pfn(page);
-	if (zone->zone_start_pfn > pfn ||
-			zone->zone_start_pfn + zone->spanned_pages <= pfn)
+	if (!zone_spans_pfn(zone, pfn))
 		return false;
 
 	return !has_unmovable_pages(zone, page, 0, true);
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 9800306..ca99641 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -890,7 +890,7 @@ static void pagetypeinfo_showblockcount_print(struct seq_file *m,
 	int mtype;
 	unsigned long pfn;
 	unsigned long start_pfn = zone->zone_start_pfn;
-	unsigned long end_pfn = start_pfn + zone->spanned_pages;
+	unsigned long end_pfn = zone_end_pfn(zone);
 	unsigned long count[MIGRATE_TYPES] = { 0, };
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn += pageblock_nr_pages) {
-- 
1.8.0.3


* [PATCH 3/9] mm: add zone_is_empty() and zone_is_initialized()
From: Cody P Schafer @ 2013-01-17 22:52 UTC (permalink / raw)
  To: Linux MM, David Hansen
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Factoring out these two checks makes it clearer what we are actually
checking for.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 include/linux/mmzone.h | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index d91d964..696cb7c 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -553,6 +553,16 @@ static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
 	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
 }
 
+static inline bool zone_is_initialized(struct zone *zone)
+{
+	return !!zone->wait_table;
+}
+
+static inline bool zone_is_empty(struct zone *zone)
+{
+	return zone->spanned_pages == 0;
+}
+
 /*
  * The "priority" of VM scanning is how much of the queues we will scan in one
  * go. A value of 12 for DEF_PRIORITY implies that we will scan 1/4096th of the
-- 
1.8.0.3


* [PATCH 4/9] mm/page_alloc: add a VM_BUG in __free_one_page() if the zone is uninitialized.
From: Cody P Schafer @ 2013-01-17 22:52 UTC (permalink / raw)
  To: Linux MM, David Hansen
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer, Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

Freeing pages to uninitialized zones is not handled by
__free_one_page(), and should never happen when the code is correct.

Ran into this while writing some code that dynamically onlines extra
zones.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/page_alloc.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e2574ea..f8ed277 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -530,6 +530,8 @@ static inline void __free_one_page(struct page *page,
 	unsigned long uninitialized_var(buddy_idx);
 	struct page *buddy;
 
+	VM_BUG_ON(!zone_is_initialized(zone));
+
 	if (unlikely(PageCompound(page)))
 		if (unlikely(destroy_compound_page(page, order)))
 			return;
-- 
1.8.0.3


* [PATCH 5/9] mmzone: add pgdat_{end_pfn,is_empty}() helpers & consolidate.
From: Cody P Schafer @ 2013-01-17 22:52 UTC (permalink / raw)
  To: Linux MM, David Hansen
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer, Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

Add pgdat_end_pfn() and pgdat_is_empty() helpers which match the similar
zone_*() functions.

Change node_end_pfn() to be a wrapper of pgdat_end_pfn().

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 include/linux/mmzone.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 696cb7c..d7abff0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -772,11 +772,17 @@ typedef struct pglist_data {
 #define nid_page_nr(nid, pagenr) 	pgdat_page_nr(NODE_DATA(nid),(pagenr))
 
 #define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)
+#define node_end_pfn(nid) pgdat_end_pfn(NODE_DATA(nid))
 
-#define node_end_pfn(nid) ({\
-	pg_data_t *__pgdat = NODE_DATA(nid);\
-	__pgdat->node_start_pfn + __pgdat->node_spanned_pages;\
-})
+static inline unsigned long pgdat_end_pfn(pg_data_t *pgdat)
+{
+	return pgdat->node_start_pfn + pgdat->node_spanned_pages;
+}
+
+static inline bool pgdat_is_empty(pg_data_t *pgdat)
+{
+	return !pgdat->node_start_pfn && !pgdat->node_spanned_pages;
+}
 
 #include <linux/memory_hotplug.h>
 
-- 
1.8.0.3


* [PATCH 6/9] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries()
From: Cody P Schafer @ 2013-01-17 22:52 UTC (permalink / raw)
  To: Linux MM, David Hansen
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Add a debug message which prints when a page is found outside of the
boundaries of the zone it should belong to. Format is:
	"page $pfn outside zone [ $start_pfn - $end_pfn ]"

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/page_alloc.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f8ed277..f1783cf 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -239,13 +239,20 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
 	int ret = 0;
 	unsigned seq;
 	unsigned long pfn = page_to_pfn(page);
+	unsigned long sp, start_pfn;
 
 	do {
 		seq = zone_span_seqbegin(zone);
+		start_pfn = zone->zone_start_pfn;
+		sp = zone->spanned_pages;
 		if (!zone_spans_pfn(zone, pfn))
 			ret = 1;
 	} while (zone_span_seqretry(zone, seq));
 
+	if (ret)
+		pr_debug("page %lu outside zone [ %lu - %lu ]\n",
+			pfn, start_pfn, start_pfn + sp);
+
 	return ret;
 }
 
-- 
1.8.0.3


* [PATCH 7/9] mm: add helper ensure_zone_is_initialized()
From: Cody P Schafer @ 2013-01-17 22:52 UTC (permalink / raw)
  To: Linux MM, David Hansen
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer, Cody P Schafer

From: Cody P Schafer <jmesmon@gmail.com>

ensure_zone_is_initialized() checks whether a zone is in an empty,
uninitialized state (as typically occurs after the zone is created during
memory hotplug) and, if so, calls init_currently_empty_zone() to initialize
it.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/memory_hotplug.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c62bcca..bede456 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -253,6 +253,17 @@ static void fix_zone_id(struct zone *zone, unsigned long start_pfn,
 		set_page_links(pfn_to_page(pfn), zid, nid, pfn);
 }
 
+/* Can fail with -ENOMEM from allocating a wait table with vmalloc() or
+ * alloc_bootmem_node_nopanic() */
+static int __ref ensure_zone_is_initialized(struct zone *zone,
+			unsigned long start_pfn, unsigned long num_pages)
+{
+	if (!zone_is_initialized(zone))
+		return init_currently_empty_zone(zone, start_pfn, num_pages,
+						 MEMMAP_HOTPLUG);
+	return 0;
+}
+
 static int __meminit move_pfn_range_left(struct zone *z1, struct zone *z2,
 		unsigned long start_pfn, unsigned long end_pfn)
 {
-- 
1.8.0.3


* [PATCH 8/9] mm/memory_hotplug: use ensure_zone_is_initialized()
From: Cody P Schafer @ 2013-01-17 22:53 UTC (permalink / raw)
  To: Linux MM, David Hansen
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Remove open coding of ensure_zone_is_initialized().

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/memory_hotplug.c | 29 ++++++++++-------------------
 1 file changed, 10 insertions(+), 19 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index bede456..016944f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -271,12 +271,9 @@ static int __meminit move_pfn_range_left(struct zone *z1, struct zone *z2,
 	unsigned long flags;
 	unsigned long z1_start_pfn;
 
-	if (!z1->wait_table) {
-		ret = init_currently_empty_zone(z1, start_pfn,
-			end_pfn - start_pfn, MEMMAP_HOTPLUG);
-		if (ret)
-			return ret;
-	}
+	ret = ensure_zone_is_initialized(z1, start_pfn, end_pfn - start_pfn);
+	if (ret)
+		return ret;
 
 	pgdat_resize_lock(z1->zone_pgdat, &flags);
 
@@ -316,12 +313,9 @@ static int __meminit move_pfn_range_right(struct zone *z1, struct zone *z2,
 	unsigned long flags;
 	unsigned long z2_end_pfn;
 
-	if (!z2->wait_table) {
-		ret = init_currently_empty_zone(z2, start_pfn,
-			end_pfn - start_pfn, MEMMAP_HOTPLUG);
-		if (ret)
-			return ret;
-	}
+	ret = ensure_zone_is_initialized(z2, start_pfn, end_pfn - start_pfn);
+	if (ret)
+		return ret;
 
 	pgdat_resize_lock(z1->zone_pgdat, &flags);
 
@@ -374,16 +368,13 @@ static int __meminit __add_zone(struct zone *zone, unsigned long phys_start_pfn)
 	int nid = pgdat->node_id;
 	int zone_type;
 	unsigned long flags;
+	int ret;
 
 	zone_type = zone - pgdat->node_zones;
-	if (!zone->wait_table) {
-		int ret;
+	ret = ensure_zone_is_initialized(zone, phys_start_pfn, nr_pages);
+	if (ret)
+		return ret;
 
-		ret = init_currently_empty_zone(zone, phys_start_pfn,
-						nr_pages, MEMMAP_HOTPLUG);
-		if (ret)
-			return ret;
-	}
 	pgdat_resize_lock(zone->zone_pgdat, &flags);
 	grow_zone_span(zone, phys_start_pfn, phys_start_pfn + nr_pages);
 	grow_pgdat_span(zone->zone_pgdat, phys_start_pfn,
-- 
1.8.0.3


* [PATCH 9/9] mm/memory_hotplug: use pgdat_end_pfn() instead of open coding the same.
From: Cody P Schafer @ 2013-01-17 22:53 UTC (permalink / raw)
  To: Linux MM, David Hansen
  Cc: LKML, Andrew Morton, Catalin Marinas, Cody P Schafer

Replace open coded pgdat_end_pfn() with helper function.

Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
---
 mm/memory_hotplug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 016944f..6eb93a5 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -189,7 +189,7 @@ void register_page_bootmem_info_node(struct pglist_data *pgdat)
 	}
 
 	pfn = pgdat->node_start_pfn;
-	end_pfn = pfn + pgdat->node_spanned_pages;
+	end_pfn = pgdat_end_pfn(pgdat);
 
 	/* register_section info */
 	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-- 
1.8.0.3


* Re: [PATCH 1/9] mm: add SECTION_IN_PAGE_FLAGS
From: Andrew Morton @ 2013-02-02  0:20 UTC (permalink / raw)
  To: Cody P Schafer; +Cc: Linux MM, David Hansen, LKML, Catalin Marinas

On Thu, 17 Jan 2013 14:52:53 -0800
Cody P Schafer <cody@linux.vnet.ibm.com> wrote:

> Instead of directly utilizing a combination of config options to determine this,
> add a macro to specifically address it.
> 
> ...
>
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -625,6 +625,10 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
>  #define NODE_NOT_IN_PAGE_FLAGS
>  #endif
>  
> +#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
> +#define SECTION_IN_PAGE_FLAGS
> +#endif

We could do this in Kconfig itself, in the definition of a new
CONFIG_SECTION_IN_PAGE_FLAGS.

I'm not sure that I like that sort of thing a lot though - it's rather a
pain to have to switch from .[ch] over to Kconfig to find the
definitions of things.  I should get off my tail and teach my ctags
scripts to handle this.


* Re: [PATCH 5/9] mmzone: add pgdat_{end_pfn,is_empty}() helpers & consolidate.
From: Andrew Morton @ 2013-02-02  0:23 UTC (permalink / raw)
  To: Cody P Schafer
  Cc: Linux MM, David Hansen, LKML, Catalin Marinas, Cody P Schafer

On Thu, 17 Jan 2013 14:52:57 -0800
Cody P Schafer <cody@linux.vnet.ibm.com> wrote:

> From: Cody P Schafer <jmesmon@gmail.com>
> 
> Add pgdat_end_pfn() and pgdat_is_empty() helpers which match the similar
> zone_*() functions.
> 
> Change node_end_pfn() to be a wrapper of pgdat_end_pfn().
> 
> ...
>
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -772,11 +772,17 @@ typedef struct pglist_data {
>  #define nid_page_nr(nid, pagenr) 	pgdat_page_nr(NODE_DATA(nid),(pagenr))
>  
>  #define node_start_pfn(nid)	(NODE_DATA(nid)->node_start_pfn)
> +#define node_end_pfn(nid) pgdat_end_pfn(NODE_DATA(nid))

I wonder if these could be implemented in nice C code rather than nasty
cpp code.

> -#define node_end_pfn(nid) ({\
> -	pg_data_t *__pgdat = NODE_DATA(nid);\
> -	__pgdat->node_start_pfn + __pgdat->node_spanned_pages;\
> -})
> +static inline unsigned long pgdat_end_pfn(pg_data_t *pgdat)
> +{
> +	return pgdat->node_start_pfn + pgdat->node_spanned_pages;
> +}

It wouldn't hurt to add a little comment pointing out that this returns
"end pfn plus one", or similar.  ie, it is exclusive, not inclusive. 
Ditto the "zone_*() functions", if needed.
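
A minimal sketch of the suggested comment (illustrative only, not what was
posted):

	/*
	 * pgdat_end_pfn() returns the first pfn past the node, i.e. the end
	 * is exclusive: the node spans [node_start_pfn, pgdat_end_pfn()).
	 */
	static inline unsigned long pgdat_end_pfn(pg_data_t *pgdat)
	{
		return pgdat->node_start_pfn + pgdat->node_spanned_pages;
	}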

> +static inline bool pgdat_is_empty(pg_data_t *pgdat)
> +{
> +	return !pgdat->node_start_pfn && !pgdat->node_spanned_pages;
> +}


* Re: [PATCH 6/9] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries()
From: Andrew Morton @ 2013-02-02  0:28 UTC (permalink / raw)
  To: Cody P Schafer; +Cc: Linux MM, David Hansen, LKML, Catalin Marinas

On Thu, 17 Jan 2013 14:52:58 -0800
Cody P Schafer <cody@linux.vnet.ibm.com> wrote:

> Add a debug message which prints when a page is found outside of the
> boundaries of the zone it should belong to. Format is:
> 	"page $pfn outside zone [ $start_pfn - $end_pfn ]"
> 
> Signed-off-by: Cody P Schafer <cody@linux.vnet.ibm.com>
> ---
>  mm/page_alloc.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index f8ed277..f1783cf 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -239,13 +239,20 @@ static int page_outside_zone_boundaries(struct zone *zone, struct page *page)
>  	int ret = 0;
>  	unsigned seq;
>  	unsigned long pfn = page_to_pfn(page);
> +	unsigned long sp, start_pfn;
>  
>  	do {
>  		seq = zone_span_seqbegin(zone);
> +		start_pfn = zone->zone_start_pfn;
> +		sp = zone->spanned_pages;
>  		if (!zone_spans_pfn(zone, pfn))
>  			ret = 1;
>  	} while (zone_span_seqretry(zone, seq));
>  
> +	if (ret)
> +		pr_debug("page %lu outside zone [ %lu - %lu ]\n",
> +			pfn, start_pfn, start_pfn + sp);
> +
>  	return ret;
>  }

As this condition leads to a VM_BUG_ON(), "pr_debug" seems rather wimpy
and I doubt if we need to be concerned about flooding the console.

I'll switch it to pr_err.
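
The result would be something like this (a sketch assuming only the printk
level changes):

	if (ret)
		pr_err("page %lu outside zone [ %lu - %lu ]\n",
			pfn, start_pfn, start_pfn + sp);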


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 6/9] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries()
  2013-02-02  0:28     ` Andrew Morton
@ 2013-02-02  0:29       ` Andrew Morton
  -1 siblings, 0 replies; 36+ messages in thread
From: Andrew Morton @ 2013-02-02  0:29 UTC (permalink / raw)
  To: Cody P Schafer, Linux MM, David Hansen, LKML, Catalin Marinas

On Fri, 1 Feb 2013 16:28:48 -0800
Andrew Morton <akpm@linux-foundation.org> wrote:

> > +	if (ret)
> > +		pr_debug("page %lu outside zone [ %lu - %lu ]\n",
> > +			pfn, start_pfn, start_pfn + sp);
> > +
> >  	return ret;
> >  }
> 
> As this condition leads to a VM_BUG_ON(), "pr_debug" seems rather wimpy
> and I doubt if we need to be concerned about flooding the console.
> 
> I'll switch it to pr_err.

otoh, as nobody has ever hit that VM_BUG_ON() (yes?), do we really need
the patch?


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 0/9] mm: zone & pgdat accessors plus some cleanup
  2013-01-17 22:52 ` Cody P Schafer
@ 2013-02-02  0:39   ` Andrew Morton
  -1 siblings, 0 replies; 36+ messages in thread
From: Andrew Morton @ 2013-02-02  0:39 UTC (permalink / raw)
  To: Cody P Schafer
  Cc: Linux MM, David Hansen, LKML, Catalin Marinas, Jiang Liu,
	Wen Congyang, Lai Jiangshan, Wu Jianguo, Tang Chen

On Thu, 17 Jan 2013 14:52:52 -0800
Cody P Schafer <cody@linux.vnet.ibm.com> wrote:

> Summaries:
> 1 - avoid repeating checks for section in page flags by adding a define.
> 2 - add & switch to zone_end_pfn() and zone_spans_pfn()
> 3 - adds zone_is_initialized() & zone_is_empty()
> 4 - adds a VM_BUG using zone_is_initialized() in __free_one_page()
> 5 - add pgdat_end_pfn() and pgdat_is_empty()
> 6 - add debugging message to VM_BUG check.
> 7 - add ensure_zone_is_initialized() (for memory_hotplug)
> 8 - use the above addition in memory_hotplug
> 9 - use pgdat_end_pfn()

Well that's a nice little patchset.

Some of the patches were marked From:cody@linux.vnet.ibm.com and others
were From:jmesmon@gmail.com.  This is strange.  If you want me to fix
that up, please let me know which is preferred.

> As a general concern: spanned_pages & start_pfn (in pgdat & zone) are supposed
> to be locked (via a seqlock) when read (due to changes to them via
> memory_hotplug), but very few (only 1?) of their users appear to actually lock
> them.

OK, thanks.  Perhaps this is something which the memory-hotplug
developers could take a look at?
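
For reference, the locked-read pattern the cover letter alludes to uses the
zone span seqlock helpers; a sketch of the reader side follows (the function
name snapshot_zone_span is hypothetical):

	static unsigned long snapshot_zone_span(struct zone *zone,
						unsigned long *start_pfn)
	{
		unsigned long end_pfn;
		unsigned seq;

		do {
			/* retry if a hotplug writer changed the span meanwhile */
			seq = zone_span_seqbegin(zone);
			*start_pfn = zone->zone_start_pfn;
			end_pfn = zone_end_pfn(zone);
		} while (zone_span_seqretry(zone, seq));

		return end_pfn;
	}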


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 6/9] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries()
  2013-02-02  0:29       ` Andrew Morton
@ 2013-02-05 22:20         ` Cody P Schafer
  -1 siblings, 0 replies; 36+ messages in thread
From: Cody P Schafer @ 2013-02-05 22:20 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux MM, David Hansen, LKML, Catalin Marinas

On 02/01/2013 04:29 PM, Andrew Morton wrote:
> On Fri, 1 Feb 2013 16:28:48 -0800
> Andrew Morton <akpm@linux-foundation.org> wrote:
>
>>> +	if (ret)
>>> +		pr_debug("page %lu outside zone [ %lu - %lu ]\n",
>>> +			pfn, start_pfn, start_pfn + sp);
>>> +
>>>   	return ret;
>>>   }
>>
>> As this condition leads to a VM_BUG_ON(), "pr_debug" seems rather wimpy
>> and I doubt if we need to be concerned about flooding the console.
>>
>> I'll switch it to pr_err.
>
> otoh, as nobody has ever hit that VM_BUG_ON() (yes?), do we really need
> the patch?

I've hit this bug while developing some code that moves pages between zones.

Since it helped me debug that issue in my own code, I can see how it might
help another developer as well.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH 1/9] mm: add SECTION_IN_PAGE_FLAGS
  2013-02-02  0:20     ` Andrew Morton
@ 2013-02-05 22:23       ` Cody P Schafer
  -1 siblings, 0 replies; 36+ messages in thread
From: Cody P Schafer @ 2013-02-05 22:23 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux MM, David Hansen, LKML, Catalin Marinas

On 02/01/2013 04:20 PM, Andrew Morton wrote:
> On Thu, 17 Jan 2013 14:52:53 -0800
> Cody P Schafer <cody@linux.vnet.ibm.com> wrote:
>
>> Instead of directly utilizing a combination of config options to determine this,
>> add a macro to specifically address it.
>>
>> ...
>>
>> --- a/include/linux/mm.h
>> +++ b/include/linux/mm.h
>> @@ -625,6 +625,10 @@ static inline pte_t maybe_mkwrite(pte_t pte, struct vm_area_struct *vma)
>>   #define NODE_NOT_IN_PAGE_FLAGS
>>   #endif
>>
>> +#if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
>> +#define SECTION_IN_PAGE_FLAGS
>> +#endif
>
> We could do this in Kconfig itself, in the definition of a new
> CONFIG_SECTION_IN_PAGE_FLAGS.

Yep, I only put it here because it "sounds" similar to
NODE_NOT_IN_PAGE_FLAGS, but (of course) NODE_NOT_IN_PAGE_FLAGS isn't
defined based on pure dependencies, while this is.
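
A Kconfig version would be roughly the following (hypothetical; the define
stayed in mm.h in this series):

	config SECTION_IN_PAGE_FLAGS
		def_bool y
		depends on SPARSEMEM && !SPARSEMEM_VMEMMAP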

> I'm not sure that I like that sort of thing a lot though - it's rather a
> pain to have to switch from .[ch] over to Kconfig to find the
> definitions of things.  I should get off my tail and teach my ctags
> scripts to handle this.




^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v2 0/9] mm: zone & pgdat accessors plus some cleanup
  2013-02-02  0:39   ` Andrew Morton
@ 2013-02-05 22:59     ` Cody P Schafer
  -1 siblings, 0 replies; 36+ messages in thread
From: Cody P Schafer @ 2013-02-05 22:59 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Linux MM, David Hansen, LKML, Catalin Marinas, Jiang Liu,
	Wen Congyang, Lai Jiangshan, Wu Jianguo, Tang Chen

On 02/01/2013 04:39 PM, Andrew Morton wrote:
> On Thu, 17 Jan 2013 14:52:52 -0800
> Cody P Schafer <cody@linux.vnet.ibm.com> wrote:
>
>> Summaries:
>> 1 - avoid repeating checks for section in page flags by adding a define.
>> 2 - add & switch to zone_end_pfn() and zone_spans_pfn()
>> 3 - adds zone_is_initialized() & zone_is_empty()
>> 4 - adds a VM_BUG using zone_is_initialized() in __free_one_page()
>> 5 - add pgdat_end_pfn() and pgdat_is_empty()
>> 6 - add debugging message to VM_BUG check.
>> 7 - add ensure_zone_is_initialized() (for memory_hotplug)
>> 8 - use the above addition in memory_hotplug
>> 9 - use pgdat_end_pfn()
>
> Well that's a nice little patchset.
>
> Some of the patches were marked From:cody@linux.vnet.ibm.com and others
> were From:jmesmon@gmail.com.  This is strange.  If you want me to fix
> that up, please let me know which is preferred.

They should all be "From:cody@linux.vnet.ibm.com"; the other address was me
messing up my gitconfig (which I've since fixed).

>> As a general concern: spanned_pages & start_pfn (in pgdat & zone) are supposed
>> to be locked (via a seqlock) when read (due to changes to them via
>> memory_hotplug), but very few (only 1?) of their users appear to actually lock
>> them.
>
> OK, thanks.  Perhaps this is something which the memory-hotplug
> developers could take a look at?

Yep. It's not immediately clear that not locking on read would do
terrible things, but at the least the documentation needs fixing, with an
explanation as to why the locking is not used in some (or all) places.
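
For comparison, the writer side in memory hotplug does take the locks; a
condensed sketch of the pattern used when a zone's span grows
(new_start_pfn/new_spanned_pages are placeholders):

	unsigned long flags;

	pgdat_resize_lock(zone->zone_pgdat, &flags);
	zone_span_writelock(zone);
	zone->zone_start_pfn = new_start_pfn;
	zone->spanned_pages = new_spanned_pages;
	zone_span_writeunlock(zone);
	pgdat_resize_unlock(zone->zone_pgdat, &flags);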


^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread (newest: 2013-02-05 22:59 UTC)

Thread overview: 36+ messages
2013-01-17 22:52 [PATCH v2 0/9] mm: zone & pgdat accessors plus some cleanup Cody P Schafer
2013-01-17 22:52 ` [PATCH 1/9] mm: add SECTION_IN_PAGE_FLAGS Cody P Schafer
2013-02-02  0:20   ` Andrew Morton
2013-02-05 22:23     ` Cody P Schafer
2013-01-17 22:52 ` [PATCH 2/9] mm: add & use zone_end_pfn() and zone_spans_pfn() Cody P Schafer
2013-01-17 22:52 ` [PATCH 3/9] mm: add zone_is_empty() and zone_is_initialized() Cody P Schafer
2013-01-17 22:52 ` [PATCH 4/9] mm/page_alloc: add a VM_BUG in __free_one_page() if the zone is uninitialized Cody P Schafer
2013-01-17 22:52 ` [PATCH 5/9] mmzone: add pgdat_{end_pfn,is_empty}() helpers & consolidate Cody P Schafer
2013-02-02  0:23   ` Andrew Morton
2013-01-17 22:52 ` [PATCH 6/9] mm/page_alloc: add informative debugging message in page_outside_zone_boundaries() Cody P Schafer
2013-02-02  0:28   ` Andrew Morton
2013-02-02  0:29     ` Andrew Morton
2013-02-05 22:20       ` Cody P Schafer
2013-01-17 22:52 ` [PATCH 7/9] mm: add helper ensure_zone_is_initialized() Cody P Schafer
2013-01-17 22:53 ` [PATCH 8/9] mm/memory_hotplug: use ensure_zone_is_initialized() Cody P Schafer
2013-01-17 22:53 ` [PATCH 9/9] mm/memory_hotplug: use pgdat_end_pfn() instead of open coding the same Cody P Schafer
2013-02-02  0:39 ` [PATCH v2 0/9] mm: zone & pgdat accessors plus some cleanup Andrew Morton
2013-02-05 22:59   ` Cody P Schafer
