* [PATCH v2 0/6] Reduce memory waste by page extension user
@ 2016-08-16  2:51 js1304
  2016-08-16  2:51 ` [PATCH v2 1/6] mm/debug_pagealloc: clean-up guard page handling code js1304
                   ` (6 more replies)
  0 siblings, 7 replies; 12+ messages in thread
From: js1304 @ 2016-08-16  2:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Minchan Kim, Michal Hocko, Sergey Senozhatsky,
	linux-kernel, linux-mm, Joonsoo Kim

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

v2:
Fix rebase mistake (per Vlastimil)
Rename some variable/function to prevent confusion (per Vlastimil)
Fix header dependency (per Sergey)

This patchset tries to reduce the memory wasted by page extension
(page_ext) users.

The first case is architecture-supported debug_pagealloc. It doesn't
require additional memory if the guard page feature isn't used, so
8 bytes per page are saved in this case.

The second case is related to the page owner feature. Until now, if a
page_ext user wanted its own fields in page_ext, they had to be
hard-coded into struct page_ext. This has the following problem.

struct page_ext {
 #ifdef CONFIG_A
	int a;
 #endif
 #ifdef CONFIG_B
	int b;
 #endif
};

Assume the kernel is built with both CONFIG_A and CONFIG_B.
Even if we enable feature A but not feature B at runtime, each entry
of struct page_ext takes two ints rather than one. This is undesirable
waste, so this patchset tries to reduce it. With this patchset, we can
save 20 bytes per page dedicated to the page owner feature in some
configurations.
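
Roughly, after this series each runtime-enabled user registers the extra
space it needs and gets back an offset into each page_ext entry, instead
of adding fields to struct page_ext. A simplified sketch assembled from
patches 5 and 6 (not a complete listing):

struct page_ext_operations {
	size_t offset;	/* filled in at boot: where this user's data lives */
	size_t size;	/* extra bytes this user needs per entry */
	bool (*need)(void);
	void (*init)(void);
};

/* sum of the sizes requested by users whose need() returned true */
static unsigned long extra_mem;

/* an entry is the base struct plus the space of all enabled users */
static unsigned long get_entry_size(void)
{
	return sizeof(struct page_ext) + extra_mem;
}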

Thanks.

Joonsoo Kim (6):
  mm/debug_pagealloc: clean-up guard page handling code
  mm/debug_pagealloc: don't allocate page_ext if we don't use guard page
  mm/page_owner: move page_owner specific function to page_owner.c
  mm/page_ext: rename offset to index
  mm/page_ext: support extra space allocation by page_ext user
  mm/page_owner: don't define fields on struct page_ext by hard-coding

 include/linux/page_ext.h   |   8 +--
 include/linux/page_owner.h |   2 +
 mm/page_alloc.c            |  44 +++++++------
 mm/page_ext.c              |  45 +++++++++----
 mm/page_owner.c            | 156 ++++++++++++++++++++++++++++++++++++++-------
 mm/vmstat.c                |  79 -----------------------
 6 files changed, 196 insertions(+), 138 deletions(-)

-- 
1.9.1

* [PATCH v2 1/6] mm/debug_pagealloc: clean-up guard page handling code
  2016-08-16  2:51 [PATCH v2 0/6] Reduce memory waste by page extension user js1304
@ 2016-08-16  2:51 ` js1304
  2016-08-16  2:51 ` [PATCH v2 2/6] mm/debug_pagealloc: don't allocate page_ext if we don't use guard page js1304
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: js1304 @ 2016-08-16  2:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Minchan Kim, Michal Hocko, Sergey Senozhatsky,
	linux-kernel, linux-mm, Joonsoo Kim

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

We can make the code cleaner by moving the decision condition
for set_page_guard() into set_page_guard() itself. It helps
code readability. There is no functional change.

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_alloc.c | 34 ++++++++++++++++++----------------
 1 file changed, 18 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 277c3d0..5e7944b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -638,17 +638,20 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 }
 __setup("debug_guardpage_minorder=", debug_guardpage_minorder_setup);
 
-static inline void set_page_guard(struct zone *zone, struct page *page,
+static inline bool set_page_guard(struct zone *zone, struct page *page,
 				unsigned int order, int migratetype)
 {
 	struct page_ext *page_ext;
 
 	if (!debug_guardpage_enabled())
-		return;
+		return false;
+
+	if (order >= debug_guardpage_minorder())
+		return false;
 
 	page_ext = lookup_page_ext(page);
 	if (unlikely(!page_ext))
-		return;
+		return false;
 
 	__set_bit(PAGE_EXT_DEBUG_GUARD, &page_ext->flags);
 
@@ -656,6 +659,8 @@ static inline void set_page_guard(struct zone *zone, struct page *page,
 	set_page_private(page, order);
 	/* Guard pages are not available for any usage */
 	__mod_zone_freepage_state(zone, -(1 << order), migratetype);
+
+	return true;
 }
 
 static inline void clear_page_guard(struct zone *zone, struct page *page,
@@ -678,8 +683,8 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 }
 #else
 struct page_ext_operations debug_guardpage_ops = { NULL, };
-static inline void set_page_guard(struct zone *zone, struct page *page,
-				unsigned int order, int migratetype) {}
+static inline bool set_page_guard(struct zone *zone, struct page *page,
+			unsigned int order, int migratetype) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
 				unsigned int order, int migratetype) {}
 #endif
@@ -1650,18 +1655,15 @@ static inline void expand(struct zone *zone, struct page *page,
 		size >>= 1;
 		VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);
 
-		if (IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&
-			debug_guardpage_enabled() &&
-			high < debug_guardpage_minorder()) {
-			/*
-			 * Mark as guard pages (or page), that will allow to
-			 * merge back to allocator when buddy will be freed.
-			 * Corresponding page table entries will not be touched,
-			 * pages will stay not present in virtual address space
-			 */
-			set_page_guard(zone, &page[size], high, migratetype);
+		/*
+		 * Mark as guard pages (or page), that will allow to
+		 * merge back to allocator when buddy will be freed.
+		 * Corresponding page table entries will not be touched,
+		 * pages will stay not present in virtual address space
+		 */
+		if (set_page_guard(zone, &page[size], high, migratetype))
 			continue;
-		}
+
 		list_add(&page[size].lru, &area->free_list[migratetype]);
 		area->nr_free++;
 		set_page_order(&page[size], high);
-- 
1.9.1

* [PATCH v2 2/6] mm/debug_pagealloc: don't allocate page_ext if we don't use guard page
  2016-08-16  2:51 [PATCH v2 0/6] Reduce memory waste by page extension user js1304
  2016-08-16  2:51 ` [PATCH v2 1/6] mm/debug_pagealloc: clean-up guard page handling code js1304
@ 2016-08-16  2:51 ` js1304
  2016-08-16  2:51 ` [PATCH v2 3/6] mm/page_owner: move page_owner specific function to page_owner.c js1304
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 12+ messages in thread
From: js1304 @ 2016-08-16  2:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Minchan Kim, Michal Hocko, Sergey Senozhatsky,
	linux-kernel, linux-mm, Joonsoo Kim

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

What debug_pagealloc does is just map/unmap page table entries.
Basically, it doesn't need additional memory space to remember anything.
But, with the guard page feature, it requires additional memory to
distinguish whether a page is a guard page or not. Guard pages are only
used when debug_guardpage_minorder is non-zero, so this patch removes the
additional memory allocation (page_ext) when debug_guardpage_minorder is
zero.

This saves memory when we use debug_pagealloc without guard pages.
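
For orientation, the need() hook that the page_ext core consults at boot
looks roughly like this after the patch; the debug_guardpage_ops wiring is
pre-existing context, not part of this diff:

static bool need_debug_guardpage(void)
{
	/* no debug_pagealloc means no guard pages, so no page_ext needed */
	if (!debug_pagealloc_enabled())
		return false;

	/* guard pages were not requested on the command line */
	if (!debug_guardpage_minorder())
		return false;

	return true;
}

struct page_ext_operations debug_guardpage_ops = {
	.need = need_debug_guardpage,
	.init = init_debug_guardpage,
};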

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_alloc.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5e7944b..45cb021 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -608,6 +608,9 @@ static bool need_debug_guardpage(void)
 	if (!debug_pagealloc_enabled())
 		return false;
 
+	if (!debug_guardpage_minorder())
+		return false;
+
 	return true;
 }
 
@@ -616,6 +619,9 @@ static void init_debug_guardpage(void)
 	if (!debug_pagealloc_enabled())
 		return;
 
+	if (!debug_guardpage_minorder())
+		return;
+
 	_debug_guardpage_enabled = true;
 }
 
@@ -636,7 +642,7 @@ static int __init debug_guardpage_minorder_setup(char *buf)
 	pr_info("Setting debug_guardpage_minorder to %lu\n", res);
 	return 0;
 }
-__setup("debug_guardpage_minorder=", debug_guardpage_minorder_setup);
+early_param("debug_guardpage_minorder", debug_guardpage_minorder_setup);
 
 static inline bool set_page_guard(struct zone *zone, struct page *page,
 				unsigned int order, int migratetype)
-- 
1.9.1

* [PATCH v2 3/6] mm/page_owner: move page_owner specific function to page_owner.c
  2016-08-16  2:51 [PATCH v2 0/6] Reduce memory waste by page extension user js1304
  2016-08-16  2:51 ` [PATCH v2 1/6] mm/debug_pagealloc: clean-up guard page handling code js1304
  2016-08-16  2:51 ` [PATCH v2 2/6] mm/debug_pagealloc: don't allocate page_ext if we don't use guard page js1304
@ 2016-08-16  2:51 ` js1304
  2016-08-16  7:21   ` Vlastimil Babka
  2016-08-16  2:51 ` [PATCH v2 4/6] mm/page_ext: rename offset to index js1304
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 12+ messages in thread
From: js1304 @ 2016-08-16  2:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Minchan Kim, Michal Hocko, Sergey Senozhatsky,
	linux-kernel, linux-mm, Joonsoo Kim

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

There is no reason for this page_owner specific function to reside in vmstat.c.

Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/page_owner.h |  2 ++
 mm/page_owner.c            | 77 ++++++++++++++++++++++++++++++++++++++++++++
 mm/vmstat.c                | 79 ----------------------------------------------
 3 files changed, 79 insertions(+), 79 deletions(-)

diff --git a/include/linux/page_owner.h b/include/linux/page_owner.h
index 30583ab..2be728d 100644
--- a/include/linux/page_owner.h
+++ b/include/linux/page_owner.h
@@ -14,6 +14,8 @@ extern void __split_page_owner(struct page *page, unsigned int order);
 extern void __copy_page_owner(struct page *oldpage, struct page *newpage);
 extern void __set_page_owner_migrate_reason(struct page *page, int reason);
 extern void __dump_page_owner(struct page *page);
+extern void pagetypeinfo_showmixedcount_print(struct seq_file *m,
+					pg_data_t *pgdat, struct zone *zone);
 
 static inline void reset_page_owner(struct page *page, unsigned int order)
 {
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 3b241f5..2cae0b2 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -8,6 +8,7 @@
 #include <linux/jump_label.h>
 #include <linux/migrate.h>
 #include <linux/stackdepot.h>
+#include <linux/seq_file.h>
 
 #include "internal.h"
 
@@ -214,6 +215,82 @@ void __copy_page_owner(struct page *oldpage, struct page *newpage)
 	__set_bit(PAGE_EXT_OWNER, &new_ext->flags);
 }
 
+void pagetypeinfo_showmixedcount_print(struct seq_file *m, pg_data_t *pgdat,
+					struct zone *zone)
+{
+	struct page *page;
+	struct page_ext *page_ext;
+	unsigned long pfn = zone->zone_start_pfn, block_end_pfn;
+	unsigned long end_pfn = pfn + zone->spanned_pages;
+	unsigned long count[MIGRATE_TYPES] = { 0, };
+	int pageblock_mt, page_mt;
+	int i;
+
+	/* Scan block by block. First and last block may be incomplete */
+	pfn = zone->zone_start_pfn;
+
+	/*
+	 * Walk the zone in pageblock_nr_pages steps. If a page block spans
+	 * a zone boundary, it will be double counted between zones. This does
+	 * not matter as the mixed block count will still be correct
+	 */
+	for (; pfn < end_pfn; ) {
+		if (!pfn_valid(pfn)) {
+			pfn = ALIGN(pfn + 1, pageblock_nr_pages);
+			continue;
+		}
+
+		block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
+		block_end_pfn = min(block_end_pfn, end_pfn);
+
+		page = pfn_to_page(pfn);
+		pageblock_mt = get_pageblock_migratetype(page);
+
+		for (; pfn < block_end_pfn; pfn++) {
+			if (!pfn_valid_within(pfn))
+				continue;
+
+			page = pfn_to_page(pfn);
+
+			if (page_zone(page) != zone)
+				continue;
+
+			if (PageBuddy(page)) {
+				pfn += (1UL << page_order(page)) - 1;
+				continue;
+			}
+
+			if (PageReserved(page))
+				continue;
+
+			page_ext = lookup_page_ext(page);
+			if (unlikely(!page_ext))
+				continue;
+
+			if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags))
+				continue;
+
+			page_mt = gfpflags_to_migratetype(page_ext->gfp_mask);
+			if (pageblock_mt != page_mt) {
+				if (is_migrate_cma(pageblock_mt))
+					count[MIGRATE_MOVABLE]++;
+				else
+					count[pageblock_mt]++;
+
+				pfn = block_end_pfn;
+				break;
+			}
+			pfn += (1UL << page_ext->order) - 1;
+		}
+	}
+
+	/* Print counts */
+	seq_printf(m, "Node %d, zone %8s ", pgdat->node_id, zone->name);
+	for (i = 0; i < MIGRATE_TYPES; i++)
+		seq_printf(m, "%12lu ", count[i]);
+	seq_putc(m, '\n');
+}
+
 static ssize_t
 print_page_owner(char __user *buf, size_t count, unsigned long pfn,
 		struct page *page, struct page_ext *page_ext,
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 84397e8..dc04e76 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1254,85 +1254,6 @@ static int pagetypeinfo_showblockcount(struct seq_file *m, void *arg)
 	return 0;
 }
 
-#ifdef CONFIG_PAGE_OWNER
-static void pagetypeinfo_showmixedcount_print(struct seq_file *m,
-							pg_data_t *pgdat,
-							struct zone *zone)
-{
-	struct page *page;
-	struct page_ext *page_ext;
-	unsigned long pfn = zone->zone_start_pfn, block_end_pfn;
-	unsigned long end_pfn = pfn + zone->spanned_pages;
-	unsigned long count[MIGRATE_TYPES] = { 0, };
-	int pageblock_mt, page_mt;
-	int i;
-
-	/* Scan block by block. First and last block may be incomplete */
-	pfn = zone->zone_start_pfn;
-
-	/*
-	 * Walk the zone in pageblock_nr_pages steps. If a page block spans
-	 * a zone boundary, it will be double counted between zones. This does
-	 * not matter as the mixed block count will still be correct
-	 */
-	for (; pfn < end_pfn; ) {
-		if (!pfn_valid(pfn)) {
-			pfn = ALIGN(pfn + 1, pageblock_nr_pages);
-			continue;
-		}
-
-		block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages);
-		block_end_pfn = min(block_end_pfn, end_pfn);
-
-		page = pfn_to_page(pfn);
-		pageblock_mt = get_pageblock_migratetype(page);
-
-		for (; pfn < block_end_pfn; pfn++) {
-			if (!pfn_valid_within(pfn))
-				continue;
-
-			page = pfn_to_page(pfn);
-
-			if (page_zone(page) != zone)
-				continue;
-
-			if (PageBuddy(page)) {
-				pfn += (1UL << page_order(page)) - 1;
-				continue;
-			}
-
-			if (PageReserved(page))
-				continue;
-
-			page_ext = lookup_page_ext(page);
-			if (unlikely(!page_ext))
-				continue;
-
-			if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags))
-				continue;
-
-			page_mt = gfpflags_to_migratetype(page_ext->gfp_mask);
-			if (pageblock_mt != page_mt) {
-				if (is_migrate_cma(pageblock_mt))
-					count[MIGRATE_MOVABLE]++;
-				else
-					count[pageblock_mt]++;
-
-				pfn = block_end_pfn;
-				break;
-			}
-			pfn += (1UL << page_ext->order) - 1;
-		}
-	}
-
-	/* Print counts */
-	seq_printf(m, "Node %d, zone %8s ", pgdat->node_id, zone->name);
-	for (i = 0; i < MIGRATE_TYPES; i++)
-		seq_printf(m, "%12lu ", count[i]);
-	seq_putc(m, '\n');
-}
-#endif /* CONFIG_PAGE_OWNER */
-
 /*
  * Print out the number of pageblocks for each migratetype that contain pages
  * of other types. This gives an indication of how well fallbacks are being
-- 
1.9.1

* [PATCH v2 4/6] mm/page_ext: rename offset to index
  2016-08-16  2:51 [PATCH v2 0/6] Reduce memory waste by page extension user js1304
                   ` (2 preceding siblings ...)
  2016-08-16  2:51 ` [PATCH v2 3/6] mm/page_owner: move page_owner specific function to page_owner.c js1304
@ 2016-08-16  2:51 ` js1304
  2016-08-16  7:23   ` Vlastimil Babka
  2016-08-16  2:51 ` [PATCH v2 5/6] mm/page_ext: support extra space allocation by page_ext user js1304
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 12+ messages in thread
From: js1304 @ 2016-08-16  2:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Minchan Kim, Michal Hocko, Sergey Senozhatsky,
	linux-kernel, linux-mm, Joonsoo Kim

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Here, 'offset' means the entry index in the page_ext array. A following
patch will use 'offset' for the field offset within each entry, so rename
the current 'offset' to prevent confusion.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 mm/page_ext.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_ext.c b/mm/page_ext.c
index 44a4c02..1629282 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -102,7 +102,7 @@ void __meminit pgdat_page_ext_init(struct pglist_data *pgdat)
 struct page_ext *lookup_page_ext(struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
-	unsigned long offset;
+	unsigned long index;
 	struct page_ext *base;
 
 	base = NODE_DATA(page_to_nid(page))->node_page_ext;
@@ -119,9 +119,9 @@ struct page_ext *lookup_page_ext(struct page *page)
 	if (unlikely(!base))
 		return NULL;
 #endif
-	offset = pfn - round_down(node_start_pfn(page_to_nid(page)),
+	index = pfn - round_down(node_start_pfn(page_to_nid(page)),
 					MAX_ORDER_NR_PAGES);
-	return base + offset;
+	return base + index;
 }
 
 static int __init alloc_node_page_ext(int nid)
-- 
1.9.1

* [PATCH v2 5/6] mm/page_ext: support extra space allocation by page_ext user
  2016-08-16  2:51 [PATCH v2 0/6] Reduce memory waste by page extension user js1304
                   ` (3 preceding siblings ...)
  2016-08-16  2:51 ` [PATCH v2 4/6] mm/page_ext: rename offset to index js1304
@ 2016-08-16  2:51 ` js1304
  2016-08-16  7:25   ` Vlastimil Babka
  2016-08-16  2:51 ` [PATCH v2 6/6] mm/page_owner: don't define fields on struct page_ext by hard-coding js1304
  2016-08-16  9:53 ` [PATCH v2 0/6] Reduce memory waste by page extension user Michal Hocko
  6 siblings, 1 reply; 12+ messages in thread
From: js1304 @ 2016-08-16  2:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Minchan Kim, Michal Hocko, Sergey Senozhatsky,
	linux-kernel, linux-mm, Joonsoo Kim

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Until now, if a page_ext user wanted its own field in page_ext, it had to
be hard-coded into struct page_ext. This wastes memory in the following
situation.

struct page_ext {
 #ifdef CONFIG_A
	int a;
 #endif
 #ifdef CONFIG_B
	int b;
 #endif
};

Assume the kernel is built with both CONFIG_A and CONFIG_B.
Even if we enable feature A but not feature B at runtime, each entry
of struct page_ext takes two ints rather than one. This is an
undesirable result, so this patch tries to fix it.

To solve the above problem, this patch implements support for extra space
allocation at runtime. When the need() callback returns true, that user's
extra memory requirement is added to the page_ext entry size. The offset
of each user's extra memory space is also returned. With this offset, a
user can use the extra space, and there is no need to hard-code the
needed fields into page_ext.

This patch only implements the infrastructure. The following patch will
use it for page_owner, which is the only user having its own fields on
page_ext.
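
As a sketch of the intended usage (the next patch does exactly this for
page_owner): a user declares the extra space it needs via .size in its
page_ext_operations and, once page_ext is initialized, reaches its private
area through the .offset that was filled in at boot. Simplified from
patch 6:

struct page_owner {
	unsigned int order;
	gfp_t gfp_mask;
	int last_migrate_reason;
	depot_stack_handle_t handle;
};

struct page_ext_operations page_owner_ops = {
	.size = sizeof(struct page_owner),
	.need = need_page_owner,
	.init = init_page_owner,
};

static inline struct page_owner *get_page_owner(struct page_ext *page_ext)
{
	/* .offset was set by the page_ext core when need() returned true */
	return (void *)page_ext + page_owner_ops.offset;
}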

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/page_ext.h |  2 ++
 mm/page_alloc.c          |  2 +-
 mm/page_ext.c            | 41 +++++++++++++++++++++++++++++++----------
 3 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index 03f2a3e..179bdc4 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -7,6 +7,8 @@
 
 struct pglist_data;
 struct page_ext_operations {
+	size_t offset;
+	size_t size;
 	bool (*need)(void);
 	void (*init)(void);
 };
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 45cb021..d2e365c 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -688,7 +688,7 @@ static inline void clear_page_guard(struct zone *zone, struct page *page,
 		__mod_zone_freepage_state(zone, (1 << order), migratetype);
 }
 #else
-struct page_ext_operations debug_guardpage_ops = { NULL, };
+struct page_ext_operations debug_guardpage_ops;
 static inline bool set_page_guard(struct zone *zone, struct page *page,
 			unsigned int order, int migratetype) { return false; }
 static inline void clear_page_guard(struct zone *zone, struct page *page,
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 1629282..121dcff 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -42,6 +42,11 @@
  * and page extension core can skip to allocate memory. As result,
  * none of memory is wasted.
  *
+ * When need callback returns true, page_ext checks if there is a request for
+ * extra memory through size in struct page_ext_operations. If it is non-zero,
+ * extra space is allocated for each page_ext entry and offset is returned to
+ * user through offset in struct page_ext_operations.
+ *
  * The init callback is used to do proper initialization after page extension
  * is completely initialized. In sparse memory system, extra memory is
  * allocated some time later than memmap is allocated. In other words, lifetime
@@ -66,18 +71,24 @@ static struct page_ext_operations *page_ext_ops[] = {
 };
 
 static unsigned long total_usage;
+static unsigned long extra_mem;
 
 static bool __init invoke_need_callbacks(void)
 {
 	int i;
 	int entries = ARRAY_SIZE(page_ext_ops);
+	bool need = false;
 
 	for (i = 0; i < entries; i++) {
-		if (page_ext_ops[i]->need && page_ext_ops[i]->need())
-			return true;
+		if (page_ext_ops[i]->need && page_ext_ops[i]->need()) {
+			page_ext_ops[i]->offset = sizeof(struct page_ext) +
+						extra_mem;
+			extra_mem += page_ext_ops[i]->size;
+			need = true;
+		}
 	}
 
-	return false;
+	return need;
 }
 
 static void __init invoke_init_callbacks(void)
@@ -91,6 +102,16 @@ static void __init invoke_init_callbacks(void)
 	}
 }
 
+static unsigned long get_entry_size(void)
+{
+	return sizeof(struct page_ext) + extra_mem;
+}
+
+static inline struct page_ext *get_entry(void *base, unsigned long index)
+{
+	return base + get_entry_size() * index;
+}
+
 #if !defined(CONFIG_SPARSEMEM)
 
 
@@ -121,7 +142,7 @@ struct page_ext *lookup_page_ext(struct page *page)
 #endif
 	index = pfn - round_down(node_start_pfn(page_to_nid(page)),
 					MAX_ORDER_NR_PAGES);
-	return base + index;
+	return get_entry(base, index);
 }
 
 static int __init alloc_node_page_ext(int nid)
@@ -143,7 +164,7 @@ static int __init alloc_node_page_ext(int nid)
 		!IS_ALIGNED(node_end_pfn(nid), MAX_ORDER_NR_PAGES))
 		nr_pages += MAX_ORDER_NR_PAGES;
 
-	table_size = sizeof(struct page_ext) * nr_pages;
+	table_size = get_entry_size() * nr_pages;
 
 	base = memblock_virt_alloc_try_nid_nopanic(
 			table_size, PAGE_SIZE, __pa(MAX_DMA_ADDRESS),
@@ -196,7 +217,7 @@ struct page_ext *lookup_page_ext(struct page *page)
 	if (!section->page_ext)
 		return NULL;
 #endif
-	return section->page_ext + pfn;
+	return get_entry(section->page_ext, pfn);
 }
 
 static void *__meminit alloc_page_ext(size_t size, int nid)
@@ -229,7 +250,7 @@ static int __meminit init_section_page_ext(unsigned long pfn, int nid)
 	if (section->page_ext)
 		return 0;
 
-	table_size = sizeof(struct page_ext) * PAGES_PER_SECTION;
+	table_size = get_entry_size() * PAGES_PER_SECTION;
 	base = alloc_page_ext(table_size, nid);
 
 	/*
@@ -249,7 +270,7 @@ static int __meminit init_section_page_ext(unsigned long pfn, int nid)
 	 * we need to apply a mask.
 	 */
 	pfn &= PAGE_SECTION_MASK;
-	section->page_ext = base - pfn;
+	section->page_ext = (void *)base - get_entry_size() * pfn;
 	total_usage += table_size;
 	return 0;
 }
@@ -262,7 +283,7 @@ static void free_page_ext(void *addr)
 		struct page *page = virt_to_page(addr);
 		size_t table_size;
 
-		table_size = sizeof(struct page_ext) * PAGES_PER_SECTION;
+		table_size = get_entry_size() * PAGES_PER_SECTION;
 
 		BUG_ON(PageReserved(page));
 		free_pages_exact(addr, table_size);
@@ -277,7 +298,7 @@ static void __free_page_ext(unsigned long pfn)
 	ms = __pfn_to_section(pfn);
 	if (!ms || !ms->page_ext)
 		return;
-	base = ms->page_ext + pfn;
+	base = get_entry(ms->page_ext, pfn);
 	free_page_ext(base);
 	ms->page_ext = NULL;
 }
-- 
1.9.1

* [PATCH v2 6/6] mm/page_owner: don't define fields on struct page_ext by hard-coding
  2016-08-16  2:51 [PATCH v2 0/6] Reduce memory waste by page extension user js1304
                   ` (4 preceding siblings ...)
  2016-08-16  2:51 ` [PATCH v2 5/6] mm/page_ext: support extra space allocation by page_ext user js1304
@ 2016-08-16  2:51 ` js1304
  2016-08-16  9:53 ` [PATCH v2 0/6] Reduce memory waste by page extension user Michal Hocko
  6 siblings, 0 replies; 12+ messages in thread
From: js1304 @ 2016-08-16  2:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Minchan Kim, Michal Hocko, Sergey Senozhatsky,
	linux-kernel, linux-mm, Joonsoo Kim

From: Joonsoo Kim <iamjoonsoo.kim@lge.com>

There is a memory waste problem if we hard-code fields into struct
page_ext. The entry size of struct page_ext includes the size of those
fields even if the corresponding feature is disabled at runtime. Now that
extra memory can be requested at runtime, page_owner doesn't need to
define its own fields by hard-coding.

This patch removes the hard-coded definitions and uses the extra memory
for storing page_owner information. Most of the changes are just
mechanical.

Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
---
 include/linux/page_ext.h |  6 ----
 mm/page_owner.c          | 83 +++++++++++++++++++++++++++++++++---------------
 2 files changed, 58 insertions(+), 31 deletions(-)

diff --git a/include/linux/page_ext.h b/include/linux/page_ext.h
index 179bdc4..9298c39 100644
--- a/include/linux/page_ext.h
+++ b/include/linux/page_ext.h
@@ -44,12 +44,6 @@ enum page_ext_flags {
  */
 struct page_ext {
 	unsigned long flags;
-#ifdef CONFIG_PAGE_OWNER
-	unsigned int order;
-	gfp_t gfp_mask;
-	int last_migrate_reason;
-	depot_stack_handle_t handle;
-#endif
 };
 
 extern void pgdat_page_ext_init(struct pglist_data *pgdat);
diff --git a/mm/page_owner.c b/mm/page_owner.c
index 2cae0b2..0537d15 100644
--- a/mm/page_owner.c
+++ b/mm/page_owner.c
@@ -18,6 +18,13 @@
  */
 #define PAGE_OWNER_STACK_DEPTH (16)
 
+struct page_owner {
+	unsigned int order;
+	gfp_t gfp_mask;
+	int last_migrate_reason;
+	depot_stack_handle_t handle;
+};
+
 static bool page_owner_disabled = true;
 DEFINE_STATIC_KEY_FALSE(page_owner_inited);
 
@@ -86,10 +93,16 @@ static void init_page_owner(void)
 }
 
 struct page_ext_operations page_owner_ops = {
+	.size = sizeof(struct page_owner),
 	.need = need_page_owner,
 	.init = init_page_owner,
 };
 
+static inline struct page_owner *get_page_owner(struct page_ext *page_ext)
+{
+	return (void *)page_ext + page_owner_ops.offset;
+}
+
 void __reset_page_owner(struct page *page, unsigned int order)
 {
 	int i;
@@ -156,14 +169,16 @@ noinline void __set_page_owner(struct page *page, unsigned int order,
 					gfp_t gfp_mask)
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_owner *page_owner;
 
 	if (unlikely(!page_ext))
 		return;
 
-	page_ext->handle = save_stack(gfp_mask);
-	page_ext->order = order;
-	page_ext->gfp_mask = gfp_mask;
-	page_ext->last_migrate_reason = -1;
+	page_owner = get_page_owner(page_ext);
+	page_owner->handle = save_stack(gfp_mask);
+	page_owner->order = order;
+	page_owner->gfp_mask = gfp_mask;
+	page_owner->last_migrate_reason = -1;
 
 	__set_bit(PAGE_EXT_OWNER, &page_ext->flags);
 }
@@ -171,21 +186,26 @@ noinline void __set_page_owner(struct page *page, unsigned int order,
 void __set_page_owner_migrate_reason(struct page *page, int reason)
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_owner *page_owner;
+
 	if (unlikely(!page_ext))
 		return;
 
-	page_ext->last_migrate_reason = reason;
+	page_owner = get_page_owner(page_ext);
+	page_owner->last_migrate_reason = reason;
 }
 
 void __split_page_owner(struct page *page, unsigned int order)
 {
 	int i;
 	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_owner *page_owner;
 
 	if (unlikely(!page_ext))
 		return;
 
-	page_ext->order = 0;
+	page_owner = get_page_owner(page_ext);
+	page_owner->order = 0;
 	for (i = 1; i < (1 << order); i++)
 		__copy_page_owner(page, page + i);
 }
@@ -194,14 +214,18 @@ void __copy_page_owner(struct page *oldpage, struct page *newpage)
 {
 	struct page_ext *old_ext = lookup_page_ext(oldpage);
 	struct page_ext *new_ext = lookup_page_ext(newpage);
+	struct page_owner *old_page_owner, *new_page_owner;
 
 	if (unlikely(!old_ext || !new_ext))
 		return;
 
-	new_ext->order = old_ext->order;
-	new_ext->gfp_mask = old_ext->gfp_mask;
-	new_ext->last_migrate_reason = old_ext->last_migrate_reason;
-	new_ext->handle = old_ext->handle;
+	old_page_owner = get_page_owner(old_ext);
+	new_page_owner = get_page_owner(new_ext);
+	new_page_owner->order = old_page_owner->order;
+	new_page_owner->gfp_mask = old_page_owner->gfp_mask;
+	new_page_owner->last_migrate_reason =
+		old_page_owner->last_migrate_reason;
+	new_page_owner->handle = old_page_owner->handle;
 
 	/*
 	 * We don't clear the bit on the oldpage as it's going to be freed
@@ -220,6 +244,7 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m, pg_data_t *pgdat,
 {
 	struct page *page;
 	struct page_ext *page_ext;
+	struct page_owner *page_owner;
 	unsigned long pfn = zone->zone_start_pfn, block_end_pfn;
 	unsigned long end_pfn = pfn + zone->spanned_pages;
 	unsigned long count[MIGRATE_TYPES] = { 0, };
@@ -270,7 +295,9 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m, pg_data_t *pgdat,
 			if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags))
 				continue;
 
-			page_mt = gfpflags_to_migratetype(page_ext->gfp_mask);
+			page_owner = get_page_owner(page_ext);
+			page_mt = gfpflags_to_migratetype(
+					page_owner->gfp_mask);
 			if (pageblock_mt != page_mt) {
 				if (is_migrate_cma(pageblock_mt))
 					count[MIGRATE_MOVABLE]++;
@@ -280,7 +307,7 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m, pg_data_t *pgdat,
 				pfn = block_end_pfn;
 				break;
 			}
-			pfn += (1UL << page_ext->order) - 1;
+			pfn += (1UL << page_owner->order) - 1;
 		}
 	}
 
@@ -293,7 +320,7 @@ void pagetypeinfo_showmixedcount_print(struct seq_file *m, pg_data_t *pgdat,
 
 static ssize_t
 print_page_owner(char __user *buf, size_t count, unsigned long pfn,
-		struct page *page, struct page_ext *page_ext,
+		struct page *page, struct page_owner *page_owner,
 		depot_stack_handle_t handle)
 {
 	int ret;
@@ -313,15 +340,15 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
 
 	ret = snprintf(kbuf, count,
 			"Page allocated via order %u, mask %#x(%pGg)\n",
-			page_ext->order, page_ext->gfp_mask,
-			&page_ext->gfp_mask);
+			page_owner->order, page_owner->gfp_mask,
+			&page_owner->gfp_mask);
 
 	if (ret >= count)
 		goto err;
 
 	/* Print information relevant to grouping pages by mobility */
 	pageblock_mt = get_pageblock_migratetype(page);
-	page_mt  = gfpflags_to_migratetype(page_ext->gfp_mask);
+	page_mt  = gfpflags_to_migratetype(page_owner->gfp_mask);
 	ret += snprintf(kbuf + ret, count - ret,
 			"PFN %lu type %s Block %lu type %s Flags %#lx(%pGp)\n",
 			pfn,
@@ -338,10 +365,10 @@ print_page_owner(char __user *buf, size_t count, unsigned long pfn,
 	if (ret >= count)
 		goto err;
 
-	if (page_ext->last_migrate_reason != -1) {
+	if (page_owner->last_migrate_reason != -1) {
 		ret += snprintf(kbuf + ret, count - ret,
 			"Page has been migrated, last migrate reason: %s\n",
-			migrate_reason_names[page_ext->last_migrate_reason]);
+			migrate_reason_names[page_owner->last_migrate_reason]);
 		if (ret >= count)
 			goto err;
 	}
@@ -364,6 +391,7 @@ err:
 void __dump_page_owner(struct page *page)
 {
 	struct page_ext *page_ext = lookup_page_ext(page);
+	struct page_owner *page_owner;
 	unsigned long entries[PAGE_OWNER_STACK_DEPTH];
 	struct stack_trace trace = {
 		.nr_entries = 0,
@@ -379,7 +407,9 @@ void __dump_page_owner(struct page *page)
 		pr_alert("There is not page extension available.\n");
 		return;
 	}
-	gfp_mask = page_ext->gfp_mask;
+
+	page_owner = get_page_owner(page_ext);
+	gfp_mask = page_owner->gfp_mask;
 	mt = gfpflags_to_migratetype(gfp_mask);
 
 	if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags)) {
@@ -387,7 +417,7 @@ void __dump_page_owner(struct page *page)
 		return;
 	}
 
-	handle = READ_ONCE(page_ext->handle);
+	handle = READ_ONCE(page_owner->handle);
 	if (!handle) {
 		pr_alert("page_owner info is not active (free page?)\n");
 		return;
@@ -395,12 +425,12 @@ void __dump_page_owner(struct page *page)
 
 	depot_fetch_stack(handle, &trace);
 	pr_alert("page allocated via order %u, migratetype %s, gfp_mask %#x(%pGg)\n",
-		 page_ext->order, migratetype_names[mt], gfp_mask, &gfp_mask);
+		 page_owner->order, migratetype_names[mt], gfp_mask, &gfp_mask);
 	print_stack_trace(&trace, 0);
 
-	if (page_ext->last_migrate_reason != -1)
+	if (page_owner->last_migrate_reason != -1)
 		pr_alert("page has been migrated, last migrate reason: %s\n",
-			migrate_reason_names[page_ext->last_migrate_reason]);
+			migrate_reason_names[page_owner->last_migrate_reason]);
 }
 
 static ssize_t
@@ -409,6 +439,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 	unsigned long pfn;
 	struct page *page;
 	struct page_ext *page_ext;
+	struct page_owner *page_owner;
 	depot_stack_handle_t handle;
 
 	if (!static_branch_unlikely(&page_owner_inited))
@@ -458,11 +489,13 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 		if (!test_bit(PAGE_EXT_OWNER, &page_ext->flags))
 			continue;
 
+		page_owner = get_page_owner(page_ext);
+
 		/*
 		 * Access to page_ext->handle isn't synchronous so we should
 		 * be careful to access it.
 		 */
-		handle = READ_ONCE(page_ext->handle);
+		handle = READ_ONCE(page_owner->handle);
 		if (!handle)
 			continue;
 
@@ -470,7 +503,7 @@ read_page_owner(struct file *file, char __user *buf, size_t count, loff_t *ppos)
 		*ppos = (pfn - min_low_pfn) + 1;
 
 		return print_page_owner(buf, count, pfn, page,
-				page_ext, handle);
+				page_owner, handle);
 	}
 
 	return 0;
-- 
1.9.1

* Re: [PATCH v2 3/6] mm/page_owner: move page_owner specific function to page_owner.c
  2016-08-16  2:51 ` [PATCH v2 3/6] mm/page_owner: move page_owner specific function to page_owner.c js1304
@ 2016-08-16  7:21   ` Vlastimil Babka
  0 siblings, 0 replies; 12+ messages in thread
From: Vlastimil Babka @ 2016-08-16  7:21 UTC (permalink / raw)
  To: js1304, Andrew Morton
  Cc: Minchan Kim, Michal Hocko, Sergey Senozhatsky, linux-kernel,
	linux-mm, Joonsoo Kim

On 08/16/2016 04:51 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> There is no reason for this page_owner specific function to reside in vmstat.c.
>
> Reviewed-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

* Re: [PATCH v2 4/6] mm/page_ext: rename offset to index
  2016-08-16  2:51 ` [PATCH v2 4/6] mm/page_ext: rename offset to index js1304
@ 2016-08-16  7:23   ` Vlastimil Babka
  0 siblings, 0 replies; 12+ messages in thread
From: Vlastimil Babka @ 2016-08-16  7:23 UTC (permalink / raw)
  To: js1304, Andrew Morton
  Cc: Minchan Kim, Michal Hocko, Sergey Senozhatsky, linux-kernel,
	linux-mm, Joonsoo Kim

On 08/16/2016 04:51 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> Here, 'offset' means the entry index in the page_ext array. A following
> patch will use 'offset' for the field offset within each entry, so rename
> the current 'offset' to prevent confusion.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

* Re: [PATCH v2 5/6] mm/page_ext: support extra space allocation by page_ext user
  2016-08-16  2:51 ` [PATCH v2 5/6] mm/page_ext: support extra space allocation by page_ext user js1304
@ 2016-08-16  7:25   ` Vlastimil Babka
  0 siblings, 0 replies; 12+ messages in thread
From: Vlastimil Babka @ 2016-08-16  7:25 UTC (permalink / raw)
  To: js1304, Andrew Morton
  Cc: Minchan Kim, Michal Hocko, Sergey Senozhatsky, linux-kernel,
	linux-mm, Joonsoo Kim

On 08/16/2016 04:51 AM, js1304@gmail.com wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
>
> Until now, if a page_ext user wanted its own field in page_ext, it had to
> be hard-coded into struct page_ext. This wastes memory in the following
> situation.
>
> struct page_ext {
>  #ifdef CONFIG_A
> 	int a;
>  #endif
>  #ifdef CONFIG_B
> 	int b;
>  #endif
> };
>
> Assume the kernel is built with both CONFIG_A and CONFIG_B.
> Even if we enable feature A but not feature B at runtime, each entry
> of struct page_ext takes two ints rather than one. This is an
> undesirable result, so this patch tries to fix it.
>
> To solve the above problem, this patch implements support for extra space
> allocation at runtime. When the need() callback returns true, that user's
> extra memory requirement is added to the page_ext entry size. The offset
> of each user's extra memory space is also returned. With this offset, a
> user can use the extra space, and there is no need to hard-code the
> needed fields into page_ext.
>
> This patch only implements the infrastructure. The following patch will
> use it for page_owner, which is the only user having its own fields on
> page_ext.
>
> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

* Re: [PATCH v2 0/6] Reduce memory waste by page extension user
  2016-08-16  2:51 [PATCH v2 0/6] Reduce memory waste by page extension user js1304
                   ` (5 preceding siblings ...)
  2016-08-16  2:51 ` [PATCH v2 6/6] mm/page_owner: don't define fields on struct page_ext by hard-coding js1304
@ 2016-08-16  9:53 ` Michal Hocko
  2016-08-16 10:12   ` Michal Hocko
  6 siblings, 1 reply; 12+ messages in thread
From: Michal Hocko @ 2016-08-16  9:53 UTC (permalink / raw)
  To: js1304
  Cc: Andrew Morton, Vlastimil Babka, Minchan Kim, Sergey Senozhatsky,
	linux-kernel, linux-mm, Joonsoo Kim

On Tue 16-08-16 11:51:13, Joonsoo Kim wrote:
> From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> 
> v2:
> Fix rebase mistake (per Vlastimil)
> Rename some variable/function to prevent confusion (per Vlastimil)
> Fix header dependency (per Sergey)
> 
> This patchset tries to reduce the memory wasted by page extension
> (page_ext) users.
> 
> The first case is architecture-supported debug_pagealloc. It doesn't
> require additional memory if the guard page feature isn't used, so
> 8 bytes per page are saved in this case.
> 
> The second case is related to the page owner feature. Until now, if a
> page_ext user wanted its own fields in page_ext, they had to be
> hard-coded into struct page_ext. This has the following problem.
> 
> struct page_ext {
>  #ifdef CONFIG_A
> 	int a;
>  #endif
>  #ifdef CONFIG_B
> 	int b;
>  #endif
> };
> 
> Assume the kernel is built with both CONFIG_A and CONFIG_B.
> Even if we enable feature A but not feature B at runtime, each entry
> of struct page_ext takes two ints rather than one. This is undesirable
> waste, so this patchset tries to reduce it. With this patchset, we can
> save 20 bytes per page dedicated to the page owner feature in some
> configurations.

FWIW I like this. I have only glanced over those patches so I do not
feel comfortable giving my a-b, but the approach is sensible and the
memory savings are really attractive. Page owner is a really great
debugging feature, so enabling it makes a lot of sense on production
servers where wasting memory is a no-go.

Thanks!
-- 
Michal Hocko
SUSE Labs

* Re: [PATCH v2 0/6] Reduce memory waste by page extension user
  2016-08-16  9:53 ` [PATCH v2 0/6] Reduce memory waste by page extension user Michal Hocko
@ 2016-08-16 10:12   ` Michal Hocko
  0 siblings, 0 replies; 12+ messages in thread
From: Michal Hocko @ 2016-08-16 10:12 UTC (permalink / raw)
  To: js1304
  Cc: Andrew Morton, Vlastimil Babka, Minchan Kim, Sergey Senozhatsky,
	linux-kernel, linux-mm, Joonsoo Kim

On Tue 16-08-16 11:53:00, Michal Hocko wrote:
> On Tue 16-08-16 11:51:13, Joonsoo Kim wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > 
> > v2:
> > Fix rebase mistake (per Vlastimil)
> > Rename some variable/function to prevent confusion (per Vlastimil)
> > Fix header dependency (per Sergey)
> > 
> > This patchset tries to reduce the memory wasted by page extension
> > (page_ext) users.
> > 
> > The first case is architecture-supported debug_pagealloc. It doesn't
> > require additional memory if the guard page feature isn't used, so
> > 8 bytes per page are saved in this case.
> > 
> > The second case is related to the page owner feature. Until now, if a
> > page_ext user wanted its own fields in page_ext, they had to be
> > hard-coded into struct page_ext. This has the following problem.
> > 
> > struct page_ext {
> >  #ifdef CONFIG_A
> > 	int a;
> >  #endif
> >  #ifdef CONFIG_B
> > 	int b;
> >  #endif
> > };
> > 
> > Assume the kernel is built with both CONFIG_A and CONFIG_B.
> > Even if we enable feature A but not feature B at runtime, each entry
> > of struct page_ext takes two ints rather than one. This is undesirable
> > waste, so this patchset tries to reduce it. With this patchset, we can
> > save 20 bytes per page dedicated to the page owner feature in some
> > configurations.
> 
> FWIW I like this. I have only glanced over those patches so I do not
> feel comfortable giving my a-b, but the approach is sensible and the
> memory savings are really attractive. Page owner is a really great
> debugging feature, so enabling it makes a lot of sense on production
> servers where wasting memory is a no-go.

OK, I missed that page_ext is only allocated if there is at least one
feature that requires it enabled. So normally there shouldn't be too
much wasted memory. Anyway, allocating per feature makes a lot of sense
regardless.
-- 
Michal Hocko
SUSE Labs
