* [PATCH v1 0/5] mm: place pages to the freelist tail when onlining and undoing isolation
@ 2020-09-28 18:21 David Hildenbrand
  2020-09-28 18:21 ` [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag David Hildenbrand
                   ` (4 more replies)
  0 siblings, 5 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-09-28 18:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, linux-hyperv, xen-devel, linux-acpi, Andrew Morton,
	David Hildenbrand, Alexander Duyck, Dave Hansen, Haiyang Zhang,
	K. Y. Srinivasan, Mel Gorman, Michael Ellerman, Michal Hocko,
	Mike Rapoport, Oscar Salvador, Scott Cheloha, Stephen Hemminger,
	Vlastimil Babka, Wei Liu, Wei Yang

When adding separate memory blocks via add_memory*() and onlining them
immediately, the metadata (especially the memmap) of the next block will be
placed onto one of the just added+onlined blocks. This creates a chain
of unmovable allocations: if the last memory block cannot get
offlined+removed, neither can any of the memory blocks it depends on. We
directly have unmovable allocations all over the place.

This can be observed quite easily using virtio-mem; however, it can also
be observed when using DIMMs. The freshly onlined pages will usually be
placed to the head of the freelists, meaning they will be allocated next,
usually turning the just-added memory immediately un-removable. Besides,
the fresh pages are cold, so preferring to allocate other pages (that might
be hot) also feels like the natural thing to do.

It also applies to the hyper-v balloon, the xen-balloon, and ppc64 dlpar: when
adding separate, successive memory blocks, each memory block will have
unmovable allocations on it - for example, gigantic pages will fail to
allocate.

While ZONE_NORMAL doesn't provide any guarantee that memory can get
offlined+removed again (any kind of fragmentation with unmovable
allocations is possible), there are many scenarios (hotplugging a lot of
memory, running a workload, hotunplugging some memory/as much as possible)
where we can offline+remove quite a lot with this patchset.

a) To visualize the problem, a very simple example:

Start a VM with 4GB and 8GB of virtio-mem memory:

 [root@localhost ~]# lsmem
 RANGE                                 SIZE  STATE REMOVABLE  BLOCK
 0x0000000000000000-0x00000000bfffffff   3G online       yes   0-23
 0x0000000100000000-0x000000033fffffff   9G online       yes 32-103

 Memory block size:       128M
 Total online memory:      12G
 Total offline memory:      0B

Then try to unplug as much as possible using virtio-mem. Observe which
memory blocks are still around. Without this patch set:

 [root@localhost ~]# lsmem
 RANGE                                  SIZE  STATE REMOVABLE   BLOCK
 0x0000000000000000-0x00000000bfffffff    3G online       yes    0-23
 0x0000000100000000-0x000000013fffffff    1G online       yes   32-39
 0x0000000148000000-0x000000014fffffff  128M online       yes      41
 0x0000000158000000-0x000000015fffffff  128M online       yes      43
 0x0000000168000000-0x000000016fffffff  128M online       yes      45
 0x0000000178000000-0x000000017fffffff  128M online       yes      47
 0x0000000188000000-0x0000000197ffffff  256M online       yes   49-50
 0x00000001a0000000-0x00000001a7ffffff  128M online       yes      52
 0x00000001b0000000-0x00000001b7ffffff  128M online       yes      54
 0x00000001c0000000-0x00000001c7ffffff  128M online       yes      56
 0x00000001d0000000-0x00000001d7ffffff  128M online       yes      58
 0x00000001e0000000-0x00000001e7ffffff  128M online       yes      60
 0x00000001f0000000-0x00000001f7ffffff  128M online       yes      62
 0x0000000200000000-0x0000000207ffffff  128M online       yes      64
 0x0000000210000000-0x0000000217ffffff  128M online       yes      66
 0x0000000220000000-0x0000000227ffffff  128M online       yes      68
 0x0000000230000000-0x0000000237ffffff  128M online       yes      70
 0x0000000240000000-0x0000000247ffffff  128M online       yes      72
 0x0000000250000000-0x0000000257ffffff  128M online       yes      74
 0x0000000260000000-0x0000000267ffffff  128M online       yes      76
 0x0000000270000000-0x0000000277ffffff  128M online       yes      78
 0x0000000280000000-0x0000000287ffffff  128M online       yes      80
 0x0000000290000000-0x0000000297ffffff  128M online       yes      82
 0x00000002a0000000-0x00000002a7ffffff  128M online       yes      84
 0x00000002b0000000-0x00000002b7ffffff  128M online       yes      86
 0x00000002c0000000-0x00000002c7ffffff  128M online       yes      88
 0x00000002d0000000-0x00000002d7ffffff  128M online       yes      90
 0x00000002e0000000-0x00000002e7ffffff  128M online       yes      92
 0x00000002f0000000-0x00000002f7ffffff  128M online       yes      94
 0x0000000300000000-0x0000000307ffffff  128M online       yes      96
 0x0000000310000000-0x0000000317ffffff  128M online       yes      98
 0x0000000320000000-0x0000000327ffffff  128M online       yes     100
 0x0000000330000000-0x000000033fffffff  256M online       yes 102-103

 Memory block size:       128M
 Total online memory:     8.1G
 Total offline memory:      0B

With this patch set:

 [root@localhost ~]# lsmem
 RANGE                                 SIZE  STATE REMOVABLE BLOCK
 0x0000000000000000-0x00000000bfffffff   3G online       yes  0-23
 0x0000000100000000-0x000000013fffffff   1G online       yes 32-39

 Memory block size:       128M
 Total online memory:       4G
 Total offline memory:      0B

All memory can get unplugged, and all memory blocks can get removed. Of
course, no workload ran and the system was basically idle, but it highlights
the issue - the fairly deterministic chain of unmovable allocations. When a
huge page for the 2MB memmap is needed, a just-onlined 4MB page will
be split. The remaining 2MB page will be used for the memmap of the next
memory block. So one memory block will hold the memmap of the two following
memory blocks. Finally, the pages of the last-onlined memory block will get
used for the next bigger allocations - if any such allocation is unmovable,
all dependent memory blocks cannot get unplugged and removed until that
allocation is gone.

Note that with bigger memory blocks (e.g., 256MB), *all* memory
blocks are dependent and none can get unplugged again!
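
To make the head-vs-tail distinction concrete, here is a minimal sketch
(not the actual kernel code - add_to_free_list()/add_to_free_list_tail()
in mm/page_alloc.c are the real helpers, and kernel-internal types are
assumed): a page placed at the head of a freelist is handed out on the
very next allocation, while a page placed at the tail stays unused for as
long as possible.

/*
 * Simplified sketch: the buddy freelists are plain list_heads per order
 * and migratetype; head vs. tail placement decides how soon a freed page
 * is allocated again.
 */
static void sketch_place_free_page(struct free_area *area, struct page *page,
				   int migratetype, bool to_tail)
{
	if (to_tail)
		/* allocated as late as possible - what we want for fresh pages */
		list_add_tail(&page->lru, &area->free_list[migratetype]);
	else
		/* allocated next - what happens without this series */
		list_add(&page->lru, &area->free_list[migratetype]);
}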

b) Experiment with memory intensive workload

I performed an experiment with an older version of this patch set
(before we used undo_isolate_page_range() in online_pages()):
Hotplug 56GB to a VM with an initial 4GB, onlining all memory to
ZONE_NORMAL right from the kernel when adding it. I then ran various
memory intensive workloads that consume most system memory for a total of
45 minutes. Once finished, I tried to unplug as much memory as possible.

With this change, I am able to remove via virtio-mem (adding individual
128MB memory blocks) 413 out of 448 added memory blocks, and via individual
(256MB) DIMMs 380 out of 448 added memory blocks. (I don't have any numbers
without this patchset, but looking at the above example, it's at most half
of the 448 memory blocks for virtio-mem, and most probably none for DIMMs.)

Again, there are workloads that might behave very differently due to the
nature of ZONE_NORMAL.

This change also affects (besides memory onlining):
- Other users of undo_isolate_page_range(): Pages are always placed to the
  tail.
-- When memory offlining fails
-- When memory isolation fails after having isolated some pageblocks
-- When alloc_contig_range() either succeeds or fails
- Other users of __putback_isolated_page(): Pages are always placed to the
  tail.
-- Free page reporting
- Other users of __free_pages_core()
-- AFAIK, any memory that is getting exposed to the buddy during boot.
   IIUC we will now usually allocate memory from lower addresses within
   a zone first (especially during boot).
- Other users of generic_online_page()
-- Hyper-V balloon

RFC -> v1:
- Tweak some patch descriptions
- "mm/page_alloc: place pages to tail in __putback_isolated_page()"
-- FOP_TO_TAIL now has higher precedence than page shuffling
-- Add a note that nothing should rely on FOP_TO_TAIL for correctness
- "mm/page_alloc: always move pages to the tail of the freelist in
   unset_migratetype_isolate()"
-- Use "bool" parameter for move_freepages_block() as requested
- "mm/page_alloc: place pages to tail in __free_pages_core()"
-- Eliminate set_page_refcounted() + page_ref_dec() and add a comment
- "mm/memory_hotplug: update comment regarding zone shuffling"
-- Added

David Hildenbrand (5):
  mm/page_alloc: convert "report" flag of __free_one_page() to a proper
    flag
  mm/page_alloc: place pages to tail in __putback_isolated_page()
  mm/page_alloc: always move pages to the tail of the freelist in
    unset_migratetype_isolate()
  mm/page_alloc: place pages to tail in __free_pages_core()
  mm/memory_hotplug: update comment regarding zone shuffling

 include/linux/page-isolation.h |   4 +-
 mm/memory_hotplug.c            |  11 ++--
 mm/page_alloc.c                | 114 ++++++++++++++++++++++++---------
 mm/page_isolation.c            |  12 +++-
 4 files changed, 97 insertions(+), 44 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
  2020-09-28 18:21 [PATCH v1 0/5] mm: place pages to the freelist tail when onlining and undoing isolation David Hildenbrand
@ 2020-09-28 18:21 ` David Hildenbrand
  2020-09-28 20:11     ` Pankaj Gupta
                     ` (3 more replies)
  2020-09-28 18:21 ` [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page() David Hildenbrand
                   ` (3 subsequent siblings)
  4 siblings, 4 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-09-28 18:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, linux-hyperv, xen-devel, linux-acpi, Andrew Morton,
	David Hildenbrand, Alexander Duyck, Vlastimil Babka,
	Oscar Salvador, Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang,
	Mike Rapoport

Let's prepare for additional flags and avoid long parameter lists of bools.
Follow-up patches will also make use of the flags in __free_pages_ok(),
however, I wasn't able to come up with a better name for the type - should
be good enough for internal purposes.
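
A minimal sketch of how such a __bitwise flag type behaves (the
definitions below simply mirror the patch; the sparse behavior described
is the generic one for __bitwise types like gfp_t, not something
introduced here): with sparse, passing a plain integer or bool where a
fop_t is expected gets flagged, nudging callers towards the named flags.

typedef int __bitwise fop_t;

#define FOP_NONE		((__force fop_t)0)
#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))	/* BIT() from linux/bits.h */

/* Sketch of a consumer: only notify free page reporting when not asked to skip. */
static bool sketch_should_notify(fop_t fop_flags)
{
	return !(fop_flags & FOP_SKIP_REPORT_NOTIFY);
}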

Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mike Rapoport <rppt@kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/page_alloc.c | 28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df90e3654f97..daab90e960fe 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -77,6 +77,18 @@
 #include "shuffle.h"
 #include "page_reporting.h"
 
+/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
+typedef int __bitwise fop_t;
+
+/* No special request */
+#define FOP_NONE		((__force fop_t)0)
+
+/*
+ * Skip free page reporting notification for the (possibly merged) page. (will
+ * *not* mark the page reported, only skip the notification).
+ */
+#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -948,10 +960,9 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
  * -- nyc
  */
 
-static inline void __free_one_page(struct page *page,
-		unsigned long pfn,
-		struct zone *zone, unsigned int order,
-		int migratetype, bool report)
+static inline void __free_one_page(struct page *page, unsigned long pfn,
+				   struct zone *zone, unsigned int order,
+				   int migratetype, fop_t fop_flags)
 {
 	struct capture_control *capc = task_capc(zone);
 	unsigned long buddy_pfn;
@@ -1038,7 +1049,7 @@ static inline void __free_one_page(struct page *page,
 		add_to_free_list(page, zone, order, migratetype);
 
 	/* Notify page reporting subsystem of freed page */
-	if (report)
+	if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
 		page_reporting_notify_free(order);
 }
 
@@ -1379,7 +1390,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		if (unlikely(isolated_pageblocks))
 			mt = get_pageblock_migratetype(page);
 
-		__free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
+		__free_one_page(page, page_to_pfn(page), zone, 0, mt, FOP_NONE);
 		trace_mm_page_pcpu_drain(page, 0, mt);
 	}
 	spin_unlock(&zone->lock);
@@ -1395,7 +1406,7 @@ static void free_one_page(struct zone *zone,
 		is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
-	__free_one_page(page, pfn, zone, order, migratetype, true);
+	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
 	spin_unlock(&zone->lock);
 }
 
@@ -3288,7 +3299,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 	lockdep_assert_held(&zone->lock);
 
 	/* Return isolated page to tail of freelist. */
-	__free_one_page(page, page_to_pfn(page), zone, order, mt, false);
+	__free_one_page(page, page_to_pfn(page), zone, order, mt,
+			FOP_SKIP_REPORT_NOTIFY);
 }
 
 /*
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()
  2020-09-28 18:21 [PATCH v1 0/5] mm: place pages to the freelist tail when onlining and undoing isolation David Hildenbrand
  2020-09-28 18:21 ` [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag David Hildenbrand
@ 2020-09-28 18:21 ` David Hildenbrand
  2020-09-28 20:38     ` Pankaj Gupta
                     ` (2 more replies)
  2020-09-28 18:21 ` [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate() David Hildenbrand
                   ` (2 subsequent siblings)
  4 siblings, 3 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-09-28 18:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, linux-hyperv, xen-devel, linux-acpi, Andrew Morton,
	David Hildenbrand, Alexander Duyck, Oscar Salvador, Mel Gorman,
	Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang,
	Mike Rapoport, Scott Cheloha, Michael Ellerman

__putback_isolated_page() already documents that pages will be placed to
the tail of the freelist - this is, however, not the case for
"order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
the case for all existing users.

This change affects two users:
- free page reporting
- page isolation, when undoing the isolation (including memory onlining).

This behavior is desirable for pages that haven't really been touched
lately, so exactly the two users that don't actually read/write page
content, but rather move untouched pages.

The new behavior is especially desirable for memory onlining, where we
allow allocation of newly onlined pages via undo_isolate_page_range()
in online_pages(). Right now, we always place them to the head of the
free list, resulting in undesirable behavior: Assume we add
individual memory chunks via add_memory() and online them right away to
the NORMAL zone. We create a dependency chain of unmovable allocations
e.g., via the memmap. The memmap of the next chunk will be placed onto
previous chunks - if the last block cannot get offlined+removed, all
dependent ones cannot get offlined+removed. While this can already be
observed with individual DIMMs, it's more of an issue for virtio-mem
(and I suspect also ppc DLPAR).

Document that this should only be used for optimizations, and that no code
should rely on this behavior for correctness (in case the order of the
freelists ever changes).

We won't care about page shuffling: memory onlining already properly
shuffles after onlining. Free page reporting doesn't care about
physically contiguous ranges, and there are already cases where page
isolation will simply move (physically close) free pages to (currently)
the head of the freelists via move_freepages_block() instead of
shuffling. If this ever becomes relevant, we should shuffle the whole
zone when undoing isolation of larger ranges, and after
free_contig_range().

Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/page_alloc.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index daab90e960fe..9e3ed4a6f69a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -89,6 +89,18 @@ typedef int __bitwise fop_t;
  */
 #define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
 
+/*
+ * Place the (possibly merged) page to the tail of the freelist. Will ignore
+ * page shuffling (relevant code - e.g., memory onlining - is expected to
+ * shuffle the whole zone).
+ *
+ * Note: No code should rely onto this flag for correctness - it's purely
+ *       to allow for optimizations when handing back either fresh pages
+ *       (memory onlining) or untouched pages (page isolation, free page
+ *       reporting).
+ */
+#define FOP_TO_TAIL		((__force fop_t)BIT(1))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -1038,7 +1050,9 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
 done_merging:
 	set_page_order(page, order);
 
-	if (is_shuffle_order(order))
+	if (fop_flags & FOP_TO_TAIL)
+		to_tail = true;
+	else if (is_shuffle_order(order))
 		to_tail = shuffle_pick_tail();
 	else
 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
@@ -3300,7 +3314,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 
 	/* Return isolated page to tail of freelist. */
 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
-			FOP_SKIP_REPORT_NOTIFY);
+			FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
 }
 
 /*
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
  2020-09-28 18:21 [PATCH v1 0/5] mm: place pages to the freelist tail when onlining and undoing isolation David Hildenbrand
  2020-09-28 18:21 ` [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag David Hildenbrand
  2020-09-28 18:21 ` [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page() David Hildenbrand
@ 2020-09-28 18:21 ` David Hildenbrand
  2020-09-28 20:55     ` Pankaj Gupta
                     ` (2 more replies)
  2020-09-28 18:21 ` [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core() David Hildenbrand
  2020-09-28 18:21 ` [PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone shuffling David Hildenbrand
  4 siblings, 3 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-09-28 18:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, linux-hyperv, xen-devel, linux-acpi, Andrew Morton,
	David Hildenbrand, Oscar Salvador, Alexander Duyck, Mel Gorman,
	Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang,
	Mike Rapoport, Scott Cheloha, Michael Ellerman

Page isolation doesn't actually touch the pages, it simply isolates
pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.

We already place pages to the tail of the freelists when undoing
isolation via __putback_isolated_page(), let's do it in any case
(e.g., if order <= pageblock_order) and document the behavior.

Add a "to_tail" parameter to move_freepages_block() but introduce a
a new move_to_free_list_tail() - similar to add_to_free_list_tail().

This change results in all pages getting onlined via online_pages() to
be placed to the tail of the freelist.

Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/linux/page-isolation.h |  4 ++--
 mm/page_alloc.c                | 35 +++++++++++++++++++++++-----------
 mm/page_isolation.c            | 12 +++++++++---
 3 files changed, 35 insertions(+), 16 deletions(-)

diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
index 572458016331..3eca9b3c5305 100644
--- a/include/linux/page-isolation.h
+++ b/include/linux/page-isolation.h
@@ -36,8 +36,8 @@ static inline bool is_migrate_isolate(int migratetype)
 struct page *has_unmovable_pages(struct zone *zone, struct page *page,
 				 int migratetype, int flags);
 void set_pageblock_migratetype(struct page *page, int migratetype);
-int move_freepages_block(struct zone *zone, struct page *page,
-				int migratetype, int *num_movable);
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
+			 bool to_tail, int *num_movable);
 
 /*
  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 9e3ed4a6f69a..d5a5f528b8ca 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -905,6 +905,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
 	list_move(&page->lru, &area->free_list[migratetype]);
 }
 
+/* Used for pages which are on another list */
+static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
+					  unsigned int order, int migratetype)
+{
+	struct free_area *area = &zone->free_area[order];
+
+	list_move_tail(&page->lru, &area->free_list[migratetype]);
+}
+
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
 					   unsigned int order)
 {
@@ -2338,9 +2347,9 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
-static int move_freepages(struct zone *zone,
-			  struct page *start_page, struct page *end_page,
-			  int migratetype, int *num_movable)
+static int move_freepages(struct zone *zone, struct page *start_page,
+			  struct page *end_page, int migratetype,
+			  bool to_tail, int *num_movable)
 {
 	struct page *page;
 	unsigned int order;
@@ -2371,7 +2380,10 @@ static int move_freepages(struct zone *zone,
 		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
 
 		order = page_order(page);
-		move_to_free_list(page, zone, order, migratetype);
+		if (to_tail)
+			move_to_free_list_tail(page, zone, order, migratetype);
+		else
+			move_to_free_list(page, zone, order, migratetype);
 		page += 1 << order;
 		pages_moved += 1 << order;
 	}
@@ -2379,8 +2391,8 @@ static int move_freepages(struct zone *zone,
 	return pages_moved;
 }
 
-int move_freepages_block(struct zone *zone, struct page *page,
-				int migratetype, int *num_movable)
+int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
+			 bool to_tail, int *num_movable)
 {
 	unsigned long start_pfn, end_pfn;
 	struct page *start_page, *end_page;
@@ -2401,7 +2413,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
 		return 0;
 
 	return move_freepages(zone, start_page, end_page, migratetype,
-								num_movable);
+			      to_tail, num_movable);
 }
 
 static void change_pageblock_range(struct page *pageblock_page,
@@ -2526,8 +2538,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
 	if (!whole_block)
 		goto single_page;
 
-	free_pages = move_freepages_block(zone, page, start_type,
-						&movable_pages);
+	free_pages = move_freepages_block(zone, page, start_type, false,
+					  &movable_pages);
 	/*
 	 * Determine how many pages are compatible with our allocation.
 	 * For movable allocation, it's the number of movable pages which
@@ -2635,7 +2647,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
 	    && !is_migrate_cma(mt)) {
 		zone->nr_reserved_highatomic += pageblock_nr_pages;
 		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
-		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
+		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, false,
+				     NULL);
 	}
 
 out_unlock:
@@ -2711,7 +2724,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
 			 */
 			set_pageblock_migratetype(page, ac->migratetype);
 			ret = move_freepages_block(zone, page, ac->migratetype,
-									NULL);
+						   false, NULL);
 			if (ret) {
 				spin_unlock_irqrestore(&zone->lock, flags);
 				return ret;
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index abfe26ad59fd..de44e1329706 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -45,7 +45,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
 		set_pageblock_migratetype(page, MIGRATE_ISOLATE);
 		zone->nr_isolate_pageblock++;
 		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
-									NULL);
+						false, NULL);
 
 		__mod_zone_freepage_state(zone, -nr_pages, mt);
 		spin_unlock_irqrestore(&zone->lock, flags);
@@ -83,7 +83,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * Because freepage with more than pageblock_order on isolated
 	 * pageblock is restricted to merge due to freepage counting problem,
 	 * it is possible that there is free buddy page.
-	 * move_freepages_block() doesn't care of merge so we need other
+	 * move_freepages_block() don't care about merging, so we need another
 	 * approach in order to merge them. Isolation and free will make
 	 * these pages to be merged.
 	 */
@@ -106,9 +106,15 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * If we isolate freepage with more than pageblock_order, there
 	 * should be no freepage in the range, so we could avoid costly
 	 * pageblock scanning for freepage moving.
+	 *
+	 * We didn't actually touch any of the isolated pages, so place them
+	 * to the tail of the freelist. This is an optimization for memory
+	 * onlining - just onlined memory won't immediately be considered for
+	 * allocation.
 	 */
 	if (!isolated_page) {
-		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
+		nr_pages = move_freepages_block(zone, page, migratetype, true,
+						NULL);
 		__mod_zone_freepage_state(zone, nr_pages, migratetype);
 	}
 	set_pageblock_migratetype(page, migratetype);
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core()
  2020-09-28 18:21 [PATCH v1 0/5] mm: place pages to the freelist tail when onlining and undoing isolation David Hildenbrand
                   ` (2 preceding siblings ...)
  2020-09-28 18:21 ` [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate() David Hildenbrand
@ 2020-09-28 18:21 ` David Hildenbrand
  2020-09-28 20:33     ` Pankaj Gupta
                     ` (2 more replies)
  2020-09-28 18:21 ` [PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone shuffling David Hildenbrand
  4 siblings, 3 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-09-28 18:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, linux-hyperv, xen-devel, linux-acpi, Andrew Morton,
	David Hildenbrand, Vlastimil Babka, Oscar Salvador,
	Alexander Duyck, Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang,
	Mike Rapoport, K. Y. Srinivasan, Haiyang Zhang,
	Stephen Hemminger, Wei Liu

__free_pages_core() is used when exposing fresh memory to the buddy
during system boot and when onlining memory in generic_online_page().

generic_online_page() is used in two cases:

1. Direct memory onlining in online_pages().
2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
   balloon and virtio-mem), when parts of a section are kept
   fake-offline to be fake-onlined later on.

In 1, we already place pages to the tail of the freelist. Pages will be
freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
via undo_isolate_page_range().

In 2, we currently don't implement a proper rule. In case of virtio-mem,
where we currently always online MAX_ORDER - 1 pages, the pages will be
placed to the HEAD of the freelist - undesirable. While the hyper-v
balloon calls generic_online_page() with single pages, usually it will
call it on successive single pages in a larger block.

The pages are fresh, so place them to the tail of the freelists and avoid
the PCP. In __free_pages_core(), remove the now superfluous call to
set_page_refcounted() and add a comment regarding page initialization and
the refcount.

Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
is usually of limited use in virtualized environments), we might want to
shuffle after a sequence of generic_online_page() calls in the
relevant callers.
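
As a rough illustration of case 2 (hypothetical driver code, not part of
this series): a memory-ballooning-like driver hands a fake-offline chunk
back to the buddy roughly like this - with this patch, __free_pages_core()
then places the fresh pages at the tail of the freelists:

/*
 * Hypothetical sketch of deferred onlining: a driver such as virtio-mem
 * keeps parts of a section fake-offline and later fake-onlines a
 * MAX_ORDER - 1 chunk by handing it to the buddy.
 */
static void sketch_fake_online_chunk(unsigned long pfn)
{
	generic_online_page(pfn_to_page(pfn), MAX_ORDER - 1);
}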

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/page_alloc.c | 37 ++++++++++++++++++++++++-------------
 1 file changed, 24 insertions(+), 13 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d5a5f528b8ca..8a2134fe9947 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -270,7 +270,8 @@ bool pm_suspended_storage(void)
 unsigned int pageblock_order __read_mostly;
 #endif
 
-static void __free_pages_ok(struct page *page, unsigned int order);
+static void __free_pages_ok(struct page *page, unsigned int order,
+			    fop_t fop_flags);
 
 /*
  * results with 256, 32 in the lowmem_reserve sysctl:
@@ -682,7 +683,7 @@ static void bad_page(struct page *page, const char *reason)
 void free_compound_page(struct page *page)
 {
 	mem_cgroup_uncharge(page);
-	__free_pages_ok(page, compound_order(page));
+	__free_pages_ok(page, compound_order(page), FOP_NONE);
 }
 
 void prep_compound_page(struct page *page, unsigned int order)
@@ -1419,17 +1420,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	spin_unlock(&zone->lock);
 }
 
-static void free_one_page(struct zone *zone,
-				struct page *page, unsigned long pfn,
-				unsigned int order,
-				int migratetype)
+static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn,
+			  unsigned int order, int migratetype, fop_t fop_flags)
 {
 	spin_lock(&zone->lock);
 	if (unlikely(has_isolate_pageblock(zone) ||
 		is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
-	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
+	__free_one_page(page, pfn, zone, order, migratetype, fop_flags);
 	spin_unlock(&zone->lock);
 }
 
@@ -1507,7 +1506,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
 	}
 }
 
-static void __free_pages_ok(struct page *page, unsigned int order)
+static void __free_pages_ok(struct page *page, unsigned int order,
+			    fop_t fop_flags)
 {
 	unsigned long flags;
 	int migratetype;
@@ -1519,7 +1519,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
 	migratetype = get_pfnblock_migratetype(page, pfn);
 	local_irq_save(flags);
 	__count_vm_events(PGFREE, 1 << order);
-	free_one_page(page_zone(page), page, pfn, order, migratetype);
+	free_one_page(page_zone(page), page, pfn, order, migratetype,
+		      fop_flags);
 	local_irq_restore(flags);
 }
 
@@ -1529,6 +1530,11 @@ void __free_pages_core(struct page *page, unsigned int order)
 	struct page *p = page;
 	unsigned int loop;
 
+	/*
+	 * When initializing the memmap, init_single_page() sets the refcount
+	 * of all pages to 1 ("allocated"/"not free"). We have to set the
+	 * refcount of all involved pages to 0.
+	 */
 	prefetchw(p);
 	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
 		prefetchw(p + 1);
@@ -1539,8 +1545,12 @@ void __free_pages_core(struct page *page, unsigned int order)
 	set_page_count(p, 0);
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
-	set_page_refcounted(page);
-	__free_pages(page, order);
+
+	/*
+	 * Bypass PCP and place fresh pages right to the tail, primarily
+	 * relevant for memory onlining.
+	 */
+	__free_pages_ok(page, order, FOP_TO_TAIL);
 }
 
 #ifdef CONFIG_NEED_MULTIPLE_NODES
@@ -3171,7 +3181,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
 	 */
 	if (migratetype >= MIGRATE_PCPTYPES) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
-			free_one_page(zone, page, pfn, 0, migratetype);
+			free_one_page(zone, page, pfn, 0, migratetype,
+				      FOP_NONE);
 			return;
 		}
 		migratetype = MIGRATE_MOVABLE;
@@ -5063,7 +5074,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
 	if (order == 0)		/* Via pcp? */
 		free_unref_page(page);
 	else
-		__free_pages_ok(page, order);
+		__free_pages_ok(page, order, FOP_NONE);
 }
 
 void __free_pages(struct page *page, unsigned int order)
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone shuffling
  2020-09-28 18:21 [PATCH v1 0/5] mm: place pages to the freelist tail when onlining and undoing isolation David Hildenbrand
                   ` (3 preceding siblings ...)
  2020-09-28 18:21 ` [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core() David Hildenbrand
@ 2020-09-28 18:21 ` David Hildenbrand
  2020-09-29  9:40   ` Wei Yang
  2020-10-02 13:41   ` Michal Hocko
  4 siblings, 2 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-09-28 18:21 UTC (permalink / raw)
  To: linux-kernel
  Cc: linux-mm, linux-hyperv, xen-devel, linux-acpi, Andrew Morton,
	David Hildenbrand, Alexander Duyck, Mel Gorman, Michal Hocko,
	Dave Hansen, Vlastimil Babka, Wei Yang, Oscar Salvador,
	Mike Rapoport

As we no longer shuffle via generic_online_page() and when undoing
isolation, we can simplify the comment.

We now effectively shuffle only once (properly) when onlining new
memory.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mike Rapoport <rppt@kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9db80ee29caa..c589bd8801bb 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -859,13 +859,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
 
 	/*
-	 * When exposing larger, physically contiguous memory areas to the
-	 * buddy, shuffling in the buddy (when freeing onlined pages, putting
-	 * them either to the head or the tail of the freelist) is only helpful
-	 * for maintaining the shuffle, but not for creating the initial
-	 * shuffle. Shuffle the whole zone to make sure the just onlined pages
-	 * are properly distributed across the whole freelist. Make sure to
-	 * shuffle once pageblocks are no longer isolated.
+	 * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
+	 * the tail of the freelist when undoing isolation). Shuffle the whole
+	 * zone to make sure the just onlined pages are properly distributed
+	 * across the whole freelist - to create an initial shuffle.
 	 */
 	shuffle_zone(zone);
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
  2020-09-28 18:21 ` [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag David Hildenbrand
@ 2020-09-28 20:11     ` Pankaj Gupta
  2020-09-29  8:58   ` Wei Yang
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 36+ messages in thread
From: Pankaj Gupta @ 2020-09-28 20:11 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: LKML, Linux MM, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Vlastimil Babka, Oscar Salvador,
	Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Mike Rapoport

> Let's prepare for additional flags and avoid long parameter lists of bools.
> Follow-up patches will also make use of the flags in __free_pages_ok(),
> however, I wasn't able to come up with a better name for the type - should
> be good enough for internal purposes.
>
> Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/page_alloc.c | 28 ++++++++++++++++++++--------
>  1 file changed, 20 insertions(+), 8 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index df90e3654f97..daab90e960fe 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -77,6 +77,18 @@
>  #include "shuffle.h"
>  #include "page_reporting.h"
>
> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
> +typedef int __bitwise fop_t;
> +
> +/* No special request */
> +#define FOP_NONE               ((__force fop_t)0)
> +
> +/*
> + * Skip free page reporting notification for the (possibly merged) page. (will
> + * *not* mark the page reported, only skip the notification).
> + */
> +#define FOP_SKIP_REPORT_NOTIFY ((__force fop_t)BIT(0))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION   (8)
> @@ -948,10 +960,9 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
>   * -- nyc
>   */
>
> -static inline void __free_one_page(struct page *page,
> -               unsigned long pfn,
> -               struct zone *zone, unsigned int order,
> -               int migratetype, bool report)
> +static inline void __free_one_page(struct page *page, unsigned long pfn,
> +                                  struct zone *zone, unsigned int order,
> +                                  int migratetype, fop_t fop_flags)
>  {
>         struct capture_control *capc = task_capc(zone);
>         unsigned long buddy_pfn;
> @@ -1038,7 +1049,7 @@ static inline void __free_one_page(struct page *page,
>                 add_to_free_list(page, zone, order, migratetype);
>
>         /* Notify page reporting subsystem of freed page */
> -       if (report)
> +       if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
>                 page_reporting_notify_free(order);
>  }
>
> @@ -1379,7 +1390,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>                 if (unlikely(isolated_pageblocks))
>                         mt = get_pageblock_migratetype(page);
>
> -               __free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
> +               __free_one_page(page, page_to_pfn(page), zone, 0, mt, FOP_NONE);
>                 trace_mm_page_pcpu_drain(page, 0, mt);
>         }
>         spin_unlock(&zone->lock);
> @@ -1395,7 +1406,7 @@ static void free_one_page(struct zone *zone,
>                 is_migrate_isolate(migratetype))) {
>                 migratetype = get_pfnblock_migratetype(page, pfn);
>         }
> -       __free_one_page(page, pfn, zone, order, migratetype, true);
> +       __free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
>         spin_unlock(&zone->lock);
>  }
>
> @@ -3288,7 +3299,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
>         lockdep_assert_held(&zone->lock);
>
>         /* Return isolated page to tail of freelist. */
> -       __free_one_page(page, page_to_pfn(page), zone, order, mt, false);
> +       __free_one_page(page, page_to_pfn(page), zone, order, mt,
> +                       FOP_SKIP_REPORT_NOTIFY);
>  }

Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core()
  2020-09-28 18:21 ` [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core() David Hildenbrand
  2020-09-28 20:33     ` Pankaj Gupta
@ 2020-09-28 20:33     ` Pankaj Gupta
  2020-10-02 13:41   ` Michal Hocko
  2 siblings, 0 replies; 36+ messages in thread
From: Pankaj Gupta @ 2020-09-28 20:33 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: LKML, Linux MM, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Vlastimil Babka, Oscar Salvador, Alexander Duyck,
	Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Mike Rapoport,
	K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu

> __free_pages_core() is used when exposing fresh memory to the buddy
> during system boot and when onlining memory in generic_online_page().
>
> generic_online_page() is used in two cases:
>
> 1. Direct memory onlining in online_pages().
> 2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
>    balloon and virtio-mem), when parts of a section are kept
>    fake-offline to be fake-onlined later on.
>
> In 1, we already place pages to the tail of the freelist. Pages will be
> freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
> via undo_isolate_page_range().
>
> In 2, we currently don't implement a proper rule. In case of virtio-mem,
> where we currently always online MAX_ORDER - 1 pages, the pages will be
> placed to the HEAD of the freelist - undesirable. While the hyper-v
> balloon calls generic_online_page() with single pages, usually it will
> call it on successive single pages in a larger block.
>
> The pages are fresh, so place them to the tail of the freelists and avoid
> the PCP. In __free_pages_core(), remove the now superfluous call to
> set_page_refcounted() and add a comment regarding page initialization and
> the refcount.
>
> Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
> is usually of limited use in virtualized environments), we might want to
> shuffle after a sequence of generic_online_page() calls in the
> relevant callers.
>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
> Cc: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: Stephen Hemminger <sthemmin@microsoft.com>
> Cc: Wei Liu <wei.liu@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/page_alloc.c | 37 ++++++++++++++++++++++++-------------
>  1 file changed, 24 insertions(+), 13 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d5a5f528b8ca..8a2134fe9947 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -270,7 +270,8 @@ bool pm_suspended_storage(void)
>  unsigned int pageblock_order __read_mostly;
>  #endif
>
> -static void __free_pages_ok(struct page *page, unsigned int order);
> +static void __free_pages_ok(struct page *page, unsigned int order,
> +                           fop_t fop_flags);
>
>  /*
>   * results with 256, 32 in the lowmem_reserve sysctl:
> @@ -682,7 +683,7 @@ static void bad_page(struct page *page, const char *reason)
>  void free_compound_page(struct page *page)
>  {
>         mem_cgroup_uncharge(page);
> -       __free_pages_ok(page, compound_order(page));
> +       __free_pages_ok(page, compound_order(page), FOP_NONE);
>  }
>
>  void prep_compound_page(struct page *page, unsigned int order)
> @@ -1419,17 +1420,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>         spin_unlock(&zone->lock);
>  }
>
> -static void free_one_page(struct zone *zone,
> -                               struct page *page, unsigned long pfn,
> -                               unsigned int order,
> -                               int migratetype)
> +static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn,
> +                         unsigned int order, int migratetype, fop_t fop_flags)
>  {
>         spin_lock(&zone->lock);
>         if (unlikely(has_isolate_pageblock(zone) ||
>                 is_migrate_isolate(migratetype))) {
>                 migratetype = get_pfnblock_migratetype(page, pfn);
>         }
> -       __free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
> +       __free_one_page(page, pfn, zone, order, migratetype, fop_flags);
>         spin_unlock(&zone->lock);
>  }
>
> @@ -1507,7 +1506,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
>         }
>  }
>
> -static void __free_pages_ok(struct page *page, unsigned int order)
> +static void __free_pages_ok(struct page *page, unsigned int order,
> +                           fop_t fop_flags)
>  {
>         unsigned long flags;
>         int migratetype;
> @@ -1519,7 +1519,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
>         migratetype = get_pfnblock_migratetype(page, pfn);
>         local_irq_save(flags);
>         __count_vm_events(PGFREE, 1 << order);
> -       free_one_page(page_zone(page), page, pfn, order, migratetype);
> +       free_one_page(page_zone(page), page, pfn, order, migratetype,
> +                     fop_flags);
>         local_irq_restore(flags);
>  }
>
> @@ -1529,6 +1530,11 @@ void __free_pages_core(struct page *page, unsigned int order)
>         struct page *p = page;
>         unsigned int loop;
>
> +       /*
> +        * When initializing the memmap, init_single_page() sets the refcount
> +        * of all pages to 1 ("allocated"/"not free"). We have to set the
> +        * refcount of all involved pages to 0.
> +        */
>         prefetchw(p);
>         for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
>                 prefetchw(p + 1);
> @@ -1539,8 +1545,12 @@ void __free_pages_core(struct page *page, unsigned int order)
>         set_page_count(p, 0);
>
>         atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
> -       set_page_refcounted(page);
> -       __free_pages(page, order);
> +
> +       /*
> +        * Bypass PCP and place fresh pages right to the tail, primarily
> +        * relevant for memory onlining.
> +        */
> +       __free_pages_ok(page, order, FOP_TO_TAIL);
>  }
>
>  #ifdef CONFIG_NEED_MULTIPLE_NODES
> @@ -3171,7 +3181,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>          */
>         if (migratetype >= MIGRATE_PCPTYPES) {
>                 if (unlikely(is_migrate_isolate(migratetype))) {
> -                       free_one_page(zone, page, pfn, 0, migratetype);
> +                       free_one_page(zone, page, pfn, 0, migratetype,
> +                                     FOP_NONE);
>                         return;
>                 }
>                 migratetype = MIGRATE_MOVABLE;
> @@ -5063,7 +5074,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
>         if (order == 0)         /* Via pcp? */
>                 free_unref_page(page);
>         else
> -               __free_pages_ok(page, order);
> +               __free_pages_ok(page, order, FOP_NONE);
>  }
>
>  void __free_pages(struct page *page, unsigned int order)

Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core()
@ 2020-09-28 20:33     ` Pankaj Gupta
  0 siblings, 0 replies; 36+ messages in thread
From: Pankaj Gupta @ 2020-09-28 20:33 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: LKML, Linux MM, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Vlastimil Babka, Oscar Salvador, Alexander Duyck,
	Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Mike Rapoport,
	K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu

> __free_pages_core() is used when exposing fresh memory to the buddy
> during system boot and when onlining memory in generic_online_page().
>
> generic_online_page() is used in two cases:
>
> 1. Direct memory onlining in online_pages().
> 2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
>    balloon and virtio-mem), when parts of a section are kept
>    fake-offline to be fake-onlined later on.
>
> In 1, we already place pages to the tail of the freelist. Pages will be
> freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
> via undo_isolate_page_range().
>
> In 2, we currently don't implement a proper rule. In case of virtio-mem,
> where we currently always online MAX_ORDER - 1 pages, the pages will be
> placed to the HEAD of the freelist - undesirable. While the hyper-v
> balloon calls generic_online_page() with single pages, usually it will
> call it on successive single pages in a larger block.
>
> The pages are fresh, so place them to the tail of the freelists and avoid
> the PCP. In __free_pages_core(), remove the now superfluous call to
> set_page_refcounted() and add a comment regarding page initialization and
> the refcount.
>
> Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
> is usually of limited use in virtualized environments), we might want to
> shuffle after a sequence of generic_online_page() calls in the
> relevant callers.
>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
> Cc: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: Stephen Hemminger <sthemmin@microsoft.com>
> Cc: Wei Liu <wei.liu@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/page_alloc.c | 37 ++++++++++++++++++++++++-------------
>  1 file changed, 24 insertions(+), 13 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d5a5f528b8ca..8a2134fe9947 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -270,7 +270,8 @@ bool pm_suspended_storage(void)
>  unsigned int pageblock_order __read_mostly;
>  #endif
>
> -static void __free_pages_ok(struct page *page, unsigned int order);
> +static void __free_pages_ok(struct page *page, unsigned int order,
> +                           fop_t fop_flags);
>
>  /*
>   * results with 256, 32 in the lowmem_reserve sysctl:
> @@ -682,7 +683,7 @@ static void bad_page(struct page *page, const char *reason)
>  void free_compound_page(struct page *page)
>  {
>         mem_cgroup_uncharge(page);
> -       __free_pages_ok(page, compound_order(page));
> +       __free_pages_ok(page, compound_order(page), FOP_NONE);
>  }
>
>  void prep_compound_page(struct page *page, unsigned int order)
> @@ -1419,17 +1420,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>         spin_unlock(&zone->lock);
>  }
>
> -static void free_one_page(struct zone *zone,
> -                               struct page *page, unsigned long pfn,
> -                               unsigned int order,
> -                               int migratetype)
> +static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn,
> +                         unsigned int order, int migratetype, fop_t fop_flags)
>  {
>         spin_lock(&zone->lock);
>         if (unlikely(has_isolate_pageblock(zone) ||
>                 is_migrate_isolate(migratetype))) {
>                 migratetype = get_pfnblock_migratetype(page, pfn);
>         }
> -       __free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
> +       __free_one_page(page, pfn, zone, order, migratetype, fop_flags);
>         spin_unlock(&zone->lock);
>  }
>
> @@ -1507,7 +1506,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
>         }
>  }
>
> -static void __free_pages_ok(struct page *page, unsigned int order)
> +static void __free_pages_ok(struct page *page, unsigned int order,
> +                           fop_t fop_flags)
>  {
>         unsigned long flags;
>         int migratetype;
> @@ -1519,7 +1519,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
>         migratetype = get_pfnblock_migratetype(page, pfn);
>         local_irq_save(flags);
>         __count_vm_events(PGFREE, 1 << order);
> -       free_one_page(page_zone(page), page, pfn, order, migratetype);
> +       free_one_page(page_zone(page), page, pfn, order, migratetype,
> +                     fop_flags);
>         local_irq_restore(flags);
>  }
>
> @@ -1529,6 +1530,11 @@ void __free_pages_core(struct page *page, unsigned int order)
>         struct page *p = page;
>         unsigned int loop;
>
> +       /*
> +        * When initializing the memmap, init_single_page() sets the refcount
> +        * of all pages to 1 ("allocated"/"not free"). We have to set the
> +        * refcount of all involved pages to 0.
> +        */
>         prefetchw(p);
>         for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
>                 prefetchw(p + 1);
> @@ -1539,8 +1545,12 @@ void __free_pages_core(struct page *page, unsigned int order)
>         set_page_count(p, 0);
>
>         atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
> -       set_page_refcounted(page);
> -       __free_pages(page, order);
> +
> +       /*
> +        * Bypass PCP and place fresh pages right to the tail, primarily
> +        * relevant for memory onlining.
> +        */
> +       __free_pages_ok(page, order, FOP_TO_TAIL);
>  }
>
>  #ifdef CONFIG_NEED_MULTIPLE_NODES
> @@ -3171,7 +3181,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>          */
>         if (migratetype >= MIGRATE_PCPTYPES) {
>                 if (unlikely(is_migrate_isolate(migratetype))) {
> -                       free_one_page(zone, page, pfn, 0, migratetype);
> +                       free_one_page(zone, page, pfn, 0, migratetype,
> +                                     FOP_NONE);
>                         return;
>                 }
>                 migratetype = MIGRATE_MOVABLE;
> @@ -5063,7 +5074,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
>         if (order == 0)         /* Via pcp? */
>                 free_unref_page(page);
>         else
> -               __free_pages_ok(page, order);
> +               __free_pages_ok(page, order, FOP_NONE);
>  }
>
>  void __free_pages(struct page *page, unsigned int order)

Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()
  2020-09-28 18:21 ` [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page() David Hildenbrand
@ 2020-09-28 20:38     ` Pankaj Gupta
  2020-09-29  9:10   ` Wei Yang
  2020-10-02 13:19   ` Michal Hocko
  2 siblings, 0 replies; 36+ messages in thread
From: Pankaj Gupta @ 2020-09-28 20:38 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: LKML, Linux MM, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Oscar Salvador, Mel Gorman,
	Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang,
	Mike Rapoport, Scott Cheloha, Michael Ellerman

> __putback_isolated_page() already documents that pages will be placed to
> the tail of the freelist - this is, however, not the case for
> "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
> the case for all existing users.
>
> This change affects two users:
> - free page reporting
> - page isolation, when undoing the isolation (including memory onlining).
>
> This behavior is desirable for pages that haven't really been touched
> lately, so exactly the two users that don't actually read/write page
> content, but rather move untouched pages.
>
> The new behavior is especially desirable for memory onlining, where we
> allow allocation of newly onlined pages via undo_isolate_page_range()
> in online_pages(). Right now, we always place them to the head of the
> free list, resulting in undesirable behavior: Assume we add
> individual memory chunks via add_memory() and online them right away to
> the NORMAL zone. We create a dependency chain of unmovable allocations
> e.g., via the memmap. The memmap of the next chunk will be placed onto
> previous chunks - if the last block cannot get offlined+removed, all
> dependent ones cannot get offlined+removed. While this can already be
> observed with individual DIMMs, it's more of an issue for virtio-mem
> (and I suspect also ppc DLPAR).
>
> Document that this should only be used for optimizations, and no code
> should rely on this for correctness (if the order of freepage lists
> ever changes).
>
> We won't care about page shuffling: memory onlining already properly
> shuffles after onlining. free page reporting doesn't care about
> physically contiguous ranges, and there are already cases where page
> isolation will simply move (physically close) free pages to (currently)
> the head of the freelists via move_freepages_block() instead of
> shuffling. If this becomes ever relevant, we should shuffle the whole
> zone when undoing isolation of larger ranges, and after
> free_contig_range().
>
> Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Scott Cheloha <cheloha@linux.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/page_alloc.c | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index daab90e960fe..9e3ed4a6f69a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -89,6 +89,18 @@ typedef int __bitwise fop_t;
>   */
>  #define FOP_SKIP_REPORT_NOTIFY ((__force fop_t)BIT(0))
>
> +/*
> + * Place the (possibly merged) page to the tail of the freelist. Will ignore
> + * page shuffling (relevant code - e.g., memory onlining - is expected to
> + * shuffle the whole zone).
> + *
> + * Note: No code should rely onto this flag for correctness - it's purely
> + *       to allow for optimizations when handing back either fresh pages
> + *       (memory onlining) or untouched pages (page isolation, free page
> + *       reporting).
> + */
> +#define FOP_TO_TAIL            ((__force fop_t)BIT(1))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION   (8)
> @@ -1038,7 +1050,9 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
>  done_merging:
>         set_page_order(page, order);
>
> -       if (is_shuffle_order(order))
> +       if (fop_flags & FOP_TO_TAIL)
> +               to_tail = true;
> +       else if (is_shuffle_order(order))
>                 to_tail = shuffle_pick_tail();
>         else
>                 to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
> @@ -3300,7 +3314,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
>
>         /* Return isolated page to tail of freelist. */
>         __free_one_page(page, page_to_pfn(page), zone, order, mt,
> -                       FOP_SKIP_REPORT_NOTIFY);
> +                       FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
>  }

Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread
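Condensing the decision that the hunk above adds to __free_one_page(): an
explicit FOP_TO_TAIL request wins over the shuffling and buddy-merge
heuristics. The sketch below is freestanding and approximate - fop_t is a
plain unsigned int here rather than the kernel's __bitwise type, and the three
helpers are crude stand-ins for is_shuffle_order(), shuffle_pick_tail() and
buddy_merge_likely():

	#include <stdbool.h>
	#include <stdio.h>

	typedef unsigned int fop_t;	/* plain int here; the kernel uses a __bitwise typedef */

	#define FOP_NONE	((fop_t)0)
	#define FOP_TO_TAIL	((fop_t)(1U << 1))

	/* Crude stand-ins for the heuristics __free_one_page() normally consults. */
	static bool is_shuffle_order(unsigned int order) { (void)order; return false; }
	static bool shuffle_pick_tail(void) { return false; }
	static bool buddy_merge_likely(unsigned int order)
	{
		/*
		 * Stand-in only: the real helper looks at the next-higher buddy
		 * and notably returns false for "order >= MAX_ORDER - 2", so
		 * large pages go to the head unless FOP_TO_TAIL is passed.
		 */
		(void)order;
		return false;
	}

	/* Placement decision as patched: an explicit request beats the heuristics. */
	static bool place_to_tail(unsigned int order, fop_t fop_flags)
	{
		if (fop_flags & FOP_TO_TAIL)
			return true;
		if (is_shuffle_order(order))
			return shuffle_pick_tail();
		return buddy_merge_likely(order);
	}

	int main(void)
	{
		printf("order 9, FOP_NONE    -> tail? %d\n", place_to_tail(9, FOP_NONE));
		printf("order 9, FOP_TO_TAIL -> tail? %d\n", place_to_tail(9, FOP_TO_TAIL));
		return 0;
	}

With this ordering, __putback_isolated_page() can pass
FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL (as the hunk shows) and be sure the page
ends up at the tail regardless of its order.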

* Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
  2020-09-28 18:21 ` [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate() David Hildenbrand
@ 2020-09-28 20:55     ` Pankaj Gupta
  2020-09-29  9:18   ` Wei Yang
  2020-10-02 13:24   ` Michal Hocko
  2 siblings, 0 replies; 36+ messages in thread
From: Pankaj Gupta @ 2020-09-28 20:55 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: LKML, Linux MM, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Oscar Salvador, Alexander Duyck, Mel Gorman,
	Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang,
	Mike Rapoport, Scott Cheloha, Michael Ellerman

> Page isolation doesn't actually touch the pages, it simply isolates
> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>
> We already place pages to the tail of the freelists when undoing
> isolation via __putback_isolated_page(), let's do it in any case
> (e.g., if order <= pageblock_order) and document the behavior.
>
> Add a "to_tail" parameter to move_freepages_block() but introduce a
> new move_to_free_list_tail() - similar to add_to_free_list_tail().
>
> This change results in all pages getting onlined via online_pages() to
> be placed to the tail of the freelist.
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Scott Cheloha <cheloha@linux.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  include/linux/page-isolation.h |  4 ++--
>  mm/page_alloc.c                | 35 +++++++++++++++++++++++-----------
>  mm/page_isolation.c            | 12 +++++++++---
>  3 files changed, 35 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
> index 572458016331..3eca9b3c5305 100644
> --- a/include/linux/page-isolation.h
> +++ b/include/linux/page-isolation.h
> @@ -36,8 +36,8 @@ static inline bool is_migrate_isolate(int migratetype)
>  struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>                                  int migratetype, int flags);
>  void set_pageblock_migratetype(struct page *page, int migratetype);
> -int move_freepages_block(struct zone *zone, struct page *page,
> -                               int migratetype, int *num_movable);
> +int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
> +                        bool to_tail, int *num_movable);
>
>  /*
>   * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9e3ed4a6f69a..d5a5f528b8ca 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -905,6 +905,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
>         list_move(&page->lru, &area->free_list[migratetype]);
>  }
>
> +/* Used for pages which are on another list */
> +static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
> +                                         unsigned int order, int migratetype)
> +{
> +       struct free_area *area = &zone->free_area[order];
> +
> +       list_move_tail(&page->lru, &area->free_list[migratetype]);
> +}
> +
>  static inline void del_page_from_free_list(struct page *page, struct zone *zone,
>                                            unsigned int order)
>  {
> @@ -2338,9 +2347,9 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
>   * Note that start_page and end_pages are not aligned on a pageblock
>   * boundary. If alignment is required, use move_freepages_block()
>   */
> -static int move_freepages(struct zone *zone,
> -                         struct page *start_page, struct page *end_page,
> -                         int migratetype, int *num_movable)
> +static int move_freepages(struct zone *zone, struct page *start_page,
> +                         struct page *end_page, int migratetype,
> +                         bool to_tail, int *num_movable)
>  {
>         struct page *page;
>         unsigned int order;
> @@ -2371,7 +2380,10 @@ static int move_freepages(struct zone *zone,
>                 VM_BUG_ON_PAGE(page_zone(page) != zone, page);
>
>                 order = page_order(page);
> -               move_to_free_list(page, zone, order, migratetype);
> +               if (to_tail)
> +                       move_to_free_list_tail(page, zone, order, migratetype);
> +               else
> +                       move_to_free_list(page, zone, order, migratetype);
>                 page += 1 << order;
>                 pages_moved += 1 << order;
>         }
> @@ -2379,8 +2391,8 @@ static int move_freepages(struct zone *zone,
>         return pages_moved;
>  }
>
> -int move_freepages_block(struct zone *zone, struct page *page,
> -                               int migratetype, int *num_movable)
> +int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
> +                        bool to_tail, int *num_movable)
>  {
>         unsigned long start_pfn, end_pfn;
>         struct page *start_page, *end_page;
> @@ -2401,7 +2413,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
>                 return 0;
>
>         return move_freepages(zone, start_page, end_page, migratetype,
> -                                                               num_movable);
> +                             to_tail, num_movable);
>  }
>
>  static void change_pageblock_range(struct page *pageblock_page,
> @@ -2526,8 +2538,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>         if (!whole_block)
>                 goto single_page;
>
> -       free_pages = move_freepages_block(zone, page, start_type,
> -                                               &movable_pages);
> +       free_pages = move_freepages_block(zone, page, start_type, false,
> +                                         &movable_pages);
>         /*
>          * Determine how many pages are compatible with our allocation.
>          * For movable allocation, it's the number of movable pages which
> @@ -2635,7 +2647,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>             && !is_migrate_cma(mt)) {
>                 zone->nr_reserved_highatomic += pageblock_nr_pages;
>                 set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
> -               move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
> +               move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, false,
> +                                    NULL);
>         }
>
>  out_unlock:
> @@ -2711,7 +2724,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>                          */
>                         set_pageblock_migratetype(page, ac->migratetype);
>                         ret = move_freepages_block(zone, page, ac->migratetype,
> -                                                                       NULL);
> +                                                  false, NULL);
>                         if (ret) {
>                                 spin_unlock_irqrestore(&zone->lock, flags);
>                                 return ret;
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index abfe26ad59fd..de44e1329706 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -45,7 +45,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
>                 set_pageblock_migratetype(page, MIGRATE_ISOLATE);
>                 zone->nr_isolate_pageblock++;
>                 nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
> -                                                                       NULL);
> +                                               false, NULL);
>
>                 __mod_zone_freepage_state(zone, -nr_pages, mt);
>                 spin_unlock_irqrestore(&zone->lock, flags);
> @@ -83,7 +83,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>          * Because freepage with more than pageblock_order on isolated
>          * pageblock is restricted to merge due to freepage counting problem,
>          * it is possible that there is free buddy page.
> -        * move_freepages_block() doesn't care of merge so we need other
> +        * move_freepages_block() don't care about merging, so we need another
>          * approach in order to merge them. Isolation and free will make
>          * these pages to be merged.
>          */
> @@ -106,9 +106,15 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>          * If we isolate freepage with more than pageblock_order, there
>          * should be no freepage in the range, so we could avoid costly
>          * pageblock scanning for freepage moving.
> +        *
> +        * We didn't actually touch any of the isolated pages, so place them
> +        * to the tail of the freelist. This is an optimization for memory
> +        * onlining - just onlined memory won't immediately be considered for
> +        * allocation.
>          */
>         if (!isolated_page) {
> -               nr_pages = move_freepages_block(zone, page, migratetype, NULL);
> +               nr_pages = move_freepages_block(zone, page, migratetype, true,
> +                                               NULL);
>                 __mod_zone_freepage_state(zone, nr_pages, migratetype);
>         }
>         set_pageblock_migratetype(page, migratetype);

Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>

^ permalink raw reply	[flat|nested] 36+ messages in thread
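The new helper differs from the existing one only in where the entry lands.
As a freestanding model (struct toy_page and this stripped-down list_head are
illustrative; the real helpers also take the zone, order and migratetype, as
in the hunk above):

	#include <stdio.h>

	/* Stripped-down circular doubly-linked list, modeled on the kernel's list_head. */
	struct list_head { struct list_head *prev, *next; };

	static void INIT_LIST_HEAD(struct list_head *h) { h->prev = h->next = h; }

	static void __list_add(struct list_head *entry, struct list_head *prev,
			       struct list_head *next)
	{
		next->prev = entry;
		entry->next = next;
		entry->prev = prev;
		prev->next = entry;
	}

	static void list_del(struct list_head *entry)
	{
		entry->prev->next = entry->next;
		entry->next->prev = entry->prev;
	}

	static void list_move(struct list_head *entry, struct list_head *head)
	{
		list_del(entry);
		__list_add(entry, head, head->next);	/* re-add at the head */
	}

	static void list_move_tail(struct list_head *entry, struct list_head *head)
	{
		list_del(entry);
		__list_add(entry, head->prev, head);	/* re-add at the tail */
	}

	struct toy_page { unsigned long pfn; struct list_head lru; };

	/* The pre-existing helper: the page becomes the first entry of the target list. */
	static void move_to_free_list(struct toy_page *page, struct list_head *freelist)
	{
		list_move(&page->lru, freelist);
	}

	/* The new helper: identical, except the page becomes the last entry. */
	static void move_to_free_list_tail(struct toy_page *page, struct list_head *freelist)
	{
		list_move_tail(&page->lru, freelist);
	}

	int main(void)
	{
		struct list_head isolate_list, movable_list;
		struct toy_page page = { .pfn = 42 };

		INIT_LIST_HEAD(&isolate_list);
		INIT_LIST_HEAD(&movable_list);
		__list_add(&page.lru, &isolate_list, isolate_list.next);

		/* Undoing isolation: the untouched page is handed back at the tail ... */
		move_to_free_list_tail(&page, &movable_list);
		printf("last on movable list:  %d\n", movable_list.prev == &page.lru);

		/* ... whereas the old helper would have put it right at the head. */
		move_to_free_list(&page, &movable_list);
		printf("first on movable list: %d\n", movable_list.next == &page.lru);
		return 0;
	}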

* Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
  2020-09-28 18:21 ` [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag David Hildenbrand
  2020-09-28 20:11     ` Pankaj Gupta
@ 2020-09-29  8:58   ` Wei Yang
  2020-10-02 13:17   ` Michal Hocko
  2020-10-02 13:41   ` Matthew Wilcox
  3 siblings, 0 replies; 36+ messages in thread
From: Wei Yang @ 2020-09-29  8:58 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Vlastimil Babka, Oscar Salvador,
	Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Mike Rapoport

On Mon, Sep 28, 2020 at 08:21:06PM +0200, David Hildenbrand wrote:
>Let's prepare for additional flags and avoid long parameter lists of bools.
>Follow-up patches will also make use of the flags in __free_pages_ok();
>however, I wasn't able to come up with a better name for the type - should
>be good enough for internal purposes.
>
>Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>Cc: Mel Gorman <mgorman@techsingularity.net>
>Cc: Michal Hocko <mhocko@kernel.org>
>Cc: Dave Hansen <dave.hansen@intel.com>
>Cc: Vlastimil Babka <vbabka@suse.cz>
>Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
>Cc: Oscar Salvador <osalvador@suse.de>
>Cc: Mike Rapoport <rppt@kernel.org>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

>---
> mm/page_alloc.c | 28 ++++++++++++++++++++--------
> 1 file changed, 20 insertions(+), 8 deletions(-)
>
>diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>index df90e3654f97..daab90e960fe 100644
>--- a/mm/page_alloc.c
>+++ b/mm/page_alloc.c
>@@ -77,6 +77,18 @@
> #include "shuffle.h"
> #include "page_reporting.h"
> 
>+/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
>+typedef int __bitwise fop_t;
>+
>+/* No special request */
>+#define FOP_NONE		((__force fop_t)0)
>+
>+/*
>+ * Skip free page reporting notification for the (possibly merged) page. (will
>+ * *not* mark the page reported, only skip the notification).
>+ */
>+#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
>+
> /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
> static DEFINE_MUTEX(pcp_batch_high_lock);
> #define MIN_PERCPU_PAGELIST_FRACTION	(8)
>@@ -948,10 +960,9 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
>  * -- nyc
>  */
> 
>-static inline void __free_one_page(struct page *page,
>-		unsigned long pfn,
>-		struct zone *zone, unsigned int order,
>-		int migratetype, bool report)
>+static inline void __free_one_page(struct page *page, unsigned long pfn,
>+				   struct zone *zone, unsigned int order,
>+				   int migratetype, fop_t fop_flags)
> {
> 	struct capture_control *capc = task_capc(zone);
> 	unsigned long buddy_pfn;
>@@ -1038,7 +1049,7 @@ static inline void __free_one_page(struct page *page,
> 		add_to_free_list(page, zone, order, migratetype);
> 
> 	/* Notify page reporting subsystem of freed page */
>-	if (report)
>+	if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
> 		page_reporting_notify_free(order);
> }
> 
>@@ -1379,7 +1390,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> 		if (unlikely(isolated_pageblocks))
> 			mt = get_pageblock_migratetype(page);
> 
>-		__free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
>+		__free_one_page(page, page_to_pfn(page), zone, 0, mt, FOP_NONE);
> 		trace_mm_page_pcpu_drain(page, 0, mt);
> 	}
> 	spin_unlock(&zone->lock);
>@@ -1395,7 +1406,7 @@ static void free_one_page(struct zone *zone,
> 		is_migrate_isolate(migratetype))) {
> 		migratetype = get_pfnblock_migratetype(page, pfn);
> 	}
>-	__free_one_page(page, pfn, zone, order, migratetype, true);
>+	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
> 	spin_unlock(&zone->lock);
> }
> 
>@@ -3288,7 +3299,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
> 	lockdep_assert_held(&zone->lock);
> 
> 	/* Return isolated page to tail of freelist. */
>-	__free_one_page(page, page_to_pfn(page), zone, order, mt, false);
>+	__free_one_page(page, page_to_pfn(page), zone, order, mt,
>+			FOP_SKIP_REPORT_NOTIFY);
> }
> 
> /*
>-- 
>2.26.2

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 36+ messages in thread
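The conversion this patch describes - a single bool growing into a flags word -
is a common pattern; roughly as below (the function names free_page_bool() and
free_page_flags() are made up for the comparison, and the flag values simply
mirror the ones introduced by this series):

	#include <stdio.h>

	/*
	 * Before: a bare bool. Every new behavior would add another positional
	 * bool, and call sites like free_page_bool(pfn, order, 1) stop telling
	 * the reader what the argument means.
	 */
	static void free_page_bool(unsigned long pfn, unsigned int order, int report)
	{
		printf("free pfn %lu order %u report=%d\n", pfn, order, report);
	}

	/* After: one flags word that can grow without changing the signature again. */
	typedef unsigned int fop_t;
	#define FOP_NONE		((fop_t)0)
	#define FOP_SKIP_REPORT_NOTIFY	((fop_t)(1U << 0))
	#define FOP_TO_TAIL		((fop_t)(1U << 1))	/* added by patch 2/5 */

	static void free_page_flags(unsigned long pfn, unsigned int order, fop_t fop_flags)
	{
		printf("free pfn %lu order %u%s%s\n", pfn, order,
		       (fop_flags & FOP_SKIP_REPORT_NOTIFY) ? " skip-report-notify" : "",
		       (fop_flags & FOP_TO_TAIL) ? " to-tail" : "");
	}

	int main(void)
	{
		free_page_bool(1024, 0, 1);
		free_page_flags(1024, 0, FOP_NONE);
		free_page_flags(2048, 9, FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
		return 0;
	}

Call sites then read as free_page_flags(..., FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL)
instead of a string of positional booleans, which is the readability point the
commit message makes.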

* Re: [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()
  2020-09-28 18:21 ` [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page() David Hildenbrand
  2020-09-28 20:38     ` Pankaj Gupta
@ 2020-09-29  9:10   ` Wei Yang
  2020-10-02 13:19   ` Michal Hocko
  2 siblings, 0 replies; 36+ messages in thread
From: Wei Yang @ 2020-09-29  9:10 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Oscar Salvador, Mel Gorman,
	Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang,
	Mike Rapoport, Scott Cheloha, Michael Ellerman

On Mon, Sep 28, 2020 at 08:21:07PM +0200, David Hildenbrand wrote:
>__putback_isolated_page() already documents that pages will be placed to
>the tail of the freelist - this is, however, not the case for
>"order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
>the case for all existing users.
>
>This change affects two users:
>- free page reporting
>- page isolation, when undoing the isolation (including memory onlining).
>
>This behavior is desirable for pages that haven't really been touched
>lately, so exactly the two users that don't actually read/write page
>content, but rather move untouched pages.
>
>The new behavior is especially desirable for memory onlining, where we
>allow allocation of newly onlined pages via undo_isolate_page_range()
>in online_pages(). Right now, we always place them to the head of the
>free list, resulting in undesirable behavior: Assume we add
>individual memory chunks via add_memory() and online them right away to
>the NORMAL zone. We create a dependency chain of unmovable allocations
>e.g., via the memmap. The memmap of the next chunk will be placed onto
>previous chunks - if the last block cannot get offlined+removed, all
>dependent ones cannot get offlined+removed. While this can already be
>observed with individual DIMMs, it's more of an issue for virtio-mem
>(and I suspect also ppc DLPAR).
>
>Document that this should only be used for optimizations, and no code
>should rely on this for correctness (if the order of freepage lists
>ever changes).
>
>We won't care about page shuffling: memory onlining already properly
>shuffles after onlining. free page reporting doesn't care about
>physically contiguous ranges, and there are already cases where page
>isolation will simply move (physically close) free pages to (currently)
>the head of the freelists via move_freepages_block() instead of
>shuffling. If this becomes ever relevant, we should shuffle the whole
>zone when undoing isolation of larger ranges, and after
>free_contig_range().
>
>Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>Cc: Mel Gorman <mgorman@techsingularity.net>
>Cc: Michal Hocko <mhocko@kernel.org>
>Cc: Dave Hansen <dave.hansen@intel.com>
>Cc: Vlastimil Babka <vbabka@suse.cz>
>Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
>Cc: Oscar Salvador <osalvador@suse.de>
>Cc: Mike Rapoport <rppt@kernel.org>
>Cc: Scott Cheloha <cheloha@linux.ibm.com>
>Cc: Michael Ellerman <mpe@ellerman.id.au>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

>---
> mm/page_alloc.c | 18 ++++++++++++++++--
> 1 file changed, 16 insertions(+), 2 deletions(-)
>
>diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>index daab90e960fe..9e3ed4a6f69a 100644
>--- a/mm/page_alloc.c
>+++ b/mm/page_alloc.c
>@@ -89,6 +89,18 @@ typedef int __bitwise fop_t;
>  */
> #define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
> 
>+/*
>+ * Place the (possibly merged) page to the tail of the freelist. Will ignore
>+ * page shuffling (relevant code - e.g., memory onlining - is expected to
>+ * shuffle the whole zone).
>+ *
>+ * Note: No code should rely onto this flag for correctness - it's purely
>+ *       to allow for optimizations when handing back either fresh pages
>+ *       (memory onlining) or untouched pages (page isolation, free page
>+ *       reporting).
>+ */
>+#define FOP_TO_TAIL		((__force fop_t)BIT(1))
>+
> /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
> static DEFINE_MUTEX(pcp_batch_high_lock);
> #define MIN_PERCPU_PAGELIST_FRACTION	(8)
>@@ -1038,7 +1050,9 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
> done_merging:
> 	set_page_order(page, order);
> 
>-	if (is_shuffle_order(order))
>+	if (fop_flags & FOP_TO_TAIL)
>+		to_tail = true;
>+	else if (is_shuffle_order(order))
> 		to_tail = shuffle_pick_tail();
> 	else
> 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
>@@ -3300,7 +3314,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
> 
> 	/* Return isolated page to tail of freelist. */
> 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
>-			FOP_SKIP_REPORT_NOTIFY);
>+			FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
> }
> 
> /*
>-- 
>2.26.2

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
  2020-09-28 18:21 ` [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate() David Hildenbrand
  2020-09-28 20:55     ` Pankaj Gupta
@ 2020-09-29  9:18   ` Wei Yang
  2020-09-29 10:12     ` David Hildenbrand
  2020-10-02 13:24   ` Michal Hocko
  2 siblings, 1 reply; 36+ messages in thread
From: Wei Yang @ 2020-09-29  9:18 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Oscar Salvador, Alexander Duyck, Mel Gorman,
	Michal Hocko, Dave Hansen, Vlastimil Babka, Wei Yang,
	Mike Rapoport, Scott Cheloha, Michael Ellerman

On Mon, Sep 28, 2020 at 08:21:08PM +0200, David Hildenbrand wrote:
>Page isolation doesn't actually touch the pages, it simply isolates
>pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>
>We already place pages to the tail of the freelists when undoing
>isolation via __putback_isolated_page(), let's do it in any case
>(e.g., if order <= pageblock_order) and document the behavior.
>
>Add a "to_tail" parameter to move_freepages_block() but introduce a
>new move_to_free_list_tail() - similar to add_to_free_list_tail().
>
>This change results in all pages getting onlined via online_pages() to
>be placed to the tail of the freelist.
>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>Cc: Mel Gorman <mgorman@techsingularity.net>
>Cc: Michal Hocko <mhocko@kernel.org>
>Cc: Dave Hansen <dave.hansen@intel.com>
>Cc: Vlastimil Babka <vbabka@suse.cz>
>Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
>Cc: Oscar Salvador <osalvador@suse.de>
>Cc: Mike Rapoport <rppt@kernel.org>
>Cc: Scott Cheloha <cheloha@linux.ibm.com>
>Cc: Michael Ellerman <mpe@ellerman.id.au>
>Signed-off-by: David Hildenbrand <david@redhat.com>
>---
> include/linux/page-isolation.h |  4 ++--
> mm/page_alloc.c                | 35 +++++++++++++++++++++++-----------
> mm/page_isolation.c            | 12 +++++++++---
> 3 files changed, 35 insertions(+), 16 deletions(-)
>
>diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>index 572458016331..3eca9b3c5305 100644
>--- a/include/linux/page-isolation.h
>+++ b/include/linux/page-isolation.h
>@@ -36,8 +36,8 @@ static inline bool is_migrate_isolate(int migratetype)
> struct page *has_unmovable_pages(struct zone *zone, struct page *page,
> 				 int migratetype, int flags);
> void set_pageblock_migratetype(struct page *page, int migratetype);
>-int move_freepages_block(struct zone *zone, struct page *page,
>-				int migratetype, int *num_movable);
>+int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
>+			 bool to_tail, int *num_movable);
> 
> /*
>  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
>diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>index 9e3ed4a6f69a..d5a5f528b8ca 100644
>--- a/mm/page_alloc.c
>+++ b/mm/page_alloc.c
>@@ -905,6 +905,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
> 	list_move(&page->lru, &area->free_list[migratetype]);
> }
> 
>+/* Used for pages which are on another list */
>+static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
>+					  unsigned int order, int migratetype)
>+{
>+	struct free_area *area = &zone->free_area[order];
>+
>+	list_move_tail(&page->lru, &area->free_list[migratetype]);
>+}
>+

Would it be better to pass *to_tail* to move_to_free_list(), so we wouldn't
need a new function?

> static inline void del_page_from_free_list(struct page *page, struct zone *zone,
> 					   unsigned int order)
> {
>@@ -2338,9 +2347,9 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
>  * Note that start_page and end_pages are not aligned on a pageblock
>  * boundary. If alignment is required, use move_freepages_block()
>  */
>-static int move_freepages(struct zone *zone,
>-			  struct page *start_page, struct page *end_page,
>-			  int migratetype, int *num_movable)
>+static int move_freepages(struct zone *zone, struct page *start_page,
>+			  struct page *end_page, int migratetype,
>+			  bool to_tail, int *num_movable)
> {
> 	struct page *page;
> 	unsigned int order;
>@@ -2371,7 +2380,10 @@ static int move_freepages(struct zone *zone,
> 		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
> 
> 		order = page_order(page);
>-		move_to_free_list(page, zone, order, migratetype);
>+		if (to_tail)
>+			move_to_free_list_tail(page, zone, order, migratetype);
>+		else
>+			move_to_free_list(page, zone, order, migratetype);

And here, we just need to pass the *to_tail* to move_to_free_list().
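
With the combined helper sketched above, this hunk would then reduce to
something like (illustrative only):

		order = page_order(page);
		move_to_free_list(page, zone, order, migratetype, to_tail);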

> 		page += 1 << order;
> 		pages_moved += 1 << order;
> 	}
>@@ -2379,8 +2391,8 @@ static int move_freepages(struct zone *zone,
> 	return pages_moved;
> }
> 
>-int move_freepages_block(struct zone *zone, struct page *page,
>-				int migratetype, int *num_movable)
>+int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
>+			 bool to_tail, int *num_movable)
> {
> 	unsigned long start_pfn, end_pfn;
> 	struct page *start_page, *end_page;
>@@ -2401,7 +2413,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
> 		return 0;
> 
> 	return move_freepages(zone, start_page, end_page, migratetype,
>-								num_movable);
>+			      to_tail, num_movable);
> }
> 
> static void change_pageblock_range(struct page *pageblock_page,
>@@ -2526,8 +2538,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
> 	if (!whole_block)
> 		goto single_page;
> 
>-	free_pages = move_freepages_block(zone, page, start_type,
>-						&movable_pages);
>+	free_pages = move_freepages_block(zone, page, start_type, false,
>+					  &movable_pages);
> 	/*
> 	 * Determine how many pages are compatible with our allocation.
> 	 * For movable allocation, it's the number of movable pages which
>@@ -2635,7 +2647,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
> 	    && !is_migrate_cma(mt)) {
> 		zone->nr_reserved_highatomic += pageblock_nr_pages;
> 		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
>-		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
>+		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, false,
>+				     NULL);
> 	}
> 
> out_unlock:
>@@ -2711,7 +2724,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
> 			 */
> 			set_pageblock_migratetype(page, ac->migratetype);
> 			ret = move_freepages_block(zone, page, ac->migratetype,
>-									NULL);
>+						   false, NULL);
> 			if (ret) {
> 				spin_unlock_irqrestore(&zone->lock, flags);
> 				return ret;
>diff --git a/mm/page_isolation.c b/mm/page_isolation.c
>index abfe26ad59fd..de44e1329706 100644
>--- a/mm/page_isolation.c
>+++ b/mm/page_isolation.c
>@@ -45,7 +45,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
> 		set_pageblock_migratetype(page, MIGRATE_ISOLATE);
> 		zone->nr_isolate_pageblock++;
> 		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
>-									NULL);
>+						false, NULL);
> 
> 		__mod_zone_freepage_state(zone, -nr_pages, mt);
> 		spin_unlock_irqrestore(&zone->lock, flags);
>@@ -83,7 +83,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
> 	 * Because freepage with more than pageblock_order on isolated
> 	 * pageblock is restricted to merge due to freepage counting problem,
> 	 * it is possible that there is free buddy page.
>-	 * move_freepages_block() doesn't care of merge so we need other
>+	 * move_freepages_block() don't care about merging, so we need another
> 	 * approach in order to merge them. Isolation and free will make
> 	 * these pages to be merged.
> 	 */
>@@ -106,9 +106,15 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
> 	 * If we isolate freepage with more than pageblock_order, there
> 	 * should be no freepage in the range, so we could avoid costly
> 	 * pageblock scanning for freepage moving.
>+	 *
>+	 * We didn't actually touch any of the isolated pages, so place them
>+	 * to the tail of the freelist. This is an optimization for memory
>+	 * onlining - just onlined memory won't immediately be considered for
>+	 * allocation.
> 	 */
> 	if (!isolated_page) {
>-		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
>+		nr_pages = move_freepages_block(zone, page, migratetype, true,
>+						NULL);
> 		__mod_zone_freepage_state(zone, nr_pages, migratetype);
> 	}
> 	set_pageblock_migratetype(page, migratetype);
>-- 
>2.26.2

The rest looks good to me.

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core()
  2020-09-28 18:21 ` [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core() David Hildenbrand
  2020-09-28 20:33     ` Pankaj Gupta
@ 2020-09-29  9:36   ` Wei Yang
  2020-09-29 10:14     ` David Hildenbrand
  2020-10-02 13:41   ` Michal Hocko
  2 siblings, 1 reply; 36+ messages in thread
From: Wei Yang @ 2020-09-29  9:36 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Vlastimil Babka, Oscar Salvador, Alexander Duyck,
	Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Mike Rapoport,
	K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu

On Mon, Sep 28, 2020 at 08:21:09PM +0200, David Hildenbrand wrote:
>__free_pages_core() is used when exposing fresh memory to the buddy
>during system boot and when onlining memory in generic_online_page().
>
>generic_online_page() is used in two cases:
>
>1. Direct memory onlining in online_pages().
>2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
>   balloon and virtio-mem), when parts of a section are kept
>   fake-offline to be fake-onlined later on.
>
>In 1, we already place pages to the tail of the freelist. Pages will be
>freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
>via undo_isolate_page_range().
>
>In 2, we currently don't implement a proper rule. In case of virtio-mem,
>where we currently always online MAX_ORDER - 1 pages, the pages will be
>placed to the HEAD of the freelist - undesirable. While the hyper-v
>balloon calls generic_online_page() with single pages, usually it will
>call it on successive single pages in a larger block.
>
>The pages are fresh, so place them to the tail of the freelists and avoid
>the PCP. In __free_pages_core(), remove the now superfluous call to
>set_page_refcounted() and add a comment regarding page initialization and
>the refcount.
>
>Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
>is usually of limited use in virtualized environments), we might want to
>shuffle after a sequence of generic_online_page() calls in the
>relevant callers.
>
>Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>Cc: Mel Gorman <mgorman@techsingularity.net>
>Cc: Michal Hocko <mhocko@kernel.org>
>Cc: Dave Hansen <dave.hansen@intel.com>
>Cc: Vlastimil Babka <vbabka@suse.cz>
>Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
>Cc: Oscar Salvador <osalvador@suse.de>
>Cc: Mike Rapoport <rppt@kernel.org>
>Cc: "K. Y. Srinivasan" <kys@microsoft.com>
>Cc: Haiyang Zhang <haiyangz@microsoft.com>
>Cc: Stephen Hemminger <sthemmin@microsoft.com>
>Cc: Wei Liu <wei.liu@kernel.org>
>Signed-off-by: David Hildenbrand <david@redhat.com>
>---
> mm/page_alloc.c | 37 ++++++++++++++++++++++++-------------
> 1 file changed, 24 insertions(+), 13 deletions(-)
>
>diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>index d5a5f528b8ca..8a2134fe9947 100644
>--- a/mm/page_alloc.c
>+++ b/mm/page_alloc.c
>@@ -270,7 +270,8 @@ bool pm_suspended_storage(void)
> unsigned int pageblock_order __read_mostly;
> #endif
> 
>-static void __free_pages_ok(struct page *page, unsigned int order);
>+static void __free_pages_ok(struct page *page, unsigned int order,
>+			    fop_t fop_flags);
> 
> /*
>  * results with 256, 32 in the lowmem_reserve sysctl:
>@@ -682,7 +683,7 @@ static void bad_page(struct page *page, const char *reason)
> void free_compound_page(struct page *page)
> {
> 	mem_cgroup_uncharge(page);
>-	__free_pages_ok(page, compound_order(page));
>+	__free_pages_ok(page, compound_order(page), FOP_NONE);
> }
> 
> void prep_compound_page(struct page *page, unsigned int order)
>@@ -1419,17 +1420,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> 	spin_unlock(&zone->lock);
> }
> 
>-static void free_one_page(struct zone *zone,
>-				struct page *page, unsigned long pfn,
>-				unsigned int order,
>-				int migratetype)
>+static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn,
>+			  unsigned int order, int migratetype, fop_t fop_flags)
> {
> 	spin_lock(&zone->lock);
> 	if (unlikely(has_isolate_pageblock(zone) ||
> 		is_migrate_isolate(migratetype))) {
> 		migratetype = get_pfnblock_migratetype(page, pfn);
> 	}
>-	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
>+	__free_one_page(page, pfn, zone, order, migratetype, fop_flags);
> 	spin_unlock(&zone->lock);
> }
> 
>@@ -1507,7 +1506,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
> 	}
> }
> 
>-static void __free_pages_ok(struct page *page, unsigned int order)
>+static void __free_pages_ok(struct page *page, unsigned int order,
>+			    fop_t fop_flags)
> {
> 	unsigned long flags;
> 	int migratetype;
>@@ -1519,7 +1519,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
> 	migratetype = get_pfnblock_migratetype(page, pfn);
> 	local_irq_save(flags);
> 	__count_vm_events(PGFREE, 1 << order);
>-	free_one_page(page_zone(page), page, pfn, order, migratetype);
>+	free_one_page(page_zone(page), page, pfn, order, migratetype,
>+		      fop_flags);
> 	local_irq_restore(flags);
> }
> 
>@@ -1529,6 +1530,11 @@ void __free_pages_core(struct page *page, unsigned int order)
> 	struct page *p = page;
> 	unsigned int loop;
> 
>+	/*
>+	 * When initializing the memmap, init_single_page() sets the refcount

If my code is the latest version.

s/init_single_page/__init_single_page/

Besides this.

Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

>+	 * of all pages to 1 ("allocated"/"not free"). We have to set the
>+	 * refcount of all involved pages to 0.
>+	 */
> 	prefetchw(p);
> 	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
> 		prefetchw(p + 1);
>@@ -1539,8 +1545,12 @@ void __free_pages_core(struct page *page, unsigned int order)
> 	set_page_count(p, 0);
> 
> 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
>-	set_page_refcounted(page);
>-	__free_pages(page, order);
>+
>+	/*
>+	 * Bypass PCP and place fresh pages right to the tail, primarily
>+	 * relevant for memory onlining.
>+	 */
>+	__free_pages_ok(page, order, FOP_TO_TAIL);
> }
> 
> #ifdef CONFIG_NEED_MULTIPLE_NODES
>@@ -3171,7 +3181,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
> 	 */
> 	if (migratetype >= MIGRATE_PCPTYPES) {
> 		if (unlikely(is_migrate_isolate(migratetype))) {
>-			free_one_page(zone, page, pfn, 0, migratetype);
>+			free_one_page(zone, page, pfn, 0, migratetype,
>+				      FOP_NONE);
> 			return;
> 		}
> 		migratetype = MIGRATE_MOVABLE;
>@@ -5063,7 +5074,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
> 	if (order == 0)		/* Via pcp? */
> 		free_unref_page(page);
> 	else
>-		__free_pages_ok(page, order);
>+		__free_pages_ok(page, order, FOP_NONE);
> }
> 
> void __free_pages(struct page *page, unsigned int order)
>-- 
>2.26.2

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone shuffling
  2020-09-28 18:21 ` [PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone shuffling David Hildenbrand
@ 2020-09-29  9:40   ` Wei Yang
  2020-10-02 13:41   ` Michal Hocko
  1 sibling, 0 replies; 36+ messages in thread
From: Wei Yang @ 2020-09-29  9:40 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Mel Gorman, Michal Hocko,
	Dave Hansen, Vlastimil Babka, Wei Yang, Oscar Salvador,
	Mike Rapoport

On Mon, Sep 28, 2020 at 08:21:10PM +0200, David Hildenbrand wrote:
>As we no longer shuffle via generic_online_page() and when undoing
>isolation, we can simplify the comment.
>
>We now effectively shuffle only once (properly) when onlining new
>memory.
>
>Cc: Andrew Morton <akpm@linux-foundation.org>
>Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>Cc: Mel Gorman <mgorman@techsingularity.net>
>Cc: Michal Hocko <mhocko@kernel.org>
>Cc: Dave Hansen <dave.hansen@intel.com>
>Cc: Vlastimil Babka <vbabka@suse.cz>
>Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
>Cc: Oscar Salvador <osalvador@suse.de>
>Cc: Mike Rapoport <rppt@kernel.org>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

>---
> mm/memory_hotplug.c | 11 ++++-------
> 1 file changed, 4 insertions(+), 7 deletions(-)
>
>diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>index 9db80ee29caa..c589bd8801bb 100644
>--- a/mm/memory_hotplug.c
>+++ b/mm/memory_hotplug.c
>@@ -859,13 +859,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
> 	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
> 
> 	/*
>-	 * When exposing larger, physically contiguous memory areas to the
>-	 * buddy, shuffling in the buddy (when freeing onlined pages, putting
>-	 * them either to the head or the tail of the freelist) is only helpful
>-	 * for maintaining the shuffle, but not for creating the initial
>-	 * shuffle. Shuffle the whole zone to make sure the just onlined pages
>-	 * are properly distributed across the whole freelist. Make sure to
>-	 * shuffle once pageblocks are no longer isolated.
>+	 * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
>+	 * the tail of the freelist when undoing isolation). Shuffle the whole
>+	 * zone to make sure the just onlined pages are properly distributed
>+	 * across the whole freelist - to create an initial shuffle.
> 	 */
> 	shuffle_zone(zone);
> 
>-- 
>2.26.2

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
  2020-09-29  9:18   ` Wei Yang
@ 2020-09-29 10:12     ` David Hildenbrand
  2020-09-30  7:48       ` Wei Yang
  0 siblings, 1 reply; 36+ messages in thread
From: David Hildenbrand @ 2020-09-29 10:12 UTC (permalink / raw)
  To: Wei Yang
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Oscar Salvador, Alexander Duyck, Mel Gorman,
	Michal Hocko, Dave Hansen, Vlastimil Babka, Mike Rapoport,
	Scott Cheloha, Michael Ellerman

On 29.09.20 11:18, Wei Yang wrote:
> On Mon, Sep 28, 2020 at 08:21:08PM +0200, David Hildenbrand wrote:
>> Page isolation doesn't actually touch the pages, it simply isolates
>> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>>
>> We already place pages to the tail of the freelists when undoing
>> isolation via __putback_isolated_page(), let's do it in any case
>> (e.g., if order <= pageblock_order) and document the behavior.
>>
>> Add a "to_tail" parameter to move_freepages_block() but introduce a
>> a new move_to_free_list_tail() - similar to add_to_free_list_tail().

s/a a/a/

>>
>> This change results in all pages getting onlined via online_pages() to
>> be placed to the tail of the freelist.
>>
>> Reviewed-by: Oscar Salvador <osalvador@suse.de>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>> Cc: Mel Gorman <mgorman@techsingularity.net>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Cc: Dave Hansen <dave.hansen@intel.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Mike Rapoport <rppt@kernel.org>
>> Cc: Scott Cheloha <cheloha@linux.ibm.com>
>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> include/linux/page-isolation.h |  4 ++--
>> mm/page_alloc.c                | 35 +++++++++++++++++++++++-----------
>> mm/page_isolation.c            | 12 +++++++++---
>> 3 files changed, 35 insertions(+), 16 deletions(-)
>>
>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>> index 572458016331..3eca9b3c5305 100644
>> --- a/include/linux/page-isolation.h
>> +++ b/include/linux/page-isolation.h
>> @@ -36,8 +36,8 @@ static inline bool is_migrate_isolate(int migratetype)
>> struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>> 				 int migratetype, int flags);
>> void set_pageblock_migratetype(struct page *page, int migratetype);
>> -int move_freepages_block(struct zone *zone, struct page *page,
>> -				int migratetype, int *num_movable);
>> +int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
>> +			 bool to_tail, int *num_movable);
>>
>> /*
>>  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 9e3ed4a6f69a..d5a5f528b8ca 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -905,6 +905,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
>> 	list_move(&page->lru, &area->free_list[migratetype]);
>> }
>>
>> +/* Used for pages which are on another list */
>> +static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
>> +					  unsigned int order, int migratetype)
>> +{
>> +	struct free_area *area = &zone->free_area[order];
>> +
>> +	list_move_tail(&page->lru, &area->free_list[migratetype]);
>> +}
>> +
> 
> Would it be better to pass the *to_tail* to move_to_free_list(), so we won't
> have a new function?

Hi,

thanks for the review!

See discussion in RFC + cover letter:

"Add a "to_tail" parameter to move_freepages_block() but introduce a new
move_to_free_list_tail() - similar to add_to_free_list_tail()."
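
For reference, the existing add_to_free_list_tail() helper that the new
move_to_free_list_tail() mirrors looks roughly like this (quoted from
memory, so the exact body may differ in your tree):

static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
					 unsigned int order, int migratetype)
{
	struct free_area *area = &zone->free_area[order];

	list_add_tail(&page->lru, &area->free_list[migratetype]);
	area->nr_free++;
}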


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core()
  2020-09-29  9:36   ` Wei Yang
@ 2020-09-29 10:14     ` David Hildenbrand
  0 siblings, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-09-29 10:14 UTC (permalink / raw)
  To: Wei Yang
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Vlastimil Babka, Oscar Salvador, Alexander Duyck,
	Mel Gorman, Michal Hocko, Dave Hansen, Mike Rapoport,
	K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu

On 29.09.20 11:36, Wei Yang wrote:
> On Mon, Sep 28, 2020 at 08:21:09PM +0200, David Hildenbrand wrote:
>> __free_pages_core() is used when exposing fresh memory to the buddy
>> during system boot and when onlining memory in generic_online_page().
>>
>> generic_online_page() is used in two cases:
>>
>> 1. Direct memory onlining in online_pages().
>> 2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
>>   balloon and virtio-mem), when parts of a section are kept
>>   fake-offline to be fake-onlined later on.
>>
>> In 1, we already place pages to the tail of the freelist. Pages will be
>> freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
>> via undo_isolate_page_range().
>>
>> In 2, we currently don't implement a proper rule. In case of virtio-mem,
>> where we currently always online MAX_ORDER - 1 pages, the pages will be
>> placed to the HEAD of the freelist - undesirable. While the hyper-v
>> balloon calls generic_online_page() with single pages, usually it will
>> call it on successive single pages in a larger block.
>>
>> The pages are fresh, so place them to the tail of the freelists and avoid
>> the PCP. In __free_pages_core(), remove the now superfluous call to
>> set_page_refcounted() and add a comment regarding page initialization and
>> the refcount.
>>
>> Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
>> is usually of limited use in virtualized environments), we might want to
>> shuffle after a sequence of generic_online_page() calls in the
>> relevant callers.
>>
>> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
>> Reviewed-by: Oscar Salvador <osalvador@suse.de>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>> Cc: Mel Gorman <mgorman@techsingularity.net>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Cc: Dave Hansen <dave.hansen@intel.com>
>> Cc: Vlastimil Babka <vbabka@suse.cz>
>> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
>> Cc: Oscar Salvador <osalvador@suse.de>
>> Cc: Mike Rapoport <rppt@kernel.org>
>> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
>> Cc: Haiyang Zhang <haiyangz@microsoft.com>
>> Cc: Stephen Hemminger <sthemmin@microsoft.com>
>> Cc: Wei Liu <wei.liu@kernel.org>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> mm/page_alloc.c | 37 ++++++++++++++++++++++++-------------
>> 1 file changed, 24 insertions(+), 13 deletions(-)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index d5a5f528b8ca..8a2134fe9947 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -270,7 +270,8 @@ bool pm_suspended_storage(void)
>> unsigned int pageblock_order __read_mostly;
>> #endif
>>
>> -static void __free_pages_ok(struct page *page, unsigned int order);
>> +static void __free_pages_ok(struct page *page, unsigned int order,
>> +			    fop_t fop_flags);
>>
>> /*
>>  * results with 256, 32 in the lowmem_reserve sysctl:
>> @@ -682,7 +683,7 @@ static void bad_page(struct page *page, const char *reason)
>> void free_compound_page(struct page *page)
>> {
>> 	mem_cgroup_uncharge(page);
>> -	__free_pages_ok(page, compound_order(page));
>> +	__free_pages_ok(page, compound_order(page), FOP_NONE);
>> }
>>
>> void prep_compound_page(struct page *page, unsigned int order)
>> @@ -1419,17 +1420,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>> 	spin_unlock(&zone->lock);
>> }
>>
>> -static void free_one_page(struct zone *zone,
>> -				struct page *page, unsigned long pfn,
>> -				unsigned int order,
>> -				int migratetype)
>> +static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn,
>> +			  unsigned int order, int migratetype, fop_t fop_flags)
>> {
>> 	spin_lock(&zone->lock);
>> 	if (unlikely(has_isolate_pageblock(zone) ||
>> 		is_migrate_isolate(migratetype))) {
>> 		migratetype = get_pfnblock_migratetype(page, pfn);
>> 	}
>> -	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
>> +	__free_one_page(page, pfn, zone, order, migratetype, fop_flags);
>> 	spin_unlock(&zone->lock);
>> }
>>
>> @@ -1507,7 +1506,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
>> 	}
>> }
>>
>> -static void __free_pages_ok(struct page *page, unsigned int order)
>> +static void __free_pages_ok(struct page *page, unsigned int order,
>> +			    fop_t fop_flags)
>> {
>> 	unsigned long flags;
>> 	int migratetype;
>> @@ -1519,7 +1519,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
>> 	migratetype = get_pfnblock_migratetype(page, pfn);
>> 	local_irq_save(flags);
>> 	__count_vm_events(PGFREE, 1 << order);
>> -	free_one_page(page_zone(page), page, pfn, order, migratetype);
>> +	free_one_page(page_zone(page), page, pfn, order, migratetype,
>> +		      fop_flags);
>> 	local_irq_restore(flags);
>> }
>>
>> @@ -1529,6 +1530,11 @@ void __free_pages_core(struct page *page, unsigned int order)
>> 	struct page *p = page;
>> 	unsigned int loop;
>>
>> +	/*
>> +	 * When initializing the memmap, init_single_page() sets the refcount
> 
> If my code is the latest version.
> 
> s/init_single_page/__init_single_page/

Indeed - thanks!


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
  2020-09-29 10:12     ` David Hildenbrand
@ 2020-09-30  7:48       ` Wei Yang
  0 siblings, 0 replies; 36+ messages in thread
From: Wei Yang @ 2020-09-30  7:48 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Wei Yang, linux-kernel, linux-mm, linux-hyperv, xen-devel,
	linux-acpi, Andrew Morton, Oscar Salvador, Alexander Duyck,
	Mel Gorman, Michal Hocko, Dave Hansen, Vlastimil Babka,
	Mike Rapoport, Scott Cheloha, Michael Ellerman

On Tue, Sep 29, 2020 at 12:12:14PM +0200, David Hildenbrand wrote:
>On 29.09.20 11:18, Wei Yang wrote:
>> On Mon, Sep 28, 2020 at 08:21:08PM +0200, David Hildenbrand wrote:
>>> Page isolation doesn't actually touch the pages, it simply isolates
>>> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>>>
>>> We already place pages to the tail of the freelists when undoing
>>> isolation via __putback_isolated_page(), let's do it in any case
>>> (e.g., if order <= pageblock_order) and document the behavior.
>>>
>>> Add a "to_tail" parameter to move_freepages_block() but introduce a
>>> a new move_to_free_list_tail() - similar to add_to_free_list_tail().
>
>s/a a/a/
>
>>>
>>> This change results in all pages getting onlined via online_pages() to
>>> be placed to the tail of the freelist.
>>>
>>> Reviewed-by: Oscar Salvador <osalvador@suse.de>
>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
>>> Cc: Mel Gorman <mgorman@techsingularity.net>
>>> Cc: Michal Hocko <mhocko@kernel.org>
>>> Cc: Dave Hansen <dave.hansen@intel.com>
>>> Cc: Vlastimil Babka <vbabka@suse.cz>
>>> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
>>> Cc: Oscar Salvador <osalvador@suse.de>
>>> Cc: Mike Rapoport <rppt@kernel.org>
>>> Cc: Scott Cheloha <cheloha@linux.ibm.com>
>>> Cc: Michael Ellerman <mpe@ellerman.id.au>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>> include/linux/page-isolation.h |  4 ++--
>>> mm/page_alloc.c                | 35 +++++++++++++++++++++++-----------
>>> mm/page_isolation.c            | 12 +++++++++---
>>> 3 files changed, 35 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
>>> index 572458016331..3eca9b3c5305 100644
>>> --- a/include/linux/page-isolation.h
>>> +++ b/include/linux/page-isolation.h
>>> @@ -36,8 +36,8 @@ static inline bool is_migrate_isolate(int migratetype)
>>> struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>>> 				 int migratetype, int flags);
>>> void set_pageblock_migratetype(struct page *page, int migratetype);
>>> -int move_freepages_block(struct zone *zone, struct page *page,
>>> -				int migratetype, int *num_movable);
>>> +int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
>>> +			 bool to_tail, int *num_movable);
>>>
>>> /*
>>>  * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index 9e3ed4a6f69a..d5a5f528b8ca 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -905,6 +905,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
>>> 	list_move(&page->lru, &area->free_list[migratetype]);
>>> }
>>>
>>> +/* Used for pages which are on another list */
>>> +static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
>>> +					  unsigned int order, int migratetype)
>>> +{
>>> +	struct free_area *area = &zone->free_area[order];
>>> +
>>> +	list_move_tail(&page->lru, &area->free_list[migratetype]);
>>> +}
>>> +
>> 
>> Would it be better to pass the *to_tail* to move_to_free_list(), so we won't
>> have a new function?
>
>Hi,
>
>thanks for the review!
>
>See discussion in RFC + cover letter:
>
>"Add a "to_tail" parameter to move_freepages_block() but introduce a new
>move_to_free_list_tail() - similar to add_to_free_list_tail()."

Hmm, sounds reasonable.

Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

-- 
Wei Yang
Help you, Help me

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
  2020-09-28 18:21 ` [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag David Hildenbrand
  2020-09-28 20:11     ` Pankaj Gupta
  2020-09-29  8:58   ` Wei Yang
@ 2020-10-02 13:17   ` Michal Hocko
  2020-10-02 13:41   ` Matthew Wilcox
  3 siblings, 0 replies; 36+ messages in thread
From: Michal Hocko @ 2020-10-02 13:17 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Vlastimil Babka, Oscar Salvador,
	Mel Gorman, Dave Hansen, Wei Yang, Mike Rapoport

On Mon 28-09-20 20:21:06, David Hildenbrand wrote:
> Let's prepare for additional flags and avoid long parameter lists of bools.
> Follow-up patches will also make use of the flags in __free_pages_ok(),
> however, I wasn't able to come up with a better name for the type - should
> be good enough for internal purposes.
> 
> Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Hopefully this will not wreck the generated code. But considering we
would need another parameter, there is not much choice left.
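
To make the trade-off concrete - a call-site comparison only, purely
illustrative and not code from the patch: stacked bools are easy to mix
up, while the flag type keeps the intent visible:

	/* hypothetical bool-based variant */
	__free_one_page(page, pfn, zone, order, mt, /* report */ false,
			/* to_tail */ true);

	/* with the flag type from this series */
	__free_one_page(page, pfn, zone, order, mt,
			FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);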

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/page_alloc.c | 28 ++++++++++++++++++++--------
>  1 file changed, 20 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index df90e3654f97..daab90e960fe 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -77,6 +77,18 @@
>  #include "shuffle.h"
>  #include "page_reporting.h"
>  
> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
> +typedef int __bitwise fop_t;
> +
> +/* No special request */
> +#define FOP_NONE		((__force fop_t)0)
> +
> +/*
> + * Skip free page reporting notification for the (possibly merged) page. (will
> + * *not* mark the page reported, only skip the notification).
> + */
> +#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION	(8)
> @@ -948,10 +960,9 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
>   * -- nyc
>   */
>  
> -static inline void __free_one_page(struct page *page,
> -		unsigned long pfn,
> -		struct zone *zone, unsigned int order,
> -		int migratetype, bool report)
> +static inline void __free_one_page(struct page *page, unsigned long pfn,
> +				   struct zone *zone, unsigned int order,
> +				   int migratetype, fop_t fop_flags)
>  {
>  	struct capture_control *capc = task_capc(zone);
>  	unsigned long buddy_pfn;
> @@ -1038,7 +1049,7 @@ static inline void __free_one_page(struct page *page,
>  		add_to_free_list(page, zone, order, migratetype);
>  
>  	/* Notify page reporting subsystem of freed page */
> -	if (report)
> +	if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
>  		page_reporting_notify_free(order);
>  }
>  
> @@ -1379,7 +1390,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  		if (unlikely(isolated_pageblocks))
>  			mt = get_pageblock_migratetype(page);
>  
> -		__free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
> +		__free_one_page(page, page_to_pfn(page), zone, 0, mt, FOP_NONE);
>  		trace_mm_page_pcpu_drain(page, 0, mt);
>  	}
>  	spin_unlock(&zone->lock);
> @@ -1395,7 +1406,7 @@ static void free_one_page(struct zone *zone,
>  		is_migrate_isolate(migratetype))) {
>  		migratetype = get_pfnblock_migratetype(page, pfn);
>  	}
> -	__free_one_page(page, pfn, zone, order, migratetype, true);
> +	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
>  	spin_unlock(&zone->lock);
>  }
>  
> @@ -3288,7 +3299,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
>  	lockdep_assert_held(&zone->lock);
>  
>  	/* Return isolated page to tail of freelist. */
> -	__free_one_page(page, page_to_pfn(page), zone, order, mt, false);
> +	__free_one_page(page, page_to_pfn(page), zone, order, mt,
> +			FOP_SKIP_REPORT_NOTIFY);
>  }
>  
>  /*
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()
  2020-09-28 18:21 ` [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page() David Hildenbrand
  2020-09-28 20:38     ` Pankaj Gupta
  2020-09-29  9:10   ` Wei Yang
@ 2020-10-02 13:19   ` Michal Hocko
  2 siblings, 0 replies; 36+ messages in thread
From: Michal Hocko @ 2020-10-02 13:19 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Oscar Salvador, Mel Gorman,
	Dave Hansen, Vlastimil Babka, Wei Yang, Mike Rapoport,
	Scott Cheloha, Michael Ellerman

On Mon 28-09-20 20:21:07, David Hildenbrand wrote:
> __putback_isolated_page() already documents that pages will be placed to
> the tail of the freelist - this is, however, not the case for
> "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
> the case for all existing users.
> 
> This change affects two users:
> - free page reporting
> - page isolation, when undoing the isolation (including memory onlining).
> 
> This behavior is desirable for pages that haven't really been touched
> lately, so exactly the two users that don't actually read/write page
> content, but rather move untouched pages.
> 
> The new behavior is especially desirable for memory onlining, where we
> allow allocation of newly onlined pages via undo_isolate_page_range()
> in online_pages(). Right now, we always place them to the head of the
> free list, resulting in undesirable behavior: Assume we add
> individual memory chunks via add_memory() and online them right away to
> the NORMAL zone. We create a dependency chain of unmovable allocations
> e.g., via the memmap. The memmap of the next chunk will be placed onto
> previous chunks - if the last block cannot get offlined+removed, all
> dependent ones cannot get offlined+removed. While this can already be
> observed with individual DIMMs, it's more of an issue for virtio-mem
> (and I suspect also ppc DLPAR).
> 
> Document that this should only be used for optimizations, and no code
> should rely on this for correctness (if the order of freepage lists
> ever changes).
> 
> We won't care about page shuffling: memory onlining already properly
> shuffles after onlining. free page reporting doesn't care about
> physically contiguous ranges, and there are already cases where page
> isolation will simply move (physically close) free pages to (currently)
> the head of the freelists via move_freepages_block() instead of
> shuffling. If this becomes ever relevant, we should shuffle the whole
> zone when undoing isolation of larger ranges, and after
> free_contig_range().
> 
> Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Scott Cheloha <cheloha@linux.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/page_alloc.c | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index daab90e960fe..9e3ed4a6f69a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -89,6 +89,18 @@ typedef int __bitwise fop_t;
>   */
>  #define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
>  
> +/*
> + * Place the (possibly merged) page to the tail of the freelist. Will ignore
> + * page shuffling (relevant code - e.g., memory onlining - is expected to
> + * shuffle the whole zone).
> + *
> + * Note: No code should rely onto this flag for correctness - it's purely
> + *       to allow for optimizations when handing back either fresh pages
> + *       (memory onlining) or untouched pages (page isolation, free page
> + *       reporting).
> + */
> +#define FOP_TO_TAIL		((__force fop_t)BIT(1))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION	(8)
> @@ -1038,7 +1050,9 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
>  done_merging:
>  	set_page_order(page, order);
>  
> -	if (is_shuffle_order(order))
> +	if (fop_flags & FOP_TO_TAIL)
> +		to_tail = true;
> +	else if (is_shuffle_order(order))
>  		to_tail = shuffle_pick_tail();
>  	else
>  		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
> @@ -3300,7 +3314,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
>  
>  	/* Return isolated page to tail of freelist. */
>  	__free_one_page(page, page_to_pfn(page), zone, order, mt,
> -			FOP_SKIP_REPORT_NOTIFY);
> +			FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
>  }
>  
>  /*
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
  2020-09-28 18:21 ` [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate() David Hildenbrand
  2020-09-28 20:55     ` Pankaj Gupta
  2020-09-29  9:18   ` Wei Yang
@ 2020-10-02 13:24   ` Michal Hocko
  2020-10-02 15:20     ` David Hildenbrand
  2 siblings, 1 reply; 36+ messages in thread
From: Michal Hocko @ 2020-10-02 13:24 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Oscar Salvador, Alexander Duyck, Mel Gorman,
	Dave Hansen, Vlastimil Babka, Wei Yang, Mike Rapoport,
	Scott Cheloha, Michael Ellerman

On Mon 28-09-20 20:21:08, David Hildenbrand wrote:
> Page isolation doesn't actually touch the pages, it simply isolates
> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
> 
> We already place pages to the tail of the freelists when undoing
> isolation via __putback_isolated_page(), let's do it in any case
> (e.g., if order <= pageblock_order) and document the behavior.
> 
> Add a "to_tail" parameter to move_freepages_block() but introduce a
> a new move_to_free_list_tail() - similar to add_to_free_list_tail().
> 
> This change results in all pages getting onlined via online_pages() to
> be placed to the tail of the freelist.

Is there anything preventing us from doing this unconditionally? Or in
other words, do any of the existing callers of move_freepages_block()
benefit from adding to the head?

> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Scott Cheloha <cheloha@linux.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  include/linux/page-isolation.h |  4 ++--
>  mm/page_alloc.c                | 35 +++++++++++++++++++++++-----------
>  mm/page_isolation.c            | 12 +++++++++---
>  3 files changed, 35 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
> index 572458016331..3eca9b3c5305 100644
> --- a/include/linux/page-isolation.h
> +++ b/include/linux/page-isolation.h
> @@ -36,8 +36,8 @@ static inline bool is_migrate_isolate(int migratetype)
>  struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>  				 int migratetype, int flags);
>  void set_pageblock_migratetype(struct page *page, int migratetype);
> -int move_freepages_block(struct zone *zone, struct page *page,
> -				int migratetype, int *num_movable);
> +int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
> +			 bool to_tail, int *num_movable);
>  
>  /*
>   * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9e3ed4a6f69a..d5a5f528b8ca 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -905,6 +905,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
>  	list_move(&page->lru, &area->free_list[migratetype]);
>  }
>  
> +/* Used for pages which are on another list */
> +static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
> +					  unsigned int order, int migratetype)
> +{
> +	struct free_area *area = &zone->free_area[order];
> +
> +	list_move_tail(&page->lru, &area->free_list[migratetype]);
> +}
> +
>  static inline void del_page_from_free_list(struct page *page, struct zone *zone,
>  					   unsigned int order)
>  {
> @@ -2338,9 +2347,9 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
>   * Note that start_page and end_pages are not aligned on a pageblock
>   * boundary. If alignment is required, use move_freepages_block()
>   */
> -static int move_freepages(struct zone *zone,
> -			  struct page *start_page, struct page *end_page,
> -			  int migratetype, int *num_movable)
> +static int move_freepages(struct zone *zone, struct page *start_page,
> +			  struct page *end_page, int migratetype,
> +			  bool to_tail, int *num_movable)
>  {
>  	struct page *page;
>  	unsigned int order;
> @@ -2371,7 +2380,10 @@ static int move_freepages(struct zone *zone,
>  		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
>  
>  		order = page_order(page);
> -		move_to_free_list(page, zone, order, migratetype);
> +		if (to_tail)
> +			move_to_free_list_tail(page, zone, order, migratetype);
> +		else
> +			move_to_free_list(page, zone, order, migratetype);
>  		page += 1 << order;
>  		pages_moved += 1 << order;
>  	}
> @@ -2379,8 +2391,8 @@ static int move_freepages(struct zone *zone,
>  	return pages_moved;
>  }
>  
> -int move_freepages_block(struct zone *zone, struct page *page,
> -				int migratetype, int *num_movable)
> +int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
> +			 bool to_tail, int *num_movable)
>  {
>  	unsigned long start_pfn, end_pfn;
>  	struct page *start_page, *end_page;
> @@ -2401,7 +2413,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
>  		return 0;
>  
>  	return move_freepages(zone, start_page, end_page, migratetype,
> -								num_movable);
> +			      to_tail, num_movable);
>  }
>  
>  static void change_pageblock_range(struct page *pageblock_page,
> @@ -2526,8 +2538,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  	if (!whole_block)
>  		goto single_page;
>  
> -	free_pages = move_freepages_block(zone, page, start_type,
> -						&movable_pages);
> +	free_pages = move_freepages_block(zone, page, start_type, false,
> +					  &movable_pages);
>  	/*
>  	 * Determine how many pages are compatible with our allocation.
>  	 * For movable allocation, it's the number of movable pages which
> @@ -2635,7 +2647,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>  	    && !is_migrate_cma(mt)) {
>  		zone->nr_reserved_highatomic += pageblock_nr_pages;
>  		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
> -		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
> +		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, false,
> +				     NULL);
>  	}
>  
>  out_unlock:
> @@ -2711,7 +2724,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
>  			ret = move_freepages_block(zone, page, ac->migratetype,
> -									NULL);
> +						   false, NULL);
>  			if (ret) {
>  				spin_unlock_irqrestore(&zone->lock, flags);
>  				return ret;
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index abfe26ad59fd..de44e1329706 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -45,7 +45,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
>  		set_pageblock_migratetype(page, MIGRATE_ISOLATE);
>  		zone->nr_isolate_pageblock++;
>  		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
> -									NULL);
> +						false, NULL);
>  
>  		__mod_zone_freepage_state(zone, -nr_pages, mt);
>  		spin_unlock_irqrestore(&zone->lock, flags);
> @@ -83,7 +83,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>  	 * Because freepage with more than pageblock_order on isolated
>  	 * pageblock is restricted to merge due to freepage counting problem,
>  	 * it is possible that there is free buddy page.
> -	 * move_freepages_block() doesn't care of merge so we need other
> +	 * move_freepages_block() don't care about merging, so we need another
>  	 * approach in order to merge them. Isolation and free will make
>  	 * these pages to be merged.
>  	 */
> @@ -106,9 +106,15 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>  	 * If we isolate freepage with more than pageblock_order, there
>  	 * should be no freepage in the range, so we could avoid costly
>  	 * pageblock scanning for freepage moving.
> +	 *
> +	 * We didn't actually touch any of the isolated pages, so place them
> +	 * to the tail of the freelist. This is an optimization for memory
> +	 * onlining - just onlined memory won't immediately be considered for
> +	 * allocation.
>  	 */
>  	if (!isolated_page) {
> -		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
> +		nr_pages = move_freepages_block(zone, page, migratetype, true,
> +						NULL);
>  		__mod_zone_freepage_state(zone, nr_pages, migratetype);
>  	}
>  	set_pageblock_migratetype(page, migratetype);
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core()
  2020-09-28 18:21 ` [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core() David Hildenbrand
  2020-09-28 20:33     ` Pankaj Gupta
  2020-09-29  9:36   ` Wei Yang
@ 2020-10-02 13:41   ` Michal Hocko
  2020-10-02 15:10     ` David Hildenbrand
  2 siblings, 1 reply; 36+ messages in thread
From: Michal Hocko @ 2020-10-02 13:41 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Vlastimil Babka, Oscar Salvador, Alexander Duyck,
	Mel Gorman, Dave Hansen, Wei Yang, Mike Rapoport,
	K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu

On Mon 28-09-20 20:21:09, David Hildenbrand wrote:
> __free_pages_core() is used when exposing fresh memory to the buddy
> during system boot and when onlining memory in generic_online_page().
> 
> generic_online_page() is used in two cases:
> 
> 1. Direct memory onlining in online_pages().
> 2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
>    balloon and virtio-mem), when parts of a section are kept
>    fake-offline to be fake-onlined later on.
> 
> In 1, we already place pages to the tail of the freelist. Pages will be
> freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
> via undo_isolate_page_range().
> 
> In 2, we currently don't implement a proper rule. In case of virtio-mem,
> where we currently always online MAX_ORDER - 1 pages, the pages will be
> placed to the HEAD of the freelist - undesirable. While the hyper-v
> balloon calls generic_online_page() with single pages, usually it will
> call it on successive single pages in a larger block.
> 
> The pages are fresh, so place them to the tail of the freelists and avoid
> the PCP. In __free_pages_core(), remove the now superfluous call to
> set_page_refcounted() and add a comment regarding page initialization and
> the refcount.
> 
> Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
> is usually of limited use in virtualized environments), we might want to
> shuffle after a sequence of generic_online_page() calls in the
> relevant callers.

It took some time to get through all the freeing paths with their subtle
differences, but this looks reasonable. You mention that this influences
the boot-time free memory ordering as well, but only very briefly. I do
not expect this to make a huge difference, but who knows. It makes some
sense to add pages in the order they show up in physical address
ordering.

> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
> Cc: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: Stephen Hemminger <sthemmin@microsoft.com>
> Cc: Wei Liu <wei.liu@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>

That being said I do not see any fundamental problems.
Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/page_alloc.c | 37 ++++++++++++++++++++++++-------------
>  1 file changed, 24 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d5a5f528b8ca..8a2134fe9947 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -270,7 +270,8 @@ bool pm_suspended_storage(void)
>  unsigned int pageblock_order __read_mostly;
>  #endif
>  
> -static void __free_pages_ok(struct page *page, unsigned int order);
> +static void __free_pages_ok(struct page *page, unsigned int order,
> +			    fop_t fop_flags);
>  
>  /*
>   * results with 256, 32 in the lowmem_reserve sysctl:
> @@ -682,7 +683,7 @@ static void bad_page(struct page *page, const char *reason)
>  void free_compound_page(struct page *page)
>  {
>  	mem_cgroup_uncharge(page);
> -	__free_pages_ok(page, compound_order(page));
> +	__free_pages_ok(page, compound_order(page), FOP_NONE);
>  }
>  
>  void prep_compound_page(struct page *page, unsigned int order)
> @@ -1419,17 +1420,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  	spin_unlock(&zone->lock);
>  }
>  
> -static void free_one_page(struct zone *zone,
> -				struct page *page, unsigned long pfn,
> -				unsigned int order,
> -				int migratetype)
> +static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn,
> +			  unsigned int order, int migratetype, fop_t fop_flags)
>  {
>  	spin_lock(&zone->lock);
>  	if (unlikely(has_isolate_pageblock(zone) ||
>  		is_migrate_isolate(migratetype))) {
>  		migratetype = get_pfnblock_migratetype(page, pfn);
>  	}
> -	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
> +	__free_one_page(page, pfn, zone, order, migratetype, fop_flags);
>  	spin_unlock(&zone->lock);
>  }
>  
> @@ -1507,7 +1506,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
>  	}
>  }
>  
> -static void __free_pages_ok(struct page *page, unsigned int order)
> +static void __free_pages_ok(struct page *page, unsigned int order,
> +			    fop_t fop_flags)
>  {
>  	unsigned long flags;
>  	int migratetype;
> @@ -1519,7 +1519,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
>  	migratetype = get_pfnblock_migratetype(page, pfn);
>  	local_irq_save(flags);
>  	__count_vm_events(PGFREE, 1 << order);
> -	free_one_page(page_zone(page), page, pfn, order, migratetype);
> +	free_one_page(page_zone(page), page, pfn, order, migratetype,
> +		      fop_flags);
>  	local_irq_restore(flags);
>  }
>  
> @@ -1529,6 +1530,11 @@ void __free_pages_core(struct page *page, unsigned int order)
>  	struct page *p = page;
>  	unsigned int loop;
>  
> +	/*
> +	 * When initializing the memmap, init_single_page() sets the refcount
> +	 * of all pages to 1 ("allocated"/"not free"). We have to set the
> +	 * refcount of all involved pages to 0.
> +	 */
>  	prefetchw(p);
>  	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
>  		prefetchw(p + 1);
> @@ -1539,8 +1545,12 @@ void __free_pages_core(struct page *page, unsigned int order)
>  	set_page_count(p, 0);
>  
>  	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
> -	set_page_refcounted(page);
> -	__free_pages(page, order);
> +
> +	/*
> +	 * Bypass PCP and place fresh pages right to the tail, primarily
> +	 * relevant for memory onlining.
> +	 */
> +	__free_pages_ok(page, order, FOP_TO_TAIL);
>  }
>  
>  #ifdef CONFIG_NEED_MULTIPLE_NODES
> @@ -3171,7 +3181,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>  	 */
>  	if (migratetype >= MIGRATE_PCPTYPES) {
>  		if (unlikely(is_migrate_isolate(migratetype))) {
> -			free_one_page(zone, page, pfn, 0, migratetype);
> +			free_one_page(zone, page, pfn, 0, migratetype,
> +				      FOP_NONE);
>  			return;
>  		}
>  		migratetype = MIGRATE_MOVABLE;
> @@ -5063,7 +5074,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
>  	if (order == 0)		/* Via pcp? */
>  		free_unref_page(page);
>  	else
> -		__free_pages_ok(page, order);
> +		__free_pages_ok(page, order, FOP_NONE);
>  }
>  
>  void __free_pages(struct page *page, unsigned int order)
> -- 
> 2.26.2
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
  2020-09-28 18:21 ` [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag David Hildenbrand
                     ` (2 preceding siblings ...)
  2020-10-02 13:17   ` Michal Hocko
@ 2020-10-02 13:41   ` Matthew Wilcox
  2020-10-02 14:48     ` David Hildenbrand
  3 siblings, 1 reply; 36+ messages in thread
From: Matthew Wilcox @ 2020-10-02 13:41 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Vlastimil Babka, Oscar Salvador,
	Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Mike Rapoport

On Mon, Sep 28, 2020 at 08:21:06PM +0200, David Hildenbrand wrote:
> Let's prepare for additional flags and avoid long parameter lists of bools.
> Follow-up patches will also make use of the flags in __free_pages_ok(),
> however, I wasn't able to come up with a better name for the type - should
> be good enough for internal purposes.

> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
> +typedef int __bitwise fop_t;

That invites confusion with f_op.  There's no reason to use _t as a suffix
here ... why not free_f?

> +/*
> + * Skip free page reporting notification for the (possibly merged) page. (will
> + * *not* mark the page reported, only skip the notification).

... Don't you mean "will not skip marking the page as reported, only
skip the notification"?

*reads code*

No, I'm still confused.  What does this sentence mean?

Would it help to have a FOP_DEFAULT that has FOP_REPORT_NOTIFY set and
then a FOP_SKIP_REPORT_NOTIFY define that is 0?
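
Something along these lines - just a sketch of the suggestion, using the
names from the question above, not existing code:

#define FOP_REPORT_NOTIFY	((__force fop_t)BIT(0))
#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)0)
#define FOP_DEFAULT		FOP_REPORT_NOTIFY

	/* in __free_one_page(): */
	if (fop_flags & FOP_REPORT_NOTIFY)
		page_reporting_notify_free(order);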

> -static inline void __free_one_page(struct page *page,
> -		unsigned long pfn,
> -		struct zone *zone, unsigned int order,
> -		int migratetype, bool report)
> +static inline void __free_one_page(struct page *page, unsigned long pfn,
> +				   struct zone *zone, unsigned int order,
> +				   int migratetype, fop_t fop_flags)

Please don't over-indent like this.

static inline void __free_one_page(struct page *page, unsigned long pfn,
		struct zone *zone, unsigned int order, int migratetype,
		fop_t fop_flags)

reads just as well and then if someone needs to delete the 'static'
later, they don't need to fiddle around with subsequent lines getting
the whitespace to line up again.


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone shuffling
  2020-09-28 18:21 ` [PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone shuffling David Hildenbrand
  2020-09-29  9:40   ` Wei Yang
@ 2020-10-02 13:41   ` Michal Hocko
  1 sibling, 0 replies; 36+ messages in thread
From: Michal Hocko @ 2020-10-02 13:41 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Mel Gorman, Dave Hansen,
	Vlastimil Babka, Wei Yang, Oscar Salvador, Mike Rapoport

On Mon 28-09-20 20:21:10, David Hildenbrand wrote:
> As we no longer shuffle via generic_online_page() and when undoing
> isolation, we can simplify the comment.
> 
> We now effectively shuffle only once (properly) when onlining new
> memory.
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/memory_hotplug.c | 11 ++++-------
>  1 file changed, 4 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 9db80ee29caa..c589bd8801bb 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -859,13 +859,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>  	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
>  
>  	/*
> -	 * When exposing larger, physically contiguous memory areas to the
> -	 * buddy, shuffling in the buddy (when freeing onlined pages, putting
> -	 * them either to the head or the tail of the freelist) is only helpful
> -	 * for maintaining the shuffle, but not for creating the initial
> -	 * shuffle. Shuffle the whole zone to make sure the just onlined pages
> -	 * are properly distributed across the whole freelist. Make sure to
> -	 * shuffle once pageblocks are no longer isolated.
> +	 * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
> +	 * the tail of the freelist when undoing isolation). Shuffle the whole
> +	 * zone to make sure the just onlined pages are properly distributed
> +	 * across the whole freelist - to create an initial shuffle.
>  	 */
>  	shuffle_zone(zone);
>  
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
  2020-10-02 13:41   ` Matthew Wilcox
@ 2020-10-02 14:48     ` David Hildenbrand
  2020-10-02 14:57       ` David Hildenbrand
  0 siblings, 1 reply; 36+ messages in thread
From: David Hildenbrand @ 2020-10-02 14:48 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Vlastimil Babka, Oscar Salvador,
	Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Mike Rapoport

On 02.10.20 15:41, Matthew Wilcox wrote:
> On Mon, Sep 28, 2020 at 08:21:06PM +0200, David Hildenbrand wrote:
>> Let's prepare for additional flags and avoid long parameter lists of bools.
>> Follow-up patches will also make use of the flags in __free_pages_ok();
>> however, I wasn't able to come up with a better name for the type - should
>> be good enough for internal purposes.
> 
>> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
>> +typedef int __bitwise fop_t;
> 
> That invites confusion with f_op.  There's no reason to use _t as a suffix
> here ... why not free_f?

git grep "bitwise" | grep typedef | grep include/linux

indicates that "_t" is the right thing to do.

I want a name that highlights that it is for the internal variants of
free_page(); free_f / free_t is too generic.

fpi_t (Free Page Internal) ?
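
For illustration, a minimal sketch of what that rename could look like
(fpi_t is only a proposal at this point; the values simply mirror the fop_t
ones from the patch):

typedef int __bitwise fpi_t;

/* No special request. */
#define FPI_NONE		((__force fpi_t)0)
/* Skip the free page reporting notification (see discussion below). */
#define FPI_SKIP_REPORT_NOTIFY	((__force fpi_t)BIT(0))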

> 
>> +/*
>> + * Skip free page reporting notification for the (possibly merged) page. (will
>> + * *not* mark the page reported, only skip the notification).
> 
> ... Don't you mean "will not skip marking the page as reported, only
> skip the notification"?

Yeah, I can use that.

The way free page reporting works is that

1. The free page reporting infrastructure gets notified about a newly
freed page after buddy merging.

2. Once a certain threshold of free pages is reached, it will pull pages
from the freelist, report them, and mark them as reported. (see
mm/page_reporting.c)

During 2., we didn't actually free a "new page"; we only temporarily
removed it from the list, which is why we have to skip the notification.

What we do here is skip 1., not 2.
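
In code terms, step 1 is the notification at the end of __free_one_page().
A heavily abbreviated sketch of how the flag would gate it (the helper
page_reporting_notify_free() is the existing one):

static inline void __free_one_page(struct page *page, unsigned long pfn,
		struct zone *zone, unsigned int order, int migratetype,
		fop_t fop_flags)
{
	/* ... buddy merging and freelist placement elided ... */

	/*
	 * Step 1 from above: notify the free page reporting infrastructure
	 * about the newly freed (possibly merged) page - unless the caller
	 * asked us to skip it. Step 2 (pulling pages, reporting them and
	 * marking them "reported") lives entirely in mm/page_reporting.c.
	 */
	if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
		page_reporting_notify_free(order);
}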

> 
> *reads code*
> 
> No, I'm still confused.  What does this sentence mean?
> 
> Would it help to have a FOP_DEFAULT that has FOP_REPORT_NOTIFY set and
> then a FOP_SKIP_REPORT_NOTIFY define that is 0?

Hmm, I'm not entirely sure if that improves the situation. Then I'd need
3 defines instead of two, and "inverse" documentation for
FOP_REPORT_NOTIFY.

> 
>> -static inline void __free_one_page(struct page *page,
>> -		unsigned long pfn,
>> -		struct zone *zone, unsigned int order,
>> -		int migratetype, bool report)
>> +static inline void __free_one_page(struct page *page, unsigned long pfn,
>> +				   struct zone *zone, unsigned int order,
>> +				   int migratetype, fop_t fop_flags)
> 
> Please don't over-indent like this.
> 
> static inline void __free_one_page(struct page *page, unsigned long pfn,
> 		struct zone *zone, unsigned int order, int migratetype,
> 		fop_t fop_flags)
> 
> reads just as well and then if someone needs to delete the 'static'
> later, they don't need to fiddle around with subsequent lines getting
> the whitespace to line up again.
> 

I don't care too much about this specific instance and can fix it up.
(this is clearly a matter of personal taste)

Thanks!

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
  2020-10-02 14:48     ` David Hildenbrand
@ 2020-10-02 14:57       ` David Hildenbrand
  0 siblings, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-10-02 14:57 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Alexander Duyck, Vlastimil Babka, Oscar Salvador,
	Mel Gorman, Michal Hocko, Dave Hansen, Wei Yang, Mike Rapoport

On 02.10.20 16:48, David Hildenbrand wrote:
> On 02.10.20 15:41, Matthew Wilcox wrote:
>> On Mon, Sep 28, 2020 at 08:21:06PM +0200, David Hildenbrand wrote:
>>> Let's prepare for additional flags and avoid long parameter lists of bools.
>>> Follow-up patches will also make use of the flags in __free_pages_ok();
>>> however, I wasn't able to come up with a better name for the type - should
>>> be good enough for internal purposes.
>>
>>> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
>>> +typedef int __bitwise fop_t;
>>
>> That invites confusion with f_op.  There's no reason to use _t as a suffix
>> here ... why not free_f?
> 
> git grep "bitwise" | grep typedef | grep include/linux
> 
> indicates that "_t" is the right thing to do.
> 
> I want a name that highlights that it is for the internal variants of
> free_page(); free_f / free_t is too generic.
> 
> fpi_t (Free Page Internal) ?
> 
>>
>>> +/*
>>> + * Skip free page reporting notification for the (possibly merged) page. (will
>>> + * *not* mark the page reported, only skip the notification).
>>
>> ... Don't you mean "will not skip marking the page as reported, only
>> skip the notification"?
> 
> Yeah, I can use that.

Reading again, it doesn't quite fit. Marking pages as reported is
handled by mm/page_reporting.c

/*
 * Skip free page reporting notification for the (possibly merged) page.
 * This does not hinder free page reporting from grabbing the page,
 * reporting it and marking it "reported" - it only skips notifying
 * the free page reporting infrastructure about a newly freed page. For
 * example, used when temporarily pulling a page from the freelist and
 * putting it back unmodified.
 */

Is that clearer?
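
As a concrete example of the "temporarily pulling a page" case, this is
roughly what __putback_isolated_page() ends up doing with the flags from
this series (a sketch; the exact flag spelling as posted may differ):

void __putback_isolated_page(struct page *page, unsigned int order, int mt)
{
	struct zone *zone = page_zone(page);

	/* zone lock should be held when this function is called */
	lockdep_assert_held(&zone->lock);

	/*
	 * The page was only temporarily pulled from the freelist (e.g., by
	 * free page reporting itself): don't generate another notification,
	 * and put it back at the tail so it isn't handed out again right
	 * away.
	 */
	__free_one_page(page, page_to_pfn(page), zone, order, mt,
			FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
}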

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core()
  2020-10-02 13:41   ` Michal Hocko
@ 2020-10-02 15:10     ` David Hildenbrand
  0 siblings, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-10-02 15:10 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Vlastimil Babka, Oscar Salvador, Alexander Duyck,
	Mel Gorman, Dave Hansen, Wei Yang, Mike Rapoport,
	K. Y. Srinivasan, Haiyang Zhang, Stephen Hemminger, Wei Liu

On 02.10.20 15:41, Michal Hocko wrote:
> On Mon 28-09-20 20:21:09, David Hildenbrand wrote:
>> __free_pages_core() is used when exposing fresh memory to the buddy
>> during system boot and when onlining memory in generic_online_page().
>>
>> generic_online_page() is used in two cases:
>>
>> 1. Direct memory onlining in online_pages().
>> 2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
>>    balloon and virtio-mem), when parts of a section are kept
>>    fake-offline to be fake-onlined later on.
>>
>> In 1, we already place pages to the tail of the freelist. Pages will be
>> freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
>> via undo_isolate_page_range().
>>
>> In 2, we currently don't implement a proper rule. In case of virtio-mem,
>> where we currently always online MAX_ORDER - 1 pages, the pages will be
>> placed to the HEAD of the freelist - undesirable. While the hyper-v
>> balloon calls generic_online_page() with single pages, usually it will
>> call it on successive single pages in a larger block.
>>
>> The pages are fresh, so place them to the tail of the freelists and avoid
>> the PCP. In __free_pages_core(), remove the now superfluous call to
>> set_page_refcounted() and add a comment regarding page initialization and
>> the refcount.
>>
>> Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
>> is usually of limited use in virtualized environments), we might want to
>> shuffle after a sequence of generic_online_page() calls in the
>> relevant callers.
> 
> It took some time to get through all the freeing paths with subtle
> differences but this looks reasonable. You are mentioning that this
> influences boot-time free memory ordering as well, but only very
> briefly. I do not expect this to make a huge difference but who knows.
> It makes some sense to add pages in the order they show up in the
> physical address ordering.

I think boot memory is mostly exposed in physical address order. In
that case, higher addresses will now be less likely to be used
immediately after this patch. I also don't think it's an issue - and if
it does turn out to be one, it's fairly easy to change again.
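
For reference, the placement decision that makes the fresh pages land at
the tail boils down to something like this at the end of __free_one_page()
(a sketch; is_shuffle_order(), shuffle_pick_tail() and buddy_merge_likely()
are the pre-existing helpers):

	bool to_tail;

	if (fop_flags & FOP_TO_TAIL)
		/* Fresh/onlined pages: always queue at the tail. */
		to_tail = true;
	else if (is_shuffle_order(order))
		to_tail = shuffle_pick_tail();
	else
		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);

	if (to_tail)
		add_to_free_list_tail(page, zone, order, migratetype);
	else
		add_to_free_list(page, zone, order, migratetype);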

Thanks!

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
  2020-10-02 13:24   ` Michal Hocko
@ 2020-10-02 15:20     ` David Hildenbrand
  2020-10-05  6:56       ` Michal Hocko
  0 siblings, 1 reply; 36+ messages in thread
From: David Hildenbrand @ 2020-10-02 15:20 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Oscar Salvador, Alexander Duyck, Mel Gorman,
	Dave Hansen, Vlastimil Babka, Wei Yang, Mike Rapoport,
	Scott Cheloha, Michael Ellerman

On 02.10.20 15:24, Michal Hocko wrote:
> On Mon 28-09-20 20:21:08, David Hildenbrand wrote:
>> Page isolation doesn't actually touch the pages, it simply isolates
>> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>>
>> We already place pages to the tail of the freelists when undoing
>> isolation via __putback_isolated_page(), let's do it in any case
>> (e.g., if order <= pageblock_order) and document the behavior.
>>
>> Add a "to_tail" parameter to move_freepages_block() but introduce a
>> a new move_to_free_list_tail() - similar to add_to_free_list_tail().
>>
>> This change results in all pages getting onlined via online_pages() to
>> be placed to the tail of the freelist.
> 
> Is there anything preventing to do this unconditionally? Or in other
> words is any of the existing callers of move_freepages_block benefiting
> from adding to the head?

1. mm/page_isolation.c:set_migratetype_isolate()

We move stuff to the MIGRATE_ISOLATE list, we don't care about the order
there.

2. steal_suitable_fallback():

I don't think we care too much about the order when already stealing
pageblocks ... and the freelist is empty I guess?

3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock()

Not sure if we really care.

Good question, I tried to be careful of what I touch. Thoughts?
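
For completeness, the new helper itself is trivial - a sketch mirroring
the existing add_to_free_list_tail():

static inline void
move_to_free_list_tail(struct page *page, struct zone *zone,
		       unsigned int order, int migratetype)
{
	struct free_area *area = &zone->free_area[order];

	/* Like move_to_free_list(), but queue the page at the tail. */
	list_move_tail(&page->lru, &area->free_list[migratetype]);
}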

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
  2020-10-02 15:20     ` David Hildenbrand
@ 2020-10-05  6:56       ` Michal Hocko
  2020-10-05  8:20         ` Mel Gorman
  0 siblings, 1 reply; 36+ messages in thread
From: Michal Hocko @ 2020-10-05  6:56 UTC (permalink / raw)
  To: David Hildenbrand, Mel Gorman, Vlastimil Babka
  Cc: linux-kernel, linux-mm, linux-hyperv, xen-devel, linux-acpi,
	Andrew Morton, Oscar Salvador, Alexander Duyck, Dave Hansen,
	Wei Yang, Mike Rapoport, Scott Cheloha, Michael Ellerman

On Fri 02-10-20 17:20:09, David Hildenbrand wrote:
> On 02.10.20 15:24, Michal Hocko wrote:
> > On Mon 28-09-20 20:21:08, David Hildenbrand wrote:
> >> Page isolation doesn't actually touch the pages, it simply isolates
> >> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
> >>
> >> We already place pages to the tail of the freelists when undoing
> >> isolation via __putback_isolated_page(), let's do it in any case
> >> (e.g., if order <= pageblock_order) and document the behavior.
> >>
> >> Add a "to_tail" parameter to move_freepages_block() but introduce a
> >> new move_to_free_list_tail() - similar to add_to_free_list_tail().
> >>
> >> This change results in all pages onlined via online_pages() being
> >> placed to the tail of the freelist.
> > 
> > Is there anything preventing to do this unconditionally? Or in other
> > words is any of the existing callers of move_freepages_block benefiting
> > from adding to the head?
> 
> 1. mm/page_isolation.c:set_migratetype_isolate()
> 
> We move stuff to the MIGRATE_ISOLATE list, we don't care about the order
> there.
> 
> 2. steal_suitable_fallback():
> 
> I don't think we care too much about the order when already stealing
> pageblocks ... and the freelist is empty I guess?
> 
> 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock()
> 
> Not sure if we really care.

Honestly, I have no idea. I can imagine that some atomic high order
workloads (e.g. in net) might benefit from cache line hot pages but I am
not sure this is really observable. If yes, it would likely be better to
have this documented than relying on a wild guess. If we do not have any
evidence, then I would vote for simplicity first and go with
unconditional add_to_tail, which would simplify your patch a bit.

Maybe Vlastimil or Mel would have a better picture.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
  2020-10-05  6:56       ` Michal Hocko
@ 2020-10-05  8:20         ` Mel Gorman
  2020-10-05  9:11           ` David Hildenbrand
  0 siblings, 1 reply; 36+ messages in thread
From: Mel Gorman @ 2020-10-05  8:20 UTC (permalink / raw)
  To: Michal Hocko
  Cc: David Hildenbrand, Vlastimil Babka, linux-kernel, linux-mm,
	linux-hyperv, xen-devel, linux-acpi, Andrew Morton,
	Oscar Salvador, Alexander Duyck, Dave Hansen, Wei Yang,
	Mike Rapoport, Scott Cheloha, Michael Ellerman

On Mon, Oct 05, 2020 at 08:56:48AM +0200, Michal Hocko wrote:
> On Fri 02-10-20 17:20:09, David Hildenbrand wrote:
> > On 02.10.20 15:24, Michal Hocko wrote:
> > > On Mon 28-09-20 20:21:08, David Hildenbrand wrote:
> > >> Page isolation doesn't actually touch the pages, it simply isolates
> > >> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
> > >>
> > >> We already place pages to the tail of the freelists when undoing
> > >> isolation via __putback_isolated_page(), let's do it in any case
> > >> (e.g., if order <= pageblock_order) and document the behavior.
> > >>
> > >> Add a "to_tail" parameter to move_freepages_block() but introduce a
> > >> new move_to_free_list_tail() - similar to add_to_free_list_tail().
> > >>
> > >> This change results in all pages onlined via online_pages() being
> > >> placed to the tail of the freelist.
> > > 
> > > Is there anything preventing to do this unconditionally? Or in other
> > > words is any of the existing callers of move_freepages_block benefiting
> > > from adding to the head?
> > 
> > 1. mm/page_isolation.c:set_migratetype_isolate()
> > 
> > We move stuff to the MIGRATE_ISOLATE list, we don't care about the order
> > there.
> > 
> > 2. steal_suitable_fallback():
> > 
> > I don't think we care too much about the order when already stealing
> > pageblocks ... and the freelist is empty I guess?
> > 
> > 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock()
> > 
> > Not sure if we really care.
> 
> Honestly, I have no idea. I can imagine that some atomic high order
> workloads (e.g. in net) might benefit from cache line hot pages but I am
> not sure this is really observable.

The highatomic reserve is more concerned about the allocation
succeeding than it is about cache hotness.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate()
  2020-10-05  8:20         ` Mel Gorman
@ 2020-10-05  9:11           ` David Hildenbrand
  0 siblings, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-10-05  9:11 UTC (permalink / raw)
  To: Mel Gorman, Michal Hocko
  Cc: Vlastimil Babka, linux-kernel, linux-mm, linux-hyperv, xen-devel,
	linux-acpi, Andrew Morton, Oscar Salvador, Alexander Duyck,
	Dave Hansen, Wei Yang, Mike Rapoport, Scott Cheloha,
	Michael Ellerman

On 05.10.20 10:20, Mel Gorman wrote:
> On Mon, Oct 05, 2020 at 08:56:48AM +0200, Michal Hocko wrote:
>> On Fri 02-10-20 17:20:09, David Hildenbrand wrote:
>>> On 02.10.20 15:24, Michal Hocko wrote:
>>>> On Mon 28-09-20 20:21:08, David Hildenbrand wrote:
>>>>> Page isolation doesn't actually touch the pages, it simply isolates
>>>>> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>>>>>
>>>>> We already place pages to the tail of the freelists when undoing
>>>>> isolation via __putback_isolated_page(), let's do it in any case
>>>>> (e.g., if order <= pageblock_order) and document the behavior.
>>>>>
>>>>> Add a "to_tail" parameter to move_freepages_block() but introduce a
>>>>> new move_to_free_list_tail() - similar to add_to_free_list_tail().
>>>>>
>>>>> This change results in all pages onlined via online_pages() being
>>>>> placed to the tail of the freelist.
>>>>
>>>> Is there anything preventing to do this unconditionally? Or in other
>>>> words is any of the existing callers of move_freepages_block benefiting
>>>> from adding to the head?
>>>
>>> 1. mm/page_isolation.c:set_migratetype_isolate()
>>>
>>> We move stuff to the MIGRATE_ISOLATE list, we don't care about the order
>>> there.
>>>
>>> 2. steal_suitable_fallback():
>>>
>>> I don't think we care too much about the order when already stealing
>>> pageblocks ... and the freelist is empty I guess?
>>>
>>> 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock()
>>>
>>> Not sure if we really care.
>>
>> Honestly, I have no idea. I can imagine that some atomic high order
>> workloads (e.g. in net) might benefit from cache line hot pages but I am
>> not sure this is really observable.
> 
> The highatomic reserve is more concerned about the allocation
> succeeding than it is about cache hotness.
> 

Thanks Mel and Michal. I'll simplify this patch then - and if it turns
out to be an actual problem, we can change that one instance, adding a
proper comment.

Thanks!

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2020-10-05  9:12 UTC | newest]

Thread overview: 36+ messages
2020-09-28 18:21 [PATCH v1 0/5] mm: place pages to the freelist tail when onling and undoing isolation David Hildenbrand
2020-09-28 18:21 ` [PATCH v1 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag David Hildenbrand
2020-09-28 20:11   ` Pankaj Gupta
2020-09-28 20:11     ` Pankaj Gupta
2020-09-29  8:58   ` Wei Yang
2020-10-02 13:17   ` Michal Hocko
2020-10-02 13:41   ` Matthew Wilcox
2020-10-02 14:48     ` David Hildenbrand
2020-10-02 14:57       ` David Hildenbrand
2020-09-28 18:21 ` [PATCH v1 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page() David Hildenbrand
2020-09-28 20:38   ` Pankaj Gupta
2020-09-28 20:38     ` Pankaj Gupta
2020-09-29  9:10   ` Wei Yang
2020-10-02 13:19   ` Michal Hocko
2020-09-28 18:21 ` [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of the freelist in unset_migratetype_isolate() David Hildenbrand
2020-09-28 20:55   ` Pankaj Gupta
2020-09-28 20:55     ` Pankaj Gupta
2020-09-29  9:18   ` Wei Yang
2020-09-29 10:12     ` David Hildenbrand
2020-09-30  7:48       ` Wei Yang
2020-10-02 13:24   ` Michal Hocko
2020-10-02 15:20     ` David Hildenbrand
2020-10-05  6:56       ` Michal Hocko
2020-10-05  8:20         ` Mel Gorman
2020-10-05  9:11           ` David Hildenbrand
2020-09-28 18:21 ` [PATCH v1 4/5] mm/page_alloc: place pages to tail in __free_pages_core() David Hildenbrand
2020-09-28 20:33   ` Pankaj Gupta
2020-09-28 20:33     ` Pankaj Gupta
2020-09-28 20:33     ` Pankaj Gupta
2020-09-29  9:36   ` Wei Yang
2020-09-29 10:14     ` David Hildenbrand
2020-10-02 13:41   ` Michal Hocko
2020-10-02 15:10     ` David Hildenbrand
2020-09-28 18:21 ` [PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone shuffling David Hildenbrand
2020-09-29  9:40   ` Wei Yang
2020-10-02 13:41   ` Michal Hocko
