* [RFC][PATCH 00/12] mm: tweak page cache migration
@ 2020-10-06 20:51 Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 01/12] mm/vmscan: restore zone_reclaim_mode ABI Dave Hansen
                   ` (13 more replies)
  0 siblings, 14 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, npiggin, akpm, willy, yang.shi, linux-mm

First of all, I think this little slice of code is a bit
under-documented.  Perhaps this will help clarify things.

I'm pretty confident the page_count() check in the first
patch is right, which is why I removed it outright.  The
xas_load() check is a bit murkier, so I just left a
warning in for it.

Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org


* [RFC][PATCH 01/12] mm/vmscan: restore zone_reclaim_mode ABI
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-07  8:45   ` Christopher Lameter
  2020-10-06 20:51 ` [RFC][PATCH 02/12] mm/vmscan: move RECLAIM* bits to uapi header Dave Hansen
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dave Hansen, ben.widawsky, rientjes, alex.shi, dwagner, tobin,
	cl, akpm, ying.huang, dan.j.williams, cai, stable


From: Dave Hansen <dave.hansen@linux.intel.com>

I went to go add a new RECLAIM_* mode for the zone_reclaim_mode
sysctl.  Like a good kernel developer, I also went to go update the
documentation.  I noticed that the bits in the documentation didn't
match the bits in the #defines.

The VM never explicitly checks the RECLAIM_ZONE bit.  The bit is,
however, implicitly checked when 'node_reclaim_mode' is compared
against zero.  The RECLAIM_ZONE #define was removed in a cleanup.
That, by itself, is fine.

But, when the bit (bit 0) was removed, the _other_ bit locations also
got changed.  That's not OK because the bit values are documented to
mean one specific thing and users surely rely on them meaning that one
thing and not changing from kernel to kernel.  The end result is that
if someone had a script that did:

	sysctl vm.zone_reclaim_mode=1

This script would have gone from enabling node reclaim for clean,
unmapped pages to writing out pages during node reclaim after the
commit in question.  That's not great.

Put the bits back the way they were and add a comment so something
like this is a bit harder to do again.  Update the documentation to
make it clear that the first bit is ignored.
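
For illustration, here is roughly how the numeric sysctl values mapped
to behavior before and after the cleanup.  This is just a sketch based
on the #defines touched by this patch, not new code:

	/* Documented ABI, restored by this patch: */
	#define RECLAIM_ZONE	(1<<0)	/* 1: reclaim clean, unmapped pages */
	#define RECLAIM_WRITE	(1<<1)	/* 2: write out pages during reclaim */
	#define RECLAIM_UNMAP	(1<<2)	/* 4: unmap pages during reclaim */

	/* After 648b5cf368e0, the remaining bits had shifted down: */
	#define RECLAIM_WRITE	(1<<0)	/* the value 1 now meant "write out" */
	#define RECLAIM_UNMAP	(1<<1)	/* the value 2 now meant "unmap" */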

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Fixes: 648b5cf368e0 ("mm/vmscan: remove unused RECLAIM_OFF/RECLAIM_ZONE")
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Daniel Wagner <dwagner@suse.de>
Cc: "Tobin C. Harding" <tobin@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Daniel Wagner <dwagner@suse.de>
Cc: stable@vger.kernel.org

--

Changes from v2:
 * Update description to indicate that bit0 was used for clean
   unmapped page node reclaim.
---

 b/Documentation/admin-guide/sysctl/vm.rst |   10 +++++-----
 b/mm/vmscan.c                             |    9 +++++++--
 2 files changed, 12 insertions(+), 7 deletions(-)

diff -puN Documentation/admin-guide/sysctl/vm.rst~mm-vmscan-restore-old-zone_reclaim_mode-abi Documentation/admin-guide/sysctl/vm.rst
--- a/Documentation/admin-guide/sysctl/vm.rst~mm-vmscan-restore-old-zone_reclaim_mode-abi	2020-10-06 13:39:20.595818443 -0700
+++ b/Documentation/admin-guide/sysctl/vm.rst	2020-10-06 13:39:20.601818443 -0700
@@ -976,11 +976,11 @@ that benefit from having their data cach
 left disabled as the caching effect is likely to be more important than
 data locality.
 
-zone_reclaim may be enabled if it's known that the workload is partitioned
-such that each partition fits within a NUMA node and that accessing remote
-memory would cause a measurable performance reduction.  The page allocator
-will then reclaim easily reusable pages (those page cache pages that are
-currently not used) before allocating off node pages.
+Consider enabling one or more zone_reclaim mode bits if it's known that the
+workload is partitioned such that each partition fits within a NUMA node
+and that accessing remote memory would cause a measurable performance
+reduction.  The page allocator will take additional actions before
+allocating off node pages.
 
 Allowing zone reclaim to write out pages stops processes that are
 writing large amounts of data from dirtying pages on other nodes. Zone
diff -puN mm/vmscan.c~mm-vmscan-restore-old-zone_reclaim_mode-abi mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-restore-old-zone_reclaim_mode-abi	2020-10-06 13:39:20.597818443 -0700
+++ b/mm/vmscan.c	2020-10-06 13:39:20.602818443 -0700
@@ -4083,8 +4083,13 @@ module_init(kswapd_init)
  */
 int node_reclaim_mode __read_mostly;
 
-#define RECLAIM_WRITE (1<<0)	/* Writeout pages during reclaim */
-#define RECLAIM_UNMAP (1<<1)	/* Unmap pages during reclaim */
+/*
+ * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
+ * ABI.  New bits are OK, but existing bits can never change.
+ */
+#define RECLAIM_ZONE  (1<<0)   /* Run shrink_inactive_list on the zone */
+#define RECLAIM_WRITE (1<<1)   /* Writeout pages during reclaim */
+#define RECLAIM_UNMAP (1<<2)   /* Unmap pages during reclaim */
 
 /*
  * Priority for NODE_RECLAIM. This determines the fraction of pages
_


* [RFC][PATCH 02/12] mm/vmscan: move RECLAIM* bits to uapi header
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 01/12] mm/vmscan: restore zone_reclaim_mode ABI Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-07  8:45   ` Christopher Lameter
  2020-10-06 20:51 ` [RFC][PATCH 03/12] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks Dave Hansen
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dave Hansen, ben.widawsky, rientjes, alex.shi, dwagner, tobin,
	cl, akpm, ying.huang, dan.j.williams, cai


From: Dave Hansen <dave.hansen@linux.intel.com>

It is currently not obvious that the RECLAIM_* bits are part of the
uapi since they are defined in vmscan.c.  Move them to a uapi header
to make it obvious.

This should have no functional impact.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Daniel Wagner <dwagner@suse.de>
Cc: "Tobin C. Harding" <tobin@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Daniel Wagner <dwagner@suse.de>

--

Note: This is not cc'd to stable.  It does not fix any bugs.
---

 b/include/uapi/linux/mempolicy.h |    7 +++++++
 b/mm/vmscan.c                    |    8 --------
 2 files changed, 7 insertions(+), 8 deletions(-)

diff -puN include/uapi/linux/mempolicy.h~mm-vmscan-move-RECLAIM-bits-to-uapi include/uapi/linux/mempolicy.h
--- a/include/uapi/linux/mempolicy.h~mm-vmscan-move-RECLAIM-bits-to-uapi	2020-10-06 13:39:21.720818440 -0700
+++ b/include/uapi/linux/mempolicy.h	2020-10-06 13:39:21.726818440 -0700
@@ -62,5 +62,12 @@ enum {
 #define MPOL_F_MOF	(1 << 3) /* this policy wants migrate on fault */
 #define MPOL_F_MORON	(1 << 4) /* Migrate On protnone Reference On Node */
 
+/*
+ * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
+ * ABI.  New bits are OK, but existing bits can never change.
+ */
+#define RECLAIM_ZONE	(1<<0)	/* Run shrink_inactive_list on the zone */
+#define RECLAIM_WRITE	(1<<1)	/* Writeout pages during reclaim */
+#define RECLAIM_UNMAP	(1<<2)	/* Unmap pages during reclaim */
 
 #endif /* _UAPI_LINUX_MEMPOLICY_H */
diff -puN mm/vmscan.c~mm-vmscan-move-RECLAIM-bits-to-uapi mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-move-RECLAIM-bits-to-uapi	2020-10-06 13:39:21.722818440 -0700
+++ b/mm/vmscan.c	2020-10-06 13:39:21.727818440 -0700
@@ -4084,14 +4084,6 @@ module_init(kswapd_init)
 int node_reclaim_mode __read_mostly;
 
 /*
- * These bit locations are exposed in the vm.zone_reclaim_mode sysctl
- * ABI.  New bits are OK, but existing bits can never change.
- */
-#define RECLAIM_ZONE  (1<<0)   /* Run shrink_inactive_list on the zone */
-#define RECLAIM_WRITE (1<<1)   /* Writeout pages during reclaim */
-#define RECLAIM_UNMAP (1<<2)   /* Unmap pages during reclaim */
-
-/*
  * Priority for NODE_RECLAIM. This determines the fraction of pages
  * of a node considered for each zone_reclaim. 4 scans 1/16th of
  * a zone.
_


* [RFC][PATCH 03/12] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 01/12] mm/vmscan: restore zone_reclaim_mode ABI Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 02/12] mm/vmscan: move RECLAIM* bits to uapi header Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-07  8:47   ` Christopher Lameter
  2020-10-06 20:51 ` [RFC][PATCH 04/12] mm/numa: node demotion data structure and lookup Dave Hansen
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dave Hansen, ben.widawsky, alex.shi, tobin, cl, akpm, ying.huang,
	dan.j.williams, cai, dwagner


From: Dave Hansen <dave.hansen@linux.intel.com>

RECLAIM_ZONE was assumed to be unused because it was never explicitly
used in the kernel.  However, there were a number of places where it
was checked implicitly by checking 'node_reclaim_mode' for a zero
value.

These zero checks are not great because it is not obvious what a zero
mode *means* in the code.  Replace them with a helper which makes it
more obvious: node_reclaim_enabled().

This helper also provides a handy place to explicitly check the
RECLAIM_ZONE bit itself.  Check it explicitly there to make it more
obvious where the bit can affect behavior.

This should have no functional impact.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Ben Widawsky <ben.widawsky@intel.com>
Cc: Alex Shi <alex.shi@linux.alibaba.com>
Cc: "Tobin C. Harding" <tobin@kernel.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Qian Cai <cai@lca.pw>
Cc: Daniel Wagner <dwagner@suse.de>

--

Note: This is not cc'd to stable.  It does not fix any bugs.
---

 b/include/linux/swap.h |    7 +++++++
 b/mm/khugepaged.c      |    2 +-
 b/mm/page_alloc.c      |    2 +-
 3 files changed, 9 insertions(+), 2 deletions(-)

diff -puN include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper include/linux/swap.h
--- a/include/linux/swap.h~mm-vmscan-node_reclaim_mode_helper	2020-10-06 13:39:22.850818437 -0700
+++ b/include/linux/swap.h	2020-10-06 13:39:22.859818437 -0700
@@ -12,6 +12,7 @@
 #include <linux/fs.h>
 #include <linux/atomic.h>
 #include <linux/page-flags.h>
+#include <uapi/linux/mempolicy.h>
 #include <asm/page.h>
 
 struct notifier_block;
@@ -381,6 +382,12 @@ extern int sysctl_min_slab_ratio;
 #define node_reclaim_mode 0
 #endif
 
+static inline bool node_reclaim_enabled(void)
+{
+	/* Is any node_reclaim_mode bit set? */
+	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
+}
+
 extern void check_move_unevictable_pages(struct pagevec *pvec);
 
 extern int kswapd_run(int nid);
diff -puN mm/khugepaged.c~mm-vmscan-node_reclaim_mode_helper mm/khugepaged.c
--- a/mm/khugepaged.c~mm-vmscan-node_reclaim_mode_helper	2020-10-06 13:39:22.852818437 -0700
+++ b/mm/khugepaged.c	2020-10-06 13:39:22.859818437 -0700
@@ -794,7 +794,7 @@ static bool khugepaged_scan_abort(int ni
 	 * If node_reclaim_mode is disabled, then no extra effort is made to
 	 * allocate memory locally.
 	 */
-	if (!node_reclaim_mode)
+	if (!node_reclaim_enabled())
 		return false;
 
 	/* If there is a count for this node already, it must be acceptable */
diff -puN mm/page_alloc.c~mm-vmscan-node_reclaim_mode_helper mm/page_alloc.c
--- a/mm/page_alloc.c~mm-vmscan-node_reclaim_mode_helper	2020-10-06 13:39:22.855818437 -0700
+++ b/mm/page_alloc.c	2020-10-06 13:39:22.862818437 -0700
@@ -3802,7 +3802,7 @@ retry:
 			if (alloc_flags & ALLOC_NO_WATERMARKS)
 				goto try_this_zone;
 
-			if (node_reclaim_mode == 0 ||
+			if (!node_reclaim_enabled() ||
 			    !zone_allows_reclaim(ac->preferred_zoneref->zone, zone))
 				continue;
 
_


* [RFC][PATCH 04/12] mm/numa: node demotion data structure and lookup
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (2 preceding siblings ...)
  2020-10-06 20:51 ` [RFC][PATCH 03/12] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 05/12] mm/numa: automatically generate node migration order Dave Hansen
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, yang.shi, rientjes, ying.huang, dan.j.williams


From: Dave Hansen <dave.hansen@linux.intel.com>

Prepare for the kernel to auto-migrate pages to other memory nodes
with a user-defined node migration table.  This allows creating a
single migration target for each NUMA node so the kernel can do NUMA
page migrations instead of simply reclaiming colder pages.  A node
with no target is a "terminal node", so reclaim acts normally there.
The migration target does not fundamentally _need_ to be a single node,
but this implementation starts there to limit complexity.

If you consider the migration path as a graph, cycles (loops) in the
graph are disallowed.  This avoids wasting resources by constantly
migrating (A->B, B->A, A->B ...).  The expectation is that cycles will
never be allowed.
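
As a purely hypothetical illustration (an assumption for this
changelog, not something the patch sets up), on a two-node machine
where node 0 is DRAM with CPUs and node 1 is CPU-less PMEM, the table
and lookups would end up as:

	/*
	 * node_demotion[0] == 1             (DRAM demotes to PMEM)
	 * node_demotion[1] == NUMA_NO_NODE  (PMEM is a terminal node)
	 *
	 * so next_demotion_node(0) returns 1 and
	 *    next_demotion_node(1) returns NUMA_NO_NODE.
	 */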

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>

--

changes in July 2020:
 - Remove loop from next_demotion_node() and get_online_mems().
   This means that the node returned by next_demotion_node()
   might now be offline, but the worst case is that the
   allocation fails.  That's fine since it is transient.
---

 b/mm/migrate.c |   16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff -puN mm/migrate.c~0006-node-Define-and-export-memory-migration-path mm/migrate.c
--- a/mm/migrate.c~0006-node-Define-and-export-memory-migration-path	2020-10-06 13:39:24.067818434 -0700
+++ b/mm/migrate.c	2020-10-06 13:39:24.071818434 -0700
@@ -1161,6 +1161,22 @@ out:
 	return rc;
 }
 
+static int node_demotion[MAX_NUMNODES] = {[0 ...  MAX_NUMNODES - 1] = NUMA_NO_NODE};
+
+/**
+ * next_demotion_node() - Get the next node in the demotion path
+ * @node: The starting node to lookup the next node
+ *
+ * @returns: node id for next memory node in the demotion path hierarchy
+ * from @node; NUMA_NO_NODE if @node is terminal.  This does not keep
+ * @node online or guarantee that it *continues* to be the next demotion
+ * target.
+ */
+int next_demotion_node(int node)
+{
+	return node_demotion[node];
+}
+
 /*
  * Obtain the lock on page, remove all ptes and migrate the page
  * to the newly allocated page in newpage.
_


* [RFC][PATCH 05/12] mm/numa: automatically generate node migration order
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (3 preceding siblings ...)
  2020-10-06 20:51 ` [RFC][PATCH 04/12] mm/numa: node demotion data structure and lookup Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 06/12] mm/migrate: update migration order during on hotplug events Dave Hansen
                   ` (8 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, yang.shi, rientjes, ying.huang, dan.j.williams


From: Dave Hansen <dave.hansen@linux.intel.com>

When memory fills up on a node, memory contents can be
automatically migrated to another node.  The biggest problems are
knowing when to migrate and where the migration should be
targeted.

The most straightforward way to generate the "to where" list
would be to follow the page allocator fallback lists.  Those
lists already tell us, when memory is full, where to look next.  It
would also be logical to move memory in that order.

But, the allocator fallback lists have a fatal flaw: most nodes
appear in all the lists.  This would potentially lead to
migration cycles (A->B, B->A, A->B, ...).

Instead of using the allocator fallback lists directly, keep a
separate node migration ordering.  But, reuse the same data used
to generate page allocator fallback in the first place:
find_next_best_node().

This means that the firmware data used to populate node distances
essentially dictates the ordering for now.  It should also be
architecture-neutral since all NUMA architectures have a working
find_next_best_node().
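
To make the ordering concrete, consider a hypothetical four-node box
(an assumption for illustration only): nodes 0 and 1 are DRAM with
CPUs, nodes 2 and 3 are CPU-less PMEM, with node 2 closest to node 0
and node 3 closest to node 1.  The passes implemented below would then
produce:

	/*
	 * Pass 1 starts from the nodes with CPUs, N_CPU = {0, 1}:
	 *	node_demotion[0] = 2, node_demotion[1] = 3
	 * Pass 2 visits the pass-1 targets, {2, 3}.  Every node is now
	 * in 'used_targets', so find_next_best_node() finds nothing:
	 *	node_demotion[2] = node_demotion[3] = NUMA_NO_NODE
	 */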

The protocol for node_demotion[] access and writing is not
standard.  It has no specific locking and is intended to be read
locklessly.  Readers must take care to avoid observing changes
that appear incoherent.  This was done so that node_demotion[]
locking has no chance of becoming a bottleneck on large systems
with lots of CPUs in direct reclaim.

This code is unused for now.  It will be called later in the
series.

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---

 b/mm/internal.h   |    1 
 b/mm/migrate.c    |  137 +++++++++++++++++++++++++++++++++++++++++++++++++++++-
 b/mm/page_alloc.c |    2 
 3 files changed, 138 insertions(+), 2 deletions(-)

diff -puN mm/internal.h~auto-setup-default-migration-path-from-firmware mm/internal.h
--- a/mm/internal.h~auto-setup-default-migration-path-from-firmware	2020-10-06 13:39:25.112818432 -0700
+++ b/mm/internal.h	2020-10-06 13:39:25.121818432 -0700
@@ -203,6 +203,7 @@ extern int user_min_free_kbytes;
 
 extern void zone_pcp_update(struct zone *zone);
 extern void zone_pcp_reset(struct zone *zone);
+extern int find_next_best_node(int node, nodemask_t *used_node_mask);
 
 #if defined CONFIG_COMPACTION || defined CONFIG_CMA
 
diff -puN mm/migrate.c~auto-setup-default-migration-path-from-firmware mm/migrate.c
--- a/mm/migrate.c~auto-setup-default-migration-path-from-firmware	2020-10-06 13:39:25.114818432 -0700
+++ b/mm/migrate.c	2020-10-06 13:39:25.122818432 -0700
@@ -1161,6 +1161,10 @@ out:
 	return rc;
 }
 
+/*
+ * Writes to this array occur without locking.  READ_ONCE()
+ * is recommended for readers to ensure consistent reads.
+ */
 static int node_demotion[MAX_NUMNODES] = {[0 ...  MAX_NUMNODES - 1] = NUMA_NO_NODE};
 
 /**
@@ -1174,7 +1178,13 @@ static int node_demotion[MAX_NUMNODES] =
  */
 int next_demotion_node(int node)
 {
-	return node_demotion[node];
+	/*
+	 * node_demotion[] is updated without excluding
+	 * this function from running.  READ_ONCE() avoids
+	 * reading multiple, inconsistent 'node' values
+	 * during an update.
+	 */
+	return READ_ONCE(node_demotion[node]);
 }
 
 /*
@@ -3112,3 +3122,128 @@ void migrate_vma_finalize(struct migrate
 }
 EXPORT_SYMBOL(migrate_vma_finalize);
 #endif /* CONFIG_DEVICE_PRIVATE */
+
+/* Disable reclaim-based migration. */
+static void disable_all_migrate_targets(void)
+{
+	int node;
+
+	for_each_online_node(node)
+		node_demotion[node] = NUMA_NO_NODE;
+}
+
+/*
+ * Find an automatic demotion target for 'node'.
+ * Failing here is OK.  It might just indicate
+ * being at the end of a chain.
+ */
+static int establish_migrate_target(int node, nodemask_t *used)
+{
+	int migration_target;
+
+	/*
+	 * Can not set a migration target on a
+	 * node with it already set.
+	 *
+	 * No need for READ_ONCE() here since this
+	 * in the write path for node_demotion[].
+	 * This should be the only thread writing.
+	 */
+	if (node_demotion[node] != NUMA_NO_NODE)
+		return NUMA_NO_NODE;
+
+	migration_target = find_next_best_node(node, used);
+	if (migration_target == NUMA_NO_NODE)
+		return NUMA_NO_NODE;
+
+	node_demotion[node] = migration_target;
+
+	return migration_target;
+}
+
+/*
+ * When memory fills up on a node, memory contents can be
+ * automatically migrated to another node instead of
+ * discarded at reclaim.
+ *
+ * Establish a "migration path" which will start at nodes
+ * with CPUs and will follow the priorities used to build the
+ * page allocator zonelists.
+ *
+ * The difference here is that cycles must be avoided.  If
+ * node0 migrates to node1, then neither node1, nor anything
+ * node1 migrates to can migrate to node0.
+ *
+ * This function can run simultaneously with readers of
+ * node_demotion[].  However, it can not run simultaneously
+ * with itself.  Exclusion is provided by memory hotplug events
+ * being single-threaded.
+ */
+void __set_migration_target_nodes(void)
+{
+	nodemask_t next_pass	= NODE_MASK_NONE;
+	nodemask_t this_pass	= NODE_MASK_NONE;
+	nodemask_t used_targets = NODE_MASK_NONE;
+	int node;
+
+	/*
+	 * Avoid any oddities like cycles that could occur
+	 * from changes in the topology.  This will leave
+	 * a momentary gap when migration is disabled.
+	 */
+	disable_all_migrate_targets();
+
+	/*
+	 * Ensure that the "disable" is visible across the system.
+	 * Readers will see either a combination of before+disable
+	 * state or disable+after.  They will never see before and
+	 * after state together.
+	 *
+	 * The before+after state together might have cycles and
+	 * could cause readers to do things like loop until this
+	 * function finishes.  This ensures they can only see a
+	 * single "bad" read and would, for instance, only loop
+	 * once.
+	 */
+	smp_wmb();
+
+	/*
+	 * Allocations go close to CPUs, first.  Assume that
+	 * the migration path starts at the nodes with CPUs.
+	 */
+	next_pass = node_states[N_CPU];
+again:
+	this_pass = next_pass;
+	next_pass = NODE_MASK_NONE;
+	/*
+	 * To avoid cycles in the migration "graph", ensure
+	 * that migration sources are not future targets by
+	 * setting them in 'used_targets'.  Do this only
+	 * once per pass so that multiple source nodes can
+	 * share a target node.
+	 *
+	 * 'used_targets' will become unavailable in future
+	 * passes.  This limits some opportunities for
+ * multiple source nodes to share a destination.
+	 */
+	nodes_or(used_targets, used_targets, this_pass);
+	for_each_node_mask(node, this_pass) {
+		int target_node = establish_migrate_target(node, &used_targets);
+
+		if (target_node == NUMA_NO_NODE)
+			continue;
+
+		/* Visit targets from this pass in the next pass: */
+		node_set(target_node, next_pass);
+	}
+	/* Is another pass necessary? */
+	if (!nodes_empty(next_pass))
+		goto again;
+}
+
+void set_migration_target_nodes(void)
+{
+	get_online_mems();
+	__set_migration_target_nodes();
+	put_online_mems();
+}
diff -puN mm/page_alloc.c~auto-setup-default-migration-path-from-firmware mm/page_alloc.c
--- a/mm/page_alloc.c~auto-setup-default-migration-path-from-firmware	2020-10-06 13:39:25.117818432 -0700
+++ b/mm/page_alloc.c	2020-10-06 13:39:25.124818432 -0700
@@ -5632,7 +5632,7 @@ static int node_load[MAX_NUMNODES];
  *
  * Return: node id of the found node or %NUMA_NO_NODE if no node is found.
  */
-static int find_next_best_node(int node, nodemask_t *used_node_mask)
+int find_next_best_node(int node, nodemask_t *used_node_mask)
 {
 	int n, val;
 	int min_val = INT_MAX;
_


* [RFC][PATCH 06/12] mm/migrate: update migration order during on hotplug events
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (4 preceding siblings ...)
  2020-10-06 20:51 ` [RFC][PATCH 05/12] mm/numa: automatically generate node migration order Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 07/12] mm/migrate: make migrate_pages() return nr_succeeded Dave Hansen
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, yang.shi, rientjes, ying.huang, dan.j.williams


From: Dave Hansen <dave.hansen@linux.intel.com>

Reclaim-based migration is attempting to optimize data placement in
memory based on the system topology.  If the system changes, so must
the migration ordering.

The implementation here is pretty simple and entirely unoptimized.  On
any memory or CPU hotplug events, assume that a node was added or
removed and recalculate all migration targets.  This ensures that the
node_demotion[] array is always ready to be used in case the new
reclaim mode is enabled.

This recalculation is far from optimal, most glaringly that it does
not even attempt to figure out if nodes are actually coming or going.
But, given the expected paucity of hotplug events, this should be
fine.
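
For a memory offline, the sequence of events handled below then looks
like this (a condensed view of the notifier cases in the diff, not new
behavior):

	MEM_GOING_OFFLINE  -> disable_all_migrate_targets()
	                      (no stale targets while the node goes away)
	MEM_OFFLINE        -> __set_migration_target_nodes()
	                      (targets rebuilt without the offlined node)
	MEM_CANCEL_OFFLINE -> __set_migration_target_nodes()
	                      (offline aborted; targets restored)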

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---

 b/mm/migrate.c |   93 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)

diff -puN mm/migrate.c~enable-numa-demotion mm/migrate.c
--- a/mm/migrate.c~enable-numa-demotion	2020-10-06 13:39:26.342818429 -0700
+++ b/mm/migrate.c	2020-10-06 13:39:26.346818429 -0700
@@ -49,6 +49,7 @@
 #include <linux/sched/mm.h>
 #include <linux/ptrace.h>
 #include <linux/oom.h>
+#include <linux/memory.h>
 
 #include <asm/tlbflush.h>
 
@@ -3241,9 +3242,101 @@ again:
 		goto again;
 }
 
+/*
+ * For callers that do not hold get_online_mems() already.
+ */
 void set_migration_target_nodes(void)
 {
 	get_online_mems();
 	__set_migration_target_nodes();
 	put_online_mems();
 }
+
+/*
+ * React to hotplug events that might affect the migration targets
+ * like events that online or offline NUMA nodes.
+ *
+ * The ordering is also currently dependent on which nodes have
+ * CPUs.  That means we need CPU on/offline notification too.
+ */
+static int migration_online_cpu(unsigned int cpu)
+{
+	set_migration_target_nodes();
+	return 0;
+}
+
+static int migration_offline_cpu(unsigned int cpu)
+{
+	set_migration_target_nodes();
+	return 0;
+}
+
+/*
+ * This leaves migrate-on-reclaim transiently disabled
+ * between the MEM_GOING_OFFLINE and MEM_OFFLINE events.
+ * This runs whether reclaim-based migration is enabled or not.
+ * This ensures that the user can turn reclaim-based
+ * migration on at any time without needing to recalculate
+ * migration targets.
+ *
+ * These callbacks already hold get_online_mems().  That
+ * is why __set_migration_target_nodes() can be used as
+ * opposed to set_migration_target_nodes().
+ */
+#if defined(CONFIG_MEMORY_HOTPLUG)
+static int __meminit migrate_on_reclaim_callback(struct notifier_block *self,
+						 unsigned long action, void *arg)
+{
+	switch (action) {
+	case MEM_GOING_OFFLINE:
+		/*
+		 * Make sure there are not transient states where
+		 * an offline node is a migration target.  This
+		 * will leave migration disabled until the offline
+		 * completes and the MEM_OFFLINE case below runs.
+		 */
+		disable_all_migrate_targets();
+		break;
+	case MEM_OFFLINE:
+	case MEM_ONLINE:
+		/*
+		 * Recalculate the target nodes once the node
+		 * reaches its final state (online or offline).
+		 */
+		__set_migration_target_nodes();
+		break;
+	case MEM_CANCEL_OFFLINE:
+		/*
+		 * MEM_GOING_OFFLINE disabled all the migration
+		 * targets.  Reenable them.
+		 */
+		__set_migration_target_nodes();
+		break;
+	case MEM_GOING_ONLINE:
+	case MEM_CANCEL_ONLINE:
+		break;
+	}
+
+	return notifier_from_errno(0);
+}
+
+static int __init migrate_on_reclaim_init(void)
+{
+	int ret;
+
+	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "migrate on reclaim",
+				migration_online_cpu,
+				migration_offline_cpu);
+	/*
+	 * In the unlikely case that this fails, the automatic
+	 * migration targets may become suboptimal for nodes
+	 * where N_CPU changes.  With such a small impact in a
+	 * rare case, do not bother trying to do anything special.
+	 */
+	WARN_ON(ret < 0);
+
+	hotplug_memory_notifier(migrate_on_reclaim_callback, 100);
+	return 0;
+}
+late_initcall(migrate_on_reclaim_init);
+#endif /* CONFIG_MEMORY_HOTPLUG */
_


* [RFC][PATCH 07/12] mm/migrate: make migrate_pages() return nr_succeeded
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (5 preceding siblings ...)
  2020-10-06 20:51 ` [RFC][PATCH 06/12] mm/migrate: update migration order during on hotplug events Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 08/12] mm/migrate: demote pages during reclaim Dave Hansen
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, yang.shi, rientjes, ying.huang, dan.j.williams


From: Yang Shi <yang.shi@linux.alibaba.com>

migrate_pages() returns the number of pages that were not migrated, or
an error code.  When an error code is returned, there is no way to know
how many pages were migrated or not migrated.

In a following patch, migrate_pages() is used to demote pages to a PMEM
node.  We need to account for how many pages are reclaimed (demoted)
since page reclaim behavior depends on this.  Add an *nr_succeeded
parameter to make migrate_pages() return how many pages were demoted
successfully in all cases.
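
A minimal sketch of a caller after this change (mirroring the call
sites updated below; the migration_target_control setup is elided):

	unsigned int nr_succeeded = 0;
	int err;

	err = migrate_pages(&pagelist, alloc_migration_target, NULL,
			    (unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL,
			    &nr_succeeded);
	/*
	 * Even when err is nonzero (pages left unmigrated or an error
	 * code), nr_succeeded still says how many pages actually moved.
	 */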

Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---

 b/include/linux/migrate.h |    5 +++--
 b/mm/compaction.c         |    3 ++-
 b/mm/gup.c                |    4 +++-
 b/mm/memory-failure.c     |    7 +++++--
 b/mm/memory_hotplug.c     |    4 +++-
 b/mm/mempolicy.c          |    7 +++++--
 b/mm/migrate.c            |   16 +++++++++-------
 b/mm/page_alloc.c         |    9 ++++++---
 8 files changed, 36 insertions(+), 19 deletions(-)

diff -puN include/linux/migrate.h~migrate_pages-add-success-return include/linux/migrate.h
--- a/include/linux/migrate.h~migrate_pages-add-success-return	2020-10-06 13:39:27.415818426 -0700
+++ b/include/linux/migrate.h	2020-10-06 13:39:27.439818426 -0700
@@ -40,7 +40,8 @@ extern int migrate_page(struct address_s
 			struct page *newpage, struct page *page,
 			enum migrate_mode mode);
 extern int migrate_pages(struct list_head *l, new_page_t new, free_page_t free,
-		unsigned long private, enum migrate_mode mode, int reason);
+		unsigned long private, enum migrate_mode mode, int reason,
+		unsigned int *nr_succeeded);
 extern struct page *alloc_migration_target(struct page *page, unsigned long private);
 extern int isolate_movable_page(struct page *page, isolate_mode_t mode);
 extern void putback_movable_page(struct page *page);
@@ -58,7 +59,7 @@ extern int migrate_page_move_mapping(str
 static inline void putback_movable_pages(struct list_head *l) {}
 static inline int migrate_pages(struct list_head *l, new_page_t new,
 		free_page_t free, unsigned long private, enum migrate_mode mode,
-		int reason)
+		int reason, unsigned int *nr_succeeded)
 	{ return -ENOSYS; }
 static inline struct page *alloc_migration_target(struct page *page,
 		unsigned long private)
diff -puN mm/compaction.c~migrate_pages-add-success-return mm/compaction.c
--- a/mm/compaction.c~migrate_pages-add-success-return	2020-10-06 13:39:27.417818426 -0700
+++ b/mm/compaction.c	2020-10-06 13:39:27.440818426 -0700
@@ -2196,6 +2196,7 @@ compact_zone(struct compact_control *cc,
 	unsigned long last_migrated_pfn;
 	const bool sync = cc->mode != MIGRATE_ASYNC;
 	bool update_cached;
+	unsigned int nr_succeeded = 0;
 
 	/*
 	 * These counters track activities during zone compaction.  Initialize
@@ -2314,7 +2315,7 @@ compact_zone(struct compact_control *cc,
 
 		err = migrate_pages(&cc->migratepages, compaction_alloc,
 				compaction_free, (unsigned long)cc, cc->mode,
-				MR_COMPACTION);
+				MR_COMPACTION, &nr_succeeded);
 
 		trace_mm_compaction_migratepages(cc->nr_migratepages, err,
 							&cc->migratepages);
diff -puN mm/gup.c~migrate_pages-add-success-return mm/gup.c
--- a/mm/gup.c~migrate_pages-add-success-return	2020-10-06 13:39:27.419818426 -0700
+++ b/mm/gup.c	2020-10-06 13:39:27.441818426 -0700
@@ -1586,6 +1586,7 @@ static long check_and_migrate_cma_pages(
 	unsigned long step;
 	bool drain_allow = true;
 	bool migrate_allow = true;
+	unsigned int nr_succeeded = 0;
 	LIST_HEAD(cma_page_list);
 	long ret = nr_pages;
 	struct migration_target_control mtc = {
@@ -1638,7 +1639,8 @@ check_again:
 			put_page(pages[i]);
 
 		if (migrate_pages(&cma_page_list, alloc_migration_target, NULL,
-			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE)) {
+			(unsigned long)&mtc, MIGRATE_SYNC, MR_CONTIG_RANGE,
+				  &nr_succeeded)) {
 			/*
 			 * some of the pages failed migration. Do get_user_pages
 			 * without migration.
diff -puN mm/memory-failure.c~migrate_pages-add-success-return mm/memory-failure.c
--- a/mm/memory-failure.c~migrate_pages-add-success-return	2020-10-06 13:39:27.421818426 -0700
+++ b/mm/memory-failure.c	2020-10-06 13:39:27.441818426 -0700
@@ -1724,6 +1724,7 @@ static int soft_offline_huge_page(struct
 	int ret;
 	unsigned long pfn = page_to_pfn(page);
 	struct page *hpage = compound_head(page);
+	unsigned int nr_succeeded = 0;
 	LIST_HEAD(pagelist);
 
 	/*
@@ -1751,7 +1752,7 @@ static int soft_offline_huge_page(struct
 	}
 
 	ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
-				MIGRATE_SYNC, MR_MEMORY_FAILURE);
+				MIGRATE_SYNC, MR_MEMORY_FAILURE, &nr_succeeded);
 	if (ret) {
 		pr_info("soft offline: %#lx: hugepage migration failed %d, type %lx (%pGp)\n",
 			pfn, ret, page->flags, &page->flags);
@@ -1782,6 +1783,7 @@ static int __soft_offline_page(struct pa
 {
 	int ret;
 	unsigned long pfn = page_to_pfn(page);
+	unsigned int nr_succeeded = 0;
 
 	/*
 	 * Check PageHWPoison again inside page lock because PageHWPoison
@@ -1841,7 +1843,8 @@ static int __soft_offline_page(struct pa
 						page_is_file_lru(page));
 		list_add(&page->lru, &pagelist);
 		ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
-					MIGRATE_SYNC, MR_MEMORY_FAILURE);
+					MIGRATE_SYNC, MR_MEMORY_FAILURE,
+					&nr_succeeded);
 		if (ret) {
 			if (!list_empty(&pagelist))
 				putback_movable_pages(&pagelist);
diff -puN mm/memory_hotplug.c~migrate_pages-add-success-return mm/memory_hotplug.c
--- a/mm/memory_hotplug.c~migrate_pages-add-success-return	2020-10-06 13:39:27.428818426 -0700
+++ b/mm/memory_hotplug.c	2020-10-06 13:39:27.442818426 -0700
@@ -1301,6 +1301,7 @@ do_migrate_range(unsigned long start_pfn
 	unsigned long pfn;
 	struct page *page, *head;
 	int ret = 0;
+	unsigned int nr_succeeded = 0;
 	LIST_HEAD(source);
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
@@ -1356,7 +1357,8 @@ do_migrate_range(unsigned long start_pfn
 	if (!list_empty(&source)) {
 		/* Allocate a new page from the nearest neighbor node */
 		ret = migrate_pages(&source, new_node_page, NULL, 0,
-					MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
+					MIGRATE_SYNC, MR_MEMORY_HOTPLUG,
+					&nr_succeeded);
 		if (ret) {
 			list_for_each_entry(page, &source, lru) {
 				pr_warn("migrating pfn %lx failed ret:%d ",
diff -puN mm/mempolicy.c~migrate_pages-add-success-return mm/mempolicy.c
--- a/mm/mempolicy.c~migrate_pages-add-success-return	2020-10-06 13:39:27.430818426 -0700
+++ b/mm/mempolicy.c	2020-10-06 13:39:27.443818426 -0700
@@ -1072,6 +1072,7 @@ static int migrate_page_add(struct page
 static int migrate_to_node(struct mm_struct *mm, int source, int dest,
 			   int flags)
 {
+	unsigned int nr_succeeded = 0;
 	nodemask_t nmask;
 	LIST_HEAD(pagelist);
 	int err = 0;
@@ -1094,7 +1095,7 @@ static int migrate_to_node(struct mm_str
 
 	if (!list_empty(&pagelist)) {
 		err = migrate_pages(&pagelist, alloc_migration_target, NULL,
-				(unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL);
+				(unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL, &nr_succeeded);
 		if (err)
 			putback_movable_pages(&pagelist);
 	}
@@ -1271,6 +1272,7 @@ static long do_mbind(unsigned long start
 		     nodemask_t *nmask, unsigned long flags)
 {
 	struct mm_struct *mm = current->mm;
+	unsigned int nr_succeeded = 0;
 	struct mempolicy *new;
 	unsigned long end;
 	int err;
@@ -1352,7 +1354,8 @@ static long do_mbind(unsigned long start
 		if (!list_empty(&pagelist)) {
 			WARN_ON_ONCE(flags & MPOL_MF_LAZY);
 			nr_failed = migrate_pages(&pagelist, new_page, NULL,
-				start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND);
+				start, MIGRATE_SYNC, MR_MEMPOLICY_MBIND,
+				&nr_succeeded);
 			if (nr_failed)
 				putback_movable_pages(&pagelist);
 		}
diff -puN mm/migrate.c~migrate_pages-add-success-return mm/migrate.c
--- a/mm/migrate.c~migrate_pages-add-success-return	2020-10-06 13:39:27.432818426 -0700
+++ b/mm/migrate.c	2020-10-06 13:39:27.444818426 -0700
@@ -1433,6 +1433,7 @@ out:
  * @mode:		The migration mode that specifies the constraints for
  *			page migration, if any.
  * @reason:		The reason for page migration.
+ * @nr_succeeded:	The number of pages migrated successfully.
  *
  * The function returns after 10 attempts or if no pages are movable any more
  * because the list has become empty or no retryable pages exist any more.
@@ -1443,12 +1444,11 @@ out:
  */
 int migrate_pages(struct list_head *from, new_page_t get_new_page,
 		free_page_t put_new_page, unsigned long private,
-		enum migrate_mode mode, int reason)
+		enum migrate_mode mode, int reason, unsigned int *nr_succeeded)
 {
 	int retry = 1;
 	int thp_retry = 1;
 	int nr_failed = 0;
-	int nr_succeeded = 0;
 	int nr_thp_succeeded = 0;
 	int nr_thp_failed = 0;
 	int nr_thp_split = 0;
@@ -1529,7 +1529,7 @@ retry:
 					nr_succeeded += nr_subpages;
 					break;
 				}
-				nr_succeeded++;
+				(*nr_succeeded)++;
 				break;
 			default:
 				/*
@@ -1552,12 +1552,12 @@ retry:
 	nr_thp_failed += thp_retry;
 	rc = nr_failed;
 out:
-	count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
+	count_vm_events(PGMIGRATE_SUCCESS, *nr_succeeded);
 	count_vm_events(PGMIGRATE_FAIL, nr_failed);
 	count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
 	count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed);
 	count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split);
-	trace_mm_migrate_pages(nr_succeeded, nr_failed, nr_thp_succeeded,
+	trace_mm_migrate_pages(*nr_succeeded, nr_failed, nr_thp_succeeded,
 			       nr_thp_failed, nr_thp_split, mode, reason);
 
 	if (!swapwrite)
@@ -1625,6 +1625,7 @@ static int store_status(int __user *stat
 static int do_move_pages_to_node(struct mm_struct *mm,
 		struct list_head *pagelist, int node)
 {
+	unsigned int nr_succeeded = 0;
 	int err;
 	struct migration_target_control mtc = {
 		.nid = node,
@@ -1632,7 +1633,7 @@ static int do_move_pages_to_node(struct
 	};
 
 	err = migrate_pages(pagelist, alloc_migration_target, NULL,
-			(unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL);
+			(unsigned long)&mtc, MIGRATE_SYNC, MR_SYSCALL, &nr_succeeded);
 	if (err)
 		putback_movable_pages(pagelist);
 	return err;
@@ -2090,6 +2091,7 @@ int migrate_misplaced_page(struct page *
 	pg_data_t *pgdat = NODE_DATA(node);
 	int isolated;
 	int nr_remaining;
+	unsigned int nr_succeeded = 0;
 	LIST_HEAD(migratepages);
 
 	/*
@@ -2114,7 +2116,7 @@ int migrate_misplaced_page(struct page *
 	list_add(&page->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_page,
 				     NULL, node, MIGRATE_ASYNC,
-				     MR_NUMA_MISPLACED);
+				     MR_NUMA_MISPLACED, &nr_succeeded);
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
 			list_del(&page->lru);
diff -puN mm/page_alloc.c~migrate_pages-add-success-return mm/page_alloc.c
--- a/mm/page_alloc.c~migrate_pages-add-success-return	2020-10-06 13:39:27.435818426 -0700
+++ b/mm/page_alloc.c	2020-10-06 13:39:27.446818426 -0700
@@ -8346,7 +8346,8 @@ static unsigned long pfn_max_align_up(un
 
 /* [start, end) must belong to a single zone. */
 static int __alloc_contig_migrate_range(struct compact_control *cc,
-					unsigned long start, unsigned long end)
+					unsigned long start, unsigned long end,
+					unsigned int *nr_succeeded)
 {
 	/* This function is based on compact_zone() from compaction.c. */
 	unsigned int nr_reclaimed;
@@ -8384,7 +8385,8 @@ static int __alloc_contig_migrate_range(
 		cc->nr_migratepages -= nr_reclaimed;
 
 		ret = migrate_pages(&cc->migratepages, alloc_migration_target,
-				NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE);
+				NULL, (unsigned long)&mtc, cc->mode, MR_CONTIG_RANGE,
+				nr_succeeded);
 	}
 	if (ret < 0) {
 		putback_movable_pages(&cc->migratepages);
@@ -8420,6 +8422,7 @@ int alloc_contig_range(unsigned long sta
 	unsigned long outer_start, outer_end;
 	unsigned int order;
 	int ret = 0;
+	unsigned int nr_succeeded = 0;
 
 	struct compact_control cc = {
 		.nr_migratepages = 0,
@@ -8472,7 +8475,7 @@ int alloc_contig_range(unsigned long sta
 	 * allocated.  So, if we fall through be sure to clear ret so that
 	 * -EBUSY is not accidentally used or returned to caller.
 	 */
-	ret = __alloc_contig_migrate_range(&cc, start, end);
+	ret = __alloc_contig_migrate_range(&cc, start, end, &nr_succeeded);
 	if (ret && ret != -EBUSY)
 		goto done;
 	ret =0;
_


* [RFC][PATCH 08/12] mm/migrate: demote pages during reclaim
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (6 preceding siblings ...)
  2020-10-06 20:51 ` [RFC][PATCH 07/12] mm/migrate: make migrate_pages() return nr_succeeded Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 09/12] mm/vmscan: add page demotion counter Dave Hansen
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, yang.shi, rientjes, ying.huang, dan.j.williams


From: Dave Hansen <dave.hansen@linux.intel.com>

This is mostly derived from a patch from Yang Shi:

	https://lore.kernel.org/linux-mm/1560468577-101178-10-git-send-email-yang.shi@linux.alibaba.com/

Add code to the reclaim path (shrink_page_list()) to "demote" data
to another NUMA node instead of discarding the data.  This always
avoids the cost of I/O needed to read the page back in and sometimes
avoids the writeout cost when the page is dirty.

A second pass through shrink_page_list() will be made if any demotions
fail.  This essentially falls back to normal reclaim behavior in the
case that demotions fail.  Previous versions of this patch may have
simply failed to reclaim pages which were eligible for demotion but
were unable to be demoted in practice.
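
A simplified sketch of the resulting shrink_page_list() flow (condensed
from the hunks below, not additional code):

	retry:
		/* main loop: defer demotion candidates instead of reclaiming */
		if (do_demote_pass && migrate_demote_page_ok(page, sc)) {
			list_add(&page->lru, &demote_pages);
			unlock_page(page);
			continue;
		}
		/* ... normal reclaim for everything else ... */

	/* after the loop: */
	nr_reclaimed += demote_page_list(&demote_pages, pgdat, sc);
	if (!list_empty(&demote_pages)) {
		/* demotion failed; retry these pages as normal reclaim */
		list_splice_init(&demote_pages, page_list);
		do_demote_pass = false;
		goto retry;
	}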

Note: This just adds the start of infrastructure for migration. It is
actually disabled next to the FIXME in migrate_demote_page_ok().

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>

--

changes from 20200730:
 * Add another pass through shrink_page_list() when demotion
   fails.
---

 b/include/linux/migrate.h |    2 
 b/mm/vmscan.c             |   97 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 99 insertions(+)

diff -puN include/linux/migrate.h~demote-with-migrate_pages include/linux/migrate.h
--- a/include/linux/migrate.h~demote-with-migrate_pages	2020-10-06 13:39:29.059818422 -0700
+++ b/include/linux/migrate.h	2020-10-06 13:39:29.067818422 -0700
@@ -27,6 +27,7 @@ enum migrate_reason {
 	MR_MEMPOLICY_MBIND,
 	MR_NUMA_MISPLACED,
 	MR_CONTIG_RANGE,
+	MR_DEMOTION,
 	MR_TYPES
 };
 
@@ -196,6 +197,7 @@ struct migrate_vma {
 int migrate_vma_setup(struct migrate_vma *args);
 void migrate_vma_pages(struct migrate_vma *migrate);
 void migrate_vma_finalize(struct migrate_vma *migrate);
+int next_demotion_node(int node);
 
 #endif /* CONFIG_MIGRATION */
 
diff -puN mm/vmscan.c~demote-with-migrate_pages mm/vmscan.c
--- a/mm/vmscan.c~demote-with-migrate_pages	2020-10-06 13:39:29.061818422 -0700
+++ b/mm/vmscan.c	2020-10-06 13:39:29.068818422 -0700
@@ -43,6 +43,7 @@
 #include <linux/kthread.h>
 #include <linux/freezer.h>
 #include <linux/memcontrol.h>
+#include <linux/migrate.h>
 #include <linux/delayacct.h>
 #include <linux/sysctl.h>
 #include <linux/oom.h>
@@ -1034,6 +1035,24 @@ static enum page_references page_check_r
 	return PAGEREF_RECLAIM;
 }
 
+bool migrate_demote_page_ok(struct page *page, struct scan_control *sc)
+{
+	int next_nid = next_demotion_node(page_to_nid(page));
+
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG_ON_PAGE(PageHuge(page), page);
+	VM_BUG_ON_PAGE(PageLRU(page), page);
+
+	if (next_nid == NUMA_NO_NODE)
+		return false;
+	if (PageTransHuge(page) && !thp_migration_supported())
+		return false;
+
+	// FIXME: actually enable this later in the series
+	return false;
+}
+
+
 /* Check if a page is dirty or under writeback */
 static void page_check_dirty_writeback(struct page *page,
 				       bool *dirty, bool *writeback)
@@ -1064,6 +1083,60 @@ static void page_check_dirty_writeback(s
 		mapping->a_ops->is_dirty_writeback(page, dirty, writeback);
 }
 
+static struct page *alloc_demote_page(struct page *page, unsigned long node)
+{
+	/*
+	 * Try to fail quickly if memory on the target node is not
+	 * available.  Leaving out __GFP_IO and __GFP_FS helps with
+	 * this.  If the destination node is full, we want kswapd to
+	 * run there so that its pages will get reclaimed and future
+	 * migration attempts may succeed.
+	 */
+	gfp_t flags = (__GFP_HIGHMEM | __GFP_MOVABLE | __GFP_NORETRY |
+		       __GFP_NOMEMALLOC | __GFP_NOWARN | __GFP_THISNODE |
+		       __GFP_KSWAPD_RECLAIM);
+	/* HugeTLB pages should not be on the LRU */
+	WARN_ON_ONCE(PageHuge(page));
+
+	if (PageTransHuge(page)) {
+		struct page *thp;
+
+		flags |= __GFP_COMP;
+
+		thp = alloc_pages_node(node, flags, HPAGE_PMD_ORDER);
+		if (!thp)
+			return NULL;
+		prep_transhuge_page(thp);
+		return thp;
+	}
+
+	return __alloc_pages_node(node, flags, 0);
+}
+
+/*
+ * Take pages on @demote_pages and attempt to demote them to
+ * another node.  Pages which are not demoted are left on
+ * @demote_pages.
+ */
+static unsigned int demote_page_list(struct list_head *demote_pages,
+				     struct pglist_data *pgdat,
+				     struct scan_control *sc)
+{
+	int target_nid = next_demotion_node(pgdat->node_id);
+	unsigned int nr_succeeded = 0;
+	int err;
+
+	if (list_empty(demote_pages))
+		return 0;
+
+	/* Demotion ignores all cpuset and mempolicy settings */
+	err = migrate_pages(demote_pages, alloc_demote_page, NULL,
+			    target_nid, MIGRATE_ASYNC, MR_DEMOTION,
+			    &nr_succeeded);
+
+	return nr_succeeded;
+}
+
 /*
  * shrink_page_list() returns the number of reclaimed pages
  */
@@ -1076,12 +1149,15 @@ static unsigned int shrink_page_list(str
 {
 	LIST_HEAD(ret_pages);
 	LIST_HEAD(free_pages);
+	LIST_HEAD(demote_pages);
 	unsigned int nr_reclaimed = 0;
 	unsigned int pgactivate = 0;
+	bool do_demote_pass = true;
 
 	memset(stat, 0, sizeof(*stat));
 	cond_resched();
 
+retry:
 	while (!list_empty(page_list)) {
 		struct address_space *mapping;
 		struct page *page;
@@ -1231,6 +1307,16 @@ static unsigned int shrink_page_list(str
 		}
 
 		/*
+		 * Before reclaiming the page, try to relocate
+		 * its contents to another node.
+		 */
+		if (do_demote_pass && migrate_demote_page_ok(page, sc)) {
+			list_add(&page->lru, &demote_pages);
+			unlock_page(page);
+			continue;
+		}
+
+		/*
 		 * Anonymous process memory has backing store?
 		 * Try to allocate it some swap space here.
 		 * Lazyfree page could be freed directly
@@ -1477,6 +1563,17 @@ keep:
 		list_add(&page->lru, &ret_pages);
 		VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
 	}
+	/* 'page_list' is always empty here */
+
+	/* Migrate pages selected for demotion */
+	nr_reclaimed += demote_page_list(&demote_pages, pgdat, sc);
+	/* Pages that could not be demoted are still in @demote_pages */
+	if (!list_empty(&demote_pages)) {
+		/* Pages which failed to be demoted go back on @page_list for retry: */
+		list_splice_init(&demote_pages, page_list);
+		do_demote_pass = false;
+		goto retry;
+	}
 
 	pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
 
_


* [RFC][PATCH 09/12] mm/vmscan: add page demotion counter
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (7 preceding siblings ...)
  2020-10-06 20:51 ` [RFC][PATCH 08/12] mm/migrate: demote pages during reclaim Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 10/12] mm/vmscan: Consider anonymous pages without swap Dave Hansen
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, yang.shi, rientjes, ying.huang, dan.j.williams


From: Yang Shi <yang.shi@linux.alibaba.com>

Account the number of demoted pages into reclaim_state->nr_demoted.

Add pgdemote_kswapd and pgdemote_direct VM counters shown in
/proc/vmstat.
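
For example, /proc/vmstat would grow two new lines like these (the
counts are made up for illustration):

	pgdemote_kswapd 12409
	pgdemote_direct 337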

[ daveh:
   - __count_vm_events() a bit, and made them look at the THP
     size directly rather than getting data from migrate_pages()
]

Signed-off-by: Yang Shi <yang.shi@linux.alibaba.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---

 b/include/linux/vm_event_item.h |    2 ++
 b/mm/vmscan.c                   |    6 ++++++
 b/mm/vmstat.c                   |    2 ++
 3 files changed, 10 insertions(+)

diff -puN include/linux/vm_event_item.h~mm-vmscan-add-page-demotion-counter include/linux/vm_event_item.h
--- a/include/linux/vm_event_item.h~mm-vmscan-add-page-demotion-counter	2020-10-06 13:39:30.204818419 -0700
+++ b/include/linux/vm_event_item.h	2020-10-06 13:39:30.212818419 -0700
@@ -33,6 +33,8 @@ enum vm_event_item { PGPGIN, PGPGOUT, PS
 		PGREUSE,
 		PGSTEAL_KSWAPD,
 		PGSTEAL_DIRECT,
+		PGDEMOTE_KSWAPD,
+		PGDEMOTE_DIRECT,
 		PGSCAN_KSWAPD,
 		PGSCAN_DIRECT,
 		PGSCAN_DIRECT_THROTTLE,
diff -puN mm/vmscan.c~mm-vmscan-add-page-demotion-counter mm/vmscan.c
--- a/mm/vmscan.c~mm-vmscan-add-page-demotion-counter	2020-10-06 13:39:30.206818419 -0700
+++ b/mm/vmscan.c	2020-10-06 13:39:30.213818419 -0700
@@ -147,6 +147,7 @@ struct scan_control {
 		unsigned int immediate;
 		unsigned int file_taken;
 		unsigned int taken;
+		unsigned int demoted;
 	} nr;
 
 	/* for recording the reclaimed slab by now */
@@ -1134,6 +1135,11 @@ static unsigned int demote_page_list(str
 			    target_nid, MIGRATE_ASYNC, MR_DEMOTION,
 			    &nr_succeeded);
 
+	if (current_is_kswapd())
+		__count_vm_events(PGDEMOTE_KSWAPD, nr_succeeded);
+	else
+		__count_vm_events(PGDEMOTE_DIRECT, nr_succeeded);
+
 	return nr_succeeded;
 }
 
diff -puN mm/vmstat.c~mm-vmscan-add-page-demotion-counter mm/vmstat.c
--- a/mm/vmstat.c~mm-vmscan-add-page-demotion-counter	2020-10-06 13:39:30.208818419 -0700
+++ b/mm/vmstat.c	2020-10-06 13:39:30.214818419 -0700
@@ -1244,6 +1244,8 @@ const char * const vmstat_text[] = {
 	"pgreuse",
 	"pgsteal_kswapd",
 	"pgsteal_direct",
+	"pgdemote_kswapd",
+	"pgdemote_direct",
 	"pgscan_kswapd",
 	"pgscan_direct",
 	"pgscan_direct_throttle",
_


* [RFC][PATCH 10/12] mm/vmscan: Consider anonymous pages without swap
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (8 preceding siblings ...)
  2020-10-06 20:51 ` [RFC][PATCH 09/12] mm/vmscan: add page demotion counter Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 11/12] mm/vmscan: never demote for memcg reclaim Dave Hansen
                   ` (3 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel
  Cc: Dave Hansen, kbusch, vishal.l.verma, yang.shi, rientjes,
	ying.huang, dan.j.williams


From: Keith Busch <kbusch@kernel.org>

Age and reclaim anonymous pages if a migration path is available.  The
node then has recourse for inactive anonymous pages beyond swap.

#Signed-off-by: Keith Busch <keith.busch@intel.com>
Cc: Keith Busch <kbusch@kernel.org>
[vishal: fixup the migration->demotion rename]
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>

--

Changes from Dave 06/2020:
 * rename reclaim_anon_pages()->can_reclaim_anon_pages()

Note: Keith's Intel SoB is commented out because he is no
longer at Intel and his @intel.com mail will bounce.
---

 b/include/linux/node.h |    9 +++++++++
 b/mm/vmscan.c          |   33 ++++++++++++++++++++++++++++-----
 2 files changed, 37 insertions(+), 5 deletions(-)

diff -puN include/linux/node.h~0009-mm-vmscan-Consider-anonymous-pages-without-swap include/linux/node.h
--- a/include/linux/node.h~0009-mm-vmscan-Consider-anonymous-pages-without-swap	2020-10-06 13:39:31.421818416 -0700
+++ b/include/linux/node.h	2020-10-06 13:39:31.427818416 -0700
@@ -180,4 +180,13 @@ static inline void register_hugetlbfs_wi
 
 #define to_node(device) container_of(device, struct node, dev)
 
+#ifdef CONFIG_MIGRATION
+extern int next_demotion_node(int node);
+#else
+static inline int next_demotion_node(int node)
+{
+	return NUMA_NO_NODE;
+}
+#endif
+
 #endif /* _LINUX_NODE_H_ */
diff -puN mm/vmscan.c~0009-mm-vmscan-Consider-anonymous-pages-without-swap mm/vmscan.c
--- a/mm/vmscan.c~0009-mm-vmscan-Consider-anonymous-pages-without-swap	2020-10-06 13:39:31.424818416 -0700
+++ b/mm/vmscan.c	2020-10-06 13:39:31.429818416 -0700
@@ -290,6 +290,26 @@ static bool writeback_throttling_sane(st
 }
 #endif
 
+static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
+					  int node_id)
+{
+	/* Always age anon pages when we have swap */
+	if (memcg == NULL) {
+		if (get_nr_swap_pages() > 0)
+			return true;
+	} else {
+		if (mem_cgroup_get_nr_swap_pages(memcg) > 0)
+			return true;
+	}
+
+	/* Also age anon pages if we can auto-migrate them */
+	if (next_demotion_node(node_id) >= 0)
+		return true;
+
+	/* No way to reclaim anon pages */
+	return false;
+}
+
 /*
  * This misses isolated pages which are not accounted for to save counters.
  * As the data only determines if reclaim or compaction continues, it is
@@ -301,7 +321,7 @@ unsigned long zone_reclaimable_pages(str
 
 	nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
 		zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
-	if (get_nr_swap_pages() > 0)
+	if (can_reclaim_anon_pages(NULL, zone_to_nid(zone)))
 		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
 			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
 
@@ -2337,6 +2357,7 @@ enum scan_balance {
 static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 			   unsigned long *nr)
 {
+	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
 	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 	unsigned long anon_cost, file_cost, total_cost;
 	int swappiness = mem_cgroup_swappiness(memcg);
@@ -2347,7 +2368,7 @@ static void get_scan_count(struct lruvec
 	enum lru_list lru;
 
 	/* If we have no swap space, do not bother scanning anon pages. */
-	if (!sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0) {
+	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) {
 		scan_balance = SCAN_FILE;
 		goto out;
 	}
@@ -2631,7 +2652,9 @@ static void shrink_lruvec(struct lruvec
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (total_swap_pages && inactive_is_low(lruvec, LRU_INACTIVE_ANON))
+	if (can_reclaim_anon_pages(lruvec_memcg(lruvec),
+			       lruvec_pgdat(lruvec)->node_id) &&
+	    inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
 }
@@ -2701,7 +2724,7 @@ static inline bool should_continue_recla
 	 */
 	pages_for_compaction = compact_gap(sc->order);
 	inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
-	if (get_nr_swap_pages() > 0)
+	if (can_reclaim_anon_pages(NULL, pgdat->node_id))
 		inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);
 
 	return inactive_lru_pages > pages_for_compaction;
@@ -3460,7 +3483,7 @@ static void age_active_anon(struct pglis
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
-	if (!total_swap_pages)
+	if (!can_reclaim_anon_pages(NULL, pgdat->node_id))
 		return;
 
 	lruvec = mem_cgroup_lruvec(NULL, pgdat);
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [RFC][PATCH 11/12] mm/vmscan: never demote for memcg reclaim
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (9 preceding siblings ...)
  2020-10-06 20:51 ` [RFC][PATCH 10/12] mm/vmscan: Consider anonymous pages without swap Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-06 20:51 ` [RFC][PATCH 12/12] mm/migrate: new zone_reclaim_mode to enable reclaim migration Dave Hansen
                   ` (2 subsequent siblings)
  13 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, yang.shi, rientjes, ying.huang, dan.j.williams


From: Dave Hansen <dave.hansen@linux.intel.com>

Global reclaim aims to reduce the amount of memory used on
a given node or set of nodes.  Migrating pages to another
node serves this purpose.

memcg reclaim is different.  Its goal is to reduce the
total memory consumption of the entire memcg, across all
nodes.  Migration does not assist memcg reclaim because
it just moves page contents between nodes rather than
actually reducing memory consumption.
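
Sketched out (the helper below is hypothetical; the real gate is the
cgroup_reclaim(sc) check in the hunks that follow), the rule is:

	/*
	 * A demoted page is still charged to the same memcg, so demotion
	 * only helps when the goal is to free a node, not to shrink a
	 * cgroup's total footprint.
	 */
	static bool demotion_useful(struct scan_control *sc, int nid)
	{
		if (sc && cgroup_reclaim(sc))
			return false;	/* memcg usage would not drop */

		return next_demotion_node(nid) >= 0;
	}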

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Suggested-by: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---

 b/mm/vmscan.c |   33 +++++++++++++++++++++++++--------
 1 file changed, 25 insertions(+), 8 deletions(-)

diff -puN mm/vmscan.c~never-demote-for-memcg-reclaim mm/vmscan.c
--- a/mm/vmscan.c~never-demote-for-memcg-reclaim	2020-10-06 13:39:32.577818413 -0700
+++ b/mm/vmscan.c	2020-10-06 13:39:32.582818413 -0700
@@ -291,8 +291,11 @@ static bool writeback_throttling_sane(st
 #endif
 
 static inline bool can_reclaim_anon_pages(struct mem_cgroup *memcg,
-					  int node_id)
+					  int node_id,
+					  struct scan_control *sc)
 {
+	bool in_cgroup_reclaim = false;
+
 	/* Always age anon pages when we have swap */
 	if (memcg == NULL) {
 		if (get_nr_swap_pages() > 0)
@@ -302,8 +305,18 @@ static inline bool can_reclaim_anon_page
 			return true;
 	}
 
-	/* Also age anon pages if we can auto-migrate them */
-	if (next_demotion_node(node_id) >= 0)
+	/* Can only be in memcg reclaim in paths with valid 'sc': */
+	if (sc && cgroup_reclaim(sc))
+		in_cgroup_reclaim = true;
+
+	/*
+	 * Also age anon pages if we can auto-migrate them.
+	 *
+	 * Migrating a page does not reduce consumption of a
+	 * memcg so should not be performed when in memcg
+	 * reclaim.
+	 */
+	if (!in_cgroup_reclaim && (next_demotion_node(node_id) >= 0))
 		return true;
 
 	/* No way to reclaim anon pages */
@@ -321,7 +334,7 @@ unsigned long zone_reclaimable_pages(str
 
 	nr = zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_FILE) +
 		zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_FILE);
-	if (can_reclaim_anon_pages(NULL, zone_to_nid(zone)))
+	if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL))
 		nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
 			zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
 
@@ -1064,6 +1077,10 @@ bool migrate_demote_page_ok(struct page
 	VM_BUG_ON_PAGE(PageHuge(page), page);
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
+	/* It is pointless to do demotion in memcg reclaim */
+	if (cgroup_reclaim(sc))
+		return false;
+
 	if (next_nid == NUMA_NO_NODE)
 		return false;
 	if (PageTransHuge(page) && !thp_migration_supported())
@@ -2368,7 +2385,7 @@ static void get_scan_count(struct lruvec
 	enum lru_list lru;
 
 	/* If we have no swap space, do not bother scanning anon pages. */
-	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id)) {
+	if (!sc->may_swap || !can_reclaim_anon_pages(memcg, pgdat->node_id, sc)) {
 		scan_balance = SCAN_FILE;
 		goto out;
 	}
@@ -2653,7 +2670,7 @@ static void shrink_lruvec(struct lruvec
 	 * rebalance the anon lru active/inactive ratio.
 	 */
 	if (can_reclaim_anon_pages(lruvec_memcg(lruvec),
-			       lruvec_pgdat(lruvec)->node_id) &&
+			       lruvec_pgdat(lruvec)->node_id, sc) &&
 	    inactive_is_low(lruvec, LRU_INACTIVE_ANON))
 		shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 				   sc, LRU_ACTIVE_ANON);
@@ -2724,7 +2741,7 @@ static inline bool should_continue_recla
 	 */
 	pages_for_compaction = compact_gap(sc->order);
 	inactive_lru_pages = node_page_state(pgdat, NR_INACTIVE_FILE);
-	if (can_reclaim_anon_pages(NULL, pgdat->node_id))
+	if (can_reclaim_anon_pages(NULL, pgdat->node_id, sc))
 		inactive_lru_pages += node_page_state(pgdat, NR_INACTIVE_ANON);
 
 	return inactive_lru_pages > pages_for_compaction;
@@ -3483,7 +3500,7 @@ static void age_active_anon(struct pglis
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
 
-	if (!can_reclaim_anon_pages(NULL, pgdat->node_id))
+	if (!can_reclaim_anon_pages(NULL, pgdat->node_id, sc))
 		return;
 
 	lruvec = mem_cgroup_lruvec(NULL, pgdat);
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [RFC][PATCH 12/12] mm/migrate: new zone_reclaim_mode to enable reclaim migration
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (10 preceding siblings ...)
  2020-10-06 20:51 ` [RFC][PATCH 11/12] mm/vmscan: never demote for memcg reclaim Dave Hansen
@ 2020-10-06 20:51 ` Dave Hansen
  2020-10-06 20:53 ` [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
  2020-10-07  9:52 ` Michal Hocko
  13 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:51 UTC (permalink / raw)
  To: linux-kernel; +Cc: Dave Hansen, yang.shi, rientjes, ying.huang, dan.j.williams


From: Dave Hansen <dave.hansen@linux.intel.com>

Some method is obviously needed to enable reclaim-based migration.

Just like traditional autonuma, there will be some workloads that
will benefit, such as workloads with more "static" configurations where
hot pages stay hot and cold pages stay cold.  If pages come and go
from the hot and cold sets, the benefits of this approach will be
more limited.

The benefits are truly workload-based and *not* hardware-based.
We do not believe that there is a viable threshold where certain
hardware configurations should have this mechanism enabled while
others do not.

To be conservative, earlier work defaulted to disable reclaim-
based migration and did not include a mechanism to enable it.
This proposes extending the existing "zone_reclaim_mode" (now
really node_reclaim_mode) as a method to enable it.

We are open to any alternative that allows end users to enable
this mechanism or disable it if workload harm is detected (just
like traditional autonuma).
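
As a usage sketch (assuming the bit values in the hunks below, where
RECLAIM_MIGRATE is bit 3), an administrator could opt in with:

	sysctl vm.zone_reclaim_mode=8

or OR it with the existing bits (e.g. 9 to also enable plain node
reclaim).  Anyone who leaves the sysctl at its default of 0 sees no
behavior change.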

Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
---

 b/Documentation/admin-guide/sysctl/vm.rst |    9 +++++++++
 b/include/linux/swap.h                    |    3 ++-
 b/include/uapi/linux/mempolicy.h          |    1 +
 b/mm/vmscan.c                             |    6 ++++--
 4 files changed, 16 insertions(+), 3 deletions(-)

diff -puN Documentation/admin-guide/sysctl/vm.rst~RECLAIM_MIGRATE Documentation/admin-guide/sysctl/vm.rst
--- a/Documentation/admin-guide/sysctl/vm.rst~RECLAIM_MIGRATE	2020-10-06 13:39:55.520818356 -0700
+++ b/Documentation/admin-guide/sysctl/vm.rst	2020-10-06 13:39:55.532818356 -0700
@@ -969,6 +969,7 @@ This is value OR'ed together of
 1	Zone reclaim on
 2	Zone reclaim writes dirty pages out
 4	Zone reclaim swaps pages
+8	Zone reclaim migrates pages
 =	===================================
 
 zone_reclaim_mode is disabled by default.  For file servers or workloads
@@ -993,3 +994,11 @@ of other processes running on other node
 Allowing regular swap effectively restricts allocations to the local
 node unless explicitly overridden by memory policies or cpuset
 configurations.
+
+Page migration during reclaim is intended for systems with tiered memory
+configurations.  These systems have multiple types of memory with varied
+performance characteristics instead of plain NUMA systems where the same
+kind of memory is found at varied distances.  Allowing page migration
+during reclaim enables these systems to migrate pages from fast tiers to
+slow tiers when the fast tier is under pressure.  This migration is
+performed before swap.
diff -puN include/linux/swap.h~RECLAIM_MIGRATE include/linux/swap.h
--- a/include/linux/swap.h~RECLAIM_MIGRATE	2020-10-06 13:39:55.524818356 -0700
+++ b/include/linux/swap.h	2020-10-06 13:39:55.533818356 -0700
@@ -385,7 +385,8 @@ extern int sysctl_min_slab_ratio;
 static inline bool node_reclaim_enabled(void)
 {
 	/* Is any node_reclaim_mode bit set? */
-	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|RECLAIM_UNMAP);
+	return node_reclaim_mode & (RECLAIM_ZONE|RECLAIM_WRITE|
+				    RECLAIM_UNMAP|RECLAIM_MIGRATE);
 }
 
 extern void check_move_unevictable_pages(struct pagevec *pvec);
diff -puN include/uapi/linux/mempolicy.h~RECLAIM_MIGRATE include/uapi/linux/mempolicy.h
--- a/include/uapi/linux/mempolicy.h~RECLAIM_MIGRATE	2020-10-06 13:39:55.526818356 -0700
+++ b/include/uapi/linux/mempolicy.h	2020-10-06 13:39:55.533818356 -0700
@@ -69,5 +69,6 @@ enum {
 #define RECLAIM_ZONE	(1<<0)	/* Run shrink_inactive_list on the zone */
 #define RECLAIM_WRITE	(1<<1)	/* Writeout pages during reclaim */
 #define RECLAIM_UNMAP	(1<<2)	/* Unmap pages during reclaim */
+#define RECLAIM_MIGRATE	(1<<3)	/* Migrate to other nodes during reclaim */
 
 #endif /* _UAPI_LINUX_MEMPOLICY_H */
diff -puN mm/vmscan.c~RECLAIM_MIGRATE mm/vmscan.c
--- a/mm/vmscan.c~RECLAIM_MIGRATE	2020-10-06 13:39:55.528818356 -0700
+++ b/mm/vmscan.c	2020-10-06 13:39:55.534818356 -0700
@@ -1077,6 +1077,9 @@ bool migrate_demote_page_ok(struct page
 	VM_BUG_ON_PAGE(PageHuge(page), page);
 	VM_BUG_ON_PAGE(PageLRU(page), page);
 
+	if (!(node_reclaim_mode & RECLAIM_MIGRATE))
+		return false;
+
 	/* It is pointless to do demotion in memcg reclaim */
 	if (cgroup_reclaim(sc))
 		return false;
@@ -1086,8 +1089,7 @@ bool migrate_demote_page_ok(struct page
 	if (PageTransHuge(page) && !thp_migration_supported())
 		return false;
 
-	// FIXME: actually enable this later in the series
-	return false;
+	return true;
 }
 
 
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC][PATCH 00/12] mm: tweak page cache migration
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (11 preceding siblings ...)
  2020-10-06 20:51 ` [RFC][PATCH 12/12] mm/migrate: new zone_reclaim_mode to enable reclaim migration Dave Hansen
@ 2020-10-06 20:53 ` Dave Hansen
  2020-10-07  9:52 ` Michal Hocko
  13 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-06 20:53 UTC (permalink / raw)
  To: linux-kernel; +Cc: npiggin, akpm, willy, yang.shi, linux-mm

Ugh, sorry about that.  I fat-fingered the wrong cover letter!

This should have been

Subject: [v4] Migrate Pages in lieu of discard

--

Changes since (automigrate-20200818):
 * Fall back to normal reclaim when demotion fails

The full series is also available here:

	https://github.com/hansendc/linux/tree/automigrate-20200818

I really just want folks to look at:

	[RFC][PATCH 08/12] mm/migrate: demote pages during reclaim

I've reworked that so that it can both use the high-level migration
API, and fall back to normal reclaim if migration fails.  I think
that gives us the best of both worlds.
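
Roughly, the fallback looks like this sketch (names are hypothetical;
patch 08 has the real code):

	/*
	 * Try to demote the candidate pages; whatever the migration
	 * core could not move stays on 'demote_pages' and is spliced
	 * back onto the normal reclaim list to be swapped or discarded
	 * as before.
	 */
	demote_page_list(&demote_pages, pgdat);	/* hypothetical helper */
	list_splice_init(&demote_pages, &page_list);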

I'm posting the series in case folks want to run the whole thing.

--

We're starting to see systems with more and more kinds of memory such
as Intel's implementation of persistent memory.

Let's say you have a system with some DRAM and some persistent memory.
Today, once DRAM fills up, reclaim will start and some of the DRAM
contents will be thrown out.  Allocations will, at some point, start
falling over to the slower persistent memory.

That has two nasty properties.  First, the newer allocations can end
up in the slower persistent memory.  Second, reclaimed data in DRAM
are just discarded even if there are gobs of space in persistent
memory that could be used.

This set implements a solution to these problems.  At the end of the
reclaim process in shrink_page_list() just before the last page
refcount is dropped, the page is migrated to persistent memory instead
of being dropped.
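
In sketch form (the helper name is hypothetical; the hook itself is
what patch 08 adds):

	/* Near the end of shrink_page_list(), before the final put: */
	int target = next_demotion_node(page_to_nid(page));

	if (target != NUMA_NO_NODE &&
	    demote_page_to(page, target))	/* hypothetical helper */
		continue;	/* page now lives in the slower tier */

	/* otherwise fall through to the usual discard/swap path */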

While I've talked about a DRAM/PMEM pairing, this approach would
function in any environment where memory tiers exist.

This is not perfect.  It "strands" pages in slower memory and never
brings them back to fast DRAM.  Other things need to be built to
promote hot pages back to DRAM.

== Open Issues ==

 * For cpusets and memory policies that restrict allocations
   to PMEM, is it OK to demote to PMEM?  Do we need a cgroup-
   level API to opt-in or opt-out of these migrations?

Cc: Yang Shi <yang.shi@linux.alibaba.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>

--

Changes since (https://lwn.net/Articles/824830/):
 * Use higher-level migrate_pages() API approach from Yang Shi's
   earlier patches.
 * made sure to actually check node_reclaim_mode's new bit
 * disabled migration entirely before introducing RECLAIM_MIGRATE
 * Replace GFP_NOWAIT with explicit __GFP_KSWAPD_RECLAIM and
   comment why we want that.
 * Comment on the effects of the approach that keeps multiple
   source nodes from sharing target nodes


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC][PATCH 01/12] mm/vmscan: restore zone_reclaim_mode ABI
  2020-10-06 20:51 ` [RFC][PATCH 01/12] mm/vmscan: restore zone_reclaim_mode ABI Dave Hansen
@ 2020-10-07  8:45   ` Christopher Lameter
  0 siblings, 0 replies; 21+ messages in thread
From: Christopher Lameter @ 2020-10-07  8:45 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, ben.widawsky, rientjes, alex.shi, dwagner, tobin,
	akpm, ying.huang, dan.j.williams, cai, stable

On Tue, 6 Oct 2020, Dave Hansen wrote:

> But, when the bit was removed (bit 0) the _other_ bit locations also
> got changed.  That's not OK because the bit values are documented to
> mean one specific thing and users surely rely on them meaning that one
> thing and not changing from kernel to kernel.  The end result is that
> if someone had a script that did:

Exactly right. Sorry, I must have missed reviewing that patch.

Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC][PATCH 02/12] mm/vmscan: move RECLAIM* bits to uapi header
  2020-10-06 20:51 ` [RFC][PATCH 02/12] mm/vmscan: move RECLAIM* bits to uapi header Dave Hansen
@ 2020-10-07  8:45   ` Christopher Lameter
  0 siblings, 0 replies; 21+ messages in thread
From: Christopher Lameter @ 2020-10-07  8:45 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, ben.widawsky, rientjes, alex.shi, dwagner, tobin,
	akpm, ying.huang, dan.j.williams, cai

On Tue, 6 Oct 2020, Dave Hansen wrote:

> It is currently not obvious that the RECLAIM_* bits are part of the
> uapi since they are defined in vmscan.c.  Move them to a uapi header
> to make it obvious.

Acked-by: Christoph Lameter <cl@linux.com>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC][PATCH 03/12] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks
  2020-10-06 20:51 ` [RFC][PATCH 03/12] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks Dave Hansen
@ 2020-10-07  8:47   ` Christopher Lameter
  0 siblings, 0 replies; 21+ messages in thread
From: Christopher Lameter @ 2020-10-07  8:47 UTC (permalink / raw)
  To: Dave Hansen
  Cc: linux-kernel, ben.widawsky, alex.shi, tobin, akpm, ying.huang,
	dan.j.williams, cai, dwagner

On Tue, 6 Oct 2020, Dave Hansen wrote:

> These zero checks are not great because it is not obvious what a zero
> mode *means* in the code.  Replace them with a helper which makes it
> more obvious: node_reclaim_enabled().

Well it uselessly checks bits. But whatever. It will prevent future code
removal.

Acked-by: Christoph Lameter <cl@linux.com>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC][PATCH 00/12] mm: tweak page cache migration
  2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
                   ` (12 preceding siblings ...)
  2020-10-06 20:53 ` [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
@ 2020-10-07  9:52 ` Michal Hocko
  2020-10-07  9:55   ` David Hildenbrand
  13 siblings, 1 reply; 21+ messages in thread
From: Michal Hocko @ 2020-10-07  9:52 UTC (permalink / raw)
  To: Dave Hansen; +Cc: linux-kernel, npiggin, akpm, willy, yang.shi, linux-mm

Am I the only one missing patch 1-5? lore.k.o doesn't seem to link them
under this message id either.

On Tue 06-10-20 13:51:03, Dave Hansen wrote:
> First of all, I think this little slice of code is a bit
> under-documented.  Perhaps this will help clarify things.
> 
> I'm pretty confident the page_count() check in the first
> patch is right, which is why I removed it outright.  The
> xas_load() check is a bit murkier, so I just left a
> warning in for it.
> 
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: Yang Shi <yang.shi@linux.alibaba.com>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC][PATCH 00/12] mm: tweak page cache migration
  2020-10-07  9:52 ` Michal Hocko
@ 2020-10-07  9:55   ` David Hildenbrand
  2020-10-07 15:52     ` Yang Shi
  0 siblings, 1 reply; 21+ messages in thread
From: David Hildenbrand @ 2020-10-07  9:55 UTC (permalink / raw)
  To: Michal Hocko, Dave Hansen
  Cc: linux-kernel, npiggin, akpm, willy, yang.shi, linux-mm

On 07.10.20 11:52, Michal Hocko wrote:
> Am I the only one missing patch 1-5? lore.k.o doesn't seem to link them
> under this message id either.

I received no patches via linux-mm, only the cover letter and Dave's
reply. (maybe some are still in flight ...)

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC][PATCH 00/12] mm: tweak page cache migration
  2020-10-07  9:55   ` David Hildenbrand
@ 2020-10-07 15:52     ` Yang Shi
  2020-10-07 15:58       ` Dave Hansen
  0 siblings, 1 reply; 21+ messages in thread
From: Yang Shi @ 2020-10-07 15:52 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Michal Hocko, Dave Hansen, Linux Kernel Mailing List,
	Nicholas Piggin, Andrew Morton, Matthew Wilcox, Yang Shi,
	Linux MM

On Wed, Oct 7, 2020 at 2:55 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 07.10.20 11:52, Michal Hocko wrote:
> > Am I the only one missing patch 1-5? lore.k.o doesn't seem to link them
> > under this message id either.
>
> I received no patches via linux-mm, only the cover letter and Dave's
> reply. (maybe some are still in flight ...)

Yes, exactly the same to me, but anyway I saw the patches via linux-kernel.

And, it seems the github series doesn't reflect the changes made by this series.

>
> --
> Thanks,
>
> David / dhildenb
>
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [RFC][PATCH 00/12] mm: tweak page cache migration
  2020-10-07 15:52     ` Yang Shi
@ 2020-10-07 15:58       ` Dave Hansen
  0 siblings, 0 replies; 21+ messages in thread
From: Dave Hansen @ 2020-10-07 15:58 UTC (permalink / raw)
  To: Yang Shi, David Hildenbrand
  Cc: Michal Hocko, Dave Hansen, Linux Kernel Mailing List,
	Nicholas Piggin, Andrew Morton, Matthew Wilcox, Yang Shi,
	Linux MM

On 10/7/20 8:52 AM, Yang Shi wrote:
> On Wed, Oct 7, 2020 at 2:55 AM David Hildenbrand <david@redhat.com> wrote:
>> On 07.10.20 11:52, Michal Hocko wrote:
>>> Am I the only one missing patch 1-5? lore.k.o doesn't seem to link them
>>> under this message id either.
>> I received no patches via linux-mm, only the cover letter and Dave's
>> reply. (maybe some are still in flight ...)
> Yes, exactly the same to me, but anyway I saw the patches via linux-kernel.
> 
> And, it seems the github series doesn't reflect the changes made by this series.

Sorry about that.  I'll try to resend the series.

There have been some Intel->list troubles as of late, but I think I'm
probably to blame for this one.


^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2020-10-07 15:58 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-06 20:51 [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
2020-10-06 20:51 ` [RFC][PATCH 01/12] mm/vmscan: restore zone_reclaim_mode ABI Dave Hansen
2020-10-07  8:45   ` Christopher Lameter
2020-10-06 20:51 ` [RFC][PATCH 02/12] mm/vmscan: move RECLAIM* bits to uapi header Dave Hansen
2020-10-07  8:45   ` Christopher Lameter
2020-10-06 20:51 ` [RFC][PATCH 03/12] mm/vmscan: replace implicit RECLAIM_ZONE checks with explicit checks Dave Hansen
2020-10-07  8:47   ` Christopher Lameter
2020-10-06 20:51 ` [RFC][PATCH 04/12] mm/numa: node demotion data structure and lookup Dave Hansen
2020-10-06 20:51 ` [RFC][PATCH 05/12] mm/numa: automatically generate node migration order Dave Hansen
2020-10-06 20:51 ` [RFC][PATCH 06/12] mm/migrate: update migration order during on hotplug events Dave Hansen
2020-10-06 20:51 ` [RFC][PATCH 07/12] mm/migrate: make migrate_pages() return nr_succeeded Dave Hansen
2020-10-06 20:51 ` [RFC][PATCH 08/12] mm/migrate: demote pages during reclaim Dave Hansen
2020-10-06 20:51 ` [RFC][PATCH 09/12] mm/vmscan: add page demotion counter Dave Hansen
2020-10-06 20:51 ` [RFC][PATCH 10/12] mm/vmscan: Consider anonymous pages without swap Dave Hansen
2020-10-06 20:51 ` [RFC][PATCH 11/12] mm/vmscan: never demote for memcg reclaim Dave Hansen
2020-10-06 20:51 ` [RFC][PATCH 12/12] mm/migrate: new zone_reclaim_mode to enable reclaim migration Dave Hansen
2020-10-06 20:53 ` [RFC][PATCH 00/12] mm: tweak page cache migration Dave Hansen
2020-10-07  9:52 ` Michal Hocko
2020-10-07  9:55   ` David Hildenbrand
2020-10-07 15:52     ` Yang Shi
2020-10-07 15:58       ` Dave Hansen
