linux-kernel.vger.kernel.org archive mirror
* [PATCH -V8 0/6] NUMA balancing: optimize memory placement for memory tiering system
@ 2021-09-14  1:36 Huang Ying
  2021-09-14  1:36 ` [PATCH -V8 1/6] NUMA balancing: optimize page " Huang Ying
                   ` (5 more replies)
  0 siblings, 6 replies; 17+ messages in thread
From: Huang Ying @ 2021-09-14  1:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel,
	Mel Gorman, Peter Zijlstra, Dave Hansen, Yang Shi, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, linux-mm

The changes since the last post are as follows,

- Rebased on latest upstream kernel (v5.15-rc1)

- Make user-specified threshold take effect sooner

--

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
memory subsystem of such machines can be called a memory tiering
system, because the performance of the different types of memory is
different.

After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
for use like normal RAM"), PMEM can be used as cost-effective volatile
memory in separate NUMA nodes.  In a typical
memory tiering system, there are CPUs, DRAM and PMEM in each physical
NUMA node.  The CPUs and the DRAM will be put in one logical node,
while the PMEM will be put in another (faked) logical node.

To optimize the overall system performance, the hot pages should be
placed in the DRAM node.  To do that, we need to identify the hot
pages in the PMEM node and migrate them to the DRAM node via NUMA
migration.

The original NUMA balancing already has a set of mechanisms to
identify the pages recently accessed by the CPUs in a node and migrate
the pages to that node.  So we can reuse these mechanisms to optimize
the page placement in the memory tiering system.  This is implemented
in this patchset.

On the other hand, the cold pages should be placed in the PMEM node.
So, we also need to identify the cold pages in the DRAM node and
migrate them to the PMEM node.

In commit 26aa2d199d6f ("mm/migrate: demote pages during reclaim"), a
mechanism to demote the cold DRAM pages to the PMEM node under memory
pressure was implemented.  Based on that, the cold DRAM pages can be
demoted to the PMEM node proactively to free some memory space on the
DRAM node to accommodate the promoted hot PMEM pages.  This is
implemented in this patchset too.

We have tested the solution with the pmbench memory accessing
benchmark with an 80:20 read/write ratio and a normal access address
distribution on a 2-socket Intel server with Optane DC Persistent
Memory.  The test results of the base kernel and the step by step
optimizations are as follows,

                Throughput      Promotion      DRAM bandwidth
                  access/s           MB/s                MB/s
               -----------     ----------      --------------
Base            74238178.0                             4291.7
Patch 2        146050652.3          359.4             11248.6
Patch 3        146300787.1          355.2             11237.2
Patch 4        162536383.0          211.7             11890.4
Patch 5        157187775.0          105.9             10412.3
Patch 6        164028415.2           73.3             10810.6

The whole patchset improves the benchmark score by up to 119.1%.  The
basic NUMA balancing based optimization solution (patch 1), the hot
page selection algorithm (patch 4), and the threshold automatic
adjustment algorithm (patch 6) improve the performance or reduce the
overhead (promotion MB/s) greatly.

Changelog:

v8:

- Rebased on latest upstream kernel (v5.15-rc1)

- Make user-specified threshold take effect sooner

v7:

- Rebased on the mmots tree of 2021-07-15.

- Some minor fixes.

v6:

- Rebased on the latest page demotion patchset (which is based on v5.11).

v5:

- Rebased on the latest page demotion patchset (which is based on v5.10).

v4:

- Rebased on the latest page demotion patchset (which is based on v5.9-rc6).

- Add page promotion counter.

v3:

- Move the rate limit control as late as possible per Mel Gorman's
  comments.

- Revise the hot page selection implementation to store page scan time
  in struct page.

- Code cleanup.

- Rebased on the latest page demotion patchset.

v2:

- Addressed comments for V1.

- Rebased on v5.5.

Best Regards,
Huang, Ying


* [PATCH -V8 1/6] NUMA balancing: optimize page placement for memory tiering system
  2021-09-14  1:36 [PATCH -V8 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
@ 2021-09-14  1:36 ` Huang Ying
  2021-09-14 22:40   ` Yang Shi
  2021-09-14  1:36 ` [PATCH -V8 2/6] memory tiering: add page promotion counter Huang Ying
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Huang Ying @ 2021-09-14  1:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel,
	Mel Gorman, Peter Zijlstra, Dave Hansen, Yang Shi, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, linux-mm

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
memory subsystem of such machines can be called a memory tiering
system, because the performance of the different types of memory is
usually different.

In such a system, because the memory access pattern changes over time,
some pages in the slow memory may become hot globally.  So in this
patch, the NUMA balancing mechanism is enhanced to optimize the page
placement among the different memory types dynamically according to
page hotness.

In a typical memory tiering system, there are CPUs, fast memory and
slow memory in each physical NUMA node.  The CPUs and the fast memory
will be put in one logical node (called fast memory node), while the
slow memory will be put in another (faked) logical node (called slow
memory node).  That is, the fast memory is regarded as local while the
slow memory is regarded as remote.  So it's possible for the recently
accessed pages in the slow memory node to be promoted to the fast
memory node via the existing NUMA balancing mechanism.

The original NUMA balancing mechanism will stop migrating pages if the
free memory of the target node would fall below the high watermark.
This is a reasonable policy if there's only one memory type.  But it
makes the original NUMA balancing mechanism almost useless for
optimizing page placement among different memory types.  Details are
as follows.

It is common for the working-set size of the workload to be larger
than the size of the fast memory nodes.  Otherwise, it would be
unnecessary to use the slow memory at all.  So in the common cases,
there are almost never enough free pages in the fast memory nodes, so
that the globally hot pages in the slow memory node cannot be promoted
to the fast memory node.  To solve the issue, we have 2 choices as
follows,

a. Ignore the free pages watermark checking when promoting hot pages
   from the slow memory node to the fast memory node.  This will
   create some memory pressure in the fast memory node, thus
   triggering memory reclaim.  As a result, the cold pages in the fast
   memory node will be demoted to the slow memory node.

b. Make kswapd of the fast memory node reclaim pages until the free
   pages are a little more (about 10MB) than the high watermark.
   Then, when the free pages of the fast memory node drop back to the
   high watermark and some hot pages need to be promoted, kswapd of
   the fast memory node will be woken up to demote some cold pages in
   the fast memory node to the slow memory node.  This will free some
   extra space in the fast memory node, so the hot pages in the slow
   memory node can be promoted to the fast memory node.

The choice "a" will create memory pressure in the fast memory node.
If the memory pressure of the workload is high, the memory pressure
may become so high that the memory allocation latency of the workload
is affected, e.g. direct reclaim may be triggered.

The choice "b" works much better in this respect.  If the memory
pressure of the workload is high, the hot page promotion will stop
earlier, because its allocation watermark is higher than that of the
normal memory allocation.  So in this patch, choice "b" is
implemented.
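
For illustration only (not part of the patch), the following
stand-alone sketch shows the extra headroom choice "b" asks kswapd to
keep, mirroring the NUMA_BALANCING_PROMOTE_WATERMARK logic in the diff
below.  The page size, node size and high watermark used here are
assumptions for the example.

#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed 4KB pages */
/* ~10MB of extra headroom, expressed in pages */
#define PROMOTE_WATERMARK_PAGES	(10UL * 1024 * 1024 >> PAGE_SHIFT)

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* assumed fast memory node: 16GB, high watermark 32768 pages */
	unsigned long present_pages = 16UL << (30 - PAGE_SHIFT);
	unsigned long high_wmark = 32768;
	unsigned long promote_mark;

	/* cap the extra headroom on small nodes: present_pages / 64 */
	promote_mark = min_ul(PROMOTE_WATERMARK_PAGES, present_pages >> 6);
	printf("kswapd balances to %lu pages, ~%lu MB above the high watermark\n",
	       high_wmark + promote_mark, promote_mark << PAGE_SHIFT >> 20);
	return 0;
}

On a large node the headroom is just the ~10MB constant; only nodes
smaller than about 640MB hit the present_pages/64 cap.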

In addition to the original page placement optimization among sockets,
the NUMA balancing mechanism is extended to optimize page placement
according to hot/cold among different memory types.  So the sysctl
user space interface (numa_balancing) is extended in a backward
compatible way as follows, so that the users can enable/disable these
functionalities individually.

The sysctl is converted from a Boolean value to a bit field.  The
definition of the flags is,

- 0x0: NUMA_BALANCING_DISABLED
- 0x1: NUMA_BALANCING_NORMAL
- 0x2: NUMA_BALANCING_MEMORY_TIERING
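
For example, writing 2 enables only the memory tiering optimization
while leaving inter-socket balancing off, and writing 3 enables both.
The following stand-alone user-space sketch (not part of the patch)
enables both modes by writing the OR-ed flag value to the backward
compatible sysctl file:

#include <stdio.h>

#define NUMA_BALANCING_NORMAL		0x1
#define NUMA_BALANCING_MEMORY_TIERING	0x2

int main(void)
{
	/* needs CAP_SYS_ADMIN, like any write to this sysctl */
	FILE *f = fopen("/proc/sys/kernel/numa_balancing", "w");

	if (!f) {
		perror("numa_balancing");
		return 1;
	}
	fprintf(f, "%d\n",
		NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING);
	fclose(f);
	return 0;
}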

TODO:

- Update ABI document: Documentation/sysctl/kernel.txt

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/sched/sysctl.h | 10 ++++++++++
 kernel/sched/core.c          | 10 ++++------
 kernel/sysctl.c              |  7 ++++---
 mm/migrate.c                 | 19 +++++++++++++++++--
 mm/vmscan.c                  | 16 ++++++++++++++++
 5 files changed, 51 insertions(+), 11 deletions(-)

diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 304f431178fd..bc54c1d75d6d 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -35,6 +35,16 @@ enum sched_tunable_scaling {
 	SCHED_TUNABLESCALING_END,
 };
 
+#define NUMA_BALANCING_DISABLED		0x0
+#define NUMA_BALANCING_NORMAL		0x1
+#define NUMA_BALANCING_MEMORY_TIERING	0x2
+
+#ifdef CONFIG_NUMA_BALANCING
+extern int sysctl_numa_balancing_mode;
+#else
+#define sysctl_numa_balancing_mode	0
+#endif
+
 /*
  *  control realtime throttling:
  *
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1bba4128a3e6..e61c2d415601 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4228,6 +4228,8 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
 
 #ifdef CONFIG_NUMA_BALANCING
 
+int sysctl_numa_balancing_mode;
+
 void set_numabalancing_state(bool enabled)
 {
 	if (enabled)
@@ -4240,20 +4242,16 @@ void set_numabalancing_state(bool enabled)
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
-	struct ctl_table t;
 	int err;
-	int state = static_branch_likely(&sched_numa_balancing);
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	t = *table;
-	t.data = &state;
-	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
+	err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
 	if (err < 0)
 		return err;
 	if (write)
-		set_numabalancing_state(state);
+		set_numabalancing_state(*(int *)table->data);
 	return err;
 }
 #endif
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 083be6af29d7..666c58455355 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -115,6 +115,7 @@ static int sixty = 60;
 
 static int __maybe_unused neg_one = -1;
 static int __maybe_unused two = 2;
+static int __maybe_unused three = 3;
 static int __maybe_unused four = 4;
 static unsigned long zero_ul;
 static unsigned long one_ul = 1;
@@ -1803,12 +1804,12 @@ static struct ctl_table kern_table[] = {
 #ifdef CONFIG_NUMA_BALANCING
 	{
 		.procname	= "numa_balancing",
-		.data		= NULL, /* filled in by handler */
-		.maxlen		= sizeof(unsigned int),
+		.data		= &sysctl_numa_balancing_mode,
+		.maxlen		= sizeof(int),
 		.mode		= 0644,
 		.proc_handler	= sysctl_numa_balancing,
 		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE,
+		.extra2		= &three,
 	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
diff --git a/mm/migrate.c b/mm/migrate.c
index a6a7743ee98f..a159a36dd412 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -50,6 +50,7 @@
 #include <linux/ptrace.h>
 #include <linux/oom.h>
 #include <linux/memory.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlbflush.h>
 
@@ -2110,16 +2111,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
 	int page_lru;
 	int nr_pages = thp_nr_pages(page);
+	int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
 	/* Do not migrate THP mapped by multiple processes */
 	if (PageTransHuge(page) && total_mapcount(page) > 1)
 		return 0;
 
 	/* Avoid migrating to a node that is nearly full */
-	if (!migrate_balanced_pgdat(pgdat, nr_pages))
+	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
+		int z;
+
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
+		    !numa_demotion_enabled)
+			return 0;
+		if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
+			return 0;
+		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+			if (populated_zone(pgdat->node_zones + z))
+				break;
+		}
+		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
 		return 0;
+	}
 
 	if (isolate_lru_page(page))
 		return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index f441c5946a4c..7fe737fd0e03 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -56,6 +56,7 @@
 
 #include <linux/swapops.h>
 #include <linux/balloon_compaction.h>
+#include <linux/sched/sysctl.h>
 
 #include "internal.h"
 
@@ -3775,6 +3776,12 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
 	return false;
 }
 
+/*
+ * Keep the free pages on fast memory node a little more than the high
+ * watermark to accommodate the promoted pages.
+ */
+#define NUMA_BALANCING_PROMOTE_WATERMARK	(10UL * 1024 * 1024 >> PAGE_SHIFT)
+
 /*
  * Returns true if there is an eligible zone balanced for the request order
  * and highest_zoneidx
@@ -3796,6 +3803,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 			continue;
 
 		mark = high_wmark_pages(zone);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    numa_demotion_enabled &&
+		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
+			unsigned long promote_mark;
+
+			promote_mark = min(NUMA_BALANCING_PROMOTE_WATERMARK,
+					   pgdat->node_present_pages >> 6);
+			mark += promote_mark;
+		}
 		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
 			return true;
 	}
-- 
2.30.2



* [PATCH -V8 2/6] memory tiering: add page promotion counter
  2021-09-14  1:36 [PATCH -V8 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
  2021-09-14  1:36 ` [PATCH -V8 1/6] NUMA balancing: optimize page " Huang Ying
@ 2021-09-14  1:36 ` Huang Ying
  2021-09-14 22:41   ` Yang Shi
  2021-09-14  1:36 ` [PATCH -V8 3/6] memory tiering: skip to scan fast memory Huang Ying
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Huang Ying @ 2021-09-14  1:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel,
	Mel Gorman, Peter Zijlstra, Dave Hansen, Yang Shi, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, linux-mm

Add a counter to distinguish the number of pages promoted by the
memory tiering optimization from the number of pages migrated by the
original inter-socket NUMA balancing.  The counter is per-node
(counted in the target node), so it can also be used to identify
promotion imbalance among the NUMA nodes.
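
As an illustration (not part of the patch), the per-node counter could
be read from the per-node vmstat file in sysfs; the path, the 2-node
assumption and the counter appearing under its vmstat name are
assumptions for this example.

#include <stdio.h>
#include <string.h>

int main(void)
{
	char path[64], line[128];
	int nid;

	for (nid = 0; nid < 2; nid++) {	/* assume 2 NUMA nodes */
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/node/node%d/vmstat", nid);
		f = fopen(path, "r");
		if (!f)
			continue;
		while (fgets(line, sizeof(line), f)) {
			if (!strncmp(line, "pgpromote_success", 17))
				printf("node%d: %s", nid, line);
		}
		fclose(f);
	}
	return 0;
}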

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h |  3 +++
 include/linux/node.h   |  5 +++++
 mm/migrate.c           | 11 +++++++++--
 mm/vmstat.c            |  3 +++
 4 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 6a1d79d84675..37ccd6158765 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -209,6 +209,9 @@ enum node_stat_item {
 	NR_PAGETABLE,		/* used for pagetables */
 #ifdef CONFIG_SWAP
 	NR_SWAPCACHE,
+#endif
+#ifdef CONFIG_NUMA_BALANCING
+	PGPROMOTE_SUCCESS,	/* promote successfully */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
diff --git a/include/linux/node.h b/include/linux/node.h
index 8e5a29897936..26e96fcc66af 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
 
 #define to_node(device) container_of(device, struct node, dev)
 
+static inline bool node_is_toptier(int node)
+{
+	return node_state(node, N_CPU);
+}
+
 #endif /* _LINUX_NODE_H_ */
diff --git a/mm/migrate.c b/mm/migrate.c
index a159a36dd412..6f7a6e2ef41f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2163,6 +2163,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	pg_data_t *pgdat = NODE_DATA(node);
 	int isolated;
 	int nr_remaining;
+	int nr_succeeded;
 	LIST_HEAD(migratepages);
 	new_page_t *new;
 	bool compound;
@@ -2201,7 +2202,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 
 	list_add(&page->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
-				     MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
+				     MIGRATE_ASYNC, MR_NUMA_MISPLACED,
+				     &nr_succeeded);
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
 			list_del(&page->lru);
@@ -2210,8 +2212,13 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 			putback_lru_page(page);
 		}
 		isolated = 0;
-	} else
+	} else {
 		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    !node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+			mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
+					    nr_succeeded);
+	}
 	BUG_ON(!list_empty(&migratepages));
 	return isolated;
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 8ce2620344b2..fff0ec94d795 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1236,6 +1236,9 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_SWAP
 	"nr_swapcached",
 #endif
+#ifdef CONFIG_NUMA_BALANCING
+	"pgpromote_success",
+#endif
 
 	/* enum writeback_stat_item counters */
 	"nr_dirty_threshold",
-- 
2.30.2



* [PATCH -V8 3/6] memory tiering: skip to scan fast memory
  2021-09-14  1:36 [PATCH -V8 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
  2021-09-14  1:36 ` [PATCH -V8 1/6] NUMA balancing: optimize page " Huang Ying
  2021-09-14  1:36 ` [PATCH -V8 2/6] memory tiering: add page promotion counter Huang Ying
@ 2021-09-14  1:36 ` Huang Ying
  2021-09-14  1:36 ` [PATCH -V8 4/6] memory tiering: hot page selection with hint page fault latency Huang Ying
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 17+ messages in thread
From: Huang Ying @ 2021-09-14  1:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Huang Ying, Dave Hansen, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Yang Shi, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, linux-mm

If NUMA balancing is used to optimize the page placement only among
memory types but not among sockets, the hot pages in the fast memory
node cannot be migrated (promoted) anywhere.  So it's unnecessary to
scan the pages in the fast memory node by changing their PTE/PMD
mappings to PROT_NONE, and the corresponding page faults can be
avoided too.

In the test, if only the memory tiering NUMA balancing mode is
enabled, the number of the NUMA balancing hint faults for the DRAM
node is reduced to almost 0 with the patch, while the benchmark score
doesn't change visibly.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 mm/huge_memory.c | 30 +++++++++++++++++++++---------
 mm/mprotect.c    | 13 ++++++++++++-
 2 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5e9ef0fc261e..8edcd64b5b1f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -34,6 +34,7 @@
 #include <linux/oom.h>
 #include <linux/numa.h>
 #include <linux/page_owner.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
@@ -1766,17 +1767,28 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 #endif
 
-	/*
-	 * Avoid trapping faults against the zero page. The read-only
-	 * data is likely to be read-cached on the local CPU and
-	 * local/remote hits to the zero page are not interesting.
-	 */
-	if (prot_numa && is_huge_zero_pmd(*pmd))
-		goto unlock;
+	if (prot_numa) {
+		struct page *page;
+		/*
+		 * Avoid trapping faults against the zero page. The read-only
+		 * data is likely to be read-cached on the local CPU and
+		 * local/remote hits to the zero page are not interesting.
+		 */
+		if (is_huge_zero_pmd(*pmd))
+			goto unlock;
 
-	if (prot_numa && pmd_protnone(*pmd))
-		goto unlock;
+		if (pmd_protnone(*pmd))
+			goto unlock;
 
+		page = pmd_page(*pmd);
+		/*
+		 * Skip scanning top tier node if normal numa
+		 * balancing is disabled
+		 */
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+		    node_is_toptier(page_to_nid(page)))
+			goto unlock;
+	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
 	 * to not clear pmd intermittently to avoid race with MADV_DONTNEED
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 883e2cc85cad..0dd3f82ec6eb 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -29,6 +29,7 @@
 #include <linux/uaccess.h>
 #include <linux/mm_inline.h>
 #include <linux/pgtable.h>
+#include <linux/sched/sysctl.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
@@ -83,6 +84,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			 */
 			if (prot_numa) {
 				struct page *page;
+				int nid;
 
 				/* Avoid TLB flush if possible */
 				if (pte_protnone(oldpte))
@@ -109,7 +111,16 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				 * Don't mess with PTEs if page is already on the node
 				 * a single-threaded process is running on.
 				 */
-				if (target_node == page_to_nid(page))
+				nid = page_to_nid(page);
+				if (target_node == nid)
+					continue;
+
+				/*
+				 * Skip scanning top tier node if normal numa
+				 * balancing is disabled
+				 */
+				if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+				    node_is_toptier(nid))
 					continue;
 			}
 
-- 
2.30.2



* [PATCH -V8 4/6] memory tiering: hot page selection with hint page fault latency
  2021-09-14  1:36 [PATCH -V8 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
                   ` (2 preceding siblings ...)
  2021-09-14  1:36 ` [PATCH -V8 3/6] memory tiering: skip to scan fast memory Huang Ying
@ 2021-09-14  1:36 ` Huang Ying
  2021-09-14  1:37 ` [PATCH -V8 5/6] memory tiering: rate limit NUMA migration throughput Huang Ying
  2021-09-14  1:37 ` [PATCH -V8 6/6] memory tiering: adjust hot threshold automatically Huang Ying
  5 siblings, 0 replies; 17+ messages in thread
From: Huang Ying @ 2021-09-14  1:36 UTC (permalink / raw)
  To: linux-kernel
  Cc: Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel,
	Mel Gorman, Peter Zijlstra, Dave Hansen, Yang Shi, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, linux-mm

To optimize page placement in a memory tiering system with NUMA
balancing, the hot pages in the slow memory node need to be
identified.  Essentially, the original NUMA balancing implementation
selects the most recently accessed (MRU) pages as the hot pages.  But
this isn't a very good algorithm to identify the hot pages.

So, in this patch we implement a better hot page selection algorithm,
based on NUMA balancing page table scanning and hint page faults as
follows,

- When the page tables of the processes are scanned to change PTE/PMD
  to be PROT_NONE, the current time is recorded in struct page as scan
  time.

- When the page is accessed, a hint page fault will occur.  The scan
  time is read from the struct page, and the hint page fault
  latency is defined as

    hint page fault time - scan time

The shorter the hint page fault latency of a page is, the more likely
the page is accessed frequently.  So the hint page fault latency is a
good estimation of the page hotness.

But it's hard to find some extra space in struct page to hold the scan
time.  Fortunately, we can reuse some bits used by the original NUMA
balancing.

NUMA balancing uses some bits in struct page to store the accessing
CPU and PID (see page_cpupid_xchg_last()).  These are used by the
multi-stage node selection algorithm to avoid migrating pages shared
among NUMA nodes back and forth.  But for pages in the slow memory
node, even if they are accessed by multiple NUMA nodes, as long as the
pages are hot, they need to be promoted to the fast memory node.  So
the accessing CPU and PID information is unnecessary for the slow
memory pages.  We can reuse these bits in struct page to record the
scan time for them.  For the fast memory pages, these bits are used as
before.
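
To illustrate the encoding, here is a stand-alone sketch of how the
scan time is stored and the latency recovered.  It assumes
LAST_CPUPID_SHIFT is 8, so the time is kept in 16ms buckets covering
at least 4 seconds; the real shift depends on the kernel
configuration.

#include <stdio.h>

#define LAST_CPUPID_SHIFT	8	/* assumed for this example */
#define LAST_CPUPID_MASK	((1U << LAST_CPUPID_SHIFT) - 1)
#define PAGE_ACCESS_TIME_MIN_BITS	12	/* >= 4 seconds in ms */
#define PAGE_ACCESS_TIME_BUCKETS	\
	(PAGE_ACCESS_TIME_MIN_BITS - LAST_CPUPID_SHIFT)
#define PAGE_ACCESS_TIME_MASK	\
	(LAST_CPUPID_MASK << PAGE_ACCESS_TIME_BUCKETS)

int main(void)
{
	unsigned int scan_time = 100000;	/* ms, PTE made PROT_NONE */
	unsigned int fault_time = 100700;	/* ms, hint fault happens */
	unsigned int stored, latency;

	/* xchg_page_access_time(): keep only the bucketed low bits */
	stored = (scan_time >> PAGE_ACCESS_TIME_BUCKETS) & LAST_CPUPID_MASK;

	/* numa_hint_fault_latency(): difference modulo the stored range */
	latency = (fault_time - (stored << PAGE_ACCESS_TIME_BUCKETS)) &
		PAGE_ACCESS_TIME_MASK;
	printf("hint page fault latency: %u ms (true 700 ms, 16 ms buckets)\n",
	       latency);
	return 0;
}

Because only about 12 bits of millisecond resolution survive, the
measured latency wraps around every ~4 seconds, which is why the
comment in the patch requires the bits to hold at least 4 seconds.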

The remaining problem is how to determine the hot threshold.  It's not
easy to do this automatically.  So we provide a sysctl knob:
kernel.numa_balancing_hot_threshold_ms.  All pages with a hint page
fault latency < the threshold will be considered hot.  The system
administrator can determine the hot threshold via various information,
such as the PMEM bandwidth limit, the average number of pages that
pass the hot threshold, etc.  The default hot threshold is 1 second,
which works well in our performance tests.

The patch improves the score of the pmbench memory accessing benchmark
with an 80:20 read/write ratio and a normal access address
distribution by 16.8% with 41.1% fewer pages promoted (that is, less
overhead) on a 2-socket Intel server with Optane DC Persistent Memory.

The downside of the patch is that the response time to a change of the
workload hot spot may be much longer.  For example,

- A previous cold memory area becomes hot

- The hint page fault will be triggered.  But the hint page fault
  latency isn't shorter than the hot threshold.  So the pages will
  not be promoted.

- When the memory area is scanned again, maybe after a scan period,
  the hint page fault latency measured will be shorter than the hot
  threshold and the pages will be promoted.

To mitigate this,

- If there is enough free space in the fast memory node, the hot
  threshold will not be used; all pages will be promoted upon the hint
  page fault for fast response.

- If fast response is more important for system performance, the
  administrator can set a higher hot threshold.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mm.h           | 29 ++++++++++++++++
 include/linux/sched/sysctl.h |  1 +
 kernel/sched/fair.c          | 67 ++++++++++++++++++++++++++++++++++++
 kernel/sysctl.c              |  7 ++++
 mm/huge_memory.c             | 13 +++++--
 mm/memory.c                  | 11 +++++-
 mm/migrate.c                 | 12 +++++++
 mm/mmzone.c                  | 17 +++++++++
 mm/mprotect.c                |  8 ++++-
 9 files changed, 160 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52aba448f..12aaa9ec8db0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1380,6 +1380,18 @@ static inline int page_to_nid(const struct page *page)
 #endif
 
 #ifdef CONFIG_NUMA_BALANCING
+/* page access time bits needs to hold at least 4 seconds */
+#define PAGE_ACCESS_TIME_MIN_BITS	12
+#if LAST_CPUPID_SHIFT < PAGE_ACCESS_TIME_MIN_BITS
+#define PAGE_ACCESS_TIME_BUCKETS				\
+	(PAGE_ACCESS_TIME_MIN_BITS - LAST_CPUPID_SHIFT)
+#else
+#define PAGE_ACCESS_TIME_BUCKETS	0
+#endif
+
+#define PAGE_ACCESS_TIME_MASK				\
+	(LAST_CPUPID_MASK << PAGE_ACCESS_TIME_BUCKETS)
+
 static inline int cpu_pid_to_cpupid(int cpu, int pid)
 {
 	return ((cpu & LAST__CPU_MASK) << LAST__PID_SHIFT) | (pid & LAST__PID_MASK);
@@ -1422,6 +1434,16 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
 	return xchg(&page->_last_cpupid, cpupid & LAST_CPUPID_MASK);
 }
 
+static inline unsigned int xchg_page_access_time(struct page *page,
+						 unsigned int time)
+{
+	unsigned int last_time;
+
+	last_time = xchg(&page->_last_cpupid,
+			 (time >> PAGE_ACCESS_TIME_BUCKETS) & LAST_CPUPID_MASK);
+	return last_time << PAGE_ACCESS_TIME_BUCKETS;
+}
+
 static inline int page_cpupid_last(struct page *page)
 {
 	return page->_last_cpupid;
@@ -1437,6 +1459,7 @@ static inline int page_cpupid_last(struct page *page)
 }
 
 extern int page_cpupid_xchg_last(struct page *page, int cpupid);
+extern unsigned int xchg_page_access_time(struct page *page, unsigned int time);
 
 static inline void page_cpupid_reset_last(struct page *page)
 {
@@ -1449,6 +1472,12 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
 	return page_to_nid(page); /* XXX */
 }
 
+static inline unsigned int xchg_page_access_time(struct page *page,
+						 unsigned int time)
+{
+	return 0;
+}
+
 static inline int page_cpupid_last(struct page *page)
 {
 	return page_to_nid(page); /* XXX */
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index bc54c1d75d6d..0ea43b146aee 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -41,6 +41,7 @@ enum sched_tunable_scaling {
 
 #ifdef CONFIG_NUMA_BALANCING
 extern int sysctl_numa_balancing_mode;
+extern unsigned int sysctl_numa_balancing_hot_threshold;
 #else
 #define sysctl_numa_balancing_mode	0
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ff69f245b939..6734cf4ce39a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1069,6 +1069,9 @@ unsigned int sysctl_numa_balancing_scan_size = 256;
 /* Scan @scan_size MB every @scan_period after an initial @scan_delay in ms */
 unsigned int sysctl_numa_balancing_scan_delay = 1000;
 
+/* The page with hint page fault latency < threshold in ms is considered hot */
+unsigned int sysctl_numa_balancing_hot_threshold = 1000;
+
 struct numa_group {
 	refcount_t refcount;
 
@@ -1409,6 +1412,37 @@ static inline unsigned long group_weight(struct task_struct *p, int nid,
 	return 1000 * faults / total_faults;
 }
 
+static bool pgdat_free_space_enough(struct pglist_data *pgdat)
+{
+	int z;
+	unsigned long enough_mark;
+
+	enough_mark = max(1UL * 1024 * 1024 * 1024 >> PAGE_SHIFT,
+			  pgdat->node_present_pages >> 4);
+	for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+		struct zone *zone = pgdat->node_zones + z;
+
+		if (!populated_zone(zone))
+			continue;
+
+		if (zone_watermark_ok(zone, 0,
+				      high_wmark_pages(zone) + enough_mark,
+				      ZONE_MOVABLE, 0))
+			return true;
+	}
+	return false;
+}
+
+static int numa_hint_fault_latency(struct page *page)
+{
+	unsigned int last_time, time;
+
+	time = jiffies_to_msecs(jiffies);
+	last_time = xchg_page_access_time(page, time);
+
+	return (time - last_time) & PAGE_ACCESS_TIME_MASK;
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1416,6 +1450,27 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	int dst_nid = cpu_to_node(dst_cpu);
 	int last_cpupid, this_cpupid;
 
+	/*
+	 * The pages in slow memory node should be migrated according
+	 * to hot/cold instead of accessing CPU node.
+	 */
+	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+	    !node_is_toptier(src_nid)) {
+		struct pglist_data *pgdat;
+		unsigned long latency, th;
+
+		pgdat = NODE_DATA(dst_nid);
+		if (pgdat_free_space_enough(pgdat))
+			return true;
+
+		th = sysctl_numa_balancing_hot_threshold;
+		latency = numa_hint_fault_latency(page);
+		if (latency > th)
+			return false;
+
+		return true;
+	}
+
 	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
 	last_cpupid = page_cpupid_xchg_last(page, this_cpupid);
 
@@ -2636,6 +2691,11 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	if (!p->mm)
 		return;
 
+	/* Numa faults statistics are unnecessary for the slow memory node */
+	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+	    !node_is_toptier(mem_node))
+		return;
+
 	/* Allocate buffer to track faults on a per-node basis */
 	if (unlikely(!p->numa_faults)) {
 		int size = sizeof(*p->numa_faults) *
@@ -2655,6 +2715,13 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	 */
 	if (unlikely(last_cpupid == (-1 & LAST_CPUPID_MASK))) {
 		priv = 1;
+	} else if (unlikely(!cpu_online(cpupid_to_cpu(last_cpupid)))) {
+		/*
+		 * In memory tiering mode, cpupid of slow memory page is
+		 * used to record page access time, so its value may be
+		 * invalid during numa balancing mode transition.
+		 */
+		return;
 	} else {
 		priv = cpupid_match_pid(p, last_cpupid);
 		if (!priv && !(flags & TNF_NO_GROUP))
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 666c58455355..ea105f52b646 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1811,6 +1811,13 @@ static struct ctl_table kern_table[] = {
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= &three,
 	},
+	{
+		.procname	= "numa_balancing_hot_threshold_ms",
+		.data		= &sysctl_numa_balancing_hot_threshold,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
 		.procname	= "sched_rt_period_us",
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8edcd64b5b1f..10cdd8e399e7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1430,7 +1430,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	int page_nid = NUMA_NO_NODE;
-	int target_nid, last_cpupid = -1;
+	int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
 	bool migrated = false;
 	bool was_writable = pmd_savedwrite(oldpmd);
 	int flags = 0;
@@ -1451,7 +1451,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 		flags |= TNF_NO_GROUP;
 
 	page_nid = page_to_nid(page);
-	last_cpupid = page_cpupid_last(page);
+	if (node_is_toptier(page_nid))
+		last_cpupid = page_cpupid_last(page);
 	target_nid = numa_migrate_prep(page, vma, haddr, page_nid,
 				       &flags);
 
@@ -1769,6 +1770,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 
 	if (prot_numa) {
 		struct page *page;
+		bool toptier;
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
 		 * data is likely to be read-cached on the local CPU and
@@ -1781,13 +1783,18 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			goto unlock;
 
 		page = pmd_page(*pmd);
+		toptier = node_is_toptier(page_to_nid(page));
 		/*
 		 * Skip scanning top tier node if normal numa
 		 * balancing is disabled
 		 */
 		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-		    node_is_toptier(page_to_nid(page)))
+		    toptier)
 			goto unlock;
+
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    !toptier)
+			xchg_page_access_time(page, jiffies_to_msecs(jiffies));
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
diff --git a/mm/memory.c b/mm/memory.c
index ce4456268a87..3e79e8636c69 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -73,6 +73,7 @@
 #include <linux/perf_event.h>
 #include <linux/ptrace.h>
 #include <linux/vmalloc.h>
+#include <linux/sched/sysctl.h>
 
 #include <trace/events/kmem.h>
 
@@ -4386,8 +4387,16 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
 		flags |= TNF_SHARED;
 
-	last_cpupid = page_cpupid_last(page);
 	page_nid = page_to_nid(page);
+	/*
+	 * In memory tiering mode, cpupid of slow memory page is used
+	 * to record page access time.  So use default value.
+	 */
+	if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+	    !node_is_toptier(page_nid))
+		last_cpupid = (-1 & LAST_CPUPID_MASK);
+	else
+		last_cpupid = page_cpupid_last(page);
 	target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
 			&flags);
 	if (target_nid == NUMA_NO_NODE) {
diff --git a/mm/migrate.c b/mm/migrate.c
index 6f7a6e2ef41f..6ec2be0c935a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -577,6 +577,18 @@ void migrate_page_states(struct page *newpage, struct page *page)
 	 * future migrations of this same page.
 	 */
 	cpupid = page_cpupid_xchg_last(page, -1);
+	/*
+	 * If migrate between slow and fast memory node, reset cpupid,
+	 * because that is used to record page access time in slow
+	 * memory node
+	 */
+	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) {
+		bool f_toptier = node_is_toptier(page_to_nid(page));
+		bool t_toptier = node_is_toptier(page_to_nid(newpage));
+
+		if (f_toptier != t_toptier)
+			cpupid = -1;
+	}
 	page_cpupid_xchg_last(newpage, cpupid);
 
 	ksm_migrate_page(newpage, page);
diff --git a/mm/mmzone.c b/mm/mmzone.c
index eb89d6e018e2..27f9075632ee 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -99,4 +99,21 @@ int page_cpupid_xchg_last(struct page *page, int cpupid)
 
 	return last_cpupid;
 }
+
+unsigned int xchg_page_access_time(struct page *page, unsigned int time)
+{
+	unsigned long old_flags, flags;
+	unsigned int last_time;
+
+	time >>= PAGE_ACCESS_TIME_BUCKETS;
+	do {
+		old_flags = flags = page->flags;
+		last_time = (flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
+
+		flags &= ~(LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT);
+		flags |= (time & LAST_CPUPID_MASK) << LAST_CPUPID_PGSHIFT;
+	} while (unlikely(cmpxchg(&page->flags, old_flags, flags) != old_flags));
+
+	return last_time << PAGE_ACCESS_TIME_BUCKETS;
+}
 #endif
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 0dd3f82ec6eb..bbf2c65cc4ae 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -85,6 +85,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			if (prot_numa) {
 				struct page *page;
 				int nid;
+				bool toptier;
 
 				/* Avoid TLB flush if possible */
 				if (pte_protnone(oldpte))
@@ -114,14 +115,19 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				nid = page_to_nid(page);
 				if (target_node == nid)
 					continue;
+				toptier = node_is_toptier(nid);
 
 				/*
 				 * Skip scanning top tier node if normal numa
 				 * balancing is disabled
 				 */
 				if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-				    node_is_toptier(nid))
+				    toptier)
 					continue;
+				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+				    !toptier)
+					xchg_page_access_time(page,
+						jiffies_to_msecs(jiffies));
 			}
 
 			oldpte = ptep_modify_prot_start(vma, addr, pte);
-- 
2.30.2



* [PATCH -V8 5/6] memory tiering: rate limit NUMA migration throughput
  2021-09-14  1:36 [PATCH -V8 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
                   ` (3 preceding siblings ...)
  2021-09-14  1:36 ` [PATCH -V8 4/6] memory tiering: hot page selection with hint page fault latency Huang Ying
@ 2021-09-14  1:37 ` Huang Ying
  2021-09-14  1:37 ` [PATCH -V8 6/6] memory tiering: adjust hot threshold automatically Huang Ying
  5 siblings, 0 replies; 17+ messages in thread
From: Huang Ying @ 2021-09-14  1:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel,
	Mel Gorman, Peter Zijlstra, Dave Hansen, Yang Shi, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, linux-mm

In NUMA balancing memory tiering mode, the hot slow memory pages can
be promoted to the fast memory node via NUMA balancing.  But this
incurs some overhead too, so sometimes the workload performance may be
hurt.  To avoid disturbing the workload too much in these situations,
we should make it possible to rate limit the promotion throughput.

So, in this patch, we implement a simple rate limit algorithm as
follows.  The number of the candidate pages to be promoted to the fast
memory node via NUMA balancing is counted, and if the count exceeds
the limit specified by the users, the NUMA balancing promotion will be
stopped until the next second.
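
A minimal single-threaded sketch of this policy follows (the kernel
version below works per node and uses jiffies, the PGPROMOTE_CANDIDATE
vmstat counter and cmpxchg to stay lockless; the limit and timestamps
here are made up for illustration):

#include <stdbool.h>
#include <stdio.h>

static unsigned long nr_candidate;	/* candidate pages counted so far */
static unsigned long last_ts;		/* start of the current 1s window */
static unsigned long nr_candidate_at_ts;/* counter value at window start */

static bool promote_rate_limit_ok(unsigned long now, unsigned long rate_limit,
				  unsigned long nr_pages)
{
	nr_candidate += nr_pages;
	if (now > last_ts) {			/* a new one-second window */
		last_ts = now;
		nr_candidate_at_ts = nr_candidate;
	}
	/* stop promotion for the rest of the window once over the limit */
	return nr_candidate - nr_candidate_at_ts <= rate_limit;
}

int main(void)
{
	unsigned long rate_limit = 8;	/* pages per second, for illustration */
	unsigned long second, i;

	for (second = 1; second <= 2; second++) {
		for (i = 0; i < 6; i++) {
			bool ok = promote_rate_limit_ok(second, rate_limit, 2);

			printf("t=%lus request %lu (2 pages): %s\n",
			       second, i + 1,
			       ok ? "promote" : "rate limited");
		}
	}
	return 0;
}

In the real patch the limit is kernel.numa_balancing_rate_limit_mbps
converted to pages per second, and the check is only applied when the
fast memory node doesn't have plenty of free space.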

We tested the patch with the pmbench memory accessing benchmark with
an 80:20 read/write ratio and a normal access address distribution on
a 2-socket Intel server with Optane DC Persistent Memory.  In the
test, the page promotion throughput decreases by 51.4% (from 213.0
MB/s to 103.6 MB/s) with the patch, while the benchmark score
decreases only 1.8%.

A new sysctl knob kernel.numa_balancing_rate_limit_mbps is added for
the users to specify the limit.

TODO: Add ABI document for new sysctl knob.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h       |  5 +++++
 include/linux/sched/sysctl.h |  1 +
 kernel/sched/fair.c          | 29 +++++++++++++++++++++++++++--
 kernel/sysctl.c              |  8 ++++++++
 mm/vmstat.c                  |  1 +
 5 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 37ccd6158765..d6a0efd387bd 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -212,6 +212,7 @@ enum node_stat_item {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	PGPROMOTE_SUCCESS,	/* promote successfully */
+	PGPROMOTE_CANDIDATE,	/* candidate pages to promote */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
@@ -887,6 +888,10 @@ typedef struct pglist_data {
 	struct deferred_split deferred_split_queue;
 #endif
 
+#ifdef CONFIG_NUMA_BALANCING
+	unsigned long numa_ts;
+	unsigned long numa_nr_candidate;
+#endif
 	/* Fields commonly accessed by the page reclaim scanner */
 
 	/*
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 0ea43b146aee..7d937adaac0f 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -42,6 +42,7 @@ enum sched_tunable_scaling {
 #ifdef CONFIG_NUMA_BALANCING
 extern int sysctl_numa_balancing_mode;
 extern unsigned int sysctl_numa_balancing_hot_threshold;
+extern unsigned int sysctl_numa_balancing_rate_limit;
 #else
 #define sysctl_numa_balancing_mode	0
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6734cf4ce39a..ce8c620ed1f6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1071,6 +1071,11 @@ unsigned int sysctl_numa_balancing_scan_delay = 1000;
 
 /* The page with hint page fault latency < threshold in ms is considered hot */
 unsigned int sysctl_numa_balancing_hot_threshold = 1000;
+/*
+ * Restrict the NUMA migration per second in MB for each target node
+ * if no enough free space in target node
+ */
+unsigned int sysctl_numa_balancing_rate_limit = 65536;
 
 struct numa_group {
 	refcount_t refcount;
@@ -1443,6 +1448,23 @@ static int numa_hint_fault_latency(struct page *page)
 	return (time - last_time) & PAGE_ACCESS_TIME_MASK;
 }
 
+static bool numa_migration_check_rate_limit(struct pglist_data *pgdat,
+					    unsigned long rate_limit, int nr)
+{
+	unsigned long nr_candidate;
+	unsigned long now = jiffies, last_ts;
+
+	mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE, nr);
+	nr_candidate = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+	last_ts = pgdat->numa_ts;
+	if (now > last_ts + HZ &&
+	    cmpxchg(&pgdat->numa_ts, last_ts, now) == last_ts)
+		pgdat->numa_nr_candidate = nr_candidate;
+	if (nr_candidate - pgdat->numa_nr_candidate > rate_limit)
+		return false;
+	return true;
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1457,7 +1479,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long latency, th;
+		unsigned long rate_limit, latency, th;
 
 		pgdat = NODE_DATA(dst_nid);
 		if (pgdat_free_space_enough(pgdat))
@@ -1468,7 +1490,10 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 		if (latency > th)
 			return false;
 
-		return true;
+		rate_limit =
+			sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
+		return numa_migration_check_rate_limit(pgdat, rate_limit,
+						       thp_nr_pages(page));
 	}
 
 	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index ea105f52b646..0d89021bd66a 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1818,6 +1818,14 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec,
 	},
+	{
+		.procname	= "numa_balancing_rate_limit_mbps",
+		.data		= &sysctl_numa_balancing_rate_limit,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= SYSCTL_ZERO,
+	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
 		.procname	= "sched_rt_period_us",
diff --git a/mm/vmstat.c b/mm/vmstat.c
index fff0ec94d795..da2abeaf9e6c 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1238,6 +1238,7 @@ const char * const vmstat_text[] = {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	"pgpromote_success",
+	"pgpromote_candidate",
 #endif
 
 	/* enum writeback_stat_item counters */
-- 
2.30.2



* [PATCH -V8 6/6] memory tiering: adjust hot threshold automatically
  2021-09-14  1:36 [PATCH -V8 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
                   ` (4 preceding siblings ...)
  2021-09-14  1:37 ` [PATCH -V8 5/6] memory tiering: rate limit NUMA migration throughput Huang Ying
@ 2021-09-14  1:37 ` Huang Ying
  5 siblings, 0 replies; 17+ messages in thread
From: Huang Ying @ 2021-09-14  1:37 UTC (permalink / raw)
  To: linux-kernel
  Cc: Huang Ying, Andrew Morton, Michal Hocko, Rik van Riel,
	Mel Gorman, Peter Zijlstra, Dave Hansen, Yang Shi, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, linux-mm

It isn't easy for the administrator to determine the hot threshold.
So in this patch, a method to adjust the hot threshold automatically
is implemented.  The basic idea is to control the number of the
candidate promotion pages to match the promotion rate limit.  If the
hint page fault latency of a page is less than the hot threshold, we
will try to promote the page, and the page is called a candidate
promotion page.

If the number of the candidate promotion pages in the statistics
interval is much more than the promotion rate limit, the hot threshold
will be decreased to reduce the number of the candidate promotion
pages.  Otherwise, the hot threshold will be increased to increase the
number of the candidate promotion pages.

To make the above method work, in each statistics interval, the total
number of the pages to check (on which the hint page faults occur) and
the hot/cold distribution need to be stable.  Because the page tables
are scanned linearly in NUMA balancing, but the hot/cold distribution
isn't uniform along the address space, the statistics interval should
be larger than the NUMA balancing scan period.  So in the patch, the
max scan period is used as the statistics interval, and it works well
in our tests.

The sysctl knob kernel.numa_balancing_hot_threshold_ms becomes the
initial value and max value of the hot threshold.
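
A stand-alone sketch of the adjustment loop follows; all numbers
(rate limit per interval, candidate counts) are made up for
illustration, while the kernel does this per node, once per statistics
interval, guarded by cmpxchg.

#include <stdio.h>

#define ADJUST_STEPS	16	/* NUMA_MIGRATION_ADJUST_STEPS */

static unsigned long max_ul(unsigned long a, unsigned long b)
{
	return a > b ? a : b;
}

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* th: current threshold, ref_th: the sysctl value (initial and max),
 * diff_cand: candidates seen in the last interval,
 * ref_cand: rate limit translated to pages per interval
 */
static unsigned long adjust_threshold(unsigned long th, unsigned long ref_th,
				      unsigned long diff_cand,
				      unsigned long ref_cand)
{
	unsigned long unit_th = ref_th / ADJUST_STEPS;

	if (diff_cand > ref_cand * 11 / 10)	/* too many candidates */
		th = max_ul(th - unit_th, unit_th);
	else if (diff_cand < ref_cand * 9 / 10)	/* too few candidates */
		th = min_ul(th + unit_th, ref_th);
	return th;
}

int main(void)
{
	unsigned long ref_th = 1000;	/* ms, the sysctl knob */
	unsigned long ref_cand = 1000;	/* pages allowed per interval */
	unsigned long cand[] = { 4000, 3000, 2500, 900, 800, 1000 };
	unsigned long th = ref_th;
	int i;

	for (i = 0; i < 6; i++) {
		th = adjust_threshold(th, ref_th, cand[i], ref_cand);
		printf("interval %d: %lu candidates -> threshold %lu ms\n",
		       i, cand[i], th);
	}
	return 0;
}

With the default 1 second threshold the step is about 62 ms per
interval; the threshold never exceeds the sysctl value and never drops
below one step.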

The patch improves the score of the pmbench memory accessing benchmark
with an 80:20 read/write ratio and a normal access address
distribution by 3.9% with 32.4% fewer NUMA page migrations on a
2-socket Intel server with Optane DC Persistent Memory, because it
improves the accuracy of the hot page selection.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h       |  3 ++
 include/linux/sched/sysctl.h |  2 ++
 kernel/sched/fair.c          | 59 +++++++++++++++++++++++++++++++++---
 kernel/sysctl.c              |  3 +-
 4 files changed, 62 insertions(+), 5 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index d6a0efd387bd..69bb672ea743 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -891,6 +891,9 @@ typedef struct pglist_data {
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned long numa_ts;
 	unsigned long numa_nr_candidate;
+	unsigned long numa_threshold_ts;
+	unsigned long numa_threshold_nr_candidate;
+	unsigned long numa_threshold;
 #endif
 	/* Fields commonly accessed by the page reclaim scanner */
 
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 7d937adaac0f..ff2c43e8ebac 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -84,6 +84,8 @@ int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *lenp, loff_t *ppos);
 int sysctl_numa_balancing(struct ctl_table *table, int write, void *buffer,
 		size_t *lenp, loff_t *ppos);
+int sysctl_numa_balancing_threshold(struct ctl_table *table, int write, void *buffer,
+		size_t *lenp, loff_t *ppos);
 int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
 		size_t *lenp, loff_t *ppos);
 
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ce8c620ed1f6..b1783708700a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1465,6 +1465,54 @@ static bool numa_migration_check_rate_limit(struct pglist_data *pgdat,
 	return true;
 }
 
+int sysctl_numa_balancing_threshold(struct ctl_table *table, int write, void *buffer,
+		size_t *lenp, loff_t *ppos)
+{
+	int err;
+	struct pglist_data *pgdat;
+
+	if (write && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+	if (err < 0 || !write)
+		return err;
+
+	for_each_online_pgdat(pgdat)
+		pgdat->numa_threshold = 0;
+
+	return err;
+}
+
+#define NUMA_MIGRATION_ADJUST_STEPS	16
+
+static void numa_migration_adjust_threshold(struct pglist_data *pgdat,
+					    unsigned long rate_limit,
+					    unsigned long ref_th)
+{
+	unsigned long now = jiffies, last_th_ts, th_period;
+	unsigned long unit_th, th;
+	unsigned long nr_cand, ref_cand, diff_cand;
+
+	th_period = msecs_to_jiffies(sysctl_numa_balancing_scan_period_max);
+	last_th_ts = pgdat->numa_threshold_ts;
+	if (now > last_th_ts + th_period &&
+	    cmpxchg(&pgdat->numa_threshold_ts, last_th_ts, now) == last_th_ts) {
+		ref_cand = rate_limit *
+			sysctl_numa_balancing_scan_period_max / 1000;
+		nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		diff_cand = nr_cand - pgdat->numa_threshold_nr_candidate;
+		unit_th = ref_th / NUMA_MIGRATION_ADJUST_STEPS;
+		th = pgdat->numa_threshold ? : ref_th;
+		if (diff_cand > ref_cand * 11 / 10)
+			th = max(th - unit_th, unit_th);
+		else if (diff_cand < ref_cand * 9 / 10)
+			th = min(th + unit_th, ref_th);
+		pgdat->numa_threshold_nr_candidate = nr_cand;
+		pgdat->numa_threshold = th;
+	}
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1479,19 +1527,22 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long rate_limit, latency, th;
+		unsigned long rate_limit, latency, th, def_th;
 
 		pgdat = NODE_DATA(dst_nid);
 		if (pgdat_free_space_enough(pgdat))
 			return true;
 
-		th = sysctl_numa_balancing_hot_threshold;
+		def_th = sysctl_numa_balancing_hot_threshold;
+		rate_limit =
+			sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
+		numa_migration_adjust_threshold(pgdat, rate_limit, def_th);
+
+		th = pgdat->numa_threshold ? : def_th;
 		latency = numa_hint_fault_latency(page);
 		if (latency > th)
 			return false;
 
-		rate_limit =
-			sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
 		return numa_migration_check_rate_limit(pgdat, rate_limit,
 						       thp_nr_pages(page));
 	}
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 0d89021bd66a..0a87d5877718 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1816,7 +1816,8 @@ static struct ctl_table kern_table[] = {
 		.data		= &sysctl_numa_balancing_hot_threshold,
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= sysctl_numa_balancing_threshold,
+		.extra1		= SYSCTL_ZERO,
 	},
 	{
 		.procname	= "numa_balancing_rate_limit_mbps",
-- 
2.30.2



* Re: [PATCH -V8 1/6] NUMA balancing: optimize page placement for memory tiering system
  2021-09-14  1:36 ` [PATCH -V8 1/6] NUMA balancing: optimize page " Huang Ying
@ 2021-09-14 22:40   ` Yang Shi
  2021-09-15  1:44     ` Huang, Ying
  0 siblings, 1 reply; 17+ messages in thread
From: Yang Shi @ 2021-09-14 22:40 UTC (permalink / raw)
  To: Huang Ying
  Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Linux MM

On Mon, Sep 13, 2021 at 6:37 PM Huang Ying <ying.huang@intel.com> wrote:
>
> With the advent of various new memory types, some machines will have
> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
> memory subsystem of these machines can be called memory tiering
> system, because the performance of the different types of memory are
> usually different.
>
> In such system, because of the memory accessing pattern changing etc,
> some pages in the slow memory may become hot globally.  So in this
> patch, the NUMA balancing mechanism is enhanced to optimize the page
> placement among the different memory types according to hot/cold
> dynamically.
>
> In a typical memory tiering system, there are CPUs, fast memory and
> slow memory in each physical NUMA node.  The CPUs and the fast memory
> will be put in one logical node (called fast memory node), while the
> slow memory will be put in another (faked) logical node (called slow
> memory node).  That is, the fast memory is regarded as local while the
> slow memory is regarded as remote.  So it's possible for the recently
> accessed pages in the slow memory node to be promoted to the fast
> memory node via the existing NUMA balancing mechanism.
>
> The original NUMA balancing mechanism stops migrating pages if the
> free memory of the target node would fall below the high watermark.
> This is a reasonable policy if there's only one memory type.  But it
> makes the original NUMA balancing mechanism almost useless for
> optimizing page placement among different memory types.  Details are
> as follows.
>
> In the common case, the working-set size of the workload is larger
> than the size of the fast memory nodes; otherwise, it's unnecessary
> to use the slow memory at all.  So in the common case, there are
> almost never enough free pages in the fast memory nodes, and the
> globally hot pages in the slow memory node cannot be promoted to the
> fast memory node.  To solve the issue, we have two choices, as
> follows,
>
> a. Ignore the free pages watermark check when promoting hot pages
>    from the slow memory node to the fast memory node.  This will
>    create some memory pressure in the fast memory node, thus
>    triggering memory reclaim, so that the cold pages in the fast
>    memory node will be demoted to the slow memory node.
>
> b. Make kswapd of the fast memory node reclaim pages until the free
>    pages are a little (about 10MB) above the high watermark.  Then,
>    if the free pages of the fast memory node reach the high watermark
>    and some hot pages need to be promoted, kswapd of the fast memory
>    node will be woken up to demote some cold pages in the fast memory
>    node to the slow memory node.  This frees some extra space in the
>    fast memory node, so the hot pages in the slow memory node can be
>    promoted to the fast memory node.
>
> The choice "a" will create the memory pressure in the fast memory
> node.  If the memory pressure of the workload is high, the memory
> pressure may become so high that the memory allocation latency of the
> workload is influenced, e.g. the direct reclaiming may be triggered.
>
> The choice "b" works much better at this aspect.  If the memory
> pressure of the workload is high, the hot pages promotion will stop
> earlier because its allocation watermark is higher than that of the
> normal memory allocation.  So in this patch, choice "b" is
> implemented.
>
> In addition to the original page placement optimization among sockets,
> the NUMA balancing mechanism is extended to optimize page placement
> according to hot/cold among different memory types.  So the sysctl
> user space interface (numa_balancing) is extended in a backward
> compatible way as follows, so that users can enable/disable each of
> these functions individually.
>
> The sysctl is converted from a Boolean value to a bit field.  The
> flags are defined as,
>
> - 0x0: NUMA_BALANCING_DISABLED
> - 0x1: NUMA_BALANCING_NORMAL
> - 0x2: NUMA_BALANCING_MEMORY_TIERING
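Since these are plain bit flags, the two features can be enabled independently or together; writing 3 to the numa_balancing sysctl enables both, which matches the new upper bound in the sysctl table further down.  A tiny standalone illustration of how the bits combine (not kernel code):

	#include <stdio.h>

	#define NUMA_BALANCING_DISABLED		0x0
	#define NUMA_BALANCING_NORMAL		0x1
	#define NUMA_BALANCING_MEMORY_TIERING	0x2

	int main(void)
	{
		int mode = NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING; /* == 3 */

		printf("cross-socket balancing: %s\n",
		       (mode & NUMA_BALANCING_NORMAL) ? "on" : "off");
		printf("tiering promotion:      %s\n",
		       (mode & NUMA_BALANCING_MEMORY_TIERING) ? "on" : "off");
		return 0;
	}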

Thanks for coming up with the patches. TBH the first question off the
top of my head is whether all the complexity is really worthwhile for
real-life workloads at the moment. And the interfaces (sysctl knob
files exported to users) look complicated for users; I don't know
whether users will know how to set optimal values for their workloads.

I don't disagree that NUMA balancing needs optimization and improvement
for tiered memory; the question we need to answer is how far we should
go for now and what the interfaces should look like. Does that make
sense to you?

IMHO I'd prefer the simplest and most straightforward approach at the
moment. For example, we could just skip the high watermark check for
PMEM promotion.

>
> TODO:
>
> - Update ABI document: Documentation/sysctl/kernel.txt
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: osalvador <osalvador@suse.de>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  include/linux/sched/sysctl.h | 10 ++++++++++
>  kernel/sched/core.c          | 10 ++++------
>  kernel/sysctl.c              |  7 ++++---
>  mm/migrate.c                 | 19 +++++++++++++++++--
>  mm/vmscan.c                  | 16 ++++++++++++++++
>  5 files changed, 51 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
> index 304f431178fd..bc54c1d75d6d 100644
> --- a/include/linux/sched/sysctl.h
> +++ b/include/linux/sched/sysctl.h
> @@ -35,6 +35,16 @@ enum sched_tunable_scaling {
>         SCHED_TUNABLESCALING_END,
>  };
>
> +#define NUMA_BALANCING_DISABLED                0x0
> +#define NUMA_BALANCING_NORMAL          0x1
> +#define NUMA_BALANCING_MEMORY_TIERING  0x2
> +
> +#ifdef CONFIG_NUMA_BALANCING
> +extern int sysctl_numa_balancing_mode;
> +#else
> +#define sysctl_numa_balancing_mode     0
> +#endif
> +
>  /*
>   *  control realtime throttling:
>   *
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 1bba4128a3e6..e61c2d415601 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4228,6 +4228,8 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
>
>  #ifdef CONFIG_NUMA_BALANCING
>
> +int sysctl_numa_balancing_mode;
> +
>  void set_numabalancing_state(bool enabled)
>  {
>         if (enabled)
> @@ -4240,20 +4242,16 @@ void set_numabalancing_state(bool enabled)
>  int sysctl_numa_balancing(struct ctl_table *table, int write,
>                           void *buffer, size_t *lenp, loff_t *ppos)
>  {
> -       struct ctl_table t;
>         int err;
> -       int state = static_branch_likely(&sched_numa_balancing);
>
>         if (write && !capable(CAP_SYS_ADMIN))
>                 return -EPERM;
>
> -       t = *table;
> -       t.data = &state;
> -       err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
> +       err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
>         if (err < 0)
>                 return err;
>         if (write)
> -               set_numabalancing_state(state);
> +               set_numabalancing_state(*(int *)table->data);
>         return err;
>  }
>  #endif
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 083be6af29d7..666c58455355 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -115,6 +115,7 @@ static int sixty = 60;
>
>  static int __maybe_unused neg_one = -1;
>  static int __maybe_unused two = 2;
> +static int __maybe_unused three = 3;
>  static int __maybe_unused four = 4;
>  static unsigned long zero_ul;
>  static unsigned long one_ul = 1;
> @@ -1803,12 +1804,12 @@ static struct ctl_table kern_table[] = {
>  #ifdef CONFIG_NUMA_BALANCING
>         {
>                 .procname       = "numa_balancing",
> -               .data           = NULL, /* filled in by handler */
> -               .maxlen         = sizeof(unsigned int),
> +               .data           = &sysctl_numa_balancing_mode,
> +               .maxlen         = sizeof(int),
>                 .mode           = 0644,
>                 .proc_handler   = sysctl_numa_balancing,
>                 .extra1         = SYSCTL_ZERO,
> -               .extra2         = SYSCTL_ONE,
> +               .extra2         = &three,
>         },
>  #endif /* CONFIG_NUMA_BALANCING */
>         {
> diff --git a/mm/migrate.c b/mm/migrate.c
> index a6a7743ee98f..a159a36dd412 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -50,6 +50,7 @@
>  #include <linux/ptrace.h>
>  #include <linux/oom.h>
>  #include <linux/memory.h>
> +#include <linux/sched/sysctl.h>
>
>  #include <asm/tlbflush.h>
>
> @@ -2110,16 +2111,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  {
>         int page_lru;
>         int nr_pages = thp_nr_pages(page);
> +       int order = compound_order(page);
>
> -       VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
> +       VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>
>         /* Do not migrate THP mapped by multiple processes */
>         if (PageTransHuge(page) && total_mapcount(page) > 1)
>                 return 0;
>
>         /* Avoid migrating to a node that is nearly full */
> -       if (!migrate_balanced_pgdat(pgdat, nr_pages))
> +       if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> +               int z;
> +
> +               if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
> +                   !numa_demotion_enabled)
> +                       return 0;
> +               if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
> +                       return 0;
> +               for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> +                       if (populated_zone(pgdat->node_zones + z))
> +                               break;
> +               }
> +               wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
>                 return 0;
> +       }
>
>         if (isolate_lru_page(page))
>                 return 0;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index f441c5946a4c..7fe737fd0e03 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -56,6 +56,7 @@
>
>  #include <linux/swapops.h>
>  #include <linux/balloon_compaction.h>
> +#include <linux/sched/sysctl.h>
>
>  #include "internal.h"
>
> @@ -3775,6 +3776,12 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
>         return false;
>  }
>
> +/*
> + * Keep the free pages on fast memory node a little more than the high
> + * watermark to accommodate the promoted pages.
> + */
> +#define NUMA_BALANCING_PROMOTE_WATERMARK       (10UL * 1024 * 1024 >> PAGE_SHIFT)
> +
>  /*
>   * Returns true if there is an eligible zone balanced for the request order
>   * and highest_zoneidx
> @@ -3796,6 +3803,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>                         continue;
>
>                 mark = high_wmark_pages(zone);
> +               if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
> +                   numa_demotion_enabled &&
> +                   next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
> +                       unsigned long promote_mark;
> +
> +                       promote_mark = min(NUMA_BALANCING_PROMOTE_WATERMARK,
> +                                          pgdat->node_present_pages >> 6);
> +                       mark += promote_mark;
> +               }
>                 if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
>                         return true;
>         }
> --
> 2.30.2
>
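To make the 10MB figure in the description concrete: with the pgdat_balanced() change quoted above, a fast-memory node that can demote is only considered balanced once its free pages reach roughly the following target (a sketch of the same arithmetic; the helper name is made up):

	/* Extra headroom kswapd keeps on a toptier node for future promotions. */
	static unsigned long promote_target(struct pglist_data *pgdat, struct zone *zone)
	{
		unsigned long cap = 10UL * 1024 * 1024 >> PAGE_SHIFT;	/* 10MB in pages */
		unsigned long scaled = pgdat->node_present_pages >> 6;	/* 1/64 of the node */

		return high_wmark_pages(zone) + min(cap, scaled);
	}

So small DRAM nodes keep 1/64 of their present pages as extra headroom, while anything larger than about 640MB keeps a flat 10MB above the high watermark.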

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH -V8 2/6] memory tiering: add page promotion counter
  2021-09-14  1:36 ` [PATCH -V8 2/6] memory tiering: add page promotion counter Huang Ying
@ 2021-09-14 22:41   ` Yang Shi
  2021-09-15  1:53     ` Huang, Ying
  0 siblings, 1 reply; 17+ messages in thread
From: Yang Shi @ 2021-09-14 22:41 UTC (permalink / raw)
  To: Huang Ying
  Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Linux MM

On Mon, Sep 13, 2021 at 6:37 PM Huang Ying <ying.huang@intel.com> wrote:
>
> Add a counter to distinguish the number of pages promoted by memory
> tiering from that of the pages migrated by the original inter-socket
> NUMA balancing.  The counter is per-node (counted in the target
> node), so it can be used to identify promotion imbalance among the
> NUMA nodes.

I'd like this patch to be the very first one in the series, since we
need such counters regardless of the optimizations. Actually, I think
this patch could go with the already-merged "migration in lieu of
discard" patchset.

>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Mel Gorman <mgorman@suse.de>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: osalvador <osalvador@suse.de>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  include/linux/mmzone.h |  3 +++
>  include/linux/node.h   |  5 +++++
>  mm/migrate.c           | 11 +++++++++--
>  mm/vmstat.c            |  3 +++
>  4 files changed, 20 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 6a1d79d84675..37ccd6158765 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -209,6 +209,9 @@ enum node_stat_item {
>         NR_PAGETABLE,           /* used for pagetables */
>  #ifdef CONFIG_SWAP
>         NR_SWAPCACHE,
> +#endif
> +#ifdef CONFIG_NUMA_BALANCING
> +       PGPROMOTE_SUCCESS,      /* promote successfully */
>  #endif
>         NR_VM_NODE_STAT_ITEMS
>  };
> diff --git a/include/linux/node.h b/include/linux/node.h
> index 8e5a29897936..26e96fcc66af 100644
> --- a/include/linux/node.h
> +++ b/include/linux/node.h
> @@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
>
>  #define to_node(device) container_of(device, struct node, dev)
>
> +static inline bool node_is_toptier(int node)
> +{
> +       return node_state(node, N_CPU);
> +}
> +
>  #endif /* _LINUX_NODE_H_ */
> diff --git a/mm/migrate.c b/mm/migrate.c
> index a159a36dd412..6f7a6e2ef41f 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2163,6 +2163,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>         pg_data_t *pgdat = NODE_DATA(node);
>         int isolated;
>         int nr_remaining;
> +       int nr_succeeded;
>         LIST_HEAD(migratepages);
>         new_page_t *new;
>         bool compound;
> @@ -2201,7 +2202,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>
>         list_add(&page->lru, &migratepages);
>         nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
> -                                    MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
> +                                    MIGRATE_ASYNC, MR_NUMA_MISPLACED,
> +                                    &nr_succeeded);
>         if (nr_remaining) {
>                 if (!list_empty(&migratepages)) {
>                         list_del(&page->lru);
> @@ -2210,8 +2212,13 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>                         putback_lru_page(page);
>                 }
>                 isolated = 0;
> -       } else
> +       } else {
>                 count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
> +               if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
> +                   !node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
> +                       mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
> +                                           nr_succeeded);
> +       }
>         BUG_ON(!list_empty(&migratepages));
>         return isolated;
>
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 8ce2620344b2..fff0ec94d795 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1236,6 +1236,9 @@ const char * const vmstat_text[] = {
>  #ifdef CONFIG_SWAP
>         "nr_swapcached",
>  #endif
> +#ifdef CONFIG_NUMA_BALANCING
> +       "pgpromote_success",
> +#endif
>
>         /* enum writeback_stat_item counters */
>         "nr_dirty_threshold",
> --
> 2.30.2
>
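If this lands, the new counter should appear, like the other node stat items, as a "pgpromote_success" line in each node's vmstat file under sysfs.  A minimal user-space sketch for watching it (the path follows the usual per-node sysfs layout; node0 standing in for a DRAM node is an assumption):

	#include <stdio.h>
	#include <string.h>

	int main(void)
	{
		FILE *f = fopen("/sys/devices/system/node/node0/vmstat", "r");
		char line[128];

		if (!f) {
			perror("fopen");
			return 1;
		}
		while (fgets(line, sizeof(line), f)) {
			/* Print only the promotion counter added by this patch. */
			if (!strncmp(line, "pgpromote_success", strlen("pgpromote_success")))
				fputs(line, stdout);
		}
		fclose(f);
		return 0;
	}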

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH -V8 1/6] NUMA balancing: optimize page placement for memory tiering system
  2021-09-14 22:40   ` Yang Shi
@ 2021-09-15  1:44     ` Huang, Ying
  2021-09-15  2:47       ` Yang Shi
  0 siblings, 1 reply; 17+ messages in thread
From: Huang, Ying @ 2021-09-15  1:44 UTC (permalink / raw)
  To: Yang Shi
  Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Linux MM

Yang Shi <shy828301@gmail.com> writes:

> On Mon, Sep 13, 2021 at 6:37 PM Huang Ying <ying.huang@intel.com> wrote:
>>
>> With the advent of various new memory types, some machines will have
>> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
>> memory subsystem of these machines can be called memory tiering
>> system, because the performance of the different types of memory are
>> usually different.
>>
>> In such system, because of the memory accessing pattern changing etc,
>> some pages in the slow memory may become hot globally.  So in this
>> patch, the NUMA balancing mechanism is enhanced to optimize the page
>> placement among the different memory types according to hot/cold
>> dynamically.
>>
>> In a typical memory tiering system, there are CPUs, fast memory and
>> slow memory in each physical NUMA node.  The CPUs and the fast memory
>> will be put in one logical node (called fast memory node), while the
>> slow memory will be put in another (faked) logical node (called slow
>> memory node).  That is, the fast memory is regarded as local while the
>> slow memory is regarded as remote.  So it's possible for the recently
>> accessed pages in the slow memory node to be promoted to the fast
>> memory node via the existing NUMA balancing mechanism.
>>
>> The original NUMA balancing mechanism will stop to migrate pages if the free
>> memory of the target node will become below the high watermark.  This
>> is a reasonable policy if there's only one memory type.  But this
>> makes the original NUMA balancing mechanism almost not work to optimize page
>> placement among different memory types.  Details are as follows.
>>
>> It's the common cases that the working-set size of the workload is
>> larger than the size of the fast memory nodes.  Otherwise, it's
>> unnecessary to use the slow memory at all.  So in the common cases,
>> there are almost always no enough free pages in the fast memory nodes,
>> so that the globally hot pages in the slow memory node cannot be
>> promoted to the fast memory node.  To solve the issue, we have 2
>> choices as follows,
>>
>> a. Ignore the free pages watermark checking when promoting hot pages
>>    from the slow memory node to the fast memory node.  This will
>>    create some memory pressure in the fast memory node, thus trigger
>>    the memory reclaiming.  So that, the cold pages in the fast memory
>>    node will be demoted to the slow memory node.
>>
>> b. Make kswapd of the fast memory node to reclaim pages until the free
>>    pages are a little more (about 10MB) than the high watermark.  Then,
>>    if the free pages of the fast memory node reaches high watermark, and
>>    some hot pages need to be promoted, kswapd of the fast memory node
>>    will be waken up to demote some cold pages in the fast memory node to
>>    the slow memory node.  This will free some extra space in the fast
>>    memory node, so the hot pages in the slow memory node can be
>>    promoted to the fast memory node.
>>
>> The choice "a" will create the memory pressure in the fast memory
>> node.  If the memory pressure of the workload is high, the memory
>> pressure may become so high that the memory allocation latency of the
>> workload is influenced, e.g. the direct reclaiming may be triggered.
>>
>> The choice "b" works much better at this aspect.  If the memory
>> pressure of the workload is high, the hot pages promotion will stop
>> earlier because its allocation watermark is higher than that of the
>> normal memory allocation.  So in this patch, choice "b" is
>> implemented.
>>
>> In addition to the original page placement optimization among sockets,
>> the NUMA balancing mechanism is extended to be used to optimize page
>> placement according to hot/cold among different memory types.  So the
>> sysctl user space interface (numa_balancing) is extended in a backward
>> compatible way as follow, so that the users can enable/disable these
>> functionality individually.
>>
>> The sysctl is converted from a Boolean value to a bits field.  The
>> definition of the flags is,
>>
>> - 0x0: NUMA_BALANCING_DISABLED
>> - 0x1: NUMA_BALANCING_NORMAL
>> - 0x2: NUMA_BALANCING_MEMORY_TIERING
>
> Thanks for coming up with the patches. TBH the first question off the
> top of my head is all the complexity is really worthy for real life
> workload at the moment? And the interfaces (sysctl knob files exported
> to users) look complicated for the users. I don't know if the users
> know how to set an optimal value for their workloads.
>
> I don't disagree the NUMA balancing needs optimization and improvement
> for tiering memory, the question we need answer is how far we should
> go for now and what the interfaces should look like. Does it make
> sense to you?
>
> IMHO I'd prefer the most simple and straightforward approach at the
> moment. For example, we could just skip high water mark check for PMEM
> promotion.

Hi, Yang,

Thanks for your comments.

I understand your concerns about complexity.  I have tried to organize
the patchset so that the initial patch is as simple as possible and the
complexity is introduced step by step.  But it seems that your simplest
version is even simpler than mine :-)

This patch ([1/6]) introduces two things.

Firstly, a sysctl knob is provided to disable the NUMA balancing based
promotion.  Per my understanding, you suggest removing this.  If so,
optimizing cross-socket access and promoting hot PMEM pages to DRAM must
be enabled/disabled together.  If a user wants to enable promoting hot
PMEM pages to DRAM but disable optimizing cross-socket access, because
they have already bound the CPUs of the workload so that there's not
much cross-socket access, how can they do that?

Secondly, we add a promote watermark to the DRAM node so that we can
demote/promote pages between the high and promote watermarks.  Per my
understanding, you suggest just ignoring the high watermark check when
promoting.  The problem is that this may leave the DRAM node with too
few free pages.  If many pages are promoted in a short time, the free
pages will stay near the min watermark for a while, so that page
allocations from the application will trigger direct reclaim.  We have
observed page allocation failures in a previous test with a similar policy.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH -V8 2/6] memory tiering: add page promotion counter
  2021-09-14 22:41   ` Yang Shi
@ 2021-09-15  1:53     ` Huang, Ying
  0 siblings, 0 replies; 17+ messages in thread
From: Huang, Ying @ 2021-09-15  1:53 UTC (permalink / raw)
  To: Yang Shi
  Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Linux MM

Yang Shi <shy828301@gmail.com> writes:

> On Mon, Sep 13, 2021 at 6:37 PM Huang Ying <ying.huang@intel.com> wrote:
>>
>> To distinguish the number of the memory tiering promoted pages from
>> that of the originally inter-socket NUMA balancing migrated pages.
>> The counter is per-node (count in the target node).  So this can be
>> used to identify promotion imbalance among the NUMA nodes.
>
> I'd like this patch be the very first one in the series. Since we need
> such counters regardless of all the optimizations. And actually I
> think this patch could go with the merged "migration in lieu of
> discard" patchset.

Yes.  This sounds reasonable.  I will change this in the next version.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH -V8 1/6] NUMA balancing: optimize page placement for memory tiering system
  2021-09-15  1:44     ` Huang, Ying
@ 2021-09-15  2:47       ` Yang Shi
  2021-09-15  3:58         ` Huang, Ying
  0 siblings, 1 reply; 17+ messages in thread
From: Yang Shi @ 2021-09-15  2:47 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Linux MM

On Tue, Sep 14, 2021 at 6:45 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> Yang Shi <shy828301@gmail.com> writes:
>
> > On Mon, Sep 13, 2021 at 6:37 PM Huang Ying <ying.huang@intel.com> wrote:
> >>
> >> With the advent of various new memory types, some machines will have
> >> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
> >> memory subsystem of these machines can be called memory tiering
> >> system, because the performance of the different types of memory are
> >> usually different.
> >>
> >> In such system, because of the memory accessing pattern changing etc,
> >> some pages in the slow memory may become hot globally.  So in this
> >> patch, the NUMA balancing mechanism is enhanced to optimize the page
> >> placement among the different memory types according to hot/cold
> >> dynamically.
> >>
> >> In a typical memory tiering system, there are CPUs, fast memory and
> >> slow memory in each physical NUMA node.  The CPUs and the fast memory
> >> will be put in one logical node (called fast memory node), while the
> >> slow memory will be put in another (faked) logical node (called slow
> >> memory node).  That is, the fast memory is regarded as local while the
> >> slow memory is regarded as remote.  So it's possible for the recently
> >> accessed pages in the slow memory node to be promoted to the fast
> >> memory node via the existing NUMA balancing mechanism.
> >>
> >> The original NUMA balancing mechanism will stop to migrate pages if the free
> >> memory of the target node will become below the high watermark.  This
> >> is a reasonable policy if there's only one memory type.  But this
> >> makes the original NUMA balancing mechanism almost not work to optimize page
> >> placement among different memory types.  Details are as follows.
> >>
> >> It's the common cases that the working-set size of the workload is
> >> larger than the size of the fast memory nodes.  Otherwise, it's
> >> unnecessary to use the slow memory at all.  So in the common cases,
> >> there are almost always no enough free pages in the fast memory nodes,
> >> so that the globally hot pages in the slow memory node cannot be
> >> promoted to the fast memory node.  To solve the issue, we have 2
> >> choices as follows,
> >>
> >> a. Ignore the free pages watermark checking when promoting hot pages
> >>    from the slow memory node to the fast memory node.  This will
> >>    create some memory pressure in the fast memory node, thus trigger
> >>    the memory reclaiming.  So that, the cold pages in the fast memory
> >>    node will be demoted to the slow memory node.
> >>
> >> b. Make kswapd of the fast memory node to reclaim pages until the free
> >>    pages are a little more (about 10MB) than the high watermark.  Then,
> >>    if the free pages of the fast memory node reaches high watermark, and
> >>    some hot pages need to be promoted, kswapd of the fast memory node
> >>    will be waken up to demote some cold pages in the fast memory node to
> >>    the slow memory node.  This will free some extra space in the fast
> >>    memory node, so the hot pages in the slow memory node can be
> >>    promoted to the fast memory node.
> >>
> >> The choice "a" will create the memory pressure in the fast memory
> >> node.  If the memory pressure of the workload is high, the memory
> >> pressure may become so high that the memory allocation latency of the
> >> workload is influenced, e.g. the direct reclaiming may be triggered.
> >>
> >> The choice "b" works much better at this aspect.  If the memory
> >> pressure of the workload is high, the hot pages promotion will stop
> >> earlier because its allocation watermark is higher than that of the
> >> normal memory allocation.  So in this patch, choice "b" is
> >> implemented.
> >>
> >> In addition to the original page placement optimization among sockets,
> >> the NUMA balancing mechanism is extended to be used to optimize page
> >> placement according to hot/cold among different memory types.  So the
> >> sysctl user space interface (numa_balancing) is extended in a backward
> >> compatible way as follow, so that the users can enable/disable these
> >> functionality individually.
> >>
> >> The sysctl is converted from a Boolean value to a bits field.  The
> >> definition of the flags is,
> >>
> >> - 0x0: NUMA_BALANCING_DISABLED
> >> - 0x1: NUMA_BALANCING_NORMAL
> >> - 0x2: NUMA_BALANCING_MEMORY_TIERING
> >
> > Thanks for coming up with the patches. TBH the first question off the
> > top of my head is all the complexity is really worthy for real life
> > workload at the moment? And the interfaces (sysctl knob files exported
> > to users) look complicated for the users. I don't know if the users
> > know how to set an optimal value for their workloads.
> >
> > I don't disagree the NUMA balancing needs optimization and improvement
> > for tiering memory, the question we need answer is how far we should
> > go for now and what the interfaces should look like. Does it make
> > sense to you?
> >
> > IMHO I'd prefer the most simple and straightforward approach at the
> > moment. For example, we could just skip high water mark check for PMEM
> > promotion.
>
> Hi, Yang,
>
> Thanks for comments.
>
> I understand your concerns about complexity.  I have tried to organize
> the patchset so that the initial patch is as simple as possible and the
> complexity is introduced step by step.  But it seems that your simplest
> version is even simpler than my one :-)
>
> In this patch ([1/6]), I introduced 2 stuff.
>
> Firstly, a sysctl knob is provided to disable the NUMA balancing based
> promotion.  Per my understanding, you suggest to remove this.  If so,
> optimizing cross-socket access and promoting hot PMEM pages to DRAM must
> be enabled/disabled together.  If a user wants to enable promoting the
> hot PMEM pages to DRAM but disable optimizing cross-socket access
> because they have already bound the CPU of the workload so that there's no
> much cross-socket access, how can they do?

I should make myself clearer. Here I mean the whole series, not this
specific patch. I'm concerned that the interfaces (hint fault latency
and rate limit) are hard for users to understand and configure, and
that we may be going too far at the moment. I deal with end users, and
I'd admit I'm not even sure how to configure the knobs to achieve
optimal performance for different real-life workloads.

For this specific patch I'm OK with a new promotion mode. There might
be use cases where users just want to do promotion between memory
tiers but don't care about NUMA locality.

>
> Secondly, we add a promote watermark to the DRAM node so that we can
> demote/promote pages between the high and promote watermark.  Per my
> understanding, you suggest just to ignore the high watermark checking
> for promoting.  The problem is that this may make the free pages of the
> DRAM node too few.  If many pages are promoted in short time, the free
> pages will be kept near the min watermark for a while, so that the page
> allocation from the application will trigger direct reclaiming.  We have
> observed page allocation failure in a test before with a similar policy.

The question, which applies to the hint fault latency and rate limit
too, is this: we already have NUMA balancing knobs to control the scan
period and scan size, and watermark knobs to tune how aggressively
kswapd works; can they do the same job instead of introducing any new
knobs?

>
> Best Regards,
> Huang, Ying

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH -V8 1/6] NUMA balancing: optimize page placement for memory tiering system
  2021-09-15  2:47       ` Yang Shi
@ 2021-09-15  3:58         ` Huang, Ying
  2021-09-15 21:32           ` Yang Shi
  0 siblings, 1 reply; 17+ messages in thread
From: Huang, Ying @ 2021-09-15  3:58 UTC (permalink / raw)
  To: Yang Shi
  Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Linux MM

Yang Shi <shy828301@gmail.com> writes:

> On Tue, Sep 14, 2021 at 6:45 PM Huang, Ying <ying.huang@intel.com> wrote:
>>
>> Yang Shi <shy828301@gmail.com> writes:
>>
>> > On Mon, Sep 13, 2021 at 6:37 PM Huang Ying <ying.huang@intel.com> wrote:
>> >>
>> >> With the advent of various new memory types, some machines will have
>> >> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
>> >> memory subsystem of these machines can be called memory tiering
>> >> system, because the performance of the different types of memory are
>> >> usually different.
>> >>
>> >> In such system, because of the memory accessing pattern changing etc,
>> >> some pages in the slow memory may become hot globally.  So in this
>> >> patch, the NUMA balancing mechanism is enhanced to optimize the page
>> >> placement among the different memory types according to hot/cold
>> >> dynamically.
>> >>
>> >> In a typical memory tiering system, there are CPUs, fast memory and
>> >> slow memory in each physical NUMA node.  The CPUs and the fast memory
>> >> will be put in one logical node (called fast memory node), while the
>> >> slow memory will be put in another (faked) logical node (called slow
>> >> memory node).  That is, the fast memory is regarded as local while the
>> >> slow memory is regarded as remote.  So it's possible for the recently
>> >> accessed pages in the slow memory node to be promoted to the fast
>> >> memory node via the existing NUMA balancing mechanism.
>> >>
>> >> The original NUMA balancing mechanism will stop to migrate pages if the free
>> >> memory of the target node will become below the high watermark.  This
>> >> is a reasonable policy if there's only one memory type.  But this
>> >> makes the original NUMA balancing mechanism almost not work to optimize page
>> >> placement among different memory types.  Details are as follows.
>> >>
>> >> It's the common cases that the working-set size of the workload is
>> >> larger than the size of the fast memory nodes.  Otherwise, it's
>> >> unnecessary to use the slow memory at all.  So in the common cases,
>> >> there are almost always no enough free pages in the fast memory nodes,
>> >> so that the globally hot pages in the slow memory node cannot be
>> >> promoted to the fast memory node.  To solve the issue, we have 2
>> >> choices as follows,
>> >>
>> >> a. Ignore the free pages watermark checking when promoting hot pages
>> >>    from the slow memory node to the fast memory node.  This will
>> >>    create some memory pressure in the fast memory node, thus trigger
>> >>    the memory reclaiming.  So that, the cold pages in the fast memory
>> >>    node will be demoted to the slow memory node.
>> >>
>> >> b. Make kswapd of the fast memory node to reclaim pages until the free
>> >>    pages are a little more (about 10MB) than the high watermark.  Then,
>> >>    if the free pages of the fast memory node reaches high watermark, and
>> >>    some hot pages need to be promoted, kswapd of the fast memory node
>> >>    will be waken up to demote some cold pages in the fast memory node to
>> >>    the slow memory node.  This will free some extra space in the fast
>> >>    memory node, so the hot pages in the slow memory node can be
>> >>    promoted to the fast memory node.
>> >>
>> >> The choice "a" will create the memory pressure in the fast memory
>> >> node.  If the memory pressure of the workload is high, the memory
>> >> pressure may become so high that the memory allocation latency of the
>> >> workload is influenced, e.g. the direct reclaiming may be triggered.
>> >>
>> >> The choice "b" works much better at this aspect.  If the memory
>> >> pressure of the workload is high, the hot pages promotion will stop
>> >> earlier because its allocation watermark is higher than that of the
>> >> normal memory allocation.  So in this patch, choice "b" is
>> >> implemented.
>> >>
>> >> In addition to the original page placement optimization among sockets,
>> >> the NUMA balancing mechanism is extended to be used to optimize page
>> >> placement according to hot/cold among different memory types.  So the
>> >> sysctl user space interface (numa_balancing) is extended in a backward
>> >> compatible way as follow, so that the users can enable/disable these
>> >> functionality individually.
>> >>
>> >> The sysctl is converted from a Boolean value to a bits field.  The
>> >> definition of the flags is,
>> >>
>> >> - 0x0: NUMA_BALANCING_DISABLED
>> >> - 0x1: NUMA_BALANCING_NORMAL
>> >> - 0x2: NUMA_BALANCING_MEMORY_TIERING
>> >
>> > Thanks for coming up with the patches. TBH the first question off the
>> > top of my head is all the complexity is really worthy for real life
>> > workload at the moment? And the interfaces (sysctl knob files exported
>> > to users) look complicated for the users. I don't know if the users
>> > know how to set an optimal value for their workloads.
>> >
>> > I don't disagree the NUMA balancing needs optimization and improvement
>> > for tiering memory, the question we need answer is how far we should
>> > go for now and what the interfaces should look like. Does it make
>> > sense to you?
>> >
>> > IMHO I'd prefer the most simple and straightforward approach at the
>> > moment. For example, we could just skip high water mark check for PMEM
>> > promotion.
>>
>> Hi, Yang,
>>
>> Thanks for comments.
>>
>> I understand your concerns about complexity.  I have tried to organize
>> the patchset so that the initial patch is as simple as possible and the
>> complexity is introduced step by step.  But it seems that your simplest
>> version is even simpler than my one :-)
>>
>> In this patch ([1/6]), I introduced 2 stuff.
>>
>> Firstly, a sysctl knob is provided to disable the NUMA balancing based
>> promotion.  Per my understanding, you suggest to remove this.  If so,
>> optimizing cross-socket access and promoting hot PMEM pages to DRAM must
>> be enabled/disabled together.  If a user wants to enable promoting the
>> hot PMEM pages to DRAM but disable optimizing cross-socket access
>> because they have already bound the CPU of the workload so that there's no
>> much cross-socket access, how can they do?
>
> I should make myself clearer. Here I mean the whole series, not this
> specific patch. I'm concerned that the interfaces (hint fault latency
> and ratelimit) are hard to understand and configure for users and
> whether we go too far at the moment or not. I'm dealing with the end
> users, I'd admit I'm not even sure how to configure the knobs to
> achieve optimal performance for different real life workloads.

Sorry, I misunderstood your original idea.  I understand that the knobs
aren't user-friendly, but sometimes we cannot avoid them completely :-(
In this patchset, I try to introduce the complexity and knobs one by
one, and show the performance benefit of each step, so that people can
judge whether the newly added complexity and knobs are justified by the
performance increase.  If the benefit of some patches does not justify
their complexity, I am OK with merging just part of the patchset first.

So how about being more specific?  For example, if you are generally OK
with the complexity and knobs introduced by [1-3/6], but have concerns
about [4/6], then we can discuss that specifically.

> For this specific patch I'm ok to a new promotion mode. There might be
> usecase that users just want to do promotion between tiered memory but
> not care about NUMA locality.

Yes.

>> Secondly, we add a promote watermark to the DRAM node so that we can
>> demote/promote pages between the high and promote watermark.  Per my
>> understanding, you suggest just to ignore the high watermark checking
>> for promoting.  The problem is that this may make the free pages of the
>> DRAM node too few.  If many pages are promoted in short time, the free
>> pages will be kept near the min watermark for a while, so that the page
>> allocation from the application will trigger direct reclaiming.  We have
>> observed page allocation failure in a test before with a similar policy.
>
> The question is, applicable to the hint fault latency and ratelimit
> too, we already have some NUMA balancing knobs to control scan period
> and scan size and watermark knobs to tune how aggressively kswapd
> works, can they do the same jobs instead of introducing any new knobs?

In this specific patch, we don't introduce a new knob for page
demotion.  For the other knobs, how about discussing them one by one in
the patches that introduce them?

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH -V8 1/6] NUMA balancing: optimize page placement for memory tiering system
  2021-09-15  3:58         ` Huang, Ying
@ 2021-09-15 21:32           ` Yang Shi
  2021-09-16  1:44             ` Huang, Ying
  0 siblings, 1 reply; 17+ messages in thread
From: Yang Shi @ 2021-09-15 21:32 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Linux MM

On Tue, Sep 14, 2021 at 8:58 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> Yang Shi <shy828301@gmail.com> writes:
>
> > On Tue, Sep 14, 2021 at 6:45 PM Huang, Ying <ying.huang@intel.com> wrote:
> >>
> >> Yang Shi <shy828301@gmail.com> writes:
> >>
> >> > On Mon, Sep 13, 2021 at 6:37 PM Huang Ying <ying.huang@intel.com> wrote:
> >> >>
> >> >> With the advent of various new memory types, some machines will have
> >> >> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
> >> >> memory subsystem of these machines can be called memory tiering
> >> >> system, because the performance of the different types of memory are
> >> >> usually different.
> >> >>
> >> >> In such system, because of the memory accessing pattern changing etc,
> >> >> some pages in the slow memory may become hot globally.  So in this
> >> >> patch, the NUMA balancing mechanism is enhanced to optimize the page
> >> >> placement among the different memory types according to hot/cold
> >> >> dynamically.
> >> >>
> >> >> In a typical memory tiering system, there are CPUs, fast memory and
> >> >> slow memory in each physical NUMA node.  The CPUs and the fast memory
> >> >> will be put in one logical node (called fast memory node), while the
> >> >> slow memory will be put in another (faked) logical node (called slow
> >> >> memory node).  That is, the fast memory is regarded as local while the
> >> >> slow memory is regarded as remote.  So it's possible for the recently
> >> >> accessed pages in the slow memory node to be promoted to the fast
> >> >> memory node via the existing NUMA balancing mechanism.
> >> >>
> >> >> The original NUMA balancing mechanism will stop to migrate pages if the free
> >> >> memory of the target node will become below the high watermark.  This
> >> >> is a reasonable policy if there's only one memory type.  But this
> >> >> makes the original NUMA balancing mechanism almost not work to optimize page
> >> >> placement among different memory types.  Details are as follows.
> >> >>
> >> >> It's the common cases that the working-set size of the workload is
> >> >> larger than the size of the fast memory nodes.  Otherwise, it's
> >> >> unnecessary to use the slow memory at all.  So in the common cases,
> >> >> there are almost always no enough free pages in the fast memory nodes,
> >> >> so that the globally hot pages in the slow memory node cannot be
> >> >> promoted to the fast memory node.  To solve the issue, we have 2
> >> >> choices as follows,
> >> >>
> >> >> a. Ignore the free pages watermark checking when promoting hot pages
> >> >>    from the slow memory node to the fast memory node.  This will
> >> >>    create some memory pressure in the fast memory node, thus trigger
> >> >>    the memory reclaiming.  So that, the cold pages in the fast memory
> >> >>    node will be demoted to the slow memory node.
> >> >>
> >> >> b. Make kswapd of the fast memory node to reclaim pages until the free
> >> >>    pages are a little more (about 10MB) than the high watermark.  Then,
> >> >>    if the free pages of the fast memory node reaches high watermark, and
> >> >>    some hot pages need to be promoted, kswapd of the fast memory node
> >> >>    will be waken up to demote some cold pages in the fast memory node to
> >> >>    the slow memory node.  This will free some extra space in the fast
> >> >>    memory node, so the hot pages in the slow memory node can be
> >> >>    promoted to the fast memory node.
> >> >>
> >> >> The choice "a" will create the memory pressure in the fast memory
> >> >> node.  If the memory pressure of the workload is high, the memory
> >> >> pressure may become so high that the memory allocation latency of the
> >> >> workload is influenced, e.g. the direct reclaiming may be triggered.
> >> >>
> >> >> The choice "b" works much better at this aspect.  If the memory
> >> >> pressure of the workload is high, the hot pages promotion will stop
> >> >> earlier because its allocation watermark is higher than that of the
> >> >> normal memory allocation.  So in this patch, choice "b" is
> >> >> implemented.
> >> >>
> >> >> In addition to the original page placement optimization among sockets,
> >> >> the NUMA balancing mechanism is extended to be used to optimize page
> >> >> placement according to hot/cold among different memory types.  So the
> >> >> sysctl user space interface (numa_balancing) is extended in a backward
> >> >> compatible way as follow, so that the users can enable/disable these
> >> >> functionality individually.
> >> >>
> >> >> The sysctl is converted from a Boolean value to a bits field.  The
> >> >> definition of the flags is,
> >> >>
> >> >> - 0x0: NUMA_BALANCING_DISABLED
> >> >> - 0x1: NUMA_BALANCING_NORMAL
> >> >> - 0x2: NUMA_BALANCING_MEMORY_TIERING
> >> >
> >> > Thanks for coming up with the patches. TBH the first question off the
> >> > top of my head is all the complexity is really worthy for real life
> >> > workload at the moment? And the interfaces (sysctl knob files exported
> >> > to users) look complicated for the users. I don't know if the users
> >> > know how to set an optimal value for their workloads.
> >> >
> >> > I don't disagree the NUMA balancing needs optimization and improvement
> >> > for tiering memory, the question we need answer is how far we should
> >> > go for now and what the interfaces should look like. Does it make
> >> > sense to you?
> >> >
> >> > IMHO I'd prefer the most simple and straightforward approach at the
> >> > moment. For example, we could just skip high water mark check for PMEM
> >> > promotion.
> >>
> >> Hi, Yang,
> >>
> >> Thanks for comments.
> >>
> >> I understand your concerns about complexity.  I have tried to organize
> >> the patchset so that the initial patch is as simple as possible and the
> >> complexity is introduced step by step.  But it seems that your simplest
> >> version is even simpler than my one :-)
> >>
> >> In this patch ([1/6]), I introduced 2 stuff.
> >>
> >> Firstly, a sysctl knob is provided to disable the NUMA balancing based
> >> promotion.  Per my understanding, you suggest to remove this.  If so,
> >> optimizing cross-socket access and promoting hot PMEM pages to DRAM must
> >> be enabled/disabled together.  If a user wants to enable promoting the
> >> hot PMEM pages to DRAM but disable optimizing cross-socket access
> >> because they have already bound the CPU of the workload so that there's no
> >> much cross-socket access, how can they do?
> >
> > I should make myself clearer. Here I mean the whole series, not this
> > specific patch. I'm concerned that the interfaces (hint fault latency
> > and ratelimit) are hard to understand and configure for users and
> > whether we go too far at the moment or not. I'm dealing with the end
> > users, I'd admit I'm not even sure how to configure the knobs to
> > achieve optimal performance for different real life workloads.
>
> Sorry, I misunderstand your original idea.  I understand that the knob
> isn't user-friendly.  But sometimes, we cannot avoid it completely :-(
> In this patchset, I try to introduce the complexity and knobs one by
> one, and show the performance benefit of each step for people to judge
> whether the newly added complexity and knob can be complemented by the
> performance increment.  If the benefit of some patches cannot complement
> its complexity, I am OK to merge just part of the patchset firstly.

Understood. But I really hesitate to go that far at this moment since
the picture is not that clear yet, IMHO. We have to support these
interfaces (maybe forever) once we merge them.

So I'd prefer to work on the simplest and most necessary stuff for
now, just like how we dealt with demotion.

>
> So how about be more specific?  For example, if you are general OK about
> the complexity and knob introduced by [1-3/6], but have concerns about
> [4/6], then we can discuss about that specifically?

Yeah, we could.

>
> > For this specific patch I'm ok to a new promotion mode. There might be
> > usecase that users just want to do promotion between tiered memory but
> > not care about NUMA locality.
>
> Yes.
>
> >> Secondly, we add a promote watermark to the DRAM node so that we can
> >> demote/promote pages between the high and promote watermark.  Per my
> >> understanding, you suggest just to ignore the high watermark checking
> >> for promoting.  The problem is that this may make the free pages of the
> >> DRAM node too few.  If many pages are promoted in short time, the free
> >> pages will be kept near the min watermark for a while, so that the page
> >> allocation from the application will trigger direct reclaiming.  We have
> >> observed page allocation failure in a test before with a similar policy.
> >
> > The question is, applicable to the hint fault latency and ratelimit
> > too, we already have some NUMA balancing knobs to control scan period
> > and scan size and watermark knobs to tune how aggressively kswapd
> > works, can they do the same jobs instead of introducing any new knobs?
>
> In this specific patch, we don't introduce a new knob for the page
> demotion.  For other knobs, how about discuss them in the patch that
> introduce them and one by one?

That comment is applicable to the watermark hack in this patch too.
Per your description above, the problem is that a significant amount of
promotion in a short period of time may deplete free memory. So I'm
wondering whether the amount of promotion could be rate-limited by the
NUMA balancing scan period and scan size. I understand this may keep
some hot pages on PMEM for a longer time, but does it really matter?
In addition, the gaps between the min <--> low <--> high watermarks can
be adjusted by watermark_scale_factor, so kswapd could work more
aggressively to keep memory free.
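
As a rough illustration of the bound the existing scan knobs already impose (per scanning task, ignoring that hint faults lag the scanner and that only a fraction of scanned pages are promotion candidates; the defaults below are the usual values of the existing sysctls):

	#include <stdio.h>

	int main(void)
	{
		unsigned int scan_period_min_ms = 1000;	/* numa_balancing_scan_period_min_ms */
		unsigned int scan_size_mb = 256;	/* numa_balancing_scan_size_mb */

		/* At most scan_size_mb of a task's address space gets hint PTEs per
		 * scan period, so per-task promotion cannot exceed roughly: */
		double max_mb_per_sec = scan_size_mb * (1000.0 / scan_period_min_ms);

		printf("per-task promotion upper bound: ~%.0f MB/s\n", max_mb_per_sec);
		return 0;
	}

Whether that indirect ceiling is precise enough to replace a dedicated rate limit is exactly the open question here.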

>
> Best Regards,
> Huang, Ying

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH -V8 1/6] NUMA balancing: optimize page placement for memory tiering system
  2021-09-15 21:32           ` Yang Shi
@ 2021-09-16  1:44             ` Huang, Ying
  2021-09-17  0:47               ` Yang Shi
  0 siblings, 1 reply; 17+ messages in thread
From: Huang, Ying @ 2021-09-16  1:44 UTC (permalink / raw)
  To: Yang Shi
  Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Linux MM

Yang Shi <shy828301@gmail.com> writes:

> On Tue, Sep 14, 2021 at 8:58 PM Huang, Ying <ying.huang@intel.com> wrote:
>>
>> Yang Shi <shy828301@gmail.com> writes:
>>
>> > On Tue, Sep 14, 2021 at 6:45 PM Huang, Ying <ying.huang@intel.com> wrote:
>> >>
>> >> Yang Shi <shy828301@gmail.com> writes:
>> >>
>> >> > On Mon, Sep 13, 2021 at 6:37 PM Huang Ying <ying.huang@intel.com> wrote:
>> >> >>
>> >> >> With the advent of various new memory types, some machines will have
>> >> >> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
>> >> >> memory subsystem of these machines can be called memory tiering
>> >> >> system, because the performance of the different types of memory are
>> >> >> usually different.
>> >> >>
>> >> >> In such system, because of the memory accessing pattern changing etc,
>> >> >> some pages in the slow memory may become hot globally.  So in this
>> >> >> patch, the NUMA balancing mechanism is enhanced to optimize the page
>> >> >> placement among the different memory types according to hot/cold
>> >> >> dynamically.
>> >> >>
>> >> >> In a typical memory tiering system, there are CPUs, fast memory and
>> >> >> slow memory in each physical NUMA node.  The CPUs and the fast memory
>> >> >> will be put in one logical node (called fast memory node), while the
>> >> >> slow memory will be put in another (faked) logical node (called slow
>> >> >> memory node).  That is, the fast memory is regarded as local while the
>> >> >> slow memory is regarded as remote.  So it's possible for the recently
>> >> >> accessed pages in the slow memory node to be promoted to the fast
>> >> >> memory node via the existing NUMA balancing mechanism.
>> >> >>
>> >> >> The original NUMA balancing mechanism will stop to migrate pages if the free
>> >> >> memory of the target node will become below the high watermark.  This
>> >> >> is a reasonable policy if there's only one memory type.  But this
>> >> >> makes the original NUMA balancing mechanism almost not work to optimize page
>> >> >> placement among different memory types.  Details are as follows.
>> >> >>
>> >> >> It's the common cases that the working-set size of the workload is
>> >> >> larger than the size of the fast memory nodes.  Otherwise, it's
>> >> >> unnecessary to use the slow memory at all.  So in the common cases,
>> >> >> there are almost always no enough free pages in the fast memory nodes,
>> >> >> so that the globally hot pages in the slow memory node cannot be
>> >> >> promoted to the fast memory node.  To solve the issue, we have 2
>> >> >> choices as follows,
>> >> >>
>> >> >> a. Ignore the free pages watermark checking when promoting hot pages
>> >> >>    from the slow memory node to the fast memory node.  This will
>> >> >>    create some memory pressure in the fast memory node, thus trigger
>> >> >>    the memory reclaiming.  So that, the cold pages in the fast memory
>> >> >>    node will be demoted to the slow memory node.
>> >> >>
>> >> >> b. Make kswapd of the fast memory node to reclaim pages until the free
>> >> >>    pages are a little more (about 10MB) than the high watermark.  Then,
>> >> >>    if the free pages of the fast memory node reaches high watermark, and
>> >> >>    some hot pages need to be promoted, kswapd of the fast memory node
>> >> >>    will be waken up to demote some cold pages in the fast memory node to
>> >> >>    the slow memory node.  This will free some extra space in the fast
>> >> >>    memory node, so the hot pages in the slow memory node can be
>> >> >>    promoted to the fast memory node.
>> >> >>
>> >> >> The choice "a" will create the memory pressure in the fast memory
>> >> >> node.  If the memory pressure of the workload is high, the memory
>> >> >> pressure may become so high that the memory allocation latency of the
>> >> >> workload is influenced, e.g. the direct reclaiming may be triggered.
>> >> >>
>> >> >> The choice "b" works much better at this aspect.  If the memory
>> >> >> pressure of the workload is high, the hot pages promotion will stop
>> >> >> earlier because its allocation watermark is higher than that of the
>> >> >> normal memory allocation.  So in this patch, choice "b" is
>> >> >> implemented.
>> >> >>
>> >> >> In addition to the original page placement optimization among sockets,
>> >> >> the NUMA balancing mechanism is extended to be used to optimize page
>> >> >> placement according to hot/cold among different memory types.  So the
>> >> >> sysctl user space interface (numa_balancing) is extended in a backward
>> >> >> compatible way as follow, so that the users can enable/disable these
>> >> >> functionality individually.
>> >> >>
>> >> >> The sysctl is converted from a Boolean value to a bits field.  The
>> >> >> definition of the flags is,
>> >> >>
>> >> >> - 0x0: NUMA_BALANCING_DISABLED
>> >> >> - 0x1: NUMA_BALANCING_NORMAL
>> >> >> - 0x2: NUMA_BALANCING_MEMORY_TIERING
>> >> >
>> >> > Thanks for coming up with the patches. TBH the first question off the
>> >> > top of my head is whether all the complexity is really worthwhile for
>> >> > real life workloads at the moment. And the interfaces (sysctl knob
>> >> > files exported to users) look complicated for the users. I don't know
>> >> > if the users know how to set an optimal value for their workloads.
>> >> >
>> >> > I don't disagree that NUMA balancing needs optimization and improvement
>> >> > for tiering memory; the question we need to answer is how far we should
>> >> > go for now and what the interfaces should look like. Does it make
>> >> > sense to you?
>> >> >
>> >> > IMHO I'd prefer the most simple and straightforward approach at the
>> >> > moment. For example, we could just skip the high watermark check for
>> >> > PMEM promotion.
>> >>
>> >> Hi, Yang,
>> >>
>> >> Thanks for comments.
>> >>
>> >> I understand your concerns about complexity.  I have tried to organize
>> >> the patchset so that the initial patch is as simple as possible and the
>> >> complexity is introduced step by step.  But it seems that your simplest
>> >> version is even simpler than mine :-)
>> >>
>> >> In this patch ([1/6]), I introduced 2 things.
>> >>
>> >> Firstly, a sysctl knob is provided to disable the NUMA balancing based
>> >> promotion.  Per my understanding, you suggest removing this.  If so,
>> >> optimizing cross-socket access and promoting hot PMEM pages to DRAM must
>> >> be enabled/disabled together.  If a user wants to enable promoting the
>> >> hot PMEM pages to DRAM but disable optimizing cross-socket access,
>> >> because they have already bound the CPUs of the workload so that there's
>> >> not much cross-socket access, how can they do that?
>> >
>> > I should make myself clearer. Here I mean the whole series, not this
>> > specific patch. I'm concerned that the interfaces (hint fault latency
>> > and rate limit) are hard to understand and configure for users, and
>> > that we may go too far at the moment. I'm dealing with the end
>> > users, and I'd admit I'm not even sure how to configure the knobs to
>> > achieve optimal performance for different real life workloads.
>>
>> Sorry, I misunderstood your original idea.  I understand that the knob
>> isn't user-friendly.  But sometimes, we cannot avoid it completely :-(
>> In this patchset, I try to introduce the complexity and knobs one by
>> one, and show the performance benefit of each step for people to judge
>> whether the newly added complexity and knobs are justified by the
>> performance increase.  If the benefit of some patches cannot justify
>> their complexity, I am OK with merging just part of the patchset first.
>
> Understood. But I really hesitate to go that far at this moment since
> the picture is not that clear yet IMHO. We have to support them (maybe
> forever) once we merge them.

OK.  Patches [1-3/6] are the simplest implementation.  Can we start with
them first?

> So I'd prefer to work on the simplest and most necessary stuff for
> now. Just like how we dealt with demotion.
>
>>
>> So how about being more specific?  For example, if you are generally OK
>> with the complexity and knobs introduced by [1-3/6], but have concerns
>> about [4/6], then we can discuss that specifically?
>
> Yeah, we could.
>
>>
>> > For this specific patch I'm ok with a new promotion mode. There might be
>> > use cases where users just want to do promotion between memory tiers but
>> > don't care about NUMA locality.
>>
>> Yes.
>>
>> >> Secondly, we add a promote watermark to the DRAM node so that we can
>> >> demote/promote pages between the high and promote watermarks.  Per my
>> >> understanding, you suggest just ignoring the high watermark check for
>> >> promotion.  The problem is that this may leave the DRAM node with too
>> >> few free pages.  If many pages are promoted in a short time, the free
>> >> pages will stay near the min watermark for a while, so that page
>> >> allocations from the application will trigger direct reclaim.  We have
>> >> observed page allocation failures in an earlier test with a similar policy.
>> >
>> > The question, which applies to the hint fault latency and rate limit
>> > too, is: we already have some NUMA balancing knobs to control the scan
>> > period and scan size, and watermark knobs to tune how aggressively
>> > kswapd works. Can they do the same job instead of introducing any new
>> > knobs?
>>
>> In this specific patch, we don't introduce a new knob for the page
>> demotion.  For the other knobs, how about discussing them one by one in
>> the patches that introduce them?
>
> That comment is applicable to the watermark hack in this patch too.
> Per your above description, the problem is that a significant amount of
> promotion in a short period of time may deplete free memory. So I'm
> wondering if the amount of promotion could be rate limited by the NUMA
> balancing scan period and scan size. I understand this may keep some
> hot pages on PMEM for a longer time, but does it really matter?
> In addition, the gaps between min <--> low <--> high could be adjusted
> by watermark_scale_factor, so kswapd could work more aggressively to
> keep free memory.
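
For a rough feel of what watermark_scale_factor buys here (the exact per-zone
calculation in the kernel also involves min_free_kbytes, so treat this only as
an approximation of the gaps the knob requests):

/*
 * vm.watermark_scale_factor is in units of 1/10000 of the node's managed
 * memory, so each min->low and low->high gap is roughly:
 */
static unsigned long approx_wmark_gap(unsigned long managed_bytes,
				      unsigned int scale_factor)
{
	return managed_bytes / 10000 * scale_factor;
}

/*
 * e.g. a 64GB DRAM node:
 *	scale_factor = 10 (default) -> gaps of roughly 64MB
 *	scale_factor = 100          -> gaps of roughly 640MB
 * i.e. kswapd wakes up earlier and keeps more memory free.
 */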

We can control the NUMA balancing scan speed, but we cannot control the
rate of the hint page faults.  For example, we may scan a large portion
of PMEM without many hint page faults because the pages are really cold,
but then a large number of cold pages suddenly become hot, so they will
be promoted to DRAM.  This will create heavy memory pressure on the DRAM
node, making it hard for normal page allocations from the applications.

And, for some workloads, we need to promote the hot pages to DRAM
quickly; otherwise, the pages will become cold.  We should make it
possible to support these users too.  Do you agree?
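
In other words, the concern is bursts of hint faults, which the scan knobs do
not bound.  A back-of-the-envelope sketch of the kind of rate limit meant here
(in the spirit of patch [5/6]; the structure and names below are illustrative,
not the patch's actual code, and no locking is shown):

#include <linux/jiffies.h>

struct promo_ratelimit {
	unsigned long window_start;	/* jiffies when this 1s window began */
	unsigned long nr_promoted;	/* pages promoted in this window */
};

/*
 * Allow promoting @nr_pages only while staying under @limit_pages per
 * second; otherwise leave the pages on slow memory until a later hint
 * fault.  This bounds promotion speed even when hint faults arrive in
 * bursts that the scan period/size knobs cannot control.
 */
static bool promo_ratelimit_allow(struct promo_ratelimit *rl,
				  unsigned long nr_pages,
				  unsigned long limit_pages)
{
	if (time_after(jiffies, rl->window_start + HZ)) {
		rl->window_start = jiffies;
		rl->nr_promoted = 0;
	}
	if (rl->nr_promoted + nr_pages > limit_pages)
		return false;
	rl->nr_promoted += nr_pages;
	return true;
}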

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH -V8 1/6] NUMA balancing: optimize page placement for memory tiering system
  2021-09-16  1:44             ` Huang, Ying
@ 2021-09-17  0:47               ` Yang Shi
  2021-09-17  1:24                 ` Huang, Ying
  0 siblings, 1 reply; 17+ messages in thread
From: Yang Shi @ 2021-09-17  0:47 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Linux MM

On Wed, Sep 15, 2021 at 6:45 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> Yang Shi <shy828301@gmail.com> writes:
>
> > On Tue, Sep 14, 2021 at 8:58 PM Huang, Ying <ying.huang@intel.com> wrote:
> >>
> >> Yang Shi <shy828301@gmail.com> writes:
> >>
> >> > On Tue, Sep 14, 2021 at 6:45 PM Huang, Ying <ying.huang@intel.com> wrote:
> >> >>
> >> >> Yang Shi <shy828301@gmail.com> writes:
> >> >>
> >> >> > On Mon, Sep 13, 2021 at 6:37 PM Huang Ying <ying.huang@intel.com> wrote:
> >> >> >>
> >> >> >> With the advent of various new memory types, some machines will have
> >> >> >> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
> >> >> >> memory subsystem of these machines can be called a memory tiering
> >> >> >> system, because the performance of the different types of memory is
> >> >> >> usually different.
> >> >> >>
> >> >> >> In such a system, because the memory access pattern changes over time,
> >> >> >> some pages in the slow memory may become hot globally.  So in this
> >> >> >> patch, the NUMA balancing mechanism is enhanced to optimize page
> >> >> >> placement among the different memory types according to hot/cold
> >> >> >> dynamically.
> >> >> >>
> >> >> >> In a typical memory tiering system, there are CPUs, fast memory and
> >> >> >> slow memory in each physical NUMA node.  The CPUs and the fast memory
> >> >> >> will be put in one logical node (called fast memory node), while the
> >> >> >> slow memory will be put in another (faked) logical node (called slow
> >> >> >> memory node).  That is, the fast memory is regarded as local while the
> >> >> >> slow memory is regarded as remote.  So it's possible for the recently
> >> >> >> accessed pages in the slow memory node to be promoted to the fast
> >> >> >> memory node via the existing NUMA balancing mechanism.
> >> >> >>
> >> >> >> The original NUMA balancing mechanism stops migrating pages if the
> >> >> >> free memory of the target node would fall below the high watermark.
> >> >> >> This is a reasonable policy if there's only one memory type.  But it
> >> >> >> makes the original NUMA balancing mechanism almost useless for
> >> >> >> optimizing page placement among different memory types.  Details are
> >> >> >> as follows.
> >> >> >>
> >> >> >> In the common case, the working-set size of the workload is larger
> >> >> >> than the size of the fast memory nodes.  Otherwise, it's unnecessary
> >> >> >> to use the slow memory at all.  So in the common case, there are
> >> >> >> almost never enough free pages in the fast memory nodes, and the
> >> >> >> globally hot pages in the slow memory node cannot be promoted to the
> >> >> >> fast memory node.  To solve the issue, we have 2 choices as follows,
> >> >> >>
> >> >> >> a. Ignore the free pages watermark check when promoting hot pages
> >> >> >>    from the slow memory node to the fast memory node.  This will
> >> >> >>    create some memory pressure in the fast memory node and thus
> >> >> >>    trigger memory reclaim, so that the cold pages in the fast memory
> >> >> >>    node will be demoted to the slow memory node.
> >> >> >>
> >> >> >> b. Make kswapd of the fast memory node reclaim pages until the free
> >> >> >>    pages are a little more (about 10MB) than the high watermark.
> >> >> >>    Then, if the free pages of the fast memory node fall back to the
> >> >> >>    high watermark and some hot pages need to be promoted, kswapd of
> >> >> >>    the fast memory node will be woken up to demote some cold pages in
> >> >> >>    the fast memory node to the slow memory node.  This will free some
> >> >> >>    extra space in the fast memory node, so the hot pages in the slow
> >> >> >>    memory node can be promoted to the fast memory node.
> >> >> >>
> >> >> >> Choice "a" will create memory pressure in the fast memory node.  If
> >> >> >> the memory pressure of the workload is high, the pressure may become
> >> >> >> so high that the memory allocation latency of the workload is
> >> >> >> affected, e.g. direct reclaim may be triggered.
> >> >> >>
> >> >> >> Choice "b" works much better in this respect.  If the memory
> >> >> >> pressure of the workload is high, hot page promotion will stop
> >> >> >> earlier because its allocation watermark is higher than that of
> >> >> >> normal memory allocation.  So in this patch, choice "b" is
> >> >> >> implemented.
> >> >> >>
> >> >> >> In addition to the original page placement optimization among sockets,
> >> >> >> the NUMA balancing mechanism is extended to optimize page placement
> >> >> >> according to hot/cold among different memory types.  So the sysctl
> >> >> >> user space interface (numa_balancing) is extended in a backward
> >> >> >> compatible way as follows, so that users can enable/disable these
> >> >> >> functionalities individually.
> >> >> >>
> >> >> >> The sysctl is converted from a Boolean value to a bit field.  The
> >> >> >> definition of the flags is,
> >> >> >>
> >> >> >> - 0x0: NUMA_BALANCING_DISABLED
> >> >> >> - 0x1: NUMA_BALANCING_NORMAL
> >> >> >> - 0x2: NUMA_BALANCING_MEMORY_TIERING
> >> >> >
> >> >> > Thanks for coming up with the patches. TBH the first question off the
> >> >> > top of my head is whether all the complexity is really worthwhile for
> >> >> > real life workloads at the moment. And the interfaces (sysctl knob
> >> >> > files exported to users) look complicated for the users. I don't know
> >> >> > if the users know how to set an optimal value for their workloads.
> >> >> >
> >> >> > I don't disagree that NUMA balancing needs optimization and improvement
> >> >> > for tiering memory; the question we need to answer is how far we should
> >> >> > go for now and what the interfaces should look like. Does it make
> >> >> > sense to you?
> >> >> >
> >> >> > IMHO I'd prefer the most simple and straightforward approach at the
> >> >> > moment. For example, we could just skip the high watermark check for
> >> >> > PMEM promotion.
> >> >>
> >> >> Hi, Yang,
> >> >>
> >> >> Thanks for comments.
> >> >>
> >> >> I understand your concerns about complexity.  I have tried to organize
> >> >> the patchset so that the initial patch is as simple as possible and the
> >> >> complexity is introduced step by step.  But it seems that your simplest
> >> >> version is even simpler than mine :-)
> >> >>
> >> >> In this patch ([1/6]), I introduced 2 things.
> >> >>
> >> >> Firstly, a sysctl knob is provided to disable the NUMA balancing based
> >> >> promotion.  Per my understanding, you suggest removing this.  If so,
> >> >> optimizing cross-socket access and promoting hot PMEM pages to DRAM must
> >> >> be enabled/disabled together.  If a user wants to enable promoting the
> >> >> hot PMEM pages to DRAM but disable optimizing cross-socket access,
> >> >> because they have already bound the CPUs of the workload so that there's
> >> >> not much cross-socket access, how can they do that?
> >> >
> >> > I should make myself clearer. Here I mean the whole series, not this
> >> > specific patch. I'm concerned that the interfaces (hint fault latency
> >> > and rate limit) are hard to understand and configure for users, and
> >> > that we may go too far at the moment. I'm dealing with the end
> >> > users, and I'd admit I'm not even sure how to configure the knobs to
> >> > achieve optimal performance for different real life workloads.
> >>
> >> Sorry, I misunderstood your original idea.  I understand that the knob
> >> isn't user-friendly.  But sometimes, we cannot avoid it completely :-(
> >> In this patchset, I try to introduce the complexity and knobs one by
> >> one, and show the performance benefit of each step for people to judge
> >> whether the newly added complexity and knobs are justified by the
> >> performance increase.  If the benefit of some patches cannot justify
> >> their complexity, I am OK with merging just part of the patchset first.
> >
> > Understood. But I really hesitate to go that far at this moment since
> > the picture is not that clear yet IMHO. We have to support them (maybe
> > forever) once we merge them.
>
> OK.  Patches [1-3/6] are the simplest implementation.  Can we start with
> them first?

Sure.

>
> > So I'd prefer to work on the simplest and most necessary stuff for
> > now. Just like how we dealt with demotion.
> >
> >>
> >> So how about being more specific?  For example, if you are generally OK
> >> with the complexity and knobs introduced by [1-3/6], but have concerns
> >> about [4/6], then we can discuss that specifically?
> >
> > Yeah, we could.
> >
> >>
> >> > For this specific patch I'm ok with a new promotion mode. There might be
> >> > use cases where users just want to do promotion between memory tiers but
> >> > don't care about NUMA locality.
> >>
> >> Yes.
> >>
> >> >> Secondly, we add a promote watermark to the DRAM node so that we can
> >> >> demote/promote pages between the high and promote watermarks.  Per my
> >> >> understanding, you suggest just ignoring the high watermark check for
> >> >> promotion.  The problem is that this may leave the DRAM node with too
> >> >> few free pages.  If many pages are promoted in a short time, the free
> >> >> pages will stay near the min watermark for a while, so that page
> >> >> allocations from the application will trigger direct reclaim.  We have
> >> >> observed page allocation failures in an earlier test with a similar policy.
> >> >
> >> > The question, which applies to the hint fault latency and rate limit
> >> > too, is: we already have some NUMA balancing knobs to control the scan
> >> > period and scan size, and watermark knobs to tune how aggressively
> >> > kswapd works. Can they do the same job instead of introducing any new
> >> > knobs?
> >>
> >> In this specific patch, we don't introduce a new knob for the page
> >> demotion.  For the other knobs, how about discussing them one by one in
> >> the patches that introduce them?
> >
> > That comment is applicable to the watermark hack in this patch too.
> > Per your above description, the problem is that a significant amount of
> > promotion in a short period of time may deplete free memory. So I'm
> > wondering if the amount of promotion could be rate limited by the NUMA
> > balancing scan period and scan size. I understand this may keep some
> > hot pages on PMEM for a longer time, but does it really matter?
> > In addition, the gaps between min <--> low <--> high could be adjusted
> > by watermark_scale_factor, so kswapd could work more aggressively to
> > keep free memory.
>
> We can control the NUMA balancing scan speed, but we cannot control the
> rate of the hint page faults.  For example, we may scan a large portion

Could adjusting scan size help out?


> of PMEM without many hint page faults because the pages are really cold,
> but then a large number of cold pages suddenly become hot, so they will
> be promoted to DRAM.  This will create heavy memory pressure on the DRAM
> node, making it hard for normal page allocations from the applications.
>
> And, for some workloads, we need to promote the hot pages to DRAM
> quickly; otherwise, the pages will become cold.  We should make it
> possible to support these users too.  Do you agree?

I agree there may be such workloads. But do we have to achieve very
good support for them right now? We don't even know how common such
workloads are.

>
> Best Regards,
> Huang, Ying

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH -V8 1/6] NUMA balancing: optimize page placement for memory tiering system
  2021-09-17  0:47               ` Yang Shi
@ 2021-09-17  1:24                 ` Huang, Ying
  0 siblings, 0 replies; 17+ messages in thread
From: Huang, Ying @ 2021-09-17  1:24 UTC (permalink / raw)
  To: Yang Shi
  Cc: Linux Kernel Mailing List, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Linux MM

Yang Shi <shy828301@gmail.com> writes:

> On Wed, Sep 15, 2021 at 6:45 PM Huang, Ying <ying.huang@intel.com> wrote:
>>
>> Yang Shi <shy828301@gmail.com> writes:
>>
>> > On Tue, Sep 14, 2021 at 8:58 PM Huang, Ying <ying.huang@intel.com> wrote:
>> >>
>> >> Yang Shi <shy828301@gmail.com> writes:
>> >>
>> >> > On Tue, Sep 14, 2021 at 6:45 PM Huang, Ying <ying.huang@intel.com> wrote:
>> >> >>
>> >> >> Yang Shi <shy828301@gmail.com> writes:
>> >> >>
>> >> >> > On Mon, Sep 13, 2021 at 6:37 PM Huang Ying <ying.huang@intel.com> wrote:
>> >> >> >>
>> >> >> >> With the advent of various new memory types, some machines will have
>> >> >> >> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
>> >> >> >> memory subsystem of these machines can be called a memory tiering
>> >> >> >> system, because the performance of the different types of memory is
>> >> >> >> usually different.
>> >> >> >>
>> >> >> >> In such a system, because the memory access pattern changes over time,
>> >> >> >> some pages in the slow memory may become hot globally.  So in this
>> >> >> >> patch, the NUMA balancing mechanism is enhanced to optimize page
>> >> >> >> placement among the different memory types according to hot/cold
>> >> >> >> dynamically.
>> >> >> >>
>> >> >> >> In a typical memory tiering system, there are CPUs, fast memory and
>> >> >> >> slow memory in each physical NUMA node.  The CPUs and the fast memory
>> >> >> >> will be put in one logical node (called fast memory node), while the
>> >> >> >> slow memory will be put in another (faked) logical node (called slow
>> >> >> >> memory node).  That is, the fast memory is regarded as local while the
>> >> >> >> slow memory is regarded as remote.  So it's possible for the recently
>> >> >> >> accessed pages in the slow memory node to be promoted to the fast
>> >> >> >> memory node via the existing NUMA balancing mechanism.
>> >> >> >>
>> >> >> >> The original NUMA balancing mechanism stops migrating pages if the
>> >> >> >> free memory of the target node would fall below the high watermark.
>> >> >> >> This is a reasonable policy if there's only one memory type.  But it
>> >> >> >> makes the original NUMA balancing mechanism almost useless for
>> >> >> >> optimizing page placement among different memory types.  Details are
>> >> >> >> as follows.
>> >> >> >>
>> >> >> >> In the common case, the working-set size of the workload is larger
>> >> >> >> than the size of the fast memory nodes.  Otherwise, it's unnecessary
>> >> >> >> to use the slow memory at all.  So in the common case, there are
>> >> >> >> almost never enough free pages in the fast memory nodes, and the
>> >> >> >> globally hot pages in the slow memory node cannot be promoted to the
>> >> >> >> fast memory node.  To solve the issue, we have 2 choices as follows,
>> >> >> >>
>> >> >> >> a. Ignore the free pages watermark check when promoting hot pages
>> >> >> >>    from the slow memory node to the fast memory node.  This will
>> >> >> >>    create some memory pressure in the fast memory node and thus
>> >> >> >>    trigger memory reclaim, so that the cold pages in the fast memory
>> >> >> >>    node will be demoted to the slow memory node.
>> >> >> >>
>> >> >> >> b. Make kswapd of the fast memory node reclaim pages until the free
>> >> >> >>    pages are a little more (about 10MB) than the high watermark.
>> >> >> >>    Then, if the free pages of the fast memory node fall back to the
>> >> >> >>    high watermark and some hot pages need to be promoted, kswapd of
>> >> >> >>    the fast memory node will be woken up to demote some cold pages in
>> >> >> >>    the fast memory node to the slow memory node.  This will free some
>> >> >> >>    extra space in the fast memory node, so the hot pages in the slow
>> >> >> >>    memory node can be promoted to the fast memory node.
>> >> >> >>
>> >> >> >> Choice "a" will create memory pressure in the fast memory node.  If
>> >> >> >> the memory pressure of the workload is high, the pressure may become
>> >> >> >> so high that the memory allocation latency of the workload is
>> >> >> >> affected, e.g. direct reclaim may be triggered.
>> >> >> >>
>> >> >> >> Choice "b" works much better in this respect.  If the memory
>> >> >> >> pressure of the workload is high, hot page promotion will stop
>> >> >> >> earlier because its allocation watermark is higher than that of
>> >> >> >> normal memory allocation.  So in this patch, choice "b" is
>> >> >> >> implemented.
>> >> >> >>
>> >> >> >> In addition to the original page placement optimization among sockets,
>> >> >> >> the NUMA balancing mechanism is extended to optimize page placement
>> >> >> >> according to hot/cold among different memory types.  So the sysctl
>> >> >> >> user space interface (numa_balancing) is extended in a backward
>> >> >> >> compatible way as follows, so that users can enable/disable these
>> >> >> >> functionalities individually.
>> >> >> >>
>> >> >> >> The sysctl is converted from a Boolean value to a bit field.  The
>> >> >> >> definition of the flags is,
>> >> >> >>
>> >> >> >> - 0x0: NUMA_BALANCING_DISABLED
>> >> >> >> - 0x1: NUMA_BALANCING_NORMAL
>> >> >> >> - 0x2: NUMA_BALANCING_MEMORY_TIERING
>> >> >> >
>> >> >> > Thanks for coming up with the patches. TBH the first question off the
>> >> >> > top of my head is whether all the complexity is really worthwhile for
>> >> >> > real life workloads at the moment. And the interfaces (sysctl knob
>> >> >> > files exported to users) look complicated for the users. I don't know
>> >> >> > if the users know how to set an optimal value for their workloads.
>> >> >> >
>> >> >> > I don't disagree that NUMA balancing needs optimization and improvement
>> >> >> > for tiering memory; the question we need to answer is how far we should
>> >> >> > go for now and what the interfaces should look like. Does it make
>> >> >> > sense to you?
>> >> >> >
>> >> >> > IMHO I'd prefer the most simple and straightforward approach at the
>> >> >> > moment. For example, we could just skip the high watermark check for
>> >> >> > PMEM promotion.
>> >> >>
>> >> >> Hi, Yang,
>> >> >>
>> >> >> Thanks for comments.
>> >> >>
>> >> >> I understand your concerns about complexity.  I have tried to organize
>> >> >> the patchset so that the initial patch is as simple as possible and the
>> >> >> complexity is introduced step by step.  But it seems that your simplest
>> >> >> version is even simpler than mine :-)
>> >> >>
>> >> >> In this patch ([1/6]), I introduced 2 things.
>> >> >>
>> >> >> Firstly, a sysctl knob is provided to disable the NUMA balancing based
>> >> >> promotion.  Per my understanding, you suggest removing this.  If so,
>> >> >> optimizing cross-socket access and promoting hot PMEM pages to DRAM must
>> >> >> be enabled/disabled together.  If a user wants to enable promoting the
>> >> >> hot PMEM pages to DRAM but disable optimizing cross-socket access,
>> >> >> because they have already bound the CPUs of the workload so that there's
>> >> >> not much cross-socket access, how can they do that?
>> >> >
>> >> > I should make myself clearer. Here I mean the whole series, not this
>> >> > specific patch. I'm concerned that the interfaces (hint fault latency
>> >> > and rate limit) are hard to understand and configure for users, and
>> >> > that we may go too far at the moment. I'm dealing with the end
>> >> > users, and I'd admit I'm not even sure how to configure the knobs to
>> >> > achieve optimal performance for different real life workloads.
>> >>
>> >> Sorry, I misunderstood your original idea.  I understand that the knob
>> >> isn't user-friendly.  But sometimes, we cannot avoid it completely :-(
>> >> In this patchset, I try to introduce the complexity and knobs one by
>> >> one, and show the performance benefit of each step for people to judge
>> >> whether the newly added complexity and knobs are justified by the
>> >> performance increase.  If the benefit of some patches cannot justify
>> >> their complexity, I am OK with merging just part of the patchset first.
>> >
>> > Understood. But I really hesitate to go that far at this moment since
>> > the picture is not that clear yet IMHO. We have to support them (maybe
>> > forever) once we merge them.
>>
>> OK.  Patches [1-3/6] are the simplest implementation.  Can we start with
>> them first?
>
> Sure.
>
>>
>> > So I'd prefer to work on the simplest and most necessary stuff for
>> > now. Just like how we dealt with demotion.
>> >
>> >>
>> >> So how about being more specific?  For example, if you are generally OK
>> >> with the complexity and knobs introduced by [1-3/6], but have concerns
>> >> about [4/6], then we can discuss that specifically?
>> >
>> > Yeah, we could.
>> >
>> >>
>> >> > For this specific patch I'm ok with a new promotion mode. There might be
>> >> > use cases where users just want to do promotion between memory tiers but
>> >> > don't care about NUMA locality.
>> >>
>> >> Yes.
>> >>
>> >> >> Secondly, we add a promote watermark to the DRAM node so that we can
>> >> >> demote/promote pages between the high and promote watermarks.  Per my
>> >> >> understanding, you suggest just ignoring the high watermark check for
>> >> >> promotion.  The problem is that this may leave the DRAM node with too
>> >> >> few free pages.  If many pages are promoted in a short time, the free
>> >> >> pages will stay near the min watermark for a while, so that page
>> >> >> allocations from the application will trigger direct reclaim.  We have
>> >> >> observed page allocation failures in an earlier test with a similar policy.
>> >> >
>> >> > The question, which applies to the hint fault latency and rate limit
>> >> > too, is: we already have some NUMA balancing knobs to control the scan
>> >> > period and scan size, and watermark knobs to tune how aggressively
>> >> > kswapd works. Can they do the same job instead of introducing any new
>> >> > knobs?
>> >>
>> >> In this specific patch, we don't introduce a new knob for the page
>> >> demotion.  For the other knobs, how about discussing them one by one in
>> >> the patches that introduce them?
>> >
>> > That comment is applicable to the watermark hack in this patch too.
>> > Per your above description, the problem is that a significant amount of
>> > promotion in a short period of time may deplete free memory. So I'm
>> > wondering if the amount of promotion could be rate limited by the NUMA
>> > balancing scan period and scan size. I understand this may keep some
>> > hot pages on PMEM for a longer time, but does it really matter?
>> > In addition, the gaps between min <--> low <--> high could be adjusted
>> > by watermark_scale_factor, so kswapd could work more aggressively to
>> > keep free memory.
>>
>> We can control the NUMA balancing scan speed, but we cannot control the
>> rate of the hint page faults.  For example, we may scan a large portion
>
> Could adjusting scan size help out?

I don't think adjusting the scan size helps here.  "scan size" just
changes how many pages are scanned in one task_work() invocation; it
doesn't even change the scan speed.  How can it help?
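
To make the distinction concrete (the helper below is illustrative; the knobs
referred to are the existing numa_balancing_scan_size_mb and scan period
sysctls):

/*
 * The scan knobs only bound how much address space is marked for hint
 * faults per period, roughly:
 */
static unsigned long max_scanned_kb_per_sec(unsigned long scan_size_mb,
					    unsigned long scan_period_ms)
{
	return scan_size_mb * 1024 * 1000 / scan_period_ms;
}

/*
 * When the faults actually happen depends on when the workload touches
 * those pages.  Pages scanned long ago while cold can all fault in a
 * short burst once they turn hot, so the promotion rate in that burst
 * is not bounded by the scan period or scan size.
 */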

>> of PMEM without many hint page faults because the pages are really cold,
>> but then a large number of cold pages suddenly become hot, so they will
>> be promoted to DRAM.  This will create heavy memory pressure on the DRAM
>> node, making it hard for normal page allocations from the applications.
>>
>> And, for some workloads, we need to promote the hot pages to DRAM
>> quickly; otherwise, the pages will become cold.  We should make it
>> possible to support these users too.  Do you agree?
>
> I agree there may be such workloads. But do we have to achieve very
> good support for them right now? We don't even know how common such
> workloads are.

The performance of PMEM is much lower than that of DRAM now.  If some
pages in PMEM become hot, the quicker we move these hot PMEM pages to
DRAM, the better the performance.  So I think this is a common problem.

And it's not too hard to implement.  This is a small patch anyway.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2021-09-17  1:24 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-14  1:36 [PATCH -V8 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
2021-09-14  1:36 ` [PATCH -V8 1/6] NUMA balancing: optimize page " Huang Ying
2021-09-14 22:40   ` Yang Shi
2021-09-15  1:44     ` Huang, Ying
2021-09-15  2:47       ` Yang Shi
2021-09-15  3:58         ` Huang, Ying
2021-09-15 21:32           ` Yang Shi
2021-09-16  1:44             ` Huang, Ying
2021-09-17  0:47               ` Yang Shi
2021-09-17  1:24                 ` Huang, Ying
2021-09-14  1:36 ` [PATCH -V8 2/6] memory tiering: add page promotion counter Huang Ying
2021-09-14 22:41   ` Yang Shi
2021-09-15  1:53     ` Huang, Ying
2021-09-14  1:36 ` [PATCH -V8 3/6] memory tiering: skip to scan fast memory Huang Ying
2021-09-14  1:36 ` [PATCH -V8 4/6] memory tiering: hot page selection with hint page fault latency Huang Ying
2021-09-14  1:37 ` [PATCH -V8 5/6] memory tiering: rate limit NUMA migration throughput Huang Ying
2021-09-14  1:37 ` [PATCH -V8 6/6] memory tiering: adjust hot threshold automatically Huang Ying

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).