* [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system
@ 2021-12-07  2:27 Huang Ying
  2021-12-07  2:27 ` [PATCH -V10 RESEND 1/6] NUMA Balancing: add page promotion counter Huang Ying
                   ` (6 more replies)
  0 siblings, 7 replies; 22+ messages in thread
From: Huang Ying @ 2021-12-07  2:27 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

The changes since the last post are as follows,

- Rebased on v5.16-rc1

- Revise error processing for [1/6] (promotion counter) per Yang's comments

- Add sysctl document for [2/6] (optimize page placement)

- Reset threshold adjustment state when the tiering mode is disabled/enabled

- Reset threshold when workload transition is detected.

--

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
memory subsystem of these machines can be called a memory tiering
system, because the performance of the different types of memory is
different.

After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
for use like normal RAM"), PMEM can be used as cost-effective
volatile memory in separate NUMA nodes.  In a typical memory tiering
system, there are CPUs, DRAM and PMEM in each physical NUMA node.
The CPUs and the DRAM will be put in one logical node, while the PMEM
will be put in another (faked) logical node.

To optimize the overall system performance, the hot pages should be
placed in the DRAM node.  To do that, we need to identify the hot
pages in the PMEM node and migrate them to the DRAM node via NUMA
migration.

In the original NUMA balancing, there is already a set of mechanisms
to identify the pages recently accessed by the CPUs in a node and to
migrate those pages to that node.  We can reuse these mechanisms to
optimize the page placement in the memory tiering system.  This is
implemented in this patchset.

On the other hand, the cold pages should be placed in the PMEM node.
So, we also need to identify the cold pages in the DRAM node and
migrate them to the PMEM node.

In commit 26aa2d199d6f ("mm/migrate: demote pages during reclaim"), a
mechanism to demote the cold DRAM pages to the PMEM node under memory
pressure was implemented.  Based on that, the cold DRAM pages can be
demoted to the PMEM node proactively to free some memory space on the
DRAM node to accommodate the promoted hot PMEM pages.  This is
implemented in this patchset too.

We have tested the solution with the pmbench memory accessing
benchmark with an 80:20 read/write ratio and a normal access address
distribution on a 2-socket Intel server with Optane DC Persistent
Memory Modules.  The test results of the base kernel and the step by
step optimizations are as follows,

              Throughput     Promotion      DRAM bandwidth
                access/s          MB/s                MB/s
             -----------    ----------      --------------
Base          69263986.8                            1830.2
Patch 2      135691921.4         385.6             11315.9
Patch 3      133239016.8         384.7             11065.2
Patch 4      151310868.9         197.6             11397.0
Patch 5      142311252.8          99.3              9580.8
Patch 6      149044263.9          65.5              9922.8

The whole patchset improves the benchmark score by up to 115.2%.  The
basic NUMA balancing based optimization solution (patch 2), the hot
page selection algorithm (patch 4), and the automatic threshold
adjustment algorithm (patch 6) improve the performance or reduce the
overhead (promotion MB/s) greatly.

Changelog:

v10:

- Rebased on v5.16-rc1

- Revise error processing for [1/6] (promotion counter) per Yang's comments

- Add sysctl document for [2/6] (optimize page placement)

- Reset threshold adjustment state when the tiering mode is disabled/enabled

- Reset threshold when workload transition is detected.

v9:

- Rebased on v5.15-rc4

- Make "add promotion counter" the first patch per Yang's comments

v8:

- Rebased on v5.15-rc1

- Make user-specified threshold take effect sooner

v7:

- Rebased on the mmots tree of 2021-07-15.

- Some minor fixes.

v6:

- Rebased on the latest page demotion patchset (which is based on v5.11).

v5:

- Rebased on the latest page demotion patchset (which is based on v5.10).

v4:

- Rebased on the latest page demotion patchset (which is based on v5.9-rc6).

- Add page promotion counter.

v3:

- Move the rate limit control as late as possible per Mel Gorman's
  comments.

- Revise the hot page selection implementation to store page scan time
  in struct page.

- Code cleanup.

- Rebased on the latest page demotion patchset.

v2:

- Addressed comments for V1.

- Rebased on v5.5.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH -V10 RESEND 1/6] NUMA Balancing: add page promotion counter
  2021-12-07  2:27 [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
@ 2021-12-07  2:27 ` Huang Ying
  2021-12-07  6:05   ` Hasan Al Maruf
  2021-12-17  7:25   ` Baolin Wang
  2021-12-07  2:27 ` [PATCH -V10 RESEND 2/6] NUMA balancing: optimize page placement for memory tiering system Huang Ying
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 22+ messages in thread
From: Huang Ying @ 2021-12-07  2:27 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Yang Shi,
	Andrew Morton, Michal Hocko, Rik van Riel, Mel Gorman,
	Dave Hansen, Zi Yan, Wei Xu, osalvador, Shakeel Butt,
	Hasan Al Maruf

In a system with multiple memory types, e.g. DRAM and PMEM, the CPU
and DRAM in one socket will be put in one NUMA node as before, while
the PMEM will be put in another NUMA node, as described in commit
c221c0b0308f ("device-dax: "Hotplug" persistent memory for use like
normal RAM").  So, the NUMA balancing mechanism will identify all
PMEM accesses as remote accesses and try to promote the PMEM pages to
DRAM.

To distinguish the number of inter-type promoted pages from that of
inter-socket migrated pages, a new vmstat counter is added.  The
counter is per-node (counted in the target node), so it can be used
to identify promotion imbalance among the NUMA nodes.
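
For illustration only (not part of this patch): assuming the new
counter shows up in the per-node vmstat file like the other node stat
items, a minimal user-space sketch to read it for node 0 could look
like,

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char line[128];
            FILE *f = fopen("/sys/devices/system/node/node0/vmstat", "r");

            if (!f)
                    return 1;
            while (fgets(line, sizeof(line), f)) {
                    /* print only the promotion counter line */
                    if (!strncmp(line, "pgpromote_success", 17))
                            fputs(line, stdout);
            }
            fclose(f);
            return 0;
    }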

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Hasan Al Maruf <hasanalmaruf@fb.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h |  3 +++
 include/linux/node.h   |  5 +++++
 mm/migrate.c           | 13 ++++++++++---
 mm/vmstat.c            |  3 +++
 4 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 58e744b78c2c..eda6d2f09d77 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -210,6 +210,9 @@ enum node_stat_item {
 	NR_PAGETABLE,		/* used for pagetables */
 #ifdef CONFIG_SWAP
 	NR_SWAPCACHE,
+#endif
+#ifdef CONFIG_NUMA_BALANCING
+	PGPROMOTE_SUCCESS,	/* promote successfully */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
diff --git a/include/linux/node.h b/include/linux/node.h
index bb21fd631b16..81bbf1c0afd3 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
 
 #define to_node(device) container_of(device, struct node, dev)
 
+static inline bool node_is_toptier(int node)
+{
+	return node_state(node, N_CPU);
+}
+
 #endif /* _LINUX_NODE_H_ */
diff --git a/mm/migrate.c b/mm/migrate.c
index cf25b00f03c8..b7c27abb0e5c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2141,6 +2141,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	pg_data_t *pgdat = NODE_DATA(node);
 	int isolated;
 	int nr_remaining;
+	int nr_succeeded;
 	LIST_HEAD(migratepages);
 	new_page_t *new;
 	bool compound;
@@ -2179,7 +2180,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 
 	list_add(&page->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
-				     MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
+				     MIGRATE_ASYNC, MR_NUMA_MISPLACED,
+				     &nr_succeeded);
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
 			list_del(&page->lru);
@@ -2188,8 +2190,13 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 			putback_lru_page(page);
 		}
 		isolated = 0;
-	} else
-		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
+	}
+	if (nr_succeeded) {
+		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
+		if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+			mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
+					    nr_succeeded);
+	}
 	BUG_ON(!list_empty(&migratepages));
 	return isolated;
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index d701c335628c..53a6e92b1efb 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1242,6 +1242,9 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_SWAP
 	"nr_swapcached",
 #endif
+#ifdef CONFIG_NUMA_BALANCING
+	"pgpromote_success",
+#endif
 
 	/* enum writeback_stat_item counters */
 	"nr_dirty_threshold",
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH -V10 RESEND 2/6] NUMA balancing: optimize page placement for memory tiering system
  2021-12-07  2:27 [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
  2021-12-07  2:27 ` [PATCH -V10 RESEND 1/6] NUMA Balancing: add page promotion counter Huang Ying
@ 2021-12-07  2:27 ` Huang Ying
  2021-12-07  6:36   ` Hasan Al Maruf
  2021-12-17  7:35   ` Baolin Wang
  2021-12-07  2:27 ` [PATCH -V10 RESEND 3/6] memory tiering: skip to scan fast memory Huang Ying
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 22+ messages in thread
From: Huang Ying @ 2021-12-07  2:27 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
memory subsystem of these machines can be called a memory tiering
system, because the performance of the different types of memory is
usually different.

In such a system, because the memory access pattern changes over
time, some pages in the slow memory may become globally hot.  So in
this patch, the NUMA balancing mechanism is enhanced to optimize the
page placement among the different memory types dynamically according
to page hotness.

In a typical memory tiering system, there are CPUs, fast memory and
slow memory in each physical NUMA node.  The CPUs and the fast memory
will be put in one logical node (called fast memory node), while the
slow memory will be put in another (faked) logical node (called slow
memory node).  That is, the fast memory is regarded as local while the
slow memory is regarded as remote.  So it's possible for the recently
accessed pages in the slow memory node to be promoted to the fast
memory node via the existing NUMA balancing mechanism.

The original NUMA balancing mechanism stops migrating pages if the
free memory of the target node would drop below the high watermark.
This is a reasonable policy if there's only one memory type.  But it
makes the original NUMA balancing mechanism almost useless for
optimizing page placement among different memory types.  Details are
as follows.

In the common case, the working-set size of the workload is larger
than the size of the fast memory nodes; otherwise, it's unnecessary
to use the slow memory at all.  So in the common case, there are
almost never enough free pages in the fast memory nodes, and the
globally hot pages in the slow memory node cannot be promoted to the
fast memory node.  To solve the issue, we have the following 2
choices,

a. Ignore the free pages watermark checking when promoting hot pages
   from the slow memory node to the fast memory node.  This will
   create some memory pressure in the fast memory node, thus
   triggering memory reclaim, so that the cold pages in the fast
   memory node will be demoted to the slow memory node.

b. Make kswapd of the fast memory node reclaim pages until the free
   pages are a little more (about 10MB) than the high watermark.
   Then, if the free pages of the fast memory node drop to the high
   watermark and some hot pages need to be promoted, kswapd of the
   fast memory node will be woken up to demote some cold pages in the
   fast memory node to the slow memory node.  This will free some
   extra space in the fast memory node, so the hot pages in the slow
   memory node can be promoted to the fast memory node.

The choice "a" will create the memory pressure in the fast memory
node.  If the memory pressure of the workload is high, the memory
pressure may become so high that the memory allocation latency of the
workload is influenced, e.g. the direct reclaiming may be triggered.

The choice "b" works much better at this aspect.  If the memory
pressure of the workload is high, the hot pages promotion will stop
earlier because its allocation watermark is higher than that of the
normal memory allocation.  So in this patch, choice "b" is
implemented.
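
The sketch below is for clarity only and is not part of the kernel
change: it models the choice "b" balance check in user space with an
assumed 4KB page size and an illustrative helper name
(fast_node_balanced); the real check is folded into pgdat_balanced()
in the diff below, which additionally caps the extra margin by the
node size.

    /* Sketch only: the fast memory node counts as balanced when it
     * keeps about 10MB of free pages above the normal high watermark,
     * leaving room for pages promoted from the slow memory node. */
    #define SKETCH_PAGE_SHIFT        12    /* assume 4KB pages */
    #define PROMOTE_WATERMARK_PAGES  (10UL * 1024 * 1024 >> SKETCH_PAGE_SHIFT)

    static inline int fast_node_balanced(unsigned long free_pages,
                                         unsigned long high_wmark_pages)
    {
            return free_pages >= high_wmark_pages + PROMOTE_WATERMARK_PAGES;
    }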

In addition to the original page placement optimization among
sockets, the NUMA balancing mechanism is extended to optimize page
placement among different memory types according to page hotness.  So
the sysctl user space interface (numa_balancing) is extended in a
backward compatible way as follows, so that the users can
enable/disable these functionalities individually.

The sysctl is converted from a Boolean value to a bit field.  The
flags are defined as follows (an illustrative usage sketch follows
the list),

- 0x0: NUMA_BALANCING_DISABLED
- 0x1: NUMA_BALANCING_NORMAL
- 0x2: NUMA_BALANCING_MEMORY_TIERING
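
For illustration only (not part of the kernel change), a user-space
sketch that enables both modes by OR-ing the flags and writing the
result to the sysctl file could look like,

    #include <stdio.h>

    #define NUMA_BALANCING_NORMAL          0x1
    #define NUMA_BALANCING_MEMORY_TIERING  0x2

    int main(void)
    {
            FILE *f = fopen("/proc/sys/kernel/numa_balancing", "w");

            if (!f)
                    return 1;
            /* enable socket balancing and memory tiering promotion */
            fprintf(f, "%d\n",
                    NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING);
            return fclose(f) ? 1 : 0;
    }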

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Hasan Al Maruf <hasanalmaruf@fb.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 Documentation/admin-guide/sysctl/kernel.rst | 29 ++++++++++++++-------
 include/linux/sched/sysctl.h                | 10 +++++++
 kernel/sched/core.c                         | 21 ++++++++++++---
 kernel/sysctl.c                             |  3 ++-
 mm/migrate.c                                | 19 ++++++++++++--
 mm/vmscan.c                                 | 16 ++++++++++++
 6 files changed, 82 insertions(+), 16 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index 0e486f41185e..5502ea6083ba 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
 numa_balancing
 ==============
 
-Enables/disables automatic page fault based NUMA memory
-balancing. Memory is moved automatically to nodes
-that access it often.
+Enables/disables and configure automatic page fault based NUMA memory
+balancing.  Memory is moved automatically to nodes that access it
+often.  The value to set can be the result to OR the following,
 
-Enables/disables automatic NUMA memory balancing. On NUMA machines, there
-is a performance penalty if remote memory is accessed by a CPU. When this
-feature is enabled the kernel samples what task thread is accessing memory
-by periodically unmapping pages and later trapping a page fault. At the
-time of the page fault, it is determined if the data being accessed should
-be migrated to a local memory node.
+= =================================
+0x0 NUMA_BALANCING_DISABLED
+0x1 NUMA_BALANCING_NORMAL
+0x2 NUMA_BALANCING_MEMORY_TIERING
+= =================================
+
+Or NUMA_BALANCING_NORMAL to optimize page placement among different
+NUMA nodes to reduce remote accessing.  On NUMA machines, there is a
+performance penalty if remote memory is accessed by a CPU. When this
+feature is enabled the kernel samples what task thread is accessing
+memory by periodically unmapping pages and later trapping a page
+fault. At the time of the page fault, it is determined if the data
+being accessed should be migrated to a local memory node.
 
 The unmapping of pages and trapping faults incur additional overhead that
 ideally is offset by improved memory locality but there is no universal
@@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
 numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
 numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
 
+Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
+different types of memory (represented as different NUMA nodes) to
+place the hot pages in the fast memory.  This is implemented based on
+unmapping and page fault too.
 
 numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb
 ===============================================================================================================================
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 304f431178fd..bc54c1d75d6d 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -35,6 +35,16 @@ enum sched_tunable_scaling {
 	SCHED_TUNABLESCALING_END,
 };
 
+#define NUMA_BALANCING_DISABLED		0x0
+#define NUMA_BALANCING_NORMAL		0x1
+#define NUMA_BALANCING_MEMORY_TIERING	0x2
+
+#ifdef CONFIG_NUMA_BALANCING
+extern int sysctl_numa_balancing_mode;
+#else
+#define sysctl_numa_balancing_mode	0
+#endif
+
 /*
  *  control realtime throttling:
  *
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3c9b0fda64ac..5dcabc98432f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4265,7 +4265,9 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
 
 #ifdef CONFIG_NUMA_BALANCING
 
-void set_numabalancing_state(bool enabled)
+int sysctl_numa_balancing_mode;
+
+static void __set_numabalancing_state(bool enabled)
 {
 	if (enabled)
 		static_branch_enable(&sched_numa_balancing);
@@ -4273,13 +4275,22 @@ void set_numabalancing_state(bool enabled)
 		static_branch_disable(&sched_numa_balancing);
 }
 
+void set_numabalancing_state(bool enabled)
+{
+	if (enabled)
+		sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
+	else
+		sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
+	__set_numabalancing_state(enabled);
+}
+
 #ifdef CONFIG_PROC_SYSCTL
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
 	struct ctl_table t;
 	int err;
-	int state = static_branch_likely(&sched_numa_balancing);
+	int state = sysctl_numa_balancing_mode;
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -4289,8 +4300,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
 	if (err < 0)
 		return err;
-	if (write)
-		set_numabalancing_state(state);
+	if (write) {
+		sysctl_numa_balancing_mode = state;
+		__set_numabalancing_state(state);
+	}
 	return err;
 }
 #endif
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 083be6af29d7..a1be94ea80ba 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -115,6 +115,7 @@ static int sixty = 60;
 
 static int __maybe_unused neg_one = -1;
 static int __maybe_unused two = 2;
+static int __maybe_unused three = 3;
 static int __maybe_unused four = 4;
 static unsigned long zero_ul;
 static unsigned long one_ul = 1;
@@ -1808,7 +1809,7 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= sysctl_numa_balancing,
 		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE,
+		.extra2		= &three,
 	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
diff --git a/mm/migrate.c b/mm/migrate.c
index b7c27abb0e5c..286c84c014dd 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -50,6 +50,7 @@
 #include <linux/ptrace.h>
 #include <linux/oom.h>
 #include <linux/memory.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlbflush.h>
 
@@ -2103,16 +2104,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
 	int page_lru;
 	int nr_pages = thp_nr_pages(page);
+	int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
 	/* Do not migrate THP mapped by multiple processes */
 	if (PageTransHuge(page) && total_mapcount(page) > 1)
 		return 0;
 
 	/* Avoid migrating to a node that is nearly full */
-	if (!migrate_balanced_pgdat(pgdat, nr_pages))
+	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
+		int z;
+
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
+		    !numa_demotion_enabled)
+			return 0;
+		if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
+			return 0;
+		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+			if (populated_zone(pgdat->node_zones + z))
+				break;
+		}
+		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
 		return 0;
+	}
 
 	if (isolate_lru_page(page))
 		return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c266e64d2f7e..5edb5dfa8900 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -56,6 +56,7 @@
 
 #include <linux/swapops.h>
 #include <linux/balloon_compaction.h>
+#include <linux/sched/sysctl.h>
 
 #include "internal.h"
 
@@ -3919,6 +3920,12 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
 	return false;
 }
 
+/*
+ * Keep the free pages on fast memory node a little more than the high
+ * watermark to accommodate the promoted pages.
+ */
+#define NUMA_BALANCING_PROMOTE_WATERMARK	(10UL * 1024 * 1024 >> PAGE_SHIFT)
+
 /*
  * Returns true if there is an eligible zone balanced for the request order
  * and highest_zoneidx
@@ -3940,6 +3947,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 			continue;
 
 		mark = high_wmark_pages(zone);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    numa_demotion_enabled &&
+		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
+			unsigned long promote_mark;
+
+			promote_mark = min(NUMA_BALANCING_PROMOTE_WATERMARK,
+					   pgdat->node_present_pages >> 6);
+			mark += promote_mark;
+		}
 		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
 			return true;
 	}
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH -V10 RESEND 3/6] memory tiering: skip to scan fast memory
  2021-12-07  2:27 [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
  2021-12-07  2:27 ` [PATCH -V10 RESEND 1/6] NUMA Balancing: add page promotion counter Huang Ying
  2021-12-07  2:27 ` [PATCH -V10 RESEND 2/6] NUMA balancing: optimize page placement for memory tiering system Huang Ying
@ 2021-12-07  2:27 ` Huang Ying
  2021-12-17  7:41   ` Baolin Wang
  2021-12-07  2:27 ` [PATCH -V10 RESEND 4/6] memory tiering: hot page selection with hint page fault latency Huang Ying
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 22+ messages in thread
From: Huang Ying @ 2021-12-07  2:27 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Dave Hansen,
	Andrew Morton, Michal Hocko, Rik van Riel, Mel Gorman, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

If NUMA balancing isn't used to optimize page placement among sockets
but only among memory types, the hot pages in the fast memory node
cannot be migrated (promoted) anywhere.  So it's unnecessary to scan
the pages in the fast memory node by changing their PTE/PMD mappings
to PROT_NONE.  This way, the page faults can be avoided too.

In the test, if only the memory tiering NUMA balancing mode is
enabled, the number of the NUMA balancing hint faults for the DRAM
node is reduced to almost 0 with the patch, while the benchmark score
doesn't change visibly.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Hasan Al Maruf <hasanalmaruf@fb.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 mm/huge_memory.c | 30 +++++++++++++++++++++---------
 mm/mprotect.c    | 13 ++++++++++++-
 2 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e5483347291c..cab8048eb779 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -34,6 +34,7 @@
 #include <linux/oom.h>
 #include <linux/numa.h>
 #include <linux/page_owner.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
@@ -1766,17 +1767,28 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 #endif
 
-	/*
-	 * Avoid trapping faults against the zero page. The read-only
-	 * data is likely to be read-cached on the local CPU and
-	 * local/remote hits to the zero page are not interesting.
-	 */
-	if (prot_numa && is_huge_zero_pmd(*pmd))
-		goto unlock;
+	if (prot_numa) {
+		struct page *page;
+		/*
+		 * Avoid trapping faults against the zero page. The read-only
+		 * data is likely to be read-cached on the local CPU and
+		 * local/remote hits to the zero page are not interesting.
+		 */
+		if (is_huge_zero_pmd(*pmd))
+			goto unlock;
 
-	if (prot_numa && pmd_protnone(*pmd))
-		goto unlock;
+		if (pmd_protnone(*pmd))
+			goto unlock;
 
+		page = pmd_page(*pmd);
+		/*
+		 * Skip scanning top tier node if normal numa
+		 * balancing is disabled
+		 */
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+		    node_is_toptier(page_to_nid(page)))
+			goto unlock;
+	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
 	 * to not clear pmd intermittently to avoid race with MADV_DONTNEED
diff --git a/mm/mprotect.c b/mm/mprotect.c
index e552f5e0ccbd..ddc24ca52b12 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -29,6 +29,7 @@
 #include <linux/uaccess.h>
 #include <linux/mm_inline.h>
 #include <linux/pgtable.h>
+#include <linux/sched/sysctl.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
@@ -83,6 +84,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			 */
 			if (prot_numa) {
 				struct page *page;
+				int nid;
 
 				/* Avoid TLB flush if possible */
 				if (pte_protnone(oldpte))
@@ -109,7 +111,16 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				 * Don't mess with PTEs if page is already on the node
 				 * a single-threaded process is running on.
 				 */
-				if (target_node == page_to_nid(page))
+				nid = page_to_nid(page);
+				if (target_node == nid)
+					continue;
+
+				/*
+				 * Skip scanning top tier node if normal numa
+				 * balancing is disabled
+				 */
+				if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+				    node_is_toptier(nid))
 					continue;
 			}
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH -V10 RESEND 4/6] memory tiering: hot page selection with hint page fault latency
  2021-12-07  2:27 [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
                   ` (2 preceding siblings ...)
  2021-12-07  2:27 ` [PATCH -V10 RESEND 3/6] memory tiering: skip to scan fast memory Huang Ying
@ 2021-12-07  2:27 ` Huang Ying
  2021-12-07  2:27 ` [PATCH -V10 RESEND 5/6] memory tiering: rate limit NUMA migration throughput Huang Ying
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 22+ messages in thread
From: Huang Ying @ 2021-12-07  2:27 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

To optimize page placement in a memory tiering system with NUMA
balancing, the hot pages in the slow memory node need to be
identified.  Essentially, the original NUMA balancing implementation
selects the most recently accessed (MRU) pages as the hot pages.  But
this isn't a very good algorithm to identify the hot pages.

So, in this patch, we implement a better hot page selection algorithm
based on NUMA balancing page table scanning and hint page faults, as
follows,

- When the page tables of the processes are scanned to change PTE/PMD
  to be PROT_NONE, the current time is recorded in struct page as scan
  time.

- When the page is accessed, a hint page fault will occur.  The scan
  time is read from the struct page, and the hint page fault
  latency is defined as

    hint page fault time - scan time

The shorter the hint page fault latency of a page is, the more likely
its access frequency is high.  So the hint page fault latency is a
good estimation of how hot or cold a page is.

But it's hard to find some extra space in struct page to hold the scan
time.  Fortunately, we can reuse some bits used by the original NUMA
balancing.

NUMA balancing uses some bits in struct page to store the last page
accessing CPU and PID (see page_cpupid_xchg_last()).  These are used
by the multi-stage node selection algorithm to avoid migrating pages
shared among NUMA nodes back and forth.  But for pages in the slow
memory node, even if they are shared by multiple NUMA nodes, as long
as the pages are hot, they need to be promoted to the fast memory
node.  So the accessing CPU and PID information is unnecessary for
the slow memory pages, and we can reuse these bits in struct page to
record the scan time for them.  For the fast memory pages, these bits
are used as before.
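
As a concrete illustration with assumed numbers (the real values
depend on the kernel configuration, and the helper names below are
illustrative): if LAST_CPUPID_SHIFT were 8, the scan time in
milliseconds would be stored in 8 bits with a bucket shift of 4,
giving a 16 ms resolution and a 4096 ms (about 4 second) range, which
matches the "at least 4 seconds" requirement in the diff below.

    /* Assumed example values, not taken from the patch. */
    #define EXAMPLE_LAST_CPUPID_SHIFT  8
    #define EXAMPLE_MIN_BITS           12  /* ~4 seconds in ms */
    #define EXAMPLE_TIME_BUCKETS       (EXAMPLE_MIN_BITS - EXAMPLE_LAST_CPUPID_SHIFT)
    #define EXAMPLE_TIME_MASK          ((1u << EXAMPLE_LAST_CPUPID_SHIFT) - 1)

    /* Scan time encoding: drop the low bits, keep what fits in the field. */
    static inline unsigned int encode_scan_time(unsigned int time_ms)
    {
            return (time_ms >> EXAMPLE_TIME_BUCKETS) & EXAMPLE_TIME_MASK;
    }

    /* Latency in ms, computed with the same scaling and mask so that
     * counter wrap-around is handled. */
    static inline unsigned int fault_latency_ms(unsigned int scan_enc,
                                                unsigned int fault_time_ms)
    {
            return (fault_time_ms - (scan_enc << EXAMPLE_TIME_BUCKETS)) &
                    (EXAMPLE_TIME_MASK << EXAMPLE_TIME_BUCKETS);
    }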

The remaining problem is how to determine the hot threshold.  It's
not easy to do automatically.  So we provide a sysctl knob:
kernel.numa_balancing_hot_threshold_ms.  All pages with a hint page
fault latency < the threshold will be considered hot.  The system
administrator can determine the hot threshold based on various
information, such as the PMEM bandwidth limit, the average number of
pages that pass the hot threshold, etc.  The default hot threshold is
1 second, which works well in our performance tests.
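
Put together, the promotion decision for a slow-memory page reduces
to something like the following sketch (the helper name page_is_hot
is illustrative, not from the patch; the real check lives in
should_numa_migrate_memory() in the diff below),

    /* Sketch only: a slow-memory page is a promotion candidate when
     * its hint page fault latency is below the hot threshold, both
     * in milliseconds. */
    static inline int page_is_hot(unsigned int latency_ms,
                                  unsigned int hot_threshold_ms)
    {
            return latency_ms <= hot_threshold_ms;
    }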

The downside of the patch is that the response time to a change of
the workload hot spot may be much longer.  For example,

- A previous cold memory area becomes hot

- The hint page fault will be triggered.  But the hint page fault
  latency isn't shorter than the hot threshold.  So the pages will
  not be promoted.

- When the memory area is scanned again, maybe after a scan period,
  the hint page fault latency measured will be shorter than the hot
  threshold and the pages will be promoted.

To mitigate this,

- If there is enough free space in the fast memory node, the hot
  threshold will not be used, and all pages will be promoted upon the
  hint page fault for fast response.

- If fast response is more important for system performance, the
  administrator can set a higher hot threshold.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Hasan Al Maruf <hasanalmaruf@fb.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mm.h           | 29 ++++++++++++++++
 include/linux/sched/sysctl.h |  1 +
 kernel/sched/fair.c          | 67 ++++++++++++++++++++++++++++++++++++
 kernel/sysctl.c              |  7 ++++
 mm/huge_memory.c             | 13 +++++--
 mm/memory.c                  | 11 +++++-
 mm/migrate.c                 | 12 +++++++
 mm/mmzone.c                  | 17 +++++++++
 mm/mprotect.c                |  8 ++++-
 9 files changed, 160 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a7e4a9e7d807..a9ea778eafe0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1393,6 +1393,18 @@ static inline int folio_nid(const struct folio *folio)
 }
 
 #ifdef CONFIG_NUMA_BALANCING
+/* page access time bits needs to hold at least 4 seconds */
+#define PAGE_ACCESS_TIME_MIN_BITS	12
+#if LAST_CPUPID_SHIFT < PAGE_ACCESS_TIME_MIN_BITS
+#define PAGE_ACCESS_TIME_BUCKETS				\
+	(PAGE_ACCESS_TIME_MIN_BITS - LAST_CPUPID_SHIFT)
+#else
+#define PAGE_ACCESS_TIME_BUCKETS	0
+#endif
+
+#define PAGE_ACCESS_TIME_MASK				\
+	(LAST_CPUPID_MASK << PAGE_ACCESS_TIME_BUCKETS)
+
 static inline int cpu_pid_to_cpupid(int cpu, int pid)
 {
 	return ((cpu & LAST__CPU_MASK) << LAST__PID_SHIFT) | (pid & LAST__PID_MASK);
@@ -1435,6 +1447,16 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
 	return xchg(&page->_last_cpupid, cpupid & LAST_CPUPID_MASK);
 }
 
+static inline unsigned int xchg_page_access_time(struct page *page,
+						 unsigned int time)
+{
+	unsigned int last_time;
+
+	last_time = xchg(&page->_last_cpupid,
+			 (time >> PAGE_ACCESS_TIME_BUCKETS) & LAST_CPUPID_MASK);
+	return last_time << PAGE_ACCESS_TIME_BUCKETS;
+}
+
 static inline int page_cpupid_last(struct page *page)
 {
 	return page->_last_cpupid;
@@ -1450,6 +1472,7 @@ static inline int page_cpupid_last(struct page *page)
 }
 
 extern int page_cpupid_xchg_last(struct page *page, int cpupid);
+extern unsigned int xchg_page_access_time(struct page *page, unsigned int time);
 
 static inline void page_cpupid_reset_last(struct page *page)
 {
@@ -1462,6 +1485,12 @@ static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
 	return page_to_nid(page); /* XXX */
 }
 
+static inline unsigned int xchg_page_access_time(struct page *page,
+						 unsigned int time)
+{
+	return 0;
+}
+
 static inline int page_cpupid_last(struct page *page)
 {
 	return page_to_nid(page); /* XXX */
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index bc54c1d75d6d..0ea43b146aee 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -41,6 +41,7 @@ enum sched_tunable_scaling {
 
 #ifdef CONFIG_NUMA_BALANCING
 extern int sysctl_numa_balancing_mode;
+extern unsigned int sysctl_numa_balancing_hot_threshold;
 #else
 #define sysctl_numa_balancing_mode	0
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6e476f6d9435..2b78664a5ce2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1026,6 +1026,9 @@ unsigned int sysctl_numa_balancing_scan_size = 256;
 /* Scan @scan_size MB every @scan_period after an initial @scan_delay in ms */
 unsigned int sysctl_numa_balancing_scan_delay = 1000;
 
+/* The page with hint page fault latency < threshold in ms is considered hot */
+unsigned int sysctl_numa_balancing_hot_threshold = 1000;
+
 struct numa_group {
 	refcount_t refcount;
 
@@ -1367,6 +1370,37 @@ static inline unsigned long group_weight(struct task_struct *p, int nid,
 	return 1000 * faults / total_faults;
 }
 
+static bool pgdat_free_space_enough(struct pglist_data *pgdat)
+{
+	int z;
+	unsigned long enough_mark;
+
+	enough_mark = max(1UL * 1024 * 1024 * 1024 >> PAGE_SHIFT,
+			  pgdat->node_present_pages >> 4);
+	for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+		struct zone *zone = pgdat->node_zones + z;
+
+		if (!populated_zone(zone))
+			continue;
+
+		if (zone_watermark_ok(zone, 0,
+				      high_wmark_pages(zone) + enough_mark,
+				      ZONE_MOVABLE, 0))
+			return true;
+	}
+	return false;
+}
+
+static int numa_hint_fault_latency(struct page *page)
+{
+	unsigned int last_time, time;
+
+	time = jiffies_to_msecs(jiffies);
+	last_time = xchg_page_access_time(page, time);
+
+	return (time - last_time) & PAGE_ACCESS_TIME_MASK;
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1374,6 +1408,27 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	int dst_nid = cpu_to_node(dst_cpu);
 	int last_cpupid, this_cpupid;
 
+	/*
+	 * The pages in slow memory node should be migrated according
+	 * to hot/cold instead of accessing CPU node.
+	 */
+	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+	    !node_is_toptier(src_nid)) {
+		struct pglist_data *pgdat;
+		unsigned long latency, th;
+
+		pgdat = NODE_DATA(dst_nid);
+		if (pgdat_free_space_enough(pgdat))
+			return true;
+
+		th = sysctl_numa_balancing_hot_threshold;
+		latency = numa_hint_fault_latency(page);
+		if (latency > th)
+			return false;
+
+		return true;
+	}
+
 	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
 	last_cpupid = page_cpupid_xchg_last(page, this_cpupid);
 
@@ -2592,6 +2647,11 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	if (!p->mm)
 		return;
 
+	/* Numa faults statistics are unnecessary for the slow memory node */
+	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+	    !node_is_toptier(mem_node))
+		return;
+
 	/* Allocate buffer to track faults on a per-node basis */
 	if (unlikely(!p->numa_faults)) {
 		int size = sizeof(*p->numa_faults) *
@@ -2611,6 +2671,13 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	 */
 	if (unlikely(last_cpupid == (-1 & LAST_CPUPID_MASK))) {
 		priv = 1;
+	} else if (unlikely(!cpu_online(cpupid_to_cpu(last_cpupid)))) {
+		/*
+		 * In memory tiering mode, cpupid of slow memory page is
+		 * used to record page access time, so its value may be
+		 * invalid during numa balancing mode transition.
+		 */
+		return;
 	} else {
 		priv = cpupid_match_pid(p, last_cpupid);
 		if (!priv && !(flags & TNF_NO_GROUP))
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index a1be94ea80ba..40432524642a 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1811,6 +1811,13 @@ static struct ctl_table kern_table[] = {
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= &three,
 	},
+	{
+		.procname	= "numa_balancing_hot_threshold_ms",
+		.data		= &sysctl_numa_balancing_hot_threshold,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec,
+	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
 		.procname	= "sched_rt_period_us",
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cab8048eb779..1999ef14582e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1430,7 +1430,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	int page_nid = NUMA_NO_NODE;
-	int target_nid, last_cpupid = -1;
+	int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
 	bool migrated = false;
 	bool was_writable = pmd_savedwrite(oldpmd);
 	int flags = 0;
@@ -1451,7 +1451,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 		flags |= TNF_NO_GROUP;
 
 	page_nid = page_to_nid(page);
-	last_cpupid = page_cpupid_last(page);
+	if (node_is_toptier(page_nid))
+		last_cpupid = page_cpupid_last(page);
 	target_nid = numa_migrate_prep(page, vma, haddr, page_nid,
 				       &flags);
 
@@ -1769,6 +1770,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 
 	if (prot_numa) {
 		struct page *page;
+		bool toptier;
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
 		 * data is likely to be read-cached on the local CPU and
@@ -1781,13 +1783,18 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 			goto unlock;
 
 		page = pmd_page(*pmd);
+		toptier = node_is_toptier(page_to_nid(page));
 		/*
 		 * Skip scanning top tier node if normal numa
 		 * balancing is disabled
 		 */
 		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-		    node_is_toptier(page_to_nid(page)))
+		    toptier)
 			goto unlock;
+
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    !toptier)
+			xchg_page_access_time(page, jiffies_to_msecs(jiffies));
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
diff --git a/mm/memory.c b/mm/memory.c
index dbd902f41a3a..ac49be62193d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -73,6 +73,7 @@
 #include <linux/perf_event.h>
 #include <linux/ptrace.h>
 #include <linux/vmalloc.h>
+#include <linux/sched/sysctl.h>
 
 #include <trace/events/kmem.h>
 
@@ -4379,8 +4380,16 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
 		flags |= TNF_SHARED;
 
-	last_cpupid = page_cpupid_last(page);
 	page_nid = page_to_nid(page);
+	/*
+	 * In memory tiering mode, cpupid of slow memory page is used
+	 * to record page access time.  So use default value.
+	 */
+	if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+	    !node_is_toptier(page_nid))
+		last_cpupid = (-1 & LAST_CPUPID_MASK);
+	else
+		last_cpupid = page_cpupid_last(page);
 	target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
 			&flags);
 	if (target_nid == NUMA_NO_NODE) {
diff --git a/mm/migrate.c b/mm/migrate.c
index 286c84c014dd..03006bbd4042 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -572,6 +572,18 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 	 * future migrations of this same page.
 	 */
 	cpupid = page_cpupid_xchg_last(&folio->page, -1);
+	/*
+	 * If migrate between slow and fast memory node, reset cpupid,
+	 * because that is used to record page access time in slow
+	 * memory node
+	 */
+	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) {
+		bool f_toptier = node_is_toptier(page_to_nid(&folio->page));
+		bool t_toptier = node_is_toptier(page_to_nid(&newfolio->page));
+
+		if (f_toptier != t_toptier)
+			cpupid = -1;
+	}
 	page_cpupid_xchg_last(&newfolio->page, cpupid);
 
 	folio_migrate_ksm(newfolio, folio);
diff --git a/mm/mmzone.c b/mm/mmzone.c
index eb89d6e018e2..27f9075632ee 100644
--- a/mm/mmzone.c
+++ b/mm/mmzone.c
@@ -99,4 +99,21 @@ int page_cpupid_xchg_last(struct page *page, int cpupid)
 
 	return last_cpupid;
 }
+
+unsigned int xchg_page_access_time(struct page *page, unsigned int time)
+{
+	unsigned long old_flags, flags;
+	unsigned int last_time;
+
+	time >>= PAGE_ACCESS_TIME_BUCKETS;
+	do {
+		old_flags = flags = page->flags;
+		last_time = (flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
+
+		flags &= ~(LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT);
+		flags |= (time & LAST_CPUPID_MASK) << LAST_CPUPID_PGSHIFT;
+	} while (unlikely(cmpxchg(&page->flags, old_flags, flags) != old_flags));
+
+	return last_time << PAGE_ACCESS_TIME_BUCKETS;
+}
 #endif
diff --git a/mm/mprotect.c b/mm/mprotect.c
index ddc24ca52b12..407559241b58 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -85,6 +85,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			if (prot_numa) {
 				struct page *page;
 				int nid;
+				bool toptier;
 
 				/* Avoid TLB flush if possible */
 				if (pte_protnone(oldpte))
@@ -114,14 +115,19 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				nid = page_to_nid(page);
 				if (target_node == nid)
 					continue;
+				toptier = node_is_toptier(nid);
 
 				/*
 				 * Skip scanning top tier node if normal numa
 				 * balancing is disabled
 				 */
 				if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-				    node_is_toptier(nid))
+				    toptier)
 					continue;
+				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+				    !toptier)
+					xchg_page_access_time(page,
+						jiffies_to_msecs(jiffies));
 			}
 
 			oldpte = ptep_modify_prot_start(vma, addr, pte);
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH -V10 RESEND 5/6] memory tiering: rate limit NUMA migration throughput
  2021-12-07  2:27 [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
                   ` (3 preceding siblings ...)
  2021-12-07  2:27 ` [PATCH -V10 RESEND 4/6] memory tiering: hot page selection with hint page fault latency Huang Ying
@ 2021-12-07  2:27 ` Huang Ying
  2021-12-07  2:27 ` [PATCH -V10 RESEND 6/6] memory tiering: adjust hot threshold automatically Huang Ying
  2022-01-12 16:10 ` [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Peter Zijlstra
  6 siblings, 0 replies; 22+ messages in thread
From: Huang Ying @ 2021-12-07  2:27 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

In NUMA balancing memory tiering mode, the hot slow memory pages
could be promoted to the fast memory node via NUMA balancing.  But
this incurs some overhead too, so the workload performance may
sometimes be hurt.  To avoid disturbing the workload too much in
these situations, we should make it possible to rate limit the
promotion throughput.

So, in this patch, we implement a simple rate limit algorithm as
follows.  The number of the candidate pages to be promoted to the
fast memory node via NUMA balancing is counted; if the count exceeds
the limit specified by the user, the NUMA balancing promotion will be
stopped until the next second.

A new sysctl knob kernel.numa_balancing_rate_limit_mbps is added for
the users to specify the limit.
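
For clarity, the sketch below is a single-threaded user-space model
of the idea; the helper name promotion_allowed is illustrative, and
the kernel version in the diff below uses the per-node
PGPROMOTE_CANDIDATE vmstat counter and cmpxchg() so that it is safe
under concurrent hint page faults.

    /* Sketch only: allow at most @rate_limit_pages candidate pages
     * per one-second window; the window is reset lazily when more
     * than a second has passed. */
    static unsigned long window_start_ms;
    static unsigned long window_pages;

    static int promotion_allowed(unsigned long now_ms,
                                 unsigned long rate_limit_pages,
                                 unsigned long nr_pages)
    {
            if (now_ms - window_start_ms > 1000) {
                    window_start_ms = now_ms;
                    window_pages = 0;
            }
            window_pages += nr_pages;
            return window_pages <= rate_limit_pages;
    }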

TODO: Add ABI document for new sysctl knob.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Hasan Al Maruf <hasanalmaruf@fb.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h       |  5 +++++
 include/linux/sched/sysctl.h |  1 +
 kernel/sched/fair.c          | 29 +++++++++++++++++++++++++++--
 kernel/sysctl.c              |  8 ++++++++
 mm/vmstat.c                  |  1 +
 5 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index eda6d2f09d77..f3b044993bc5 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -213,6 +213,7 @@ enum node_stat_item {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	PGPROMOTE_SUCCESS,	/* promote successfully */
+	PGPROMOTE_CANDIDATE,	/* candidate pages to promote */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
@@ -902,6 +903,10 @@ typedef struct pglist_data {
 	struct deferred_split deferred_split_queue;
 #endif
 
+#ifdef CONFIG_NUMA_BALANCING
+	unsigned long numa_ts;
+	unsigned long numa_nr_candidate;
+#endif
 	/* Fields commonly accessed by the page reclaim scanner */
 
 	/*
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 0ea43b146aee..7d937adaac0f 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -42,6 +42,7 @@ enum sched_tunable_scaling {
 #ifdef CONFIG_NUMA_BALANCING
 extern int sysctl_numa_balancing_mode;
 extern unsigned int sysctl_numa_balancing_hot_threshold;
+extern unsigned int sysctl_numa_balancing_rate_limit;
 #else
 #define sysctl_numa_balancing_mode	0
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2b78664a5ce2..7912669a2065 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1028,6 +1028,11 @@ unsigned int sysctl_numa_balancing_scan_delay = 1000;
 
 /* The page with hint page fault latency < threshold in ms is considered hot */
 unsigned int sysctl_numa_balancing_hot_threshold = 1000;
+/*
+ * Restrict the NUMA migration per second in MB for each target node
+ * if no enough free space in target node
+ */
+unsigned int sysctl_numa_balancing_rate_limit = 65536;
 
 struct numa_group {
 	refcount_t refcount;
@@ -1401,6 +1406,23 @@ static int numa_hint_fault_latency(struct page *page)
 	return (time - last_time) & PAGE_ACCESS_TIME_MASK;
 }
 
+static bool numa_migration_check_rate_limit(struct pglist_data *pgdat,
+					    unsigned long rate_limit, int nr)
+{
+	unsigned long nr_candidate;
+	unsigned long now = jiffies, last_ts;
+
+	mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE, nr);
+	nr_candidate = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+	last_ts = pgdat->numa_ts;
+	if (now > last_ts + HZ &&
+	    cmpxchg(&pgdat->numa_ts, last_ts, now) == last_ts)
+		pgdat->numa_nr_candidate = nr_candidate;
+	if (nr_candidate - pgdat->numa_nr_candidate > rate_limit)
+		return false;
+	return true;
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1415,7 +1437,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long latency, th;
+		unsigned long rate_limit, latency, th;
 
 		pgdat = NODE_DATA(dst_nid);
 		if (pgdat_free_space_enough(pgdat))
@@ -1426,7 +1448,10 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 		if (latency > th)
 			return false;
 
-		return true;
+		rate_limit =
+			sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
+		return numa_migration_check_rate_limit(pgdat, rate_limit,
+						       thp_nr_pages(page));
 	}
 
 	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 40432524642a..7be964eb0d13 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1818,6 +1818,14 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= proc_dointvec,
 	},
+	{
+		.procname	= "numa_balancing_rate_limit_mbps",
+		.data		= &sysctl_numa_balancing_rate_limit,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= SYSCTL_ZERO,
+	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
 		.procname	= "sched_rt_period_us",
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 53a6e92b1efb..787a012de3e2 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1244,6 +1244,7 @@ const char * const vmstat_text[] = {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	"pgpromote_success",
+	"pgpromote_candidate",
 #endif
 
 	/* enum writeback_stat_item counters */
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH -V10 RESEND 6/6] memory tiering: adjust hot threshold automatically
  2021-12-07  2:27 [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
                   ` (4 preceding siblings ...)
  2021-12-07  2:27 ` [PATCH -V10 RESEND 5/6] memory tiering: rate limit NUMA migration throughput Huang Ying
@ 2021-12-07  2:27 ` Huang Ying
  2022-01-12 16:10 ` [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Peter Zijlstra
  6 siblings, 0 replies; 22+ messages in thread
From: Huang Ying @ 2021-12-07  2:27 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

It isn't easy for the administrator to determine the hot threshold.
So in this patch, a method to adjust the hot threshold automatically
is implemented.  The basic idea is to control the number of the
candidate promotion pages to match the promotion rate limit.  If the
hint page fault latency of a page is less than the hot threshold, we
will try to promote the page, and the page is called the candidate
promotion page.

If the number of the candidate promotion pages in the statistics
interval is much higher than the promotion rate limit, the hot
threshold will be decreased to reduce the number of the candidate
promotion pages.  If it is much lower, the hot threshold will be
increased to increase the number of the candidate promotion pages.
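
A simplified sketch of one adjustment step follows; the helper name
adjust_threshold is illustrative, while the +-10% band and the 16
adjustment steps mirror numa_migration_adjust_threshold() in the diff
below.

    #define ADJUST_STEPS    16

    /* Sketch only: nudge the hot threshold so that the number of
     * candidate pages per statistics interval tracks the target
     * derived from the rate limit. */
    static unsigned long adjust_threshold(unsigned long th,
                                          unsigned long max_th,
                                          unsigned long candidates,
                                          unsigned long target)
    {
            unsigned long step = max_th / ADJUST_STEPS;

            if (candidates > target + target / 10)
                    th = th > 2 * step ? th - step : step;
            else if (candidates < target - target / 10)
                    th = th + step < max_th ? th + step : max_th;
            return th;
    }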

To make the above method work, in each statistics interval, the total
number of the pages to check (on which the hint page faults occur)
and the hot/cold distribution need to be stable.  Because the page
tables are scanned linearly in NUMA balancing, but the hot/cold
distribution isn't uniform along the address space, the statistics
interval should be larger than the NUMA balancing scan period.  So in
this patch, the max scan period is used as the statistics interval,
and it works well in our tests.

The sysctl knob kernel.numa_balancing_hot_threshold_ms becomes the
initial value and max value of the hot threshold.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Hasan Al Maruf <hasanalmaruf@fb.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h       |  3 ++
 include/linux/sched/sysctl.h |  2 ++
 kernel/sched/core.c          | 15 +++++++++
 kernel/sched/fair.c          | 64 +++++++++++++++++++++++++++++++++---
 kernel/sysctl.c              |  3 +-
 5 files changed, 81 insertions(+), 6 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f3b044993bc5..4ac0ae1cf15d 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -906,6 +906,9 @@ typedef struct pglist_data {
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned long numa_ts;
 	unsigned long numa_nr_candidate;
+	unsigned long numa_threshold_ts;
+	unsigned long numa_threshold_nr_candidate;
+	unsigned long numa_threshold;
 #endif
 	/* Fields commonly accessed by the page reclaim scanner */
 
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index 7d937adaac0f..ff2c43e8ebac 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -84,6 +84,8 @@ int sysctl_sched_uclamp_handler(struct ctl_table *table, int write,
 		void *buffer, size_t *lenp, loff_t *ppos);
 int sysctl_numa_balancing(struct ctl_table *table, int write, void *buffer,
 		size_t *lenp, loff_t *ppos);
+int sysctl_numa_balancing_threshold(struct ctl_table *table, int write, void *buffer,
+		size_t *lenp, loff_t *ppos);
 int sysctl_schedstats(struct ctl_table *table, int write, void *buffer,
 		size_t *lenp, loff_t *ppos);
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5dcabc98432f..1cca2c8a3423 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4285,6 +4285,18 @@ void set_numabalancing_state(bool enabled)
 }
 
 #ifdef CONFIG_PROC_SYSCTL
+static void reset_memory_tiering(void)
+{
+	struct pglist_data *pgdat;
+
+	for_each_online_pgdat(pgdat) {
+		pgdat->numa_threshold = 0;
+		pgdat->numa_threshold_nr_candidate =
+			node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		pgdat->numa_threshold_ts = jiffies;
+	}
+}
+
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -4301,6 +4313,9 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 	if (err < 0)
 		return err;
 	if (write) {
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+		    (state & NUMA_BALANCING_MEMORY_TIERING))
+			reset_memory_tiering();
 		sysctl_numa_balancing_mode = state;
 		__set_numabalancing_state(state);
 	}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 7912669a2065..daa978d2d70d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1423,6 +1423,54 @@ static bool numa_migration_check_rate_limit(struct pglist_data *pgdat,
 	return true;
 }
 
+int sysctl_numa_balancing_threshold(struct ctl_table *table, int write, void *buffer,
+		size_t *lenp, loff_t *ppos)
+{
+	int err;
+	struct pglist_data *pgdat;
+
+	if (write && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	err = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+	if (err < 0 || !write)
+		return err;
+
+	for_each_online_pgdat(pgdat)
+		pgdat->numa_threshold = 0;
+
+	return err;
+}
+
+#define NUMA_MIGRATION_ADJUST_STEPS	16
+
+static void numa_migration_adjust_threshold(struct pglist_data *pgdat,
+					    unsigned long rate_limit,
+					    unsigned long ref_th)
+{
+	unsigned long now = jiffies, last_th_ts, th_period;
+	unsigned long unit_th, th;
+	unsigned long nr_cand, ref_cand, diff_cand;
+
+	th_period = msecs_to_jiffies(sysctl_numa_balancing_scan_period_max);
+	last_th_ts = pgdat->numa_threshold_ts;
+	if (now > last_th_ts + th_period &&
+	    cmpxchg(&pgdat->numa_threshold_ts, last_th_ts, now) == last_th_ts) {
+		ref_cand = rate_limit *
+			sysctl_numa_balancing_scan_period_max / 1000;
+		nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		diff_cand = nr_cand - pgdat->numa_threshold_nr_candidate;
+		unit_th = ref_th / NUMA_MIGRATION_ADJUST_STEPS;
+		th = pgdat->numa_threshold ? : ref_th;
+		if (diff_cand > ref_cand * 11 / 10)
+			th = max(th - unit_th, unit_th);
+		else if (diff_cand < ref_cand * 9 / 10)
+			th = min(th + unit_th, ref_th);
+		pgdat->numa_threshold_nr_candidate = nr_cand;
+		pgdat->numa_threshold = th;
+	}
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1437,19 +1485,25 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long rate_limit, latency, th;
+		unsigned long rate_limit, latency, th, def_th;
 
 		pgdat = NODE_DATA(dst_nid);
-		if (pgdat_free_space_enough(pgdat))
+		if (pgdat_free_space_enough(pgdat)) {
+			/* workload changed, reset hot threshold */
+			pgdat->numa_threshold = 0;
 			return true;
+		}
 
-		th = sysctl_numa_balancing_hot_threshold;
+		def_th = sysctl_numa_balancing_hot_threshold;
+		rate_limit =
+			sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
+		numa_migration_adjust_threshold(pgdat, rate_limit, def_th);
+
+		th = pgdat->numa_threshold ? : def_th;
 		latency = numa_hint_fault_latency(page);
 		if (latency > th)
 			return false;
 
-		rate_limit =
-			sysctl_numa_balancing_rate_limit << (20 - PAGE_SHIFT);
 		return numa_migration_check_rate_limit(pgdat, rate_limit,
 						       thp_nr_pages(page));
 	}
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 7be964eb0d13..38892422ffac 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1816,7 +1816,8 @@ static struct ctl_table kern_table[] = {
 		.data		= &sysctl_numa_balancing_hot_threshold,
 		.maxlen		= sizeof(unsigned int),
 		.mode		= 0644,
-		.proc_handler	= proc_dointvec,
+		.proc_handler	= sysctl_numa_balancing_threshold,
+		.extra1		= SYSCTL_ZERO,
 	},
 	{
 		.procname	= "numa_balancing_rate_limit_mbps",
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 1/6] NUMA Balancing: add page promotion counter
  2021-12-07  2:27 ` [PATCH -V10 RESEND 1/6] NUMA Balancing: add page promotion counter Huang Ying
@ 2021-12-07  6:05   ` Hasan Al Maruf
  2021-12-08  2:16     ` Huang, Ying
  2021-12-17  7:25   ` Baolin Wang
  1 sibling, 1 reply; 22+ messages in thread
From: Hasan Al Maruf @ 2021-12-07  6:05 UTC (permalink / raw)
  To: ying.huang
  Cc: akpm, dave.hansen, feng.tang, hasanalmaruf, linux-kernel,
	linux-mm, mgorman, mgorman, mhocko, osalvador, peterz, riel,
	shakeelb, shy828301, weixugc, ziy

Hi Huang,

>+#ifdef CONFIG_NUMA_BALANCING
>+	PGPROMOTE_SUCCESS,	/* promote successfully */

I find that a breakdown of Anon and File page promotion can often be useful
for understanding an application's behavior (i.e. what kind of pages are
moved to the remote node and later promoted). What do you think about
adding counters for such a breakdown?

What's your thought on adding counters for the different failure reasons?
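
Purely for illustration (these counters are hypothetical and not part of
any posted patch), such a breakdown could look like,

	/* Hypothetical breakdown counters, for illustration only */
	enum promote_stat_item {
		PGPROMOTE_SUCCESS_ANON,	/* promoted anonymous pages */
		PGPROMOTE_SUCCESS_FILE,	/* promoted file-backed pages */
		PGPROMOTE_FAIL_ISOLATE,	/* page could not be isolated */
		PGPROMOTE_FAIL_FULL,	/* target node too close to full */
		NR_PROMOTE_STAT_ITEMS,
	};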

Best,
Hasan

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 2/6] NUMA balancing: optimize page placement for memory tiering system
  2021-12-07  2:27 ` [PATCH -V10 RESEND 2/6] NUMA balancing: optimize page placement for memory tiering system Huang Ying
@ 2021-12-07  6:36   ` Hasan Al Maruf
  2021-12-08  3:16     ` Huang, Ying
  2021-12-17  7:35   ` Baolin Wang
  1 sibling, 1 reply; 22+ messages in thread
From: Hasan Al Maruf @ 2021-12-07  6:36 UTC (permalink / raw)
  To: ying.huang
  Cc: akpm, dave.hansen, feng.tang, hasanalmaruf, linux-kernel,
	linux-mm, mgorman, mgorman, mhocko, osalvador, peterz, riel,
	shakeelb, shy828301, weixugc, ziy

Hi Huang,

>+void set_numabalancing_state(bool enabled)
>+{
>+	if (enabled)
>+		sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
>+	else
>+		sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
>+	__set_numabalancing_state(enabled);
>+}
>+

One of the properties of optimized NUMA Balancing for tiered memory is that
we are not going to scan top-tier nodes, as promotion doesn't make sense
there (implemented in the next patch [3/6]). However, if a system has only
a single memory node with CPUs, does it make sense to run
`NUMA_BALANCING_NORMAL` mode there? What do you think about downgrading to
`NUMA_BALANCING_MEMORY_TIERING` mode if a user sets up NUMA Balancing in
the default `NUMA_BALANCING_NORMAL` mode on a single top-tier memory
node?

>diff --git a/mm/vmscan.c b/mm/vmscan.c
>index c266e64d2f7e..5edb5dfa8900 100644
>--- a/mm/vmscan.c
>+++ b/mm/vmscan.c
>@@ -56,6 +56,7 @@
>
> #include <linux/swapops.h>
> #include <linux/balloon_compaction.h>
>+#include <linux/sched/sysctl.h>
>
> #include "internal.h"
>
>@@ -3919,6 +3920,12 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
> 	return false;
> }
>
>+/*
>+ * Keep the free pages on fast memory node a little more than the high
>+ * watermark to accommodate the promoted pages.
>+ */
>+#define NUMA_BALANCING_PROMOTE_WATERMARK	(10UL * 1024 * 1024 >> PAGE_SHIFT)
>+
> /*
>  * Returns true if there is an eligible zone balanced for the request order
>  * and highest_zoneidx
>@@ -3940,6 +3947,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
> 			continue;
>
> 		mark = high_wmark_pages(zone);
>+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
>+		    numa_demotion_enabled &&
>+		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
>+			unsigned long promote_mark;
>+
>+			promote_mark = min(NUMA_BALANCING_PROMOTE_WATERMARK,
>+					   pgdat->node_present_pages >> 6);
>+			mark += promote_mark;
>+		}
> 		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
> 			return true;
> 	}

This can be moved to a different patch. I think this patch [2/6] can be
split into two basic patches -- 1. NUMA Balancing interface for tiered
memory and 2. maintaining a headroom for promotion.

Instead of having a static value for `NUMA_BALANCING_PROMOTE_WATERMARK`,
what about decoupling the allocation and reclamation and adding a
user-space interface for controlling them?

Do you think patches [2/5] and [3/5] of this series can be merged into
your current patchset?

https://lore.kernel.org/all/cover.1637778851.git.hasanalmaruf@fb.com/

Best,
Hasan

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 1/6] NUMA Balancing: add page promotion counter
  2021-12-07  6:05   ` Hasan Al Maruf
@ 2021-12-08  2:16     ` Huang, Ying
  0 siblings, 0 replies; 22+ messages in thread
From: Huang, Ying @ 2021-12-08  2:16 UTC (permalink / raw)
  To: Hasan Al Maruf
  Cc: akpm, dave.hansen, feng.tang, hasanalmaruf, linux-kernel,
	linux-mm, mgorman, mgorman, mhocko, osalvador, peterz, riel,
	shakeelb, shy828301, weixugc, ziy

Hasan Al Maruf <hasan3050@gmail.com> writes:

> Hi Huang,
>
>>+#ifdef CONFIG_NUMA_BALANCING
>>+	PGPROMOTE_SUCCESS,	/* promote successfully */
>
> I find a breakdown of Anon and File page promotion can often be useful to
> understand an application's behavior (i.e. what kind of pages are moved to
> remote node and later being promoted). What do you think about adding
> counters for such a breakdown?
>
> What's your thought on adding counters for failures on different reasons?

I think that all these provide helpful information.  But I think that we
can add them in separate patches.  That will make reviewing simpler.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 2/6] NUMA balancing: optimize page placement for memory tiering system
  2021-12-07  6:36   ` Hasan Al Maruf
@ 2021-12-08  3:16     ` Huang, Ying
  0 siblings, 0 replies; 22+ messages in thread
From: Huang, Ying @ 2021-12-08  3:16 UTC (permalink / raw)
  To: Hasan Al Maruf
  Cc: akpm, dave.hansen, feng.tang, hasanalmaruf, linux-kernel,
	linux-mm, mgorman, mgorman, mhocko, osalvador, peterz, riel,
	shakeelb, shy828301, weixugc, ziy, Johannes Weiner

Hasan Al Maruf <hasan3050@gmail.com> writes:

> Hi Huang,
>
>>+void set_numabalancing_state(bool enabled)
>>+{
>>+	if (enabled)
>>+		sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
>>+	else
>>+		sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
>>+	__set_numabalancing_state(enabled);
>>+}
>>+
>
> One of the properties of optimized NUMA Balancing for tiered memory is we
> are not going to scan top-tier nodes as promotion doesn't make sense there
> (implemented in the next patch [3/6]). However, if a system has only
> single memory node with CPU, does it make sense to run
> `NUMA_BALANCING_NORMAL` mode there? What do you think about downgrading to
> `NUMA_BALANCING_MEMORY_TIERING` mode if a user setup NUMA Balancing on
> the default mode of `NUMA_BALANCING_NORMAL` on a single toptier memory
> node?

Consider a system with only 1 NUMA node and no PMEM: should we refuse
to enable NUMA balancing there at all?

Per my understanding, the philosophy behind this is to keep things as
simple as possible instead of as smart as possible.  Do you agree?

>>diff --git a/mm/vmscan.c b/mm/vmscan.c
>>index c266e64d2f7e..5edb5dfa8900 100644
>>--- a/mm/vmscan.c
>>+++ b/mm/vmscan.c
>>@@ -56,6 +56,7 @@
>>
>> #include <linux/swapops.h>
>> #include <linux/balloon_compaction.h>
>>+#include <linux/sched/sysctl.h>
>>
>> #include "internal.h"
>>
>>@@ -3919,6 +3920,12 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
>> 	return false;
>> }
>>
>>+/*
>>+ * Keep the free pages on fast memory node a little more than the high
>>+ * watermark to accommodate the promoted pages.
>>+ */
>>+#define NUMA_BALANCING_PROMOTE_WATERMARK	(10UL * 1024 * 1024 >> PAGE_SHIFT)
>>+
>> /*
>>  * Returns true if there is an eligible zone balanced for the request order
>>  * and highest_zoneidx
>>@@ -3940,6 +3947,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>> 			continue;
>>
>> 		mark = high_wmark_pages(zone);
>>+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
>>+		    numa_demotion_enabled &&
>>+		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
>>+			unsigned long promote_mark;
>>+
>>+			promote_mark = min(NUMA_BALANCING_PROMOTE_WATERMARK,
>>+					   pgdat->node_present_pages >> 6);
>>+			mark += promote_mark;
>>+		}
>> 		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
>> 			return true;
>> 	}
>
> This can be moved to a different patch. I think, this patch [2/6] can be
> splitted into two basic patches -- 1. NUMA Balancing interface for tiered
> memory and 2. maintaining a headroom for promotion.

Johannes has taught me that, if we introduce a new function, variable,
or interface, it's better to introduce its user together, so that we
can determine whether it's really necessary, whether the definition is
suitable, etc.  I think that makes sense, so I try to do that in this
patchset too.

As in [2/5] of your patchset below, another possibility is to combine
1. the NUMA balancing interface for tiered memory and 2. skipping the
scan of top-tier memory in NUMA balancing into one patch.  One concern
is that although the latter is an optimization, there's almost no
measurable performance difference.  That makes it hard to justify
extending the user space interface.  Do you have better data to support
this?

> Instead of having a static value for `NUMA_BALANCING_PROMOTE_WATERMARK`
> what about decoupling the allocation and reclamation and add a user-space
> interface for controling them?

This means adding a new user space ABI.  Because we may need to support
the new ABI forever, we should have strong justification for adding it.
I am not against adding an ABI to adjust the promotion watermark in
general.  I think that the path could be,

- Start with the simplest solution that works without introducing a new
  ABI, like something in this patch, possibly revised.

- Then try to add a new ABI in a separate patch with enough
  justification, for example, with much improved performance data.

Do you agree?

> Do you think patch [2/5] and [3/5] of this series can be merged to your
> current patchset?
>
> https://lore.kernel.org/all/cover.1637778851.git.hasanalmaruf@fb.com/

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 1/6] NUMA Balancing: add page promotion counter
  2021-12-07  2:27 ` [PATCH -V10 RESEND 1/6] NUMA Balancing: add page promotion counter Huang Ying
  2021-12-07  6:05   ` Hasan Al Maruf
@ 2021-12-17  7:25   ` Baolin Wang
  1 sibling, 0 replies; 22+ messages in thread
From: Baolin Wang @ 2021-12-17  7:25 UTC (permalink / raw)
  To: Huang Ying, Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Yang Shi, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Zi Yan,
	Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf



On 12/7/2021 10:27 AM, Huang Ying wrote:
> In a system with multiple memory types, e.g. DRAM and PMEM, the CPU
> and DRAM in one socket will be put in one NUMA node as before, while
> the PMEM will be put in another NUMA node as described in the
> description of the commit c221c0b0308f ("device-dax: "Hotplug"
> persistent memory for use like normal RAM").  So, the NUMA balancing
> mechanism will identify all PMEM accesses as remote access and try to
> promote the PMEM pages to DRAM.
> 
> To distinguish the number of the inter-type promoted pages from that
> of the inter-socket migrated pages.  A new vmstat count is added.  The
> counter is per-node (count in the target node).  So this can be used
> to identify promotion imbalance among the NUMA nodes.
> 
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Reviewed-by: Yang Shi <shy828301@gmail.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: osalvador <osalvador@suse.de>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Hasan Al Maruf <hasanalmaruf@fb.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---

Tested on my tiered memory system, and it works well. Please feel free to add:
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

>   include/linux/mmzone.h |  3 +++
>   include/linux/node.h   |  5 +++++
>   mm/migrate.c           | 13 ++++++++++---
>   mm/vmstat.c            |  3 +++
>   4 files changed, 21 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 58e744b78c2c..eda6d2f09d77 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -210,6 +210,9 @@ enum node_stat_item {
>   	NR_PAGETABLE,		/* used for pagetables */
>   #ifdef CONFIG_SWAP
>   	NR_SWAPCACHE,
> +#endif
> +#ifdef CONFIG_NUMA_BALANCING
> +	PGPROMOTE_SUCCESS,	/* promote successfully */
>   #endif
>   	NR_VM_NODE_STAT_ITEMS
>   };
> diff --git a/include/linux/node.h b/include/linux/node.h
> index bb21fd631b16..81bbf1c0afd3 100644
> --- a/include/linux/node.h
> +++ b/include/linux/node.h
> @@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
>   
>   #define to_node(device) container_of(device, struct node, dev)
>   
> +static inline bool node_is_toptier(int node)
> +{
> +	return node_state(node, N_CPU);
> +}
> +
>   #endif /* _LINUX_NODE_H_ */
> diff --git a/mm/migrate.c b/mm/migrate.c
> index cf25b00f03c8..b7c27abb0e5c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -2141,6 +2141,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>   	pg_data_t *pgdat = NODE_DATA(node);
>   	int isolated;
>   	int nr_remaining;
> +	int nr_succeeded;
>   	LIST_HEAD(migratepages);
>   	new_page_t *new;
>   	bool compound;
> @@ -2179,7 +2180,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>   
>   	list_add(&page->lru, &migratepages);
>   	nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
> -				     MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
> +				     MIGRATE_ASYNC, MR_NUMA_MISPLACED,
> +				     &nr_succeeded);
>   	if (nr_remaining) {
>   		if (!list_empty(&migratepages)) {
>   			list_del(&page->lru);
> @@ -2188,8 +2190,13 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
>   			putback_lru_page(page);
>   		}
>   		isolated = 0;
> -	} else
> -		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
> +	}
> +	if (nr_succeeded) {
> +		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
> +		if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
> +			mod_node_page_state(NODE_DATA(node), PGPROMOTE_SUCCESS,
> +					    nr_succeeded);
> +	}
>   	BUG_ON(!list_empty(&migratepages));
>   	return isolated;
>   
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index d701c335628c..53a6e92b1efb 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -1242,6 +1242,9 @@ const char * const vmstat_text[] = {
>   #ifdef CONFIG_SWAP
>   	"nr_swapcached",
>   #endif
> +#ifdef CONFIG_NUMA_BALANCING
> +	"pgpromote_success",
> +#endif
>   
>   	/* enum writeback_stat_item counters */
>   	"nr_dirty_threshold",

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 2/6] NUMA balancing: optimize page placement for memory tiering system
  2021-12-07  2:27 ` [PATCH -V10 RESEND 2/6] NUMA balancing: optimize page placement for memory tiering system Huang Ying
  2021-12-07  6:36   ` Hasan Al Maruf
@ 2021-12-17  7:35   ` Baolin Wang
  1 sibling, 0 replies; 22+ messages in thread
From: Baolin Wang @ 2021-12-17  7:35 UTC (permalink / raw)
  To: Huang Ying, Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Andrew Morton, Michal Hocko,
	Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi, Zi Yan, Wei Xu,
	osalvador, Shakeel Butt, Hasan Al Maruf



On 12/7/2021 10:27 AM, Huang Ying wrote:
> With the advent of various new memory types, some machines will have
> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
> memory subsystem of these machines can be called memory tiering
> system, because the performance of the different types of memory are
> usually different.
> 
> In such system, because of the memory accessing pattern changing etc,
> some pages in the slow memory may become hot globally.  So in this
> patch, the NUMA balancing mechanism is enhanced to optimize the page
> placement among the different memory types according to hot/cold
> dynamically.
> 
> In a typical memory tiering system, there are CPUs, fast memory and
> slow memory in each physical NUMA node.  The CPUs and the fast memory
> will be put in one logical node (called fast memory node), while the
> slow memory will be put in another (faked) logical node (called slow
> memory node).  That is, the fast memory is regarded as local while the
> slow memory is regarded as remote.  So it's possible for the recently
> accessed pages in the slow memory node to be promoted to the fast
> memory node via the existing NUMA balancing mechanism.
> 
> The original NUMA balancing mechanism will stop to migrate pages if the free
> memory of the target node will become below the high watermark.  This
> is a reasonable policy if there's only one memory type.  But this
> makes the original NUMA balancing mechanism almost not work to optimize page
> placement among different memory types.  Details are as follows.
> 
> It's the common cases that the working-set size of the workload is
> larger than the size of the fast memory nodes.  Otherwise, it's
> unnecessary to use the slow memory at all.  So in the common cases,
> there are almost always no enough free pages in the fast memory nodes,
> so that the globally hot pages in the slow memory node cannot be
> promoted to the fast memory node.  To solve the issue, we have 2
> choices as follows,
> 
> a. Ignore the free pages watermark checking when promoting hot pages
>     from the slow memory node to the fast memory node.  This will
>     create some memory pressure in the fast memory node, thus trigger
>     the memory reclaiming.  So that, the cold pages in the fast memory
>     node will be demoted to the slow memory node.
> 
> b. Make kswapd of the fast memory node to reclaim pages until the free
>     pages are a little more (about 10MB) than the high watermark.  Then,
>     if the free pages of the fast memory node reaches high watermark, and
>     some hot pages need to be promoted, kswapd of the fast memory node
>     will be waken up to demote some cold pages in the fast memory node to
>     the slow memory node.  This will free some extra space in the fast
>     memory node, so the hot pages in the slow memory node can be
>     promoted to the fast memory node.
> 
> The choice "a" will create the memory pressure in the fast memory
> node.  If the memory pressure of the workload is high, the memory
> pressure may become so high that the memory allocation latency of the
> workload is influenced, e.g. the direct reclaiming may be triggered.
> 
> The choice "b" works much better at this aspect.  If the memory
> pressure of the workload is high, the hot pages promotion will stop
> earlier because its allocation watermark is higher than that of the
> normal memory allocation.  So in this patch, choice "b" is
> implemented.
> 
> In addition to the original page placement optimization among sockets,
> the NUMA balancing mechanism is extended to be used to optimize page
> placement according to hot/cold among different memory types.  So the
> sysctl user space interface (numa_balancing) is extended in a backward
> compatible way as follow, so that the users can enable/disable these
> functionality individually.
> 
> The sysctl is converted from a Boolean value to a bits field.  The
> definition of the flags is,
> 
> - 0x0: NUMA_BALANCING_DISABLED
> - 0x1: NUMA_BALANCING_NORMAL
> - 0x2: NUMA_BALANCING_MEMORY_TIERING
> 
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: osalvador <osalvador@suse.de>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Hasan Al Maruf <hasanalmaruf@fb.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>   Documentation/admin-guide/sysctl/kernel.rst | 29 ++++++++++++++-------
>   include/linux/sched/sysctl.h                | 10 +++++++
>   kernel/sched/core.c                         | 21 ++++++++++++---
>   kernel/sysctl.c                             |  3 ++-
>   mm/migrate.c                                | 19 ++++++++++++--
>   mm/vmscan.c                                 | 16 ++++++++++++
>   6 files changed, 82 insertions(+), 16 deletions(-)
> 
> diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
> index 0e486f41185e..5502ea6083ba 100644
> --- a/Documentation/admin-guide/sysctl/kernel.rst
> +++ b/Documentation/admin-guide/sysctl/kernel.rst
> @@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
>   numa_balancing
>   ==============
>   
> -Enables/disables automatic page fault based NUMA memory
> -balancing. Memory is moved automatically to nodes
> -that access it often.
> +Enables/disables and configure automatic page fault based NUMA memory
> +balancing.  Memory is moved automatically to nodes that access it
> +often.  The value to set can be the result to OR the following,
>   
> -Enables/disables automatic NUMA memory balancing. On NUMA machines, there
> -is a performance penalty if remote memory is accessed by a CPU. When this
> -feature is enabled the kernel samples what task thread is accessing memory
> -by periodically unmapping pages and later trapping a page fault. At the
> -time of the page fault, it is determined if the data being accessed should
> -be migrated to a local memory node.
> += =================================
> +0x0 NUMA_BALANCING_DISABLED
> +0x1 NUMA_BALANCING_NORMAL
> +0x2 NUMA_BALANCING_MEMORY_TIERING
> += =================================
> +
> +Or NUMA_BALANCING_NORMAL to optimize page placement among different
> +NUMA nodes to reduce remote accessing.  On NUMA machines, there is a
> +performance penalty if remote memory is accessed by a CPU. When this
> +feature is enabled the kernel samples what task thread is accessing
> +memory by periodically unmapping pages and later trapping a page
> +fault. At the time of the page fault, it is determined if the data
> +being accessed should be migrated to a local memory node.
>   
>   The unmapping of pages and trapping faults incur additional overhead that
>   ideally is offset by improved memory locality but there is no universal
> @@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
>   numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
>   numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
>   
> +Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
> +different types of memory (represented as different NUMA nodes) to
> +place the hot pages in the fast memory.  This is implemented based on
> +unmapping and page fault too.
>   
>   numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb
>   ===============================================================================================================================
> diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
> index 304f431178fd..bc54c1d75d6d 100644
> --- a/include/linux/sched/sysctl.h
> +++ b/include/linux/sched/sysctl.h
> @@ -35,6 +35,16 @@ enum sched_tunable_scaling {
>   	SCHED_TUNABLESCALING_END,
>   };
>   
> +#define NUMA_BALANCING_DISABLED		0x0
> +#define NUMA_BALANCING_NORMAL		0x1
> +#define NUMA_BALANCING_MEMORY_TIERING	0x2
> +
> +#ifdef CONFIG_NUMA_BALANCING
> +extern int sysctl_numa_balancing_mode;
> +#else
> +#define sysctl_numa_balancing_mode	0
> +#endif
> +
>   /*
>    *  control realtime throttling:
>    *
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 3c9b0fda64ac..5dcabc98432f 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4265,7 +4265,9 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
>   
>   #ifdef CONFIG_NUMA_BALANCING
>   
> -void set_numabalancing_state(bool enabled)
> +int sysctl_numa_balancing_mode;
> +
> +static void __set_numabalancing_state(bool enabled)
>   {
>   	if (enabled)
>   		static_branch_enable(&sched_numa_balancing);
> @@ -4273,13 +4275,22 @@ void set_numabalancing_state(bool enabled)
>   		static_branch_disable(&sched_numa_balancing);
>   }
>   
> +void set_numabalancing_state(bool enabled)
> +{
> +	if (enabled)
> +		sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
> +	else
> +		sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
> +	__set_numabalancing_state(enabled);
> +}
> +
>   #ifdef CONFIG_PROC_SYSCTL
>   int sysctl_numa_balancing(struct ctl_table *table, int write,
>   			  void *buffer, size_t *lenp, loff_t *ppos)
>   {
>   	struct ctl_table t;
>   	int err;
> -	int state = static_branch_likely(&sched_numa_balancing);
> +	int state = sysctl_numa_balancing_mode;
>   
>   	if (write && !capable(CAP_SYS_ADMIN))
>   		return -EPERM;
> @@ -4289,8 +4300,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
>   	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
>   	if (err < 0)
>   		return err;
> -	if (write)
> -		set_numabalancing_state(state);
> +	if (write) {
> +		sysctl_numa_balancing_mode = state;
> +		__set_numabalancing_state(state);
> +	}
>   	return err;
>   }
>   #endif
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 083be6af29d7..a1be94ea80ba 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -115,6 +115,7 @@ static int sixty = 60;
>   
>   static int __maybe_unused neg_one = -1;
>   static int __maybe_unused two = 2;
> +static int __maybe_unused three = 3;
>   static int __maybe_unused four = 4;
>   static unsigned long zero_ul;
>   static unsigned long one_ul = 1;
> @@ -1808,7 +1809,7 @@ static struct ctl_table kern_table[] = {
>   		.mode		= 0644,
>   		.proc_handler	= sysctl_numa_balancing,
>   		.extra1		= SYSCTL_ZERO,
> -		.extra2		= SYSCTL_ONE,
> +		.extra2		= &three,
>   	},
>   #endif /* CONFIG_NUMA_BALANCING */
>   	{
> diff --git a/mm/migrate.c b/mm/migrate.c
> index b7c27abb0e5c..286c84c014dd 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -50,6 +50,7 @@
>   #include <linux/ptrace.h>
>   #include <linux/oom.h>
>   #include <linux/memory.h>
> +#include <linux/sched/sysctl.h>
>   
>   #include <asm/tlbflush.h>
>   
> @@ -2103,16 +2104,30 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>   {
>   	int page_lru;
>   	int nr_pages = thp_nr_pages(page);
> +	int order = compound_order(page);
>   
> -	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
> +	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>   
>   	/* Do not migrate THP mapped by multiple processes */
>   	if (PageTransHuge(page) && total_mapcount(page) > 1)
>   		return 0;
>   
>   	/* Avoid migrating to a node that is nearly full */
> -	if (!migrate_balanced_pgdat(pgdat, nr_pages))
> +	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> +		int z;
> +
> +		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) ||
> +		    !numa_demotion_enabled)
> +			return 0;
> +		if (next_demotion_node(pgdat->node_id) == NUMA_NO_NODE)
> +			return 0;
> +		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> +			if (populated_zone(pgdat->node_zones + z))
> +				break;
> +		}
> +		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
>   		return 0;
> +	}
>   
>   	if (isolate_lru_page(page))
>   		return 0;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c266e64d2f7e..5edb5dfa8900 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -56,6 +56,7 @@
>   
>   #include <linux/swapops.h>
>   #include <linux/balloon_compaction.h>
> +#include <linux/sched/sysctl.h>
>   
>   #include "internal.h"
>   
> @@ -3919,6 +3920,12 @@ static bool pgdat_watermark_boosted(pg_data_t *pgdat, int highest_zoneidx)
>   	return false;
>   }
>   
> +/*
> + * Keep the free pages on fast memory node a little more than the high
> + * watermark to accommodate the promoted pages.
> + */
> +#define NUMA_BALANCING_PROMOTE_WATERMARK	(10UL * 1024 * 1024 >> PAGE_SHIFT)

From our testing, the fixed promote watermark is not suitable for all
scenarios, but as you said, I agree that we can start from the simplest
solution that works. So please feel free to add:

Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

> +
>   /*
>    * Returns true if there is an eligible zone balanced for the request order
>    * and highest_zoneidx
> @@ -3940,6 +3947,15 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>   			continue;
>   
>   		mark = high_wmark_pages(zone);
> +		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
> +		    numa_demotion_enabled &&
> +		    next_demotion_node(pgdat->node_id) != NUMA_NO_NODE) {
> +			unsigned long promote_mark;
> +
> +			promote_mark = min(NUMA_BALANCING_PROMOTE_WATERMARK,
> +					   pgdat->node_present_pages >> 6);
> +			mark += promote_mark;
> +		}
>   		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
>   			return true;
>   	}

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 3/6] memory tiering: skip to scan fast memory
  2021-12-07  2:27 ` [PATCH -V10 RESEND 3/6] memory tiering: skip to scan fast memory Huang Ying
@ 2021-12-17  7:41   ` Baolin Wang
  0 siblings, 0 replies; 22+ messages in thread
From: Baolin Wang @ 2021-12-17  7:41 UTC (permalink / raw)
  To: Huang Ying, Peter Zijlstra, Mel Gorman
  Cc: linux-mm, linux-kernel, Feng Tang, Dave Hansen, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Yang Shi, Zi Yan, Wei Xu,
	osalvador, Shakeel Butt, Hasan Al Maruf



On 12/7/2021 10:27 AM, Huang Ying wrote:
> If the NUMA balancing isn't used to optimize the page placement among
> sockets but only among memory types, the hot pages in the fast memory
> node couldn't be migrated (promoted) to anywhere.  So it's unnecessary
> to scan the pages in the fast memory node via changing their PTE/PMD
> mapping to be PROT_NONE.  So that the page faults could be avoided
> too.
> 
> In the test, if only the memory tiering NUMA balancing mode is enabled, the
> number of the NUMA balancing hint faults for the DRAM node is reduced to
> almost 0 with the patch.  While the benchmark score doesn't change
> visibly.
> 
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: osalvador <osalvador@suse.de>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: Hasan Al Maruf <hasanalmaruf@fb.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---

LGTM. Please feel free to add:
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system
  2021-12-07  2:27 [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
                   ` (5 preceding siblings ...)
  2021-12-07  2:27 ` [PATCH -V10 RESEND 6/6] memory tiering: adjust hot threshold automatically Huang Ying
@ 2022-01-12 16:10 ` Peter Zijlstra
  2022-01-13  7:19   ` Huang, Ying
  6 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2022-01-12 16:10 UTC (permalink / raw)
  To: Huang Ying
  Cc: Mel Gorman, linux-mm, linux-kernel, Feng Tang, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

On Tue, Dec 07, 2021 at 10:27:51AM +0800, Huang Ying wrote:
> After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
> for use like normal RAM"), the PMEM could be used as the
> cost-effective volatile memory in separate NUMA nodes.  In a typical
> memory tiering system, there are CPUs, DRAM and PMEM in each physical
> NUMA node.  The CPUs and the DRAM will be put in one logical node,
> while the PMEM will be put in another (faked) logical node.

So what does a system like that actually look like, SLIT table wise, and
how does that affect init_numa_topology_type() ?

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system
  2022-01-12 16:10 ` [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Peter Zijlstra
@ 2022-01-13  7:19   ` Huang, Ying
  2022-01-13  9:49     ` Peter Zijlstra
  0 siblings, 1 reply; 22+ messages in thread
From: Huang, Ying @ 2022-01-13  7:19 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mel Gorman, linux-mm, linux-kernel, Feng Tang, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

[-- Attachment #1: Type: text/plain, Size: 3326 bytes --]

Hi, Peter,

Peter Zijlstra <peterz@infradead.org> writes:

> On Tue, Dec 07, 2021 at 10:27:51AM +0800, Huang Ying wrote:
>> After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
>> for use like normal RAM"), the PMEM could be used as the
>> cost-effective volatile memory in separate NUMA nodes.  In a typical
>> memory tiering system, there are CPUs, DRAM and PMEM in each physical
>> NUMA node.  The CPUs and the DRAM will be put in one logical node,
>> while the PMEM will be put in another (faked) logical node.
>
> So what does a system like that actually look like, SLIT table wise, and
> how does that affect init_numa_topology_type() ?

The SLIT table is as follows,

[000h 0000   4]                    Signature : "SLIT"    [System Locality Information Table]
[004h 0004   4]                 Table Length : 0000042C
[008h 0008   1]                     Revision : 01
[009h 0009   1]                     Checksum : 59
[00Ah 0010   6]                       Oem ID : "INTEL "
[010h 0016   8]                 Oem Table ID : "S2600WF "
[018h 0024   4]                 Oem Revision : 00000001
[01Ch 0028   4]              Asl Compiler ID : "INTL"
[020h 0032   4]        Asl Compiler Revision : 20091013

[024h 0036   8]                   Localities : 0000000000000004
[02Ch 0044   4]                 Locality   0 : 0A 15 11 1C
[030h 0048   4]                 Locality   1 : 15 0A 1C 11
[034h 0052   4]                 Locality   2 : 11 1C 0A 1C
[038h 0056   4]                 Locality   3 : 1C 11 1C 0A

The `numactl -H` output is as follows,

available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
node 0 size: 64136 MB
node 0 free: 5981 MB
node 1 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
node 1 size: 64466 MB
node 1 free: 10415 MB
node 2 cpus:
node 2 size: 253952 MB
node 2 free: 253920 MB
node 3 cpus:
node 3 size: 253952 MB
node 3 free: 253951 MB
node distances:
node   0   1   2   3 
  0:  10  21  17  28 
  1:  21  10  28  17 
  2:  17  28  10  28 
  3:  28  17  28  10 
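
(The locality bytes in the SLIT dump above are hexadecimal; converted to
decimal they are the same values as in the distance matrix,

	0x0A = 10    0x11 = 17    0x15 = 21    0x1C = 28

so, for example, a DRAM node is at distance 17 from its local PMEM node
and 28 from the remote PMEM node.)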

init_numa_topology_type() set sched_numa_topology_type to NUMA_DIRECT.

Node 0 and node 1 are onlined during boot, while the PMEM nodes, that
is, node 2 and node 3, are onlined later, as shown in the following
dmesg snippet.

[    2.252573][    T0] ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
[    2.259224][    T0] ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x107fffffff]
[    2.266139][    T0] ACPI: SRAT: Node 2 PXM 2 [mem 0x1080000000-0x4f7fffffff] non-volatile
[    2.274267][    T0] ACPI: SRAT: Node 1 PXM 1 [mem 0x4f80000000-0x5f7fffffff]
[    2.281271][    T0] ACPI: SRAT: Node 3 PXM 3 [mem 0x5f80000000-0x9e7fffffff] non-volatile
[    2.289403][    T0] NUMA: Initialized distance table, cnt=4
[    2.294934][    T0] NUMA: Node 0 [mem 0x00000000-0x7fffffff] + [mem 0x100000000-0x107fffffff] -> [mem 0x00000000-0x107fffffff]
[    2.306266][    T0] NODE_DATA(0) allocated [mem 0x107ffd5000-0x107fffffff]
[    2.313115][    T0] NODE_DATA(1) allocated [mem 0x5f7ffd0000-0x5f7fffafff]

[    5.391151][    T1] smp: Brought up 2 nodes, 96 CPUs

Full dmesg is attached.

Best Regards,
Huang, Ying


[-- Attachment #2: dmesg.xz --]
[-- Type: application/x-xz, Size: 27120 bytes --]

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system
  2022-01-13  7:19   ` Huang, Ying
@ 2022-01-13  9:49     ` Peter Zijlstra
  2022-01-13 12:06       ` Huang, Ying
  0 siblings, 1 reply; 22+ messages in thread
From: Peter Zijlstra @ 2022-01-13  9:49 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Mel Gorman, linux-mm, linux-kernel, Feng Tang, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

On Thu, Jan 13, 2022 at 03:19:06PM +0800, Huang, Ying wrote:
> Hi, Peter,
> 
> Peter Zijlstra <peterz@infradead.org> writes:
> 
> > On Tue, Dec 07, 2021 at 10:27:51AM +0800, Huang Ying wrote:
> >> After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
> >> for use like normal RAM"), the PMEM could be used as the
> >> cost-effective volatile memory in separate NUMA nodes.  In a typical
> >> memory tiering system, there are CPUs, DRAM and PMEM in each physical
> >> NUMA node.  The CPUs and the DRAM will be put in one logical node,
> >> while the PMEM will be put in another (faked) logical node.
> >
> > So what does a system like that actually look like, SLIT table wise, and
> > how does that affect init_numa_topology_type() ?
> 
> The SLIT table is as follows,
> 
> [000h 0000   4]                    Signature : "SLIT"    [System Locality Information Table]
> [004h 0004   4]                 Table Length : 0000042C
> [008h 0008   1]                     Revision : 01
> [009h 0009   1]                     Checksum : 59
> [00Ah 0010   6]                       Oem ID : "INTEL "
> [010h 0016   8]                 Oem Table ID : "S2600WF "
> [018h 0024   4]                 Oem Revision : 00000001
> [01Ch 0028   4]              Asl Compiler ID : "INTL"
> [020h 0032   4]        Asl Compiler Revision : 20091013
> 
> [024h 0036   8]                   Localities : 0000000000000004
> [02Ch 0044   4]                 Locality   0 : 0A 15 11 1C
> [030h 0048   4]                 Locality   1 : 15 0A 1C 11
> [034h 0052   4]                 Locality   2 : 11 1C 0A 1C
> [038h 0056   4]                 Locality   3 : 1C 11 1C 0A
> 
> The `numactl -H` output is as follows,
> 
> available: 4 nodes (0-3)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
> node 0 size: 64136 MB
> node 0 free: 5981 MB
> node 1 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
> node 1 size: 64466 MB
> node 1 free: 10415 MB
> node 2 cpus:
> node 2 size: 253952 MB
> node 2 free: 253920 MB
> node 3 cpus:
> node 3 size: 253952 MB
> node 3 free: 253951 MB
> node distances:
> node   0   1   2   3 
>   0:  10  21  17  28 
>   1:  21  10  28  17 
>   2:  17  28  10  28 
>   3:  28  17  28  10 
> 
> init_numa_topology_type() set sched_numa_topology_type to NUMA_DIRECT.
> 
> The node 0 and node 1 are onlined during boot.  While the PMEM node,
> that is, node 2 and node 3 are onlined later.  As in the following dmesg
> snippet.

But how? sched_init_numa() scans the *whole* SLIT table to determine
nr_levels / sched_domains_numa_levels, even offline nodes. Therefore it
should find 4 distinct distance values and end up not selecting
NUMA_DIRECT.

Similarly for the other types it uses for_each_online_node(), which
would include the pmem nodes once they've been onlined, but I'm thinking
we explicitly want to skip CPU-less nodes in that iteration.
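
Roughly, the relevant logic is (a simplified sketch paraphrased from
kernel/sched/topology.c, not the exact source):

	/* simplified sketch of init_numa_topology_type() */
	if (sched_domains_numa_levels <= 2) {
		sched_numa_topology_type = NUMA_DIRECT;
		return;
	}
	for_each_online_node(a) {
		for_each_online_node(b) {
			/* find two nodes furthest removed from each other */
			if (node_distance(a, b) < sched_max_numa_distance)
				continue;
			/* is there an intermediary node between a and b? */
			for_each_online_node(c) {
				if (node_distance(a, c) < sched_max_numa_distance &&
				    node_distance(b, c) < sched_max_numa_distance) {
					sched_numa_topology_type = NUMA_GLUELESS_MESH;
					return;
				}
			}
			sched_numa_topology_type = NUMA_BACKPLANE;
			return;
		}
	}
	/* falls through without setting a type if no pair of online
	   nodes is at sched_max_numa_distance */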

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system
  2022-01-13  9:49     ` Peter Zijlstra
@ 2022-01-13 12:06       ` Huang, Ying
  2022-01-13 13:00         ` Peter Zijlstra
  0 siblings, 1 reply; 22+ messages in thread
From: Huang, Ying @ 2022-01-13 12:06 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mel Gorman, linux-mm, linux-kernel, Feng Tang, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

Peter Zijlstra <peterz@infradead.org> writes:

> On Thu, Jan 13, 2022 at 03:19:06PM +0800, Huang, Ying wrote:
>> Hi, Peter,
>> 
>> Peter Zijlstra <peterz@infradead.org> writes:
>> 
>> > On Tue, Dec 07, 2021 at 10:27:51AM +0800, Huang Ying wrote:
>> >> After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
>> >> for use like normal RAM"), the PMEM could be used as the
>> >> cost-effective volatile memory in separate NUMA nodes.  In a typical
>> >> memory tiering system, there are CPUs, DRAM and PMEM in each physical
>> >> NUMA node.  The CPUs and the DRAM will be put in one logical node,
>> >> while the PMEM will be put in another (faked) logical node.
>> >
>> > So what does a system like that actually look like, SLIT table wise, and
>> > how does that affect init_numa_topology_type() ?
>> 
>> The SLIT table is as follows,
>> 
>> [000h 0000   4]                    Signature : "SLIT"    [System Locality Information Table]
>> [004h 0004   4]                 Table Length : 0000042C
>> [008h 0008   1]                     Revision : 01
>> [009h 0009   1]                     Checksum : 59
>> [00Ah 0010   6]                       Oem ID : "INTEL "
>> [010h 0016   8]                 Oem Table ID : "S2600WF "
>> [018h 0024   4]                 Oem Revision : 00000001
>> [01Ch 0028   4]              Asl Compiler ID : "INTL"
>> [020h 0032   4]        Asl Compiler Revision : 20091013
>> 
>> [024h 0036   8]                   Localities : 0000000000000004
>> [02Ch 0044   4]                 Locality   0 : 0A 15 11 1C
>> [030h 0048   4]                 Locality   1 : 15 0A 1C 11
>> [034h 0052   4]                 Locality   2 : 11 1C 0A 1C
>> [038h 0056   4]                 Locality   3 : 1C 11 1C 0A
>> 
>> The `numactl -H` output is as follows,
>> 
>> available: 4 nodes (0-3)
>> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
>> node 0 size: 64136 MB
>> node 0 free: 5981 MB
>> node 1 cpus: 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95
>> node 1 size: 64466 MB
>> node 1 free: 10415 MB
>> node 2 cpus:
>> node 2 size: 253952 MB
>> node 2 free: 253920 MB
>> node 3 cpus:
>> node 3 size: 253952 MB
>> node 3 free: 253951 MB
>> node distances:
>> node   0   1   2   3 
>>   0:  10  21  17  28 
>>   1:  21  10  28  17 
>>   2:  17  28  10  28 
>>   3:  28  17  28  10 
>> 
>> init_numa_topology_type() set sched_numa_topology_type to NUMA_DIRECT.
>> 
>> The node 0 and node 1 are onlined during boot.  While the PMEM node,
>> that is, node 2 and node 3 are onlined later.  As in the following dmesg
>> snippet.
>
> But how? sched_init_numa() scans the *whole* SLIT table to determine
> nr_levels / sched_domains_numa_levels, even offline nodes. Therefore it
> should find 4 distinct distance values and end up not selecting
> NUMA_DIRECT.
>
> Similarly for the other types it uses for_each_online_node(), which
> would include the pmem nodes once they've been onlined, but I'm thinking
> we explicitly want to skip CPU-less nodes in that iteration.

I used the debug patch below, and got the following log in dmesg,

[    5.394577][    T1] sched_numa_topology_type: 0, levels: 4, max_distance: 28

I found that I forgot another caller of init_numa_topology_type() that
runs during hotplug.  I will add another printk() to show it.  Sorry
about that.

Best Regards,
Huang, Ying

-------------------------------8<------------------------------------
From 11cea4be2db6220333d84f5b168174f534ac0933 Mon Sep 17 00:00:00 2001
From: Huang Ying <ying.huang@intel.com>
Date: Thu, 13 Jan 2022 09:53:15 +0800
Subject: [PATCH] dbg: show sched_numa_topology_type

---
 kernel/sched/topology.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index d201a7052a29..9d92191fd62d 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1914,6 +1914,10 @@ void sched_init_numa(void)
 
 	init_numa_topology_type();
 
+	pr_info("sched_numa_topology_type: %d, levels: %d, max_distance: %d\n",
+		sched_numa_topology_type, sched_domains_numa_levels,
+		sched_max_numa_distance);
+
 	sched_numa_onlined_nodes = bitmap_alloc(nr_node_ids, GFP_KERNEL);
 	if (!sched_numa_onlined_nodes)
 		return;
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system
  2022-01-13 12:06       ` Huang, Ying
@ 2022-01-13 13:00         ` Peter Zijlstra
  2022-01-13 13:13           ` Huang, Ying
  2022-01-13 14:24           ` Huang, Ying
  0 siblings, 2 replies; 22+ messages in thread
From: Peter Zijlstra @ 2022-01-13 13:00 UTC (permalink / raw)
  To: Huang, Ying
  Cc: Mel Gorman, linux-mm, linux-kernel, Feng Tang, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

On Thu, Jan 13, 2022 at 08:06:40PM +0800, Huang, Ying wrote:
> Peter Zijlstra <peterz@infradead.org> writes:
> > On Thu, Jan 13, 2022 at 03:19:06PM +0800, Huang, Ying wrote:
> >> Peter Zijlstra <peterz@infradead.org> writes:
> >> > On Tue, Dec 07, 2021 at 10:27:51AM +0800, Huang Ying wrote:

> >> >> After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
> >> >> for use like normal RAM"), the PMEM could be used as the
> >> >> cost-effective volatile memory in separate NUMA nodes.  In a typical
> >> >> memory tiering system, there are CPUs, DRAM and PMEM in each physical
> >> >> NUMA node.  The CPUs and the DRAM will be put in one logical node,
> >> >> while the PMEM will be put in another (faked) logical node.
> >> >
> >> > So what does a system like that actually look like, SLIT table wise, and
> >> > how does that affect init_numa_topology_type() ?
> >> 
> >> The SLIT table is as follows,

<snip>

> >> node distances:
> >> node   0   1   2   3 
> >>   0:  10  21  17  28 
> >>   1:  21  10  28  17 
> >>   2:  17  28  10  28 
> >>   3:  28  17  28  10 
> >> 
> >> init_numa_topology_type() set sched_numa_topology_type to NUMA_DIRECT.
> >> 
> >> The node 0 and node 1 are onlined during boot.  While the PMEM node,
> >> that is, node 2 and node 3 are onlined later.  As in the following dmesg
> >> snippet.
> >
> > But how? sched_init_numa() scans the *whole* SLIT table to determine
> > nr_levels / sched_domains_numa_levels, even offline nodes. Therefore it
> > should find 4 distinct distance values and end up not selecting
> > NUMA_DIRECT.
> >
> > Similarly for the other types it uses for_each_online_node(), which
> > would include the pmem nodes once they've been onlined, but I'm thinking
> > we explicitly want to skip CPU-less nodes in that iteration.
> 
> I used the debug patch as below, and get the log in dmesg as follows,
> 
> [    5.394577][    T1] sched_numa_topology_type: 0, levels: 4, max_distance: 28
> 
> I found that I forget another caller of init_numa_topology_type() run
> during hotplug.  I will add another printk() to show it.  Sorry about
> that.

Can you try with this on?

I'm suspecting there's a problem with init_numa_topology_type(); it will
never find the max distance due to the _online_ clause in the iteration,
since you said the pmem nodes are not online yet.

---
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index d201a7052a29..53ab9c63c185 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1756,6 +1756,8 @@ static void init_numa_topology_type(void)
 			return;
 		}
 	}
+
+	WARN(1, "no NUMA type determined");
 }

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system
  2022-01-13 13:00         ` Peter Zijlstra
@ 2022-01-13 13:13           ` Huang, Ying
  2022-01-13 14:24           ` Huang, Ying
  1 sibling, 0 replies; 22+ messages in thread
From: Huang, Ying @ 2022-01-13 13:13 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mel Gorman, linux-mm, linux-kernel, Feng Tang, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

Peter Zijlstra <peterz@infradead.org> writes:

> On Thu, Jan 13, 2022 at 08:06:40PM +0800, Huang, Ying wrote:
>> Peter Zijlstra <peterz@infradead.org> writes:
>> > On Thu, Jan 13, 2022 at 03:19:06PM +0800, Huang, Ying wrote:
>> >> Peter Zijlstra <peterz@infradead.org> writes:
>> >> > On Tue, Dec 07, 2021 at 10:27:51AM +0800, Huang Ying wrote:
>
>> >> >> After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
>> >> >> for use like normal RAM"), the PMEM could be used as the
>> >> >> cost-effective volatile memory in separate NUMA nodes.  In a typical
>> >> >> memory tiering system, there are CPUs, DRAM and PMEM in each physical
>> >> >> NUMA node.  The CPUs and the DRAM will be put in one logical node,
>> >> >> while the PMEM will be put in another (faked) logical node.
>> >> >
>> >> > So what does a system like that actually look like, SLIT table wise, and
>> >> > how does that affect init_numa_topology_type() ?
>> >> 
>> >> The SLIT table is as follows,
>
> <snip>
>
>> >> node distances:
>> >> node   0   1   2   3 
>> >>   0:  10  21  17  28 
>> >>   1:  21  10  28  17 
>> >>   2:  17  28  10  28 
>> >>   3:  28  17  28  10 
>> >> 
>> >> init_numa_topology_type() set sched_numa_topology_type to NUMA_DIRECT.
>> >> 
>> >> The node 0 and node 1 are onlined during boot.  While the PMEM node,
>> >> that is, node 2 and node 3 are onlined later.  As in the following dmesg
>> >> snippet.
>> >
>> > But how? sched_init_numa() scans the *whole* SLIT table to determine
>> > nr_levels / sched_domains_numa_levels, even offline nodes. Therefore it
>> > should find 4 distinct distance values and end up not selecting
>> > NUMA_DIRECT.
>> >
>> > Similarly for the other types it uses for_each_online_node(), which
>> > would include the pmem nodes once they've been onlined, but I'm thinking
>> > we explicitly want to skip CPU-less nodes in that iteration.
>> 
>> I used the debug patch as below, and get the log in dmesg as follows,
>> 
>> [    5.394577][    T1] sched_numa_topology_type: 0, levels: 4, max_distance: 28
>> 
>> I found that I forget another caller of init_numa_topology_type() run
>> during hotplug.  I will add another printk() to show it.  Sorry about
>> that.
>
> Can you try with this on?
>
> I'm suspecting there's a problem with init_numa_topology_type(); it will
> never find the max distance due to the _online_ clause in the iteration,
> since you said the pmem nodes are not online yet.
>
> ---
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index d201a7052a29..53ab9c63c185 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1756,6 +1756,8 @@ static void init_numa_topology_type(void)
>  			return;
>  		}
>  	}
> +
> +	WARN(1, "no NUMA type determined");
>  }

Sure.  Will do this.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system
  2022-01-13 13:00         ` Peter Zijlstra
  2022-01-13 13:13           ` Huang, Ying
@ 2022-01-13 14:24           ` Huang, Ying
  2022-01-14  5:24             ` Huang, Ying
  1 sibling, 1 reply; 22+ messages in thread
From: Huang, Ying @ 2022-01-13 14:24 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mel Gorman, linux-mm, linux-kernel, Feng Tang, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

Peter Zijlstra <peterz@infradead.org> writes:

> On Thu, Jan 13, 2022 at 08:06:40PM +0800, Huang, Ying wrote:
>> Peter Zijlstra <peterz@infradead.org> writes:
>> > On Thu, Jan 13, 2022 at 03:19:06PM +0800, Huang, Ying wrote:
>> >> Peter Zijlstra <peterz@infradead.org> writes:
>> >> > On Tue, Dec 07, 2021 at 10:27:51AM +0800, Huang Ying wrote:
>
>> >> >> After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
>> >> >> for use like normal RAM"), the PMEM could be used as the
>> >> >> cost-effective volatile memory in separate NUMA nodes.  In a typical
>> >> >> memory tiering system, there are CPUs, DRAM and PMEM in each physical
>> >> >> NUMA node.  The CPUs and the DRAM will be put in one logical node,
>> >> >> while the PMEM will be put in another (faked) logical node.
>> >> >
>> >> > So what does a system like that actually look like, SLIT table wise, and
>> >> > how does that affect init_numa_topology_type() ?
>> >> 
>> >> The SLIT table is as follows,
>
> <snip>
>
>> >> node distances:
>> >> node   0   1   2   3 
>> >>   0:  10  21  17  28 
>> >>   1:  21  10  28  17 
>> >>   2:  17  28  10  28 
>> >>   3:  28  17  28  10 
>> >> 
>> >> init_numa_topology_type() set sched_numa_topology_type to NUMA_DIRECT.
>> >> 
>> >> Node 0 and node 1 are onlined during boot, while the PMEM nodes,
>> >> that is, node 2 and node 3, are onlined later, as in the following
>> >> dmesg snippet.
>> >
>> > But how? sched_init_numa() scans the *whole* SLIT table to determine
>> > nr_levels / sched_domains_numa_levels, even offline nodes. Therefore it
>> > should find 4 distinct distance values and end up not selecting
>> > NUMA_DIRECT.
>> >
>> > Similarly for the other types it uses for_each_online_node(), which
>> > would include the pmem nodes once they've been onlined, but I'm thinking
>> > we explicitly want to skip CPU-less nodes in that iteration.
>> 
>> I used the debug patch as below, and got the log in dmesg as follows,
>> 
>> [    5.394577][    T1] sched_numa_topology_type: 0, levels: 4, max_distance: 28
>> 
>> I found that I forgot another caller of init_numa_topology_type() that
>> runs during hotplug.  I will add another printk() to show it.  Sorry
>> about that.
>
> Can you try with this on?
>
> I'm suspecting there's a problem with init_numa_topology_type(); it will
> never find the max distance due to the _online_ clause in the iteration,
> since you said the pmem nodes are not online yet.
>
> ---
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index d201a7052a29..53ab9c63c185 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -1756,6 +1756,8 @@ static void init_numa_topology_type(void)
>  			return;
>  		}
>  	}
> +
> +	WARN(1, "no NUMA type determined");
>  }

Hi, Peter,

I have run the test; the warning is triggered in dmesg as follows.
I will continue to debug the hotplug path tomorrow.

[    5.400923][    T1] ------------[ cut here ]------------
[    5.401917][    T1] no NUMA type determined
[    5.401921][    T1] WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:1760 init_numa_topology_type+0x199/0x1c0
[    5.403918][    T1] Modules linked in:
[    5.404917][    T1] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.16.0-rc8-00053-gbe30433a13c0 #1
[    5.405917][    T1] Hardware name: Intel Corporation S2600WFD/S2600WFD, BIOS SE5C620.86B.0D.01.0286.011120190816 01/11/2019
[    5.406917][    T1] RIP: 0010:init_numa_topology_type+0x199/0x1c0
[    5.407917][    T1] Code: de 82 41 89 dc e8 07 4f 4e 00 3d 00 04 00 00 44 0f 4e e0 3d ff 03 00 00 0f 8e ca fe ff ff 48 c7 c7 a7 88 55 82 e8 0c e5 b3 00 <0f> 0b e9 74 ff ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 66 66 2e 0f
[    5.408917][    T1] RSP: 0000:ffffc900000b7e00 EFLAGS: 00010286
[    5.409917][    T1] RAX: 0000000000000000 RBX: 0000000000000400 RCX: c0000000ffff7fff
[    5.410917][    T1] RDX: ffffc900000b7c28 RSI: 00000000ffff7fff RDI: 0000000000000000
[    5.411917][    T1] RBP: 000000000000001c R08: 0000000000000000 R09: ffffc900000b7c20
[    5.412917][    T1] R10: 0000000000000001 R11: 0000000000000001 R12: 0000000000000400
[    5.413917][    T1] R13: 0000000000000400 R14: 0000000000000400 R15: 000000000000000c
[    5.414917][    T1] FS:  0000000000000000(0000) GS:ffff88903f600000(0000) knlGS:0000000000000000
[    5.415917][    T1] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    5.416917][    T1] CR2: ffff88df7fc01000 CR3: 0000005f7ec0a001 CR4: 00000000007706f0
[    5.417917][    T1] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[    5.418917][    T1] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[    5.419917][    T1] PKRU: 55555554
[    5.420917][    T1] Call Trace:
[    5.421919][    T1]  <TASK>
[    5.422919][    T1]  sched_init_numa+0x4a7/0x5c0
[    5.423918][    T1]  sched_init_smp+0x18/0x79
[    5.424918][    T1]  kernel_init_freeable+0x136/0x276
[    5.425918][    T1]  ? rest_init+0x100/0x100
[    5.426917][    T1]  kernel_init+0x16/0x140
[    5.427917][    T1]  ret_from_fork+0x1f/0x30
[    5.428918][    T1]  </TASK>
[    5.429919][    T1] ---[ end trace aa5563c4363f1ba3 ]---
[    5.430917][    T1] sched_numa_topology_type: 0, levels: 4, max_distance: 28

Best Regards,
Huang, Ying


* Re: [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system
  2022-01-13 14:24           ` Huang, Ying
@ 2022-01-14  5:24             ` Huang, Ying
  0 siblings, 0 replies; 22+ messages in thread
From: Huang, Ying @ 2022-01-14  5:24 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mel Gorman, linux-mm, linux-kernel, Feng Tang, Andrew Morton,
	Michal Hocko, Rik van Riel, Mel Gorman, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Hasan Al Maruf

"Huang, Ying" <ying.huang@intel.com> writes:

> Peter Zijlstra <peterz@infradead.org> writes:
>
>> On Thu, Jan 13, 2022 at 08:06:40PM +0800, Huang, Ying wrote:
>>> Peter Zijlstra <peterz@infradead.org> writes:
>>> > On Thu, Jan 13, 2022 at 03:19:06PM +0800, Huang, Ying wrote:
>>> >> Peter Zijlstra <peterz@infradead.org> writes:
>>> >> > On Tue, Dec 07, 2021 at 10:27:51AM +0800, Huang Ying wrote:
>>
>>> >> >> After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
>>> >> >> for use like normal RAM"), the PMEM could be used as the
>>> >> >> cost-effective volatile memory in separate NUMA nodes.  In a typical
>>> >> >> memory tiering system, there are CPUs, DRAM and PMEM in each physical
>>> >> >> NUMA node.  The CPUs and the DRAM will be put in one logical node,
>>> >> >> while the PMEM will be put in another (faked) logical node.
>>> >> >
>>> >> > So what does a system like that actually look like, SLIT table wise, and
>>> >> > how does that affect init_numa_topology_type() ?
>>> >> 
>>> >> The SLIT table is as follows,
>>
>> <snip>
>>
>>> >> node distances:
>>> >> node   0   1   2   3 
>>> >>   0:  10  21  17  28 
>>> >>   1:  21  10  28  17 
>>> >>   2:  17  28  10  28 
>>> >>   3:  28  17  28  10 
>>> >> 
>>> >> init_numa_topology_type() set sched_numa_topology_type to NUMA_DIRECT.
>>> >> 
>>> >> Node 0 and node 1 are onlined during boot, while the PMEM nodes,
>>> >> that is, node 2 and node 3, are onlined later, as in the following
>>> >> dmesg snippet.
>>> >
>>> > But how? sched_init_numa() scans the *whole* SLIT table to determine
>>> > nr_levels / sched_domains_numa_levels, even offline nodes. Therefore it
>>> > should find 4 distinct distance values and end up not selecting
>>> > NUMA_DIRECT.
>>> >
>>> > Similarly for the other types it uses for_each_online_node(), which
>>> > would include the pmem nodes once they've been onlined, but I'm thinking
>>> > we explicitly want to skip CPU-less nodes in that iteration.
>>> 
>>> I used the debug patch as below, and got the log in dmesg as follows,
>>> 
>>> [    5.394577][    T1] sched_numa_topology_type: 0, levels: 4, max_distance: 28
>>> 
>>> I found that I forgot another caller of init_numa_topology_type() that
>>> runs during hotplug.  I will add another printk() to show it.  Sorry
>>> about that.
>>
>> Can you try with this on?
>>
>> I'm suspecting there's a problem with init_numa_topology_type(); it will
>> never find the max distance due to the _online_ clause in the iteration,
>> since you said the pmem nodes are not online yet.
>>
>> ---
>> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
>> index d201a7052a29..53ab9c63c185 100644
>> --- a/kernel/sched/topology.c
>> +++ b/kernel/sched/topology.c
>> @@ -1756,6 +1756,8 @@ static void init_numa_topology_type(void)
>>  			return;
>>  		}
>>  	}
>> +
>> +	WARN(1, "no NUMA type determined");
>>  }
>
> Hi, Peter,
>
> I have run the test, the warning is triggered in the dmesg as follows.
> I will continue to debug hotplug tomorrow.

I did more experiments and found that init_numa_topology_type() is not
called when the PMEM nodes are plugged in, because it is only called
when a CPU of a never-onlined-before node is onlined.  There are no
CPUs on the PMEM nodes (2/3), so onlining a PMEM node does not invoke
init_numa_topology_type(), and sched_numa_topology_type is left
unchanged.
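
To make the failure mode concrete, below is a minimal user-space sketch
(illustrative only, not the kernel implementation; the names slit[],
online[], classify() and enum topo_type are made up for the example).
It models a classifier that, like the for_each_online_node() loops
discussed above, only visits online nodes, while max_distance (28) was
derived from the whole SLIT table of this thread.

/*
 * Illustrative sketch, not kernel code: with only the DRAM nodes
 * online, the largest visible distance is 21, so the search for
 * max_distance == 28 never matches and no topology type is determined.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 4

/* SLIT matrix from this thread. */
static const int slit[NR_NODES][NR_NODES] = {
	{ 10, 21, 17, 28 },
	{ 21, 10, 28, 17 },
	{ 17, 28, 10, 28 },
	{ 28, 17, 28, 10 },
};

/* Only the DRAM nodes (the ones with CPUs) are online at boot. */
static const bool online[NR_NODES] = { true, true, false, false };

enum topo_type { TOPO_NONE, TOPO_GLUELESS_MESH, TOPO_BACKPLANE };

static enum topo_type classify(int max_distance)
{
	for (int a = 0; a < NR_NODES; a++) {
		if (!online[a])
			continue;
		for (int b = 0; b < NR_NODES; b++) {
			/* Look for two online nodes at max_distance. */
			if (!online[b] || slit[a][b] < max_distance)
				continue;
			/* Is there an online intermediary between a and b? */
			for (int c = 0; c < NR_NODES; c++) {
				if (online[c] && slit[a][c] < slit[a][b] &&
				    slit[b][c] < max_distance)
					return TOPO_GLUELESS_MESH;
			}
			return TOPO_BACKPLANE;
		}
	}
	/* Falls through here: the case where the WARN() fires above. */
	return TOPO_NONE;
}

int main(void)
{
	printf("topology type: %d (0 == none determined)\n", classify(28));
	return 0;
}

Compiled and run, this prints type 0, which matches the "no NUMA type
determined" warning triggered at boot.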

Best Regards,
Huang, Ying


end of thread

Thread overview: 22+ messages
2021-12-07  2:27 [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
2021-12-07  2:27 ` [PATCH -V10 RESEND 1/6] NUMA Balancing: add page promotion counter Huang Ying
2021-12-07  6:05   ` Hasan Al Maruf
2021-12-08  2:16     ` Huang, Ying
2021-12-17  7:25   ` Baolin Wang
2021-12-07  2:27 ` [PATCH -V10 RESEND 2/6] NUMA balancing: optimize page placement for memory tiering system Huang Ying
2021-12-07  6:36   ` Hasan Al Maruf
2021-12-08  3:16     ` Huang, Ying
2021-12-17  7:35   ` Baolin Wang
2021-12-07  2:27 ` [PATCH -V10 RESEND 3/6] memory tiering: skip to scan fast memory Huang Ying
2021-12-17  7:41   ` Baolin Wang
2021-12-07  2:27 ` [PATCH -V10 RESEND 4/6] memory tiering: hot page selection with hint page fault latency Huang Ying
2021-12-07  2:27 ` [PATCH -V10 RESEND 5/6] memory tiering: rate limit NUMA migration throughput Huang Ying
2021-12-07  2:27 ` [PATCH -V10 RESEND 6/6] memory tiering: adjust hot threshold automatically Huang Ying
2022-01-12 16:10 ` [PATCH -V10 RESEND 0/6] NUMA balancing: optimize memory placement for memory tiering system Peter Zijlstra
2022-01-13  7:19   ` Huang, Ying
2022-01-13  9:49     ` Peter Zijlstra
2022-01-13 12:06       ` Huang, Ying
2022-01-13 13:00         ` Peter Zijlstra
2022-01-13 13:13           ` Huang, Ying
2022-01-13 14:24           ` Huang, Ying
2022-01-14  5:24             ` Huang, Ying
