linux-kernel.vger.kernel.org archive mirror
* [PATCH -V14 0/3] NUMA balancing: optimize memory placement for memory tiering system
@ 2022-03-01  8:53 Huang Ying
  2022-03-01  8:53 ` [PATCH -V14 1/3] NUMA Balancing: add page promotion counter Huang Ying
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Huang Ying @ 2022-03-01  8:53 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman, Andrew Morton
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Michal Hocko,
	Rik van Riel, Dave Hansen, Yang Shi, Zi Yan, Wei Xu,
	Oscar Salvador, Shakeel Butt, Johannes Weiner

The changes since the last post are as follows,

- Improved the patch description of [2/3] per Oscar's comments.  Thanks!

- Added Oscar's Reviewed-by for [2/3] and [3/3].

--

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
memory subsystem of such machines can be called a memory tiering
system, because the performance of the different types of memory is
different.

After commit c221c0b0308f ("device-dax: "Hotplug" persistent memory
for use like normal RAM"), PMEM can be used as cost-effective
volatile memory in separate NUMA nodes.  In a typical memory tiering
system, there are CPUs, DRAM and PMEM in each physical NUMA node.
The CPUs and the DRAM will be put in one logical node, while the PMEM
will be put in another (faked) logical node.
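
The series tells the two kinds of logical nodes apart simply by
whether a node has CPUs attached.  The helper below is the check as
added by [1/3], shown here only to make the terminology concrete:

    /*
     * A node with CPUs is a fast memory ("top tier", DRAM) node; a
     * CPU-less node (e.g. PMEM only) is a slow memory node.
     */
    static inline bool node_is_toptier(int node)
    {
            return node_state(node, N_CPU);
    }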

To optimize the overall system performance, the hot pages should be
placed in the DRAM node.  To do that, we need to identify the hot
pages in the PMEM node and migrate them to the DRAM node via NUMA
migration.

The original NUMA balancing already has a set of mechanisms to
identify the pages recently accessed by the CPUs in a node and to
migrate those pages to that node.  These mechanisms can be reused to
optimize the page placement in a memory tiering system.  This is
implemented in this patchset.

On the other hand, the cold pages should be placed in the PMEM node.
So, we also need to identify the cold pages in the DRAM node and
migrate them to the PMEM node.

In commit 26aa2d199d6f ("mm/migrate: demote pages during reclaim"), a
mechanism to demote the cold DRAM pages to the PMEM node under memory
pressure was implemented.  Based on that, the cold DRAM pages can be
demoted to the PMEM node proactively to free some memory space on the
DRAM node to accommodate the promoted hot PMEM pages.  This is
implemented in this patchset too.

We have tested the solution with the pmbench memory access benchmark
with an 80:20 read/write ratio and a Gaussian access address
distribution on a 2-socket Intel server with Optane DC Persistent
Memory.  The test results show that the pmbench score can improve by
up to 95.9%.

Changelog:

v14:

- Improved the patch description of [2/3] per Oscar's comments.  Thanks!

- Added Oscar's Reviewed-by for [2/3] and [3/3].


v13:

- Fix nr_succeeded type in migrate_misplaced_page per Oscar's comments.

- Make NUMA_BALANCING_MEMORY_TIERING work independently of the
  demotion knob per Johannes' comments.

v12:

- Rebased on v5.17-rc4

- Change promotion watermark implementation per Johannes' comments

- Fixed several sysctl ABI documentation bugs.  Thanks, Andrew.

v11:

- Rebased on v5.17-rc1

- Remove [4-6] from the original patchset to make it easier to
  review.

- Change the additional promotion watermark to be the high watermark / 4.

v10:

- Rebased on v5.16-rc1

- Revise error processing for [1/6] (promotion counter) per Yang's comments

- Add sysctl document for [2/6] (optimize page placement)

- Reset threshold adjustment state when disabling/enabling the tiering mode

- Reset threshold when workload transition is detected.

v9:

- Rebased on v5.15-rc4

- Make "add promotion counter" the first patch per Yang's comments

v8:

- Rebased on v5.15-rc1

- Make user-specified threshold take effect sooner

v7:

- Rebased on the mmots tree of 2021-07-15.

- Some minor fixes.

v6:

- Rebased on the latest page demotion patchset (which is based on v5.11).

v5:

- Rebased on the latest page demotion patchset (which is based on v5.10).

v4:

- Rebased on the latest page demotion patchset (which is based on v5.9-rc6).

- Add page promotion counter.

v3:

- Move the rate limit control as late as possible per Mel Gorman's
  comments.

- Revise the hot page selection implementation to store page scan time
  in struct page.

- Code cleanup.

- Rebased on the latest page demotion patchset.

v2:

- Addressed comments for V1.

- Rebased on v5.5.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 7+ messages in thread

* [PATCH -V14 1/3] NUMA Balancing: add page promotion counter
  2022-03-01  8:53 [PATCH -V14 0/3] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
@ 2022-03-01  8:53 ` Huang Ying
  2022-03-01  8:53 ` [PATCH -V14 2/3] NUMA balancing: optimize page placement for memory tiering system Huang Ying
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Huang Ying @ 2022-03-01  8:53 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman, Andrew Morton
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Yang Shi,
	Baolin Wang, Johannes Weiner, Oscar Salvador, Michal Hocko,
	Rik van Riel, Dave Hansen, Zi Yan, Wei Xu, Shakeel Butt,
	zhongjiang-ali

In a system with multiple memory types, e.g. DRAM and PMEM, the CPUs
and DRAM in one socket will be put in one NUMA node as before, while
the PMEM will be put in another NUMA node, as described in commit
c221c0b0308f ("device-dax: "Hotplug" persistent memory for use like
normal RAM").  So, the NUMA balancing mechanism will identify all
PMEM accesses as remote accesses and try to promote the PMEM pages
to DRAM.

To distinguish the number of inter-type promoted pages from that of
inter-socket migrated pages, a new vmstat counter is added.  The
counter is per-node (counted in the target node), so it can be used
to identify promotion imbalance among the NUMA nodes.
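
Because the counter is a per-node node_stat_item, it shows up as
"pgpromote_success" in /proc/vmstat (system-wide sum) and in the
per-node /sys/devices/system/node/node<N>/vmstat files.  As a purely
hypothetical in-kernel illustration (the counter name comes from this
patch; the reporting loop does not exist anywhere in the series), the
per-node promotion totals could be compared like this:

    #include <linux/mmzone.h>
    #include <linux/nodemask.h>
    #include <linux/vmstat.h>
    #include <linux/printk.h>

    static void report_promotion_imbalance(void)
    {
            int nid;

            /* A large spread between nodes indicates promotion imbalance. */
            for_each_online_node(nid)
                    pr_info("node %d: %lu pages promoted\n", nid,
                            node_page_state(NODE_DATA(nid), PGPROMOTE_SUCCESS));
    }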

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Reviewed-by: Yang Shi <shy828301@gmail.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h |  3 +++
 include/linux/node.h   |  5 +++++
 mm/migrate.c           | 13 ++++++++++---
 mm/vmstat.c            |  3 +++
 4 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index aed44e9b5d89..44bd054ca12b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -210,6 +210,9 @@ enum node_stat_item {
 	NR_PAGETABLE,		/* used for pagetables */
 #ifdef CONFIG_SWAP
 	NR_SWAPCACHE,
+#endif
+#ifdef CONFIG_NUMA_BALANCING
+	PGPROMOTE_SUCCESS,	/* promote successfully */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
diff --git a/include/linux/node.h b/include/linux/node.h
index bb21fd631b16..81bbf1c0afd3 100644
--- a/include/linux/node.h
+++ b/include/linux/node.h
@@ -181,4 +181,9 @@ static inline void register_hugetlbfs_with_node(node_registration_func_t reg,
 
 #define to_node(device) container_of(device, struct node, dev)
 
+static inline bool node_is_toptier(int node)
+{
+	return node_state(node, N_CPU);
+}
+
 #endif /* _LINUX_NODE_H_ */
diff --git a/mm/migrate.c b/mm/migrate.c
index 665dbe8cad72..cdeaf01e601a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2072,6 +2072,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	pg_data_t *pgdat = NODE_DATA(node);
 	int isolated;
 	int nr_remaining;
+	unsigned int nr_succeeded;
 	LIST_HEAD(migratepages);
 	new_page_t *new;
 	bool compound;
@@ -2110,7 +2111,8 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 
 	list_add(&page->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, *new, NULL, node,
-				     MIGRATE_ASYNC, MR_NUMA_MISPLACED, NULL);
+				     MIGRATE_ASYNC, MR_NUMA_MISPLACED,
+				     &nr_succeeded);
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
 			list_del(&page->lru);
@@ -2119,8 +2121,13 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 			putback_lru_page(page);
 		}
 		isolated = 0;
-	} else
-		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_pages);
+	}
+	if (nr_succeeded) {
+		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
+		if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+			mod_node_page_state(pgdat, PGPROMOTE_SUCCESS,
+					    nr_succeeded);
+	}
 	BUG_ON(!list_empty(&migratepages));
 	return isolated;
 
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 4057372745d0..846b670dd346 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1242,6 +1242,9 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_SWAP
 	"nr_swapcached",
 #endif
+#ifdef CONFIG_NUMA_BALANCING
+	"pgpromote_success",
+#endif
 
 	/* enum writeback_stat_item counters */
 	"nr_dirty_threshold",
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH -V14 2/3] NUMA balancing: optimize page placement for memory tiering system
  2022-03-01  8:53 [PATCH -V14 0/3] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
  2022-03-01  8:53 ` [PATCH -V14 1/3] NUMA Balancing: add page promotion counter Huang Ying
@ 2022-03-01  8:53 ` Huang Ying
  2022-03-01 21:58   ` Yang Shi
  2022-03-01  8:53 ` [PATCH -V14 3/3] memory tiering: skip to scan fast memory Huang Ying
  2022-03-01  8:55 ` [PATCH -V14 0/3] NUMA balancing: optimize memory placement for memory tiering system Huang, Ying
  3 siblings, 1 reply; 7+ messages in thread
From: Huang Ying @ 2022-03-01  8:53 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman, Andrew Morton
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Baolin Wang,
	Oscar Salvador, Johannes Weiner, Michal Hocko, Rik van Riel,
	Dave Hansen, Yang Shi, Zi Yan, Wei Xu, Shakeel Butt,
	zhongjiang-ali, Randy Dunlap

With the advent of various new memory types, some machines will have
multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
memory subsystem of such machines can be called a memory tiering
system, because the performance of the different types of memory is
usually different.

In such a system, because of changes in the memory access pattern,
etc., some pages in the slow memory may become globally hot.  So in
this patch, the NUMA balancing mechanism is enhanced to dynamically
optimize the page placement among the different memory types
according to page hotness.

In a typical memory tiering system, there are CPUs, fast memory and
slow memory in each physical NUMA node.  The CPUs and the fast memory
will be put in one logical node (called fast memory node), while the
slow memory will be put in another (faked) logical node (called slow
memory node).  That is, the fast memory is regarded as local while the
slow memory is regarded as remote.  So it's possible for the recently
accessed pages in the slow memory node to be promoted to the fast
memory node via the existing NUMA balancing mechanism.

The original NUMA balancing mechanism will stop migrating pages if
the free memory of the target node falls below the high watermark.
This is a reasonable policy if there's only one memory type.  But it
makes the original NUMA balancing mechanism almost useless for
optimizing page placement among different memory types.  Details are
as follows.

It's the common case that the working-set size of the workload is
larger than the size of the fast memory nodes; otherwise, it's
unnecessary to use the slow memory at all.  So, there are almost
never enough free pages in the fast memory nodes, and the globally
hot pages in the slow memory node cannot be promoted to the fast
memory node.  To solve the issue, we have 2 choices as follows,

a. Ignore the free pages watermark checking when promoting hot pages
   from the slow memory node to the fast memory node.  This will
   create some memory pressure in the fast memory node, thus
   triggering memory reclaim, so that the cold pages in the fast
   memory node will be demoted to the slow memory node.

b. Define a new watermark called wmark_promo which is higher than
   wmark_high, and have kswapd reclaim pages until free pages reach
   that watermark.  The scenario is as follows: when we want to
   promote hot pages from the slow memory to the fast memory, but the
   fast memory's free pages would drop below the high watermark with
   such promotion, we wake up kswapd with the wmark_promo watermark
   in order to demote cold pages and free up some space.  So, the
   next time we want to promote hot pages we might have a chance of
   doing so.

The choice "a" may create high memory pressure in the fast memory
node.  If the memory pressure of the workload is high, the memory
pressure may become so high that the memory allocation latency of the
workload is influenced, e.g. the direct reclaiming may be triggered.

The choice "b" works much better at this aspect.  If the memory
pressure of the workload is high, the hot pages promotion will stop
earlier because its allocation watermark is higher than that of the
normal memory allocation.  So in this patch, choice "b" is
implemented.  A new zone watermark (WMARK_PROMO) is added.  Which is
larger than the high watermark and can be controlled via
watermark_scale_factor.
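
For a rough feel of the resulting watermark ladder, the following
user space sketch reproduces the arithmetic of
__setup_per_zone_wmarks() as changed by this patch.  All concrete
numbers (zone size, wmark_min) are hypothetical; it assumes the
default watermark_scale_factor of 10 (0.1% of managed pages per step)
and that this term dominates wmark_min / 4:

    #include <stdio.h>

    int main(void)
    {
            unsigned long managed = 16UL << 20;         /* 64 GiB zone in 4K pages */
            unsigned long wmark_min = 4096;             /* assumed */
            unsigned long step = managed * 10 / 10000;  /* watermark_scale_factor = 10 */
            unsigned long low = wmark_min + step;
            unsigned long high = low + step;    /* promotion used to stop here */
            unsigned long promo = high + step;  /* tiering mode reclaims up to here */

            printf("min=%lu low=%lu high=%lu promo=%lu (pages)\n",
                   wmark_min, low, high, promo);
            return 0;
    }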

In addition to the original page placement optimization among
sockets, the NUMA balancing mechanism is extended to optimize page
placement according to hot/cold among different memory types.  So
the sysctl user space interface (numa_balancing) is extended in a
backward compatible way as follows, so that users can enable/disable
these functionalities individually.

The sysctl is converted from a Boolean value to a bit field.  The
flags are defined as follows (a short usage sketch is given after the
list),

- 0: NUMA_BALANCING_DISABLED
- 1: NUMA_BALANCING_NORMAL
- 2: NUMA_BALANCING_MEMORY_TIERING
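
For example, writing 3 to /proc/sys/kernel/numa_balancing selects
both behaviours at once.  The fragment below only illustrates how the
mode bits defined by this patch are meant to be combined and tested;
the surrounding code is hypothetical:

    int mode = NUMA_BALANCING_NORMAL | NUMA_BALANCING_MEMORY_TIERING;  /* == 3 */

    if (mode & NUMA_BALANCING_NORMAL)
            ;       /* optimize placement among sockets (classic behaviour) */
    if (mode & NUMA_BALANCING_MEMORY_TIERING)
            ;       /* promote hot slow-memory pages to fast memory nodes */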

We have tested the patch with the pmbench memory access benchmark
with an 80:20 read/write ratio and a Gaussian access address
distribution on a 2-socket Intel server with Optane DC Persistent
Memory.  The test results show that the pmbench score can improve by
up to 95.9%.

Thanks to Andrew Morton for helping fix the document format error.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 Documentation/admin-guide/sysctl/kernel.rst | 29 ++++++++++++++-------
 include/linux/mmzone.h                      |  1 +
 include/linux/sched/sysctl.h                | 10 +++++++
 kernel/sched/core.c                         | 21 ++++++++++++---
 kernel/sysctl.c                             |  2 +-
 mm/migrate.c                                | 16 ++++++++++--
 mm/page_alloc.c                             |  3 ++-
 mm/vmscan.c                                 |  6 ++++-
 8 files changed, 70 insertions(+), 18 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index d359bcfadd39..fdfd2b684822 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
 numa_balancing
 ==============
 
-Enables/disables automatic page fault based NUMA memory
-balancing. Memory is moved automatically to nodes
-that access it often.
+Enables/disables and configures automatic page fault based NUMA memory
+balancing.  Memory is moved automatically to nodes that access it often.
+The value to set can be the result of ORing the following:
 
-Enables/disables automatic NUMA memory balancing. On NUMA machines, there
-is a performance penalty if remote memory is accessed by a CPU. When this
-feature is enabled the kernel samples what task thread is accessing memory
-by periodically unmapping pages and later trapping a page fault. At the
-time of the page fault, it is determined if the data being accessed should
-be migrated to a local memory node.
+= =================================
+0 NUMA_BALANCING_DISABLED
+1 NUMA_BALANCING_NORMAL
+2 NUMA_BALANCING_MEMORY_TIERING
+= =================================
+
+Or NUMA_BALANCING_NORMAL to optimize page placement among different
+NUMA nodes to reduce remote accessing.  On NUMA machines, there is a
+performance penalty if remote memory is accessed by a CPU. When this
+feature is enabled the kernel samples what task thread is accessing
+memory by periodically unmapping pages and later trapping a page
+fault. At the time of the page fault, it is determined if the data
+being accessed should be migrated to a local memory node.
 
 The unmapping of pages and trapping faults incur additional overhead that
 ideally is offset by improved memory locality but there is no universal
@@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
 numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
 numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
 
+Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
+different types of memory (represented as different NUMA nodes) to
+place the hot pages in the fast memory.  This is implemented based on
+unmapping and page fault too.
 
 numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb
 ===============================================================================================================================
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 44bd054ca12b..06bc55db19bf 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -342,6 +342,7 @@ enum zone_watermarks {
 	WMARK_MIN,
 	WMARK_LOW,
 	WMARK_HIGH,
+	WMARK_PROMO,
 	NR_WMARK
 };
 
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index c19dd5a2c05c..b5eec8854c5a 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -23,6 +23,16 @@ enum sched_tunable_scaling {
 	SCHED_TUNABLESCALING_END,
 };
 
+#define NUMA_BALANCING_DISABLED		0x0
+#define NUMA_BALANCING_NORMAL		0x1
+#define NUMA_BALANCING_MEMORY_TIERING	0x2
+
+#ifdef CONFIG_NUMA_BALANCING
+extern int sysctl_numa_balancing_mode;
+#else
+#define sysctl_numa_balancing_mode	0
+#endif
+
 /*
  *  control realtime throttling:
  *
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fcf0c180617c..c25348e9ae3a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4280,7 +4280,9 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
 
 #ifdef CONFIG_NUMA_BALANCING
 
-void set_numabalancing_state(bool enabled)
+int sysctl_numa_balancing_mode;
+
+static void __set_numabalancing_state(bool enabled)
 {
 	if (enabled)
 		static_branch_enable(&sched_numa_balancing);
@@ -4288,13 +4290,22 @@ void set_numabalancing_state(bool enabled)
 		static_branch_disable(&sched_numa_balancing);
 }
 
+void set_numabalancing_state(bool enabled)
+{
+	if (enabled)
+		sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
+	else
+		sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
+	__set_numabalancing_state(enabled);
+}
+
 #ifdef CONFIG_PROC_SYSCTL
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
 	struct ctl_table t;
 	int err;
-	int state = static_branch_likely(&sched_numa_balancing);
+	int state = sysctl_numa_balancing_mode;
 
 	if (write && !capable(CAP_SYS_ADMIN))
 		return -EPERM;
@@ -4304,8 +4315,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 	err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
 	if (err < 0)
 		return err;
-	if (write)
-		set_numabalancing_state(state);
+	if (write) {
+		sysctl_numa_balancing_mode = state;
+		__set_numabalancing_state(state);
+	}
 	return err;
 }
 #endif
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 5ae443b2882e..c90a564af720 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1689,7 +1689,7 @@ static struct ctl_table kern_table[] = {
 		.mode		= 0644,
 		.proc_handler	= sysctl_numa_balancing,
 		.extra1		= SYSCTL_ZERO,
-		.extra2		= SYSCTL_ONE,
+		.extra2		= SYSCTL_FOUR,
 	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
diff --git a/mm/migrate.c b/mm/migrate.c
index cdeaf01e601a..08ca9b9b142e 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -51,6 +51,7 @@
 #include <linux/oom.h>
 #include <linux/memory.h>
 #include <linux/random.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlbflush.h>
 
@@ -2034,16 +2035,27 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
 	int page_lru;
 	int nr_pages = thp_nr_pages(page);
+	int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
 
 	/* Do not migrate THP mapped by multiple processes */
 	if (PageTransHuge(page) && total_mapcount(page) > 1)
 		return 0;
 
 	/* Avoid migrating to a node that is nearly full */
-	if (!migrate_balanced_pgdat(pgdat, nr_pages))
+	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
+		int z;
+
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
+			return 0;
+		for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+			if (populated_zone(pgdat->node_zones + z))
+				break;
+		}
+		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
 		return 0;
+	}
 
 	if (isolate_lru_page(page))
 		return 0;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3589febc6d31..295b8f1fc31d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8474,7 +8474,8 @@ static void __setup_per_zone_wmarks(void)
 
 		zone->watermark_boost = 0;
 		zone->_watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
-		zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
+		zone->_watermark[WMARK_HIGH] = low_wmark_pages(zone) + tmp;
+		zone->_watermark[WMARK_PROMO] = high_wmark_pages(zone) + tmp;
 
 		spin_unlock_irqrestore(&zone->lock, flags);
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6dd8f455bb82..199b8aadbdd6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -56,6 +56,7 @@
 
 #include <linux/swapops.h>
 #include <linux/balloon_compaction.h>
+#include <linux/sched/sysctl.h>
 
 #include "internal.h"
 
@@ -3988,7 +3989,10 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
 		if (!managed_zone(zone))
 			continue;
 
-		mark = high_wmark_pages(zone);
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING)
+			mark = wmark_pages(zone, WMARK_PROMO);
+		else
+			mark = high_wmark_pages(zone);
 		if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
 			return true;
 	}
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* [PATCH -V14 3/3] memory tiering: skip to scan fast memory
  2022-03-01  8:53 [PATCH -V14 0/3] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
  2022-03-01  8:53 ` [PATCH -V14 1/3] NUMA Balancing: add page promotion counter Huang Ying
  2022-03-01  8:53 ` [PATCH -V14 2/3] NUMA balancing: optimize page placement for memory tiering system Huang Ying
@ 2022-03-01  8:53 ` Huang Ying
  2022-03-01 22:03   ` Yang Shi
  2022-03-01  8:55 ` [PATCH -V14 0/3] NUMA balancing: optimize memory placement for memory tiering system Huang, Ying
  3 siblings, 1 reply; 7+ messages in thread
From: Huang Ying @ 2022-03-01  8:53 UTC (permalink / raw)
  To: Peter Zijlstra, Mel Gorman, Andrew Morton
  Cc: linux-mm, linux-kernel, Feng Tang, Huang Ying, Dave Hansen,
	Baolin Wang, Johannes Weiner, Oscar Salvador, Michal Hocko,
	Rik van Riel, Yang Shi, Zi Yan, Wei Xu, Shakeel Butt,
	zhongjiang-ali

If NUMA balancing isn't used to optimize page placement among
sockets but only among memory types, the hot pages in the fast memory
node can't be migrated (promoted) anywhere.  So it's unnecessary to
scan the pages in the fast memory node by changing their PTE/PMD
mappings to PROT_NONE, and the resulting page faults can be avoided
too.
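
Concretely, both the PMD path (change_huge_pmd()) and the PTE path
(change_pte_range()) gain the same check during the prot_numa scan.
Distilled to a sketch (see the diff below for the exact context):

    /*
     * With only the tiering mode enabled, a page that already sits in
     * a fast (top tier) node cannot be promoted anywhere, so skip
     * making its mapping PROT_NONE; the hint fault would be useless.
     */
    if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
        node_is_toptier(page_to_nid(page)))
            continue;       /* "goto unlock" in the PMD path */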

In the test, if only the memory tiering NUMA balancing mode is
enabled, the number of NUMA balancing hint faults for the DRAM node
is reduced to almost 0 with the patch, while the benchmark score
doesn't change visibly.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 mm/huge_memory.c | 30 +++++++++++++++++++++---------
 mm/mprotect.c    | 13 ++++++++++++-
 2 files changed, 33 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 406a3c28c026..9ce126cb0cfd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -34,6 +34,7 @@
 #include <linux/oom.h>
 #include <linux/numa.h>
 #include <linux/page_owner.h>
+#include <linux/sched/sysctl.h>
 
 #include <asm/tlb.h>
 #include <asm/pgalloc.h>
@@ -1766,17 +1767,28 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
 	}
 #endif
 
-	/*
-	 * Avoid trapping faults against the zero page. The read-only
-	 * data is likely to be read-cached on the local CPU and
-	 * local/remote hits to the zero page are not interesting.
-	 */
-	if (prot_numa && is_huge_zero_pmd(*pmd))
-		goto unlock;
+	if (prot_numa) {
+		struct page *page;
+		/*
+		 * Avoid trapping faults against the zero page. The read-only
+		 * data is likely to be read-cached on the local CPU and
+		 * local/remote hits to the zero page are not interesting.
+		 */
+		if (is_huge_zero_pmd(*pmd))
+			goto unlock;
 
-	if (prot_numa && pmd_protnone(*pmd))
-		goto unlock;
+		if (pmd_protnone(*pmd))
+			goto unlock;
 
+		page = pmd_page(*pmd);
+		/*
+		 * Skip scanning top tier node if normal numa
+		 * balancing is disabled
+		 */
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+		    node_is_toptier(page_to_nid(page)))
+			goto unlock;
+	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
 	 * to not clear pmd intermittently to avoid race with MADV_DONTNEED
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 0138dfcdb1d8..2fe03e695c81 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -29,6 +29,7 @@
 #include <linux/uaccess.h>
 #include <linux/mm_inline.h>
 #include <linux/pgtable.h>
+#include <linux/sched/sysctl.h>
 #include <asm/cacheflush.h>
 #include <asm/mmu_context.h>
 #include <asm/tlbflush.h>
@@ -83,6 +84,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			 */
 			if (prot_numa) {
 				struct page *page;
+				int nid;
 
 				/* Avoid TLB flush if possible */
 				if (pte_protnone(oldpte))
@@ -109,7 +111,16 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				 * Don't mess with PTEs if page is already on the node
 				 * a single-threaded process is running on.
 				 */
-				if (target_node == page_to_nid(page))
+				nid = page_to_nid(page);
+				if (target_node == nid)
+					continue;
+
+				/*
+				 * Skip scanning top tier node if normal numa
+				 * balancing is disabled
+				 */
+				if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+				    node_is_toptier(nid))
 					continue;
 			}
 
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 7+ messages in thread

* Re: [PATCH -V14 0/3] NUMA balancing: optimize memory placement for memory tiering system
  2022-03-01  8:53 [PATCH -V14 0/3] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
                   ` (2 preceding siblings ...)
  2022-03-01  8:53 ` [PATCH -V14 3/3] memory tiering: skip to scan fast memory Huang Ying
@ 2022-03-01  8:55 ` Huang, Ying
  3 siblings, 0 replies; 7+ messages in thread
From: Huang, Ying @ 2022-03-01  8:55 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Feng Tang, Michal Hocko, Rik van Riel,
	Dave Hansen, Yang Shi, Zi Yan, Wei Xu, Oscar Salvador,
	Shakeel Butt, Johannes Weiner, Peter Zijlstra, Mel Gorman

Hi, Andrew,

There aren't any code changes in this new version.  I just revised
the patch descriptions based on comments.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH -V14 2/3] NUMA balancing: optimize page placement for memory tiering system
  2022-03-01  8:53 ` [PATCH -V14 2/3] NUMA balancing: optimize page placement for memory tiering system Huang Ying
@ 2022-03-01 21:58   ` Yang Shi
  0 siblings, 0 replies; 7+ messages in thread
From: Yang Shi @ 2022-03-01 21:58 UTC (permalink / raw)
  To: Huang Ying
  Cc: Peter Zijlstra, Mel Gorman, Andrew Morton, Linux MM,
	Linux Kernel Mailing List, Feng Tang, Baolin Wang,
	Oscar Salvador, Johannes Weiner, Michal Hocko, Rik van Riel,
	Dave Hansen, Zi Yan, Wei Xu, Shakeel Butt, zhongjiang-ali,
	Randy Dunlap

On Tue, Mar 1, 2022 at 12:54 AM Huang Ying <ying.huang@intel.com> wrote:
>
> With the advent of various new memory types, some machines will have
> multiple types of memory, e.g. DRAM and PMEM (persistent memory).  The
> memory subsystem of such machines can be called a memory tiering
> system, because the performance of the different types of memory is
> usually different.
>
> In such a system, because of changes in the memory access pattern,
> etc., some pages in the slow memory may become globally hot.  So in
> this patch, the NUMA balancing mechanism is enhanced to dynamically
> optimize the page placement among the different memory types
> according to page hotness.
>
> In a typical memory tiering system, there are CPUs, fast memory and
> slow memory in each physical NUMA node.  The CPUs and the fast memory
> will be put in one logical node (called fast memory node), while the
> slow memory will be put in another (faked) logical node (called slow
> memory node).  That is, the fast memory is regarded as local while the
> slow memory is regarded as remote.  So it's possible for the recently
> accessed pages in the slow memory node to be promoted to the fast
> memory node via the existing NUMA balancing mechanism.
>
> The original NUMA balancing mechanism will stop migrating pages if
> the free memory of the target node falls below the high watermark.
> This is a reasonable policy if there's only one memory type.  But it
> makes the original NUMA balancing mechanism almost useless for
> optimizing page placement among different memory types.  Details are
> as follows.
>
> It's the common case that the working-set size of the workload is
> larger than the size of the fast memory nodes; otherwise, it's
> unnecessary to use the slow memory at all.  So, there are almost
> never enough free pages in the fast memory nodes, and the globally
> hot pages in the slow memory node cannot be promoted to the fast
> memory node.  To solve the issue, we have 2 choices as follows,
>
> a. Ignore the free pages watermark checking when promoting hot pages
>    from the slow memory node to the fast memory node.  This will
>    create some memory pressure in the fast memory node, thus
>    triggering memory reclaim, so that the cold pages in the fast
>    memory node will be demoted to the slow memory node.
>
> b. Define a new watermark called wmark_promo which is higher than
>    wmark_high, and have kswapd reclaim pages until free pages reach
>    that watermark.  The scenario is as follows: when we want to
>    promote hot pages from the slow memory to the fast memory, but the
>    fast memory's free pages would drop below the high watermark with
>    such promotion, we wake up kswapd with the wmark_promo watermark
>    in order to demote cold pages and free up some space.  So, the
>    next time we want to promote hot pages we might have a chance of
>    doing so.
>
> The choice "a" may create high memory pressure in the fast memory
> node.  If the memory pressure of the workload is high, the memory
> pressure may become so high that the memory allocation latency of the
> workload is influenced, e.g. the direct reclaiming may be triggered.
>
> The choice "b" works much better at this aspect.  If the memory
> pressure of the workload is high, the hot pages promotion will stop
> earlier because its allocation watermark is higher than that of the
> normal memory allocation.  So in this patch, choice "b" is
> implemented.  A new zone watermark (WMARK_PROMO) is added.  Which is
> larger than the high watermark and can be controlled via
> watermark_scale_factor.
>
> In addition to the original page placement optimization among
> sockets, the NUMA balancing mechanism is extended to optimize page
> placement according to hot/cold among different memory types.  So
> the sysctl user space interface (numa_balancing) is extended in a
> backward compatible way as follows, so that users can enable/disable
> these functionalities individually.
>
> The sysctl is converted from a Boolean value to a bit field.  The
> flags are defined as follows,
>
> - 0: NUMA_BALANCING_DISABLED
> - 1: NUMA_BALANCING_NORMAL
> - 2: NUMA_BALANCING_MEMORY_TIERING
>
> We have tested the patch with the pmbench memory access benchmark
> with an 80:20 read/write ratio and a Gaussian access address
> distribution on a 2-socket Intel server with Optane DC Persistent
> Memory.  The test results show that the pmbench score can improve by
> up to 95.9%.
>
> Thanks to Andrew Morton for helping fix the document format error.
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>
> Cc: Randy Dunlap <rdunlap@infradead.org>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  Documentation/admin-guide/sysctl/kernel.rst | 29 ++++++++++++++-------
>  include/linux/mmzone.h                      |  1 +
>  include/linux/sched/sysctl.h                | 10 +++++++
>  kernel/sched/core.c                         | 21 ++++++++++++---
>  kernel/sysctl.c                             |  2 +-
>  mm/migrate.c                                | 16 ++++++++++--
>  mm/page_alloc.c                             |  3 ++-
>  mm/vmscan.c                                 |  6 ++++-
>  8 files changed, 70 insertions(+), 18 deletions(-)
>
> diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
> index d359bcfadd39..fdfd2b684822 100644
> --- a/Documentation/admin-guide/sysctl/kernel.rst
> +++ b/Documentation/admin-guide/sysctl/kernel.rst
> @@ -595,16 +595,23 @@ Documentation/admin-guide/kernel-parameters.rst).
>  numa_balancing
>  ==============
>
> -Enables/disables automatic page fault based NUMA memory
> -balancing. Memory is moved automatically to nodes
> -that access it often.
> +Enables/disables and configures automatic page fault based NUMA memory
> +balancing.  Memory is moved automatically to nodes that access it often.
> +The value to set can be the result of ORing the following:
>
> -Enables/disables automatic NUMA memory balancing. On NUMA machines, there
> -is a performance penalty if remote memory is accessed by a CPU. When this
> -feature is enabled the kernel samples what task thread is accessing memory
> -by periodically unmapping pages and later trapping a page fault. At the
> -time of the page fault, it is determined if the data being accessed should
> -be migrated to a local memory node.
> += =================================
> +0 NUMA_BALANCING_DISABLED
> +1 NUMA_BALANCING_NORMAL
> +2 NUMA_BALANCING_MEMORY_TIERING
> += =================================
> +
> +Or NUMA_BALANCING_NORMAL to optimize page placement among different
> +NUMA nodes to reduce remote accessing.  On NUMA machines, there is a
> +performance penalty if remote memory is accessed by a CPU. When this
> +feature is enabled the kernel samples what task thread is accessing
> +memory by periodically unmapping pages and later trapping a page
> +fault. At the time of the page fault, it is determined if the data
> +being accessed should be migrated to a local memory node.
>
>  The unmapping of pages and trapping faults incur additional overhead that
>  ideally is offset by improved memory locality but there is no universal
> @@ -615,6 +622,10 @@ faults may be controlled by the `numa_balancing_scan_period_min_ms,
>  numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms,
>  numa_balancing_scan_size_mb`_, and numa_balancing_settle_count sysctls.
>
> +Or NUMA_BALANCING_MEMORY_TIERING to optimize page placement among
> +different types of memory (represented as different NUMA nodes) to
> +place the hot pages in the fast memory.  This is implemented based on
> +unmapping and page fault too.
>
>  numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb
>  ===============================================================================================================================
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 44bd054ca12b..06bc55db19bf 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -342,6 +342,7 @@ enum zone_watermarks {
>         WMARK_MIN,
>         WMARK_LOW,
>         WMARK_HIGH,
> +       WMARK_PROMO,

TBH I'm not a fan of another watermark since we already have quite a
few watermarks (the regular watermarks, watermark boost, watermark
promo).  But it is not a big deal or a gating problem for now since
it is not user visible.  We definitely could try to consolidate some
of them later.

The patch looks fine to me. Reviewed-by: Yang Shi <shy828301@gmail.com>

>         NR_WMARK
>  };
>
> diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
> index c19dd5a2c05c..b5eec8854c5a 100644
> --- a/include/linux/sched/sysctl.h
> +++ b/include/linux/sched/sysctl.h
> @@ -23,6 +23,16 @@ enum sched_tunable_scaling {
>         SCHED_TUNABLESCALING_END,
>  };
>
> +#define NUMA_BALANCING_DISABLED                0x0
> +#define NUMA_BALANCING_NORMAL          0x1
> +#define NUMA_BALANCING_MEMORY_TIERING  0x2
> +
> +#ifdef CONFIG_NUMA_BALANCING
> +extern int sysctl_numa_balancing_mode;
> +#else
> +#define sysctl_numa_balancing_mode     0
> +#endif
> +
>  /*
>   *  control realtime throttling:
>   *
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index fcf0c180617c..c25348e9ae3a 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4280,7 +4280,9 @@ DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
>
>  #ifdef CONFIG_NUMA_BALANCING
>
> -void set_numabalancing_state(bool enabled)
> +int sysctl_numa_balancing_mode;
> +
> +static void __set_numabalancing_state(bool enabled)
>  {
>         if (enabled)
>                 static_branch_enable(&sched_numa_balancing);
> @@ -4288,13 +4290,22 @@ void set_numabalancing_state(bool enabled)
>                 static_branch_disable(&sched_numa_balancing);
>  }
>
> +void set_numabalancing_state(bool enabled)
> +{
> +       if (enabled)
> +               sysctl_numa_balancing_mode = NUMA_BALANCING_NORMAL;
> +       else
> +               sysctl_numa_balancing_mode = NUMA_BALANCING_DISABLED;
> +       __set_numabalancing_state(enabled);
> +}
> +
>  #ifdef CONFIG_PROC_SYSCTL
>  int sysctl_numa_balancing(struct ctl_table *table, int write,
>                           void *buffer, size_t *lenp, loff_t *ppos)
>  {
>         struct ctl_table t;
>         int err;
> -       int state = static_branch_likely(&sched_numa_balancing);
> +       int state = sysctl_numa_balancing_mode;
>
>         if (write && !capable(CAP_SYS_ADMIN))
>                 return -EPERM;
> @@ -4304,8 +4315,10 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
>         err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
>         if (err < 0)
>                 return err;
> -       if (write)
> -               set_numabalancing_state(state);
> +       if (write) {
> +               sysctl_numa_balancing_mode = state;
> +               __set_numabalancing_state(state);
> +       }
>         return err;
>  }
>  #endif
> diff --git a/kernel/sysctl.c b/kernel/sysctl.c
> index 5ae443b2882e..c90a564af720 100644
> --- a/kernel/sysctl.c
> +++ b/kernel/sysctl.c
> @@ -1689,7 +1689,7 @@ static struct ctl_table kern_table[] = {
>                 .mode           = 0644,
>                 .proc_handler   = sysctl_numa_balancing,
>                 .extra1         = SYSCTL_ZERO,
> -               .extra2         = SYSCTL_ONE,
> +               .extra2         = SYSCTL_FOUR,
>         },
>  #endif /* CONFIG_NUMA_BALANCING */
>         {
> diff --git a/mm/migrate.c b/mm/migrate.c
> index cdeaf01e601a..08ca9b9b142e 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -51,6 +51,7 @@
>  #include <linux/oom.h>
>  #include <linux/memory.h>
>  #include <linux/random.h>
> +#include <linux/sched/sysctl.h>
>
>  #include <asm/tlbflush.h>
>
> @@ -2034,16 +2035,27 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
>  {
>         int page_lru;
>         int nr_pages = thp_nr_pages(page);
> +       int order = compound_order(page);
>
> -       VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
> +       VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
>
>         /* Do not migrate THP mapped by multiple processes */
>         if (PageTransHuge(page) && total_mapcount(page) > 1)
>                 return 0;
>
>         /* Avoid migrating to a node that is nearly full */
> -       if (!migrate_balanced_pgdat(pgdat, nr_pages))
> +       if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
> +               int z;
> +
> +               if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING))
> +                       return 0;
> +               for (z = pgdat->nr_zones - 1; z >= 0; z--) {
> +                       if (populated_zone(pgdat->node_zones + z))
> +                               break;
> +               }
> +               wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
>                 return 0;
> +       }
>
>         if (isolate_lru_page(page))
>                 return 0;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 3589febc6d31..295b8f1fc31d 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -8474,7 +8474,8 @@ static void __setup_per_zone_wmarks(void)
>
>                 zone->watermark_boost = 0;
>                 zone->_watermark[WMARK_LOW]  = min_wmark_pages(zone) + tmp;
> -               zone->_watermark[WMARK_HIGH] = min_wmark_pages(zone) + tmp * 2;
> +               zone->_watermark[WMARK_HIGH] = low_wmark_pages(zone) + tmp;
> +               zone->_watermark[WMARK_PROMO] = high_wmark_pages(zone) + tmp;
>
>                 spin_unlock_irqrestore(&zone->lock, flags);
>         }
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6dd8f455bb82..199b8aadbdd6 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -56,6 +56,7 @@
>
>  #include <linux/swapops.h>
>  #include <linux/balloon_compaction.h>
> +#include <linux/sched/sysctl.h>
>
>  #include "internal.h"
>
> @@ -3988,7 +3989,10 @@ static bool pgdat_balanced(pg_data_t *pgdat, int order, int highest_zoneidx)
>                 if (!managed_zone(zone))
>                         continue;
>
> -               mark = high_wmark_pages(zone);
> +               if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING)
> +                       mark = wmark_pages(zone, WMARK_PROMO);
> +               else
> +                       mark = high_wmark_pages(zone);
>                 if (zone_watermark_ok_safe(zone, order, mark, highest_zoneidx))
>                         return true;
>         }
> --
> 2.30.2
>

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: [PATCH -V14 3/3] memory tiering: skip to scan fast memory
  2022-03-01  8:53 ` [PATCH -V14 3/3] memory tiering: skip to scan fast memory Huang Ying
@ 2022-03-01 22:03   ` Yang Shi
  0 siblings, 0 replies; 7+ messages in thread
From: Yang Shi @ 2022-03-01 22:03 UTC (permalink / raw)
  To: Huang Ying
  Cc: Peter Zijlstra, Mel Gorman, Andrew Morton, Linux MM,
	Linux Kernel Mailing List, Feng Tang, Dave Hansen, Baolin Wang,
	Johannes Weiner, Oscar Salvador, Michal Hocko, Rik van Riel,
	Zi Yan, Wei Xu, Shakeel Butt, zhongjiang-ali

On Tue, Mar 1, 2022 at 12:54 AM Huang Ying <ying.huang@intel.com> wrote:
>
> If NUMA balancing isn't used to optimize page placement among
> sockets but only among memory types, the hot pages in the fast memory
> node can't be migrated (promoted) anywhere.  So it's unnecessary to
> scan the pages in the fast memory node by changing their PTE/PMD
> mappings to PROT_NONE, and the resulting page faults can be avoided
> too.
>
> In the test, if only the memory tiering NUMA balancing mode is
> enabled, the number of NUMA balancing hint faults for the DRAM node
> is reduced to almost 0 with the patch, while the benchmark score
> doesn't change visibly.

Reviewed-by: Yang Shi <shy828301@gmail.com>

>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Suggested-by: Dave Hansen <dave.hansen@linux.intel.com>
> Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Rik van Riel <riel@surriel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Yang Shi <shy828301@gmail.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Wei Xu <weixugc@google.com>
> Cc: Shakeel Butt <shakeelb@google.com>
> Cc: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>
> Cc: linux-kernel@vger.kernel.org
> Cc: linux-mm@kvack.org
> ---
>  mm/huge_memory.c | 30 +++++++++++++++++++++---------
>  mm/mprotect.c    | 13 ++++++++++++-
>  2 files changed, 33 insertions(+), 10 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 406a3c28c026..9ce126cb0cfd 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -34,6 +34,7 @@
>  #include <linux/oom.h>
>  #include <linux/numa.h>
>  #include <linux/page_owner.h>
> +#include <linux/sched/sysctl.h>
>
>  #include <asm/tlb.h>
>  #include <asm/pgalloc.h>
> @@ -1766,17 +1767,28 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>         }
>  #endif
>
> -       /*
> -        * Avoid trapping faults against the zero page. The read-only
> -        * data is likely to be read-cached on the local CPU and
> -        * local/remote hits to the zero page are not interesting.
> -        */
> -       if (prot_numa && is_huge_zero_pmd(*pmd))
> -               goto unlock;
> +       if (prot_numa) {
> +               struct page *page;
> +               /*
> +                * Avoid trapping faults against the zero page. The read-only
> +                * data is likely to be read-cached on the local CPU and
> +                * local/remote hits to the zero page are not interesting.
> +                */
> +               if (is_huge_zero_pmd(*pmd))
> +                       goto unlock;
>
> -       if (prot_numa && pmd_protnone(*pmd))
> -               goto unlock;
> +               if (pmd_protnone(*pmd))
> +                       goto unlock;
>
> +               page = pmd_page(*pmd);
> +               /*
> +                * Skip scanning top tier node if normal numa
> +                * balancing is disabled
> +                */
> +               if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
> +                   node_is_toptier(page_to_nid(page)))
> +                       goto unlock;
> +       }
>         /*
>          * In case prot_numa, we are under mmap_read_lock(mm). It's critical
>          * to not clear pmd intermittently to avoid race with MADV_DONTNEED
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 0138dfcdb1d8..2fe03e695c81 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -29,6 +29,7 @@
>  #include <linux/uaccess.h>
>  #include <linux/mm_inline.h>
>  #include <linux/pgtable.h>
> +#include <linux/sched/sysctl.h>
>  #include <asm/cacheflush.h>
>  #include <asm/mmu_context.h>
>  #include <asm/tlbflush.h>
> @@ -83,6 +84,7 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>                          */
>                         if (prot_numa) {
>                                 struct page *page;
> +                               int nid;
>
>                                 /* Avoid TLB flush if possible */
>                                 if (pte_protnone(oldpte))
> @@ -109,7 +111,16 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>                                  * Don't mess with PTEs if page is already on the node
>                                  * a single-threaded process is running on.
>                                  */
> -                               if (target_node == page_to_nid(page))
> +                               nid = page_to_nid(page);
> +                               if (target_node == nid)
> +                                       continue;
> +
> +                               /*
> +                                * Skip scanning top tier node if normal numa
> +                                * balancing is disabled
> +                                */
> +                               if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
> +                                   node_is_toptier(nid))
>                                         continue;
>                         }
>
> --
> 2.30.2
>

^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2022-03-01 22:03 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-03-01  8:53 [PATCH -V14 0/3] NUMA balancing: optimize memory placement for memory tiering system Huang Ying
2022-03-01  8:53 ` [PATCH -V14 1/3] NUMA Balancing: add page promotion counter Huang Ying
2022-03-01  8:53 ` [PATCH -V14 2/3] NUMA balancing: optimize page placement for memory tiering system Huang Ying
2022-03-01 21:58   ` Yang Shi
2022-03-01  8:53 ` [PATCH -V14 3/3] memory tiering: skip to scan fast memory Huang Ying
2022-03-01 22:03   ` Yang Shi
2022-03-01  8:55 ` [PATCH -V14 0/3] NUMA balancing: optimize memory placement for memory tiering system Huang, Ying
