* [PATCH -V3 0/3] memory tiering: hot page selection
@ 2022-06-14  8:16 Huang Ying
  2022-06-14  8:16 ` [PATCH -V3 1/3] memory tiering: hot page selection with hint page fault latency Huang Ying
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Huang Ying @ 2022-06-14  8:16 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Johannes Weiner,
	Michal Hocko, Rik van Riel, Mel Gorman, Peter Zijlstra,
	Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador, Shakeel Butt,
	Zhong Jiang

To optimize page placement in a memory tiering system with NUMA
balancing, the hot pages in the slow memory nodes need to be
identified.  Essentially, the original NUMA balancing implementation
selects the most recently accessed (MRU) pages to promote.  But this
isn't a perfect algorithm for identifying hot pages, because even
pages with quite low access frequency will eventually be accessed,
given that the NUMA balancing page table scanning period can be quite
long (e.g. 60 seconds).  So in this patchset, we implement a new hot
page identification algorithm based on the latency between the NUMA
balancing page table scan and the hint page fault.  This is a kind of
most frequently accessed (MFU) algorithm.
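
To make the idea concrete, here is a minimal sketch of the
classification done on a hint page fault (illustrative only, with
made-up names; the real implementation is in patch [1/3]):

	/*
	 * scan_time_ms is recorded when the PTE is made PROT_NONE by the
	 * NUMA balancing page table scan; the page is classified on the
	 * subsequent hint page fault.
	 */
	static bool hint_fault_page_is_hot(unsigned int fault_time_ms,
					   unsigned int scan_time_ms,
					   unsigned int hot_threshold_ms)
	{
		unsigned int latency = fault_time_ms - scan_time_ms;

		/* A short scan-to-fault latency implies frequent access. */
		return latency < hot_threshold_ms;
	}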

In NUMA balancing memory tiering mode, if there are hot pages in the
slow memory node and cold pages in the fast memory node, we need to
promote/demote hot/cold pages between the fast and slow memory nodes.

One choice is to promote/demote as fast as possible.  But the CPU
cycles and memory bandwidth consumed by the high promoting/demoting
throughput will hurt the latency of some workloads because of access
latency inflation and slow memory bandwidth contention.

A way to resolve this issue is to restrict the max promoting/demoting
throughput.  It will take longer to finish the promoting/demoting, but
the workload latency will be better.  This is implemented in this
patchset as the page promotion rate limit mechanism.

The promotion hot threshold is workload and system configuration
dependent.  So in this patchset, a method to adjust the hot threshold
automatically is implemented.  The basic idea is to control the number
of candidate promotion pages to match the promotion rate limit.
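
The feedback direction can be sketched as follows (simplified and with
illustrative names; the real logic is in patch [3/3]):

	/* Too many promotion candidates -> tighten the hot threshold. */
	static unsigned int adjust_hot_threshold(unsigned int threshold_ms,
						 unsigned int step_ms,
						 unsigned int max_ms,
						 unsigned long candidates,
						 unsigned long rate_limit_pages)
	{
		if (candidates > rate_limit_pages * 11 / 10)
			threshold_ms = threshold_ms > 2 * step_ms ?
				       threshold_ms - step_ms : step_ms;
		else if (candidates < rate_limit_pages * 9 / 10)
			threshold_ms = threshold_ms + step_ms < max_ms ?
				       threshold_ms + step_ms : max_ms;
		return threshold_ms;
	}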

We used the pmbench memory accessing benchmark to test the patchset on
a 2-socket server system with DRAM and PMEM installed.  The test
results are as follows:

                 pmbench score    promote rate
                  (accesses/s)            MB/s
                 -------------    ------------
base               146887704.1           725.6
hot selection      165695601.2           544.0
rate limit         162814569.8           165.2
auto adjustment    170495294.0           136.9

From the results above,

With the hot page selection patch [1/3], the pmbench score increases by
about 12.8%, and the promote rate (overhead) decreases by about 25.0%,
compared with the base kernel.

With the rate limit patch [2/3], the pmbench score decreases by about
1.7%, and the promote rate decreases by about 69.6%, compared with the
hot page selection patch.

With the threshold auto adjustment patch [3/3], the pmbench score
increases by about 4.7%, and the promote rate decreases by about 17.1%,
compared with the rate limit patch.

Changelogs:

v3:

- Rebased on v5.19-rc1

- Renamed newly-added fields in struct pglist_data.

v2:

- Added ABI document for promote rate limit per Andrew's comments.  Thanks!

- Added function comments when necessary per Andrew's comments.

- Addressed other comments from Andrew Morton.

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH -V3 1/3] memory tiering: hot page selection with hint page fault latency
  2022-06-14  8:16 [PATCH -V3 0/3] memory tiering: hot page selection Huang Ying
@ 2022-06-14  8:16 ` Huang Ying
  2022-06-14  8:16 ` [PATCH -V3 2/3] memory tiering: rate limit NUMA migration throughput Huang Ying
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Huang Ying @ 2022-06-14  8:16 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Johannes Weiner,
	Michal Hocko, Rik van Riel, Mel Gorman, Peter Zijlstra,
	Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador, Shakeel Butt,
	Zhong Jiang

To optimize page placement in a memory tiering system with NUMA
balancing, the hot pages in the slow memory node need to be
identified.  Essentially, the original NUMA balancing implementation
selects the most recently accessed (MRU) pages to promote.  But this
isn't a perfect algorithm for identifying hot pages, because even
pages with quite low access frequency will eventually be accessed,
given that the NUMA balancing page table scanning period can be quite
long (e.g. 60 seconds).  A most frequently accessed (MFU) algorithm
is better.

So, in this patch we implement a better hot page selection algorithm,
based on NUMA balancing page table scanning and hint page faults, as
follows:

- When the page tables of the processes are scanned to change PTE/PMD
  to be PROT_NONE, the current time is recorded in struct page as scan
  time.

- When the page is accessed, a hint page fault will occur.  The scan
  time is read from the struct page, and the hint page fault latency
  is defined as

    hint page fault time - scan time

The shorter the hint page fault latency of a page, the more likely it
is that its access frequency is high.  So the hint page fault latency
is a better estimate of whether a page is hot or cold.

It's hard to find extra space in struct page to hold the scan time.
Fortunately, we can reuse some bits used by the original NUMA
balancing.

NUMA balancing uses some bits in struct page to store the accessing
CPU and PID (see page_cpupid_xchg_last()).  These are used by the
multi-stage node selection algorithm to avoid migrating pages that are
shared among NUMA nodes back and forth.  But for pages in the slow
memory node, even if they are shared among multiple NUMA nodes, as
long as the pages are hot, they need to be promoted to the fast memory
node.  So the accessing CPU and PID information is unnecessary for
slow memory pages.  We can reuse these bits in struct page to record
the scan time.  For fast memory pages, these bits are used as before.
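
As the diff below shows, the timestamp is stored at reduced resolution
when the cpupid field is narrower than 12 bits: the millisecond time is
right-shifted by PAGE_ACCESS_TIME_BUCKETS so that a window of at least
2^12 ms (about 4 seconds) can be represented, and the latency is later
computed modulo that window.  A worked example, assuming an 8-bit
LAST_CPUPID_SHIFT (so PAGE_ACCESS_TIME_BUCKETS == 4 and
PAGE_ACCESS_TIME_MASK == 0xff0):

	unsigned int scan_ms  = 123456;			/* time at scan        */
	unsigned int stored   = (scan_ms >> 4) & 0xff;	/* what the bits keep  */
	unsigned int fault_ms = 123956;			/* fault 500 ms later  */
	unsigned int latency  = (fault_ms - (stored << 4)) & 0xff0;
	/* latency == 496: ~500 ms measured at 16 ms granularity, modulo
	 * 4096 ms -- plenty for comparing against a ~1000 ms threshold. */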

For the hot threshold, the default value is 1 second, which works well
in our performance tests.  All pages with hint page fault latency < hot
threshold will be considered hot.

It's hard for users to determine the hot threshold.  So we don't
provide a kernel ABI to set it, just a debugfs interface for advanced
users to experiment with.  We will continue to work on a hot threshold
automatic adjustment mechanism.

The downside of the above method is that the response time to workload
hot spot changes may be much longer.  For example,

- A previous cold memory area becomes hot

- A hint page fault will be triggered.  But the hint page fault
  latency isn't shorter than the hot threshold, so the pages will
  not be promoted.

- When the memory area is scanned again, perhaps after a scan period,
  the measured hint page fault latency will be shorter than the hot
  threshold and the pages will be promoted.

To mitigate this, if there is enough free space in the fast memory
node, the hot threshold will not be used; all pages will be promoted
upon the hint page fault for fast response.
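
Concretely, "enough free space" here means that some zone of the fast
memory node is above its promo watermark plus
max(1GB, node_present_pages / 16).  A worked example, assuming a 256GB
DRAM node:

	enough_wmark = max(1GB, 256GB / 16) = 16GB

so the hot threshold is bypassed only while roughly 16GB (plus the
promo watermark) is still free on the fast memory node.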

Thanks to Zhong Jiang, who reported and tested the fix for a bug seen
when disabling memory tiering mode dynamically.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Zhong Jiang <zhongjiang-ali@linux.alibaba.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mm.h   | 25 +++++++++++
 kernel/sched/debug.c |  1 +
 kernel/sched/fair.c  | 99 ++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |  1 +
 mm/huge_memory.c     | 17 ++++++--
 mm/memory.c          | 11 ++++-
 mm/migrate.c         | 12 ++++++
 mm/mprotect.c        |  8 +++-
 8 files changed, 169 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index bc8f326be0ce..6fd23267597d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1311,6 +1311,18 @@ static inline int folio_nid(const struct folio *folio)
 }
 
 #ifdef CONFIG_NUMA_BALANCING
+/* page access time bits needs to hold at least 4 seconds */
+#define PAGE_ACCESS_TIME_MIN_BITS	12
+#if LAST_CPUPID_SHIFT < PAGE_ACCESS_TIME_MIN_BITS
+#define PAGE_ACCESS_TIME_BUCKETS				\
+	(PAGE_ACCESS_TIME_MIN_BITS - LAST_CPUPID_SHIFT)
+#else
+#define PAGE_ACCESS_TIME_BUCKETS	0
+#endif
+
+#define PAGE_ACCESS_TIME_MASK				\
+	(LAST_CPUPID_MASK << PAGE_ACCESS_TIME_BUCKETS)
+
 static inline int cpu_pid_to_cpupid(int cpu, int pid)
 {
 	return ((cpu & LAST__CPU_MASK) << LAST__PID_SHIFT) | (pid & LAST__PID_MASK);
@@ -1374,12 +1386,25 @@ static inline void page_cpupid_reset_last(struct page *page)
 	page->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
 }
 #endif /* LAST_CPUPID_NOT_IN_PAGE_FLAGS */
+
+static inline int xchg_page_access_time(struct page *page, int time)
+{
+	int last_time;
+
+	last_time = page_cpupid_xchg_last(page, time >> PAGE_ACCESS_TIME_BUCKETS);
+	return last_time << PAGE_ACCESS_TIME_BUCKETS;
+}
 #else /* !CONFIG_NUMA_BALANCING */
 static inline int page_cpupid_xchg_last(struct page *page, int cpupid)
 {
 	return page_to_nid(page); /* XXX */
 }
 
+static inline int xchg_page_access_time(struct page *page, int time)
+{
+	return 0;
+}
+
 static inline int page_cpupid_last(struct page *page)
 {
 	return page_to_nid(page); /* XXX */
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index bb3d63bdf4ae..ad63dbfc54f1 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -333,6 +333,7 @@ static __init int sched_init_debug(void)
 	debugfs_create_u32("scan_period_min_ms", 0644, numa, &sysctl_numa_balancing_scan_period_min);
 	debugfs_create_u32("scan_period_max_ms", 0644, numa, &sysctl_numa_balancing_scan_period_max);
 	debugfs_create_u32("scan_size_mb", 0644, numa, &sysctl_numa_balancing_scan_size);
+	debugfs_create_u32("hot_threshold_ms", 0644, numa, &sysctl_numa_balancing_hot_threshold);
 #endif
 
 	debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 77b2048a9326..edc3d741ef84 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1070,6 +1070,9 @@ unsigned int sysctl_numa_balancing_scan_size = 256;
 /* Scan @scan_size MB every @scan_period after an initial @scan_delay in ms */
 unsigned int sysctl_numa_balancing_scan_delay = 1000;
 
+/* The page with hint page fault latency < threshold in ms is considered hot */
+unsigned int sysctl_numa_balancing_hot_threshold = MSEC_PER_SEC;
+
 struct numa_group {
 	refcount_t refcount;
 
@@ -1412,6 +1415,68 @@ static inline unsigned long group_weight(struct task_struct *p, int nid,
 	return 1000 * faults / total_faults;
 }
 
+/*
+ * If memory tiering mode is enabled, cpupid of slow memory page is
+ * used to record scan time instead of CPU and PID.  When tiering mode
+ * is disabled at run time, the scan time (in cpupid) will be
+ * interpreted as CPU and PID.  So CPU needs to be checked to avoid to
+ * access out of array bound.
+ */
+static inline bool cpupid_valid(int cpupid)
+{
+	return cpupid_to_cpu(cpupid) < nr_cpu_ids;
+}
+
+/*
+ * For memory tiering mode, if there are enough free pages (more than
+ * enough watermark defined here) in fast memory node, to take full
+ * advantage of fast memory capacity, all recently accessed slow
+ * memory pages will be migrated to fast memory node without
+ * considering hot threshold.
+ */
+static bool pgdat_free_space_enough(struct pglist_data *pgdat)
+{
+	int z;
+	unsigned long enough_wmark;
+
+	enough_wmark = max(1UL * 1024 * 1024 * 1024 >> PAGE_SHIFT,
+			   pgdat->node_present_pages >> 4);
+	for (z = pgdat->nr_zones - 1; z >= 0; z--) {
+		struct zone *zone = pgdat->node_zones + z;
+
+		if (!populated_zone(zone))
+			continue;
+
+		if (zone_watermark_ok(zone, 0,
+				      wmark_pages(zone, WMARK_PROMO) + enough_wmark,
+				      ZONE_MOVABLE, 0))
+			return true;
+	}
+	return false;
+}
+
+/*
+ * For memory tiering mode, when page tables are scanned, the scan
+ * time will be recorded in struct page in addition to make page
+ * PROT_NONE for slow memory page.  So when the page is accessed, in
+ * hint page fault handler, the hint page fault latency is calculated
+ * via,
+ *
+ *	hint page fault latency = hint page fault time - scan time
+ *
+ * The smaller the hint page fault latency, the higher the possibility
+ * for the page to be hot.
+ */
+static int numa_hint_fault_latency(struct page *page)
+{
+	int last_time, time;
+
+	time = jiffies_to_msecs(jiffies);
+	last_time = xchg_page_access_time(page, time);
+
+	return (time - last_time) & PAGE_ACCESS_TIME_MASK;
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1419,9 +1484,34 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	int dst_nid = cpu_to_node(dst_cpu);
 	int last_cpupid, this_cpupid;
 
+	/*
+	 * The pages in slow memory node should be migrated according
+	 * to hot/cold instead of private/shared.
+	 */
+	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+	    !node_is_toptier(src_nid)) {
+		struct pglist_data *pgdat;
+		unsigned long latency, th;
+
+		pgdat = NODE_DATA(dst_nid);
+		if (pgdat_free_space_enough(pgdat))
+			return true;
+
+		th = sysctl_numa_balancing_hot_threshold;
+		latency = numa_hint_fault_latency(page);
+		if (latency >= th)
+			return false;
+
+		return true;
+	}
+
 	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
 	last_cpupid = page_cpupid_xchg_last(page, this_cpupid);
 
+	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+	    !node_is_toptier(src_nid) && !cpupid_valid(last_cpupid))
+		return false;
+
 	/*
 	 * Allow first faults or private faults to migrate immediately early in
 	 * the lifetime of a task. The magic number 4 is based on waiting for
@@ -2654,6 +2744,15 @@ void task_numa_fault(int last_cpupid, int mem_node, int pages, int flags)
 	if (!p->mm)
 		return;
 
+	/*
+	 * NUMA faults statistics are unnecessary for the slow memory
+	 * node for memory tiering mode.
+	 */
+	if (!node_is_toptier(mem_node) &&
+	    (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING ||
+	     !cpupid_valid(last_cpupid)))
+		return;
+
 	/* Allocate buffer to track faults on a per-node basis */
 	if (unlikely(!p->numa_faults)) {
 		int size = sizeof(*p->numa_faults) *
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 01259611beb9..953b01416462 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2406,6 +2406,7 @@ extern unsigned int sysctl_numa_balancing_scan_delay;
 extern unsigned int sysctl_numa_balancing_scan_period_min;
 extern unsigned int sysctl_numa_balancing_scan_period_max;
 extern unsigned int sysctl_numa_balancing_scan_size;
+extern unsigned int sysctl_numa_balancing_hot_threshold;
 #endif
 
 #ifdef CONFIG_SCHED_HRTICK
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index a77c78a2b6b5..35b7a27669b5 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1410,7 +1410,7 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	struct page *page;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	int page_nid = NUMA_NO_NODE;
-	int target_nid, last_cpupid = -1;
+	int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
 	bool migrated = false;
 	bool was_writable = pmd_savedwrite(oldpmd);
 	int flags = 0;
@@ -1431,7 +1431,12 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 		flags |= TNF_NO_GROUP;
 
 	page_nid = page_to_nid(page);
-	last_cpupid = page_cpupid_last(page);
+	/*
+	 * For memory tiering mode, cpupid of slow memory page is used
+	 * to record page access time.  So use default value.
+	 */
+	if (node_is_toptier(page_nid))
+		last_cpupid = page_cpupid_last(page);
 	target_nid = numa_migrate_prep(page, vma, haddr, page_nid,
 				       &flags);
 
@@ -1755,6 +1760,7 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 	if (prot_numa) {
 		struct page *page;
+		bool toptier;
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
 		 * data is likely to be read-cached on the local CPU and
@@ -1767,13 +1773,18 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 			goto unlock;
 
 		page = pmd_page(*pmd);
+		toptier = node_is_toptier(page_to_nid(page));
 		/*
 		 * Skip scanning top tier node if normal numa
 		 * balancing is disabled
 		 */
 		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-		    node_is_toptier(page_to_nid(page)))
+		    toptier)
 			goto unlock;
+
+		if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+		    !toptier)
+			xchg_page_access_time(page, jiffies_to_msecs(jiffies));
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
diff --git a/mm/memory.c b/mm/memory.c
index 3383d3530a4f..c1dac8095880 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -74,6 +74,7 @@
 #include <linux/perf_event.h>
 #include <linux/ptrace.h>
 #include <linux/vmalloc.h>
+#include <linux/sched/sysctl.h>
 
 #include <trace/events/kmem.h>
 
@@ -4726,8 +4727,16 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
 		flags |= TNF_SHARED;
 
-	last_cpupid = page_cpupid_last(page);
 	page_nid = page_to_nid(page);
+	/*
+	 * For memory tiering mode, cpupid of slow memory page is used
+	 * to record page access time.  So use default value.
+	 */
+	if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+	    !node_is_toptier(page_nid))
+		last_cpupid = (-1 & LAST_CPUPID_MASK);
+	else
+		last_cpupid = page_cpupid_last(page);
 	target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
 			&flags);
 	if (target_nid == NUMA_NO_NODE) {
diff --git a/mm/migrate.c b/mm/migrate.c
index 8a897f34ce2c..bb0bb604a5f7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -541,6 +541,18 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
 	 * future migrations of this same page.
 	 */
 	cpupid = page_cpupid_xchg_last(&folio->page, -1);
+	/*
+	 * For memory tiering mode, when migrate between slow and fast
+	 * memory node, reset cpupid, because that is used to record
+	 * page access time in slow memory node.
+	 */
+	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) {
+		bool f_toptier = node_is_toptier(page_to_nid(&folio->page));
+		bool t_toptier = node_is_toptier(page_to_nid(&newfolio->page));
+
+		if (f_toptier != t_toptier)
+			cpupid = -1;
+	}
 	page_cpupid_xchg_last(&newfolio->page, cpupid);
 
 	folio_migrate_ksm(newfolio, folio);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index ba5592655ee3..4da10376a23b 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -89,6 +89,7 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
 			if (prot_numa) {
 				struct page *page;
 				int nid;
+				bool toptier;
 
 				/* Avoid TLB flush if possible */
 				if (pte_protnone(oldpte))
@@ -118,14 +119,19 @@ static unsigned long change_pte_range(struct mmu_gather *tlb,
 				nid = page_to_nid(page);
 				if (target_node == nid)
 					continue;
+				toptier = node_is_toptier(nid);
 
 				/*
 				 * Skip scanning top tier node if normal numa
 				 * balancing is disabled
 				 */
 				if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-				    node_is_toptier(nid))
+				    toptier)
 					continue;
+				if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
+				    !toptier)
+					xchg_page_access_time(page,
+						jiffies_to_msecs(jiffies));
 			}
 
 			oldpte = ptep_modify_prot_start(vma, addr, pte);
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH -V3 2/3] memory tiering: rate limit NUMA migration throughput
  2022-06-14  8:16 [PATCH -V3 0/3] memory tiering: hot page selection Huang Ying
  2022-06-14  8:16 ` [PATCH -V3 1/3] memory tiering: hot page selection with hint page fault latency Huang Ying
@ 2022-06-14  8:16 ` Huang Ying
  2022-06-14  8:16 ` [PATCH -V3 3/3] memory tiering: adjust hot threshold automatically Huang Ying
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Huang Ying @ 2022-06-14  8:16 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Johannes Weiner,
	Michal Hocko, Rik van Riel, Mel Gorman, Peter Zijlstra,
	Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador, Shakeel Butt

In NUMA balancing memory tiering mode, if there are hot pages in the
slow memory node and cold pages in the fast memory node, we need to
promote/demote hot/cold pages between the fast and slow memory nodes.

One choice is to promote/demote as fast as possible.  But the CPU
cycles and memory bandwidth consumed by the high promoting/demoting
throughput will hurt the latency of some workloads because of access
latency inflation and slow memory bandwidth contention.

A way to resolve this issue is to restrict the max promoting/demoting
throughput.  It will take longer to finish the promoting/demoting, but
the workload latency will be better.  This is implemented in this
patch as the page promotion rate limit mechanism.

The number of candidate pages to be promoted to the fast memory node
via NUMA balancing is counted.  If the count exceeds the limit
specified by the user, NUMA balancing promotion will be stopped until
the next second.
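
In pages, the limit is just the configured MB/s value scaled by the
page size.  A worked example, assuming 4KB pages (PAGE_SHIFT == 12) and
the default of 65536 MB/s:

	rate_limit = 65536 << (20 - 12) = 16777216 pages/s  (i.e. 64GB/s)

so the default is effectively unlimited until the user lowers the
sysctl.  Within each one-second window, once the number of candidate
pages counted since the window start reaches rate_limit, further
promotions are skipped until the next window.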

A new sysctl knob kernel.numa_balancing_promote_rate_limit_MBps is
added for users to specify the limit.

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 Documentation/admin-guide/sysctl/kernel.rst | 11 +++++++
 include/linux/mmzone.h                      |  7 +++++
 include/linux/sched/sysctl.h                |  1 +
 kernel/sched/fair.c                         | 33 +++++++++++++++++++--
 kernel/sysctl.c                             |  8 +++++
 mm/vmstat.c                                 |  1 +
 6 files changed, 59 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index ddccd1077462..c99bceafd162 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -623,6 +623,17 @@ different types of memory (represented as different NUMA nodes) to
 place the hot pages in the fast memory.  This is implemented based on
 unmapping and page fault too.
 
+numa_balancing_promote_rate_limit_MBps
+======================================
+
+Too high promotion/demotion throughput between different memory types
+may hurt application latency.  This can be used to rate limit the
+promotion throughput.  The per-node max promotion throughput in MB/s
+will be limited to be no more than the set value.
+
+A rule of thumb is to set this to less than 1/10 of the PMEM node
+write bandwidth.
+
 oops_all_cpu_backtrace
 ======================
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index aab70355d64f..994a0cd39595 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -221,6 +221,7 @@ enum node_stat_item {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	PGPROMOTE_SUCCESS,	/* promote successfully */
+	PGPROMOTE_CANDIDATE,	/* candidate pages to promote */
 #endif
 	NR_VM_NODE_STAT_ITEMS
 };
@@ -912,6 +913,12 @@ typedef struct pglist_data {
 	struct deferred_split deferred_split_queue;
 #endif
 
+#ifdef CONFIG_NUMA_BALANCING
+	/* start time in ms of current promote rate limit period */
+	unsigned int nbp_rl_start;
+	/* number of promote candidate pages at start time of current rate limit period */
+	unsigned long nbp_rl_nr_cand;
+#endif
 	/* Fields commonly accessed by the page reclaim scanner */
 
 	/*
diff --git a/include/linux/sched/sysctl.h b/include/linux/sched/sysctl.h
index e650946816d0..303ee7dd0c7e 100644
--- a/include/linux/sched/sysctl.h
+++ b/include/linux/sched/sysctl.h
@@ -27,6 +27,7 @@ enum sched_tunable_scaling {
 
 #ifdef CONFIG_NUMA_BALANCING
 extern int sysctl_numa_balancing_mode;
+extern unsigned int sysctl_numa_balancing_promote_rate_limit;
 #else
 #define sysctl_numa_balancing_mode	0
 #endif
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index edc3d741ef84..d779a91a8ca0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1073,6 +1073,9 @@ unsigned int sysctl_numa_balancing_scan_delay = 1000;
 /* The page with hint page fault latency < threshold in ms is considered hot */
 unsigned int sysctl_numa_balancing_hot_threshold = MSEC_PER_SEC;
 
+/* Restrict the NUMA promotion throughput (MB/s) for each target node. */
+unsigned int sysctl_numa_balancing_promote_rate_limit = 65536;
+
 struct numa_group {
 	refcount_t refcount;
 
@@ -1477,6 +1480,29 @@ static int numa_hint_fault_latency(struct page *page)
 	return (time - last_time) & PAGE_ACCESS_TIME_MASK;
 }
 
+/*
+ * For memory tiering mode, too high promotion/demotion throughput may
+ * hurt application latency.  So we provide a mechanism to rate limit
+ * the number of pages that are tried to be promoted.
+ */
+static bool numa_promotion_rate_limit(struct pglist_data *pgdat,
+				      unsigned long rate_limit, int nr)
+{
+	unsigned long nr_cand;
+	unsigned int now, start;
+
+	now = jiffies_to_msecs(jiffies);
+	mod_node_page_state(pgdat, PGPROMOTE_CANDIDATE, nr);
+	nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+	start = pgdat->nbp_rl_start;
+	if (now - start > MSEC_PER_SEC &&
+	    cmpxchg(&pgdat->nbp_rl_start, start, now) == start)
+		pgdat->nbp_rl_nr_cand = nr_cand;
+	if (nr_cand - pgdat->nbp_rl_nr_cand >= rate_limit)
+		return true;
+	return false;
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1491,7 +1517,7 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long latency, th;
+		unsigned long rate_limit, latency, th;
 
 		pgdat = NODE_DATA(dst_nid);
 		if (pgdat_free_space_enough(pgdat))
@@ -1502,7 +1528,10 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 		if (latency >= th)
 			return false;
 
-		return true;
+		rate_limit = sysctl_numa_balancing_promote_rate_limit << \
+			(20 - PAGE_SHIFT);
+		return !numa_promotion_rate_limit(pgdat, rate_limit,
+						  thp_nr_pages(page));
 	}
 
 	this_cpupid = cpu_pid_to_cpupid(dst_cpu, current->pid);
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index e52b6e372c60..3188698e2c8e 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1597,6 +1597,14 @@ static struct ctl_table kern_table[] = {
 		.extra1		= SYSCTL_ZERO,
 		.extra2		= SYSCTL_FOUR,
 	},
+	{
+		.procname	= "numa_balancing_promote_rate_limit_MBps",
+		.data		= &sysctl_numa_balancing_promote_rate_limit,
+		.maxlen		= sizeof(unsigned int),
+		.mode		= 0644,
+		.proc_handler	= proc_dointvec_minmax,
+		.extra1		= SYSCTL_ZERO,
+	},
 #endif /* CONFIG_NUMA_BALANCING */
 	{
 		.procname	= "panic",
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 373d2730fcf2..068ca7d150ab 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1245,6 +1245,7 @@ const char * const vmstat_text[] = {
 #endif
 #ifdef CONFIG_NUMA_BALANCING
 	"pgpromote_success",
+	"pgpromote_candidate",
 #endif
 
 	/* enum writeback_stat_item counters */
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH -V3 3/3] memory tiering: adjust hot threshold automatically
  2022-06-14  8:16 [PATCH -V3 0/3] memory tiering: hot page selection Huang Ying
  2022-06-14  8:16 ` [PATCH -V3 1/3] memory tiering: hot page selection with hint page fault latency Huang Ying
  2022-06-14  8:16 ` [PATCH -V3 2/3] memory tiering: rate limit NUMA migration throughput Huang Ying
@ 2022-06-14  8:16 ` Huang Ying
  2022-06-14 15:30 ` [PATCH -V3 0/3] memory tiering: hot page selection Johannes Weiner
  2022-06-20  3:19 ` Baolin Wang
  4 siblings, 0 replies; 8+ messages in thread
From: Huang Ying @ 2022-06-14  8:16 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, linux-kernel, Huang Ying, Johannes Weiner,
	Michal Hocko, Rik van Riel, Mel Gorman, Peter Zijlstra,
	Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador, Shakeel Butt

The promotion hot threshold is workload and system configuration
dependent.  So in this patch, a method to adjust the hot threshold
automatically is implemented.  The basic idea is to control the number
of candidate promotion pages to match the promotion rate limit.  If
the hint page fault latency of a page is less than the hot threshold,
we will try to promote the page, and such a page is called a candidate
promotion page.

If the number of candidate promotion pages in the statistics interval
is much larger than the promotion rate limit, the hot threshold will
be decreased to reduce the number of candidate promotion pages.  If it
is much smaller, the hot threshold will be increased to increase the
number of candidate promotion pages.

To make the above method work, in each statistics interval the total
number of pages to check (those on which the hint page faults occur)
and the hot/cold distribution need to be stable.  Because the page
tables are scanned linearly in NUMA balancing, but the hot/cold
distribution usually isn't uniform across the address space, the
statistics interval should be larger than the NUMA balancing scan
period.  So in this patch, the max scan period is used as the
statistics interval, and it works well in our tests.
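
A worked example of one adjustment step, assuming the default hot
threshold of 1000ms, the default 16 adjustment steps, a promotion rate
limit of 100MB/s with 4KB pages, and a 60s max scan period:

	ref_th     = 1000ms
	unit_th    = 1000ms * 2 / 16 = 125ms
	rate_limit = 100 << (20 - 12) = 25600 pages/s
	ref_cand   = 25600 pages/s * 60s = 1536000 pages per interval

If more than 110% of ref_cand candidate pages are seen in an interval,
the threshold drops by 125ms (with a floor of 125ms); if fewer than
90% are seen, it rises by 125ms (with a ceiling of 2000ms).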

Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Wei Xu <weixugc@google.com>
Cc: osalvador <osalvador@suse.de>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/mmzone.h |  9 +++++++++
 kernel/sched/core.c    | 14 +++++++++++++
 kernel/sched/fair.c    | 46 +++++++++++++++++++++++++++++++++++++-----
 3 files changed, 64 insertions(+), 5 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 994a0cd39595..33d875d23e9a 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -918,6 +918,15 @@ typedef struct pglist_data {
 	unsigned int nbp_rl_start;
 	/* number of promote candidate pages at start time of current rate limit period */
 	unsigned long nbp_rl_nr_cand;
+	/* promote threshold in ms */
+	unsigned int nbp_threshold;
+	/* start time in ms of current promote threshold adjustment period */
+	unsigned int nbp_th_start;
+	/*
+	 * number of promote candidate pages at stat time of current promote
+	 * threshold adjustment period
+	 */
+	unsigned long nbp_th_nr_cand;
 #endif
 	/* Fields commonly accessed by the page reclaim scanner */
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index bfa7452ca92e..6f9c7a4f647f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4361,6 +4361,17 @@ void set_numabalancing_state(bool enabled)
 }
 
 #ifdef CONFIG_PROC_SYSCTL
+static void reset_memory_tiering(void)
+{
+	struct pglist_data *pgdat;
+
+	for_each_online_pgdat(pgdat) {
+		pgdat->nbp_threshold = 0;
+		pgdat->nbp_th_nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		pgdat->nbp_th_start = jiffies_to_msecs(jiffies);
+	}
+}
+
 int sysctl_numa_balancing(struct ctl_table *table, int write,
 			  void *buffer, size_t *lenp, loff_t *ppos)
 {
@@ -4377,6 +4388,9 @@ int sysctl_numa_balancing(struct ctl_table *table, int write,
 	if (err < 0)
 		return err;
 	if (write) {
+		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
+		    (state & NUMA_BALANCING_MEMORY_TIERING))
+			reset_memory_tiering();
 		sysctl_numa_balancing_mode = state;
 		__set_numabalancing_state(state);
 	}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d779a91a8ca0..cc5b26fefae8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1503,6 +1503,35 @@ static bool numa_promotion_rate_limit(struct pglist_data *pgdat,
 	return false;
 }
 
+#define NUMA_MIGRATION_ADJUST_STEPS	16
+
+static void numa_promotion_adjust_threshold(struct pglist_data *pgdat,
+					    unsigned long rate_limit,
+					    unsigned int ref_th)
+{
+	unsigned int now, start, th_period, unit_th, th;
+	unsigned long nr_cand, ref_cand, diff_cand;
+
+	now = jiffies_to_msecs(jiffies);
+	th_period = sysctl_numa_balancing_scan_period_max;
+	start = pgdat->nbp_th_start;
+	if (now - start > th_period &&
+	    cmpxchg(&pgdat->nbp_th_start, start, now) == start) {
+		ref_cand = rate_limit *
+			sysctl_numa_balancing_scan_period_max / MSEC_PER_SEC;
+		nr_cand = node_page_state(pgdat, PGPROMOTE_CANDIDATE);
+		diff_cand = nr_cand - pgdat->nbp_th_nr_cand;
+		unit_th = ref_th * 2 / NUMA_MIGRATION_ADJUST_STEPS;
+		th = pgdat->nbp_threshold ? : ref_th;
+		if (diff_cand > ref_cand * 11 / 10)
+			th = max(th - unit_th, unit_th);
+		else if (diff_cand < ref_cand * 9 / 10)
+			th = min(th + unit_th, ref_th * 2);
+		pgdat->nbp_th_nr_cand = nr_cand;
+		pgdat->nbp_threshold = th;
+	}
+}
+
 bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 				int src_nid, int dst_cpu)
 {
@@ -1517,19 +1546,26 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	if (sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING &&
 	    !node_is_toptier(src_nid)) {
 		struct pglist_data *pgdat;
-		unsigned long rate_limit, latency, th;
+		unsigned long rate_limit;
+		unsigned int latency, th, def_th;
 
 		pgdat = NODE_DATA(dst_nid);
-		if (pgdat_free_space_enough(pgdat))
+		if (pgdat_free_space_enough(pgdat)) {
+			/* workload changed, reset hot threshold */
+			pgdat->nbp_threshold = 0;
 			return true;
+		}
+
+		def_th = sysctl_numa_balancing_hot_threshold;
+		rate_limit = sysctl_numa_balancing_promote_rate_limit << \
+			(20 - PAGE_SHIFT);
+		numa_promotion_adjust_threshold(pgdat, rate_limit, def_th);
 
-		th = sysctl_numa_balancing_hot_threshold;
+		th = pgdat->nbp_threshold ? : def_th;
 		latency = numa_hint_fault_latency(page);
 		if (latency >= th)
 			return false;
 
-		rate_limit = sysctl_numa_balancing_promote_rate_limit << \
-			(20 - PAGE_SHIFT);
 		return !numa_promotion_rate_limit(pgdat, rate_limit,
 						  thp_nr_pages(page));
 	}
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH -V3 0/3] memory tiering: hot page selection
  2022-06-14  8:16 [PATCH -V3 0/3] memory tiering: hot page selection Huang Ying
                   ` (2 preceding siblings ...)
  2022-06-14  8:16 ` [PATCH -V3 3/3] memory tiering: adjust hot threshold automatically Huang Ying
@ 2022-06-14 15:30 ` Johannes Weiner
  2022-06-15  3:47   ` Ying Huang
  2022-06-20  3:19 ` Baolin Wang
  4 siblings, 1 reply; 8+ messages in thread
From: Johannes Weiner @ 2022-06-14 15:30 UTC (permalink / raw)
  To: Huang Ying
  Cc: Andrew Morton, linux-mm, linux-kernel, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Zhong Jiang

Hi Huang,

Have you had a chance to look at our hot page detection patch that
Hasan sent out some time ago? [1]

It hooks into page reclaim to determine what is and isn't hot. Reclaim
is an existing, well-tested mechanism to do just that. It's just 13
lines of code: set active bit on the first hint fault; promote on the
second one if the active bit is still set. This promotes only pages
hot enough that they can compete with toptier access frequencies.

It's not just convenient, it's also essential to link tier promotion
rate to page aging. Tiered NUMA balancing is about establishing a
global LRU order across two (or more) nodes. LRU promotions *within* a
node require multiple LRU cycles with references. LRU promotions
*between* nodes must follow the same rules, and be subject to the same
aging pressure, or you can get much colder pages promoted into a very
hot workingset and wreak havoc.

We've hammered this patch quite extensively with several Meta
production workloads and it's been working reliably at keeping
reasonable promotion rates.

@@ -4202,6 +4202,19 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 
 	last_cpupid = page_cpupid_last(page);
 	page_nid = page_to_nid(page);
+
+	/* Only migrate pages that are active on non-toptier node */
+	if (numa_promotion_tiered_enabled &&
+		!node_is_toptier(page_nid) &&
+		!PageActive(page)) {
+		count_vm_numa_event(NUMA_HINT_FAULTS);
+		if (page_nid == numa_node_id())
+			count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
+		mark_page_accessed(page);
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+		goto out;
+	}
+
 	target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
 			&flags);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);

[1] https://lore.kernel.org/all/20211130003634.35468-1-hasanalmaruf@fb.com/t/#m85b95624622f175ca17a00cc8cc0fc9cc4eeb6d2

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH -V3 0/3] memory tiering: hot page selection
  2022-06-14 15:30 ` [PATCH -V3 0/3] memory tiering: hot page selection Johannes Weiner
@ 2022-06-15  3:47   ` Ying Huang
  0 siblings, 0 replies; 8+ messages in thread
From: Ying Huang @ 2022-06-15  3:47 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Andrew Morton, linux-mm, linux-kernel, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Zhong Jiang

On Tue, 2022-06-14 at 11:30 -0400, Johannes Weiner wrote:
> Hi Huang,

Hi, Johannes,

> Have you had a chance to look at our hot page detection patch that
> Hasan has sent out some time ago? [1]

Yes.  I have seen that patch before.

> It hooks into page reclaim to determine what is and isn't hot. Reclaim
> is an existing, well-tested mechanism to do just that. It's just 13
> lines of code: set active bit on the first hint fault; promote on the
> second one if the active bit is still set. This promotes only pages
> hot enough that they can compete with toptier access frequencies.

In general, I think that patch is good.  And it can work together with
the hot page selection patchset (this series).  That is, if
!PageActive(), then activate the page; otherwise, promote the page if
the hint page fault latency is short too.
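
A rough sketch of how the two checks could compose in do_numa_page()
(purely illustrative, not a proposed patch; "out_unlock" is an assumed
label):

	if (!node_is_toptier(page_nid)) {
		if (!PageActive(page)) {
			/* First hint fault seen: only age the page. */
			mark_page_accessed(page);
			goto out_unlock;
		}
		/* Still active on re-fault: also require a short latency. */
		if (numa_hint_fault_latency(page) >=
		    sysctl_numa_balancing_hot_threshold)
			goto out_unlock;
	}
	/* otherwise fall through to numa_migrate_prep() and promotion */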

In a system with a swap device configured, and with continuous memory
pressure on all memory types (including PMEM), the NUMA balancing hint
page faults can help page reclaiming: page accesses can be detected
much earlier.  And page reclaiming can help page promotion by keeping
recently-not-accessed pages on the inactive list and recently-accessed
pages on the active list.

In a system without a swap device configured and without continuous
memory pressure on slow tier memory (e.g., PMEM), page reclaiming
doesn't help much because the active/inactive lists aren't scanned
regularly.  This is true for some users.  And the method in this
series still helps.

> It's not just convenient, it's also essential to link tier promotion
> rate to page aging. Tiered NUMA balancing is about establishing a
> global LRU order across two (or more) nodes. LRU promotions *within* a
> node require multiple LRU cycles with references.

IMHO, the LRU algorithm is good for page reclaiming, but it isn't
sufficient for page promotion by itself.  It can identify cold pages
well, but its accuracy in identifying hot pages isn't enough.  That
is, it's hard to distinguish between warm pages and hot pages with
LRU/MRU itself.  The hint page fault latency introduced in this series
is meant to help with that.

> LRU promotions
> *between* nodes must follow the same rules, and be subject to the same
> aging pressure, or you can get much colder pages promoted into a very
> hot workingset and wreak havoc.
> 
> We've hammered this patch quite extensively with several Meta
> production workloads and it's been working reliably at keeping
> reasonable promotion rates.

Sounds good.  Do you have some data to share?

> @@ -4202,6 +4202,19 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
>  
> 
>  	last_cpupid = page_cpupid_last(page);
>  	page_nid = page_to_nid(page);
> +
> +	/* Only migrate pages that are active on non-toptier node */
> +	if (numa_promotion_tiered_enabled &&
> +		!node_is_toptier(page_nid) &&
> +		!PageActive(page)) {
> +		count_vm_numa_event(NUMA_HINT_FAULTS);
> +		if (page_nid == numa_node_id())
> +			count_vm_numa_event(NUMA_HINT_FAULTS_LOCAL);
> +		mark_page_accessed(page);
> +		pte_unmap_unlock(vmf->pte, vmf->ptl);
> +		goto out;
> +	}
> +
>  	target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
>  			&flags);
>  	pte_unmap_unlock(vmf->pte, vmf->ptl);
> 
> [1] https://lore.kernel.org/all/20211130003634.35468-1-hasanalmaruf@fb.com/t/#m85b95624622f175ca17a00cc8cc0fc9cc4eeb6d2

Best Regards,
Huang, Ying


^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH -V3 0/3] memory tiering: hot page selection
  2022-06-14  8:16 [PATCH -V3 0/3] memory tiering: hot page selection Huang Ying
                   ` (3 preceding siblings ...)
  2022-06-14 15:30 ` [PATCH -V3 0/3] memory tiering: hot page selection Johannes Weiner
@ 2022-06-20  3:19 ` Baolin Wang
  2022-06-20  3:24   ` Huang, Ying
  4 siblings, 1 reply; 8+ messages in thread
From: Baolin Wang @ 2022-06-20  3:19 UTC (permalink / raw)
  To: Huang Ying, Andrew Morton
  Cc: linux-mm, linux-kernel, Johannes Weiner, Michal Hocko,
	Rik van Riel, Mel Gorman, Peter Zijlstra, Dave Hansen, Yang Shi,
	Zi Yan, Wei Xu, osalvador, Shakeel Butt, Zhong Jiang



On 6/14/2022 4:16 PM, Huang Ying wrote:
> To optimize page placement in a memory tiering system with NUMA
> balancing, the hot pages in the slow memory nodes need to be
> identified.  Essentially, the original NUMA balancing implementation
> selects the mostly recently accessed (MRU) pages to promote.  But this
> isn't a perfect algorithm to identify the hot pages.  Because the
> pages with quite low access frequency may be accessed eventually given
> the NUMA balancing page table scanning period could be quite long
> (e.g. 60 seconds).  So in this patchset, we implement a new hot page
> identification algorithm based on the latency between NUMA balancing
> page table scanning and hint page fault.  Which is a kind of mostly
> frequently accessed (MFU) algorithm.
> 
> In NUMA balancing memory tiering mode, if there are hot pages in slow
> memory node and cold pages in fast memory node, we need to
> promote/demote hot/cold pages between the fast and cold memory nodes.
> 
> A choice is to promote/demote as fast as possible.  But the CPU cycles
> and memory bandwidth consumed by the high promoting/demoting
> throughput will hurt the latency of some workload because of accessing
> inflating and slow memory bandwidth contention.
> 
> A way to resolve this issue is to restrict the max promoting/demoting
> throughput.  It will take longer to finish the promoting/demoting.
> But the workload latency will be better.  This is implemented in this
> patchset as the page promotion rate limit mechanism.
> 
> The promotion hot threshold is workload and system configuration
> dependent.  So in this patchset, a method to adjust the hot threshold
> automatically is implemented.  The basic idea is to control the number
> of the candidate promotion pages to match the promotion rate limit.
> 
> We used the pmbench memory accessing benchmark tested the patchset on
> a 2-socket server system with DRAM and PMEM installed.  The test
> results are as follows,
> 
> 		pmbench score		promote rate
> 		 (accesses/s)			MB/s
> 		-------------		------------
> base		  146887704.1		       725.6
> hot selection     165695601.2		       544.0
> rate limit	  162814569.8		       165.2
> auto adjustment	  170495294.0                  136.9
> 
>  From the results above,
> 
> With hot page selection patch [1/3], the pmbench score increases about
> 12.8%, and promote rate (overhead) decreases about 25.0%, compared with
> base kernel.
> 
> With rate limit patch [2/3], pmbench score decreases about 1.7%, and
> promote rate decreases about 69.6%, compared with hot page selection
> patch.
> 
> With threshold auto adjustment patch [3/3], pmbench score increases
> about 4.7%, and promote rate decrease about 17.1%, compared with rate
> limit patch.

I did a simple test with mysql on my machine, which contains 1 DRAM 
node (30G) and 1 PMEM node (126G).

sysbench /usr/share/sysbench/oltp_read_write.lua \
......
--tables=200 \
--table-size=1000000 \
--report-interval=10 \
--threads=16 \
--time=120

The tps can be improved by about 5% per the data below, and I think 
this is a good start for optimizing the promotion.  So for this 
series, please feel free to add:

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>

Without this patchset:
  transactions:                        2080188 (3466.48 per sec.)

With this patch set:
  transactions:                        2174296 (3623.40 per sec.)

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH -V3 0/3] memory tiering: hot page selection
  2022-06-20  3:19 ` Baolin Wang
@ 2022-06-20  3:24   ` Huang, Ying
  0 siblings, 0 replies; 8+ messages in thread
From: Huang, Ying @ 2022-06-20  3:24 UTC (permalink / raw)
  To: Baolin Wang
  Cc: Andrew Morton, linux-mm, linux-kernel, Johannes Weiner,
	Michal Hocko, Rik van Riel, Mel Gorman, Peter Zijlstra,
	Dave Hansen, Yang Shi, Zi Yan, Wei Xu, osalvador, Shakeel Butt,
	Zhong Jiang

Baolin Wang <baolin.wang@linux.alibaba.com> writes:

> On 6/14/2022 4:16 PM, Huang Ying wrote:
>> To optimize page placement in a memory tiering system with NUMA
>> balancing, the hot pages in the slow memory nodes need to be
>> identified.  Essentially, the original NUMA balancing implementation
>> selects the mostly recently accessed (MRU) pages to promote.  But this
>> isn't a perfect algorithm to identify the hot pages.  Because the
>> pages with quite low access frequency may be accessed eventually given
>> the NUMA balancing page table scanning period could be quite long
>> (e.g. 60 seconds).  So in this patchset, we implement a new hot page
>> identification algorithm based on the latency between NUMA balancing
>> page table scanning and hint page fault.  Which is a kind of mostly
>> frequently accessed (MFU) algorithm.
>> In NUMA balancing memory tiering mode, if there are hot pages in
>> slow
>> memory node and cold pages in fast memory node, we need to
>> promote/demote hot/cold pages between the fast and cold memory nodes.
>> A choice is to promote/demote as fast as possible.  But the CPU
>> cycles
>> and memory bandwidth consumed by the high promoting/demoting
>> throughput will hurt the latency of some workload because of accessing
>> inflating and slow memory bandwidth contention.
>> A way to resolve this issue is to restrict the max
>> promoting/demoting
>> throughput.  It will take longer to finish the promoting/demoting.
>> But the workload latency will be better.  This is implemented in this
>> patchset as the page promotion rate limit mechanism.
>> The promotion hot threshold is workload and system configuration
>> dependent.  So in this patchset, a method to adjust the hot threshold
>> automatically is implemented.  The basic idea is to control the number
>> of the candidate promotion pages to match the promotion rate limit.
>> We used the pmbench memory accessing benchmark tested the patchset
>> on
>> a 2-socket server system with DRAM and PMEM installed.  The test
>> results are as follows,
>> 		pmbench score		promote rate
>> 		 (accesses/s)			MB/s
>> 		-------------		------------
>> base		  146887704.1		       725.6
>> hot selection     165695601.2		       544.0
>> rate limit	  162814569.8		       165.2
>> auto adjustment	  170495294.0                  136.9
>>  From the results above,
>> With hot page selection patch [1/3], the pmbench score increases
>> about
>> 12.8%, and promote rate (overhead) decreases about 25.0%, compared with
>> base kernel.
>> With rate limit patch [2/3], pmbench score decreases about 1.7%, and
>> promote rate decreases about 69.6%, compared with hot page selection
>> patch.
>> With threshold auto adjustment patch [3/3], pmbench score increases
>> about 4.7%, and promote rate decrease about 17.1%, compared with rate
>> limit patch.
>
> I did a simple testing with mysql on my machine which contains 1 DRAM
> node (30G) and 1 PMEM node (126G).
>
> sysbench /usr/share/sysbench/oltp_read_write.lua \
> ......
> --tables=200 \
> --table-size=1000000 \
> --report-interval=10 \
> --threads=16 \
> --time=120
>
> The tps can be improved about 5% from below data, and I think this is
> a good start to optimize the promotion. So for this series, please
> feel free to add:
>
> Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>
> Without this patchset:
>  transactions:                        2080188 (3466.48 per sec.)
>
> With this patch set:
>  transactions:                        2174296 (3623.40 per sec.)

Thanks a lot!

Best Regards,
Huang, Ying

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2022-06-20  3:24 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-14  8:16 [PATCH -V3 0/3] memory tiering: hot page selection Huang Ying
2022-06-14  8:16 ` [PATCH -V3 1/3] memory tiering: hot page selection with hint page fault latency Huang Ying
2022-06-14  8:16 ` [PATCH -V3 2/3] memory tiering: rate limit NUMA migration throughput Huang Ying
2022-06-14  8:16 ` [PATCH -V3 3/3] memory tiering: adjust hot threshold automatically Huang Ying
2022-06-14 15:30 ` [PATCH -V3 0/3] memory tiering: hot page selection Johannes Weiner
2022-06-15  3:47   ` Ying Huang
2022-06-20  3:19 ` Baolin Wang
2022-06-20  3:24   ` Huang, Ying
