* [PATCH -V2 01/10] mm, pcp: avoid to drain PCP when process exit
From: Huang Ying @ 2023-09-26 6:09 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Arjan Van De Ven, Huang Ying, Mel Gorman,
Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen,
Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
In commit f26b3fa04611 ("mm/page_alloc: limit number of high-order
pages on PCP during bulk free"), the PCP (Per-CPU Pageset) is drained
when the PCP is mostly used for freeing high-order pages, to improve
the reuse of cache-hot pages between the page allocating and freeing
CPUs.
But the PCP draining mechanism may be triggered unexpectedly when a
process exits. With some customized trace points, it was found that
PCP draining (free_high == true) was triggered by order-1 page
freeing with the following call stack:
=> free_unref_page_commit
=> free_unref_page
=> __mmdrop
=> exit_mm
=> do_exit
=> do_group_exit
=> __x64_sys_exit_group
=> do_syscall_64
Checking the source code, this is the page table PGD
freeing (mm_free_pgd()). It is an order-1 page freeing if
CONFIG_PAGE_TABLE_ISOLATION=y, which is a common configuration for
security.
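For reference, the order comes from the PGD allocation order on x86,
roughly as below (paraphrased from the x86 pgalloc headers; details
may differ between kernel versions):

	/*
	 * With page table isolation the PGD needs two 4K pages: one for
	 * the kernel half and one for the user (shadow) half, hence
	 * order-1.
	 */
	#ifdef CONFIG_PAGE_TABLE_ISOLATION
	#define PGD_ALLOCATION_ORDER 1
	#else
	#define PGD_ALLOCATION_ORDER 0
	#endif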
Just before that, page freeing with the following call stack was
found:
=> free_unref_page_commit
=> free_unref_page_list
=> release_pages
=> tlb_batch_pages_flush
=> tlb_finish_mmu
=> exit_mmap
=> __mmput
=> exit_mm
=> do_exit
=> do_group_exit
=> __x64_sys_exit_group
=> do_syscall_64
So, when a process exits,
- a large number of user pages of the process will be freed without
page allocation, so it is highly possible that pcp->free_factor
becomes > 0;
- after all user pages have been freed, the PGD will be freed, which
is an order-1 page freeing, so the PCP will be drained.
All in all, when a process exits, it is highly possible that the PCP
will be drained. This is unexpected behavior.
To avoid this, in this patch, PCP draining will only be triggered for
2 consecutive high-order page freeings. That is, a single high-order
free (such as the PGD free at process exit) no longer drains the PCP
by itself; the previous free must also have been a high-order free.
On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild
instances in parallel (each with `make -j 28`) in 8 cgroups. This
simulates the kbuild server that is used by the 0-Day kbuild service.
With the patch, the cycles% of the spinlock contention (mostly for
zone lock) decreases from 13.5% to 10.6% (with PCP size == 361). The
number of PCP drainings for high-order page freeing (free_high)
decreases by 80.8%.
This helps network workloads too, via reduced zone lock contention.
On a 2-socket Intel server with 128 logical CPUs, with the patch, the
network bandwidth of the UNIX (AF_UNIX) test case of the lmbench test
suite with 16-pair processes increases by 17.1%. The cycles% of the
spinlock contention (mostly for zone lock) decreases from 50.0% to
45.8%. The number of PCP drainings for high-order page
freeing (free_high) decreases by 27.4%. The cache miss rate stays at
0.3%.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
include/linux/mmzone.h | 5 ++++-
mm/page_alloc.c | 11 ++++++++---
2 files changed, 12 insertions(+), 4 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4106fbc5b4b3..64d5ed2bb724 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -676,12 +676,15 @@ enum zone_watermarks {
#define high_wmark_pages(z) (z->_watermark[WMARK_HIGH] + z->watermark_boost)
#define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost)
+#define PCPF_PREV_FREE_HIGH_ORDER 0x01
+
struct per_cpu_pages {
spinlock_t lock; /* Protects lists field */
int count; /* number of pages in the list */
int high; /* high watermark, emptying needed */
int batch; /* chunk size for buddy add/remove */
- short free_factor; /* batch scaling factor during free */
+ u8 flags; /* protected by pcp->lock */
+ u8 free_factor; /* batch scaling factor during free */
#ifdef CONFIG_NUMA
short expire; /* When 0, remote pagesets are drained */
#endif
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 95546f376302..295e61f0c49d 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2370,7 +2370,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
{
int high;
int pindex;
- bool free_high;
+ bool free_high = false;
__count_vm_events(PGFREE, 1 << order);
pindex = order_to_pindex(migratetype, order);
@@ -2383,8 +2383,13 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
* freeing without allocation. The remainder after bulk freeing
* stops will be drained from vmstat refresh context.
*/
- free_high = (pcp->free_factor && order && order <= PAGE_ALLOC_COSTLY_ORDER);
-
+ if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
+ free_high = (pcp->free_factor &&
+ (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER));
+ pcp->flags |= PCPF_PREV_FREE_HIGH_ORDER;
+ } else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
+ pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
+ }
high = nr_pcp_high(pcp, zone, free_high);
if (pcp->count >= high) {
free_pcppages_bulk(zone, nr_pcp_free(pcp, high, free_high), pcp, pindex);
--
2.39.2
* [PATCH -V2 02/10] cacheinfo: calculate per-CPU data cache size
From: Huang Ying @ 2023-09-26 6:09 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Arjan Van De Ven, Huang Ying,
Sudeep Holla, Mel Gorman, Vlastimil Babka, David Hildenbrand,
Johannes Weiner, Dave Hansen, Michal Hocko, Pavel Tatashin,
Matthew Wilcox, Christoph Lameter
Per-CPU data cache size is useful information. For example, it can be
used to decide how many pages to cache in the PCP (Per-CPU Pageset)
before draining, as in the next patch in this series. So, in this
patch, the data cache size for each CPU is calculated as the sum of
data_cache_size / shared_cpu_weight over the CPU's data and unified
cache leaves.
A brute-force algorithm that iterates over all online CPUs is used,
to avoid allocating an extra cpumask, especially in the CPU offline
callback.
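As a worked example with hypothetical cache sizes: a CPU with a 48KB
private L1 data cache, a 2MB L2 shared by 2 SMT siblings, and a 96MB
L3 shared by 64 CPUs gets size_data = 48KB + 2MB/2 + 96MB/64 = 48KB +
1MB + 1.5MB, that is, about 2.5MB. Instruction-only cache leaves are
skipped.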
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Sudeep Holla <sudeep.holla@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
drivers/base/cacheinfo.c | 42 ++++++++++++++++++++++++++++++++++++++-
include/linux/cacheinfo.h | 1 +
2 files changed, 42 insertions(+), 1 deletion(-)
diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
index cbae8be1fe52..3e8951a3fbab 100644
--- a/drivers/base/cacheinfo.c
+++ b/drivers/base/cacheinfo.c
@@ -898,6 +898,41 @@ static int cache_add_dev(unsigned int cpu)
return rc;
}
+static void update_data_cache_size_cpu(unsigned int cpu)
+{
+ struct cpu_cacheinfo *ci;
+ struct cacheinfo *leaf;
+ unsigned int i, nr_shared;
+ unsigned int size_data = 0;
+
+ if (!per_cpu_cacheinfo(cpu))
+ return;
+
+ ci = ci_cacheinfo(cpu);
+ for (i = 0; i < cache_leaves(cpu); i++) {
+ leaf = per_cpu_cacheinfo_idx(cpu, i);
+ if (leaf->type != CACHE_TYPE_DATA &&
+ leaf->type != CACHE_TYPE_UNIFIED)
+ continue;
+ nr_shared = cpumask_weight(&leaf->shared_cpu_map);
+ if (!nr_shared)
+ continue;
+ size_data += leaf->size / nr_shared;
+ }
+ ci->size_data = size_data;
+}
+
+static void update_data_cache_size(bool cpu_online, unsigned int cpu)
+{
+ unsigned int icpu;
+
+ for_each_online_cpu(icpu) {
+ if (!cpu_online && icpu == cpu)
+ continue;
+ update_data_cache_size_cpu(icpu);
+ }
+}
+
static int cacheinfo_cpu_online(unsigned int cpu)
{
int rc = detect_cache_attributes(cpu);
@@ -906,7 +941,11 @@ static int cacheinfo_cpu_online(unsigned int cpu)
return rc;
rc = cache_add_dev(cpu);
if (rc)
- free_cache_attributes(cpu);
+ goto err;
+ update_data_cache_size(true, cpu);
+ return 0;
+err:
+ free_cache_attributes(cpu);
return rc;
}
@@ -916,6 +955,7 @@ static int cacheinfo_cpu_pre_down(unsigned int cpu)
cpu_cache_sysfs_exit(cpu);
free_cache_attributes(cpu);
+ update_data_cache_size(false, cpu);
return 0;
}
diff --git a/include/linux/cacheinfo.h b/include/linux/cacheinfo.h
index a5cfd44fab45..4e7ccfa0c36d 100644
--- a/include/linux/cacheinfo.h
+++ b/include/linux/cacheinfo.h
@@ -73,6 +73,7 @@ struct cacheinfo {
struct cpu_cacheinfo {
struct cacheinfo *info_list;
+ unsigned int size_data;
unsigned int num_levels;
unsigned int num_leaves;
bool cpu_map_populated;
--
2.39.2
* [PATCH -V2 03/10] mm, pcp: reduce lock contention for draining high-order pages
From: Huang Ying @ 2023-09-26 6:09 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Arjan Van De Ven, Huang Ying, Mel Gorman,
Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen,
Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
In commit f26b3fa04611 ("mm/page_alloc: limit number of high-order
pages on PCP during bulk free"), the PCP (Per-CPU Pageset) is drained
when the PCP is mostly used for freeing high-order pages, to improve
the reuse of cache-hot pages between the page allocating and freeing
CPUs.
On a system with a small per-CPU data cache, pages shouldn't be
cached before draining, to guarantee that the reused pages are still
cache-hot. But on a system with a large per-CPU data cache, more
pages can be cached before draining to reduce zone lock contention.
So, in this patch, instead of draining without any caching, "batch"
pages will be kept in the PCP before draining if the per-CPU data
cache size is larger than "4 * batch" pages.
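For scale (assuming 4KB pages and the default batch of 63), "4 *
batch" corresponds to roughly 1MB (4 * 63 * 4KB), so the extra
caching only kicks in on CPUs whose share of the data cache is larger
than about 1MB, and at most "batch" pages (roughly 252KB) are kept in
the PCP before draining.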
On a 2-socket Intel server with 128 logical CPUs, with the patch, the
network bandwidth of the UNIX (AF_UNIX) test case of the lmbench test
suite with 16-pair processes increases by 72.2%. The cycles% of the
spinlock contention (mostly for zone lock) decreases from 45.8% to
21.2%. The number of PCP drainings for high-order page
freeing (free_high) decreases by 89.8%. The cache miss rate stays at
0.3%.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
drivers/base/cacheinfo.c | 2 ++
include/linux/gfp.h | 1 +
include/linux/mmzone.h | 1 +
mm/page_alloc.c | 37 ++++++++++++++++++++++++++++++++++++-
4 files changed, 40 insertions(+), 1 deletion(-)
diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
index 3e8951a3fbab..a55b2f83958b 100644
--- a/drivers/base/cacheinfo.c
+++ b/drivers/base/cacheinfo.c
@@ -943,6 +943,7 @@ static int cacheinfo_cpu_online(unsigned int cpu)
if (rc)
goto err;
update_data_cache_size(true, cpu);
+ setup_pcp_cacheinfo();
return 0;
err:
free_cache_attributes(cpu);
@@ -956,6 +957,7 @@ static int cacheinfo_cpu_pre_down(unsigned int cpu)
free_cache_attributes(cpu);
update_data_cache_size(false, cpu);
+ setup_pcp_cacheinfo();
return 0;
}
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 665f06675c83..665edc11fb9f 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -325,6 +325,7 @@ void drain_all_pages(struct zone *zone);
void drain_local_pages(struct zone *zone);
void page_alloc_init_late(void);
+void setup_pcp_cacheinfo(void);
/*
* gfp_allowed_mask is set to GFP_BOOT_MASK during early boot to restrict what
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 64d5ed2bb724..4132e7490b49 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -677,6 +677,7 @@ enum zone_watermarks {
#define wmark_pages(z, i) (z->_watermark[i] + z->watermark_boost)
#define PCPF_PREV_FREE_HIGH_ORDER 0x01
+#define PCPF_FREE_HIGH_BATCH 0x02
struct per_cpu_pages {
spinlock_t lock; /* Protects lists field */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 295e61f0c49d..e97814985710 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -52,6 +52,7 @@
#include <linux/psi.h>
#include <linux/khugepaged.h>
#include <linux/delayacct.h>
+#include <linux/cacheinfo.h>
#include <asm/div64.h>
#include "internal.h"
#include "shuffle.h"
@@ -2385,7 +2386,9 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
*/
if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
free_high = (pcp->free_factor &&
- (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER));
+ (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) &&
+ (!(pcp->flags & PCPF_FREE_HIGH_BATCH) ||
+ pcp->count >= READ_ONCE(pcp->batch)));
pcp->flags |= PCPF_PREV_FREE_HIGH_ORDER;
} else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
@@ -5418,6 +5421,38 @@ static void zone_pcp_update(struct zone *zone, int cpu_online)
mutex_unlock(&pcp_batch_high_lock);
}
+static void zone_pcp_update_cacheinfo(struct zone *zone)
+{
+ int cpu;
+ struct per_cpu_pages *pcp;
+ struct cpu_cacheinfo *cci;
+
+ for_each_online_cpu(cpu) {
+ pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+ cci = get_cpu_cacheinfo(cpu);
+ /*
+ * If per-CPU data cache is large enough, up to
+ * "batch" high-order pages can be cached in PCP for
+ * consecutive freeing. This can reduce zone lock
+ * contention without hurting cache-hot pages sharing.
+ */
+ spin_lock(&pcp->lock);
+ if ((cci->size_data >> PAGE_SHIFT) > 4 * pcp->batch)
+ pcp->flags |= PCPF_FREE_HIGH_BATCH;
+ else
+ pcp->flags &= ~PCPF_FREE_HIGH_BATCH;
+ spin_unlock(&pcp->lock);
+ }
+}
+
+void setup_pcp_cacheinfo(void)
+{
+ struct zone *zone;
+
+ for_each_populated_zone(zone)
+ zone_pcp_update_cacheinfo(zone);
+}
+
/*
* Allocate per cpu pagesets and initialize them.
* Before this call only boot pagesets were available.
--
2.39.2
* [PATCH -V2 04/10] mm: restrict the pcp batch scale factor to avoid too long latency
From: Huang Ying @ 2023-09-26 6:09 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Arjan Van De Ven, Huang Ying, Mel Gorman,
Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen,
Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
In the page allocator, the PCP (Per-CPU Pageset) is refilled and
drained in batches to increase page allocation throughput, reduce the
page allocation/freeing latency per page, and reduce zone lock
contention. But a too large batch size will cause too long a maximal
allocation/freeing latency, which may punish arbitrary users. So the
default batch size is chosen carefully (in zone_batchsize(), the
value is 63 for zones > 1GB) to avoid that.
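For reference, a simplified sketch of how zone_batchsize() arrives at
63 (paraphrased from mm/page_alloc.c; the real function has more
comments and corner cases):

	int batch;

	/* ~0.1% of the zone, but at most 1MB worth of pages */
	batch = min(zone_managed_pages(zone) >> 10, SZ_1M / PAGE_SIZE);
	batch /= 4;
	if (batch < 1)
		batch = 1;
	/* clamp to a 2^n - 1 value to avoid cache aliasing issues */
	batch = rounddown_pow_of_two(batch + batch / 2) - 1;

For a zone larger than 1GB with 4KB pages this is 256 / 4 = 64, then
rounddown_pow_of_two(64 + 32) - 1 = 63.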
In commit 3b12e7e97938 ("mm/page_alloc: scale the number of pages
that are batch freed"), the batch size is scaled up when a large
number of pages are freed, to improve page freeing performance and
reduce zone lock contention. A similar optimization can be used when
allocating a large number of pages too.
To find a suitable max batch scale factor (that is, a max effective
batch size), some tests and measurements were done on several
machines, as follows.
A set of debug patches was implemented as follows,
- Set PCP high to 2 * batch to reduce the effect of PCP high.
- Disable free batch size scaling to get the raw performance.
- Extract the code that runs with the zone lock held from
rmqueue_bulk() and free_pcppages_bulk() into 2 separate functions, to
make it easy to measure the function run time with the ftrace
function_graph tracer.
- Hard code the batch size to 63 (default), 127, 255, 511, 1023,
2047, 4095.
Then, will-it-scale/page_fault1 is used to generate the page
allocation/freeing workload. The page allocation/freeing throughput
(page/s) is measured via will-it-scale. The page allocation/freeing
average latency (alloc/free latency avg, in us) and allocation/freeing
latency at the 99th percentile (alloc/free latency 99%, in us) are
measured with the ftrace function_graph tracer.
The test results are as follows,
Sapphire Rapids Server
======================
Batch  throughput  free latency  free latency  alloc latency  alloc latency
           page/s      avg / us      99% / us       avg / us       99% / us
-----  ----------  ------------  ------------  -------------  -------------
   63    513633.4          2.33          3.57           2.67           6.83
  127    517616.7          4.35          6.65           4.22          13.03
  255    520822.8          8.29         13.32           7.52          25.24
  511    524122.0         15.79         23.42          14.02          49.35
 1023    525980.5         30.25         44.19          25.36          94.88
 2047    526793.6         59.39         84.50          45.22         140.81
Ice Lake Server
===============
Batch  throughput  free latency  free latency  alloc latency  alloc latency
           page/s      avg / us      99% / us       avg / us       99% / us
-----  ----------  ------------  ------------  -------------  -------------
   63    620210.3          2.21          3.68           2.02           4.35
  127    627003.0          4.09          6.86           3.51           8.28
  255    630777.5          7.70         13.50           6.17          15.97
  511    633651.5         14.85         22.62          11.66          31.08
 1023    637071.1         28.55         42.02          20.81          54.36
 2047    638089.7         56.54         84.06          39.28          91.68
Cascade Lake Server
===================
Batch  throughput  free latency  free latency  alloc latency  alloc latency
           page/s      avg / us      99% / us       avg / us       99% / us
-----  ----------  ------------  ------------  -------------  -------------
   63    404706.7          3.29          5.03           3.53           4.75
  127    422475.2          6.12          9.09           6.36           8.76
  255    411522.2         11.68         16.97          10.90          16.39
  511    428124.1         22.54         31.28          19.86          32.25
 1023    414718.4         43.39         62.52          40.00          66.33
 2047    429848.7         86.64        120.34          71.14         106.08
Comet Lake Desktop
==================
Batch  throughput  free latency  free latency  alloc latency  alloc latency
           page/s      avg / us      99% / us       avg / us       99% / us
-----  ----------  ------------  ------------  -------------  -------------
   63   795183.13          2.18          3.55           2.03           3.05
  127   803067.85          3.91          6.56           3.85           5.52
  255   812771.10          7.35         10.80           7.14          10.20
  511   817723.48         14.17         27.54          13.43          30.31
 1023   818870.19         27.72         40.10          27.89          46.28
Coffee Lake Desktop
===================
Batch  throughput  free latency  free latency  alloc latency  alloc latency
           page/s      avg / us      99% / us       avg / us       99% / us
-----  ----------  ------------  ------------  -------------  -------------
   63    510542.8          3.13          4.40           2.48           3.43
  127    514288.6          5.97          7.89           4.65           6.04
  255    516889.7         11.86         15.58           8.96          12.55
  511    519802.4         23.10         28.81          16.95          26.19
 1023    520802.7         45.30         52.51          33.19          45.95
 2047    519997.1         90.63        104.00          65.26          81.74
From the above data, to restrict the allocation/freeing latency to
less than 100 us in most cases, the max batch scale factor needs to
be less than or equal to 5. With the default batch of 63, that is an
effective batch size of at most 63 << 5 = 2016 pages, which roughly
matches the 2047 rows above.
So, in this patch, the batch scale factor is restricted to be less
than or equal to 5.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
mm/page_alloc.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e97814985710..4b601f505401 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -86,6 +86,9 @@ typedef int __bitwise fpi_t;
*/
#define FPI_TO_TAIL ((__force fpi_t)BIT(1))
+/* Maximum PCP batch scale factor to restrict max allocation/freeing latency */
+#define PCP_BATCH_SCALE_MAX 5
+
/* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
static DEFINE_MUTEX(pcp_batch_high_lock);
#define MIN_PERCPU_PAGELIST_HIGH_FRACTION (8)
@@ -2340,7 +2343,7 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
* freeing of pages without any allocation.
*/
batch <<= pcp->free_factor;
- if (batch < max_nr_free)
+ if (batch < max_nr_free && pcp->free_factor < PCP_BATCH_SCALE_MAX)
pcp->free_factor++;
batch = clamp(batch, min_nr_free, max_nr_free);
--
2.39.2
* [PATCH -V2 05/10] mm, page_alloc: scale the number of pages that are batch allocated
From: Huang Ying @ 2023-09-26 6:09 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Arjan Van De Ven, Huang Ying, Mel Gorman,
Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen,
Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
When a task is allocating a large number of order-0 pages, it may
acquire the zone->lock multiple times, allocating pages in batches.
This may unnecessarily contend on the zone lock when allocating a
very large number of pages. This patch adapts the size of the batch
based on the recent allocation pattern, scaling up the batch size for
subsequent allocations.
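For example (with the default batch of 63 and the PCP_BATCH_SCALE_MAX
of 5 from the previous patch), consecutive order-0 refills without an
intervening free would grab roughly 63, 126, 252, 504, 1008 and
finally 2016 pages per zone->lock acquisition, still capped by the
free space below pcp->high, while every page freeing halves
pcp->alloc_factor again.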
On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild
instances in parallel (each with `make -j 28`) in 8 cgroups. This
simulates the kbuild server that is used by the 0-Day kbuild service.
With the patch, the cycles% of the spinlock contention (mostly for
zone lock) decreases from 11.7% to 10.0% (with PCP size == 361).
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Mel Gorman <mgorman@techsingularity.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
include/linux/mmzone.h | 3 ++-
mm/page_alloc.c | 52 ++++++++++++++++++++++++++++++++++--------
2 files changed, 44 insertions(+), 11 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4132e7490b49..4f7420e35fbb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -685,9 +685,10 @@ struct per_cpu_pages {
int high; /* high watermark, emptying needed */
int batch; /* chunk size for buddy add/remove */
u8 flags; /* protected by pcp->lock */
+ u8 alloc_factor; /* batch scaling factor during allocate */
u8 free_factor; /* batch scaling factor during free */
#ifdef CONFIG_NUMA
- short expire; /* When 0, remote pagesets are drained */
+ u8 expire; /* When 0, remote pagesets are drained */
#endif
/* Lists of pages, one per migrate type stored on the pcp-lists */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4b601f505401..b9226845abf7 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2376,6 +2376,12 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
int pindex;
bool free_high = false;
+ /*
+ * On freeing, reduce the number of pages that are batch allocated.
+ * See nr_pcp_alloc() where alloc_factor is increased for subsequent
+ * allocations.
+ */
+ pcp->alloc_factor >>= 1;
__count_vm_events(PGFREE, 1 << order);
pindex = order_to_pindex(migratetype, order);
list_add(&page->pcp_list, &pcp->lists[pindex]);
@@ -2682,6 +2688,41 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
return page;
}
+static int nr_pcp_alloc(struct per_cpu_pages *pcp, int order)
+{
+ int high, batch, max_nr_alloc;
+
+ high = READ_ONCE(pcp->high);
+ batch = READ_ONCE(pcp->batch);
+
+ /* Check for PCP disabled or boot pageset */
+ if (unlikely(high < batch))
+ return 1;
+
+ /*
+ * Double the number of pages allocated each time there is subsequent
+ * refiling of order-0 pages without drain.
+ */
+ if (!order) {
+ max_nr_alloc = max(high - pcp->count - batch, batch);
+ batch <<= pcp->alloc_factor;
+ if (batch <= max_nr_alloc && pcp->alloc_factor < PCP_BATCH_SCALE_MAX)
+ pcp->alloc_factor++;
+ batch = min(batch, max_nr_alloc);
+ }
+
+ /*
+ * Scale batch relative to order if batch implies free pages
+ * can be stored on the PCP. Batch can be 1 for small zones or
+ * for boot pagesets which should never store free pages as
+ * the pages may belong to arbitrary zones.
+ */
+ if (batch > 1)
+ batch = max(batch >> order, 2);
+
+ return batch;
+}
+
/* Remove page from the per-cpu list, caller must protect the list */
static inline
struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
@@ -2694,18 +2735,9 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
do {
if (list_empty(list)) {
- int batch = READ_ONCE(pcp->batch);
+ int batch = nr_pcp_alloc(pcp, order);
int alloced;
- /*
- * Scale batch relative to order if batch implies
- * free pages can be stored on the PCP. Batch can
- * be 1 for small zones or for boot pagesets which
- * should never store free pages as the pages may
- * belong to arbitrary zones.
- */
- if (batch > 1)
- batch = max(batch >> order, 2);
alloced = rmqueue_bulk(zone, order,
batch, list,
migratetype, alloc_flags);
--
2.39.2
* [PATCH -V2 06/10] mm: add framework for PCP high auto-tuning
From: Huang Ying @ 2023-09-26 6:09 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Arjan Van De Ven, Huang Ying, Mel Gorman,
Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen,
Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
The page allocation performance requirements of different workloads
are usually different. So, we need to tune the PCP (per-CPU pageset)
high to optimize the workload page allocation performance. Now, we
have a system-wide sysctl knob (percpu_pagelist_high_fraction) to
tune PCP high by hand. But it's hard to find the best value by hand,
and one global configuration may not work best for the different
workloads that run on the same system. One solution to these issues
is to tune the PCP high of each CPU automatically.
This patch adds the framework for PCP high auto-tuning. With it, the
pcp->high of each CPU will be changed automatically by the tuning
algorithm at runtime. The minimal high (pcp->high_min) is the
original PCP high value, calculated based on the low watermark pages,
while the maximal high (pcp->high_max) is the PCP high value when the
percpu_pagelist_high_fraction sysctl knob is set to
MIN_PERCPU_PAGELIST_HIGH_FRACTION, that is, the maximal pcp->high
that can be set by hand via the sysctl knob.
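As a hypothetical example, ignoring rounding: for a 64GB zone (16M
base pages) with 32 local online CPUs, pcp->high_max is roughly 16M /
8 / 32 = 65536 pages (256MB) per CPU, while pcp->high_min stays at
the original value derived from the zone's low watermark split across
the local CPUs.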
It's possible that PCP high auto-tuning doesn't work well for some
workloads. So, when PCP high is tuned by hand via the sysctl knob,
the auto-tuning will be disabled. The PCP high set by hand will be
used instead.
This patch only adds the framework, so pcp->high will always be set
to pcp->high_min (the original default). The actual auto-tuning
algorithm will be added in the following patches in the series.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
Documentation/admin-guide/sysctl/vm.rst | 12 +++--
include/linux/mmzone.h | 5 +-
mm/page_alloc.c | 71 ++++++++++++++++---------
3 files changed, 58 insertions(+), 30 deletions(-)
diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 45ba1f4dc004..7386366fe114 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -843,10 +843,14 @@ each zone between per-cpu lists.
The batch value of each per-cpu page list remains the same regardless of
the value of the high fraction so allocation latencies are unaffected.
-The initial value is zero. Kernel uses this value to set the high pcp->high
-mark based on the low watermark for the zone and the number of local
-online CPUs. If the user writes '0' to this sysctl, it will revert to
-this default behavior.
+The initial value is zero. With this value, kernel will tune pcp->high
+automatically according to the requirements of workloads. The lower
+limit of tuning is based on the low watermark for the zone and the
+number of local online CPUs. The upper limit is the page number when
+the sysctl is set to the minimal value (8). If the user writes '0' to
+this sysctl, it will revert to this default behavior. In other
+words, if the user writes any other value, the auto-tuning will be
+disabled and the user specified pcp->high will be used.
stat_interval
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 4f7420e35fbb..d6cfb5023f3e 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -683,6 +683,8 @@ struct per_cpu_pages {
spinlock_t lock; /* Protects lists field */
int count; /* number of pages in the list */
int high; /* high watermark, emptying needed */
+ int high_min; /* min high watermark */
+ int high_max; /* max high watermark */
int batch; /* chunk size for buddy add/remove */
u8 flags; /* protected by pcp->lock */
u8 alloc_factor; /* batch scaling factor during allocate */
@@ -842,7 +844,8 @@ struct zone {
* the high and batch values are copied to individual pagesets for
* faster access
*/
- int pageset_high;
+ int pageset_high_min;
+ int pageset_high_max;
int pageset_batch;
#ifndef CONFIG_SPARSEMEM
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b9226845abf7..df07580dbd53 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2353,7 +2353,7 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
bool free_high)
{
- int high = READ_ONCE(pcp->high);
+ int high = READ_ONCE(pcp->high_min);
if (unlikely(!high || free_high))
return 0;
@@ -2692,7 +2692,7 @@ static int nr_pcp_alloc(struct per_cpu_pages *pcp, int order)
{
int high, batch, max_nr_alloc;
- high = READ_ONCE(pcp->high);
+ high = READ_ONCE(pcp->high_min);
batch = READ_ONCE(pcp->batch);
/* Check for PCP disabled or boot pageset */
@@ -5298,14 +5298,15 @@ static int zone_batchsize(struct zone *zone)
}
static int percpu_pagelist_high_fraction;
-static int zone_highsize(struct zone *zone, int batch, int cpu_online)
+static int zone_highsize(struct zone *zone, int batch, int cpu_online,
+ int high_fraction)
{
#ifdef CONFIG_MMU
int high;
int nr_split_cpus;
unsigned long total_pages;
- if (!percpu_pagelist_high_fraction) {
+ if (!high_fraction) {
/*
* By default, the high value of the pcp is based on the zone
* low watermark so that if they are full then background
@@ -5318,15 +5319,15 @@ static int zone_highsize(struct zone *zone, int batch, int cpu_online)
* value is based on a fraction of the managed pages in the
* zone.
*/
- total_pages = zone_managed_pages(zone) / percpu_pagelist_high_fraction;
+ total_pages = zone_managed_pages(zone) / high_fraction;
}
/*
* Split the high value across all online CPUs local to the zone. Note
* that early in boot that CPUs may not be online yet and that during
* CPU hotplug that the cpumask is not yet updated when a CPU is being
- * onlined. For memory nodes that have no CPUs, split pcp->high across
- * all online CPUs to mitigate the risk that reclaim is triggered
+ * onlined. For memory nodes that have no CPUs, split the high value
+ * across all online CPUs to mitigate the risk that reclaim is triggered
* prematurely due to pages stored on pcp lists.
*/
nr_split_cpus = cpumask_weight(cpumask_of_node(zone_to_nid(zone))) + cpu_online;
@@ -5354,19 +5355,21 @@ static int zone_highsize(struct zone *zone, int batch, int cpu_online)
* However, guaranteeing these relations at all times would require e.g. write
* barriers here but also careful usage of read barriers at the read side, and
* thus be prone to error and bad for performance. Thus the update only prevents
- * store tearing. Any new users of pcp->batch and pcp->high should ensure they
- * can cope with those fields changing asynchronously, and fully trust only the
- * pcp->count field on the local CPU with interrupts disabled.
+ * store tearing. Any new users of pcp->batch, pcp->high_min and pcp->high_max
+ * should ensure they can cope with those fields changing asynchronously, and
+ * fully trust only the pcp->count field on the local CPU with interrupts
+ * disabled.
*
* mutex_is_locked(&pcp_batch_high_lock) required when calling this function
* outside of boot time (or some other assurance that no concurrent updaters
* exist).
*/
-static void pageset_update(struct per_cpu_pages *pcp, unsigned long high,
- unsigned long batch)
+static void pageset_update(struct per_cpu_pages *pcp, unsigned long high_min,
+ unsigned long high_max, unsigned long batch)
{
WRITE_ONCE(pcp->batch, batch);
- WRITE_ONCE(pcp->high, high);
+ WRITE_ONCE(pcp->high_min, high_min);
+ WRITE_ONCE(pcp->high_max, high_max);
}
static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonestat *pzstats)
@@ -5386,20 +5389,21 @@ static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonesta
* need to be as careful as pageset_update() as nobody can access the
* pageset yet.
*/
- pcp->high = BOOT_PAGESET_HIGH;
+ pcp->high_min = BOOT_PAGESET_HIGH;
+ pcp->high_max = BOOT_PAGESET_HIGH;
pcp->batch = BOOT_PAGESET_BATCH;
pcp->free_factor = 0;
}
-static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high,
- unsigned long batch)
+static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high_min,
+ unsigned long high_max, unsigned long batch)
{
struct per_cpu_pages *pcp;
int cpu;
for_each_possible_cpu(cpu) {
pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
- pageset_update(pcp, high, batch);
+ pageset_update(pcp, high_min, high_max, batch);
}
}
@@ -5409,19 +5413,34 @@ static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long h
*/
static void zone_set_pageset_high_and_batch(struct zone *zone, int cpu_online)
{
- int new_high, new_batch;
+ int new_high_min, new_high_max, new_batch;
new_batch = max(1, zone_batchsize(zone));
- new_high = zone_highsize(zone, new_batch, cpu_online);
+ if (percpu_pagelist_high_fraction) {
+ new_high_min = zone_highsize(zone, new_batch, cpu_online,
+ percpu_pagelist_high_fraction);
+ /*
+ * PCP high is tuned manually, disable auto-tuning via
+ * setting high_min and high_max to the manual value.
+ */
+ new_high_max = new_high_min;
+ } else {
+ new_high_min = zone_highsize(zone, new_batch, cpu_online, 0);
+ new_high_max = zone_highsize(zone, new_batch, cpu_online,
+ MIN_PERCPU_PAGELIST_HIGH_FRACTION);
+ }
- if (zone->pageset_high == new_high &&
+ if (zone->pageset_high_min == new_high_min &&
+ zone->pageset_high_max == new_high_max &&
zone->pageset_batch == new_batch)
return;
- zone->pageset_high = new_high;
+ zone->pageset_high_min = new_high_min;
+ zone->pageset_high_max = new_high_max;
zone->pageset_batch = new_batch;
- __zone_set_pageset_high_and_batch(zone, new_high, new_batch);
+ __zone_set_pageset_high_and_batch(zone, new_high_min, new_high_max,
+ new_batch);
}
void __meminit setup_zone_pageset(struct zone *zone)
@@ -5529,7 +5548,8 @@ __meminit void zone_pcp_init(struct zone *zone)
*/
zone->per_cpu_pageset = &boot_pageset;
zone->per_cpu_zonestats = &boot_zonestats;
- zone->pageset_high = BOOT_PAGESET_HIGH;
+ zone->pageset_high_min = BOOT_PAGESET_HIGH;
+ zone->pageset_high_max = BOOT_PAGESET_HIGH;
zone->pageset_batch = BOOT_PAGESET_BATCH;
if (populated_zone(zone))
@@ -6431,13 +6451,14 @@ EXPORT_SYMBOL(free_contig_range);
void zone_pcp_disable(struct zone *zone)
{
mutex_lock(&pcp_batch_high_lock);
- __zone_set_pageset_high_and_batch(zone, 0, 1);
+ __zone_set_pageset_high_and_batch(zone, 0, 0, 1);
__drain_all_pages(zone, true);
}
void zone_pcp_enable(struct zone *zone)
{
- __zone_set_pageset_high_and_batch(zone, zone->pageset_high, zone->pageset_batch);
+ __zone_set_pageset_high_and_batch(zone, zone->pageset_high_min,
+ zone->pageset_high_max, zone->pageset_batch);
mutex_unlock(&pcp_batch_high_lock);
}
--
2.39.2
* [PATCH -V2 07/10] mm: tune PCP high automatically
From: Huang Ying @ 2023-09-26 6:09 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Arjan Van De Ven, Huang Ying, Mel Gorman,
Michal Hocko, Vlastimil Babka, David Hildenbrand,
Johannes Weiner, Dave Hansen, Pavel Tatashin, Matthew Wilcox,
Christoph Lameter
The targets of tuning PCP high automatically are as follows,
- Minimize allocation/freeing from/to the shared zone
- Minimize idle pages in PCP
- Minimize pages in PCP if the system free pages are too few
To reach these targets, the following tuning algorithm is designed,
- When we refill PCP via allocating from the zone, increase PCP high,
because if we had a larger PCP, we could have avoided allocating from
the zone.
- In the periodic vmstat updating kworker (via
refresh_cpu_vm_stats()), decrease PCP high to try to free possible
idle PCP pages.
- When page reclaiming is active for the zone, stop increasing PCP
high in the allocating path, and decrease PCP high and free some
pages in the freeing path.
So, the PCP high can be tuned to the page allocating/freeing depth of
workloads eventually.
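Roughly, the three actions above map to the following places in this
patch (a simplified sketch extracted from the diff below, not the
exact code):

	/* allocating path, PCP refill (nr_pcp_alloc()) */
	if (high_min != high_max &&
	    !test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
		pcp->high = min(pcp->high + batch, high_max);

	/* periodic vmstat kworker (decay_pcp_high()) */
	pcp->high = max3(pcp->count - (batch << PCP_BATCH_SCALE_MAX),
			 pcp->high * 4 / 5, high_min);
	/* then pcp->count - pcp->high pages are freed back to the zone */

	/* freeing path while reclaim is active (nr_pcp_high()) */
	pcp->high = max(pcp->high - (batch << pcp->free_factor), high_min);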
One issue with the algorithm is that if the number of pages allocated
is much larger than the number of pages freed on a CPU, the PCP high
may reach the maximal value even if the allocating/freeing depth is
small. But this isn't a severe issue, because there are no idle
pages in this case.
An alternative choice is to increase PCP high when we drain the PCP
by trying to free pages to the zone, but not to increase PCP high
during PCP refilling. This can avoid the issue above. But if the
number of pages allocated is much smaller than the number of pages
freed on a CPU, there will be many idle pages in the PCP and it may
be hard to free these idle pages.
On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild
instances in parallel (each with `make -j 28`) in 8 cgroups. This
simulates the kbuild server that is used by the 0-Day kbuild service.
With the patch, the build time decreases by 3.6%. The cycles% of the
spinlock contention (mostly for zone lock) decreases from 10.0% to
0.7% (with PCP size == 361). The number of PCP drainings for
high-order page freeing (free_high) decreases by 63.4%. The number
of pages allocated from the zone (instead of from the PCP) decreases
by 80.4%.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Mel Gorman <mgorman@techsingularity.net>
Suggested-by: Michal Hocko <mhocko@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
include/linux/gfp.h | 1 +
mm/page_alloc.c | 118 ++++++++++++++++++++++++++++++++++----------
mm/vmstat.c | 8 +--
3 files changed, 98 insertions(+), 29 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 665edc11fb9f..5b917e5b9350 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -320,6 +320,7 @@ extern void page_frag_free(void *addr);
#define free_page(addr) free_pages((addr), 0)
void page_alloc_init_cpuhp(void);
+int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp);
void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp);
void drain_all_pages(struct zone *zone);
void drain_local_pages(struct zone *zone);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df07580dbd53..0d482a55235b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2160,6 +2160,40 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
return i;
}
+/*
+ * Called from the vmstat counter updater to decay the PCP high.
+ * Return whether there are addition works to do.
+ */
+int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
+{
+ int high_min, to_drain, batch;
+ int todo = 0;
+
+ high_min = READ_ONCE(pcp->high_min);
+ batch = READ_ONCE(pcp->batch);
+ /*
+ * Decrease pcp->high periodically to try to free possible
+ * idle PCP pages. And, avoid to free too many pages to
+ * control latency.
+ */
+ if (pcp->high > high_min) {
+ pcp->high = max3(pcp->count - (batch << PCP_BATCH_SCALE_MAX),
+ pcp->high * 4 / 5, high_min);
+ if (pcp->high > high_min)
+ todo++;
+ }
+
+ to_drain = pcp->count - pcp->high;
+ if (to_drain > 0) {
+ spin_lock(&pcp->lock);
+ free_pcppages_bulk(zone, to_drain, pcp, 0);
+ spin_unlock(&pcp->lock);
+ todo++;
+ }
+
+ return todo;
+}
+
#ifdef CONFIG_NUMA
/*
* Called from the vmstat counter updater to drain pagesets of this
@@ -2321,14 +2355,13 @@ static bool free_unref_page_prepare(struct page *page, unsigned long pfn,
return true;
}
-static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
+static int nr_pcp_free(struct per_cpu_pages *pcp, int batch, int high, bool free_high)
{
int min_nr_free, max_nr_free;
- int batch = READ_ONCE(pcp->batch);
- /* Free everything if batch freeing high-order pages. */
+ /* Free as much as possible if batch freeing high-order pages. */
if (unlikely(free_high))
- return pcp->count;
+ return min(pcp->count, batch << PCP_BATCH_SCALE_MAX);
/* Check for PCP disabled or boot pageset */
if (unlikely(high < batch))
@@ -2343,7 +2376,7 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
* freeing of pages without any allocation.
*/
batch <<= pcp->free_factor;
- if (batch < max_nr_free && pcp->free_factor < PCP_BATCH_SCALE_MAX)
+ if (batch <= max_nr_free && pcp->free_factor < PCP_BATCH_SCALE_MAX)
pcp->free_factor++;
batch = clamp(batch, min_nr_free, max_nr_free);
@@ -2351,28 +2384,47 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int high, bool free_high)
}
static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
- bool free_high)
+ int batch, bool free_high)
{
- int high = READ_ONCE(pcp->high_min);
+ int high, high_min, high_max;
- if (unlikely(!high || free_high))
+ high_min = READ_ONCE(pcp->high_min);
+ high_max = READ_ONCE(pcp->high_max);
+ high = pcp->high = clamp(pcp->high, high_min, high_max);
+
+ if (unlikely(!high))
return 0;
- if (!test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
- return high;
+ if (unlikely(free_high)) {
+ pcp->high = max(high - (batch << PCP_BATCH_SCALE_MAX), high_min);
+ return 0;
+ }
/*
* If reclaim is active, limit the number of pages that can be
* stored on pcp lists
*/
- return min(READ_ONCE(pcp->batch) << 2, high);
+ if (test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags)) {
+ pcp->high = max(high - (batch << pcp->free_factor), high_min);
+ return min(batch << 2, pcp->high);
+ }
+
+ if (pcp->count >= high && high_min != high_max) {
+ int need_high = (batch << pcp->free_factor) + batch;
+
+ /* pcp->high should be large enough to hold batch freed pages */
+ if (pcp->high < need_high)
+ pcp->high = clamp(need_high, high_min, high_max);
+ }
+
+ return high;
}
static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
struct page *page, int migratetype,
unsigned int order)
{
- int high;
+ int high, batch;
int pindex;
bool free_high = false;
@@ -2387,6 +2439,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
list_add(&page->pcp_list, &pcp->lists[pindex]);
pcp->count += 1 << order;
+ batch = READ_ONCE(pcp->batch);
/*
* As high-order pages other than THP's stored on PCP can contribute
* to fragmentation, limit the number stored when PCP is heavily
@@ -2397,14 +2450,15 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
free_high = (pcp->free_factor &&
(pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) &&
(!(pcp->flags & PCPF_FREE_HIGH_BATCH) ||
- pcp->count >= READ_ONCE(pcp->batch)));
+ pcp->count >= READ_ONCE(batch)));
pcp->flags |= PCPF_PREV_FREE_HIGH_ORDER;
} else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
}
- high = nr_pcp_high(pcp, zone, free_high);
+ high = nr_pcp_high(pcp, zone, batch, free_high);
if (pcp->count >= high) {
- free_pcppages_bulk(zone, nr_pcp_free(pcp, high, free_high), pcp, pindex);
+ free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
+ pcp, pindex);
}
}
@@ -2688,24 +2742,38 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
return page;
}
-static int nr_pcp_alloc(struct per_cpu_pages *pcp, int order)
+static int nr_pcp_alloc(struct per_cpu_pages *pcp, struct zone *zone, int order)
{
- int high, batch, max_nr_alloc;
+ int high, base_batch, batch, max_nr_alloc;
+ int high_max, high_min;
- high = READ_ONCE(pcp->high_min);
- batch = READ_ONCE(pcp->batch);
+ base_batch = READ_ONCE(pcp->batch);
+ high_min = READ_ONCE(pcp->high_min);
+ high_max = READ_ONCE(pcp->high_max);
+ high = pcp->high = clamp(pcp->high, high_min, high_max);
/* Check for PCP disabled or boot pageset */
- if (unlikely(high < batch))
+ if (unlikely(high < base_batch))
return 1;
+ if (order)
+ batch = base_batch;
+ else
+ batch = (base_batch << pcp->alloc_factor);
+
/*
- * Double the number of pages allocated each time there is subsequent
- * refiling of order-0 pages without drain.
+ * If we had larger pcp->high, we could avoid to allocate from
+ * zone.
*/
+ if (high_min != high_max && !test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
+ high = pcp->high = min(high + batch, high_max);
+
if (!order) {
- max_nr_alloc = max(high - pcp->count - batch, batch);
- batch <<= pcp->alloc_factor;
+ max_nr_alloc = max(high - pcp->count - base_batch, base_batch);
+ /*
+ * Double the number of pages allocated each time there is
+ * subsequent refiling of order-0 pages without drain.
+ */
if (batch <= max_nr_alloc && pcp->alloc_factor < PCP_BATCH_SCALE_MAX)
pcp->alloc_factor++;
batch = min(batch, max_nr_alloc);
@@ -2735,7 +2803,7 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
do {
if (list_empty(list)) {
- int batch = nr_pcp_alloc(pcp, order);
+ int batch = nr_pcp_alloc(pcp, zone, order);
int alloced;
alloced = rmqueue_bulk(zone, order,
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 00e81e99c6ee..2f716ad14168 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -814,9 +814,7 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
for_each_populated_zone(zone) {
struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
-#ifdef CONFIG_NUMA
struct per_cpu_pages __percpu *pcp = zone->per_cpu_pageset;
-#endif
for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
int v;
@@ -832,10 +830,12 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
#endif
}
}
-#ifdef CONFIG_NUMA
if (do_pagesets) {
cond_resched();
+
+ changes += decay_pcp_high(zone, this_cpu_ptr(pcp));
+#ifdef CONFIG_NUMA
/*
* Deal with draining the remote pageset of this
* processor
@@ -862,8 +862,8 @@ static int refresh_cpu_vm_stats(bool do_pagesets)
drain_zone_pages(zone, this_cpu_ptr(pcp));
changes++;
}
- }
#endif
+ }
}
for_each_online_pgdat(pgdat) {
--
2.39.2
* [PATCH -V2 08/10] mm, pcp: decrease PCP high if free pages < high watermark
From: Huang Ying @ 2023-09-26 6:09 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Arjan Van De Ven, Huang Ying, Mel Gorman,
Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen,
Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
One target of PCP auto-tuning is to minimize the pages in the PCP if
the system free pages are too few. To reach that target, when page
reclaiming is active for the zone (ZONE_RECLAIM_ACTIVE), we stop
increasing PCP high in the allocating path, and decrease PCP high and
free some pages in the freeing path. But this may be too late,
because the background page reclaiming may already introduce latency
for some workloads. So, in this patch, during page allocation we
detect whether the number of free pages of the zone is below the high
watermark. If so, we stop increasing PCP high in the allocating
path, and decrease PCP high and free some pages in the freeing path.
With this, we can reduce the possibility of premature background page
reclaiming caused by a too large PCP.
The high watermark checking is done in the allocating path to reduce
the overhead in the hotter freeing path.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
include/linux/mmzone.h | 1 +
mm/page_alloc.c | 22 ++++++++++++++++++++--
2 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index d6cfb5023f3e..8a19e2af89df 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1006,6 +1006,7 @@ enum zone_flags {
* Cleared when kswapd is woken.
*/
ZONE_RECLAIM_ACTIVE, /* kswapd may be scanning the zone. */
+ ZONE_BELOW_HIGH, /* zone is below high watermark. */
};
static inline unsigned long zone_managed_pages(struct zone *zone)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0d482a55235b..08b74c65b88a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2409,7 +2409,13 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
return min(batch << 2, pcp->high);
}
- if (pcp->count >= high && high_min != high_max) {
+ if (high_min == high_max)
+ return high;
+
+ if (test_bit(ZONE_BELOW_HIGH, &zone->flags)) {
+ pcp->high = max(high - (batch << pcp->free_factor), high_min);
+ high = max(pcp->count, high_min);
+ } else if (pcp->count >= high) {
int need_high = (batch << pcp->free_factor) + batch;
/* pcp->high should be large enough to hold batch freed pages */
@@ -2459,6 +2465,10 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
if (pcp->count >= high) {
free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
pcp, pindex);
+ if (test_bit(ZONE_BELOW_HIGH, &zone->flags) &&
+ zone_watermark_ok(zone, 0, high_wmark_pages(zone),
+ ZONE_MOVABLE, 0))
+ clear_bit(ZONE_BELOW_HIGH, &zone->flags);
}
}
@@ -2765,7 +2775,7 @@ static int nr_pcp_alloc(struct per_cpu_pages *pcp, struct zone *zone, int order)
* If we had larger pcp->high, we could avoid to allocate from
* zone.
*/
- if (high_min != high_max && !test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags))
+ if (high_min != high_max && !test_bit(ZONE_BELOW_HIGH, &zone->flags))
high = pcp->high = min(high + batch, high_max);
if (!order) {
@@ -3226,6 +3236,14 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
}
}
+ mark = high_wmark_pages(zone);
+ if (zone_watermark_fast(zone, order, mark,
+ ac->highest_zoneidx, alloc_flags,
+ gfp_mask))
+ goto try_this_zone;
+ else if (!test_bit(ZONE_BELOW_HIGH, &zone->flags))
+ set_bit(ZONE_BELOW_HIGH, &zone->flags);
+
mark = wmark_pages(zone, alloc_flags & ALLOC_WMARK_MASK);
if (!zone_watermark_fast(zone, order, mark,
ac->highest_zoneidx, alloc_flags,
--
2.39.2
* [PATCH -V2 09/10] mm, pcp: avoid to reduce PCP high unnecessarily
From: Huang Ying @ 2023-09-26 6:09 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Arjan Van De Ven, Huang Ying, Mel Gorman,
Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen,
Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
In the PCP high auto-tuning algorithm, to minimize idle pages in the
PCP, the periodic vmstat updating kworker (via
refresh_cpu_vm_stats()) decreases PCP high to try to free possible
idle PCP pages. One issue is that even if the page allocating/freeing
depth is larger than the maximal PCP high, we may reduce PCP high
unnecessarily.
To avoid the above issue, in this patch, we track the minimal PCP
page count. And the periodic PCP high decrement will be no more than
the recent minimal PCP page count. So, only detected idle pages will
be freed.
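As a hypothetical example with pcp->high = 2000, batch = 63 and
high_min = 500: if pcp->count never dropped below 800 during the last
interval (count_min = 800), at most min(800, 2000 / 5) = 400 pages
are trimmed from pcp->high; if the PCP was completely emptied at some
point (count_min = 0), nothing is trimmed, because no page stayed
idle in the PCP.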
On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild
instances in parallel (each with `make -j 28`) in 8 cgroups. This
simulates the kbuild server that is used by the 0-Day kbuild service.
With the patch, the number of pages allocated from the zone (instead
of from the PCP) decreases by 21.4%.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
include/linux/mmzone.h | 1 +
mm/page_alloc.c | 15 ++++++++++-----
2 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8a19e2af89df..35b78c7522a7 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -682,6 +682,7 @@ enum zone_watermarks {
struct per_cpu_pages {
spinlock_t lock; /* Protects lists field */
int count; /* number of pages in the list */
+ int count_min; /* minimal number of pages in the list recently */
int high; /* high watermark, emptying needed */
int high_min; /* min high watermark */
int high_max; /* max high watermark */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 08b74c65b88a..d7b602822ab3 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2166,19 +2166,20 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
*/
int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
{
- int high_min, to_drain, batch;
+ int high_min, decrease, to_drain, batch;
int todo = 0;
high_min = READ_ONCE(pcp->high_min);
batch = READ_ONCE(pcp->batch);
/*
- * Decrease pcp->high periodically to try to free possible
- * idle PCP pages. And, avoid to free too many pages to
- * control latency.
+ * Decrease pcp->high periodically to free idle PCP pages counted
+ * via pcp->count_min. And, avoid freeing too many pages to
+ * control latency. This caps the pcp->high decrement too.
*/
if (pcp->high > high_min) {
+ decrease = min(pcp->count_min, pcp->high / 5);
pcp->high = max3(pcp->count - (batch << PCP_BATCH_SCALE_MAX),
- pcp->high * 4 / 5, high_min);
+ pcp->high - decrease, high_min);
if (pcp->high > high_min)
todo++;
}
@@ -2191,6 +2192,8 @@ int decay_pcp_high(struct zone *zone, struct per_cpu_pages *pcp)
todo++;
}
+ pcp->count_min = pcp->count;
+
return todo;
}
@@ -2828,6 +2831,8 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
page = list_first_entry(list, struct page, pcp_list);
list_del(&page->pcp_list);
pcp->count -= 1 << order;
+ if (pcp->count < pcp->count_min)
+ pcp->count_min = pcp->count;
} while (check_new_pages(page, order));
return page;
--
2.39.2
^ permalink raw reply related [flat|nested] 11+ messages in thread
* [PATCH -V2 10/10] mm, pcp: reduce detecting time of consecutive high order page freeing
2023-09-26 6:09 [PATCH -V2 00/10] mm: PCP high auto-tuning Huang Ying
` (8 preceding siblings ...)
2023-09-26 6:09 ` [PATCH -V2 09/10] mm, pcp: avoid to reduce PCP high unnecessarily Huang Ying
@ 2023-09-26 6:09 ` Huang Ying
9 siblings, 0 replies; 11+ messages in thread
From: Huang Ying @ 2023-09-26 6:09 UTC (permalink / raw)
To: Andrew Morton
Cc: linux-mm, linux-kernel, Arjan Van De Ven, Huang Ying, Mel Gorman,
Vlastimil Babka, David Hildenbrand, Johannes Weiner, Dave Hansen,
Michal Hocko, Pavel Tatashin, Matthew Wilcox, Christoph Lameter
In the current PCP auto-tuning design, if the number of pages
allocated on a CPU is much larger than the number of pages freed
there, PCP high may grow to the maximal value even if the
allocating/freeing depth is small, for example, on the sender side of
network workloads. If a CPU originally used as a sender is later used
as a receiver after a context switch, the whole PCP (sized by the
maximal high) must fill up before PCP draining is triggered for
consecutive high-order freeing. This hurts the performance of some
network workloads.
To solve this issue, this patch tracks consecutive page freeing with
a counter instead of relying on PCP draining, so consecutive page
freeing can be detected much earlier.
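For illustration only, a minimal stand-alone sketch of how the
counter behaves follows. It is not the kernel code: locking, the
other free_high conditions, and the batch handling are omitted, the
function names are made up for the example, and PCP_BATCH_SCALE_MAX
is again assumed to be 5.

#define PCP_BATCH_SCALE_MAX	5

struct pcp_counter {
	int batch;
	short free_count;	/* consecutive free count */
};

/* Freeing pages grows the counter, saturating to keep it bounded. */
static void model_free(struct pcp_counter *pcp, unsigned int order)
{
	if (pcp->free_count < (pcp->batch << PCP_BATCH_SCALE_MAX))
		pcp->free_count += 1 << order;
}

/* An allocation breaks the streak: decay quickly but keep some history. */
static void model_alloc(struct pcp_counter *pcp)
{
	pcp->free_count >>= 1;
}

/* One of the conditions that arms the high-order drain heuristic. */
static int model_free_high_candidate(const struct pcp_counter *pcp)
{
	return pcp->free_count >= pcp->batch;
}

Compared with free_factor, which only started to grow once the PCP
had filled up to high, free_count reaches batch after roughly one
batch of consecutive frees, so consecutive high-order freeing is
detected much sooner regardless of how large PCP high has become.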
On a 2-socket Intel server with 128 logical CPUs, we tested the
SCTP_STREAM_MANY test case of the netperf test suite with 64-pair
processes. With the patch, the network bandwidth improves by 3.1%.
This recovers the performance drop caused by PCP auto-tuning.
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Hildenbrand <david@redhat.com>
Cc: Johannes Weiner <jweiner@redhat.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Christoph Lameter <cl@linux.com>
---
include/linux/mmzone.h | 2 +-
mm/page_alloc.c | 23 +++++++++++------------
2 files changed, 12 insertions(+), 13 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 35b78c7522a7..44f6dc3cdeeb 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -689,10 +689,10 @@ struct per_cpu_pages {
int batch; /* chunk size for buddy add/remove */
u8 flags; /* protected by pcp->lock */
u8 alloc_factor; /* batch scaling factor during allocate */
- u8 free_factor; /* batch scaling factor during free */
#ifdef CONFIG_NUMA
u8 expire; /* When 0, remote pagesets are drained */
#endif
+ short free_count; /* consecutive free count */
/* Lists of pages, one per migrate type stored on the pcp-lists */
struct list_head lists[NR_PCP_LISTS];
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d7b602822ab3..206ab768ec23 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2375,13 +2375,10 @@ static int nr_pcp_free(struct per_cpu_pages *pcp, int batch, int high, bool free
max_nr_free = high - batch;
/*
- * Double the number of pages freed each time there is subsequent
- * freeing of pages without any allocation.
+ * Increase the batch number to the count of consecutively
+ * freed pages to reduce zone lock contention.
*/
- batch <<= pcp->free_factor;
- if (batch <= max_nr_free && pcp->free_factor < PCP_BATCH_SCALE_MAX)
- pcp->free_factor++;
- batch = clamp(batch, min_nr_free, max_nr_free);
+ batch = clamp_t(int, pcp->free_count, min_nr_free, max_nr_free);
return batch;
}
@@ -2408,7 +2405,7 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
* stored on pcp lists
*/
if (test_bit(ZONE_RECLAIM_ACTIVE, &zone->flags)) {
- pcp->high = max(high - (batch << pcp->free_factor), high_min);
+ pcp->high = max(high - pcp->free_count, high_min);
return min(batch << 2, pcp->high);
}
@@ -2416,10 +2413,10 @@ static int nr_pcp_high(struct per_cpu_pages *pcp, struct zone *zone,
return high;
if (test_bit(ZONE_BELOW_HIGH, &zone->flags)) {
- pcp->high = max(high - (batch << pcp->free_factor), high_min);
+ pcp->high = max(high - pcp->free_count, high_min);
high = max(pcp->count, high_min);
} else if (pcp->count >= high) {
- int need_high = (batch << pcp->free_factor) + batch;
+ int need_high = pcp->free_count + batch;
/* pcp->high should be large enough to hold batch freed pages */
if (pcp->high < need_high)
@@ -2456,7 +2453,7 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
* stops will be drained from vmstat refresh context.
*/
if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
- free_high = (pcp->free_factor &&
+ free_high = (pcp->free_count >= batch &&
(pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) &&
(!(pcp->flags & PCPF_FREE_HIGH_BATCH) ||
pcp->count >= READ_ONCE(batch)));
@@ -2464,6 +2461,8 @@ static void free_unref_page_commit(struct zone *zone, struct per_cpu_pages *pcp,
} else if (pcp->flags & PCPF_PREV_FREE_HIGH_ORDER) {
pcp->flags &= ~PCPF_PREV_FREE_HIGH_ORDER;
}
+ if (pcp->free_count < (batch << PCP_BATCH_SCALE_MAX))
+ pcp->free_count += (1 << order);
high = nr_pcp_high(pcp, zone, batch, free_high);
if (pcp->count >= high) {
free_pcppages_bulk(zone, nr_pcp_free(pcp, batch, high, free_high),
@@ -2861,7 +2860,7 @@ static struct page *rmqueue_pcplist(struct zone *preferred_zone,
* See nr_pcp_free() where free_factor is increased for subsequent
* frees.
*/
- pcp->free_factor >>= 1;
+ pcp->free_count >>= 1;
list = &pcp->lists[order_to_pindex(migratetype, order)];
page = __rmqueue_pcplist(zone, order, migratetype, alloc_flags, pcp, list);
pcp_spin_unlock(pcp);
@@ -5483,7 +5482,7 @@ static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonesta
pcp->high_min = BOOT_PAGESET_HIGH;
pcp->high_max = BOOT_PAGESET_HIGH;
pcp->batch = BOOT_PAGESET_BATCH;
- pcp->free_factor = 0;
+ pcp->free_count = 0;
}
static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high_min,
--
2.39.2
^ permalink raw reply related [flat|nested] 11+ messages in thread