From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, bigeasy@linutronix.de,
	brouer@redhat.com, chuck.lever@oracle.com, linux-mm@kvack.org,
	mgorman@techsingularity.net, mhocko@kernel.org, mingo@kernel.org,
	mm-commits@vger.kernel.org, peterz@infradead.org,
	tglx@linutronix.de, torvalds@linux-foundation.org,
	vbabka@suse.cz
Subject: [patch 160/192] mm/page_alloc: split per cpu page lists and zone stats
Date: Mon, 28 Jun 2021 19:41:38 -0700	[thread overview]
Message-ID: <20210629024138.Kh1pBmreI%akpm@linux-foundation.org> (raw)
In-Reply-To: <20210628193256.008961950a714730751c1423@linux-foundation.org>

From: Mel Gorman <mgorman@techsingularity.net>
Subject: mm/page_alloc: split per cpu page lists and zone stats

The PCP (per-cpu page allocator in page_alloc.c) shares locking
requirements with vmstat and the zone lock, which is inconvenient and
causes some issues.  First, the PCP lists and vmstat share the same
per-cpu space, meaning that vmstat updates can dirty cache lines holding
the per-cpu lists across CPUs unless padding is used.  Second, PREEMPT_RT
does not want to disable IRQs for long periods in the page allocator.

This series splits the locking requirements, uses lock types more
suitable for PREEMPT_RT, reduces the time that special locking is required
for stats, and reduces the time that IRQs need to be disabled on
!PREEMPT_RT kernels.

Why local_lock?  PREEMPT_RT considers the following sequence to be unsafe
because spin_lock() becomes a sleeping lock on PREEMPT_RT and must not be
acquired with interrupts disabled, as documented in
Documentation/locking/locktypes.rst:

   local_irq_disable();
   spin_lock(&lock);

The PCP allocator has this sequence in the rmqueue_pcplist (local_irq_save)
-> __rmqueue_pcplist -> rmqueue_bulk (spin_lock) call chain.  While it's
possible to separate this out, it generally means there are points where we
enable IRQs and then immediately disable them again.  To prevent a
migration and the per-cpu pointer going stale, migrate_disable is also
needed.  That amounts to a custom locking scheme that is similar to, but
worse than, local_lock.  Furthermore, on PREEMPT_RT, it's undesirable to
leave IRQs disabled for too long.  By converting to local_lock, which
disables migration on PREEMPT_RT, the locking requirements can be separated
and the protections for the PCP, stats and the zone lock can start moving
to PREEMPT_RT-safe equivalents.  As a bonus, local_lock also means that
PROVE_LOCKING does something useful.
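
As an illustration only (a simplified sketch, not the literal code added by
the later patches in this series -- the "pagesets" lock and the example
functions are assumptions for this sketch), the difference in the PCP fast
path looks roughly like this:

   #include <linux/local_lock.h>
   #include <linux/percpu.h>
   #include <linux/mmzone.h>

   struct pagesets {
           local_lock_t lock;
   };
   static DEFINE_PER_CPU(struct pagesets, pagesets) = {
           .lock = INIT_LOCAL_LOCK(lock),
   };

   /* Before: disabling IRQs implicitly protects the per-cpu lists. */
   static struct page *example_rmqueue_irq(struct zone *zone, int migratetype)
   {
           struct per_cpu_pages *pcp;
           unsigned long flags;
           struct page *page;

           local_irq_save(flags);
           pcp = this_cpu_ptr(zone->per_cpu_pageset);
           page = list_first_entry_or_null(&pcp->lists[migratetype],
                                           struct page, lru);
           if (page) {
                   list_del(&page->lru);
                   pcp->count--;
           }
           local_irq_restore(flags);
           return page;
   }

   /*
    * After: local_lock names what is protected.  On !PREEMPT_RT it still
    * disables IRQs; on PREEMPT_RT it maps to a per-CPU lock that disables
    * migration instead of interrupts.
    */
   static struct page *example_rmqueue_ll(struct zone *zone, int migratetype)
   {
           struct per_cpu_pages *pcp;
           unsigned long flags;
           struct page *page;

           local_lock_irqsave(&pagesets.lock, flags);
           pcp = this_cpu_ptr(zone->per_cpu_pageset);
           page = list_first_entry_or_null(&pcp->lists[migratetype],
                                           struct page, lru);
           if (page) {
                   list_del(&page->lru);
                   pcp->count--;
           }
           local_unlock_irqrestore(&pagesets.lock, flags);
           return page;
   }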

After that, it's obvious that zone_statistics incurs too much overhead and
leaves IRQs disabled for longer than necessary on !PREEMPT_RT kernels.
zone_statistics uses perfectly accurate counters that require IRQs to be
disabled for the parallel RMW sequences when inaccurate ones like vm_events
would do.  The series converts the NUMA statistics (NUMA_HIT and friends)
into inaccurate counters that require no special protection on
!PREEMPT_RT.
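
As a rough sketch of the difference (conceptual only, not the vmstat
implementation; the threshold handling is simplified and the event field
name is an assumption):

   /* Exact counter: the read-modify-write plus threshold fold must not be
    * interrupted on this CPU, so the caller keeps IRQs disabled. */
   struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
   u16 __percpu *p = pzstats->vm_numa_stat_diff + item;
   u16 v = __this_cpu_inc_return(*p);

   if (unlikely(v > NUMA_STATS_THRESHOLD)) {
           zone_numa_state_add(v, zone, item);     /* fold into zone total */
           __this_cpu_write(*p, 0);
   }

   /* Inexact, vm_event-like counter: a single per-cpu increment that vmstat
    * folds lazily; a rarely lost update is tolerable, so no IRQ protection
    * is needed on !PREEMPT_RT. */
   __this_cpu_inc(pzstats->vm_numa_event[item]);   /* assumed field name */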

The bulk page allocator can then do stat updates in bulk with IRQs enabled,
which should improve efficiency.  Technically, this could have been done
without the local_lock and vmstat conversion work; the order simply
reflects the timing of when the different series were implemented.
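
For illustration, the bulk allocator's accounting can then move outside the
IRQ-disabled region along these lines (a sketch loosely modelled on the
later patches, not the literal code; nr_account and the zone_statistics()
signature shown here are assumptions):

   local_lock_irqsave(&pagesets.lock, flags);
   pcp = this_cpu_ptr(zone->per_cpu_pageset);
   pcp_list = &pcp->lists[ac.migratetype];

   while (nr_populated < nr_pages) {
           page = __rmqueue_pcplist(zone, ac.migratetype, alloc_flags,
                                    pcp, pcp_list);
           if (!page)
                   break;
           nr_account++;
           /* ... store page in the caller's array ... */
           nr_populated++;
   }
   local_unlock_irqrestore(&pagesets.lock, flags);

   /* One batched update per call, with IRQs enabled again. */
   __count_zid_vm_events(PGALLOC, zone_idx(zone), nr_account);
   zone_statistics(ac.preferred_zoneref->zone, zone, nr_account);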

Finally, there are places where we conflate IRQs being disabled for the
PCP with the IRQ-safe zone spinlock.  The remainder of the series reduces
the scope of what is protected by disabled IRQs on !PREEMPT_RT kernels. 
By the end of the series, page_alloc.c does not call local_irq_save() so
the locking scope is a bit clearer.  The one exception is that
NR_FREE_PAGES is still modified in places where it's known that IRQs are
disabled, as this is harmless for PREEMPT_RT and it would be expensive to
split the locking there.

No performance data is included because, despite the overhead of the stats,
the impact is within the noise for most workloads on !PREEMPT_RT.  However,
Jesper Dangaard Brouer ran a page allocation microbenchmark on an E5-1650
v4 @ 3.60GHz CPU against the first version of this series.  Focusing on the
array variant of the bulk page allocator reveals the following.

(CPU: Intel(R) Xeon(R) CPU E5-1650 v4 @ 3.60GHz)
ARRAY variant: time_bulk_page_alloc_free_array: step=bulk size

 Step    Baseline        Patched
 1       56.383          54.225 (+3.83%)
 2       40.047          35.492 (+11.38%)
 3       37.339          32.643 (+12.58%)
 4       35.578          30.992 (+12.89%)
 8       33.592          29.606 (+11.87%)
 16      32.362          28.532 (+11.85%)
 32      31.476          27.728 (+11.91%)
 64      30.633          27.252 (+11.04%)
 128     30.596          27.090 (+11.46%)

While this is a positive outcome, the series is more likely to be
interesting to the RT people in terms of getting parts of the PREEMPT_RT
tree into mainline.  


This patch (of 9):

The per-cpu page allocator lists and the per-cpu vmstat deltas are stored
in the same struct per_cpu_pageset even though vmstats have no direct
impact on the per-cpu page lists.  This is inconsistent because the vmstats
for a node are stored in a dedicated structure (struct per_cpu_nodestat).
The bigger issue is that the per_cpu_pageset structure is not cache-aligned
and stat updates either cache-conflict with the adjacent per-cpu lists,
incurring a runtime cost, or require padding, incurring a memory cost.

This patch splits the per-cpu pagelists and the vmstat deltas into
separate structures.  It is mostly a mechanical conversion, but some
variables are renamed to clearly distinguish the per-cpu pages structure
(pcp) from the vmstats (pzstats).

Superficially, this appears to increase the size of the per_cpu_pages
structure, but the movement of expire fills a structure hole so there is no
overall impact.

[mgorman@techsingularity.net: make it W=1 cleaner]
  Link: https://lkml.kernel.org/r/20210514144622.GA3735@techsingularity.net
[mgorman@techsingularity.net: make it W=1 even cleaner]
  Link: https://lkml.kernel.org/r/20210516140705.GB3735@techsingularity.net
[lkp@intel.com: check struct per_cpu_zonestat has a non-zero size]
[vbabka@suse.cz: Init zone->per_cpu_zonestats properly]
Link: https://lkml.kernel.org/r/20210512095458.30632-1-mgorman@techsingularity.net
Link: https://lkml.kernel.org/r/20210512095458.30632-2-mgorman@techsingularity.net
Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/mmzone.h |   18 +++----
 include/linux/vmstat.h |    8 +--
 mm/page_alloc.c        |   85 ++++++++++++++++++---------------
 mm/vmstat.c            |   98 ++++++++++++++++++++-------------------
 4 files changed, 113 insertions(+), 96 deletions(-)

--- a/include/linux/mmzone.h~mm-page_alloc-split-per-cpu-page-lists-and-zone-stats
+++ a/include/linux/mmzone.h
@@ -341,20 +341,21 @@ struct per_cpu_pages {
 	int count;		/* number of pages in the list */
 	int high;		/* high watermark, emptying needed */
 	int batch;		/* chunk size for buddy add/remove */
+#ifdef CONFIG_NUMA
+	int expire;		/* When 0, remote pagesets are drained */
+#endif
 
 	/* Lists of pages, one per migrate type stored on the pcp-lists */
 	struct list_head lists[MIGRATE_PCPTYPES];
 };
 
-struct per_cpu_pageset {
-	struct per_cpu_pages pcp;
-#ifdef CONFIG_NUMA
-	s8 expire;
-	u16 vm_numa_stat_diff[NR_VM_NUMA_STAT_ITEMS];
-#endif
+struct per_cpu_zonestat {
 #ifdef CONFIG_SMP
-	s8 stat_threshold;
 	s8 vm_stat_diff[NR_VM_ZONE_STAT_ITEMS];
+	s8 stat_threshold;
+#endif
+#ifdef CONFIG_NUMA
+	u16 vm_numa_stat_diff[NR_VM_NUMA_STAT_ITEMS];
 #endif
 };
 
@@ -484,7 +485,8 @@ struct zone {
 	int node;
 #endif
 	struct pglist_data	*zone_pgdat;
-	struct per_cpu_pageset __percpu *pageset;
+	struct per_cpu_pages	__percpu *per_cpu_pageset;
+	struct per_cpu_zonestat	__percpu *per_cpu_zonestats;
 	/*
 	 * the high and batch values are copied to individual pagesets for
 	 * faster access
--- a/include/linux/vmstat.h~mm-page_alloc-split-per-cpu-page-lists-and-zone-stats
+++ a/include/linux/vmstat.h
@@ -163,7 +163,7 @@ static inline unsigned long zone_numa_st
 	int cpu;
 
 	for_each_online_cpu(cpu)
-		x += per_cpu_ptr(zone->pageset, cpu)->vm_numa_stat_diff[item];
+		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_numa_stat_diff[item];
 
 	return x;
 }
@@ -236,7 +236,7 @@ static inline unsigned long zone_page_st
 #ifdef CONFIG_SMP
 	int cpu;
 	for_each_online_cpu(cpu)
-		x += per_cpu_ptr(zone->pageset, cpu)->vm_stat_diff[item];
+		x += per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_stat_diff[item];
 
 	if (x < 0)
 		x = 0;
@@ -291,7 +291,7 @@ struct ctl_table;
 int vmstat_refresh(struct ctl_table *, int write, void *buffer, size_t *lenp,
 		loff_t *ppos);
 
-void drain_zonestat(struct zone *zone, struct per_cpu_pageset *);
+void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *);
 
 int calculate_pressure_threshold(struct zone *zone);
 int calculate_normal_threshold(struct zone *zone);
@@ -399,7 +399,7 @@ static inline void cpu_vm_stats_fold(int
 static inline void quiet_vmstat(void) { }
 
 static inline void drain_zonestat(struct zone *zone,
-			struct per_cpu_pageset *pset) { }
+			struct per_cpu_zonestat *pzstats) { }
 #endif		/* CONFIG_SMP */
 
 static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
--- a/mm/page_alloc.c~mm-page_alloc-split-per-cpu-page-lists-and-zone-stats
+++ a/mm/page_alloc.c
@@ -3026,15 +3026,14 @@ void drain_zone_pages(struct zone *zone,
 static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 {
 	unsigned long flags;
-	struct per_cpu_pageset *pset;
 	struct per_cpu_pages *pcp;
 
 	local_irq_save(flags);
-	pset = per_cpu_ptr(zone->pageset, cpu);
 
-	pcp = &pset->pcp;
+	pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
 	if (pcp->count)
 		free_pcppages_bulk(zone, pcp->count, pcp);
+
 	local_irq_restore(flags);
 }
 
@@ -3133,7 +3132,7 @@ static void __drain_all_pages(struct zon
 	 * disables preemption as part of its processing
 	 */
 	for_each_online_cpu(cpu) {
-		struct per_cpu_pageset *pcp;
+		struct per_cpu_pages *pcp;
 		struct zone *z;
 		bool has_pcps = false;
 
@@ -3144,13 +3143,13 @@ static void __drain_all_pages(struct zon
 			 */
 			has_pcps = true;
 		} else if (zone) {
-			pcp = per_cpu_ptr(zone->pageset, cpu);
-			if (pcp->pcp.count)
+			pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+			if (pcp->count)
 				has_pcps = true;
 		} else {
 			for_each_populated_zone(z) {
-				pcp = per_cpu_ptr(z->pageset, cpu);
-				if (pcp->pcp.count) {
+				pcp = per_cpu_ptr(z->per_cpu_pageset, cpu);
+				if (pcp->count) {
 					has_pcps = true;
 					break;
 				}
@@ -3280,7 +3279,7 @@ static void free_unref_page_commit(struc
 		migratetype = MIGRATE_MOVABLE;
 	}
 
-	pcp = &this_cpu_ptr(zone->pageset)->pcp;
+	pcp = this_cpu_ptr(zone->per_cpu_pageset);
 	list_add(&page->lru, &pcp->lists[migratetype]);
 	pcp->count++;
 	if (pcp->count >= READ_ONCE(pcp->high))
@@ -3496,7 +3495,7 @@ static struct page *rmqueue_pcplist(stru
 	unsigned long flags;
 
 	local_irq_save(flags);
-	pcp = &this_cpu_ptr(zone->pageset)->pcp;
+	pcp = this_cpu_ptr(zone->per_cpu_pageset);
 	list = &pcp->lists[migratetype];
 	page = __rmqueue_pcplist(zone,  migratetype, alloc_flags, pcp, list);
 	if (page) {
@@ -5105,7 +5104,7 @@ unsigned long __alloc_pages_bulk(gfp_t g
 
 	/* Attempt the batch allocation */
 	local_irq_save(flags);
-	pcp = &this_cpu_ptr(zone->pageset)->pcp;
+	pcp = this_cpu_ptr(zone->per_cpu_pageset);
 	pcp_list = &pcp->lists[ac.migratetype];
 
 	while (nr_populated < nr_pages) {
@@ -5720,7 +5719,7 @@ void show_free_areas(unsigned int filter
 			continue;
 
 		for_each_online_cpu(cpu)
-			free_pcp += per_cpu_ptr(zone->pageset, cpu)->pcp.count;
+			free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
 	}
 
 	printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
@@ -5812,7 +5811,7 @@ void show_free_areas(unsigned int filter
 
 		free_pcp = 0;
 		for_each_online_cpu(cpu)
-			free_pcp += per_cpu_ptr(zone->pageset, cpu)->pcp.count;
+			free_pcp += per_cpu_ptr(zone->per_cpu_pageset, cpu)->count;
 
 		show_node(zone);
 		printk(KERN_CONT
@@ -5853,7 +5852,7 @@ void show_free_areas(unsigned int filter
 			K(zone_page_state(zone, NR_MLOCK)),
 			K(zone_page_state(zone, NR_BOUNCE)),
 			K(free_pcp),
-			K(this_cpu_read(zone->pageset->pcp.count)),
+			K(this_cpu_read(zone->per_cpu_pageset->count)),
 			K(zone_page_state(zone, NR_FREE_CMA_PAGES)));
 		printk("lowmem_reserve[]:");
 		for (i = 0; i < MAX_NR_ZONES; i++)
@@ -6180,11 +6179,12 @@ static void build_zonelists(pg_data_t *p
  * not check if the processor is online before following the pageset pointer.
  * Other parts of the kernel may not check if the zone is available.
  */
-static void pageset_init(struct per_cpu_pageset *p);
+static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonestat *pzstats);
 /* These effectively disable the pcplists in the boot pageset completely */
 #define BOOT_PAGESET_HIGH	0
 #define BOOT_PAGESET_BATCH	1
-static DEFINE_PER_CPU(struct per_cpu_pageset, boot_pageset);
+static DEFINE_PER_CPU(struct per_cpu_pages, boot_pageset);
+static DEFINE_PER_CPU(struct per_cpu_zonestat, boot_zonestats);
 static DEFINE_PER_CPU(struct per_cpu_nodestat, boot_nodestats);
 
 static void __build_all_zonelists(void *data)
@@ -6251,7 +6251,7 @@ build_all_zonelists_init(void)
 	 * (a chicken-egg dilemma).
 	 */
 	for_each_possible_cpu(cpu)
-		pageset_init(&per_cpu(boot_pageset, cpu));
+		per_cpu_pages_init(&per_cpu(boot_pageset, cpu), &per_cpu(boot_zonestats, cpu));
 
 	mminit_verify_zonelist();
 	cpuset_init_current_mems_allowed();
@@ -6650,14 +6650,13 @@ static void pageset_update(struct per_cp
 	WRITE_ONCE(pcp->high, high);
 }
 
-static void pageset_init(struct per_cpu_pageset *p)
+static void per_cpu_pages_init(struct per_cpu_pages *pcp, struct per_cpu_zonestat *pzstats)
 {
-	struct per_cpu_pages *pcp;
 	int migratetype;
 
-	memset(p, 0, sizeof(*p));
+	memset(pcp, 0, sizeof(*pcp));
+	memset(pzstats, 0, sizeof(*pzstats));
 
-	pcp = &p->pcp;
 	for (migratetype = 0; migratetype < MIGRATE_PCPTYPES; migratetype++)
 		INIT_LIST_HEAD(&pcp->lists[migratetype]);
 
@@ -6674,12 +6673,12 @@ static void pageset_init(struct per_cpu_
 static void __zone_set_pageset_high_and_batch(struct zone *zone, unsigned long high,
 		unsigned long batch)
 {
-	struct per_cpu_pageset *p;
+	struct per_cpu_pages *pcp;
 	int cpu;
 
 	for_each_possible_cpu(cpu) {
-		p = per_cpu_ptr(zone->pageset, cpu);
-		pageset_update(&p->pcp, high, batch);
+		pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+		pageset_update(pcp, high, batch);
 	}
 }
 
@@ -6714,13 +6713,20 @@ static void zone_set_pageset_high_and_ba
 
 void __meminit setup_zone_pageset(struct zone *zone)
 {
-	struct per_cpu_pageset *p;
 	int cpu;
 
-	zone->pageset = alloc_percpu(struct per_cpu_pageset);
+	/* Size may be 0 on !SMP && !NUMA */
+	if (sizeof(struct per_cpu_zonestat) > 0)
+		zone->per_cpu_zonestats = alloc_percpu(struct per_cpu_zonestat);
+
+	zone->per_cpu_pageset = alloc_percpu(struct per_cpu_pages);
 	for_each_possible_cpu(cpu) {
-		p = per_cpu_ptr(zone->pageset, cpu);
-		pageset_init(p);
+		struct per_cpu_pages *pcp;
+		struct per_cpu_zonestat *pzstats;
+
+		pcp = per_cpu_ptr(zone->per_cpu_pageset, cpu);
+		pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
+		per_cpu_pages_init(pcp, pzstats);
 	}
 
 	zone_set_pageset_high_and_batch(zone);
@@ -6747,9 +6753,9 @@ void __init setup_per_cpu_pageset(void)
 	 * the nodes these zones are associated with.
 	 */
 	for_each_possible_cpu(cpu) {
-		struct per_cpu_pageset *pcp = &per_cpu(boot_pageset, cpu);
-		memset(pcp->vm_numa_stat_diff, 0,
-		       sizeof(pcp->vm_numa_stat_diff));
+		struct per_cpu_zonestat *pzstats = &per_cpu(boot_zonestats, cpu);
+		memset(pzstats->vm_numa_stat_diff, 0,
+		       sizeof(pzstats->vm_numa_stat_diff));
 	}
 #endif
 
@@ -6765,7 +6771,8 @@ static __meminit void zone_pcp_init(stru
 	 * relies on the ability of the linker to provide the
 	 * offset of a (static) per cpu variable into the per cpu area.
 	 */
-	zone->pageset = &boot_pageset;
+	zone->per_cpu_pageset = &boot_pageset;
+	zone->per_cpu_zonestats = &boot_zonestats;
 	zone->pageset_high = BOOT_PAGESET_HIGH;
 	zone->pageset_batch = BOOT_PAGESET_BATCH;
 
@@ -9046,15 +9053,17 @@ void zone_pcp_enable(struct zone *zone)
 void zone_pcp_reset(struct zone *zone)
 {
 	int cpu;
-	struct per_cpu_pageset *pset;
+	struct per_cpu_zonestat *pzstats;
 
-	if (zone->pageset != &boot_pageset) {
+	if (zone->per_cpu_pageset != &boot_pageset) {
 		for_each_online_cpu(cpu) {
-			pset = per_cpu_ptr(zone->pageset, cpu);
-			drain_zonestat(zone, pset);
+			pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
+			drain_zonestat(zone, pzstats);
 		}
-		free_percpu(zone->pageset);
-		zone->pageset = &boot_pageset;
+		free_percpu(zone->per_cpu_pageset);
+		free_percpu(zone->per_cpu_zonestats);
+		zone->per_cpu_pageset = &boot_pageset;
+		zone->per_cpu_zonestats = &boot_zonestats;
 	}
 }
 
--- a/mm/vmstat.c~mm-page_alloc-split-per-cpu-page-lists-and-zone-stats
+++ a/mm/vmstat.c
@@ -44,7 +44,7 @@ static void zero_zone_numa_counters(stru
 	for (item = 0; item < NR_VM_NUMA_STAT_ITEMS; item++) {
 		atomic_long_set(&zone->vm_numa_stat[item], 0);
 		for_each_online_cpu(cpu)
-			per_cpu_ptr(zone->pageset, cpu)->vm_numa_stat_diff[item]
+			per_cpu_ptr(zone->per_cpu_zonestats, cpu)->vm_numa_stat_diff[item]
 						= 0;
 	}
 }
@@ -266,7 +266,7 @@ void refresh_zone_stat_thresholds(void)
 		for_each_online_cpu(cpu) {
 			int pgdat_threshold;
 
-			per_cpu_ptr(zone->pageset, cpu)->stat_threshold
+			per_cpu_ptr(zone->per_cpu_zonestats, cpu)->stat_threshold
 							= threshold;
 
 			/* Base nodestat threshold on the largest populated zone. */
@@ -303,7 +303,7 @@ void set_pgdat_percpu_threshold(pg_data_
 
 		threshold = (*calculate_pressure)(zone);
 		for_each_online_cpu(cpu)
-			per_cpu_ptr(zone->pageset, cpu)->stat_threshold
+			per_cpu_ptr(zone->per_cpu_zonestats, cpu)->stat_threshold
 							= threshold;
 	}
 }
@@ -316,7 +316,7 @@ void set_pgdat_percpu_threshold(pg_data_
 void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
 			   long delta)
 {
-	struct per_cpu_pageset __percpu *pcp = zone->pageset;
+	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
 	s8 __percpu *p = pcp->vm_stat_diff + item;
 	long x;
 	long t;
@@ -389,7 +389,7 @@ EXPORT_SYMBOL(__mod_node_page_state);
  */
 void __inc_zone_state(struct zone *zone, enum zone_stat_item item)
 {
-	struct per_cpu_pageset __percpu *pcp = zone->pageset;
+	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
 	s8 __percpu *p = pcp->vm_stat_diff + item;
 	s8 v, t;
 
@@ -435,7 +435,7 @@ EXPORT_SYMBOL(__inc_node_page_state);
 
 void __dec_zone_state(struct zone *zone, enum zone_stat_item item)
 {
-	struct per_cpu_pageset __percpu *pcp = zone->pageset;
+	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
 	s8 __percpu *p = pcp->vm_stat_diff + item;
 	s8 v, t;
 
@@ -495,7 +495,7 @@ EXPORT_SYMBOL(__dec_node_page_state);
 static inline void mod_zone_state(struct zone *zone,
        enum zone_stat_item item, long delta, int overstep_mode)
 {
-	struct per_cpu_pageset __percpu *pcp = zone->pageset;
+	struct per_cpu_zonestat __percpu *pcp = zone->per_cpu_zonestats;
 	s8 __percpu *p = pcp->vm_stat_diff + item;
 	long o, n, t, z;
 
@@ -781,19 +781,22 @@ static int refresh_cpu_vm_stats(bool do_
 	int changes = 0;
 
 	for_each_populated_zone(zone) {
-		struct per_cpu_pageset __percpu *p = zone->pageset;
+		struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
+#ifdef CONFIG_NUMA
+		struct per_cpu_pages __percpu *pcp = zone->per_cpu_pageset;
+#endif
 
 		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++) {
 			int v;
 
-			v = this_cpu_xchg(p->vm_stat_diff[i], 0);
+			v = this_cpu_xchg(pzstats->vm_stat_diff[i], 0);
 			if (v) {
 
 				atomic_long_add(v, &zone->vm_stat[i]);
 				global_zone_diff[i] += v;
 #ifdef CONFIG_NUMA
 				/* 3 seconds idle till flush */
-				__this_cpu_write(p->expire, 3);
+				__this_cpu_write(pcp->expire, 3);
 #endif
 			}
 		}
@@ -801,12 +804,12 @@ static int refresh_cpu_vm_stats(bool do_
 		for (i = 0; i < NR_VM_NUMA_STAT_ITEMS; i++) {
 			int v;
 
-			v = this_cpu_xchg(p->vm_numa_stat_diff[i], 0);
+			v = this_cpu_xchg(pzstats->vm_numa_stat_diff[i], 0);
 			if (v) {
 
 				atomic_long_add(v, &zone->vm_numa_stat[i]);
 				global_numa_diff[i] += v;
-				__this_cpu_write(p->expire, 3);
+				__this_cpu_write(pcp->expire, 3);
 			}
 		}
 
@@ -819,23 +822,23 @@ static int refresh_cpu_vm_stats(bool do_
 			 * Check if there are pages remaining in this pageset
 			 * if not then there is nothing to expire.
 			 */
-			if (!__this_cpu_read(p->expire) ||
-			       !__this_cpu_read(p->pcp.count))
+			if (!__this_cpu_read(pcp->expire) ||
+			       !__this_cpu_read(pcp->count))
 				continue;
 
 			/*
 			 * We never drain zones local to this processor.
 			 */
 			if (zone_to_nid(zone) == numa_node_id()) {
-				__this_cpu_write(p->expire, 0);
+				__this_cpu_write(pcp->expire, 0);
 				continue;
 			}
 
-			if (__this_cpu_dec_return(p->expire))
+			if (__this_cpu_dec_return(pcp->expire))
 				continue;
 
-			if (__this_cpu_read(p->pcp.count)) {
-				drain_zone_pages(zone, this_cpu_ptr(&p->pcp));
+			if (__this_cpu_read(pcp->count)) {
+				drain_zone_pages(zone, this_cpu_ptr(pcp));
 				changes++;
 			}
 		}
@@ -882,27 +885,27 @@ void cpu_vm_stats_fold(int cpu)
 	int global_node_diff[NR_VM_NODE_STAT_ITEMS] = { 0, };
 
 	for_each_populated_zone(zone) {
-		struct per_cpu_pageset *p;
+		struct per_cpu_zonestat *pzstats;
 
-		p = per_cpu_ptr(zone->pageset, cpu);
+		pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
 
 		for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
-			if (p->vm_stat_diff[i]) {
+			if (pzstats->vm_stat_diff[i]) {
 				int v;
 
-				v = p->vm_stat_diff[i];
-				p->vm_stat_diff[i] = 0;
+				v = pzstats->vm_stat_diff[i];
+				pzstats->vm_stat_diff[i] = 0;
 				atomic_long_add(v, &zone->vm_stat[i]);
 				global_zone_diff[i] += v;
 			}
 
 #ifdef CONFIG_NUMA
 		for (i = 0; i < NR_VM_NUMA_STAT_ITEMS; i++)
-			if (p->vm_numa_stat_diff[i]) {
+			if (pzstats->vm_numa_stat_diff[i]) {
 				int v;
 
-				v = p->vm_numa_stat_diff[i];
-				p->vm_numa_stat_diff[i] = 0;
+				v = pzstats->vm_numa_stat_diff[i];
+				pzstats->vm_numa_stat_diff[i] = 0;
 				atomic_long_add(v, &zone->vm_numa_stat[i]);
 				global_numa_diff[i] += v;
 			}
@@ -936,24 +939,24 @@ void cpu_vm_stats_fold(int cpu)
  * this is only called if !populated_zone(zone), which implies no other users of
  * pset->vm_stat_diff[] exist.
  */
-void drain_zonestat(struct zone *zone, struct per_cpu_pageset *pset)
+void drain_zonestat(struct zone *zone, struct per_cpu_zonestat *pzstats)
 {
 	int i;
 
 	for (i = 0; i < NR_VM_ZONE_STAT_ITEMS; i++)
-		if (pset->vm_stat_diff[i]) {
-			int v = pset->vm_stat_diff[i];
-			pset->vm_stat_diff[i] = 0;
+		if (pzstats->vm_stat_diff[i]) {
+			int v = pzstats->vm_stat_diff[i];
+			pzstats->vm_stat_diff[i] = 0;
 			atomic_long_add(v, &zone->vm_stat[i]);
 			atomic_long_add(v, &vm_zone_stat[i]);
 		}
 
 #ifdef CONFIG_NUMA
 	for (i = 0; i < NR_VM_NUMA_STAT_ITEMS; i++)
-		if (pset->vm_numa_stat_diff[i]) {
-			int v = pset->vm_numa_stat_diff[i];
+		if (pzstats->vm_numa_stat_diff[i]) {
+			int v = pzstats->vm_numa_stat_diff[i];
 
-			pset->vm_numa_stat_diff[i] = 0;
+			pzstats->vm_numa_stat_diff[i] = 0;
 			atomic_long_add(v, &zone->vm_numa_stat[i]);
 			atomic_long_add(v, &vm_numa_stat[i]);
 		}
@@ -965,8 +968,8 @@ void drain_zonestat(struct zone *zone, s
 void __inc_numa_state(struct zone *zone,
 				 enum numa_stat_item item)
 {
-	struct per_cpu_pageset __percpu *pcp = zone->pageset;
-	u16 __percpu *p = pcp->vm_numa_stat_diff + item;
+	struct per_cpu_zonestat __percpu *pzstats = zone->per_cpu_zonestats;
+	u16 __percpu *p = pzstats->vm_numa_stat_diff + item;
 	u16 v;
 
 	v = __this_cpu_inc_return(*p);
@@ -1693,21 +1696,23 @@ static void zoneinfo_show_print(struct s
 
 	seq_printf(m, "\n  pagesets");
 	for_each_online_cpu(i) {
-		struct per_cpu_pageset *pageset;
+		struct per_cpu_pages *pcp;
+		struct per_cpu_zonestat __maybe_unused *pzstats;
 
-		pageset = per_cpu_ptr(zone->pageset, i);
+		pcp = per_cpu_ptr(zone->per_cpu_pageset, i);
 		seq_printf(m,
 			   "\n    cpu: %i"
 			   "\n              count: %i"
 			   "\n              high:  %i"
 			   "\n              batch: %i",
 			   i,
-			   pageset->pcp.count,
-			   pageset->pcp.high,
-			   pageset->pcp.batch);
+			   pcp->count,
+			   pcp->high,
+			   pcp->batch);
 #ifdef CONFIG_SMP
+		pzstats = per_cpu_ptr(zone->per_cpu_zonestats, i);
 		seq_printf(m, "\n  vm stats threshold: %d",
-				pageset->stat_threshold);
+				pzstats->stat_threshold);
 #endif
 	}
 	seq_printf(m,
@@ -1927,17 +1932,18 @@ static bool need_update(int cpu)
 	struct zone *zone;
 
 	for_each_populated_zone(zone) {
-		struct per_cpu_pageset *p = per_cpu_ptr(zone->pageset, cpu);
+		struct per_cpu_zonestat *pzstats = per_cpu_ptr(zone->per_cpu_zonestats, cpu);
 		struct per_cpu_nodestat *n;
+
 		/*
 		 * The fast way of checking if there are any vmstat diffs.
 		 */
-		if (memchr_inv(p->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS *
-			       sizeof(p->vm_stat_diff[0])))
+		if (memchr_inv(pzstats->vm_stat_diff, 0, NR_VM_ZONE_STAT_ITEMS *
+			       sizeof(pzstats->vm_stat_diff[0])))
 			return true;
 #ifdef CONFIG_NUMA
-		if (memchr_inv(p->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS *
-			       sizeof(p->vm_numa_stat_diff[0])))
+		if (memchr_inv(pzstats->vm_numa_stat_diff, 0, NR_VM_NUMA_STAT_ITEMS *
+			       sizeof(pzstats->vm_numa_stat_diff[0])))
 			return true;
 #endif
 		if (last_pgdat == zone->zone_pgdat)
_


Thread overview: 202+ messages
2021-06-29  2:32 incoming Andrew Morton
2021-06-29  2:33 ` [patch 001/192] mm/gup: fix try_grab_compound_head() race with split_huge_page() Andrew Morton
2021-06-29  2:33 ` [patch 002/192] mm/page_alloc: fix memory map initialization for descending nodes Andrew Morton
2021-06-29  2:33 ` [patch 003/192] mm/page_alloc: correct return value of populated elements if bulk array is populated Andrew Morton
2021-06-29  2:33 ` [patch 004/192] kthread: switch to new kerneldoc syntax for named variable macro argument Andrew Morton
2021-06-29  2:33 ` [patch 005/192] kthread_worker: fix return value when kthread_mod_delayed_work() races with kthread_cancel_delayed_work_sync() Andrew Morton
2021-06-29  2:33 ` [patch 006/192] ia64: headers: drop duplicated words Andrew Morton
2021-06-29  2:33 ` [patch 007/192] ia64: mca_drv: fix incorrect array size calculation Andrew Morton
2021-06-29  2:33 ` [patch 008/192] streamline_config.pl: make spacing consistent Andrew Morton
2021-06-29  2:33 ` [patch 009/192] streamline_config.pl: add softtabstop=4 for vim users Andrew Morton
2021-06-29  2:33 ` [patch 010/192] scripts/spelling.txt: add more spellings to spelling.txt Andrew Morton
2021-06-29  2:33 ` [patch 011/192] ntfs: fix validity check for file name attribute Andrew Morton
2021-06-29  2:33 ` [patch 012/192] squashfs: add option to panic on errors Andrew Morton
2021-06-29  2:33 ` [patch 013/192] ocfs2: remove unnecessary INIT_LIST_HEAD() Andrew Morton
2021-06-29  2:34 ` [patch 014/192] ocfs2: fix snprintf() checking Andrew Morton
2021-06-29  2:34 ` [patch 015/192] ocfs2: remove redundant assignment to pointer queue Andrew Morton
2021-06-29  2:34 ` [patch 016/192] ocfs2: remove repeated uptodate check for buffer Andrew Morton
2021-06-29  2:34 ` [patch 017/192] ocfs2: replace simple_strtoull() with kstrtoull() Andrew Morton
2021-06-29  2:34 ` [patch 018/192] ocfs2: remove redundant initialization of variable ret Andrew Morton
2021-06-29  2:34 ` [patch 019/192] kernel: watchdog: modify the explanation related to watchdog thread Andrew Morton
2021-06-29  2:34 ` [patch 020/192] doc: " Andrew Morton
2021-06-29  2:34 ` [patch 021/192] doc: watchdog: modify the doc related to "watchdog/%u" Andrew Morton
2021-06-29  2:34 ` [patch 022/192] slab: use __func__ to trace function name Andrew Morton
2021-06-29  2:34 ` [patch 023/192] kunit: make test->lock irq safe Andrew Morton
2021-06-29  2:34 ` [patch 024/192] mm/slub, kunit: add a KUnit test for SLUB debugging functionality Andrew Morton
2021-06-29  2:34 ` [patch 025/192] slub: remove resiliency_test() function Andrew Morton
2021-06-29  2:34 ` [patch 026/192] mm, slub: change run-time assertion in kmalloc_index() to compile-time Andrew Morton
2021-06-29  2:34 ` [patch 027/192] slub: restore slub_debug=- behavior Andrew Morton
2021-06-29  2:34 ` [patch 028/192] slub: actually use 'message' in restore_bytes() Andrew Morton
2021-06-29  2:34 ` [patch 029/192] slub: indicate slab_fix() uses printf formats Andrew Morton
2021-06-29  2:34 ` [patch 030/192] slub: force on no_hash_pointers when slub_debug is enabled Andrew Morton
2021-06-29  2:34 ` [patch 031/192] mm: slub: move sysfs slab alloc/free interfaces to debugfs Andrew Morton
2021-06-29  2:34 ` [patch 032/192] mm/slub: add taint after the errors are printed Andrew Morton
2021-06-29  2:35 ` [patch 033/192] mm/kmemleak: fix possible wrong memory scanning period Andrew Morton
2021-06-29  2:35 ` [patch 034/192] dax: fix ENOMEM handling in grab_mapping_entry() Andrew Morton
2021-06-29  2:35 ` [patch 035/192] tools/vm/page_owner_sort.c: check malloc() return Andrew Morton
2021-06-29  2:35 ` [patch 036/192] mm/debug_vm_pgtable: ensure THP availability via has_transparent_hugepage() Andrew Morton
2021-06-29  2:35 ` [patch 037/192] mm: mmap_lock: use local locks instead of disabling preemption Andrew Morton
2021-06-29  2:35 ` [patch 038/192] mm/page_reporting: fix code style in __page_reporting_request() Andrew Morton
2021-06-29  2:35 ` [patch 039/192] mm/page_reporting: export reporting order as module parameter Andrew Morton
2021-06-29  2:35 ` [patch 040/192] mm/page_reporting: allow driver to specify reporting order Andrew Morton
2021-06-29  2:35 ` [patch 041/192] virtio_balloon: specify page reporting order if needed Andrew Morton
2021-06-29  2:35 ` [patch 042/192] mm: page-writeback: kill get_writeback_state() comments Andrew Morton
2021-06-29  2:35 ` [patch 043/192] mm/page-writeback: Fix performance when BDI's share of ratio is 0 Andrew Morton
2021-06-29  2:35 ` [patch 044/192] mm/page-writeback: update the comment of Dirty position control Andrew Morton
2021-06-29  2:35 ` [patch 045/192] mm/page-writeback: use __this_cpu_inc() in account_page_dirtied() Andrew Morton
2021-06-29  2:35 ` [patch 046/192] writeback, cgroup: do not switch inodes with I_WILL_FREE flag Andrew Morton
2021-06-29  2:35 ` [patch 047/192] writeback, cgroup: add smp_mb() to cgroup_writeback_umount() Andrew Morton
2021-06-29  2:35 ` [patch 048/192] writeback, cgroup: increment isw_nr_in_flight before grabbing an inode Andrew Morton
2021-06-29  2:35 ` [patch 049/192] writeback, cgroup: switch to rcu_work API in inode_switch_wbs() Andrew Morton
2021-06-29  2:35 ` [patch 050/192] writeback, cgroup: keep list of inodes attached to bdi_writeback Andrew Morton
2021-06-29  2:35 ` [patch 051/192] writeback, cgroup: split out the functional part of inode_switch_wbs_work_fn() Andrew Morton
2021-06-29  2:35 ` [patch 052/192] writeback, cgroup: support switching multiple inodes at once Andrew Morton
2021-06-29  2:36 ` [patch 053/192] writeback, cgroup: release dying cgwbs by switching attached inodes Andrew Morton
2021-06-29  2:36 ` [patch 054/192] fs: unexport __set_page_dirty Andrew Morton
2021-06-29  2:36 ` [patch 055/192] fs: move ramfs_aops to libfs Andrew Morton
2021-06-29  2:36 ` [patch 056/192] mm: require ->set_page_dirty to be explicitly wired up Andrew Morton
2021-06-29  2:36 ` [patch 057/192] mm/writeback: move __set_page_dirty() to core mm Andrew Morton
2021-06-29  2:36 ` [patch 058/192] mm/writeback: use __set_page_dirty in __set_page_dirty_nobuffers Andrew Morton
2021-06-29  2:36 ` [patch 059/192] iomap: use __set_page_dirty_nobuffers Andrew Morton
2021-06-29  2:36 ` [patch 060/192] fs: remove anon_set_page_dirty() Andrew Morton
2021-06-29  2:36 ` [patch 061/192] fs: remove noop_set_page_dirty() Andrew Morton
2021-06-29  2:36 ` [patch 062/192] mm: move page dirtying prototypes from mm.h Andrew Morton
2021-06-29  2:36 ` [patch 063/192] mm/gup_benchmark: support threading Andrew Morton
2021-06-29  2:36 ` [patch 064/192] mm: gup: allow FOLL_PIN to scale in SMP Andrew Morton
2021-06-29  2:36 ` [patch 065/192] mm: gup: pack has_pinned in MMF_HAS_PINNED Andrew Morton
2021-06-29  2:36 ` [patch 066/192] mm: pagewalk: fix walk for hugepage tables Andrew Morton
2021-06-29  2:36 ` [patch 067/192] mm/swapfile: use percpu_ref to serialize against concurrent swapoff Andrew Morton
2021-06-29  2:36 ` [patch 068/192] swap: fix do_swap_page() race with swapoff Andrew Morton
2021-06-29  2:36 ` [patch 069/192] mm/swap: remove confusing checking for non_swap_entry() in swap_ra_info() Andrew Morton
2021-06-29  2:36 ` [patch 070/192] mm/shmem: fix shmem_swapin() race with swapoff Andrew Morton
2021-06-29  2:37 ` [patch 071/192] mm/swapfile: move get_swap_page_of_type() under CONFIG_HIBERNATION Andrew Morton
2021-06-29  2:37 ` [patch 072/192] mm/swap: remove unused local variable nr_shadows Andrew Morton
2021-06-29  2:37 ` [patch 073/192] mm/swap_slots.c: delete meaningless forward declarations Andrew Morton
2021-06-29  2:37 ` [patch 074/192] mm, swap: remove unnecessary smp_rmb() in swap_type_to_swap_info() Andrew Morton
2021-06-29  2:37 ` [patch 075/192] mm: free idle swap cache page after COW Andrew Morton
2021-06-29  2:37 ` [patch 076/192] swap: check mapping_empty() for swap cache before being freed Andrew Morton
2021-06-29  2:37 ` [patch 077/192] mm/memcg: move mod_objcg_state() to memcontrol.c Andrew Morton
2021-06-29  2:37 ` [patch 078/192] mm/memcg: cache vmstat data in percpu memcg_stock_pcp Andrew Morton
2021-06-29  2:37 ` [patch 079/192] mm/memcg: improve refill_obj_stock() performance Andrew Morton
2021-06-29  2:37 ` [patch 080/192] mm/memcg: optimize user context object stock access Andrew Morton
2021-06-29  2:37 ` [patch 081/192] mm: memcg/slab: properly set up gfp flags for objcg pointer array Andrew Morton
2021-06-29  2:37 ` [patch 082/192] mm: memcg/slab: create a new set of kmalloc-cg-<n> caches Andrew Morton
2021-06-29  2:37 ` [patch 083/192] mm: memcg/slab: disable cache merging for KMALLOC_NORMAL caches Andrew Morton
2021-06-29  2:37 ` [patch 084/192] mm: memcontrol: fix root_mem_cgroup charging Andrew Morton
2021-06-29  2:37 ` [patch 085/192] mm: memcontrol: fix page charging in page replacement Andrew Morton
2021-06-29  2:37 ` [patch 086/192] mm: memcontrol: bail out early when !mm in get_mem_cgroup_from_mm Andrew Morton
2021-06-29  2:37 ` [patch 087/192] mm: memcontrol: remove the pgdata parameter of mem_cgroup_page_lruvec Andrew Morton
2021-06-29  2:37 ` [patch 088/192] mm: memcontrol: simplify lruvec_holds_page_lru_lock Andrew Morton
2021-06-29  2:37 ` [patch 089/192] mm: memcontrol: rename lruvec_holds_page_lru_lock to page_matches_lruvec Andrew Morton
2021-06-29  2:38 ` [patch 090/192] mm: memcontrol: simplify the logic of objcg pinning memcg Andrew Morton
2021-06-29  2:38 ` [patch 091/192] mm: memcontrol: move obj_cgroup_uncharge_pages() out of css_set_lock Andrew Morton
2021-06-29  2:38 ` [patch 092/192] mm: vmscan: remove noinline_for_stack Andrew Morton
2021-06-29  2:38 ` [patch 093/192] memcontrol: use flexible-array member Andrew Morton
2021-06-29  2:38 ` [patch 094/192] loop: use worker per cgroup instead of kworker Andrew Morton
2021-06-29  2:38 ` [patch 095/192] mm: charge active memcg when no mm is set Andrew Morton
2021-06-29  2:38 ` [patch 096/192] loop: charge i/o to mem and blk cg Andrew Morton
2021-06-29  2:38 ` [patch 097/192] mm: memcontrol: remove trailing semicolon in macros Andrew Morton
2021-06-29  2:38 ` [patch 098/192] perf: MAP_EXECUTABLE does not indicate VM_MAYEXEC Andrew Morton
2021-06-29  2:38 ` [patch 099/192] binfmt: remove in-tree usage of MAP_EXECUTABLE Andrew Morton
2021-06-29  2:38 ` [patch 100/192] mm: ignore MAP_EXECUTABLE in ksys_mmap_pgoff() Andrew Morton
2021-06-29  2:38 ` [patch 101/192] mm/mmap.c: logic of find_vma_intersection repeated in __do_munmap Andrew Morton
2021-06-29  2:38 ` [patch 102/192] mm/mmap: introduce unlock_range() for code cleanup Andrew Morton
2021-06-29  2:38 ` [patch 103/192] mm/mmap: use find_vma_intersection() in do_mmap() for overlap Andrew Morton
2021-06-29  2:38 ` [patch 104/192] mm/memory.c: fix comment of finish_mkwrite_fault() Andrew Morton
2021-06-29  2:38 ` [patch 105/192] mm: add vma_lookup(), update find_vma_intersection() comments Andrew Morton
2021-06-29  2:38 ` [patch 106/192] drm/i915/selftests: use vma_lookup() in __igt_mmap() Andrew Morton
2021-06-29  2:38 ` [patch 107/192] arch/arc/kernel/troubleshoot: use vma_lookup() instead of find_vma() Andrew Morton
2021-06-29  2:38 ` [patch 108/192] arch/arm64/kvm: use vma_lookup() instead of find_vma_intersection() Andrew Morton
2021-06-29  2:39 ` [patch 109/192] arch/powerpc/kvm/book3s_hv_uvmem: " Andrew Morton
2021-06-29  2:39 ` [patch 110/192] arch/powerpc/kvm/book3s: use vma_lookup() in kvmppc_hv_setup_htab_rma() Andrew Morton
2021-06-29  2:39 ` [patch 111/192] arch/mips/kernel/traps: use vma_lookup() instead of find_vma() Andrew Morton
2021-06-29  2:39 ` [patch 112/192] arch/m68k/kernel/sys_m68k: use vma_lookup() in sys_cacheflush() Andrew Morton
2021-06-29  2:39 ` [patch 113/192] x86/sgx: use vma_lookup() in sgx_encl_find() Andrew Morton
2021-06-29  2:39 ` [patch 114/192] virt/kvm: use vma_lookup() instead of find_vma_intersection() Andrew Morton
2021-06-29  2:39 ` [patch 115/192] vfio: " Andrew Morton
2021-06-29  2:39 ` [patch 116/192] net/ipv5/tcp: use vma_lookup() in tcp_zerocopy_receive() Andrew Morton
2021-06-29  2:39 ` [patch 117/192] drm/amdgpu: use vma_lookup() in amdgpu_ttm_tt_get_user_pages() Andrew Morton
2021-06-29  2:39 ` [patch 118/192] media: videobuf2: use vma_lookup() in get_vaddr_frames() Andrew Morton
2021-06-29  2:39 ` [patch 119/192] misc/sgi-gru/grufault: use vma_lookup() in gru_find_vma() Andrew Morton
2021-06-29  2:39 ` [patch 120/192] kernel/events/uprobes: use vma_lookup() in find_active_uprobe() Andrew Morton
2021-06-29  2:39 ` [patch 121/192] lib/test_hmm: use vma_lookup() in dmirror_migrate() Andrew Morton
2021-06-29  2:39 ` [patch 122/192] mm/ksm: use vma_lookup() in find_mergeable_vma() Andrew Morton
2021-06-29  2:39 ` [patch 123/192] mm/migrate: use vma_lookup() in do_pages_stat_array() Andrew Morton
2021-06-29  2:39 ` [patch 124/192] mm/mremap: use vma_lookup() in vma_to_resize() Andrew Morton
2021-06-29  2:39 ` [patch 125/192] mm/memory.c: use vma_lookup() in __access_remote_vm() Andrew Morton
2021-06-29  2:39 ` [patch 126/192] mm/mempolicy: " Andrew Morton
2021-06-29  2:39 ` [patch 127/192] mm: update legacy flush_tlb_* to use vma Andrew Morton
2021-06-29  2:39 ` [patch 128/192] mm: improve mprotect(R|W) efficiency on pages referenced once Andrew Morton
2021-06-29 17:50   ` Linus Torvalds
2021-06-30  0:12     ` Peter Xu
2021-06-30  1:39       ` Peter Xu
2021-06-30  2:25         ` Linus Torvalds
2021-06-30 16:42           ` Peter Xu
2021-06-30 18:03             ` Linus Torvalds
2021-07-01  1:27               ` Peter Xu
2021-07-01 18:29                 ` Linus Torvalds
2021-07-06  1:24                   ` Peter Xu
2021-06-29  2:40 ` [patch 129/192] h8300: remove unused variable Andrew Morton
2021-06-29  2:40 ` [patch 130/192] mm/dmapool: use DEVICE_ATTR_RO macro Andrew Morton
2021-06-29  2:40 ` [patch 131/192] mm, tracing: unify PFN format strings Andrew Morton
2021-06-29  2:40 ` [patch 132/192] mm/page_alloc: add an alloc_pages_bulk_array_node() helper Andrew Morton
2021-06-29  2:40 ` [patch 133/192] mm/vmalloc: switch to bulk allocator in __vmalloc_area_node() Andrew Morton
2021-06-29  2:40 ` [patch 134/192] mm/vmalloc: print a warning message first on failure Andrew Morton
2021-06-29  2:40 ` [patch 135/192] mm/vmalloc: remove quoted strings split across lines Andrew Morton
2021-06-29  2:40 ` [patch 136/192] mm/vmalloc: fallback to a single page allocator Andrew Morton
2021-06-29  2:40 ` [patch 137/192] mm: vmalloc: add cond_resched() in __vunmap() Andrew Morton
2021-06-29  2:40 ` [patch 138/192] printk: introduce dump_stack_lvl() Andrew Morton
2021-06-29  2:40 ` [patch 139/192] kasan: use dump_stack_lvl(KERN_ERR) to print stacks Andrew Morton
2021-06-29  2:40 ` [patch 140/192] kasan: test: improve failure message in KUNIT_EXPECT_KASAN_FAIL() Andrew Morton
2021-06-29  2:40 ` [patch 141/192] kasan: allow an architecture to disable inline instrumentation Andrew Morton
2021-06-29  2:40 ` [patch 142/192] kasan: allow architectures to provide an outline readiness check Andrew Morton
2021-06-29  2:40 ` [patch 143/192] mm: define default MAX_PTRS_PER_* in include/pgtable.h Andrew Morton
2021-06-29  2:40 ` [patch 144/192] kasan: use MAX_PTRS_PER_* for early shadow tables Andrew Morton
2021-06-29  2:40 ` [patch 145/192] kasan: rename CONFIG_KASAN_SW_TAGS_IDENTIFY to CONFIG_KASAN_TAGS_IDENTIFY Andrew Morton
2021-06-29  2:40 ` [patch 146/192] kasan: integrate the common part of two KASAN tag-based modes Andrew Morton
2021-06-29  2:40 ` [patch 147/192] kasan: add memory corruption identification support for hardware tag-based mode Andrew Morton
2021-06-29  2:41 ` [patch 148/192] mm: report which part of mem is being freed on initmem case Andrew Morton
2021-06-29  2:41 ` [patch 149/192] mm/mmzone.h: simplify is_highmem_idx() Andrew Morton
2021-06-29  2:41 ` [patch 150/192] mm: make __dump_page static Andrew Morton
2021-06-29  2:41 ` [patch 151/192] mm/page_alloc: bail out on fatal signal during reclaim/compaction retry attempt Andrew Morton
2021-06-29  2:41 ` [patch 152/192] mm/debug: factor PagePoisoned out of __dump_page Andrew Morton
2021-06-29  2:41 ` [patch 153/192] mm/page_owner: constify dump_page_owner Andrew Morton
2021-06-29  2:41 ` [patch 154/192] mm: make compound_head const-preserving Andrew Morton
2021-06-29  2:41 ` [patch 155/192] mm: constify get_pfnblock_flags_mask and get_pfnblock_migratetype Andrew Morton
2021-06-29  2:41 ` [patch 156/192] mm: constify page_count and page_ref_count Andrew Morton
2021-06-29  2:41 ` [patch 157/192] mm: optimise nth_page for contiguous memmap Andrew Morton
2021-06-29  2:41 ` [patch 158/192] mm/page_alloc: switch to pr_debug Andrew Morton
2021-06-29  2:41 ` [patch 159/192] kbuild: skip per-CPU BTF generation for pahole v1.18-v1.21 Andrew Morton
2021-06-29  2:41 ` Andrew Morton [this message]
2021-06-29  2:41 ` [patch 161/192] mm/page_alloc: convert per-cpu list protection to local_lock Andrew Morton
2021-06-29  2:41 ` [patch 162/192] mm/vmstat: convert NUMA statistics to basic NUMA counters Andrew Morton
2021-06-29  2:41 ` [patch 163/192] mm/vmstat: inline NUMA event counter updates Andrew Morton
2021-06-29  2:41 ` [patch 164/192] mm/page_alloc: batch the accounting updates in the bulk allocator Andrew Morton
2021-06-29  2:41 ` [patch 165/192] mm/page_alloc: reduce duration that IRQs are disabled for VM counters Andrew Morton
2021-06-29  2:41 ` [patch 166/192] mm/page_alloc: explicitly acquire the zone lock in __free_pages_ok Andrew Morton
2021-06-29  2:42 ` [patch 167/192] mm/page_alloc: avoid conflating IRQs disabled with zone->lock Andrew Morton
2021-06-29  2:42 ` [patch 168/192] mm/page_alloc: update PGFREE outside the zone lock in __free_pages_ok Andrew Morton
2021-06-29  2:42 ` [patch 169/192] mm: page_alloc: dump migrate-failed pages only at -EBUSY Andrew Morton
2021-06-29  2:42 ` [patch 170/192] mm/page_alloc: delete vm.percpu_pagelist_fraction Andrew Morton
2021-06-29  2:42 ` [patch 171/192] mm/page_alloc: disassociate the pcp->high from pcp->batch Andrew Morton
2021-06-29  2:42 ` [patch 172/192] mm/page_alloc: adjust pcp->high after CPU hotplug events Andrew Morton
2021-06-29  2:42 ` [patch 173/192] mm/page_alloc: scale the number of pages that are batch freed Andrew Morton
2021-06-29  2:42 ` [patch 174/192] mm/page_alloc: limit the number of pages on PCP lists when reclaim is active Andrew Morton
2021-06-29  2:42 ` [patch 175/192] mm/page_alloc: introduce vm.percpu_pagelist_high_fraction Andrew Morton
2021-06-29  2:42 ` [patch 176/192] mm: drop SECTION_SHIFT in code comments Andrew Morton
2021-06-29  2:42 ` [patch 177/192] mm/page_alloc: improve memmap_pages dbg msg Andrew Morton
2021-06-29  2:42 ` [patch 178/192] mm/page_alloc: fix counting of managed_pages Andrew Morton
2021-06-29  2:42 ` [patch 179/192] mm/page_alloc: move free_the_page Andrew Morton
2021-06-29  2:42 ` [patch 180/192] alpha: remove DISCONTIGMEM and NUMA Andrew Morton
2021-06-29  2:42 ` [patch 181/192] arc: update comment about HIGHMEM implementation Andrew Morton
2021-06-29  2:42 ` [patch 182/192] arc: remove support for DISCONTIGMEM Andrew Morton
2021-06-29  2:42 ` [patch 183/192] m68k: " Andrew Morton
2021-06-29  2:42 ` [patch 184/192] mm: remove CONFIG_DISCONTIGMEM Andrew Morton
2021-06-29  2:42 ` [patch 185/192] arch, mm: remove stale mentions of DISCONIGMEM Andrew Morton
2021-06-29  2:42 ` [patch 186/192] docs: remove description of DISCONTIGMEM Andrew Morton
2021-06-29  2:43 ` [patch 187/192] mm: replace CONFIG_NEED_MULTIPLE_NODES with CONFIG_NUMA Andrew Morton
2021-06-29  2:43 ` [patch 188/192] mm: replace CONFIG_FLAT_NODE_MEM_MAP with CONFIG_FLATMEM Andrew Morton
2021-06-29  2:43 ` [patch 189/192] mm/page_alloc: allow high-order pages to be stored on the per-cpu lists Andrew Morton
2021-06-29  2:43 ` [patch 190/192] mm/page_alloc: split pcp->high across all online CPUs for cpuless nodes Andrew Morton
2021-06-29  2:43 ` [patch 191/192] mm,hwpoison: send SIGBUS with error virutal address Andrew Morton
2021-06-29  2:43 ` [patch 192/192] mm,hwpoison: make get_hwpoison_page() call get_any_page() Andrew Morton
