From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, bigeasy@linutronix.de,
	brouer@redhat.com, cl@linux.com, iamjoonsoo.kim@lge.com,
	jannh@google.com, linux-mm@kvack.org,
	mgorman@techsingularity.net, mm-commits@vger.kernel.org,
	penberg@kernel.org, rientjes@google.com, sven@narfation.org,
	tglx@linutronix.de, torvalds@linux-foundation.org,
	vbabka@suse.cz
Subject: [patch 043/212] mm, slub: convert kmem_cpu_slab protection to local_lock
Date: Thu, 02 Sep 2021 14:52:15 -0700
Message-ID: <20210902215215.7TuZyn_O7%akpm@linux-foundation.org>
In-Reply-To: <20210902144820.78957dff93d7bea620d55a89@linux-foundation.org>

From: Vlastimil Babka <vbabka@suse.cz>
Subject: mm, slub: convert kmem_cpu_slab protection to local_lock

Embed a local_lock into struct kmem_cache_cpu and use the irq-safe
versions of local_lock instead of plain local_irq_save/restore.  On
!PREEMPT_RT that's equivalent, with better lockdep visibility.  On
PREEMPT_RT it means better preemption, because the critical sections no
longer run with irqs disabled.
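
For illustration, a minimal sketch of the conversion pattern (a
hypothetical percpu structure, not the actual SLUB code):

	#include <linux/local_lock.h>
	#include <linux/percpu.h>

	struct my_pcpu_data {		/* hypothetical example */
		local_lock_t lock;	/* documents what it protects */
		void *freelist;
	};

	static DEFINE_PER_CPU(struct my_pcpu_data, my_data) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	static void update_freelist(void *obj)
	{
		unsigned long flags;

		/* was: local_irq_save(flags) -- invisible to lockdep */
		local_lock_irqsave(&my_data.lock, flags);
		this_cpu_write(my_data.freelist, obj);
		local_unlock_irqrestore(&my_data.lock, flags);
	}

On !PREEMPT_RT this compiles to the same irq save/restore plus lockdep
annotations; on PREEMPT_RT the lock becomes a per-cpu spinlock that
leaves irqs enabled.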

However, the cost on PREEMPT_RT is the loss of the lockless fast paths,
which only work with the cpu freelist.  Those are designed to detect and
recover from being preempted by other conflicting operations (either
fast or slow path), but the slow path operations assume they cannot be
preempted by a fast path operation, which is guaranteed naturally with
disabled irqs.  With local locks on PREEMPT_RT, the fast paths now also
need to take the local lock to avoid races.
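
For reference, the shape of the existing detection scheme, condensed
from do_slab_free() (a sketch, not the verbatim code; s, c, page, head,
tail_obj and tid are as in mm/slub.c):

	redo:
		tid = this_cpu_read(s->cpu_slab->tid);	/* transaction snapshot */
		c = raw_cpu_ptr(s->cpu_slab);
		barrier();

		if (likely(page == c->page)) {
			void **freelist = READ_ONCE(c->freelist);

			set_freepointer(s, tail_obj, freelist);
			if (unlikely(!this_cpu_cmpxchg_double(
					s->cpu_slab->freelist, s->cpu_slab->tid,
					freelist, tid,
					head, next_tid(tid)))) {
				/* tid changed: preempted or migrated, retry */
				goto redo;
			}
		}

If anything else touched this cpu's slab between the tid snapshot and
the cmpxchg, the tid no longer matches and the operation restarts.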

In the allocation fastpath slab_alloc_node() we can just defer to the
slowpath __slab_alloc(), which also works with the cpu freelist, but
under the local lock.  In the free fastpath do_slab_free() we have to
add a new local-lock-protected version of freeing to the cpu freelist,
as the existing slowpath only works with the page freelist.
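
Condensed, the two changes look like this (simplified from the diff
below; retry and slowpath-fallback paths are omitted):

	/* alloc fastpath: on PREEMPT_RT always take the slowpath, which
	 * grabs cpu_slab->lock but can still serve from c->freelist */
	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
	    unlikely(!object || !page || !node_match(page, node)))
		object = __slab_alloc(s, gfpflags, node, addr, c);

	/* free fastpath on PREEMPT_RT: push onto the cpu freelist under
	 * the local lock instead of the lockless cmpxchg */
	local_lock(&s->cpu_slab->lock);
	c = this_cpu_ptr(s->cpu_slab);
	if (likely(page == c->page)) {
		set_freepointer(s, tail_obj, c->freelist);
		c->freelist = head;
		c->tid = next_tid(c->tid);
	}
	local_unlock(&s->cpu_slab->lock);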

Also update the comment about the locking scheme in SLUB to reflect the
changes done by this series.

[efault@gmx.de: use local_lock() without irq in PREEMPT_RT scope, debugging of RT crashes resulting in put_cpu_partial() locking changes]
[vbabka@suse.cz: simplify lockdep_assert_held() in ___slab_alloc()]
  Link: https://lkml.kernel.org/r/7e9ccf34-57d1-786b-2dfd-3b9ba78e1b32@suse.cz
[vbabka@suse.cz: fix kmem_cache_cpu fields alignment for double cmpxchg]
  Link: https://lkml.kernel.org/r/e907c2b6-6df1-8038-8c6c-aa9c1fd11259@suse.cz
Link: https://lkml.kernel.org/r/20210805152000.12817-36-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Sven Eckelmann <sven@narfation.org>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/slub_def.h |    6 +
 mm/slub.c                |  142 ++++++++++++++++++++++++++++---------
 2 files changed, 115 insertions(+), 33 deletions(-)

--- a/include/linux/slub_def.h~mm-slub-convert-kmem_cpu_slab-protection-to-local_lock
+++ a/include/linux/slub_def.h
@@ -10,6 +10,7 @@
 #include <linux/kfence.h>
 #include <linux/kobject.h>
 #include <linux/reciprocal_div.h>
+#include <linux/local_lock.h>
 
 enum stat_item {
 	ALLOC_FASTPATH,		/* Allocation from cpu slab */
@@ -40,6 +41,10 @@ enum stat_item {
 	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
 	NR_SLUB_STAT_ITEMS };
 
+/*
+ * When changing the layout, make sure freelist and tid are still compatible
+ * with this_cpu_cmpxchg_double() alignment requirements.
+ */
 struct kmem_cache_cpu {
 	void **freelist;	/* Pointer to next available object */
 	unsigned long tid;	/* Globally unique transaction id */
@@ -47,6 +52,7 @@ struct kmem_cache_cpu {
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 	struct page *partial;	/* Partially allocated frozen slabs */
 #endif
+	local_lock_t lock;	/* Protects the fields above */
 #ifdef CONFIG_SLUB_STATS
 	unsigned stat[NR_SLUB_STAT_ITEMS];
 #endif
--- a/mm/slub.c~mm-slub-convert-kmem_cpu_slab-protection-to-local_lock
+++ a/mm/slub.c
@@ -46,13 +46,21 @@
 /*
  * Lock order:
  *   1. slab_mutex (Global Mutex)
- *   2. node->list_lock
- *   3. slab_lock(page) (Only on some arches and for debugging)
+ *   2. node->list_lock (Spinlock)
+ *   3. kmem_cache->cpu_slab->lock (Local lock)
+ *   4. slab_lock(page) (Only on some arches or for debugging)
+ *   5. object_map_lock (Only for debugging)
  *
  *   slab_mutex
  *
  *   The role of the slab_mutex is to protect the list of all the slabs
  *   and to synchronize major metadata changes to slab cache structures.
+ *   Also synchronizes memory hotplug callbacks.
+ *
+ *   slab_lock
+ *
+ *   The slab_lock is a wrapper around the page lock, thus it is a bit
+ *   spinlock.
  *
  *   The slab_lock is only used for debugging and on arches that do not
  *   have the ability to do a cmpxchg_double. It only protects:
@@ -61,6 +69,8 @@
  *	C. page->objects	-> Number of objects in page
  *	D. page->frozen		-> frozen state
  *
+ *   Frozen slabs
+ *
  *   If a slab is frozen then it is exempt from list management. It is not
  *   on any list except per cpu partial list. The processor that froze the
  *   slab is the one who can perform list operations on the page. Other
@@ -68,6 +78,8 @@
  *   froze the slab is the only one that can retrieve the objects from the
  *   page's freelist.
  *
+ *   list_lock
+ *
  *   The list_lock protects the partial and full list on each node and
  *   the partial slab counter. If taken then no new slabs may be added or
  *   removed from the lists nor make the number of partial slabs be modified.
@@ -79,10 +91,36 @@
  *   slabs, operations can continue without any centralized lock. F.e.
  *   allocating a long series of objects that fill up slabs does not require
  *   the list lock.
- *   Interrupts are disabled during allocation and deallocation in order to
- *   make the slab allocator safe to use in the context of an irq. In addition
- *   interrupts are disabled to ensure that the processor does not change
- *   while handling per_cpu slabs, due to kernel preemption.
+ *
+ *   cpu_slab->lock local lock
+ *
+ *   This lock protects slowpath manipulation of all kmem_cache_cpu fields
+ *   except the stat counters. This is a percpu structure manipulated only by
+ *   the local cpu, so the lock protects against being preempted or interrupted
+ *   by an irq. Fast path operations rely on lockless operations instead.
+ *   On PREEMPT_RT, the local lock does not actually disable irqs (and thus
+ *   prevent the lockless operations), so fastpath operations also need to take
+ *   the lock and are no longer lockless.
+ *
+ *   lockless fastpaths
+ *
+ *   The fast path allocation (slab_alloc_node()) and freeing (do_slab_free())
+ *   are fully lockless when satisfied from the percpu slab (and when
+ *   cmpxchg_double is possible to use, otherwise slab_lock is taken).
+ *   They also don't disable preemption or migration or irqs. They rely on
+ *   the transaction id (tid) field to detect being preempted or moved to
+ *   another cpu.
+ *
+ *   irq, preemption, migration considerations
+ *
+ *   Interrupts are disabled as part of list_lock or local_lock operations, or
+ *   around the slab_lock operation, in order to make the slab allocator safe
+ *   to use in the context of an irq.
+ *
+ *   In addition, preemption (or migration on PREEMPT_RT) is disabled in the
+ *   allocation slowpath, bulk allocation, and put_cpu_partial(), so that the
+ *   local cpu doesn't change in the process and e.g. the kmem_cache_cpu pointer
+ *   doesn't have to be revalidated in each section protected by the local lock.
  *
  * SLUB assigns one slab for allocation to each processor.
  * Allocations only occur from these slabs called cpu slabs.
@@ -2231,9 +2269,13 @@ static inline void note_cmpxchg_failure(
 static void init_kmem_cache_cpus(struct kmem_cache *s)
 {
 	int cpu;
+	struct kmem_cache_cpu *c;
 
-	for_each_possible_cpu(cpu)
-		per_cpu_ptr(s->cpu_slab, cpu)->tid = init_tid(cpu);
+	for_each_possible_cpu(cpu) {
+		c = per_cpu_ptr(s->cpu_slab, cpu);
+		local_lock_init(&c->lock);
+		c->tid = init_tid(cpu);
+	}
 }
 
 /*
@@ -2444,10 +2486,10 @@ static void unfreeze_partials(struct kme
 	struct page *partial_page;
 	unsigned long flags;
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	partial_page = this_cpu_read(s->cpu_slab->partial);
 	this_cpu_write(s->cpu_slab->partial, NULL);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (partial_page)
 		__unfreeze_partials(s, partial_page);
@@ -2480,7 +2522,7 @@ static void put_cpu_partial(struct kmem_
 	int pages = 0;
 	int pobjects = 0;
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 
 	oldpage = this_cpu_read(s->cpu_slab->partial);
 
@@ -2508,7 +2550,7 @@ static void put_cpu_partial(struct kmem_
 
 	this_cpu_write(s->cpu_slab->partial, page);
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (page_to_unfreeze) {
 		__unfreeze_partials(s, page_to_unfreeze);
@@ -2532,7 +2574,7 @@ static inline void flush_slab(struct kme
 	struct page *page;
 
 	if (lock)
-		local_irq_save(flags);
+		local_lock_irqsave(&s->cpu_slab->lock, flags);
 
 	freelist = c->freelist;
 	page = c->page;
@@ -2542,7 +2584,7 @@ static inline void flush_slab(struct kme
 	c->tid = next_tid(c->tid);
 
 	if (lock)
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (page)
 		deactivate_slab(s, page, freelist);
@@ -2849,9 +2891,9 @@ redo:
 		goto deactivate_slab;
 
 	/* must check again c->page in case we got preempted and it changed */
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	if (unlikely(page != c->page)) {
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		goto reread_page;
 	}
 	freelist = c->freelist;
@@ -2862,7 +2904,7 @@ redo:
 
 	if (!freelist) {
 		c->page = NULL;
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		stat(s, DEACTIVATE_BYPASS);
 		goto new_slab;
 	}
@@ -2871,7 +2913,7 @@ redo:
 
 load_freelist:
 
-	lockdep_assert_irqs_disabled();
+	lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock));
 
 	/*
 	 * freelist is pointing to the list of objects to be used.
@@ -2881,39 +2923,39 @@ load_freelist:
 	VM_BUG_ON(!c->page->frozen);
 	c->freelist = get_freepointer(s, freelist);
 	c->tid = next_tid(c->tid);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 	return freelist;
 
 deactivate_slab:
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	if (page != c->page) {
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		goto reread_page;
 	}
 	freelist = c->freelist;
 	c->page = NULL;
 	c->freelist = NULL;
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 	deactivate_slab(s, page, freelist);
 
 new_slab:
 
 	if (slub_percpu_partial(c)) {
-		local_irq_save(flags);
+		local_lock_irqsave(&s->cpu_slab->lock, flags);
 		if (unlikely(c->page)) {
-			local_irq_restore(flags);
+			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 			goto reread_page;
 		}
 		if (unlikely(!slub_percpu_partial(c))) {
-			local_irq_restore(flags);
+			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 			/* we were preempted and partial list got empty */
 			goto new_objects;
 		}
 
 		page = c->page = slub_percpu_partial(c);
 		slub_set_percpu_partial(c, page);
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		stat(s, CPU_PARTIAL_ALLOC);
 		goto redo;
 	}
@@ -2966,7 +3008,7 @@ check_new_page:
 
 retry_load_page:
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	if (unlikely(c->page)) {
 		void *flush_freelist = c->freelist;
 		struct page *flush_page = c->page;
@@ -2975,7 +3017,7 @@ retry_load_page:
 		c->freelist = NULL;
 		c->tid = next_tid(c->tid);
 
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 		deactivate_slab(s, flush_page, flush_freelist);
 
@@ -3094,7 +3136,15 @@ redo:
 
 	object = c->freelist;
 	page = c->page;
-	if (unlikely(!object || !page || !node_match(page, node))) {
+	/*
+	 * We cannot use the lockless fastpath on PREEMPT_RT because if a
+	 * slowpath has taken the local_lock_irqsave(), it is not protected
+	 * against a fast path operation in an irq handler. So we need to take
+	 * the slow path which uses local_lock. It is still relatively fast if
+	 * there is a suitable cpu freelist.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
+	    unlikely(!object || !page || !node_match(page, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
@@ -3354,6 +3404,7 @@ redo:
 	barrier();
 
 	if (likely(page == c->page)) {
+#ifndef CONFIG_PREEMPT_RT
 		void **freelist = READ_ONCE(c->freelist);
 
 		set_freepointer(s, tail_obj, freelist);
@@ -3366,6 +3417,31 @@ redo:
 			note_cmpxchg_failure("slab_free", s, tid);
 			goto redo;
 		}
+#else /* CONFIG_PREEMPT_RT */
+		/*
+		 * We cannot use the lockless fastpath on PREEMPT_RT because if
+		 * a slowpath has taken the local_lock_irqsave(), it is not
+		 * protected against a fast path operation in an irq handler. So
+		 * we need to take the local_lock. We shouldn't simply defer to
+		 * __slab_free() as that wouldn't use the cpu freelist at all.
+		 */
+		void **freelist;
+
+		local_lock(&s->cpu_slab->lock);
+		c = this_cpu_ptr(s->cpu_slab);
+		if (unlikely(page != c->page)) {
+			local_unlock(&s->cpu_slab->lock);
+			goto redo;
+		}
+		tid = c->tid;
+		freelist = c->freelist;
+
+		set_freepointer(s, tail_obj, freelist);
+		c->freelist = head;
+		c->tid = next_tid(tid);
+
+		local_unlock(&s->cpu_slab->lock);
+#endif
 		stat(s, FREE_FASTPATH);
 	} else
 		__slab_free(s, page, head, tail_obj, cnt, addr);
@@ -3544,7 +3620,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 	 * handlers invoking normal fastpath.
 	 */
 	c = slub_get_cpu_ptr(s->cpu_slab);
-	local_irq_disable();
+	local_lock_irq(&s->cpu_slab->lock);
 
 	for (i = 0; i < size; i++) {
 		void *object = kfence_alloc(s, s->object_size, flags);
@@ -3565,7 +3641,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 			 */
 			c->tid = next_tid(c->tid);
 
-			local_irq_enable();
+			local_unlock_irq(&s->cpu_slab->lock);
 
 			/*
 			 * Invoking slow path likely have side-effect
@@ -3579,7 +3655,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 			c = this_cpu_ptr(s->cpu_slab);
 			maybe_wipe_obj_freeptr(s, p[i]);
 
-			local_irq_disable();
+			local_lock_irq(&s->cpu_slab->lock);
 
 			continue; /* goto for-loop */
 		}
@@ -3588,7 +3664,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 		maybe_wipe_obj_freeptr(s, p[i]);
 	}
 	c->tid = next_tid(c->tid);
-	local_irq_enable();
+	local_unlock_irq(&s->cpu_slab->lock);
 	slub_put_cpu_ptr(s->cpu_slab);
 
 	/*
_

