From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, bigeasy@linutronix.de,
	brouer@redhat.com, cl@linux.com, iamjoonsoo.kim@lge.com,
	jannh@google.com, linux-mm@kvack.org,
	mgorman@techsingularity.net, mm-commits@vger.kernel.org,
	penberg@kernel.org, quic_qiancai@quicinc.com,
	rientjes@google.com, tglx@linutronix.de,
	torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 033/147] mm, slub: convert kmem_cpu_slab protection to local_lock
Date: Tue, 07 Sep 2021 19:54:43 -0700
Message-ID: <20210908025443.H2cpHSr9P%akpm@linux-foundation.org>
In-Reply-To: <20210907195226.14b1d22a07c085b22968b933@linux-foundation.org>

From: Vlastimil Babka <vbabka@suse.cz>
Subject: mm, slub: convert kmem_cpu_slab protection to local_lock

Embed local_lock into struct kmem_cache_cpu and use the irq-safe versions
of local_lock instead of plain local_irq_save/restore.  On !PREEMPT_RT
that's equivalent, with better lockdep visibility.  On PREEMPT_RT it means
the protected sections become preemptible, since the local lock is
implemented as a per-CPU spinlock there.
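
As a minimal sketch of the resulting pattern (using the generic
<linux/local_lock.h> API; "my_pcpu" is a hypothetical example structure,
not part of this patch; the real change to struct kmem_cache_cpu is in
the diff below):

    struct my_pcpu {
            local_lock_t lock;      /* protects the fields below */
            void *data;
    };
    static DEFINE_PER_CPU(struct my_pcpu, my_pcpu) = {
            .lock = INIT_LOCAL_LOCK(lock),
    };

    unsigned long flags;

    /* before: local_irq_save(flags); ... local_irq_restore(flags); */
    local_lock_irqsave(&my_pcpu.lock, flags);
    /* manipulate this cpu's fields; lockdep now tracks the section */
    local_unlock_irqrestore(&my_pcpu.lock, flags);

On !PREEMPT_RT this maps to local_irq_save/restore plus the lockdep
annotations; on PREEMPT_RT it takes a per-CPU spinlock without disabling
irqs.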

However, the cost on PREEMPT_RT is the loss of the lockless fast paths,
which only work with the cpu freelist.  Those are designed to detect and
recover from being preempted by other conflicting operations (both fast
and slow path), but the slow path operations assume they cannot be
preempted by a fast path operation, which irq disabling naturally
guarantees.  With local locks on PREEMPT_RT, the fast paths now also need
to take the local lock to avoid races.
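
For reference, the preemption detection mentioned above is the tid check
in the existing lockless fastpath, roughly (condensed from the current
slab_alloc_node(); unchanged on !PREEMPT_RT):

    tid = READ_ONCE(c->tid);
    object = c->freelist;
    ...
    /*
     * Succeeds only if freelist and tid are both unchanged, i.e. no
     * conflicting operation ran on this cpu in the meantime.
     */
    if (unlikely(!this_cpu_cmpxchg_double(
                    s->cpu_slab->freelist, s->cpu_slab->tid,
                    object, tid,
                    next_object, next_tid(tid)))) {
            note_cmpxchg_failure("slab_alloc", s, tid);
            goto redo;
    }

On PREEMPT_RT the local lock does not disable irqs, so a slowpath section
holding it could be interrupted by an irq handler running this cmpxchg,
which is why the fastpaths must take the lock there.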

In the allocation fastpath slab_alloc_node() we can simply defer to the
slowpath __slab_alloc(), which also works with the cpu freelist, but under
the local lock.  In the free fastpath do_slab_free() we have to add a new
local-lock-protected version of freeing to the cpu freelist, as the
existing slowpath only works with the page freelist.
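
Condensed, the two RT fastpaths end up shaped like this (a sketch of the
slab_alloc_node() and do_slab_free() hunks below):

    /* alloc: always take the slowpath, which still uses the cpu
     * freelist, now under the local lock */
    if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
        unlikely(!object || !page || !node_match(page, node)))
            object = __slab_alloc(s, gfpflags, node, addr, c);

    /* free: a new locked variant that still uses the cpu freelist */
    local_lock(&s->cpu_slab->lock);
    c = this_cpu_ptr(s->cpu_slab);
    if (unlikely(page != c->page)) {
            local_unlock(&s->cpu_slab->lock);
            goto redo;
    }
    set_freepointer(s, tail_obj, c->freelist);
    c->freelist = head;
    c->tid = next_tid(c->tid);
    local_unlock(&s->cpu_slab->lock);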

Also update the comment describing the locking scheme in SLUB to reflect
the changes done by this series.

[ Mike Galbraith <efault@gmx.de>: use local_lock() without irqs in the
  PREEMPT_RT scope; debugging of RT crashes that led to the
  put_cpu_partial() locking changes ]
Link: https://lkml.kernel.org/r/20210904105003.11688-34-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Qian Cai <quic_qiancai@quicinc.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/slub_def.h |    6 +
 mm/slub.c                |  146 ++++++++++++++++++++++++++++---------
 2 files changed, 117 insertions(+), 35 deletions(-)

--- a/include/linux/slub_def.h~mm-slub-convert-kmem_cpu_slab-protection-to-local_lock
+++ a/include/linux/slub_def.h
@@ -10,6 +10,7 @@
 #include <linux/kfence.h>
 #include <linux/kobject.h>
 #include <linux/reciprocal_div.h>
+#include <linux/local_lock.h>
 
 enum stat_item {
 	ALLOC_FASTPATH,		/* Allocation from cpu slab */
@@ -40,6 +41,10 @@ enum stat_item {
 	CPU_PARTIAL_DRAIN,	/* Drain cpu partial to node partial */
 	NR_SLUB_STAT_ITEMS };
 
+/*
+ * When changing the layout, make sure freelist and tid are still compatible
+ * with this_cpu_cmpxchg_double() alignment requirements.
+ */
 struct kmem_cache_cpu {
 	void **freelist;	/* Pointer to next available object */
 	unsigned long tid;	/* Globally unique transaction id */
@@ -47,6 +52,7 @@ struct kmem_cache_cpu {
 #ifdef CONFIG_SLUB_CPU_PARTIAL
 	struct page *partial;	/* Partially allocated frozen slabs */
 #endif
+	local_lock_t lock;	/* Protects the fields above */
 #ifdef CONFIG_SLUB_STATS
 	unsigned stat[NR_SLUB_STAT_ITEMS];
 #endif
--- a/mm/slub.c~mm-slub-convert-kmem_cpu_slab-protection-to-local_lock
+++ a/mm/slub.c
@@ -46,13 +46,21 @@
 /*
  * Lock order:
  *   1. slab_mutex (Global Mutex)
- *   2. node->list_lock
- *   3. slab_lock(page) (Only on some arches and for debugging)
+ *   2. node->list_lock (Spinlock)
+ *   3. kmem_cache->cpu_slab->lock (Local lock)
+ *   4. slab_lock(page) (Only on some arches or for debugging)
+ *   5. object_map_lock (Only for debugging)
  *
  *   slab_mutex
  *
  *   The role of the slab_mutex is to protect the list of all the slabs
  *   and to synchronize major metadata changes to slab cache structures.
+ *   Also synchronizes memory hotplug callbacks.
+ *
+ *   slab_lock
+ *
+ *   The slab_lock is a wrapper around the page lock, thus it is a bit
+ *   spinlock.
  *
  *   The slab_lock is only used for debugging and on arches that do not
  *   have the ability to do a cmpxchg_double. It only protects:
@@ -61,6 +69,8 @@
  *	C. page->objects	-> Number of objects in page
  *	D. page->frozen		-> frozen state
  *
+ *   Frozen slabs
+ *
  *   If a slab is frozen then it is exempt from list management. It is not
  *   on any list except per cpu partial list. The processor that froze the
  *   slab is the one who can perform list operations on the page. Other
@@ -68,6 +78,8 @@
  *   froze the slab is the only one that can retrieve the objects from the
  *   page's freelist.
  *
+ *   list_lock
+ *
  *   The list_lock protects the partial and full list on each node and
  *   the partial slab counter. If taken then no new slabs may be added or
  *   removed from the lists nor make the number of partial slabs be modified.
@@ -79,10 +91,36 @@
  *   slabs, operations can continue without any centralized lock. F.e.
  *   allocating a long series of objects that fill up slabs does not require
  *   the list lock.
- *   Interrupts are disabled during allocation and deallocation in order to
- *   make the slab allocator safe to use in the context of an irq. In addition
- *   interrupts are disabled to ensure that the processor does not change
- *   while handling per_cpu slabs, due to kernel preemption.
+ *
+ *   cpu_slab->lock local lock
+ *
+ *   This lock protects slowpath manipulation of all kmem_cache_cpu fields
+ *   except the stat counters. This is a percpu structure manipulated only by
+ *   the local cpu, so the lock protects against being preempted or interrupted
+ *   by an irq. Fast path operations rely on lockless operations instead.
+ *   On PREEMPT_RT, the local lock does not actually disable irqs (and thus
+ *   prevent the lockless operations), so fastpath operations also need to take
+ *   the lock and are no longer lockless.
+ *
+ *   lockless fastpaths
+ *
+ *   The fast path allocation (slab_alloc_node()) and freeing (do_slab_free())
+ *   are fully lockless when satisfied from the percpu slab (and when
+ *   cmpxchg_double is possible to use, otherwise slab_lock is taken).
+ *   They also don't disable preemption or migration or irqs. They rely on
+ *   the transaction id (tid) field to detect being preempted or moved to
+ *   another cpu.
+ *
+ *   irq, preemption, migration considerations
+ *
+ *   Interrupts are disabled as part of list_lock or local_lock operations, or
+ *   around the slab_lock operation, in order to make the slab allocator safe
+ *   to use in the context of an irq.
+ *
+ *   In addition, preemption (or migration on PREEMPT_RT) is disabled in the
+ *   allocation slowpath, bulk allocation, and put_cpu_partial(), so that the
+ *   local cpu doesn't change in the process and e.g. the kmem_cache_cpu pointer
+ *   doesn't have to be revalidated in each section protected by the local lock.
  *
  * SLUB assigns one slab for allocation to each processor.
  * Allocations only occur from these slabs called cpu slabs.
@@ -2250,9 +2288,13 @@ static inline void note_cmpxchg_failure(
 static void init_kmem_cache_cpus(struct kmem_cache *s)
 {
 	int cpu;
+	struct kmem_cache_cpu *c;
 
-	for_each_possible_cpu(cpu)
-		per_cpu_ptr(s->cpu_slab, cpu)->tid = init_tid(cpu);
+	for_each_possible_cpu(cpu) {
+		c = per_cpu_ptr(s->cpu_slab, cpu);
+		local_lock_init(&c->lock);
+		c->tid = init_tid(cpu);
+	}
 }
 
 /*
@@ -2463,10 +2505,10 @@ static void unfreeze_partials(struct kme
 	struct page *partial_page;
 	unsigned long flags;
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	partial_page = this_cpu_read(s->cpu_slab->partial);
 	this_cpu_write(s->cpu_slab->partial, NULL);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (partial_page)
 		__unfreeze_partials(s, partial_page);
@@ -2499,7 +2541,7 @@ static void put_cpu_partial(struct kmem_
 	int pages = 0;
 	int pobjects = 0;
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 
 	oldpage = this_cpu_read(s->cpu_slab->partial);
 
@@ -2527,7 +2569,7 @@ static void put_cpu_partial(struct kmem_
 
 	this_cpu_write(s->cpu_slab->partial, page);
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (page_to_unfreeze) {
 		__unfreeze_partials(s, page_to_unfreeze);
@@ -2549,7 +2591,7 @@ static inline void flush_slab(struct kme
 	struct page *page;
 	void *freelist;
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 
 	page = c->page;
 	freelist = c->freelist;
@@ -2558,7 +2600,7 @@ static inline void flush_slab(struct kme
 	c->freelist = NULL;
 	c->tid = next_tid(c->tid);
 
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 	if (page) {
 		deactivate_slab(s, page, freelist);
@@ -2780,8 +2822,6 @@ static inline bool pfmemalloc_match_unsa
  * The page is still frozen if the return value is not NULL.
  *
  * If this function returns NULL then the page has been unfrozen.
- *
- * This function must be called with interrupt disabled.
  */
 static inline void *get_freelist(struct kmem_cache *s, struct page *page)
 {
@@ -2789,6 +2829,8 @@ static inline void *get_freelist(struct
 	unsigned long counters;
 	void *freelist;
 
+	lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock));
+
 	do {
 		freelist = page->freelist;
 		counters = page->counters;
@@ -2873,9 +2915,9 @@ redo:
 		goto deactivate_slab;
 
 	/* must check again c->page in case we got preempted and it changed */
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	if (unlikely(page != c->page)) {
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		goto reread_page;
 	}
 	freelist = c->freelist;
@@ -2886,7 +2928,7 @@ redo:
 
 	if (!freelist) {
 		c->page = NULL;
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		stat(s, DEACTIVATE_BYPASS);
 		goto new_slab;
 	}
@@ -2895,7 +2937,7 @@ redo:
 
 load_freelist:
 
-	lockdep_assert_irqs_disabled();
+	lockdep_assert_held(this_cpu_ptr(&s->cpu_slab->lock));
 
 	/*
 	 * freelist is pointing to the list of objects to be used.
@@ -2905,39 +2947,39 @@ load_freelist:
 	VM_BUG_ON(!c->page->frozen);
 	c->freelist = get_freepointer(s, freelist);
 	c->tid = next_tid(c->tid);
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 	return freelist;
 
 deactivate_slab:
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	if (page != c->page) {
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		goto reread_page;
 	}
 	freelist = c->freelist;
 	c->page = NULL;
 	c->freelist = NULL;
-	local_irq_restore(flags);
+	local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 	deactivate_slab(s, page, freelist);
 
 new_slab:
 
 	if (slub_percpu_partial(c)) {
-		local_irq_save(flags);
+		local_lock_irqsave(&s->cpu_slab->lock, flags);
 		if (unlikely(c->page)) {
-			local_irq_restore(flags);
+			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 			goto reread_page;
 		}
 		if (unlikely(!slub_percpu_partial(c))) {
-			local_irq_restore(flags);
+			local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 			/* we were preempted and partial list got empty */
 			goto new_objects;
 		}
 
 		page = c->page = slub_percpu_partial(c);
 		slub_set_percpu_partial(c, page);
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 		stat(s, CPU_PARTIAL_ALLOC);
 		goto redo;
 	}
@@ -2990,7 +3032,7 @@ check_new_page:
 
 retry_load_page:
 
-	local_irq_save(flags);
+	local_lock_irqsave(&s->cpu_slab->lock, flags);
 	if (unlikely(c->page)) {
 		void *flush_freelist = c->freelist;
 		struct page *flush_page = c->page;
@@ -2999,7 +3041,7 @@ retry_load_page:
 		c->freelist = NULL;
 		c->tid = next_tid(c->tid);
 
-		local_irq_restore(flags);
+		local_unlock_irqrestore(&s->cpu_slab->lock, flags);
 
 		deactivate_slab(s, flush_page, flush_freelist);
 
@@ -3118,7 +3160,15 @@ redo:
 
 	object = c->freelist;
 	page = c->page;
-	if (unlikely(!object || !page || !node_match(page, node))) {
+	/*
+	 * We cannot use the lockless fastpath on PREEMPT_RT because if a
+	 * slowpath has taken the local_lock_irqsave(), it is not protected
+	 * against a fast path operation in an irq handler. So we need to take
+	 * the slow path which uses local_lock. It is still relatively fast if
+	 * there is a suitable cpu freelist.
+	 */
+	if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
+	    unlikely(!object || !page || !node_match(page, node))) {
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 	} else {
 		void *next_object = get_freepointer_safe(s, object);
@@ -3378,6 +3428,7 @@ redo:
 	barrier();
 
 	if (likely(page == c->page)) {
+#ifndef CONFIG_PREEMPT_RT
 		void **freelist = READ_ONCE(c->freelist);
 
 		set_freepointer(s, tail_obj, freelist);
@@ -3390,6 +3441,31 @@ redo:
 			note_cmpxchg_failure("slab_free", s, tid);
 			goto redo;
 		}
+#else /* CONFIG_PREEMPT_RT */
+		/*
+		 * We cannot use the lockless fastpath on PREEMPT_RT because if
+		 * a slowpath has taken the local_lock_irqsave(), it is not
+		 * protected against a fast path operation in an irq handler. So
+		 * we need to take the local_lock. We shouldn't simply defer to
+		 * __slab_free() as that wouldn't use the cpu freelist at all.
+		 */
+		void **freelist;
+
+		local_lock(&s->cpu_slab->lock);
+		c = this_cpu_ptr(s->cpu_slab);
+		if (unlikely(page != c->page)) {
+			local_unlock(&s->cpu_slab->lock);
+			goto redo;
+		}
+		tid = c->tid;
+		freelist = c->freelist;
+
+		set_freepointer(s, tail_obj, freelist);
+		c->freelist = head;
+		c->tid = next_tid(tid);
+
+		local_unlock(&s->cpu_slab->lock);
+#endif
 		stat(s, FREE_FASTPATH);
 	} else
 		__slab_free(s, page, head, tail_obj, cnt, addr);
@@ -3568,7 +3644,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 	 * handlers invoking normal fastpath.
 	 */
 	c = slub_get_cpu_ptr(s->cpu_slab);
-	local_irq_disable();
+	local_lock_irq(&s->cpu_slab->lock);
 
 	for (i = 0; i < size; i++) {
 		void *object = kfence_alloc(s, s->object_size, flags);
@@ -3589,7 +3665,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 			 */
 			c->tid = next_tid(c->tid);
 
-			local_irq_enable();
+			local_unlock_irq(&s->cpu_slab->lock);
 
 			/*
 			 * Invoking slow path likely have side-effect
@@ -3603,7 +3679,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 			c = this_cpu_ptr(s->cpu_slab);
 			maybe_wipe_obj_freeptr(s, p[i]);
 
-			local_irq_disable();
+			local_lock_irq(&s->cpu_slab->lock);
 
 			continue; /* goto for-loop */
 		}
@@ -3612,7 +3688,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 		maybe_wipe_obj_freeptr(s, p[i]);
 	}
 	c->tid = next_tid(c->tid);
-	local_irq_enable();
+	local_unlock_irq(&s->cpu_slab->lock);
 	slub_put_cpu_ptr(s->cpu_slab);
 
 	/*
_

Thread overview: 185+ messages
2021-09-08  2:52 incoming Andrew Morton
2021-09-08  2:52 ` [patch 001/147] mm, slub: don't call flush_all() from slab_debug_trace_open() Andrew Morton
2021-09-08  2:53 ` [patch 002/147] mm, slub: allocate private object map for debugfs listings Andrew Morton
2021-09-08  2:53 ` [patch 003/147] mm, slub: allocate private object map for validate_slab_cache() Andrew Morton
2021-09-08  2:53 ` [patch 004/147] mm, slub: don't disable irq for debug_check_no_locks_freed() Andrew Morton
2021-09-08  2:53 ` [patch 005/147] mm, slub: remove redundant unfreeze_partials() from put_cpu_partial() Andrew Morton
2021-09-08  2:53 ` [patch 006/147] mm, slub: extract get_partial() from new_slab_objects() Andrew Morton
2021-09-08  2:53 ` [patch 007/147] mm, slub: dissolve new_slab_objects() into ___slab_alloc() Andrew Morton
2021-09-08  2:53 ` [patch 008/147] mm, slub: return slab page from get_partial() and set c->page afterwards Andrew Morton
2021-09-08  2:53 ` [patch 009/147] mm, slub: restructure new page checks in ___slab_alloc() Andrew Morton
2021-09-08  2:53 ` [patch 010/147] mm, slub: simplify kmem_cache_cpu and tid setup Andrew Morton
2021-09-08  2:53 ` [patch 011/147] mm, slub: move disabling/enabling irqs to ___slab_alloc() Andrew Morton
2021-09-08  2:53 ` [patch 012/147] mm, slub: do initial checks in ___slab_alloc() with irqs enabled Andrew Morton
2021-09-08  2:53 ` [patch 013/147] mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc() Andrew Morton
2021-09-08  2:53 ` [patch 014/147] mm, slub: restore irqs around calling new_slab() Andrew Morton
2021-09-08  2:53 ` [patch 015/147] mm, slub: validate slab from partial list or page allocator before making it cpu slab Andrew Morton
2021-09-08  2:53 ` [patch 016/147] mm, slub: check new pages with restored irqs Andrew Morton
2021-09-08  2:53 ` [patch 017/147] mm, slub: stop disabling irqs around get_partial() Andrew Morton
2021-09-08  2:53 ` [patch 018/147] mm, slub: move reset of c->page and freelist out of deactivate_slab() Andrew Morton
2021-09-08  2:53 ` [patch 019/147] mm, slub: make locking in deactivate_slab() irq-safe Andrew Morton
2021-09-08  2:54 ` [patch 020/147] mm, slub: call deactivate_slab() without disabling irqs Andrew Morton
2021-09-08  2:54 ` [patch 021/147] mm, slub: move irq control into unfreeze_partials() Andrew Morton
2021-09-08  2:54 ` [patch 022/147] mm, slub: discard slabs in unfreeze_partials() without irqs disabled Andrew Morton
2021-09-08  2:54 ` [patch 023/147] mm, slub: detach whole partial list at once in unfreeze_partials() Andrew Morton
2021-09-08  2:54 ` [patch 024/147] mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing Andrew Morton
2021-09-08  2:54 ` [patch 025/147] mm, slub: only disable irq with spin_lock in __unfreeze_partials() Andrew Morton
2021-09-08  2:54 ` [patch 026/147] mm, slub: don't disable irqs in slub_cpu_dead() Andrew Morton
2021-09-08  2:54 ` [patch 027/147] mm, slab: split out the cpu offline variant of flush_slab() Andrew Morton
2021-09-08  2:54 ` [patch 028/147] mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context Andrew Morton
2021-09-08  2:54 ` [patch 029/147] mm: slub: make object_map_lock a raw_spinlock_t Andrew Morton
2021-09-08  2:54 ` [patch 030/147] mm, slub: make slab_lock() disable irqs with PREEMPT_RT Andrew Morton
2021-09-08  2:54 ` [patch 031/147] mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg Andrew Morton
2021-09-08 13:05   ` Jesper Dangaard Brouer
2021-09-08 13:58     ` Vlastimil Babka
2021-09-08 14:55       ` David Hildenbrand
2021-09-08 14:59         ` David Hildenbrand
2021-09-08 17:14           ` Jesper Dangaard Brouer
2021-09-08 17:24             ` David Hildenbrand
2021-09-08 16:11       ` Jesper Dangaard Brouer
2021-09-08 16:31         ` Linus Torvalds
2021-09-08  2:54 ` [patch 032/147] mm, slub: use migrate_disable() on PREEMPT_RT Andrew Morton
2021-09-08  2:54 ` Andrew Morton [this message]
2021-09-08  2:54 ` [patch 034/147] memory-hotplug.rst: remove locking details from admin-guide Andrew Morton
2021-09-08  2:54 ` [patch 035/147] memory-hotplug.rst: complete admin-guide overhaul Andrew Morton
2021-09-08  2:54 ` [patch 036/147] mm: remove pfn_valid_within() and CONFIG_HOLES_IN_ZONE Andrew Morton
2021-09-08  2:54 ` [patch 037/147] mm: memory_hotplug: cleanup after removal of pfn_valid_within() Andrew Morton
2021-09-08  2:54 ` [patch 038/147] mm/memory_hotplug: use "unsigned long" for PFN in zone_for_pfn_range() Andrew Morton
2021-09-08  2:55 ` [patch 039/147] mm/memory_hotplug: remove nid parameter from arch_remove_memory() Andrew Morton
2021-09-08  2:55 ` [patch 040/147] mm/memory_hotplug: remove nid parameter from remove_memory() and friends Andrew Morton
2021-09-08  2:55 ` [patch 041/147] ACPI: memhotplug: memory resources cannot be enabled yet Andrew Morton
2021-09-08  2:55 ` [patch 042/147] mm: track present early pages per zone Andrew Morton
2021-09-08  2:55 ` [patch 043/147] mm/memory_hotplug: introduce "auto-movable" online policy Andrew Morton
2021-09-08  2:55 ` [patch 044/147] drivers/base/memory: introduce "memory groups" to logically group memory blocks Andrew Morton
2021-09-08  2:55 ` [patch 045/147] mm/memory_hotplug: track present pages in memory groups Andrew Morton
2021-09-08  2:55 ` [patch 046/147] ACPI: memhotplug: use a single static memory group for a single memory device Andrew Morton
2021-09-08  2:55 ` [patch 047/147] dax/kmem: use a single static memory group for a single probed unit Andrew Morton
2021-09-08  2:55 ` [patch 048/147] virtio-mem: use a single dynamic memory group for a single virtio-mem device Andrew Morton
2021-09-08  2:55 ` [patch 049/147] mm/memory_hotplug: memory group aware "auto-movable" online policy Andrew Morton
2021-09-08  2:55 ` [patch 050/147] mm/memory_hotplug: improved dynamic " Andrew Morton
2021-09-08  2:55 ` [patch 051/147] mm/memory_hotplug: use helper zone_is_zone_device() to simplify the code Andrew Morton
2021-09-08  2:55 ` [patch 052/147] mm: remove redundant compound_head() calling Andrew Morton
2021-09-08  2:55 ` [patch 053/147] riscv: only select GENERIC_IOREMAP if MMU support is enabled Andrew Morton
2021-09-08  2:56 ` [patch 054/147] mm: move ioremap_page_range to vmalloc.c Andrew Morton
2021-09-08  2:56 ` [patch 055/147] mm: don't allow executable ioremap mappings Andrew Morton
2021-09-08  2:56 ` [patch 056/147] mm/early_ioremap.c: remove redundant early_ioremap_shutdown() Andrew Morton
2021-09-08  2:56 ` [patch 057/147] highmem: don't disable preemption on RT in kmap_atomic() Andrew Morton
2021-09-08  2:56 ` [patch 058/147] mm: in_irq() cleanup Andrew Morton
2021-09-08  2:56 ` [patch 059/147] mm: introduce PAGEFLAGS_MASK to replace ((1UL << NR_PAGEFLAGS) - 1) Andrew Morton
2021-09-08  2:56 ` [patch 060/147] mm/secretmem: use refcount_t instead of atomic_t Andrew Morton
2021-09-08  2:56 ` [patch 061/147] kfence: show cpu and timestamp in alloc/free info Andrew Morton
2021-09-08  2:56 ` [patch 062/147] kfence: test: fail fast if disabled at boot Andrew Morton
2021-09-08  2:56 ` [patch 063/147] mm: introduce Data Access MONitor (DAMON) Andrew Morton
2021-09-08  2:56 ` [patch 064/147] mm/damon/core: implement region-based sampling Andrew Morton
2021-09-08  2:56 ` [patch 065/147] mm/damon: adaptively adjust regions Andrew Morton
2021-09-08  2:56 ` [patch 066/147] mm/idle_page_tracking: make PG_idle reusable Andrew Morton
2021-09-08  2:56 ` [patch 067/147] mm/damon: implement primitives for the virtual memory address spaces Andrew Morton
2021-09-08  2:56 ` [patch 068/147] mm/damon: add a tracepoint Andrew Morton
2021-09-08  2:56 ` [patch 069/147] mm/damon: implement a debugfs-based user space interface Andrew Morton
2021-09-08  2:56 ` [patch 070/147] mm/damon/dbgfs: export kdamond pid to the user space Andrew Morton
2021-09-08  2:57 ` [patch 071/147] mm/damon/dbgfs: support multiple contexts Andrew Morton
2021-09-08  2:57 ` [patch 072/147] Documentation: add documents for DAMON Andrew Morton
2021-09-08  2:57 ` [patch 073/147] mm/damon: add kunit tests Andrew Morton
2021-09-08  2:57 ` [patch 074/147] mm/damon: add user space selftests Andrew Morton
2021-09-08  2:57 ` [patch 075/147] MAINTAINERS: update for DAMON Andrew Morton
2021-09-08  2:57 ` [patch 076/147] alpha: agp: make empty macros use do-while-0 style Andrew Morton
2021-09-08  2:57 ` [patch 077/147] alpha: pci-sysfs: fix all kernel-doc warnings Andrew Morton
2021-09-08  2:57 ` [patch 078/147] percpu: remove export of pcpu_base_addr Andrew Morton
2021-09-08  2:57 ` [patch 079/147] fs/proc/kcore.c: add mmap interface Andrew Morton
2021-09-08 18:13   ` Linus Torvalds
     [not found]     ` <fab939c0-42c1-f6ee-f7f8-14280cb5b411@bytedance.com>
2021-09-09 17:32       ` [External] " Linus Torvalds
2021-09-09 17:34         ` Linus Torvalds
2021-09-10  3:18           ` Feng Zhou
2021-09-10 10:08   ` David Hildenbrand
2021-09-10 12:00     ` Mike Rapoport
2021-09-10 12:02       ` David Hildenbrand
2021-09-08  2:57 ` [patch 080/147] proc: stop using seq_get_buf in proc_task_name Andrew Morton
2021-09-08  2:57 ` [patch 081/147] connector: send event on write to /proc/[pid]/comm Andrew Morton
2021-09-08  2:57 ` [patch 082/147] arch: Kconfig: fix spelling mistake "seperate" -> "separate" Andrew Morton
2021-09-08  2:57 ` [patch 083/147] include/linux/once.h: fix trivia typo Not -> Note Andrew Morton
2021-09-08  2:57 ` [patch 084/147] units: change from 'L' to 'UL' Andrew Morton
2021-09-08  2:57 ` [patch 085/147] units: add the HZ macros Andrew Morton
2021-09-08  2:57 ` [patch 086/147] thermal/drivers/devfreq_cooling: use " Andrew Morton
2021-09-08  2:57 ` [patch 087/147] devfreq: " Andrew Morton
2021-09-08  2:57 ` [patch 088/147] iio/drivers/as73211: " Andrew Morton
2021-09-08  2:58 ` [patch 089/147] hwmon/drivers/mr75203: " Andrew Morton
2021-09-08  2:58 ` [patch 090/147] iio/drivers/hid-sensor: " Andrew Morton
2021-09-08  2:58 ` [patch 091/147] i2c/drivers/ov02q10: " Andrew Morton
2021-09-08  2:58 ` [patch 092/147] mtd/drivers/nand: " Andrew Morton
2021-09-08  6:39   ` Miquel Raynal
2021-09-08  2:58 ` [patch 093/147] phy/drivers/stm32: " Andrew Morton
2021-09-08  2:58 ` [patch 094/147] kernel/acct.c: use dedicated helper to access rlimit values Andrew Morton
2021-09-08  2:58 ` [patch 095/147] profiling: fix shift-out-of-bounds bugs Andrew Morton
2021-09-08  2:58 ` [patch 096/147] MAINTAINERS: update ClangBuiltLinux mailing list Andrew Morton
2021-09-08  2:58 ` [patch 097/147] Documentation/llvm: update " Andrew Morton
2021-09-08  2:58 ` [patch 098/147] Documentation/llvm: update IRC location Andrew Morton
2021-09-08  2:58 ` [patch 099/147] math: make RATIONAL tristate Andrew Morton
2021-09-08  2:58 ` [patch 100/147] math: RATIONAL_KUNIT_TEST should depend on RATIONAL instead of selecting it Andrew Morton
2021-09-08  2:58 ` [patch 101/147] lib/string: optimized memcpy Andrew Morton
2021-09-08 18:26   ` Linus Torvalds
2021-09-08  2:58 ` [patch 102/147] lib/string: optimized memmove Andrew Morton
2021-09-08 18:29   ` Linus Torvalds
2021-09-09  8:28     ` David Laight
2021-09-08  2:58 ` [patch 103/147] lib/string: optimized memset Andrew Morton
2021-09-08 18:34   ` Linus Torvalds
2021-09-09 10:27     ` Matteo Croce
2021-09-08  2:58 ` [patch 104/147] lib/test: convert test_sort.c to use KUnit Andrew Morton
2021-09-08  2:58 ` [patch 105/147] lib/dump_stack: correct kernel-doc notation Andrew Morton
2021-09-08  2:58 ` [patch 106/147] lib/iov_iter.c: fix kernel-doc warnings Andrew Morton
2021-09-08  2:58 ` [patch 107/147] bitops: protect find_first_{,zero}_bit properly Andrew Morton
2021-09-08  2:59 ` [patch 108/147] bitops: move find_bit_*_le functions from le.h to find.h Andrew Morton
2021-09-08 18:37   ` Linus Torvalds
2021-09-08 19:38     ` Yury Norov
2021-09-08 19:46       ` Linus Torvalds
2021-09-08 19:49       ` Andrew Morton
2021-09-08 19:56         ` Linus Torvalds
2021-09-08 20:08           ` Linus Torvalds
2021-09-08 20:16         ` Yury Norov
2021-09-08  2:59 ` [patch 109/147] include: move find.h from asm_generic to linux Andrew Morton
2021-09-08  2:59 ` [patch 110/147] arch: remove GENERIC_FIND_FIRST_BIT entirely Andrew Morton
2021-09-08  2:59 ` [patch 111/147] lib: add find_first_and_bit() Andrew Morton
2021-09-08  2:59 ` [patch 112/147] cpumask: use find_first_and_bit() Andrew Morton
2021-09-08  2:59 ` [patch 113/147] all: replace find_next{,_zero}_bit with find_first{,_zero}_bit where appropriate Andrew Morton
2021-09-08  2:59 ` [patch 114/147] tools: sync tools/bitmap with mother linux Andrew Morton
2021-09-08  2:59 ` [patch 115/147] cpumask: replace cpumask_next_* with cpumask_first_* where appropriate Andrew Morton
2021-09-08  2:59 ` [patch 116/147] include/linux: move for_each_bit() macros from bitops.h to find.h Andrew Morton
2021-09-08  2:59 ` [patch 117/147] find: micro-optimize for_each_{set,clear}_bit() Andrew Morton
2021-09-08  2:59 ` [patch 118/147] bitops: replace for_each_*_bit_from() with for_each_*_bit() where appropriate Andrew Morton
2021-09-08  2:59 ` [patch 119/147] tools: rename bitmap_alloc() to bitmap_zalloc() Andrew Morton
2021-09-08  2:59 ` [patch 120/147] mm/percpu: micro-optimize pcpu_is_populated() Andrew Morton
2021-09-08  2:59 ` [patch 121/147] bitmap: unify find_bit operations Andrew Morton
2021-09-08  2:59 ` [patch 122/147] lib: bitmap: add performance test for bitmap_print_to_pagebuf Andrew Morton
2021-09-08  2:59 ` [patch 123/147] vsprintf: rework bitmap_list_string Andrew Morton
2021-09-08  2:59 ` [patch 124/147] checkpatch: support wide strings Andrew Morton
2021-09-08  2:59 ` [patch 125/147] checkpatch: make email address check case insensitive Andrew Morton
2021-09-08  2:59 ` [patch 126/147] checkpatch: improve GIT_COMMIT_ID test Andrew Morton
2021-09-08  3:00 ` [patch 127/147] fs/epoll: use a per-cpu counter for user's watches count Andrew Morton
2021-09-08  3:00 ` [patch 128/147] init: move usermodehelper_enable() to populate_rootfs() Andrew Morton
2021-09-08 15:44   ` Luis Chamberlain
2021-09-10  8:12     ` Rasmus Villemoes
2021-09-10 17:47       ` H. Peter Anvin
2021-09-10 17:51       ` Luis Chamberlain
2021-09-08  3:00 ` [patch 130/147] nilfs2: fix memory leak in nilfs_sysfs_create_device_group Andrew Morton
2021-09-08  3:00 ` [patch 131/147] nilfs2: fix NULL pointer in nilfs_##name##_attr_release Andrew Morton
2021-09-08  3:00 ` [patch 132/147] nilfs2: fix memory leak in nilfs_sysfs_create_##name##_group Andrew Morton
2021-09-08  3:00 ` [patch 133/147] nilfs2: fix memory leak in nilfs_sysfs_delete_##name##_group Andrew Morton
2021-09-08  3:00 ` [patch 134/147] nilfs2: fix memory leak in nilfs_sysfs_create_snapshot_group Andrew Morton
2021-09-08  3:00 ` [patch 135/147] nilfs2: fix memory leak in nilfs_sysfs_delete_snapshot_group Andrew Morton
2021-09-08  3:00 ` [patch 136/147] nilfs2: use refcount_dec_and_lock() to fix potential UAF Andrew Morton
2021-09-24 10:35   ` Pavel Machek
2021-09-24 11:09     ` Ryusuke Konishi
2021-09-24 12:12   ` Matthew Wilcox
2021-09-24 15:09     ` Ryusuke Konishi
2021-09-08  3:00 ` [patch 137/147] fs/coredump.c: log if a core dump is aborted due to changed file permissions Andrew Morton
2021-09-08  3:00 ` [patch 138/147] coredump: fix memleak in dump_vma_snapshot() Andrew Morton
2021-09-08  3:00 ` [patch 139/147] kernel/fork.c: unexport get_{mm,task}_exe_file Andrew Morton
2021-09-08  3:00 ` [patch 140/147] pid: cleanup the stale comment mentioning pidmap_init() Andrew Morton
2021-09-08  3:00 ` [patch 141/147] prctl: allow to setup brk for et_dyn executables Andrew Morton
2021-09-08  3:00 ` [patch 142/147] configs: remove the obsolete CONFIG_INPUT_POLLDEV Andrew Morton
2021-09-08  3:00 ` [patch 143/147] Kconfig.debug: drop selecting non-existing HARDLOCKUP_DETECTOR_ARCH Andrew Morton
2021-09-08  3:00 ` [patch 144/147] selftests/memfd: remove unused variable Andrew Morton
2021-09-08  3:00 ` [patch 145/147] ipc: replace costly bailout check in sysvipc_find_ipc() Andrew Morton
2021-09-08  3:00 ` [patch 146/147] mm/workingset: correct kernel-doc notations Andrew Morton
2021-09-08  3:00 ` [patch 147/147] scripts: check_extable: fix typo in user error message Andrew Morton
2021-09-08  3:16 ` [patch 129/147] trap: cleanup trap_init() Andrew Morton
2021-09-08  8:57 ` incoming Vlastimil Babka
