From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, bigeasy@linutronix.de,
	catalin.marinas@arm.com, haitao.liu@windriver.com,
	linux-mm@kvack.org, mm-commits@vger.kernel.org,
	torvalds@linux-foundation.org, yongxin.liu@windriver.com,
	zhe.he@windriver.com
Subject: [patch 019/118] mm/kmemleak: turn kmemleak_lock and object->lock to raw_spinlock_t
Date: Thu, 30 Jan 2020 22:12:00 -0800
Message-ID: <20200131061200.VwtZx6Z6N%akpm@linux-foundation.org>
In-Reply-To: <20200130221021.5f0211c56346d5485af07923@linux-foundation.org>

From: He Zhe <zhe.he@windriver.com>
Subject: mm/kmemleak: turn kmemleak_lock and object->lock to raw_spinlock_t

kmemleak_lock is a rwlock, which on RT turns into a sleeping lock.  It can
possibly be acquired in atomic context, which does not work on RT.
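
For context, on PREEMPT_RT both spinlock_t and rwlock_t are substituted
with sleeping locks, while raw_spinlock_t remains a true spinning lock
that may be taken in atomic context.  A minimal sketch of the pattern the
patch switches to (the function name below is a placeholder, not kmemleak
code):

  #include <linux/spinlock.h>

  static DEFINE_RAW_SPINLOCK(example_lock);

  /* Placeholder; stands in for any path that runs in atomic context. */
  static void some_atomic_path(void)
  {
  	unsigned long flags;

  	/* safe even with interrupts off or under rcu_read_lock() */
  	raw_spin_lock_irqsave(&example_lock, flags);
  	/* ... touch the protected data ... */
  	raw_spin_unlock_irqrestore(&example_lock, flags);
  }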

Since the kmemleak operation is performed in atomic context, make it a
raw_spinlock_t so it can also be acquired on RT.  Kmemleak is a debugging
facility and is not enabled by default in a production-like environment
(where performance/latency matters), so it makes sense to make the lock a
raw_spinlock_t instead of trying to get rid of the atomic context.  Also
turn kmemleak_object->lock into a raw_spinlock_t; it is acquired (nested)
while kmemleak_lock is held, as sketched below.
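
Condensed from scan_block() in the diff below, the nesting looks like this
(a sketch only; the object lookup and gray-list handling are elided):

  unsigned long flags;

  raw_spin_lock_irqsave(&kmemleak_lock, flags);
  /* object->lock nests under kmemleak_lock, hence _nested */
  raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
  update_refs(object);
  raw_spin_unlock(&object->lock);
  raw_spin_unlock_irqrestore(&kmemleak_lock, flags);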

The time spent in "echo scan > kmemleak" (via the debugfs file
/sys/kernel/debug/kmemleak) improved slightly on a 64-core box with this
patch applied after boot.

[bigeasy@linutronix.de: redid the description and updated the comments; merged the individual bits: He Zhe did the kmemleak_lock, Liu Haitao the ->lock, and Yongxin Liu forwarded Liu's patch]
Link: http://lkml.kernel.org/r/20191219170834.4tah3prf2gdothz4@linutronix.de
Link: https://lkml.kernel.org/r/20181218150744.GB20197@arrakis.emea.arm.com
Link: https://lkml.kernel.org/r/1542877459-144382-1-git-send-email-zhe.he@windriver.com
Link: https://lkml.kernel.org/r/20190927082230.34152-1-yongxin.liu@windriver.com
Signed-off-by: He Zhe <zhe.he@windriver.com>
Signed-off-by: Liu Haitao <haitao.liu@windriver.com>
Signed-off-by: Yongxin Liu <yongxin.liu@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/kmemleak.c |  112 ++++++++++++++++++++++++------------------------
 1 file changed, 56 insertions(+), 56 deletions(-)

--- a/mm/kmemleak.c~kmemleak-turn-kmemleak_lock-and-object-lock-to-raw_spinlock_t
+++ a/mm/kmemleak.c
@@ -13,7 +13,7 @@
  *
  * The following locks and mutexes are used by kmemleak:
  *
- * - kmemleak_lock (rwlock): protects the object_list modifications and
+ * - kmemleak_lock (raw_spinlock_t): protects the object_list modifications and
  *   accesses to the object_tree_root. The object_list is the main list
  *   holding the metadata (struct kmemleak_object) for the allocated memory
  *   blocks. The object_tree_root is a red black tree used to look-up
@@ -22,13 +22,13 @@
  *   object_tree_root in the create_object() function called from the
  *   kmemleak_alloc() callback and removed in delete_object() called from the
  *   kmemleak_free() callback
- * - kmemleak_object.lock (spinlock): protects a kmemleak_object. Accesses to
- *   the metadata (e.g. count) are protected by this lock. Note that some
- *   members of this structure may be protected by other means (atomic or
- *   kmemleak_lock). This lock is also held when scanning the corresponding
- *   memory block to avoid the kernel freeing it via the kmemleak_free()
- *   callback. This is less heavyweight than holding a global lock like
- *   kmemleak_lock during scanning
+ * - kmemleak_object.lock (raw_spinlock_t): protects a kmemleak_object.
+ *   Accesses to the metadata (e.g. count) are protected by this lock. Note
+ *   that some members of this structure may be protected by other means
+ *   (atomic or kmemleak_lock). This lock is also held when scanning the
+ *   corresponding memory block to avoid the kernel freeing it via the
+ *   kmemleak_free() callback. This is less heavyweight than holding a global
+ *   lock like kmemleak_lock during scanning.
  * - scan_mutex (mutex): ensures that only one thread may scan the memory for
  *   unreferenced objects at a time. The gray_list contains the objects which
  *   are already referenced or marked as false positives and need to be
@@ -135,7 +135,7 @@ struct kmemleak_scan_area {
  * (use_count) and freed using the RCU mechanism.
  */
 struct kmemleak_object {
-	spinlock_t lock;
+	raw_spinlock_t lock;
 	unsigned int flags;		/* object status flags */
 	struct list_head object_list;
 	struct list_head gray_list;
@@ -191,8 +191,8 @@ static int mem_pool_free_count = ARRAY_S
 static LIST_HEAD(mem_pool_free_list);
 /* search tree for object boundaries */
 static struct rb_root object_tree_root = RB_ROOT;
-/* rw_lock protecting the access to object_list and object_tree_root */
-static DEFINE_RWLOCK(kmemleak_lock);
+/* protecting the access to object_list and object_tree_root */
+static DEFINE_RAW_SPINLOCK(kmemleak_lock);
 
 /* allocation caches for kmemleak internal data */
 static struct kmem_cache *object_cache;
@@ -426,7 +426,7 @@ static struct kmemleak_object *mem_pool_
 	}
 
 	/* slab allocation failed, try the memory pool */
-	write_lock_irqsave(&kmemleak_lock, flags);
+	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 	object = list_first_entry_or_null(&mem_pool_free_list,
 					  typeof(*object), object_list);
 	if (object)
@@ -435,7 +435,7 @@ static struct kmemleak_object *mem_pool_
 		object = &mem_pool[--mem_pool_free_count];
 	else
 		pr_warn_once("Memory pool empty, consider increasing CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE\n");
-	write_unlock_irqrestore(&kmemleak_lock, flags);
+	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 
 	return object;
 }
@@ -453,9 +453,9 @@ static void mem_pool_free(struct kmemlea
 	}
 
 	/* add the object to the memory pool free list */
-	write_lock_irqsave(&kmemleak_lock, flags);
+	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 	list_add(&object->object_list, &mem_pool_free_list);
-	write_unlock_irqrestore(&kmemleak_lock, flags);
+	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 }
 
 /*
@@ -514,9 +514,9 @@ static struct kmemleak_object *find_and_
 	struct kmemleak_object *object;
 
 	rcu_read_lock();
-	read_lock_irqsave(&kmemleak_lock, flags);
+	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 	object = lookup_object(ptr, alias);
-	read_unlock_irqrestore(&kmemleak_lock, flags);
+	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 
 	/* check whether the object is still available */
 	if (object && !get_object(object))
@@ -546,11 +546,11 @@ static struct kmemleak_object *find_and_
 	unsigned long flags;
 	struct kmemleak_object *object;
 
-	write_lock_irqsave(&kmemleak_lock, flags);
+	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 	object = lookup_object(ptr, alias);
 	if (object)
 		__remove_object(object);
-	write_unlock_irqrestore(&kmemleak_lock, flags);
+	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 
 	return object;
 }
@@ -585,7 +585,7 @@ static struct kmemleak_object *create_ob
 	INIT_LIST_HEAD(&object->object_list);
 	INIT_LIST_HEAD(&object->gray_list);
 	INIT_HLIST_HEAD(&object->area_list);
-	spin_lock_init(&object->lock);
+	raw_spin_lock_init(&object->lock);
 	atomic_set(&object->use_count, 1);
 	object->flags = OBJECT_ALLOCATED;
 	object->pointer = ptr;
@@ -617,7 +617,7 @@ static struct kmemleak_object *create_ob
 	/* kernel backtrace */
 	object->trace_len = __save_stack_trace(object->trace);
 
-	write_lock_irqsave(&kmemleak_lock, flags);
+	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 
 	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
 	min_addr = min(min_addr, untagged_ptr);
@@ -649,7 +649,7 @@ static struct kmemleak_object *create_ob
 
 	list_add_tail_rcu(&object->object_list, &object_list);
 out:
-	write_unlock_irqrestore(&kmemleak_lock, flags);
+	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 	return object;
 }
 
@@ -667,9 +667,9 @@ static void __delete_object(struct kmeml
 	 * Locking here also ensures that the corresponding memory block
 	 * cannot be freed when it is being scanned.
 	 */
-	spin_lock_irqsave(&object->lock, flags);
+	raw_spin_lock_irqsave(&object->lock, flags);
 	object->flags &= ~OBJECT_ALLOCATED;
-	spin_unlock_irqrestore(&object->lock, flags);
+	raw_spin_unlock_irqrestore(&object->lock, flags);
 	put_object(object);
 }
 
@@ -739,9 +739,9 @@ static void paint_it(struct kmemleak_obj
 {
 	unsigned long flags;
 
-	spin_lock_irqsave(&object->lock, flags);
+	raw_spin_lock_irqsave(&object->lock, flags);
 	__paint_it(object, color);
-	spin_unlock_irqrestore(&object->lock, flags);
+	raw_spin_unlock_irqrestore(&object->lock, flags);
 }
 
 static void paint_ptr(unsigned long ptr, int color)
@@ -798,7 +798,7 @@ static void add_scan_area(unsigned long
 	if (scan_area_cache)
 		area = kmem_cache_alloc(scan_area_cache, gfp_kmemleak_mask(gfp));
 
-	spin_lock_irqsave(&object->lock, flags);
+	raw_spin_lock_irqsave(&object->lock, flags);
 	if (!area) {
 		pr_warn_once("Cannot allocate a scan area, scanning the full object\n");
 		/* mark the object for full scan to avoid false positives */
@@ -820,7 +820,7 @@ static void add_scan_area(unsigned long
 
 	hlist_add_head(&area->node, &object->area_list);
 out_unlock:
-	spin_unlock_irqrestore(&object->lock, flags);
+	raw_spin_unlock_irqrestore(&object->lock, flags);
 	put_object(object);
 }
 
@@ -842,9 +842,9 @@ static void object_set_excess_ref(unsign
 		return;
 	}
 
-	spin_lock_irqsave(&object->lock, flags);
+	raw_spin_lock_irqsave(&object->lock, flags);
 	object->excess_ref = excess_ref;
-	spin_unlock_irqrestore(&object->lock, flags);
+	raw_spin_unlock_irqrestore(&object->lock, flags);
 	put_object(object);
 }
 
@@ -864,9 +864,9 @@ static void object_no_scan(unsigned long
 		return;
 	}
 
-	spin_lock_irqsave(&object->lock, flags);
+	raw_spin_lock_irqsave(&object->lock, flags);
 	object->flags |= OBJECT_NO_SCAN;
-	spin_unlock_irqrestore(&object->lock, flags);
+	raw_spin_unlock_irqrestore(&object->lock, flags);
 	put_object(object);
 }
 
@@ -1026,9 +1026,9 @@ void __ref kmemleak_update_trace(const v
 		return;
 	}
 
-	spin_lock_irqsave(&object->lock, flags);
+	raw_spin_lock_irqsave(&object->lock, flags);
 	object->trace_len = __save_stack_trace(object->trace);
-	spin_unlock_irqrestore(&object->lock, flags);
+	raw_spin_unlock_irqrestore(&object->lock, flags);
 
 	put_object(object);
 }
@@ -1233,7 +1233,7 @@ static void scan_block(void *_start, voi
 	unsigned long flags;
 	unsigned long untagged_ptr;
 
-	read_lock_irqsave(&kmemleak_lock, flags);
+	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 	for (ptr = start; ptr < end; ptr++) {
 		struct kmemleak_object *object;
 		unsigned long pointer;
@@ -1268,7 +1268,7 @@ static void scan_block(void *_start, voi
 		 * previously acquired in scan_object(). These locks are
 		 * enclosed by scan_mutex.
 		 */
-		spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
+		raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
 		/* only pass surplus references (object already gray) */
 		if (color_gray(object)) {
 			excess_ref = object->excess_ref;
@@ -1277,7 +1277,7 @@ static void scan_block(void *_start, voi
 			excess_ref = 0;
 			update_refs(object);
 		}
-		spin_unlock(&object->lock);
+		raw_spin_unlock(&object->lock);
 
 		if (excess_ref) {
 			object = lookup_object(excess_ref, 0);
@@ -1286,12 +1286,12 @@ static void scan_block(void *_start, voi
 			if (object == scanned)
 				/* circular reference, ignore */
 				continue;
-			spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
+			raw_spin_lock_nested(&object->lock, SINGLE_DEPTH_NESTING);
 			update_refs(object);
-			spin_unlock(&object->lock);
+			raw_spin_unlock(&object->lock);
 		}
 	}
-	read_unlock_irqrestore(&kmemleak_lock, flags);
+	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 }
 
 /*
@@ -1324,7 +1324,7 @@ static void scan_object(struct kmemleak_
 	 * Once the object->lock is acquired, the corresponding memory block
 	 * cannot be freed (the same lock is acquired in delete_object).
 	 */
-	spin_lock_irqsave(&object->lock, flags);
+	raw_spin_lock_irqsave(&object->lock, flags);
 	if (object->flags & OBJECT_NO_SCAN)
 		goto out;
 	if (!(object->flags & OBJECT_ALLOCATED))
@@ -1344,9 +1344,9 @@ static void scan_object(struct kmemleak_
 			if (start >= end)
 				break;
 
-			spin_unlock_irqrestore(&object->lock, flags);
+			raw_spin_unlock_irqrestore(&object->lock, flags);
 			cond_resched();
-			spin_lock_irqsave(&object->lock, flags);
+			raw_spin_lock_irqsave(&object->lock, flags);
 		} while (object->flags & OBJECT_ALLOCATED);
 	} else
 		hlist_for_each_entry(area, &object->area_list, node)
@@ -1354,7 +1354,7 @@ static void scan_object(struct kmemleak_
 				   (void *)(area->start + area->size),
 				   object);
 out:
-	spin_unlock_irqrestore(&object->lock, flags);
+	raw_spin_unlock_irqrestore(&object->lock, flags);
 }
 
 /*
@@ -1407,7 +1407,7 @@ static void kmemleak_scan(void)
 	/* prepare the kmemleak_object's */
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
-		spin_lock_irqsave(&object->lock, flags);
+		raw_spin_lock_irqsave(&object->lock, flags);
 #ifdef DEBUG
 		/*
 		 * With a few exceptions there should be a maximum of
@@ -1424,7 +1424,7 @@ static void kmemleak_scan(void)
 		if (color_gray(object) && get_object(object))
 			list_add_tail(&object->gray_list, &gray_list);
 
-		spin_unlock_irqrestore(&object->lock, flags);
+		raw_spin_unlock_irqrestore(&object->lock, flags);
 	}
 	rcu_read_unlock();
 
@@ -1492,14 +1492,14 @@ static void kmemleak_scan(void)
 	 */
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
-		spin_lock_irqsave(&object->lock, flags);
+		raw_spin_lock_irqsave(&object->lock, flags);
 		if (color_white(object) && (object->flags & OBJECT_ALLOCATED)
 		    && update_checksum(object) && get_object(object)) {
 			/* color it gray temporarily */
 			object->count = object->min_count;
 			list_add_tail(&object->gray_list, &gray_list);
 		}
-		spin_unlock_irqrestore(&object->lock, flags);
+		raw_spin_unlock_irqrestore(&object->lock, flags);
 	}
 	rcu_read_unlock();
 
@@ -1519,7 +1519,7 @@ static void kmemleak_scan(void)
 	 */
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
-		spin_lock_irqsave(&object->lock, flags);
+		raw_spin_lock_irqsave(&object->lock, flags);
 		if (unreferenced_object(object) &&
 		    !(object->flags & OBJECT_REPORTED)) {
 			object->flags |= OBJECT_REPORTED;
@@ -1529,7 +1529,7 @@ static void kmemleak_scan(void)
 
 			new_leaks++;
 		}
-		spin_unlock_irqrestore(&object->lock, flags);
+		raw_spin_unlock_irqrestore(&object->lock, flags);
 	}
 	rcu_read_unlock();
 
@@ -1681,10 +1681,10 @@ static int kmemleak_seq_show(struct seq_
 	struct kmemleak_object *object = v;
 	unsigned long flags;
 
-	spin_lock_irqsave(&object->lock, flags);
+	raw_spin_lock_irqsave(&object->lock, flags);
 	if ((object->flags & OBJECT_REPORTED) && unreferenced_object(object))
 		print_unreferenced(seq, object);
-	spin_unlock_irqrestore(&object->lock, flags);
+	raw_spin_unlock_irqrestore(&object->lock, flags);
 	return 0;
 }
 
@@ -1714,9 +1714,9 @@ static int dump_str_object_info(const ch
 		return -EINVAL;
 	}
 
-	spin_lock_irqsave(&object->lock, flags);
+	raw_spin_lock_irqsave(&object->lock, flags);
 	dump_object_info(object);
-	spin_unlock_irqrestore(&object->lock, flags);
+	raw_spin_unlock_irqrestore(&object->lock, flags);
 
 	put_object(object);
 	return 0;
@@ -1735,11 +1735,11 @@ static void kmemleak_clear(void)
 
 	rcu_read_lock();
 	list_for_each_entry_rcu(object, &object_list, object_list) {
-		spin_lock_irqsave(&object->lock, flags);
+		raw_spin_lock_irqsave(&object->lock, flags);
 		if ((object->flags & OBJECT_REPORTED) &&
 		    unreferenced_object(object))
 			__paint_it(object, KMEMLEAK_GREY);
-		spin_unlock_irqrestore(&object->lock, flags);
+		raw_spin_unlock_irqrestore(&object->lock, flags);
 	}
 	rcu_read_unlock();
 
_
