From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, cl@linux.com, guro@fb.com,
	hannes@cmpxchg.org, linux-mm@kvack.org, mhocko@kernel.org,
	mm-commits@vger.kernel.org, shakeelb@google.com, tj@kernel.org,
	torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 067/163] mm: memcg/slab: obj_cgroup API
Date: Thu, 06 Aug 2020 23:20:49 -0700
Message-ID: <20200807062049.id3xOVXTn%akpm@linux-foundation.org>
In-Reply-To: <20200806231643.a2711a608dd0f18bff2caf2b@linux-foundation.org>

From: Roman Gushchin <guro@fb.com>
Subject: mm: memcg/slab: obj_cgroup API

The obj_cgroup API provides the ability to account sub-page sized kernel
objects, which may outlive the original memory cgroup.

The top-level API consists of the following functions:
  bool obj_cgroup_tryget(struct obj_cgroup *objcg);
  void obj_cgroup_get(struct obj_cgroup *objcg);
  void obj_cgroup_put(struct obj_cgroup *objcg);

  int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size);
  void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size);

  struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg);
  struct obj_cgroup *get_obj_cgroup_from_current(void);
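
A minimal usage sketch (a hypothetical caller; error handling and the
actual allocation are elided):

  struct obj_cgroup *objcg;

  objcg = get_obj_cgroup_from_current();	/* returns with a reference held */
  if (objcg) {
  	if (!obj_cgroup_charge(objcg, GFP_KERNEL, 92)) {
  		/* ... allocate and use the 92-byte object ... */
  		obj_cgroup_uncharge(objcg, 92);
  	}
  	obj_cgroup_put(objcg);
  }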

An object cgroup is essentially a pointer to a memory cgroup with a
per-cpu reference counter.  It substitutes for a memory cgroup in places
where it's necessary to charge a custom number of bytes instead of whole
pages.

Memory charged through an object cgroup is passed on to the corresponding
memory cgroup in whole-page units using __memcg_kmem_charge(); the
sub-page remainder is tracked by the object cgroup itself.
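
For illustration, the charge path splits a byte count into whole pages
plus a sub-page remainder, mirroring obj_cgroup_charge() in the patch
below:

  nr_pages = size >> PAGE_SHIFT;
  nr_bytes = size & (PAGE_SIZE - 1);
  if (nr_bytes)
  	nr_pages += 1;	/* round up; PAGE_SIZE - nr_bytes is returned to the stock */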

The API implements reparenting: when a memcg is offlined, its object
cgroups are reattached to the parent memory cgroup.  Each online memory
cgroup has an associated active object cgroup, which handles new
allocations, and a list of all attached object cgroups.  When a cgroup is
offlined, this list is reparented: for each object cgroup on the list,
the memcg pointer is swapped to the parent memory cgroup.  This prevents
long-lived objects from pinning the original memory cgroup in memory.
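
Conceptually, reparenting one object cgroup boils down to an atomic
pointer swap plus a css reference fixup; a condensed sketch of what
memcg_reparent_objcgs() in the patch below does for each objcg:

  xchg(&objcg->memcg, parent);	/* new charges now go to the parent */
  css_get(&parent->css);
  css_put(&memcg->css);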

The implementation is based on byte-sized per-cpu stocks.  A sub-page
sized leftover is stored in an atomic field that is part of the
obj_cgroup object, so on cgroup offlining the leftover is reparented
automatically.
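
When a per-cpu stock is drained, whole pages are uncharged from the
memcg and only the sub-page leftover is parked in the atomic field; a
condensed sketch of drain_obj_stock() from the patch below, locking
elided:

  nr_pages = stock->nr_bytes >> PAGE_SHIFT;	/* whole pages back to the memcg */
  nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);	/* sub-page leftover */
  if (nr_pages)
  	__memcg_kmem_uncharge(obj_cgroup_memcg(old), nr_pages);
  atomic_add(nr_bytes, &old->nr_charged_bytes);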

memcg->objcg is RCU-protected.  objcg->memcg is a raw pointer that always
points at a memory cgroup, but it can be atomically swapped to the parent
memory cgroup.  The user must therefore ensure the lifetime of the
cgroup, e.g. by grabbing rcu_read_lock or css_set_lock.
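
For example, to pin the memcg beyond the RCU section, a css reference
can be taken under rcu_read_lock(), as obj_cgroup_charge() in the patch
below does:

  rcu_read_lock();
  memcg = obj_cgroup_memcg(objcg);
  css_get(&memcg->css);	/* keep the memcg alive past the unlock */
  rcu_read_unlock();
  /* ... use memcg ... */
  css_put(&memcg->css);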

Link: http://lkml.kernel.org/r/20200623174037.3951353-7-guro@fb.com
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Shakeel Butt <shakeelb@google.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/memcontrol.h |   51 ++++++
 mm/memcontrol.c            |  288 ++++++++++++++++++++++++++++++++++-
 2 files changed, 338 insertions(+), 1 deletion(-)

--- a/include/linux/memcontrol.h~mm-memcg-slab-obj_cgroup-api
+++ a/include/linux/memcontrol.h
@@ -23,6 +23,7 @@
 #include <linux/page-flags.h>
 
 struct mem_cgroup;
+struct obj_cgroup;
 struct page;
 struct mm_struct;
 struct kmem_cache;
@@ -193,6 +194,22 @@ struct memcg_cgwb_frn {
 };
 
 /*
+ * Bucket for arbitrarily byte-sized objects charged to a memory
+ * cgroup. The bucket can be reparented in one piece when the cgroup
+ * is destroyed, without having to round up the individual references
+ * of all live memory objects in the wild.
+ */
+struct obj_cgroup {
+	struct percpu_ref refcnt;
+	struct mem_cgroup *memcg;
+	atomic_t nr_charged_bytes;
+	union {
+		struct list_head list;
+		struct rcu_head rcu;
+	};
+};
+
+/*
  * The memory controller data structure. The memory controller controls both
  * page cache and RSS per cgroup. We would eventually like to provide
  * statistics based on the statistics developed by Rik Van Riel for clock-pro,
@@ -301,6 +318,8 @@ struct mem_cgroup {
 	int kmemcg_id;
 	enum memcg_kmem_state kmem_state;
 	struct list_head kmem_caches;
+	struct obj_cgroup __rcu *objcg;
+	struct list_head objcg_list; /* list of inherited objcgs */
 #endif
 
 #ifdef CONFIG_CGROUP_WRITEBACK
@@ -416,6 +435,33 @@ struct mem_cgroup *mem_cgroup_from_css(s
 	return css ? container_of(css, struct mem_cgroup, css) : NULL;
 }
 
+static inline bool obj_cgroup_tryget(struct obj_cgroup *objcg)
+{
+	return percpu_ref_tryget(&objcg->refcnt);
+}
+
+static inline void obj_cgroup_get(struct obj_cgroup *objcg)
+{
+	percpu_ref_get(&objcg->refcnt);
+}
+
+static inline void obj_cgroup_put(struct obj_cgroup *objcg)
+{
+	percpu_ref_put(&objcg->refcnt);
+}
+
+/*
+ * After initialization, objcg->memcg always points at a valid
+ * memcg, but it can be atomically swapped to the parent memcg.
+ *
+ * The caller must ensure that the returned memcg won't be released:
+ * e.g. acquire the rcu_read_lock or css_set_lock.
+ */
+static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
+{
+	return READ_ONCE(objcg->memcg);
+}
+
 static inline void mem_cgroup_put(struct mem_cgroup *memcg)
 {
 	if (memcg)
@@ -1368,6 +1414,11 @@ void __memcg_kmem_uncharge(struct mem_cg
 int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order);
 void __memcg_kmem_uncharge_page(struct page *page, int order);
 
+struct obj_cgroup *get_obj_cgroup_from_current(void);
+
+int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size);
+void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size);
+
 extern struct static_key_false memcg_kmem_enabled_key;
 extern struct workqueue_struct *memcg_kmem_cache_wq;
 
--- a/mm/memcontrol.c~mm-memcg-slab-obj_cgroup-api
+++ a/mm/memcontrol.c
@@ -257,6 +257,98 @@ struct cgroup_subsys_state *vmpressure_t
 }
 
 #ifdef CONFIG_MEMCG_KMEM
+extern spinlock_t css_set_lock;
+
+static void obj_cgroup_release(struct percpu_ref *ref)
+{
+	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
+	struct mem_cgroup *memcg;
+	unsigned int nr_bytes;
+	unsigned int nr_pages;
+	unsigned long flags;
+
+	/*
+	 * At this point all allocated objects are freed, and
+	 * objcg->nr_charged_bytes can't have an arbitrary byte value.
+	 * However, it can be PAGE_SIZE or (x * PAGE_SIZE).
+	 *
+	 * The following sequence can lead to it:
+	 * 1) CPU0: objcg == stock->cached_objcg
+	 * 2) CPU1: we do a small allocation (e.g. 92 bytes),
+	 *          PAGE_SIZE bytes are charged
+	 * 3) CPU1: a process from another memcg is allocating something,
+	 *          the stock is flushed,
+	 *          objcg->nr_charged_bytes = PAGE_SIZE - 92
+	 * 4) CPU0: we release this object,
+	 *          92 bytes are added to stock->nr_bytes
+	 * 5) CPU0: stock is flushed,
+	 *          92 bytes are added to objcg->nr_charged_bytes
+	 *
+	 * As a result, nr_charged_bytes == PAGE_SIZE.
+	 * This page will be uncharged in obj_cgroup_release().
+	 */
+	nr_bytes = atomic_read(&objcg->nr_charged_bytes);
+	WARN_ON_ONCE(nr_bytes & (PAGE_SIZE - 1));
+	nr_pages = nr_bytes >> PAGE_SHIFT;
+
+	spin_lock_irqsave(&css_set_lock, flags);
+	memcg = obj_cgroup_memcg(objcg);
+	if (nr_pages)
+		__memcg_kmem_uncharge(memcg, nr_pages);
+	list_del(&objcg->list);
+	mem_cgroup_put(memcg);
+	spin_unlock_irqrestore(&css_set_lock, flags);
+
+	percpu_ref_exit(ref);
+	kfree_rcu(objcg, rcu);
+}
+
+static struct obj_cgroup *obj_cgroup_alloc(void)
+{
+	struct obj_cgroup *objcg;
+	int ret;
+
+	objcg = kzalloc(sizeof(struct obj_cgroup), GFP_KERNEL);
+	if (!objcg)
+		return NULL;
+
+	ret = percpu_ref_init(&objcg->refcnt, obj_cgroup_release, 0,
+			      GFP_KERNEL);
+	if (ret) {
+		kfree(objcg);
+		return NULL;
+	}
+	INIT_LIST_HEAD(&objcg->list);
+	return objcg;
+}
+
+static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
+				  struct mem_cgroup *parent)
+{
+	struct obj_cgroup *objcg, *iter;
+
+	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
+
+	spin_lock_irq(&css_set_lock);
+
+	/* Move active objcg to the parent's list */
+	xchg(&objcg->memcg, parent);
+	css_get(&parent->css);
+	list_add(&objcg->list, &parent->objcg_list);
+
+	/* Move already reparented objcgs to the parent's list */
+	list_for_each_entry(iter, &memcg->objcg_list, list) {
+		css_get(&parent->css);
+		xchg(&iter->memcg, parent);
+		css_put(&memcg->css);
+	}
+	list_splice(&memcg->objcg_list, &parent->objcg_list);
+
+	spin_unlock_irq(&css_set_lock);
+
+	percpu_ref_kill(&objcg->refcnt);
+}
+
 /*
  * This will be the memcg's index in each cache's ->memcg_params.memcg_caches.
  * The main reason for not using cgroup id for this:
@@ -2047,6 +2139,12 @@ EXPORT_SYMBOL(unlock_page_memcg);
 struct memcg_stock_pcp {
 	struct mem_cgroup *cached; /* this never be root cgroup */
 	unsigned int nr_pages;
+
+#ifdef CONFIG_MEMCG_KMEM
+	struct obj_cgroup *cached_objcg;
+	unsigned int nr_bytes;
+#endif
+
 	struct work_struct work;
 	unsigned long flags;
 #define FLUSHING_CACHED_CHARGE	0
@@ -2054,6 +2152,22 @@ struct memcg_stock_pcp {
 static DEFINE_PER_CPU(struct memcg_stock_pcp, memcg_stock);
 static DEFINE_MUTEX(percpu_charge_mutex);
 
+#ifdef CONFIG_MEMCG_KMEM
+static void drain_obj_stock(struct memcg_stock_pcp *stock);
+static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
+				     struct mem_cgroup *root_memcg);
+
+#else
+static inline void drain_obj_stock(struct memcg_stock_pcp *stock)
+{
+}
+static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
+				     struct mem_cgroup *root_memcg)
+{
+	return false;
+}
+#endif
+
 /**
  * consume_stock: Try to consume stocked charge on this cpu.
  * @memcg: memcg to consume from.
@@ -2120,6 +2234,7 @@ static void drain_local_stock(struct wor
 	local_irq_save(flags);
 
 	stock = this_cpu_ptr(&memcg_stock);
+	drain_obj_stock(stock);
 	drain_stock(stock);
 	clear_bit(FLUSHING_CACHED_CHARGE, &stock->flags);
 
@@ -2179,6 +2294,8 @@ static void drain_all_stock(struct mem_c
 		if (memcg && stock->nr_pages &&
 		    mem_cgroup_is_descendant(memcg, root_memcg))
 			flush = true;
+		if (obj_stock_flush_required(stock, root_memcg))
+			flush = true;
 		rcu_read_unlock();
 
 		if (flush &&
@@ -2705,6 +2822,30 @@ struct mem_cgroup *mem_cgroup_from_obj(v
 	return page->mem_cgroup;
 }
 
+__always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
+{
+	struct obj_cgroup *objcg = NULL;
+	struct mem_cgroup *memcg;
+
+	if (unlikely(!current->mm && !current->active_memcg))
+		return NULL;
+
+	rcu_read_lock();
+	if (unlikely(current->active_memcg))
+		memcg = rcu_dereference(current->active_memcg);
+	else
+		memcg = mem_cgroup_from_task(current);
+
+	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
+		objcg = rcu_dereference(memcg->objcg);
+		if (objcg && obj_cgroup_tryget(objcg))
+			break;
+	}
+	rcu_read_unlock();
+
+	return objcg;
+}
+
 static int memcg_alloc_cache_id(void)
 {
 	int id, size;
@@ -2996,6 +3137,140 @@ void __memcg_kmem_uncharge_page(struct p
 	if (PageKmemcg(page))
 		__ClearPageKmemcg(page);
 }
+
+static bool consume_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
+{
+	struct memcg_stock_pcp *stock;
+	unsigned long flags;
+	bool ret = false;
+
+	local_irq_save(flags);
+
+	stock = this_cpu_ptr(&memcg_stock);
+	if (objcg == stock->cached_objcg && stock->nr_bytes >= nr_bytes) {
+		stock->nr_bytes -= nr_bytes;
+		ret = true;
+	}
+
+	local_irq_restore(flags);
+
+	return ret;
+}
+
+static void drain_obj_stock(struct memcg_stock_pcp *stock)
+{
+	struct obj_cgroup *old = stock->cached_objcg;
+
+	if (!old)
+		return;
+
+	if (stock->nr_bytes) {
+		unsigned int nr_pages = stock->nr_bytes >> PAGE_SHIFT;
+		unsigned int nr_bytes = stock->nr_bytes & (PAGE_SIZE - 1);
+
+		if (nr_pages) {
+			rcu_read_lock();
+			__memcg_kmem_uncharge(obj_cgroup_memcg(old), nr_pages);
+			rcu_read_unlock();
+		}
+
+		/*
+		 * The leftover is flushed to the centralized per-memcg value.
+		 * On the next attempt to refill obj stock it will be moved
+		 * to a per-cpu stock (probably on another CPU), see
+		 * refill_obj_stock().
+		 *
+		 * How often it's flushed is a trade-off between the memory
+		 * limit enforcement accuracy and potential CPU contention,
+		 * so it might be changed in the future.
+		 */
+		atomic_add(nr_bytes, &old->nr_charged_bytes);
+		stock->nr_bytes = 0;
+	}
+
+	obj_cgroup_put(old);
+	stock->cached_objcg = NULL;
+}
+
+static bool obj_stock_flush_required(struct memcg_stock_pcp *stock,
+				     struct mem_cgroup *root_memcg)
+{
+	struct mem_cgroup *memcg;
+
+	if (stock->cached_objcg) {
+		memcg = obj_cgroup_memcg(stock->cached_objcg);
+		if (memcg && mem_cgroup_is_descendant(memcg, root_memcg))
+			return true;
+	}
+
+	return false;
+}
+
+static void refill_obj_stock(struct obj_cgroup *objcg, unsigned int nr_bytes)
+{
+	struct memcg_stock_pcp *stock;
+	unsigned long flags;
+
+	local_irq_save(flags);
+
+	stock = this_cpu_ptr(&memcg_stock);
+	if (stock->cached_objcg != objcg) { /* reset if necessary */
+		drain_obj_stock(stock);
+		obj_cgroup_get(objcg);
+		stock->cached_objcg = objcg;
+		stock->nr_bytes = atomic_xchg(&objcg->nr_charged_bytes, 0);
+	}
+	stock->nr_bytes += nr_bytes;
+
+	if (stock->nr_bytes > PAGE_SIZE)
+		drain_obj_stock(stock);
+
+	local_irq_restore(flags);
+}
+
+int obj_cgroup_charge(struct obj_cgroup *objcg, gfp_t gfp, size_t size)
+{
+	struct mem_cgroup *memcg;
+	unsigned int nr_pages, nr_bytes;
+	int ret;
+
+	if (consume_obj_stock(objcg, size))
+		return 0;
+
+	/*
+	 * In theory, objcg->nr_charged_bytes can have enough
+	 * pre-charged bytes to satisfy the allocation. However,
+	 * flushing objcg->nr_charged_bytes requires two atomic
+	 * operations, and objcg->nr_charged_bytes can't be big,
+	 * so it's better to ignore it and try to grab some new pages.
+	 * objcg->nr_charged_bytes will be flushed in
+	 * refill_obj_stock(), called from this function or
+	 * independently later.
+	 */
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+	css_get(&memcg->css);
+	rcu_read_unlock();
+
+	nr_pages = size >> PAGE_SHIFT;
+	nr_bytes = size & (PAGE_SIZE - 1);
+
+	if (nr_bytes)
+		nr_pages += 1;
+
+	ret = __memcg_kmem_charge(memcg, gfp, nr_pages);
+	if (!ret && nr_bytes)
+		refill_obj_stock(objcg, PAGE_SIZE - nr_bytes);
+
+	css_put(&memcg->css);
+	return ret;
+}
+
+void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
+{
+	refill_obj_stock(objcg, size);
+}
+
 #endif /* CONFIG_MEMCG_KMEM */
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -3416,6 +3691,7 @@ static void memcg_flush_percpu_vmevents(
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
+	struct obj_cgroup *objcg;
 	int memcg_id;
 
 	if (cgroup_memory_nokmem)
@@ -3428,6 +3704,14 @@ static int memcg_online_kmem(struct mem_
 	if (memcg_id < 0)
 		return memcg_id;
 
+	objcg = obj_cgroup_alloc();
+	if (!objcg) {
+		memcg_free_cache_id(memcg_id);
+		return -ENOMEM;
+	}
+	objcg->memcg = memcg;
+	rcu_assign_pointer(memcg->objcg, objcg);
+
 	static_branch_enable(&memcg_kmem_enabled_key);
 
 	/*
@@ -3464,9 +3748,10 @@ static void memcg_offline_kmem(struct me
 		parent = root_mem_cgroup;
 
 	/*
-	 * Deactivate and reparent kmem_caches.
+	 * Deactivate and reparent kmem_caches and objcgs.
 	 */
 	memcg_deactivate_kmem_caches(memcg, parent);
+	memcg_reparent_objcgs(memcg, parent);
 
 	kmemcg_id = memcg->kmemcg_id;
 	BUG_ON(kmemcg_id < 0);
@@ -5030,6 +5315,7 @@ static struct mem_cgroup *mem_cgroup_all
 	memcg->socket_pressure = jiffies;
 #ifdef CONFIG_MEMCG_KMEM
 	memcg->kmemcg_id = -1;
+	INIT_LIST_HEAD(&memcg->objcg_list);
 #endif
 #ifdef CONFIG_CGROUP_WRITEBACK
 	INIT_LIST_HEAD(&memcg->cgwb_list);
_
