* [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-21 12:56 ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song

This version is rebased on mm-unstable. Hopefully, Andrew can get this series
into mm-unstable, which will help determine whether there is any problem or
degradation. I am also running some benchmark tests in parallel.

Since the following patchsets were applied, all kernel memory has been charged
with the new obj_cgroup APIs:

	commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
	commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")

But user memory allocations (LRU pages) can pin memcgs for a long time. This
happens at a larger scale and causes recurring problems in the real world:
page cache doesn't get reclaimed for a long time, or is used by the second,
third, fourth, ... instance of the same job that was restarted into a new
cgroup every time. Unreclaimable dying cgroups pile up, waste memory, and
make page reclaim very inefficient.

We can fix this by converting LRU pages and most other raw memcg pins to the
objcg direction; the LRU pages will then no longer pin the memcgs.
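A toy userspace model of that indirection (not kernel code; the struct and
function names below are simplified stand-ins for the real mem_cgroup,
obj_cgroup and folio machinery):

```c
#include <stdio.h>

/* Simplified stand-ins: a folio pins an objcg, never a memcg directly. */
struct memcg { const char *name; };
struct objcg { struct memcg *memcg; };	/* re-pointed to the parent on reparenting */
struct folio { struct objcg *objcg; };	/* stays valid across memcg offline */

static struct memcg *folio_memcg(struct folio *f)
{
	/* one extra dereference; the kernel does this under RCU */
	return f->objcg->memcg;
}

int main(void)
{
	struct memcg parent = { "parent" }, child = { "child" };
	struct objcg oc = { &child };
	struct folio f = { &oc };

	printf("%s\n", folio_memcg(&f)->name);	/* child */
	oc.memcg = &parent;			/* reparent: dying child is no longer pinned */
	printf("%s\n", folio_memcg(&f)->name);	/* parent */
	return 0;
}
```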

This patchset makes the LRU pages drop their reference to the memory cgroup
by using the obj_cgroup APIs. With it applied, the number of dying cgroups no
longer keeps increasing when we run the following test script.

```bash
#!/bin/bash

dd if=/dev/zero of=temp bs=4096 count=1
cat /proc/cgroups | grep memory

for i in {0..2000}
do
	mkdir /sys/fs/cgroup/memory/test$i
	echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
	cat temp >> log
	echo $$ > /sys/fs/cgroup/memory/cgroup.procs
	rmdir /sys/fs/cgroup/memory/test$i
done

cat /proc/cgroups | grep memory

rm -f temp log
```
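For reference, the script's before/after comparison looks at the memory line
of /proc/cgroups; the num_cgroups column is what grows when dying memcgs pile
up. Sample output (values illustrative):

```
#subsys_name	hierarchy	num_cgroups	enabled
memory	2	2001	1
```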

v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/

v6:
 - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
 - Rebase to mm-unstable.

v5:
 - Lots of improvements from Johannes, Roman and Waiman.
 - Fix lockdep warning reported by kernel test robot.
 - Add two new patches to do code cleanup.
 - Collect Acked-by and Reviewed-by from Johannes and Roman.
 - I didn't replace local_irq_disable/enable() with local_lock/unlock_irq() since
   local_lock/unlock_irq() takes a parameter; transforming it to a local_lock
   needs more thought. It could be an improvement in the future.

v4:
 - Resend and rebased on v5.18.

v3:
 - Removed the Acked-by tags from Roman since this version is rebased on top
   of the folio-related changes.

v2:
 - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and adjust
   the CONFIG_MEMCG_KMEM dependencies (suggested by Roman, thanks).
 - Rebase to linux 5.15-rc1.
 - Add a new patch to clean up mem_cgroup_kmem_disabled().

v1:
 - Drop RFC tag.
 - Rebase to linux next-20210811.

RFC v4:
 - Collect Acked-by from Roman.
 - Rebase to linux next-20210525.
 - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
 - Change the patch 1 title to "prepare objcg API for non-kmem usage".
 - Convert reparent_ops_head to an array in patch 8.

Thanks for Roman's review and suggestions.

RFC v3:
 - Drop the code cleanup and simplification patches. Gather those patches
   into a separate series[1].
 - Rework patch #1 suggested by Johannes.

RFC v2:
 - Collect Acked-by tags from Johannes. Thanks.
 - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
 - Fix move_pages_to_lru().

Muchun Song (11):
  mm: memcontrol: remove dead code and comments
  mm: rename unlock_page_lruvec{_irq, _irqrestore} to
    lruvec_unlock{_irq, _irqrestore}
  mm: memcontrol: prepare objcg API for non-kmem usage
  mm: memcontrol: make lruvec lock safe when LRU pages are reparented
  mm: vmscan: rework move_pages_to_lru()
  mm: thp: make split queue lock safe when LRU pages are reparented
  mm: memcontrol: make all the callers of {folio,page}_memcg() safe
  mm: memcontrol: introduce memcg_reparent_ops
  mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
  mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
  mm: lru: use lruvec lock to serialize memcg changes

 fs/buffer.c                      |   4 +-
 fs/fs-writeback.c                |  23 +-
 include/linux/memcontrol.h       | 218 +++++++++------
 include/linux/mm_inline.h        |   6 +
 include/trace/events/writeback.h |   5 +
 mm/compaction.c                  |  39 ++-
 mm/huge_memory.c                 | 153 ++++++++--
 mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
 mm/migrate.c                     |   4 +
 mm/mlock.c                       |   2 +-
 mm/page_io.c                     |   5 +-
 mm/swap.c                        |  49 ++--
 mm/vmscan.c                      |  66 ++---
 13 files changed, 776 insertions(+), 382 deletions(-)


base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
-- 
2.11.0


^ permalink raw reply	[flat|nested] 54+ messages in thread


* [PATCH v6 01/11] mm: memcontrol: remove dead code and comments
  2022-06-21 12:56 ` Muchun Song
  (?)
@ 2022-06-21 12:56 ` Muchun Song
  -1 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song

Since the no-hierarchy mode was deprecated in

  commit bef8620cd8e0 ("mm: memcg: deprecate the non-hierarchical mode")

parent_mem_cgroup() can only return NULL for the root memcg. The root memcg
cannot be offlined, so it is safe to drop the checks on the return value of
parent_mem_cgroup() in the offline paths.  Remove that dead code.

The comments in memcg_offline_kmem() above memcg_reparent_list_lrus() are
out of date since

  commit 5abc1e37afa0 ("mm: list_lru: allocate list_lru_one only when needed")

There is no longer any ordering requirement between memcg_reparent_list_lrus()
and memcg_reparent_objcgs(), so remove those outdated comments.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 include/linux/memcontrol.h |  3 +--
 mm/memcontrol.c            | 12 ------------
 mm/vmscan.c                |  6 +-----
 3 files changed, 2 insertions(+), 19 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4d31ce55b1c0..318d8880d62a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -866,8 +866,7 @@ static inline struct mem_cgroup *lruvec_memcg(struct lruvec *lruvec)
  * parent_mem_cgroup - find the accounting parent of a memcg
  * @memcg: memcg whose parent to find
  *
- * Returns the parent memcg, or NULL if this is the root or the memory
- * controller is in legacy no-hierarchy mode.
+ * Returns the parent memcg, or NULL if this is the root.
  */
 static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 163492b9efa9..fc706d6fc265 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3684,17 +3684,7 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 		return;
 
 	parent = parent_mem_cgroup(memcg);
-	if (!parent)
-		parent = root_mem_cgroup;
-
 	memcg_reparent_objcgs(memcg, parent);
-
-	/*
-	 * After we have finished memcg_reparent_objcgs(), all list_lrus
-	 * corresponding to this cgroup are guaranteed to remain empty.
-	 * The ordering is imposed by list_lru_node->lock taken by
-	 * memcg_reparent_list_lrus().
-	 */
 	memcg_reparent_list_lrus(memcg, parent);
 }
 #else
@@ -7195,8 +7185,6 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
 			break;
 		}
 		memcg = parent_mem_cgroup(memcg);
-		if (!memcg)
-			memcg = root_mem_cgroup;
 	}
 	return memcg;
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 88fce64cfa96..b68b0216424d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -409,13 +409,9 @@ void reparent_shrinker_deferred(struct mem_cgroup *memcg)
 {
 	int i, nid;
 	long nr;
-	struct mem_cgroup *parent;
+	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
 	struct shrinker_info *child_info, *parent_info;
 
-	parent = parent_mem_cgroup(memcg);
-	if (!parent)
-		parent = root_mem_cgroup;
-
 	/* Prevent from concurrent shrinker_info expand */
 	down_read(&shrinker_rwsem);
 	for_each_node(nid) {
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v6 02/11] mm: rename unlock_page_lruvec{_irq, _irqrestore} to lruvec_unlock{_irq, _irqrestore}
  2022-06-21 12:56 ` Muchun Song
  (?)
  (?)
@ 2022-06-21 12:56 ` Muchun Song
  -1 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song

It is inconsistent to use the folio_lruvec_lock() variants together with the
unlock_page_lruvec() variants, i.e. locking via a folio but unlocking via a
page.  So rename unlock_page_lruvec{_irq, _irqrestore} to
lruvec_unlock{_irq, _irqrestore}.
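
With the rename, the lock/unlock pair reads consistently in terms of the
lruvec. A minimal usage sketch of the pattern followed by the callers
converted below:

	struct lruvec *lruvec;

	lruvec = folio_lruvec_lock_irq(folio);
	/* ... manipulate the folio's LRU state under lruvec->lru_lock ... */
	lruvec_unlock_irq(lruvec);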

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 include/linux/memcontrol.h | 10 +++++-----
 mm/compaction.c            | 12 ++++++------
 mm/huge_memory.c           |  2 +-
 mm/mlock.c                 |  2 +-
 mm/swap.c                  | 14 +++++++-------
 mm/vmscan.c                |  4 ++--
 6 files changed, 22 insertions(+), 22 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 318d8880d62a..d0c0da7cafb7 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1579,17 +1579,17 @@ static inline struct lruvec *parent_lruvec(struct lruvec *lruvec)
 	return mem_cgroup_lruvec(memcg, lruvec_pgdat(lruvec));
 }
 
-static inline void unlock_page_lruvec(struct lruvec *lruvec)
+static inline void lruvec_unlock(struct lruvec *lruvec)
 {
 	spin_unlock(&lruvec->lru_lock);
 }
 
-static inline void unlock_page_lruvec_irq(struct lruvec *lruvec)
+static inline void lruvec_unlock_irq(struct lruvec *lruvec)
 {
 	spin_unlock_irq(&lruvec->lru_lock);
 }
 
-static inline void unlock_page_lruvec_irqrestore(struct lruvec *lruvec,
+static inline void lruvec_unlock_irqrestore(struct lruvec *lruvec,
 		unsigned long flags)
 {
 	spin_unlock_irqrestore(&lruvec->lru_lock, flags);
@@ -1611,7 +1611,7 @@ static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
 		if (folio_matches_lruvec(folio, locked_lruvec))
 			return locked_lruvec;
 
-		unlock_page_lruvec_irq(locked_lruvec);
+		lruvec_unlock_irq(locked_lruvec);
 	}
 
 	return folio_lruvec_lock_irq(folio);
@@ -1625,7 +1625,7 @@ static inline struct lruvec *folio_lruvec_relock_irqsave(struct folio *folio,
 		if (folio_matches_lruvec(folio, locked_lruvec))
 			return locked_lruvec;
 
-		unlock_page_lruvec_irqrestore(locked_lruvec, *flags);
+		lruvec_unlock_irqrestore(locked_lruvec, *flags);
 	}
 
 	return folio_lruvec_lock_irqsave(folio, flags);
diff --git a/mm/compaction.c b/mm/compaction.c
index 1f89b969c12b..46351a14eed2 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -864,7 +864,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (!(low_pfn % COMPACT_CLUSTER_MAX)) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 
@@ -977,7 +977,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 			if (unlikely(__PageMovable(page)) &&
 					!PageIsolated(page)) {
 				if (locked) {
-					unlock_page_lruvec_irqrestore(locked, flags);
+					lruvec_unlock_irqrestore(locked, flags);
 					locked = NULL;
 				}
 
@@ -1060,7 +1060,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
 			if (locked)
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 
 			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
 			locked = lruvec;
@@ -1119,7 +1119,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 isolate_fail_put:
 		/* Avoid potential deadlock in freeing page under lru_lock */
 		if (locked) {
-			unlock_page_lruvec_irqrestore(locked, flags);
+			lruvec_unlock_irqrestore(locked, flags);
 			locked = NULL;
 		}
 		put_page(page);
@@ -1135,7 +1135,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		 */
 		if (nr_isolated) {
 			if (locked) {
-				unlock_page_lruvec_irqrestore(locked, flags);
+				lruvec_unlock_irqrestore(locked, flags);
 				locked = NULL;
 			}
 			putback_movable_pages(&cc->migratepages);
@@ -1167,7 +1167,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 isolate_abort:
 	if (locked)
-		unlock_page_lruvec_irqrestore(locked, flags);
+		lruvec_unlock_irqrestore(locked, flags);
 	if (page) {
 		SetPageLRU(page);
 		put_page(page);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2e2a8b5bc567..66d9ed8a1289 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2515,7 +2515,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	}
 
 	ClearPageCompound(head);
-	unlock_page_lruvec(lruvec);
+	lruvec_unlock(lruvec);
 	/* Caller disabled irqs, so they are still disabled here */
 
 	split_page_owner(head, nr);
diff --git a/mm/mlock.c b/mm/mlock.c
index 7032f6dd0ce1..d9039fb9c56b 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -205,7 +205,7 @@ static void mlock_pagevec(struct pagevec *pvec)
 	}
 
 	if (lruvec)
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 	release_pages(pvec->pages, pvec->nr);
 	pagevec_reinit(pvec);
 }
diff --git a/mm/swap.c b/mm/swap.c
index 1f563d857768..127ef4db394f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -86,7 +86,7 @@ static void __page_cache_release(struct folio *folio)
 		lruvec = folio_lruvec_lock_irqsave(folio, &flags);
 		lruvec_del_folio(lruvec, folio);
 		__folio_clear_lru_flags(folio);
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	}
 	/* See comment on folio_test_mlocked in release_pages() */
 	if (unlikely(folio_test_mlocked(folio))) {
@@ -249,7 +249,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 	}
 
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 	folios_put(fbatch->folios, folio_batch_count(fbatch));
 	folio_batch_init(fbatch);
 }
@@ -392,7 +392,7 @@ static void folio_activate(struct folio *folio)
 	if (folio_test_clear_lru(folio)) {
 		lruvec = folio_lruvec_lock_irq(folio);
 		folio_activate_fn(lruvec, folio);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 		folio_set_lru(folio);
 	}
 }
@@ -948,7 +948,7 @@ void release_pages(struct page **pages, int nr)
 		 * same lruvec. The lock is held only if lruvec != NULL.
 		 */
 		if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
-			unlock_page_lruvec_irqrestore(lruvec, flags);
+			lruvec_unlock_irqrestore(lruvec, flags);
 			lruvec = NULL;
 		}
 
@@ -957,7 +957,7 @@ void release_pages(struct page **pages, int nr)
 
 		if (folio_is_zone_device(folio)) {
 			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec_unlock_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
 			if (put_devmap_managed_page(&folio->page))
@@ -972,7 +972,7 @@ void release_pages(struct page **pages, int nr)
 
 		if (folio_test_large(folio)) {
 			if (lruvec) {
-				unlock_page_lruvec_irqrestore(lruvec, flags);
+				lruvec_unlock_irqrestore(lruvec, flags);
 				lruvec = NULL;
 			}
 			__folio_put_large(folio);
@@ -1006,7 +1006,7 @@ void release_pages(struct page **pages, int nr)
 		list_add(&folio->lru, &pages_to_free);
 	}
 	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
+		lruvec_unlock_irqrestore(lruvec, flags);
 
 	mem_cgroup_uncharge_list(&pages_to_free);
 	free_unref_page_list(&pages_to_free);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b68b0216424d..6a554712ef5d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2257,7 +2257,7 @@ int folio_isolate_lru(struct folio *folio)
 		folio_get(folio);
 		lruvec = folio_lruvec_lock_irq(folio);
 		lruvec_del_folio(lruvec, folio);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 		ret = 0;
 	}
 
@@ -4886,7 +4886,7 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 	if (lruvec) {
 		__count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
 		__count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
-		unlock_page_lruvec_irq(lruvec);
+		lruvec_unlock_irq(lruvec);
 	} else if (pgscanned) {
 		count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
 	}
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v6 03/11] mm: memcontrol: prepare objcg API for non-kmem usage
@ 2022-06-21 12:56   ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song,
	Michal Koutný

Pagecache pages are charged at allocation time and hold a reference to
the original memory cgroup until they are reclaimed. Depending on the
memory pressure, the specific patterns of page sharing between different
cgroups, and the cgroup creation and destruction rates, a large number
of dying memory cgroups can be pinned by pagecache pages. This makes
page reclaim less efficient and wastes memory.

We can convert LRU pages and most other raw memcg pins to the objcg
direction to fix this problem; page->memcg will then always point to an
object cgroup pointer.

Therefore, the objcg infrastructure no longer serves only
CONFIG_MEMCG_KMEM. This patch moves the objcg infrastructure out of the
scope of CONFIG_MEMCG_KMEM so that the LRU pages can reuse it to charge
pages.

LRU pages are not accounted at the root level, but the page->memcg_data
of LRU pages still points to the root_mem_cgroup, i.e. it always points
to a valid pointer. However, the root_mem_cgroup does not have an object
cgroup. If we use the obj_cgroup APIs to charge the LRU pages, we need
to set page->memcg_data to a root object cgroup, so we also allocate an
object cgroup for the root_mem_cgroup.
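
A condensed view of the resulting lifecycle, taken from the hunks below
(error unwinding omitted): every memcg, including the root, gets its objcg
when it comes online and has it reparented when it goes offline,
independent of CONFIG_MEMCG_KMEM:

	/* mem_cgroup_css_online() */
	objcg = obj_cgroup_alloc();
	if (!objcg)
		goto free_shrinker;
	objcg->memcg = memcg;
	rcu_assign_pointer(memcg->objcg, objcg);

	/* mem_cgroup_css_offline() */
	memcg_reparent_objcgs(memcg);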

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Michal Koutný <mkoutny@suse.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 include/linux/memcontrol.h |  2 +-
 mm/memcontrol.c            | 56 +++++++++++++++++++++++++++-------------------
 2 files changed, 34 insertions(+), 24 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d0c0da7cafb7..111eda6ff1ce 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -321,10 +321,10 @@ struct mem_cgroup {
 
 #ifdef CONFIG_MEMCG_KMEM
 	int kmemcg_id;
+#endif
 	struct obj_cgroup __rcu *objcg;
 	/* list of inherited objcgs, protected by objcg_lock */
 	struct list_head objcg_list;
-#endif
 
 	MEMCG_PADDING(_pad2_);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fc706d6fc265..3c489651d312 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -252,9 +252,9 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
 	return container_of(vmpr, struct mem_cgroup, vmpressure);
 }
 
-#ifdef CONFIG_MEMCG_KMEM
 static DEFINE_SPINLOCK(objcg_lock);
 
+#ifdef CONFIG_MEMCG_KMEM
 bool mem_cgroup_kmem_disabled(void)
 {
 	return cgroup_memory_nokmem;
@@ -263,12 +263,10 @@ bool mem_cgroup_kmem_disabled(void)
 static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
 				      unsigned int nr_pages);
 
-static void obj_cgroup_release(struct percpu_ref *ref)
+static void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
 {
-	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
 	unsigned int nr_bytes;
 	unsigned int nr_pages;
-	unsigned long flags;
 
 	/*
 	 * At this point all allocated objects are freed, and
@@ -282,9 +280,9 @@ static void obj_cgroup_release(struct percpu_ref *ref)
 	 * 3) CPU1: a process from another memcg is allocating something,
 	 *          the stock if flushed,
 	 *          objcg->nr_charged_bytes = PAGE_SIZE - 92
-	 * 5) CPU0: we do release this object,
+	 * 4) CPU0: we do release this object,
 	 *          92 bytes are added to stock->nr_bytes
-	 * 6) CPU0: stock is flushed,
+	 * 5) CPU0: stock is flushed,
 	 *          92 bytes are added to objcg->nr_charged_bytes
 	 *
 	 * In the result, nr_charged_bytes == PAGE_SIZE.
@@ -296,6 +294,19 @@ static void obj_cgroup_release(struct percpu_ref *ref)
 
 	if (nr_pages)
 		obj_cgroup_uncharge_pages(objcg, nr_pages);
+}
+#else
+static inline void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
+{
+}
+#endif
+
+static void obj_cgroup_release(struct percpu_ref *ref)
+{
+	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
+	unsigned long flags;
+
+	obj_cgroup_release_bytes(objcg);
 
 	spin_lock_irqsave(&objcg_lock, flags);
 	list_del(&objcg->list);
@@ -324,10 +335,10 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
 	return objcg;
 }
 
-static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
-				  struct mem_cgroup *parent)
+static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 {
 	struct obj_cgroup *objcg, *iter;
+	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
 
 	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
 
@@ -346,6 +357,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
 	percpu_ref_kill(&objcg->refcnt);
 }
 
+#ifdef CONFIG_MEMCG_KMEM
 /*
  * A lot of the calls to the cache allocation functions are expected to be
  * inlined by the compiler. Since the calls to memcg_slab_pre_alloc_hook() are
@@ -3651,21 +3663,12 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
-	struct obj_cgroup *objcg;
-
 	if (cgroup_memory_nokmem)
 		return 0;
 
 	if (unlikely(mem_cgroup_is_root(memcg)))
 		return 0;
 
-	objcg = obj_cgroup_alloc();
-	if (!objcg)
-		return -ENOMEM;
-
-	objcg->memcg = memcg;
-	rcu_assign_pointer(memcg->objcg, objcg);
-
 	static_branch_enable(&memcg_kmem_enabled_key);
 
 	memcg->kmemcg_id = memcg->id.id;
@@ -3675,17 +3678,13 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 
 static void memcg_offline_kmem(struct mem_cgroup *memcg)
 {
-	struct mem_cgroup *parent;
-
 	if (cgroup_memory_nokmem)
 		return;
 
 	if (unlikely(mem_cgroup_is_root(memcg)))
 		return;
 
-	parent = parent_mem_cgroup(memcg);
-	memcg_reparent_objcgs(memcg, parent);
-	memcg_reparent_list_lrus(memcg, parent);
+	memcg_reparent_list_lrus(memcg, parent_mem_cgroup(memcg));
 }
 #else
 static int memcg_online_kmem(struct mem_cgroup *memcg)
@@ -5190,8 +5189,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 	memcg->socket_pressure = jiffies;
 #ifdef CONFIG_MEMCG_KMEM
 	memcg->kmemcg_id = -1;
-	INIT_LIST_HEAD(&memcg->objcg_list);
 #endif
+	INIT_LIST_HEAD(&memcg->objcg_list);
 #ifdef CONFIG_CGROUP_WRITEBACK
 	INIT_LIST_HEAD(&memcg->cgwb_list);
 	for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++)
@@ -5256,6 +5255,7 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 {
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	struct obj_cgroup *objcg;
 
 	if (memcg_online_kmem(memcg))
 		goto remove_id;
@@ -5268,6 +5268,13 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	if (alloc_shrinker_info(memcg))
 		goto offline_kmem;
 
+	objcg = obj_cgroup_alloc();
+	if (!objcg)
+		goto free_shrinker;
+
+	objcg->memcg = memcg;
+	rcu_assign_pointer(memcg->objcg, objcg);
+
 	/* Online state pins memcg ID, memcg ID pins CSS */
 	refcount_set(&memcg->id.ref, 1);
 	css_get(css);
@@ -5276,6 +5283,8 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 		queue_delayed_work(system_unbound_wq, &stats_flush_dwork,
 				   2UL*HZ);
 	return 0;
+free_shrinker:
+	free_shrinker_info(memcg);
 offline_kmem:
 	memcg_offline_kmem(memcg);
 remove_id:
@@ -5303,6 +5312,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
 	page_counter_set_min(&memcg->memory, 0);
 	page_counter_set_low(&memcg->memory, 0);
 
+	memcg_reparent_objcgs(memcg);
 	memcg_offline_kmem(memcg);
 	reparent_shrinker_deferred(memcg);
 	wb_memcg_offline(memcg);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread


* [PATCH v6 04/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
@ 2022-06-21 12:56   ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song

The diagram below shows how to make the folio lruvec lock safe when LRU
pages are reparented.

folio_lruvec_lock(folio)
	rcu_read_lock();
    retry:
	lruvec = folio_lruvec(folio);

        // The folio is reparented at this time.
        spin_lock(&lruvec->lru_lock);

        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
            // Acquired the wrong lruvec lock and need to retry.
            // Because this folio is on the parent memcg lruvec list.
            spin_unlock(&lruvec->lru_lock);
	    goto retry;

        // If we reach here, it means that folio_memcg(folio) is stable.

memcg_reparent_objcgs(memcg)
    // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
    spin_lock(&lruvec->lru_lock);
    spin_lock(&lruvec_parent->lru_lock);

    // Move all the pages from the lruvec list to the parent lruvec list.

    spin_unlock(&lruvec_parent->lru_lock);
    spin_unlock(&lruvec->lru_lock);

After we acquire the lruvec lock, we need to check whether the folio has
been reparented. If so, we need to reacquire the new lruvec lock. The LRU
page reparenting path will also acquire the lruvec lock (implemented in a
later patch), so folio_memcg() cannot change while we hold the lruvec
lock.

Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) once we
hold the lruvec lock, the lruvec_memcg_debug() check is pointless, so
remove it.

This is a preparation for reparenting the LRU pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/memcontrol.h | 18 +++-------------
 mm/compaction.c            | 27 +++++++++++++++++++----
 mm/memcontrol.c            | 53 ++++++++++++++++++++++++++--------------------
 mm/swap.c                  |  5 +++++
 4 files changed, 61 insertions(+), 42 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 111eda6ff1ce..ff3106eca6f3 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -758,7 +758,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
  * folio_lruvec - return lruvec for isolating/putting an LRU folio
  * @folio: Pointer to the folio.
  *
- * This function relies on folio->mem_cgroup being stable.
+ * The lruvec can be changed to its parent lruvec when the page reparented.
+ * The caller need to recheck if it cares about this changes (just like
+ * folio_lruvec_lock() does).
  */
 static inline struct lruvec *folio_lruvec(struct folio *folio)
 {
@@ -777,15 +779,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
 struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 						unsigned long *flags);
 
-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio);
-#else
-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-#endif
-
 static inline
 struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){
 	return css ? container_of(css, struct mem_cgroup, css) : NULL;
@@ -1260,11 +1253,6 @@ static inline struct lruvec *folio_lruvec(struct folio *folio)
 	return &pgdat->__lruvec;
 }
 
-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-
 static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
 {
 	return NULL;
diff --git a/mm/compaction.c b/mm/compaction.c
index 46351a14eed2..fe49ac9aedd8 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -508,6 +508,25 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
+static struct lruvec *
+compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
+				  struct compact_control *cc)
+{
+	struct lruvec *lruvec;
+
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
+	compact_lock_irqsave(&lruvec->lru_lock, flags, cc);
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+		goto retry;
+	}
+	rcu_read_unlock();
+
+	return lruvec;
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -834,6 +853,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 	/* Time to isolate some pages for migration */
 	for (; low_pfn < end_pfn; low_pfn++) {
+		struct folio *folio;
 
 		if (skip_on_failure && low_pfn >= next_skip_pfn) {
 			/*
@@ -1055,18 +1075,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!TestClearPageLRU(page))
 			goto isolate_fail_put;
 
-		lruvec = folio_lruvec(page_folio(page));
+		folio = page_folio(page);
+		lruvec = folio_lruvec(folio);
 
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
 			if (locked)
 				lruvec_unlock_irqrestore(locked, flags);
 
-			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+			lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc);
 			locked = lruvec;
 
-			lruvec_memcg_debug(lruvec, page_folio(page));
-
 			/* Try get exclusive access under lock */
 			if (!skip_updated) {
 				skip_updated = true;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3c489651d312..6f171480b2f2 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1195,23 +1195,6 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 	return ret;
 }
 
-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-	struct mem_cgroup *memcg;
-
-	if (mem_cgroup_disabled())
-		return;
-
-	memcg = folio_memcg(folio);
-
-	if (!memcg)
-		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio);
-	else
-		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio);
-}
-#endif
-
 /**
  * folio_lruvec_lock - Lock the lruvec for a folio.
  * @folio: Pointer to the folio.
@@ -1226,10 +1209,18 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
  */
 struct lruvec *folio_lruvec_lock(struct folio *folio)
 {
-	struct lruvec *lruvec = folio_lruvec(folio);
+	struct lruvec *lruvec;
 
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
 	spin_lock(&lruvec->lru_lock);
-	lruvec_memcg_debug(lruvec, folio);
+
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock(&lruvec->lru_lock);
+		goto retry;
+	}
+	rcu_read_unlock();
 
 	return lruvec;
 }
@@ -1249,10 +1240,18 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
  */
 struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 {
-	struct lruvec *lruvec = folio_lruvec(folio);
+	struct lruvec *lruvec;
 
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
 	spin_lock_irq(&lruvec->lru_lock);
-	lruvec_memcg_debug(lruvec, folio);
+
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock_irq(&lruvec->lru_lock);
+		goto retry;
+	}
+	rcu_read_unlock();
 
 	return lruvec;
 }
@@ -1274,10 +1273,18 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 		unsigned long *flags)
 {
-	struct lruvec *lruvec = folio_lruvec(folio);
+	struct lruvec *lruvec;
 
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
-	lruvec_memcg_debug(lruvec, folio);
+
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+		goto retry;
+	}
+	rcu_read_unlock();
 
 	return lruvec;
 }
diff --git a/mm/swap.c b/mm/swap.c
index 127ef4db394f..987dcbd93ffa 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -337,6 +337,11 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)
 
 void lru_note_cost_folio(struct folio *folio)
 {
+	WARN_ON_ONCE(!rcu_read_lock_held());
+	/*
+	 * The rcu read lock is held by the caller, so we do not need to
+	 * care about the lruvec returned by folio_lruvec() being released.
+	 */
 	lru_note_cost(folio_lruvec(folio), folio_is_file_lru(folio),
 			folio_nr_pages(folio));
 }
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread


* [PATCH v6 05/11] mm: vmscan: rework move_pages_to_lru()
@ 2022-06-21 12:56   ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song

In a later patch, we will reparent the LRU pages. The pages moved to the
appropriate LRU list can be reparented while move_pages_to_lru() is
running, so it is wrong for the caller to hold a single lruvec lock across
the call. Instead, use the more general folio_lruvec_relock_irq() to
acquire the correct lruvec lock for each folio.
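
The resulting calling pattern is: the caller no longer pins one lruvec for
the whole walk; each folio relocks whatever lruvec it currently belongs to,
and the last lock taken is dropped at the end. A condensed sketch of that
pattern (identifiers as used in the diff below; illustrative only, not the
full function):

```c
/*
 * Condensed sketch of the relock pattern move_pages_to_lru() switches to.
 * folio_lruvec_relock_irq() unlocks the previously held lruvec (if it
 * differs from the folio's) and returns the folio's lruvec, locked.
 */
static void walk_folios_sketch(struct list_head *list)
{
	struct lruvec *lruvec = NULL;

	while (!list_empty(list)) {
		struct folio *folio = lru_to_folio(list);

		lruvec = folio_lruvec_relock_irq(folio, lruvec);
		list_del(&folio->lru);
		/* ... operate on the folio under the correct lruvec lock ... */
	}
	if (lruvec)
		lruvec_unlock_irq(lruvec);
}
```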

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 mm/vmscan.c | 39 +++++++++++++++++++--------------------
 1 file changed, 19 insertions(+), 20 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6a554712ef5d..697656151431 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2312,23 +2312,26 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
  * move_pages_to_lru() moves folios from private @list to appropriate LRU list.
  * On return, @list is reused as a list of folios to be freed by the caller.
  *
- * Returns the number of pages moved to the given lruvec.
+ * Returns the number of pages moved to the appropriate LRU list.
+ *
+ * Note: The caller must not hold any lruvec lock.
  */
-static unsigned int move_pages_to_lru(struct lruvec *lruvec,
-				      struct list_head *list)
+static unsigned int move_pages_to_lru(struct list_head *list)
 {
 	int nr_pages, nr_moved = 0;
+	struct lruvec *lruvec = NULL;
 	LIST_HEAD(folios_to_free);
 
 	while (!list_empty(list)) {
 		struct folio *folio = lru_to_folio(list);
 
+		lruvec = folio_lruvec_relock_irq(folio, lruvec);
 		VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 		list_del(&folio->lru);
 		if (unlikely(!folio_evictable(folio))) {
-			spin_unlock_irq(&lruvec->lru_lock);
+			lruvec_unlock_irq(lruvec);
 			folio_putback_lru(folio);
-			spin_lock_irq(&lruvec->lru_lock);
+			lruvec = NULL;
 			continue;
 		}
 
@@ -2349,19 +2352,15 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 			__folio_clear_lru_flags(folio);
 
 			if (unlikely(folio_test_large(folio))) {
-				spin_unlock_irq(&lruvec->lru_lock);
+				lruvec_unlock_irq(lruvec);
 				destroy_large_folio(folio);
-				spin_lock_irq(&lruvec->lru_lock);
+				lruvec = NULL;
 			} else
 				list_add(&folio->lru, &folios_to_free);
 
 			continue;
 		}
 
-		/*
-		 * All pages were isolated from the same lruvec (and isolation
-		 * inhibits memcg migration).
-		 */
 		VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
 		lruvec_add_folio(lruvec, folio);
 		nr_pages = folio_nr_pages(folio);
@@ -2370,6 +2369,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 			workingset_age_nonresident(lruvec, nr_pages);
 	}
 
+	if (lruvec)
+		lruvec_unlock_irq(lruvec);
 	/*
 	 * To save our caller's stack, now use input list for pages to free.
 	 */
@@ -2440,16 +2441,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 	nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
 
-	spin_lock_irq(&lruvec->lru_lock);
-	move_pages_to_lru(lruvec, &page_list);
+	move_pages_to_lru(&page_list);
 
+	local_irq_disable();
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, nr_reclaimed);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
 	__count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
-	spin_unlock_irq(&lruvec->lru_lock);
+	local_irq_enable();
 
 	lru_note_cost(lruvec, file, stat.nr_pageout);
 	mem_cgroup_uncharge_list(&page_list);
@@ -2578,18 +2579,16 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	/*
 	 * Move folios back to the lru list.
 	 */
-	spin_lock_irq(&lruvec->lru_lock);
-
-	nr_activate = move_pages_to_lru(lruvec, &l_active);
-	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
+	nr_activate = move_pages_to_lru(&l_active);
+	nr_deactivate = move_pages_to_lru(&l_inactive);
 	/* Keep all free folios in l_active list */
 	list_splice(&l_inactive, &l_active);
 
+	local_irq_disable();
 	__count_vm_events(PGDEACTIVATE, nr_deactivate);
 	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
-
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
-	spin_unlock_irq(&lruvec->lru_lock);
+	local_irq_enable();
 
 	mem_cgroup_uncharge_list(&l_active);
 	free_unref_page_list(&l_active);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v6 06/11] mm: thp: make split queue lock safe when LRU pages are reparented
@ 2022-06-21 12:56   ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song

Similar to the lruvec lock, use the same approach to make the THP deferred
split queue lock safe when LRU pages are reparented: look up the queue and
take its lock under an RCU read lock, then re-check that the folio still
belongs to that queue before dropping the RCU lock.
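
The caller-visible change is that the split queue is no longer looked up
before taking its lock; the new helpers do the lookup under RCU, take the
lock, and re-check the queue/memcg association (which a concurrent
reparenting can change). A minimal usage sketch, with the helper names
taken from the diff below (illustrative only):

```c
/* Minimal sketch of the new lock/unlock pairing for the split queue. */
static void touch_split_queue_sketch(struct folio *folio)
{
	struct deferred_split *ds_queue;
	unsigned long flags;

	ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
	/* ... manipulate ds_queue->split_queue / split_queue_len safely ... */
	split_queue_unlock_irqrestore(ds_queue, flags);
}
```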

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 include/linux/memcontrol.h |  10 ++++
 mm/huge_memory.c           | 116 +++++++++++++++++++++++++++++++++++----------
 2 files changed, 100 insertions(+), 26 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index ff3106eca6f3..026b62b206b1 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -1691,6 +1691,11 @@ int alloc_shrinker_info(struct mem_cgroup *memcg);
 void free_shrinker_info(struct mem_cgroup *memcg);
 void set_shrinker_bit(struct mem_cgroup *memcg, int nid, int shrinker_id);
 void reparent_shrinker_deferred(struct mem_cgroup *memcg);
+
+static inline int shrinker_id(struct shrinker *shrinker)
+{
+	return shrinker->id;
+}
 #else
 #define mem_cgroup_sockets_enabled 0
 static inline void mem_cgroup_sk_alloc(struct sock *sk) { };
@@ -1704,6 +1709,11 @@ static inline void set_shrinker_bit(struct mem_cgroup *memcg,
 				    int nid, int shrinker_id)
 {
 }
+
+static inline int shrinker_id(struct shrinker *shrinker)
+{
+	return -1;
+}
 #endif
 
 #ifdef CONFIG_MEMCG_KMEM
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 66d9ed8a1289..11ec92783b37 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -558,25 +558,90 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 }
 
 #ifdef CONFIG_MEMCG
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
+		struct deferred_split *queue)
 {
-	struct mem_cgroup *memcg = page_memcg(compound_head(page));
-	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+	if (mem_cgroup_disabled())
+		return NULL;
+	if (&NODE_DATA(folio_nid(folio))->deferred_split_queue == queue)
+		return NULL;
+	return container_of(queue, struct mem_cgroup, deferred_split_queue);
+}
 
-	if (memcg)
-		return &memcg->deferred_split_queue;
-	else
-		return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+	struct mem_cgroup *memcg = folio_memcg(folio);
+
+	return memcg ? &memcg->deferred_split_queue : NULL;
 }
 #else
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
+		struct deferred_split *queue)
 {
-	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+	return NULL;
+}
 
-	return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+	return NULL;
 }
 #endif
 
+static struct deferred_split *folio_split_queue(struct folio *folio)
+{
+	struct deferred_split *queue = folio_memcg_split_queue(folio);
+
+	return queue ? : &NODE_DATA(folio_nid(folio))->deferred_split_queue;
+}
+
+static struct deferred_split *folio_split_queue_lock(struct folio *folio)
+{
+	struct deferred_split *queue;
+
+	rcu_read_lock();
+retry:
+	queue = folio_split_queue(folio);
+	spin_lock(&queue->split_queue_lock);
+
+	if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
+		spin_unlock(&queue->split_queue_lock);
+		goto retry;
+	}
+	rcu_read_unlock();
+
+	return queue;
+}
+
+static struct deferred_split *
+folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
+{
+	struct deferred_split *queue;
+
+	rcu_read_lock();
+retry:
+	queue = folio_split_queue(folio);
+	spin_lock_irqsave(&queue->split_queue_lock, *flags);
+
+	if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
+		goto retry;
+	}
+	rcu_read_unlock();
+
+	return queue;
+}
+
+static inline void split_queue_unlock(struct deferred_split *queue)
+{
+	spin_unlock(&queue->split_queue_lock);
+}
+
+static inline void split_queue_unlock_irqrestore(struct deferred_split *queue,
+						 unsigned long flags)
+{
+	spin_unlock_irqrestore(&queue->split_queue_lock, flags);
+}
+
 void prep_transhuge_page(struct page *page)
 {
 	/*
@@ -2600,7 +2665,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
 	struct folio *folio = page_folio(page);
 	struct page *head = &folio->page;
-	struct deferred_split *ds_queue = get_deferred_split_queue(head);
+	struct deferred_split *ds_queue;
 	XA_STATE(xas, &head->mapping->i_pages, head->index);
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
@@ -2692,13 +2757,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	}
 
 	/* Prevent deferred_split_scan() touching ->_refcount */
-	spin_lock(&ds_queue->split_queue_lock);
+	ds_queue = folio_split_queue_lock(folio);
 	if (page_ref_freeze(head, 1 + extra_pins)) {
 		if (!list_empty(page_deferred_list(head))) {
 			ds_queue->split_queue_len--;
 			list_del(page_deferred_list(head));
 		}
-		spin_unlock(&ds_queue->split_queue_lock);
+		split_queue_unlock(ds_queue);
 		if (mapping) {
 			int nr = thp_nr_pages(head);
 
@@ -2716,7 +2781,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		__split_huge_page(page, list, end);
 		ret = 0;
 	} else {
-		spin_unlock(&ds_queue->split_queue_lock);
+		split_queue_unlock(ds_queue);
 fail:
 		if (mapping)
 			xas_unlock(&xas);
@@ -2740,25 +2805,23 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 
 void free_transhuge_page(struct page *page)
 {
-	struct deferred_split *ds_queue = get_deferred_split_queue(page);
+	struct deferred_split *ds_queue;
 	unsigned long flags;
 
-	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags);
 	if (!list_empty(page_deferred_list(page))) {
 		ds_queue->split_queue_len--;
 		list_del(page_deferred_list(page));
 	}
-	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+	split_queue_unlock_irqrestore(ds_queue, flags);
 	free_compound_page(page);
 }
 
 void deferred_split_huge_page(struct page *page)
 {
-	struct deferred_split *ds_queue = get_deferred_split_queue(page);
-#ifdef CONFIG_MEMCG
-	struct mem_cgroup *memcg = page_memcg(compound_head(page));
-#endif
+	struct deferred_split *ds_queue;
 	unsigned long flags;
+	struct folio *folio = page_folio(page);
 
 	VM_BUG_ON_PAGE(!PageTransHuge(page), page);
 
@@ -2775,18 +2838,19 @@ void deferred_split_huge_page(struct page *page)
 	if (PageSwapCache(page))
 		return;
 
-	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	ds_queue = folio_split_queue_lock_irqsave(folio, &flags);
 	if (list_empty(page_deferred_list(page))) {
+		struct mem_cgroup *memcg;
+
+		memcg = folio_split_queue_memcg(folio, ds_queue);
 		count_vm_event(THP_DEFERRED_SPLIT_PAGE);
 		list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
 		ds_queue->split_queue_len++;
-#ifdef CONFIG_MEMCG
 		if (memcg)
 			set_shrinker_bit(memcg, page_to_nid(page),
-					 deferred_split_shrinker.id);
-#endif
+					 shrinker_id(&deferred_split_shrinker));
 	}
-	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+	split_queue_unlock_irqrestore(ds_queue, flags);
 }
 
 static unsigned long deferred_split_count(struct shrinker *shrink,
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v6 07/11] mm: memcontrol: make all the callers of {folio,page}_memcg() safe
@ 2022-06-21 12:56   ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song

Once the objcg APIs are used to charge LRU pages, a page no longer holds a
reference to the memcg associated with it. The caller of
{folio,page}_memcg() must therefore hold an rcu read lock, or obtain a
reference to that memcg, to protect the memcg from being released. So
introduce get_mem_cgroup_from_{page,folio}() to obtain a reference to the
memory cgroup associated with a page.
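
Concretely, a converted caller ends up using one of two patterns. A minimal
sketch of both (helper names as in the diff below; illustrative only):

```c
/* Pattern 1: a short, non-sleeping section - RCU keeps the memcg alive. */
static void count_event_sketch(struct folio *folio, enum vm_event_item idx)
{
	struct mem_cgroup *memcg;

	rcu_read_lock();
	memcg = folio_memcg(folio);
	if (memcg)
		count_memcg_events(memcg, idx, 1);
	rcu_read_unlock();
}

/* Pattern 2: longer-lived use - take a real reference and drop it later. */
static void longer_lived_sketch(struct folio *folio)
{
	struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);

	/* ... use memcg; may sleep ... */
	mem_cgroup_put(memcg);
}
```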

In this patch, make all the callers either hold an rcu read lock or obtain
a reference to the memcg, so that the memcg cannot be released while the
LRU pages are being reparented.

We do not need to adjust the callers of {folio,page}_memcg() during the
whole process of mem_cgroup_move_task(), because cgroup migration and
memory cgroup offlining are serialized by cgroup_mutex. In this routine,
the LRU pages cannot be reparented to their parent memory cgroup, so the
memcg returned by {folio,page}_memcg() is stable and cannot be released.

This is a preparation for reparenting the LRU pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 fs/buffer.c                      |  4 +--
 fs/fs-writeback.c                | 23 +++++++-------
 include/linux/memcontrol.h       | 66 +++++++++++++++++++++++++++++++++-----
 include/trace/events/writeback.h |  5 +++
 mm/memcontrol.c                  | 68 +++++++++++++++++++++++++++++-----------
 mm/migrate.c                     |  4 +++
 mm/page_io.c                     |  5 +--
 7 files changed, 135 insertions(+), 40 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 898c7f301b1b..04ec53f327e4 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -819,8 +819,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
 	if (retry)
 		gfp |= __GFP_NOFAIL;
 
-	/* The page lock pins the memcg */
-	memcg = page_memcg(page);
+	memcg = get_mem_cgroup_from_page(page);
 	old_memcg = set_active_memcg(memcg);
 
 	head = NULL;
@@ -840,6 +839,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
 		set_bh_page(bh, page, offset);
 	}
 out:
+	mem_cgroup_put(memcg);
 	set_active_memcg(old_memcg);
 	return head;
 /*
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 05221366a16d..1cbac56c810b 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -244,15 +244,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page)
 	if (inode_cgwb_enabled(inode)) {
 		struct cgroup_subsys_state *memcg_css;
 
-		if (page) {
-			memcg_css = mem_cgroup_css_from_page(page);
-			wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
-		} else {
-			/* must pin memcg_css, see wb_get_create() */
+		/* must pin memcg_css, see wb_get_create() */
+		if (page)
+			memcg_css = get_mem_cgroup_css_from_page(page);
+		else
 			memcg_css = task_get_css(current, memory_cgrp_id);
-			wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
-			css_put(memcg_css);
-		}
+		wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
+		css_put(memcg_css);
 	}
 
 	if (!wb)
@@ -869,16 +867,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
 	if (!wbc->wb || wbc->no_cgroup_owner)
 		return;
 
-	css = mem_cgroup_css_from_page(page);
+	css = get_mem_cgroup_css_from_page(page);
 	/* dead cgroups shouldn't contribute to inode ownership arbitration */
 	if (!(css->flags & CSS_ONLINE))
-		return;
+		goto out;
 
 	id = css->id;
 
 	if (id == wbc->wb_id) {
 		wbc->wb_bytes += bytes;
-		return;
+		goto out;
 	}
 
 	if (id == wbc->wb_lcand_id)
@@ -891,6 +889,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
 		wbc->wb_tcand_bytes += bytes;
 	else
 		wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes);
+
+out:
+	css_put(css);
 }
 EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner);
 
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 026b62b206b1..a8bd4bb39502 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -379,7 +379,7 @@ static inline bool folio_memcg_kmem(struct folio *folio);
  * a valid memcg, but can be atomically swapped to the parent memcg.
  *
  * The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
+ * e.g. acquire the rcu_read_lock or objcg_lock or cgroup_mutex.
  */
 static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
 {
@@ -445,8 +445,8 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
  * - lock_page_memcg()
  * - exclusive reference
  *
- * For a kmem folio a caller should hold an rcu read lock to protect memcg
- * associated with a kmem folio from being released.
+ * Note: The caller should hold an rcu read lock to protect memcg associated
+ * with a folio from being released.
  */
 static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 {
@@ -455,12 +455,48 @@ static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 	return __folio_memcg(folio);
 }
 
+/*
+ * page_memcg - Get the memory cgroup associated with a page.
+ * @page: Pointer to the page.
+ *
+ * See the comments in folio_memcg().
+ */
 static inline struct mem_cgroup *page_memcg(struct page *page)
 {
 	return folio_memcg(page_folio(page));
 }
 
-/**
+/*
+ * get_mem_cgroup_from_folio - Obtain a reference on the memory cgroup
+ *			       associated with a folio.
+ * @folio: Pointer to the folio.
+ *
+ * Returns a pointer to the memory cgroup (and obtain a reference on it)
+ * associated with the folio, or NULL. This function assumes that the
+ * folio is known to have a proper memory cgroup pointer. It's not safe
+ * to call this function against some types of pages, e.g. slab pages or
+ * ex-slab pages.
+ */
+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+retry:
+	memcg = folio_memcg(folio);
+	if (unlikely(memcg && !css_tryget(&memcg->css)))
+		goto retry;
+	rcu_read_unlock();
+
+	return memcg;
+}
+
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+	return get_mem_cgroup_from_folio(page_folio(page));
+}
+
+/*
  * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
  * @folio: Pointer to the folio.
  *
@@ -888,7 +924,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
 	return match;
 }
 
-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page);
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page);
 ino_t page_cgroup_ino(struct page *page);
 
 static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
@@ -1058,19 +1094,25 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
 static inline void count_memcg_page_event(struct page *page,
 					  enum vm_event_item idx)
 {
-	struct mem_cgroup *memcg = page_memcg(page);
+	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+	memcg = page_memcg(page);
 	if (memcg)
 		count_memcg_events(memcg, idx, 1);
+	rcu_read_unlock();
 }
 
 static inline void count_memcg_folio_events(struct folio *folio,
 		enum vm_event_item idx, unsigned long nr)
 {
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+	memcg = folio_memcg(folio);
 	if (memcg)
 		count_memcg_events(memcg, idx, nr);
+	rcu_read_unlock();
 }
 
 static inline void count_memcg_event_mm(struct mm_struct *mm,
@@ -1149,6 +1191,16 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 	return NULL;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+	return NULL;
+}
+
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 {
 	WARN_ON_ONCE(!rcu_read_lock_held());
diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
index 86b2a82da546..cdb822339f13 100644
--- a/include/trace/events/writeback.h
+++ b/include/trace/events/writeback.h
@@ -258,6 +258,11 @@ TRACE_EVENT(track_foreign_dirty,
 		__entry->ino		= inode ? inode->i_ino : 0;
 		__entry->memcg_id	= wb->memcg_css->id;
 		__entry->cgroup_ino	= __trace_wb_assign_cgroup(wb);
+		/*
+		 * TP_fast_assign() is under preemption disabled which can
+		 * serve as an RCU read-side critical section so that the
+		 * memcg returned by folio_memcg() cannot be freed.
+		 */
 		__entry->page_cgroup_ino = cgroup_ino(folio_memcg(folio)->css.cgroup);
 	),
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6f171480b2f2..346a954e190e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -369,7 +369,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
 #endif
 
 /**
- * mem_cgroup_css_from_page - css of the memcg associated with a page
+ * get_mem_cgroup_css_from_page - get css of the memcg associated with a page
  * @page: page of interest
  *
  * If memcg is bound to the default hierarchy, css of the memcg associated
@@ -379,13 +379,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
  * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup
  * is returned.
  */
-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page)
 {
 	struct mem_cgroup *memcg;
 
-	memcg = page_memcg(page);
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		return &root_mem_cgroup->css;
 
-	if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
+	memcg = get_mem_cgroup_from_page(page);
+	if (!memcg)
 		memcg = root_mem_cgroup;
 
 	return &memcg->css;
@@ -768,13 +770,13 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
 			     int val)
 {
-	struct page *head = compound_head(page); /* rmap on tail pages */
+	struct folio *folio = page_folio(page); /* rmap on tail pages */
 	struct mem_cgroup *memcg;
 	pg_data_t *pgdat = page_pgdat(page);
 	struct lruvec *lruvec;
 
 	rcu_read_lock();
-	memcg = page_memcg(head);
+	memcg = folio_memcg(folio);
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg) {
 		rcu_read_unlock();
@@ -2056,7 +2058,9 @@ void folio_memcg_lock(struct folio *folio)
 	 * The RCU lock is held throughout the transaction.  The fast
 	 * path can get away without acquiring the memcg->move_lock
 	 * because page moving starts with an RCU grace period.
-         */
+	 *
+	 * The RCU lock also protects the memcg from being freed.
+	 */
 	rcu_read_lock();
 
 	if (mem_cgroup_disabled())
@@ -3353,7 +3357,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 void split_page_memcg(struct page *head, unsigned int nr)
 {
 	struct folio *folio = page_folio(head);
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
 	int i;
 
 	if (mem_cgroup_disabled() || !memcg)
@@ -3366,6 +3370,8 @@ void split_page_memcg(struct page *head, unsigned int nr)
 		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
 	else
 		css_get_many(&memcg->css, nr - 1);
+
+	css_put(&memcg->css);
 }
 
 #ifdef CONFIG_MEMCG_SWAP
@@ -4558,7 +4564,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
 					     struct bdi_writeback *wb)
 {
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
 	struct memcg_cgwb_frn *frn;
 	u64 now = get_jiffies_64();
 	u64 oldest_at = now;
@@ -4605,6 +4611,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
 		frn->memcg_id = wb->memcg_css->id;
 		frn->at = now;
 	}
+	css_put(&memcg->css);
 }
 
 /* issue foreign writeback flushes for recorded foreign dirtying events */
@@ -6167,6 +6174,14 @@ static void mem_cgroup_move_charge(void)
 	atomic_dec(&mc.from->moving_account);
 }
 
+/*
+ * The cgroup migration and memory cgroup offlining are serialized by
+ * @cgroup_mutex. If we reach here, it means that the LRU pages cannot
+ * be reparented to their parent memory cgroup. So during the whole process
+ * of mem_cgroup_move_task(), page_memcg(page) is stable and we do not
+ * need to worry about the memcg (returned from page_memcg()) being
+ * released even if we do not hold an rcu read lock.
+ */
 static void mem_cgroup_move_task(void)
 {
 	if (mc.to) {
@@ -7025,7 +7040,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 	if (folio_memcg(new))
 		return;
 
-	memcg = folio_memcg(old);
+	memcg = get_mem_cgroup_from_folio(old);
 	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
 	if (!memcg)
 		return;
@@ -7044,6 +7059,8 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, folio_nid(new));
 	local_irq_restore(flags);
+
+	css_put(&memcg->css);
 }
 
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7228,6 +7245,10 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;
 
+	/*
+	 * Interrupts should be disabled by the caller (see the comments below),
+	 * which serves as an RCU read-side critical section.
+	 */
 	memcg = folio_memcg(folio);
 
 	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
@@ -7289,19 +7310,21 @@ int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 	struct page_counter *counter;
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
+	int ret = 0;
 
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return 0;
 
+	rcu_read_lock();
 	memcg = folio_memcg(folio);
 
 	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
 	if (!memcg)
-		return 0;
+		goto out;
 
 	if (!entry.val) {
 		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
-		return 0;
+		goto out;
 	}
 
 	memcg = mem_cgroup_id_get_online(memcg);
@@ -7311,7 +7334,8 @@ int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 		memcg_memory_event(memcg, MEMCG_SWAP_MAX);
 		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
 		mem_cgroup_id_put(memcg);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out;
 	}
 
 	/* Get references for the tail pages, too */
@@ -7320,8 +7344,10 @@ int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 	oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages);
 	VM_BUG_ON_FOLIO(oldid, folio);
 	mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
+out:
+	rcu_read_unlock();
 
-	return 0;
+	return ret;
 }
 
 /**
@@ -7366,6 +7392,7 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 bool mem_cgroup_swap_full(struct page *page)
 {
 	struct mem_cgroup *memcg;
+	bool ret = false;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
@@ -7374,19 +7401,24 @@ bool mem_cgroup_swap_full(struct page *page)
 	if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return false;
 
+	rcu_read_lock();
 	memcg = page_memcg(page);
 	if (!memcg)
-		return false;
+		goto out;
 
 	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
 		unsigned long usage = page_counter_read(&memcg->swap);
 
 		if (usage * 2 >= READ_ONCE(memcg->swap.high) ||
-		    usage * 2 >= READ_ONCE(memcg->swap.max))
-			return true;
+		    usage * 2 >= READ_ONCE(memcg->swap.max)) {
+			ret = true;
+			goto out;
+		}
 	}
+out:
+	rcu_read_unlock();
 
-	return false;
+	return ret;
 }
 
 static int __init setup_swap_account(char *s)
diff --git a/mm/migrate.c b/mm/migrate.c
index 1ece23d80bc4..2e49b96fa339 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -451,6 +451,10 @@ int folio_migrate_mapping(struct address_space *mapping,
 		struct lruvec *old_lruvec, *new_lruvec;
 		struct mem_cgroup *memcg;
 
+		/*
+		 * Interrupts are disabled here, which serves as an RCU
+		 * read-side critical section.
+		 */
 		memcg = folio_memcg(folio);
 		old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 		new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
diff --git a/mm/page_io.c b/mm/page_io.c
index 68318134dc92..f75ebbc95ee6 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -222,13 +222,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
 	struct cgroup_subsys_state *css;
 	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
 	memcg = page_memcg(page);
 	if (!memcg)
-		return;
+		goto out;
 
-	rcu_read_lock();
 	css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
 	bio_associate_blkg_from_css(bio, css);
+out:
 	rcu_read_unlock();
 }
 #else
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v6 07/11] mm: memcontrol: make all the callers of {folio,page}_memcg() safe
@ 2022-06-21 12:56   ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm-de/tnXTf+JLsfHDXvbKv3WD2FQJk+8+b,
	hannes-druUgvl0LCNAfugRpC6u6w, longman-H+wXaHxf7aLQT0dZR+AlfA,
	mhocko-DgEjT+Ai2ygdnm+yROfE0A,
	roman.gushchin-fxUVXftIFDnyG1zEObXtfA,
	shakeelb-hpIqsD4AKlfQT0dZR+AlfA
  Cc: cgroups-u79uwXL29TY76Z2rM5mHXA,
	duanxiongchun-EC8Uxl6Npydl57MIdRCFDg,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	linux-mm-Bw31MaZKKs3YtjvyW6yDsg, Muchun Song

When we use objcg APIs to charge the LRU pages, the page will not hold
a reference to the memcg associated with the page. So the caller of the
{folio,page}_memcg() should hold an rcu read lock or obtain a reference
to the memcg associated with the page to protect memcg from being
released. So introduce get_mem_cgroup_from_{page,folio}() to obtain a
reference to the memory cgroup associated with the page.

In this patch, make all the callers hold an rcu read lock or obtain a
reference to the memcg to protect memcg from being released when the LRU
pages reparented.

We do not need to adjust the callers of {folio,page}_memcg() during
the whole process of mem_cgroup_move_task(). Because the cgroup migration
and memory cgroup offlining are serialized by @cgroup_mutex. In this
routine, the LRU pages cannot be reparented to its parent memory cgroup.
So {folio,page}_memcg() is stable and cannot be released.

This is a preparation for reparenting the LRU pages.

Signed-off-by: Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org>
Acked-by: Roman Gushchin <roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org>
---
 fs/buffer.c                      |  4 +--
 fs/fs-writeback.c                | 23 +++++++-------
 include/linux/memcontrol.h       | 66 +++++++++++++++++++++++++++++++++-----
 include/trace/events/writeback.h |  5 +++
 mm/memcontrol.c                  | 68 +++++++++++++++++++++++++++++-----------
 mm/migrate.c                     |  4 +++
 mm/page_io.c                     |  5 +--
 7 files changed, 135 insertions(+), 40 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 898c7f301b1b..04ec53f327e4 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -819,8 +819,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
 	if (retry)
 		gfp |= __GFP_NOFAIL;
 
-	/* The page lock pins the memcg */
-	memcg = page_memcg(page);
+	memcg = get_mem_cgroup_from_page(page);
 	old_memcg = set_active_memcg(memcg);
 
 	head = NULL;
@@ -840,6 +839,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
 		set_bh_page(bh, page, offset);
 	}
 out:
+	mem_cgroup_put(memcg);
 	set_active_memcg(old_memcg);
 	return head;
 /*
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 05221366a16d..1cbac56c810b 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -244,15 +244,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page)
 	if (inode_cgwb_enabled(inode)) {
 		struct cgroup_subsys_state *memcg_css;
 
-		if (page) {
-			memcg_css = mem_cgroup_css_from_page(page);
-			wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
-		} else {
-			/* must pin memcg_css, see wb_get_create() */
+		/* must pin memcg_css, see wb_get_create() */
+		if (page)
+			memcg_css = get_mem_cgroup_css_from_page(page);
+		else
 			memcg_css = task_get_css(current, memory_cgrp_id);
-			wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
-			css_put(memcg_css);
-		}
+		wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
+		css_put(memcg_css);
 	}
 
 	if (!wb)
@@ -869,16 +867,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
 	if (!wbc->wb || wbc->no_cgroup_owner)
 		return;
 
-	css = mem_cgroup_css_from_page(page);
+	css = get_mem_cgroup_css_from_page(page);
 	/* dead cgroups shouldn't contribute to inode ownership arbitration */
 	if (!(css->flags & CSS_ONLINE))
-		return;
+		goto out;
 
 	id = css->id;
 
 	if (id == wbc->wb_id) {
 		wbc->wb_bytes += bytes;
-		return;
+		goto out;
 	}
 
 	if (id == wbc->wb_lcand_id)
@@ -891,6 +889,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
 		wbc->wb_tcand_bytes += bytes;
 	else
 		wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes);
+
+out:
+	css_put(css);
 }
 EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner);
 
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 026b62b206b1..a8bd4bb39502 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -379,7 +379,7 @@ static inline bool folio_memcg_kmem(struct folio *folio);
  * a valid memcg, but can be atomically swapped to the parent memcg.
  *
  * The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
+ * e.g. acquire the rcu_read_lock or objcg_lock or cgroup_mutex.
  */
 static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
 {
@@ -445,8 +445,8 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
  * - lock_page_memcg()
  * - exclusive reference
  *
- * For a kmem folio a caller should hold an rcu read lock to protect memcg
- * associated with a kmem folio from being released.
+ * Note: The caller should hold an rcu read lock to protect memcg associated
+ * with a folio from being released.
  */
 static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 {
@@ -455,12 +455,48 @@ static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 	return __folio_memcg(folio);
 }
 
+/*
+ * page_memcg - Get the memory cgroup associated with a page.
+ * @page: Pointer to the page.
+ *
+ * See the cooments in folio_memcg().
+ */
 static inline struct mem_cgroup *page_memcg(struct page *page)
 {
 	return folio_memcg(page_folio(page));
 }
 
-/**
+/*
+ * get_mem_cgroup_from_folio - Obtain a reference on the memory cgroup
+ *			       associated with a folio.
+ * @folio: Pointer to the folio.
+ *
+ * Returns a pointer to the memory cgroup (and obtain a reference on it)
+ * associated with the folio, or NULL. This function assumes that the
+ * folio is known to have a proper memory cgroup pointer. It's not safe
+ * to call this function against some type of pages, e.g. slab pages or
+ * ex-slab pages.
+ */
+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+retry:
+	memcg = folio_memcg(folio);
+	if (unlikely(memcg && !css_tryget(&memcg->css)))
+		goto retry;
+	rcu_read_unlock();
+
+	return memcg;
+}
+
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+	return get_mem_cgroup_from_folio(page_folio(page));
+}
+
+/*
  * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
  * @folio: Pointer to the folio.
  *
@@ -888,7 +924,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
 	return match;
 }
 
-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page);
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page);
 ino_t page_cgroup_ino(struct page *page);
 
 static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
@@ -1058,19 +1094,25 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
 static inline void count_memcg_page_event(struct page *page,
 					  enum vm_event_item idx)
 {
-	struct mem_cgroup *memcg = page_memcg(page);
+	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+	memcg = page_memcg(page);
 	if (memcg)
 		count_memcg_events(memcg, idx, 1);
+	rcu_read_unlock();
 }
 
 static inline void count_memcg_folio_events(struct folio *folio,
 		enum vm_event_item idx, unsigned long nr)
 {
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+	memcg = folio_memcg(folio);
 	if (memcg)
 		count_memcg_events(memcg, idx, nr);
+	rcu_read_unlock();
 }
 
 static inline void count_memcg_event_mm(struct mm_struct *mm,
@@ -1149,6 +1191,16 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 	return NULL;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+	return NULL;
+}
+
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 {
 	WARN_ON_ONCE(!rcu_read_lock_held());
diff --git a/include/trace/events/writeback.h b/include/trace/events/writeback.h
index 86b2a82da546..cdb822339f13 100644
--- a/include/trace/events/writeback.h
+++ b/include/trace/events/writeback.h
@@ -258,6 +258,11 @@ TRACE_EVENT(track_foreign_dirty,
 		__entry->ino		= inode ? inode->i_ino : 0;
 		__entry->memcg_id	= wb->memcg_css->id;
 		__entry->cgroup_ino	= __trace_wb_assign_cgroup(wb);
+		/*
+		 * TP_fast_assign() is under preemption disabled which can
+		 * serve as an RCU read-side critical section so that the
+		 * memcg returned by folio_memcg() cannot be freed.
+		 */
 		__entry->page_cgroup_ino = cgroup_ino(folio_memcg(folio)->css.cgroup);
 	),
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6f171480b2f2..346a954e190e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -369,7 +369,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
 #endif
 
 /**
- * mem_cgroup_css_from_page - css of the memcg associated with a page
+ * get_mem_cgroup_css_from_page - get css of the memcg associated with a page
  * @page: page of interest
  *
  * If memcg is bound to the default hierarchy, css of the memcg associated
@@ -379,13 +379,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
  * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup
  * is returned.
  */
-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page)
 {
 	struct mem_cgroup *memcg;
 
-	memcg = page_memcg(page);
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		return &root_mem_cgroup->css;
 
-	if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
+	memcg = get_mem_cgroup_from_page(page);
+	if (!memcg)
 		memcg = root_mem_cgroup;
 
 	return &memcg->css;
@@ -768,13 +770,13 @@ void __mod_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 void __mod_lruvec_page_state(struct page *page, enum node_stat_item idx,
 			     int val)
 {
-	struct page *head = compound_head(page); /* rmap on tail pages */
+	struct folio *folio = page_folio(page); /* rmap on tail pages */
 	struct mem_cgroup *memcg;
 	pg_data_t *pgdat = page_pgdat(page);
 	struct lruvec *lruvec;
 
 	rcu_read_lock();
-	memcg = page_memcg(head);
+	memcg = folio_memcg(folio);
 	/* Untracked pages have no memcg, no lruvec. Update only the node */
 	if (!memcg) {
 		rcu_read_unlock();
@@ -2056,7 +2058,9 @@ void folio_memcg_lock(struct folio *folio)
 	 * The RCU lock is held throughout the transaction.  The fast
 	 * path can get away without acquiring the memcg->move_lock
 	 * because page moving starts with an RCU grace period.
-         */
+	 *
+	 * The RCU lock also protects the memcg from being freed.
+	 */
 	rcu_read_lock();
 
 	if (mem_cgroup_disabled())
@@ -3353,7 +3357,7 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 void split_page_memcg(struct page *head, unsigned int nr)
 {
 	struct folio *folio = page_folio(head);
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
 	int i;
 
 	if (mem_cgroup_disabled() || !memcg)
@@ -3366,6 +3370,8 @@ void split_page_memcg(struct page *head, unsigned int nr)
 		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
 	else
 		css_get_many(&memcg->css, nr - 1);
+
+	css_put(&memcg->css);
 }
 
 #ifdef CONFIG_MEMCG_SWAP
@@ -4558,7 +4564,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
 					     struct bdi_writeback *wb)
 {
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
 	struct memcg_cgwb_frn *frn;
 	u64 now = get_jiffies_64();
 	u64 oldest_at = now;
@@ -4605,6 +4611,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
 		frn->memcg_id = wb->memcg_css->id;
 		frn->at = now;
 	}
+	css_put(&memcg->css);
 }
 
 /* issue foreign writeback flushes for recorded foreign dirtying events */
@@ -6167,6 +6174,14 @@ static void mem_cgroup_move_charge(void)
 	atomic_dec(&mc.from->moving_account);
 }
 
+/*
+ * The cgroup migration and memory cgroup offlining are serialized by
+ * @cgroup_mutex. If we reach here, it means that the LRU pages cannot
+ * be reparented to its parent memory cgroup. So during the whole process
+ * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not
+ * need to worry about the memcg (returned from page_memcg()) being
+ * released even if we do not hold an rcu read lock.
+ */
 static void mem_cgroup_move_task(void)
 {
 	if (mc.to) {
@@ -7025,7 +7040,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 	if (folio_memcg(new))
 		return;
 
-	memcg = folio_memcg(old);
+	memcg = get_mem_cgroup_from_folio(old);
 	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
 	if (!memcg)
 		return;
@@ -7044,6 +7059,8 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, folio_nid(new));
 	local_irq_restore(flags);
+
+	css_put(&memcg->css);
 }
 
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7228,6 +7245,10 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;
 
+	/*
+	 * Interrupts should be disabled by the caller (see the comments below),
+	 * which can serve as RCU read-side critical sections.
+	 */
 	memcg = folio_memcg(folio);
 
 	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
@@ -7289,19 +7310,21 @@ int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 	struct page_counter *counter;
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
+	int ret = 0;
 
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return 0;
 
+	rcu_read_lock();
 	memcg = folio_memcg(folio);
 
 	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
 	if (!memcg)
-		return 0;
+		goto out;
 
 	if (!entry.val) {
 		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
-		return 0;
+		goto out;
 	}
 
 	memcg = mem_cgroup_id_get_online(memcg);
@@ -7311,7 +7334,8 @@ int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 		memcg_memory_event(memcg, MEMCG_SWAP_MAX);
 		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
 		mem_cgroup_id_put(memcg);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out;
 	}
 
 	/* Get references for the tail pages, too */
@@ -7320,8 +7344,10 @@ int __mem_cgroup_try_charge_swap(struct folio *folio, swp_entry_t entry)
 	oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages);
 	VM_BUG_ON_FOLIO(oldid, folio);
 	mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
+out:
+	rcu_read_unlock();
 
-	return 0;
+	return ret;
 }
 
 /**
@@ -7366,6 +7392,7 @@ long mem_cgroup_get_nr_swap_pages(struct mem_cgroup *memcg)
 bool mem_cgroup_swap_full(struct page *page)
 {
 	struct mem_cgroup *memcg;
+	bool ret = false;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
@@ -7374,19 +7401,24 @@ bool mem_cgroup_swap_full(struct page *page)
 	if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return false;
 
+	rcu_read_lock();
 	memcg = page_memcg(page);
 	if (!memcg)
-		return false;
+		goto out;
 
 	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
 		unsigned long usage = page_counter_read(&memcg->swap);
 
 		if (usage * 2 >= READ_ONCE(memcg->swap.high) ||
-		    usage * 2 >= READ_ONCE(memcg->swap.max))
-			return true;
+		    usage * 2 >= READ_ONCE(memcg->swap.max)) {
+			ret = true;
+			goto out;
+		}
 	}
+out:
+	rcu_read_unlock();
 
-	return false;
+	return ret;
 }
 
 static int __init setup_swap_account(char *s)
diff --git a/mm/migrate.c b/mm/migrate.c
index 1ece23d80bc4..2e49b96fa339 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -451,6 +451,10 @@ int folio_migrate_mapping(struct address_space *mapping,
 		struct lruvec *old_lruvec, *new_lruvec;
 		struct mem_cgroup *memcg;
 
+		/*
+		 * Interrupts are disabled here, which can serve as an RCU
+		 * read-side critical section.
+		 */
 		memcg = folio_memcg(folio);
 		old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 		new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
diff --git a/mm/page_io.c b/mm/page_io.c
index 68318134dc92..f75ebbc95ee6 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -222,13 +222,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
 	struct cgroup_subsys_state *css;
 	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
 	memcg = page_memcg(page);
 	if (!memcg)
-		return;
+		goto out;
 
-	rcu_read_lock();
 	css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
 	bio_associate_blkg_from_css(bio, css);
+out:
 	rcu_read_unlock();
 }
 #else
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v6 08/11] mm: memcontrol: introduce memcg_reparent_ops
  2022-06-21 12:56 ` Muchun Song
                   ` (7 preceding siblings ...)
  (?)
@ 2022-06-21 12:56 ` Muchun Song
  -1 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song

The previous patch showed how to make the lruvec lock safe when LRU
pages are reparented. We need to do something like the following.

    memcg_reparent_objcgs(memcg)
        1) lock
        // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
        spin_lock(&lruvec->lru_lock);
        spin_lock(&lruvec_parent->lru_lock);

        2) relocate from current memcg to its parent
        // Move all the pages from the lruvec list to the parent lruvec list.

        3) unlock
        spin_unlock(&lruvec_parent->lru_lock);
        spin_unlock(&lruvec->lru_lock);

Apart from the page lruvec lock, the deferred split queue lock (THP only)
also needs to do something similar. So we extract the three necessary steps
into a set of callbacks (struct memcg_reparent_ops) invoked from
memcg_reparent_objcgs().

    memcg_reparent_objcgs(memcg)
        1) lock
        memcg_reparent_ops->lock(memcg, parent);

        2) relocate
        memcg_reparent_ops->relocate(memcg, parent);

        3) unlock
        memcg_reparent_ops->unlock(memcg, parent);

Now there are two different locks (the lruvec lock and the deferred split
queue lock) that need to use this infrastructure. In the next patch, we will
use these APIs to make both locks safe when the LRU pages are reparented.
A minimal sketch of how a lock plugs into this infrastructure follows.
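
For illustration only (not part of this patch): a minimal sketch of a
hypothetical per-memcg "foo_lock" whose protected state should follow LRU
pages across a reparent. The foo_lock field and the callback bodies are
assumptions made up for the sketch; only the DEFINE_MEMCG_REPARENT_OPS()
plumbing comes from this series.

    /* Hypothetical callbacks; interrupts are already disabled by the caller. */
    static void foo_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst)
    {
            spin_lock(&src->foo_lock);
            spin_lock_nested(&dst->foo_lock, SINGLE_DEPTH_NESTING);
    }

    static void foo_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst)
    {
            /* Move the foo state from the child (src) to its parent (dst). */
    }

    static void foo_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst)
    {
            spin_unlock(&dst->foo_lock);
            spin_unlock(&src->foo_lock);
    }

    static DEFINE_MEMCG_REPARENT_OPS(foo);

    /* Finally, add &memcg_foo_reparent_ops to the memcg_reparent_ops[] array. */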

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/memcontrol.h | 20 +++++++++++++++
 mm/memcontrol.c            | 62 ++++++++++++++++++++++++++++++++++++----------
 2 files changed, 69 insertions(+), 13 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a8bd4bb39502..63dbdef60cbd 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -353,6 +353,26 @@ struct mem_cgroup {
 	struct mem_cgroup_per_node *nodeinfo[];
 };
 
+struct memcg_reparent_ops {
+	/*
+	 * Note that interrupt is disabled before calling those callbacks,
+	 * so the interrupt should remain disabled when leaving those callbacks.
+	 */
+	void (*lock)(struct mem_cgroup *src, struct mem_cgroup *dst);
+	void (*relocate)(struct mem_cgroup *src, struct mem_cgroup *dst);
+	void (*unlock)(struct mem_cgroup *src, struct mem_cgroup *dst);
+};
+
+#define DEFINE_MEMCG_REPARENT_OPS(name)					\
+	const struct memcg_reparent_ops memcg_##name##_reparent_ops = {	\
+		.lock		= name##_reparent_lock,			\
+		.relocate	= name##_reparent_relocate,		\
+		.unlock		= name##_reparent_unlock,		\
+	}
+
+#define DECLARE_MEMCG_REPARENT_OPS(name)				\
+	extern const struct memcg_reparent_ops memcg_##name##_reparent_ops
+
 /*
  * size of first charge trial. "32" comes from vmscan.c's magic value.
  * TODO: maybe necessary to use big numbers in big irons.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 346a954e190e..6ef3a264054e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -335,24 +335,60 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
 	return objcg;
 }
 
-static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
+static void objcg_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+	spin_lock(&objcg_lock);
+}
+
+static void objcg_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst)
 {
 	struct obj_cgroup *objcg, *iter;
-	struct mem_cgroup *parent = parent_mem_cgroup(memcg);
 
-	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
+	objcg = rcu_replace_pointer(src->objcg, NULL, true);
+	/* 1) Ready to reparent active objcg. */
+	list_add(&objcg->list, &src->objcg_list);
+	/* 2) Reparent active objcg and already reparented objcgs to dst. */
+	list_for_each_entry(iter, &src->objcg_list, list)
+		WRITE_ONCE(iter->memcg, dst);
+	/* 3) Move already reparented objcgs to the dst's list */
+	list_splice(&src->objcg_list, &dst->objcg_list);
+}
+
+static void objcg_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+	spin_unlock(&objcg_lock);
+}
 
-	spin_lock_irq(&objcg_lock);
+static DEFINE_MEMCG_REPARENT_OPS(objcg);
 
-	/* 1) Ready to reparent active objcg. */
-	list_add(&objcg->list, &memcg->objcg_list);
-	/* 2) Reparent active objcg and already reparented objcgs to parent. */
-	list_for_each_entry(iter, &memcg->objcg_list, list)
-		WRITE_ONCE(iter->memcg, parent);
-	/* 3) Move already reparented objcgs to the parent's list */
-	list_splice(&memcg->objcg_list, &parent->objcg_list);
-
-	spin_unlock_irq(&objcg_lock);
+static const struct memcg_reparent_ops *memcg_reparent_ops[] = {
+	&memcg_objcg_reparent_ops,
+};
+
+#define DEFINE_MEMCG_REPARENT_FUNC(phase)				\
+	static void memcg_reparent_##phase(struct mem_cgroup *src,	\
+					   struct mem_cgroup *dst)	\
+	{								\
+		int i;							\
+									\
+		for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++)	\
+			memcg_reparent_ops[i]->phase(src, dst);		\
+	}
+
+DEFINE_MEMCG_REPARENT_FUNC(lock)
+DEFINE_MEMCG_REPARENT_FUNC(relocate)
+DEFINE_MEMCG_REPARENT_FUNC(unlock)
+
+static void memcg_reparent_objcgs(struct mem_cgroup *src)
+{
+	struct mem_cgroup *dst = parent_mem_cgroup(src);
+	struct obj_cgroup *objcg = rcu_dereference_protected(src->objcg, true);
+
+	local_irq_disable();
+	memcg_reparent_lock(src, dst);
+	memcg_reparent_relocate(src, dst);
+	memcg_reparent_unlock(src, dst);
+	local_irq_enable();
 
 	percpu_ref_kill(&objcg->refcnt);
 }
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v6 09/11] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
@ 2022-06-21 12:56   ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song,
	Michal Koutný

We will reuse the obj_cgroup APIs to charge the LRU pages. After this
change, page->memcg_data will have two different meanings.

  - For slab pages, page->memcg_data points to an object cgroups
    vector.

  - For kmem pages (excluding slab pages) and LRU pages,
    page->memcg_data points to an object cgroup.

In this patch, we reuse the obj_cgroup APIs to charge the LRU pages. As a
result, long-living page cache pages will no longer pin the original
memory cgroup in memory.

At the same time, we also change the rules for the stability of the page
and objcg or memcg binding. The new rules are as follows.

For a page any of the following ensures page and objcg binding stability:

  - the page lock
  - LRU isolation
  - lock_page_memcg()
  - exclusive reference

Based on the stable binding of page and objcg, for a page any of the
following ensures page and memcg binding stability:

  - objcg_lock
  - cgroup_mutex
  - the lruvec lock
  - the split queue lock (only THP page)

If the caller only wants to ensure that the page counters of the memcg are
updated correctly, the binding stability of page and objcg is sufficient.
A minimal usage sketch under these rules is shown below.
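
For illustration only (not part of this patch): a sketch of a reader that
needs a short-lived stable memcg pointer under these rules. The helper name
folio_memcg_stat() is made up for this example; the folio lock (held by the
caller) keeps the folio and objcg binding stable, and the RCU read lock
keeps the memcg returned by folio_memcg() from being freed while it is used.

    static unsigned long folio_memcg_stat(struct folio *folio)
    {
            struct mem_cgroup *memcg;
            unsigned long usage = 0;

            VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);

            rcu_read_lock();
            memcg = folio_memcg(folio);
            if (memcg)
                    usage = page_counter_read(&memcg->memory);
            rcu_read_unlock();

            return usage;
    }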

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Michal Koutný <mkoutny@suse.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 include/linux/memcontrol.h |  89 +++++--------
 mm/huge_memory.c           |  35 +++++
 mm/memcontrol.c            | 317 ++++++++++++++++++++++++++++++---------------
 3 files changed, 282 insertions(+), 159 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 63dbdef60cbd..744cde2b2368 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -392,8 +392,6 @@ enum page_memcg_data_flags {
 
 #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1)
 
-static inline bool folio_memcg_kmem(struct folio *folio);
-
 /*
  * After the initialization objcg->memcg is always pointing at
  * a valid memcg, but can be atomically swapped to the parent memcg.
@@ -407,43 +405,19 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
 }
 
 /*
- * __folio_memcg - Get the memory cgroup associated with a non-kmem folio
- * @folio: Pointer to the folio.
- *
- * Returns a pointer to the memory cgroup associated with the folio,
- * or NULL. This function assumes that the folio is known to have a
- * proper memory cgroup pointer. It's not safe to call this function
- * against some type of folios, e.g. slab folios or ex-slab folios or
- * kmem folios.
- */
-static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
-{
-	unsigned long memcg_data = folio->memcg_data;
-
-	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
-
-	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-}
-
-/*
- * __folio_objcg - get the object cgroup associated with a kmem folio.
+ * folio_objcg - get the object cgroup associated with a folio.
  * @folio: Pointer to the folio.
  *
  * Returns a pointer to the object cgroup associated with the folio,
  * or NULL. This function assumes that the folio is known to have a
- * proper object cgroup pointer. It's not safe to call this function
- * against some type of folios, e.g. slab folios or ex-slab folios or
- * LRU folios.
+ * proper object cgroup pointer.
  */
-static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
+static inline struct obj_cgroup *folio_objcg(struct folio *folio)
 {
 	unsigned long memcg_data = folio->memcg_data;
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
 	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
-	VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);
 
 	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
@@ -457,22 +431,33 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
  * proper memory cgroup pointer. It's not safe to call this function
  * against some type of folios, e.g. slab folios or ex-slab folios.
  *
- * For a non-kmem folio any of the following ensures folio and memcg binding
- * stability:
+ * For a folio any of the following ensures folio and objcg binding stability:
  *
  * - the folio lock
  * - LRU isolation
  * - lock_page_memcg()
  * - exclusive reference
  *
+ * Based on the stable binding of folio and objcg, for a folio any of the
+ * following ensures folio and memcg binding stability:
+ *
+ * - objcg_lock
+ * - cgroup_mutex
+ * - the lruvec lock
+ * - the split queue lock (only THP page)
+ *
+ * If the caller only wants to ensure that the page counters of memcg are
+ * updated correctly, the binding stability of folio and objcg is
+ * sufficient.
+ *
  * Note: The caller should hold an rcu read lock to protect memcg associated
  * with a folio from being released.
  */
 static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 {
-	if (folio_memcg_kmem(folio))
-		return obj_cgroup_memcg(__folio_objcg(folio));
-	return __folio_memcg(folio);
+	struct obj_cgroup *objcg = folio_objcg(folio);
+
+	return objcg ? obj_cgroup_memcg(objcg) : NULL;
 }
 
 /*
@@ -496,6 +481,8 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
  * folio is known to have a proper memory cgroup pointer. It's not safe
  * to call this function against some type of pages, e.g. slab pages or
  * ex-slab pages.
+ *
+ * See folio_memcg() for the page and objcg or memcg binding rules.
  */
 static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
 {
@@ -526,22 +513,20 @@ static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
  *
  * Return: A pointer to the memory cgroup associated with the folio,
  * or NULL.
+ *
+ * See folio_memcg() for the folio and objcg or memcg binding rules.
  */
 static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 {
 	unsigned long memcg_data = READ_ONCE(folio->memcg_data);
+	struct obj_cgroup *objcg;
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
 	WARN_ON_ONCE(!rcu_read_lock_held());
 
-	if (memcg_data & MEMCG_DATA_KMEM) {
-		struct obj_cgroup *objcg;
+	objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 
-		objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-		return obj_cgroup_memcg(objcg);
-	}
-
-	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+	return objcg ? obj_cgroup_memcg(objcg) : NULL;
 }
 
 /*
@@ -554,16 +539,10 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
  * has an associated memory cgroup pointer or an object cgroups vector or
  * an object cgroup.
  *
- * For a non-kmem page any of the following ensures page and memcg binding
- * stability:
+ * See page_memcg() for the page and objcg or memcg binding rules.
  *
- * - the page lock
- * - LRU isolation
- * - lock_page_memcg()
- * - exclusive reference
- *
- * For a kmem page a caller should hold an rcu read lock to protect memcg
- * associated with a kmem page from being released.
+ * A caller should hold an rcu read lock to protect memcg associated with a
+ * page from being released.
  */
 static inline struct mem_cgroup *page_memcg_check(struct page *page)
 {
@@ -572,18 +551,14 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 	 * for slab pages, READ_ONCE() should be used here.
 	 */
 	unsigned long memcg_data = READ_ONCE(page->memcg_data);
+	struct obj_cgroup *objcg;
 
 	if (memcg_data & MEMCG_DATA_OBJCGS)
 		return NULL;
 
-	if (memcg_data & MEMCG_DATA_KMEM) {
-		struct obj_cgroup *objcg;
-
-		objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-		return obj_cgroup_memcg(objcg);
-	}
+	objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 
-	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+	return objcg ? obj_cgroup_memcg(objcg) : NULL;
 }
 
 static inline struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 11ec92783b37..4f276ae6e840 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -558,6 +558,8 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 }
 
 #ifdef CONFIG_MEMCG
+static struct shrinker deferred_split_shrinker;
+
 static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
 		struct deferred_split *queue)
 {
@@ -574,6 +576,39 @@ static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio
 
 	return memcg ? &memcg->deferred_split_queue : NULL;
 }
+
+static void thp_sq_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+	spin_lock(&src->deferred_split_queue.split_queue_lock);
+	spin_lock_nested(&dst->deferred_split_queue.split_queue_lock,
+			 SINGLE_DEPTH_NESTING);
+}
+
+static void thp_sq_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+	int nid;
+	struct deferred_split *src_queue, *dst_queue;
+
+	src_queue = &src->deferred_split_queue;
+	dst_queue = &dst->deferred_split_queue;
+
+	if (!src_queue->split_queue_len)
+		return;
+
+	list_splice_tail_init(&src_queue->split_queue, &dst_queue->split_queue);
+	dst_queue->split_queue_len += src_queue->split_queue_len;
+	src_queue->split_queue_len = 0;
+
+	for_each_node(nid)
+		set_shrinker_bit(dst, nid, deferred_split_shrinker.id);
+}
+
+static void thp_sq_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+	spin_unlock(&dst->deferred_split_queue.split_queue_lock);
+	spin_unlock(&src->deferred_split_queue.split_queue_lock);
+}
+DEFINE_MEMCG_REPARENT_OPS(thp_sq);
 #else
 static inline struct mem_cgroup *folio_split_queue_memcg(struct folio *folio,
 		struct deferred_split *queue)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6ef3a264054e..803dbdf5f233 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -77,6 +77,7 @@ struct cgroup_subsys memory_cgrp_subsys __read_mostly;
 EXPORT_SYMBOL(memory_cgrp_subsys);
 
 struct mem_cgroup *root_mem_cgroup __read_mostly;
+static struct obj_cgroup *root_obj_cgroup __read_mostly;
 
 /* Active memory cgroup to use from an interrupt context */
 DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg);
@@ -254,6 +255,11 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
 
 static DEFINE_SPINLOCK(objcg_lock);
 
+static inline bool obj_cgroup_is_root(struct obj_cgroup *objcg)
+{
+	return objcg == root_obj_cgroup;
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 bool mem_cgroup_kmem_disabled(void)
 {
@@ -361,8 +367,78 @@ static void objcg_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst
 
 static DEFINE_MEMCG_REPARENT_OPS(objcg);
 
+static void lruvec_reparent_lock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+	int nid, nest = 0;
+
+	for_each_node(nid) {
+		spin_lock_nested(&mem_cgroup_lruvec(src,
+				 NODE_DATA(nid))->lru_lock, nest++);
+		spin_lock_nested(&mem_cgroup_lruvec(dst,
+				 NODE_DATA(nid))->lru_lock, nest++);
+	}
+}
+
+static void lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst,
+				enum lru_list lru)
+{
+	int zid;
+	struct mem_cgroup_per_node *mz_src, *mz_dst;
+
+	mz_src = container_of(src, struct mem_cgroup_per_node, lruvec);
+	mz_dst = container_of(dst, struct mem_cgroup_per_node, lruvec);
+
+	if (lru != LRU_UNEVICTABLE)
+		list_splice_tail_init(&src->lists[lru], &dst->lists[lru]);
+
+	for (zid = 0; zid < MAX_NR_ZONES; zid++) {
+		mz_dst->lru_zone_size[zid][lru] += mz_src->lru_zone_size[zid][lru];
+		mz_src->lru_zone_size[zid][lru] = 0;
+	}
+}
+
+static void lruvec_reparent_relocate(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+	int nid;
+
+	for_each_node(nid) {
+		enum lru_list lru;
+		struct lruvec *src_lruvec, *dst_lruvec;
+
+		src_lruvec = mem_cgroup_lruvec(src, NODE_DATA(nid));
+		dst_lruvec = mem_cgroup_lruvec(dst, NODE_DATA(nid));
+
+		dst_lruvec->anon_cost += src_lruvec->anon_cost;
+		dst_lruvec->file_cost += src_lruvec->file_cost;
+
+		for_each_lru(lru)
+			lruvec_reparent_lru(src_lruvec, dst_lruvec, lru);
+	}
+}
+
+static void lruvec_reparent_unlock(struct mem_cgroup *src, struct mem_cgroup *dst)
+{
+	int nid;
+
+	for_each_node(nid) {
+		spin_unlock(&mem_cgroup_lruvec(dst, NODE_DATA(nid))->lru_lock);
+		spin_unlock(&mem_cgroup_lruvec(src, NODE_DATA(nid))->lru_lock);
+	}
+}
+
+static DEFINE_MEMCG_REPARENT_OPS(lruvec);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+DECLARE_MEMCG_REPARENT_OPS(thp_sq);
+#endif
+
+/* The lock order depends on the order of elements in this array. */
 static const struct memcg_reparent_ops *memcg_reparent_ops[] = {
 	&memcg_objcg_reparent_ops,
+	&memcg_lruvec_reparent_ops,
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	&memcg_thp_sq_reparent_ops,
+#endif
 };
 
 #define DEFINE_MEMCG_REPARENT_FUNC(phase)				\
@@ -2825,18 +2901,33 @@ static inline void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages
 		page_counter_uncharge(&memcg->memsw, nr_pages);
 }
 
-static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
+static void commit_charge(struct folio *folio, struct obj_cgroup *objcg)
 {
-	VM_BUG_ON_FOLIO(folio_memcg(folio), folio);
+	VM_BUG_ON_FOLIO(folio_objcg(folio), folio);
 	/*
-	 * Any of the following ensures page's memcg stability:
+	 * Any of the following ensures page's objcg stability:
 	 *
 	 * - the page lock
 	 * - LRU isolation
 	 * - lock_page_memcg()
 	 * - exclusive reference
 	 */
-	folio->memcg_data = (unsigned long)memcg;
+	folio->memcg_data = (unsigned long)objcg;
+}
+
+static struct obj_cgroup *get_obj_cgroup_from_memcg(struct mem_cgroup *memcg)
+{
+	struct obj_cgroup *objcg = NULL;
+
+	rcu_read_lock();
+	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+		objcg = rcu_dereference(memcg->objcg);
+		if (objcg && obj_cgroup_tryget(objcg))
+			break;
+	}
+	rcu_read_unlock();
+
+	return objcg;
 }
 
 #ifdef CONFIG_MEMCG_KMEM
@@ -2982,19 +3073,6 @@ struct mem_cgroup *mem_cgroup_from_slab_obj(void *p)
 	return mem_cgroup_from_obj_folio(virt_to_folio(p), p);
 }
 
-static struct obj_cgroup *__get_obj_cgroup_from_memcg(struct mem_cgroup *memcg)
-{
-	struct obj_cgroup *objcg = NULL;
-
-	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
-		objcg = rcu_dereference(memcg->objcg);
-		if (objcg && obj_cgroup_tryget(objcg))
-			break;
-		objcg = NULL;
-	}
-	return objcg;
-}
-
 __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
 {
 	struct obj_cgroup *objcg = NULL;
@@ -3008,7 +3086,16 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
 		memcg = active_memcg();
 	else
 		memcg = mem_cgroup_from_task(current);
-	objcg = __get_obj_cgroup_from_memcg(memcg);
+
+	if (mem_cgroup_is_root(memcg))
+		goto out;
+
+	objcg = get_obj_cgroup_from_memcg(memcg);
+	if (obj_cgroup_is_root(objcg)) {
+		obj_cgroup_put(objcg);
+		objcg = NULL;
+	}
+out:
 	rcu_read_unlock();
 	return objcg;
 }
@@ -3020,20 +3107,10 @@ struct obj_cgroup *get_obj_cgroup_from_page(struct page *page)
 	if (!memcg_kmem_enabled() || memcg_kmem_bypass())
 		return NULL;
 
-	if (PageMemcgKmem(page)) {
-		objcg = __folio_objcg(page_folio(page));
+	objcg = folio_objcg(page_folio(page));
+	if (objcg)
 		obj_cgroup_get(objcg);
-	} else {
-		struct mem_cgroup *memcg;
 
-		rcu_read_lock();
-		memcg = __folio_memcg(page_folio(page));
-		if (memcg)
-			objcg = __get_obj_cgroup_from_memcg(memcg);
-		else
-			objcg = NULL;
-		rcu_read_unlock();
-	}
 	return objcg;
 }
 
@@ -3128,13 +3205,13 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 void __memcg_kmem_uncharge_page(struct page *page, int order)
 {
 	struct folio *folio = page_folio(page);
-	struct obj_cgroup *objcg;
+	struct obj_cgroup *objcg = folio_objcg(folio);
 	unsigned int nr_pages = 1 << order;
 
-	if (!folio_memcg_kmem(folio))
+	if (!objcg)
 		return;
 
-	objcg = __folio_objcg(folio);
+	VM_BUG_ON_FOLIO(!folio_memcg_kmem(folio), folio);
 	obj_cgroup_uncharge_pages(objcg, nr_pages);
 	folio->memcg_data = 0;
 	obj_cgroup_put(objcg);
@@ -3388,26 +3465,21 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 #endif /* CONFIG_MEMCG_KMEM */
 
 /*
- * Because page_memcg(head) is not set on tails, set it now.
+ * Because page_objcg(head) is not set on tails, set it now.
  */
 void split_page_memcg(struct page *head, unsigned int nr)
 {
 	struct folio *folio = page_folio(head);
-	struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
+	struct obj_cgroup *objcg = folio_objcg(folio);
 	int i;
 
-	if (mem_cgroup_disabled() || !memcg)
+	if (mem_cgroup_disabled() || !objcg)
 		return;
 
 	for (i = 1; i < nr; i++)
 		folio_page(folio, i)->memcg_data = folio->memcg_data;
 
-	if (folio_memcg_kmem(folio))
-		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
-	else
-		css_get_many(&memcg->css, nr - 1);
-
-	css_put(&memcg->css);
+	obj_cgroup_get_many(objcg, nr - 1);
 }
 
 #ifdef CONFIG_MEMCG_SWAP
@@ -5325,6 +5397,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	objcg->memcg = memcg;
 	rcu_assign_pointer(memcg->objcg, objcg);
 
+	if (unlikely(mem_cgroup_is_root(memcg)))
+		root_obj_cgroup = objcg;
+
 	/* Online state pins memcg ID, memcg ID pins CSS */
 	refcount_set(&memcg->id.ref, 1);
 	css_get(css);
@@ -5729,10 +5804,12 @@ static int mem_cgroup_move_account(struct page *page,
 	 */
 	smp_mb();
 
-	css_get(&to->css);
-	css_put(&from->css);
+	rcu_read_lock();
+	obj_cgroup_get(rcu_dereference(to->objcg));
+	obj_cgroup_put(rcu_dereference(from->objcg));
+	rcu_read_unlock();
 
-	folio->memcg_data = (unsigned long)to;
+	folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg);
 
 	__folio_memcg_unlock(from);
 
@@ -6208,6 +6285,42 @@ static void mem_cgroup_move_charge(void)
 	walk_page_range(mc.mm, 0, ULONG_MAX, &charge_walk_ops, NULL);
 	mmap_read_unlock(mc.mm);
 	atomic_dec(&mc.from->moving_account);
+
+	/*
+	 * Moving its pages to another memcg is finished. Wait for already
+	 * started RCU-only updates to finish to make sure that the caller
+	 * of lock_page_memcg() can unlock the correct move_lock. A
+	 * possible bad scenario looks like this:
+	 *
+	 * CPU0:				CPU1:
+	 * mem_cgroup_move_charge()
+	 *     walk_page_range()
+	 *
+	 *					lock_page_memcg(page)
+	 *					    memcg = folio_memcg()
+	 *					    spin_lock_irqsave(&memcg->move_lock)
+	 *					    memcg->move_lock_task = current
+	 *
+	 *     atomic_dec(&mc.from->moving_account)
+	 *
+	 * mem_cgroup_css_offline()
+	 *     memcg_offline_kmem()
+	 *         memcg_reparent_objcgs() <== reparented
+	 *
+	 *					unlock_page_memcg(page)
+	 *					    memcg = folio_memcg() <== memcg has been changed
+	 *					    if (memcg->move_lock_task == current) <== false
+	 *					        spin_unlock_irqrestore(&memcg->move_lock)
+	 *
+	 * Once mem_cgroup_move_charge() returns (it means that the cgroup_mutex
+	 * would be released soon), the page can be reparented to its parent
+	 * memcg. When unlock_page_memcg() is called for the page, we would
+	 * miss unlocking the move_lock. So use synchronize_rcu() to wait for
+	 * already started RCU-only updates to finish before this function
+	 * returns (mem_cgroup_move_charge() and mem_cgroup_css_offline() are
+	 * serialized by cgroup_mutex).
+	 */
+	synchronize_rcu();
 }
 
 /*
@@ -6822,21 +6935,26 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg,
 			gfp_t gfp)
 {
+	struct obj_cgroup *objcg;
 	long nr_pages = folio_nr_pages(folio);
-	int ret;
+	int ret = 0;
 
-	ret = try_charge(memcg, gfp, nr_pages);
+	objcg = get_obj_cgroup_from_memcg(memcg);
+	/* Do not account at the root objcg level. */
+	if (!obj_cgroup_is_root(objcg))
+		ret = try_charge(memcg, gfp, nr_pages);
 	if (ret)
 		goto out;
 
-	css_get(&memcg->css);
-	commit_charge(folio, memcg);
+	obj_cgroup_get(objcg);
+	commit_charge(folio, objcg);
 
 	local_irq_disable();
 	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, folio_nid(folio));
 	local_irq_enable();
 out:
+	obj_cgroup_put(objcg);
 	return ret;
 }
 
@@ -6922,7 +7040,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
 }
 
 struct uncharge_gather {
-	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 	unsigned long nr_memory;
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
@@ -6937,63 +7055,56 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug)
 static void uncharge_batch(const struct uncharge_gather *ug)
 {
 	unsigned long flags;
+	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(ug->objcg);
 	if (ug->nr_memory) {
-		page_counter_uncharge(&ug->memcg->memory, ug->nr_memory);
+		page_counter_uncharge(&memcg->memory, ug->nr_memory);
 		if (do_memsw_account())
-			page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory);
+			page_counter_uncharge(&memcg->memsw, ug->nr_memory);
 		if (ug->nr_kmem)
-			memcg_account_kmem(ug->memcg, -ug->nr_kmem);
-		memcg_oom_recover(ug->memcg);
+			memcg_account_kmem(memcg, -ug->nr_kmem);
+		memcg_oom_recover(memcg);
 	}
 
 	local_irq_save(flags);
-	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
-	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
-	memcg_check_events(ug->memcg, ug->nid);
+	__count_memcg_events(memcg, PGPGOUT, ug->pgpgout);
+	__this_cpu_add(memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
+	memcg_check_events(memcg, ug->nid);
 	local_irq_restore(flags);
+	rcu_read_unlock();
 
 	/* drop reference from uncharge_folio */
-	css_put(&ug->memcg->css);
+	obj_cgroup_put(ug->objcg);
 }
 
 static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
 {
 	long nr_pages;
-	struct mem_cgroup *memcg;
 	struct obj_cgroup *objcg;
 
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
 	/*
 	 * Nobody should be changing or seriously looking at
-	 * folio memcg or objcg at this point, we have fully
-	 * exclusive access to the folio.
+	 * folio objcg at this point, we have fully exclusive
+	 * access to the folio.
 	 */
-	if (folio_memcg_kmem(folio)) {
-		objcg = __folio_objcg(folio);
-		/*
-		 * This get matches the put at the end of the function and
-		 * kmem pages do not hold memcg references anymore.
-		 */
-		memcg = get_mem_cgroup_from_objcg(objcg);
-	} else {
-		memcg = __folio_memcg(folio);
-	}
-
-	if (!memcg)
+	objcg = folio_objcg(folio);
+	if (!objcg)
 		return;
 
-	if (ug->memcg != memcg) {
-		if (ug->memcg) {
+	if (ug->objcg != objcg) {
+		if (ug->objcg) {
 			uncharge_batch(ug);
 			uncharge_gather_clear(ug);
 		}
-		ug->memcg = memcg;
+		ug->objcg = objcg;
 		ug->nid = folio_nid(folio);
 
-		/* pairs with css_put in uncharge_batch */
-		css_get(&memcg->css);
+		/* pairs with obj_cgroup_put in uncharge_batch */
+		obj_cgroup_get(objcg);
 	}
 
 	nr_pages = folio_nr_pages(folio);
@@ -7001,19 +7112,15 @@ static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
 	if (folio_memcg_kmem(folio)) {
 		ug->nr_memory += nr_pages;
 		ug->nr_kmem += nr_pages;
-
-		folio->memcg_data = 0;
-		obj_cgroup_put(objcg);
 	} else {
 		/* LRU pages aren't accounted at the root level */
-		if (!mem_cgroup_is_root(memcg))
+		if (!obj_cgroup_is_root(objcg))
 			ug->nr_memory += nr_pages;
 		ug->pgpgout++;
-
-		folio->memcg_data = 0;
 	}
 
-	css_put(&memcg->css);
+	folio->memcg_data = 0;
+	obj_cgroup_put(objcg);
 }
 
 void __mem_cgroup_uncharge(struct folio *folio)
@@ -7021,7 +7128,7 @@ void __mem_cgroup_uncharge(struct folio *folio)
 	struct uncharge_gather ug;
 
 	/* Don't touch folio->lru of any random page, pre-check: */
-	if (!folio_memcg(folio))
+	if (!folio_objcg(folio))
 		return;
 
 	uncharge_gather_clear(&ug);
@@ -7044,7 +7151,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list)
 	uncharge_gather_clear(&ug);
 	list_for_each_entry(folio, page_list, lru)
 		uncharge_folio(folio, &ug);
-	if (ug.memcg)
+	if (ug.objcg)
 		uncharge_batch(&ug);
 }
 
@@ -7061,6 +7168,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list)
 void mem_cgroup_migrate(struct folio *old, struct folio *new)
 {
 	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 	long nr_pages = folio_nr_pages(new);
 	unsigned long flags;
 
@@ -7073,30 +7181,33 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 		return;
 
 	/* Page cache replacement: new folio already charged? */
-	if (folio_memcg(new))
+	if (folio_objcg(new))
 		return;
 
-	memcg = get_mem_cgroup_from_folio(old);
-	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
-	if (!memcg)
+	objcg = folio_objcg(old);
+	VM_WARN_ON_ONCE_FOLIO(!objcg, old);
+	if (!objcg)
 		return;
 
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+
 	/* Force-charge the new page. The old one will be freed soon */
-	if (!mem_cgroup_is_root(memcg)) {
+	if (!obj_cgroup_is_root(objcg)) {
 		page_counter_charge(&memcg->memory, nr_pages);
 		if (do_memsw_account())
 			page_counter_charge(&memcg->memsw, nr_pages);
 	}
 
-	css_get(&memcg->css);
-	commit_charge(new, memcg);
+	obj_cgroup_get(objcg);
+	commit_charge(new, objcg);
 
 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, folio_nid(new));
 	local_irq_restore(flags);
 
-	css_put(&memcg->css);
+	rcu_read_unlock();
 }
 
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7269,6 +7380,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
 void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 {
 	struct mem_cgroup *memcg, *swap_memcg;
+	struct obj_cgroup *objcg;
 	unsigned int nr_entries;
 	unsigned short oldid;
 
@@ -7281,15 +7393,16 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;
 
+	objcg = folio_objcg(folio);
+	VM_WARN_ON_ONCE_FOLIO(!objcg, folio);
+	if (!objcg)
+		return;
+
 	/*
 	 * Interrupts should be disabled by the caller (see the comments below),
	 * which can serve as an RCU read-side critical section.
 	 */
-	memcg = folio_memcg(folio);
-
-	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
-	if (!memcg)
-		return;
+	memcg = obj_cgroup_memcg(objcg);
 
 	/*
 	 * In case the memcg owning these pages has been offlined and doesn't
@@ -7308,7 +7421,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 
 	folio->memcg_data = 0;
 
-	if (!mem_cgroup_is_root(memcg))
+	if (!obj_cgroup_is_root(objcg))
 		page_counter_uncharge(&memcg->memory, nr_entries);
 
 	if (!cgroup_memory_noswap && memcg != swap_memcg) {
@@ -7328,7 +7441,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	memcg_stats_unlock();
 	memcg_check_events(memcg, folio_nid(folio));
 
-	css_put(&memcg->css);
+	obj_cgroup_put(objcg);
 }
 
 /**
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

-	if (folio_memcg(new))
+	if (folio_objcg(new))
 		return;
 
-	memcg = get_mem_cgroup_from_folio(old);
-	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
-	if (!memcg)
+	objcg = folio_objcg(old);
+	VM_WARN_ON_ONCE_FOLIO(!objcg, old);
+	if (!objcg)
 		return;
 
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+
 	/* Force-charge the new page. The old one will be freed soon */
-	if (!mem_cgroup_is_root(memcg)) {
+	if (!obj_cgroup_is_root(objcg)) {
 		page_counter_charge(&memcg->memory, nr_pages);
 		if (do_memsw_account())
 			page_counter_charge(&memcg->memsw, nr_pages);
 	}
 
-	css_get(&memcg->css);
-	commit_charge(new, memcg);
+	obj_cgroup_get(objcg);
+	commit_charge(new, objcg);
 
 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, folio_nid(new));
 	local_irq_restore(flags);
 
-	css_put(&memcg->css);
+	rcu_read_unlock();
 }
 
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7269,6 +7380,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
 void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 {
 	struct mem_cgroup *memcg, *swap_memcg;
+	struct obj_cgroup *objcg;
 	unsigned int nr_entries;
 	unsigned short oldid;
 
@@ -7281,15 +7393,16 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;
 
+	objcg = folio_objcg(folio);
+	VM_WARN_ON_ONCE_FOLIO(!objcg, folio);
+	if (!objcg)
+		return;
+
 	/*
 	 * Interrupts should be disabled by the caller (see the comments below),
 	 * which can serve as RCU read-side critical sections.
 	 */
-	memcg = folio_memcg(folio);
-
-	VM_WARN_ON_ONCE_FOLIO(!memcg, folio);
-	if (!memcg)
-		return;
+	memcg = obj_cgroup_memcg(objcg);
 
 	/*
 	 * In case the memcg owning these pages has been offlined and doesn't
@@ -7308,7 +7421,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 
 	folio->memcg_data = 0;
 
-	if (!mem_cgroup_is_root(memcg))
+	if (!obj_cgroup_is_root(objcg))
 		page_counter_uncharge(&memcg->memory, nr_entries);
 
 	if (!cgroup_memory_noswap && memcg != swap_memcg) {
@@ -7328,7 +7441,7 @@ void mem_cgroup_swapout(struct folio *folio, swp_entry_t entry)
 	memcg_stats_unlock();
 	memcg_check_events(memcg, folio_nid(folio));
 
-	css_put(&memcg->css);
+	obj_cgroup_put(objcg);
 }
 
 /**
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v6 10/11] mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
@ 2022-06-21 12:56   ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song

We need to make sure that a folio is deleted from or added to the
correct lruvec list. So add a VM_WARN_ON_ONCE_FOLIO() to the lru
maintenance functions to catch invalid users. The VM_BUG_ON_FOLIO()
in move_pages_to_lru() can then be removed since lruvec_add_folio()
performs the same check.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
---
 include/linux/mm_inline.h | 6 ++++++
 mm/vmscan.c               | 1 -
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 7b25b53c474a..6585198b19e2 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -99,6 +99,8 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
 {
 	enum lru_list lru = folio_lru_list(folio);
 
+	VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+
 	update_lru_size(lruvec, lru, folio_zonenum(folio),
 			folio_nr_pages(folio));
 	if (lru != LRU_UNEVICTABLE)
@@ -116,6 +118,8 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
 {
 	enum lru_list lru = folio_lru_list(folio);
 
+	VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+
 	update_lru_size(lruvec, lru, folio_zonenum(folio),
 			folio_nr_pages(folio));
 	/* This is not expected to be used on LRU_UNEVICTABLE */
@@ -133,6 +137,8 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
 {
 	enum lru_list lru = folio_lru_list(folio);
 
+	VM_WARN_ON_ONCE_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+
 	if (lru != LRU_UNEVICTABLE)
 		list_del(&folio->lru);
 	update_lru_size(lruvec, lru, folio_zonenum(folio),
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 697656151431..51b1607c81e4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2361,7 +2361,6 @@ static unsigned int move_pages_to_lru(struct list_head *list)
 			continue;
 		}
 
-		VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
 		lruvec_add_folio(lruvec, folio);
 		nr_pages = folio_nr_pages(folio);
 		nr_moved += nr_pages;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* [PATCH v6 11/11] mm: lru: use lruvec lock to serialize memcg changes
@ 2022-06-21 12:56   ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-21 12:56 UTC (permalink / raw)
  To: akpm, hannes, longman, mhocko, roman.gushchin, shakeelb
  Cc: cgroups, duanxiongchun, linux-kernel, linux-mm, Muchun Song

As described by commit fc574c23558c ("mm/swap.c: serialize memcg
changes in pagevec_lru_move_fn"), TestClearPageLRU() aims to
serialize mem_cgroup_move_account() during pagevec_lru_move_fn().
Now folio_lruvec_lock*() can detect whether a page's memcg has
been changed, so we can use the lruvec lock to serialize
mem_cgroup_move_account() during pagevec_lru_move_fn(). This
change is a partial revert of commit fc574c23558c ("mm/swap.c:
serialize memcg changes in pagevec_lru_move_fn").

Since pagevec_lru_move_fn() is hotter than
mem_cgroup_move_account(), removing an atomic operation from it is
an optimization. This change also avoids dirtying the cacheline of
a page which isn't on the LRU.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memcontrol.c | 34 ++++++++++++++++++++++++++++++++++
 mm/swap.c       | 32 +++++++++++++++-----------------
 mm/vmscan.c     | 16 +++++++---------
 3 files changed, 56 insertions(+), 26 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 803dbdf5f233..85adc43c5a25 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1330,10 +1330,39 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
 	lruvec = folio_lruvec(folio);
 	spin_lock(&lruvec->lru_lock);
 
+	/*
+	 * The memcg of the page can be changed by any of the following routines:
+	 *
+	 * 1) mem_cgroup_move_account() or
+	 * 2) memcg_reparent_objcgs()
+	 *
+	 * A possible bad scenario would look like:
+	 *
+	 * CPU0:                CPU1:                CPU2:
+	 * lruvec = folio_lruvec()
+	 *
+	 *                      if (!isolate_lru_page())
+	 *                              mem_cgroup_move_account()
+	 *
+	 *                                           memcg_reparent_objcgs()
+	 *
+	 * spin_lock(&lruvec->lru_lock)
+	 *                ^^^^^^
+	 *              wrong lock
+	 *
+	 * Either CPU1 or CPU2 can change the page's memcg, so we need to
+	 * check whether the page's memcg has changed; if so, we should
+	 * reacquire the new lruvec lock.
+	 */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock(&lruvec->lru_lock);
 		goto retry;
 	}
+
+	/*
+	 * When we reach here, it means that the folio_memcg(folio) is
+	 * stable.
+	 */
 	rcu_read_unlock();
 
 	return lruvec;
@@ -1361,6 +1390,7 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 	lruvec = folio_lruvec(folio);
 	spin_lock_irq(&lruvec->lru_lock);
 
+	/* See the comments in folio_lruvec_lock(). */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock_irq(&lruvec->lru_lock);
 		goto retry;
@@ -1394,6 +1424,7 @@ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 	lruvec = folio_lruvec(folio);
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
 
+	/* See the comments in folio_lruvec_lock(). */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
 		goto retry;
@@ -5809,7 +5840,10 @@ static int mem_cgroup_move_account(struct page *page,
 	obj_cgroup_put(rcu_dereference(from->objcg));
 	rcu_read_unlock();
 
+	/* See the comments in folio_lruvec_lock(). */
+	spin_lock(&from_vec->lru_lock);
 	folio->memcg_data = (unsigned long)rcu_access_pointer(to->objcg);
+	spin_unlock(&from_vec->lru_lock);
 
 	__folio_memcg_unlock(from);
 
diff --git a/mm/swap.c b/mm/swap.c
index 987dcbd93ffa..0fc59409e27d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -196,6 +196,7 @@ static void lru_add_fn(struct lruvec *lruvec, struct folio *folio)
 
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
+	folio_set_lru(folio);
 	/*
 	 * Is an smp_mb__after_atomic() still required here, before
 	 * folio_evictable() tests the mlocked flag, to rule out the possibility
@@ -238,14 +239,8 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 	for (i = 0; i < folio_batch_count(fbatch); i++) {
 		struct folio *folio = fbatch->folios[i];
 
-		/* block memcg migration while the folio moves between lru */
-		if (move_fn != lru_add_fn && !folio_test_clear_lru(folio))
-			continue;
-
 		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
 		move_fn(lruvec, folio);
-
-		folio_set_lru(folio);
 	}
 
 	if (lruvec)
@@ -265,7 +260,7 @@ static void folio_batch_add_and_move(struct folio_batch *fbatch,
 
 static void lru_move_tail_fn(struct lruvec *lruvec, struct folio *folio)
 {
-	if (!folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && !folio_test_unevictable(folio)) {
 		lruvec_del_folio(lruvec, folio);
 		folio_clear_active(folio);
 		lruvec_add_folio_tail(lruvec, folio);
@@ -348,7 +343,8 @@ void lru_note_cost_folio(struct folio *folio)
 
 static void folio_activate_fn(struct lruvec *lruvec, struct folio *folio)
 {
-	if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && !folio_test_active(folio) &&
+	    !folio_test_unevictable(folio)) {
 		long nr_pages = folio_nr_pages(folio);
 
 		lruvec_del_folio(lruvec, folio);
@@ -394,12 +390,9 @@ static void folio_activate(struct folio *folio)
 {
 	struct lruvec *lruvec;
 
-	if (folio_test_clear_lru(folio)) {
-		lruvec = folio_lruvec_lock_irq(folio);
-		folio_activate_fn(lruvec, folio);
-		lruvec_unlock_irq(lruvec);
-		folio_set_lru(folio);
-	}
+	lruvec = folio_lruvec_lock_irq(folio);
+	folio_activate_fn(lruvec, folio);
+	lruvec_unlock_irq(lruvec);
 }
 #endif
 
@@ -542,6 +535,9 @@ static void lru_deactivate_file_fn(struct lruvec *lruvec, struct folio *folio)
 	bool active = folio_test_active(folio);
 	long nr_pages = folio_nr_pages(folio);
 
+	if (!folio_test_lru(folio))
+		return;
+
 	if (folio_test_unevictable(folio))
 		return;
 
@@ -580,7 +576,8 @@ static void lru_deactivate_file_fn(struct lruvec *lruvec, struct folio *folio)
 
 static void lru_deactivate_fn(struct lruvec *lruvec, struct folio *folio)
 {
-	if (folio_test_active(folio) && !folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && folio_test_active(folio) &&
+	    !folio_test_unevictable(folio)) {
 		long nr_pages = folio_nr_pages(folio);
 
 		lruvec_del_folio(lruvec, folio);
@@ -596,8 +593,9 @@ static void lru_deactivate_fn(struct lruvec *lruvec, struct folio *folio)
 
 static void lru_lazyfree_fn(struct lruvec *lruvec, struct folio *folio)
 {
-	if (folio_test_anon(folio) && folio_test_swapbacked(folio) &&
-	    !folio_test_swapcache(folio) && !folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && folio_test_anon(folio) &&
+	    folio_test_swapbacked(folio) && !folio_test_swapcache(folio) &&
+	    !folio_test_unevictable(folio)) {
 		long nr_pages = folio_nr_pages(folio);
 
 		lruvec_del_folio(lruvec, folio);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 51b1607c81e4..11e1f6fc5898 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4864,21 +4864,19 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		if (PageTransTail(page))
 			continue;
 
-		nr_pages = thp_nr_pages(page);
+		nr_pages = folio_nr_pages(folio);
 		pgscanned += nr_pages;
 
-		/* block memcg migration during page moving between lru */
-		if (!TestClearPageLRU(page))
+		lruvec = folio_lruvec_relock_irq(folio, lruvec);
+		if (!folio_test_lru(folio) || !folio_test_unevictable(folio))
 			continue;
 
-		lruvec = folio_lruvec_relock_irq(folio, lruvec);
-		if (page_evictable(page) && PageUnevictable(page)) {
-			del_page_from_lru_list(page, lruvec);
-			ClearPageUnevictable(page);
-			add_page_to_lru_list(page, lruvec);
+		if (folio_evictable(folio)) {
+			lruvec_del_folio(lruvec, folio);
+			folio_clear_unevictable(folio);
+			lruvec_add_folio(lruvec, folio);
 			pgrescued += nr_pages;
 		}
-		SetPageLRU(page);
 	}
 
 	if (lruvec) {
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-26 10:32   ` Yosry Ahmed
  0 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-06-26 10:32 UTC (permalink / raw)
  To: Muchun Song
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> into mm-unstable which will help to determine whether there is a problem or
> degradation. I am also doing some benchmark tests in parallel.
>
> Since the following patchsets applied. All the kernel memory are charged
> with the new APIs of obj_cgroup.
>
>         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
>         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
>
> But user memory allocations (LRU pages) pinning memcgs for a long time -
> it exists at a larger scale and is causing recurring problems in the real
> world: page cache doesn't get reclaimed for a long time, or is used by the
> second, third, fourth, ... instance of the same job that was restarted into
> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> and make page reclaim very inefficient.
>
> We can convert LRU pages and most other raw memcg pins to the objcg direction
> to fix this problem, and then the LRU pages will not pin the memcgs.
>
> This patchset aims to make the LRU pages to drop the reference to memory
> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> of the dying cgroups will not increase if we run the following test script.

This is amazing work!

Sorry if I came late, I didn't follow the threads of previous versions
so this might be redundant, I just have a couple of questions.

a) If LRU pages keep getting reparented until they reach root_mem_cgroup
(assuming they can), aren't these pages effectively unaccounted at
this point or leaked? Is there protection against this?

b) Since moving charged pages between memcgs is now becoming easier by
using the APIs of obj_cgroup, I wonder if this opens the door for
future work to transfer charges to memcgs that are actually using
reparented resources. For example, let's say cgroup A reads a few
pages into page cache, and then they are no longer used by cgroup A.
cgroup B, however, is using the same pages that are currently charged
to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
dies, and these pages are reparented to A's parent, can we possibly
mark these reparented pages (maybe in the page tables somewhere) so
that next time they get accessed we recharge them to B instead
(possibly asynchronously)?
I don't have much experience about page tables but I am pretty sure
they are loaded so maybe there is no room in PTEs for something like
this, but I have always wondered about what we can do for this case
where a cgroup is consistently using memory charged to another cgroup.
Maybe the point when this memory is reparented is a good time to decide
whether to recharge it appropriately. It would also fix the reparenting
leak-to-root problem (if it even exists).

Thanks again for this work and please excuse my ignorance if any part
of what I said doesn't make sense :)

>
> ```bash
> #!/bin/bash
>
> dd if=/dev/zero of=temp bs=4096 count=1
> cat /proc/cgroups | grep memory
>
> for i in {0..2000}
> do
>         mkdir /sys/fs/cgroup/memory/test$i
>         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
>         cat temp >> log
>         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
>         rmdir /sys/fs/cgroup/memory/test$i
> done
>
> cat /proc/cgroups | grep memory
>
> rm -f temp log
> ```
>
> v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
>
> v6:
>  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
>  - Rebase to mm-unstable.
>
> v5:
>  - Lots of improvements from Johannes, Roman and Waiman.
>  - Fix lockdep warning reported by kernel test robot.
>  - Add two new patches to do code cleanup.
>  - Collect Acked-by and Reviewed-by from Johannes and Roman.
>  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
>    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
>    it to local_lock.  It could be an improvement in the future.
>
> v4:
>  - Resend and rebased on v5.18.
>
> v3:
>  - Removed the Acked-by tags from Roman since this version is based on
>    the folio relevant.
>
> v2:
>  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
>    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
>  - Rebase to linux 5.15-rc1.
>  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
>
> v1:
>  - Drop RFC tag.
>  - Rebase to linux next-20210811.
>
> RFC v4:
>  - Collect Acked-by from Roman.
>  - Rebase to linux next-20210525.
>  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
>  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
>  - Convert reparent_ops_head to an array in patch 8.
>
> Thanks for Roman's review and suggestions.
>
> RFC v3:
>  - Drop the code cleanup and simplification patches. Gather those patches
>    into a separate series[1].
>  - Rework patch #1 suggested by Johannes.
>
> RFC v2:
>  - Collect Acked-by tags by Johannes. Thanks.
>  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
>  - Fix move_pages_to_lru().
>
> Muchun Song (11):
>   mm: memcontrol: remove dead code and comments
>   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
>     lruvec_unlock{_irq, _irqrestore}
>   mm: memcontrol: prepare objcg API for non-kmem usage
>   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
>   mm: vmscan: rework move_pages_to_lru()
>   mm: thp: make split queue lock safe when LRU pages are reparented
>   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
>   mm: memcontrol: introduce memcg_reparent_ops
>   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
>   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
>   mm: lru: use lruvec lock to serialize memcg changes
>
>  fs/buffer.c                      |   4 +-
>  fs/fs-writeback.c                |  23 +-
>  include/linux/memcontrol.h       | 218 +++++++++------
>  include/linux/mm_inline.h        |   6 +
>  include/trace/events/writeback.h |   5 +
>  mm/compaction.c                  |  39 ++-
>  mm/huge_memory.c                 | 153 ++++++++--
>  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
>  mm/migrate.c                     |   4 +
>  mm/mlock.c                       |   2 +-
>  mm/page_io.c                     |   5 +-
>  mm/swap.c                        |  49 ++--
>  mm/vmscan.c                      |  66 ++---
>  13 files changed, 776 insertions(+), 382 deletions(-)
>
>
> base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> --
> 2.11.0
>
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-27  7:11     ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-27  7:11 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > into mm-unstable which will help to determine whether there is a problem or
> > degradation. I am also doing some benchmark tests in parallel.
> >
> > Since the following patchsets applied. All the kernel memory are charged
> > with the new APIs of obj_cgroup.
> >
> >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> >
> > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > it exists at a larger scale and is causing recurring problems in the real
> > world: page cache doesn't get reclaimed for a long time, or is used by the
> > second, third, fourth, ... instance of the same job that was restarted into
> > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > and make page reclaim very inefficient.
> >
> > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > to fix this problem, and then the LRU pages will not pin the memcgs.
> >
> > This patchset aims to make the LRU pages to drop the reference to memory
> > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > of the dying cgroups will not increase if we run the following test script.
> 
> This is amazing work!
> 
> Sorry if I came late, I didn't follow the threads of previous versions
> so this might be redundant, I just have a couple of questions.
> 
> a) If LRU pages keep getting parented until they reach root_mem_cgroup
> (assuming they can), aren't these pages effectively unaccounted at
> this point or leaked? Is there protection against this?
>

In this case, those pages are accounted at the root memcg level. Unfortunately,
there is currently no mechanism to move a page's charge from one memcg to another.
 
> b) Since moving charged pages between memcgs is now becoming easier by
> using the APIs of obj_cgroup, I wonder if this opens the door for
> future work to transfer charges to memcgs that are actually using
> reparented resources. For example, let's say cgroup A reads a few
> pages into page cache, and then they are no longer used by cgroup A.
> cgroup B, however, is using the same pages that are currently charged
> to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> dies, and these pages are reparented to A's parent, can we possibly
> mark these reparented pages (maybe in the page tables somewhere) so
> that next time they get accessed we recharge them to B instead
> (possibly asynchronously)?
> I don't have much experience about page tables but I am pretty sure
> they are loaded so maybe there is no room in PTEs for something like
> this, but I have always wondered about what we can do for this case
> where a cgroup is consistently using memory charged to another cgroup.
> Maybe when this memory is reparented is a good point in time to decide
> to recharge appropriately. It would also fix the reparenty leak to
> root problem (if it even exists).
> 

From my point of view, this is going to be an improvement to the memcg
subsystem in the future.  IIUC, most reparented pages are page cache
pages that are not mapped into user page tables, so page tables are not
a suitable place to record this information. However, we already have
this information in struct obj_cgroup and struct mem_cgroup: if a page's
obj_cgroup is not equal to the page's obj_cgroup->memcg->objcg, it means
this page has been reparented. I am wondering whether the place where a
page is mapped (probably the page fault path) or where a page (cache) is
written (usually the VFS write path) is suitable for transferring the
page's memcg from one to another. But this needs more thought, e.g. how
do we decide whether a reparented page needs to be transferred? If we
need more information to make this decision, where do we store it? These
are my preliminary thoughts on this question.
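
For illustration only, here is a minimal sketch of that check as it could
sit in mm/memcontrol.c on top of this series. folio_was_reparented() is a
hypothetical name, not something the patchset adds; it only builds on the
folio_objcg()/obj_cgroup_memcg() helpers introduced earlier in the series
and on memcg->objcg being an RCU-protected pointer:

```c
/*
 * Hypothetical helper, for illustration only: a folio has been
 * reparented if its objcg no longer matches the current objcg of
 * the memcg it is charged to.
 */
static bool folio_was_reparented(struct folio *folio)
{
	bool reparented;
	struct obj_cgroup *objcg;

	rcu_read_lock();
	objcg = folio_objcg(folio);
	/* objcg->memcg and memcg->objcg are only stable under RCU */
	reparented = objcg &&
		     objcg != rcu_dereference(obj_cgroup_memcg(objcg)->objcg);
	rcu_read_unlock();

	return reparented;
}
```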

Thanks.

> Thanks again for this work and please excuse my ignorance if any part
> of what I said doesn't make sense :)
> 
> >
> > ```bash
> > #!/bin/bash
> >
> > dd if=/dev/zero of=temp bs=4096 count=1
> > cat /proc/cgroups | grep memory
> >
> > for i in {0..2000}
> > do
> >         mkdir /sys/fs/cgroup/memory/test$i
> >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> >         cat temp >> log
> >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> >         rmdir /sys/fs/cgroup/memory/test$i
> > done
> >
> > cat /proc/cgroups | grep memory
> >
> > rm -f temp log
> > ```
> >
> > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> >
> > v6:
> >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> >  - Rebase to mm-unstable.
> >
> > v5:
> >  - Lots of improvements from Johannes, Roman and Waiman.
> >  - Fix lockdep warning reported by kernel test robot.
> >  - Add two new patches to do code cleanup.
> >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> >    it to local_lock.  It could be an improvement in the future.
> >
> > v4:
> >  - Resend and rebased on v5.18.
> >
> > v3:
> >  - Removed the Acked-by tags from Roman since this version is based on
> >    the folio relevant.
> >
> > v2:
> >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> >  - Rebase to linux 5.15-rc1.
> >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> >
> > v1:
> >  - Drop RFC tag.
> >  - Rebase to linux next-20210811.
> >
> > RFC v4:
> >  - Collect Acked-by from Roman.
> >  - Rebase to linux next-20210525.
> >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> >  - Convert reparent_ops_head to an array in patch 8.
> >
> > Thanks for Roman's review and suggestions.
> >
> > RFC v3:
> >  - Drop the code cleanup and simplification patches. Gather those patches
> >    into a separate series[1].
> >  - Rework patch #1 suggested by Johannes.
> >
> > RFC v2:
> >  - Collect Acked-by tags by Johannes. Thanks.
> >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> >  - Fix move_pages_to_lru().
> >
> > Muchun Song (11):
> >   mm: memcontrol: remove dead code and comments
> >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> >     lruvec_unlock{_irq, _irqrestore}
> >   mm: memcontrol: prepare objcg API for non-kmem usage
> >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> >   mm: vmscan: rework move_pages_to_lru()
> >   mm: thp: make split queue lock safe when LRU pages are reparented
> >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> >   mm: memcontrol: introduce memcg_reparent_ops
> >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> >   mm: lru: use lruvec lock to serialize memcg changes
> >
> >  fs/buffer.c                      |   4 +-
> >  fs/fs-writeback.c                |  23 +-
> >  include/linux/memcontrol.h       | 218 +++++++++------
> >  include/linux/mm_inline.h        |   6 +
> >  include/trace/events/writeback.h |   5 +
> >  mm/compaction.c                  |  39 ++-
> >  mm/huge_memory.c                 | 153 ++++++++--
> >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> >  mm/migrate.c                     |   4 +
> >  mm/mlock.c                       |   2 +-
> >  mm/page_io.c                     |   5 +-
> >  mm/swap.c                        |  49 ++--
> >  mm/vmscan.c                      |  66 ++---
> >  13 files changed, 776 insertions(+), 382 deletions(-)
> >
> >
> > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > --
> > 2.11.0
> >
> >
> 

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-27  7:11     ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-27  7:11 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, longman-H+wXaHxf7aLQT0dZR+AlfA,
	Michal Hocko, Roman Gushchin, Shakeel Butt, Cgroups,
	duanxiongchun-EC8Uxl6Npydl57MIdRCFDg, Linux Kernel Mailing List,
	Linux-MM

On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
> >
> > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > into mm-unstable which will help to determine whether there is a problem or
> > degradation. I am also doing some benchmark tests in parallel.
> >
> > Since the following patchsets applied. All the kernel memory are charged
> > with the new APIs of obj_cgroup.
> >
> >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> >
> > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > it exists at a larger scale and is causing recurring problems in the real
> > world: page cache doesn't get reclaimed for a long time, or is used by the
> > second, third, fourth, ... instance of the same job that was restarted into
> > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > and make page reclaim very inefficient.
> >
> > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > to fix this problem, and then the LRU pages will not pin the memcgs.
> >
> > This patchset aims to make the LRU pages to drop the reference to memory
> > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > of the dying cgroups will not increase if we run the following test script.
> 
> This is amazing work!
> 
> Sorry if I came late, I didn't follow the threads of previous versions
> so this might be redundant, I just have a couple of questions.
> 
> a) If LRU pages keep getting parented until they reach root_mem_cgroup
> (assuming they can), aren't these pages effectively unaccounted at
> this point or leaked? Is there protection against this?
>

In this case, those pages are accounted at the root memcg level. Unfortunately,
there is currently no mechanism to move a page's charge from one memcg to another.
 
> b) Since moving charged pages between memcgs is now becoming easier by
> using the APIs of obj_cgroup, I wonder if this opens the door for
> future work to transfer charges to memcgs that are actually using
> reparented resources. For example, let's say cgroup A reads a few
> pages into page cache, and then they are no longer used by cgroup A.
> cgroup B, however, is using the same pages that are currently charged
> to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> dies, and these pages are reparented to A's parent, can we possibly
> mark these reparented pages (maybe in the page tables somewhere) so
> that next time they get accessed we recharge them to B instead
> (possibly asynchronously)?
> I don't have much experience about page tables but I am pretty sure
> they are loaded so maybe there is no room in PTEs for something like
> this, but I have always wondered about what we can do for this case
> where a cgroup is consistently using memory charged to another cgroup.
> Maybe when this memory is reparented is a good point in time to decide
> to recharge appropriately. It would also fix the reparenty leak to
> root problem (if it even exists).
> 

From my point of view, this is going to be an improvement to the memcg
subsystem in the future.  IIUC, most reparented pages are page cache
pages that are not mapped into user space, so page tables are not a
suitable place to record this information. However, we already have this
information in struct obj_cgroup and struct mem_cgroup: if a page's
obj_cgroup is not equal to the page's obj_cgroup->memcg->objcg, it means
the page has been reparented. I am thinking about whether the place where
a page is mapped (probably the page fault path) or where a page (cache) is
written (usually the vfs write path) is suitable for transferring the
page's memcg from one cgroup to another. But this needs more thought,
e.g. how do we decide whether a reparented page needs to be transferred?
If we need more information to make this decision, where do we store that
information? These are my initial thoughts on this question.
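
As a minimal sketch of that check (illustrative only, not part of this
series; the folio_objcg() helper name and the locking details are
assumptions on top of the objcg scheme described here):

```c
/*
 * Illustrative sketch only: detecting a reparented folio under the
 * objcg-based charging.  folio_objcg() and the locking are assumptions.
 */
static inline bool folio_reparented(struct folio *folio)
{
	struct obj_cgroup *objcg = folio_objcg(folio);
	bool reparented;

	if (!objcg)
		return false;

	rcu_read_lock();
	/*
	 * After reparenting, the folio still points at the dying child's
	 * objcg, while that objcg's memcg has been rewired to an ancestor
	 * whose own objcg is a different object.
	 */
	reparented = objcg != rcu_access_pointer(obj_cgroup_memcg(objcg)->objcg);
	rcu_read_unlock();

	return reparented;
}
```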

Thanks.

> Thanks again for this work and please excuse my ignorance if any part
> of what I said doesn't make sense :)
> 
> >
> > ```bash
> > #!/bin/bash
> >
> > dd if=/dev/zero of=temp bs=4096 count=1
> > cat /proc/cgroups | grep memory
> >
> > for i in {0..2000}
> > do
> >         mkdir /sys/fs/cgroup/memory/test$i
> >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> >         cat temp >> log
> >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> >         rmdir /sys/fs/cgroup/memory/test$i
> > done
> >
> > cat /proc/cgroups | grep memory
> >
> > rm -f temp log
> > ```
> >
> > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> >
> > v6:
> >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> >  - Rebase to mm-unstable.
> >
> > v5:
> >  - Lots of improvements from Johannes, Roman and Waiman.
> >  - Fix lockdep warning reported by kernel test robot.
> >  - Add two new patches to do code cleanup.
> >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> >    it to local_lock.  It could be an improvement in the future.
> >
> > v4:
> >  - Resend and rebased on v5.18.
> >
> > v3:
> >  - Removed the Acked-by tags from Roman since this version is based on
> >    the folio relevant.
> >
> > v2:
> >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> >  - Rebase to linux 5.15-rc1.
> >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> >
> > v1:
> >  - Drop RFC tag.
> >  - Rebase to linux next-20210811.
> >
> > RFC v4:
> >  - Collect Acked-by from Roman.
> >  - Rebase to linux next-20210525.
> >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> >  - Convert reparent_ops_head to an array in patch 8.
> >
> > Thanks for Roman's review and suggestions.
> >
> > RFC v3:
> >  - Drop the code cleanup and simplification patches. Gather those patches
> >    into a separate series[1].
> >  - Rework patch #1 suggested by Johannes.
> >
> > RFC v2:
> >  - Collect Acked-by tags by Johannes. Thanks.
> >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> >  - Fix move_pages_to_lru().
> >
> > Muchun Song (11):
> >   mm: memcontrol: remove dead code and comments
> >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> >     lruvec_unlock{_irq, _irqrestore}
> >   mm: memcontrol: prepare objcg API for non-kmem usage
> >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> >   mm: vmscan: rework move_pages_to_lru()
> >   mm: thp: make split queue lock safe when LRU pages are reparented
> >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> >   mm: memcontrol: introduce memcg_reparent_ops
> >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> >   mm: lru: use lruvec lock to serialize memcg changes
> >
> >  fs/buffer.c                      |   4 +-
> >  fs/fs-writeback.c                |  23 +-
> >  include/linux/memcontrol.h       | 218 +++++++++------
> >  include/linux/mm_inline.h        |   6 +
> >  include/trace/events/writeback.h |   5 +
> >  mm/compaction.c                  |  39 ++-
> >  mm/huge_memory.c                 | 153 ++++++++--
> >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> >  mm/migrate.c                     |   4 +
> >  mm/mlock.c                       |   2 +-
> >  mm/page_io.c                     |   5 +-
> >  mm/swap.c                        |  49 ++--
> >  mm/vmscan.c                      |  66 ++---
> >  13 files changed, 776 insertions(+), 382 deletions(-)
> >
> >
> > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > --
> > 2.11.0
> >
> >
> 

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-27  8:05       ` Yosry Ahmed
  0 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-06-27  8:05 UTC (permalink / raw)
  To: Muchun Song
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > >
> > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > into mm-unstable which will help to determine whether there is a problem or
> > > degradation. I am also doing some benchmark tests in parallel.
> > >
> > > Since the following patchsets applied. All the kernel memory are charged
> > > with the new APIs of obj_cgroup.
> > >
> > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > >
> > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > it exists at a larger scale and is causing recurring problems in the real
> > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > second, third, fourth, ... instance of the same job that was restarted into
> > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > and make page reclaim very inefficient.
> > >
> > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > >
> > > This patchset aims to make the LRU pages to drop the reference to memory
> > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > of the dying cgroups will not increase if we run the following test script.
> >
> > This is amazing work!
> >
> > Sorry if I came late, I didn't follow the threads of previous versions
> > so this might be redundant, I just have a couple of questions.
> >
> > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > (assuming they can), aren't these pages effectively unaccounted at
> > this point or leaked? Is there protection against this?
> >
>
> In this case, those pages are accounted in root memcg level. Unfortunately,
> there is no mechanism now to transfer a page's memcg from one to another.
>
> > b) Since moving charged pages between memcgs is now becoming easier by
> > using the APIs of obj_cgroup, I wonder if this opens the door for
> > future work to transfer charges to memcgs that are actually using
> > reparented resources. For example, let's say cgroup A reads a few
> > pages into page cache, and then they are no longer used by cgroup A.
> > cgroup B, however, is using the same pages that are currently charged
> > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > dies, and these pages are reparented to A's parent, can we possibly
> > mark these reparented pages (maybe in the page tables somewhere) so
> > that next time they get accessed we recharge them to B instead
> > (possibly asynchronously)?
> > I don't have much experience about page tables but I am pretty sure
> > they are loaded so maybe there is no room in PTEs for something like
> > this, but I have always wondered about what we can do for this case
> > where a cgroup is consistently using memory charged to another cgroup.
> > Maybe when this memory is reparented is a good point in time to decide
> > to recharge appropriately. It would also fix the reparenty leak to
> > root problem (if it even exists).
> >
>
> From my point of view, this is going to be an improvement to the memcg
> subsystem in the future.  IIUC, most reparented pages are page cache
> pages without be mapped to users. So page tables are not a suitable
> place to record this information. However, we already have this information
> in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> equal to the page's obj_cgroup->memcg->objcg, it means this page have
> been reparented. I am thinking if a place where a page is mapped (probably
> page fault patch) or page (cache) is written (usually vfs write path)
> is suitable to transfer page's memcg from one to another. But need more

Very good point about unmapped pages, I missed this. Page tables will
do us no good here. Such a change would indeed require careful thought
because (like you mentioned) there are multiple points in time where
it might be suitable to consider recharging the page (e.g. when the
page is mapped). This could be an incremental change though. Right now
we have no recharging at all, so maybe we can gradually add recharging
to suitable paths.

> thinking, e.g. How to decide if a reparented page needs to be transferred?

Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
current is not a descendant of page's obj_cgroup->memcg) is a good
place to start?

My rationale is that if the page is charged to root_mem_cgroup through
reparenting and a process in a memcg is using it then this is probably
an accounting leak. If a page is charged to a memcg A through
reparenting and is used by a memcg B in a different subtree, then
probably memcg B is getting away with using the page for free while A
is being taxed. If B is a descendant of A, it is still getting away
with using the page unaccounted, but at least it makes no difference
for A.
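
A rough sketch of that heuristic (illustrative only; mem_cgroup_is_root()
and mem_cgroup_is_descendant() are existing helpers, while folio_objcg()
and where such a hook would live are assumptions, and error handling is
omitted):

```c
/*
 * Illustrative sketch of the recharge heuristic described above:
 * recharge when the charge has drifted to the root via reparenting, or
 * when the current user sits outside the charged memcg's subtree.
 */
static bool should_recharge(struct folio *folio, struct mem_cgroup *user)
{
	struct mem_cgroup *charged;
	bool recharge;

	rcu_read_lock();
	charged = obj_cgroup_memcg(folio_objcg(folio));
	recharge = mem_cgroup_is_root(charged) ||
		   !mem_cgroup_is_descendant(user, charged);
	rcu_read_unlock();

	return recharge;
}
```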

One could argue that we might as well recharge a reparented page
anyway if the process is cheap (or done asynchronously), and the paths
where we do recharging are not very common.

All of this might be moot; I am just thinking out loud. In any case, this
would be future work and not part of this series.


> If we need more information to make this decision, where to store those
> information? This is my primary thoughts on this question.

>
> Thanks.
>
> > Thanks again for this work and please excuse my ignorance if any part
> > of what I said doesn't make sense :)
> >
> > >
> > > ```bash
> > > #!/bin/bash
> > >
> > > dd if=/dev/zero of=temp bs=4096 count=1
> > > cat /proc/cgroups | grep memory
> > >
> > > for i in {0..2000}
> > > do
> > >         mkdir /sys/fs/cgroup/memory/test$i
> > >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > >         cat temp >> log
> > >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > >         rmdir /sys/fs/cgroup/memory/test$i
> > > done
> > >
> > > cat /proc/cgroups | grep memory
> > >
> > > rm -f temp log
> > > ```
> > >
> > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> > >
> > > v6:
> > >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> > >  - Rebase to mm-unstable.
> > >
> > > v5:
> > >  - Lots of improvements from Johannes, Roman and Waiman.
> > >  - Fix lockdep warning reported by kernel test robot.
> > >  - Add two new patches to do code cleanup.
> > >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> > >    it to local_lock.  It could be an improvement in the future.
> > >
> > > v4:
> > >  - Resend and rebased on v5.18.
> > >
> > > v3:
> > >  - Removed the Acked-by tags from Roman since this version is based on
> > >    the folio relevant.
> > >
> > > v2:
> > >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > >  - Rebase to linux 5.15-rc1.
> > >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> > >
> > > v1:
> > >  - Drop RFC tag.
> > >  - Rebase to linux next-20210811.
> > >
> > > RFC v4:
> > >  - Collect Acked-by from Roman.
> > >  - Rebase to linux next-20210525.
> > >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > >  - Convert reparent_ops_head to an array in patch 8.
> > >
> > > Thanks for Roman's review and suggestions.
> > >
> > > RFC v3:
> > >  - Drop the code cleanup and simplification patches. Gather those patches
> > >    into a separate series[1].
> > >  - Rework patch #1 suggested by Johannes.
> > >
> > > RFC v2:
> > >  - Collect Acked-by tags by Johannes. Thanks.
> > >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > >  - Fix move_pages_to_lru().
> > >
> > > Muchun Song (11):
> > >   mm: memcontrol: remove dead code and comments
> > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > >     lruvec_unlock{_irq, _irqrestore}
> > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > >   mm: vmscan: rework move_pages_to_lru()
> > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > >   mm: memcontrol: introduce memcg_reparent_ops
> > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > >   mm: lru: use lruvec lock to serialize memcg changes
> > >
> > >  fs/buffer.c                      |   4 +-
> > >  fs/fs-writeback.c                |  23 +-
> > >  include/linux/memcontrol.h       | 218 +++++++++------
> > >  include/linux/mm_inline.h        |   6 +
> > >  include/trace/events/writeback.h |   5 +
> > >  mm/compaction.c                  |  39 ++-
> > >  mm/huge_memory.c                 | 153 ++++++++--
> > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > >  mm/migrate.c                     |   4 +
> > >  mm/mlock.c                       |   2 +-
> > >  mm/page_io.c                     |   5 +-
> > >  mm/swap.c                        |  49 ++--
> > >  mm/vmscan.c                      |  66 ++---
> > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > >
> > >
> > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > --
> > > 2.11.0
> > >
> > >
> >

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-27 10:13         ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-06-27 10:13 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote:
> On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > >
> > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > into mm-unstable which will help to determine whether there is a problem or
> > > > degradation. I am also doing some benchmark tests in parallel.
> > > >
> > > > Since the following patchsets applied. All the kernel memory are charged
> > > > with the new APIs of obj_cgroup.
> > > >
> > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > >
> > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > it exists at a larger scale and is causing recurring problems in the real
> > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > and make page reclaim very inefficient.
> > > >
> > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > >
> > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > of the dying cgroups will not increase if we run the following test script.
> > >
> > > This is amazing work!
> > >
> > > Sorry if I came late, I didn't follow the threads of previous versions
> > > so this might be redundant, I just have a couple of questions.
> > >
> > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > (assuming they can), aren't these pages effectively unaccounted at
> > > this point or leaked? Is there protection against this?
> > >
> >
> > In this case, those pages are accounted in root memcg level. Unfortunately,
> > there is no mechanism now to transfer a page's memcg from one to another.
> >
> > > b) Since moving charged pages between memcgs is now becoming easier by
> > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > future work to transfer charges to memcgs that are actually using
> > > reparented resources. For example, let's say cgroup A reads a few
> > > pages into page cache, and then they are no longer used by cgroup A.
> > > cgroup B, however, is using the same pages that are currently charged
> > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > dies, and these pages are reparented to A's parent, can we possibly
> > > mark these reparented pages (maybe in the page tables somewhere) so
> > > that next time they get accessed we recharge them to B instead
> > > (possibly asynchronously)?
> > > I don't have much experience about page tables but I am pretty sure
> > > they are loaded so maybe there is no room in PTEs for something like
> > > this, but I have always wondered about what we can do for this case
> > > where a cgroup is consistently using memory charged to another cgroup.
> > > Maybe when this memory is reparented is a good point in time to decide
> > > to recharge appropriately. It would also fix the reparenty leak to
> > > root problem (if it even exists).
> > >
> >
> > From my point of view, this is going to be an improvement to the memcg
> > subsystem in the future.  IIUC, most reparented pages are page cache
> > pages without be mapped to users. So page tables are not a suitable
> > place to record this information. However, we already have this information
> > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > equal to the page's obj_cgroup->memcg->objcg, it means this page have
> > been reparented. I am thinking if a place where a page is mapped (probably
> > page fault patch) or page (cache) is written (usually vfs write path)
> > is suitable to transfer page's memcg from one to another. But need more
> 
> Very good point about unmapped pages, I missed this. Page tables will
> do us no good here. Such a change would indeed require careful thought
> because (like you mentioned) there are multiple points in time where
> it might be suitable to consider recharging the page (e.g. when the
> page is mapped). This could be an incremental change though. Right now
> we have no recharging at all, so maybe we can gradually add recharging
> to suitable paths.
>

Agree.
 
> > thinking, e.g. How to decide if a reparented page needs to be transferred?
> 
> Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of

This is a good start.

> current is not a descendant of page's obj_cgroup->memcg) is a good

I am not sure about this one, since a page could be shared between different
memcgs.

    root
   /   \
  A     B
 / \     \
C   E     D

e.g. a page (originally it belongs to memcg E, and E is dying) is reparented
to memcg A, and it is now shared between C and D. Then we need to consider
whether it should be recharged. Yep, we need more thinking about recharging.

> place to start?
>
> My rationale is that if the page is charged to root_mem_cgroup through

I think the following issue exists not only for root_mem_cgroup but also
for non-root memcgs.

> reparenting and a process in a memcg is using it then this is probably
> an accounting leak. If a page is charged to a memcg A through
> reparenting and is used by a memcg B in a different subtree, then
> probably memcg B is getting away with using the page for free while A
> is being taxed. If B is a descendant of A, it is still getting away
> with using the page unaccounted, but at least it makes no difference
> for A.

I agree this case needs to be improved.

> 
> One could argue that we might as well recharge a reparented page
> anyway if the process is cheap (or done asynchronously), and the paths
> where we do recharging are not very common.
> 
> All of this might be moot, I am just thinking out loud. In any way
> this would be future work and not part of this work.
> 

Agree.

Thanks.

> 
> > If we need more information to make this decision, where to store those
> > information? This is my primary thoughts on this question.
> 
> >
> > Thanks.
> >
> > > Thanks again for this work and please excuse my ignorance if any part
> > > of what I said doesn't make sense :)
> > >
> > > >
> > > > ```bash
> > > > #!/bin/bash
> > > >
> > > > dd if=/dev/zero of=temp bs=4096 count=1
> > > > cat /proc/cgroups | grep memory
> > > >
> > > > for i in {0..2000}
> > > > do
> > > >         mkdir /sys/fs/cgroup/memory/test$i
> > > >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > > >         cat temp >> log
> > > >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > > >         rmdir /sys/fs/cgroup/memory/test$i
> > > > done
> > > >
> > > > cat /proc/cgroups | grep memory
> > > >
> > > > rm -f temp log
> > > > ```
> > > >
> > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> > > >
> > > > v6:
> > > >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> > > >  - Rebase to mm-unstable.
> > > >
> > > > v5:
> > > >  - Lots of improvements from Johannes, Roman and Waiman.
> > > >  - Fix lockdep warning reported by kernel test robot.
> > > >  - Add two new patches to do code cleanup.
> > > >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > > >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > > >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> > > >    it to local_lock.  It could be an improvement in the future.
> > > >
> > > > v4:
> > > >  - Resend and rebased on v5.18.
> > > >
> > > > v3:
> > > >  - Removed the Acked-by tags from Roman since this version is based on
> > > >    the folio relevant.
> > > >
> > > > v2:
> > > >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > > >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > > >  - Rebase to linux 5.15-rc1.
> > > >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> > > >
> > > > v1:
> > > >  - Drop RFC tag.
> > > >  - Rebase to linux next-20210811.
> > > >
> > > > RFC v4:
> > > >  - Collect Acked-by from Roman.
> > > >  - Rebase to linux next-20210525.
> > > >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > > >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > > >  - Convert reparent_ops_head to an array in patch 8.
> > > >
> > > > Thanks for Roman's review and suggestions.
> > > >
> > > > RFC v3:
> > > >  - Drop the code cleanup and simplification patches. Gather those patches
> > > >    into a separate series[1].
> > > >  - Rework patch #1 suggested by Johannes.
> > > >
> > > > RFC v2:
> > > >  - Collect Acked-by tags by Johannes. Thanks.
> > > >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > > >  - Fix move_pages_to_lru().
> > > >
> > > > Muchun Song (11):
> > > >   mm: memcontrol: remove dead code and comments
> > > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > > >     lruvec_unlock{_irq, _irqrestore}
> > > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > > >   mm: vmscan: rework move_pages_to_lru()
> > > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > > >   mm: memcontrol: introduce memcg_reparent_ops
> > > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > > >   mm: lru: use lruvec lock to serialize memcg changes
> > > >
> > > >  fs/buffer.c                      |   4 +-
> > > >  fs/fs-writeback.c                |  23 +-
> > > >  include/linux/memcontrol.h       | 218 +++++++++------
> > > >  include/linux/mm_inline.h        |   6 +
> > > >  include/trace/events/writeback.h |   5 +
> > > >  mm/compaction.c                  |  39 ++-
> > > >  mm/huge_memory.c                 | 153 ++++++++--
> > > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > > >  mm/migrate.c                     |   4 +
> > > >  mm/mlock.c                       |   2 +-
> > > >  mm/page_io.c                     |   5 +-
> > > >  mm/swap.c                        |  49 ++--
> > > >  mm/vmscan.c                      |  66 ++---
> > > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > > >
> > > >
> > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > > --
> > > > 2.11.0
> > > >
> > > >
> > >
> 

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-27 10:43         ` Mika Penttilä
  0 siblings, 0 replies; 54+ messages in thread
From: Mika Penttilä @ 2022-06-27 10:43 UTC (permalink / raw)
  To: Yosry Ahmed, Muchun Song
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM



On 27.6.2022 11.05, Yosry Ahmed wrote:
> On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
>>
>> On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
>>> On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
>>>>
>>>> This version is rebased on mm-unstable. Hopefully, Andrew can get this series
>>>> into mm-unstable which will help to determine whether there is a problem or
>>>> degradation. I am also doing some benchmark tests in parallel.
>>>>
>>>> Since the following patchsets applied. All the kernel memory are charged
>>>> with the new APIs of obj_cgroup.
>>>>
>>>>          commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
>>>>          commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
>>>>
>>>> But user memory allocations (LRU pages) pinning memcgs for a long time -
>>>> it exists at a larger scale and is causing recurring problems in the real
>>>> world: page cache doesn't get reclaimed for a long time, or is used by the
>>>> second, third, fourth, ... instance of the same job that was restarted into
>>>> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
>>>> and make page reclaim very inefficient.
>>>>
>>>> We can convert LRU pages and most other raw memcg pins to the objcg direction
>>>> to fix this problem, and then the LRU pages will not pin the memcgs.
>>>>
>>>> This patchset aims to make the LRU pages to drop the reference to memory
>>>> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
>>>> of the dying cgroups will not increase if we run the following test script.
>>>
>>> This is amazing work!
>>>
>>> Sorry if I came late, I didn't follow the threads of previous versions
>>> so this might be redundant, I just have a couple of questions.
>>>
>>> a) If LRU pages keep getting parented until they reach root_mem_cgroup
>>> (assuming they can), aren't these pages effectively unaccounted at
>>> this point or leaked? Is there protection against this?
>>>
>>
>> In this case, those pages are accounted in root memcg level. Unfortunately,
>> there is no mechanism now to transfer a page's memcg from one to another.
>>
>>> b) Since moving charged pages between memcgs is now becoming easier by
>>> using the APIs of obj_cgroup, I wonder if this opens the door for
>>> future work to transfer charges to memcgs that are actually using
>>> reparented resources. For example, let's say cgroup A reads a few
>>> pages into page cache, and then they are no longer used by cgroup A.
>>> cgroup B, however, is using the same pages that are currently charged
>>> to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
>>> dies, and these pages are reparented to A's parent, can we possibly
>>> mark these reparented pages (maybe in the page tables somewhere) so
>>> that next time they get accessed we recharge them to B instead
>>> (possibly asynchronously)?
>>> I don't have much experience about page tables but I am pretty sure
>>> they are loaded so maybe there is no room in PTEs for something like
>>> this, but I have always wondered about what we can do for this case
>>> where a cgroup is consistently using memory charged to another cgroup.
>>> Maybe when this memory is reparented is a good point in time to decide
>>> to recharge appropriately. It would also fix the reparenty leak to
>>> root problem (if it even exists).
>>>
>>
>>  From my point of view, this is going to be an improvement to the memcg
>> subsystem in the future.  IIUC, most reparented pages are page cache
>> pages without be mapped to users. So page tables are not a suitable
>> place to record this information. However, we already have this information
>> in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
>> equal to the page's obj_cgroup->memcg->objcg, it means this page have
>> been reparented. I am thinking if a place where a page is mapped (probably
>> page fault patch) or page (cache) is written (usually vfs write path)
>> is suitable to transfer page's memcg from one to another. But need more
> 
> Very good point about unmapped pages, I missed this. Page tables will
> do us no good here. Such a change would indeed require careful thought
> because (like you mentioned) there are multiple points in time where
> it might be suitable to consider recharging the page (e.g. when the
> page is mapped). This could be an incremental change though. Right now
> we have no recharging at all, so maybe we can gradually add recharging
> to suitable paths.
> 
>> thinking, e.g. How to decide if a reparented page needs to be transferred?
> 
> Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
> current is not a descendant of page's obj_cgroup->memcg) is a good
> place to start?
> 
> My rationale is that if the page is charged to root_mem_cgroup through
> reparenting and a process in a memcg is using it then this is probably
> an accounting leak. If a page is charged to a memcg A through
> reparenting and is used by a memcg B in a different subtree, then
> probably memcg B is getting away with using the page for free while A
> is being taxed. If B is a descendant of A, it is still getting away
> with using the page unaccounted, but at least it makes no difference
> for A.
> 
> One could argue that we might as well recharge a reparented page
> anyway if the process is cheap (or done asynchronously), and the paths
> where we do recharging are not very common.
> 
> All of this might be moot, I am just thinking out loud. In any way
> this would be future work and not part of this work.
> 


I think you have to uncharge at the reparented-to parent to keep the balances 
right (because the parent is hierarchically charged through page_counter), and 
maybe recharge after that if appropriate.




> 
>> If we need more information to make this decision, where to store that
>> information? These are my primary thoughts on this question.
> 
>>
>> Thanks.
>>
>>> Thanks again for this work and please excuse my ignorance if any part
>>> of what I said doesn't make sense :)
>>>
>>>>
>>>> ```bash
>>>> #!/bin/bash
>>>>
>>>> dd if=/dev/zero of=temp bs=4096 count=1
>>>> cat /proc/cgroups | grep memory
>>>>
>>>> for i in {0..2000}
>>>> do
>>>>          mkdir /sys/fs/cgroup/memory/test$i
>>>>          echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
>>>>          cat temp >> log
>>>>          echo $$ > /sys/fs/cgroup/memory/cgroup.procs
>>>>          rmdir /sys/fs/cgroup/memory/test$i
>>>> done
>>>>
>>>> cat /proc/cgroups | grep memory
>>>>
>>>> rm -f temp log
>>>> ```
>>>>
>>>> v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
>>>> v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
>>>> v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
>>>> v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
>>>> v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
>>>> RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
>>>> RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
>>>> RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
>>>> RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
>>>>
>>>> v6:
>>>>   - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
>>>>   - Rebase to mm-unstable.
>>>>
>>>> v5:
>>>>   - Lots of improvements from Johannes, Roman and Waiman.
>>>>   - Fix lockdep warning reported by kernel test robot.
>>>>   - Add two new patches to do code cleanup.
>>>>   - Collect Acked-by and Reviewed-by from Johannes and Roman.
>>>>   - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
>>>>     local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
>>>>     it to local_lock.  It could be an improvement in the future.
>>>>
>>>> v4:
>>>>   - Resend and rebased on v5.18.
>>>>
>>>> v3:
>>>>   - Removed the Acked-by tags from Roman since this version is based on
>>>>     the folio relevant.
>>>>
>>>> v2:
>>>>   - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
>>>>     dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
>>>>   - Rebase to linux 5.15-rc1.
>>>>   - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
>>>>
>>>> v1:
>>>>   - Drop RFC tag.
>>>>   - Rebase to linux next-20210811.
>>>>
>>>> RFC v4:
>>>>   - Collect Acked-by from Roman.
>>>>   - Rebase to linux next-20210525.
>>>>   - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
>>>>   - Change the patch 1 title to "prepare objcg API for non-kmem usage".
>>>>   - Convert reparent_ops_head to an array in patch 8.
>>>>
>>>> Thanks for Roman's review and suggestions.
>>>>
>>>> RFC v3:
>>>>   - Drop the code cleanup and simplification patches. Gather those patches
>>>>     into a separate series[1].
>>>>   - Rework patch #1 suggested by Johannes.
>>>>
>>>> RFC v2:
>>>>   - Collect Acked-by tags by Johannes. Thanks.
>>>>   - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
>>>>   - Fix move_pages_to_lru().
>>>>
>>>> Muchun Song (11):
>>>>    mm: memcontrol: remove dead code and comments
>>>>    mm: rename unlock_page_lruvec{_irq, _irqrestore} to
>>>>      lruvec_unlock{_irq, _irqrestore}
>>>>    mm: memcontrol: prepare objcg API for non-kmem usage
>>>>    mm: memcontrol: make lruvec lock safe when LRU pages are reparented
>>>>    mm: vmscan: rework move_pages_to_lru()
>>>>    mm: thp: make split queue lock safe when LRU pages are reparented
>>>>    mm: memcontrol: make all the callers of {folio,page}_memcg() safe
>>>>    mm: memcontrol: introduce memcg_reparent_ops
>>>>    mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
>>>>    mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
>>>>    mm: lru: use lruvec lock to serialize memcg changes
>>>>
>>>>   fs/buffer.c                      |   4 +-
>>>>   fs/fs-writeback.c                |  23 +-
>>>>   include/linux/memcontrol.h       | 218 +++++++++------
>>>>   include/linux/mm_inline.h        |   6 +
>>>>   include/trace/events/writeback.h |   5 +
>>>>   mm/compaction.c                  |  39 ++-
>>>>   mm/huge_memory.c                 | 153 ++++++++--
>>>>   mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
>>>>   mm/migrate.c                     |   4 +
>>>>   mm/mlock.c                       |   2 +-
>>>>   mm/page_io.c                     |   5 +-
>>>>   mm/swap.c                        |  49 ++--
>>>>   mm/vmscan.c                      |  66 ++---
>>>>   13 files changed, 776 insertions(+), 382 deletions(-)
>>>>
>>>>
>>>> base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
>>>> --
>>>> 2.11.0
>>>>
>>>>
>>>
> 


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-27 10:43         ` Mika Penttilä
  0 siblings, 0 replies; 54+ messages in thread
From: Mika Penttilä @ 2022-06-27 10:43 UTC (permalink / raw)
  To: Yosry Ahmed, Muchun Song
  Cc: Andrew Morton, Johannes Weiner, longman-H+wXaHxf7aLQT0dZR+AlfA,
	Michal Hocko, Roman Gushchin, Shakeel Butt, Cgroups,
	duanxiongchun-EC8Uxl6Npydl57MIdRCFDg, Linux Kernel Mailing List,
	Linux-MM



On 27.6.2022 11.05, Yosry Ahmed wrote:
> On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
>>
>> On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
>>> On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
>>>>
>>>> This version is rebased on mm-unstable. Hopefully, Andrew can get this series
>>>> into mm-unstable which will help to determine whether there is a problem or
>>>> degradation. I am also doing some benchmark tests in parallel.
>>>>
>>>> Since the following patchsets applied. All the kernel memory are charged
>>>> with the new APIs of obj_cgroup.
>>>>
>>>>          commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
>>>>          commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
>>>>
>>>> But user memory allocations (LRU pages) pinning memcgs for a long time -
>>>> it exists at a larger scale and is causing recurring problems in the real
>>>> world: page cache doesn't get reclaimed for a long time, or is used by the
>>>> second, third, fourth, ... instance of the same job that was restarted into
>>>> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
>>>> and make page reclaim very inefficient.
>>>>
>>>> We can convert LRU pages and most other raw memcg pins to the objcg direction
>>>> to fix this problem, and then the LRU pages will not pin the memcgs.
>>>>
>>>> This patchset aims to make the LRU pages to drop the reference to memory
>>>> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
>>>> of the dying cgroups will not increase if we run the following test script.
>>>
>>> This is amazing work!
>>>
>>> Sorry if I came late, I didn't follow the threads of previous versions
>>> so this might be redundant, I just have a couple of questions.
>>>
>>> a) If LRU pages keep getting parented until they reach root_mem_cgroup
>>> (assuming they can), aren't these pages effectively unaccounted at
>>> this point or leaked? Is there protection against this?
>>>
>>
>> In this case, those pages are accounted at the root memcg level. Unfortunately,
>> there is no mechanism now to transfer a page's memcg from one to another.
>>
>>> b) Since moving charged pages between memcgs is now becoming easier by
>>> using the APIs of obj_cgroup, I wonder if this opens the door for
>>> future work to transfer charges to memcgs that are actually using
>>> reparented resources. For example, let's say cgroup A reads a few
>>> pages into page cache, and then they are no longer used by cgroup A.
>>> cgroup B, however, is using the same pages that are currently charged
>>> to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
>>> dies, and these pages are reparented to A's parent, can we possibly
>>> mark these reparented pages (maybe in the page tables somewhere) so
>>> that next time they get accessed we recharge them to B instead
>>> (possibly asynchronously)?
>>> I don't have much experience about page tables but I am pretty sure
>>> they are loaded so maybe there is no room in PTEs for something like
>>> this, but I have always wondered about what we can do for this case
>>> where a cgroup is consistently using memory charged to another cgroup.
>>> Maybe when this memory is reparented is a good point in time to decide
>>> to recharge appropriately. It would also fix the reparenty leak to
>>> root problem (if it even exists).
>>>
>>
>>  From my point of view, this is going to be an improvement to the memcg
>> subsystem in the future.  IIUC, most reparented pages are page cache
>> pages without being mapped to users. So page tables are not a suitable
>> place to record this information. However, we already have this information
>> in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
>> equal to the page's obj_cgroup->memcg->objcg, it means this page has
>> been reparented. I am thinking whether a place where a page is mapped (probably
>> the page fault path) or a page (cache) is written (usually the vfs write path)
>> is suitable to transfer page's memcg from one to another. But need more
> 
> Very good point about unmapped pages, I missed this. Page tables will
> do us no good here. Such a change would indeed require careful thought
> because (like you mentioned) there are multiple points in time where
> it might be suitable to consider recharging the page (e.g. when the
> page is mapped). This could be an incremental change though. Right now
> we have no recharging at all, so maybe we can gradually add recharging
> to suitable paths.
> 
>> thinking, e.g. How to decide if a reparented page needs to be transferred?
> 
> Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
> current is not a descendant of page's obj_cgroup->memcg) is a good
> place to start?
> 
> My rationale is that if the page is charged to root_mem_cgroup through
> reparenting and a process in a memcg is using it then this is probably
> an accounting leak. If a page is charged to a memcg A through
> reparenting and is used by a memcg B in a different subtree, then
> probably memcg B is getting away with using the page for free while A
> is being taxed. If B is a descendant of A, it is still getting away
> with using the page unaccounted, but at least it makes no difference
> for A.
> 
> One could argue that we might as well recharge a reparented page
> anyway if the process is cheap (or done asynchronously), and the paths
> where we do recharging are not very common.
> 
> All of this might be moot, I am just thinking out loud. In any way
> this would be future work and not part of this work.
> 


I think you have to uncharge at the reparented-to parent to keep the balances 
right (because the parent is hierarchically charged through page_counter), and 
maybe recharge after that if appropriate.




> 
>> If we need more information to make this decision, where to store that
>> information? These are my primary thoughts on this question.
> 
>>
>> Thanks.
>>
>>> Thanks again for this work and please excuse my ignorance if any part
>>> of what I said doesn't make sense :)
>>>
>>>>
>>>> ```bash
>>>> #!/bin/bash
>>>>
>>>> dd if=/dev/zero of=temp bs=4096 count=1
>>>> cat /proc/cgroups | grep memory
>>>>
>>>> for i in {0..2000}
>>>> do
>>>>          mkdir /sys/fs/cgroup/memory/test$i
>>>>          echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
>>>>          cat temp >> log
>>>>          echo $$ > /sys/fs/cgroup/memory/cgroup.procs
>>>>          rmdir /sys/fs/cgroup/memory/test$i
>>>> done
>>>>
>>>> cat /proc/cgroups | grep memory
>>>>
>>>> rm -f temp log
>>>> ```
>>>>
>>>> v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
>>>> v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
>>>> v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
>>>> v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
>>>> v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
>>>> RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
>>>> RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
>>>> RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
>>>> RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
>>>>
>>>> v6:
>>>>   - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
>>>>   - Rebase to mm-unstable.
>>>>
>>>> v5:
>>>>   - Lots of improvements from Johannes, Roman and Waiman.
>>>>   - Fix lockdep warning reported by kernel test robot.
>>>>   - Add two new patches to do code cleanup.
>>>>   - Collect Acked-by and Reviewed-by from Johannes and Roman.
>>>>   - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
>>>>     local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
>>>>     it to local_lock.  It could be an improvement in the future.
>>>>
>>>> v4:
>>>>   - Resend and rebased on v5.18.
>>>>
>>>> v3:
>>>>   - Removed the Acked-by tags from Roman since this version is based on
>>>>     the folio relevant.
>>>>
>>>> v2:
>>>>   - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
>>>>     dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
>>>>   - Rebase to linux 5.15-rc1.
>>>>   - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
>>>>
>>>> v1:
>>>>   - Drop RFC tag.
>>>>   - Rebase to linux next-20210811.
>>>>
>>>> RFC v4:
>>>>   - Collect Acked-by from Roman.
>>>>   - Rebase to linux next-20210525.
>>>>   - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
>>>>   - Change the patch 1 title to "prepare objcg API for non-kmem usage".
>>>>   - Convert reparent_ops_head to an array in patch 8.
>>>>
>>>> Thanks for Roman's review and suggestions.
>>>>
>>>> RFC v3:
>>>>   - Drop the code cleanup and simplification patches. Gather those patches
>>>>     into a separate series[1].
>>>>   - Rework patch #1 suggested by Johannes.
>>>>
>>>> RFC v2:
>>>>   - Collect Acked-by tags by Johannes. Thanks.
>>>>   - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
>>>>   - Fix move_pages_to_lru().
>>>>
>>>> Muchun Song (11):
>>>>    mm: memcontrol: remove dead code and comments
>>>>    mm: rename unlock_page_lruvec{_irq, _irqrestore} to
>>>>      lruvec_unlock{_irq, _irqrestore}
>>>>    mm: memcontrol: prepare objcg API for non-kmem usage
>>>>    mm: memcontrol: make lruvec lock safe when LRU pages are reparented
>>>>    mm: vmscan: rework move_pages_to_lru()
>>>>    mm: thp: make split queue lock safe when LRU pages are reparented
>>>>    mm: memcontrol: make all the callers of {folio,page}_memcg() safe
>>>>    mm: memcontrol: introduce memcg_reparent_ops
>>>>    mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
>>>>    mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
>>>>    mm: lru: use lruvec lock to serialize memcg changes
>>>>
>>>>   fs/buffer.c                      |   4 +-
>>>>   fs/fs-writeback.c                |  23 +-
>>>>   include/linux/memcontrol.h       | 218 +++++++++------
>>>>   include/linux/mm_inline.h        |   6 +
>>>>   include/trace/events/writeback.h |   5 +
>>>>   mm/compaction.c                  |  39 ++-
>>>>   mm/huge_memory.c                 | 153 ++++++++--
>>>>   mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
>>>>   mm/migrate.c                     |   4 +
>>>>   mm/mlock.c                       |   2 +-
>>>>   mm/page_io.c                     |   5 +-
>>>>   mm/swap.c                        |  49 ++--
>>>>   mm/vmscan.c                      |  66 ++---
>>>>   13 files changed, 776 insertions(+), 382 deletions(-)
>>>>
>>>>
>>>> base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
>>>> --
>>>> 2.11.0
>>>>
>>>>
>>>
> 


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
  2022-06-27 10:13         ` Muchun Song
@ 2022-06-27 16:46           ` Yosry Ahmed
  -1 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-06-27 16:46 UTC (permalink / raw)
  To: Muchun Song
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 3:13 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote:
> > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > >
> > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > >
> > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > > into mm-unstable which will help to determine whether there is a problem or
> > > > > degradation. I am also doing some benchmark tests in parallel.
> > > > >
> > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > with the new APIs of obj_cgroup.
> > > > >
> > > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > > >
> > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > and make page reclaim very inefficient.
> > > > >
> > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > >
> > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > of the dying cgroups will not increase if we run the following test script.
> > > >
> > > > This is amazing work!
> > > >
> > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > so this might be redundant, I just have a couple of questions.
> > > >
> > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > this point or leaked? Is there protection against this?
> > > >
> > >
> > > In this case, those pages are accounted at the root memcg level. Unfortunately,
> > > there is no mechanism now to transfer a page's memcg from one to another.
> > >
> > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > future work to transfer charges to memcgs that are actually using
> > > > reparented resources. For example, let's say cgroup A reads a few
> > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > cgroup B, however, is using the same pages that are currently charged
> > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > that next time they get accessed we recharge them to B instead
> > > > (possibly asynchronously)?
> > > > I don't have much experience about page tables but I am pretty sure
> > > > they are loaded so maybe there is no room in PTEs for something like
> > > > this, but I have always wondered about what we can do for this case
> > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > Maybe when this memory is reparented is a good point in time to decide
> > > > to recharge appropriately. It would also fix the reparenty leak to
> > > > root problem (if it even exists).
> > > >
> > >
> > > From my point of view, this is going to be an improvement to the memcg
> > > subsystem in the future.  IIUC, most reparented pages are page cache
> > > pages without being mapped to users. So page tables are not a suitable
> > > place to record this information. However, we already have this information
> > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > > equal to the page's obj_cgroup->memcg->objcg, it means this page has
> > > been reparented. I am thinking whether a place where a page is mapped (probably
> > > the page fault path) or a page (cache) is written (usually the vfs write path)
> > > is suitable to transfer page's memcg from one to another. But need more
> >
> > Very good point about unmapped pages, I missed this. Page tables will
> > do us no good here. Such a change would indeed require careful thought
> > because (like you mentioned) there are multiple points in time where
> > it might be suitable to consider recharging the page (e.g. when the
> > page is mapped). This could be an incremental change though. Right now
> > we have no recharging at all, so maybe we can gradually add recharging
> > to suitable paths.
> >
>
> Agree.
>
> > > thinking, e.g. How to decide if a reparented page needs to be transferred?
> >
> > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
>
> This is a good start.
>
> > current is not a descendant of page's obj_cgroup->memcg) is a good
>
> I am not sure about this one, since a page could be shared between different
> memcgs.
>
>     root
>    /   \
>   A     B
>  / \     \
> C   E     D
>
> e.g. a page (originally, it belongs to memcg E and E is dying) is reparented
> to memcg A, and it is shared between C and D now. Then we need to consider
> whether it should be recharged. Yep, we need more thinking about recharging.

Assuming that we are recharging in the mapping path, and D is mapping
a page that was used by E and later reparented to A, I think we should
recharge it to D and uncharge from A in all cases:
If C is not using the page (not shared), then the page should be
accounted to its real user, D, instead of taxing A.
If C is also using the page (shared), then it is not wrong to have the
page accounted to D since it's also a user of the page. Either way
only one of the memcgs using the page will be charged.

So I think either way recharging the page to D instead of A would be
correct. IMO, whether we want to skip the recharge to D for some cases
or not would depend on performance and not correctness, since it
should always be correct to recharge the page to D in this scenario.
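
(For illustration only: the "has been reparented" test described earlier in the thread, i.e. comparing a charge's objcg against the objcg its memcg currently owns, could be sketched roughly as below. The helper name is invented here; only the comparison itself comes from the thread, and the series may expose it differently.)

```c
#include <linux/memcontrol.h>

/*
 * Illustrative sketch only, not the series' actual API: a live
 * obj_cgroup matches the objcg its memcg currently owns; once the
 * charge has been reparented, the objcg points at the parent memcg,
 * whose own objcg is a different object, so the two no longer match.
 */
static inline bool objcg_was_reparented(struct obj_cgroup *objcg)
{
	struct mem_cgroup *memcg;
	bool reparented;

	rcu_read_lock();
	memcg = obj_cgroup_memcg(objcg);
	reparented = objcg != rcu_access_pointer(memcg->objcg);
	rcu_read_unlock();

	return reparented;
}
```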

>
> > place to start?
> >
> > My rationale is that if the page is charged to root_mem_cgroup through
>
> I think the following issue exists not only in root_mem_cgroup but also
> in non-root memcgs.

What's special about root is that every single memcg is a descendant
of root, and that accounting user pages to root is usually not
something that we want. So if we rely on a heuristic like (memcg of
current is not a descendant of the page's obj_cgroup->memcg), we need to
have a special case for root so that pages reparented to root are
always recharged.
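
A minimal sketch of that heuristic (illustrative only, not part of the series; it assumes the caller has already established that the folio's charge was reparented, for example with a check like the one sketched earlier):

```c
/*
 * Sketch of the heuristic discussed in this thread (not part of the
 * series): recharge an already-reparented folio to the memcg of the
 * task touching it when the charge ended up in the root memcg, or
 * when the task sits outside the subtree currently being charged.
 * The hook point (page fault, vfs write, ...) is left open.
 */
static bool should_recharge(struct folio *folio, struct mem_cgroup *task_memcg)
{
	struct mem_cgroup *charged = folio_memcg(folio);

	/* Charges that were reparented all the way up to root. */
	if (charged == root_mem_cgroup)
		return true;

	/* Task is using memory charged to an unrelated subtree. */
	return !mem_cgroup_is_descendant(task_memcg, charged);
}
```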

>
> > reparenting and a process in a memcg is using it then this is probably
> > an accounting leak. If a page is charged to a memcg A through
> > reparenting and is used by a memcg B in a different subtree, then
> > probably memcg B is getting away with using the page for free while A
> > is being taxed. If B is a descendant of A, it is still getting away
> > with using the page unaccounted, but at least it makes no difference
> > for A.
>
> I agree this case needs to be improved.
>
> >
> > One could argue that we might as well recharge a reparented page
> > anyway if the process is cheap (or done asynchronously), and the paths
> > where we do recharging are not very common.
> >
> > All of this might be moot, I am just thinking out loud. In any way
> > this would be future work and not part of this work.
> >
>
> Agree.
>
> Thanks.
>
> >
> > > If we need more information to make this decision, where to store that
> > > information? These are my primary thoughts on this question.
> >
> > >
> > > Thanks.
> > >
> > > > Thanks again for this work and please excuse my ignorance if any part
> > > > of what I said doesn't make sense :)
> > > >
> > > > >
> > > > > ```bash
> > > > > #!/bin/bash
> > > > >
> > > > > dd if=/dev/zero of=temp bs=4096 count=1
> > > > > cat /proc/cgroups | grep memory
> > > > >
> > > > > for i in {0..2000}
> > > > > do
> > > > >         mkdir /sys/fs/cgroup/memory/test$i
> > > > >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > > > >         cat temp >> log
> > > > >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > > > >         rmdir /sys/fs/cgroup/memory/test$i
> > > > > done
> > > > >
> > > > > cat /proc/cgroups | grep memory
> > > > >
> > > > > rm -f temp log
> > > > > ```
> > > > >
> > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> > > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> > > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> > > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> > > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> > > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> > > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> > > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> > > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> > > > >
> > > > > v6:
> > > > >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> > > > >  - Rebase to mm-unstable.
> > > > >
> > > > > v5:
> > > > >  - Lots of improvements from Johannes, Roman and Waiman.
> > > > >  - Fix lockdep warning reported by kernel test robot.
> > > > >  - Add two new patches to do code cleanup.
> > > > >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > > > >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > > > >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> > > > >    it to local_lock.  It could be an improvement in the future.
> > > > >
> > > > > v4:
> > > > >  - Resend and rebased on v5.18.
> > > > >
> > > > > v3:
> > > > >  - Removed the Acked-by tags from Roman since this version is based on
> > > > >    the folio relevant.
> > > > >
> > > > > v2:
> > > > >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > > > >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > > > >  - Rebase to linux 5.15-rc1.
> > > > >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> > > > >
> > > > > v1:
> > > > >  - Drop RFC tag.
> > > > >  - Rebase to linux next-20210811.
> > > > >
> > > > > RFC v4:
> > > > >  - Collect Acked-by from Roman.
> > > > >  - Rebase to linux next-20210525.
> > > > >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > > > >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > > > >  - Convert reparent_ops_head to an array in patch 8.
> > > > >
> > > > > Thanks for Roman's review and suggestions.
> > > > >
> > > > > RFC v3:
> > > > >  - Drop the code cleanup and simplification patches. Gather those patches
> > > > >    into a separate series[1].
> > > > >  - Rework patch #1 suggested by Johannes.
> > > > >
> > > > > RFC v2:
> > > > >  - Collect Acked-by tags by Johannes. Thanks.
> > > > >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > > > >  - Fix move_pages_to_lru().
> > > > >
> > > > > Muchun Song (11):
> > > > >   mm: memcontrol: remove dead code and comments
> > > > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > > > >     lruvec_unlock{_irq, _irqrestore}
> > > > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > > > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > > > >   mm: vmscan: rework move_pages_to_lru()
> > > > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > > > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > > > >   mm: memcontrol: introduce memcg_reparent_ops
> > > > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > > > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > > > >   mm: lru: use lruvec lock to serialize memcg changes
> > > > >
> > > > >  fs/buffer.c                      |   4 +-
> > > > >  fs/fs-writeback.c                |  23 +-
> > > > >  include/linux/memcontrol.h       | 218 +++++++++------
> > > > >  include/linux/mm_inline.h        |   6 +
> > > > >  include/trace/events/writeback.h |   5 +
> > > > >  mm/compaction.c                  |  39 ++-
> > > > >  mm/huge_memory.c                 | 153 ++++++++--
> > > > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > > > >  mm/migrate.c                     |   4 +
> > > > >  mm/mlock.c                       |   2 +-
> > > > >  mm/page_io.c                     |   5 +-
> > > > >  mm/swap.c                        |  49 ++--
> > > > >  mm/vmscan.c                      |  66 ++---
> > > > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > > > >
> > > > >
> > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > > > --
> > > > > 2.11.0
> > > > >
> > > > >
> > > >
> >

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-27 16:46           ` Yosry Ahmed
  0 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-06-27 16:46 UTC (permalink / raw)
  To: Muchun Song
  Cc: Andrew Morton, Johannes Weiner, longman-H+wXaHxf7aLQT0dZR+AlfA,
	Michal Hocko, Roman Gushchin, Shakeel Butt, Cgroups,
	duanxiongchun-EC8Uxl6Npydl57MIdRCFDg, Linux Kernel Mailing List,
	Linux-MM

On Mon, Jun 27, 2022 at 3:13 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
>
> On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote:
> > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
> > >
> > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > >
> > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > > into mm-unstable which will help to determine whether there is a problem or
> > > > > degradation. I am also doing some benchmark tests in parallel.
> > > > >
> > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > with the new APIs of obj_cgroup.
> > > > >
> > > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > > >
> > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > and make page reclaim very inefficient.
> > > > >
> > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > >
> > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > of the dying cgroups will not increase if we run the following test script.
> > > >
> > > > This is amazing work!
> > > >
> > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > so this might be redundant, I just have a couple of questions.
> > > >
> > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > this point or leaked? Is there protection against this?
> > > >
> > >
> > > In this case, those pages are accounted at the root memcg level. Unfortunately,
> > > there is no mechanism now to transfer a page's memcg from one to another.
> > >
> > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > future work to transfer charges to memcgs that are actually using
> > > > reparented resources. For example, let's say cgroup A reads a few
> > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > cgroup B, however, is using the same pages that are currently charged
> > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > that next time they get accessed we recharge them to B instead
> > > > (possibly asynchronously)?
> > > > I don't have much experience about page tables but I am pretty sure
> > > > they are loaded so maybe there is no room in PTEs for something like
> > > > this, but I have always wondered about what we can do for this case
> > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > Maybe when this memory is reparented is a good point in time to decide
> > > > to recharge appropriately. It would also fix the reparenty leak to
> > > > root problem (if it even exists).
> > > >
> > >
> > > From my point of view, this is going to be an improvement to the memcg
> > > subsystem in the future.  IIUC, most reparented pages are page cache
> > > pages without being mapped to users. So page tables are not a suitable
> > > place to record this information. However, we already have this information
> > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > > equal to the page's obj_cgroup->memcg->objcg, it means this page has
> > > been reparented. I am thinking whether a place where a page is mapped (probably
> > > the page fault path) or a page (cache) is written (usually the vfs write path)
> > > is suitable to transfer page's memcg from one to another. But need more
> >
> > Very good point about unmapped pages, I missed this. Page tables will
> > do us no good here. Such a change would indeed require careful thought
> > because (like you mentioned) there are multiple points in time where
> > it might be suitable to consider recharging the page (e.g. when the
> > page is mapped). This could be an incremental change though. Right now
> > we have no recharging at all, so maybe we can gradually add recharging
> > to suitable paths.
> >
>
> Agree.
>
> > > thinking, e.g. How to decide if a reparented page needs to be transferred?
> >
> > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
>
> This is a good start.
>
> > current is not a descendant of page's obj_cgroup->memcg) is a good
>
> I am not sure about this one, since a page could be shared between different
> memcgs.
>
>     root
>    /   \
>   A     B
>  / \     \
> C   E     D
>
> e.g. a page (originally, it belongs to memcg E and E is dying) is reparented
> to memcg A, and it is shared between C and D now. Then we need to consider
> whether it should be recharged. Yep, we need more thinking about recharging.

Assuming that we are recharging in the mapping path, and D is mapping
a page that was used by E and later reparented to A, I think we should
recharge it to D and uncharge from A in all cases:
If C is not using the page (not shared), then the page should be
accounted to its real user, D, instead of taxing A.
If C is also using the page (shared), then it is not wrong to have the
page accounted to D since it's also a user of the page. Either way
only one of the memcgs using the page will be charged.

So I think either way recharging the page to D instead of A would be
correct. IMO, whether we want to skip the recharge to D for some cases
or not would depend on performance and not correctness, since it
should always be correct to recharge the page to D in this scenario.

>
> > place to start?
> >
> > My rationale is that if the page is charged to root_mem_cgroup through
>
> I think the following issue exists not only in root_mem_cgroup but also
> in non-root memcgs.

What's special about root is that every single memcg is a descendant
of root, and that accounting user pages to root is usually not
something that we want. So if we rely on a heuristic like (memcg of
current is not a descendant of the page's obj_cgroup->memcg), we need to
have a special case for root so that pages reparented to root are
always recharged.

>
> > reparenting and a process in a memcg is using it then this is probably
> > an accounting leak. If a page is charged to a memcg A through
> > reparenting and is used by a memcg B in a different subtree, then
> > probably memcg B is getting away with using the page for free while A
> > is being taxed. If B is a descendant of A, it is still getting away
> > with using the page unaccounted, but at least it makes no difference
> > for A.
>
> I agree this case needs to be improved.
>
> >
> > One could argue that we might as well recharge a reparented page
> > anyway if the process is cheap (or done asynchronously), and the paths
> > where we do recharging are not very common.
> >
> > All of this might be moot, I am just thinking out loud. In any way
> > this would be future work and not part of this work.
> >
>
> Agree.
>
> Thanks.
>
> >
> > > If we need more information to make this decision, where to store that
> > > information? These are my primary thoughts on this question.
> >
> > >
> > > Thanks.
> > >
> > > > Thanks again for this work and please excuse my ignorance if any part
> > > > of what I said doesn't make sense :)
> > > >
> > > > >
> > > > > ```bash
> > > > > #!/bin/bash
> > > > >
> > > > > dd if=/dev/zero of=temp bs=4096 count=1
> > > > > cat /proc/cgroups | grep memory
> > > > >
> > > > > for i in {0..2000}
> > > > > do
> > > > >         mkdir /sys/fs/cgroup/memory/test$i
> > > > >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > > > >         cat temp >> log
> > > > >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > > > >         rmdir /sys/fs/cgroup/memory/test$i
> > > > > done
> > > > >
> > > > > cat /proc/cgroups | grep memory
> > > > >
> > > > > rm -f temp log
> > > > > ```
> > > > >
> > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> > > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> > > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> > > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> > > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> > > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > >
> > > > > v6:
> > > > >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> > > > >  - Rebase to mm-unstable.
> > > > >
> > > > > v5:
> > > > >  - Lots of improvements from Johannes, Roman and Waiman.
> > > > >  - Fix lockdep warning reported by kernel test robot.
> > > > >  - Add two new patches to do code cleanup.
> > > > >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > > > >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > > > >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> > > > >    it to local_lock.  It could be an improvement in the future.
> > > > >
> > > > > v4:
> > > > >  - Resend and rebased on v5.18.
> > > > >
> > > > > v3:
> > > > >  - Removed the Acked-by tags from Roman since this version is based on
> > > > >    the folio relevant.
> > > > >
> > > > > v2:
> > > > >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > > > >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > > > >  - Rebase to linux 5.15-rc1.
> > > > >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> > > > >
> > > > > v1:
> > > > >  - Drop RFC tag.
> > > > >  - Rebase to linux next-20210811.
> > > > >
> > > > > RFC v4:
> > > > >  - Collect Acked-by from Roman.
> > > > >  - Rebase to linux next-20210525.
> > > > >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > > > >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > > > >  - Convert reparent_ops_head to an array in patch 8.
> > > > >
> > > > > Thanks for Roman's review and suggestions.
> > > > >
> > > > > RFC v3:
> > > > >  - Drop the code cleanup and simplification patches. Gather those patches
> > > > >    into a separate series[1].
> > > > >  - Rework patch #1 suggested by Johannes.
> > > > >
> > > > > RFC v2:
> > > > >  - Collect Acked-by tags by Johannes. Thanks.
> > > > >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > > > >  - Fix move_pages_to_lru().
> > > > >
> > > > > Muchun Song (11):
> > > > >   mm: memcontrol: remove dead code and comments
> > > > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > > > >     lruvec_unlock{_irq, _irqrestore}
> > > > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > > > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > > > >   mm: vmscan: rework move_pages_to_lru()
> > > > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > > > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > > > >   mm: memcontrol: introduce memcg_reparent_ops
> > > > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > > > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > > > >   mm: lru: use lruvec lock to serialize memcg changes
> > > > >
> > > > >  fs/buffer.c                      |   4 +-
> > > > >  fs/fs-writeback.c                |  23 +-
> > > > >  include/linux/memcontrol.h       | 218 +++++++++------
> > > > >  include/linux/mm_inline.h        |   6 +
> > > > >  include/trace/events/writeback.h |   5 +
> > > > >  mm/compaction.c                  |  39 ++-
> > > > >  mm/huge_memory.c                 | 153 ++++++++--
> > > > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > > > >  mm/migrate.c                     |   4 +
> > > > >  mm/mlock.c                       |   2 +-
> > > > >  mm/page_io.c                     |   5 +-
> > > > >  mm/swap.c                        |  49 ++--
> > > > >  mm/vmscan.c                      |  66 ++---
> > > > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > > > >
> > > > >
> > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > > > --
> > > > > 2.11.0
> > > > >
> > > > >
> > > >
> >

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-27 16:49           ` Yosry Ahmed
  0 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-06-27 16:49 UTC (permalink / raw)
  To: Mika Penttilä
  Cc: Muchun Song, Andrew Morton, Johannes Weiner, longman,
	Michal Hocko, Roman Gushchin, Shakeel Butt, Cgroups,
	duanxiongchun, Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 3:43 AM Mika Penttilä <mpenttil@redhat.com> wrote:
>
>
>
> On 27.6.2022 11.05, Yosry Ahmed wrote:
> > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >>
> >> On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> >>> On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >>>>
> >>>> This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> >>>> into mm-unstable which will help to determine whether there is a problem or
> >>>> degradation. I am also doing some benchmark tests in parallel.
> >>>>
> >>>> Since the following patchsets applied. All the kernel memory are charged
> >>>> with the new APIs of obj_cgroup.
> >>>>
> >>>>          commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> >>>>          commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> >>>>
> >>>> But user memory allocations (LRU pages) pinning memcgs for a long time -
> >>>> it exists at a larger scale and is causing recurring problems in the real
> >>>> world: page cache doesn't get reclaimed for a long time, or is used by the
> >>>> second, third, fourth, ... instance of the same job that was restarted into
> >>>> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> >>>> and make page reclaim very inefficient.
> >>>>
> >>>> We can convert LRU pages and most other raw memcg pins to the objcg direction
> >>>> to fix this problem, and then the LRU pages will not pin the memcgs.
> >>>>
> >>>> This patchset aims to make the LRU pages to drop the reference to memory
> >>>> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> >>>> of the dying cgroups will not increase if we run the following test script.
> >>>
> >>> This is amazing work!
> >>>
> >>> Sorry if I came late, I didn't follow the threads of previous versions
> >>> so this might be redundant, I just have a couple of questions.
> >>>
> >>> a) If LRU pages keep getting parented until they reach root_mem_cgroup
> >>> (assuming they can), aren't these pages effectively unaccounted at
> >>> this point or leaked? Is there protection against this?
> >>>
> >>
> >> In this case, those pages are accounted at the root memcg level. Unfortunately,
> >> there is no mechanism now to transfer a page's memcg from one to another.
> >>
> >>> b) Since moving charged pages between memcgs is now becoming easier by
> >>> using the APIs of obj_cgroup, I wonder if this opens the door for
> >>> future work to transfer charges to memcgs that are actually using
> >>> reparented resources. For example, let's say cgroup A reads a few
> >>> pages into page cache, and then they are no longer used by cgroup A.
> >>> cgroup B, however, is using the same pages that are currently charged
> >>> to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> >>> dies, and these pages are reparented to A's parent, can we possibly
> >>> mark these reparented pages (maybe in the page tables somewhere) so
> >>> that next time they get accessed we recharge them to B instead
> >>> (possibly asynchronously)?
> >>> I don't have much experience about page tables but I am pretty sure
> >>> they are loaded so maybe there is no room in PTEs for something like
> >>> this, but I have always wondered about what we can do for this case
> >>> where a cgroup is consistently using memory charged to another cgroup.
> >>> Maybe when this memory is reparented is a good point in time to decide
> >>> to recharge appropriately. It would also fix the reparenty leak to
> >>> root problem (if it even exists).
> >>>
> >>
> >>  From my point of view, this is going to be an improvement to the memcg
> >> subsystem in the future.  IIUC, most reparented pages are page cache
> >> pages without being mapped to users. So page tables are not a suitable
> >> place to record this information. However, we already have this information
> >> in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> >> equal to the page's obj_cgroup->memcg->objcg, it means this page has
> >> been reparented. I am thinking whether a place where a page is mapped (probably
> >> the page fault path) or a page (cache) is written (usually the vfs write path)
> >> is suitable to transfer page's memcg from one to another. But need more
> >
> > Very good point about unmapped pages, I missed this. Page tables will
> > do us no good here. Such a change would indeed require careful thought
> > because (like you mentioned) there are multiple points in time where
> > it might be suitable to consider recharging the page (e.g. when the
> > page is mapped). This could be an incremental change though. Right now
> > we have no recharging at all, so maybe we can gradually add recharging
> > to suitable paths.
> >
> >> thinking, e.g. How to decide if a reparented page needs to be transferred?
> >
> > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
> > current is not a descendant of page's obj_cgroup->memcg) is a good
> > place to start?
> >
> > My rationale is that if the page is charged to root_mem_cgroup through
> > reparenting and a process in a memcg is using it then this is probably
> > an accounting leak. If a page is charged to a memcg A through
> > reparenting and is used by a memcg B in a different subtree, then
> > probably memcg B is getting away with using the page for free while A
> > is being taxed. If B is a descendant of A, it is still getting away
> > with using the page unaccounted, but at least it makes no difference
> > for A.
> >
> > One could argue that we might as well recharge a reparented page
> > anyway if the process is cheap (or done asynchronously), and the paths
> > where we do recharging are not very common.
> >
> > All of this might be moot, I am just thinking out loud. In any way
> > this would be future work and not part of this work.
> >
>
>
> I think you have to uncharge at the reparented-to parent to keep the balances
> right (because the parent is hierarchically charged through page_counter), and
> maybe recharge after that if appropriate.
>

Yeah, when I say "recharge" I mean transferring the accounting from one
memcg to another. I think every page should end up accounted to one
memcg after all. Thanks for pointing that out.
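
A very rough sketch of that "uncharge at the parent, then recharge" idea at the page_counter level (hypothetical; the series adds no such path, and objcg switching, per-memcg statistics and reclaim races are all ignored here):

```c
/*
 * Hypothetical illustration only: move one folio's charge from the
 * memcg it was reparented to ("from") over to the memcg actually
 * using it ("to").  Only the hierarchical page_counter side is shown;
 * objcg switching, statistics and races are ignored.
 */
static void recharge_folio(struct folio *folio, struct mem_cgroup *from,
			   struct mem_cgroup *to)
{
	unsigned long nr = folio_nr_pages(folio);

	/* Charge the real user first; back off quietly under pressure. */
	if (!page_counter_try_charge(&to->memory, nr, NULL))
		return;

	/* Then drop the charge held via the reparented-to memcg. */
	page_counter_uncharge(&from->memory, nr);
}
```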

>
>
>
> >
> >> If we need more information to make this decision, where to store that
> >> information? These are my primary thoughts on this question.
> >
> >>
> >> Thanks.
> >>
> >>> Thanks again for this work and please excuse my ignorance if any part
> >>> of what I said doesn't make sense :)
> >>>
> >>>>
> >>>> ```bash
> >>>> #!/bin/bash
> >>>>
> >>>> dd if=/dev/zero of=temp bs=4096 count=1
> >>>> cat /proc/cgroups | grep memory
> >>>>
> >>>> for i in {0..2000}
> >>>> do
> >>>>          mkdir /sys/fs/cgroup/memory/test$i
> >>>>          echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> >>>>          cat temp >> log
> >>>>          echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> >>>>          rmdir /sys/fs/cgroup/memory/test$i
> >>>> done
> >>>>
> >>>> cat /proc/cgroups | grep memory
> >>>>
> >>>> rm -f temp log
> >>>> ```
> >>>>
> >>>> v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> >>>> v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> >>>> v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> >>>> v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> >>>> v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> >>>> RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> >>>> RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> >>>> RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> >>>> RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> >>>>
> >>>> v6:
> >>>>   - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> >>>>   - Rebase to mm-unstable.
> >>>>
> >>>> v5:
> >>>>   - Lots of improvements from Johannes, Roman and Waiman.
> >>>>   - Fix lockdep warning reported by kernel test robot.
> >>>>   - Add two new patches to do code cleanup.
> >>>>   - Collect Acked-by and Reviewed-by from Johannes and Roman.
> >>>>   - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> >>>>     local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> >>>>     it to local_lock.  It could be an improvement in the future.
> >>>>
> >>>> v4:
> >>>>   - Resend and rebased on v5.18.
> >>>>
> >>>> v3:
> >>>>   - Removed the Acked-by tags from Roman since this version is based on
> >>>>     the folio relevant.
> >>>>
> >>>> v2:
> >>>>   - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> >>>>     dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> >>>>   - Rebase to linux 5.15-rc1.
> >>>>   - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> >>>>
> >>>> v1:
> >>>>   - Drop RFC tag.
> >>>>   - Rebase to linux next-20210811.
> >>>>
> >>>> RFC v4:
> >>>>   - Collect Acked-by from Roman.
> >>>>   - Rebase to linux next-20210525.
> >>>>   - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> >>>>   - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> >>>>   - Convert reparent_ops_head to an array in patch 8.
> >>>>
> >>>> Thanks for Roman's review and suggestions.
> >>>>
> >>>> RFC v3:
> >>>>   - Drop the code cleanup and simplification patches. Gather those patches
> >>>>     into a separate series[1].
> >>>>   - Rework patch #1 suggested by Johannes.
> >>>>
> >>>> RFC v2:
> >>>>   - Collect Acked-by tags by Johannes. Thanks.
> >>>>   - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> >>>>   - Fix move_pages_to_lru().
> >>>>
> >>>> Muchun Song (11):
> >>>>    mm: memcontrol: remove dead code and comments
> >>>>    mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> >>>>      lruvec_unlock{_irq, _irqrestore}
> >>>>    mm: memcontrol: prepare objcg API for non-kmem usage
> >>>>    mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> >>>>    mm: vmscan: rework move_pages_to_lru()
> >>>>    mm: thp: make split queue lock safe when LRU pages are reparented
> >>>>    mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> >>>>    mm: memcontrol: introduce memcg_reparent_ops
> >>>>    mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> >>>>    mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> >>>>    mm: lru: use lruvec lock to serialize memcg changes
> >>>>
> >>>>   fs/buffer.c                      |   4 +-
> >>>>   fs/fs-writeback.c                |  23 +-
> >>>>   include/linux/memcontrol.h       | 218 +++++++++------
> >>>>   include/linux/mm_inline.h        |   6 +
> >>>>   include/trace/events/writeback.h |   5 +
> >>>>   mm/compaction.c                  |  39 ++-
> >>>>   mm/huge_memory.c                 | 153 ++++++++--
> >>>>   mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> >>>>   mm/migrate.c                     |   4 +
> >>>>   mm/mlock.c                       |   2 +-
> >>>>   mm/page_io.c                     |   5 +-
> >>>>   mm/swap.c                        |  49 ++--
> >>>>   mm/vmscan.c                      |  66 ++---
> >>>>   13 files changed, 776 insertions(+), 382 deletions(-)
> >>>>
> >>>>
> >>>> base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> >>>> --
> >>>> 2.11.0
> >>>>
> >>>>
> >>>
> >
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-27 16:49           ` Yosry Ahmed
  0 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-06-27 16:49 UTC (permalink / raw)
  To: Mika Penttilä
  Cc: Muchun Song, Andrew Morton, Johannes Weiner,
	longman-H+wXaHxf7aLQT0dZR+AlfA, Michal Hocko, Roman Gushchin,
	Shakeel Butt, Cgroups, duanxiongchun-EC8Uxl6Npydl57MIdRCFDg,
	Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 3:43 AM Mika Penttilä <mpenttil-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org> wrote:
>
>
>
> On 27.6.2022 11.05, Yosry Ahmed wrote:
> > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
> >>
> >> On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> >>> On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
> >>>>
> >>>> This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> >>>> into mm-unstable which will help to determine whether there is a problem or
> >>>> degradation. I am also doing some benchmark tests in parallel.
> >>>>
> >>>> Since the following patchsets applied. All the kernel memory are charged
> >>>> with the new APIs of obj_cgroup.
> >>>>
> >>>>          commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> >>>>          commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> >>>>
> >>>> But user memory allocations (LRU pages) pinning memcgs for a long time -
> >>>> it exists at a larger scale and is causing recurring problems in the real
> >>>> world: page cache doesn't get reclaimed for a long time, or is used by the
> >>>> second, third, fourth, ... instance of the same job that was restarted into
> >>>> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> >>>> and make page reclaim very inefficient.
> >>>>
> >>>> We can convert LRU pages and most other raw memcg pins to the objcg direction
> >>>> to fix this problem, and then the LRU pages will not pin the memcgs.
> >>>>
> >>>> This patchset aims to make the LRU pages to drop the reference to memory
> >>>> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> >>>> of the dying cgroups will not increase if we run the following test script.
> >>>
> >>> This is amazing work!
> >>>
> >>> Sorry if I came late, I didn't follow the threads of previous versions
> >>> so this might be redundant, I just have a couple of questions.
> >>>
> >>> a) If LRU pages keep getting parented until they reach root_mem_cgroup
> >>> (assuming they can), aren't these pages effectively unaccounted at
> >>> this point or leaked? Is there protection against this?
> >>>
> >>
> >> In this case, those pages are accounted in root memcg level. Unfortunately,
> >> there is no mechanism now to transfer a page's memcg from one to another.
> >>
> >>> b) Since moving charged pages between memcgs is now becoming easier by
> >>> using the APIs of obj_cgroup, I wonder if this opens the door for
> >>> future work to transfer charges to memcgs that are actually using
> >>> reparented resources. For example, let's say cgroup A reads a few
> >>> pages into page cache, and then they are no longer used by cgroup A.
> >>> cgroup B, however, is using the same pages that are currently charged
> >>> to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> >>> dies, and these pages are reparented to A's parent, can we possibly
> >>> mark these reparented pages (maybe in the page tables somewhere) so
> >>> that next time they get accessed we recharge them to B instead
> >>> (possibly asynchronously)?
> >>> I don't have much experience about page tables but I am pretty sure
> >>> they are loaded so maybe there is no room in PTEs for something like
> >>> this, but I have always wondered about what we can do for this case
> >>> where a cgroup is consistently using memory charged to another cgroup.
> >>> Maybe when this memory is reparented is a good point in time to decide
> >>> to recharge appropriately. It would also fix the reparenting leak to
> >>> root problem (if it even exists).
> >>>
> >>
> >>  From my point of view, this is going to be an improvement to the memcg
> >> subsystem in the future.  IIUC, most reparented pages are page cache
> >> pages without being mapped to users. So page tables are not a suitable
> >> place to record this information. However, we already have this information
> >> in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> >> equal to the page's obj_cgroup->memcg->objcg, it means this page has
> >> been reparented. I am thinking if a place where a page is mapped (probably
> >> the page fault path) or page (cache) is written (usually the vfs write path)
> >> is suitable to transfer the page's memcg from one to another. But need more
> >
> > Very good point about unmapped pages, I missed this. Page tables will
> > do us no good here. Such a change would indeed require careful thought
> > because (like you mentioned) there are multiple points in time where
> > it might be suitable to consider recharging the page (e.g. when the
> > page is mapped). This could be an incremental change though. Right now
> > we have no recharging at all, so maybe we can gradually add recharging
> > to suitable paths.
> >
> >> thinking, e.g. How to decide if a reparented page needs to be transferred?
> >
> > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
> > current is not a descendant of page's obj_cgroup->memcg) is a good
> > place to start?
> >
> > My rationale is that if the page is charged to root_mem_cgroup through
> > reparenting and a process in a memcg is using it then this is probably
> > an accounting leak. If a page is charged to a memcg A through
> > reparenting and is used by a memcg B in a different subtree, then
> > probably memcg B is getting away with using the page for free while A
> > is being taxed. If B is a descendant of A, it is still getting away
> > with using the page unaccounted, but at least it makes no difference
> > for A.
> >
> > One could argue that we might as well recharge a reparented page
> > anyway if the process is cheap (or done asynchronously), and the paths
> > where we do recharging are not very common.
> >
> > All of this might be moot, I am just thinking out loud. In any way
> > this would be future work and not part of this work.
> >
>
>
> I think you have to uncharge at the reparented parent to keep balances
> right (because parent is hierarchically charged thru page_counter). And
> maybe recharge after that if appropriate.
>

Yeah, when I say "recharge" I mean transferring the accounting from one
memcg to another. I think every page should end up accounted to one
memcg after all. Thanks for pointing that out.
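
To make the terminology concrete, here is a minimal, hypothetical sketch of
such a transfer (this is not code from the series): conceptually a recharge
is an uncharge from the memcg currently paying for the folio followed by a
fresh charge to the new owner's memcg, which keeps the hierarchical
page_counter balances right on both sides. The recharge_folio() name is made
up for illustration; it leans only on the existing mem_cgroup_charge() /
mem_cgroup_uncharge() API and ignores the LRU isolation, statistics and
reclaim races that real code would have to handle.

```c
#include <linux/memcontrol.h>
#include <linux/mm.h>

/*
 * Hypothetical sketch, not part of this series: transfer the accounting
 * of a folio to the memcg of the mm that is actually using it.
 */
static int recharge_folio(struct folio *folio, struct mm_struct *mm)
{
	/* Drop the charge from whatever memcg is currently paying for it. */
	mem_cgroup_uncharge(folio);

	/* Charge it to the memcg of the mm touching it now. */
	return mem_cgroup_charge(folio, mm, GFP_KERNEL);
}
```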

>
>
>
> >
> >> If we need more information to make this decision, where to store those
> >> information? This is my primary thoughts on this question.
> >
> >>
> >> Thanks.
> >>
> >>> Thanks again for this work and please excuse my ignorance if any part
> >>> of what I said doesn't make sense :)
> >>>
> >>>>
> >>>> ```bash
> >>>> #!/bin/bash
> >>>>
> >>>> dd if=/dev/zero of=temp bs=4096 count=1
> >>>> cat /proc/cgroups | grep memory
> >>>>
> >>>> for i in {0..2000}
> >>>> do
> >>>>          mkdir /sys/fs/cgroup/memory/test$i
> >>>>          echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> >>>>          cat temp >> log
> >>>>          echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> >>>>          rmdir /sys/fs/cgroup/memory/test$i
> >>>> done
> >>>>
> >>>> cat /proc/cgroups | grep memory
> >>>>
> >>>> rm -f temp log
> >>>> ```
> >>>>
> >>>> v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> >>>> v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> >>>> v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> >>>> v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> >>>> v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> >>>> RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> >>>> RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> >>>> RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> >>>> RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> >>>>
> >>>> v6:
> >>>>   - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> >>>>   - Rebase to mm-unstable.
> >>>>
> >>>> v5:
> >>>>   - Lots of improvements from Johannes, Roman and Waiman.
> >>>>   - Fix lockdep warning reported by kernel test robot.
> >>>>   - Add two new patches to do code cleanup.
> >>>>   - Collect Acked-by and Reviewed-by from Johannes and Roman.
> >>>>   - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> >>>>     local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> >>>>     it to local_lock.  It could be an improvement in the future.
> >>>>
> >>>> v4:
> >>>>   - Resend and rebased on v5.18.
> >>>>
> >>>> v3:
> >>>>   - Removed the Acked-by tags from Roman since this version is based on
> >>>>     the folio relevant.
> >>>>
> >>>> v2:
> >>>>   - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> >>>>     dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> >>>>   - Rebase to linux 5.15-rc1.
> >>>>   - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> >>>>
> >>>> v1:
> >>>>   - Drop RFC tag.
> >>>>   - Rebase to linux next-20210811.
> >>>>
> >>>> RFC v4:
> >>>>   - Collect Acked-by from Roman.
> >>>>   - Rebase to linux next-20210525.
> >>>>   - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> >>>>   - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> >>>>   - Convert reparent_ops_head to an array in patch 8.
> >>>>
> >>>> Thanks for Roman's review and suggestions.
> >>>>
> >>>> RFC v3:
> >>>>   - Drop the code cleanup and simplification patches. Gather those patches
> >>>>     into a separate series[1].
> >>>>   - Rework patch #1 suggested by Johannes.
> >>>>
> >>>> RFC v2:
> >>>>   - Collect Acked-by tags by Johannes. Thanks.
> >>>>   - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> >>>>   - Fix move_pages_to_lru().
> >>>>
> >>>> Muchun Song (11):
> >>>>    mm: memcontrol: remove dead code and comments
> >>>>    mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> >>>>      lruvec_unlock{_irq, _irqrestore}
> >>>>    mm: memcontrol: prepare objcg API for non-kmem usage
> >>>>    mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> >>>>    mm: vmscan: rework move_pages_to_lru()
> >>>>    mm: thp: make split queue lock safe when LRU pages are reparented
> >>>>    mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> >>>>    mm: memcontrol: introduce memcg_reparent_ops
> >>>>    mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> >>>>    mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> >>>>    mm: lru: use lruvec lock to serialize memcg changes
> >>>>
> >>>>   fs/buffer.c                      |   4 +-
> >>>>   fs/fs-writeback.c                |  23 +-
> >>>>   include/linux/memcontrol.h       | 218 +++++++++------
> >>>>   include/linux/mm_inline.h        |   6 +
> >>>>   include/trace/events/writeback.h |   5 +
> >>>>   mm/compaction.c                  |  39 ++-
> >>>>   mm/huge_memory.c                 | 153 ++++++++--
> >>>>   mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> >>>>   mm/migrate.c                     |   4 +
> >>>>   mm/mlock.c                       |   2 +-
> >>>>   mm/page_io.c                     |   5 +-
> >>>>   mm/swap.c                        |  49 ++--
> >>>>   mm/vmscan.c                      |  66 ++---
> >>>>   13 files changed, 776 insertions(+), 382 deletions(-)
> >>>>
> >>>>
> >>>> base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> >>>> --
> >>>> 2.11.0
> >>>>
> >>>>
> >>>
> >
>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
  2022-06-27 10:13         ` Muchun Song
@ 2022-06-28  1:24           ` Roman Gushchin
  -1 siblings, 0 replies; 54+ messages in thread
From: Roman Gushchin @ 2022-06-28  1:24 UTC (permalink / raw)
  To: Muchun Song
  Cc: Yosry Ahmed, Andrew Morton, Johannes Weiner, longman,
	Michal Hocko, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 06:13:48PM +0800, Muchun Song wrote:
> On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote:
> > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > >
> > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > >
> > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > > into mm-unstable which will help to determine whether there is a problem or
> > > > > degradation. I am also doing some benchmark tests in parallel.
> > > > >
> > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > with the new APIs of obj_cgroup.
> > > > >
> > > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > > >
> > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > and make page reclaim very inefficient.
> > > > >
> > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > >
> > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > of the dying cgroups will not increase if we run the following test script.
> > > >
> > > > This is amazing work!
> > > >
> > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > so this might be redundant, I just have a couple of questions.
> > > >
> > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > this point or leaked? Is there protection against this?
> > > >
> > >
> > > In this case, those pages are accounted in root memcg level. Unfortunately,
> > > there is no mechanism now to transfer a page's memcg from one to another.
> > >
> > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > future work to transfer charges to memcgs that are actually using
> > > > reparented resources. For example, let's say cgroup A reads a few
> > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > cgroup B, however, is using the same pages that are currently charged
> > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > that next time they get accessed we recharge them to B instead
> > > > (possibly asynchronously)?
> > > > I don't have much experience about page tables but I am pretty sure
> > > > they are loaded so maybe there is no room in PTEs for something like
> > > > this, but I have always wondered about what we can do for this case
> > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > Maybe when this memory is reparented is a good point in time to decide
> > > > to recharge appropriately. It would also fix the reparenting leak to
> > > > root problem (if it even exists).
> > > >
> > >
> > > From my point of view, this is going to be an improvement to the memcg
> > > subsystem in the future.  IIUC, most reparented pages are page cache
> > > pages without being mapped to users. So page tables are not a suitable
> > > place to record this information. However, we already have this information
> > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > > equal to the page's obj_cgroup->memcg->objcg, it means this page has
> > > been reparented. I am thinking if a place where a page is mapped (probably
> > > the page fault path) or page (cache) is written (usually the vfs write path)
> > > is suitable to transfer the page's memcg from one to another. But need more
> > 
> > Very good point about unmapped pages, I missed this. Page tables will
> > do us no good here. Such a change would indeed require careful thought
> > because (like you mentioned) there are multiple points in time where
> > it might be suitable to consider recharging the page (e.g. when the
> > page is mapped). This could be an incremental change though. Right now
> > we have no recharging at all, so maybe we can gradually add recharging
> > to suitable paths.
> >
> 
> Agree.
>  
> > > thinking, e.g. How to decide if a reparented page needs to be transferred?
> > 
> > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
> 
> This is a good start.
> 
> > current is not a descendant of page's obj_cgroup->memcg) is a good
> 
> I am not sure this one since a page could be shared between different
> memcg.

No way :)

> 
>     root
>    /   \
>   A     B
>  / \     \
> C   E     D
> 
> e.g. a page (originally, it belongs to memcg E and E is dying) is reparented
> to memcg A, and it is shared between C and D now. Then we need to consider 
> whether it should be recharged. Yep, we need more thinking about recharging.

This is why I wasn't sure that objcg-based reparenting is the best approach.
Instead (or maybe even _with_ the reparenting) we can recharge pages on, say,
page activation and/or rotation (inactive->inactive). Page faults/reads are
probably too hot to do it there. But the reclaim path should be more accessible
in terms of the performance overhead. Just some ideas.
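
Whichever path ends up hosting such a hook, it first needs a cheap way to tell
that a folio has been reparented. A minimal sketch of that check (not code from
the series), assuming the folio_objcg() accessor this patchset introduces for
LRU folios; everything else is existing memcg API:

```c
#include <linux/memcontrol.h>
#include <linux/rcupdate.h>

/*
 * Sketch, not code from this series: a folio has been reparented when its
 * objcg no longer matches the objcg currently installed in the memcg it
 * points to, i.e. objcg != objcg->memcg->objcg as described above.
 * folio_objcg() is assumed to be the LRU accessor added by this patchset.
 */
static bool folio_reparented(struct folio *folio)
{
	struct obj_cgroup *objcg;
	bool ret = false;

	rcu_read_lock();
	objcg = folio_objcg(folio);
	if (objcg)
		ret = objcg != rcu_access_pointer(obj_cgroup_memcg(objcg)->objcg);
	rcu_read_unlock();

	return ret;
}
```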

Thanks!

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
  2022-06-28  1:24           ` Roman Gushchin
@ 2022-06-28  1:31             ` Yosry Ahmed
  -1 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-06-28  1:31 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Muchun Song, Andrew Morton, Johannes Weiner, longman,
	Michal Hocko, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 6:24 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Mon, Jun 27, 2022 at 06:13:48PM +0800, Muchun Song wrote:
> > On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote:
> > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > >
> > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > > >
> > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > > > into mm-unstable which will help to determine whether there is a problem or
> > > > > > degradation. I am also doing some benchmark tests in parallel.
> > > > > >
> > > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > > with the new APIs of obj_cgroup.
> > > > > >
> > > > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > > > >
> > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > and make page reclaim very inefficient.
> > > > > >
> > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > > >
> > > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > > of the dying cgroups will not increase if we run the following test script.
> > > > >
> > > > > This is amazing work!
> > > > >
> > > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > > so this might be redundant, I just have a couple of questions.
> > > > >
> > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > > this point or leaked? Is there protection against this?
> > > > >
> > > >
> > > > In this case, those pages are accounted in root memcg level. Unfortunately,
> > > > there is no mechanism now to transfer a page's memcg from one to another.
> > > >
> > > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > > future work to transfer charges to memcgs that are actually using
> > > > > reparented resources. For example, let's say cgroup A reads a few
> > > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > > cgroup B, however, is using the same pages that are currently charged
> > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > > that next time they get accessed we recharge them to B instead
> > > > > (possibly asynchronously)?
> > > > > I don't have much experience about page tables but I am pretty sure
> > > > > they are loaded so maybe there is no room in PTEs for something like
> > > > > this, but I have always wondered about what we can do for this case
> > > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > > Maybe when this memory is reparented is a good point in time to decide
> > > > > to recharge appropriately. It would also fix the reparenting leak to
> > > > > root problem (if it even exists).
> > > > >
> > > >
> > > > From my point of view, this is going to be an improvement to the memcg
> > > > subsystem in the future.  IIUC, most reparented pages are page cache
> > > > pages without being mapped to users. So page tables are not a suitable
> > > > place to record this information. However, we already have this information
> > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > > > equal to the page's obj_cgroup->memcg->objcg, it means this page has
> > > > been reparented. I am thinking if a place where a page is mapped (probably
> > > > the page fault path) or page (cache) is written (usually the vfs write path)
> > > > is suitable to transfer the page's memcg from one to another. But need more
> > >
> > > Very good point about unmapped pages, I missed this. Page tables will
> > > do us no good here. Such a change would indeed require careful thought
> > > because (like you mentioned) there are multiple points in time where
> > > it might be suitable to consider recharging the page (e.g. when the
> > > page is mapped). This could be an incremental change though. Right now
> > > we have no recharging at all, so maybe we can gradually add recharging
> > > to suitable paths.
> > >
> >
> > Agree.
> >
> > > > thinking, e.g. How to decide if a reparented page needs to be transferred?
> > >
> > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
> >
> > This is a good start.
> >
> > > current is not a descendant of page's obj_cgroup->memcg) is a good
> >
> > I am not sure this one since a page could be shared between different
> > memcg.
>
> No way :)

No way in terms of charging or usage? AFAIU a page is only charged to
one memcg, but can be used by multiple memcgs if it exists in the page
cache for example. Am I missing something here?

>
> >
> >     root
> >    /   \
> >   A     B
> >  / \     \
> > C   E     D
> >
> > e.g. a page (originally, it belongs to memcg E and E is dying) is reparented
> > to memcg A, and it is shared between C and D now. Then we need to consider
> > whether it should be recharged. Yep, we need more thinking about recharging.
>
> This is why I wasn't sure that objcg-based reparenting is the best approach.
> Instead (or maybe even _with_ the reparenting) we can recharge pages on, say,
> > page activation and/or rotation (inactive->inactive). Page faults/reads are
> > probably too hot to do it there. But the reclaim path should be more accessible
> in terms of the performance overhead. Just some ideas.

Thanks for chipping in, Roman! I am honestly not sure which paths the
recharge should occur on, but I know that we will probably need a
recharge mechanism at some point. We can start adding recharging
gradually to paths that don't affect performance; reclaim is a very
good place. Maybe we sort LRUs such that reparented pages are scanned
first, and possibly recharged under memcg pressure.
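
As a strawman for the "who should be paying" test proposed earlier in the
thread, here is a sketch (not code from the series) that treats a folio as a
recharge candidate when it is charged to the root memcg or to a memcg outside
the using task's subtree. folio_objcg() is the accessor added by this series;
should_recharge() is a made-up name, and the rest is existing memcg API:

```c
#include <linux/memcontrol.h>
#include <linux/rcupdate.h>

/*
 * Sketch, not code from this series: a reparented folio is a recharge
 * candidate if it ended up charged to the root memcg, or if the memcg
 * actually using it is not a descendant of the memcg being charged.
 * folio_objcg() is assumed to be the LRU accessor added by this patchset.
 */
static bool should_recharge(struct folio *folio, struct mem_cgroup *user_memcg)
{
	struct obj_cgroup *objcg;
	struct mem_cgroup *charged;
	bool ret = false;

	rcu_read_lock();
	objcg = folio_objcg(folio);
	if (objcg) {
		charged = obj_cgroup_memcg(objcg);
		ret = mem_cgroup_is_root(charged) ||
		      !mem_cgroup_is_descendant(user_memcg, charged);
	}
	rcu_read_unlock();

	return ret;
}
```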

>
> Thanks!

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-28  1:37               ` Roman Gushchin
  0 siblings, 0 replies; 54+ messages in thread
From: Roman Gushchin @ 2022-06-28  1:37 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Muchun Song, Andrew Morton, Johannes Weiner, longman,
	Michal Hocko, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 06:31:14PM -0700, Yosry Ahmed wrote:
> On Mon, Jun 27, 2022 at 6:24 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> >
> > On Mon, Jun 27, 2022 at 06:13:48PM +0800, Muchun Song wrote:
> > > On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote:
> > > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > >
> > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > > > >
> > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > > > > into mm-unstable which will help to determine whether there is a problem or
> > > > > > > degradation. I am also doing some benchmark tests in parallel.
> > > > > > >
> > > > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > > > with the new APIs of obj_cgroup.
> > > > > > >
> > > > > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > > > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > > > > >
> > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > > and make page reclaim very inefficient.
> > > > > > >
> > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > > > >
> > > > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > > > of the dying cgroups will not increase if we run the following test script.
> > > > > >
> > > > > > This is amazing work!
> > > > > >
> > > > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > > > so this might be redundant, I just have a couple of questions.
> > > > > >
> > > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > > > this point or leaked? Is there protection against this?
> > > > > >
> > > > >
> > > > > In this case, those pages are accounted in root memcg level. Unfortunately,
> > > > > there is no mechanism now to transfer a page's memcg from one to another.
> > > > >
> > > > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > > > future work to transfer charges to memcgs that are actually using
> > > > > > reparented resources. For example, let's say cgroup A reads a few
> > > > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > > > cgroup B, however, is using the same pages that are currently charged
> > > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > > > that next time they get accessed we recharge them to B instead
> > > > > > (possibly asynchronously)?
> > > > > > I don't have much experience about page tables but I am pretty sure
> > > > > > they are loaded so maybe there is no room in PTEs for something like
> > > > > > this, but I have always wondered about what we can do for this case
> > > > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > > > Maybe when this memory is reparented is a good point in time to decide
> > > > > > to recharge appropriately. It would also fix the reparenting leak to
> > > > > > root problem (if it even exists).
> > > > > >
> > > > >
> > > > > From my point of view, this is going to be an improvement to the memcg
> > > > > subsystem in the future.  IIUC, most reparented pages are page cache
> > > > > pages without being mapped to users. So page tables are not a suitable
> > > > > place to record this information. However, we already have this information
> > > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > > > > equal to the page's obj_cgroup->memcg->objcg, it means this page has
> > > > > been reparented. I am thinking if a place where a page is mapped (probably
> > > > > the page fault path) or page (cache) is written (usually the vfs write path)
> > > > > is suitable to transfer the page's memcg from one to another. But need more
> > > >
> > > > Very good point about unmapped pages, I missed this. Page tables will
> > > > do us no good here. Such a change would indeed require careful thought
> > > > because (like you mentioned) there are multiple points in time where
> > > > it might be suitable to consider recharging the page (e.g. when the
> > > > page is mapped). This could be an incremental change though. Right now
> > > > we have no recharging at all, so maybe we can gradually add recharging
> > > > to suitable paths.
> > > >
> > >
> > > Agree.
> > >
> > > > > thinking, e.g. How to decide if a reparented page needs to be transferred?
> > > >
> > > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
> > >
> > > This is a good start.
> > >
> > > > current is not a descendant of page's obj_cgroup->memcg) is a good
> > >
> > > I am not sure this one since a page could be shared between different
> > > memcg.
> >
> > No way :)
> 
> No way in terms of charging or usage? AFAIU a page is only charged to
> one memcg, but can be used by multiple memcgs if it exists in the page
> cache for example. Am I missing something here?

Charging, of course. I mean we can't realistically account precisely for
shared use of a page between multiple cgroups, at least not at 4k granularity.

> 
> >
> > >
> > >     root
> > >    /   \
> > >   A     B
> > >  / \     \
> > > C   E     D
> > >
> > > e.g. a page (originally, it belongs to memcg E and E is dying) is reparented
> > > to memcg A, and it is shared between C and D now. Then we need to consider
> > > whether it should be recharged. Yep, we need more thinking about recharging.
> >
> > This is why I wasn't sure that objcg-based reparenting is the best approach.
> > Instead (or maybe even _with_ the reparenting) we can recharge pages on, say,
> > page activation and/or rotation (inactive->inactive). Page faults/reads are
> > probably too hot to do it there. But the reclaim path should be more accessible
> > in terms of the performance overhead. Just some ideas.
> 
> Thanks for chipping in, Roman! I am honestly not sure which paths the
> recharge should occur on, but I know that we will probably need a
> recharge mechanism at some point. We can start adding recharging
> gradually to paths that don't affect performance; reclaim is a very
> good place. Maybe we sort LRUs such that reparented pages are scanned
> first, and possibly recharged under memcg pressure.

I think the activation path is a good place to start because we know for sure
that a page is actively used and we know who is using it.
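
For illustration only, an activation-time hook could look roughly like the
sketch below; it is not code from this series, and folio_reparented(),
should_recharge() and recharge_folio() are the hypothetical helpers sketched
earlier in the thread, not existing kernel functions. The appeal of this spot
is exactly the point above: "current" is known to be a real user of the folio.

```c
#include <linux/memcontrol.h>
#include <linux/sched/mm.h>

/*
 * Hypothetical activation-time hook, not code from this series: at this
 * point "current" is a task that just touched the folio, so its mm tells
 * us who should be paying for it.  folio_reparented(), should_recharge()
 * and recharge_folio() are the sketches from earlier in the thread.
 */
static void folio_activate_recharge(struct folio *folio)
{
	struct mem_cgroup *memcg;

	/* Kernel threads have no mm (and no memcg) of their own. */
	if (!current->mm)
		return;

	memcg = get_mem_cgroup_from_mm(current->mm);
	if (folio_reparented(folio) && should_recharge(folio, memcg))
		recharge_folio(folio, current->mm);
	mem_cgroup_put(memcg);
}
```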

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-06-28  1:37               ` Roman Gushchin
  0 siblings, 0 replies; 54+ messages in thread
From: Roman Gushchin @ 2022-06-28  1:37 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Muchun Song, Andrew Morton, Johannes Weiner,
	longman-H+wXaHxf7aLQT0dZR+AlfA, Michal Hocko, Shakeel Butt,
	Cgroups, duanxiongchun-EC8Uxl6Npydl57MIdRCFDg,
	Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 06:31:14PM -0700, Yosry Ahmed wrote:
> On Mon, Jun 27, 2022 at 6:24 PM Roman Gushchin <roman.gushchin-fxUVXftIFDnyG1zEObXtfA@public.gmane.org> wrote:
> >
> > On Mon, Jun 27, 2022 at 06:13:48PM +0800, Muchun Song wrote:
> > > On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote:
> > > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
> > > > >
> > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
> > > > > > >
> > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > > > > into mm-unstable which will help to determine whether there is a problem or
> > > > > > > degradation. I am also doing some benchmark tests in parallel.
> > > > > > >
> > > > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > > > with the new APIs of obj_cgroup.
> > > > > > >
> > > > > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > > > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > > > > >
> > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > > and make page reclaim very inefficient.
> > > > > > >
> > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > > > >
> > > > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > > > of the dying cgroups will not increase if we run the following test script.
> > > > > >
> > > > > > This is amazing work!
> > > > > >
> > > > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > > > so this might be redundant, I just have a couple of questions.
> > > > > >
> > > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > > > this point or leaked? Is there protection against this?
> > > > > >
> > > > >
> > > > > In this case, those pages are accounted in root memcg level. Unfortunately,
> > > > > there is no mechanism now to transfer a page's memcg from one to another.
> > > > >
> > > > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > > > future work to transfer charges to memcgs that are actually using
> > > > > > reparented resources. For example, let's say cgroup A reads a few
> > > > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > > > cgroup B, however, is using the same pages that are currently charged
> > > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > > > that next time they get accessed we recharge them to B instead
> > > > > > (possibly asynchronously)?
> > > > > > I don't have much experience about page tables but I am pretty sure
> > > > > > they are loaded so maybe there is no room in PTEs for something like
> > > > > > this, but I have always wondered about what we can do for this case
> > > > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > > > Maybe when this memory is reparented is a good point in time to decide
> > > > > > to recharge appropriately. It would also fix the reparenty leak to
> > > > > > root problem (if it even exists).
> > > > > >
> > > > >
> > > > > From my point of view, this is going to be an improvement to the memcg
> > > > > subsystem in the future.  IIUC, most reparented pages are page cache
> > > > > pages without be mapped to users. So page tables are not a suitable
> > > > > place to record this information. However, we already have this information
> > > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > > > > equal to the page's obj_cgroup->memcg->objcg, it means this page have
> > > > > been reparented. I am thinking if a place where a page is mapped (probably
> > > > > page fault patch) or page (cache) is written (usually vfs write path)
> > > > > is suitable to transfer page's memcg from one to another. But need more
> > > >
> > > > Very good point about unmapped pages, I missed this. Page tables will
> > > > do us no good here. Such a change would indeed require careful thought
> > > > because (like you mentioned) there are multiple points in time where
> > > > it might be suitable to consider recharging the page (e.g. when the
> > > > page is mapped). This could be an incremental change though. Right now
> > > > we have no recharging at all, so maybe we can gradually add recharging
> > > > to suitable paths.
> > > >
> > >
> > > Agree.
> > >
> > > > > thinking, e.g. How to decide if a reparented page needs to be transferred?
> > > >
> > > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
> > >
> > > This is a good start.
> > >
> > > > current is not a descendant of page's obj_cgroup->memcg) is a good
> > >
> > > I am not sure this one since a page could be shared between different
> > > memcg.
> >
> > No way :)
> 
> No way in terms of charging or usage? AFAIU a page is only charged to
> one memcg, but can be used by multiple memcgs if it exists in the page
> cache for example. Am I missing something here?

Charging of course. I mean we can't realistically precisely account for
shared use of a page between multiple cgroups, at least not at 4k granularity.

> 
> >
> > >
> > >     root
> > >    /   \
> > >   A     B
> > >  / \     \
> > > C   E     D
> > >
> > > e.g. a page (originally, it belongs to memcg E and E is dying) is reparented
> > > to memcg A, and it is shared between C and D now. Then we need to consider
> > > whether it should be recharged. Yep, we need more thinging about recharging.
> >
> > This is why I wasn't sure that objcg-based reparenting is the best approach.
> > Instead (or maybe even _with_ the reparenting) we can recharge pages on, say,
> > page activation and/or rotation (inactive->inactive). Pagefaults/reads are
> > probably to hot to do it there. But the reclaim path should be more accessible
> > in terms of the performance overhead. Just some ideas.
> 
> Thanks for chipping in, Roman! I am honestly not sure on what paths
> the recharge should occur, but I know that we will probably need a
> recharge mechanism at some point. We can start adding recharging
> gradually to paths that don't affect performance, reclaim is a very
> good place. Maybe we sort LRUs such that reparented pages are scanned
> first, and possibly recharged under memcg pressure.

I think the activation path is a good place to start because we know for sure
that a page is actively used and we know who is using it.
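
Purely as an illustration of that idea, below is a minimal, self-contained
sketch of the kind of check an activation-time recharge could make. The
struct layout and every helper here (folio_is_reparented(),
memcg_is_descendant(), should_recharge_on_activation()) are simplified
stand-ins for the real kernel objects, not existing APIs; it only models
the heuristic discussed earlier in the thread.

```c
#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel structures discussed in this thread. */
struct mem_cgroup;

struct obj_cgroup {
	struct mem_cgroup *memcg;	/* memcg this objcg currently points to */
};

struct mem_cgroup {
	const char *name;
	struct mem_cgroup *parent;
	struct obj_cgroup *objcg;	/* the memcg's own objcg */
};

struct folio {
	struct obj_cgroup *objcg;	/* objcg the folio was charged through */
};

/* A folio has been reparented iff its objcg differs from its memcg's own objcg. */
static bool folio_is_reparented(const struct folio *folio)
{
	return folio->objcg != folio->objcg->memcg->objcg;
}

static bool memcg_is_descendant(const struct mem_cgroup *memcg,
				const struct mem_cgroup *ancestor)
{
	for (; memcg; memcg = memcg->parent)
		if (memcg == ancestor)
			return true;
	return false;
}

/*
 * Heuristic from the discussion: on activation, consider recharging a folio
 * to the activating task's memcg if the folio was reparented and the charge
 * currently sits at root or in a memcg the task is not a descendant of.
 */
static bool should_recharge_on_activation(const struct folio *folio,
					  const struct mem_cgroup *current_memcg,
					  const struct mem_cgroup *root)
{
	const struct mem_cgroup *charged = folio->objcg->memcg;

	if (!folio_is_reparented(folio))
		return false;
	return charged == root || !memcg_is_descendant(current_memcg, charged);
}

int main(void)
{
	/* The root/A/B/C/E/D hierarchy from the example above; E has died. */
	struct mem_cgroup root = { "root", NULL, NULL };
	struct mem_cgroup A = { "A", &root, NULL };
	struct mem_cgroup B = { "B", &root, NULL };
	struct mem_cgroup C = { "C", &A, NULL };
	struct mem_cgroup D = { "D", &B, NULL };
	struct obj_cgroup objcg_A = { &A };
	struct obj_cgroup objcg_E = { &A };	/* dead E's objcg was reparented to A */
	struct folio folio = { &objcg_E };	/* page cache folio charged via E */

	A.objcg = &objcg_A;
	printf("activated by C: recharge=%d\n",
	       should_recharge_on_activation(&folio, &C, &root));	/* 0: C is under A */
	printf("activated by D: recharge=%d\n",
	       should_recharge_on_activation(&folio, &D, &root));	/* 1: D is not under A */
	return 0;
}
```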

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
  2022-06-28  1:37               ` Roman Gushchin
@ 2022-06-28  1:45                 ` Yosry Ahmed
  -1 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-06-28  1:45 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Muchun Song, Andrew Morton, Johannes Weiner, longman,
	Michal Hocko, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 6:38 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
>
> On Mon, Jun 27, 2022 at 06:31:14PM -0700, Yosry Ahmed wrote:
> > On Mon, Jun 27, 2022 at 6:24 PM Roman Gushchin <roman.gushchin@linux.dev> wrote:
> > >
> > > On Mon, Jun 27, 2022 at 06:13:48PM +0800, Muchun Song wrote:
> > > > On Mon, Jun 27, 2022 at 01:05:06AM -0700, Yosry Ahmed wrote:
> > > > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > > >
> > > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > > > > >
> > > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > > > > > into mm-unstable which will help to determine whether there is a problem or
> > > > > > > > degradation. I am also doing some benchmark tests in parallel.
> > > > > > > >
> > > > > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > > > > with the new APIs of obj_cgroup.
> > > > > > > >
> > > > > > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > > > > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > > > > > >
> > > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > > > and make page reclaim very inefficient.
> > > > > > > >
> > > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > > > > >
> > > > > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > > > > of the dying cgroups will not increase if we run the following test script.
> > > > > > >
> > > > > > > This is amazing work!
> > > > > > >
> > > > > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > > > > so this might be redundant, I just have a couple of questions.
> > > > > > >
> > > > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > > > > this point or leaked? Is there protection against this?
> > > > > > >
> > > > > >
> > > > > > In this case, those pages are accounted in root memcg level. Unfortunately,
> > > > > > there is no mechanism now to transfer a page's memcg from one to another.
> > > > > >
> > > > > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > > > > future work to transfer charges to memcgs that are actually using
> > > > > > > reparented resources. For example, let's say cgroup A reads a few
> > > > > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > > > > cgroup B, however, is using the same pages that are currently charged
> > > > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > > > > that next time they get accessed we recharge them to B instead
> > > > > > > (possibly asynchronously)?
> > > > > > > I don't have much experience about page tables but I am pretty sure
> > > > > > > they are loaded so maybe there is no room in PTEs for something like
> > > > > > > this, but I have always wondered about what we can do for this case
> > > > > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > > > > Maybe when this memory is reparented is a good point in time to decide
> > > > > > > to recharge appropriately. It would also fix the reparenty leak to
> > > > > > > root problem (if it even exists).
> > > > > > >
> > > > > >
> > > > > > From my point of view, this is going to be an improvement to the memcg
> > > > > > subsystem in the future.  IIUC, most reparented pages are page cache
> > > > > > pages without be mapped to users. So page tables are not a suitable
> > > > > > place to record this information. However, we already have this information
> > > > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > > > > > equal to the page's obj_cgroup->memcg->objcg, it means this page have
> > > > > > been reparented. I am thinking if a place where a page is mapped (probably
> > > > > > page fault patch) or page (cache) is written (usually vfs write path)
> > > > > > is suitable to transfer page's memcg from one to another. But need more
> > > > >
> > > > > Very good point about unmapped pages, I missed this. Page tables will
> > > > > do us no good here. Such a change would indeed require careful thought
> > > > > because (like you mentioned) there are multiple points in time where
> > > > > it might be suitable to consider recharging the page (e.g. when the
> > > > > page is mapped). This could be an incremental change though. Right now
> > > > > we have no recharging at all, so maybe we can gradually add recharging
> > > > > to suitable paths.
> > > > >
> > > >
> > > > Agree.
> > > >
> > > > > > thinking, e.g. How to decide if a reparented page needs to be transferred?
> > > > >
> > > > > Maybe if (page's obj_cgroup->memcg == root_mem_cgroup) OR (memcg of
> > > >
> > > > This is a good start.
> > > >
> > > > > current is not a descendant of page's obj_cgroup->memcg) is a good
> > > >
> > > > I am not sure this one since a page could be shared between different
> > > > memcg.
> > >
> > > No way :)
> >
> > No way in terms of charging or usage? AFAIU a page is only charged to
> > one memcg, but can be used by multiple memcgs if it exists in the page
> > cache for example. Am I missing something here?
>
> Charging of course. I mean we can't realistically precisely account for
> shared use of a page between multiple cgroups, at least not at 4k granularity.
>
> >
> > >
> > > >
> > > >     root
> > > >    /   \
> > > >   A     B
> > > >  / \     \
> > > > C   E     D
> > > >
> > > > e.g. a page (originally, it belongs to memcg E and E is dying) is reparented
> > > > to memcg A, and it is shared between C and D now. Then we need to consider
> > > > whether it should be recharged. Yep, we need more thinging about recharging.
> > >
> > > This is why I wasn't sure that objcg-based reparenting is the best approach.
> > > Instead (or maybe even _with_ the reparenting) we can recharge pages on, say,
> > > page activation and/or rotation (inactive->inactive). Pagefaults/reads are
> > > probably to hot to do it there. But the reclaim path should be more accessible
> > > in terms of the performance overhead. Just some ideas.
> >
> > Thanks for chipping in, Roman! I am honestly not sure on what paths
> > the recharge should occur, but I know that we will probably need a
> > recharge mechanism at some point. We can start adding recharging
> > gradually to paths that don't affect performance, reclaim is a very
> > good place. Maybe we sort LRUs such that reparented pages are scanned
> > first, and possibly recharged under memcg pressure.
>
> I think the activation path is a good place to start because we know for sure
> that a page is actively used and we know who is using it.

I agree. What I am suggesting is to additionally scan reparented pages
first under memory pressure. These pages were used by a dead
descendant, so there is a good chance they aren't being used anymore,
or they are used by a different memcg; in that case, recharging these
pages (if possible) might put the memcg back below its limit. If a
memcg reaches its limit and undergoes reclaim because of reparented
pages it isn't using, that is bad. If during reclaim we keep those
pages and instead reclaim other pages that are actually being used by
the memcg (even if they are colder), that is arguably worse. WDYT?
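
As a rough sketch of that "scan reparented pages first" ordering (not
kernel code; struct scan_entry and its reparented flag are stand-ins for
a folio whose objcg no longer matches its memcg's own objcg), a stable
partition of a scan batch could look like this:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for a folio pulled off the LRU during a reclaim scan. */
struct scan_entry {
	int id;
	bool reparented;	/* objcg no longer matches its memcg's own objcg */
};

/*
 * Stable partition of a scan batch: reparented entries keep their relative
 * order but move to the front, so they get looked at (and possibly
 * recharged or reclaimed) before pages the memcg is still actively using.
 */
static void scan_reparented_first(struct scan_entry *batch, size_t n)
{
	struct scan_entry sorted[n];	/* scratch space, same size as the batch */
	size_t out = 0;

	for (size_t i = 0; i < n; i++)
		if (batch[i].reparented)
			sorted[out++] = batch[i];
	for (size_t i = 0; i < n; i++)
		if (!batch[i].reparented)
			sorted[out++] = batch[i];
	memcpy(batch, sorted, n * sizeof(*batch));
}

int main(void)
{
	struct scan_entry batch[] = {
		{ 1, false }, { 2, true }, { 3, false }, { 4, true }, { 5, false },
	};
	size_t n = sizeof(batch) / sizeof(batch[0]);

	scan_reparented_first(batch, n);
	for (size_t i = 0; i < n; i++)
		printf("%d ", batch[i].id);	/* prints: 2 4 1 3 5 */
	printf("\n");
	return 0;
}
```

In the kernel this would more likely mean checking the folio's objcg while
isolating folios rather than literally sorting a batch, but the ordering
idea is the same.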

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-07-03 23:23   ` Andrew Morton
  0 siblings, 0 replies; 54+ messages in thread
From: Andrew Morton @ 2022-07-03 23:23 UTC (permalink / raw)
  To: Muchun Song
  Cc: hannes, longman, mhocko, roman.gushchin, shakeelb, cgroups,
	duanxiongchun, linux-kernel, linux-mm, Yosry Ahmed

On Tue, 21 Jun 2022 20:56:47 +0800 Muchun Song <songmuchun@bytedance.com> wrote:

> This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> into mm-unstable which will help to determine whether there is a problem or
> degradation. I am also doing some benchmark tests in parallel.
> 
> Since the following patchsets applied. All the kernel memory are charged
> with the new APIs of obj_cgroup.
> 
> 	commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> 	commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> 
> But user memory allocations (LRU pages) pinning memcgs for a long time -
> it exists at a larger scale and is causing recurring problems in the real
> world: page cache doesn't get reclaimed for a long time, or is used by the
> second, third, fourth, ... instance of the same job that was restarted into
> a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> and make page reclaim very inefficient.
> 
> We can convert LRU pages and most other raw memcg pins to the objcg direction
> to fix this problem, and then the LRU pages will not pin the memcgs.
> 
> This patchset aims to make the LRU pages to drop the reference to memory
> cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> of the dying cgroups will not increase if we run the following test script.
> 
> ...
>

I don't have reviewer or acker tags on a couple of these, but there is
still time - I plan to push this series into mm-stable around July 8.


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-07-07 22:14       ` Yosry Ahmed
  0 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-07-07 22:14 UTC (permalink / raw)
  To: Muchun Song
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > >
> > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > into mm-unstable which will help to determine whether there is a problem or
> > > degradation. I am also doing some benchmark tests in parallel.
> > >
> > > Since the following patchsets applied. All the kernel memory are charged
> > > with the new APIs of obj_cgroup.
> > >
> > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > >
> > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > it exists at a larger scale and is causing recurring problems in the real
> > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > second, third, fourth, ... instance of the same job that was restarted into
> > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > and make page reclaim very inefficient.
> > >
> > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > >
> > > This patchset aims to make the LRU pages to drop the reference to memory
> > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > of the dying cgroups will not increase if we run the following test script.
> >
> > This is amazing work!
> >
> > Sorry if I came late, I didn't follow the threads of previous versions
> > so this might be redundant, I just have a couple of questions.
> >
> > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > (assuming they can), aren't these pages effectively unaccounted at
> > this point or leaked? Is there protection against this?
> >
>
> In this case, those pages are accounted in root memcg level. Unfortunately,
> there is no mechanism now to transfer a page's memcg from one to another.
>

Hey Muchun,

Quick question regarding the behavior of this change on cgroup v1 (I
know .. I know .. sorry):

When a memcg dies, its LRU pages are reparented, but what happens to
the charge? IIUC we don't do anything because the pages are already
hierarchically charged to the parent. Is this correct?

In cgroup v1, we have non-hierarchical stats as well, so I am trying
to understand if the reparented memory will appear in the
non-hierarchical stats of the parent (my understanding is that it
will not). I am also particularly interested in the charging behavior
of pages that get reparented to root_mem_cgroup.

The main reason I am asking is that (hierarchical_usage -
non-hierarchical_usage - children_hierarchical_usage) is *roughly*
something that we use, especially at the root level, to estimate
zombie memory usage. I am trying to see if this change will break such
calculations. Thanks!
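
For reference, the estimate itself is just the subtraction below; the
numbers are made up purely to show the calculation:

```c
#include <stdio.h>

/*
 * Rough zombie-memory estimate as described above: hierarchical usage of a
 * cgroup, minus its own non-hierarchical usage, minus the hierarchical usage
 * of its live children. What remains is (approximately) memory still charged
 * to dead descendants.
 */
static unsigned long long estimate_zombie(unsigned long long hierarchical,
					  unsigned long long non_hierarchical,
					  unsigned long long children_hierarchical)
{
	return hierarchical - non_hierarchical - children_hierarchical;
}

int main(void)
{
	/* Hypothetical numbers, in bytes, only to show the calculation. */
	unsigned long long hier = 8ULL << 30;		/* 8 GiB charged under the parent */
	unsigned long long local = 1ULL << 30;		/* 1 GiB used by the parent itself */
	unsigned long long children = 5ULL << 30;	/* 5 GiB under live children */

	printf("estimated zombie usage: %llu bytes\n",
	       estimate_zombie(hier, local, children));	/* 2 GiB */
	return 0;
}
```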

> > b) Since moving charged pages between memcgs is now becoming easier by
> > using the APIs of obj_cgroup, I wonder if this opens the door for
> > future work to transfer charges to memcgs that are actually using
> > reparented resources. For example, let's say cgroup A reads a few
> > pages into page cache, and then they are no longer used by cgroup A.
> > cgroup B, however, is using the same pages that are currently charged
> > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > dies, and these pages are reparented to A's parent, can we possibly
> > mark these reparented pages (maybe in the page tables somewhere) so
> > that next time they get accessed we recharge them to B instead
> > (possibly asynchronously)?
> > I don't have much experience about page tables but I am pretty sure
> > they are loaded so maybe there is no room in PTEs for something like
> > this, but I have always wondered about what we can do for this case
> > where a cgroup is consistently using memory charged to another cgroup.
> > Maybe when this memory is reparented is a good point in time to decide
> > to recharge appropriately. It would also fix the reparenty leak to
> > root problem (if it even exists).
> >
>
> From my point of view, this is going to be an improvement to the memcg
> subsystem in the future.  IIUC, most reparented pages are page cache
> pages without be mapped to users. So page tables are not a suitable
> place to record this information. However, we already have this information
> in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> equal to the page's obj_cgroup->memcg->objcg, it means this page have
> been reparented. I am thinking if a place where a page is mapped (probably
> page fault patch) or page (cache) is written (usually vfs write path)
> is suitable to transfer page's memcg from one to another. But need more
> thinking, e.g. How to decide if a reparented page needs to be transferred?
> If we need more information to make this decision, where to store those
> information? This is my primary thoughts on this question.
>
> Thanks.
>
> > Thanks again for this work and please excuse my ignorance if any part
> > of what I said doesn't make sense :)
> >
> > >
> > > ```bash
> > > #!/bin/bash
> > >
> > > dd if=/dev/zero of=temp bs=4096 count=1
> > > cat /proc/cgroups | grep memory
> > >
> > > for i in {0..2000}
> > > do
> > >         mkdir /sys/fs/cgroup/memory/test$i
> > >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > >         cat temp >> log
> > >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > >         rmdir /sys/fs/cgroup/memory/test$i
> > > done
> > >
> > > cat /proc/cgroups | grep memory
> > >
> > > rm -f temp log
> > > ```
> > >
> > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> > >
> > > v6:
> > >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> > >  - Rebase to mm-unstable.
> > >
> > > v5:
> > >  - Lots of improvements from Johannes, Roman and Waiman.
> > >  - Fix lockdep warning reported by kernel test robot.
> > >  - Add two new patches to do code cleanup.
> > >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> > >    it to local_lock.  It could be an improvement in the future.
> > >
> > > v4:
> > >  - Resend and rebased on v5.18.
> > >
> > > v3:
> > >  - Removed the Acked-by tags from Roman since this version is based on
> > >    the folio relevant.
> > >
> > > v2:
> > >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > >  - Rebase to linux 5.15-rc1.
> > >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> > >
> > > v1:
> > >  - Drop RFC tag.
> > >  - Rebase to linux next-20210811.
> > >
> > > RFC v4:
> > >  - Collect Acked-by from Roman.
> > >  - Rebase to linux next-20210525.
> > >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > >  - Convert reparent_ops_head to an array in patch 8.
> > >
> > > Thanks for Roman's review and suggestions.
> > >
> > > RFC v3:
> > >  - Drop the code cleanup and simplification patches. Gather those patches
> > >    into a separate series[1].
> > >  - Rework patch #1 suggested by Johannes.
> > >
> > > RFC v2:
> > >  - Collect Acked-by tags by Johannes. Thanks.
> > >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > >  - Fix move_pages_to_lru().
> > >
> > > Muchun Song (11):
> > >   mm: memcontrol: remove dead code and comments
> > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > >     lruvec_unlock{_irq, _irqrestore}
> > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > >   mm: vmscan: rework move_pages_to_lru()
> > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > >   mm: memcontrol: introduce memcg_reparent_ops
> > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > >   mm: lru: use lruvec lock to serialize memcg changes
> > >
> > >  fs/buffer.c                      |   4 +-
> > >  fs/fs-writeback.c                |  23 +-
> > >  include/linux/memcontrol.h       | 218 +++++++++------
> > >  include/linux/mm_inline.h        |   6 +
> > >  include/trace/events/writeback.h |   5 +
> > >  mm/compaction.c                  |  39 ++-
> > >  mm/huge_memory.c                 | 153 ++++++++--
> > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > >  mm/migrate.c                     |   4 +
> > >  mm/mlock.c                       |   2 +-
> > >  mm/page_io.c                     |   5 +-
> > >  mm/swap.c                        |  49 ++--
> > >  mm/vmscan.c                      |  66 ++---
> > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > >
> > >
> > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > --
> > > 2.11.0
> > >
> > >
> >

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-07-08  6:52         ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-07-08  6:52 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Thu, Jul 07, 2022 at 03:14:26PM -0700, Yosry Ahmed wrote:
> On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > >
> > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > into mm-unstable which will help to determine whether there is a problem or
> > > > degradation. I am also doing some benchmark tests in parallel.
> > > >
> > > > Since the following patchsets applied. All the kernel memory are charged
> > > > with the new APIs of obj_cgroup.
> > > >
> > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > >
> > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > it exists at a larger scale and is causing recurring problems in the real
> > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > and make page reclaim very inefficient.
> > > >
> > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > >
> > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > of the dying cgroups will not increase if we run the following test script.
> > >
> > > This is amazing work!
> > >
> > > Sorry if I came late, I didn't follow the threads of previous versions
> > > so this might be redundant, I just have a couple of questions.
> > >
> > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > (assuming they can), aren't these pages effectively unaccounted at
> > > this point or leaked? Is there protection against this?
> > >
> >
> > In this case, those pages are accounted in root memcg level. Unfortunately,
> > there is no mechanism now to transfer a page's memcg from one to another.
> >
> 
> Hey Muchun,
> 
> Quick question regarding the behavior of this change on cgroup v1 (I
> know .. I know .. sorry):
> 
> When a memcg dies, its LRU pages are reparented, but what happens to
> the charge? IIUC we don't do anything because the pages are already
> hierarchically charged to the parent. Is this correct?
>

Correct.

> In cgroup v1, we have non-hierarchical stats as well, so I am trying
> to understand if the reparented memory will appear in the
> non-hierarchical stats of the parent (my understanding is that it
> will not). I am also particularly interested in the charging behavior
> of pages that get reparented to root_mem_cgroup.
>

I didn't change any memory stats when reparenting.

> The main reason I am asking is that (hierarchical_usage -
> non-hierarchical_usage - children_hierarchical_usage) is *roughly*
> something that we use, especially at the root level, to estimate
> zombie memory usage. I am trying to see if this change will break such
> calculations. Thanks!
> 

So I think your calculations will still be correct. If you see
any unexpected results, please let me know. Thanks.

> > > b) Since moving charged pages between memcgs is now becoming easier by
> > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > future work to transfer charges to memcgs that are actually using
> > > reparented resources. For example, let's say cgroup A reads a few
> > > pages into page cache, and then they are no longer used by cgroup A.
> > > cgroup B, however, is using the same pages that are currently charged
> > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > dies, and these pages are reparented to A's parent, can we possibly
> > > mark these reparented pages (maybe in the page tables somewhere) so
> > > that next time they get accessed we recharge them to B instead
> > > (possibly asynchronously)?
> > > I don't have much experience about page tables but I am pretty sure
> > > they are loaded so maybe there is no room in PTEs for something like
> > > this, but I have always wondered about what we can do for this case
> > > where a cgroup is consistently using memory charged to another cgroup.
> > > Maybe when this memory is reparented is a good point in time to decide
> > > to recharge appropriately. It would also fix the reparenty leak to
> > > root problem (if it even exists).
> > >
> >
> > From my point of view, this is going to be an improvement to the memcg
> > subsystem in the future.  IIUC, most reparented pages are page cache
> > pages without be mapped to users. So page tables are not a suitable
> > place to record this information. However, we already have this information
> > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > equal to the page's obj_cgroup->memcg->objcg, it means this page have
> > been reparented. I am thinking if a place where a page is mapped (probably
> > page fault patch) or page (cache) is written (usually vfs write path)
> > is suitable to transfer page's memcg from one to another. But need more
> > thinking, e.g. How to decide if a reparented page needs to be transferred?
> > If we need more information to make this decision, where to store those
> > information? This is my primary thoughts on this question.
> >
> > Thanks.
> >
> > > Thanks again for this work and please excuse my ignorance if any part
> > > of what I said doesn't make sense :)
> > >
> > > >
> > > > ```bash
> > > > #!/bin/bash
> > > >
> > > > dd if=/dev/zero of=temp bs=4096 count=1
> > > > cat /proc/cgroups | grep memory
> > > >
> > > > for i in {0..2000}
> > > > do
> > > >         mkdir /sys/fs/cgroup/memory/test$i
> > > >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > > >         cat temp >> log
> > > >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > > >         rmdir /sys/fs/cgroup/memory/test$i
> > > > done
> > > >
> > > > cat /proc/cgroups | grep memory
> > > >
> > > > rm -f temp log
> > > > ```
> > > >
> > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> > > >
> > > > v6:
> > > >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> > > >  - Rebase to mm-unstable.
> > > >
> > > > v5:
> > > >  - Lots of improvements from Johannes, Roman and Waiman.
> > > >  - Fix lockdep warning reported by kernel test robot.
> > > >  - Add two new patches to do code cleanup.
> > > >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > > >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > > >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> > > >    it to local_lock.  It could be an improvement in the future.
> > > >
> > > > v4:
> > > >  - Resend and rebased on v5.18.
> > > >
> > > > v3:
> > > >  - Removed the Acked-by tags from Roman since this version is based on
> > > >    the folio relevant.
> > > >
> > > > v2:
> > > >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > > >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > > >  - Rebase to linux 5.15-rc1.
> > > >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> > > >
> > > > v1:
> > > >  - Drop RFC tag.
> > > >  - Rebase to linux next-20210811.
> > > >
> > > > RFC v4:
> > > >  - Collect Acked-by from Roman.
> > > >  - Rebase to linux next-20210525.
> > > >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > > >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > > >  - Convert reparent_ops_head to an array in patch 8.
> > > >
> > > > Thanks for Roman's review and suggestions.
> > > >
> > > > RFC v3:
> > > >  - Drop the code cleanup and simplification patches. Gather those patches
> > > >    into a separate series[1].
> > > >  - Rework patch #1 suggested by Johannes.
> > > >
> > > > RFC v2:
> > > >  - Collect Acked-by tags by Johannes. Thanks.
> > > >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > > >  - Fix move_pages_to_lru().
> > > >
> > > > Muchun Song (11):
> > > >   mm: memcontrol: remove dead code and comments
> > > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > > >     lruvec_unlock{_irq, _irqrestore}
> > > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > > >   mm: vmscan: rework move_pages_to_lru()
> > > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > > >   mm: memcontrol: introduce memcg_reparent_ops
> > > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > > >   mm: lru: use lruvec lock to serialize memcg changes
> > > >
> > > >  fs/buffer.c                      |   4 +-
> > > >  fs/fs-writeback.c                |  23 +-
> > > >  include/linux/memcontrol.h       | 218 +++++++++------
> > > >  include/linux/mm_inline.h        |   6 +
> > > >  include/trace/events/writeback.h |   5 +
> > > >  mm/compaction.c                  |  39 ++-
> > > >  mm/huge_memory.c                 | 153 ++++++++--
> > > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > > >  mm/migrate.c                     |   4 +
> > > >  mm/mlock.c                       |   2 +-
> > > >  mm/page_io.c                     |   5 +-
> > > >  mm/swap.c                        |  49 ++--
> > > >  mm/vmscan.c                      |  66 ++---
> > > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > > >
> > > >
> > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > > --
> > > > 2.11.0
> > > >
> > > >
> > >
> 

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-07-08  6:52         ` Muchun Song
  0 siblings, 0 replies; 54+ messages in thread
From: Muchun Song @ 2022-07-08  6:52 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, longman-H+wXaHxf7aLQT0dZR+AlfA,
	Michal Hocko, Roman Gushchin, Shakeel Butt, Cgroups,
	duanxiongchun-EC8Uxl6Npydl57MIdRCFDg, Linux Kernel Mailing List,
	Linux-MM

On Thu, Jul 07, 2022 at 03:14:26PM -0700, Yosry Ahmed wrote:
> On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
> >
> > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org> wrote:
> > > >
> > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > into mm-unstable which will help to determine whether there is a problem or
> > > > degradation. I am also doing some benchmark tests in parallel.
> > > >
> > > > Since the following patchsets applied. All the kernel memory are charged
> > > > with the new APIs of obj_cgroup.
> > > >
> > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > >
> > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > it exists at a larger scale and is causing recurring problems in the real
> > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > and make page reclaim very inefficient.
> > > >
> > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > >
> > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > of the dying cgroups will not increase if we run the following test script.
> > >
> > > This is amazing work!
> > >
> > > Sorry if I came late, I didn't follow the threads of previous versions
> > > so this might be redundant, I just have a couple of questions.
> > >
> > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > (assuming they can), aren't these pages effectively unaccounted at
> > > this point or leaked? Is there protection against this?
> > >
> >
> > In this case, those pages are accounted in root memcg level. Unfortunately,
> > there is no mechanism now to transfer a page's memcg from one to another.
> >
> 
> Hey Muchun,
> 
> Quick question regarding the behavior of this change on cgroup v1 (I
> know .. I know .. sorry):
> 
> When a memcg dies, its LRU pages are reparented, but what happens to
> the charge? IIUC we don't do anything because the pages are already
> hierarchically charged to the parent. Is this correct?
>

Correct.

> In cgroup v1, we have non-hierarchical stats as well, so I am trying
> to understand if the reparented memory will appear in the
> non-hierarchical stats of the parent (my understanding is that the
> will not). I am also particularly interested in the charging behavior
> of pages that get reparented to root_mem_cgroup.
>

I didn't change any memory stats when reparenting.

> The main reason I am asking is that (hierarchical_usage -
> non-hierarchical_usage - children_hierarchical_usage) is *roughly*
> something that we use, especially at the root level, to estimate
> zombie memory usage. I am trying to see if this change will break such
> calculations. Thanks!
> 

So I think your calculations will still be correct. If you see any
unexpected results, please let me know. Thanks.

> > > b) Since moving charged pages between memcgs is now becoming easier by
> > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > future work to transfer charges to memcgs that are actually using
> > > reparented resources. For example, let's say cgroup A reads a few
> > > pages into page cache, and then they are no longer used by cgroup A.
> > > cgroup B, however, is using the same pages that are currently charged
> > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > dies, and these pages are reparented to A's parent, can we possibly
> > > mark these reparented pages (maybe in the page tables somewhere) so
> > > that next time they get accessed we recharge them to B instead
> > > (possibly asynchronously)?
> > > I don't have much experience about page tables but I am pretty sure
> > > they are loaded so maybe there is no room in PTEs for something like
> > > this, but I have always wondered about what we can do for this case
> > > where a cgroup is consistently using memory charged to another cgroup.
> > > Maybe when this memory is reparented is a good point in time to decide
> > > to recharge appropriately. It would also fix the reparenty leak to
> > > root problem (if it even exists).
> > >
> >
> > From my point of view, this is going to be an improvement to the memcg
> > subsystem in the future.  IIUC, most reparented pages are page cache
> > pages without be mapped to users. So page tables are not a suitable
> > place to record this information. However, we already have this information
> > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > equal to the page's obj_cgroup->memcg->objcg, it means this page have
> > been reparented. I am thinking if a place where a page is mapped (probably
> > page fault patch) or page (cache) is written (usually vfs write path)
> > is suitable to transfer page's memcg from one to another. But need more
> > thinking, e.g. How to decide if a reparented page needs to be transferred?
> > If we need more information to make this decision, where to store those
> > information? This is my primary thoughts on this question.
> >
> > Thanks.
> >
> > > Thanks again for this work and please excuse my ignorance if any part
> > > of what I said doesn't make sense :)
> > >
> > > >
> > > > ```bash
> > > > #!/bin/bash
> > > >
> > > > dd if=/dev/zero of=temp bs=4096 count=1
> > > > cat /proc/cgroups | grep memory
> > > >
> > > > for i in {0..2000}
> > > > do
> > > >         mkdir /sys/fs/cgroup/memory/test$i
> > > >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > > >         cat temp >> log
> > > >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > > >         rmdir /sys/fs/cgroup/memory/test$i
> > > > done
> > > >
> > > > cat /proc/cgroups | grep memory
> > > >
> > > > rm -f temp log
> > > > ```
> > > >
> > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun-EC8Uxl6Npydl57MIdRCFDg@public.gmane.org/
> > > >
> > > > v6:
> > > >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> > > >  - Rebase to mm-unstable.
> > > >
> > > > v5:
> > > >  - Lots of improvements from Johannes, Roman and Waiman.
> > > >  - Fix lockdep warning reported by kernel test robot.
> > > >  - Add two new patches to do code cleanup.
> > > >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > > >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > > >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> > > >    it to local_lock.  It could be an improvement in the future.
> > > >
> > > > v4:
> > > >  - Resend and rebased on v5.18.
> > > >
> > > > v3:
> > > >  - Removed the Acked-by tags from Roman since this version is based on
> > > >    the folio relevant.
> > > >
> > > > v2:
> > > >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > > >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > > >  - Rebase to linux 5.15-rc1.
> > > >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> > > >
> > > > v1:
> > > >  - Drop RFC tag.
> > > >  - Rebase to linux next-20210811.
> > > >
> > > > RFC v4:
> > > >  - Collect Acked-by from Roman.
> > > >  - Rebase to linux next-20210525.
> > > >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > > >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > > >  - Convert reparent_ops_head to an array in patch 8.
> > > >
> > > > Thanks for Roman's review and suggestions.
> > > >
> > > > RFC v3:
> > > >  - Drop the code cleanup and simplification patches. Gather those patches
> > > >    into a separate series[1].
> > > >  - Rework patch #1 suggested by Johannes.
> > > >
> > > > RFC v2:
> > > >  - Collect Acked-by tags by Johannes. Thanks.
> > > >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > > >  - Fix move_pages_to_lru().
> > > >
> > > > Muchun Song (11):
> > > >   mm: memcontrol: remove dead code and comments
> > > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > > >     lruvec_unlock{_irq, _irqrestore}
> > > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > > >   mm: vmscan: rework move_pages_to_lru()
> > > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > > >   mm: memcontrol: introduce memcg_reparent_ops
> > > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > > >   mm: lru: use lruvec lock to serialize memcg changes
> > > >
> > > >  fs/buffer.c                      |   4 +-
> > > >  fs/fs-writeback.c                |  23 +-
> > > >  include/linux/memcontrol.h       | 218 +++++++++------
> > > >  include/linux/mm_inline.h        |   6 +
> > > >  include/trace/events/writeback.h |   5 +
> > > >  mm/compaction.c                  |  39 ++-
> > > >  mm/huge_memory.c                 | 153 ++++++++--
> > > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > > >  mm/migrate.c                     |   4 +
> > > >  mm/mlock.c                       |   2 +-
> > > >  mm/page_io.c                     |   5 +-
> > > >  mm/swap.c                        |  49 ++--
> > > >  mm/vmscan.c                      |  66 ++---
> > > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > > >
> > > >
> > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > > --
> > > > 2.11.0
> > > >
> > > >
> > >
> 

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-07-08  9:26           ` Yosry Ahmed
  0 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-07-08  9:26 UTC (permalink / raw)
  To: Muchun Song
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Thu, Jul 7, 2022 at 11:52 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Thu, Jul 07, 2022 at 03:14:26PM -0700, Yosry Ahmed wrote:
> > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > >
> > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > >
> > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > > into mm-unstable which will help to determine whether there is a problem or
> > > > > degradation. I am also doing some benchmark tests in parallel.
> > > > >
> > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > with the new APIs of obj_cgroup.
> > > > >
> > > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > > >
> > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > and make page reclaim very inefficient.
> > > > >
> > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > >
> > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > of the dying cgroups will not increase if we run the following test script.
> > > >
> > > > This is amazing work!
> > > >
> > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > so this might be redundant, I just have a couple of questions.
> > > >
> > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > this point or leaked? Is there protection against this?
> > > >
> > >
> > > In this case, those pages are accounted in root memcg level. Unfortunately,
> > > there is no mechanism now to transfer a page's memcg from one to another.
> > >
> >
> > Hey Muchun,
> >
> > Quick question regarding the behavior of this change on cgroup v1 (I
> > know .. I know .. sorry):
> >
> > When a memcg dies, its LRU pages are reparented, but what happens to
> > the charge? IIUC we don't do anything because the pages are already
> > hierarchically charged to the parent. Is this correct?
> >
>
> Correct.
>
> > In cgroup v1, we have non-hierarchical stats as well, so I am trying
> > to understand if the reparented memory will appear in the
> > non-hierarchical stats of the parent (my understanding is that the
> > will not). I am also particularly interested in the charging behavior
> > of pages that get reparented to root_mem_cgroup.
> >
>
> I didn't change any memory stats when reparenting.
>
> > The main reason I am asking is that (hierarchical_usage -
> > non-hierarchical_usage - children_hierarchical_usage) is *roughly*
> > something that we use, especially at the root level, to estimate
> > zombie memory usage. I am trying to see if this change will break such
> > calculations. Thanks!
> >
>
> So I think your calculations will still be correct. If you have
> any unexpected result, please let me know. Thanks.

I have been looking at the code and the patchset and I think there
might be a problem with the stats, at least for cgroup v1. Let's say we
have a parent memcg P, which has a child memcg C. When processes in
memcg C allocate memory, the stats (e.g. NR_ANON_MAPPED) are updated
for C (non-hierarchical per-cpu counters, memcg->vmstats_percpu) and
for P (aggregated stats, memcg->vmstats).

When memcg C is offlined, its pages are reparented to memcg P. At that
point P->vmstats (hierarchical) still includes those pages, and
P->vmstats_percpu (non-hierarchical) doesn't. So far so good.

Later those reparented pages get uncharged, but their memcg is now P, so
they get subtracted from P's *non-hierarchical* stats (and eventually
from the hierarchical stats as well). So P->vmstats (hierarchical)
decreases, which is correct, but P->vmstats_percpu (non-hierarchical)
also decreases, which is wrong, as those stats were never added to
P->vmstats_percpu to begin with.

From a cgroup v2 perspective *maybe* everything continues to work, but
this breaks cgroup v1 non-hierarchical stats. In fact, if the
reparented memory exceeds the original non-hierarchical memory in P,
we can underflow those stats because we are subtracting stats that
were never added in the first place.
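
To make the underflow concrete, here is a minimal userspace sketch of the
bookkeeping described above (a simplified two-level hierarchy with made-up
numbers, not kernel code):

```c
/*
 * Simplified model of the scenario above. "local" stands for the
 * non-hierarchical per-cpu counters (memcg->vmstats_percpu) and
 * "total" for the aggregated hierarchical counters (memcg->vmstats).
 */
#include <stdio.h>

struct memcg {
	long local;	/* non-hierarchical */
	long total;	/* hierarchical */
};

int main(void)
{
	struct memcg P = { 0, 0 }, C = { 0, 0 };
	long pages = 100;

	/* Processes in C allocate 100 pages: C gets both counters
	 * updated, P only gets the hierarchical one. */
	C.local += pages;
	C.total += pages;
	P.total += pages;

	/* C is offlined and its LRU pages are reparented to P;
	 * no stats are moved at this point. */

	/* The reparented pages are eventually uncharged; their memcg
	 * is now P, so both of P's counters are decremented. */
	P.total -= pages;	/* correct: 100 -> 0 */
	P.local -= pages;	/* wrong: 0 -> -100, it was never added here */

	printf("P: local=%ld total=%ld\n", P.local, P.total);
	return 0;
}
```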

Please let me know if I am misunderstanding something and there is
actually no problem with the non-hierarchical stats (you can stop
reading here if this is all in my head and there's actually no
problem).

Off the top of my head, we could handle stats modifications of reparented
memory separately. We should not update the local per-cpu counters;
maybe we should rather update memcg->vmstat.state_pending directly so that
the changes appear as if they come from a child memcg. Two problems
come with such an approach:

1) memcg->vmstat.state_pending is shared between CPUs, and so far it is
only modified by mem_cgroup_css_rstat_flush() in a locked context. A
solution would be to add the reparented state to
memcg->vmstat.state_percpu instead and treat it like
memcg->vmstat.state_pending in mem_cgroup_css_rstat_flush(). Keep in
mind that this adds a tiny bit of memory overhead (roughly 8
bytes * num_cpus for each memcg).

2) Identifying that we are updating stats of reparented memory. This
should be easy if we have a pointer to the page so we can compare
page->objcg with page->objcg->memcg->objcg, but AFAICT the memcg stats
are updated in __mod_memcg_state() and __mod_memcg_lruvec_state(), and
in each of these we have no idea which page(s) the stats update is
associated with. They are called from many different places, so it would
be troublesome to pass such information down from all call sites. I have
nothing off the top of my head to fix this problem except passing the
necessary info through all code paths to __mod_memcg_state() and
__mod_memcg_lruvec_state(), which is far from ideal.
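
For reference, the check itself could look roughly like the helper below.
This is only a sketch based on the description above: the helper name and
the accessors (folio_objcg(), objcg->memcg, memcg->objcg) are assumptions
about the patchset rather than something I verified against it, and it
does not solve the real problem here, which is that __mod_memcg_state()
and __mod_memcg_lruvec_state() have no folio to pass to such a helper.

```c
/*
 * Hypothetical helper based on the "page's objcg != objcg->memcg->objcg"
 * check described above. Names are illustrative and may not match the
 * patchset exactly.
 */
static inline bool folio_memcg_reparented(struct folio *folio)
{
	struct obj_cgroup *objcg = folio_objcg(folio);
	bool reparented;

	if (!objcg)
		return false;

	rcu_read_lock();
	/* After reparenting, the memcg this objcg points to has its own,
	 * different objcg, so the two no longer match. */
	reparented = rcu_dereference(objcg->memcg)->objcg != objcg;
	rcu_read_unlock();

	return reparented;
}
```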

Again, I am sorry if these discussions are late, I didn't have time to
look at previous versions of this patchset.

>
> > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > future work to transfer charges to memcgs that are actually using
> > > > reparented resources. For example, let's say cgroup A reads a few
> > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > cgroup B, however, is using the same pages that are currently charged
> > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > that next time they get accessed we recharge them to B instead
> > > > (possibly asynchronously)?
> > > > I don't have much experience about page tables but I am pretty sure
> > > > they are loaded so maybe there is no room in PTEs for something like
> > > > this, but I have always wondered about what we can do for this case
> > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > Maybe when this memory is reparented is a good point in time to decide
> > > > to recharge appropriately. It would also fix the reparenty leak to
> > > > root problem (if it even exists).
> > > >
> > >
> > > From my point of view, this is going to be an improvement to the memcg
> > > subsystem in the future.  IIUC, most reparented pages are page cache
> > > pages without be mapped to users. So page tables are not a suitable
> > > place to record this information. However, we already have this information
> > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > > equal to the page's obj_cgroup->memcg->objcg, it means this page have
> > > been reparented. I am thinking if a place where a page is mapped (probably
> > > page fault patch) or page (cache) is written (usually vfs write path)
> > > is suitable to transfer page's memcg from one to another. But need more
> > > thinking, e.g. How to decide if a reparented page needs to be transferred?
> > > If we need more information to make this decision, where to store those
> > > information? This is my primary thoughts on this question.
> > >
> > > Thanks.
> > >
> > > > Thanks again for this work and please excuse my ignorance if any part
> > > > of what I said doesn't make sense :)
> > > >
> > > > >
> > > > > ```bash
> > > > > #!/bin/bash
> > > > >
> > > > > dd if=/dev/zero of=temp bs=4096 count=1
> > > > > cat /proc/cgroups | grep memory
> > > > >
> > > > > for i in {0..2000}
> > > > > do
> > > > >         mkdir /sys/fs/cgroup/memory/test$i
> > > > >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > > > >         cat temp >> log
> > > > >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > > > >         rmdir /sys/fs/cgroup/memory/test$i
> > > > > done
> > > > >
> > > > > cat /proc/cgroups | grep memory
> > > > >
> > > > > rm -f temp log
> > > > > ```
> > > > >
> > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> > > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> > > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> > > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> > > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> > > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> > > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> > > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> > > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> > > > >
> > > > > v6:
> > > > >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> > > > >  - Rebase to mm-unstable.
> > > > >
> > > > > v5:
> > > > >  - Lots of improvements from Johannes, Roman and Waiman.
> > > > >  - Fix lockdep warning reported by kernel test robot.
> > > > >  - Add two new patches to do code cleanup.
> > > > >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > > > >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > > > >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> > > > >    it to local_lock.  It could be an improvement in the future.
> > > > >
> > > > > v4:
> > > > >  - Resend and rebased on v5.18.
> > > > >
> > > > > v3:
> > > > >  - Removed the Acked-by tags from Roman since this version is based on
> > > > >    the folio relevant.
> > > > >
> > > > > v2:
> > > > >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > > > >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > > > >  - Rebase to linux 5.15-rc1.
> > > > >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> > > > >
> > > > > v1:
> > > > >  - Drop RFC tag.
> > > > >  - Rebase to linux next-20210811.
> > > > >
> > > > > RFC v4:
> > > > >  - Collect Acked-by from Roman.
> > > > >  - Rebase to linux next-20210525.
> > > > >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > > > >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > > > >  - Convert reparent_ops_head to an array in patch 8.
> > > > >
> > > > > Thanks for Roman's review and suggestions.
> > > > >
> > > > > RFC v3:
> > > > >  - Drop the code cleanup and simplification patches. Gather those patches
> > > > >    into a separate series[1].
> > > > >  - Rework patch #1 suggested by Johannes.
> > > > >
> > > > > RFC v2:
> > > > >  - Collect Acked-by tags by Johannes. Thanks.
> > > > >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > > > >  - Fix move_pages_to_lru().
> > > > >
> > > > > Muchun Song (11):
> > > > >   mm: memcontrol: remove dead code and comments
> > > > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > > > >     lruvec_unlock{_irq, _irqrestore}
> > > > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > > > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > > > >   mm: vmscan: rework move_pages_to_lru()
> > > > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > > > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > > > >   mm: memcontrol: introduce memcg_reparent_ops
> > > > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > > > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > > > >   mm: lru: use lruvec lock to serialize memcg changes
> > > > >
> > > > >  fs/buffer.c                      |   4 +-
> > > > >  fs/fs-writeback.c                |  23 +-
> > > > >  include/linux/memcontrol.h       | 218 +++++++++------
> > > > >  include/linux/mm_inline.h        |   6 +
> > > > >  include/trace/events/writeback.h |   5 +
> > > > >  mm/compaction.c                  |  39 ++-
> > > > >  mm/huge_memory.c                 | 153 ++++++++--
> > > > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > > > >  mm/migrate.c                     |   4 +
> > > > >  mm/mlock.c                       |   2 +-
> > > > >  mm/page_io.c                     |   5 +-
> > > > >  mm/swap.c                        |  49 ++--
> > > > >  mm/vmscan.c                      |  66 ++---
> > > > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > > > >
> > > > >
> > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > > > --
> > > > > 2.11.0
> > > > >
> > > > >
> > > >
> >

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
  2022-07-08  9:26           ` Yosry Ahmed
  (?)
@ 2022-07-09  5:51           ` Muchun Song
  2022-07-09  9:23               ` Yosry Ahmed
  -1 siblings, 1 reply; 54+ messages in thread
From: Muchun Song @ 2022-07-09  5:51 UTC (permalink / raw)
  To: Yosry Ahmed
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Fri, Jul 08, 2022 at 02:26:08AM -0700, Yosry Ahmed wrote:
> On Thu, Jul 7, 2022 at 11:52 PM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > On Thu, Jul 07, 2022 at 03:14:26PM -0700, Yosry Ahmed wrote:
> > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > >
> > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > > >
> > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > > > into mm-unstable which will help to determine whether there is a problem or
> > > > > > degradation. I am also doing some benchmark tests in parallel.
> > > > > >
> > > > > > Since the following patchsets applied. All the kernel memory are charged
> > > > > > with the new APIs of obj_cgroup.
> > > > > >
> > > > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > > > >
> > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > and make page reclaim very inefficient.
> > > > > >
> > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > > >
> > > > > > This patchset aims to make the LRU pages to drop the reference to memory
> > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > > of the dying cgroups will not increase if we run the following test script.
> > > > >
> > > > > This is amazing work!
> > > > >
> > > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > > so this might be redundant, I just have a couple of questions.
> > > > >
> > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > > this point or leaked? Is there protection against this?
> > > > >
> > > >
> > > > In this case, those pages are accounted in root memcg level. Unfortunately,
> > > > there is no mechanism now to transfer a page's memcg from one to another.
> > > >
> > >
> > > Hey Muchun,
> > >
> > > Quick question regarding the behavior of this change on cgroup v1 (I
> > > know .. I know .. sorry):
> > >
> > > When a memcg dies, its LRU pages are reparented, but what happens to
> > > the charge? IIUC we don't do anything because the pages are already
> > > hierarchically charged to the parent. Is this correct?
> > >
> >
> > Correct.
> >
> > > In cgroup v1, we have non-hierarchical stats as well, so I am trying
> > > to understand if the reparented memory will appear in the
> > > non-hierarchical stats of the parent (my understanding is that the
> > > will not). I am also particularly interested in the charging behavior
> > > of pages that get reparented to root_mem_cgroup.
> > >
> >
> > I didn't change any memory stats when reparenting.
> >
> > > The main reason I am asking is that (hierarchical_usage -
> > > non-hierarchical_usage - children_hierarchical_usage) is *roughly*
> > > something that we use, especially at the root level, to estimate
> > > zombie memory usage. I am trying to see if this change will break such
> > > calculations. Thanks!
> > >
> >
> > So I think your calculations will still be correct. If you have
> > any unexpected result, please let me know. Thanks.
> 
> I have been looking at the code and the patchset and I think there
> might be a problem with the stats, at least for cgroup v1. Lets say we
> have a parent memcg P, which has a child memcg C. When processes in
> memcg C allocate memory the stats (e.g. NR_ANON_MAPPED) are updated
> for C (non-hierarchical per-cpu counters, memcg->vmstats_percpu), and
> for P (aggregated stats, memcg->vmstats).
> 
> When memcg C is offlined, its pages are reparented to memcg P, so far
> P->vmstats (hierarchical) still have those pages, and
> P->vmstats_percpu (non-hierarchical) don't. So far so good.
> 
> Now those reparented pages get uncharged, but their memcg is P now, so
> they get subtracted from P's *non-hierarchical* stats (and eventually
> hierarchical stats as well). So now P->vmstats (hierarchical)
> decreases, which is correct, but P->vmstats_percpu (non-hierarchical)
> also decreases, which is wrong, as those stats were never added to
> P->vmstats_percpu to begin with.
> 
> From a cgroup v2 perspective *maybe* everything continues to work, but
> this breaks cgroup v1 non-hierarchical stats. In fact, if the
> reparented memory exceeds the original non-hierarchical memory in P,
> we can underflow those stats  because we are subtracting stats that
> were never added in the first place.
> 
> Please let me know if I am misunderstanding something and there is
> actually no problem with the non-hierarchical stats (you can stop
> reading here if this is all in my head and there's actually no
> problem).
>

Thanks for the patient explanation. Now I see your point.
 
> Off the top of my mind we can handle stats modifications of reparented
> memory separately. We should not updated local per-cpu counters, maybe
> we should rather update memcg->vmstat.state_pending directly so that
> the changes appear as if they come from a child memcg. Two problems
> come with such an approach:
>

Instead of avoiding updates to the local per-cpu counters for reparented
pages, how about propagating the child memcg's local per-cpu counters to
its parent when the LRU pages are reparented? We would not need to
propagate all vmstats, just the vmstats exposed to cgroup v1 users (like
memcg1_stats, memcg1_events and the LRU list counters). A reparented page
is only a little bit different from other non-reparented pages, so
propagating the local per-cpu counters may be acceptable. What do you
think?
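
Something like the following rough sketch is what I have in mind. The
function name is made up, it only shows the state counters, and it ignores
synchronization with concurrent updaters and with rstat flushing; the
field names may also not match mm-unstable exactly.

```c
/*
 * Rough sketch only: fold the dying child's non-hierarchical per-cpu
 * counters for the cgroup v1 visible stats into its parent when the
 * LRU pages are reparented.
 */
static void memcg_reparent_local_stats(struct mem_cgroup *parent,
				       struct mem_cgroup *memcg)
{
	int cpu, i;

	for_each_possible_cpu(cpu) {
		struct memcg_vmstats_percpu *from =
			per_cpu_ptr(memcg->vmstats_percpu, cpu);
		struct memcg_vmstats_percpu *to =
			per_cpu_ptr(parent->vmstats_percpu, cpu);

		/* Only the stats exposed to cgroup v1 users. */
		for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++) {
			int idx = memcg1_stats[i];

			to->state[idx] += from->state[idx];
			from->state[idx] = 0;
		}
		/* memcg1_events and the LRU list counters would be
		 * handled the same way. */
	}
}
```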

> 1) memcg->vmstat.state_pending is shared between cpus, and so far is
> only modified by mem_cgroup_css_rstat_flush() in locked context. A
> solution would be to add reparented state to
> memcg->vmstat.state_percpu instead and treat it like
> memcg->vmstat.state_pending in mem_cgroup_css_rstat_flush(). Keep in
> mind that this adds a tiny bit of memory overhead (roughly 8
> bytes*num_cpus for each memcg).
> 
> 2) Identifying that we are updating stats of reparented memory. This
> should be easy if we have a pointer to the page to compare page->objcg
> with page->objcg->memcg->objcg, but AFAICT the memcg stats are updated
> in __mod_memcg_state() and __mod_memcg_lruvec_state(), and we have no
> idea in each of these what page(s) is the stats update associated
> with. They are called from many different places, it would be
> troublesome to pass such information down from all call sites. I have
> nothing off the top of my head to fix this problem except passing the
> necessary info through all code paths to __mod_memcg_state() and
> __mod_memcg_lruvec_state(), which is far from ideal.
> 
> Again, I am sorry if these discussions are late, I didn't have time to
> look at previous versions of this patchset.
>

Not late, thanks for your feedback.

> >
> > > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > > future work to transfer charges to memcgs that are actually using
> > > > > reparented resources. For example, let's say cgroup A reads a few
> > > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > > cgroup B, however, is using the same pages that are currently charged
> > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > > that next time they get accessed we recharge them to B instead
> > > > > (possibly asynchronously)?
> > > > > I don't have much experience about page tables but I am pretty sure
> > > > > they are loaded so maybe there is no room in PTEs for something like
> > > > > this, but I have always wondered about what we can do for this case
> > > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > > Maybe when this memory is reparented is a good point in time to decide
> > > > > to recharge appropriately. It would also fix the reparenty leak to
> > > > > root problem (if it even exists).
> > > > >
> > > >
> > > > From my point of view, this is going to be an improvement to the memcg
> > > > subsystem in the future.  IIUC, most reparented pages are page cache
> > > > pages without be mapped to users. So page tables are not a suitable
> > > > place to record this information. However, we already have this information
> > > > in struct obj_cgroup and struct mem_cgroup. If a page's obj_cgroup is not
> > > > equal to the page's obj_cgroup->memcg->objcg, it means this page have
> > > > been reparented. I am thinking if a place where a page is mapped (probably
> > > > page fault patch) or page (cache) is written (usually vfs write path)
> > > > is suitable to transfer page's memcg from one to another. But need more
> > > > thinking, e.g. How to decide if a reparented page needs to be transferred?
> > > > If we need more information to make this decision, where to store those
> > > > information? This is my primary thoughts on this question.
> > > >
> > > > Thanks.
> > > >
> > > > > Thanks again for this work and please excuse my ignorance if any part
> > > > > of what I said doesn't make sense :)
> > > > >
> > > > > >
> > > > > > ```bash
> > > > > > #!/bin/bash
> > > > > >
> > > > > > dd if=/dev/zero of=temp bs=4096 count=1
> > > > > > cat /proc/cgroups | grep memory
> > > > > >
> > > > > > for i in {0..2000}
> > > > > > do
> > > > > >         mkdir /sys/fs/cgroup/memory/test$i
> > > > > >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > > > > >         cat temp >> log
> > > > > >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > > > > >         rmdir /sys/fs/cgroup/memory/test$i
> > > > > > done
> > > > > >
> > > > > > cat /proc/cgroups | grep memory
> > > > > >
> > > > > > rm -f temp log
> > > > > > ```
> > > > > >
> > > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> > > > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> > > > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> > > > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> > > > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> > > > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> > > > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> > > > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> > > > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> > > > > >
> > > > > > v6:
> > > > > >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> > > > > >  - Rebase to mm-unstable.
> > > > > >
> > > > > > v5:
> > > > > >  - Lots of improvements from Johannes, Roman and Waiman.
> > > > > >  - Fix lockdep warning reported by kernel test robot.
> > > > > >  - Add two new patches to do code cleanup.
> > > > > >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > > > > >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > > > > >    local_lock/unlock_irq() takes an parameter, it needs more thinking to transform
> > > > > >    it to local_lock.  It could be an improvement in the future.
> > > > > >
> > > > > > v4:
> > > > > >  - Resend and rebased on v5.18.
> > > > > >
> > > > > > v3:
> > > > > >  - Removed the Acked-by tags from Roman since this version is based on
> > > > > >    the folio relevant.
> > > > > >
> > > > > > v2:
> > > > > >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > > > > >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > > > > >  - Rebase to linux 5.15-rc1.
> > > > > >  - Add a new pacth to cleanup mem_cgroup_kmem_disabled().
> > > > > >
> > > > > > v1:
> > > > > >  - Drop RFC tag.
> > > > > >  - Rebase to linux next-20210811.
> > > > > >
> > > > > > RFC v4:
> > > > > >  - Collect Acked-by from Roman.
> > > > > >  - Rebase to linux next-20210525.
> > > > > >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > > > > >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > > > > >  - Convert reparent_ops_head to an array in patch 8.
> > > > > >
> > > > > > Thanks for Roman's review and suggestions.
> > > > > >
> > > > > > RFC v3:
> > > > > >  - Drop the code cleanup and simplification patches. Gather those patches
> > > > > >    into a separate series[1].
> > > > > >  - Rework patch #1 suggested by Johannes.
> > > > > >
> > > > > > RFC v2:
> > > > > >  - Collect Acked-by tags by Johannes. Thanks.
> > > > > >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > > > > >  - Fix move_pages_to_lru().
> > > > > >
> > > > > > Muchun Song (11):
> > > > > >   mm: memcontrol: remove dead code and comments
> > > > > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > > > > >     lruvec_unlock{_irq, _irqrestore}
> > > > > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > > > > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > > > > >   mm: vmscan: rework move_pages_to_lru()
> > > > > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > > > > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > > > > >   mm: memcontrol: introduce memcg_reparent_ops
> > > > > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > > > > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > > > > >   mm: lru: use lruvec lock to serialize memcg changes
> > > > > >
> > > > > >  fs/buffer.c                      |   4 +-
> > > > > >  fs/fs-writeback.c                |  23 +-
> > > > > >  include/linux/memcontrol.h       | 218 +++++++++------
> > > > > >  include/linux/mm_inline.h        |   6 +
> > > > > >  include/trace/events/writeback.h |   5 +
> > > > > >  mm/compaction.c                  |  39 ++-
> > > > > >  mm/huge_memory.c                 | 153 ++++++++--
> > > > > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > > > > >  mm/migrate.c                     |   4 +
> > > > > >  mm/mlock.c                       |   2 +-
> > > > > >  mm/page_io.c                     |   5 +-
> > > > > >  mm/swap.c                        |  49 ++--
> > > > > >  mm/vmscan.c                      |  66 ++---
> > > > > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > > > > >
> > > > > >
> > > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > > > > --
> > > > > > 2.11.0
> > > > > >
> > > > > >
> > > > >
> > >
> 

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages
@ 2022-07-09  9:23               ` Yosry Ahmed
  0 siblings, 0 replies; 54+ messages in thread
From: Yosry Ahmed @ 2022-07-09  9:23 UTC (permalink / raw)
  To: Muchun Song
  Cc: Andrew Morton, Johannes Weiner, longman, Michal Hocko,
	Roman Gushchin, Shakeel Butt, Cgroups, duanxiongchun,
	Linux Kernel Mailing List, Linux-MM

On Fri, Jul 8, 2022 at 10:51 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Fri, Jul 08, 2022 at 02:26:08AM -0700, Yosry Ahmed wrote:
> > On Thu, Jul 7, 2022 at 11:52 PM Muchun Song <songmuchun@bytedance.com> wrote:
> > >
> > > On Thu, Jul 07, 2022 at 03:14:26PM -0700, Yosry Ahmed wrote:
> > > > On Mon, Jun 27, 2022 at 12:11 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > >
> > > > > On Sun, Jun 26, 2022 at 03:32:02AM -0700, Yosry Ahmed wrote:
> > > > > > On Tue, Jun 21, 2022 at 5:57 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > > > >
> > > > > > > This version is rebased on mm-unstable. Hopefully, Andrew can get this series
> > > > > > > into mm-unstable which will help to determine whether there is a problem or
> > > > > > > degradation. I am also doing some benchmark tests in parallel.
> > > > > > >
> > > > > > > Since the following patchsets were applied, all of the kernel memory is charged
> > > > > > > with the new APIs of obj_cgroup.
> > > > > > >
> > > > > > >         commit f2fe7b09a52b ("mm: memcg/slab: charge individual slab objects instead of pages")
> > > > > > >         commit b4e0b68fbd9d ("mm: memcontrol: use obj_cgroup APIs to charge kmem pages")
> > > > > > >
> > > > > > > But user memory allocations (LRU pages) pinning memcgs for a long time -
> > > > > > > it exists at a larger scale and is causing recurring problems in the real
> > > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > > and make page reclaim very inefficient.
> > > > > > >
> > > > > > > We can convert LRU pages and most other raw memcg pins to the objcg direction
> > > > > > > to fix this problem, and then the LRU pages will not pin the memcgs.
> > > > > > >
> > > > > > > This patchset aims to make the LRU pages drop the reference to the memory
> > > > > > > cgroup by using the APIs of obj_cgroup. Finally, we can see that the number
> > > > > > > of the dying cgroups will not increase if we run the following test script.
> > > > > >
> > > > > > This is amazing work!
> > > > > >
> > > > > > Sorry if I came late, I didn't follow the threads of previous versions
> > > > > > so this might be redundant, I just have a couple of questions.
> > > > > >
> > > > > > a) If LRU pages keep getting parented until they reach root_mem_cgroup
> > > > > > (assuming they can), aren't these pages effectively unaccounted at
> > > > > > this point or leaked? Is there protection against this?
> > > > > >
> > > > >
> > > > > In this case, those pages are accounted at the root memcg level. Unfortunately,
> > > > > there is currently no mechanism to transfer a page's memcg from one to another.
> > > > >
> > > >
> > > > Hey Muchun,
> > > >
> > > > Quick question regarding the behavior of this change on cgroup v1 (I
> > > > know .. I know .. sorry):
> > > >
> > > > When a memcg dies, its LRU pages are reparented, but what happens to
> > > > the charge? IIUC we don't do anything because the pages are already
> > > > hierarchically charged to the parent. Is this correct?
> > > >
> > >
> > > Correct.
> > >
> > > > In cgroup v1, we have non-hierarchical stats as well, so I am trying
> > > > to understand if the reparented memory will appear in the
> > > > non-hierarchical stats of the parent (my understanding is that it
> > > > will not). I am also particularly interested in the charging behavior
> > > > of pages that get reparented to root_mem_cgroup.
> > > >
> > >
> > > I didn't change any memory stats when reparenting.
> > >
> > > > The main reason I am asking is that (hierarchical_usage -
> > > > non-hierarchical_usage - children_hierarchical_usage) is *roughly*
> > > > something that we use, especially at the root level, to estimate
> > > > zombie memory usage. I am trying to see if this change will break such
> > > > calculations. Thanks!
> > > >
> > >
> > > So I think your calculations will still be correct. If you have
> > > any unexpected result, please let me know. Thanks.
> >
> > I have been looking at the code and the patchset and I think there
> > might be a problem with the stats, at least for cgroup v1. Let's say we
> > have a parent memcg P, which has a child memcg C. When processes in
> > memcg C allocate memory the stats (e.g. NR_ANON_MAPPED) are updated
> > for C (non-hierarchical per-cpu counters, memcg->vmstats_percpu), and
> > for P (aggregated stats, memcg->vmstats).
> >
> > When memcg C is offlined, its pages are reparented to memcg P, so far
> > P->vmstats (hierarchical) still have those pages, and
> > P->vmstats_percpu (non-hierarchical) don't. So far so good.
> >
> > Now those reparented pages get uncharged, but their memcg is P now, so
> > they get subtracted from P's *non-hierarchical* stats (and eventually
> > hierarchical stats as well). So now P->vmstats (hierarchical)
> > decreases, which is correct, but P->vmstats_percpu (non-hierarchical)
> > also decreases, which is wrong, as those stats were never added to
> > P->vmstats_percpu to begin with.
> >
> > From a cgroup v2 perspective *maybe* everything continues to work, but
> > this breaks cgroup v1 non-hierarchical stats. In fact, if the
> > reparented memory exceeds the original non-hierarchical memory in P,
> > we can underflow those stats  because we are subtracting stats that
> > were never added in the first place.
> >
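
To illustrate the failure mode, here is a toy userspace model of the bookkeeping
described above (not kernel code; the "local" and "tree" fields only stand in for
the roles of memcg->vmstats_percpu and memcg->vmstats):

```c
#include <stdio.h>

/*
 * Toy model: "local" plays the role of memcg->vmstats_percpu
 * (non-hierarchical), "tree" the role of memcg->vmstats (hierarchical).
 */
struct memcg {
	struct memcg *parent;
	long local;
	long tree;
};

static void mod_stat(struct memcg *memcg, long nr)
{
	memcg->local += nr;		/* only the memcg the page belongs to */
	for (; memcg; memcg = memcg->parent)
		memcg->tree += nr;	/* propagated to all ancestors */
}

int main(void)
{
	struct memcg P = { 0 };
	struct memcg C = { .parent = &P };

	mod_stat(&C, 1);	/* page charged while C is still online */
	/* C goes offline and the page is reparented to P ... */
	mod_stat(&P, -1);	/* ... then the page is uncharged against P */

	/* tree stats stay consistent, but P's local stat underflows */
	printf("P: local=%ld tree=%ld, C: local=%ld tree=%ld\n",
	       P.local, P.tree, C.local, C.tree);
	return 0;
}
```

Running it prints P: local=-1 tree=0, which is exactly the non-hierarchical
underflow described above.
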
> > Please let me know if I am misunderstanding something and there is
> > actually no problem with the non-hierarchical stats (you can stop
> > reading here if this is all in my head and there's actually no
> > problem).
> >
>
> Thanks for the patient explanation. Now I see your point.
>
> > Off the top of my head, we could handle stat modifications of reparented
> > memory separately. We should not update the local per-cpu counters; maybe
> > we should rather update memcg->vmstat.state_pending directly so that
> > the changes appear as if they come from a child memcg. Two problems
> > come with such an approach:
> >
>
> Instead of avoiding updates to the local per-cpu counters for reparented pages,
> how about propagating the child memcg's local per-cpu counters to its parent
> when the LRU pages are reparented? We do not need to propagate all of the
> vmstats, just the ones exposed to cgroup v1 users (like memcg1_stats,
> memcg1_events and the lru list pages). A reparented page differs only a little
> from other non-reparented pages, so propagating the local per-cpu counters may
> be acceptable. What do you think?
>
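
A minimal sketch of what such a propagation step might look like, just to make
the proposal concrete (the helper below is hypothetical and not part of this
series; synchronization with concurrent stat updates and rstat flushing is
ignored):

```c
/*
 * Hypothetical helper, not part of this series: fold the dying child's
 * local (non-hierarchical) per-cpu counters into the parent's, so that
 * later uncharges of reparented pages have matching local charges to
 * subtract from.  Only the cgroup v1 visible stats in memcg1_stats[]
 * are propagated.
 */
static void memcg_reparent_local_stats(struct mem_cgroup *child,
				       struct mem_cgroup *parent)
{
	int cpu, i;

	for_each_possible_cpu(cpu) {
		struct memcg_vmstats_percpu *cstat =
			per_cpu_ptr(child->vmstats_percpu, cpu);
		struct memcg_vmstats_percpu *pstat =
			per_cpu_ptr(parent->vmstats_percpu, cpu);

		for (i = 0; i < ARRAY_SIZE(memcg1_stats); i++)
			pstat->state[memcg1_stats[i]] +=
				cstat->state[memcg1_stats[i]];
	}
}
```
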

I think this introduces another problem. Now the non-hierarchical
stats of a parent memcg (P in the above example) would include
reparented memory. This hides zombie memory usage. As I elaborated
earlier, parent_hierarchical_usage - parent_non_hierarchical_usage -
SUM(children_hierarchical_usage) should give an estimate of the zombie
memory under parent. If we propagate reparented memory stats (aka
zombies) to the parent's non-hierarchical stats, then we have no way
of finding out how much zombie memory lives in a memcg. This problem
becomes more significant when we are reparenting to root, where zombie
memory is part of unaccounted system overhead.
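
For reference, the estimate can be written as a small standalone sketch (the
struct is illustrative only; in practice the numbers come from the cgroupfs
usage and stat files):

```c
#include <stdio.h>

/* Illustrative only: usage numbers as read from cgroupfs, not a kernel API. */
struct usage {
	unsigned long long hierarchical;	/* memcg plus all descendants */
	unsigned long long non_hierarchical;	/* the memcg itself only */
};

/*
 * zombie ~= parent_hierarchical - parent_non_hierarchical
 *           - SUM(children_hierarchical)
 */
static unsigned long long zombie_estimate(const struct usage *parent,
					  const struct usage *children,
					  int nr_children)
{
	unsigned long long zombie;
	int i;

	zombie = parent->hierarchical - parent->non_hierarchical;
	for (i = 0; i < nr_children; i++)
		zombie -= children[i].hierarchical;

	return zombie;
}

int main(void)
{
	struct usage parent = { .hierarchical = 300, .non_hierarchical = 100 };
	struct usage children[] = {
		{ .hierarchical = 120 },
		{ .hierarchical = 50 },
	};

	/* 300 - 100 - 120 - 50 = 30 units attributable to dying memcgs */
	printf("zombie estimate: %llu\n",
	       zombie_estimate(&parent, children, 2));
	return 0;
}
```
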

Actually there is a different problem even in cgroup v2. At root level
there will be no way of finding out whether unaccounted system
overhead (root_usage - SUM(top_level_memcgs_usage)) comes from zombie
memcgs or not, because zombie memcgs will no longer exist and
reparented/zombie memory can be indistinguishable from memory that has
always lived in root. This makes debugging high system overhead even
harder, but that's a problem with the reparenting approach in general,
unrelated to the non-hierarchical stats problem.

> > 1) memcg->vmstat.state_pending is shared between cpus, and so far is
> > only modified by mem_cgroup_css_rstat_flush() in locked context. A
> > solution would be to add reparented state to
> > memcg->vmstat.state_percpu instead and treat it like
> > memcg->vmstat.state_pending in mem_cgroup_css_rstat_flush(). Keep in
> > mind that this adds a tiny bit of memory overhead (roughly 8
> > bytes*num_cpus for each memcg).
> >
> > 2) Identifying that we are updating stats of reparented memory. This
> > should be easy if we have a pointer to the page to compare page->objcg
> > with page->objcg->memcg->objcg, but AFAICT the memcg stats are updated
> > in __mod_memcg_state() and __mod_memcg_lruvec_state(), and we have no
> > idea in each of these what page(s) is the stats update associated
> > with. They are called from many different places, it would be
> > troublesome to pass such information down from all call sites. I have
> > nothing off the top of my head to fix this problem except passing the
> > necessary info through all code paths to __mod_memcg_state() and
> > __mod_memcg_lruvec_state(), which is far from ideal.
> >
> > Again, I am sorry if these discussions are late, I didn't have time to
> > look at previous versions of this patchset.
> >
>
> Not late, thanks for your feedback.
>
> > >
> > > > > > b) Since moving charged pages between memcgs is now becoming easier by
> > > > > > using the APIs of obj_cgroup, I wonder if this opens the door for
> > > > > > future work to transfer charges to memcgs that are actually using
> > > > > > reparented resources. For example, let's say cgroup A reads a few
> > > > > > pages into page cache, and then they are no longer used by cgroup A.
> > > > > > cgroup B, however, is using the same pages that are currently charged
> > > > > > to cgroup A, so it keeps taxing cgroup A for its use. When cgroup A
> > > > > > dies, and these pages are reparented to A's parent, can we possibly
> > > > > > mark these reparented pages (maybe in the page tables somewhere) so
> > > > > > that next time they get accessed we recharge them to B instead
> > > > > > (possibly asynchronously)?
> > > > > > I don't have much experience about page tables but I am pretty sure
> > > > > > they are loaded so maybe there is no room in PTEs for something like
> > > > > > this, but I have always wondered about what we can do for this case
> > > > > > where a cgroup is consistently using memory charged to another cgroup.
> > > > > > Maybe when this memory is reparented is a good point in time to decide
> > > > > > to recharge appropriately. It would also fix the reparenting leak to
> > > > > > root problem (if it even exists).
> > > > > >
> > > > >
> > > > > From my point of view, this is going to be an improvement to the memcg
> > > > > subsystem in the future.  IIUC, most reparented pages are page cache
> > > > > pages that are not mapped into userspace, so page tables are not a
> > > > > suitable place to record this information. However, we already have this
> > > > > information in struct obj_cgroup and struct mem_cgroup. If a page's
> > > > > obj_cgroup is not equal to the page's obj_cgroup->memcg->objcg, it means
> > > > > this page has been reparented. I am thinking about whether the place where
> > > > > a page is mapped (probably the page fault path) or where a page (cache) is
> > > > > written (usually the vfs write path) is suitable to transfer the page's
> > > > > memcg from one to another. But this needs more thought, e.g. how to decide
> > > > > whether a reparented page needs to be transferred? If we need more
> > > > > information to make this decision, where do we store it? These are my
> > > > > initial thoughts on this question.
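
The check described above could look roughly like this sketch, assuming the
obj_cgroup-based folio layout from this series (folio_objcg() and the
objcg->memcg / memcg->objcg pointers come from the discussion; the caller is
assumed to hold rcu_read_lock()):

```c
/*
 * Sketch only, assuming the obj_cgroup-based layout from this series:
 * a folio whose objcg no longer matches its memcg's current objcg has
 * been reparented at least once.
 */
static inline bool folio_is_reparented(struct folio *folio)
{
	struct obj_cgroup *objcg = folio_objcg(folio);

	return objcg && objcg != rcu_dereference(objcg->memcg)->objcg;
}
```
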
> > > > >
> > > > > Thanks.
> > > > >
> > > > > > Thanks again for this work and please excuse my ignorance if any part
> > > > > > of what I said doesn't make sense :)
> > > > > >
> > > > > > >
> > > > > > > ```bash
> > > > > > > #!/bin/bash
> > > > > > >
> > > > > > > dd if=/dev/zero of=temp bs=4096 count=1
> > > > > > > cat /proc/cgroups | grep memory
> > > > > > >
> > > > > > > for i in {0..2000}
> > > > > > > do
> > > > > > >         mkdir /sys/fs/cgroup/memory/test$i
> > > > > > >         echo $$ > /sys/fs/cgroup/memory/test$i/cgroup.procs
> > > > > > >         cat temp >> log
> > > > > > >         echo $$ > /sys/fs/cgroup/memory/cgroup.procs
> > > > > > >         rmdir /sys/fs/cgroup/memory/test$i
> > > > > > > done
> > > > > > >
> > > > > > > cat /proc/cgroups | grep memory
> > > > > > >
> > > > > > > rm -f temp log
> > > > > > > ```
> > > > > > >
> > > > > > > v5: https://lore.kernel.org/all/20220530074919.46352-1-songmuchun@bytedance.com/
> > > > > > > v4: https://lore.kernel.org/all/20220524060551.80037-1-songmuchun@bytedance.com/
> > > > > > > v3: https://lore.kernel.org/all/20220216115132.52602-1-songmuchun@bytedance.com/
> > > > > > > v2: https://lore.kernel.org/all/20210916134748.67712-1-songmuchun@bytedance.com/
> > > > > > > v1: https://lore.kernel.org/all/20210814052519.86679-1-songmuchun@bytedance.com/
> > > > > > > RFC v4: https://lore.kernel.org/all/20210527093336.14895-1-songmuchun@bytedance.com/
> > > > > > > RFC v3: https://lore.kernel.org/all/20210421070059.69361-1-songmuchun@bytedance.com/
> > > > > > > RFC v2: https://lore.kernel.org/all/20210409122959.82264-1-songmuchun@bytedance.com/
> > > > > > > RFC v1: https://lore.kernel.org/all/20210330101531.82752-1-songmuchun@bytedance.com/
> > > > > > >
> > > > > > > v6:
> > > > > > >  - Collect Acked-by and Reviewed-by from Roman and Michal Koutný. Thanks.
> > > > > > >  - Rebase to mm-unstable.
> > > > > > >
> > > > > > > v5:
> > > > > > >  - Lots of improvements from Johannes, Roman and Waiman.
> > > > > > >  - Fix lockdep warning reported by kernel test robot.
> > > > > > >  - Add two new patches to do code cleanup.
> > > > > > >  - Collect Acked-by and Reviewed-by from Johannes and Roman.
> > > > > > >  - I didn't replace local_irq_disable/enable() to local_lock/unlock_irq() since
> > > > > > >    local_lock/unlock_irq() takes a parameter, so it needs more thought to convert
> > > > > > >    it to local_lock.  It could be an improvement in the future.
> > > > > > >
> > > > > > > v4:
> > > > > > >  - Resend and rebased on v5.18.
> > > > > > >
> > > > > > > v3:
> > > > > > >  - Removed the Acked-by tags from Roman since this version is rebased on
> > > > > > >    the folio-related changes.
> > > > > > >
> > > > > > > v2:
> > > > > > >  - Rename obj_cgroup_release_kmem() to obj_cgroup_release_bytes() and the
> > > > > > >    dependencies of CONFIG_MEMCG_KMEM (suggested by Roman, Thanks).
> > > > > > >  - Rebase to linux 5.15-rc1.
> > > > > > >  - Add a new patch to clean up mem_cgroup_kmem_disabled().
> > > > > > >
> > > > > > > v1:
> > > > > > >  - Drop RFC tag.
> > > > > > >  - Rebase to linux next-20210811.
> > > > > > >
> > > > > > > RFC v4:
> > > > > > >  - Collect Acked-by from Roman.
> > > > > > >  - Rebase to linux next-20210525.
> > > > > > >  - Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
> > > > > > >  - Change the patch 1 title to "prepare objcg API for non-kmem usage".
> > > > > > >  - Convert reparent_ops_head to an array in patch 8.
> > > > > > >
> > > > > > > Thanks for Roman's review and suggestions.
> > > > > > >
> > > > > > > RFC v3:
> > > > > > >  - Drop the code cleanup and simplification patches. Gather those patches
> > > > > > >    into a separate series[1].
> > > > > > >  - Rework patch #1 suggested by Johannes.
> > > > > > >
> > > > > > > RFC v2:
> > > > > > >  - Collect Acked-by tags by Johannes. Thanks.
> > > > > > >  - Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
> > > > > > >  - Fix move_pages_to_lru().
> > > > > > >
> > > > > > > Muchun Song (11):
> > > > > > >   mm: memcontrol: remove dead code and comments
> > > > > > >   mm: rename unlock_page_lruvec{_irq, _irqrestore} to
> > > > > > >     lruvec_unlock{_irq, _irqrestore}
> > > > > > >   mm: memcontrol: prepare objcg API for non-kmem usage
> > > > > > >   mm: memcontrol: make lruvec lock safe when LRU pages are reparented
> > > > > > >   mm: vmscan: rework move_pages_to_lru()
> > > > > > >   mm: thp: make split queue lock safe when LRU pages are reparented
> > > > > > >   mm: memcontrol: make all the callers of {folio,page}_memcg() safe
> > > > > > >   mm: memcontrol: introduce memcg_reparent_ops
> > > > > > >   mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
> > > > > > >   mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function
> > > > > > >   mm: lru: use lruvec lock to serialize memcg changes
> > > > > > >
> > > > > > >  fs/buffer.c                      |   4 +-
> > > > > > >  fs/fs-writeback.c                |  23 +-
> > > > > > >  include/linux/memcontrol.h       | 218 +++++++++------
> > > > > > >  include/linux/mm_inline.h        |   6 +
> > > > > > >  include/trace/events/writeback.h |   5 +
> > > > > > >  mm/compaction.c                  |  39 ++-
> > > > > > >  mm/huge_memory.c                 | 153 ++++++++--
> > > > > > >  mm/memcontrol.c                  | 584 +++++++++++++++++++++++++++------------
> > > > > > >  mm/migrate.c                     |   4 +
> > > > > > >  mm/mlock.c                       |   2 +-
> > > > > > >  mm/page_io.c                     |   5 +-
> > > > > > >  mm/swap.c                        |  49 ++--
> > > > > > >  mm/vmscan.c                      |  66 ++---
> > > > > > >  13 files changed, 776 insertions(+), 382 deletions(-)
> > > > > > >
> > > > > > >
> > > > > > > base-commit: 882be1ed6b1b5073fc88552181b99bd2b9c0031f
> > > > > > > --
> > > > > > > 2.11.0
> > > > > > >
> > > > > > >
> > > > > >
> > > >
> >

^ permalink raw reply	[flat|nested] 54+ messages in thread


end of thread, other threads:[~2022-07-09  9:24 UTC | newest]

Thread overview: 54+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-21 12:56 [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages Muchun Song
2022-06-21 12:56 ` Muchun Song
2022-06-21 12:56 ` [PATCH v6 01/11] mm: memcontrol: remove dead code and comments Muchun Song
2022-06-21 12:56 ` [PATCH v6 02/11] mm: rename unlock_page_lruvec{_irq, _irqrestore} to lruvec_unlock{_irq, _irqrestore} Muchun Song
2022-06-21 12:56 ` [PATCH v6 03/11] mm: memcontrol: prepare objcg API for non-kmem usage Muchun Song
2022-06-21 12:56   ` Muchun Song
2022-06-21 12:56 ` [PATCH v6 04/11] mm: memcontrol: make lruvec lock safe when LRU pages are reparented Muchun Song
2022-06-21 12:56   ` Muchun Song
2022-06-21 12:56 ` [PATCH v6 05/11] mm: vmscan: rework move_pages_to_lru() Muchun Song
2022-06-21 12:56   ` Muchun Song
2022-06-21 12:56 ` [PATCH v6 06/11] mm: thp: make split queue lock safe when LRU pages are reparented Muchun Song
2022-06-21 12:56   ` Muchun Song
2022-06-21 12:56 ` [PATCH v6 07/11] mm: memcontrol: make all the callers of {folio,page}_memcg() safe Muchun Song
2022-06-21 12:56   ` Muchun Song
2022-06-21 12:56 ` [PATCH v6 08/11] mm: memcontrol: introduce memcg_reparent_ops Muchun Song
2022-06-21 12:56 ` [PATCH v6 09/11] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages Muchun Song
2022-06-21 12:56   ` Muchun Song
2022-06-21 12:56 ` [PATCH v6 10/11] mm: lru: add VM_WARN_ON_ONCE_FOLIO to lru maintenance function Muchun Song
2022-06-21 12:56   ` Muchun Song
2022-06-21 12:56 ` [PATCH v6 11/11] mm: lru: use lruvec lock to serialize memcg changes Muchun Song
2022-06-21 12:56   ` Muchun Song
2022-06-26 10:32 ` [PATCH v6 00/11] Use obj_cgroup APIs to charge the LRU pages Yosry Ahmed
2022-06-26 10:32   ` Yosry Ahmed
2022-06-27  7:11   ` Muchun Song
2022-06-27  7:11     ` Muchun Song
2022-06-27  8:05     ` Yosry Ahmed
2022-06-27  8:05       ` Yosry Ahmed
2022-06-27 10:13       ` Muchun Song
2022-06-27 10:13         ` Muchun Song
2022-06-27 16:46         ` Yosry Ahmed
2022-06-27 16:46           ` Yosry Ahmed
2022-06-28  1:24         ` Roman Gushchin
2022-06-28  1:24           ` Roman Gushchin
2022-06-28  1:31           ` Yosry Ahmed
2022-06-28  1:31             ` Yosry Ahmed
2022-06-28  1:37             ` Roman Gushchin
2022-06-28  1:37               ` Roman Gushchin
2022-06-28  1:45               ` Yosry Ahmed
2022-06-28  1:45                 ` Yosry Ahmed
2022-06-27 10:43       ` Mika Penttilä
2022-06-27 10:43         ` Mika Penttilä
2022-06-27 16:49         ` Yosry Ahmed
2022-06-27 16:49           ` Yosry Ahmed
2022-07-07 22:14     ` Yosry Ahmed
2022-07-07 22:14       ` Yosry Ahmed
2022-07-08  6:52       ` Muchun Song
2022-07-08  6:52         ` Muchun Song
2022-07-08  9:26         ` Yosry Ahmed
2022-07-08  9:26           ` Yosry Ahmed
2022-07-09  5:51           ` Muchun Song
2022-07-09  9:23             ` Yosry Ahmed
2022-07-09  9:23               ` Yosry Ahmed
2022-07-03 23:23 ` Andrew Morton
2022-07-03 23:23   ` Andrew Morton
