* [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages
@ 2021-08-14  5:25 Muchun Song
  2021-08-14  5:25 ` [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage Muchun Song
                   ` (11 more replies)
  0 siblings, 12 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

Hi,

This version is based on next-20210811 and drops the RFC tag from the last
version. Comments and reviews are welcome. Thanks.

Since the following patchsets were applied, all kernel memory has been
charged with the new obj_cgroup APIs:

	[v17,00/19] The new cgroup slab memory controller[1]
	[v5,0/7] Use obj_cgroup APIs to charge kmem pages[2]

But user memory allocations (LRU pages) can still pin memcgs for a long time -
this exists at a larger scale and causes recurring problems in the real
world: the page cache doesn't get reclaimed for a long time, or is used by the
second, third, fourth, ... instance of the same job that was restarted into
a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
and make page reclaim very inefficient.

We can fix this problem by converting LRU pages and most other raw memcg pins
to the objcg direction, so that LRU pages no longer pin their memcg.

This patchset makes LRU pages drop their reference to the memory cgroup by
using the obj_cgroup APIs. With the series applied, the number of dying
cgroups no longer keeps growing when we run the following test script.

```bash
#!/bin/bash

cat /proc/cgroups | grep memory

cd /sys/fs/cgroup/memory

for i in {1..500}
do
	mkdir test
	echo $$ > test/cgroup.procs
	sleep 60 &
	echo $$ > cgroup.procs
	echo `cat test/cgroup.procs` > cgroup.procs
	rmdir test
done

cat /proc/cgroups | grep memory
```
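
The two "cat /proc/cgroups | grep memory" commands print the memory
controller line before and after the loop; the third column (num_cgroups)
is the value to compare. An illustrative line (the numbers are made up)
looks like this:

```
memory  2  105  1
```

If dying cgroups pile up, the second reading keeps growing from run to run;
with this series applied it stays close to the first reading.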

Thanks.

[1] https://lore.kernel.org/linux-mm/20200623015846.1141975-1-guro@fb.com/
[2] https://lore.kernel.org/linux-mm/20210319163821.20704-1-songmuchun@bytedance.com/

Changelog in v1:
  1. Drop RFC tag.
  2. Rebase to linux next-20210811.

Changelog in RFC v4:
  1. Collect Acked-by from Roman.
  2. Rebase to linux next-20210525.
  3. Rename obj_cgroup_release_uncharge() to obj_cgroup_release_kmem().
  4. Change the patch 1 title to "prepare objcg API for non-kmem usage".
  5. Convert reparent_ops_head to an array in patch 8.

  Thanks for Roman's review and suggestions.

Changelog in RFC v3:
  1. Drop the code cleanup and simplification patches. Gather those patches
     into a separate series[1].
  2. Rework patch #1 suggested by Johannes.

Changelog in RFC v2:
  1. Collect Acked-by tags by Johannes. Thanks.
  2. Rework lruvec_holds_page_lru_lock() suggested by Johannes. Thanks.
  3. Fix move_pages_to_lru().

Muchun Song (12):
  mm: memcontrol: prepare objcg API for non-kmem usage
  mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave
  mm: memcontrol: make lruvec lock safe when LRU pages are reparented
  mm: vmscan: rework move_pages_to_lru()
  mm: thp: introduce folio_split_queue_lock{_irqsave}()
  mm: thp: make split queue lock safe when LRU pages are reparented
  mm: memcontrol: make all the callers of {folio,page}_memcg() safe
  mm: memcontrol: introduce memcg_reparent_ops
  mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
  mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()
  mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function
  mm: lru: use lruvec lock to serialize memcg changes

 Documentation/admin-guide/cgroup-v1/memory.rst |   2 +-
 fs/buffer.c                                    |  11 +-
 fs/fs-writeback.c                              |  23 +-
 include/linux/memcontrol.h                     | 184 ++++----
 include/linux/mm.h                             |   1 +
 include/linux/mm_inline.h                      |  15 +-
 mm/compaction.c                                |  39 +-
 mm/filemap.c                                   |   2 +-
 mm/huge_memory.c                               | 162 ++++++--
 mm/memcontrol.c                                | 554 ++++++++++++++++++-------
 mm/migrate.c                                   |   4 +
 mm/page-writeback.c                            |   6 +-
 mm/page_io.c                                   |   5 +-
 mm/rmap.c                                      |  14 +-
 mm/swap.c                                      |  49 +--
 mm/vmscan.c                                    |  57 ++-
 16 files changed, 778 insertions(+), 350 deletions(-)

-- 
2.11.0


^ permalink raw reply	[flat|nested] 29+ messages in thread

* [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-14 22:23     ` kernel test robot
  2021-08-18  3:01   ` Roman Gushchin
  2021-08-14  5:25 ` [PATCH v1 02/12] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave Muchun Song
                   ` (10 subsequent siblings)
  11 siblings, 2 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

Pagecache pages are charged at allocation time and hold a reference to
the original memory cgroup until they are reclaimed. Depending on the
memory pressure, specific patterns of page sharing between different
cgroups, and the cgroup creation and destruction rates, a large number
of dying memory cgroups can be pinned by pagecache pages. This makes
page reclaim less efficient and wastes memory.

We can fix this by converting LRU pages and most other raw memcg pins
to the objcg direction, after which page->memcg_data will always point
to an object cgroup.
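
As a conceptual sketch only (folio_objcg() below is a made-up accessor,
not a helper added by this patch), looking up the memcg of an LRU page
then goes through the objcg, so reparenting only has to switch
objcg->memcg rather than touch every page:

	static inline struct mem_cgroup *lru_folio_memcg(struct folio *folio)
	{
		/*
		 * Illustrative accessor for the objcg stored in
		 * folio->memcg_data.
		 */
		struct obj_cgroup *objcg = folio_objcg(folio);

		return objcg ? obj_cgroup_memcg(objcg) : NULL;
	}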

Therefore, the objcg infrastructure no longer serves only
CONFIG_MEMCG_KMEM. In this patch, move the objcg infrastructure out of
the scope of CONFIG_MEMCG_KMEM so that LRU pages can reuse it for
charging.

LRU pages are not accounted at the root level, but their
page->memcg_data still points to the root_mem_cgroup, so
page->memcg_data of an LRU page always points to a valid pointer.
However, the root_mem_cgroup does not have an object cgroup. If we use
the obj_cgroup APIs to charge LRU pages, page->memcg_data has to be set
to a root object cgroup, so we also allocate an object cgroup for the
root_mem_cgroup.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/memcontrol.h |  4 ++-
 mm/memcontrol.c            | 66 +++++++++++++++++++++++++++++-----------------
 2 files changed, 45 insertions(+), 25 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 0ff146486aed..41a35de93d75 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -221,7 +221,9 @@ struct memcg_cgwb_frn {
 struct obj_cgroup {
 	struct percpu_ref refcnt;
 	struct mem_cgroup *memcg;
+#ifdef CONFIG_MEMCG_KMEM
 	atomic_t nr_charged_bytes;
+#endif
 	union {
 		struct list_head list;
 		struct rcu_head rcu;
@@ -319,9 +321,9 @@ struct mem_cgroup {
 #ifdef CONFIG_MEMCG_KMEM
 	int kmemcg_id;
 	enum memcg_kmem_state kmem_state;
+#endif
 	struct obj_cgroup __rcu *objcg;
 	struct list_head objcg_list; /* list of inherited objcgs */
-#endif
 
 	MEMCG_PADDING(_pad2_);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3e7c205a1852..7df2176e4f02 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -261,7 +261,6 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
 	return container_of(vmpr, struct mem_cgroup, vmpressure);
 }
 
-#ifdef CONFIG_MEMCG_KMEM
 extern spinlock_t css_set_lock;
 
 bool mem_cgroup_kmem_disabled(void)
@@ -269,15 +268,14 @@ bool mem_cgroup_kmem_disabled(void)
 	return cgroup_memory_nokmem;
 }
 
+#ifdef CONFIG_MEMCG_KMEM
 static void obj_cgroup_uncharge_pages(struct obj_cgroup *objcg,
 				      unsigned int nr_pages);
 
-static void obj_cgroup_release(struct percpu_ref *ref)
+static void obj_cgroup_release_kmem(struct obj_cgroup *objcg)
 {
-	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
 	unsigned int nr_bytes;
 	unsigned int nr_pages;
-	unsigned long flags;
 
 	/*
 	 * At this point all allocated objects are freed, and
@@ -291,9 +289,9 @@ static void obj_cgroup_release(struct percpu_ref *ref)
 	 * 3) CPU1: a process from another memcg is allocating something,
 	 *          the stock if flushed,
 	 *          objcg->nr_charged_bytes = PAGE_SIZE - 92
-	 * 5) CPU0: we do release this object,
+	 * 4) CPU0: we do release this object,
 	 *          92 bytes are added to stock->nr_bytes
-	 * 6) CPU0: stock is flushed,
+	 * 5) CPU0: stock is flushed,
 	 *          92 bytes are added to objcg->nr_charged_bytes
 	 *
 	 * In the result, nr_charged_bytes == PAGE_SIZE.
@@ -305,6 +303,19 @@ static void obj_cgroup_release(struct percpu_ref *ref)
 
 	if (nr_pages)
 		obj_cgroup_uncharge_pages(objcg, nr_pages);
+}
+#else
+static inline void obj_cgroup_release_kmem(struct obj_cgroup *objcg)
+{
+}
+#endif
+
+static void obj_cgroup_release(struct percpu_ref *ref)
+{
+	struct obj_cgroup *objcg = container_of(ref, struct obj_cgroup, refcnt);
+	unsigned long flags;
+
+	obj_cgroup_release_kmem(objcg);
 
 	spin_lock_irqsave(&css_set_lock, flags);
 	list_del(&objcg->list);
@@ -333,10 +344,14 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
 	return objcg;
 }
 
-static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
-				  struct mem_cgroup *parent)
+static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 {
 	struct obj_cgroup *objcg, *iter;
+	struct mem_cgroup *parent;
+
+	parent = parent_mem_cgroup(memcg);
+	if (!parent)
+		parent = root_mem_cgroup;
 
 	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
 
@@ -355,6 +370,7 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
 	percpu_ref_kill(&objcg->refcnt);
 }
 
+#ifdef CONFIG_MEMCG_KMEM
 /*
  * This will be used as a shrinker list's index.
  * The main reason for not using cgroup id for this:
@@ -3579,7 +3595,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
-	struct obj_cgroup *objcg;
 	int memcg_id;
 
 	if (cgroup_memory_nokmem)
@@ -3592,14 +3607,6 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 	if (memcg_id < 0)
 		return memcg_id;
 
-	objcg = obj_cgroup_alloc();
-	if (!objcg) {
-		memcg_free_cache_id(memcg_id);
-		return -ENOMEM;
-	}
-	objcg->memcg = memcg;
-	rcu_assign_pointer(memcg->objcg, objcg);
-
 	static_branch_enable(&memcg_kmem_enabled_key);
 
 	memcg->kmemcg_id = memcg_id;
@@ -3623,8 +3630,6 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 	if (!parent)
 		parent = root_mem_cgroup;
 
-	memcg_reparent_objcgs(memcg, parent);
-
 	kmemcg_id = memcg->kmemcg_id;
 	BUG_ON(kmemcg_id < 0);
 
@@ -5151,8 +5156,8 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
 	memcg->socket_pressure = jiffies;
 #ifdef CONFIG_MEMCG_KMEM
 	memcg->kmemcg_id = -1;
-	INIT_LIST_HEAD(&memcg->objcg_list);
 #endif
+	INIT_LIST_HEAD(&memcg->objcg_list);
 #ifdef CONFIG_CGROUP_WRITEBACK
 	INIT_LIST_HEAD(&memcg->cgwb_list);
 	for (i = 0; i < MEMCG_CGWB_FRN_CNT; i++)
@@ -5224,16 +5229,22 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
 static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 {
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+	struct obj_cgroup *objcg;
 
 	/*
 	 * A memcg must be visible for expand_shrinker_info()
 	 * by the time the maps are allocated. So, we allocate maps
 	 * here, when for_each_mem_cgroup() can't skip it.
 	 */
-	if (alloc_shrinker_info(memcg)) {
-		mem_cgroup_id_remove(memcg);
-		return -ENOMEM;
-	}
+	if (alloc_shrinker_info(memcg))
+		goto remove_id;
+
+	objcg = obj_cgroup_alloc();
+	if (!objcg)
+		goto free_shrinker;
+
+	objcg->memcg = memcg;
+	rcu_assign_pointer(memcg->objcg, objcg);
 
 	/* Online state pins memcg ID, memcg ID pins CSS */
 	refcount_set(&memcg->id.ref, 1);
@@ -5243,6 +5254,12 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 		queue_delayed_work(system_unbound_wq, &stats_flush_dwork,
 				   2UL*HZ);
 	return 0;
+
+free_shrinker:
+	free_shrinker_info(memcg);
+remove_id:
+	mem_cgroup_id_remove(memcg);
+	return -ENOMEM;
 }
 
 static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
@@ -5266,6 +5283,7 @@ static void mem_cgroup_css_offline(struct cgroup_subsys_state *css)
 	page_counter_set_low(&memcg->memory, 0);
 
 	memcg_offline_kmem(memcg);
+	memcg_reparent_objcgs(memcg);
 	reparent_shrinker_deferred(memcg);
 	wb_memcg_offline(memcg);
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 02/12] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
  2021-08-14  5:25 ` [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-14  5:25 ` [PATCH v1 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented Muchun Song
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

If we reuse the objcg APIs to charge LRU pages, folio_memcg() can change
while LRU pages are being reparented. In this case, we need to acquire
the new lruvec lock.

    lruvec = folio_lruvec(folio);

    // The page is reparented.

    compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);

    // Acquired the wrong lruvec lock and need to retry.

But compact_lock_irqsave() only takes the lruvec lock as a parameter, so
it cannot detect this change. If it took the folio as a parameter to
acquire the lruvec lock, then when the folio's memcg changed we could
use folio_memcg() to detect whether we need to reacquire the new lruvec
lock. So compact_lock_irqsave() is not suitable for us. Similar to
folio_lruvec_lock_irqsave(), introduce
compact_folio_lruvec_lock_irqsave() to acquire the lruvec lock in the
compaction routine.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Roman Gushchin <guro@fb.com>
---
 mm/compaction.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index fbc60f964c38..3131558a7c31 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -509,6 +509,29 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
+static struct lruvec *
+compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
+				  struct compact_control *cc)
+{
+	struct lruvec *lruvec;
+
+	lruvec = folio_lruvec(folio);
+
+	/* Track if the lock is contended in async mode */
+	if (cc->mode == MIGRATE_ASYNC && !cc->contended) {
+		if (spin_trylock_irqsave(&lruvec->lru_lock, *flags))
+			goto out;
+
+		cc->contended = true;
+	}
+
+	spin_lock_irqsave(&lruvec->lru_lock, *flags);
+out:
+	lruvec_memcg_debug(lruvec, folio);
+
+	return lruvec;
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -837,6 +860,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 	/* Time to isolate some pages for migration */
 	for (; low_pfn < end_pfn; low_pfn++) {
+		struct folio *folio;
 
 		if (skip_on_failure && low_pfn >= next_skip_pfn) {
 			/*
@@ -1022,18 +1046,17 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (!TestClearPageLRU(page))
 			goto isolate_fail_put;
 
-		lruvec = folio_lruvec(page_folio(page));
+		folio = page_folio(page);
+		lruvec = folio_lruvec(folio);
 
 		/* If we already hold the lock, we can skip some rechecking */
 		if (lruvec != locked) {
 			if (locked)
 				unlock_page_lruvec_irqrestore(locked, flags);
 
-			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
+			lruvec = compact_folio_lruvec_lock_irqsave(folio, &flags, cc);
 			locked = lruvec;
 
-			lruvec_memcg_debug(lruvec, page_folio(page));
-
 			/* Try get exclusive access under lock */
 			if (!skip_updated) {
 				skip_updated = true;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
  2021-08-14  5:25 ` [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage Muchun Song
  2021-08-14  5:25 ` [PATCH v1 02/12] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-18  3:18   ` Roman Gushchin
  2021-08-14  5:25 ` [PATCH v1 04/12] mm: vmscan: rework move_pages_to_lru() Muchun Song
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

The diagram below shows how to make the folio lruvec lock safe when LRU
pages are reparented.

folio_lruvec_lock(folio)
    retry:
	lruvec = folio_lruvec(folio);

        // The folio is reparented at this time.
        spin_lock(&lruvec->lru_lock);

        if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
            // Acquired the wrong lruvec lock and need to retry.
            // Because this folio is on the parent memcg lruvec list.
            goto retry;

        // If we reach here, it means that folio_memcg(folio) is stable.

memcg_reparent_objcgs(memcg)
    // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
    spin_lock(&lruvec->lru_lock);
    spin_lock(&lruvec_parent->lru_lock);

    // Move all the pages from the lruvec list to the parent lruvec list.

    spin_unlock(&lruvec_parent->lru_lock);
    spin_unlock(&lruvec->lru_lock);

After we acquire the lruvec lock, we need to check whether the folio has
been reparented. If so, we need to reacquire the new lruvec lock. The
LRU-page reparenting path will also acquire the lruvec lock (implemented
in a later patch), so folio_memcg() cannot change while we hold the
lruvec lock.

Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) once we
hold the lruvec lock, the lruvec_memcg_debug() check is pointless, so
remove it.

This is a preparation for reparenting the LRU pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Roman Gushchin <guro@fb.com>
---
 include/linux/memcontrol.h | 18 +++-----------
 mm/compaction.c            | 10 +++++++-
 mm/memcontrol.c            | 62 +++++++++++++++++++++++++++++-----------------
 mm/swap.c                  |  4 +++
 4 files changed, 55 insertions(+), 39 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 41a35de93d75..5a8f85bd9bbf 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -769,7 +769,9 @@ static inline struct lruvec *mem_cgroup_lruvec(struct mem_cgroup *memcg,
  * folio_lruvec - return lruvec for isolating/putting an LRU folio
  * @folio: Pointer to the folio.
  *
- * This function relies on folio->mem_cgroup being stable.
+ * The lruvec can be changed to its parent lruvec when the page reparented.
+ * The caller need to recheck if it cares about this changes (just like
+ * folio_lruvec_lock() does).
  */
 static inline struct lruvec *folio_lruvec(struct folio *folio)
 {
@@ -788,15 +790,6 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio);
 struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 						unsigned long *flags);
 
-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio);
-#else
-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-#endif
-
 static inline
 struct mem_cgroup *mem_cgroup_from_css(struct cgroup_subsys_state *css){
 	return css ? container_of(css, struct mem_cgroup, css) : NULL;
@@ -1243,11 +1236,6 @@ static inline struct lruvec *folio_lruvec(struct folio *folio)
 	return &pgdat->__lruvec;
 }
 
-static inline
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-}
-
 static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg)
 {
 	return NULL;
diff --git a/mm/compaction.c b/mm/compaction.c
index 3131558a7c31..97d72f6b3dd5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -515,6 +515,8 @@ compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
 {
 	struct lruvec *lruvec;
 
+	rcu_read_lock();
+retry:
 	lruvec = folio_lruvec(folio);
 
 	/* Track if the lock is contended in async mode */
@@ -527,7 +529,13 @@ compact_folio_lruvec_lock_irqsave(struct folio *folio, unsigned long *flags,
 
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
 out:
-	lruvec_memcg_debug(lruvec, folio);
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+		goto retry;
+	}
+
+	/* See the comments in folio_lruvec_lock(). */
+	rcu_read_unlock();
 
 	return lruvec;
 }
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7df2176e4f02..7955da38e385 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1146,23 +1146,6 @@ int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 	return ret;
 }
 
-#ifdef CONFIG_DEBUG_VM
-void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
-{
-	struct mem_cgroup *memcg;
-
-	if (mem_cgroup_disabled())
-		return;
-
-	memcg = folio_memcg(folio);
-
-	if (!memcg)
-		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != root_mem_cgroup, folio);
-	else
-		VM_BUG_ON_FOLIO(lruvec_memcg(lruvec) != memcg, folio);
-}
-#endif
-
 /**
  * folio_lruvec_lock - lock and return lruvec for a given folio.
  * @folio: Pointer to the folio.
@@ -1175,20 +1158,43 @@ void lruvec_memcg_debug(struct lruvec *lruvec, struct folio *folio)
  */
 struct lruvec *folio_lruvec_lock(struct folio *folio)
 {
-	struct lruvec *lruvec = folio_lruvec(folio);
+	struct lruvec *lruvec;
 
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
 	spin_lock(&lruvec->lru_lock);
-	lruvec_memcg_debug(lruvec, folio);
+
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock(&lruvec->lru_lock);
+		goto retry;
+	}
+
+	/*
+	 * Preemption is disabled in the internal of spin_lock, which can serve
+	 * as RCU read-side critical sections.
+	 */
+	rcu_read_unlock();
 
 	return lruvec;
 }
 
 struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 {
-	struct lruvec *lruvec = folio_lruvec(folio);
+	struct lruvec *lruvec;
 
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
 	spin_lock_irq(&lruvec->lru_lock);
-	lruvec_memcg_debug(lruvec, folio);
+
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock_irq(&lruvec->lru_lock);
+		goto retry;
+	}
+
+	/* See the comments in folio_lruvec_lock(). */
+	rcu_read_unlock();
 
 	return lruvec;
 }
@@ -1196,10 +1202,20 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 		unsigned long *flags)
 {
-	struct lruvec *lruvec = folio_lruvec(folio);
+	struct lruvec *lruvec;
 
+	rcu_read_lock();
+retry:
+	lruvec = folio_lruvec(folio);
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
-	lruvec_memcg_debug(lruvec, folio);
+
+	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
+		goto retry;
+	}
+
+	/* See the comments in folio_lruvec_lock(). */
+	rcu_read_unlock();
 
 	return lruvec;
 }
diff --git a/mm/swap.c b/mm/swap.c
index 2bca632b73df..9554ff008fe6 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -295,6 +295,10 @@ void lru_note_cost(struct lruvec *lruvec, bool file, unsigned int nr_pages)
 
 void lru_note_cost_folio(struct folio *folio)
 {
+	/*
+	 * The rcu read lock is held by the caller, so we do not need to
+	 * care about the lruvec returned by folio_lruvec() being released.
+	 */
 	lru_note_cost(folio_lruvec(folio), folio_is_file_lru(folio),
 			folio_nr_pages(folio));
 }
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 04/12] mm: vmscan: rework move_pages_to_lru()
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
                   ` (2 preceding siblings ...)
  2021-08-14  5:25 ` [PATCH v1 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-14  5:25 ` [PATCH v1 05/12] mm: thp: introduce folio_split_queue_lock{_irqsave}() Muchun Song
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

In a later patch, we will reparent the LRU pages. The pages being moved
to the appropriate LRU list can be reparented while move_pages_to_lru()
is in progress, so it is wrong for the caller to hold a single lruvec
lock across the whole loop. Instead, use the more general interface
folio_lruvec_relock_irq() to acquire the correct lruvec lock.
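
For reference, folio_lruvec_relock_irq() is already in the tree; roughly
(a from-memory sketch, not a verbatim copy) it only drops and retakes
the lock when the folio no longer matches the currently locked lruvec:

	static inline struct lruvec *folio_lruvec_relock_irq(struct folio *folio,
			struct lruvec *locked_lruvec)
	{
		if (locked_lruvec) {
			/* Still the right lruvec? Keep holding it. */
			if (folio_matches_lruvec(folio, locked_lruvec))
				return locked_lruvec;

			unlock_page_lruvec_irq(locked_lruvec);
		}

		/* With this series, this retries if the folio was reparented. */
		return folio_lruvec_lock_irq(folio);
	}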

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/mm.h |  1 +
 mm/vmscan.c        | 49 +++++++++++++++++++++++++------------------------
 2 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ce8fc0fd6d6e..1e7f06bc5f2d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -227,6 +227,7 @@ int overcommit_policy_handler(struct ctl_table *, int, void *, size_t *,
 #define PAGE_ALIGNED(addr)	IS_ALIGNED((unsigned long)(addr), PAGE_SIZE)
 
 #define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
+#define lru_to_folio(head) (list_entry((head)->prev, struct folio, lru))
 
 void setup_initial_init_mm(void *start_code, void *end_code,
 			   void *end_data, void *brk);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 403a175a720f..8ce42858ad5d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2153,23 +2153,28 @@ static int too_many_isolated(struct pglist_data *pgdat, int file,
  * move_pages_to_lru() moves pages from private @list to appropriate LRU list.
  * On return, @list is reused as a list of pages to be freed by the caller.
  *
- * Returns the number of pages moved to the given lruvec.
+ * Returns the number of pages moved to the appropriate LRU list.
+ *
+ * Note: The caller must not hold any lruvec lock.
  */
-static unsigned int move_pages_to_lru(struct lruvec *lruvec,
-				      struct list_head *list)
+static unsigned int move_pages_to_lru(struct list_head *list)
 {
-	int nr_pages, nr_moved = 0;
+	int nr_moved = 0;
+	struct lruvec *lruvec = NULL;
 	LIST_HEAD(pages_to_free);
-	struct page *page;
 
 	while (!list_empty(list)) {
-		page = lru_to_page(list);
+		int nr_pages;
+		struct folio *folio = lru_to_folio(list);
+		struct page *page = &folio->page;
+
+		lruvec = folio_lruvec_relock_irq(folio, lruvec);
 		VM_BUG_ON_PAGE(PageLRU(page), page);
 		list_del(&page->lru);
 		if (unlikely(!page_evictable(page))) {
-			spin_unlock_irq(&lruvec->lru_lock);
+			unlock_page_lruvec_irq(lruvec);
 			putback_lru_page(page);
-			spin_lock_irq(&lruvec->lru_lock);
+			lruvec = NULL;
 			continue;
 		}
 
@@ -2190,20 +2195,16 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 			__clear_page_lru_flags(page);
 
 			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&lruvec->lru_lock);
+				unlock_page_lruvec_irq(lruvec);
 				destroy_compound_page(page);
-				spin_lock_irq(&lruvec->lru_lock);
+				lruvec = NULL;
 			} else
 				list_add(&page->lru, &pages_to_free);
 
 			continue;
 		}
 
-		/*
-		 * All pages were isolated from the same lruvec (and isolation
-		 * inhibits memcg migration).
-		 */
-		VM_BUG_ON_PAGE(!folio_matches_lruvec(page_folio(page), lruvec), page);
+		VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
 		add_page_to_lru_list(page, lruvec);
 		nr_pages = thp_nr_pages(page);
 		nr_moved += nr_pages;
@@ -2211,6 +2212,8 @@ static unsigned int move_pages_to_lru(struct lruvec *lruvec,
 			workingset_age_nonresident(lruvec, nr_pages);
 	}
 
+	if (lruvec)
+		unlock_page_lruvec_irq(lruvec);
 	/*
 	 * To save our caller's stack, now use input list for pages to free.
 	 */
@@ -2284,16 +2287,16 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 	nr_reclaimed = shrink_page_list(&page_list, pgdat, sc, &stat, false);
 
-	spin_lock_irq(&lruvec->lru_lock);
-	move_pages_to_lru(lruvec, &page_list);
+	move_pages_to_lru(&page_list);
 
+	local_irq_disable();
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
 	item = current_is_kswapd() ? PGSTEAL_KSWAPD : PGSTEAL_DIRECT;
 	if (!cgroup_reclaim(sc))
 		__count_vm_events(item, nr_reclaimed);
 	__count_memcg_events(lruvec_memcg(lruvec), item, nr_reclaimed);
 	__count_vm_events(PGSTEAL_ANON + file, nr_reclaimed);
-	spin_unlock_irq(&lruvec->lru_lock);
+	local_irq_enable();
 
 	lru_note_cost(lruvec, file, stat.nr_pageout);
 	mem_cgroup_uncharge_list(&page_list);
@@ -2420,18 +2423,16 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	/*
 	 * Move pages back to the lru list.
 	 */
-	spin_lock_irq(&lruvec->lru_lock);
-
-	nr_activate = move_pages_to_lru(lruvec, &l_active);
-	nr_deactivate = move_pages_to_lru(lruvec, &l_inactive);
+	nr_activate = move_pages_to_lru(&l_active);
+	nr_deactivate = move_pages_to_lru(&l_inactive);
 	/* Keep all free pages in l_active list */
 	list_splice(&l_inactive, &l_active);
 
+	local_irq_disable();
 	__count_vm_events(PGDEACTIVATE, nr_deactivate);
 	__count_memcg_events(lruvec_memcg(lruvec), PGDEACTIVATE, nr_deactivate);
-
 	__mod_node_page_state(pgdat, NR_ISOLATED_ANON + file, -nr_taken);
-	spin_unlock_irq(&lruvec->lru_lock);
+	local_irq_enable();
 
 	mem_cgroup_uncharge_list(&l_active);
 	free_unref_page_list(&l_active);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 05/12] mm: thp: introduce folio_split_queue_lock{_irqsave}()
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
                   ` (3 preceding siblings ...)
  2021-08-14  5:25 ` [PATCH v1 04/12] mm: vmscan: rework move_pages_to_lru() Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-14  8:22     ` kernel test robot
  2021-08-14 10:38     ` kernel test robot
  2021-08-14  5:25 ` [PATCH v1 06/12] mm: thp: make split queue lock safe when LRU pages are reparented Muchun Song
                   ` (6 subsequent siblings)
  11 siblings, 2 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

We should make the THP deferred split queue lock safe when LRU pages are
reparented. Similar to folio_lruvec_lock{_irqsave,_irq}(), introduce
folio_split_queue_lock{_irqsave}() so that the deferred split queue lock
is acquired via the folio, which makes reparenting easier to handle.

In the next patch, we will use a similar approach (just like the lruvec
lock) to make the THP deferred split queue lock safe when LRU pages are
reparented.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/huge_memory.c | 93 +++++++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 69 insertions(+), 24 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ade81c123d87..c49ef28e48c1 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -499,25 +499,70 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 }
 
 #ifdef CONFIG_MEMCG
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue)
 {
-	struct mem_cgroup *memcg = page_memcg(compound_head(page));
-	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+	if (mem_cgroup_disabled())
+		return NULL;
+	return container_of(queue, struct mem_cgroup, deferred_split_queue);
+}
 
-	if (memcg)
-		return &memcg->deferred_split_queue;
-	else
-		return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+	struct mem_cgroup *memcg = folio_memcg(folio);
+
+	return memcg ? &memcg->deferred_split_queue : NULL;
 }
 #else
-static inline struct deferred_split *get_deferred_split_queue(struct page *page)
+static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue)
 {
-	struct pglist_data *pgdat = NODE_DATA(page_to_nid(page));
+	return NULL;
+}
 
-	return &pgdat->deferred_split_queue;
+static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio)
+{
+	return NULL;
 }
 #endif
 
+static struct deferred_split *folio_split_queue(struct folio *folio)
+{
+	struct deferred_split *queue = folio_memcg_split_queue(folio);
+
+	return queue ? : &NODE_DATA(folio_nid(folio))->deferred_split_queue;
+}
+
+static struct deferred_split *folio_split_queue_lock(struct folio *folio)
+{
+	struct deferred_split *queue;
+
+	queue = folio_split_queue(folio);
+	spin_lock(&queue->split_queue_lock);
+
+	return queue;
+}
+
+static struct deferred_split *
+folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
+{
+	struct deferred_split *queue;
+
+	queue = folio_split_queue(folio);
+	spin_lock_irqsave(&queue->split_queue_lock, *flags);
+
+	return queue;
+}
+
+static inline void split_queue_unlock(struct deferred_split *queue)
+{
+	spin_unlock(&queue->split_queue_lock);
+}
+
+static inline void split_queue_unlock_irqrestore(struct deferred_split *queue,
+						 unsigned long flags)
+{
+	spin_unlock_irqrestore(&queue->split_queue_lock, flags);
+}
+
 void prep_transhuge_page(struct page *page)
 {
 	/*
@@ -2610,8 +2655,9 @@ bool can_split_huge_page(struct page *page, int *pextra_pins)
  */
 int split_huge_page_to_list(struct page *page, struct list_head *list)
 {
-	struct page *head = compound_head(page);
-	struct deferred_split *ds_queue = get_deferred_split_queue(head);
+	struct folio *folio = page_folio(page);
+	struct page *head = &folio->page;
+	struct deferred_split *ds_queue;
 	struct anon_vma *anon_vma = NULL;
 	struct address_space *mapping = NULL;
 	int extra_pins, ret;
@@ -2689,13 +2735,13 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 	}
 
 	/* Prevent deferred_split_scan() touching ->_refcount */
-	spin_lock(&ds_queue->split_queue_lock);
+	ds_queue = folio_split_queue_lock(folio);
 	if (page_ref_freeze(head, 1 + extra_pins)) {
 		if (!list_empty(page_deferred_list(head))) {
 			ds_queue->split_queue_len--;
 			list_del(page_deferred_list(head));
 		}
-		spin_unlock(&ds_queue->split_queue_lock);
+		split_queue_unlock(ds_queue);
 		if (mapping) {
 			int nr = thp_nr_pages(head);
 
@@ -2710,7 +2756,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 		__split_huge_page(page, list, end);
 		ret = 0;
 	} else {
-		spin_unlock(&ds_queue->split_queue_lock);
+		split_queue_unlock(ds_queue);
 fail:
 		if (mapping)
 			xa_unlock(&mapping->i_pages);
@@ -2733,24 +2779,22 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
 
 void free_transhuge_page(struct page *page)
 {
-	struct deferred_split *ds_queue = get_deferred_split_queue(page);
+	struct deferred_split *ds_queue;
 	unsigned long flags;
 
-	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags);
 	if (!list_empty(page_deferred_list(page))) {
 		ds_queue->split_queue_len--;
 		list_del(page_deferred_list(page));
 	}
-	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+	split_queue_unlock_irqrestore(ds_queue, flags);
 	free_compound_page(page);
 }
 
 void deferred_split_huge_page(struct page *page)
 {
-	struct deferred_split *ds_queue = get_deferred_split_queue(page);
-#ifdef CONFIG_MEMCG
-	struct mem_cgroup *memcg = page_memcg(compound_head(page));
-#endif
+	struct deferred_split *ds_queue;
+	struct mem_cgroup *memcg;
 	unsigned long flags;
 
 	VM_BUG_ON_PAGE(!PageTransHuge(page), page);
@@ -2768,7 +2812,8 @@ void deferred_split_huge_page(struct page *page)
 	if (PageSwapCache(page))
 		return;
 
-	spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+	ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags);
+	memcg = split_queue_memcg(ds_queue);
 	if (list_empty(page_deferred_list(page))) {
 		count_vm_event(THP_DEFERRED_SPLIT_PAGE);
 		list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
@@ -2779,7 +2824,7 @@ void deferred_split_huge_page(struct page *page)
 					 deferred_split_shrinker.id);
 #endif
 	}
-	spin_unlock_irqrestore(&ds_queue->split_queue_lock, flags);
+	split_queue_unlock_irqrestore(ds_queue, flags);
 }
 
 static unsigned long deferred_split_count(struct shrinker *shrink,
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 06/12] mm: thp: make split queue lock safe when LRU pages are reparented
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
                   ` (4 preceding siblings ...)
  2021-08-14  5:25 ` [PATCH v1 05/12] mm: thp: introduce folio_split_queue_lock{_irqsave}() Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-14  5:25 ` [PATCH v1 07/12] mm: memcontrol: make all the callers of {folio,page}_memcg() safe Muchun Song
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

Similar to the lruvec lock, we use the same approach to make the split
queue lock safe when LRU pages are reparented.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/huge_memory.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c49ef28e48c1..22fbf2c74d49 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -535,9 +535,22 @@ static struct deferred_split *folio_split_queue_lock(struct folio *folio)
 {
 	struct deferred_split *queue;
 
+	rcu_read_lock();
+retry:
 	queue = folio_split_queue(folio);
 	spin_lock(&queue->split_queue_lock);
 
+	if (unlikely(split_queue_memcg(queue) != folio_memcg(folio))) {
+		spin_unlock(&queue->split_queue_lock);
+		goto retry;
+	}
+
+	/*
+	 * Preemption is disabled in the internal of spin_lock, which can serve
+	 * as RCU read-side critical sections.
+	 */
+	rcu_read_unlock();
+
 	return queue;
 }
 
@@ -546,9 +559,19 @@ folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
 {
 	struct deferred_split *queue;
 
+	rcu_read_lock();
+retry:
 	queue = folio_split_queue(folio);
 	spin_lock_irqsave(&queue->split_queue_lock, *flags);
 
+	if (unlikely(split_queue_memcg(queue) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
+		goto retry;
+	}
+
+	/* See the comments in folio_split_queue_lock(). */
+	rcu_read_unlock();
+
 	return queue;
 }
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 07/12] mm: memcontrol: make all the callers of {folio,page}_memcg() safe
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
                   ` (5 preceding siblings ...)
  2021-08-14  5:25 ` [PATCH v1 06/12] mm: thp: make split queue lock safe when LRU pages are reparented Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-14  5:25 ` [PATCH v1 08/12] mm: memcontrol: introduce memcg_reparent_ops Muchun Song
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

When we use the objcg APIs to charge LRU pages, a page no longer holds a
reference to the memcg associated with it. So callers of
{folio,page}_memcg() should hold an rcu read lock or obtain a reference
to the memcg associated with the page to protect the memcg from being
released. Introduce get_mem_cgroup_from_{page,folio}() to obtain such a
reference to the memory cgroup associated with a page.

In this patch, make all the callers hold an rcu read lock or obtain a
reference to the memcg to protect the memcg from being released when LRU
pages are reparented.

We do not need to adjust the callers of {folio,page}_memcg() during
mem_cgroup_move_task(), because cgroup migration and memory cgroup
offlining are serialized by cgroup_mutex. On this path, LRU pages cannot
be reparented to their parent memory cgroup, so {folio,page}_memcg() is
stable and the memcg cannot be released.

This is a preparation for reparenting the LRU pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 fs/buffer.c                |  3 ++-
 fs/fs-writeback.c          | 23 +++++++++++----------
 include/linux/memcontrol.h | 49 ++++++++++++++++++++++++++++++++++++++++++---
 mm/memcontrol.c            | 50 ++++++++++++++++++++++++++++++++++++----------
 mm/migrate.c               |  4 ++++
 mm/page_io.c               |  5 +++--
 6 files changed, 106 insertions(+), 28 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index f5384cff7e0c..88123f84885a 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -823,7 +823,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
 		gfp |= __GFP_NOFAIL;
 
 	/* The page lock pins the memcg */
-	memcg = page_memcg(page);
+	memcg = get_mem_cgroup_from_page(page);
 	old_memcg = set_active_memcg(memcg);
 
 	head = NULL;
@@ -843,6 +843,7 @@ struct buffer_head *alloc_page_buffers(struct page *page, unsigned long size,
 		set_bh_page(bh, page, offset);
 	}
 out:
+	mem_cgroup_put(memcg);
 	set_active_memcg(old_memcg);
 	return head;
 /*
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 81ec192ce067..d9a67fffcc78 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -243,15 +243,13 @@ void __inode_attach_wb(struct inode *inode, struct page *page)
 	if (inode_cgwb_enabled(inode)) {
 		struct cgroup_subsys_state *memcg_css;
 
-		if (page) {
-			memcg_css = mem_cgroup_css_from_page(page);
-			wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
-		} else {
-			/* must pin memcg_css, see wb_get_create() */
+		/* must pin memcg_css, see wb_get_create() */
+		if (page)
+			memcg_css = get_mem_cgroup_css_from_page(page);
+		else
 			memcg_css = task_get_css(current, memory_cgrp_id);
-			wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
-			css_put(memcg_css);
-		}
+		wb = wb_get_create(bdi, memcg_css, GFP_ATOMIC);
+		css_put(memcg_css);
 	}
 
 	if (!wb)
@@ -866,16 +864,16 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
 	if (!wbc->wb || wbc->no_cgroup_owner)
 		return;
 
-	css = mem_cgroup_css_from_page(page);
+	css = get_mem_cgroup_css_from_page(page);
 	/* dead cgroups shouldn't contribute to inode ownership arbitration */
 	if (!(css->flags & CSS_ONLINE))
-		return;
+		goto out;
 
 	id = css->id;
 
 	if (id == wbc->wb_id) {
 		wbc->wb_bytes += bytes;
-		return;
+		goto out;
 	}
 
 	if (id == wbc->wb_lcand_id)
@@ -888,6 +886,9 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
 		wbc->wb_tcand_bytes += bytes;
 	else
 		wbc->wb_tcand_bytes -= min(bytes, wbc->wb_tcand_bytes);
+
+out:
+	css_put(css);
 }
 EXPORT_SYMBOL_GPL(wbc_account_cgroup_owner);
 
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 5a8f85bd9bbf..431fc606f6f9 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -378,7 +378,7 @@ static inline bool folio_memcg_kmem(struct folio *folio);
  * a valid memcg, but can be atomically swapped to the parent memcg.
  *
  * The caller must ensure that the returned memcg won't be released:
- * e.g. acquire the rcu_read_lock or css_set_lock.
+ * e.g. acquire the rcu_read_lock or css_set_lock or cgroup_mutex.
  */
 static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
 {
@@ -460,6 +460,36 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 }
 
 /*
+ * get_mem_cgroup_from_folio - Obtain a reference on the memory cgroup
+ *			       associated with a folio.
+ * @folio: Pointer to the folio.
+ *
+ * Returns a pointer to the memory cgroup (and obtain a reference on it)
+ * associated with the folio, or NULL. This function assumes that the
+ * folio is known to have a proper memory cgroup pointer. It's not safe
+ * to call this function against some type of pages, e.g. slab pages or
+ * ex-slab pages.
+ */
+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+	struct mem_cgroup *memcg;
+
+	rcu_read_lock();
+retry:
+	memcg = folio_memcg(folio);
+	if (unlikely(memcg && !css_tryget(&memcg->css)))
+		goto retry;
+	rcu_read_unlock();
+
+	return memcg;
+}
+
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+	return get_mem_cgroup_from_folio(page_folio(page));
+}
+
+/*
  * folio_memcg_rcu - Locklessly get the memory cgroup associated with a folio.
  * @folio: Pointer to the folio.
  *
@@ -893,7 +923,7 @@ static inline bool mm_match_cgroup(struct mm_struct *mm,
 	return match;
 }
 
-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page);
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page);
 ino_t page_cgroup_ino(struct page *page);
 
 static inline bool mem_cgroup_online(struct mem_cgroup *memcg)
@@ -1051,10 +1081,13 @@ static inline void count_memcg_events(struct mem_cgroup *memcg,
 static inline void count_memcg_page_event(struct page *page,
 					  enum vm_event_item idx)
 {
-	struct mem_cgroup *memcg = page_memcg(page);
+	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+	memcg = page_memcg(page);
 	if (memcg)
 		count_memcg_events(memcg, idx, 1);
+	rcu_read_unlock();
 }
 
 static inline void count_memcg_event_mm(struct mm_struct *mm,
@@ -1133,6 +1166,16 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
 	return NULL;
 }
 
+static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
+{
+	return NULL;
+}
+
+static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
+{
+	return NULL;
+}
+
 static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 {
 	WARN_ON_ONCE(!rcu_read_lock_held());
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7955da38e385..0eca3cf6cede 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -424,7 +424,7 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
 #endif
 
 /**
- * mem_cgroup_css_from_page - css of the memcg associated with a page
+ * get_mem_cgroup_css_from_page - get css of the memcg associated with a page
  * @page: page of interest
  *
  * If memcg is bound to the default hierarchy, css of the memcg associated
@@ -434,13 +434,15 @@ EXPORT_SYMBOL(memcg_kmem_enabled_key);
  * If memcg is bound to a traditional hierarchy, the css of root_mem_cgroup
  * is returned.
  */
-struct cgroup_subsys_state *mem_cgroup_css_from_page(struct page *page)
+struct cgroup_subsys_state *get_mem_cgroup_css_from_page(struct page *page)
 {
 	struct mem_cgroup *memcg;
 
-	memcg = page_memcg(page);
+	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
+		return &root_mem_cgroup->css;
 
-	if (!memcg || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
+	memcg = get_mem_cgroup_from_page(page);
+	if (!memcg)
 		memcg = root_mem_cgroup;
 
 	return &memcg->css;
@@ -1983,7 +1985,9 @@ void folio_memcg_lock(struct folio *folio)
 	 * The RCU lock is held throughout the transaction.  The fast
 	 * path can get away without acquiring the memcg->move_lock
 	 * because page moving starts with an RCU grace period.
-         */
+	 *
+	 * The RCU lock also protects the memcg from being freed.
+	 */
 	rcu_read_lock();
 
 	if (mem_cgroup_disabled())
@@ -4553,7 +4557,7 @@ void mem_cgroup_wb_stats(struct bdi_writeback *wb, unsigned long *pfilepages,
 void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
 					     struct bdi_writeback *wb)
 {
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct mem_cgroup *memcg = get_mem_cgroup_from_folio(folio);
 	struct memcg_cgwb_frn *frn;
 	u64 now = get_jiffies_64();
 	u64 oldest_at = now;
@@ -4600,6 +4604,7 @@ void mem_cgroup_track_foreign_dirty_slowpath(struct folio *folio,
 		frn->memcg_id = wb->memcg_css->id;
 		frn->at = now;
 	}
+	css_put(&memcg->css);
 }
 
 /* issue foreign writeback flushes for recorded foreign dirtying events */
@@ -6169,6 +6174,14 @@ static void mem_cgroup_move_charge(void)
 	atomic_dec(&mc.from->moving_account);
 }
 
+/*
+ * The cgroup migration and memory cgroup offlining are serialized by
+ * @cgroup_mutex. If we reach here, it means that the LRU pages cannot
+ * be reparented to its parent memory cgroup. So during the whole process
+ * of mem_cgroup_move_task(), page_memcg(page) is stable. So we do not
+ * need to worry about the memcg (returned from page_memcg()) being
+ * released even if we do not hold an rcu read lock.
+ */
 static void mem_cgroup_move_task(void)
 {
 	if (mc.to) {
@@ -6991,7 +7004,7 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 	if (folio_memcg(new))
 		return;
 
-	memcg = folio_memcg(old);
+	memcg = get_mem_cgroup_from_folio(old);
 	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
 	if (!memcg)
 		return;
@@ -7010,6 +7023,8 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, folio_nid(new));
 	local_irq_restore(flags);
+
+	css_put(&memcg->css);
 }
 
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7198,6 +7213,10 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;
 
+	/*
+	 * Interrupts should be disabled by the caller (see the comments below),
+	 * which can serve as RCU read-side critical sections.
+	 */
 	memcg = page_memcg(page);
 
 	VM_WARN_ON_ONCE_PAGE(!memcg, page);
@@ -7262,15 +7281,16 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return 0;
 
+	rcu_read_lock();
 	memcg = page_memcg(page);
 
 	VM_WARN_ON_ONCE_PAGE(!memcg, page);
 	if (!memcg)
-		return 0;
+		goto out;
 
 	if (!entry.val) {
 		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
-		return 0;
+		goto out;
 	}
 
 	memcg = mem_cgroup_id_get_online(memcg);
@@ -7280,6 +7300,7 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 		memcg_memory_event(memcg, MEMCG_SWAP_MAX);
 		memcg_memory_event(memcg, MEMCG_SWAP_FAIL);
 		mem_cgroup_id_put(memcg);
+		rcu_read_unlock();
 		return -ENOMEM;
 	}
 
@@ -7289,6 +7310,8 @@ int __mem_cgroup_try_charge_swap(struct page *page, swp_entry_t entry)
 	oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg), nr_pages);
 	VM_BUG_ON_PAGE(oldid, page);
 	mod_memcg_state(memcg, MEMCG_SWAP, nr_pages);
+out:
+	rcu_read_unlock();
 
 	return 0;
 }
@@ -7343,17 +7366,22 @@ bool mem_cgroup_swap_full(struct page *page)
 	if (cgroup_memory_noswap || !cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return false;
 
+	rcu_read_lock();
 	memcg = page_memcg(page);
 	if (!memcg)
-		return false;
+		goto out;
 
 	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
 		unsigned long usage = page_counter_read(&memcg->swap);
 
 		if (usage * 2 >= READ_ONCE(memcg->swap.high) ||
-		    usage * 2 >= READ_ONCE(memcg->swap.max))
+		    usage * 2 >= READ_ONCE(memcg->swap.max)) {
+			rcu_read_unlock();
 			return true;
+		}
 	}
+out:
+	rcu_read_unlock();
 
 	return false;
 }
diff --git a/mm/migrate.c b/mm/migrate.c
index 7a03a61bbcd8..cc7f5ecfd4a2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -473,6 +473,10 @@ int folio_migrate_mapping(struct address_space *mapping,
 		struct lruvec *old_lruvec, *new_lruvec;
 		struct mem_cgroup *memcg;
 
+		/*
+		 * Irq is disabled, which can serve as RCU read-side critical
+		 * sections.
+		 */
 		memcg = folio_memcg(folio);
 		old_lruvec = mem_cgroup_lruvec(memcg, oldzone->zone_pgdat);
 		new_lruvec = mem_cgroup_lruvec(memcg, newzone->zone_pgdat);
diff --git a/mm/page_io.c b/mm/page_io.c
index d597bc6e6e45..63014c61932a 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -269,13 +269,14 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
 	struct cgroup_subsys_state *css;
 	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
 	memcg = page_memcg(page);
 	if (!memcg)
-		return;
+		goto out;
 
-	rcu_read_lock();
 	css = cgroup_e_css(memcg->css.cgroup, &io_cgrp_subsys);
 	bio_associate_blkg_from_css(bio, css);
+out:
 	rcu_read_unlock();
 }
 #else
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 08/12] mm: memcontrol: introduce memcg_reparent_ops
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
                   ` (6 preceding siblings ...)
  2021-08-14  5:25 ` [PATCH v1 07/12] mm: memcontrol: make all the callers of {folio,page}_memcg() safe Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-14  5:25 ` [PATCH v1 09/12] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages Muchun Song
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

A previous patch showed how to make the lruvec lock safe when LRU pages
are reparented. We need to do something like the following:

    memcg_reparent_objcgs(memcg)
        1) lock
        // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
        spin_lock(&lruvec->lru_lock);
        spin_lock(&lruvec_parent->lru_lock);

        2) do reparent
        // Move all the pages from the lruvec list to the parent lruvec list.

        3) unlock
        spin_unlock(&lruvec_parent->lru_lock);
        spin_unlock(&lruvec->lru_lock);

Apart from the lruvec lock, the deferred split queue lock (THP only)
needs similar handling. So extract the necessary three steps into
memcg_reparent_objcgs():

    memcg_reparent_objcgs(memcg)
        1) lock
        memcg_reparent_ops->lock(memcg, parent);

        2) reparent
        memcg_reparent_ops->reparent(memcg, reparent);

        3) unlock
        memcg_reparent_ops->unlock(memcg, reparent);

Now two different locks (the lruvec lock and the deferred split queue
lock) need to use this infrastructure. In the next patch, we will use
these APIs to make those locks safe when LRU pages are reparented; a
hypothetical user of the ops is sketched below.
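
As an illustration only (the names and bodies below are hypothetical,
not taken from a later patch), a user of this infrastructure provides
the three callbacks and is then listed in the memcg_reparent_ops[] array
introduced by this patch:

	/* Hypothetical example; the real users are added in later patches. */
	static void example_reparent_lock(struct mem_cgroup *memcg,
					  struct mem_cgroup *parent)
	{
		/* Take the per-memcg locks of both the child and the parent. */
	}

	static void example_reparent(struct mem_cgroup *memcg,
				     struct mem_cgroup *parent)
	{
		/* Move the child's objects to the parent while both locks are held. */
	}

	static void example_reparent_unlock(struct mem_cgroup *memcg,
					    struct mem_cgroup *parent)
	{
		/* Drop the locks in reverse order. */
	}

	static const struct memcg_reparent_ops example_reparent_ops = {
		.lock		= example_reparent_lock,
		.reparent	= example_reparent,
		.unlock		= example_reparent_unlock,
	};

	static const struct memcg_reparent_ops *memcg_reparent_ops[] = {
		&example_reparent_ops,
	};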

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/memcontrol.h |  7 +++++++
 mm/memcontrol.c            | 43 +++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 431fc606f6f9..d85c03f2d76d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -352,6 +352,13 @@ struct mem_cgroup {
 	struct mem_cgroup_per_node *nodeinfo[];
 };
 
+struct memcg_reparent_ops {
+	/* Irq is disabled before calling those callbacks. */
+	void (*lock)(struct mem_cgroup *memcg, struct mem_cgroup *parent);
+	void (*unlock)(struct mem_cgroup *memcg, struct mem_cgroup *parent);
+	void (*reparent)(struct mem_cgroup *memcg, struct mem_cgroup *parent);
+};
+
 /*
  * size of first charge trial. "32" comes from vmscan.c's magic value.
  * TODO: maybe necessary to use big numbers in big irons.
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0eca3cf6cede..8cfb4221c36a 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -344,6 +344,35 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
 	return objcg;
 }
 
+static const struct memcg_reparent_ops *memcg_reparent_ops[] = {};
+
+static void memcg_reparent_lock(struct mem_cgroup *memcg,
+				struct mem_cgroup *parent)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++)
+		memcg_reparent_ops[i]->lock(memcg, parent);
+}
+
+static void memcg_reparent_unlock(struct mem_cgroup *memcg,
+				  struct mem_cgroup *parent)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++)
+		memcg_reparent_ops[i]->unlock(memcg, parent);
+}
+
+static void memcg_do_reparent(struct mem_cgroup *memcg,
+			      struct mem_cgroup *parent)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(memcg_reparent_ops); i++)
+		memcg_reparent_ops[i]->reparent(memcg, parent);
+}
+
 static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 {
 	struct obj_cgroup *objcg, *iter;
@@ -353,9 +382,13 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 	if (!parent)
 		parent = root_mem_cgroup;
 
+	local_irq_disable();
+
+	memcg_reparent_lock(memcg, parent);
+
 	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);
 
-	spin_lock_irq(&css_set_lock);
+	spin_lock(&css_set_lock);
 
 	/* 1) Ready to reparent active objcg. */
 	list_add(&objcg->list, &memcg->objcg_list);
@@ -365,7 +398,13 @@ static void memcg_reparent_objcgs(struct mem_cgroup *memcg)
 	/* 3) Move already reparented objcgs to the parent's list */
 	list_splice(&memcg->objcg_list, &parent->objcg_list);
 
-	spin_unlock_irq(&css_set_lock);
+	spin_unlock(&css_set_lock);
+
+	memcg_do_reparent(memcg, parent);
+
+	memcg_reparent_unlock(memcg, parent);
+
+	local_irq_enable();
 
 	percpu_ref_kill(&objcg->refcnt);
 }
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 09/12] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
                   ` (7 preceding siblings ...)
  2021-08-14  5:25 ` [PATCH v1 08/12] mm: memcontrol: introduce memcg_reparent_ops Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-14 14:08     ` kernel test robot
  2021-08-14  5:25 ` [PATCH v1 10/12] mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg() Muchun Song
                   ` (2 subsequent siblings)
  11 siblings, 1 reply; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

We will reuse the obj_cgroup APIs to charge the LRU pages. After that,
page->memcg_data will have two different meanings.

  - For slab pages, page->memcg_data points to a vector of object
    cgroups.

  - For kmem pages (excluding slab pages) and LRU pages,
    page->memcg_data points to an object cgroup.

With this patch, LRU pages are charged through the obj_cgroup APIs, so
long-lived page cache objects can no longer pin the original memory
cgroup in memory.

At the same time, this changes the rules for the stability of the page
and objcg (or memcg) binding. The new rules are as follows.

For a page any of the following ensures page and objcg binding stability:

  - the page lock
  - LRU isolation
  - lock_page_memcg()
  - exclusive reference

Based on the stable binding of page and objcg, for a page any of the
following ensures page and memcg binding stability:

  - css_set_lock
  - cgroup_mutex
  - the lruvec lock
  - the split queue lock (only THP page)

If the caller only wants the memcg page counters to be updated
correctly, the stability of the page and objcg binding is sufficient.
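
To make the new pointer layout easier to follow, here is a stand-alone
user-space model (not kernel code; the flag values and all demo_*
names are made up for illustration):

```c
/*
 * User-space model of how folio->memcg_data is decoded after this
 * patch: the low flag bits are masked off, the remaining bits are an
 * obj_cgroup pointer, and its ->memcg field is followed to reach the
 * (possibly reparented) memcg. Flag values and demo_* names are
 * invented for this sketch.
 */
#include <assert.h>
#include <stdio.h>

#define DEMO_DATA_OBJCGS	(1UL << 0)	/* slab: objcg vector */
#define DEMO_DATA_KMEM		(1UL << 1)	/* kmem folio */
#define DEMO_FLAGS_MASK		(DEMO_DATA_OBJCGS | DEMO_DATA_KMEM)

struct demo_memcg { const char *name; };
struct demo_objcg { struct demo_memcg *memcg; };
struct demo_folio { unsigned long memcg_data; };

static struct demo_objcg *demo_folio_objcg(struct demo_folio *folio)
{
	unsigned long data = folio->memcg_data;

	assert(!(data & DEMO_DATA_OBJCGS));	/* not a slab folio */
	return (struct demo_objcg *)(data & ~DEMO_FLAGS_MASK);
}

static struct demo_memcg *demo_folio_memcg(struct demo_folio *folio)
{
	struct demo_objcg *objcg = demo_folio_objcg(folio);

	return objcg ? objcg->memcg : NULL;
}

int main(void)
{
	struct demo_memcg child = { "child" }, parent = { "parent" };
	struct demo_objcg objcg = { &child };
	struct demo_folio folio = { (unsigned long)&objcg };

	printf("folio charged to %s\n", demo_folio_memcg(&folio)->name);

	/* Reparenting only rewrites objcg->memcg; memcg_data is untouched. */
	objcg.memcg = &parent;
	printf("folio now resolves to %s\n", demo_folio_memcg(&folio)->name);
	return 0;
}
```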

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/memcontrol.h |  94 ++++++--------
 mm/huge_memory.c           |  42 +++++++
 mm/memcontrol.c            | 296 ++++++++++++++++++++++++++++++++-------------
 3 files changed, 293 insertions(+), 139 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index d85c03f2d76d..92c98a952bab 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -378,8 +378,6 @@ enum page_memcg_data_flags {
 
 #define MEMCG_DATA_FLAGS_MASK (__NR_MEMCG_DATA_FLAGS - 1)
 
-static inline bool folio_memcg_kmem(struct folio *folio);
-
 /*
  * After the initialization objcg->memcg is always pointing at
  * a valid memcg, but can be atomically swapped to the parent memcg.
@@ -393,43 +391,19 @@ static inline struct mem_cgroup *obj_cgroup_memcg(struct obj_cgroup *objcg)
 }
 
 /*
- * __folio_memcg - Get the memory cgroup associated with a non-kmem folio
- * @folio: Pointer to the folio.
- *
- * Returns a pointer to the memory cgroup associated with the folio,
- * or NULL. This function assumes that the folio is known to have a
- * proper memory cgroup pointer. It's not safe to call this function
- * against some type of folios, e.g. slab folios or ex-slab folios or
- * kmem folios.
- */
-static inline struct mem_cgroup *__folio_memcg(struct folio *folio)
-{
-	unsigned long memcg_data = folio->memcg_data;
-
-	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
-	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_KMEM, folio);
-
-	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-}
-
-/*
- * __folio_objcg - get the object cgroup associated with a kmem folio.
+ * folio_objcg - get the object cgroup associated with a folio.
  * @folio: Pointer to the folio.
  *
  * Returns a pointer to the object cgroup associated with the folio,
  * or NULL. This function assumes that the folio is known to have a
- * proper object cgroup pointer. It's not safe to call this function
- * against some type of folios, e.g. slab folios or ex-slab folios or
- * LRU folios.
+ * proper object cgroup pointer.
  */
-static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
+static inline struct obj_cgroup *folio_objcg(struct folio *folio)
 {
 	unsigned long memcg_data = folio->memcg_data;
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
 	VM_BUG_ON_FOLIO(memcg_data & MEMCG_DATA_OBJCGS, folio);
-	VM_BUG_ON_FOLIO(!(memcg_data & MEMCG_DATA_KMEM), folio);
 
 	return (struct obj_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 }
@@ -443,7 +417,7 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
  * proper memory cgroup pointer. It's not safe to call this function
  * against some type of folios, e.g. slab folios or ex-slab folios.
  *
- * For a non-kmem folio any of the following ensures folio and memcg binding
+ * For a folio any of the following ensures folio and memcg binding
  * stability:
  *
  * - the folio lock
@@ -451,14 +425,28 @@ static inline struct obj_cgroup *__folio_objcg(struct folio *folio)
  * - lock_page_memcg()
  * - exclusive reference
  *
- * For a kmem folio a caller should hold an rcu read lock to protect memcg
- * associated with a kmem folio from being released.
+ * Based on the stable binding of folio and objcg, for a folio any of the
+ * following ensures folio and memcg binding stability:
+ *
+ * - css_set_lock
+ * - cgroup_mutex
+ * - the lruvec lock
+ * - the split queue lock (only THP page)
+ *
+ * If the caller only wants the memcg page counters to be updated
+ * correctly, the stability of the folio and objcg binding is
+ * sufficient.
+ *
+ * A caller should hold an rcu read lock (In addition, regions of code across
+ * which interrupts, preemption, or softirqs have been disabled also serve as
+ * RCU read-side critical sections) to protect memcg associated with a folio
+ * from being released.
  */
 static inline struct mem_cgroup *folio_memcg(struct folio *folio)
 {
-	if (folio_memcg_kmem(folio))
-		return obj_cgroup_memcg(__folio_objcg(folio));
-	return __folio_memcg(folio);
+	struct obj_cgroup *objcg = folio_objcg(folio);
+
+	return objcg ? obj_cgroup_memcg(objcg) : NULL;
 }
 
 static inline struct mem_cgroup *page_memcg(struct page *page)
@@ -476,6 +464,8 @@ static inline struct mem_cgroup *page_memcg(struct page *page)
  * folio is known to have a proper memory cgroup pointer. It's not safe
  * to call this function against some type of pages, e.g. slab pages or
  * ex-slab pages.
+ *
+ * For the page to objcg or memcg binding rules, see folio_memcg().
  */
 static inline struct mem_cgroup *get_mem_cgroup_from_folio(struct folio *folio)
 {
@@ -506,22 +496,20 @@ static inline struct mem_cgroup *get_mem_cgroup_from_page(struct page *page)
  *
  * Return: A pointer to the memory cgroup associated with the folio,
  * or NULL.
+ *
+ * For the folio to objcg or memcg binding rules, see folio_memcg().
  */
 static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
 {
 	unsigned long memcg_data = READ_ONCE(folio->memcg_data);
+	struct obj_cgroup *objcg;
 
 	VM_BUG_ON_FOLIO(folio_test_slab(folio), folio);
 	WARN_ON_ONCE(!rcu_read_lock_held());
 
-	if (memcg_data & MEMCG_DATA_KMEM) {
-		struct obj_cgroup *objcg;
-
-		objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-		return obj_cgroup_memcg(objcg);
-	}
+	objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 
-	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+	return objcg ? obj_cgroup_memcg(objcg) : NULL;
 }
 
 /*
@@ -534,16 +522,10 @@ static inline struct mem_cgroup *folio_memcg_rcu(struct folio *folio)
  * has an associated memory cgroup pointer or an object cgroups vector or
  * an object cgroup.
  *
- * For a non-kmem page any of the following ensures page and memcg binding
- * stability:
+ * For the page to objcg or memcg binding rules, see page_memcg().
  *
- * - the page lock
- * - LRU isolation
- * - lock_page_memcg()
- * - exclusive reference
- *
- * For a kmem page a caller should hold an rcu read lock to protect memcg
- * associated with a kmem page from being released.
+ * A caller should hold an rcu read lock to protect memcg associated with a
+ * page from being released.
  */
 static inline struct mem_cgroup *page_memcg_check(struct page *page)
 {
@@ -552,18 +534,14 @@ static inline struct mem_cgroup *page_memcg_check(struct page *page)
 	 * for slab pages, READ_ONCE() should be used here.
 	 */
 	unsigned long memcg_data = READ_ONCE(page->memcg_data);
+	struct obj_cgroup *objcg;
 
 	if (memcg_data & MEMCG_DATA_OBJCGS)
 		return NULL;
 
-	if (memcg_data & MEMCG_DATA_KMEM) {
-		struct obj_cgroup *objcg;
-
-		objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
-		return obj_cgroup_memcg(objcg);
-	}
+	objcg = (void *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
 
-	return (struct mem_cgroup *)(memcg_data & ~MEMCG_DATA_FLAGS_MASK);
+	return objcg ? obj_cgroup_memcg(objcg) : NULL;
 }
 
 #ifdef CONFIG_MEMCG_KMEM
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 22fbf2c74d49..731e1b894407 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -499,6 +499,8 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
 }
 
 #ifdef CONFIG_MEMCG
+static struct shrinker deferred_split_shrinker;
+
 static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue)
 {
 	if (mem_cgroup_disabled())
@@ -512,6 +514,46 @@ static inline struct deferred_split *folio_memcg_split_queue(struct folio *folio
 
 	return memcg ? &memcg->deferred_split_queue : NULL;
 }
+
+static void memcg_reparent_split_queue_lock(struct mem_cgroup *memcg,
+					    struct mem_cgroup *parent)
+{
+	spin_lock(&memcg->deferred_split_queue.split_queue_lock);
+	spin_lock(&parent->deferred_split_queue.split_queue_lock);
+}
+
+static void memcg_reparent_split_queue_unlock(struct mem_cgroup *memcg,
+					      struct mem_cgroup *parent)
+{
+	spin_unlock(&parent->deferred_split_queue.split_queue_lock);
+	spin_unlock(&memcg->deferred_split_queue.split_queue_lock);
+}
+
+static void memcg_reparent_split_queue(struct mem_cgroup *memcg,
+				       struct mem_cgroup *parent)
+{
+	int nid;
+	struct deferred_split *src, *dst;
+
+	src = &memcg->deferred_split_queue;
+	dst = &parent->deferred_split_queue;
+
+	if (!src->split_queue_len)
+		return;
+
+	list_splice_tail_init(&src->split_queue, &dst->split_queue);
+	dst->split_queue_len += src->split_queue_len;
+	src->split_queue_len = 0;
+
+	for_each_node(nid)
+		set_shrinker_bit(parent, nid, deferred_split_shrinker.id);
+}
+
+const struct memcg_reparent_ops split_queue_reparent_ops = {
+	.lock		= memcg_reparent_split_queue_lock,
+	.unlock		= memcg_reparent_split_queue_unlock,
+	.reparent	= memcg_reparent_split_queue,
+};
 #else
 static inline struct mem_cgroup *split_queue_memcg(struct deferred_split *queue)
 {
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 8cfb4221c36a..52a878081452 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -75,6 +75,7 @@ struct cgroup_subsys memory_cgrp_subsys __read_mostly;
 EXPORT_SYMBOL(memory_cgrp_subsys);
 
 struct mem_cgroup *root_mem_cgroup __read_mostly;
+static struct obj_cgroup *root_obj_cgroup __read_mostly;
 
 /* Active memory cgroup to use from an interrupt context */
 DEFINE_PER_CPU(struct mem_cgroup *, int_active_memcg);
@@ -263,6 +264,11 @@ struct mem_cgroup *vmpressure_to_memcg(struct vmpressure *vmpr)
 
 extern spinlock_t css_set_lock;
 
+static inline bool obj_cgroup_is_root(struct obj_cgroup *objcg)
+{
+	return objcg == root_obj_cgroup;
+}
+
 bool mem_cgroup_kmem_disabled(void)
 {
 	return cgroup_memory_nokmem;
@@ -344,7 +350,81 @@ static struct obj_cgroup *obj_cgroup_alloc(void)
 	return objcg;
 }
 
-static const struct memcg_reparent_ops *memcg_reparent_ops[] = {};
+static void memcg_reparent_lruvec_lock(struct mem_cgroup *memcg,
+				       struct mem_cgroup *parent)
+{
+	int i;
+
+	for_each_node(i) {
+		spin_lock(&mem_cgroup_lruvec(memcg, NODE_DATA(i))->lru_lock);
+		spin_lock(&mem_cgroup_lruvec(parent, NODE_DATA(i))->lru_lock);
+	}
+}
+
+static void memcg_reparent_lruvec_unlock(struct mem_cgroup *memcg,
+					 struct mem_cgroup *parent)
+{
+	int i;
+
+	for_each_node(i) {
+		spin_unlock(&mem_cgroup_lruvec(parent, NODE_DATA(i))->lru_lock);
+		spin_unlock(&mem_cgroup_lruvec(memcg, NODE_DATA(i))->lru_lock);
+	}
+}
+
+static void lruvec_reparent_lru(struct lruvec *src, struct lruvec *dst,
+				enum lru_list lru)
+{
+	int zid;
+	struct mem_cgroup_per_node *mz_src, *mz_dst;
+
+	mz_src = container_of(src, struct mem_cgroup_per_node, lruvec);
+	mz_dst = container_of(dst, struct mem_cgroup_per_node, lruvec);
+
+	list_splice_tail_init(&src->lists[lru], &dst->lists[lru]);
+
+	for (zid = 0; zid < MAX_NR_ZONES; zid++) {
+		mz_dst->lru_zone_size[zid][lru] += mz_src->lru_zone_size[zid][lru];
+		mz_src->lru_zone_size[zid][lru] = 0;
+	}
+}
+
+static void memcg_reparent_lruvec(struct mem_cgroup *memcg,
+				  struct mem_cgroup *parent)
+{
+	int i;
+
+	for_each_node(i) {
+		enum lru_list lru;
+		struct lruvec *src, *dst;
+
+		src = mem_cgroup_lruvec(memcg, NODE_DATA(i));
+		dst = mem_cgroup_lruvec(parent, NODE_DATA(i));
+
+		dst->anon_cost += src->anon_cost;
+		dst->file_cost += src->file_cost;
+
+		for_each_lru(lru)
+			lruvec_reparent_lru(src, dst, lru);
+	}
+}
+
+static const struct memcg_reparent_ops lruvec_reparent_ops = {
+	.lock		= memcg_reparent_lruvec_lock,
+	.unlock		= memcg_reparent_lruvec_unlock,
+	.reparent	= memcg_reparent_lruvec,
+};
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+extern struct memcg_reparent_ops split_queue_reparent_ops;
+#endif
+
+static const struct memcg_reparent_ops *memcg_reparent_ops[] = {
+	&lruvec_reparent_ops,
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	&split_queue_reparent_ops,
+#endif
+};
 
 static void memcg_reparent_lock(struct mem_cgroup *memcg,
 				struct mem_cgroup *parent)
@@ -2799,18 +2879,18 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
 }
 #endif
 
-static void commit_charge(struct folio *folio, struct mem_cgroup *memcg)
+static void commit_charge(struct folio *folio, struct obj_cgroup *objcg)
 {
-	VM_BUG_ON_FOLIO(folio_memcg(folio), folio);
+	VM_BUG_ON_FOLIO(folio_objcg(folio), folio);
 	/*
-	 * Any of the following ensures page's memcg stability:
+	 * Any of the following ensures page's objcg stability:
 	 *
 	 * - the page lock
 	 * - LRU isolation
 	 * - lock_page_memcg()
 	 * - exclusive reference
 	 */
-	folio->memcg_data = (unsigned long)memcg;
+	folio->memcg_data = (unsigned long)objcg;
 }
 
 static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
@@ -2827,6 +2907,21 @@ static struct mem_cgroup *get_mem_cgroup_from_objcg(struct obj_cgroup *objcg)
 	return memcg;
 }
 
+static struct obj_cgroup *get_obj_cgroup_from_memcg(struct mem_cgroup *memcg)
+{
+	struct obj_cgroup *objcg = NULL;
+
+	rcu_read_lock();
+	for (; memcg; memcg = parent_mem_cgroup(memcg)) {
+		objcg = rcu_dereference(memcg->objcg);
+		if (objcg && obj_cgroup_tryget(objcg))
+			break;
+	}
+	rcu_read_unlock();
+
+	return objcg;
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 /*
  * The allocated objcg pointers array is not accounted directly.
@@ -2932,12 +3027,15 @@ __always_inline struct obj_cgroup *get_obj_cgroup_from_current(void)
 	else
 		memcg = mem_cgroup_from_task(current);
 
-	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg)) {
-		objcg = rcu_dereference(memcg->objcg);
-		if (objcg && obj_cgroup_tryget(objcg))
-			break;
+	if (mem_cgroup_is_root(memcg))
+		goto out;
+
+	objcg = get_obj_cgroup_from_memcg(memcg);
+	if (obj_cgroup_is_root(objcg)) {
+		obj_cgroup_put(objcg);
 		objcg = NULL;
 	}
+out:
 	rcu_read_unlock();
 
 	return objcg;
@@ -3081,13 +3179,13 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
 void __memcg_kmem_uncharge_page(struct page *page, int order)
 {
 	struct folio *folio = page_folio(page);
-	struct obj_cgroup *objcg;
+	struct obj_cgroup *objcg = folio_objcg(folio);
 	unsigned int nr_pages = 1 << order;
 
-	if (!folio_memcg_kmem(folio))
+	if (!objcg)
 		return;
 
-	objcg = __folio_objcg(folio);
+	VM_BUG_ON_FOLIO(!folio_memcg_kmem(folio), folio);
 	obj_cgroup_uncharge_pages(objcg, nr_pages);
 	folio->memcg_data = 0;
 	obj_cgroup_put(objcg);
@@ -3319,24 +3417,21 @@ void obj_cgroup_uncharge(struct obj_cgroup *objcg, size_t size)
 #endif /* CONFIG_MEMCG_KMEM */
 
 /*
- * Because page_memcg(head) is not set on tails, set it now.
+ * Because page_objcg(head) is not set on tails, set it now.
  */
 void split_page_memcg(struct page *head, unsigned int nr)
 {
 	struct folio *folio = page_folio(head);
-	struct mem_cgroup *memcg = folio_memcg(folio);
+	struct obj_cgroup *objcg = folio_objcg(folio);
 	int i;
 
-	if (mem_cgroup_disabled() || !memcg)
+	if (mem_cgroup_disabled() || !objcg)
 		return;
 
 	for (i = 1; i < nr; i++)
 		folio_page(folio, i)->memcg_data = folio->memcg_data;
 
-	if (folio_memcg_kmem(folio))
-		obj_cgroup_get_many(__folio_objcg(folio), nr - 1);
-	else
-		css_get_many(&memcg->css, nr - 1);
+	obj_cgroup_get_many(objcg, nr - 1);
 }
 
 #ifdef CONFIG_MEMCG_SWAP
@@ -5306,6 +5401,9 @@ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
 	objcg->memcg = memcg;
 	rcu_assign_pointer(memcg->objcg, objcg);
 
+	if (unlikely(mem_cgroup_is_root(memcg)))
+		root_obj_cgroup = objcg;
+
 	/* Online state pins memcg ID, memcg ID pins CSS */
 	refcount_set(&memcg->id.ref, 1);
 	css_get(css);
@@ -5735,10 +5833,10 @@ static int mem_cgroup_move_account(struct page *page,
 	 */
 	smp_mb();
 
-	css_get(&to->css);
-	css_put(&from->css);
+	obj_cgroup_get(to->objcg);
+	obj_cgroup_put(from->objcg);
 
-	folio->memcg_data = (unsigned long)to;
+	folio->memcg_data = (unsigned long)to->objcg;
 
 	__folio_memcg_unlock(from);
 
@@ -6211,6 +6309,42 @@ static void mem_cgroup_move_charge(void)
 
 	mmap_read_unlock(mc.mm);
 	atomic_dec(&mc.from->moving_account);
+
+	/*
+	 * Moving its pages to another memcg is finished. Wait for already
+	 * started RCU-only updates to finish to make sure that the caller
+	 * of lock_page_memcg() can unlock the correct move_lock. The
+	 * possible bad scenario would look like:
+	 *
+	 * CPU0:				CPU1:
+	 * mem_cgroup_move_charge()
+	 *     walk_page_range()
+	 *
+	 *					lock_page_memcg(page)
+	 *					    memcg = folio_memcg()
+	 *					    spin_lock_irqsave(&memcg->move_lock)
+	 *					    memcg->move_lock_task = current
+	 *
+	 *     atomic_dec(&mc.from->moving_account)
+	 *
+	 * mem_cgroup_css_offline()
+	 *     memcg_offline_kmem()
+	 *         memcg_reparent_objcgs() <== reparented
+	 *
+	 *					unlock_page_memcg(page)
+	 *					    memcg = folio_memcg() <== memcg has been changed
+	 *					    if (memcg->move_lock_task == current) <== false
+	 *					        spin_unlock_irqrestore(&memcg->move_lock)
+	 *
+	 * Once mem_cgroup_move_charge() returns (it means that the cgroup_mutex
+	 * would be released soon), the page can be reparented to its parent
+	 * memcg. When the unlock_page_memcg() is called for the page, we will
+	 * miss unlock the move_lock. So using synchronize_rcu to wait for
+	 * already started RCU-only updates to finish before this function
+	 * returns (mem_cgroup_move_charge() and mem_cgroup_css_offline() are
+	 * serialized by cgroup_mutex).
+	 */
+	synchronize_rcu();
 }
 
 /*
@@ -6768,21 +6902,26 @@ void mem_cgroup_calculate_protection(struct mem_cgroup *root,
 static int charge_memcg(struct folio *folio, struct mem_cgroup *memcg,
 			gfp_t gfp)
 {
+	struct obj_cgroup *objcg;
 	long nr_pages = folio_nr_pages(folio);
-	int ret;
+	int ret = 0;
 
-	ret = try_charge(memcg, gfp, nr_pages);
+	objcg = get_obj_cgroup_from_memcg(memcg);
+	/* Do not account at the root objcg level. */
+	if (!obj_cgroup_is_root(objcg))
+		ret = try_charge(memcg, gfp, nr_pages);
 	if (ret)
 		goto out;
 
-	css_get(&memcg->css);
-	commit_charge(folio, memcg);
+	obj_cgroup_get(objcg);
+	commit_charge(folio, objcg);
 
 	local_irq_disable();
 	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, folio_nid(folio));
 	local_irq_enable();
 out:
+	obj_cgroup_put(objcg);
 	return ret;
 }
 
@@ -6882,7 +7021,7 @@ void mem_cgroup_swapin_uncharge_swap(swp_entry_t entry)
 }
 
 struct uncharge_gather {
-	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 	unsigned long nr_memory;
 	unsigned long pgpgout;
 	unsigned long nr_kmem;
@@ -6897,84 +7036,73 @@ static inline void uncharge_gather_clear(struct uncharge_gather *ug)
 static void uncharge_batch(const struct uncharge_gather *ug)
 {
 	unsigned long flags;
+	struct mem_cgroup *memcg;
 
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(ug->objcg);
 	if (ug->nr_memory) {
-		page_counter_uncharge(&ug->memcg->memory, ug->nr_memory);
+		page_counter_uncharge(&memcg->memory, ug->nr_memory);
 		if (do_memsw_account())
-			page_counter_uncharge(&ug->memcg->memsw, ug->nr_memory);
+			page_counter_uncharge(&memcg->memsw, ug->nr_memory);
 		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys) && ug->nr_kmem)
-			page_counter_uncharge(&ug->memcg->kmem, ug->nr_kmem);
-		memcg_oom_recover(ug->memcg);
+			page_counter_uncharge(&memcg->kmem, ug->nr_kmem);
+		memcg_oom_recover(memcg);
 	}
 
 	local_irq_save(flags);
-	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
-	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
-	memcg_check_events(ug->memcg, ug->nid);
+	__count_memcg_events(memcg, PGPGOUT, ug->pgpgout);
+	__this_cpu_add(memcg->vmstats_percpu->nr_page_events, ug->nr_memory);
+	memcg_check_events(memcg, ug->nid);
 	local_irq_restore(flags);
+	rcu_read_unlock();
 
 	/* drop reference from uncharge_folio */
-	css_put(&ug->memcg->css);
+	css_put(&memcg->css);
+	obj_cgroup_put(ug->objcg);
 }
 
 static void uncharge_folio(struct folio *folio, struct uncharge_gather *ug)
 {
 	long nr_pages;
-	struct mem_cgroup *memcg;
 	struct obj_cgroup *objcg;
-	bool use_objcg = folio_memcg_kmem(folio);
 
 	VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
 
 	/*
 	 * Nobody should be changing or seriously looking at
-	 * folio memcg or objcg at this point, we have fully
-	 * exclusive access to the folio.
+	 * folio objcg at this point, we have fully exclusive
+	 * access to the folio.
 	 */
-	if (use_objcg) {
-		objcg = __folio_objcg(folio);
-		/*
-		 * This get matches the put at the end of the function and
-		 * kmem pages do not hold memcg references anymore.
-		 */
-		memcg = get_mem_cgroup_from_objcg(objcg);
-	} else {
-		memcg = __folio_memcg(folio);
-	}
-
-	if (!memcg)
+	objcg = folio_objcg(folio);
+	if (!objcg)
 		return;
 
-	if (ug->memcg != memcg) {
-		if (ug->memcg) {
+	if (ug->objcg != objcg) {
+		if (ug->objcg) {
 			uncharge_batch(ug);
 			uncharge_gather_clear(ug);
 		}
-		ug->memcg = memcg;
+		ug->objcg = objcg;
 		ug->nid = folio_nid(folio);
 
-		/* pairs with css_put in uncharge_batch */
-		css_get(&memcg->css);
+		/* pairs with obj_cgroup_put in uncharge_batch */
+		obj_cgroup_get(objcg);
 	}
 
 	nr_pages = folio_nr_pages(folio);
 
-	if (use_objcg) {
+	if (folio_memcg_kmem(folio)) {
 		ug->nr_memory += nr_pages;
 		ug->nr_kmem += nr_pages;
-
-		folio->memcg_data = 0;
-		obj_cgroup_put(objcg);
 	} else {
 		/* LRU pages aren't accounted at the root level */
-		if (!mem_cgroup_is_root(memcg))
+		if (!obj_cgroup_is_root(objcg))
 			ug->nr_memory += nr_pages;
 		ug->pgpgout++;
-
-		folio->memcg_data = 0;
 	}
 
-	css_put(&memcg->css);
+	folio->memcg_data = 0;
+	obj_cgroup_put(objcg);
 }
 
 /**
@@ -6988,7 +7116,7 @@ void __mem_cgroup_uncharge(struct folio *folio)
 	struct uncharge_gather ug;
 
 	/* Don't touch folio->lru of any random page, pre-check: */
-	if (!folio_memcg(folio))
+	if (!folio_objcg(folio))
 		return;
 
 	uncharge_gather_clear(&ug);
@@ -7011,7 +7139,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list)
 	uncharge_gather_clear(&ug);
 	list_for_each_entry(folio, page_list, lru)
 		uncharge_folio(folio, &ug);
-	if (ug.memcg)
+	if (ug.objcg)
 		uncharge_batch(&ug);
 }
 
@@ -7028,6 +7156,7 @@ void __mem_cgroup_uncharge_list(struct list_head *page_list)
 void mem_cgroup_migrate(struct folio *old, struct folio *new)
 {
 	struct mem_cgroup *memcg;
+	struct obj_cgroup *objcg;
 	long nr_pages = folio_nr_pages(new);
 	unsigned long flags;
 
@@ -7040,30 +7169,33 @@ void mem_cgroup_migrate(struct folio *old, struct folio *new)
 		return;
 
 	/* Page cache replacement: new folio already charged? */
-	if (folio_memcg(new))
+	if (folio_objcg(new))
 		return;
 
-	memcg = get_mem_cgroup_from_folio(old);
-	VM_WARN_ON_ONCE_FOLIO(!memcg, old);
-	if (!memcg)
+	objcg = folio_objcg(old);
+	VM_WARN_ON_ONCE_FOLIO(!objcg, old);
+	if (!objcg)
 		return;
 
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+
 	/* Force-charge the new page. The old one will be freed soon */
-	if (!mem_cgroup_is_root(memcg)) {
+	if (!obj_cgroup_is_root(objcg)) {
 		page_counter_charge(&memcg->memory, nr_pages);
 		if (do_memsw_account())
 			page_counter_charge(&memcg->memsw, nr_pages);
 	}
 
-	css_get(&memcg->css);
-	commit_charge(new, memcg);
+	obj_cgroup_get(objcg);
+	commit_charge(new, objcg);
 
 	local_irq_save(flags);
 	mem_cgroup_charge_statistics(memcg, nr_pages);
 	memcg_check_events(memcg, folio_nid(new));
 	local_irq_restore(flags);
 
-	css_put(&memcg->css);
+	rcu_read_unlock();
 }
 
 DEFINE_STATIC_KEY_FALSE(memcg_sockets_enabled_key);
@@ -7240,6 +7372,7 @@ static struct mem_cgroup *mem_cgroup_id_get_online(struct mem_cgroup *memcg)
 void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 {
 	struct mem_cgroup *memcg, *swap_memcg;
+	struct obj_cgroup *objcg;
 	unsigned int nr_entries;
 	unsigned short oldid;
 
@@ -7252,15 +7385,16 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;
 
+	objcg = folio_objcg(page_folio(page));
+	VM_WARN_ON_ONCE_PAGE(!objcg, page);
+	if (!objcg)
+		return;
+
 	/*
 	 * Interrupts should be disabled by the caller (see the comments below),
 	 * which can serve as RCU read-side critical sections.
 	 */
-	memcg = page_memcg(page);
-
-	VM_WARN_ON_ONCE_PAGE(!memcg, page);
-	if (!memcg)
-		return;
+	memcg = obj_cgroup_memcg(objcg);
 
 	/*
 	 * In case the memcg owning these pages has been offlined and doesn't
@@ -7279,7 +7413,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 
 	page->memcg_data = 0;
 
-	if (!mem_cgroup_is_root(memcg))
+	if (!obj_cgroup_is_root(objcg))
 		page_counter_uncharge(&memcg->memory, nr_entries);
 
 	if (!cgroup_memory_noswap && memcg != swap_memcg) {
@@ -7298,7 +7432,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	mem_cgroup_charge_statistics(memcg, -nr_entries);
 	memcg_check_events(memcg, page_to_nid(page));
 
-	css_put(&memcg->css);
+	obj_cgroup_put(objcg);
 }
 
 /**
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 10/12] mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg()
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
                   ` (8 preceding siblings ...)
  2021-08-14  5:25 ` [PATCH v1 09/12] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-14  5:25 ` [PATCH v1 11/12] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function Muchun Song
  2021-08-14  5:25 ` [PATCH v1 12/12] mm: lru: use lruvec lock to serialize memcg changes Muchun Song
  11 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

lock_page_memcg() no longer locks the binding between a page and its
memcg; it actually locks the binding between a page and its objcg. So
rename lock_page_memcg() to lock_page_objcg().

This is just code cleanup without any functionality changes.
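
For illustration only, here is a stand-alone user-space model of what
the renamed lock stabilizes. In the kernel the lock is the memcg's
move_lock rather than a per-page mutex; that simplification and all
demo_* names are invented for this sketch.

```c
/*
 * User-space model of what lock_page_objcg()/unlock_page_objcg()
 * stabilize: while the lock is held, the page's objcg cannot be
 * switched, so per-objcg dirty accounting stays consistent.
 */
#include <pthread.h>
#include <stdio.h>

struct demo_objcg { const char *name; long nr_dirty; };

struct demo_page {
	pthread_mutex_t move_lock;	/* stand-in for memcg->move_lock */
	struct demo_objcg *objcg;
	int dirty;
};

static void demo_lock_page_objcg(struct demo_page *page)
{
	pthread_mutex_lock(&page->move_lock);
}

static void demo_unlock_page_objcg(struct demo_page *page)
{
	pthread_mutex_unlock(&page->move_lock);
}

static void demo_set_page_dirty(struct demo_page *page)
{
	demo_lock_page_objcg(page);
	if (!page->dirty) {		/* page->objcg is stable here */
		page->dirty = 1;
		page->objcg->nr_dirty++;
	}
	demo_unlock_page_objcg(page);
}

int main(void)
{
	static struct demo_objcg objcg = { "child", 0 };
	static struct demo_page page = {
		PTHREAD_MUTEX_INITIALIZER, &objcg, 0
	};

	demo_set_page_dirty(&page);
	printf("%s nr_dirty = %ld\n", objcg.name, objcg.nr_dirty);
	return 0;
}
```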

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/cgroup-v1/memory.rst |  2 +-
 fs/buffer.c                                    |  8 ++++----
 include/linux/memcontrol.h                     | 14 +++++++-------
 mm/filemap.c                                   |  2 +-
 mm/huge_memory.c                               |  4 ++--
 mm/memcontrol.c                                | 24 ++++++++++++------------
 mm/page-writeback.c                            |  6 +++---
 mm/rmap.c                                      | 14 +++++++-------
 8 files changed, 37 insertions(+), 37 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 41191b5fb69d..dd582312b91a 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -291,7 +291,7 @@ Lock order is as follows:
 
   Page lock (PG_locked bit of page->flags)
     mm->page_table_lock or split pte_lock
-      lock_page_memcg (memcg->move_lock)
+      lock_page_objcg (memcg->move_lock)
         mapping->i_pages lock
           lruvec->lru_lock.
 
diff --git a/fs/buffer.c b/fs/buffer.c
index 88123f84885a..c3d20ebb1d0b 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -635,14 +635,14 @@ int __set_page_dirty_buffers(struct page *page)
 	 * Lock out page's memcg migration to keep PageDirty
 	 * synchronized with per-memcg dirty page counters.
 	 */
-	lock_page_memcg(page);
+	lock_page_objcg(page);
 	newly_dirty = !TestSetPageDirty(page);
 	spin_unlock(&mapping->private_lock);
 
 	if (newly_dirty)
 		__set_page_dirty(page, mapping, 1);
 
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
 
 	if (newly_dirty)
 		__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
@@ -1139,13 +1139,13 @@ void mark_buffer_dirty(struct buffer_head *bh)
 		struct page *page = bh->b_page;
 		struct address_space *mapping = NULL;
 
-		lock_page_memcg(page);
+		lock_page_objcg(page);
 		if (!TestSetPageDirty(page)) {
 			mapping = page_mapping(page);
 			if (mapping)
 				__set_page_dirty(page, mapping, 0);
 		}
-		unlock_page_memcg(page);
+		unlock_page_objcg(page);
 		if (mapping)
 			__mark_inode_dirty(mapping->host, I_DIRTY_PAGES);
 	}
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 92c98a952bab..611e6a6d7b00 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -417,12 +417,12 @@ static inline struct obj_cgroup *folio_objcg(struct folio *folio)
  * proper memory cgroup pointer. It's not safe to call this function
  * against some type of folios, e.g. slab folios or ex-slab folios.
  *
- * For a folio any of the following ensures folio and memcg binding
- * stability:
+ * For a folio any of the following ensures folio and objcg binding
+ * stability (but the folio can be reparented to its parent memcg):
  *
  * - the folio lock
  * - LRU isolation
- * - lock_page_memcg()
+ * - lock_page_objcg()
  * - exclusive reference
  *
  * Based on the stable binding of folio and objcg, for a folio any of the
@@ -970,8 +970,8 @@ extern bool cgroup_memory_noswap;
 
 void folio_memcg_lock(struct folio *folio);
 void folio_memcg_unlock(struct folio *folio);
-void lock_page_memcg(struct page *page);
-void unlock_page_memcg(struct page *page);
+void lock_page_objcg(struct page *page);
+void unlock_page_objcg(struct page *page);
 
 void __mod_memcg_state(struct mem_cgroup *memcg, int idx, int val);
 
@@ -1388,11 +1388,11 @@ mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
 {
 }
 
-static inline void lock_page_memcg(struct page *page)
+static inline void lock_page_objcg(struct page *page)
 {
 }
 
-static inline void unlock_page_memcg(struct page *page)
+static inline void unlock_page_objcg(struct page *page)
 {
 }
 
diff --git a/mm/filemap.c b/mm/filemap.c
index 53913fced7ae..b5298fd17d14 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -112,7 +112,7 @@
  *    ->i_pages lock		(page_remove_rmap->set_page_dirty)
  *    bdi.wb->list_lock		(page_remove_rmap->set_page_dirty)
  *    ->inode->i_lock		(page_remove_rmap->set_page_dirty)
- *    ->memcg->move_lock	(page_remove_rmap->lock_page_memcg)
+ *    ->memcg->move_lock	(page_remove_rmap->lock_page_objcg)
  *    bdi.wb->list_lock		(zap_pte_range->set_page_dirty)
  *    ->inode->i_lock		(zap_pte_range->set_page_dirty)
  *    ->private_lock		(zap_pte_range->__set_page_dirty_buffers)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 731e1b894407..eb3c07c00d2c 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2227,7 +2227,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 				atomic_inc(&page[i]._mapcount);
 		}
 
-		lock_page_memcg(page);
+		lock_page_objcg(page);
 		if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
 			/* Last compound_mapcount is gone. */
 			__mod_lruvec_page_state(page, NR_ANON_THPS,
@@ -2238,7 +2238,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 					atomic_dec(&page[i]._mapcount);
 			}
 		}
-		unlock_page_memcg(page);
+		unlock_page_objcg(page);
 	}
 
 	smp_wmb(); /* make pte visible before pmd */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 52a878081452..9464e6d2d735 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2135,18 +2135,18 @@ void folio_memcg_lock(struct folio *folio)
 	 * When charge migration first begins, we can have multiple
 	 * critical sections holding the fast-path RCU lock and one
 	 * holding the slowpath move_lock. Track the task who has the
-	 * move_lock for unlock_page_memcg().
+	 * move_lock for unlock_page_objcg().
 	 */
 	memcg->move_lock_task = current;
 	memcg->move_lock_flags = flags;
 }
 EXPORT_SYMBOL(folio_memcg_lock);
 
-void lock_page_memcg(struct page *page)
+void lock_page_objcg(struct page *page)
 {
 	folio_memcg_lock(page_folio(page));
 }
-EXPORT_SYMBOL(lock_page_memcg);
+EXPORT_SYMBOL(lock_page_objcg);
 
 static void __folio_memcg_unlock(struct mem_cgroup *memcg)
 {
@@ -2176,11 +2176,11 @@ void folio_memcg_unlock(struct folio *folio)
 }
 EXPORT_SYMBOL(folio_memcg_unlock);
 
-void unlock_page_memcg(struct page *page)
+void unlock_page_objcg(struct page *page)
 {
 	folio_memcg_unlock(page_folio(page));
 }
-EXPORT_SYMBOL(unlock_page_memcg);
+EXPORT_SYMBOL(unlock_page_objcg);
 
 struct obj_stock {
 #ifdef CONFIG_MEMCG_KMEM
@@ -2887,7 +2887,7 @@ static void commit_charge(struct folio *folio, struct obj_cgroup *objcg)
 	 *
 	 * - the page lock
 	 * - LRU isolation
-	 * - lock_page_memcg()
+	 * - lock_page_objcg()
 	 * - exclusive reference
 	 */
 	folio->memcg_data = (unsigned long)objcg;
@@ -5826,7 +5826,7 @@ static int mem_cgroup_move_account(struct page *page,
 	 * with (un)charging, migration, LRU putback, or anything else
 	 * that would rely on a stable page's memory cgroup.
 	 *
-	 * Note that lock_page_memcg is a memcg lock, not a page lock,
+	 * Note that lock_page_objcg is a memcg lock, not a page lock,
 	 * to save space. As soon as we switch page's memory cgroup to a
 	 * new memcg that isn't locked, the above state can change
 	 * concurrently again. Make sure we're truly done with it.
@@ -6281,7 +6281,7 @@ static void mem_cgroup_move_charge(void)
 {
 	lru_add_drain_all();
 	/*
-	 * Signal lock_page_memcg() to take the memcg's move_lock
+	 * Signal lock_page_objcg() to take the memcg's move_lock
 	 * while we're moving its pages to another memcg. Then wait
 	 * for already started RCU-only updates to finish.
 	 */
@@ -6313,14 +6313,14 @@ static void mem_cgroup_move_charge(void)
 	/*
 	 * Moving its pages to another memcg is finished. Wait for already
 	 * started RCU-only updates to finish to make sure that the caller
-	 * of lock_page_memcg() can unlock the correct move_lock. The
+	 * of lock_page_objcg() can unlock the correct move_lock. The
 	 * possible bad scenario would look like:
 	 *
 	 * CPU0:				CPU1:
 	 * mem_cgroup_move_charge()
 	 *     walk_page_range()
 	 *
-	 *					lock_page_memcg(page)
+	 *					lock_page_objcg(page)
 	 *					    memcg = folio_memcg()
 	 *					    spin_lock_irqsave(&memcg->move_lock)
 	 *					    memcg->move_lock_task = current
@@ -6331,14 +6331,14 @@ static void mem_cgroup_move_charge(void)
 	 *     memcg_offline_kmem()
 	 *         memcg_reparent_objcgs() <== reparented
 	 *
-	 *					unlock_page_memcg(page)
+	 *					unlock_page_objcg(page)
 	 *					    memcg = folio_memcg() <== memcg has been changed
 	 *					    if (memcg->move_lock_task == current) <== false
 	 *					        spin_unlock_irqrestore(&memcg->move_lock)
 	 *
 	 * Once mem_cgroup_move_charge() returns (it means that the cgroup_mutex
 	 * would be released soon), the page can be reparented to its parent
-	 * memcg. When the unlock_page_memcg() is called for the page, we will
+	 * memcg. When the unlock_page_objcg() is called for the page, we will
 	 * miss unlock the move_lock. So using synchronize_rcu to wait for
 	 * already started RCU-only updates to finish before this function
 	 * returns (mem_cgroup_move_charge() and mem_cgroup_css_offline() are
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 5b171aa067c1..d8fd7308dd39 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2434,7 +2434,7 @@ EXPORT_SYMBOL(__set_page_dirty_no_writeback);
 /*
  * Helper function for set_page_dirty family.
  *
- * Caller must hold lock_page_memcg().
+ * Caller must hold lock_page_objcg().
  *
  * NOTE: This relies on being atomic wrt interrupts.
  */
@@ -2468,7 +2468,7 @@ static void folio_account_dirtied(struct folio *folio,
 /*
  * Helper function for deaccounting dirty page without writeback.
  *
- * Caller must hold lock_page_memcg().
+ * Caller must hold lock_page_objcg().
  */
 void folio_account_cleaned(struct folio *folio, struct address_space *mapping,
 			  struct bdi_writeback *wb)
@@ -2489,7 +2489,7 @@ void folio_account_cleaned(struct folio *folio, struct address_space *mapping,
  * If warn is true, then emit a warning if the folio is not uptodate and has
  * not been truncated.
  *
- * The caller must hold lock_page_memcg().
+ * The caller must hold lock_page_objcg().
  */
 void __folio_mark_dirty(struct folio *folio, struct address_space *mapping,
 			     int warn)
diff --git a/mm/rmap.c b/mm/rmap.c
index 09c41e1f44d8..232494888628 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -32,7 +32,7 @@
  *                 swap_lock (in swap_duplicate, swap_info_get)
  *                   mmlist_lock (in mmput, drain_mmlist and others)
  *                   mapping->private_lock (in __set_page_dirty_buffers)
- *                     lock_page_memcg move_lock (in __set_page_dirty_buffers)
+ *                     lock_page_objcg move_lock (in __set_page_dirty_buffers)
  *                       i_pages lock (widely used)
  *                         lruvec->lru_lock (in folio_lruvec_lock_irq)
  *                   inode->i_lock (in set_page_dirty's __mark_inode_dirty)
@@ -1125,7 +1125,7 @@ void do_page_add_anon_rmap(struct page *page,
 	bool first;
 
 	if (unlikely(PageKsm(page)))
-		lock_page_memcg(page);
+		lock_page_objcg(page);
 	else
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
 
@@ -1153,7 +1153,7 @@ void do_page_add_anon_rmap(struct page *page,
 	}
 
 	if (unlikely(PageKsm(page))) {
-		unlock_page_memcg(page);
+		unlock_page_objcg(page);
 		return;
 	}
 
@@ -1213,7 +1213,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 	int i, nr = 1;
 
 	VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
-	lock_page_memcg(page);
+	lock_page_objcg(page);
 	if (compound && PageTransHuge(page)) {
 		int nr_pages = thp_nr_pages(page);
 
@@ -1242,7 +1242,7 @@ void page_add_file_rmap(struct page *page, bool compound)
 	}
 	__mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
 out:
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
 }
 
 static void page_remove_file_rmap(struct page *page, bool compound)
@@ -1343,7 +1343,7 @@ static void page_remove_anon_compound_rmap(struct page *page)
  */
 void page_remove_rmap(struct page *page, bool compound)
 {
-	lock_page_memcg(page);
+	lock_page_objcg(page);
 
 	if (!PageAnon(page)) {
 		page_remove_file_rmap(page, compound);
@@ -1382,7 +1382,7 @@ void page_remove_rmap(struct page *page, bool compound)
 	 * faster for those pages still in swapcache.
 	 */
 out:
-	unlock_page_memcg(page);
+	unlock_page_objcg(page);
 }
 
 /*
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 11/12] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
                   ` (9 preceding siblings ...)
  2021-08-14  5:25 ` [PATCH v1 10/12] mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg() Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  2021-08-14  5:25 ` [PATCH v1 12/12] mm: lru: use lruvec lock to serialize memcg changes Muchun Song
  11 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

We need to make sure that the page is deleted from or added to the
correct lruvec list. So add a VM_BUG_ON_FOLIO() to catch invalid
users.
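
For illustration only, a stand-alone user-space model of the added
check; all demo_* names are invented for this sketch.

```c
/*
 * Before a folio is linked into an LRU list, assert that the lruvec
 * supplied by the caller is the one the folio actually belongs to.
 */
#include <assert.h>
#include <stdio.h>

struct demo_lruvec { const char *name; };
struct demo_folio { struct demo_lruvec *lruvec; };

static int demo_folio_matches_lruvec(struct demo_folio *folio,
				     struct demo_lruvec *lruvec)
{
	return folio->lruvec == lruvec;
}

static void demo_add_to_lru_list(struct demo_folio *folio,
				 struct demo_lruvec *lruvec)
{
	/* models VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio) */
	assert(demo_folio_matches_lruvec(folio, lruvec));
	printf("folio added to %s\n", lruvec->name);
}

int main(void)
{
	struct demo_lruvec lruvec = { "node0:memcg-A" };
	struct demo_folio folio = { &lruvec };

	demo_add_to_lru_list(&folio, &lruvec);	/* passes the check */
	return 0;
}
```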

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/mm_inline.h | 15 ++++++++++++---
 mm/vmscan.c               |  1 -
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index e2ec68b0515c..60eb827a41fe 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -103,7 +103,10 @@ void lruvec_add_folio(struct lruvec *lruvec, struct folio *folio)
 static __always_inline void add_page_to_lru_list(struct page *page,
 				struct lruvec *lruvec)
 {
-	lruvec_add_folio(lruvec, page_folio(page));
+	struct folio *folio = page_folio(page);
+
+	VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+	lruvec_add_folio(lruvec, folio);
 }
 
 static __always_inline
@@ -119,7 +122,10 @@ void lruvec_add_folio_tail(struct lruvec *lruvec, struct folio *folio)
 static __always_inline void add_page_to_lru_list_tail(struct page *page,
 				struct lruvec *lruvec)
 {
-	lruvec_add_folio_tail(lruvec, page_folio(page));
+	struct folio *folio = page_folio(page);
+
+	VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+	lruvec_add_folio_tail(lruvec, folio);
 }
 
 static __always_inline
@@ -133,6 +139,9 @@ void lruvec_del_folio(struct lruvec *lruvec, struct folio *folio)
 static __always_inline void del_page_from_lru_list(struct page *page,
 				struct lruvec *lruvec)
 {
-	lruvec_del_folio(lruvec, page_folio(page));
+	struct folio *folio = page_folio(page);
+
+	VM_BUG_ON_FOLIO(!folio_matches_lruvec(folio, lruvec), folio);
+	lruvec_del_folio(lruvec, folio);
 }
 #endif
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8ce42858ad5d..902d36ec91a3 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2204,7 +2204,6 @@ static unsigned int move_pages_to_lru(struct list_head *list)
 			continue;
 		}
 
-		VM_BUG_ON_PAGE(!folio_matches_lruvec(folio, lruvec), page);
 		add_page_to_lru_list(page, lruvec);
 		nr_pages = thp_nr_pages(page);
 		nr_moved += nr_pages;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v1 12/12] mm: lru: use lruvec lock to serialize memcg changes
  2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
                   ` (10 preceding siblings ...)
  2021-08-14  5:25 ` [PATCH v1 11/12] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function Muchun Song
@ 2021-08-14  5:25 ` Muchun Song
  11 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-14  5:25 UTC (permalink / raw)
  To: guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: linux-kernel, linux-mm, duanxiongchun, fam.zheng, bsingharora,
	shy828301, alexs, smuchun, zhengqi.arch, Muchun Song

As described by commit fc574c23558c ("mm/swap.c: serialize memcg
changes in pagevec_lru_move_fn"), TestClearPageLRU() aims to
serialize mem_cgroup_move_account() during pagevec_lru_move_fn().
Now folio_lruvec_lock*() can detect whether the page's memcg has been
changed, so we can use the lruvec lock to serialize
mem_cgroup_move_account() during pagevec_lru_move_fn(). This
change is a partial revert of the commit fc574c23558c ("mm/swap.c:
serialize memcg changes in pagevec_lru_move_fn").

Besides, pagevec_lru_move_fn() is much hotter than
mem_cgroup_move_account(), so removing an atomic operation from it is
an optimization. This change also avoids dirtying the cacheline of a
page that is not on the LRU.
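
For illustration only, a stand-alone user-space model of the "lock,
then re-check the binding" pattern used by folio_lruvec_lock(); all
demo_* names are invented for this sketch.

```c
/*
 * The lruvec a folio points to can change before the lock is taken,
 * so after locking we verify the binding is still current and retry
 * otherwise.
 */
#include <pthread.h>
#include <stdio.h>

struct demo_lruvec {
	pthread_mutex_t lru_lock;
	const char *name;
};

struct demo_folio {
	struct demo_lruvec *lruvec;	/* may be switched concurrently */
};

static struct demo_lruvec *demo_folio_lruvec_lock(struct demo_folio *folio)
{
	struct demo_lruvec *lruvec;

retry:
	lruvec = __atomic_load_n(&folio->lruvec, __ATOMIC_ACQUIRE);
	pthread_mutex_lock(&lruvec->lru_lock);

	/* The folio may have been moved before we got the lock. */
	if (__atomic_load_n(&folio->lruvec, __ATOMIC_ACQUIRE) != lruvec) {
		pthread_mutex_unlock(&lruvec->lru_lock);
		goto retry;
	}

	return lruvec;	/* binding is stable until lru_lock is dropped */
}

int main(void)
{
	static struct demo_lruvec vec = {
		PTHREAD_MUTEX_INITIALIZER, "node0:memcg-A"
	};
	static struct demo_folio folio = { &vec };
	struct demo_lruvec *locked = demo_folio_lruvec_lock(&folio);

	printf("holding lru_lock of %s\n", locked->name);
	pthread_mutex_unlock(&locked->lru_lock);
	return 0;
}
```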

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memcontrol.c | 31 +++++++++++++++++++++++++++++++
 mm/swap.c       | 45 ++++++++++++++-------------------------------
 mm/vmscan.c     |  9 ++++-----
 3 files changed, 49 insertions(+), 36 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 9464e6d2d735..7732ccf7d180 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1286,12 +1286,38 @@ struct lruvec *folio_lruvec_lock(struct folio *folio)
 	lruvec = folio_lruvec(folio);
 	spin_lock(&lruvec->lru_lock);
 
+	/*
+	 * The memcg of the page can be changed by any of the following routines:
+	 *
+	 * 1) mem_cgroup_move_account() or
+	 * 2) memcg_reparent_objcgs()
+	 *
+	 * The possible bad scenario would look like:
+	 *
+	 * CPU0:                CPU1:                CPU2:
+	 * lruvec = folio_lruvec()
+	 *
+	 *                      if (!isolate_lru_page())
+	 *                              mem_cgroup_move_account()
+	 *
+	 *                                           memcg_reparent_objcgs()
+	 *
+	 * spin_lock(&lruvec->lru_lock)
+	 *                ^^^^^^
+	 *              wrong lock
+	 *
+	 * Either CPU1 or CPU2 can change the page's memcg, so we need to
+	 * check whether the page's memcg has changed; if so, we should
+	 * retry and take the new lruvec lock.
+	 */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock(&lruvec->lru_lock);
 		goto retry;
 	}
 
 	/*
+	 * When we reach here, it means that the folio_memcg(folio) is stable.
+	 *
 	 * Preemption is disabled in the internal of spin_lock, which can serve
 	 * as RCU read-side critical sections.
 	 */
@@ -1309,6 +1335,7 @@ struct lruvec *folio_lruvec_lock_irq(struct folio *folio)
 	lruvec = folio_lruvec(folio);
 	spin_lock_irq(&lruvec->lru_lock);
 
+	/* See the comments in folio_lruvec_lock(). */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock_irq(&lruvec->lru_lock);
 		goto retry;
@@ -1330,6 +1357,7 @@ struct lruvec *folio_lruvec_lock_irqsave(struct folio *folio,
 	lruvec = folio_lruvec(folio);
 	spin_lock_irqsave(&lruvec->lru_lock, *flags);
 
+	/* See the comments in folio_lruvec_lock(). */
 	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
 		spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
 		goto retry;
@@ -5836,7 +5864,10 @@ static int mem_cgroup_move_account(struct page *page,
 	obj_cgroup_get(to->objcg);
 	obj_cgroup_put(from->objcg);
 
+	/* See the comments in folio_lruvec_lock(). */
+	spin_lock(&from_vec->lru_lock);
 	folio->memcg_data = (unsigned long)to->objcg;
+	spin_unlock(&from_vec->lru_lock);
 
 	__folio_memcg_unlock(from);
 
diff --git a/mm/swap.c b/mm/swap.c
index 9554ff008fe6..00b6776860e8 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -191,14 +191,8 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 		struct page *page = pvec->pages[i];
 		struct folio *folio = page_folio(page);
 
-		/* block memcg migration during page moving between lru */
-		if (!TestClearPageLRU(page))
-			continue;
-
 		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
 		(*move_fn)(page, lruvec);
-
-		SetPageLRU(page);
 	}
 	if (lruvec)
 		unlock_page_lruvec_irqrestore(lruvec, flags);
@@ -210,7 +204,7 @@ static void pagevec_move_tail_fn(struct page *page, struct lruvec *lruvec)
 {
 	struct folio *folio = page_folio(page);
 
-	if (!folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && !folio_test_unevictable(folio)) {
 		lruvec_del_folio(lruvec, folio);
 		folio_clear_active(folio);
 		lruvec_add_folio_tail(lruvec, folio);
@@ -305,7 +299,8 @@ void lru_note_cost_folio(struct folio *folio)
 
 static void __folio_activate(struct folio *folio, struct lruvec *lruvec)
 {
-	if (!folio_test_active(folio) && !folio_test_unevictable(folio)) {
+	if (folio_test_lru(folio) && !folio_test_active(folio) &&
+	    !folio_test_unevictable(folio)) {
 		long nr_pages = folio_nr_pages(folio);
 
 		lruvec_del_folio(lruvec, folio);
@@ -362,12 +357,9 @@ static void folio_activate(struct folio *folio)
 {
 	struct lruvec *lruvec;
 
-	if (folio_test_clear_lru(folio)) {
-		lruvec = folio_lruvec_lock_irq(folio);
-		__folio_activate(folio, lruvec);
-		unlock_page_lruvec_irq(lruvec);
-		folio_set_lru(folio);
-	}
+	lruvec = folio_lruvec_lock_irq(folio);
+	__folio_activate(folio, lruvec);
+	unlock_page_lruvec_irq(lruvec);
 }
 #endif
 
@@ -520,6 +512,9 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 	bool active = PageActive(page);
 	int nr_pages = thp_nr_pages(page);
 
+	if (!PageLRU(page))
+		return;
+
 	if (PageUnevictable(page))
 		return;
 
@@ -557,7 +552,7 @@ static void lru_deactivate_file_fn(struct page *page, struct lruvec *lruvec)
 
 static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 {
-	if (PageActive(page) && !PageUnevictable(page)) {
+	if (PageLRU(page) && PageActive(page) && !PageUnevictable(page)) {
 		int nr_pages = thp_nr_pages(page);
 
 		del_page_from_lru_list(page, lruvec);
@@ -573,7 +568,7 @@ static void lru_deactivate_fn(struct page *page, struct lruvec *lruvec)
 
 static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec)
 {
-	if (PageAnon(page) && PageSwapBacked(page) &&
+	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
 	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		int nr_pages = thp_nr_pages(page);
 
@@ -983,8 +978,9 @@ void __pagevec_release(struct pagevec *pvec)
 }
 EXPORT_SYMBOL(__pagevec_release);
 
-static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
+static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec)
 {
+	struct folio *folio = page_folio(page);
 	int was_unevictable = folio_test_clear_unevictable(folio);
 	long nr_pages = folio_nr_pages(folio);
 
@@ -1040,20 +1036,7 @@ static void __pagevec_lru_add_fn(struct folio *folio, struct lruvec *lruvec)
  */
 void __pagevec_lru_add(struct pagevec *pvec)
 {
-	int i;
-	struct lruvec *lruvec = NULL;
-	unsigned long flags = 0;
-
-	for (i = 0; i < pagevec_count(pvec); i++) {
-		struct folio *folio = page_folio(pvec->pages[i]);
-
-		lruvec = folio_lruvec_relock_irqsave(folio, lruvec, &flags);
-		__pagevec_lru_add_fn(folio, lruvec);
-	}
-	if (lruvec)
-		unlock_page_lruvec_irqrestore(lruvec, flags);
-	release_pages(pvec->pages, pvec->nr);
-	pagevec_reinit(pvec);
+	pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn);
 }
 
 /**
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 902d36ec91a3..7cff2f748df8 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4667,18 +4667,17 @@ void check_move_unevictable_pages(struct pagevec *pvec)
 		nr_pages = thp_nr_pages(page);
 		pgscanned += nr_pages;
 
-		/* block memcg migration during page moving between lru */
-		if (!TestClearPageLRU(page))
+		lruvec = folio_lruvec_relock_irq(folio, lruvec);
+
+		if (!PageLRU(page) || !PageUnevictable(page))
 			continue;
 
-		lruvec = folio_lruvec_relock_irq(folio, lruvec);
-		if (page_evictable(page) && PageUnevictable(page)) {
+		if (page_evictable(page)) {
 			del_page_from_lru_list(page, lruvec);
 			ClearPageUnevictable(page);
 			add_page_to_lru_list(page, lruvec);
 			pgrescued += nr_pages;
 		}
-		SetPageLRU(page);
 	}
 
 	if (lruvec) {
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 05/12] mm: thp: introduce folio_split_queue_lock{_irqsave}()
  2021-08-14  5:25 ` [PATCH v1 05/12] mm: thp: introduce folio_split_queue_lock{_irqsave}() Muchun Song
@ 2021-08-14  8:22     ` kernel test robot
  2021-08-14 10:38     ` kernel test robot
  1 sibling, 0 replies; 29+ messages in thread
From: kernel test robot @ 2021-08-14  8:22 UTC (permalink / raw)
  To: Muchun Song, guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: clang-built-linux, kbuild-all, linux-kernel, linux-mm,
	duanxiongchun, fam.zheng

[-- Attachment #1: Type: text/plain, Size: 8512 bytes --]

Hi Muchun,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on next-20210813]
[cannot apply to hnaz-linux-mm/master cgroup/for-next linus/master v5.14-rc5 v5.14-rc4 v5.14-rc3 v5.14-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
base:    4b358aabb93a2c654cd1dcab1a25a589f6e2b153
config: s390-randconfig-r044-20210814 (attached as .config)
compiler: clang version 14.0.0 (https://github.com/llvm/llvm-project 1f7b25ea76a925aca690da28de9d78db7ca99d0c)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install s390 cross compiling tool for clang build
        # apt-get install binutils-s390x-linux-gnu
        # https://github.com/0day-ci/linux/commit/3460bcf13b968edf6f4621c0e0dcde46500957e5
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
        git checkout 3460bcf13b968edf6f4621c0e0dcde46500957e5
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=s390 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   In file included from mm/huge_memory.c:16:
   In file included from include/linux/rmap.h:12:
   In file included from include/linux/memcontrol.h:22:
   In file included from include/linux/writeback.h:14:
   In file included from include/linux/blk-cgroup.h:23:
   In file included from include/linux/blkdev.h:23:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:75:
   include/asm-generic/io.h:464:31: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __raw_readb(PCI_IOBASE + addr);
                             ~~~~~~~~~~ ^
   include/asm-generic/io.h:477:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le16_to_cpu((__le16 __force)__raw_readw(PCI_IOBASE + addr));
                                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:36:59: note: expanded from macro '__le16_to_cpu'
   #define __le16_to_cpu(x) __swab16((__force __u16)(__le16)(x))
                                                             ^
   include/uapi/linux/swab.h:102:54: note: expanded from macro '__swab16'
   #define __swab16(x) (__u16)__builtin_bswap16((__u16)(x))
                                                        ^
   In file included from mm/huge_memory.c:16:
   In file included from include/linux/rmap.h:12:
   In file included from include/linux/memcontrol.h:22:
   In file included from include/linux/writeback.h:14:
   In file included from include/linux/blk-cgroup.h:23:
   In file included from include/linux/blkdev.h:23:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:75:
   include/asm-generic/io.h:490:61: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           val = __le32_to_cpu((__le32 __force)__raw_readl(PCI_IOBASE + addr));
                                                           ~~~~~~~~~~ ^
   include/uapi/linux/byteorder/big_endian.h:34:59: note: expanded from macro '__le32_to_cpu'
   #define __le32_to_cpu(x) __swab32((__force __u32)(__le32)(x))
                                                             ^
   include/uapi/linux/swab.h:115:54: note: expanded from macro '__swab32'
   #define __swab32(x) (__u32)__builtin_bswap32((__u32)(x))
                                                        ^
   In file included from mm/huge_memory.c:16:
   In file included from include/linux/rmap.h:12:
   In file included from include/linux/memcontrol.h:22:
   In file included from include/linux/writeback.h:14:
   In file included from include/linux/blk-cgroup.h:23:
   In file included from include/linux/blkdev.h:23:
   In file included from include/linux/scatterlist.h:9:
   In file included from arch/s390/include/asm/io.h:75:
   include/asm-generic/io.h:501:33: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writeb(value, PCI_IOBASE + addr);
                               ~~~~~~~~~~ ^
   include/asm-generic/io.h:511:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writew((u16 __force)cpu_to_le16(value), PCI_IOBASE + addr);
                                                         ~~~~~~~~~~ ^
   include/asm-generic/io.h:521:59: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           __raw_writel((u32 __force)cpu_to_le32(value), PCI_IOBASE + addr);
                                                         ~~~~~~~~~~ ^
   include/asm-generic/io.h:609:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsb(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:617:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsw(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:625:20: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           readsl(PCI_IOBASE + addr, buffer, count);
                  ~~~~~~~~~~ ^
   include/asm-generic/io.h:634:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesb(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
   include/asm-generic/io.h:643:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesw(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
   include/asm-generic/io.h:652:21: warning: performing pointer arithmetic on a null pointer has undefined behavior [-Wnull-pointer-arithmetic]
           writesl(PCI_IOBASE + addr, buffer, count);
                   ~~~~~~~~~~ ^
>> mm/huge_memory.c:2797:21: warning: variable 'memcg' set but not used [-Wunused-but-set-variable]
           struct mem_cgroup *memcg;
                              ^
   13 warnings generated.


vim +/memcg +2797 mm/huge_memory.c

  2793	
  2794	void deferred_split_huge_page(struct page *page)
  2795	{
  2796		struct deferred_split *ds_queue;
> 2797		struct mem_cgroup *memcg;
  2798		unsigned long flags;
  2799	
  2800		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
  2801	
  2802		/*
  2803		 * The try_to_unmap() in page reclaim path might reach here too,
  2804		 * this may cause a race condition to corrupt deferred split queue.
  2805		 * And, if page reclaim is already handling the same page, it is
  2806		 * unnecessary to handle it again in shrinker.
  2807		 *
  2808		 * Check PageSwapCache to determine if the page is being
  2809		 * handled by page reclaim since THP swap would add the page into
  2810		 * swap cache before calling try_to_unmap().
  2811		 */
  2812		if (PageSwapCache(page))
  2813			return;
  2814	
  2815		ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags);
  2816		memcg = split_queue_memcg(ds_queue);
  2817		if (list_empty(page_deferred_list(page))) {
  2818			count_vm_event(THP_DEFERRED_SPLIT_PAGE);
  2819			list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
  2820			ds_queue->split_queue_len++;
  2821	#ifdef CONFIG_MEMCG
  2822			if (memcg)
  2823				set_shrinker_bit(memcg, page_to_nid(page),
  2824						 deferred_split_shrinker.id);
  2825	#endif
  2826		}
  2827		split_queue_unlock_irqrestore(ds_queue, flags);
  2828	}
  2829	
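The warning above most likely fires on !CONFIG_MEMCG builds, where the only consumer of 'memcg' sits inside the #ifdef CONFIG_MEMCG block. A minimal sketch of one way to silence it, confining the variable to the block that actually uses it (illustrative only, not necessarily the fix adopted in a later revision of the series; split_queue_memcg() and the surrounding code are taken from the excerpt above):

```c
	if (list_empty(page_deferred_list(page))) {
		count_vm_event(THP_DEFERRED_SPLIT_PAGE);
		list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
		ds_queue->split_queue_len++;
#ifdef CONFIG_MEMCG
		{
			/* Only fetch the memcg where it is actually used. */
			struct mem_cgroup *memcg = split_queue_memcg(ds_queue);

			if (memcg)
				set_shrinker_bit(memcg, page_to_nid(page),
						 deferred_split_shrinker.id);
		}
#endif
	}
```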

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 28818 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 05/12] mm: thp: introduce folio_split_queue_lock{_irqsave}()
  2021-08-14  5:25 ` [PATCH v1 05/12] mm: thp: introduce folio_split_queue_lock{_irqsave}() Muchun Song
@ 2021-08-14 10:38     ` kernel test robot
  2021-08-14 10:38     ` kernel test robot
  1 sibling, 0 replies; 29+ messages in thread
From: kernel test robot @ 2021-08-14 10:38 UTC (permalink / raw)
  To: Muchun Song, guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: kbuild-all, linux-kernel, linux-mm, duanxiongchun, fam.zheng

[-- Attachment #1: Type: text/plain, Size: 3266 bytes --]

Hi Muchun,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on next-20210813]
[cannot apply to hnaz-linux-mm/master cgroup/for-next linus/master v5.14-rc5 v5.14-rc4 v5.14-rc3 v5.14-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
base:    4b358aabb93a2c654cd1dcab1a25a589f6e2b153
config: sparc64-randconfig-p002-20210814 (attached as .config)
compiler: sparc64-linux-gcc (GCC) 11.2.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/3460bcf13b968edf6f4621c0e0dcde46500957e5
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
        git checkout 3460bcf13b968edf6f4621c0e0dcde46500957e5
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross ARCH=sparc64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   mm/huge_memory.c: In function 'deferred_split_huge_page':
>> mm/huge_memory.c:2797:28: warning: variable 'memcg' set but not used [-Wunused-but-set-variable]
    2797 |         struct mem_cgroup *memcg;
         |                            ^~~~~


vim +/memcg +2797 mm/huge_memory.c

  2793	
  2794	void deferred_split_huge_page(struct page *page)
  2795	{
  2796		struct deferred_split *ds_queue;
> 2797		struct mem_cgroup *memcg;
  2798		unsigned long flags;
  2799	
  2800		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
  2801	
  2802		/*
  2803		 * The try_to_unmap() in page reclaim path might reach here too,
  2804		 * this may cause a race condition to corrupt deferred split queue.
  2805		 * And, if page reclaim is already handling the same page, it is
  2806		 * unnecessary to handle it again in shrinker.
  2807		 *
  2808		 * Check PageSwapCache to determine if the page is being
  2809		 * handled by page reclaim since THP swap would add the page into
  2810		 * swap cache before calling try_to_unmap().
  2811		 */
  2812		if (PageSwapCache(page))
  2813			return;
  2814	
  2815		ds_queue = folio_split_queue_lock_irqsave(page_folio(page), &flags);
  2816		memcg = split_queue_memcg(ds_queue);
  2817		if (list_empty(page_deferred_list(page))) {
  2818			count_vm_event(THP_DEFERRED_SPLIT_PAGE);
  2819			list_add_tail(page_deferred_list(page), &ds_queue->split_queue);
  2820			ds_queue->split_queue_len++;
  2821	#ifdef CONFIG_MEMCG
  2822			if (memcg)
  2823				set_shrinker_bit(memcg, page_to_nid(page),
  2824						 deferred_split_shrinker.id);
  2825	#endif
  2826		}
  2827		split_queue_unlock_irqrestore(ds_queue, flags);
  2828	}
  2829	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 31572 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 09/12] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages
  2021-08-14  5:25 ` [PATCH v1 09/12] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages Muchun Song
@ 2021-08-14 14:08     ` kernel test robot
  0 siblings, 0 replies; 29+ messages in thread
From: kernel test robot @ 2021-08-14 14:08 UTC (permalink / raw)
  To: Muchun Song, guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: kbuild-all, linux-kernel, linux-mm, duanxiongchun, fam.zheng

[-- Attachment #1: Type: text/plain, Size: 8640 bytes --]

Hi Muchun,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on next-20210813]
[cannot apply to hnaz-linux-mm/master cgroup/for-next linus/master v5.14-rc5 v5.14-rc4 v5.14-rc3 v5.14-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
base:    4b358aabb93a2c654cd1dcab1a25a589f6e2b153
config: i386-randconfig-s032-20210814 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.3-348-gf0e6938b-dirty
        # https://github.com/0day-ci/linux/commit/33aa30f8c508696b533f8817a5212d6efdd424bb
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
        git checkout 33aa30f8c508696b533f8817a5212d6efdd424bb
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=i386 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)
   mm/memcontrol.c:4231:21: sparse: sparse: incompatible types in comparison expression (different address spaces):
   mm/memcontrol.c:4231:21: sparse:    struct mem_cgroup_threshold_ary [noderef] __rcu *
   mm/memcontrol.c:4231:21: sparse:    struct mem_cgroup_threshold_ary *
   mm/memcontrol.c:4233:21: sparse: sparse: incompatible types in comparison expression (different address spaces):
   mm/memcontrol.c:4233:21: sparse:    struct mem_cgroup_threshold_ary [noderef] __rcu *
   mm/memcontrol.c:4233:21: sparse:    struct mem_cgroup_threshold_ary *
   mm/memcontrol.c:4389:9: sparse: sparse: incompatible types in comparison expression (different address spaces):
   mm/memcontrol.c:4389:9: sparse:    struct mem_cgroup_threshold_ary [noderef] __rcu *
   mm/memcontrol.c:4389:9: sparse:    struct mem_cgroup_threshold_ary *
   mm/memcontrol.c:4483:9: sparse: sparse: incompatible types in comparison expression (different address spaces):
   mm/memcontrol.c:4483:9: sparse:    struct mem_cgroup_threshold_ary [noderef] __rcu *
   mm/memcontrol.c:4483:9: sparse:    struct mem_cgroup_threshold_ary *
>> mm/memcontrol.c:5836:26: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct obj_cgroup *objcg @@     got struct obj_cgroup [noderef] __rcu *objcg @@
   mm/memcontrol.c:5837:28: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct obj_cgroup *objcg @@     got struct obj_cgroup [noderef] __rcu *objcg @@
   mm/memcontrol.c:6133:23: sparse: sparse: incompatible types in comparison expression (different address spaces):
   mm/memcontrol.c:6133:23: sparse:    struct task_struct [noderef] __rcu *
   mm/memcontrol.c:6133:23: sparse:    struct task_struct *
   mm/memcontrol.c: note: in included file:
   include/linux/memcontrol.h:780:9: sparse: sparse: context imbalance in 'memcg_reparent_lruvec_lock' - wrong count at exit
   include/linux/memcontrol.h:780:9: sparse: sparse: context imbalance in 'memcg_reparent_lruvec_unlock' - unexpected unlock
   mm/memcontrol.c: note: in included file (through include/linux/rculist.h, include/linux/pid.h, include/linux/sched.h, ...):
   include/linux/rcupdate.h:718:9: sparse: sparse: context imbalance in 'folio_lruvec_lock' - wrong count at exit
   include/linux/rcupdate.h:718:9: sparse: sparse: context imbalance in 'folio_lruvec_lock_irq' - wrong count at exit
   include/linux/rcupdate.h:718:9: sparse: sparse: context imbalance in 'folio_lruvec_lock_irqsave' - wrong count at exit
   mm/memcontrol.c:2098:6: sparse: sparse: context imbalance in 'folio_memcg_lock' - wrong count at exit
   mm/memcontrol.c:2154:17: sparse: sparse: context imbalance in '__folio_memcg_unlock' - unexpected unlock

vim +5836 mm/memcontrol.c

  5733	
  5734	/**
  5735	 * mem_cgroup_move_account - move account of the page
  5736	 * @page: the page
  5737	 * @compound: charge the page as compound or small page
  5738	 * @from: mem_cgroup which the page is moved from.
  5739	 * @to:	mem_cgroup which the page is moved to. @from != @to.
  5740	 *
  5741	 * The caller must make sure the page is not on LRU (isolate_page() is useful.)
  5742	 *
  5743	 * This function doesn't do "charge" to new cgroup and doesn't do "uncharge"
  5744	 * from old cgroup.
  5745	 */
  5746	static int mem_cgroup_move_account(struct page *page,
  5747					   bool compound,
  5748					   struct mem_cgroup *from,
  5749					   struct mem_cgroup *to)
  5750	{
  5751		struct folio *folio = page_folio(page);
  5752		struct lruvec *from_vec, *to_vec;
  5753		struct pglist_data *pgdat;
  5754		unsigned int nr_pages = compound ? folio_nr_pages(folio) : 1;
  5755		int nid, ret;
  5756	
  5757		VM_BUG_ON(from == to);
  5758		VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
  5759		VM_BUG_ON(compound && !folio_test_multi(folio));
  5760	
  5761		/*
  5762		 * Prevent mem_cgroup_migrate() from looking at
  5763		 * page's memory cgroup of its source page while we change it.
  5764		 */
  5765		ret = -EBUSY;
  5766		if (!folio_trylock(folio))
  5767			goto out;
  5768	
  5769		ret = -EINVAL;
  5770		if (folio_memcg(folio) != from)
  5771			goto out_unlock;
  5772	
  5773		pgdat = folio_pgdat(folio);
  5774		from_vec = mem_cgroup_lruvec(from, pgdat);
  5775		to_vec = mem_cgroup_lruvec(to, pgdat);
  5776	
  5777		folio_memcg_lock(folio);
  5778	
  5779		if (folio_test_anon(folio)) {
  5780			if (folio_mapped(folio)) {
  5781				__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
  5782				__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
  5783				if (folio_test_transhuge(folio)) {
  5784					__mod_lruvec_state(from_vec, NR_ANON_THPS,
  5785							   -nr_pages);
  5786					__mod_lruvec_state(to_vec, NR_ANON_THPS,
  5787							   nr_pages);
  5788				}
  5789			}
  5790		} else {
  5791			__mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
  5792			__mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);
  5793	
  5794			if (folio_test_swapbacked(folio)) {
  5795				__mod_lruvec_state(from_vec, NR_SHMEM, -nr_pages);
  5796				__mod_lruvec_state(to_vec, NR_SHMEM, nr_pages);
  5797			}
  5798	
  5799			if (folio_mapped(folio)) {
  5800				__mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
  5801				__mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
  5802			}
  5803	
  5804			if (folio_test_dirty(folio)) {
  5805				struct address_space *mapping = folio_mapping(folio);
  5806	
  5807				if (mapping_can_writeback(mapping)) {
  5808					__mod_lruvec_state(from_vec, NR_FILE_DIRTY,
  5809							   -nr_pages);
  5810					__mod_lruvec_state(to_vec, NR_FILE_DIRTY,
  5811							   nr_pages);
  5812				}
  5813			}
  5814		}
  5815	
  5816		if (folio_test_writeback(folio)) {
  5817			__mod_lruvec_state(from_vec, NR_WRITEBACK, -nr_pages);
  5818			__mod_lruvec_state(to_vec, NR_WRITEBACK, nr_pages);
  5819		}
  5820	
  5821		/*
  5822		 * All state has been migrated, let's switch to the new memcg.
  5823		 *
  5824		 * It is safe to change page's memcg here because the page
  5825		 * is referenced, charged, isolated, and locked: we can't race
  5826		 * with (un)charging, migration, LRU putback, or anything else
  5827		 * that would rely on a stable page's memory cgroup.
  5828		 *
  5829		 * Note that lock_page_memcg is a memcg lock, not a page lock,
  5830		 * to save space. As soon as we switch page's memory cgroup to a
  5831		 * new memcg that isn't locked, the above state can change
  5832		 * concurrently again. Make sure we're truly done with it.
  5833		 */
  5834		smp_mb();
  5835	
> 5836		obj_cgroup_get(to->objcg);
  5837		obj_cgroup_put(from->objcg);
  5838	
  5839		folio->memcg_data = (unsigned long)to->objcg;
  5840	
  5841		__folio_memcg_unlock(from);
  5842	
  5843		ret = 0;
  5844		nid = folio_nid(folio);
  5845	
  5846		local_irq_disable();
  5847		mem_cgroup_charge_statistics(to, nr_pages);
  5848		memcg_check_events(to, nid);
  5849		mem_cgroup_charge_statistics(from, -nr_pages);
  5850		memcg_check_events(from, nid);
  5851		local_irq_enable();
  5852	out_unlock:
  5853		folio_unlock(folio);
  5854	out:
  5855		return ret;
  5856	}
  5857	
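The sparse warning at line 5836 is about passing the __rcu-annotated 'objcg' member of struct mem_cgroup straight into obj_cgroup_get()/obj_cgroup_put(). A minimal sketch of one way to keep sparse happy, assuming the protection the function already holds (folio lock plus folio_memcg_lock()) is what justifies the plain access; names are taken from the excerpt above and this is not necessarily how a later revision resolves it:

```c
	struct obj_cgroup *to_objcg, *from_objcg;

	/* Strip the __rcu annotation under the locks already held here. */
	to_objcg = rcu_dereference_protected(to->objcg, true);
	from_objcg = rcu_dereference_protected(from->objcg, true);

	obj_cgroup_get(to_objcg);
	obj_cgroup_put(from_objcg);

	folio->memcg_data = (unsigned long)to_objcg;
```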

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 37476 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage
  2021-08-14  5:25 ` [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage Muchun Song
@ 2021-08-14 22:23     ` kernel test robot
  2021-08-18  3:01   ` Roman Gushchin
  1 sibling, 0 replies; 29+ messages in thread
From: kernel test robot @ 2021-08-14 22:23 UTC (permalink / raw)
  To: Muchun Song, guro, hannes, mhocko, akpm, shakeelb, vdavydov.dev
  Cc: clang-built-linux, kbuild-all, linux-kernel, linux-mm,
	duanxiongchun, fam.zheng

[-- Attachment #1: Type: text/plain, Size: 2436 bytes --]

Hi Muchun,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on next-20210813]
[also build test ERROR on v5.14-rc5]
[cannot apply to hnaz-linux-mm/master cgroup/for-next linus/master v5.14-rc5 v5.14-rc4 v5.14-rc3]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
base:    4b358aabb93a2c654cd1dcab1a25a589f6e2b153
config: x86_64-randconfig-a004-20210815 (attached as .config)
compiler: clang version 14.0.0 (https://github.com/llvm/llvm-project 1f7b25ea76a925aca690da28de9d78db7ca99d0c)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/8a7c5ff0c21bed8ed69322118d0a01388ee8a6fd
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
        git checkout 8a7c5ff0c21bed8ed69322118d0a01388ee8a6fd
        # save the attached .config to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross O=build_dir ARCH=x86_64 SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> mm/memcontrol.c:266:6: error: redefinition of 'mem_cgroup_kmem_disabled'
   bool mem_cgroup_kmem_disabled(void)
        ^
   include/linux/memcontrol.h:1728:20: note: previous definition is here
   static inline bool mem_cgroup_kmem_disabled(void)
                      ^
   1 error generated.


vim +/mem_cgroup_kmem_disabled +266 mm/memcontrol.c

bf4f059954dcb2 Roman Gushchin 2020-08-06  265  
4d5c8aedc8aa6a Roman Gushchin 2021-06-02 @266  bool mem_cgroup_kmem_disabled(void)
4d5c8aedc8aa6a Roman Gushchin 2021-06-02  267  {
4d5c8aedc8aa6a Roman Gushchin 2021-06-02  268  	return cgroup_memory_nokmem;
4d5c8aedc8aa6a Roman Gushchin 2021-06-02  269  }
4d5c8aedc8aa6a Roman Gushchin 2021-06-02  270  
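The error means the out-of-line mem_cgroup_kmem_disabled() in mm/memcontrol.c is now built even when CONFIG_MEMCG_KMEM=n, clashing with the static inline stub that include/linux/memcontrol.h provides for that configuration. A minimal sketch of one possible resolution, keeping the out-of-line copy under the kmem guard (illustrative only, not necessarily what the series ends up doing):

```c
#ifdef CONFIG_MEMCG_KMEM
bool mem_cgroup_kmem_disabled(void)
{
	return cgroup_memory_nokmem;
}
#endif
```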

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 33243 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage
  2021-08-14  5:25 ` [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage Muchun Song
  2021-08-14 22:23     ` kernel test robot
@ 2021-08-18  3:01   ` Roman Gushchin
  2021-08-20  6:44       ` Muchun Song
  1 sibling, 1 reply; 29+ messages in thread
From: Roman Gushchin @ 2021-08-18  3:01 UTC (permalink / raw)
  To: Muchun Song
  Cc: hannes, mhocko, akpm, shakeelb, vdavydov.dev, linux-kernel,
	linux-mm, duanxiongchun, fam.zheng, bsingharora, shy828301,
	alexs, smuchun, zhengqi.arch

On Sat, Aug 14, 2021 at 01:25:08PM +0800, Muchun Song wrote:
> Pagecache pages are charged at allocation time and hold a
> reference to the original memory cgroup until they are reclaimed.
> Depending on the memory pressure, specific patterns of the page
> sharing between different cgroups and the cgroup creation and
> destruction rates, a large number of dying memory cgroups can be
> pinned by pagecache pages. It makes the page reclaim less efficient
> and wastes memory.
> 
> We can convert LRU pages and most other raw memcg pins to the objcg
> direction to fix this problem, and then the page->memcg will always
> point to an object cgroup pointer.
> 
> Therefore, the infrastructure of objcg no longer only serves
> CONFIG_MEMCG_KMEM. In this patch, we move the infrastructure of the
> objcg out of the scope of the CONFIG_MEMCG_KMEM so that the LRU pages
> can reuse it to charge pages.
> 
> We know that the LRU pages are not accounted at the root level. But
> the page->memcg_data points to the root_mem_cgroup. So the
> page->memcg_data of the LRU pages always points to a valid pointer.
> But the root_mem_cgroup does not have an object cgroup. If we use
> obj_cgroup APIs to charge the LRU pages, we should set the
> page->memcg_data to a root object cgroup. So we also allocate an
> object cgroup for the root_mem_cgroup.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

I like the "move objcg stuff to memcg/css level" part.

I'm less convinced about making byte-sized charging kmem-specific
(both naming and #ifdef CONFIG_MEMCG_KMEM). Do we really win a lot?

I understand why we might wanna compile out some checks from the
hot allocation path, but a few bytes in struct objcg will not make a big
difference, nor will a few lines of code in the cgroup creation/removal paths.

Also it might be useful for byte-sized accounting outside kmem, e.g. zswap.
So, I'd remove this dependency and rename to something like
obj_cgroup_release_bytes().
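A rough sketch of the shape being suggested here, with the helper name from the paragraph above and internals assumed for illustration (it presumes the byte remainder lives in objcg->nr_charged_bytes and whole pages are returned via obj_cgroup_uncharge_pages(), as in the current kmem path):

```c
/* Not tied to CONFIG_MEMCG_KMEM, so byte-sized users such as zswap
 * could share it. */
static void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
{
	unsigned int nr_bytes = atomic_read(&objcg->nr_charged_bytes);
	unsigned int nr_pages = nr_bytes >> PAGE_SHIFT;

	if (nr_pages)
		obj_cgroup_uncharge_pages(objcg, nr_pages);
}
```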

In the long run we might want to eliminate CONFIG_MEMCG_KMEM completely,
so let's at least not add new dependencies.

Thanks!

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
  2021-08-14  5:25 ` [PATCH v1 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented Muchun Song
@ 2021-08-18  3:18   ` Roman Gushchin
  2021-08-18  4:28       ` Muchun Song
  0 siblings, 1 reply; 29+ messages in thread
From: Roman Gushchin @ 2021-08-18  3:18 UTC (permalink / raw)
  To: Muchun Song
  Cc: hannes, mhocko, akpm, shakeelb, vdavydov.dev, linux-kernel,
	linux-mm, duanxiongchun, fam.zheng, bsingharora, shy828301,
	alexs, smuchun, zhengqi.arch

On Sat, Aug 14, 2021 at 01:25:10PM +0800, Muchun Song wrote:
> The diagram below shows how to make the folio lruvec lock safe when LRU
> pages are reparented.
> 
> folio_lruvec_lock(folio)
>     retry:
> 	lruvec = folio_lruvec(folio);
> 
>         // The folio is reparented at this time.
>         spin_lock(&lruvec->lru_lock);
> 
>         if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
>             // Acquired the wrong lruvec lock and need to retry.
>             // Because this folio is on the parent memcg lruvec list.
>             goto retry;
> 
>         // If we reach here, it means that folio_memcg(folio) is stable.
> 
> memcg_reparent_objcgs(memcg)
>     // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
>     spin_lock(&lruvec->lru_lock);
>     spin_lock(&lruvec_parent->lru_lock);
> 
>     // Move all the pages from the lruvec list to the parent lruvec list.
> 
>     spin_unlock(&lruvec_parent->lru_lock);
>     spin_unlock(&lruvec->lru_lock);
> 
> After we acquire the lruvec lock, we need to check whether the folio is
> reparented. If so, we need to reacquire the new lruvec lock. On the
> routine of the LRU pages reparenting, we will also acquire the lruvec
> lock (will be implemented in the later patch). So folio_memcg() cannot
> be changed when we hold the lruvec lock.
> 
> Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) after
> we hold the lruvec lock, lruvec_memcg_debug() check is pointless. So
> remove it.
> 
> This is a preparation for reparenting the LRU pages.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: Roman Gushchin <guro@fb.com>

Maybe it's mostly s/page/folio, but the patch looks quite different
from the version I acked. In general, please drop acks
when there are significant changes between versions.

Thanks!
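For reference, the retry pattern from the quoted diagram expressed as a minimal C sketch (illustrative only; the helper names follow the diagram, not necessarily the exact code in the series):

```c
struct lruvec *folio_lruvec_lock(struct folio *folio)
{
	struct lruvec *lruvec;

	rcu_read_lock();
retry:
	lruvec = folio_lruvec(folio);
	spin_lock(&lruvec->lru_lock);
	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
		/* The folio was reparented; the lock we hold belongs to
		 * the old lruvec, so drop it and retry on the new one. */
		spin_unlock(&lruvec->lru_lock);
		goto retry;
	}
	/* folio_memcg() is stable for as long as lru_lock is held. */
	rcu_read_unlock();
	return lruvec;
}
```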

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
  2021-08-18  3:18   ` Roman Gushchin
@ 2021-08-18  4:28       ` Muchun Song
  0 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-18  4:28 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Johannes Weiner, Michal Hocko, Andrew Morton, Shakeel Butt,
	Vladimir Davydov, LKML, Linux Memory Management List,
	Xiongchun duan, fam.zheng, Singh, Balbir, Yang Shi, Alex Shi,
	Muchun Song, Qi Zheng

On Wed, Aug 18, 2021 at 11:18 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Sat, Aug 14, 2021 at 01:25:10PM +0800, Muchun Song wrote:
> > The diagram below shows how to make the folio lruvec lock safe when LRU
> > pages are reparented.
> >
> > folio_lruvec_lock(folio)
> >     retry:
> >       lruvec = folio_lruvec(folio);
> >
> >         // The folio is reparented at this time.
> >         spin_lock(&lruvec->lru_lock);
> >
> >         if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
> >             // Acquired the wrong lruvec lock and need to retry.
> >             // Because this folio is on the parent memcg lruvec list.
> >             goto retry;
> >
> >         // If we reach here, it means that folio_memcg(folio) is stable.
> >
> > memcg_reparent_objcgs(memcg)
> >     // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
> >     spin_lock(&lruvec->lru_lock);
> >     spin_lock(&lruvec_parent->lru_lock);
> >
> >     // Move all the pages from the lruvec list to the parent lruvec list.
> >
> >     spin_unlock(&lruvec_parent->lru_lock);
> >     spin_unlock(&lruvec->lru_lock);
> >
> > After we acquire the lruvec lock, we need to check whether the folio is
> > reparented. If so, we need to reacquire the new lruvec lock. On the
> > routine of the LRU pages reparenting, we will also acquire the lruvec
> > lock (will be implemented in the later patch). So folio_memcg() cannot
> > be changed when we hold the lruvec lock.
> >
> > Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) after
> > we hold the lruvec lock, lruvec_memcg_debug() check is pointless. So
> > remove it.
> >
> > This is a preparation for reparenting the LRU pages.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Acked-by: Roman Gushchin <guro@fb.com>
>
> Maybe it's mostly s/page/folio, but the patch looks quite different
> from the version I acked. In general, please drop acks
> when there are significant changes between versions.

Got it. I'll drop all acks in the next version. Thanks.

>
> Thanks!

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
@ 2021-08-18  4:28       ` Muchun Song
  0 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-18  4:28 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Johannes Weiner, Michal Hocko, Andrew Morton, Shakeel Butt,
	Vladimir Davydov, LKML, Linux Memory Management List,
	Xiongchun duan, fam.zheng, Singh, Balbir, Yang Shi, Alex Shi,
	Muchun Song, Qi Zheng

On Wed, Aug 18, 2021 at 11:18 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Sat, Aug 14, 2021 at 01:25:10PM +0800, Muchun Song wrote:
> > The diagram below shows how to make the folio lruvec lock safe when LRU
> > pages are reparented.
> >
> > folio_lruvec_lock(folio)
> >     retry:
> >       lruvec = folio_lruvec(folio);
> >
> >         // The folio is reparented at this time.
> >         spin_lock(&lruvec->lru_lock);
> >
> >         if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
> >             // Acquired the wrong lruvec lock and need to retry.
> >             // Because this folio is on the parent memcg lruvec list.
> >             goto retry;
> >
> >         // If we reach here, it means that folio_memcg(folio) is stable.
> >
> > memcg_reparent_objcgs(memcg)
> >     // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
> >     spin_lock(&lruvec->lru_lock);
> >     spin_lock(&lruvec_parent->lru_lock);
> >
> >     // Move all the pages from the lruvec list to the parent lruvec list.
> >
> >     spin_unlock(&lruvec_parent->lru_lock);
> >     spin_unlock(&lruvec->lru_lock);
> >
> > After we acquire the lruvec lock, we need to check whether the folio has
> > been reparented. If so, we need to acquire the new lruvec lock instead.
> > The LRU pages reparenting path also takes the lruvec lock (implemented in
> > a later patch), so folio_memcg() cannot change while we hold the lruvec
> > lock.
> >
> > Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) once we
> > hold the lruvec lock, the lruvec_memcg_debug() check is pointless, so
> > remove it.
> >
> > This is a preparation for reparenting the LRU pages.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Acked-by: Roman Gushchin <guro@fb.com>
>
> Maybe it's mostly s/page/folio, but the patch looks quite different from
> the version I acked. In general, please drop acks when there are
> significant changes between versions.

Got it. I'll drop all acks in the next version. Thanks.

>
> Thanks!


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented
  2021-08-18  4:28       ` Muchun Song
  (?)
@ 2021-08-18  4:47       ` Roman Gushchin
  -1 siblings, 0 replies; 29+ messages in thread
From: Roman Gushchin @ 2021-08-18  4:47 UTC (permalink / raw)
  To: Muchun Song
  Cc: Johannes Weiner, Michal Hocko, Andrew Morton, Shakeel Butt,
	Vladimir Davydov, LKML, Linux Memory Management List,
	Xiongchun duan, fam.zheng, Singh, Balbir, Yang Shi, Alex Shi,
	Muchun Song, Qi Zheng

On Wed, Aug 18, 2021 at 12:28:02PM +0800, Muchun Song wrote:
> On Wed, Aug 18, 2021 at 11:18 AM Roman Gushchin <guro@fb.com> wrote:
> >
> > On Sat, Aug 14, 2021 at 01:25:10PM +0800, Muchun Song wrote:
> > > The diagram below shows how to make the folio lruvec lock safe when LRU
> > > pages are reparented.
> > >
> > > folio_lruvec_lock(folio)
> > >     retry:
> > >       lruvec = folio_lruvec(folio);
> > >
> > >         // The folio is reparented at this time.
> > >         spin_lock(&lruvec->lru_lock);
> > >
> > >         if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio)))
> > >             // Acquired the wrong lruvec lock and need to retry.
> > >             // Because this folio is on the parent memcg lruvec list.
> > >             goto retry;
> > >
> > >         // If we reach here, it means that folio_memcg(folio) is stable.
> > >
> > > memcg_reparent_objcgs(memcg)
> > >     // lruvec belongs to memcg and lruvec_parent belongs to parent memcg.
> > >     spin_lock(&lruvec->lru_lock);
> > >     spin_lock(&lruvec_parent->lru_lock);
> > >
> > >     // Move all the pages from the lruvec list to the parent lruvec list.
> > >
> > >     spin_unlock(&lruvec_parent->lru_lock);
> > >     spin_unlock(&lruvec->lru_lock);
> > >
> > > After we acquire the lruvec lock, we need to check whether the folio has
> > > been reparented. If so, we need to acquire the new lruvec lock instead.
> > > The LRU pages reparenting path also takes the lruvec lock (implemented in
> > > a later patch), so folio_memcg() cannot change while we hold the lruvec
> > > lock.
> > >
> > > Since lruvec_memcg(lruvec) is always equal to folio_memcg(folio) once we
> > > hold the lruvec lock, the lruvec_memcg_debug() check is pointless, so
> > > remove it.
> > >
> > > This is a preparation for reparenting the LRU pages.
> > >
> > > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > > Acked-by: Roman Gushchin <guro@fb.com>
> >
> > Maybe it's mostly s/page/folio, but the patch looks quite different from
> > the version I acked. In general, please drop acks when there are
> > significant changes between versions.
> 
> Got it. I'll drop all acks in the next version. Thanks.

Thank you!

The code looks correct though. But the locking scheme becomes very complicated
(it was already complex). I wonder if there are better ideas; nothing comes to
mind right now, I need to think more. If not, your approach looks ok.
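
For reference, a minimal C sketch of the retry scheme under discussion. It
follows the diagram in the commit message; the helper names and the RCU
read-side section around the lookup are assumptions, not the exact patch code:

```c
/*
 * Minimal sketch of the retry scheme from the commit message diagram.
 * Illustrative only; the actual patch may differ in detail.
 */
static struct lruvec *folio_lruvec_lock(struct folio *folio)
{
	struct lruvec *lruvec;

	rcu_read_lock();
retry:
	lruvec = folio_lruvec(folio);
	spin_lock(&lruvec->lru_lock);
	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
		/* The folio was reparented; we hold the stale lruvec's lock. */
		spin_unlock(&lruvec->lru_lock);
		goto retry;
	}
	/*
	 * Reparenting takes this lru_lock as well, so folio_memcg(folio)
	 * is stable for as long as the lock is held.
	 */
	rcu_read_unlock();
	return lruvec;
}
```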

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage
  2021-08-18  3:01   ` Roman Gushchin
@ 2021-08-20  6:44       ` Muchun Song
  0 siblings, 0 replies; 29+ messages in thread
From: Muchun Song @ 2021-08-20  6:44 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Johannes Weiner, Michal Hocko, Andrew Morton, Shakeel Butt,
	Vladimir Davydov, LKML, Linux Memory Management List,
	Xiongchun duan, fam.zheng, Singh, Balbir, Yang Shi, Alex Shi,
	Muchun Song, Qi Zheng

On Wed, Aug 18, 2021 at 11:01 AM Roman Gushchin <guro@fb.com> wrote:
>
> On Sat, Aug 14, 2021 at 01:25:08PM +0800, Muchun Song wrote:
> > Pagecache pages are charged at allocation time and hold a reference to
> > the original memory cgroup until they are reclaimed. Depending on memory
> > pressure, the specific patterns of page sharing between cgroups, and the
> > cgroup creation and destruction rates, a large number of dying memory
> > cgroups can be pinned by pagecache pages. This makes page reclaim less
> > efficient and wastes memory.
> >
> > We can convert LRU pages and most other raw memcg pins to the objcg
> > direction to fix this problem, so that page->memcg always points to an
> > object cgroup.
> >
> > Therefore, the objcg infrastructure no longer serves only
> > CONFIG_MEMCG_KMEM. This patch moves the objcg infrastructure out of the
> > scope of CONFIG_MEMCG_KMEM so that LRU pages can reuse it for charging.
> >
> > LRU pages are not accounted at the root level, yet their
> > page->memcg_data points to the root_mem_cgroup, so it always holds a
> > valid pointer. However, the root_mem_cgroup does not have an object
> > cgroup. If we use obj_cgroup APIs to charge LRU pages, page->memcg_data
> > should point to a root object cgroup, so we also allocate an object
> > cgroup for the root_mem_cgroup.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> I like the "move objcg stuff to memcg/css level" part.
>
> I'm less convinced about making byte-sized charging kmem-specific
> (both naming and #ifdef CONFIG_MEMCG_KMEM). Do we really win a lot?
>
> I understand why we might want to compile out some checks from the hot
> allocation path, but a few bytes in struct objcg will not make a big
> difference, nor will a few lines of code in cgroup creation/removal paths.
>
> Also it might be useful for byte-sized accounting outside kmem, e.g. zswap.
> So, I'd remove this dependency and rename to something like
> obj_cgroup_release_bytes().

Got it. I'll do that in the next version. Thanks for your suggestions.

>
> In the long run we might want to eliminate CONFIG_MEMCG_KMEM completely,
> so let's at least not add new dependencies.

Got it.

>
> Thanks!
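
For context, a rough sketch of what the suggested rename might look like,
modeled on the shape of the existing objcg release path. The
obj_cgroup_uncharge_pages() helper and the nr_charged_bytes field are
assumptions about the surrounding code, not guaranteed to match the patch:

```c
/*
 * Rough sketch only: flushing an objcg's leftover byte-sized charge is
 * not kmem-specific, so the helper need not depend on CONFIG_MEMCG_KMEM.
 */
static void obj_cgroup_release_bytes(struct obj_cgroup *objcg)
{
	unsigned int nr_bytes = atomic_read(&objcg->nr_charged_bytes);
	unsigned int nr_pages = nr_bytes >> PAGE_SHIFT;

	/* A partial-page remainder should not survive the objcg's lifetime. */
	WARN_ON_ONCE(nr_bytes & (PAGE_SIZE - 1));

	if (nr_pages)
		obj_cgroup_uncharge_pages(objcg, nr_pages);
}
```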

^ permalink raw reply	[flat|nested] 29+ messages in thread


* Re: [PATCH v1 06/12] mm: thp: make split queue lock safe when LRU pages are reparented
@ 2021-08-15  3:18 kernel test robot
  0 siblings, 0 replies; 29+ messages in thread
From: kernel test robot @ 2021-08-15  3:18 UTC (permalink / raw)
  To: kbuild

[-- Attachment #1: Type: text/plain, Size: 6446 bytes --]

CC: kbuild-all(a)lists.01.org
In-Reply-To: <20210814052519.86679-7-songmuchun@bytedance.com>
References: <20210814052519.86679-7-songmuchun@bytedance.com>
TO: Muchun Song <songmuchun@bytedance.com>
TO: guro(a)fb.com
TO: hannes(a)cmpxchg.org
TO: mhocko(a)kernel.org
TO: akpm(a)linux-foundation.org
TO: shakeelb(a)google.com
TO: vdavydov.dev(a)gmail.com
CC: linux-kernel(a)vger.kernel.org
CC: linux-mm(a)kvack.org
CC: duanxiongchun(a)bytedance.com
CC: fam.zheng(a)bytedance.com

Hi Muchun,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on next-20210813]
[cannot apply to hnaz-linux-mm/master cgroup/for-next linus/master v5.14-rc5 v5.14-rc4 v5.14-rc3 v5.14-rc5]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
base:    4b358aabb93a2c654cd1dcab1a25a589f6e2b153
:::::: branch date: 22 hours ago
:::::: commit date: 22 hours ago
config: i386-randconfig-s001-20210815 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.3-348-gf0e6938b-dirty
        # https://github.com/0day-ci/linux/commit/f19b75eb79975f101227d4b99f0aeda46a378c98
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Muchun-Song/Use-obj_cgroup-APIs-to-charge-the-LRU-pages/20210814-132844
        git checkout f19b75eb79975f101227d4b99f0aeda46a378c98
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)
   mm/huge_memory.c: note: in included file (through include/linux/rbtree.h, include/linux/mm_types.h, include/linux/mmzone.h, ...):
   include/linux/rcupdate.h:718:9: sparse: sparse: context imbalance in 'folio_split_queue_lock' - wrong count at exit
>> include/linux/rcupdate.h:718:9: sparse: sparse: context imbalance in 'folio_split_queue_lock_irqsave' - wrong count at exit
   mm/huge_memory.c:1634:20: sparse: sparse: context imbalance in 'madvise_free_huge_pmd' - unexpected unlock
   mm/huge_memory.c:1671:28: sparse: sparse: context imbalance in 'zap_huge_pmd' - unexpected unlock
   mm/huge_memory.c:1778:28: sparse: sparse: context imbalance in 'move_huge_pmd' - unexpected unlock
   mm/huge_memory.c:1889:20: sparse: sparse: context imbalance in 'change_huge_pmd' - unexpected unlock
   mm/huge_memory.c:1899:12: sparse: sparse: context imbalance in '__pmd_trans_huge_lock' - wrong count at exit
   mm/huge_memory.c:2533:29: sparse: sparse: context imbalance in '__split_huge_page' - unexpected unlock
   mm/huge_memory.c:2786:17: sparse: sparse: context imbalance in 'split_huge_page_to_list' - different lock contexts for basic block
   mm/huge_memory.c:2813:38: sparse: sparse: context imbalance in 'free_transhuge_page' - unexpected unlock
   mm/huge_memory.c:2850:38: sparse: sparse: context imbalance in 'deferred_split_huge_page' - unexpected unlock

vim +/folio_split_queue_lock_irqsave +718 include/linux/rcupdate.h

^1da177e4c3f41 Linus Torvalds   2005-04-16  691  
^1da177e4c3f41 Linus Torvalds   2005-04-16  692  /*
^1da177e4c3f41 Linus Torvalds   2005-04-16  693   * So where is rcu_write_lock()?  It does not exist, as there is no
^1da177e4c3f41 Linus Torvalds   2005-04-16  694   * way for writers to lock out RCU readers.  This is a feature, not
^1da177e4c3f41 Linus Torvalds   2005-04-16  695   * a bug -- this property is what provides RCU's performance benefits.
^1da177e4c3f41 Linus Torvalds   2005-04-16  696   * Of course, writers must coordinate with each other.  The normal
^1da177e4c3f41 Linus Torvalds   2005-04-16  697   * spinlock primitives work well for this, but any other technique may be
^1da177e4c3f41 Linus Torvalds   2005-04-16  698   * used as well.  RCU does not care how the writers keep out of each
^1da177e4c3f41 Linus Torvalds   2005-04-16  699   * others' way, as long as they do so.
^1da177e4c3f41 Linus Torvalds   2005-04-16  700   */
3d76c082907e8f Paul E. McKenney 2009-09-28  701  
3d76c082907e8f Paul E. McKenney 2009-09-28  702  /**
ca5ecddfa8fcbd Paul E. McKenney 2010-04-28  703   * rcu_read_unlock() - marks the end of an RCU read-side critical section.
3d76c082907e8f Paul E. McKenney 2009-09-28  704   *
0223846010750e Paul E. McKenney 2021-04-29  705   * In almost all situations, rcu_read_unlock() is immune from deadlock.
0223846010750e Paul E. McKenney 2021-04-29  706   * In recent kernels that have consolidated synchronize_sched() and
0223846010750e Paul E. McKenney 2021-04-29  707   * synchronize_rcu_bh() into synchronize_rcu(), this deadlock immunity
0223846010750e Paul E. McKenney 2021-04-29  708   * also extends to the scheduler's runqueue and priority-inheritance
0223846010750e Paul E. McKenney 2021-04-29  709   * spinlocks, courtesy of the quiescent-state deferral that is carried
0223846010750e Paul E. McKenney 2021-04-29  710   * out when rcu_read_unlock() is invoked with interrupts disabled.
f27bc4873fa8b7 Paul E. McKenney 2014-05-04  711   *
3d76c082907e8f Paul E. McKenney 2009-09-28  712   * See rcu_read_lock() for more information.
3d76c082907e8f Paul E. McKenney 2009-09-28  713   */
bc33f24bdca8b6 Paul E. McKenney 2009-08-22  714  static inline void rcu_read_unlock(void)
bc33f24bdca8b6 Paul E. McKenney 2009-08-22  715  {
f78f5b90c4ffa5 Paul E. McKenney 2015-06-18  716  	RCU_LOCKDEP_WARN(!rcu_is_watching(),
bde23c6892878e Heiko Carstens   2012-02-01  717  			 "rcu_read_unlock() used illegally while idle");
bc33f24bdca8b6 Paul E. McKenney 2009-08-22 @718  	__release(RCU);
bc33f24bdca8b6 Paul E. McKenney 2009-08-22  719  	__rcu_read_unlock();
d24209bb689e2c Paul E. McKenney 2015-01-21  720  	rcu_lock_release(&rcu_lock_map); /* Keep acq info for rls diags. */
bc33f24bdca8b6 Paul E. McKenney 2009-08-22  721  }
^1da177e4c3f41 Linus Torvalds   2005-04-16  722  
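
These context-imbalance reports are what sparse typically emits for a helper
that returns with a spinlock still held; a common remedy is an __acquires()
annotation on the lock-returning function. A purely illustrative sketch, in
which the deferred_split lookup helper and field names are assumptions, not
the exact patch code:

```c
/*
 * Illustrative sketch only: annotating a lock-returning helper so sparse
 * does not report a context imbalance at exit.
 */
static struct deferred_split *folio_split_queue_lock(struct folio *folio)
	__acquires(&queue->split_queue_lock)
{
	struct deferred_split *queue;

	rcu_read_lock();
	queue = folio_split_queue(folio);	/* hypothetical lookup helper */
	spin_lock(&queue->split_queue_lock);
	rcu_read_unlock();

	return queue;
}
```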

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all(a)lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 40110 bytes --]

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2021-08-20  6:44 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-14  5:25 [PATCH v1 00/12] Use obj_cgroup APIs to charge the LRU pages Muchun Song
2021-08-14  5:25 ` [PATCH v1 01/12] mm: memcontrol: prepare objcg API for non-kmem usage Muchun Song
2021-08-14 22:23   ` kernel test robot
2021-08-14 22:23     ` kernel test robot
2021-08-18  3:01   ` Roman Gushchin
2021-08-20  6:44     ` Muchun Song
2021-08-20  6:44       ` Muchun Song
2021-08-14  5:25 ` [PATCH v1 02/12] mm: memcontrol: introduce compact_folio_lruvec_lock_irqsave Muchun Song
2021-08-14  5:25 ` [PATCH v1 03/12] mm: memcontrol: make lruvec lock safe when LRU pages are reparented Muchun Song
2021-08-18  3:18   ` Roman Gushchin
2021-08-18  4:28     ` Muchun Song
2021-08-18  4:28       ` Muchun Song
2021-08-18  4:47       ` Roman Gushchin
2021-08-14  5:25 ` [PATCH v1 04/12] mm: vmscan: rework move_pages_to_lru() Muchun Song
2021-08-14  5:25 ` [PATCH v1 05/12] mm: thp: introduce folio_split_queue_lock{_irqsave}() Muchun Song
2021-08-14  8:22   ` kernel test robot
2021-08-14  8:22     ` kernel test robot
2021-08-14 10:38   ` kernel test robot
2021-08-14 10:38     ` kernel test robot
2021-08-14  5:25 ` [PATCH v1 06/12] mm: thp: make split queue lock safe when LRU pages are reparented Muchun Song
2021-08-14  5:25 ` [PATCH v1 07/12] mm: memcontrol: make all the callers of {folio,page}_memcg() safe Muchun Song
2021-08-14  5:25 ` [PATCH v1 08/12] mm: memcontrol: introduce memcg_reparent_ops Muchun Song
2021-08-14  5:25 ` [PATCH v1 09/12] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages Muchun Song
2021-08-14 14:08   ` kernel test robot
2021-08-14 14:08     ` kernel test robot
2021-08-14  5:25 ` [PATCH v1 10/12] mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg() Muchun Song
2021-08-14  5:25 ` [PATCH v1 11/12] mm: lru: add VM_BUG_ON_FOLIO to lru maintenance function Muchun Song
2021-08-14  5:25 ` [PATCH v1 12/12] mm: lru: use lruvec lock to serialize memcg changes Muchun Song
2021-08-15  3:18 [PATCH v1 06/12] mm: thp: make split queue lock safe when LRU pages are reparented kernel test robot

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.