linux-fsdevel.vger.kernel.org archive mirror
* [PATCH v3 0/3] protect page cache from freeing inode
@ 2020-01-08 16:03 Yafang Shao
  2020-01-08 16:03 ` [PATCH v3 1/3] mm, list_lru: make memcg visible to lru walker isolation function Yafang Shao
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Yafang Shao @ 2020-01-08 16:03 UTC (permalink / raw)
  To: dchinner, hannes, mhocko, vdavydov.dev, guro, akpm, viro
  Cc: linux-mm, linux-fsdevel, Yafang Shao

On my server there are some running MEMCGs protected by memory.{min, low},
but I found that the usage of these MEMCGs abruptly became very small,
far less than the protection limit. It confused me, and I finally found
that it was caused by inode stealing.

Once an inode is freed, all of its page caches will be dropped as well,
no matter how many page caches it has. So if we intend to protect the
page caches in a memcg, we must protect their host (the inode) first.
Otherwise the memcg protection can be easily bypassed by freeing the inode,
especially if there are big files in this memcg.

The inherent mismatch between memcg and inode is troublesome. One inode
can be shared by different MEMCGs, but that is a very rare case. If an
inode is shared, its page caches may be charged to different MEMCGs.
Currently there is no perfect solution for this kind of issue, but the
inode majority-writer ownership switching can help more or less.

- Changes against v2:
    1. Separates memcg patches from this patchset, suggested by Roman.
       A separate patch is already ACKed by Roman; the MEMCG
       maintainers are kindly asked to take a look at it[1].
    2. Improves code around the usage of for_each_mem_cgroup(), suggested
       by Dave.
    3. Uses memcg_low_reclaim passed from scan_control, instead of
       introducing a new member in struct mem_cgroup.
    4. Some other code improvements suggested by Dave.


- Changes against v1:
Use the memcg passed from the shrink_control, instead of getting it from
the inode itself, suggested by Dave. That makes the layering better.

[1]
https://lore.kernel.org/linux-mm/CALOAHbBhPgh3WEuLu2B6e2vj1J8K=gGOyCKzb8tKWmDqFs-rfQ@mail.gmail.com/

Yafang Shao (3):
  mm, list_lru: make memcg visible to lru walker isolation function
  mm, shrinker: make memcg low reclaim visible to lru walker isolation
    function
  memcg, inode: protect page cache from freeing inode

 fs/inode.c                 | 78 ++++++++++++++++++++++++++++++++++++++++++++--
 include/linux/memcontrol.h | 21 +++++++++++++
 include/linux/shrinker.h   |  3 ++
 mm/list_lru.c              | 47 +++++++++++++++++-----------
 mm/memcontrol.c            | 15 ---------
 mm/vmscan.c                | 27 +++++++++-------
 6 files changed, 143 insertions(+), 48 deletions(-)

-- 
1.8.3.1



* [PATCH v3 1/3] mm, list_lru: make memcg visible to lru walker isolation function
  2020-01-08 16:03 [PATCH v3 0/3] protect page cache from freeing inode Yafang Shao
@ 2020-01-08 16:03 ` Yafang Shao
  2020-01-08 16:03 ` [PATCH v3 2/3] mm, shrinker: make memcg low reclaim " Yafang Shao
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 7+ messages in thread
From: Yafang Shao @ 2020-01-08 16:03 UTC (permalink / raw)
  To: dchinner, hannes, mhocko, vdavydov.dev, guro, akpm, viro
  Cc: linux-mm, linux-fsdevel, Yafang Shao

The lru walker isolation function may use this memcg to do something, e.g.
the inode isolation function will use the memcg to do inode protection in
a followup patch. So make the memcg visible to the lru walker isolation
function.

Something that should be emphasized about this patch is that it replaces
for_each_memcg_cache_index() with for_each_mem_cgroup() in
list_lru_walk_node(). There is a gap between these two macros:
for_each_mem_cgroup() depends on CONFIG_MEMCG while the other one depends
on CONFIG_MEMCG_KMEM. But as list_lru_memcg_aware() returns false if
CONFIG_MEMCG_KMEM is not configured, it is safe to make this replacement.
Another difference between for_each_memcg_cache_index() and
for_each_mem_cgroup() is that for_each_memcg_cache_index() excludes the
root_mem_cgroup because its kmemcg_id is -1, while for_each_mem_cgroup()
includes the root_mem_cgroup. So we need to skip the root_mem_cgroup
explicitly in the for loop.
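
To illustrate (this sketch is not part of the patch), the resulting
iteration pattern in list_lru_walk_node() boils down to:

    struct mem_cgroup *memcg;

    for_each_mem_cgroup(memcg) {             /* also visits root_mem_cgroup */
            if (mem_cgroup_is_root(memcg))   /* its kmemcg_id is -1 */
                    continue;                /* global lru was walked already */
            /* walk the per-memcg lru list of this node here */
    }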

Cc: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/linux/memcontrol.h | 21 +++++++++++++++++++++
 mm/list_lru.c              | 47 +++++++++++++++++++++++++++-------------------
 mm/memcontrol.c            | 15 ---------------
 3 files changed, 49 insertions(+), 34 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index a7a0a1a5..a624c42 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -445,6 +445,21 @@ struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *,
 int mem_cgroup_scan_tasks(struct mem_cgroup *,
 			  int (*)(struct task_struct *, void *), void *);
 
+/*
+ * Iteration constructs for visiting all cgroups (under a tree).  If
+ * loops are exited prematurely (break), mem_cgroup_iter_break() must
+ * be used for reference counting.
+ */
+#define for_each_mem_cgroup_tree(iter, root)		\
+	for (iter = mem_cgroup_iter(root, NULL, NULL);	\
+	     iter != NULL;				\
+	     iter = mem_cgroup_iter(root, iter, NULL))
+
+#define for_each_mem_cgroup(iter)			\
+	for (iter = mem_cgroup_iter(NULL, NULL, NULL);	\
+	     iter != NULL;				\
+	     iter = mem_cgroup_iter(NULL, iter, NULL))
+
 static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
 {
 	if (mem_cgroup_disabled())
@@ -945,6 +960,12 @@ static inline int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
 	return 0;
 }
 
+#define for_each_mem_cgroup_tree(iter)		\
+	for (iter = NULL; iter; )
+
+#define for_each_mem_cgroup(iter)		\
+	for (iter = NULL; iter; )
+
 static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
 {
 	return 0;
diff --git a/mm/list_lru.c b/mm/list_lru.c
index 0f1f6b0..6daa8c6 100644
--- a/mm/list_lru.c
+++ b/mm/list_lru.c
@@ -207,11 +207,11 @@ unsigned long list_lru_count_node(struct list_lru *lru, int nid)
 EXPORT_SYMBOL_GPL(list_lru_count_node);
 
 static unsigned long
-__list_lru_walk_one(struct list_lru_node *nlru, int memcg_idx,
+__list_lru_walk_one(struct list_lru_node *nlru, struct mem_cgroup *memcg,
 		    list_lru_walk_cb isolate, void *cb_arg,
 		    unsigned long *nr_to_walk)
 {
-
+	int memcg_idx = memcg_cache_id(memcg);
 	struct list_lru_one *l;
 	struct list_head *item, *n;
 	unsigned long isolated = 0;
@@ -273,7 +273,7 @@ unsigned long list_lru_count_node(struct list_lru *lru, int nid)
 	unsigned long ret;
 
 	spin_lock(&nlru->lock);
-	ret = __list_lru_walk_one(nlru, memcg_cache_id(memcg), isolate, cb_arg,
+	ret = __list_lru_walk_one(nlru, memcg, isolate, cb_arg,
 				  nr_to_walk);
 	spin_unlock(&nlru->lock);
 	return ret;
@@ -289,7 +289,7 @@ unsigned long list_lru_count_node(struct list_lru *lru, int nid)
 	unsigned long ret;
 
 	spin_lock_irq(&nlru->lock);
-	ret = __list_lru_walk_one(nlru, memcg_cache_id(memcg), isolate, cb_arg,
+	ret = __list_lru_walk_one(nlru, memcg, isolate, cb_arg,
 				  nr_to_walk);
 	spin_unlock_irq(&nlru->lock);
 	return ret;
@@ -299,25 +299,34 @@ unsigned long list_lru_walk_node(struct list_lru *lru, int nid,
 				 list_lru_walk_cb isolate, void *cb_arg,
 				 unsigned long *nr_to_walk)
 {
-	long isolated = 0;
-	int memcg_idx;
+	struct list_lru_node *nlru;
+	struct mem_cgroup *memcg;
+	long isolated;
 
-	isolated += list_lru_walk_one(lru, nid, NULL, isolate, cb_arg,
-				      nr_to_walk);
-	if (*nr_to_walk > 0 && list_lru_memcg_aware(lru)) {
-		for_each_memcg_cache_index(memcg_idx) {
-			struct list_lru_node *nlru = &lru->node[nid];
+	/* iterate the global lru first */
+	isolated = list_lru_walk_one(lru, nid, NULL, isolate, cb_arg,
+				     nr_to_walk);
 
-			spin_lock(&nlru->lock);
-			isolated += __list_lru_walk_one(nlru, memcg_idx,
-							isolate, cb_arg,
-							nr_to_walk);
-			spin_unlock(&nlru->lock);
+	if (!list_lru_memcg_aware(lru))
+		goto out;
 
-			if (*nr_to_walk <= 0)
-				break;
-		}
+	nlru = &lru->node[nid];
+	for_each_mem_cgroup(memcg) {
+		/* already scanned the root memcg above */
+		if (mem_cgroup_is_root(memcg))
+			continue;
+
+		if (*nr_to_walk <= 0)
+			break;
+
+		spin_lock(&nlru->lock);
+		isolated += __list_lru_walk_one(nlru, memcg,
+						isolate, cb_arg,
+						nr_to_walk);
+		spin_unlock(&nlru->lock);
 	}
+
+out:
 	return isolated;
 }
 EXPORT_SYMBOL_GPL(list_lru_walk_node);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 601405b..9bd4ea7 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -222,21 +222,6 @@ enum res_type {
 /* Used for OOM nofiier */
 #define OOM_CONTROL		(0)
 
-/*
- * Iteration constructs for visiting all cgroups (under a tree).  If
- * loops are exited prematurely (break), mem_cgroup_iter_break() must
- * be used for reference counting.
- */
-#define for_each_mem_cgroup_tree(iter, root)		\
-	for (iter = mem_cgroup_iter(root, NULL, NULL);	\
-	     iter != NULL;				\
-	     iter = mem_cgroup_iter(root, iter, NULL))
-
-#define for_each_mem_cgroup(iter)			\
-	for (iter = mem_cgroup_iter(NULL, NULL, NULL);	\
-	     iter != NULL;				\
-	     iter = mem_cgroup_iter(NULL, iter, NULL))
-
 static inline bool should_force_charge(void)
 {
 	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
-- 
1.8.3.1



* [PATCH v3 2/3] mm, shrinker: make memcg low reclaim visible to lru walker isolation function
  2020-01-08 16:03 [PATCH v3 0/3] protect page cache from freeing inode Yafang Shao
  2020-01-08 16:03 ` [PATCH v3 1/3] mm, list_lru: make memcg visible to lru walker isolation function Yafang Shao
@ 2020-01-08 16:03 ` Yafang Shao
  2020-01-08 16:03 ` [PATCH v3 3/3] memcg, inode: protect page cache from freeing inode Yafang Shao
  2020-01-22 13:46 ` [PATCH v3 0/3] " Yafang Shao
  3 siblings, 0 replies; 7+ messages in thread
From: Yafang Shao @ 2020-01-08 16:03 UTC (permalink / raw)
  To: dchinner, hannes, mhocko, vdavydov.dev, guro, akpm, viro
  Cc: linux-mm, linux-fsdevel, Yafang Shao

A new member, memcg_low_reclaim, is introduced into struct shrink_control.
It is derived from struct scan_control and tells the shrinker whether the
reclaim session is under memcg low reclaim or not.
The followup patch will use this new member.
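
To illustrate the intended use, below is a hedged sketch only (not part
of this patchset; my_memcg_is_protected() and my_scan_lru() are
hypothetical helpers) of how a memcg aware shrinker could consult the new
field. The real consumer is the inode shrinker in the next patch:

    static unsigned long example_scan_objects(struct shrinker *shrink,
                                              struct shrink_control *sc)
    {
            /*
             * If the reclaimer is not allowed to dig into memcg low
             * protection (sc->memcg_low_reclaim is false), spare the
             * objects charged to a protected memcg.
             */
            if (sc->memcg && !sc->memcg_low_reclaim &&
                my_memcg_is_protected(sc->memcg))   /* hypothetical helper */
                    return SHRINK_STOP;

            return my_scan_lru(sc);                 /* hypothetical helper */
    }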

Cc: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 include/linux/shrinker.h |  3 +++
 mm/vmscan.c              | 27 ++++++++++++++++-----------
 2 files changed, 19 insertions(+), 11 deletions(-)

diff --git a/include/linux/shrinker.h b/include/linux/shrinker.h
index 0f80123..dc42ae5 100644
--- a/include/linux/shrinker.h
+++ b/include/linux/shrinker.h
@@ -31,6 +31,9 @@ struct shrink_control {
 
 	/* current memcg being shrunk (for memcg aware shrinkers) */
 	struct mem_cgroup *memcg;
+
+	/* derived from struct scan_control */
+	bool memcg_low_reclaim;
 };
 
 #define SHRINK_STOP (~0UL)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5a6445e..c97d005 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -639,10 +639,9 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
 
 /**
  * shrink_slab - shrink slab caches
- * @gfp_mask: allocation context
- * @nid: node whose slab caches to target
  * @memcg: memory cgroup whose slab caches to target
- * @priority: the reclaim priority
+ * @sc: scan_control struct for this reclaim session
+ * @nid: node whose slab caches to target
  *
  * Call the shrink functions to age shrinkable caches.
  *
@@ -652,15 +651,18 @@ static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid,
  * @memcg specifies the memory cgroup to target. Unaware shrinkers
  * are called only if it is the root cgroup.
  *
- * @priority is sc->priority, we take the number of objects and >> by priority
- * in order to get the scan target.
+ * @sc is the scan_control struct, we take the number of objects
+ * and >> by sc->priority in order to get the scan target.
  *
  * Returns the number of reclaimed slab objects.
  */
-static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
-				 struct mem_cgroup *memcg,
-				 int priority)
+static unsigned long shrink_slab(struct mem_cgroup *memcg,
+				 struct scan_control *sc,
+				 int nid)
 {
+	bool memcg_low_reclaim = sc->memcg_low_reclaim;
+	gfp_t gfp_mask = sc->gfp_mask;
+	int priority = sc->priority;
 	unsigned long ret, freed = 0;
 	struct shrinker *shrinker;
 
@@ -682,6 +684,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 			.gfp_mask = gfp_mask,
 			.nid = nid,
 			.memcg = memcg,
+			.memcg_low_reclaim = memcg_low_reclaim,
 		};
 
 		ret = do_shrink_slab(&sc, shrinker, priority);
@@ -708,6 +711,9 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 void drop_slab_node(int nid)
 {
 	unsigned long freed;
+	struct scan_control sc = {
+		.gfp_mask = GFP_KERNEL,
+	};
 
 	do {
 		struct mem_cgroup *memcg = NULL;
@@ -715,7 +721,7 @@ void drop_slab_node(int nid)
 		freed = 0;
 		memcg = mem_cgroup_iter(NULL, NULL, NULL);
 		do {
-			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
+			freed += shrink_slab(memcg, &sc, nid);
 		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
 	} while (freed > 10);
 }
@@ -2684,8 +2690,7 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 
 		shrink_lruvec(lruvec, sc);
 
-		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
-			    sc->priority);
+		shrink_slab(memcg, sc, pgdat->node_id);
 
 		/* Record the group's reclaim efficiency */
 		vmpressure(sc->gfp_mask, memcg, false,
-- 
1.8.3.1



* [PATCH v3 3/3] memcg, inode: protect page cache from freeing inode
  2020-01-08 16:03 [PATCH v3 0/3] protect page cache from freeing inode Yafang Shao
  2020-01-08 16:03 ` [PATCH v3 1/3] mm, list_lru: make memcg visible to lru walker isolation function Yafang Shao
  2020-01-08 16:03 ` [PATCH v3 2/3] mm, shrinker: make memcg low reclaim " Yafang Shao
@ 2020-01-08 16:03 ` Yafang Shao
  2020-01-22 13:46 ` [PATCH v3 0/3] " Yafang Shao
  3 siblings, 0 replies; 7+ messages in thread
From: Yafang Shao @ 2020-01-08 16:03 UTC (permalink / raw)
  To: dchinner, hannes, mhocko, vdavydov.dev, guro, akpm, viro
  Cc: linux-mm, linux-fsdevel, Yafang Shao

On my server there are some running MEMCGs protected by memory.{min, low},
but I found that the usage of these MEMCGs abruptly became very small,
far less than the protection limit. It confused me, and I finally found
that it was caused by inode stealing.

Once an inode is freed, all of its page caches will be dropped as well,
no matter how many page caches it has. So if we intend to protect the
page caches in a memcg, we must protect their host (the inode) first.
Otherwise the memcg protection can be easily bypassed by freeing the inode,
especially if there are big files in this memcg.

Suppose we have a memcg, and the stats of this memcg are:
        memory.current = 1024M
        memory.min = 512M
And in this memcg there is an inode with 800M of page cache.
Once this memcg is scanned by kswapd or another regular reclaimer,
    kswapd <<<< It can be any of the regular reclaimers.
        shrink_node_memcgs
            switch (mem_cgroup_protected()) <<<< Not protected
                case MEMCG_PROT_NONE:  <<<< Will scan this memcg
                        break;
            shrink_lruvec() <<<< Reclaim the page caches
            shrink_slab()   <<<< It may free this inode and drop all its
                                 page caches (800M).
So we must protect the inode first if we want to protect page caches.
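
With the numbers above, the check introduced by this patch (see
memcg_can_reclaim_inode() below) roughly evaluates as:

    protection  = memory.min               = 512M
    cgroup_size = memory.current           = 1024M
    nrpages     = page cache of the inode  = 800M

    nrpages + protection >= cgroup_size
    800M    + 512M       >= 1024M          --> not reclaimable

so the inode is rotated on the LRU instead of being freed together with
its 800M of page cache.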

The inherent mismatch between memcg and inode is troublesome. One inode
can be shared by different MEMCGs, but that is a very rare case. If an
inode is shared, its page caches may be charged to different MEMCGs.
Currently there is no perfect solution for this kind of issue, but the
inode majority-writer ownership switching can help more or less.

Cc: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
---
 fs/inode.c | 78 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 75 insertions(+), 3 deletions(-)

diff --git a/fs/inode.c b/fs/inode.c
index 2b0f511..80dddbc 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -54,6 +54,12 @@
  *   inode_hash_lock
  */
 
+struct inode_isolate_control {
+	struct list_head *freeable;
+	struct mem_cgroup *memcg;	/* derived from shrink_control */
+	bool memcg_low_reclaim;		/* derived from scan_control */
+};
+
 static unsigned int i_hash_mask __read_mostly;
 static unsigned int i_hash_shift __read_mostly;
 static struct hlist_head *inode_hashtable __read_mostly;
@@ -713,6 +719,61 @@ int invalidate_inodes(struct super_block *sb, bool kill_dirty)
 	return busy;
 }
 
+#ifdef CONFIG_MEMCG_KMEM
+/*
+ * Once an inode is freed, all of its page caches will be dropped as
+ * well, even if there are lots of them. So if we intend to protect
+ * page caches in a memcg, we must protect their host (the inode) first.
+ * Otherwise the memcg protection can be easily bypassed by freeing the
+ * inode, especially if there are big files in this memcg.
+ * Note that it may happen that the page caches are already charged to the
+ * memcg, but the inode hasn't been added to this memcg yet. In this case,
+ * this inode is not protected.
+ * The inherent mismatch between memcg and inode is troublesome. One inode
+ * can be shared by different MEMCGs, but that is a very rare case. If
+ * an inode is shared, its page caches may be charged to
+ * different MEMCGs. Currently there is no perfect solution for this
+ * kind of issue, but the inode majority-writer ownership switching can
+ * help more or less.
+ */
+static bool memcg_can_reclaim_inode(struct inode *inode,
+				    struct inode_isolate_control *iic)
+{
+	unsigned long cgroup_size;
+	unsigned long protection;
+	struct mem_cgroup *memcg;
+	bool reclaimable = true;
+
+	if (!inode->i_data.nrpages)
+		goto out;
+
+	/* Excludes freeing inode via drop_caches */
+	if (!current->reclaim_state)
+		goto out;
+
+	memcg = iic->memcg;
+	if (!memcg || memcg == root_mem_cgroup)
+		goto out;
+
+	protection = mem_cgroup_protection(memcg, iic->memcg_low_reclaim);
+	if (!protection)
+		goto out;
+
+	cgroup_size = mem_cgroup_size(memcg);
+	if (inode->i_data.nrpages + protection >= cgroup_size)
+		reclaimable = false;
+
+out:
+	return reclaimable;
+}
+#else /* CONFIG_MEMCG_KMEM */
+static bool memcg_can_reclaim_inode(struct inode *inode,
+				    struct inode_isolate_control *iic)
+{
+	return true;
+}
+#endif /* CONFIG_MEMCG_KMEM */
+
 /*
  * Isolate the inode from the LRU in preparation for freeing it.
  *
@@ -731,8 +792,9 @@ int invalidate_inodes(struct super_block *sb, bool kill_dirty)
 static enum lru_status inode_lru_isolate(struct list_head *item,
 		struct list_lru_one *lru, spinlock_t *lru_lock, void *arg)
 {
-	struct list_head *freeable = arg;
-	struct inode	*inode = container_of(item, struct inode, i_lru);
+	struct inode_isolate_control *iic = arg;
+	struct list_head *freeable = iic->freeable;
+	struct inode *inode = container_of(item, struct inode, i_lru);
 
 	/*
 	 * we are inverting the lru lock/inode->i_lock here, so use a trylock.
@@ -741,6 +803,11 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
 	if (!spin_trylock(&inode->i_lock))
 		return LRU_SKIP;
 
+	if (!memcg_can_reclaim_inode(inode, iic)) {
+		spin_unlock(&inode->i_lock);
+		return LRU_ROTATE;
+	}
+
 	/*
 	 * Referenced or dirty inodes are still in use. Give them another pass
 	 * through the LRU as we canot reclaim them now.
@@ -798,9 +865,14 @@ long prune_icache_sb(struct super_block *sb, struct shrink_control *sc)
 {
 	LIST_HEAD(freeable);
 	long freed;
+	struct inode_isolate_control iic = {
+		.freeable = &freeable,
+		.memcg = sc->memcg,
+		.memcg_low_reclaim = sc->memcg_low_reclaim,
+	};
 
 	freed = list_lru_shrink_walk(&sb->s_inode_lru, sc,
-				     inode_lru_isolate, &freeable);
+				     inode_lru_isolate, &iic);
 	dispose_list(&freeable);
 	return freed;
 }
-- 
1.8.3.1



* Re: [PATCH v3 0/3] protect page cache from freeing inode
  2020-01-08 16:03 [PATCH v3 0/3] protect page cache from freeing inode Yafang Shao
                   ` (2 preceding siblings ...)
  2020-01-08 16:03 ` [PATCH v3 3/3] memcg, inode: protect page cache from freeing inode Yafang Shao
@ 2020-01-22 13:46 ` Yafang Shao
  2020-02-04 21:19   ` Dave Chinner
  3 siblings, 1 reply; 7+ messages in thread
From: Yafang Shao @ 2020-01-22 13:46 UTC (permalink / raw)
  To: Dave Chinner, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Roman Gushchin, Andrew Morton, Al Viro
  Cc: Linux MM, linux-fsdevel

On Thu, Jan 9, 2020 at 12:04 AM Yafang Shao <laoar.shao@gmail.com> wrote:
>
> On my server there're some running MEMCGs protected by memory.{min, low},
> but I found the usage of these MEMCGs abruptly became very small, which
> were far less than the protect limit. It confused me and finally I
> found that was because of inode stealing.
> Once an inode is freed, all its belonging page caches will be dropped as
> well, no matter how many page caches it has. So if we intend to protect the
> page caches in a memcg, we must protect their host (the inode) first.
> Otherwise the memcg protection can be easily bypassed with freeing inode,
> especially if there're big files in this memcg.
> The inherent mismatch between memcg and inode is a trouble. One inode can
> be shared by different MEMCGs, but it is a very rare case. If an inode is
> shared, its belonging page caches may be charged to different MEMCGs.
> Currently there's no perfect solution to fix this kind of issue, but the
> inode majority-writer ownership switching can help it more or less.
>
> - Changes against v2:
>     1. Separates memcg patches from this patchset, suggested by Roman.
>        A separate patch is already ACKed by Roman; the MEMCG
>        maintainers are kindly asked to take a look at it[1].
>     2. Improves code around the usage of for_each_mem_cgroup(), suggested
>        by Dave
>     3. Use memcg_low_reclaim passed from scan_control, instead of
>        introducing a new member in struct mem_cgroup.
>     4. Some other code improvement suggested by Dave.
>
>
> - Changes against v1:
> Use the memcg passed from the shrink_control, instead of getting it from
> the inode itself, suggested by Dave. That makes the layering better.
>
> [1]
> https://lore.kernel.org/linux-mm/CALOAHbBhPgh3WEuLu2B6e2vj1J8K=gGOyCKzb8tKWmDqFs-rfQ@mail.gmail.com/
>
> Yafang Shao (3):
>   mm, list_lru: make memcg visible to lru walker isolation function
>   mm, shrinker: make memcg low reclaim visible to lru walker isolation
>     function
>   memcg, inode: protect page cache from freeing inode
>
>  fs/inode.c                 | 78 ++++++++++++++++++++++++++++++++++++++++++++--
>  include/linux/memcontrol.h | 21 +++++++++++++
>  include/linux/shrinker.h   |  3 ++
>  mm/list_lru.c              | 47 +++++++++++++++++-----------
>  mm/memcontrol.c            | 15 ---------
>  mm/vmscan.c                | 27 +++++++++-------
>  6 files changed, 143 insertions(+), 48 deletions(-)
>

Dave,  Johannes,

Any comments on this new version ?

Thanks
Yafang


* Re: [PATCH v3 0/3] protect page cache from freeing inode
  2020-01-22 13:46 ` [PATCH v3 0/3] " Yafang Shao
@ 2020-02-04 21:19   ` Dave Chinner
  2020-02-05  1:19     ` Yafang Shao
  0 siblings, 1 reply; 7+ messages in thread
From: Dave Chinner @ 2020-02-04 21:19 UTC (permalink / raw)
  To: Yafang Shao
  Cc: Dave Chinner, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Roman Gushchin, Andrew Morton, Al Viro, Linux MM, linux-fsdevel

On Wed, Jan 22, 2020 at 09:46:57PM +0800, Yafang Shao wrote:
> On Thu, Jan 9, 2020 at 12:04 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> >
> > On my server there're some running MEMCGs protected by memory.{min, low},
> > but I found the usage of these MEMCGs abruptly became very small, which
> > were far less than the protect limit. It confused me and finally I
> > found that was because of inode stealing.
> > Once an inode is freed, all its belonging page caches will be dropped as
> > well, no matter how many page caches it has. So if we intend to protect the
> > page caches in a memcg, we must protect their host (the inode) first.
> > Otherwise the memcg protection can be easily bypassed with freeing inode,
> > especially if there're big files in this memcg.
> > The inherent mismatch between memcg and inode is a trouble. One inode can
> > be shared by different MEMCGs, but it is a very rare case. If an inode is
> > shared, its belonging page caches may be charged to different MEMCGs.
> > Currently there's no perfect solution to fix this kind of issue, but the
> > inode majority-writer ownership switching can help it more or less.
> >
> > - Changes against v2:
> >     1. Separates memcg patches from this patchset, suggested by Roman.
> >        A separate patch is already ACKed by Roman; the MEMCG
> >        maintainers are kindly asked to take a look at it[1].
> >     2. Improves code around the usage of for_each_mem_cgroup(), suggested
> >        by Dave
> >     3. Use memcg_low_reclaim passed from scan_control, instead of
> >        introducing a new member in struct mem_cgroup.
> >     4. Some other code improvement suggested by Dave.
> >
> >
> > - Changes against v1:
> > Use the memcg passed from the shrink_control, instead of getting it from
> > the inode itself, suggested by Dave. That makes the layering better.
> >
> > [1]
> > https://lore.kernel.org/linux-mm/CALOAHbBhPgh3WEuLu2B6e2vj1J8K=gGOyCKzb8tKWmDqFs-rfQ@mail.gmail.com/
> >
> > Yafang Shao (3):
> >   mm, list_lru: make memcg visible to lru walker isolation function
> >   mm, shrinker: make memcg low reclaim visible to lru walker isolation
> >     function
> >   memcg, inode: protect page cache from freeing inode
> >
> >  fs/inode.c                 | 78 ++++++++++++++++++++++++++++++++++++++++++++--
> >  include/linux/memcontrol.h | 21 +++++++++++++
> >  include/linux/shrinker.h   |  3 ++
> >  mm/list_lru.c              | 47 +++++++++++++++++-----------
> >  mm/memcontrol.c            | 15 ---------
> >  mm/vmscan.c                | 27 +++++++++-------
> >  6 files changed, 143 insertions(+), 48 deletions(-)
> >
> 
> Dave,  Johannes,
> 
> Any comments on this new version ?

Sorry, I lost track of this amongst travel and conferences in
mid-January. Can you update and post it again once -rc1 is out?

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com


* Re: [PATCH v3 0/3] protect page cache from freeing inode
  2020-02-04 21:19   ` Dave Chinner
@ 2020-02-05  1:19     ` Yafang Shao
  0 siblings, 0 replies; 7+ messages in thread
From: Yafang Shao @ 2020-02-05  1:19 UTC (permalink / raw)
  To: Dave Chinner
  Cc: Dave Chinner, Johannes Weiner, Michal Hocko, Vladimir Davydov,
	Roman Gushchin, Andrew Morton, Al Viro, Linux MM, linux-fsdevel

On Wed, Feb 5, 2020 at 5:20 AM Dave Chinner <david@fromorbit.com> wrote:
>
> On Wed, Jan 22, 2020 at 09:46:57PM +0800, Yafang Shao wrote:
> > On Thu, Jan 9, 2020 at 12:04 AM Yafang Shao <laoar.shao@gmail.com> wrote:
> > >
> > > On my server there're some running MEMCGs protected by memory.{min, low},
> > > but I found the usage of these MEMCGs abruptly became very small, which
> > > were far less than the protect limit. It confused me and finally I
> > > found that was because of inode stealing.
> > > Once an inode is freed, all its belonging page caches will be dropped as
> > > well, no matter how many page caches it has. So if we intend to protect the
> > > page caches in a memcg, we must protect their host (the inode) first.
> > > Otherwise the memcg protection can be easily bypassed with freeing inode,
> > > especially if there're big files in this memcg.
> > > The inherent mismatch between memcg and inode is a trouble. One inode can
> > > be shared by different MEMCGs, but it is a very rare case. If an inode is
> > > shared, its belonging page caches may be charged to different MEMCGs.
> > > Currently there's no perfect solution to fix this kind of issue, but the
> > > inode majority-writer ownership switching can help it more or less.
> > >
> > > - Changes against v2:
> > >     1. Separates memcg patches from this patchset, suggested by Roman.
> > >        A separate patch is already ACKed by Roman; the MEMCG
> > >        maintainers are kindly asked to take a look at it[1].
> > >     2. Improves code around the usage of for_each_mem_cgroup(), suggested
> > >        by Dave
> > >     3. Use memcg_low_reclaim passed from scan_control, instead of
> > >        introducing a new member in struct mem_cgroup.
> > >     4. Some other code improvement suggested by Dave.
> > >
> > >
> > > - Changes against v1:
> > > Use the memcg passed from the shrink_control, instead of getting it from
> > > the inode itself, suggested by Dave. That makes the layering better.
> > >
> > > [1]
> > > https://lore.kernel.org/linux-mm/CALOAHbBhPgh3WEuLu2B6e2vj1J8K=gGOyCKzb8tKWmDqFs-rfQ@mail.gmail.com/
> > >
> > > Yafang Shao (3):
> > >   mm, list_lru: make memcg visible to lru walker isolation function
> > >   mm, shrinker: make memcg low reclaim visible to lru walker isolation
> > >     function
> > >   memcg, inode: protect page cache from freeing inode
> > >
> > >  fs/inode.c                 | 78 ++++++++++++++++++++++++++++++++++++++++++++--
> > >  include/linux/memcontrol.h | 21 +++++++++++++
> > >  include/linux/shrinker.h   |  3 ++
> > >  mm/list_lru.c              | 47 +++++++++++++++++-----------
> > >  mm/memcontrol.c            | 15 ---------
> > >  mm/vmscan.c                | 27 +++++++++-------
> > >  6 files changed, 143 insertions(+), 48 deletions(-)
> > >
> >
> > Dave,  Johannes,
> >
> > Any comments on this new version ?
>
> Sorry, I lost track of this amongst travel and conferences mid
> january. Can you update and post it again once -rc1 is out?
>

Sure, I will do it.
Thanks for your reply.

Thanks
Yafang

