Subject: [RFC v2] memcg: add memcg lru for page reclaiming
From: Hillf Danton @ 2019-10-26 11:07 UTC
  To: linux-mm
  Cc: Andrew Morton, linux-kernel, Chris Down, Tejun Heo,
	Roman Gushchin, Michal Hocko, Johannes Weiner, Shakeel Butt,
	Matthew Wilcox, Minchan Kim, Mel Gorman, Hillf Danton


Soft limit reclaim (slr) is currently frozen; see
Documentation/admin-guide/cgroup-v2.rst for the reasons.

This work adds a memcg hook into kswapd's logic to bypass slr,
paving the way for cleaning it up later.

After b23afb93d317 ("memcg: punt high overage reclaim to
return-to-userland path"), high limit breachers are reclaimed one
after another, spiraling up through the memcg hierarchy, before the
task returns to userspace.
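
For reference, that spiral up currently looks roughly like this
(simplified from reclaim_high() in mm/memcontrol.c; shown only for
context, not part of the patch):

	static void reclaim_high(struct mem_cgroup *memcg,
				 unsigned int nr_pages, gfp_t gfp_mask)
	{
		do {
			/* walk from the breacher up towards the root */
			if (page_counter_read(&memcg->memory) <= memcg->high)
				continue;
			memcg_memory_event(memcg, MEMCG_HIGH);
			try_to_free_mem_cgroup_pages(memcg, nr_pages,
						     gfp_mask, true);
		} while ((memcg = parent_mem_cgroup(memcg)));
	}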

The new hook cannot be added unless it is feasible to defer that
reclaim a bit further, until kswapd becomes active.

It can be deferred, however, because a high limit breach looks
benign in the absence of memory pressure, and in the presence of
kswapd we ensure it will be reclaimed soon.

To delay the reclaim, the spiral up is broken into two parts: the
up half, which reclaims only from the first victim, and the bottom
half, which only queues the victim's first still-breaching ancestor
for later processing. The deferral can be ignored if we are already
under memory pressure; otherwise the work is done after the bottom
half (BH).
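
In code, the two halves end up roughly as in the sketch below, which
mirrors the reclaim_high() change and the new mem_cgroup_reclaim_high()
in the patch, with comments added for clarity:

	static void reclaim_high(struct mem_cgroup *memcg,
				 unsigned int nr_pages, gfp_t gfp_mask)
	{
		struct mem_cgroup *victim = memcg;

		do {
			if (page_counter_read(&memcg->memory) <= memcg->high)
				continue;
			memcg_memory_event(memcg, MEMCG_HIGH);
			if (victim != memcg) {
				/* bottom half: queue the first breaching
				 * ancestor and stop walking up */
				memcg_add_lru(memcg);
				return;
			}
			/* up half: reclaim only from the first breacher */
			try_to_free_mem_cgroup_pages(memcg, nr_pages,
						     gfp_mask, true);
		} while ((memcg = parent_mem_cgroup(memcg)));
	}

	/* later, in kswapd context: pop one queued breacher and kick
	 * its existing high_work */
	void mem_cgroup_reclaim_high(void)
	{
		struct mem_cgroup *memcg = memcg_pinch_lru();

		if (memcg)
			schedule_work(&memcg->high_work);
	}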

Then we need a FIFO list to facilitate queuing up breachers and
reclaiming from them in round robin once kswapd starts working. It
is essentially a simple copy of the page LRU.

The new hook is not added without addressing slr's other problem,
over-reclaim, though the first problem was already solved by the
current spiral up. Over-reclaim is addressed by reclaiming
MEMCG_CHARGE_BATCH pages at a time from a victim, which is also the
current high_work behavior.
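
For reference, that is roughly what high_work already does today
(simplified from mm/memcontrol.c):

	static void high_work_func(struct work_struct *work)
	{
		struct mem_cgroup *memcg;

		memcg = container_of(work, struct mem_cgroup, high_work);
		/* reclaim MEMCG_CHARGE_BATCH pages at a time */
		reclaim_high(memcg, MEMCG_CHARGE_BATCH, GFP_KERNEL);
	}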

V2 is based on next-20191025.

Changes since v1
- drop MEMCG_LRU 
- add hook into kswapd's logic to bypass slr

Changes since v0
- add MEMCG_LRU in init/Kconfig
- drop changes in mm/vmscan.c
- make memcg lru work in parallel to slr

Cc: Chris Down <chris@chrisdown.name>
Cc: Tejun Heo <tj@kernel.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Hillf Danton <hdanton@sina.com>
---

--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -222,6 +222,8 @@ struct mem_cgroup {
 	/* Upper bound of normal memory consumption range */
 	unsigned long high;
 
+	struct list_head lru_node;
+
 	/* Range enforcement for interrupt charges */
 	struct work_struct high_work;
 
@@ -740,6 +742,8 @@ static inline void mod_lruvec_page_state
 	local_irq_restore(flags);
 }
 
+void mem_cgroup_reclaim_high(void);
+
 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 						gfp_t gfp_mask,
 						unsigned long *total_scanned);
@@ -1126,6 +1130,10 @@ static inline void __mod_lruvec_slab_sta
 	__mod_node_page_state(page_pgdat(page), idx, val);
 }
 
+static inline void mem_cgroup_reclaim_high(void)
+{
+}
+
 static inline
 unsigned long mem_cgroup_soft_limit_reclaim(pg_data_t *pgdat, int order,
 					    gfp_t gfp_mask,
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2332,14 +2332,57 @@ static int memcg_hotplug_cpu_dead(unsign
 	return 0;
 }
 
+static DEFINE_SPINLOCK(memcg_lru_lock);
+static LIST_HEAD(memcg_lru);		/* a copy of page lru */
+
+static void memcg_add_lru(struct mem_cgroup *memcg)
+{
+	spin_lock_irq(&memcg_lru_lock);
+	if (list_empty(&memcg->lru_node))
+		list_add_tail(&memcg->lru_node, &memcg_lru);
+	spin_unlock_irq(&memcg_lru_lock);
+}
+
+static struct mem_cgroup *memcg_pinch_lru(void)
+{
+	struct mem_cgroup *memcg, *next;
+
+	spin_lock_irq(&memcg_lru_lock);
+
+	list_for_each_entry_safe(memcg, next, &memcg_lru, lru_node) {
+		list_del_init(&memcg->lru_node);
+
+		if (page_counter_read(&memcg->memory) > memcg->high) {
+			spin_unlock_irq(&memcg_lru_lock);
+			return memcg;
+		}
+	}
+	spin_unlock_irq(&memcg_lru_lock);
+
+	return NULL;
+}
+
+void mem_cgroup_reclaim_high(void)
+{
+	struct mem_cgroup *memcg = memcg_pinch_lru();
+
+	if (memcg)
+		schedule_work(&memcg->high_work);
+}
+
 static void reclaim_high(struct mem_cgroup *memcg,
 			 unsigned int nr_pages,
 			 gfp_t gfp_mask)
 {
+	struct mem_cgroup *victim = memcg;
 	do {
 		if (page_counter_read(&memcg->memory) <= memcg->high)
 			continue;
 		memcg_memory_event(memcg, MEMCG_HIGH);
+		if (victim != memcg) {
+			memcg_add_lru(memcg);
+			return;
+		}
 		try_to_free_mem_cgroup_pages(memcg, nr_pages, gfp_mask, true);
 	} while ((memcg = parent_mem_cgroup(memcg)));
 }
@@ -5055,6 +5098,7 @@ static struct mem_cgroup *mem_cgroup_all
 	if (memcg_wb_domain_init(memcg, GFP_KERNEL))
 		goto fail;
 
+	INIT_LIST_HEAD(&memcg->lru_node);
 	INIT_WORK(&memcg->high_work, high_work_func);
 	memcg->last_scanned_node = MAX_NUMNODES;
 	INIT_LIST_HEAD(&memcg->oom_notify);
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2996,6 +2996,9 @@ static void shrink_zones(struct zonelist
 			if (zone->zone_pgdat == last_pgdat)
 				continue;
 
+			mem_cgroup_reclaim_high();
+			continue;
+
 			/*
 			 * This steals pages from memory cgroups over softlimit
 			 * and returns the number of reclaimed pages and
@@ -3690,12 +3693,16 @@ restart:
 		if (sc.priority < DEF_PRIORITY - 2)
 			sc.may_writepage = 1;
 
+		mem_cgroup_reclaim_high();
+		goto soft_limit_reclaim_end;
+
 		/* Call soft limit reclaim before calling shrink_node. */
 		sc.nr_scanned = 0;
 		nr_soft_scanned = 0;
 		nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(pgdat, sc.order,
 						sc.gfp_mask, &nr_soft_scanned);
 		sc.nr_reclaimed += nr_soft_reclaimed;
+soft_limit_reclaim_end:
 
 		/*
 		 * There should be no need to raise the scanning priority if
--


