* [PATCH 0/5] memcg: trivial cleanups
@ 2012-01-01  7:26 Hugh Dickins
  2012-01-01  7:27 ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
                   ` (6 more replies)
  0 siblings, 7 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-01  7:26 UTC (permalink / raw)
  To: Andrew Morton
  Cc: KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

Obviously I've missed the boat for per-memcg per-zone LRU locking in 3.3,
but I've split out a shameless bunch of trivial cleanups from that work,
and I'm hoping these might still sneak in unless they prove controversial.

Following on from my earlier mmotm/next patches, here are five
to memcontrol.c and .h, followed by six to the rest of mm.

[PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR
[PATCH 2/5] memcg: replace mem and mem_cont stragglers
[PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
[PATCH 4/5] memcg: enum lru_list lru
[PATCH 5/5] memcg: remove redundant returns

 include/linux/memcontrol.h |    2 
 mm/memcontrol.c            |  121 ++++++++++++++++-------------------
 2 files changed, 58 insertions(+), 65 deletions(-)

Hugh

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>


* [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR
  2012-01-01  7:26 [PATCH 0/5] memcg: trivial cleanups Hugh Dickins
@ 2012-01-01  7:27 ` Hugh Dickins
  2012-01-01  7:33   ` KOSAKI Motohiro
                     ` (2 more replies)
  2012-01-01  7:29 ` [PATCH 2/5] memcg: replace mem and mem_cont stragglers Hugh Dickins
                   ` (5 subsequent siblings)
  6 siblings, 3 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-01  7:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

Correct an #endif comment in memcontrol.h from MEM_CONT to MEM_RES_CTLR.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 include/linux/memcontrol.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- mmotm.orig/include/linux/memcontrol.h	2011-12-30 21:21:34.923338593 -0800
+++ mmotm/include/linux/memcontrol.h	2011-12-30 21:21:51.939338993 -0800
@@ -396,7 +396,7 @@ static inline void mem_cgroup_replace_pa
 static inline void mem_cgroup_reset_owner(struct page *page)
 {
 }
-#endif /* CONFIG_CGROUP_MEM_CONT */
+#endif /* CONFIG_CGROUP_MEM_RES_CTLR */
 
 #if !defined(CONFIG_CGROUP_MEM_RES_CTLR) || !defined(CONFIG_DEBUG_VM)
 static inline bool


* [PATCH 2/5] memcg: replace mem and mem_cont stragglers
  2012-01-01  7:26 [PATCH 0/5] memcg: trivial cleanups Hugh Dickins
  2012-01-01  7:27 ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
@ 2012-01-01  7:29 ` Hugh Dickins
  2012-01-01  7:34   ` KOSAKI Motohiro
                     ` (2 more replies)
  2012-01-01  7:30 ` [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT Hugh Dickins
                   ` (4 subsequent siblings)
  6 siblings, 3 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-01  7:29 UTC (permalink / raw)
  To: Andrew Morton
  Cc: KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

Replace mem and mem_cont stragglers in memcontrol.c by memcg.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/memcontrol.c |   84 +++++++++++++++++++++++-----------------------
 1 file changed, 42 insertions(+), 42 deletions(-)

--- mmotm.orig/mm/memcontrol.c	2011-12-30 21:21:34.895338593 -0800
+++ mmotm/mm/memcontrol.c	2011-12-30 21:22:05.679339324 -0800
@@ -144,7 +144,7 @@ struct mem_cgroup_per_zone {
 	unsigned long long	usage_in_excess;/* Set to the value by which */
 						/* the soft limit is exceeded*/
 	bool			on_tree;
-	struct mem_cgroup	*mem;		/* Back pointer, we cannot */
+	struct mem_cgroup	*memcg;		/* Back pointer, we cannot */
 						/* use container_of	   */
 };
 /* Macro for accessing counter */
@@ -597,9 +597,9 @@ retry:
 	 * we will to add it back at the end of reclaim to its correct
 	 * position in the tree.
 	 */
-	__mem_cgroup_remove_exceeded(mz->mem, mz, mctz);
-	if (!res_counter_soft_limit_excess(&mz->mem->res) ||
-		!css_tryget(&mz->mem->css))
+	__mem_cgroup_remove_exceeded(mz->memcg, mz, mctz);
+	if (!res_counter_soft_limit_excess(&mz->memcg->res) ||
+		!css_tryget(&mz->memcg->css))
 		goto retry;
 done:
 	return mz;
@@ -1743,22 +1743,22 @@ static DEFINE_SPINLOCK(memcg_oom_lock);
 static DECLARE_WAIT_QUEUE_HEAD(memcg_oom_waitq);
 
 struct oom_wait_info {
-	struct mem_cgroup *mem;
+	struct mem_cgroup *memcg;
 	wait_queue_t	wait;
 };
 
 static int memcg_oom_wake_function(wait_queue_t *wait,
 	unsigned mode, int sync, void *arg)
 {
-	struct mem_cgroup *wake_memcg = (struct mem_cgroup *)arg,
-			  *oom_wait_memcg;
+	struct mem_cgroup *wake_memcg = (struct mem_cgroup *)arg;
+	struct mem_cgroup *oom_wait_memcg;
 	struct oom_wait_info *oom_wait_info;
 
 	oom_wait_info = container_of(wait, struct oom_wait_info, wait);
-	oom_wait_memcg = oom_wait_info->mem;
+	oom_wait_memcg = oom_wait_info->memcg;
 
 	/*
-	 * Both of oom_wait_info->mem and wake_mem are stable under us.
+	 * Both of oom_wait_info->memcg and wake_memcg are stable under us.
 	 * Then we can use css_is_ancestor without taking care of RCU.
 	 */
 	if (!mem_cgroup_same_or_subtree(oom_wait_memcg, wake_memcg)
@@ -1787,7 +1787,7 @@ bool mem_cgroup_handle_oom(struct mem_cg
 	struct oom_wait_info owait;
 	bool locked, need_to_kill;
 
-	owait.mem = memcg;
+	owait.memcg = memcg;
 	owait.wait.flags = 0;
 	owait.wait.func = memcg_oom_wake_function;
 	owait.wait.private = current;
@@ -3535,7 +3535,7 @@ unsigned long mem_cgroup_soft_limit_recl
 			break;
 
 		nr_scanned = 0;
-		reclaimed = mem_cgroup_soft_reclaim(mz->mem, zone,
+		reclaimed = mem_cgroup_soft_reclaim(mz->memcg, zone,
 						    gfp_mask, &nr_scanned);
 		nr_reclaimed += reclaimed;
 		*total_scanned += nr_scanned;
@@ -3562,13 +3562,13 @@ unsigned long mem_cgroup_soft_limit_recl
 				next_mz =
 				__mem_cgroup_largest_soft_limit_node(mctz);
 				if (next_mz == mz)
-					css_put(&next_mz->mem->css);
+					css_put(&next_mz->memcg->css);
 				else /* next_mz == NULL or other memcg */
 					break;
 			} while (1);
 		}
-		__mem_cgroup_remove_exceeded(mz->mem, mz, mctz);
-		excess = res_counter_soft_limit_excess(&mz->mem->res);
+		__mem_cgroup_remove_exceeded(mz->memcg, mz, mctz);
+		excess = res_counter_soft_limit_excess(&mz->memcg->res);
 		/*
 		 * One school of thought says that we should not add
 		 * back the node to the tree if reclaim returns 0.
@@ -3578,9 +3578,9 @@ unsigned long mem_cgroup_soft_limit_recl
 		 * term TODO.
 		 */
 		/* If excess == 0, no tree ops */
-		__mem_cgroup_insert_exceeded(mz->mem, mz, mctz, excess);
+		__mem_cgroup_insert_exceeded(mz->memcg, mz, mctz, excess);
 		spin_unlock(&mctz->lock);
-		css_put(&mz->mem->css);
+		css_put(&mz->memcg->css);
 		loop++;
 		/*
 		 * Could not reclaim anything and there are no more
@@ -3593,7 +3593,7 @@ unsigned long mem_cgroup_soft_limit_recl
 			break;
 	} while (!nr_reclaimed);
 	if (next_mz)
-		css_put(&next_mz->mem->css);
+		css_put(&next_mz->memcg->css);
 	return nr_reclaimed;
 }
 
@@ -4096,38 +4096,38 @@ static int mem_control_numa_stat_show(st
 	unsigned long total_nr, file_nr, anon_nr, unevictable_nr;
 	unsigned long node_nr;
 	struct cgroup *cont = m->private;
-	struct mem_cgroup *mem_cont = mem_cgroup_from_cont(cont);
+	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
 
-	total_nr = mem_cgroup_nr_lru_pages(mem_cont, LRU_ALL);
+	total_nr = mem_cgroup_nr_lru_pages(memcg, LRU_ALL);
 	seq_printf(m, "total=%lu", total_nr);
 	for_each_node_state(nid, N_HIGH_MEMORY) {
-		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid, LRU_ALL);
+		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid, LRU_ALL);
 		seq_printf(m, " N%d=%lu", nid, node_nr);
 	}
 	seq_putc(m, '\n');
 
-	file_nr = mem_cgroup_nr_lru_pages(mem_cont, LRU_ALL_FILE);
+	file_nr = mem_cgroup_nr_lru_pages(memcg, LRU_ALL_FILE);
 	seq_printf(m, "file=%lu", file_nr);
 	for_each_node_state(nid, N_HIGH_MEMORY) {
-		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid,
+		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
 				LRU_ALL_FILE);
 		seq_printf(m, " N%d=%lu", nid, node_nr);
 	}
 	seq_putc(m, '\n');
 
-	anon_nr = mem_cgroup_nr_lru_pages(mem_cont, LRU_ALL_ANON);
+	anon_nr = mem_cgroup_nr_lru_pages(memcg, LRU_ALL_ANON);
 	seq_printf(m, "anon=%lu", anon_nr);
 	for_each_node_state(nid, N_HIGH_MEMORY) {
-		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid,
+		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
 				LRU_ALL_ANON);
 		seq_printf(m, " N%d=%lu", nid, node_nr);
 	}
 	seq_putc(m, '\n');
 
-	unevictable_nr = mem_cgroup_nr_lru_pages(mem_cont, BIT(LRU_UNEVICTABLE));
+	unevictable_nr = mem_cgroup_nr_lru_pages(memcg, BIT(LRU_UNEVICTABLE));
 	seq_printf(m, "unevictable=%lu", unevictable_nr);
 	for_each_node_state(nid, N_HIGH_MEMORY) {
-		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid,
+		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
 				BIT(LRU_UNEVICTABLE));
 		seq_printf(m, " N%d=%lu", nid, node_nr);
 	}
@@ -4139,12 +4139,12 @@ static int mem_control_numa_stat_show(st
 static int mem_control_stat_show(struct cgroup *cont, struct cftype *cft,
 				 struct cgroup_map_cb *cb)
 {
-	struct mem_cgroup *mem_cont = mem_cgroup_from_cont(cont);
+	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
 	struct mcs_total_stat mystat;
 	int i;
 
 	memset(&mystat, 0, sizeof(mystat));
-	mem_cgroup_get_local_stat(mem_cont, &mystat);
+	mem_cgroup_get_local_stat(memcg, &mystat);
 
 
 	for (i = 0; i < NR_MCS_STAT; i++) {
@@ -4156,14 +4156,14 @@ static int mem_control_stat_show(struct
 	/* Hierarchical information */
 	{
 		unsigned long long limit, memsw_limit;
-		memcg_get_hierarchical_limit(mem_cont, &limit, &memsw_limit);
+		memcg_get_hierarchical_limit(memcg, &limit, &memsw_limit);
 		cb->fill(cb, "hierarchical_memory_limit", limit);
 		if (do_swap_account)
 			cb->fill(cb, "hierarchical_memsw_limit", memsw_limit);
 	}
 
 	memset(&mystat, 0, sizeof(mystat));
-	mem_cgroup_get_total_stat(mem_cont, &mystat);
+	mem_cgroup_get_total_stat(memcg, &mystat);
 	for (i = 0; i < NR_MCS_STAT; i++) {
 		if (i == MCS_SWAP && !do_swap_account)
 			continue;
@@ -4179,7 +4179,7 @@ static int mem_control_stat_show(struct
 
 		for_each_online_node(nid)
 			for (zid = 0; zid < MAX_NR_ZONES; zid++) {
-				mz = mem_cgroup_zoneinfo(mem_cont, nid, zid);
+				mz = mem_cgroup_zoneinfo(memcg, nid, zid);
 
 				recent_rotated[0] +=
 					mz->reclaim_stat.recent_rotated[0];
@@ -4808,7 +4808,7 @@ static int alloc_mem_cgroup_per_zone_inf
 			INIT_LIST_HEAD(&mz->lruvec.lists[l]);
 		mz->usage_in_excess = 0;
 		mz->on_tree = false;
-		mz->mem = memcg;
+		mz->memcg = memcg;
 	}
 	memcg->info.nodeinfo[node] = pn;
 	return 0;
@@ -4821,29 +4821,29 @@ static void free_mem_cgroup_per_zone_inf
 
 static struct mem_cgroup *mem_cgroup_alloc(void)
 {
-	struct mem_cgroup *mem;
+	struct mem_cgroup *memcg;
 	int size = sizeof(struct mem_cgroup);
 
 	/* Can be very big if MAX_NUMNODES is very big */
 	if (size < PAGE_SIZE)
-		mem = kzalloc(size, GFP_KERNEL);
+		memcg = kzalloc(size, GFP_KERNEL);
 	else
-		mem = vzalloc(size);
+		memcg = vzalloc(size);
 
-	if (!mem)
+	if (!memcg)
 		return NULL;
 
-	mem->stat = alloc_percpu(struct mem_cgroup_stat_cpu);
-	if (!mem->stat)
+	memcg->stat = alloc_percpu(struct mem_cgroup_stat_cpu);
+	if (!memcg->stat)
 		goto out_free;
-	spin_lock_init(&mem->pcp_counter_lock);
-	return mem;
+	spin_lock_init(&memcg->pcp_counter_lock);
+	return memcg;
 
 out_free:
 	if (size < PAGE_SIZE)
-		kfree(mem);
+		kfree(memcg);
 	else
-		vfree(mem);
+		vfree(memcg);
 	return NULL;
 }
 


* [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
  2012-01-01  7:26 [PATCH 0/5] memcg: trivial cleanups Hugh Dickins
  2012-01-01  7:27 ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
  2012-01-01  7:29 ` [PATCH 2/5] memcg: replace mem and mem_cont stragglers Hugh Dickins
@ 2012-01-01  7:30 ` Hugh Dickins
  2012-01-01  7:37   ` KOSAKI Motohiro
                     ` (2 more replies)
  2012-01-01  7:31 ` [PATCH 4/5] memcg: enum lru_list lru Hugh Dickins
                   ` (3 subsequent siblings)
  6 siblings, 3 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-01  7:30 UTC (permalink / raw)
  To: Andrew Morton
  Cc: KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

I never understood why we need a MEM_CGROUP_ZSTAT(mz, idx) macro
to obscure the LRU counts.  For easier searching?  So call it
lru_size rather than bare count (lru_length sounds better, but
would be wrong, since each huge page raises lru_size hugely).

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/memcontrol.c |   14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

--- mmotm.orig/mm/memcontrol.c	2011-12-30 21:22:05.679339324 -0800
+++ mmotm/mm/memcontrol.c	2011-12-30 21:23:28.243341326 -0800
@@ -135,7 +135,7 @@ struct mem_cgroup_reclaim_iter {
  */
 struct mem_cgroup_per_zone {
 	struct lruvec		lruvec;
-	unsigned long		count[NR_LRU_LISTS];
+	unsigned long		lru_size[NR_LRU_LISTS];
 
 	struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1];
 
@@ -147,8 +147,6 @@ struct mem_cgroup_per_zone {
 	struct mem_cgroup	*memcg;		/* Back pointer, we cannot */
 						/* use container_of	   */
 };
-/* Macro for accessing counter */
-#define MEM_CGROUP_ZSTAT(mz, idx)	((mz)->count[(idx)])
 
 struct mem_cgroup_per_node {
 	struct mem_cgroup_per_zone zoneinfo[MAX_NR_ZONES];
@@ -713,7 +711,7 @@ mem_cgroup_zone_nr_lru_pages(struct mem_
 
 	for_each_lru(l) {
 		if (BIT(l) & lru_mask)
-			ret += MEM_CGROUP_ZSTAT(mz, l);
+			ret += mz->lru_size[l];
 	}
 	return ret;
 }
@@ -1048,7 +1046,7 @@ struct lruvec *mem_cgroup_lru_add_list(s
 	memcg = pc->mem_cgroup;
 	mz = page_cgroup_zoneinfo(memcg, page);
 	/* compound_order() is stabilized through lru_lock */
-	MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page);
+	mz->lru_size[lru] += 1 << compound_order(page);
 	return &mz->lruvec;
 }
 
@@ -1076,8 +1074,8 @@ void mem_cgroup_lru_del_list(struct page
 	VM_BUG_ON(!memcg);
 	mz = page_cgroup_zoneinfo(memcg, page);
 	/* huge page split is done under lru_lock. so, we have no races. */
-	VM_BUG_ON(MEM_CGROUP_ZSTAT(mz, lru) < (1 << compound_order(page)));
-	MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
+	VM_BUG_ON(mz->lru_size[lru] < (1 << compound_order(page)));
+	mz->lru_size[lru] -= 1 << compound_order(page);
 }
 
 void mem_cgroup_lru_del(struct page *page)
@@ -3615,7 +3613,7 @@ static int mem_cgroup_force_empty_list(s
 	mz = mem_cgroup_zoneinfo(memcg, node, zid);
 	list = &mz->lruvec.lists[lru];
 
-	loop = MEM_CGROUP_ZSTAT(mz, lru);
+	loop = mz->lru_size[lru];
 	/* give some margin against EBUSY etc...*/
 	loop += 256;
 	busy = NULL;


* [PATCH 4/5] memcg: enum lru_list lru
  2012-01-01  7:26 [PATCH 0/5] memcg: trivial cleanups Hugh Dickins
                   ` (2 preceding siblings ...)
  2012-01-01  7:30 ` [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT Hugh Dickins
@ 2012-01-01  7:31 ` Hugh Dickins
  2012-01-01  7:38   ` KOSAKI Motohiro
                     ` (2 more replies)
  2012-01-01  7:33 ` [PATCH 5/5] memcg: remove redundant returns Hugh Dickins
                   ` (2 subsequent siblings)
  6 siblings, 3 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-01  7:31 UTC (permalink / raw)
  To: Andrew Morton
  Cc: KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

Mostly we use "enum lru_list lru": change those few "l"s to "lru"s.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/memcontrol.c |   20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

--- mmotm.orig/mm/memcontrol.c	2011-12-30 21:23:28.000000000 -0800
+++ mmotm/mm/memcontrol.c	2011-12-30 21:29:03.695349263 -0800
@@ -704,14 +704,14 @@ mem_cgroup_zone_nr_lru_pages(struct mem_
 			unsigned int lru_mask)
 {
 	struct mem_cgroup_per_zone *mz;
-	enum lru_list l;
+	enum lru_list lru;
 	unsigned long ret = 0;
 
 	mz = mem_cgroup_zoneinfo(memcg, nid, zid);
 
-	for_each_lru(l) {
-		if (BIT(l) & lru_mask)
-			ret += mz->lru_size[l];
+	for_each_lru(lru) {
+		if (BIT(lru) & lru_mask)
+			ret += mz->lru_size[lru];
 	}
 	return ret;
 }
@@ -3687,10 +3687,10 @@ move_account:
 		mem_cgroup_start_move(memcg);
 		for_each_node_state(node, N_HIGH_MEMORY) {
 			for (zid = 0; !ret && zid < MAX_NR_ZONES; zid++) {
-				enum lru_list l;
-				for_each_lru(l) {
+				enum lru_list lru;
+				for_each_lru(lru) {
 					ret = mem_cgroup_force_empty_list(memcg,
-							node, zid, l);
+							node, zid, lru);
 					if (ret)
 						break;
 				}
@@ -4784,7 +4784,7 @@ static int alloc_mem_cgroup_per_zone_inf
 {
 	struct mem_cgroup_per_node *pn;
 	struct mem_cgroup_per_zone *mz;
-	enum lru_list l;
+	enum lru_list lru;
 	int zone, tmp = node;
 	/*
 	 * This routine is called against possible nodes.
@@ -4802,8 +4802,8 @@ static int alloc_mem_cgroup_per_zone_inf
 
 	for (zone = 0; zone < MAX_NR_ZONES; zone++) {
 		mz = &pn->zoneinfo[zone];
-		for_each_lru(l)
-			INIT_LIST_HEAD(&mz->lruvec.lists[l]);
+		for_each_lru(lru)
+			INIT_LIST_HEAD(&mz->lruvec.lists[lru]);
 		mz->usage_in_excess = 0;
 		mz->on_tree = false;
 		mz->memcg = memcg;


* [PATCH 5/5] memcg: remove redundant returns
  2012-01-01  7:26 [PATCH 0/5] memcg: trivial cleanups Hugh Dickins
                   ` (3 preceding siblings ...)
  2012-01-01  7:31 ` [PATCH 4/5] memcg: enum lru_list lru Hugh Dickins
@ 2012-01-01  7:33 ` Hugh Dickins
  2012-01-01  7:38   ` KOSAKI Motohiro
                     ` (2 more replies)
  2012-01-01 17:01 ` [PATCH 0/5] memcg: trivial cleanups Kirill A. Shutemov
  2012-01-09 13:03 ` Johannes Weiner
  6 siblings, 3 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-01  7:33 UTC (permalink / raw)
  To: Andrew Morton
  Cc: KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

Remove redundant returns from ends of functions, and one blank line.

Signed-off-by: Hugh Dickins <hughd@google.com>
---
 mm/memcontrol.c |    5 -----
 1 file changed, 5 deletions(-)

--- mmotm.orig/mm/memcontrol.c	2011-12-30 21:29:03.695349263 -0800
+++ mmotm/mm/memcontrol.c	2011-12-30 21:29:37.611350065 -0800
@@ -1362,7 +1362,6 @@ void mem_cgroup_print_oom_info(struct me
 	if (!memcg || !p)
 		return;
 
-
 	rcu_read_lock();
 
 	mem_cgrp = memcg->css.cgroup;
@@ -1897,7 +1896,6 @@ out:
 	if (unlikely(need_unlock))
 		move_unlock_page_cgroup(pc, &flags);
 	rcu_read_unlock();
-	return;
 }
 EXPORT_SYMBOL(mem_cgroup_update_page_stat);
 
@@ -2691,7 +2689,6 @@ __mem_cgroup_commit_charge_lrucare(struc
 		SetPageLRU(page);
 	}
 	spin_unlock_irqrestore(&zone->lru_lock, flags);
-	return;
 }
 
 int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
@@ -2881,7 +2878,6 @@ direct_uncharge:
 		res_counter_uncharge(&memcg->memsw, nr_pages * PAGE_SIZE);
 	if (unlikely(batch->memcg != memcg))
 		memcg_oom_recover(memcg);
-	return;
 }
 
 /*
@@ -3935,7 +3931,6 @@ static void memcg_get_hierarchical_limit
 out:
 	*mem_limit = min_limit;
 	*memsw_limit = min_memsw_limit;
-	return;
 }
 
 static int mem_cgroup_reset(struct cgroup *cont, unsigned int event)


* Re: [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR
  2012-01-01  7:27 ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
@ 2012-01-01  7:33   ` KOSAKI Motohiro
  2012-01-02 11:53   ` Michal Hocko
  2012-01-05  6:30   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: KOSAKI Motohiro @ 2012-01-01  7:33 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko,
	Balbir Singh, linux-mm

(1/1/12 2:27 AM), Hugh Dickins wrote:
> Correct an #endif comment in memcontrol.h from MEM_CONT to MEM_RES_CTLR.
>
> Signed-off-by: Hugh Dickins<hughd@google.com>
> ---
>   include/linux/memcontrol.h |    2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- mmotm.orig/include/linux/memcontrol.h	2011-12-30 21:21:34.923338593 -0800
> +++ mmotm/include/linux/memcontrol.h	2011-12-30 21:21:51.939338993 -0800
> @@ -396,7 +396,7 @@ static inline void mem_cgroup_replace_pa
>   static inline void mem_cgroup_reset_owner(struct page *page)
>   {
>   }
> -#endif /* CONFIG_CGROUP_MEM_CONT */
> +#endif /* CONFIG_CGROUP_MEM_RES_CTLR */

Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>


* Re: [PATCH 2/5] memcg: replace mem and mem_cont stragglers
  2012-01-01  7:29 ` [PATCH 2/5] memcg: replace mem and mem_cont stragglers Hugh Dickins
@ 2012-01-01  7:34   ` KOSAKI Motohiro
  2012-01-02 12:25   ` Michal Hocko
  2012-01-05  6:31   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: KOSAKI Motohiro @ 2012-01-01  7:34 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko,
	Balbir Singh, linux-mm

(1/1/12 2:29 AM), Hugh Dickins wrote:
> Replace mem and mem_cont stragglers in memcontrol.c by memcg.
>
> Signed-off-by: Hugh Dickins<hughd@google.com>
> ---
>   mm/memcontrol.c |   84 +++++++++++++++++++++++-----------------------
>   1 file changed, 42 insertions(+), 42 deletions(-)
>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>


* Re: [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
  2012-01-01  7:30 ` [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT Hugh Dickins
@ 2012-01-01  7:37   ` KOSAKI Motohiro
  2012-01-02 12:59   ` Michal Hocko
  2012-01-05  6:33   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: KOSAKI Motohiro @ 2012-01-01  7:37 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko,
	Balbir Singh, linux-mm

(1/1/12 2:30 AM), Hugh Dickins wrote:
> I never understood why we need a MEM_CGROUP_ZSTAT(mz, idx) macro
> to obscure the LRU counts.  For easier searching?  So call it
> lru_size rather than bare count (lru_length sounds better, but
> would be wrong, since each huge page raises lru_size hugely).
>
> Signed-off-by: Hugh Dickins<hughd@google.com>

I don't dislike either the before or the after, so I'm staying neutral. :-)



* Re: [PATCH 4/5] memcg: enum lru_list lru
  2012-01-01  7:31 ` [PATCH 4/5] memcg: enum lru_list lru Hugh Dickins
@ 2012-01-01  7:38   ` KOSAKI Motohiro
  2012-01-02 13:01   ` Michal Hocko
  2012-01-05  6:34   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: KOSAKI Motohiro @ 2012-01-01  7:38 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko,
	Balbir Singh, linux-mm

(1/1/12 2:31 AM), Hugh Dickins wrote:
> Mostly we use "enum lru_list lru": change those few "l"s to "lru"s.
>
> Signed-off-by: Hugh Dickins<hughd@google.com>

Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>


* Re: [PATCH 5/5] memcg: remove redundant returns
  2012-01-01  7:33 ` [PATCH 5/5] memcg: remove redundant returns Hugh Dickins
@ 2012-01-01  7:38   ` KOSAKI Motohiro
  2012-01-02 13:03   ` Michal Hocko
  2012-01-05  6:35   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: KOSAKI Motohiro @ 2012-01-01  7:38 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko,
	Balbir Singh, linux-mm

(1/1/12 2:33 AM), Hugh Dickins wrote:
> Remove redundant returns from ends of functions, and one blank line.
>
> Signed-off-by: Hugh Dickins<hughd@google.com>

Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>



* Re: [PATCH 0/5] memcg: trivial cleanups
  2012-01-01  7:26 [PATCH 0/5] memcg: trivial cleanups Hugh Dickins
                   ` (4 preceding siblings ...)
  2012-01-01  7:33 ` [PATCH 5/5] memcg: remove redundant returns Hugh Dickins
@ 2012-01-01 17:01 ` Kirill A. Shutemov
  2012-01-09 13:03 ` Johannes Weiner
  6 siblings, 0 replies; 33+ messages in thread
From: Kirill A. Shutemov @ 2012-01-01 17:01 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Michal Hocko,
	Balbir Singh, KOSAKI Motohiro, linux-mm

On Sat, Dec 31, 2011 at 11:26:42PM -0800, Hugh Dickins wrote:
> Obviously I've missed the boat for per-memcg per-zone LRU locking in 3.3,
> but I've split out a shameless bunch of trivial cleanups from that work,
> and I'm hoping these might still sneak in unless they prove controversial.
> 
> Following on from my earlier mmotm/next patches, here are five
> to memcontrol.c and .h, followed by six to the rest of mm.
> 
> [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR
> [PATCH 2/5] memcg: replace mem and mem_cont stragglers
> [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
> [PATCH 4/5] memcg: enum lru_list lru
> [PATCH 5/5] memcg: remove redundant returns

Acked-by: Kirill A. Shutemov <kirill@shutemov.name>

> 
>  include/linux/memcontrol.h |    2 
>  mm/memcontrol.c            |  121 ++++++++++++++++-------------------
>  2 files changed, 58 insertions(+), 65 deletions(-)
> 
> Hugh
> 

-- 
 Kirill A. Shutemov


* Re: [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR
  2012-01-01  7:27 ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
  2012-01-01  7:33   ` KOSAKI Motohiro
@ 2012-01-02 11:53   ` Michal Hocko
  2012-01-05  6:30   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: Michal Hocko @ 2012-01-02 11:53 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat 31-12-11 23:27:59, Hugh Dickins wrote:
> Correct an #endif comment in memcontrol.h from MEM_CONT to MEM_RES_CTLR.
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>

Acked-by: Michal Hocko <mhocko@suse.cz>

Thanks

> ---
>  include/linux/memcontrol.h |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- mmotm.orig/include/linux/memcontrol.h	2011-12-30 21:21:34.923338593 -0800
> +++ mmotm/include/linux/memcontrol.h	2011-12-30 21:21:51.939338993 -0800
> @@ -396,7 +396,7 @@ static inline void mem_cgroup_replace_pa
>  static inline void mem_cgroup_reset_owner(struct page *page)
>  {
>  }
> -#endif /* CONFIG_CGROUP_MEM_CONT */
> +#endif /* CONFIG_CGROUP_MEM_RES_CTLR */
>  
>  #if !defined(CONFIG_CGROUP_MEM_RES_CTLR) || !defined(CONFIG_DEBUG_VM)
>  static inline bool

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


* Re: [PATCH 2/5] memcg: replace mem and mem_cont stragglers
  2012-01-01  7:29 ` [PATCH 2/5] memcg: replace mem and mem_cont stragglers Hugh Dickins
  2012-01-01  7:34   ` KOSAKI Motohiro
@ 2012-01-02 12:25   ` Michal Hocko
  2012-01-05  6:31   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: Michal Hocko @ 2012-01-02 12:25 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat 31-12-11 23:29:14, Hugh Dickins wrote:
> Replace mem and mem_cont stragglers in memcontrol.c by memcg.
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>

OK, we should finally be consistent in naming

Acked-by: Michal Hocko <mhocko@suse.cz>

Thanks!

> ---
>  mm/memcontrol.c |   84 +++++++++++++++++++++++-----------------------
>  1 file changed, 42 insertions(+), 42 deletions(-)
> 
> --- mmotm.orig/mm/memcontrol.c	2011-12-30 21:21:34.895338593 -0800
> +++ mmotm/mm/memcontrol.c	2011-12-30 21:22:05.679339324 -0800
> @@ -144,7 +144,7 @@ struct mem_cgroup_per_zone {
>  	unsigned long long	usage_in_excess;/* Set to the value by which */
>  						/* the soft limit is exceeded*/
>  	bool			on_tree;
> -	struct mem_cgroup	*mem;		/* Back pointer, we cannot */
> +	struct mem_cgroup	*memcg;		/* Back pointer, we cannot */
>  						/* use container_of	   */
>  };
>  /* Macro for accessing counter */
> @@ -597,9 +597,9 @@ retry:
>  	 * we will to add it back at the end of reclaim to its correct
>  	 * position in the tree.
>  	 */
> -	__mem_cgroup_remove_exceeded(mz->mem, mz, mctz);
> -	if (!res_counter_soft_limit_excess(&mz->mem->res) ||
> -		!css_tryget(&mz->mem->css))
> +	__mem_cgroup_remove_exceeded(mz->memcg, mz, mctz);
> +	if (!res_counter_soft_limit_excess(&mz->memcg->res) ||
> +		!css_tryget(&mz->memcg->css))
>  		goto retry;
>  done:
>  	return mz;
> @@ -1743,22 +1743,22 @@ static DEFINE_SPINLOCK(memcg_oom_lock);
>  static DECLARE_WAIT_QUEUE_HEAD(memcg_oom_waitq);
>  
>  struct oom_wait_info {
> -	struct mem_cgroup *mem;
> +	struct mem_cgroup *memcg;
>  	wait_queue_t	wait;
>  };
>  
>  static int memcg_oom_wake_function(wait_queue_t *wait,
>  	unsigned mode, int sync, void *arg)
>  {
> -	struct mem_cgroup *wake_memcg = (struct mem_cgroup *)arg,
> -			  *oom_wait_memcg;
> +	struct mem_cgroup *wake_memcg = (struct mem_cgroup *)arg;
> +	struct mem_cgroup *oom_wait_memcg;
>  	struct oom_wait_info *oom_wait_info;
>  
>  	oom_wait_info = container_of(wait, struct oom_wait_info, wait);
> -	oom_wait_memcg = oom_wait_info->mem;
> +	oom_wait_memcg = oom_wait_info->memcg;
>  
>  	/*
> -	 * Both of oom_wait_info->mem and wake_mem are stable under us.
> +	 * Both of oom_wait_info->memcg and wake_memcg are stable under us.
>  	 * Then we can use css_is_ancestor without taking care of RCU.
>  	 */
>  	if (!mem_cgroup_same_or_subtree(oom_wait_memcg, wake_memcg)
> @@ -1787,7 +1787,7 @@ bool mem_cgroup_handle_oom(struct mem_cg
>  	struct oom_wait_info owait;
>  	bool locked, need_to_kill;
>  
> -	owait.mem = memcg;
> +	owait.memcg = memcg;
>  	owait.wait.flags = 0;
>  	owait.wait.func = memcg_oom_wake_function;
>  	owait.wait.private = current;
> @@ -3535,7 +3535,7 @@ unsigned long mem_cgroup_soft_limit_recl
>  			break;
>  
>  		nr_scanned = 0;
> -		reclaimed = mem_cgroup_soft_reclaim(mz->mem, zone,
> +		reclaimed = mem_cgroup_soft_reclaim(mz->memcg, zone,
>  						    gfp_mask, &nr_scanned);
>  		nr_reclaimed += reclaimed;
>  		*total_scanned += nr_scanned;
> @@ -3562,13 +3562,13 @@ unsigned long mem_cgroup_soft_limit_recl
>  				next_mz =
>  				__mem_cgroup_largest_soft_limit_node(mctz);
>  				if (next_mz == mz)
> -					css_put(&next_mz->mem->css);
> +					css_put(&next_mz->memcg->css);
>  				else /* next_mz == NULL or other memcg */
>  					break;
>  			} while (1);
>  		}
> -		__mem_cgroup_remove_exceeded(mz->mem, mz, mctz);
> -		excess = res_counter_soft_limit_excess(&mz->mem->res);
> +		__mem_cgroup_remove_exceeded(mz->memcg, mz, mctz);
> +		excess = res_counter_soft_limit_excess(&mz->memcg->res);
>  		/*
>  		 * One school of thought says that we should not add
>  		 * back the node to the tree if reclaim returns 0.
> @@ -3578,9 +3578,9 @@ unsigned long mem_cgroup_soft_limit_recl
>  		 * term TODO.
>  		 */
>  		/* If excess == 0, no tree ops */
> -		__mem_cgroup_insert_exceeded(mz->mem, mz, mctz, excess);
> +		__mem_cgroup_insert_exceeded(mz->memcg, mz, mctz, excess);
>  		spin_unlock(&mctz->lock);
> -		css_put(&mz->mem->css);
> +		css_put(&mz->memcg->css);
>  		loop++;
>  		/*
>  		 * Could not reclaim anything and there are no more
> @@ -3593,7 +3593,7 @@ unsigned long mem_cgroup_soft_limit_recl
>  			break;
>  	} while (!nr_reclaimed);
>  	if (next_mz)
> -		css_put(&next_mz->mem->css);
> +		css_put(&next_mz->memcg->css);
>  	return nr_reclaimed;
>  }
>  
> @@ -4096,38 +4096,38 @@ static int mem_control_numa_stat_show(st
>  	unsigned long total_nr, file_nr, anon_nr, unevictable_nr;
>  	unsigned long node_nr;
>  	struct cgroup *cont = m->private;
> -	struct mem_cgroup *mem_cont = mem_cgroup_from_cont(cont);
> +	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
>  
> -	total_nr = mem_cgroup_nr_lru_pages(mem_cont, LRU_ALL);
> +	total_nr = mem_cgroup_nr_lru_pages(memcg, LRU_ALL);
>  	seq_printf(m, "total=%lu", total_nr);
>  	for_each_node_state(nid, N_HIGH_MEMORY) {
> -		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid, LRU_ALL);
> +		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid, LRU_ALL);
>  		seq_printf(m, " N%d=%lu", nid, node_nr);
>  	}
>  	seq_putc(m, '\n');
>  
> -	file_nr = mem_cgroup_nr_lru_pages(mem_cont, LRU_ALL_FILE);
> +	file_nr = mem_cgroup_nr_lru_pages(memcg, LRU_ALL_FILE);
>  	seq_printf(m, "file=%lu", file_nr);
>  	for_each_node_state(nid, N_HIGH_MEMORY) {
> -		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid,
> +		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
>  				LRU_ALL_FILE);
>  		seq_printf(m, " N%d=%lu", nid, node_nr);
>  	}
>  	seq_putc(m, '\n');
>  
> -	anon_nr = mem_cgroup_nr_lru_pages(mem_cont, LRU_ALL_ANON);
> +	anon_nr = mem_cgroup_nr_lru_pages(memcg, LRU_ALL_ANON);
>  	seq_printf(m, "anon=%lu", anon_nr);
>  	for_each_node_state(nid, N_HIGH_MEMORY) {
> -		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid,
> +		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
>  				LRU_ALL_ANON);
>  		seq_printf(m, " N%d=%lu", nid, node_nr);
>  	}
>  	seq_putc(m, '\n');
>  
> -	unevictable_nr = mem_cgroup_nr_lru_pages(mem_cont, BIT(LRU_UNEVICTABLE));
> +	unevictable_nr = mem_cgroup_nr_lru_pages(memcg, BIT(LRU_UNEVICTABLE));
>  	seq_printf(m, "unevictable=%lu", unevictable_nr);
>  	for_each_node_state(nid, N_HIGH_MEMORY) {
> -		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid,
> +		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
>  				BIT(LRU_UNEVICTABLE));
>  		seq_printf(m, " N%d=%lu", nid, node_nr);
>  	}
> @@ -4139,12 +4139,12 @@ static int mem_control_numa_stat_show(st
>  static int mem_control_stat_show(struct cgroup *cont, struct cftype *cft,
>  				 struct cgroup_map_cb *cb)
>  {
> -	struct mem_cgroup *mem_cont = mem_cgroup_from_cont(cont);
> +	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
>  	struct mcs_total_stat mystat;
>  	int i;
>  
>  	memset(&mystat, 0, sizeof(mystat));
> -	mem_cgroup_get_local_stat(mem_cont, &mystat);
> +	mem_cgroup_get_local_stat(memcg, &mystat);
>  
>  
>  	for (i = 0; i < NR_MCS_STAT; i++) {
> @@ -4156,14 +4156,14 @@ static int mem_control_stat_show(struct
>  	/* Hierarchical information */
>  	{
>  		unsigned long long limit, memsw_limit;
> -		memcg_get_hierarchical_limit(mem_cont, &limit, &memsw_limit);
> +		memcg_get_hierarchical_limit(memcg, &limit, &memsw_limit);
>  		cb->fill(cb, "hierarchical_memory_limit", limit);
>  		if (do_swap_account)
>  			cb->fill(cb, "hierarchical_memsw_limit", memsw_limit);
>  	}
>  
>  	memset(&mystat, 0, sizeof(mystat));
> -	mem_cgroup_get_total_stat(mem_cont, &mystat);
> +	mem_cgroup_get_total_stat(memcg, &mystat);
>  	for (i = 0; i < NR_MCS_STAT; i++) {
>  		if (i == MCS_SWAP && !do_swap_account)
>  			continue;
> @@ -4179,7 +4179,7 @@ static int mem_control_stat_show(struct
>  
>  		for_each_online_node(nid)
>  			for (zid = 0; zid < MAX_NR_ZONES; zid++) {
> -				mz = mem_cgroup_zoneinfo(mem_cont, nid, zid);
> +				mz = mem_cgroup_zoneinfo(memcg, nid, zid);
>  
>  				recent_rotated[0] +=
>  					mz->reclaim_stat.recent_rotated[0];
> @@ -4808,7 +4808,7 @@ static int alloc_mem_cgroup_per_zone_inf
>  			INIT_LIST_HEAD(&mz->lruvec.lists[l]);
>  		mz->usage_in_excess = 0;
>  		mz->on_tree = false;
> -		mz->mem = memcg;
> +		mz->memcg = memcg;
>  	}
>  	memcg->info.nodeinfo[node] = pn;
>  	return 0;
> @@ -4821,29 +4821,29 @@ static void free_mem_cgroup_per_zone_inf
>  
>  static struct mem_cgroup *mem_cgroup_alloc(void)
>  {
> -	struct mem_cgroup *mem;
> +	struct mem_cgroup *memcg;
>  	int size = sizeof(struct mem_cgroup);
>  
>  	/* Can be very big if MAX_NUMNODES is very big */
>  	if (size < PAGE_SIZE)
> -		mem = kzalloc(size, GFP_KERNEL);
> +		memcg = kzalloc(size, GFP_KERNEL);
>  	else
> -		mem = vzalloc(size);
> +		memcg = vzalloc(size);
>  
> -	if (!mem)
> +	if (!memcg)
>  		return NULL;
>  
> -	mem->stat = alloc_percpu(struct mem_cgroup_stat_cpu);
> -	if (!mem->stat)
> +	memcg->stat = alloc_percpu(struct mem_cgroup_stat_cpu);
> +	if (!memcg->stat)
>  		goto out_free;
> -	spin_lock_init(&mem->pcp_counter_lock);
> -	return mem;
> +	spin_lock_init(&memcg->pcp_counter_lock);
> +	return memcg;
>  
>  out_free:
>  	if (size < PAGE_SIZE)
> -		kfree(mem);
> +		kfree(memcg);
>  	else
> -		vfree(mem);
> +		vfree(memcg);
>  	return NULL;
>  }
>  

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


* Re: [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
  2012-01-01  7:30 ` [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT Hugh Dickins
  2012-01-01  7:37   ` KOSAKI Motohiro
@ 2012-01-02 12:59   ` Michal Hocko
  2012-01-02 19:43     ` Hugh Dickins
  2012-01-05  6:33   ` KAMEZAWA Hiroyuki
  2 siblings, 1 reply; 33+ messages in thread
From: Michal Hocko @ 2012-01-02 12:59 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat 31-12-11 23:30:38, Hugh Dickins wrote:
> I never understood why we need a MEM_CGROUP_ZSTAT(mz, idx) macro
> to obscure the LRU counts.  For easier searching?  So call it
> lru_size rather than bare count (lru_length sounds better, but
> would be wrong, since each huge page raises lru_size hugely).

lru_size is unique at the global scope at the moment, but this might
change in the future. MEM_CGROUP_ZSTAT should be unique and thus easier
to grep or cscope.
On the other hand, lru_size sounds like a better name, so I am all for
renaming, but we should make sure that we somehow get memcg into it
(either via a MEM_CGROUP_LRU_SIZE macro, or get rid of the macro and
use a memcg_lru_size field name - which is ugly and long).

> 
> Signed-off-by: Hugh Dickins <hughd@google.com>
> ---
>  mm/memcontrol.c |   14 ++++++--------
>  1 file changed, 6 insertions(+), 8 deletions(-)
> 
> --- mmotm.orig/mm/memcontrol.c	2011-12-30 21:22:05.679339324 -0800
> +++ mmotm/mm/memcontrol.c	2011-12-30 21:23:28.243341326 -0800
> @@ -135,7 +135,7 @@ struct mem_cgroup_reclaim_iter {
>   */
>  struct mem_cgroup_per_zone {
>  	struct lruvec		lruvec;
> -	unsigned long		count[NR_LRU_LISTS];
> +	unsigned long		lru_size[NR_LRU_LISTS];
>  
>  	struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1];
>  
> @@ -147,8 +147,6 @@ struct mem_cgroup_per_zone {
>  	struct mem_cgroup	*memcg;		/* Back pointer, we cannot */
>  						/* use container_of	   */
>  };
> -/* Macro for accessing counter */
> -#define MEM_CGROUP_ZSTAT(mz, idx)	((mz)->count[(idx)])
>  
>  struct mem_cgroup_per_node {
>  	struct mem_cgroup_per_zone zoneinfo[MAX_NR_ZONES];
> @@ -713,7 +711,7 @@ mem_cgroup_zone_nr_lru_pages(struct mem_
>  
>  	for_each_lru(l) {
>  		if (BIT(l) & lru_mask)
> -			ret += MEM_CGROUP_ZSTAT(mz, l);
> +			ret += mz->lru_size[l];
>  	}
>  	return ret;
>  }
> @@ -1048,7 +1046,7 @@ struct lruvec *mem_cgroup_lru_add_list(s
>  	memcg = pc->mem_cgroup;
>  	mz = page_cgroup_zoneinfo(memcg, page);
>  	/* compound_order() is stabilized through lru_lock */
> -	MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page);
> +	mz->lru_size[lru] += 1 << compound_order(page);
>  	return &mz->lruvec;
>  }
>  
> @@ -1076,8 +1074,8 @@ void mem_cgroup_lru_del_list(struct page
>  	VM_BUG_ON(!memcg);
>  	mz = page_cgroup_zoneinfo(memcg, page);
>  	/* huge page split is done under lru_lock. so, we have no races. */
> -	VM_BUG_ON(MEM_CGROUP_ZSTAT(mz, lru) < (1 << compound_order(page)));
> -	MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
> +	VM_BUG_ON(mz->lru_size[lru] < (1 << compound_order(page)));
> +	mz->lru_size[lru] -= 1 << compound_order(page);
>  }
>  
>  void mem_cgroup_lru_del(struct page *page)
> @@ -3615,7 +3613,7 @@ static int mem_cgroup_force_empty_list(s
>  	mz = mem_cgroup_zoneinfo(memcg, node, zid);
>  	list = &mz->lruvec.lists[lru];
>  
> -	loop = MEM_CGROUP_ZSTAT(mz, lru);
> +	loop = mz->lru_size[lru];
>  	/* give some margin against EBUSY etc...*/
>  	loop += 256;
>  	busy = NULL;

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


* Re: [PATCH 4/5] memcg: enum lru_list lru
  2012-01-01  7:31 ` [PATCH 4/5] memcg: enum lru_list lru Hugh Dickins
  2012-01-01  7:38   ` KOSAKI Motohiro
@ 2012-01-02 13:01   ` Michal Hocko
  2012-01-05  6:34   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: Michal Hocko @ 2012-01-02 13:01 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat 31-12-11 23:31:53, Hugh Dickins wrote:
> Mostly we use "enum lru_list lru": change those few "l"s to "lru"s.

OK, I can see that you are doing the same cleanup in the generic mm
code as well.

> Signed-off-by: Hugh Dickins <hughd@google.com>

Acked-by: Michal Hocko <mhocko@suse.cz>

> ---
>  mm/memcontrol.c |   20 ++++++++++----------
>  1 file changed, 10 insertions(+), 10 deletions(-)
> 
> --- mmotm.orig/mm/memcontrol.c	2011-12-30 21:23:28.000000000 -0800
> +++ mmotm/mm/memcontrol.c	2011-12-30 21:29:03.695349263 -0800
> @@ -704,14 +704,14 @@ mem_cgroup_zone_nr_lru_pages(struct mem_
>  			unsigned int lru_mask)
>  {
>  	struct mem_cgroup_per_zone *mz;
> -	enum lru_list l;
> +	enum lru_list lru;
>  	unsigned long ret = 0;
>  
>  	mz = mem_cgroup_zoneinfo(memcg, nid, zid);
>  
> -	for_each_lru(l) {
> -		if (BIT(l) & lru_mask)
> -			ret += mz->lru_size[l];
> +	for_each_lru(lru) {
> +		if (BIT(lru) & lru_mask)
> +			ret += mz->lru_size[lru];
>  	}
>  	return ret;
>  }
> @@ -3687,10 +3687,10 @@ move_account:
>  		mem_cgroup_start_move(memcg);
>  		for_each_node_state(node, N_HIGH_MEMORY) {
>  			for (zid = 0; !ret && zid < MAX_NR_ZONES; zid++) {
> -				enum lru_list l;
> -				for_each_lru(l) {
> +				enum lru_list lru;
> +				for_each_lru(lru) {
>  					ret = mem_cgroup_force_empty_list(memcg,
> -							node, zid, l);
> +							node, zid, lru);
>  					if (ret)
>  						break;
>  				}
> @@ -4784,7 +4784,7 @@ static int alloc_mem_cgroup_per_zone_inf
>  {
>  	struct mem_cgroup_per_node *pn;
>  	struct mem_cgroup_per_zone *mz;
> -	enum lru_list l;
> +	enum lru_list lru;
>  	int zone, tmp = node;
>  	/*
>  	 * This routine is called against possible nodes.
> @@ -4802,8 +4802,8 @@ static int alloc_mem_cgroup_per_zone_inf
>  
>  	for (zone = 0; zone < MAX_NR_ZONES; zone++) {
>  		mz = &pn->zoneinfo[zone];
> -		for_each_lru(l)
> -			INIT_LIST_HEAD(&mz->lruvec.lists[l]);
> +		for_each_lru(lru)
> +			INIT_LIST_HEAD(&mz->lruvec.lists[lru]);
>  		mz->usage_in_excess = 0;
>  		mz->on_tree = false;
>  		mz->memcg = memcg;

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


* Re: [PATCH 5/5] memcg: remove redundant returns
  2012-01-01  7:33 ` [PATCH 5/5] memcg: remove redundant returns Hugh Dickins
  2012-01-01  7:38   ` KOSAKI Motohiro
@ 2012-01-02 13:03   ` Michal Hocko
  2012-01-05  6:35   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: Michal Hocko @ 2012-01-02 13:03 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat 31-12-11 23:33:24, Hugh Dickins wrote:
> Remove redundant returns from ends of functions, and one blank line.
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>

Acked-by: Michal Hocko <mhocko@suse.cz>

> ---
>  mm/memcontrol.c |    5 -----
>  1 file changed, 5 deletions(-)
> 
> --- mmotm.orig/mm/memcontrol.c	2011-12-30 21:29:03.695349263 -0800
> +++ mmotm/mm/memcontrol.c	2011-12-30 21:29:37.611350065 -0800
> @@ -1362,7 +1362,6 @@ void mem_cgroup_print_oom_info(struct me
>  	if (!memcg || !p)
>  		return;
>  
> -
>  	rcu_read_lock();
>  
>  	mem_cgrp = memcg->css.cgroup;
> @@ -1897,7 +1896,6 @@ out:
>  	if (unlikely(need_unlock))
>  		move_unlock_page_cgroup(pc, &flags);
>  	rcu_read_unlock();
> -	return;
>  }
>  EXPORT_SYMBOL(mem_cgroup_update_page_stat);
>  
> @@ -2691,7 +2689,6 @@ __mem_cgroup_commit_charge_lrucare(struc
>  		SetPageLRU(page);
>  	}
>  	spin_unlock_irqrestore(&zone->lru_lock, flags);
> -	return;
>  }
>  
>  int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
> @@ -2881,7 +2878,6 @@ direct_uncharge:
>  		res_counter_uncharge(&memcg->memsw, nr_pages * PAGE_SIZE);
>  	if (unlikely(batch->memcg != memcg))
>  		memcg_oom_recover(memcg);
> -	return;
>  }
>  
>  /*
> @@ -3935,7 +3931,6 @@ static void memcg_get_hierarchical_limit
>  out:
>  	*mem_limit = min_limit;
>  	*memsw_limit = min_memsw_limit;
> -	return;
>  }
>  
>  static int mem_cgroup_reset(struct cgroup *cont, unsigned int event)

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


* Re: [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
  2012-01-02 12:59   ` Michal Hocko
@ 2012-01-02 19:43     ` Hugh Dickins
  2012-01-03 11:05       ` Michal Hocko
  0 siblings, 1 reply; 33+ messages in thread
From: Hugh Dickins @ 2012-01-02 19:43 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Mon, 2 Jan 2012, Michal Hocko wrote:
> On Sat 31-12-11 23:30:38, Hugh Dickins wrote:
> > I never understood why we need a MEM_CGROUP_ZSTAT(mz, idx) macro
> > to obscure the LRU counts.  For easier searching?  So call it
> > lru_size rather than bare count (lru_length sounds better, but
> > would be wrong, since each huge page raises lru_size hugely).
> 
> lru_size is unique at the global scope at the moment but this might
> change in the future. MEM_CGROUP_ZSTAT should be unique and so easier
> to grep or cscope. 
> On the other hand lru_size sounds like a better name so I am all for
> renaming but we should make sure that we somehow get memcg into it
> (either to macro MEM_CGROUP_LRU_SIZE or get rid of macro and have
> memcg_lru_size field name - which is ugly long).

I do disagree.  You're asking to introduce artificial differences,
whereas generally we're trying to minimize the differences between
global and memcg.

I'm happy with the way mem_cgroup_zone_lruvec(), for example, returns
a pointer to the relevant structure, whether it's global or per-memcg,
and we then work with the contents of that structure, whichever it is:
lruvec in each case, not global_lruvec in one case and memcg_lruvec
in the other.

And certainly not GLOBAL_ZLRUVEC or MEM_CGROUP_ZLRUVEC!

Hugh


* Re: [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
  2012-01-02 19:43     ` Hugh Dickins
@ 2012-01-03 11:05       ` Michal Hocko
  0 siblings, 0 replies; 33+ messages in thread
From: Michal Hocko @ 2012-01-03 11:05 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Johannes Weiner, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Mon 02-01-12 11:43:27, Hugh Dickins wrote:
> On Mon, 2 Jan 2012, Michal Hocko wrote:
> > On Sat 31-12-11 23:30:38, Hugh Dickins wrote:
> > > I never understood why we need a MEM_CGROUP_ZSTAT(mz, idx) macro
> > > to obscure the LRU counts.  For easier searching?  So call it
> > > lru_size rather than bare count (lru_length sounds better, but
> > > would be wrong, since each huge page raises lru_size hugely).
> > 
> > lru_size is unique at the global scope at the moment but this might
> > change in the future. MEM_CGROUP_ZSTAT should be unique and so easier
> > to grep or cscope. 
> > On the other hand lru_size sounds like a better name so I am all for
> > renaming but we should make sure that we somehow get memcg into it
> > (either to macro MEM_CGROUP_LRU_SIZE or get rid of macro and have
> > memcg_lru_size field name - which is ugly long).
> 
> I do disagree.  You're asking to introduce artificial differences,
> whereas generally we're trying to minimize the differences between
> global and memcg.

I am not asking to _introduce_ a new artificial difference; I just
wanted to make memcg lru accounting obvious.
Currently, if I want to check that we account correctly, I cscope/grep
__mod_zone_page_state at the global level, and we have MEM_CGROUP_ZSTAT
for the memcg. If you remove the macro then it would be a little bit
harder (it won't actually, because lru_size is unique at the moment;
it is just not that obvious).

> I'm happy with the way mem_cgroup_zone_lruvec(), for example, returns
> a pointer to the relevant structure, whether it's global or per-memcg,
> and we then work with the contents of that structure, whichever it is:
> lruvec in each case, not global_lruvec in one case and memcg_lruvec
> in the other.

Yes, I like it as well, but we do not account the same way for memcg
and global.

Anyway, I do not have any strong opinion about the macro. Nevertheless,
I definitely like the count->lru_size renaming.

Thanks
-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


* Re: [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR
  2012-01-01  7:27 ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
  2012-01-01  7:33   ` KOSAKI Motohiro
  2012-01-02 11:53   ` Michal Hocko
@ 2012-01-05  6:30   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-01-05  6:30 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat, 31 Dec 2011 23:27:59 -0800 (PST)
Hugh Dickins <hughd@google.com> wrote:

> Correct an #endif comment in memcontrol.h from MEM_CONT to MEM_RES_CTLR.
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>


* Re: [PATCH 2/5] memcg: replace mem and mem_cont stragglers
  2012-01-01  7:29 ` [PATCH 2/5] memcg: replace mem and mem_cont stragglers Hugh Dickins
  2012-01-01  7:34   ` KOSAKI Motohiro
  2012-01-02 12:25   ` Michal Hocko
@ 2012-01-05  6:31   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-01-05  6:31 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat, 31 Dec 2011 23:29:14 -0800 (PST)
Hugh Dickins <hughd@google.com> wrote:

> Replace mem and mem_cont stragglers in memcontrol.c by memcg.
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>

Thank you.

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>


* Re: [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
  2012-01-01  7:30 ` [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT Hugh Dickins
  2012-01-01  7:37   ` KOSAKI Motohiro
  2012-01-02 12:59   ` Michal Hocko
@ 2012-01-05  6:33   ` KAMEZAWA Hiroyuki
  2012-01-05 20:14     ` Hugh Dickins
  2 siblings, 1 reply; 33+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-01-05  6:33 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat, 31 Dec 2011 23:30:38 -0800 (PST)
Hugh Dickins <hughd@google.com> wrote:

> I never understood why we need a MEM_CGROUP_ZSTAT(mz, idx) macro
> to obscure the LRU counts.  For easier searching?  So call it
> lru_size rather than bare count (lru_length sounds better, but
> would be wrong, since each huge page raises lru_size hugely).
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>


Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

BTW, can this counter be moved to lruvec finally ?


* Re: [PATCH 4/5] memcg: enum lru_list lru
  2012-01-01  7:31 ` [PATCH 4/5] memcg: enum lru_list lru Hugh Dickins
  2012-01-01  7:38   ` KOSAKI Motohiro
  2012-01-02 13:01   ` Michal Hocko
@ 2012-01-05  6:34   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-01-05  6:34 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat, 31 Dec 2011 23:31:53 -0800 (PST)
Hugh Dickins <hughd@google.com> wrote:

> Mostly we use "enum lru_list lru": change those few "l"s to "lru"s.
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>


* Re: [PATCH 5/5] memcg: remove redundant returns
  2012-01-01  7:33 ` [PATCH 5/5] memcg: remove redundant returns Hugh Dickins
  2012-01-01  7:38   ` KOSAKI Motohiro
  2012-01-02 13:03   ` Michal Hocko
@ 2012-01-05  6:35   ` KAMEZAWA Hiroyuki
  2 siblings, 0 replies; 33+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-01-05  6:35 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat, 31 Dec 2011 23:33:24 -0800 (PST)
Hugh Dickins <hughd@google.com> wrote:

> Remove redundant returns from ends of functions, and one blank line.
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>


* Re: [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
  2012-01-05  6:33   ` KAMEZAWA Hiroyuki
@ 2012-01-05 20:14     ` Hugh Dickins
  0 siblings, 0 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-05 20:14 UTC (permalink / raw)
  To: KAMEZAWA Hiroyuki
  Cc: Andrew Morton, Johannes Weiner, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Thu, 5 Jan 2012, KAMEZAWA Hiroyuki wrote:
> On Sat, 31 Dec 2011 23:30:38 -0800 (PST)
> Hugh Dickins <hughd@google.com> wrote:
> 
> > I never understood why we need a MEM_CGROUP_ZSTAT(mz, idx) macro
> > to obscure the LRU counts.  For easier searching?  So call it
> > lru_size rather than bare count (lru_length sounds better, but
> > would be wrong, since each huge page raises lru_size hugely).
> > 
> > Signed-off-by: Hugh Dickins <hughd@google.com>
> 
> 
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

Thanks (to you and the other guys) for all the acks.

> 
> BTW, can this counter be moved to lruvec finally ?

Well.  It could be, but I haven't moved it (even in coming patches),
and it's not yet quite clear to me whether that's right or not.

Because there's a struct lruvec in struct zone, which we use when
!CONFIG_CGROUP_MEM_RES_CTLR or when mem_cgroup_disabled(); but the
corresponding lru_sizes would be a waste in those cases, because
they then just duplicate vm_stat[NR_INACTIVE_ANON..NR_UNEVICTABLE].

And we want to keep vm_stat[NR_INACTIVE_ANON..NR_UNEVICTABLE],
because we do want those global LRU sizes even in the memcg case.

Of course, we could put unused lru_size fields in anyway, it would
not waste much space.

But I'd prefer to hold off for now: I imagine that we're moving
towards a future in which even !CONFIG_CGROUP_MEM_RES_CTLR will have a
root_mem_cgroup, and it will become clearer what to place where then.

We use the lruvec heavily in the per-memcg per-zone locking patches,
as something low-level code can operate on without needing to know
if it's memcg or global; but have not actually needed to move the
lru_sizes into the structure (perhaps it's a hack: there is one place
I use container_of to go from the lruvec pointer to the lru_sizes).

(I might want to move the reclaim_stat into the lruvec, don't know
yet: I only just noticed that there are places where I'm not locking
the reclaim_stat properly: it's not such a big deal that it was ever
obvious, but I ought to get it right.)

Hugh

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
Don't email: email@kvack.org


* Re: [PATCH 0/5] memcg: trivial cleanups
  2012-01-01  7:26 [PATCH 0/5] memcg: trivial cleanups Hugh Dickins
                   ` (5 preceding siblings ...)
  2012-01-01 17:01 ` [PATCH 0/5] memcg: trivial cleanups Kirill A. Shutemov
@ 2012-01-09 13:03 ` Johannes Weiner
  2012-01-15  0:07   ` Hugh Dickins
  6 siblings, 1 reply; 33+ messages in thread
From: Johannes Weiner @ 2012-01-09 13:03 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, KAMEZAWA Hiroyuki, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, linux-mm

On Sat, Dec 31, 2011 at 11:26:42PM -0800, Hugh Dickins wrote:
> Obviously I've missed the boat for per-memcg per-zone LRU locking in 3.3,
> but I've split out a shameless bunch of trivial cleanups from that work,
> and hoping these might still sneak in unless they're controversial.
> 
> Following on from my earlier mmotm/next patches, here's five
> to memcontrol.c and .h, followed by six to the rest of mm.
> 
> [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR
> [PATCH 2/5] memcg: replace mem and mem_cont stragglers
> [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
> [PATCH 4/5] memcg: enum lru_list lru
> [PATCH 5/5] memcg: remove redundant returns

No objections from my side wrt putting them into 3.3.

Thanks!


* Re: [PATCH 0/5] memcg: trivial cleanups
  2012-01-09 13:03 ` Johannes Weiner
@ 2012-01-15  0:07   ` Hugh Dickins
  2012-01-15  0:09     ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
                       ` (5 more replies)
  0 siblings, 6 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-15  0:07 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, KAMEZAWA Hiroyuki, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, Kirill A. Shutemov, linux-mm

On Mon, 9 Jan 2012, Johannes Weiner wrote:
> On Sat, Dec 31, 2011 at 11:26:42PM -0800, Hugh Dickins wrote:
> > Obviously I've missed the boat for per-memcg per-zone LRU locking in 3.3,
> > but I've split out a shameless bunch of trivial cleanups from that work,
> > and hoping these might still sneak in unless they're controversial.
> > 
> > Following on from my earlier mmotm/next patches, here's five
> > to memcontrol.c and .h, followed by six to the rest of mm.
> > 
> > [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR
> > [PATCH 2/5] memcg: replace mem and mem_cont stragglers
> > [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
> > [PATCH 4/5] memcg: enum lru_list lru
> > [PATCH 5/5] memcg: remove redundant returns
> 
> No objections from my side wrt putting them into 3.3.
> 
> Thanks!

I was hoping that these five memcg trivia (and my two SHM_UNLOCK fixes)
were on their way into 3.3, but they've not yet shown up in mm-commits.

I'll resend them all again now: I've not rediffed, since they apply
(if at different offsets) to Linus's current git tree; but I have added
in the (somewhat disproportionate for trivia!) Acked-bys and Reviewed-bys.

Michal was not happy with 3/5: I've summarized below the --- on that one,
do with it as you wish - I think neither Michal nor I shall slam the door
and burst into tears if you decide against one of us.

Thanks,
Hugh


* [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR
  2012-01-15  0:07   ` Hugh Dickins
@ 2012-01-15  0:09     ` Hugh Dickins
  2012-01-15  0:10     ` [PATCH 2/5] memcg: replace mem and mem_cont stragglers Hugh Dickins
                       ` (4 subsequent siblings)
  5 siblings, 0 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-15  0:09 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, KAMEZAWA Hiroyuki, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, Kirill A. Shutemov, linux-mm

Correct an #endif comment in memcontrol.h from MEM_CONT to MEM_RES_CTLR.

Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 include/linux/memcontrol.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- mmotm.orig/include/linux/memcontrol.h	2011-12-30 21:21:34.923338593 -0800
+++ mmotm/include/linux/memcontrol.h	2011-12-30 21:21:51.939338993 -0800
@@ -396,7 +396,7 @@ static inline void mem_cgroup_replace_pa
 static inline void mem_cgroup_reset_owner(struct page *page)
 {
 }
-#endif /* CONFIG_CGROUP_MEM_CONT */
+#endif /* CONFIG_CGROUP_MEM_RES_CTLR */
 
 #if !defined(CONFIG_CGROUP_MEM_RES_CTLR) || !defined(CONFIG_DEBUG_VM)
 static inline bool


* [PATCH 2/5] memcg: replace mem and mem_cont stragglers
  2012-01-15  0:07   ` Hugh Dickins
  2012-01-15  0:09     ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
@ 2012-01-15  0:10     ` Hugh Dickins
  2012-01-15  0:12     ` [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT Hugh Dickins
                       ` (3 subsequent siblings)
  5 siblings, 0 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-15  0:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, KAMEZAWA Hiroyuki, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, Kirill A. Shutemov, linux-mm

Replace mem and mem_cont stragglers in memcontrol.c by memcg.

Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 mm/memcontrol.c |   84 +++++++++++++++++++++++-----------------------
 1 file changed, 42 insertions(+), 42 deletions(-)

--- mmotm.orig/mm/memcontrol.c	2011-12-30 21:21:34.895338593 -0800
+++ mmotm/mm/memcontrol.c	2011-12-30 21:22:05.679339324 -0800
@@ -144,7 +144,7 @@ struct mem_cgroup_per_zone {
 	unsigned long long	usage_in_excess;/* Set to the value by which */
 						/* the soft limit is exceeded*/
 	bool			on_tree;
-	struct mem_cgroup	*mem;		/* Back pointer, we cannot */
+	struct mem_cgroup	*memcg;		/* Back pointer, we cannot */
 						/* use container_of	   */
 };
 /* Macro for accessing counter */
@@ -597,9 +597,9 @@ retry:
 	 * we will to add it back at the end of reclaim to its correct
 	 * position in the tree.
 	 */
-	__mem_cgroup_remove_exceeded(mz->mem, mz, mctz);
-	if (!res_counter_soft_limit_excess(&mz->mem->res) ||
-		!css_tryget(&mz->mem->css))
+	__mem_cgroup_remove_exceeded(mz->memcg, mz, mctz);
+	if (!res_counter_soft_limit_excess(&mz->memcg->res) ||
+		!css_tryget(&mz->memcg->css))
 		goto retry;
 done:
 	return mz;
@@ -1743,22 +1743,22 @@ static DEFINE_SPINLOCK(memcg_oom_lock);
 static DECLARE_WAIT_QUEUE_HEAD(memcg_oom_waitq);
 
 struct oom_wait_info {
-	struct mem_cgroup *mem;
+	struct mem_cgroup *memcg;
 	wait_queue_t	wait;
 };
 
 static int memcg_oom_wake_function(wait_queue_t *wait,
 	unsigned mode, int sync, void *arg)
 {
-	struct mem_cgroup *wake_memcg = (struct mem_cgroup *)arg,
-			  *oom_wait_memcg;
+	struct mem_cgroup *wake_memcg = (struct mem_cgroup *)arg;
+	struct mem_cgroup *oom_wait_memcg;
 	struct oom_wait_info *oom_wait_info;
 
 	oom_wait_info = container_of(wait, struct oom_wait_info, wait);
-	oom_wait_memcg = oom_wait_info->mem;
+	oom_wait_memcg = oom_wait_info->memcg;
 
 	/*
-	 * Both of oom_wait_info->mem and wake_mem are stable under us.
+	 * Both of oom_wait_info->memcg and wake_memcg are stable under us.
 	 * Then we can use css_is_ancestor without taking care of RCU.
 	 */
 	if (!mem_cgroup_same_or_subtree(oom_wait_memcg, wake_memcg)
@@ -1787,7 +1787,7 @@ bool mem_cgroup_handle_oom(struct mem_cg
 	struct oom_wait_info owait;
 	bool locked, need_to_kill;
 
-	owait.mem = memcg;
+	owait.memcg = memcg;
 	owait.wait.flags = 0;
 	owait.wait.func = memcg_oom_wake_function;
 	owait.wait.private = current;
@@ -3535,7 +3535,7 @@ unsigned long mem_cgroup_soft_limit_recl
 			break;
 
 		nr_scanned = 0;
-		reclaimed = mem_cgroup_soft_reclaim(mz->mem, zone,
+		reclaimed = mem_cgroup_soft_reclaim(mz->memcg, zone,
 						    gfp_mask, &nr_scanned);
 		nr_reclaimed += reclaimed;
 		*total_scanned += nr_scanned;
@@ -3562,13 +3562,13 @@ unsigned long mem_cgroup_soft_limit_recl
 				next_mz =
 				__mem_cgroup_largest_soft_limit_node(mctz);
 				if (next_mz == mz)
-					css_put(&next_mz->mem->css);
+					css_put(&next_mz->memcg->css);
 				else /* next_mz == NULL or other memcg */
 					break;
 			} while (1);
 		}
-		__mem_cgroup_remove_exceeded(mz->mem, mz, mctz);
-		excess = res_counter_soft_limit_excess(&mz->mem->res);
+		__mem_cgroup_remove_exceeded(mz->memcg, mz, mctz);
+		excess = res_counter_soft_limit_excess(&mz->memcg->res);
 		/*
 		 * One school of thought says that we should not add
 		 * back the node to the tree if reclaim returns 0.
@@ -3578,9 +3578,9 @@ unsigned long mem_cgroup_soft_limit_recl
 		 * term TODO.
 		 */
 		/* If excess == 0, no tree ops */
-		__mem_cgroup_insert_exceeded(mz->mem, mz, mctz, excess);
+		__mem_cgroup_insert_exceeded(mz->memcg, mz, mctz, excess);
 		spin_unlock(&mctz->lock);
-		css_put(&mz->mem->css);
+		css_put(&mz->memcg->css);
 		loop++;
 		/*
 		 * Could not reclaim anything and there are no more
@@ -3593,7 +3593,7 @@ unsigned long mem_cgroup_soft_limit_recl
 			break;
 	} while (!nr_reclaimed);
 	if (next_mz)
-		css_put(&next_mz->mem->css);
+		css_put(&next_mz->memcg->css);
 	return nr_reclaimed;
 }
 
@@ -4096,38 +4096,38 @@ static int mem_control_numa_stat_show(st
 	unsigned long total_nr, file_nr, anon_nr, unevictable_nr;
 	unsigned long node_nr;
 	struct cgroup *cont = m->private;
-	struct mem_cgroup *mem_cont = mem_cgroup_from_cont(cont);
+	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
 
-	total_nr = mem_cgroup_nr_lru_pages(mem_cont, LRU_ALL);
+	total_nr = mem_cgroup_nr_lru_pages(memcg, LRU_ALL);
 	seq_printf(m, "total=%lu", total_nr);
 	for_each_node_state(nid, N_HIGH_MEMORY) {
-		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid, LRU_ALL);
+		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid, LRU_ALL);
 		seq_printf(m, " N%d=%lu", nid, node_nr);
 	}
 	seq_putc(m, '\n');
 
-	file_nr = mem_cgroup_nr_lru_pages(mem_cont, LRU_ALL_FILE);
+	file_nr = mem_cgroup_nr_lru_pages(memcg, LRU_ALL_FILE);
 	seq_printf(m, "file=%lu", file_nr);
 	for_each_node_state(nid, N_HIGH_MEMORY) {
-		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid,
+		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
 				LRU_ALL_FILE);
 		seq_printf(m, " N%d=%lu", nid, node_nr);
 	}
 	seq_putc(m, '\n');
 
-	anon_nr = mem_cgroup_nr_lru_pages(mem_cont, LRU_ALL_ANON);
+	anon_nr = mem_cgroup_nr_lru_pages(memcg, LRU_ALL_ANON);
 	seq_printf(m, "anon=%lu", anon_nr);
 	for_each_node_state(nid, N_HIGH_MEMORY) {
-		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid,
+		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
 				LRU_ALL_ANON);
 		seq_printf(m, " N%d=%lu", nid, node_nr);
 	}
 	seq_putc(m, '\n');
 
-	unevictable_nr = mem_cgroup_nr_lru_pages(mem_cont, BIT(LRU_UNEVICTABLE));
+	unevictable_nr = mem_cgroup_nr_lru_pages(memcg, BIT(LRU_UNEVICTABLE));
 	seq_printf(m, "unevictable=%lu", unevictable_nr);
 	for_each_node_state(nid, N_HIGH_MEMORY) {
-		node_nr = mem_cgroup_node_nr_lru_pages(mem_cont, nid,
+		node_nr = mem_cgroup_node_nr_lru_pages(memcg, nid,
 				BIT(LRU_UNEVICTABLE));
 		seq_printf(m, " N%d=%lu", nid, node_nr);
 	}
@@ -4139,12 +4139,12 @@ static int mem_control_numa_stat_show(st
 static int mem_control_stat_show(struct cgroup *cont, struct cftype *cft,
 				 struct cgroup_map_cb *cb)
 {
-	struct mem_cgroup *mem_cont = mem_cgroup_from_cont(cont);
+	struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
 	struct mcs_total_stat mystat;
 	int i;
 
 	memset(&mystat, 0, sizeof(mystat));
-	mem_cgroup_get_local_stat(mem_cont, &mystat);
+	mem_cgroup_get_local_stat(memcg, &mystat);
 
 
 	for (i = 0; i < NR_MCS_STAT; i++) {
@@ -4156,14 +4156,14 @@ static int mem_control_stat_show(struct
 	/* Hierarchical information */
 	{
 		unsigned long long limit, memsw_limit;
-		memcg_get_hierarchical_limit(mem_cont, &limit, &memsw_limit);
+		memcg_get_hierarchical_limit(memcg, &limit, &memsw_limit);
 		cb->fill(cb, "hierarchical_memory_limit", limit);
 		if (do_swap_account)
 			cb->fill(cb, "hierarchical_memsw_limit", memsw_limit);
 	}
 
 	memset(&mystat, 0, sizeof(mystat));
-	mem_cgroup_get_total_stat(mem_cont, &mystat);
+	mem_cgroup_get_total_stat(memcg, &mystat);
 	for (i = 0; i < NR_MCS_STAT; i++) {
 		if (i == MCS_SWAP && !do_swap_account)
 			continue;
@@ -4179,7 +4179,7 @@ static int mem_control_stat_show(struct
 
 		for_each_online_node(nid)
 			for (zid = 0; zid < MAX_NR_ZONES; zid++) {
-				mz = mem_cgroup_zoneinfo(mem_cont, nid, zid);
+				mz = mem_cgroup_zoneinfo(memcg, nid, zid);
 
 				recent_rotated[0] +=
 					mz->reclaim_stat.recent_rotated[0];
@@ -4808,7 +4808,7 @@ static int alloc_mem_cgroup_per_zone_inf
 			INIT_LIST_HEAD(&mz->lruvec.lists[l]);
 		mz->usage_in_excess = 0;
 		mz->on_tree = false;
-		mz->mem = memcg;
+		mz->memcg = memcg;
 	}
 	memcg->info.nodeinfo[node] = pn;
 	return 0;
@@ -4821,29 +4821,29 @@ static void free_mem_cgroup_per_zone_inf
 
 static struct mem_cgroup *mem_cgroup_alloc(void)
 {
-	struct mem_cgroup *mem;
+	struct mem_cgroup *memcg;
 	int size = sizeof(struct mem_cgroup);
 
 	/* Can be very big if MAX_NUMNODES is very big */
 	if (size < PAGE_SIZE)
-		mem = kzalloc(size, GFP_KERNEL);
+		memcg = kzalloc(size, GFP_KERNEL);
 	else
-		mem = vzalloc(size);
+		memcg = vzalloc(size);
 
-	if (!mem)
+	if (!memcg)
 		return NULL;
 
-	mem->stat = alloc_percpu(struct mem_cgroup_stat_cpu);
-	if (!mem->stat)
+	memcg->stat = alloc_percpu(struct mem_cgroup_stat_cpu);
+	if (!memcg->stat)
 		goto out_free;
-	spin_lock_init(&mem->pcp_counter_lock);
-	return mem;
+	spin_lock_init(&memcg->pcp_counter_lock);
+	return memcg;
 
 out_free:
 	if (size < PAGE_SIZE)
-		kfree(mem);
+		kfree(memcg);
 	else
-		vfree(mem);
+		vfree(memcg);
 	return NULL;
 }
 


* [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
  2012-01-15  0:07   ` Hugh Dickins
  2012-01-15  0:09     ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
  2012-01-15  0:10     ` [PATCH 2/5] memcg: replace mem and mem_cont stragglers Hugh Dickins
@ 2012-01-15  0:12     ` Hugh Dickins
  2012-01-15  0:13     ` [PATCH 4/5] memcg: enum lru_list lru Hugh Dickins
                       ` (2 subsequent siblings)
  5 siblings, 0 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-15  0:12 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, KAMEZAWA Hiroyuki, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, Kirill A. Shutemov, linux-mm

I never understood why we need a MEM_CGROUP_ZSTAT(mz, idx) macro
to obscure the LRU counts.  For easier searching?  So call it
lru_size rather than bare count (lru_length sounds better, but
would be wrong, since each huge page raises lru_size hugely).

Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
KOSAKI-san felt neutral on this one.  Hannes said no objections.
Michal liked the name change from count to lru_size, but wanted a
MEM_CGROUP_LRU_SIZE(mz, lru) macro to access it.  Whilst I prefer that
to MEM_CGROUP_ZSTAT(), I cannot see the point of an accessor for this.
So here's my original patch, for you to ignore or put in as you prefer,
then Michal can send something on top if he still feels strongly.

 mm/memcontrol.c |   14 ++++++--------
 1 file changed, 6 insertions(+), 8 deletions(-)

--- mmotm.orig/mm/memcontrol.c	2011-12-30 21:22:05.679339324 -0800
+++ mmotm/mm/memcontrol.c	2011-12-30 21:23:28.243341326 -0800
@@ -135,7 +135,7 @@ struct mem_cgroup_reclaim_iter {
  */
 struct mem_cgroup_per_zone {
 	struct lruvec		lruvec;
-	unsigned long		count[NR_LRU_LISTS];
+	unsigned long		lru_size[NR_LRU_LISTS];
 
 	struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1];
 
@@ -147,8 +147,6 @@ struct mem_cgroup_per_zone {
 	struct mem_cgroup	*memcg;		/* Back pointer, we cannot */
 						/* use container_of	   */
 };
-/* Macro for accessing counter */
-#define MEM_CGROUP_ZSTAT(mz, idx)	((mz)->count[(idx)])
 
 struct mem_cgroup_per_node {
 	struct mem_cgroup_per_zone zoneinfo[MAX_NR_ZONES];
@@ -713,7 +711,7 @@ mem_cgroup_zone_nr_lru_pages(struct mem_
 
 	for_each_lru(l) {
 		if (BIT(l) & lru_mask)
-			ret += MEM_CGROUP_ZSTAT(mz, l);
+			ret += mz->lru_size[l];
 	}
 	return ret;
 }
@@ -1048,7 +1046,7 @@ struct lruvec *mem_cgroup_lru_add_list(s
 	memcg = pc->mem_cgroup;
 	mz = page_cgroup_zoneinfo(memcg, page);
 	/* compound_order() is stabilized through lru_lock */
-	MEM_CGROUP_ZSTAT(mz, lru) += 1 << compound_order(page);
+	mz->lru_size[lru] += 1 << compound_order(page);
 	return &mz->lruvec;
 }
 
@@ -1076,8 +1074,8 @@ void mem_cgroup_lru_del_list(struct page
 	VM_BUG_ON(!memcg);
 	mz = page_cgroup_zoneinfo(memcg, page);
 	/* huge page split is done under lru_lock. so, we have no races. */
-	VM_BUG_ON(MEM_CGROUP_ZSTAT(mz, lru) < (1 << compound_order(page)));
-	MEM_CGROUP_ZSTAT(mz, lru) -= 1 << compound_order(page);
+	VM_BUG_ON(mz->lru_size[lru] < (1 << compound_order(page)));
+	mz->lru_size[lru] -= 1 << compound_order(page);
 }
 
 void mem_cgroup_lru_del(struct page *page)
@@ -3615,7 +3613,7 @@ static int mem_cgroup_force_empty_list(s
 	mz = mem_cgroup_zoneinfo(memcg, node, zid);
 	list = &mz->lruvec.lists[lru];
 
-	loop = MEM_CGROUP_ZSTAT(mz, lru);
+	loop = mz->lru_size[lru];
 	/* give some margin against EBUSY etc...*/
 	loop += 256;
 	busy = NULL;


* [PATCH 4/5] memcg: enum lru_list lru
  2012-01-15  0:07   ` Hugh Dickins
                       ` (2 preceding siblings ...)
  2012-01-15  0:12     ` [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT Hugh Dickins
@ 2012-01-15  0:13     ` Hugh Dickins
  2012-01-15  0:14     ` [PATCH 5/5] memcg: remove redundant returns Hugh Dickins
  2012-01-16  9:47     ` [PATCH 0/5] memcg: trivial cleanups Michal Hocko
  5 siblings, 0 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-15  0:13 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, KAMEZAWA Hiroyuki, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, Kirill A. Shutemov, linux-mm

Mostly we use "enum lru_list lru": change those few "l"s to "lru"s.

Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 mm/memcontrol.c |   20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

--- mmotm.orig/mm/memcontrol.c	2011-12-30 21:23:28.000000000 -0800
+++ mmotm/mm/memcontrol.c	2011-12-30 21:29:03.695349263 -0800
@@ -704,14 +704,14 @@ mem_cgroup_zone_nr_lru_pages(struct mem_
 			unsigned int lru_mask)
 {
 	struct mem_cgroup_per_zone *mz;
-	enum lru_list l;
+	enum lru_list lru;
 	unsigned long ret = 0;
 
 	mz = mem_cgroup_zoneinfo(memcg, nid, zid);
 
-	for_each_lru(l) {
-		if (BIT(l) & lru_mask)
-			ret += mz->lru_size[l];
+	for_each_lru(lru) {
+		if (BIT(lru) & lru_mask)
+			ret += mz->lru_size[lru];
 	}
 	return ret;
 }
@@ -3687,10 +3687,10 @@ move_account:
 		mem_cgroup_start_move(memcg);
 		for_each_node_state(node, N_HIGH_MEMORY) {
 			for (zid = 0; !ret && zid < MAX_NR_ZONES; zid++) {
-				enum lru_list l;
-				for_each_lru(l) {
+				enum lru_list lru;
+				for_each_lru(lru) {
 					ret = mem_cgroup_force_empty_list(memcg,
-							node, zid, l);
+							node, zid, lru);
 					if (ret)
 						break;
 				}
@@ -4784,7 +4784,7 @@ static int alloc_mem_cgroup_per_zone_inf
 {
 	struct mem_cgroup_per_node *pn;
 	struct mem_cgroup_per_zone *mz;
-	enum lru_list l;
+	enum lru_list lru;
 	int zone, tmp = node;
 	/*
 	 * This routine is called against possible nodes.
@@ -4802,8 +4802,8 @@ static int alloc_mem_cgroup_per_zone_inf
 
 	for (zone = 0; zone < MAX_NR_ZONES; zone++) {
 		mz = &pn->zoneinfo[zone];
-		for_each_lru(l)
-			INIT_LIST_HEAD(&mz->lruvec.lists[l]);
+		for_each_lru(lru)
+			INIT_LIST_HEAD(&mz->lruvec.lists[lru]);
 		mz->usage_in_excess = 0;
 		mz->on_tree = false;
 		mz->memcg = memcg;


* [PATCH 5/5] memcg: remove redundant returns
  2012-01-15  0:07   ` Hugh Dickins
                       ` (3 preceding siblings ...)
  2012-01-15  0:13     ` [PATCH 4/5] memcg: enum lru_list lru Hugh Dickins
@ 2012-01-15  0:14     ` Hugh Dickins
  2012-01-16  9:47     ` [PATCH 0/5] memcg: trivial cleanups Michal Hocko
  5 siblings, 0 replies; 33+ messages in thread
From: Hugh Dickins @ 2012-01-15  0:14 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Johannes Weiner, KAMEZAWA Hiroyuki, Michal Hocko, Balbir Singh,
	KOSAKI Motohiro, Kirill A. Shutemov, linux-mm

Remove redundant returns from ends of functions, and one blank line.

Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Acked-by: Kirill A. Shutemov <kirill@shutemov.name>
Acked-by: Michal Hocko <mhocko@suse.cz>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 mm/memcontrol.c |    5 -----
 1 file changed, 5 deletions(-)

--- mmotm.orig/mm/memcontrol.c	2011-12-30 21:29:03.695349263 -0800
+++ mmotm/mm/memcontrol.c	2011-12-30 21:29:37.611350065 -0800
@@ -1362,7 +1362,6 @@ void mem_cgroup_print_oom_info(struct me
 	if (!memcg || !p)
 		return;
 
-
 	rcu_read_lock();
 
 	mem_cgrp = memcg->css.cgroup;
@@ -1897,7 +1896,6 @@ out:
 	if (unlikely(need_unlock))
 		move_unlock_page_cgroup(pc, &flags);
 	rcu_read_unlock();
-	return;
 }
 EXPORT_SYMBOL(mem_cgroup_update_page_stat);
 
@@ -2691,7 +2689,6 @@ __mem_cgroup_commit_charge_lrucare(struc
 		SetPageLRU(page);
 	}
 	spin_unlock_irqrestore(&zone->lru_lock, flags);
-	return;
 }
 
 int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
@@ -2881,7 +2878,6 @@ direct_uncharge:
 		res_counter_uncharge(&memcg->memsw, nr_pages * PAGE_SIZE);
 	if (unlikely(batch->memcg != memcg))
 		memcg_oom_recover(memcg);
-	return;
 }
 
 /*
@@ -3935,7 +3931,6 @@ static void memcg_get_hierarchical_limit
 out:
 	*mem_limit = min_limit;
 	*memsw_limit = min_memsw_limit;
-	return;
 }
 
 static int mem_cgroup_reset(struct cgroup *cont, unsigned int event)


* Re: [PATCH 0/5] memcg: trivial cleanups
  2012-01-15  0:07   ` Hugh Dickins
                       ` (4 preceding siblings ...)
  2012-01-15  0:14     ` [PATCH 5/5] memcg: remove redundant returns Hugh Dickins
@ 2012-01-16  9:47     ` Michal Hocko
  5 siblings, 0 replies; 33+ messages in thread
From: Michal Hocko @ 2012-01-16  9:47 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Johannes Weiner, KAMEZAWA Hiroyuki, Balbir Singh,
	KOSAKI Motohiro, Kirill A. Shutemov, linux-mm

On Sat 14-01-12 16:07:42, Hugh Dickins wrote:
> On Mon, 9 Jan 2012, Johannes Weiner wrote:
> > On Sat, Dec 31, 2011 at 11:26:42PM -0800, Hugh Dickins wrote:
> > > Obviously I've missed the boat for per-memcg per-zone LRU locking in 3.3,
> > > but I've split out a shameless bunch of trivial cleanups from that work,
> > > and hoping these might still sneak in unless they're controversial.
> > > 
> > > Following on from my earlier mmotm/next patches, here's five
> > > to memcontrol.c and .h, followed by six to the rest of mm.
> > > 
> > > [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR
> > > [PATCH 2/5] memcg: replace mem and mem_cont stragglers
> > > [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT
> > > [PATCH 4/5] memcg: enum lru_list lru
> > > [PATCH 5/5] memcg: remove redundant returns
> > 
> > No objections from my side wrt putting them into 3.3.
> > 
> > Thanks!
> 
> I was hoping that these five memcg trivia (and my two SHM_UNLOCK fixes)
> were on their way into 3.3, but they've not yet shown up in mm-commits.
> 
> I'll resend them all again now: I've not rediffed, since they apply
> (if at different offsets) to Linus's current git tree; but I have added
> in the (somewhat disproportionate for trivia!) Acked-bys and Reviewed-bys.
> 
> Michal was not happy with 3/5: I've summarized below the --- on that one,
> do with it as you wish - I think neither Michal nor I shall slam the door
> and burst into tears if you decide against one of us.

Yes, please go on with the patch.  I will not lose any sleep over
MEM_CGROUP_ZSTAT ;)
As both Kame and Johannes have acked it, there is no need to discuss it
further.

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic


Thread overview: 33+ messages
2012-01-01  7:26 [PATCH 0/5] memcg: trivial cleanups Hugh Dickins
2012-01-01  7:27 ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
2012-01-01  7:33   ` KOSAKI Motohiro
2012-01-02 11:53   ` Michal Hocko
2012-01-05  6:30   ` KAMEZAWA Hiroyuki
2012-01-01  7:29 ` [PATCH 2/5] memcg: replace mem and mem_cont stragglers Hugh Dickins
2012-01-01  7:34   ` KOSAKI Motohiro
2012-01-02 12:25   ` Michal Hocko
2012-01-05  6:31   ` KAMEZAWA Hiroyuki
2012-01-01  7:30 ` [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT Hugh Dickins
2012-01-01  7:37   ` KOSAKI Motohiro
2012-01-02 12:59   ` Michal Hocko
2012-01-02 19:43     ` Hugh Dickins
2012-01-03 11:05       ` Michal Hocko
2012-01-05  6:33   ` KAMEZAWA Hiroyuki
2012-01-05 20:14     ` Hugh Dickins
2012-01-01  7:31 ` [PATCH 4/5] memcg: enum lru_list lru Hugh Dickins
2012-01-01  7:38   ` KOSAKI Motohiro
2012-01-02 13:01   ` Michal Hocko
2012-01-05  6:34   ` KAMEZAWA Hiroyuki
2012-01-01  7:33 ` [PATCH 5/5] memcg: remove redundant returns Hugh Dickins
2012-01-01  7:38   ` KOSAKI Motohiro
2012-01-02 13:03   ` Michal Hocko
2012-01-05  6:35   ` KAMEZAWA Hiroyuki
2012-01-01 17:01 ` [PATCH 0/5] memcg: trivial cleanups Kirill A. Shutemov
2012-01-09 13:03 ` Johannes Weiner
2012-01-15  0:07   ` Hugh Dickins
2012-01-15  0:09     ` [PATCH 1/5] memcg: replace MEM_CONT by MEM_RES_CTLR Hugh Dickins
2012-01-15  0:10     ` [PATCH 2/5] memcg: replace mem and mem_cont stragglers Hugh Dickins
2012-01-15  0:12     ` [PATCH 3/5] memcg: lru_size instead of MEM_CGROUP_ZSTAT Hugh Dickins
2012-01-15  0:13     ` [PATCH 4/5] memcg: enum lru_list lru Hugh Dickins
2012-01-15  0:14     ` [PATCH 5/5] memcg: remove redundant returns Hugh Dickins
2012-01-16  9:47     ` [PATCH 0/5] memcg: trivial cleanups Michal Hocko
