* [PATCH v6 0/7] mm: some simple cleanups
From: Konstantin Khlebnikov @ 2012-03-22 21:56 UTC
  To: linux-mm, Andrew Morton, linux-kernel

I left only small and simple patches here; some of them are already acked.

The patch for the lru filters in __isolate_lru_page() was reworked again: we can
remove all of these checks, because lumpy isolation in shrink_active_list() is now
forbidden.

---

Hugh Dickins (2):
      mm/memcg: scanning_global_lru means mem_cgroup_disabled
      mm/memcg: move reclaim_stat into lruvec

Konstantin Khlebnikov (5):
      mm: push lru index into shrink_[in]active_list()
      mm: mark mm-inline functions as __always_inline
      mm: remove lru type checks from __isolate_lru_page()
      mm/memcg: kill mem_cgroup_lru_del()
      mm/memcg: use vm_swappiness from target memory cgroup


 include/linux/memcontrol.h |   14 ------
 include/linux/mm_inline.h  |    8 ++-
 include/linux/mmzone.h     |   39 +++++++---------
 include/linux/swap.h       |    2 -
 mm/compaction.c            |    4 +-
 mm/memcontrol.c            |   32 +++----------
 mm/page_alloc.c            |    8 ++-
 mm/swap.c                  |   14 ++----
 mm/vmscan.c                |  105 +++++++++++++++-----------------------------
 9 files changed, 74 insertions(+), 152 deletions(-)

-- 
Signature


* [PATCH v6 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled
From: Konstantin Khlebnikov @ 2012-03-22 21:56 UTC
  To: linux-mm, Andrew Morton, linux-kernel
  Cc: Hugh Dickins, KAMEZAWA Hiroyuki, Glauber Costa

From: Hugh Dickins <hughd@google.com>

Although one has to admire the skill with which it has been concealed,
scanning_global_lru(mz) is actually just an interesting way to test
mem_cgroup_disabled().  Too many developer hours have been wasted on
confusing it with global_reclaim(): just use mem_cgroup_disabled().
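
To make the equivalence concrete: with memcg enabled, global reclaim
still visits each cgroup through mem_cgroup_iter(), so the scanner's
mz->mem_cgroup is non-NULL on that path too; it stays NULL only when
the controller is disabled at boot. A condensed user-space model of
that invariant (hypothetical stand-ins, not the kernel API):

  #include <assert.h>
  #include <stdbool.h>
  #include <stddef.h>

  struct mem_cgroup { int dummy; };
  struct mem_cgroup_zone { struct mem_cgroup *mem_cgroup; };

  int main(void)
  {
          bool mem_cgroup_disabled = false;      /* boot-time switch */
          struct mem_cgroup root;

          /* memcg enabled: mem_cgroup_iter() hands the scanner a real
           * cgroup even during global reclaim... */
          struct mem_cgroup_zone mz = { .mem_cgroup = &root };
          assert((mz.mem_cgroup == NULL) == mem_cgroup_disabled);

          /* ...memcg disabled: the field is never filled in, so the
           * old test !mz->mem_cgroup and mem_cgroup_disabled() agree */
          mem_cgroup_disabled = true;
          mz.mem_cgroup = NULL;
          assert((mz.mem_cgroup == NULL) == mem_cgroup_disabled);
          return 0;
  }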

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Glauber Costa <glommer@parallels.com>
---
 mm/vmscan.c |   18 ++++--------------
 1 files changed, 4 insertions(+), 14 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 49f15ef..c684f44 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -164,26 +164,16 @@ static bool global_reclaim(struct scan_control *sc)
 {
 	return !sc->target_mem_cgroup;
 }
-
-static bool scanning_global_lru(struct mem_cgroup_zone *mz)
-{
-	return !mz->mem_cgroup;
-}
 #else
 static bool global_reclaim(struct scan_control *sc)
 {
 	return true;
 }
-
-static bool scanning_global_lru(struct mem_cgroup_zone *mz)
-{
-	return true;
-}
 #endif
 
 static struct zone_reclaim_stat *get_reclaim_stat(struct mem_cgroup_zone *mz)
 {
-	if (!scanning_global_lru(mz))
+	if (!mem_cgroup_disabled())
 		return mem_cgroup_get_reclaim_stat(mz->mem_cgroup, mz->zone);
 
 	return &mz->zone->reclaim_stat;
@@ -192,7 +182,7 @@ static struct zone_reclaim_stat *get_reclaim_stat(struct mem_cgroup_zone *mz)
 static unsigned long zone_nr_lru_pages(struct mem_cgroup_zone *mz,
 				       enum lru_list lru)
 {
-	if (!scanning_global_lru(mz))
+	if (!mem_cgroup_disabled())
 		return mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
 						    zone_to_nid(mz->zone),
 						    zone_idx(mz->zone),
@@ -1806,7 +1796,7 @@ static int inactive_anon_is_low(struct mem_cgroup_zone *mz)
 	if (!total_swap_pages)
 		return 0;
 
-	if (!scanning_global_lru(mz))
+	if (!mem_cgroup_disabled())
 		return mem_cgroup_inactive_anon_is_low(mz->mem_cgroup,
 						       mz->zone);
 
@@ -1845,7 +1835,7 @@ static int inactive_file_is_low_global(struct zone *zone)
  */
 static int inactive_file_is_low(struct mem_cgroup_zone *mz)
 {
-	if (!scanning_global_lru(mz))
+	if (!mem_cgroup_disabled())
 		return mem_cgroup_inactive_file_is_low(mz->mem_cgroup,
 						       mz->zone);
 



* [PATCH v6 2/7] mm/memcg: move reclaim_stat into lruvec
From: Konstantin Khlebnikov @ 2012-03-22 21:56 UTC
  To: linux-mm, Andrew Morton, linux-kernel
  Cc: Hugh Dickins, KAMEZAWA Hiroyuki

From: Hugh Dickins <hughd@google.com>

With mem_cgroup_disabled() now explicit, it becomes clear that the
zone_reclaim_stat structure actually belongs in lruvec, per-zone
when memcg is disabled but per-memcg per-zone when it's enabled.

We can delete mem_cgroup_get_reclaim_stat(), and change
update_page_reclaim_stat() to update just the one set of stats,
the one which get_scan_count() will actually use.
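
The resulting update path can be modelled in isolation, with user-space
stand-ins for the kernel types (a sketch, not the exact kernel code):
use the page's per-memcg stats when they exist, otherwise fall back to
the zone's own lruvec.

  #include <stdio.h>
  #include <stddef.h>

  struct zone_reclaim_stat {
          unsigned long recent_rotated[2];
          unsigned long recent_scanned[2];
  };
  struct lruvec { struct zone_reclaim_stat reclaim_stat; };
  struct zone { struct lruvec lruvec; };

  /* stand-in for mem_cgroup_get_reclaim_stat_from_page(): NULL when
   * memcg is disabled, the per-memcg stats otherwise */
  static struct zone_reclaim_stat *stat_from_page(void *page)
  {
          (void)page;
          return NULL;            /* model the memcg-disabled case */
  }

  static void update_page_reclaim_stat(struct zone *zone, void *page,
                                       int file, int rotated)
  {
          struct zone_reclaim_stat *reclaim_stat = stat_from_page(page);

          if (!reclaim_stat)
                  reclaim_stat = &zone->lruvec.reclaim_stat;

          /* exactly one set of counters is bumped now */
          reclaim_stat->recent_scanned[file]++;
          if (rotated)
                  reclaim_stat->recent_rotated[file]++;
  }

  int main(void)
  {
          struct zone z = { { { { 0, 0 }, { 0, 0 } } } };

          update_page_reclaim_stat(&z, NULL, 1, 1);
          printf("scanned=%lu rotated=%lu\n",
                 z.lruvec.reclaim_stat.recent_scanned[1],
                 z.lruvec.reclaim_stat.recent_rotated[1]);
          return 0;
  }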

Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 include/linux/memcontrol.h |    9 ---------
 include/linux/mmzone.h     |   29 ++++++++++++++---------------
 mm/memcontrol.c            |   27 +++++++--------------------
 mm/page_alloc.c            |    8 ++++----
 mm/swap.c                  |   14 ++++----------
 mm/vmscan.c                |    5 +----
 6 files changed, 30 insertions(+), 62 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index f94efd2..95dc32c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -121,8 +121,6 @@ int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg,
 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
 unsigned long mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg,
 					int nid, int zid, unsigned int lrumask);
-struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
-						      struct zone *zone);
 struct zone_reclaim_stat*
 mem_cgroup_get_reclaim_stat_from_page(struct page *page);
 extern void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
@@ -351,13 +349,6 @@ mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg, int nid, int zid,
 	return 0;
 }
 
-
-static inline struct zone_reclaim_stat*
-mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg, struct zone *zone)
-{
-	return NULL;
-}
-
 static inline struct zone_reclaim_stat*
 mem_cgroup_get_reclaim_stat_from_page(struct page *page)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index dff7115..9316931 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -159,8 +159,22 @@ static inline int is_unevictable_lru(enum lru_list lru)
 	return (lru == LRU_UNEVICTABLE);
 }
 
+struct zone_reclaim_stat {
+	/*
+	 * The pageout code in vmscan.c keeps track of how many of the
+	 * mem/swap backed and file backed pages are refeferenced.
+	 * The higher the rotated/scanned ratio, the more valuable
+	 * that cache is.
+	 *
+	 * The anon LRU stats live in [0], file LRU stats in [1]
+	 */
+	unsigned long		recent_rotated[2];
+	unsigned long		recent_scanned[2];
+};
+
 struct lruvec {
 	struct list_head lists[NR_LRU_LISTS];
+	struct zone_reclaim_stat reclaim_stat;
 };
 
 /* Mask used at gathering information at once (see memcontrol.c) */
@@ -287,19 +301,6 @@ enum zone_type {
 #error ZONES_SHIFT -- too many zones configured adjust calculation
 #endif
 
-struct zone_reclaim_stat {
-	/*
-	 * The pageout code in vmscan.c keeps track of how many of the
-	 * mem/swap backed and file backed pages are refeferenced.
-	 * The higher the rotated/scanned ratio, the more valuable
-	 * that cache is.
-	 *
-	 * The anon LRU stats live in [0], file LRU stats in [1]
-	 */
-	unsigned long		recent_rotated[2];
-	unsigned long		recent_scanned[2];
-};
-
 struct zone {
 	/* Fields commonly accessed by the page allocator */
 
@@ -374,8 +375,6 @@ struct zone {
 	spinlock_t		lru_lock;
 	struct lruvec		lruvec;
 
-	struct zone_reclaim_stat reclaim_stat;
-
 	unsigned long		pages_scanned;	   /* since last reclaim */
 	unsigned long		flags;		   /* zone flags, see below */
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b2ee6df..59697fb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -138,7 +138,6 @@ struct mem_cgroup_per_zone {
 
 	struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1];
 
-	struct zone_reclaim_stat reclaim_stat;
 	struct rb_node		tree_node;	/* RB tree node */
 	unsigned long long	usage_in_excess;/* Set to the value by which */
 						/* the soft limit is exceeded*/
@@ -1233,16 +1232,6 @@ int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg, struct zone *zone)
 	return (active > inactive);
 }
 
-struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
-						      struct zone *zone)
-{
-	int nid = zone_to_nid(zone);
-	int zid = zone_idx(zone);
-	struct mem_cgroup_per_zone *mz = mem_cgroup_zoneinfo(memcg, nid, zid);
-
-	return &mz->reclaim_stat;
-}
-
 struct zone_reclaim_stat *
 mem_cgroup_get_reclaim_stat_from_page(struct page *page)
 {
@@ -1258,7 +1247,7 @@ mem_cgroup_get_reclaim_stat_from_page(struct page *page)
 	/* Ensure pc->mem_cgroup is visible after reading PCG_USED. */
 	smp_rmb();
 	mz = page_cgroup_zoneinfo(pc->mem_cgroup, page);
-	return &mz->reclaim_stat;
+	return &mz->lruvec.reclaim_stat;
 }
 
 #define mem_cgroup_from_res_counter(counter, member)	\
@@ -4216,21 +4205,19 @@ static int mem_control_stat_show(struct cgroup *cont, struct cftype *cft,
 	{
 		int nid, zid;
 		struct mem_cgroup_per_zone *mz;
+		struct zone_reclaim_stat *rstat;
 		unsigned long recent_rotated[2] = {0, 0};
 		unsigned long recent_scanned[2] = {0, 0};
 
 		for_each_online_node(nid)
 			for (zid = 0; zid < MAX_NR_ZONES; zid++) {
 				mz = mem_cgroup_zoneinfo(memcg, nid, zid);
+				rstat = &mz->lruvec.reclaim_stat;
 
-				recent_rotated[0] +=
-					mz->reclaim_stat.recent_rotated[0];
-				recent_rotated[1] +=
-					mz->reclaim_stat.recent_rotated[1];
-				recent_scanned[0] +=
-					mz->reclaim_stat.recent_scanned[0];
-				recent_scanned[1] +=
-					mz->reclaim_stat.recent_scanned[1];
+				recent_rotated[0] += rstat->recent_rotated[0];
+				recent_rotated[1] += rstat->recent_rotated[1];
+				recent_scanned[0] += rstat->recent_scanned[0];
+				recent_scanned[1] += rstat->recent_scanned[1];
 			}
 		cb->fill(cb, "recent_rotated_anon", recent_rotated[0]);
 		cb->fill(cb, "recent_rotated_file", recent_rotated[1]);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index caea788..95ac749 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4317,10 +4317,10 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 		zone_pcp_init(zone);
 		for_each_lru(lru)
 			INIT_LIST_HEAD(&zone->lruvec.lists[lru]);
-		zone->reclaim_stat.recent_rotated[0] = 0;
-		zone->reclaim_stat.recent_rotated[1] = 0;
-		zone->reclaim_stat.recent_scanned[0] = 0;
-		zone->reclaim_stat.recent_scanned[1] = 0;
+		zone->lruvec.reclaim_stat.recent_rotated[0] = 0;
+		zone->lruvec.reclaim_stat.recent_rotated[1] = 0;
+		zone->lruvec.reclaim_stat.recent_scanned[0] = 0;
+		zone->lruvec.reclaim_stat.recent_scanned[1] = 0;
 		zap_zone_vm_stats(zone);
 		zone->flags = 0;
 		if (!size)
diff --git a/mm/swap.c b/mm/swap.c
index 5c13f13..60d14da 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -279,21 +279,15 @@ void rotate_reclaimable_page(struct page *page)
 static void update_page_reclaim_stat(struct zone *zone, struct page *page,
 				     int file, int rotated)
 {
-	struct zone_reclaim_stat *reclaim_stat = &zone->reclaim_stat;
-	struct zone_reclaim_stat *memcg_reclaim_stat;
+	struct zone_reclaim_stat *reclaim_stat;
 
-	memcg_reclaim_stat = mem_cgroup_get_reclaim_stat_from_page(page);
+	reclaim_stat = mem_cgroup_get_reclaim_stat_from_page(page);
+	if (!reclaim_stat)
+		reclaim_stat = &zone->lruvec.reclaim_stat;
 
 	reclaim_stat->recent_scanned[file]++;
 	if (rotated)
 		reclaim_stat->recent_rotated[file]++;
-
-	if (!memcg_reclaim_stat)
-		return;
-
-	memcg_reclaim_stat->recent_scanned[file]++;
-	if (rotated)
-		memcg_reclaim_stat->recent_rotated[file]++;
 }
 
 static void __activate_page(struct page *page, void *arg)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c684f44..f4dca0c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -173,10 +173,7 @@ static bool global_reclaim(struct scan_control *sc)
 
 static struct zone_reclaim_stat *get_reclaim_stat(struct mem_cgroup_zone *mz)
 {
-	if (!mem_cgroup_disabled())
-		return mem_cgroup_get_reclaim_stat(mz->mem_cgroup, mz->zone);
-
-	return &mz->zone->reclaim_stat;
+	return &mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup)->reclaim_stat;
 }
 
 static unsigned long zone_nr_lru_pages(struct mem_cgroup_zone *mz,



* [PATCH v6 3/7] mm: push lru index into shrink_[in]active_list()
From: Konstantin Khlebnikov @ 2012-03-22 21:56 UTC
  To: linux-mm, Andrew Morton, linux-kernel
  Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Let's pass the lru index down the call stack to isolate_lru_pages();
this is better than reconstructing it from individual bits.
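
For illustration, the arithmetic being avoided can be modelled in user
space (enum values as in include/linux/mmzone.h, unevictable list
omitted; the helper name rebuild_lru is made up):

  #include <stdio.h>

  enum lru_list {
          LRU_INACTIVE_ANON = 0,  /* LRU_BASE */
          LRU_ACTIVE_ANON   = 1,  /* LRU_BASE + LRU_ACTIVE */
          LRU_INACTIVE_FILE = 2,  /* LRU_BASE + LRU_FILE */
          LRU_ACTIVE_FILE   = 3,  /* LRU_BASE + LRU_FILE + LRU_ACTIVE */
  };

  /* old scheme: rebuild the list id from two separate flag arguments */
  static enum lru_list rebuild_lru(int active, int file)
  {
          return (file ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON)
                 + (active ? 1 : 0);
  }

  int main(void)
  {
          /* new scheme: pass the enum itself and derive the bits only
           * where needed, as is_file_lru() does in the kernel */
          enum lru_list lru = LRU_ACTIVE_FILE;
          int file = (lru == LRU_INACTIVE_FILE || lru == LRU_ACTIVE_FILE);

          printf("rebuilt=%d direct=%d file=%d\n",
                 rebuild_lru(1, 1), (int)lru, file);
          return 0;
  }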

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Hugh Dickins <hughd@google.com>
---
 mm/vmscan.c |   41 +++++++++++++++++------------------------
 1 files changed, 17 insertions(+), 24 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f4dca0c..fb6d54e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1127,15 +1127,14 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
  * @nr_scanned:	The number of pages that were scanned.
  * @sc:		The scan_control struct for this reclaim session
  * @mode:	One of the LRU isolation modes
- * @active:	True [1] if isolating active pages
- * @file:	True [1] if isolating file [!anon] pages
+ * @lru		LRU list id for isolating
  *
  * returns how many pages were moved onto *@dst.
  */
 static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		struct mem_cgroup_zone *mz, struct list_head *dst,
 		unsigned long *nr_scanned, struct scan_control *sc,
-		isolate_mode_t mode, int active, int file)
+		isolate_mode_t mode, enum lru_list lru)
 {
 	struct lruvec *lruvec;
 	struct list_head *src;
@@ -1144,13 +1143,9 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 	unsigned long nr_lumpy_dirty = 0;
 	unsigned long nr_lumpy_failed = 0;
 	unsigned long scan;
-	int lru = LRU_BASE;
+	int file = is_file_lru(lru);
 
 	lruvec = mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup);
-	if (active)
-		lru += LRU_ACTIVE;
-	if (file)
-		lru += LRU_FILE;
 	src = &lruvec->lists[lru];
 
 	for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
@@ -1487,7 +1482,7 @@ static inline bool should_reclaim_stall(unsigned long nr_taken,
  */
 static noinline_for_stack unsigned long
 shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
-		     struct scan_control *sc, int priority, int file)
+		     struct scan_control *sc, int priority, enum lru_list lru)
 {
 	LIST_HEAD(page_list);
 	unsigned long nr_scanned;
@@ -1498,6 +1493,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 	unsigned long nr_dirty = 0;
 	unsigned long nr_writeback = 0;
 	isolate_mode_t isolate_mode = ISOLATE_INACTIVE;
+	int file = is_file_lru(lru);
 	struct zone *zone = mz->zone;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
 
@@ -1523,7 +1519,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 	spin_lock_irq(&zone->lru_lock);
 
 	nr_taken = isolate_lru_pages(nr_to_scan, mz, &page_list, &nr_scanned,
-				     sc, isolate_mode, 0, file);
+				     sc, isolate_mode, lru);
 	if (global_reclaim(sc)) {
 		zone->pages_scanned += nr_scanned;
 		if (current_is_kswapd())
@@ -1661,7 +1657,7 @@ static void move_active_pages_to_lru(struct zone *zone,
 static void shrink_active_list(unsigned long nr_to_scan,
 			       struct mem_cgroup_zone *mz,
 			       struct scan_control *sc,
-			       int priority, int file)
+			       int priority, enum lru_list lru)
 {
 	unsigned long nr_taken;
 	unsigned long nr_scanned;
@@ -1673,6 +1669,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
 	unsigned long nr_rotated = 0;
 	isolate_mode_t isolate_mode = ISOLATE_ACTIVE;
+	int file = is_file_lru(lru);
 	struct zone *zone = mz->zone;
 
 	lru_add_drain();
@@ -1687,17 +1684,14 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	spin_lock_irq(&zone->lru_lock);
 
 	nr_taken = isolate_lru_pages(nr_to_scan, mz, &l_hold, &nr_scanned, sc,
-				     isolate_mode, 1, file);
+				     isolate_mode, lru);
 	if (global_reclaim(sc))
 		zone->pages_scanned += nr_scanned;
 
 	reclaim_stat->recent_scanned[file] += nr_taken;
 
 	__count_zone_vm_events(PGREFILL, zone, nr_scanned);
-	if (file)
-		__mod_zone_page_state(zone, NR_ACTIVE_FILE, -nr_taken);
-	else
-		__mod_zone_page_state(zone, NR_ACTIVE_ANON, -nr_taken);
+	__mod_zone_page_state(zone, NR_LRU_BASE + lru, -nr_taken);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, nr_taken);
 	spin_unlock_irq(&zone->lru_lock);
 
@@ -1752,10 +1746,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	move_active_pages_to_lru(zone, &l_active, &l_hold,
-						LRU_ACTIVE + file * LRU_FILE);
-	move_active_pages_to_lru(zone, &l_inactive, &l_hold,
-						LRU_BASE   + file * LRU_FILE);
+	move_active_pages_to_lru(zone, &l_active, &l_hold, lru);
+	move_active_pages_to_lru(zone, &l_inactive, &l_hold, lru - LRU_ACTIVE);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, -nr_taken);
 	spin_unlock_irq(&zone->lru_lock);
 
@@ -1855,11 +1847,11 @@ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
 
 	if (is_active_lru(lru)) {
 		if (inactive_list_is_low(mz, file))
-			shrink_active_list(nr_to_scan, mz, sc, priority, file);
+			shrink_active_list(nr_to_scan, mz, sc, priority, lru);
 		return 0;
 	}
 
-	return shrink_inactive_list(nr_to_scan, mz, sc, priority, file);
+	return shrink_inactive_list(nr_to_scan, mz, sc, priority, lru);
 }
 
 static int vmscan_swappiness(struct mem_cgroup_zone *mz,
@@ -2110,7 +2102,8 @@ restart:
 	 * rebalance the anon lru active/inactive ratio.
 	 */
 	if (inactive_anon_is_low(mz))
-		shrink_active_list(SWAP_CLUSTER_MAX, mz, sc, priority, 0);
+		shrink_active_list(SWAP_CLUSTER_MAX, mz,
+				   sc, priority, LRU_ACTIVE_ANON);
 
 	/* reclaim/compaction might need reclaim to continue */
 	if (should_continue_reclaim(mz, nr_reclaimed,
@@ -2550,7 +2543,7 @@ static void age_active_anon(struct zone *zone, struct scan_control *sc,
 
 		if (inactive_anon_is_low(&mz))
 			shrink_active_list(SWAP_CLUSTER_MAX, &mz,
-					   sc, priority, 0);
+					   sc, priority, LRU_ACTIVE_ANON);
 
 		memcg = mem_cgroup_iter(NULL, memcg, NULL);
 	} while (memcg);



* [PATCH v6 4/7] mm: mark mm-inline functions as __always_inline
From: Konstantin Khlebnikov @ 2012-03-22 21:56 UTC
  To: linux-mm, Andrew Morton, linux-kernel

GCC sometimes ignores "inline" directives even for small and simple functions.
This is supposed to be fixed in gcc 4.7, but it was released only yesterday.
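
For reference, __always_inline maps onto the gcc function attribute
below (per include/linux/compiler-gcc.h of this era), which turns the
heuristic hint into a hard requirement. A minimal user-space sketch:

  #include <stdio.h>

  /* what the kernel's __always_inline expands to */
  #define __always_inline inline __attribute__((always_inline))

  /* gcc may ignore a plain "inline" (e.g. at -O0, or when its cost
   * model decides against it); always_inline forces the substitution */
  static __always_inline int twice(int x)
  {
          return 2 * x;
  }

  int main(void)
  {
          printf("%d\n", twice(21));
          return 0;
  }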

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>

---

add/remove: 0/0 grow/shrink: 1/5 up/down: 3/-57 (-54)
function                                     old     new   delta
mem_cgroup_charge_common                     253     256      +3
lru_deactivate_fn                            500     498      -2
lru_add_page_tail                            364     361      -3
mem_cgroup_usage_unregister_event            501     493      -8
mem_cgroup_lru_del                            73      65      -8
__mem_cgroup_commit_charge                   676     640     -36
---
 include/linux/mm_inline.h |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 227fd3e..16d45d9 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -21,7 +21,7 @@ static inline int page_is_file_cache(struct page *page)
 	return !PageSwapBacked(page);
 }
 
-static inline void
+static __always_inline void
 add_page_to_lru_list(struct zone *zone, struct page *page, enum lru_list lru)
 {
 	struct lruvec *lruvec;
@@ -31,7 +31,7 @@ add_page_to_lru_list(struct zone *zone, struct page *page, enum lru_list lru)
 	__mod_zone_page_state(zone, NR_LRU_BASE + lru, hpage_nr_pages(page));
 }
 
-static inline void
+static __always_inline void
 del_page_from_lru_list(struct zone *zone, struct page *page, enum lru_list lru)
 {
 	mem_cgroup_lru_del_list(page, lru);
@@ -61,7 +61,7 @@ static inline enum lru_list page_lru_base_type(struct page *page)
  * Returns the LRU list a page was on, as an index into the array of LRU
  * lists; and clears its Unevictable or Active flags, ready for freeing.
  */
-static inline enum lru_list page_off_lru(struct page *page)
+static __always_inline enum lru_list page_off_lru(struct page *page)
 {
 	enum lru_list lru;
 
@@ -85,7 +85,7 @@ static inline enum lru_list page_off_lru(struct page *page)
  * Returns the LRU list a page should be on, as an index
  * into the array of LRU lists.
  */
-static inline enum lru_list page_lru(struct page *page)
+static __always_inline enum lru_list page_lru(struct page *page)
 {
 	enum lru_list lru;
 



* [PATCH v6 5/7] mm: remove lru type checks from __isolate_lru_page()
From: Konstantin Khlebnikov @ 2012-03-22 21:56 UTC
  To: linux-mm, Andrew Morton, linux-kernel
  Cc: Hugh Dickins, KAMEZAWA Hiroyuki

After patch "mm: forbid lumpy-reclaim in shrink_active_list()" we can completely
remove anon/file and active/inactive lru type filters from __isolate_lru_page(),
because isolation for 0-order reclaim always isolates pages from right lru list.
And pages-isolation for lumpy shrink_inactive_list() or memory-compaction anyway
allowed to isolate pages from all evictable lru lists.
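
The invariant behind the removal, sketched with user-space stand-ins
(hypothetical structures, not kernel code): 0-order isolation pops
pages straight off lruvec->lists[lru], so every page it sees is of
that type by construction and the deleted per-page re-check can never
fire.

  #include <assert.h>

  enum lru_list { LRU_INACTIVE_ANON, LRU_ACTIVE_ANON,
                  LRU_INACTIVE_FILE, LRU_ACTIVE_FILE, NR_LRU_LISTS };

  struct page { enum lru_list lru; };

  int main(void)
  {
          /* each list only ever holds pages tagged with its own id */
          struct page lists[NR_LRU_LISTS][2] = {
                  [LRU_ACTIVE_FILE] = { { LRU_ACTIVE_FILE },
                                        { LRU_ACTIVE_FILE } },
          };
          enum lru_list lru = LRU_ACTIVE_FILE;
          int i;

          /* 0-order isolation walks exactly one list... */
          for (i = 0; i < 2; i++) {
                  struct page *page = &lists[lru][i];

                  /* ...so the old active/file filter is a tautology */
                  assert(page->lru == lru);
          }
          return 0;
  }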

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hughd@google.com>

---

add/remove: 0/0 grow/shrink: 0/3 up/down: 0/-148 (-148)
function                                     old     new   delta
shrink_inactive_list                        1259    1243     -16
__isolate_lru_page                           301     253     -48
static.isolate_lru_pages                    1055     971     -84
---
 include/linux/mmzone.h |   10 +++-------
 include/linux/swap.h   |    2 +-
 mm/compaction.c        |    4 ++--
 mm/vmscan.c            |   27 +++++----------------------
 4 files changed, 11 insertions(+), 32 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 9316931..71d81b0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -183,16 +183,12 @@ struct lruvec {
 #define LRU_ALL_EVICTABLE (LRU_ALL_FILE | LRU_ALL_ANON)
 #define LRU_ALL	     ((1 << NR_LRU_LISTS) - 1)
 
-/* Isolate inactive pages */
-#define ISOLATE_INACTIVE	((__force isolate_mode_t)0x1)
-/* Isolate active pages */
-#define ISOLATE_ACTIVE		((__force isolate_mode_t)0x2)
 /* Isolate clean file */
-#define ISOLATE_CLEAN		((__force isolate_mode_t)0x4)
+#define ISOLATE_CLEAN		((__force isolate_mode_t)0x1)
 /* Isolate unmapped file */
-#define ISOLATE_UNMAPPED	((__force isolate_mode_t)0x8)
+#define ISOLATE_UNMAPPED	((__force isolate_mode_t)0x2)
 /* Isolate for asynchronous migration */
-#define ISOLATE_ASYNC_MIGRATE	((__force isolate_mode_t)0x10)
+#define ISOLATE_ASYNC_MIGRATE	((__force isolate_mode_t)0x4)
 
 /* LRU Isolation modes. */
 typedef unsigned __bitwise__ isolate_mode_t;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index b86b5c2..bef952f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -248,7 +248,7 @@ static inline void lru_cache_add_file(struct page *page)
 /* linux/mm/vmscan.c */
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
-extern int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file);
+extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem,
 						  gfp_t gfp_mask, bool noswap);
 extern unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *mem,
diff --git a/mm/compaction.c b/mm/compaction.c
index 74a8c82..dd2fea5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -261,7 +261,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 	unsigned long last_pageblock_nr = 0, pageblock_nr;
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct list_head *migratelist = &cc->migratepages;
-	isolate_mode_t mode = ISOLATE_ACTIVE|ISOLATE_INACTIVE;
+	isolate_mode_t mode = 0;
 
 	/* Do not scan outside zone boundaries */
 	low_pfn = max(cc->migrate_pfn, zone->zone_start_pfn);
@@ -375,7 +375,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 			mode |= ISOLATE_ASYNC_MIGRATE;
 
 		/* Try isolate the page */
-		if (__isolate_lru_page(page, mode, 0) != 0)
+		if (__isolate_lru_page(page, mode) != 0)
 			continue;
 
 		VM_BUG_ON(PageTransCompound(page));
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fb6d54e..5f6ed98 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1028,29 +1028,14 @@ keep_lumpy:
  *
  * returns 0 on success, -ve errno on failure.
  */
-int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
+int __isolate_lru_page(struct page *page, isolate_mode_t mode)
 {
-	bool all_lru_mode;
 	int ret = -EINVAL;
 
 	/* Only take pages on the LRU. */
 	if (!PageLRU(page))
 		return ret;
 
-	all_lru_mode = (mode & (ISOLATE_ACTIVE|ISOLATE_INACTIVE)) ==
-		(ISOLATE_ACTIVE|ISOLATE_INACTIVE);
-
-	/*
-	 * When checking the active state, we need to be sure we are
-	 * dealing with comparible boolean values.  Take the logical not
-	 * of each.
-	 */
-	if (!all_lru_mode && !PageActive(page) != !(mode & ISOLATE_ACTIVE))
-		return ret;
-
-	if (!all_lru_mode && !!page_is_file_cache(page) != file)
-		return ret;
-
 	/*
 	 * When this function is being called for lumpy reclaim, we
 	 * initially look into all LRU pages, active, inactive and
@@ -1160,7 +1145,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 		VM_BUG_ON(!PageLRU(page));
 
-		switch (__isolate_lru_page(page, mode, file)) {
+		switch (__isolate_lru_page(page, mode)) {
 		case 0:
 			mem_cgroup_lru_del(page);
 			list_move(&page->lru, dst);
@@ -1218,7 +1203,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 			    !PageSwapCache(cursor_page))
 				break;
 
-			if (__isolate_lru_page(cursor_page, mode, file) == 0) {
+			if (__isolate_lru_page(cursor_page, mode) == 0) {
 				unsigned int isolated_pages;
 
 				mem_cgroup_lru_del(cursor_page);
@@ -1492,7 +1477,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 	unsigned long nr_file;
 	unsigned long nr_dirty = 0;
 	unsigned long nr_writeback = 0;
-	isolate_mode_t isolate_mode = ISOLATE_INACTIVE;
+	isolate_mode_t isolate_mode = 0;
 	int file = is_file_lru(lru);
 	struct zone *zone = mz->zone;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
@@ -1506,8 +1491,6 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 	}
 
 	set_reclaim_mode(priority, sc, false);
-	if (sc->reclaim_mode & RECLAIM_MODE_LUMPYRECLAIM)
-		isolate_mode |= ISOLATE_ACTIVE;
 
 	lru_add_drain();
 
@@ -1668,7 +1651,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	struct page *page;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
 	unsigned long nr_rotated = 0;
-	isolate_mode_t isolate_mode = ISOLATE_ACTIVE;
+	isolate_mode_t isolate_mode = 0;
 	int file = is_file_lru(lru);
 	struct zone *zone = mz->zone;
 


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v6 5/7] mm: remove lru type checks from __isolate_lru_page()
@ 2012-03-22 21:56   ` Konstantin Khlebnikov
  0 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-22 21:56 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

After patch "mm: forbid lumpy-reclaim in shrink_active_list()" we can completely
remove anon/file and active/inactive lru type filters from __isolate_lru_page(),
because isolation for 0-order reclaim always isolates pages from right lru list.
And pages-isolation for lumpy shrink_inactive_list() or memory-compaction anyway
allowed to isolate pages from all evictable lru lists.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hughd@google.com>

---

add/remove: 0/0 grow/shrink: 0/3 up/down: 0/-148 (-148)
function                                     old     new   delta
shrink_inactive_list                        1259    1243     -16
__isolate_lru_page                           301     253     -48
static.isolate_lru_pages                    1055     971     -84
---
 include/linux/mmzone.h |   10 +++-------
 include/linux/swap.h   |    2 +-
 mm/compaction.c        |    4 ++--
 mm/vmscan.c            |   27 +++++----------------------
 4 files changed, 11 insertions(+), 32 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 9316931..71d81b0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -183,16 +183,12 @@ struct lruvec {
 #define LRU_ALL_EVICTABLE (LRU_ALL_FILE | LRU_ALL_ANON)
 #define LRU_ALL	     ((1 << NR_LRU_LISTS) - 1)
 
-/* Isolate inactive pages */
-#define ISOLATE_INACTIVE	((__force isolate_mode_t)0x1)
-/* Isolate active pages */
-#define ISOLATE_ACTIVE		((__force isolate_mode_t)0x2)
 /* Isolate clean file */
-#define ISOLATE_CLEAN		((__force isolate_mode_t)0x4)
+#define ISOLATE_CLEAN		((__force isolate_mode_t)0x1)
 /* Isolate unmapped file */
-#define ISOLATE_UNMAPPED	((__force isolate_mode_t)0x8)
+#define ISOLATE_UNMAPPED	((__force isolate_mode_t)0x2)
 /* Isolate for asynchronous migration */
-#define ISOLATE_ASYNC_MIGRATE	((__force isolate_mode_t)0x10)
+#define ISOLATE_ASYNC_MIGRATE	((__force isolate_mode_t)0x4)
 
 /* LRU Isolation modes. */
 typedef unsigned __bitwise__ isolate_mode_t;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index b86b5c2..bef952f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -248,7 +248,7 @@ static inline void lru_cache_add_file(struct page *page)
 /* linux/mm/vmscan.c */
 extern unsigned long try_to_free_pages(struct zonelist *zonelist, int order,
 					gfp_t gfp_mask, nodemask_t *mask);
-extern int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file);
+extern int __isolate_lru_page(struct page *page, isolate_mode_t mode);
 extern unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *mem,
 						  gfp_t gfp_mask, bool noswap);
 extern unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *mem,
diff --git a/mm/compaction.c b/mm/compaction.c
index 74a8c82..dd2fea5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -261,7 +261,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 	unsigned long last_pageblock_nr = 0, pageblock_nr;
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct list_head *migratelist = &cc->migratepages;
-	isolate_mode_t mode = ISOLATE_ACTIVE|ISOLATE_INACTIVE;
+	isolate_mode_t mode = 0;
 
 	/* Do not scan outside zone boundaries */
 	low_pfn = max(cc->migrate_pfn, zone->zone_start_pfn);
@@ -375,7 +375,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 			mode |= ISOLATE_ASYNC_MIGRATE;
 
 		/* Try isolate the page */
-		if (__isolate_lru_page(page, mode, 0) != 0)
+		if (__isolate_lru_page(page, mode) != 0)
 			continue;
 
 		VM_BUG_ON(PageTransCompound(page));
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fb6d54e..5f6ed98 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1028,29 +1028,14 @@ keep_lumpy:
  *
  * returns 0 on success, -ve errno on failure.
  */
-int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
+int __isolate_lru_page(struct page *page, isolate_mode_t mode)
 {
-	bool all_lru_mode;
 	int ret = -EINVAL;
 
 	/* Only take pages on the LRU. */
 	if (!PageLRU(page))
 		return ret;
 
-	all_lru_mode = (mode & (ISOLATE_ACTIVE|ISOLATE_INACTIVE)) ==
-		(ISOLATE_ACTIVE|ISOLATE_INACTIVE);
-
-	/*
-	 * When checking the active state, we need to be sure we are
-	 * dealing with comparible boolean values.  Take the logical not
-	 * of each.
-	 */
-	if (!all_lru_mode && !PageActive(page) != !(mode & ISOLATE_ACTIVE))
-		return ret;
-
-	if (!all_lru_mode && !!page_is_file_cache(page) != file)
-		return ret;
-
 	/*
 	 * When this function is being called for lumpy reclaim, we
 	 * initially look into all LRU pages, active, inactive and
@@ -1160,7 +1145,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 		VM_BUG_ON(!PageLRU(page));
 
-		switch (__isolate_lru_page(page, mode, file)) {
+		switch (__isolate_lru_page(page, mode)) {
 		case 0:
 			mem_cgroup_lru_del(page);
 			list_move(&page->lru, dst);
@@ -1218,7 +1203,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 			    !PageSwapCache(cursor_page))
 				break;
 
-			if (__isolate_lru_page(cursor_page, mode, file) == 0) {
+			if (__isolate_lru_page(cursor_page, mode) == 0) {
 				unsigned int isolated_pages;
 
 				mem_cgroup_lru_del(cursor_page);
@@ -1492,7 +1477,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 	unsigned long nr_file;
 	unsigned long nr_dirty = 0;
 	unsigned long nr_writeback = 0;
-	isolate_mode_t isolate_mode = ISOLATE_INACTIVE;
+	isolate_mode_t isolate_mode = 0;
 	int file = is_file_lru(lru);
 	struct zone *zone = mz->zone;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
@@ -1506,8 +1491,6 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 	}
 
 	set_reclaim_mode(priority, sc, false);
-	if (sc->reclaim_mode & RECLAIM_MODE_LUMPYRECLAIM)
-		isolate_mode |= ISOLATE_ACTIVE;
 
 	lru_add_drain();
 
@@ -1668,7 +1651,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	struct page *page;
 	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
 	unsigned long nr_rotated = 0;
-	isolate_mode_t isolate_mode = ISOLATE_ACTIVE;
+	isolate_mode_t isolate_mode = 0;
 	int file = is_file_lru(lru);
 	struct zone *zone = mz->zone;
 


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v6 6/7] mm/memcg: kill mem_cgroup_lru_del()
  2012-03-22 21:56 ` Konstantin Khlebnikov
@ 2012-03-22 21:56   ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-22 21:56 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

This patch kills mem_cgroup_lru_del(); we can use mem_cgroup_lru_del_list()
instead. On 0-order isolation we already have the right lru list id.
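
For illustration (the diff below shows the same thing in patch form): the
helper being removed was just this wrapper, so a 0-order caller that already
holds the lru index can skip the page_lru() lookup:

	/* before: recomputes an lru id the caller already knows */
	void mem_cgroup_lru_del(struct page *page)
	{
		mem_cgroup_lru_del_list(page, page_lru(page));
	}

	/* after: isolate_lru_pages() passes its own lru index */
	mem_cgroup_lru_del_list(page, lru);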

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Hugh Dickins <hughd@google.com>
---
 include/linux/memcontrol.h |    5 -----
 mm/memcontrol.c            |    5 -----
 mm/vmscan.c                |    7 +++++--
 3 files changed, 5 insertions(+), 12 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 95dc32c..58d820c 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -66,7 +66,6 @@ struct lruvec *mem_cgroup_zone_lruvec(struct zone *, struct mem_cgroup *);
 struct lruvec *mem_cgroup_lru_add_list(struct zone *, struct page *,
 				       enum lru_list);
 void mem_cgroup_lru_del_list(struct page *, enum lru_list);
-void mem_cgroup_lru_del(struct page *);
 struct lruvec *mem_cgroup_lru_move_lists(struct zone *, struct page *,
 					 enum lru_list, enum lru_list);
 
@@ -260,10 +259,6 @@ static inline void mem_cgroup_lru_del_list(struct page *page, enum lru_list lru)
 {
 }
 
-static inline void mem_cgroup_lru_del(struct page *page)
-{
-}
-
 static inline struct lruvec *mem_cgroup_lru_move_lists(struct zone *zone,
 						       struct page *page,
 						       enum lru_list from,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 59697fb..16db6c1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1115,11 +1115,6 @@ void mem_cgroup_lru_del_list(struct page *page, enum lru_list lru)
 	mz->lru_size[lru] -= 1 << compound_order(page);
 }
 
-void mem_cgroup_lru_del(struct page *page)
-{
-	mem_cgroup_lru_del_list(page, page_lru(page));
-}
-
 /**
  * mem_cgroup_lru_move_lists - account for moving a page between lrus
  * @zone: zone of the page
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 5f6ed98..9de66be 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1147,7 +1147,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 		switch (__isolate_lru_page(page, mode)) {
 		case 0:
-			mem_cgroup_lru_del(page);
+			mem_cgroup_lru_del_list(page, lru);
 			list_move(&page->lru, dst);
 			nr_taken += hpage_nr_pages(page);
 			break;
@@ -1205,8 +1205,11 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 			if (__isolate_lru_page(cursor_page, mode) == 0) {
 				unsigned int isolated_pages;
+				enum lru_list cursor_lru;
 
-				mem_cgroup_lru_del(cursor_page);
+				cursor_lru = page_lru(cursor_page);
+				mem_cgroup_lru_del_list(cursor_page,
+							cursor_lru);
 				list_move(&cursor_page->lru, dst);
 				isolated_pages = hpage_nr_pages(cursor_page);
 				nr_taken += isolated_pages;


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v6 7/7] mm/memcg: use vm_swappiness from target memory cgroup
  2012-03-22 21:56 ` Konstantin Khlebnikov
@ 2012-03-22 21:56   ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-22 21:56 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: KAMEZAWA Hiroyuki

Use vm_swappiness from the memory cgroup which triggered this memory reclaim.
This is more reasonable and allows us to kill one argument.
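
For example, with the default vm_swappiness of 60 the split in
get_scan_count() (see the second hunk below) comes out as anon_prio = 60 and
file_prio = 200 - 60 = 140, so file pages get roughly 140/60 ~ 2.3 times the
scan pressure of anon pages, all else being equal:

	anon_prio = vmscan_swappiness(sc);
	file_prio = 200 - vmscan_swappiness(sc);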

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
---
 mm/vmscan.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9de66be..5e2906d 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1840,12 +1840,11 @@ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
 	return shrink_inactive_list(nr_to_scan, mz, sc, priority, lru);
 }
 
-static int vmscan_swappiness(struct mem_cgroup_zone *mz,
-			     struct scan_control *sc)
+static int vmscan_swappiness(struct scan_control *sc)
 {
 	if (global_reclaim(sc))
 		return vm_swappiness;
-	return mem_cgroup_swappiness(mz->mem_cgroup);
+	return mem_cgroup_swappiness(sc->target_mem_cgroup);
 }
 
 /*
@@ -1913,8 +1912,8 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 	 * With swappiness at 100, anonymous and file have the same priority.
 	 * This scanning priority is essentially the inverse of IO cost.
 	 */
-	anon_prio = vmscan_swappiness(mz, sc);
-	file_prio = 200 - vmscan_swappiness(mz, sc);
+	anon_prio = vmscan_swappiness(sc);
+	file_prio = 200 - vmscan_swappiness(sc);
 
 	/*
 	 * OK, so we have swap space and a fair amount of page cache


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 5/7] mm: remove lru type checks from __isolate_lru_page()
  2012-03-22 21:56   ` Konstantin Khlebnikov
@ 2012-03-23  1:46     ` KAMEZAWA Hiroyuki
  -1 siblings, 0 replies; 42+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-03-23  1:46 UTC (permalink / raw)
  To: Konstantin Khlebnikov; +Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins

(2012/03/23 6:56), Konstantin Khlebnikov wrote:

> After patch "mm: forbid lumpy-reclaim in shrink_active_list()" we can completely
> remove the anon/file and active/inactive lru type filters from __isolate_lru_page(),
> because isolation for 0-order reclaim always isolates pages from the right lru list.
> And page isolation for lumpy shrink_inactive_list() or memory compaction is anyway
> allowed to isolate pages from all evictable lru lists.
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Cc: Hugh Dickins <hughd@google.com>
> 

seems reasonable to me.
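
For illustration, with the lru type filters gone the compaction call site
reduces to just the async-migration flag -- a sketch of the post-patch flow
based on the hunks above, not new code from this series:

	isolate_mode_t mode = 0;		/* no active/inactive filter */

	if (!cc->sync)
		mode |= ISOLATE_ASYNC_MIGRATE;	/* now 0x4 */

	if (__isolate_lru_page(page, mode) == 0)
		list_add(&page->lru, migratelist);	/* passed the checks */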

Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 6/7] mm/memcg: kill mem_cgroup_lru_del()
  2012-03-22 21:56   ` Konstantin Khlebnikov
@ 2012-03-23  1:48     ` KAMEZAWA Hiroyuki
  -1 siblings, 0 replies; 42+ messages in thread
From: KAMEZAWA Hiroyuki @ 2012-03-23  1:48 UTC (permalink / raw)
  To: Konstantin Khlebnikov; +Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins

(2012/03/23 6:56), Konstantin Khlebnikov wrote:

> This patch kills mem_cgroup_lru_del(); we can use mem_cgroup_lru_del_list()
> instead. On 0-order isolation we already have the right lru list id.
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Cc: Hugh Dickins <hughd@google.com>


Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>



^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 3/7] mm: push lru index into shrink_[in]active_list()
  2012-03-22 21:56   ` Konstantin Khlebnikov
@ 2012-03-23  4:48     ` Minchan Kim
  -1 siblings, 0 replies; 42+ messages in thread
From: Minchan Kim @ 2012-03-23  4:48 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins, KAMEZAWA Hiroyuki

On Fri, Mar 23, 2012 at 01:56:27AM +0400, Konstantin Khlebnikov wrote:
> Let's pass the lru index down the call stack to isolate_lru_pages();
> this is better than reconstructing it from individual bits.
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Acked-by: Hugh Dickins <hughd@google.com>
> ---
>  mm/vmscan.c |   41 +++++++++++++++++------------------------
>  1 files changed, 17 insertions(+), 24 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index f4dca0c..fb6d54e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1127,15 +1127,14 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
>   * @nr_scanned:	The number of pages that were scanned.
>   * @sc:		The scan_control struct for this reclaim session
>   * @mode:	One of the LRU isolation modes
> - * @active:	True [1] if isolating active pages
> - * @file:	True [1] if isolating file [!anon] pages
> + * @lru		LRU list id for isolating

Missing colon.
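
I.e., presumably it should read:

 * @lru:	LRU list id for isolating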

Otherwise, nice cleanup!

Reviewed-by: Minchan Kim <minchan@kernel.org>
-- 
Kind regards,
Minchan Kim

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled
  2012-03-22 21:56   ` Konstantin Khlebnikov
@ 2012-03-26 15:04     ` Michal Hocko
  -1 siblings, 0 replies; 42+ messages in thread
From: Michal Hocko @ 2012-03-26 15:04 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins,
	KAMEZAWA Hiroyuki, Glauber Costa, Johannes Weiner

[Adding Johannes to CC]

On Fri 23-03-12 01:56:16, Konstantin Khlebnikov wrote:
> From: Hugh Dickins <hughd@google.com>
> 
> Although one has to admire the skill with which it has been concealed,
> scanning_global_lru(mz) is actually just an interesting way to test
> mem_cgroup_disabled().  Too many developer hours have been wasted on
> confusing it with global_reclaim(): just use mem_cgroup_disabled().

Is this really correct?

> Signed-off-by: Hugh Dickins <hughd@google.com>
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Acked-by: Glauber Costa <glommer@parallels.com>
> ---
>  mm/vmscan.c |   18 ++++--------------
>  1 files changed, 4 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 49f15ef..c684f44 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
[...]
> @@ -1806,7 +1796,7 @@ static int inactive_anon_is_low(struct mem_cgroup_zone *mz)
>  	if (!total_swap_pages)
>  		return 0;
>  
> -	if (!scanning_global_lru(mz))
> +	if (!mem_cgroup_disabled())
>  		return mem_cgroup_inactive_anon_is_low(mz->mem_cgroup,
>  						       mz->zone);

The mem_cgroup_inactive_anon_is_low calculation is slightly different from
what we have for the cgroup_disabled case: calculate_zone_inactive_ratio
considers _all_ present pages in the zone, while the memcg variant considers
only active+inactive.
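
Roughly (approximate shapes from memory, not quoted from this thread):

	/* global: calculate_zone_inactive_ratio() */
	gb = zone->present_pages >> (30 - PAGE_SHIFT);
	ratio = gb ? int_sqrt(10 * gb) : 1;
	/* inactive anon is "low" when inactive * ratio < active */

	/* memcg: same sqrt rule, but fed only by the LRU sizes */
	gb = (inactive + active) >> (30 - PAGE_SHIFT);
	ratio = gb ? int_sqrt(10 * gb) : 1;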

>  
> @@ -1845,7 +1835,7 @@ static int inactive_file_is_low_global(struct zone *zone)
>   */
>  static int inactive_file_is_low(struct mem_cgroup_zone *mz)
>  {
> -	if (!scanning_global_lru(mz))
> +	if (!mem_cgroup_disabled())
>  		return mem_cgroup_inactive_file_is_low(mz->mem_cgroup,
>  						       mz->zone);
>  
> 

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 2/7] mm/memcg: move reclaim_stat into lruvec
  2012-03-22 21:56   ` Konstantin Khlebnikov
@ 2012-03-26 15:16     ` Michal Hocko
  -1 siblings, 0 replies; 42+ messages in thread
From: Michal Hocko @ 2012-03-26 15:16 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins, KAMEZAWA Hiroyuki

On Fri 23-03-12 01:56:23, Konstantin Khlebnikov wrote:
> From: Hugh Dickins <hughd@google.com>
> 
> With mem_cgroup_disabled() now explicit, it becomes clear that the
> zone_reclaim_stat structure actually belongs in lruvec, per-zone
> when memcg is disabled but per-memcg per-zone when it's enabled.
> 
> We can delete mem_cgroup_get_reclaim_stat(), and change
> update_page_reclaim_stat() to update just the one set of stats,
> the one which get_scan_count() will actually use.
> 
> Signed-off-by: Hugh Dickins <hughd@google.com>
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>

I really like this one. I always hated how we do double accounting.
Happily 
Acked-by: Michal Hocko <mhocko@suse.cz>

> ---
>  include/linux/memcontrol.h |    9 ---------
>  include/linux/mmzone.h     |   29 ++++++++++++++---------------
>  mm/memcontrol.c            |   27 +++++++--------------------
>  mm/page_alloc.c            |    8 ++++----
>  mm/swap.c                  |   14 ++++----------
>  mm/vmscan.c                |    5 +----
>  6 files changed, 30 insertions(+), 62 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index f94efd2..95dc32c 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -121,8 +121,6 @@ int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg,
>  int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
>  unsigned long mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg,
>  					int nid, int zid, unsigned int lrumask);
> -struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
> -						      struct zone *zone);
>  struct zone_reclaim_stat*
>  mem_cgroup_get_reclaim_stat_from_page(struct page *page);
>  extern void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
> @@ -351,13 +349,6 @@ mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg, int nid, int zid,
>  	return 0;
>  }
>  
> -
> -static inline struct zone_reclaim_stat*
> -mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg, struct zone *zone)
> -{
> -	return NULL;
> -}
> -
>  static inline struct zone_reclaim_stat*
>  mem_cgroup_get_reclaim_stat_from_page(struct page *page)
>  {
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index dff7115..9316931 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -159,8 +159,22 @@ static inline int is_unevictable_lru(enum lru_list lru)
>  	return (lru == LRU_UNEVICTABLE);
>  }
>  
> +struct zone_reclaim_stat {
> +	/*
> +	 * The pageout code in vmscan.c keeps track of how many of the
> +	 * mem/swap backed and file backed pages are refeferenced.
> +	 * The higher the rotated/scanned ratio, the more valuable
> +	 * that cache is.
> +	 *
> +	 * The anon LRU stats live in [0], file LRU stats in [1]
> +	 */
> +	unsigned long		recent_rotated[2];
> +	unsigned long		recent_scanned[2];
> +};
> +
>  struct lruvec {
>  	struct list_head lists[NR_LRU_LISTS];
> +	struct zone_reclaim_stat reclaim_stat;
>  };
>  
>  /* Mask used at gathering information at once (see memcontrol.c) */
> @@ -287,19 +301,6 @@ enum zone_type {
>  #error ZONES_SHIFT -- too many zones configured adjust calculation
>  #endif
>  
> -struct zone_reclaim_stat {
> -	/*
> -	 * The pageout code in vmscan.c keeps track of how many of the
> -	 * mem/swap backed and file backed pages are refeferenced.
> -	 * The higher the rotated/scanned ratio, the more valuable
> -	 * that cache is.
> -	 *
> -	 * The anon LRU stats live in [0], file LRU stats in [1]
> -	 */
> -	unsigned long		recent_rotated[2];
> -	unsigned long		recent_scanned[2];
> -};
> -
>  struct zone {
>  	/* Fields commonly accessed by the page allocator */
>  
> @@ -374,8 +375,6 @@ struct zone {
>  	spinlock_t		lru_lock;
>  	struct lruvec		lruvec;
>  
> -	struct zone_reclaim_stat reclaim_stat;
> -
>  	unsigned long		pages_scanned;	   /* since last reclaim */
>  	unsigned long		flags;		   /* zone flags, see below */
>  
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index b2ee6df..59697fb 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -138,7 +138,6 @@ struct mem_cgroup_per_zone {
>  
>  	struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1];
>  
> -	struct zone_reclaim_stat reclaim_stat;
>  	struct rb_node		tree_node;	/* RB tree node */
>  	unsigned long long	usage_in_excess;/* Set to the value by which */
>  						/* the soft limit is exceeded*/
> @@ -1233,16 +1232,6 @@ int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg, struct zone *zone)
>  	return (active > inactive);
>  }
>  
> -struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
> -						      struct zone *zone)
> -{
> -	int nid = zone_to_nid(zone);
> -	int zid = zone_idx(zone);
> -	struct mem_cgroup_per_zone *mz = mem_cgroup_zoneinfo(memcg, nid, zid);
> -
> -	return &mz->reclaim_stat;
> -}
> -
>  struct zone_reclaim_stat *
>  mem_cgroup_get_reclaim_stat_from_page(struct page *page)
>  {
> @@ -1258,7 +1247,7 @@ mem_cgroup_get_reclaim_stat_from_page(struct page *page)
>  	/* Ensure pc->mem_cgroup is visible after reading PCG_USED. */
>  	smp_rmb();
>  	mz = page_cgroup_zoneinfo(pc->mem_cgroup, page);
> -	return &mz->reclaim_stat;
> +	return &mz->lruvec.reclaim_stat;
>  }
>  
>  #define mem_cgroup_from_res_counter(counter, member)	\
> @@ -4216,21 +4205,19 @@ static int mem_control_stat_show(struct cgroup *cont, struct cftype *cft,
>  	{
>  		int nid, zid;
>  		struct mem_cgroup_per_zone *mz;
> +		struct zone_reclaim_stat *rstat;
>  		unsigned long recent_rotated[2] = {0, 0};
>  		unsigned long recent_scanned[2] = {0, 0};
>  
>  		for_each_online_node(nid)
>  			for (zid = 0; zid < MAX_NR_ZONES; zid++) {
>  				mz = mem_cgroup_zoneinfo(memcg, nid, zid);
> +				rstat = &mz->lruvec.reclaim_stat;
>  
> -				recent_rotated[0] +=
> -					mz->reclaim_stat.recent_rotated[0];
> -				recent_rotated[1] +=
> -					mz->reclaim_stat.recent_rotated[1];
> -				recent_scanned[0] +=
> -					mz->reclaim_stat.recent_scanned[0];
> -				recent_scanned[1] +=
> -					mz->reclaim_stat.recent_scanned[1];
> +				recent_rotated[0] += rstat->recent_rotated[0];
> +				recent_rotated[1] += rstat->recent_rotated[1];
> +				recent_scanned[0] += rstat->recent_scanned[0];
> +				recent_scanned[1] += rstat->recent_scanned[1];
>  			}
>  		cb->fill(cb, "recent_rotated_anon", recent_rotated[0]);
>  		cb->fill(cb, "recent_rotated_file", recent_rotated[1]);
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index caea788..95ac749 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4317,10 +4317,10 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
>  		zone_pcp_init(zone);
>  		for_each_lru(lru)
>  			INIT_LIST_HEAD(&zone->lruvec.lists[lru]);
> -		zone->reclaim_stat.recent_rotated[0] = 0;
> -		zone->reclaim_stat.recent_rotated[1] = 0;
> -		zone->reclaim_stat.recent_scanned[0] = 0;
> -		zone->reclaim_stat.recent_scanned[1] = 0;
> +		zone->lruvec.reclaim_stat.recent_rotated[0] = 0;
> +		zone->lruvec.reclaim_stat.recent_rotated[1] = 0;
> +		zone->lruvec.reclaim_stat.recent_scanned[0] = 0;
> +		zone->lruvec.reclaim_stat.recent_scanned[1] = 0;
>  		zap_zone_vm_stats(zone);
>  		zone->flags = 0;
>  		if (!size)
> diff --git a/mm/swap.c b/mm/swap.c
> index 5c13f13..60d14da 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -279,21 +279,15 @@ void rotate_reclaimable_page(struct page *page)
>  static void update_page_reclaim_stat(struct zone *zone, struct page *page,
>  				     int file, int rotated)
>  {
> -	struct zone_reclaim_stat *reclaim_stat = &zone->reclaim_stat;
> -	struct zone_reclaim_stat *memcg_reclaim_stat;
> +	struct zone_reclaim_stat *reclaim_stat;
>  
> -	memcg_reclaim_stat = mem_cgroup_get_reclaim_stat_from_page(page);
> +	reclaim_stat = mem_cgroup_get_reclaim_stat_from_page(page);
> +	if (!reclaim_stat)
> +		reclaim_stat = &zone->lruvec.reclaim_stat;
>  
>  	reclaim_stat->recent_scanned[file]++;
>  	if (rotated)
>  		reclaim_stat->recent_rotated[file]++;
> -
> -	if (!memcg_reclaim_stat)
> -		return;
> -
> -	memcg_reclaim_stat->recent_scanned[file]++;
> -	if (rotated)
> -		memcg_reclaim_stat->recent_rotated[file]++;
>  }
>  
>  static void __activate_page(struct page *page, void *arg)
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index c684f44..f4dca0c 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -173,10 +173,7 @@ static bool global_reclaim(struct scan_control *sc)
>  
>  static struct zone_reclaim_stat *get_reclaim_stat(struct mem_cgroup_zone *mz)
>  {
> -	if (!mem_cgroup_disabled())
> -		return mem_cgroup_get_reclaim_stat(mz->mem_cgroup, mz->zone);
> -
> -	return &mz->zone->reclaim_stat;
> +	return &mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup)->reclaim_stat;
>  }
>  
>  static unsigned long zone_nr_lru_pages(struct mem_cgroup_zone *mz,
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Fight unfair telecom internet charges in Canada: sign http://stopthemeter.ca/
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled
  2012-03-26 15:04     ` Michal Hocko
@ 2012-03-26 15:18       ` Johannes Weiner
  -1 siblings, 0 replies; 42+ messages in thread
From: Johannes Weiner @ 2012-03-26 15:18 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Konstantin Khlebnikov, linux-mm, Andrew Morton, linux-kernel,
	Hugh Dickins, KAMEZAWA Hiroyuki, Glauber Costa

On Mon, Mar 26, 2012 at 05:04:29PM +0200, Michal Hocko wrote:
> [Adding Johannes to CC]
> 
> On Fri 23-03-12 01:56:16, Konstantin Khlebnikov wrote:
> > From: Hugh Dickins <hughd@google.com>
> > 
> > Although one has to admire the skill with which it has been concealed,
> > scanning_global_lru(mz) is actually just an interesting way to test
> > mem_cgroup_disabled().  Too many developer hours have been wasted on
> > confusing it with global_reclaim(): just use mem_cgroup_disabled().
> 
> Is this really correct?

Yes, if the memory controller is enabled, we never have a global LRU
and always scan the per-memcg lists.
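
In other words, the pre-patch helper was approximately this (a sketch, not
quoted from this series):

	static bool scanning_global_lru(struct mem_cgroup_zone *mz)
	{
		/*
		 * mz->mem_cgroup is NULL only when memcg is disabled;
		 * with memcg enabled even global reclaim iterates the
		 * (root) memcg lruvecs, so this never fires there --
		 * hence it is equivalent to mem_cgroup_disabled().
		 */
		return !mz->mem_cgroup;
	}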

> > Signed-off-by: Hugh Dickins <hughd@google.com>
> > Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> > Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > Acked-by: Glauber Costa <glommer@parallels.com>
> > ---
> >  mm/vmscan.c |   18 ++++--------------
> >  1 files changed, 4 insertions(+), 14 deletions(-)
> > 
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 49f15ef..c684f44 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> [...]
> > @@ -1806,7 +1796,7 @@ static int inactive_anon_is_low(struct mem_cgroup_zone *mz)
> >  	if (!total_swap_pages)
> >  		return 0;
> >  
> > -	if (!scanning_global_lru(mz))
> > +	if (!mem_cgroup_disabled())
> >  		return mem_cgroup_inactive_anon_is_low(mz->mem_cgroup,
> >  						       mz->zone);
> 
> The mem_cgroup_inactive_anon_is_low calculation is slightly different from
> what we have for the cgroup_disabled case: calculate_zone_inactive_ratio
> considers _all_ present pages in the zone, while the memcg variant considers
> only active+inactive.

The memcg has nothing to go by but the actual number of LRU pages; there
is no 'present pages' equivalent.

I don't think that it matters much in reality given the sqrt scale,
but the difference is still unfortunate.  Konstantin has been meaning to
unify all this, though.
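
To put a rough number on it, using the int_sqrt(10 * gb) rule: for 4GB of
pages the ratio is int_sqrt(40) = 6, and even if the memcg's active+inactive
total were only half the zone's present pages, int_sqrt(20) = 4 -- the
"is low" threshold shifts from 6:1 to 4:1, which reclaim barely notices.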

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 6/7] mm/memcg: kill mem_cgroup_lru_del()
  2012-03-22 21:56   ` Konstantin Khlebnikov
@ 2012-03-26 15:23     ` Michal Hocko
  -1 siblings, 0 replies; 42+ messages in thread
From: Michal Hocko @ 2012-03-26 15:23 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins, KAMEZAWA Hiroyuki

On Fri 23-03-12 01:56:39, Konstantin Khlebnikov wrote:
> This patch kills mem_cgroup_lru_del(), we can use mem_cgroup_lru_del_list()
> instead. On 0-order isolation we already have right lru list id.
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Cc: Hugh Dickins <hughd@google.com>

Yes, looks good
Acked-by: Michal Hocko <mhocko@suse.cz>

Just a small nit...
[...]
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 5f6ed98..9de66be 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
[...]
> @@ -1205,8 +1205,11 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  
>  			if (__isolate_lru_page(cursor_page, mode) == 0) {
>  				unsigned int isolated_pages;
> +				enum lru_list cursor_lru;
>  
> -				mem_cgroup_lru_del(cursor_page);
> +				cursor_lru = page_lru(cursor_page);
> +				mem_cgroup_lru_del_list(cursor_page,
> +							cursor_lru);

Why not mem_cgroup_lru_del_list(cursor_page,
				page_lru(cursor_page));
The patch would be smaller and it wouldn't make checkpatch unhappy either.

>  				list_move(&cursor_page->lru, dst);
>  				isolated_pages = hpage_nr_pages(cursor_page);
>  				nr_taken += isolated_pages;
> 

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled
  2012-03-26 15:18       ` Johannes Weiner
@ 2012-03-26 15:31         ` Michal Hocko
  -1 siblings, 0 replies; 42+ messages in thread
From: Michal Hocko @ 2012-03-26 15:31 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: Konstantin Khlebnikov, linux-mm, Andrew Morton, linux-kernel,
	Hugh Dickins, KAMEZAWA Hiroyuki, Glauber Costa

On Mon 26-03-12 17:18:15, Johannes Weiner wrote:
> On Mon, Mar 26, 2012 at 05:04:29PM +0200, Michal Hocko wrote:
> > [Adding Johannes to CC]
> > 
> > On Fri 23-03-12 01:56:16, Konstantin Khlebnikov wrote:
> > > From: Hugh Dickins <hughd@google.com>
> > > 
> > > Although one has to admire the skill with which it has been concealed,
> > > scanning_global_lru(mz) is actually just an interesting way to test
> > > mem_cgroup_disabled().  Too many developer hours have been wasted on
> > > confusing it with global_reclaim(): just use mem_cgroup_disabled().
> > 
> > Is this really correct?
> 
> Yes, if the memory controller is enabled, we never have a global LRU
> and always scan the per-memcg lists.
> 
> > > Signed-off-by: Hugh Dickins <hughd@google.com>
> > > Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> > > Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> > > Acked-by: Glauber Costa <glommer@parallels.com>
> > > ---
> > >  mm/vmscan.c |   18 ++++--------------
> > >  1 files changed, 4 insertions(+), 14 deletions(-)
> > > 
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 49f15ef..c684f44 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > [...]
> > > @@ -1806,7 +1796,7 @@ static int inactive_anon_is_low(struct mem_cgroup_zone *mz)
> > >  	if (!total_swap_pages)
> > >  		return 0;
> > >  
> > > -	if (!scanning_global_lru(mz))
> > > +	if (!mem_cgroup_disabled())
> > >  		return mem_cgroup_inactive_anon_is_low(mz->mem_cgroup,
> > >  						       mz->zone);
> > 
> > The mem_cgroup_inactive_anon_is_low calculation is slightly different from
> > what we have for the cgroup_disabled case: calculate_zone_inactive_ratio
> > considers _all_ present pages in the zone, while the memcg variant considers
> > only active+inactive.
> 
> The memcg has nothing to go by but the actual number of LRU pages; there
> is no 'present pages' equivalent.

Yes

> I don't think that it matters much in reality given the sqrt scale,
> but the difference is still unfortunate.  

OK, you are probably right that the scale is too small to be a problem.
I guess a note about the changed ratio calculation should be added to
the changelog.

> Konstantin has been meaning to unify all this, though.

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 3/7] mm: push lru index into shrink_[in]active_list()
  2012-03-22 21:56   ` Konstantin Khlebnikov
@ 2012-03-26 15:39     ` Michal Hocko
  -1 siblings, 0 replies; 42+ messages in thread
From: Michal Hocko @ 2012-03-26 15:39 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins, KAMEZAWA Hiroyuki

On Fri 23-03-12 01:56:27, Konstantin Khlebnikov wrote:
> Let's pass the lru index down the call stack to isolate_lru_pages();
> this is better than reconstructing it from individual bits.
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
> Acked-by: Hugh Dickins <hughd@google.com>

Looks nice
Reviewed-by: Michal Hocko <mhocko@suse.cz>
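
For anyone reading along, the bits being reconstructed come from the lru
list id encoding, roughly this in <linux/mmzone.h> at the time (a sketch
from memory, not verbatim):

	#define LRU_BASE	0
	#define LRU_ACTIVE	1
	#define LRU_FILE	2

	enum lru_list {
		LRU_INACTIVE_ANON = LRU_BASE,
		LRU_ACTIVE_ANON = LRU_BASE + LRU_ACTIVE,
		LRU_INACTIVE_FILE = LRU_BASE + LRU_FILE,
		LRU_ACTIVE_FILE = LRU_BASE + LRU_FILE + LRU_ACTIVE,
		LRU_UNEVICTABLE,
		NR_LRU_LISTS
	};

So the enum already carries both the active and file bits, and helpers
like is_file_lru() can recover them at any point in the call chain.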

> ---
>  mm/vmscan.c |   41 +++++++++++++++++------------------------
>  1 files changed, 17 insertions(+), 24 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index f4dca0c..fb6d54e 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1127,15 +1127,14 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
>   * @nr_scanned:	The number of pages that were scanned.
>   * @sc:		The scan_control struct for this reclaim session
>   * @mode:	One of the LRU isolation modes
> - * @active:	True [1] if isolating active pages
> - * @file:	True [1] if isolating file [!anon] pages
> + * @lru:	LRU list id for isolating
>   *
>   * returns how many pages were moved onto *@dst.
>   */
>  static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  		struct mem_cgroup_zone *mz, struct list_head *dst,
>  		unsigned long *nr_scanned, struct scan_control *sc,
> -		isolate_mode_t mode, int active, int file)
> +		isolate_mode_t mode, enum lru_list lru)
>  {
>  	struct lruvec *lruvec;
>  	struct list_head *src;
> @@ -1144,13 +1143,9 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>  	unsigned long nr_lumpy_dirty = 0;
>  	unsigned long nr_lumpy_failed = 0;
>  	unsigned long scan;
> -	int lru = LRU_BASE;
> +	int file = is_file_lru(lru);
>  
>  	lruvec = mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup);
> -	if (active)
> -		lru += LRU_ACTIVE;
> -	if (file)
> -		lru += LRU_FILE;
>  	src = &lruvec->lists[lru];
>  
>  	for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
> @@ -1487,7 +1482,7 @@ static inline bool should_reclaim_stall(unsigned long nr_taken,
>   */
>  static noinline_for_stack unsigned long
>  shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
> -		     struct scan_control *sc, int priority, int file)
> +		     struct scan_control *sc, int priority, enum lru_list lru)
>  {
>  	LIST_HEAD(page_list);
>  	unsigned long nr_scanned;
> @@ -1498,6 +1493,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
>  	unsigned long nr_dirty = 0;
>  	unsigned long nr_writeback = 0;
>  	isolate_mode_t isolate_mode = ISOLATE_INACTIVE;
> +	int file = is_file_lru(lru);
>  	struct zone *zone = mz->zone;
>  	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
>  
> @@ -1523,7 +1519,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
>  	spin_lock_irq(&zone->lru_lock);
>  
>  	nr_taken = isolate_lru_pages(nr_to_scan, mz, &page_list, &nr_scanned,
> -				     sc, isolate_mode, 0, file);
> +				     sc, isolate_mode, lru);
>  	if (global_reclaim(sc)) {
>  		zone->pages_scanned += nr_scanned;
>  		if (current_is_kswapd())
> @@ -1661,7 +1657,7 @@ static void move_active_pages_to_lru(struct zone *zone,
>  static void shrink_active_list(unsigned long nr_to_scan,
>  			       struct mem_cgroup_zone *mz,
>  			       struct scan_control *sc,
> -			       int priority, int file)
> +			       int priority, enum lru_list lru)
>  {
>  	unsigned long nr_taken;
>  	unsigned long nr_scanned;
> @@ -1673,6 +1669,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
>  	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
>  	unsigned long nr_rotated = 0;
>  	isolate_mode_t isolate_mode = ISOLATE_ACTIVE;
> +	int file = is_file_lru(lru);
>  	struct zone *zone = mz->zone;
>  
>  	lru_add_drain();
> @@ -1687,17 +1684,14 @@ static void shrink_active_list(unsigned long nr_to_scan,
>  	spin_lock_irq(&zone->lru_lock);
>  
>  	nr_taken = isolate_lru_pages(nr_to_scan, mz, &l_hold, &nr_scanned, sc,
> -				     isolate_mode, 1, file);
> +				     isolate_mode, lru);
>  	if (global_reclaim(sc))
>  		zone->pages_scanned += nr_scanned;
>  
>  	reclaim_stat->recent_scanned[file] += nr_taken;
>  
>  	__count_zone_vm_events(PGREFILL, zone, nr_scanned);
> -	if (file)
> -		__mod_zone_page_state(zone, NR_ACTIVE_FILE, -nr_taken);
> -	else
> -		__mod_zone_page_state(zone, NR_ACTIVE_ANON, -nr_taken);
> +	__mod_zone_page_state(zone, NR_LRU_BASE + lru, -nr_taken);
>  	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, nr_taken);
>  	spin_unlock_irq(&zone->lru_lock);
>  
> @@ -1752,10 +1746,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
>  	 */
>  	reclaim_stat->recent_rotated[file] += nr_rotated;
>  
> -	move_active_pages_to_lru(zone, &l_active, &l_hold,
> -						LRU_ACTIVE + file * LRU_FILE);
> -	move_active_pages_to_lru(zone, &l_inactive, &l_hold,
> -						LRU_BASE   + file * LRU_FILE);
> +	move_active_pages_to_lru(zone, &l_active, &l_hold, lru);
> +	move_active_pages_to_lru(zone, &l_inactive, &l_hold, lru - LRU_ACTIVE);
>  	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, -nr_taken);
>  	spin_unlock_irq(&zone->lru_lock);
>  
> @@ -1855,11 +1847,11 @@ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
>  
>  	if (is_active_lru(lru)) {
>  		if (inactive_list_is_low(mz, file))
> -			shrink_active_list(nr_to_scan, mz, sc, priority, file);
> +			shrink_active_list(nr_to_scan, mz, sc, priority, lru);
>  		return 0;
>  	}
>  
> -	return shrink_inactive_list(nr_to_scan, mz, sc, priority, file);
> +	return shrink_inactive_list(nr_to_scan, mz, sc, priority, lru);
>  }
>  
>  static int vmscan_swappiness(struct mem_cgroup_zone *mz,
> @@ -2110,7 +2102,8 @@ restart:
>  	 * rebalance the anon lru active/inactive ratio.
>  	 */
>  	if (inactive_anon_is_low(mz))
> -		shrink_active_list(SWAP_CLUSTER_MAX, mz, sc, priority, 0);
> +		shrink_active_list(SWAP_CLUSTER_MAX, mz,
> +				   sc, priority, LRU_ACTIVE_ANON);
>  
>  	/* reclaim/compaction might need reclaim to continue */
>  	if (should_continue_reclaim(mz, nr_reclaimed,
> @@ -2550,7 +2543,7 @@ static void age_active_anon(struct zone *zone, struct scan_control *sc,
>  
>  		if (inactive_anon_is_low(&mz))
>  			shrink_active_list(SWAP_CLUSTER_MAX, &mz,
> -					   sc, priority, 0);
> +					   sc, priority, LRU_ACTIVE_ANON);
>  
>  		memcg = mem_cgroup_iter(NULL, memcg, NULL);
>  	} while (memcg);

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 7/7] mm/memcg: use vm_swappiness from target memory cgroup
  2012-03-22 21:56   ` Konstantin Khlebnikov
@ 2012-03-26 15:59     ` Michal Hocko
  -1 siblings, 0 replies; 42+ messages in thread
From: Michal Hocko @ 2012-03-26 15:59 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, linux-kernel, KAMEZAWA Hiroyuki

On Fri 23-03-12 01:56:43, Konstantin Khlebnikov wrote:
> Use vm_swappiness from the memory cgroup which triggered this memory reclaim.
> This is more reasonable and allows us to kill one argument.

Could you be more specific about why this is more reasonable?
I am afraid this might lead to unexpected behavior when the target
memcg has quite high swappiness while other groups in the hierarchy have
it set to 0, so we would end up swapping even from those groups.
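
To make that concrete: with this patch every memcg visited by the
hierarchical reclaim walk gets its anon/file balance from the target,
i.e. roughly (sketch of the resulting get_scan_count() behaviour):

	/* one value for the whole walk, taken from the reclaim target */
	anon_prio = mem_cgroup_swappiness(sc->target_mem_cgroup);
	file_prio = 200 - anon_prio;

whereas before it was mem_cgroup_swappiness(mz->mem_cgroup), evaluated
per group being scanned.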

> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
> Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujtisu.com>
> ---
>  mm/vmscan.c |    9 ++++-----
>  1 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 9de66be..5e2906d 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1840,12 +1840,11 @@ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
>  	return shrink_inactive_list(nr_to_scan, mz, sc, priority, lru);
>  }
>  
> -static int vmscan_swappiness(struct mem_cgroup_zone *mz,
> -			     struct scan_control *sc)
> +static int vmscan_swappiness(struct scan_control *sc)
>  {
>  	if (global_reclaim(sc))
>  		return vm_swappiness;
> -	return mem_cgroup_swappiness(mz->mem_cgroup);
> +	return mem_cgroup_swappiness(sc->target_mem_cgroup);
>  }
>  
>  /*
> @@ -1913,8 +1912,8 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
>  	 * With swappiness at 100, anonymous and file have the same priority.
>  	 * This scanning priority is essentially the inverse of IO cost.
>  	 */
> -	anon_prio = vmscan_swappiness(mz, sc);
> -	file_prio = 200 - vmscan_swappiness(mz, sc);
> +	anon_prio = vmscan_swappiness(sc);
> +	file_prio = 200 - vmscan_swappiness(sc);
>  
>  	/*
>  	 * OK, so we have swap space and a fair amount of page cache

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 6/7] mm/memcg: kill mem_cgroup_lru_del()
  2012-03-26 15:23     ` Michal Hocko
@ 2012-03-26 16:05       ` Konstantin Khlebnikov
  -1 siblings, 0 replies; 42+ messages in thread
From: Konstantin Khlebnikov @ 2012-03-26 16:05 UTC (permalink / raw)
  To: Michal Hocko
  Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins, KAMEZAWA Hiroyuki

Michal Hocko wrote:
> On Fri 23-03-12 01:56:39, Konstantin Khlebnikov wrote:
>> This patch kills mem_cgroup_lru_del(); we can use mem_cgroup_lru_del_list()
>> instead. On 0-order isolation we already have the right lru list id.
>>
>> Signed-off-by: Konstantin Khlebnikov<khlebnikov@openvz.org>
>> Cc: KAMEZAWA Hiroyuki<kamezawa.hiroyu@jp.fujitsu.com>
>> Cc: Hugh Dickins<hughd@google.com>
>
> Yes, looks good
> Acked-by: Michal Hocko<mhocko@suse.cz>
>
> Just a small nit..
> [...]
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 5f6ed98..9de66be 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
> [...]
>> @@ -1205,8 +1205,11 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
>>
>>   			if (__isolate_lru_page(cursor_page, mode) == 0) {
>>   				unsigned int isolated_pages;
>> +				enum lru_list cursor_lru;
>>
>> -				mem_cgroup_lru_del(cursor_page);
>> +				cursor_lru = page_lru(cursor_page);
>> +				mem_cgroup_lru_del_list(cursor_page,
>> +							cursor_lru);
>
> Why not mem_cgroup_lru_del_list(cursor_page,
> 				page_lru(cursor_page));
> The patch would be smaller and it wouldn't make checkpatch unhappy either.

Lumpy reclaim is supposed to be removed soon, so...
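
(For context, the helper being killed is just a trivial wrapper, roughly:

	void mem_cgroup_lru_del(struct page *page)
	{
		mem_cgroup_lru_del_list(page, page_lru(page));
	}

so open-coding the lru id at the one remaining lumpy call site loses
nothing.)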

>
>>   				list_move(&cursor_page->lru, dst);
>>   				isolated_pages = hpage_nr_pages(cursor_page);
>>   				nr_taken += isolated_pages;

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled
  2012-03-26 15:31         ` Michal Hocko
@ 2012-03-26 21:39           ` Hugh Dickins
  -1 siblings, 0 replies; 42+ messages in thread
From: Hugh Dickins @ 2012-03-26 21:39 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Johannes Weiner, Konstantin Khlebnikov, linux-mm, Andrew Morton,
	linux-kernel, KAMEZAWA Hiroyuki, Glauber Costa

On Mon, 26 Mar 2012, Michal Hocko wrote:
> 
> I guess that a note about changed ratio calculation should be added to
> the changelog.

To the changelog of a patch which changes the ratio calculation, yes; but
not to the changelog of this patch, which changes only the name of the test.

Hugh

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v6 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled
  2012-03-26 21:39           ` Hugh Dickins
@ 2012-03-27  7:46             ` Michal Hocko
  -1 siblings, 0 replies; 42+ messages in thread
From: Michal Hocko @ 2012-03-27  7:46 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Johannes Weiner, Konstantin Khlebnikov, linux-mm, Andrew Morton,
	linux-kernel, KAMEZAWA Hiroyuki, Glauber Costa

On Mon 26-03-12 14:39:49, Hugh Dickins wrote:
> On Mon, 26 Mar 2012, Michal Hocko wrote:
> > 
> > I guess that a note about changed ratio calculation should be added to
> > the changelog.
> 
> To the changelog of a patch which changes the ratio calculation, yes; but
> not to the changelog of this patch, which changes only the name of the test.

You are right. I somehow missed the important point that we have had a
different ratio calculation since the memcg naturalization...

Sorry for the noise.

> 
> Hugh

-- 
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9    
Czech Republic

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread

Thread overview:
2012-03-22 21:56 [PATCH v6 0/7] mm: some simple cleanups Konstantin Khlebnikov
2012-03-22 21:56 ` [PATCH v6 1/7] mm/memcg: scanning_global_lru means mem_cgroup_disabled Konstantin Khlebnikov
2012-03-26 15:04   ` Michal Hocko
2012-03-26 15:18     ` Johannes Weiner
2012-03-26 15:31       ` Michal Hocko
2012-03-26 21:39         ` Hugh Dickins
2012-03-27  7:46           ` Michal Hocko
2012-03-22 21:56 ` [PATCH v6 2/7] mm/memcg: move reclaim_stat into lruvec Konstantin Khlebnikov
2012-03-26 15:16   ` Michal Hocko
2012-03-22 21:56 ` [PATCH v6 3/7] mm: push lru index into shrink_[in]active_list() Konstantin Khlebnikov
2012-03-23  4:48   ` Minchan Kim
2012-03-26 15:39   ` Michal Hocko
2012-03-22 21:56 ` [PATCH v6 4/7] mm: mark mm-inline functions as __always_inline Konstantin Khlebnikov
2012-03-22 21:56 ` [PATCH v6 5/7] mm: remove lru type checks from __isolate_lru_page() Konstantin Khlebnikov
2012-03-23  1:46   ` KAMEZAWA Hiroyuki
2012-03-22 21:56 ` [PATCH v6 6/7] mm/memcg: kill mem_cgroup_lru_del() Konstantin Khlebnikov
2012-03-23  1:48   ` KAMEZAWA Hiroyuki
2012-03-26 15:23   ` Michal Hocko
2012-03-26 16:05     ` Konstantin Khlebnikov
2012-03-22 21:56 ` [PATCH v6 7/7] mm/memcg: use vm_swappiness from target memory cgroup Konstantin Khlebnikov
2012-03-26 15:59   ` Michal Hocko
