* [PATCH v2 00/22] mm: lru_lock splitting
@ 2012-02-20 17:22 Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 01/22] memcg: rework inactive_ratio logic Konstantin Khlebnikov
From: Konstantin Khlebnikov @ 2012-02-20 17:22 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Here is the complete patch set with my lru_lock splitting,
plus all related preparations and cleanups, rebased to next-20120210.

git: https://github.com/koct9i/linux/commits/lruvec

main changes:
* rebase
* sed -e 's/book/lruvec/g'
* fixed locking
* some cleaning and reordering

---

Konstantin Khlebnikov (22):
      memcg: rework inactive_ratio logic
      memcg: fix page_referencies cgroup filter on global reclaim
      memcg: use vm_swappiness from current memcg
      mm: drain percpu lru add/rotate page-vectors on cpu hot-unplug
      mm: replace per-cpu lru-add page-vectors with page-lists
      mm: deprecate pagevec lru-add functions
      mm: rename lruvec->lists into lruvec->pages_lru
      mm: add lruvec->pages_count
      mm: link lruvec with zone and node
      mm: unify inactive_list_is_low()
      mm: add lruvec->reclaim_stat
      mm: kill struct mem_cgroup_zone
      mm: move page-to-lruvec translation upper
      mm: push lruvec into update_page_reclaim_stat()
      mm: push lruvecs from pagevec_lru_move_fn() to iterator
      mm: introduce lruvec locking primitives
      mm: handle lruvec relocks on lumpy reclaim
      mm: handle lruvec relocks in compaction
      mm: handle lruvec relock in memory controller
      mm: optimize putback for 0-order reclaim
      mm: free lruvec in memcgroup via rcu
      mm: split zone->lru_lock


 fs/mpage.c                 |   21 +-
 fs/nfs/dir.c               |   10 -
 include/linux/memcontrol.h |   62 -------
 include/linux/mm.h         |   37 ++++
 include/linux/mm_inline.h  |   19 +-
 include/linux/mmzone.h     |   19 +-
 include/linux/pagevec.h    |    4 
 include/linux/swap.h       |    5 -
 mm/compaction.c            |   30 ++-
 mm/huge_memory.c           |   10 +
 mm/internal.h              |  226 +++++++++++++++++++++++++
 mm/memcontrol.c            |  320 ++++++++++++++---------------------
 mm/page_alloc.c            |   19 +-
 mm/readahead.c             |   15 +-
 mm/swap.c                  |  256 ++++++++++++++++------------
 mm/vmscan.c                |  402 +++++++++++++++++++++-----------------------
 16 files changed, 821 insertions(+), 634 deletions(-)

-- 
Signature


* [PATCH v2 01/22] memcg: rework inactive_ratio logic
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
@ 2012-02-20 17:22 ` Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 02/22] memcg: fix page_referencies cgroup filter on global reclaim Konstantin Khlebnikov
From: Konstantin Khlebnikov @ 2012-02-20 17:22 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

This patch adds mem_cgroup->inactive_ratio, calculated from the hierarchical memory limit.
It is updated on every limit change, before the cgroup is shrunk to the new limit.
Ratios for all child cgroups are updated too, because the parent limit can affect them.
The update procedure can be optimized further if its performance ever becomes a problem.
The inactive ratio for an unlimited or huge limit does not matter, because we will never hit it.

Global reclaim always uses the global ratio from zone->inactive_ratio.
Memcg reclaim uses the inactive_ratio of the target memory cgroup, i.e. the
cgroup which hit its limit and caused this reclaimer invocation.

Thus the global memory reclaimer tries to keep the ratio of every lru list in a zone
above one mark, which guarantees that the total ratio in that zone stays above it too.
Meanwhile memcg reclaim does the same for its lru lists in all zones, and for the
lru lists of all sub-cgroups in the hierarchy.

This patch also removes some redundant code.
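
For illustration, a condensed sketch of the ratio math added below; it mirrors
mem_cgroup_update_inactive_ratio() in the hunk (the helper name here is made up,
and the 4G limit is just an example):

	/* inactive_ratio for a given hierarchical limit, in bytes */
	static unsigned int inactive_ratio_for_limit(unsigned long long limit)
	{
		unsigned long long gb = limit >> 30;	/* whole gigabytes */

		if (gb && 10 * gb < INT_MAX)
			return int_sqrt(10 * gb);	/* e.g. 4G -> int_sqrt(40) = 6 */
		return 1;
	}

	/* reclaim then deactivates active anon only while inactive * ratio < active */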

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 include/linux/memcontrol.h |   16 ++------
 mm/memcontrol.c            |   85 ++++++++++++++++++++++++--------------------
 mm/vmscan.c                |   82 +++++++++++++++++++++++-------------------
 3 files changed, 93 insertions(+), 90 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index bf4e1f4..4fbe18a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -113,10 +113,7 @@ void mem_cgroup_iter_break(struct mem_cgroup *, struct mem_cgroup *);
 /*
  * For memory reclaim.
  */
-int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg,
-				    struct zone *zone);
-int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg,
-				    struct zone *zone);
+unsigned int mem_cgroup_inactive_ratio(struct mem_cgroup *memcg);
 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
 unsigned long mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg,
 					int nid, int zid, unsigned int lrumask);
@@ -319,16 +316,9 @@ static inline bool mem_cgroup_disabled(void)
 	return true;
 }
 
-static inline int
-mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg, struct zone *zone)
-{
-	return 1;
-}
-
-static inline int
-mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg, struct zone *zone)
+static inline unsigned int mem_cgroup_inactive_ratio(struct mem_cgroup *memcg)
 {
-	return 1;
+	return 0;
 }
 
 static inline unsigned long
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ab315ab..fe0b8fb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -210,6 +210,8 @@ struct mem_cgroup_eventfd_list {
 
 static void mem_cgroup_threshold(struct mem_cgroup *memcg);
 static void mem_cgroup_oom_notify(struct mem_cgroup *memcg);
+static void memcg_get_hierarchical_limit(struct mem_cgroup *memcg,
+		unsigned long long *mem_limit, unsigned long long *memsw_limit);
 
 /*
  * The memory controller data structure. The memory controller controls both
@@ -254,6 +256,10 @@ struct mem_cgroup {
 	atomic_t	refcnt;
 
 	int	swappiness;
+
+	/* The target ratio of ACTIVE_ANON to INACTIVE_ANON pages */
+	unsigned int inactive_ratio;
+
 	/* OOM-Killer disable */
 	int		oom_kill_disable;
 
@@ -1157,44 +1163,6 @@ int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *memcg)
 	return ret;
 }
 
-int mem_cgroup_inactive_anon_is_low(struct mem_cgroup *memcg, struct zone *zone)
-{
-	unsigned long inactive_ratio;
-	int nid = zone_to_nid(zone);
-	int zid = zone_idx(zone);
-	unsigned long inactive;
-	unsigned long active;
-	unsigned long gb;
-
-	inactive = mem_cgroup_zone_nr_lru_pages(memcg, nid, zid,
-						BIT(LRU_INACTIVE_ANON));
-	active = mem_cgroup_zone_nr_lru_pages(memcg, nid, zid,
-					      BIT(LRU_ACTIVE_ANON));
-
-	gb = (inactive + active) >> (30 - PAGE_SHIFT);
-	if (gb)
-		inactive_ratio = int_sqrt(10 * gb);
-	else
-		inactive_ratio = 1;
-
-	return inactive * inactive_ratio < active;
-}
-
-int mem_cgroup_inactive_file_is_low(struct mem_cgroup *memcg, struct zone *zone)
-{
-	unsigned long active;
-	unsigned long inactive;
-	int zid = zone_idx(zone);
-	int nid = zone_to_nid(zone);
-
-	inactive = mem_cgroup_zone_nr_lru_pages(memcg, nid, zid,
-						BIT(LRU_INACTIVE_FILE));
-	active = mem_cgroup_zone_nr_lru_pages(memcg, nid, zid,
-					      BIT(LRU_ACTIVE_FILE));
-
-	return (active > inactive);
-}
-
 struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
 						      struct zone *zone)
 {
@@ -3374,6 +3342,32 @@ void mem_cgroup_print_bad_page(struct page *page)
 
 static DEFINE_MUTEX(set_limit_mutex);
 
+/*
+ * Update inactive_ratio accoring to new memory limit
+ */
+static void mem_cgroup_update_inactive_ratio(struct mem_cgroup *memcg,
+					     unsigned long long target)
+{
+	unsigned long long mem_limit, memsw_limit, gb;
+	struct mem_cgroup *iter;
+
+	for_each_mem_cgroup_tree(iter, memcg) {
+		memcg_get_hierarchical_limit(iter, &mem_limit, &memsw_limit);
+		mem_limit = min(mem_limit, target);
+
+		gb = mem_limit >> 30;
+		if (gb && 10 * gb < INT_MAX)
+			iter->inactive_ratio = int_sqrt(10 * gb);
+		else
+			iter->inactive_ratio = 1;
+	}
+}
+
+unsigned int mem_cgroup_inactive_ratio(struct mem_cgroup *memcg)
+{
+	return memcg->inactive_ratio;
+}
+
 static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
 				unsigned long long val)
 {
@@ -3423,6 +3417,7 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
 			else
 				memcg->memsw_is_minimum = false;
 		}
+		mem_cgroup_update_inactive_ratio(memcg, val);
 		mutex_unlock(&set_limit_mutex);
 
 		if (!ret)
@@ -3440,6 +3435,12 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
 	if (!ret && enlarge)
 		memcg_oom_recover(memcg);
 
+	if (ret) {
+		mutex_lock(&set_limit_mutex);
+		mem_cgroup_update_inactive_ratio(memcg, RESOURCE_MAX);
+		mutex_unlock(&set_limit_mutex);
+	}
+
 	return ret;
 }
 
@@ -4155,6 +4156,8 @@ static int mem_control_stat_show(struct cgroup *cont, struct cftype *cft,
 	}
 
 #ifdef CONFIG_DEBUG_VM
+	cb->fill(cb, "inactive_ratio", memcg->inactive_ratio);
+
 	{
 		int nid, zid;
 		struct mem_cgroup_per_zone *mz;
@@ -4934,8 +4937,12 @@ mem_cgroup_create(struct cgroup *cont)
 	memcg->last_scanned_node = MAX_NUMNODES;
 	INIT_LIST_HEAD(&memcg->oom_notify);
 
-	if (parent)
+	if (parent) {
 		memcg->swappiness = mem_cgroup_swappiness(parent);
+		memcg->inactive_ratio = parent->inactive_ratio;
+	} else
+		memcg->inactive_ratio = 1;
+
 	atomic_set(&memcg->refcnt, 1);
 	memcg->move_charge_at_immigrate = 0;
 	mutex_init(&memcg->thresholds_lock);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 87e4d6a..ee4d87a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1779,19 +1779,6 @@ static void shrink_active_list(unsigned long nr_to_scan,
 }
 
 #ifdef CONFIG_SWAP
-static int inactive_anon_is_low_global(struct zone *zone)
-{
-	unsigned long active, inactive;
-
-	active = zone_page_state(zone, NR_ACTIVE_ANON);
-	inactive = zone_page_state(zone, NR_INACTIVE_ANON);
-
-	if (inactive * zone->inactive_ratio < active)
-		return 1;
-
-	return 0;
-}
-
 /**
  * inactive_anon_is_low - check if anonymous pages need to be deactivated
  * @zone: zone to check
@@ -1800,8 +1787,12 @@ static int inactive_anon_is_low_global(struct zone *zone)
  * Returns true if the zone does not have enough inactive anon pages,
  * meaning some active anon pages need to be deactivated.
  */
-static int inactive_anon_is_low(struct mem_cgroup_zone *mz)
+static int inactive_anon_is_low(struct mem_cgroup_zone *mz,
+				struct scan_control *sc)
 {
+	unsigned long active, inactive;
+	unsigned int ratio;
+
 	/*
 	 * If we don't have swap space, anonymous page deactivation
 	 * is pointless.
@@ -1809,29 +1800,33 @@ static int inactive_anon_is_low(struct mem_cgroup_zone *mz)
 	if (!total_swap_pages)
 		return 0;
 
-	if (!scanning_global_lru(mz))
-		return mem_cgroup_inactive_anon_is_low(mz->mem_cgroup,
-						       mz->zone);
+	if (global_reclaim(sc))
+		ratio = mz->zone->inactive_ratio;
+	else
+		ratio = mem_cgroup_inactive_ratio(sc->target_mem_cgroup);
 
-	return inactive_anon_is_low_global(mz->zone);
+	if (scanning_global_lru(mz)) {
+		active = zone_page_state(mz->zone, NR_ACTIVE_ANON);
+		inactive = zone_page_state(mz->zone, NR_INACTIVE_ANON);
+	} else {
+		active = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
+				zone_to_nid(mz->zone), zone_idx(mz->zone),
+				BIT(LRU_ACTIVE_ANON));
+		inactive = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
+				zone_to_nid(mz->zone), zone_idx(mz->zone),
+				BIT(LRU_INACTIVE_ANON));
+	}
+
+	return inactive * ratio < active;
 }
 #else
-static inline int inactive_anon_is_low(struct mem_cgroup_zone *mz)
+static inline int inactive_anon_is_low(struct mem_cgroup_zone *mz,
+				       struct scan_control *sc)
 {
 	return 0;
 }
 #endif
 
-static int inactive_file_is_low_global(struct zone *zone)
-{
-	unsigned long active, inactive;
-
-	active = zone_page_state(zone, NR_ACTIVE_FILE);
-	inactive = zone_page_state(zone, NR_INACTIVE_FILE);
-
-	return (active > inactive);
-}
-
 /**
  * inactive_file_is_low - check if file pages need to be deactivated
  * @mz: memory cgroup and zone to check
@@ -1848,19 +1843,30 @@ static int inactive_file_is_low_global(struct zone *zone)
  */
 static int inactive_file_is_low(struct mem_cgroup_zone *mz)
 {
-	if (!scanning_global_lru(mz))
-		return mem_cgroup_inactive_file_is_low(mz->mem_cgroup,
-						       mz->zone);
+	unsigned long active, inactive;
+
+	if (scanning_global_lru(mz)) {
+		active = zone_page_state(mz->zone, NR_ACTIVE_FILE);
+		inactive = zone_page_state(mz->zone, NR_INACTIVE_FILE);
+	} else {
+		active = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
+				zone_to_nid(mz->zone), zone_idx(mz->zone),
+				BIT(LRU_ACTIVE_FILE));
+		inactive = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
+				zone_to_nid(mz->zone), zone_idx(mz->zone),
+				BIT(LRU_INACTIVE_FILE));
+	}
 
-	return inactive_file_is_low_global(mz->zone);
+	return inactive < active;
 }
 
-static int inactive_list_is_low(struct mem_cgroup_zone *mz, int file)
+static int inactive_list_is_low(struct mem_cgroup_zone *mz,
+				struct scan_control *sc, int file)
 {
 	if (file)
 		return inactive_file_is_low(mz);
 	else
-		return inactive_anon_is_low(mz);
+		return inactive_anon_is_low(mz, sc);
 }
 
 static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
@@ -1870,7 +1876,7 @@ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
 	int file = is_file_lru(lru);
 
 	if (is_active_lru(lru)) {
-		if (inactive_list_is_low(mz, file))
+		if (inactive_list_is_low(mz, sc, file))
 			shrink_active_list(nr_to_scan, mz, sc, priority, file);
 		return 0;
 	}
@@ -2125,7 +2131,7 @@ restart:
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (inactive_anon_is_low(mz))
+	if (inactive_anon_is_low(mz, sc))
 		shrink_active_list(SWAP_CLUSTER_MAX, mz, sc, priority, 0);
 
 	/* reclaim/compaction might need reclaim to continue */
@@ -2558,7 +2564,7 @@ static void age_active_anon(struct zone *zone, struct scan_control *sc,
 			.zone = zone,
 		};
 
-		if (inactive_anon_is_low(&mz))
+		if (inactive_anon_is_low(&mz, sc))
 			shrink_active_list(SWAP_CLUSTER_MAX, &mz,
 					   sc, priority, 0);
 



* [PATCH v2 02/22] memcg: fix page_referencies cgroup filter on global reclaim
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 01/22] memcg: rework inactive_ratio logic Konstantin Khlebnikov
@ 2012-02-20 17:22 ` Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 03/22] memcg: use vm_swappiness from current memcg Konstantin Khlebnikov
From: Konstantin Khlebnikov @ 2012-02-20 17:22 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

The global memory reclaimer should not skip references for any pages,
even if they are shared between different cgroups.

This patch adds scan_control->current_mem_cgroup, which points to the sub-cgroup
in the hierarchy that is currently being shrunk; during global reclaim it is always NULL.
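
In effect the reference filter becomes (a condensed view of the hunks below, not
new code):

	/* shrink_zone(): only limit reclaim restricts the filter */
	if (!global_reclaim(sc))
		sc->current_mem_cgroup = memcg;	/* stays NULL for global reclaim */

	/* page_check_references(): a NULL memcg means "count references from
	 * every cgroup", so shared pages keep their references on global reclaim */
	referenced_ptes = page_referenced(page, 1, sc->current_mem_cgroup, &vm_flags);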

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/vmscan.c |   18 ++++++++++++++----
 1 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index ee4d87a..4bb23ef 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -109,6 +109,12 @@ struct scan_control {
 	struct mem_cgroup *target_mem_cgroup;
 
 	/*
+	 * Currently reclaiming memory cgroup in hierarchy,
+	 * NULL for global reclaim.
+	 */
+	struct mem_cgroup *current_mem_cgroup;
+
+	/*
 	 * Nodemask of nodes allowed by the caller. If NULL, all nodes
 	 * are scanned.
 	 */
@@ -701,13 +707,13 @@ enum page_references {
 };
 
 static enum page_references page_check_references(struct page *page,
-						  struct mem_cgroup_zone *mz,
 						  struct scan_control *sc)
 {
 	int referenced_ptes, referenced_page;
 	unsigned long vm_flags;
 
-	referenced_ptes = page_referenced(page, 1, mz->mem_cgroup, &vm_flags);
+	referenced_ptes = page_referenced(page, 1,
+			sc->current_mem_cgroup, &vm_flags);
 	referenced_page = TestClearPageReferenced(page);
 
 	/* Lumpy reclaim - ignore references */
@@ -828,7 +834,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			}
 		}
 
-		references = page_check_references(page, mz, sc);
+		references = page_check_references(page, sc);
 		switch (references) {
 		case PAGEREF_ACTIVATE:
 			goto activate_locked;
@@ -1735,7 +1741,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 			continue;
 		}
 
-		if (page_referenced(page, 0, mz->mem_cgroup, &vm_flags)) {
+		if (page_referenced(page, 0, sc->current_mem_cgroup, &vm_flags)) {
 			nr_rotated += hpage_nr_pages(page);
 			/*
 			 * Identify referenced, file-backed active pages and
@@ -2159,6 +2165,9 @@ static void shrink_zone(int priority, struct zone *zone,
 			.zone = zone,
 		};
 
+		if (!global_reclaim(sc))
+			sc->current_mem_cgroup = memcg;
+
 		shrink_mem_cgroup_zone(priority, &mz, sc);
 		/*
 		 * Limit reclaim has historically picked one memcg and
@@ -2478,6 +2487,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg,
 		.may_swap = !noswap,
 		.order = 0,
 		.target_mem_cgroup = memcg,
+		.current_mem_cgroup = memcg,
 	};
 	struct mem_cgroup_zone mz = {
 		.mem_cgroup = memcg,



* [PATCH v2 03/22] memcg: use vm_swappiness from current memcg
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 01/22] memcg: rework inactive_ratio logic Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 02/22] memcg: fix page_referencies cgroup filter on global reclaim Konstantin Khlebnikov
@ 2012-02-20 17:22 ` Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 04/22] mm: drain percpu lru add/rotate page-vectors on cpu hot-unplug Konstantin Khlebnikov
From: Konstantin Khlebnikov @ 2012-02-20 17:22 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

At this point this is always the same cgroup, but it allows us to drop one argument.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/vmscan.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4bb23ef..c54a75b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1890,12 +1890,11 @@ static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
 	return shrink_inactive_list(nr_to_scan, mz, sc, priority, file);
 }
 
-static int vmscan_swappiness(struct mem_cgroup_zone *mz,
-			     struct scan_control *sc)
+static int vmscan_swappiness(struct scan_control *sc)
 {
 	if (global_reclaim(sc))
 		return vm_swappiness;
-	return mem_cgroup_swappiness(mz->mem_cgroup);
+	return mem_cgroup_swappiness(sc->current_mem_cgroup);
 }
 
 /*
@@ -1963,8 +1962,8 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 	 * With swappiness at 100, anonymous and file have the same priority.
 	 * This scanning priority is essentially the inverse of IO cost.
 	 */
-	anon_prio = vmscan_swappiness(mz, sc);
-	file_prio = 200 - vmscan_swappiness(mz, sc);
+	anon_prio = vmscan_swappiness(sc);
+	file_prio = 200 - vmscan_swappiness(sc);
 
 	/*
 	 * OK, so we have swap space and a fair amount of page cache



* [PATCH v2 04/22] mm: drain percpu lru add/rotate page-vectors on cpu hot-unplug
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 03/22] memcg: use vm_swappiness from current memcg Konstantin Khlebnikov
@ 2012-02-20 17:22 ` Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 05/22] mm: replace per-cpu lru-add page-vectors with page-lists Konstantin Khlebnikov
From: Konstantin Khlebnikov @ 2012-02-20 17:22 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

This CPU hotplug hook was accidentally removed in commit v2.6.30-rc4-18-g00a62ce
("mm: fix Committed_AS underflow on large NR_CPUS environment").
Bring it back: make the per-cpu drain available as lru_add_drain_cpu() and call it
from the page allocator's CPU_DEAD notifier.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 include/linux/swap.h |    1 +
 mm/page_alloc.c      |    1 +
 mm/swap.c            |    4 ++--
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index f7df3ea..ba2c8d7 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -229,6 +229,7 @@ extern void lru_add_page_tail(struct zone* zone,
 extern void activate_page(struct page *);
 extern void mark_page_accessed(struct page *);
 extern void lru_add_drain(void);
+extern void lru_add_drain_cpu(int cpu);
 extern int lru_add_drain_all(void);
 extern void rotate_reclaimable_page(struct page *page);
 extern void deactivate_page(struct page *page);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a547177..85517af 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4873,6 +4873,7 @@ static int page_alloc_cpu_notify(struct notifier_block *self,
 	int cpu = (unsigned long)hcpu;
 
 	if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
+		lru_add_drain_cpu(cpu);
 		drain_pages(cpu);
 
 		/*
diff --git a/mm/swap.c b/mm/swap.c
index fff1ff7..38b2686 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -496,7 +496,7 @@ static void lru_deactivate_fn(struct page *page, void *arg)
  * Either "cpu" is the current CPU, and preemption has already been
  * disabled; or "cpu" is being hot-unplugged, and is already dead.
  */
-static void drain_cpu_pagevecs(int cpu)
+void lru_add_drain_cpu(int cpu)
 {
 	struct pagevec *pvecs = per_cpu(lru_add_pvecs, cpu);
 	struct pagevec *pvec;
@@ -553,7 +553,7 @@ void deactivate_page(struct page *page)
 
 void lru_add_drain(void)
 {
-	drain_cpu_pagevecs(get_cpu());
+	lru_add_drain_cpu(get_cpu());
 	put_cpu();
 }
 



* [PATCH v2 05/22] mm: replace per-cpu lru-add page-vectors with page-lists
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 04/22] mm: drain percpu lru add/rotate page-vectors on cpu hot-unplug Konstantin Khlebnikov
@ 2012-02-20 17:22 ` Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 06/22] mm: deprecate pagevec lru-add functions Konstantin Khlebnikov
                   ` (17 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:22 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

This patch replaces page-vectors with page-lists in the lru_cache_add*() functions.
We can use page->lru for linking, because the page is obviously not on an lru yet.

The per-cpu batch is now limited by the total size of its pages rather than by the
page count; otherwise it can grow extremely large if it holds many huge pages:
PAGEVEC_SIZE * HPAGE_SIZE = 28Mb, per cpu!
These pages are hidden from the memory reclaimer for a while.
New limit: LRU_CACHE_ADD_BATCH = 64 (* PAGE_SIZE = 256Kb)

So adding a huge page now always drains the per-cpu list. Huge-page allocation
and preparation is a long procedure, so nobody will notice this draining.

The draining procedure disables preemption only while isolating the page list,
so the batch size can be increased without hurting latency.

This patch also introduces a new function, lru_cache_add_list(), and uses it in
mpage_readpages() and read_pages(), where the pages are already collected in a list.
Unlike the single-page lru-add, the list-add reuses the page references held by the
caller, so we save one page reference get/put per page.
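
For scale (x86 defaults assumed: 4K pages, 2M huge pages, PAGEVEC_SIZE = 14), the
worst-case per-cpu backlog shrinks from

	PAGEVEC_SIZE * HPAGE_SIZE       = 14 * 2M = 28M  per cpu

to

	LRU_CACHE_ADD_BATCH * PAGE_SIZE = 64 * 4K = 256K per cpu

and since a 2M huge page contributes 512 base pages at once, it always exceeds the
64-page batch and drains the per-cpu list immediately.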

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 fs/mpage.c           |   21 ++++++----
 include/linux/swap.h |    2 +
 mm/readahead.c       |   15 ++++---
 mm/swap.c            |  104 ++++++++++++++++++++++++++++++++++++++++++++++----
 4 files changed, 119 insertions(+), 23 deletions(-)

diff --git a/fs/mpage.c b/fs/mpage.c
index 643e9f5..6474f41 100644
--- a/fs/mpage.c
+++ b/fs/mpage.c
@@ -15,6 +15,7 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/mm.h>
+#include <linux/swap.h>
 #include <linux/kdev_t.h>
 #include <linux/gfp.h>
 #include <linux/bio.h>
@@ -367,29 +368,33 @@ mpage_readpages(struct address_space *mapping, struct list_head *pages,
 				unsigned nr_pages, get_block_t get_block)
 {
 	struct bio *bio = NULL;
-	unsigned page_idx;
 	sector_t last_block_in_bio = 0;
 	struct buffer_head map_bh;
 	unsigned long first_logical_block = 0;
+	struct page *page, *next;
+	int nr_added = 0;
 
 	map_bh.b_state = 0;
 	map_bh.b_size = 0;
-	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
-		struct page *page = list_entry(pages->prev, struct page, lru);
 
+	list_for_each_entry_safe(page, next, pages, lru) {
 		prefetchw(&page->flags);
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, mapping,
+		if (!add_to_page_cache(page, mapping,
 					page->index, GFP_KERNEL)) {
 			bio = do_mpage_readpage(bio, page,
-					nr_pages - page_idx,
+					nr_pages,
 					&last_block_in_bio, &map_bh,
 					&first_logical_block,
 					get_block);
+			nr_added++;
+		} else {
+			list_del(&page->lru);
+			page_cache_release(page);
 		}
-		page_cache_release(page);
+		nr_pages--;
 	}
-	BUG_ON(!list_empty(pages));
+	BUG_ON(nr_pages);
+	lru_cache_add_list(pages, nr_added, LRU_INACTIVE_FILE);
 	if (bio)
 		mpage_bio_submit(READ, bio);
 	return 0;
diff --git a/include/linux/swap.h b/include/linux/swap.h
index ba2c8d7..7394100 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -226,6 +226,8 @@ extern void __lru_cache_add(struct page *, enum lru_list lru);
 extern void lru_cache_add_lru(struct page *, enum lru_list lru);
 extern void lru_add_page_tail(struct zone* zone,
 			      struct page *page, struct page *page_tail);
+extern void lru_cache_add_list(struct list_head *pages,
+			       int size, enum lru_list lru);
 extern void activate_page(struct page *);
 extern void mark_page_accessed(struct page *);
 extern void lru_add_drain(void);
diff --git a/mm/readahead.c b/mm/readahead.c
index cbcbb02..2f6fe4b 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -11,6 +11,7 @@
 #include <linux/fs.h>
 #include <linux/gfp.h>
 #include <linux/mm.h>
+#include <linux/swap.h>
 #include <linux/export.h>
 #include <linux/blkdev.h>
 #include <linux/backing-dev.h>
@@ -110,7 +111,7 @@ static int read_pages(struct address_space *mapping, struct file *filp,
 		struct list_head *pages, unsigned nr_pages)
 {
 	struct blk_plug plug;
-	unsigned page_idx;
+	struct page *page, *next;
 	int ret;
 
 	blk_start_plug(&plug);
@@ -122,15 +123,17 @@ static int read_pages(struct address_space *mapping, struct file *filp,
 		goto out;
 	}
 
-	for (page_idx = 0; page_idx < nr_pages; page_idx++) {
-		struct page *page = list_to_page(pages);
-		list_del(&page->lru);
-		if (!add_to_page_cache_lru(page, mapping,
+	list_for_each_entry_safe(page, next, pages, lru) {
+		if (!add_to_page_cache(page, mapping,
 					page->index, GFP_KERNEL)) {
 			mapping->a_ops->readpage(filp, page);
+		} else {
+			list_del(&page->lru);
+			page_cache_release(page);
+			nr_pages--;
 		}
-		page_cache_release(page);
 	}
+	lru_cache_add_list(pages, nr_pages, LRU_INACTIVE_FILE);
 	ret = 0;
 
 out:
diff --git a/mm/swap.c b/mm/swap.c
index 38b2686..303fbc3 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -36,7 +36,12 @@
 /* How many pages do we try to swap or page in/out together? */
 int page_cluster;
 
-static DEFINE_PER_CPU(struct pagevec[NR_LRU_LISTS], lru_add_pvecs);
+/* How many pages may be in per-cpu lru-add pending list */
+#define LRU_CACHE_ADD_BATCH	64
+
+static DEFINE_PER_CPU(struct list_head[NR_LRU_LISTS], lru_add_pages);
+static DEFINE_PER_CPU(int[NR_LRU_LISTS], lru_add_pending);
+
 static DEFINE_PER_CPU(struct pagevec, lru_rotate_pvecs);
 static DEFINE_PER_CPU(struct pagevec, lru_deactivate_pvecs);
 
@@ -371,14 +376,83 @@ void mark_page_accessed(struct page *page)
 }
 EXPORT_SYMBOL(mark_page_accessed);
 
+static void __lru_cache_add_list(struct list_head *pages, enum lru_list lru)
+{
+	int file = is_file_lru(lru);
+	int active = is_active_lru(lru);
+	struct page *page, *next;
+	struct zone *pagezone, *zone = NULL;
+	unsigned long uninitialized_var(flags);
+	LIST_HEAD(free_pages);
+
+	list_for_each_entry_safe(page, next, pages, lru) {
+		pagezone = page_zone(page);
+		if (pagezone != zone) {
+			if (zone)
+				spin_unlock_irqrestore(&zone->lru_lock, flags);
+			zone = pagezone;
+			spin_lock_irqsave(&zone->lru_lock, flags);
+		}
+		VM_BUG_ON(PageActive(page));
+		VM_BUG_ON(PageUnevictable(page));
+		VM_BUG_ON(PageLRU(page));
+		SetPageLRU(page);
+		if (active)
+			SetPageActive(page);
+		update_page_reclaim_stat(zone, page, file, active);
+		add_page_to_lru_list(zone, page, lru);
+		if (unlikely(put_page_testzero(page))) {
+			__ClearPageLRU(page);
+			__ClearPageActive(page);
+			del_page_from_lru_list(zone, page, lru);
+			if (unlikely(PageCompound(page))) {
+				spin_unlock_irqrestore(&zone->lru_lock, flags);
+				zone = NULL;
+				(*get_compound_page_dtor(page))(page);
+			} else
+				list_add_tail(&page->lru, &free_pages);
+		}
+	}
+	if (zone)
+		spin_unlock_irqrestore(&zone->lru_lock, flags);
+
+	free_hot_cold_page_list(&free_pages, 0);
+}
+
+/**
+ * lru_cache_add_list - add list of pages into lru, drop caller's page
+ *			references and reinitialize list.
+ * @pages	list of pages to adding
+ * @size	total size of pages in list
+ * @lru		the LRU list to which the page is added.
+ */
+void lru_cache_add_list(struct list_head *pages, int size, enum lru_list lru)
+{
+	struct list_head *list;
+
+	preempt_disable();
+	list = __this_cpu_ptr(lru_add_pages + lru);
+	list_splice_tail_init(pages, list);
+	if (likely(__this_cpu_add_return(lru_add_pending[lru], size) <=
+				LRU_CACHE_ADD_BATCH)) {
+		preempt_enable();
+		return;
+	}
+	list_replace_init(list, pages);
+	__this_cpu_write(lru_add_pending[lru], 0);
+	preempt_enable();
+	__lru_cache_add_list(pages, lru);
+	INIT_LIST_HEAD(pages);
+}
+EXPORT_SYMBOL(lru_cache_add_list);
+
 void __lru_cache_add(struct page *page, enum lru_list lru)
 {
-	struct pagevec *pvec = &get_cpu_var(lru_add_pvecs)[lru];
+	struct list_head pages = LIST_HEAD_INIT(page->lru);
+	int size = hpage_nr_pages(page);
 
 	page_cache_get(page);
-	if (!pagevec_add(pvec, page))
-		__pagevec_lru_add(pvec, lru);
-	put_cpu_var(lru_add_pvecs);
+	lru_cache_add_list(&pages, size, lru);
 }
 EXPORT_SYMBOL(__lru_cache_add);
 
@@ -498,14 +572,16 @@ static void lru_deactivate_fn(struct page *page, void *arg)
  */
 void lru_add_drain_cpu(int cpu)
 {
-	struct pagevec *pvecs = per_cpu(lru_add_pvecs, cpu);
+	struct list_head *pages = per_cpu(lru_add_pages, cpu);
 	struct pagevec *pvec;
 	int lru;
 
 	for_each_lru(lru) {
-		pvec = &pvecs[lru - LRU_BASE];
-		if (pagevec_count(pvec))
-			__pagevec_lru_add(pvec, lru);
+		if (!list_empty(pages + lru)) {
+			__lru_cache_add_list(pages + lru, lru);
+			INIT_LIST_HEAD(pages + lru);
+			per_cpu(lru_add_pending[lru], cpu) = 0;
+		}
 	}
 
 	pvec = &per_cpu(lru_rotate_pvecs, cpu);
@@ -780,3 +856,13 @@ void __init swap_setup(void)
 	 * _really_ don't want to cluster much more
 	 */
 }
+
+void __init lru_cache_add_init(void)
+{
+	int cpu, lru;
+
+	for_each_possible_cpu(cpu)
+		for_each_lru(lru)
+			INIT_LIST_HEAD(per_cpu(lru_add_pages, cpu) + lru);
+}
+core_initcall(lru_cache_add_init);



* [PATCH v2 06/22] mm: deprecate pagevec lru-add functions
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 05/22] mm: replace per-cpu lru-add page-vectors with page-lists Konstantin Khlebnikov
@ 2012-02-20 17:22 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 07/22] mm: rename lruvec->lists into lruvec->pages_lru Konstantin Khlebnikov
From: Konstantin Khlebnikov @ 2012-02-20 17:22 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

These functions are mostly unused; the last user is fs/cachefiles/rdwr.c.
This patch replaces __pagevec_lru_add() with a smaller implementation.
It is exported, so we should keep it for a while.

It also simplifies and fixes the clumsy single-page page-vector operations in
nfs_symlink(), which was the second pagevec_lru_add_file() user.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 fs/nfs/dir.c            |   10 +++-------
 include/linux/pagevec.h |    4 +++-
 mm/swap.c               |   27 +++++++--------------------
 3 files changed, 13 insertions(+), 28 deletions(-)

diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index 0d71eb6..cbbc03c 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1863,7 +1863,6 @@ static int nfs_unlink(struct inode *dir, struct dentry *dentry)
  */
 static int nfs_symlink(struct inode *dir, struct dentry *dentry, const char *symname)
 {
-	struct pagevec lru_pvec;
 	struct page *page;
 	char *kaddr;
 	struct iattr attr;
@@ -1903,15 +1902,12 @@ static int nfs_symlink(struct inode *dir, struct dentry *dentry, const char *sym
 	 * No big deal if we can't add this page to the page cache here.
 	 * READLINK will get the missing page from the server if needed.
 	 */
-	pagevec_init(&lru_pvec, 0);
-	if (!add_to_page_cache(page, dentry->d_inode->i_mapping, 0,
+	if (!add_to_page_cache_lru(page, dentry->d_inode->i_mapping, 0,
 							GFP_KERNEL)) {
-		pagevec_add(&lru_pvec, page);
-		pagevec_lru_add_file(&lru_pvec);
 		SetPageUptodate(page);
 		unlock_page(page);
-	} else
-		__free_page(page);
+	}
+	put_page(page);
 
 	return 0;
 }
diff --git a/include/linux/pagevec.h b/include/linux/pagevec.h
index 2aa12b8..4df37fe 100644
--- a/include/linux/pagevec.h
+++ b/include/linux/pagevec.h
@@ -21,7 +21,6 @@ struct pagevec {
 };
 
 void __pagevec_release(struct pagevec *pvec);
-void __pagevec_lru_add(struct pagevec *pvec, enum lru_list lru);
 unsigned pagevec_lookup(struct pagevec *pvec, struct address_space *mapping,
 		pgoff_t start, unsigned nr_pages);
 unsigned pagevec_lookup_tag(struct pagevec *pvec,
@@ -64,6 +63,9 @@ static inline void pagevec_release(struct pagevec *pvec)
 		__pagevec_release(pvec);
 }
 
+/* Use lru_cache_add_list() instead */
+void __deprecated __pagevec_lru_add(struct pagevec *pvec, enum lru_list lru);
+
 static inline void __pagevec_lru_add_anon(struct pagevec *pvec)
 {
 	__pagevec_lru_add(pvec, LRU_INACTIVE_ANON);
diff --git a/mm/swap.c b/mm/swap.c
index 303fbc3..0d8845c 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -772,33 +772,20 @@ void lru_add_page_tail(struct zone* zone,
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
-static void __pagevec_lru_add_fn(struct page *page, void *arg)
-{
-	enum lru_list lru = (enum lru_list)arg;
-	struct zone *zone = page_zone(page);
-	int file = is_file_lru(lru);
-	int active = is_active_lru(lru);
-
-	VM_BUG_ON(PageActive(page));
-	VM_BUG_ON(PageUnevictable(page));
-	VM_BUG_ON(PageLRU(page));
-
-	SetPageLRU(page);
-	if (active)
-		SetPageActive(page);
-	update_page_reclaim_stat(zone, page, file, active);
-	add_page_to_lru_list(zone, page, lru);
-}
-
 /*
  * Add the passed pages to the LRU, then drop the caller's refcount
  * on them.  Reinitialises the caller's pagevec.
  */
 void __pagevec_lru_add(struct pagevec *pvec, enum lru_list lru)
 {
-	VM_BUG_ON(is_unevictable_lru(lru));
+	LIST_HEAD(pages);
+	int i;
 
-	pagevec_lru_move_fn(pvec, __pagevec_lru_add_fn, (void *)lru);
+	VM_BUG_ON(is_unevictable_lru(lru));
+	for ( i = 0 ; i < pvec->nr ; i++ )
+		list_add_tail(&pvec->pages[i]->lru, &pages);
+	pagevec_reinit(pvec);
+	__lru_cache_add_list(&pages, lru);
 }
 EXPORT_SYMBOL(__pagevec_lru_add);
 



* [PATCH v2 07/22] mm: rename lruvec->lists into lruvec->pages_lru
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
  2012-02-20 17:22 ` [PATCH v2 06/22] mm: deprecate pagevec lru-add functions Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 08/22] mm: add lruvec->pages_count Konstantin Khlebnikov
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

This is a more distinctive and grep-friendly name.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 include/linux/mm_inline.h |    2 +-
 include/linux/mmzone.h    |    2 +-
 mm/memcontrol.c           |    6 +++---
 mm/page_alloc.c           |    2 +-
 mm/swap.c                 |    4 ++--
 mm/vmscan.c               |    6 +++---
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 227fd3e..8415596 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -27,7 +27,7 @@ add_page_to_lru_list(struct zone *zone, struct page *page, enum lru_list lru)
 	struct lruvec *lruvec;
 
 	lruvec = mem_cgroup_lru_add_list(zone, page, lru);
-	list_add(&page->lru, &lruvec->lists[lru]);
+	list_add(&page->lru, &lruvec->pages_lru[lru]);
 	__mod_zone_page_state(zone, NR_LRU_BASE + lru, hpage_nr_pages(page));
 }
 
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index f10a54c..0d2e6b6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -160,7 +160,7 @@ static inline int is_unevictable_lru(enum lru_list lru)
 }
 
 struct lruvec {
-	struct list_head lists[NR_LRU_LISTS];
+	struct list_head pages_lru[NR_LRU_LISTS];
 };
 
 /* Mask used at gathering information at once (see memcontrol.c) */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fe0b8fb..b65c619 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1036,7 +1036,7 @@ struct lruvec *mem_cgroup_zone_lruvec(struct zone *zone,
  * the lruvec for the given @zone and the memcg @page is charged to.
  *
  * The callsite is then responsible for physically linking the page to
- * the returned lruvec->lists[@lru].
+ * the returned lruvec->pages_lru[@lru].
  */
 struct lruvec *mem_cgroup_lru_add_list(struct zone *zone, struct page *page,
 				       enum lru_list lru)
@@ -3611,7 +3611,7 @@ static int mem_cgroup_force_empty_list(struct mem_cgroup *memcg,
 
 	zone = &NODE_DATA(node)->node_zones[zid];
 	mz = mem_cgroup_zoneinfo(memcg, node, zid);
-	list = &mz->lruvec.lists[lru];
+	list = &mz->lruvec.pages_lru[lru];
 
 	loop = mz->lru_size[lru];
 	/* give some margin against EBUSY etc...*/
@@ -4737,7 +4737,7 @@ static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *memcg, int node)
 	for (zone = 0; zone < MAX_NR_ZONES; zone++) {
 		mz = &pn->zoneinfo[zone];
 		for_each_lru(lru)
-			INIT_LIST_HEAD(&mz->lruvec.lists[lru]);
+			INIT_LIST_HEAD(&mz->lruvec.pages_lru[lru]);
 		mz->usage_in_excess = 0;
 		mz->on_tree = false;
 		mz->memcg = memcg;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 85517af..b75af1e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4363,7 +4363,7 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 
 		zone_pcp_init(zone);
 		for_each_lru(lru)
-			INIT_LIST_HEAD(&zone->lruvec.lists[lru]);
+			INIT_LIST_HEAD(&zone->lruvec.pages_lru[lru]);
 		zone->reclaim_stat.recent_rotated[0] = 0;
 		zone->reclaim_stat.recent_rotated[1] = 0;
 		zone->reclaim_stat.recent_scanned[0] = 0;
diff --git a/mm/swap.c b/mm/swap.c
index 0d8845c..f57604f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -243,7 +243,7 @@ static void pagevec_move_tail_fn(struct page *page, void *arg)
 
 		lruvec = mem_cgroup_lru_move_lists(page_zone(page),
 						   page, lru, lru);
-		list_move_tail(&page->lru, &lruvec->lists[lru]);
+		list_move_tail(&page->lru, &lruvec->pages_lru[lru]);
 		(*pgmoved)++;
 	}
 }
@@ -556,7 +556,7 @@ static void lru_deactivate_fn(struct page *page, void *arg)
 		 * We moves tha page into tail of inactive.
 		 */
 		lruvec = mem_cgroup_lru_move_lists(zone, page, lru, lru);
-		list_move_tail(&page->lru, &lruvec->lists[lru]);
+		list_move_tail(&page->lru, &lruvec->pages_lru[lru]);
 		__count_vm_event(PGROTATED);
 	}
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index c54a75b..7083567 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1170,7 +1170,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		lru += LRU_ACTIVE;
 	if (file)
 		lru += LRU_FILE;
-	src = &lruvec->lists[lru];
+	src = &lruvec->pages_lru[lru];
 
 	for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
 		struct page *page;
@@ -1669,7 +1669,7 @@ static void move_active_pages_to_lru(struct zone *zone,
 		SetPageLRU(page);
 
 		lruvec = mem_cgroup_lru_add_list(zone, page, lru);
-		list_move(&page->lru, &lruvec->lists[lru]);
+		list_move(&page->lru, &lruvec->pages_lru[lru]);
 		pgmoved += hpage_nr_pages(page);
 
 		if (put_page_testzero(page)) {
@@ -3583,7 +3583,7 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
 			__dec_zone_state(zone, NR_UNEVICTABLE);
 			lruvec = mem_cgroup_lru_move_lists(zone, page,
 						LRU_UNEVICTABLE, lru);
-			list_move(&page->lru, &lruvec->lists[lru]);
+			list_move(&page->lru, &lruvec->pages_lru[lru]);
 			__inc_zone_state(zone, NR_INACTIVE_ANON + lru);
 			pgrescued++;
 		}



* [PATCH v2 08/22] mm: add lruvec->pages_count
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 07/22] mm: rename lruvec->lists into lruvec->pages_lru Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 09/22] mm: link lruvec with zone and node Konstantin Khlebnikov
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Move the lru page counters from mem_cgroup_per_zone->lru_size[] to lruvec->pages_count[].

Account pages in all lruvecs, including the root one. This is not a huge overhead,
but it greatly simplifies all the code.

Redundant page_lruvec() calls will be optimized in later patches.
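
After this change the lru sizes are read straight from the lruvec, for the root
lruvec as well as for every memcg lruvec; e.g. the memcg helper simply sums the
masked counters (condensed from the memcontrol.c hunk below):

	for_each_lru(lru) {
		if (BIT(lru) & lru_mask)
			ret += mz->lruvec.pages_count[lru];
	}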

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 include/linux/memcontrol.h |   29 -----------
 include/linux/mm.h         |   17 ++++++
 include/linux/mm_inline.h  |   15 ++++--
 include/linux/mmzone.h     |    9 ++-
 mm/memcontrol.c            |  119 ++++++++++----------------------------------
 mm/page_alloc.c            |    4 +
 mm/swap.c                  |    7 +--
 mm/vmscan.c                |   25 +++++++--
 8 files changed, 83 insertions(+), 142 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 4fbe18a..cc6061a 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -63,12 +63,6 @@ extern int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
 					gfp_t gfp_mask);
 
 struct lruvec *mem_cgroup_zone_lruvec(struct zone *, struct mem_cgroup *);
-struct lruvec *mem_cgroup_lru_add_list(struct zone *, struct page *,
-				       enum lru_list);
-void mem_cgroup_lru_del_list(struct page *, enum lru_list);
-void mem_cgroup_lru_del(struct page *);
-struct lruvec *mem_cgroup_lru_move_lists(struct zone *, struct page *,
-					 enum lru_list, enum lru_list);
 
 /* For coalescing uncharge for reducing memcg' overhead*/
 extern void mem_cgroup_uncharge_start(void);
@@ -220,29 +214,6 @@ static inline struct lruvec *mem_cgroup_zone_lruvec(struct zone *zone,
 	return &zone->lruvec;
 }
 
-static inline struct lruvec *mem_cgroup_lru_add_list(struct zone *zone,
-						     struct page *page,
-						     enum lru_list lru)
-{
-	return &zone->lruvec;
-}
-
-static inline void mem_cgroup_lru_del_list(struct page *page, enum lru_list lru)
-{
-}
-
-static inline void mem_cgroup_lru_del(struct page *page)
-{
-}
-
-static inline struct lruvec *mem_cgroup_lru_move_lists(struct zone *zone,
-						       struct page *page,
-						       enum lru_list from,
-						       enum lru_list to)
-{
-	return &zone->lruvec;
-}
-
 static inline struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page)
 {
 	return NULL;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ee3ebc1..e483f30 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -728,6 +728,23 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
 #endif
 }
 
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+
+/* Multiple lruvecs in zone */
+
+extern struct lruvec *page_lruvec(struct page *lruvec);
+
+#else /* CONFIG_CGROUP_MEM_RES_CTLR */
+
+/* Single lruvec in zone */
+
+static inline struct lruvec *page_lruvec(struct page *page)
+{
+	return &page_zone(page)->lruvec;
+}
+
+#endif /* CONFIG_CGROUP_MEM_RES_CTLR */
+
 /*
  * Some inline functions in vmstat.h depend on page_zone()
  */
diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index 8415596..daa3d15 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -24,19 +24,24 @@ static inline int page_is_file_cache(struct page *page)
 static inline void
 add_page_to_lru_list(struct zone *zone, struct page *page, enum lru_list lru)
 {
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = page_lruvec(page);
+	int numpages = hpage_nr_pages(page);
 
-	lruvec = mem_cgroup_lru_add_list(zone, page, lru);
 	list_add(&page->lru, &lruvec->pages_lru[lru]);
-	__mod_zone_page_state(zone, NR_LRU_BASE + lru, hpage_nr_pages(page));
+	lruvec->pages_count[lru] += numpages;
+	__mod_zone_page_state(zone, NR_LRU_BASE + lru, numpages);
 }
 
 static inline void
 del_page_from_lru_list(struct zone *zone, struct page *page, enum lru_list lru)
 {
-	mem_cgroup_lru_del_list(page, lru);
+	struct lruvec *lruvec = page_lruvec(page);
+	int numpages = hpage_nr_pages(page);
+
 	list_del(&page->lru);
-	__mod_zone_page_state(zone, NR_LRU_BASE + lru, -hpage_nr_pages(page));
+	lruvec->pages_count[lru] -= numpages;
+	VM_BUG_ON((long)lruvec->pages_count[lru] < 0);
+	__mod_zone_page_state(zone, NR_LRU_BASE + lru, -numpages);
 }
 
 /**
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 0d2e6b6..b39f230 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -159,10 +159,6 @@ static inline int is_unevictable_lru(enum lru_list lru)
 	return (lru == LRU_UNEVICTABLE);
 }
 
-struct lruvec {
-	struct list_head pages_lru[NR_LRU_LISTS];
-};
-
 /* Mask used at gathering information at once (see memcontrol.c) */
 #define LRU_ALL_FILE (BIT(LRU_INACTIVE_FILE) | BIT(LRU_ACTIVE_FILE))
 #define LRU_ALL_ANON (BIT(LRU_INACTIVE_ANON) | BIT(LRU_ACTIVE_ANON))
@@ -300,6 +296,11 @@ struct zone_reclaim_stat {
 	unsigned long		recent_scanned[2];
 };
 
+struct lruvec {
+	struct list_head	pages_lru[NR_LRU_LISTS];
+	unsigned long		pages_count[NR_LRU_LISTS];
+};
+
 struct zone {
 	/* Fields commonly accessed by the page allocator */
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b65c619..fa64817 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -135,7 +135,6 @@ struct mem_cgroup_reclaim_iter {
  */
 struct mem_cgroup_per_zone {
 	struct lruvec		lruvec;
-	unsigned long		lru_size[NR_LRU_LISTS];
 
 	struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1];
 
@@ -716,7 +715,7 @@ mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg, int nid, int zid,
 
 	for_each_lru(lru) {
 		if (BIT(lru) & lru_mask)
-			ret += mz->lru_size[lru];
+			ret += mz->lruvec.pages_count[lru];
 	}
 	return ret;
 }
@@ -992,6 +991,28 @@ out:
 EXPORT_SYMBOL(mem_cgroup_count_vm_event);
 
 /**
+ * page_lruvec - get the lruvec there this page is located
+ * @page: the struct page pointer with stable reference
+ *
+ * Caller must guarantee page_cgroup->mem_cgroup pointer validity.
+ *
+ * Returns pointer to struct lruvec.
+ */
+struct lruvec *page_lruvec(struct page *page)
+{
+	struct mem_cgroup_per_zone *mz;
+	struct page_cgroup *pc;
+
+	if (mem_cgroup_disabled())
+		return &page_zone(page)->lruvec;
+
+	pc = lookup_page_cgroup(page);
+	mz = mem_cgroup_zoneinfo(pc->mem_cgroup,
+			page_to_nid(page), page_zonenum(page));
+	return &mz->lruvec;
+}
+
+/**
  * mem_cgroup_zone_lruvec - get the lru list vector for a zone and memcg
  * @zone: zone of the wanted lruvec
  * @mem: memcg of the wanted lruvec
@@ -1026,93 +1047,6 @@ struct lruvec *mem_cgroup_zone_lruvec(struct zone *zone,
  * When moving account, the page is not on LRU. It's isolated.
  */
 
-/**
- * mem_cgroup_lru_add_list - account for adding an lru page and return lruvec
- * @zone: zone of the page
- * @page: the page
- * @lru: current lru
- *
- * This function accounts for @page being added to @lru, and returns
- * the lruvec for the given @zone and the memcg @page is charged to.
- *
- * The callsite is then responsible for physically linking the page to
- * the returned lruvec->pages_lru[@lru].
- */
-struct lruvec *mem_cgroup_lru_add_list(struct zone *zone, struct page *page,
-				       enum lru_list lru)
-{
-	struct mem_cgroup_per_zone *mz;
-	struct mem_cgroup *memcg;
-	struct page_cgroup *pc;
-
-	if (mem_cgroup_disabled())
-		return &zone->lruvec;
-
-	pc = lookup_page_cgroup(page);
-	memcg = pc->mem_cgroup;
-	mz = page_cgroup_zoneinfo(memcg, page);
-	/* compound_order() is stabilized through lru_lock */
-	mz->lru_size[lru] += 1 << compound_order(page);
-	return &mz->lruvec;
-}
-
-/**
- * mem_cgroup_lru_del_list - account for removing an lru page
- * @page: the page
- * @lru: target lru
- *
- * This function accounts for @page being removed from @lru.
- *
- * The callsite is then responsible for physically unlinking
- * @page->lru.
- */
-void mem_cgroup_lru_del_list(struct page *page, enum lru_list lru)
-{
-	struct mem_cgroup_per_zone *mz;
-	struct mem_cgroup *memcg;
-	struct page_cgroup *pc;
-
-	if (mem_cgroup_disabled())
-		return;
-
-	pc = lookup_page_cgroup(page);
-	memcg = pc->mem_cgroup;
-	VM_BUG_ON(!memcg);
-	mz = page_cgroup_zoneinfo(memcg, page);
-	/* huge page split is done under lru_lock. so, we have no races. */
-	VM_BUG_ON(mz->lru_size[lru] < (1 << compound_order(page)));
-	mz->lru_size[lru] -= 1 << compound_order(page);
-}
-
-void mem_cgroup_lru_del(struct page *page)
-{
-	mem_cgroup_lru_del_list(page, page_lru(page));
-}
-
-/**
- * mem_cgroup_lru_move_lists - account for moving a page between lrus
- * @zone: zone of the page
- * @page: the page
- * @from: current lru
- * @to: target lru
- *
- * This function accounts for @page being moved between the lrus @from
- * and @to, and returns the lruvec for the given @zone and the memcg
- * @page is charged to.
- *
- * The callsite is then responsible for physically relinking
- * @page->lru to the returned lruvec->lists[@to].
- */
-struct lruvec *mem_cgroup_lru_move_lists(struct zone *zone,
-					 struct page *page,
-					 enum lru_list from,
-					 enum lru_list to)
-{
-	/* XXX: Optimize this, especially for @from == @to */
-	mem_cgroup_lru_del_list(page, from);
-	return mem_cgroup_lru_add_list(zone, page, to);
-}
-
 /*
  * Checks whether given mem is same or in the root_mem_cgroup's
  * hierarchy subtree
@@ -3612,8 +3546,7 @@ static int mem_cgroup_force_empty_list(struct mem_cgroup *memcg,
 	zone = &NODE_DATA(node)->node_zones[zid];
 	mz = mem_cgroup_zoneinfo(memcg, node, zid);
 	list = &mz->lruvec.pages_lru[lru];
-
-	loop = mz->lru_size[lru];
+	loop = mz->lruvec.pages_count[lru];
 	/* give some margin against EBUSY etc...*/
 	loop += 256;
 	busy = NULL;
@@ -4736,8 +4669,10 @@ static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *memcg, int node)
 
 	for (zone = 0; zone < MAX_NR_ZONES; zone++) {
 		mz = &pn->zoneinfo[zone];
-		for_each_lru(lru)
+		for_each_lru(lru) {
 			INIT_LIST_HEAD(&mz->lruvec.pages_lru[lru]);
+			mz->lruvec.pages_count[lru] = 0;
+		}
 		mz->usage_in_excess = 0;
 		mz->on_tree = false;
 		mz->memcg = memcg;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b75af1e..c7fcddc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4362,8 +4362,10 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 		zone->zone_pgdat = pgdat;
 
 		zone_pcp_init(zone);
-		for_each_lru(lru)
+		for_each_lru(lru) {
 			INIT_LIST_HEAD(&zone->lruvec.pages_lru[lru]);
+			zone->lruvec.pages_count[lru] = 0;
+		}
 		zone->reclaim_stat.recent_rotated[0] = 0;
 		zone->reclaim_stat.recent_rotated[1] = 0;
 		zone->reclaim_stat.recent_scanned[0] = 0;
diff --git a/mm/swap.c b/mm/swap.c
index f57604f..4363daf 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -239,10 +239,8 @@ static void pagevec_move_tail_fn(struct page *page, void *arg)
 
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		enum lru_list lru = page_lru_base_type(page);
-		struct lruvec *lruvec;
+		struct lruvec *lruvec = page_lruvec(page);
 
-		lruvec = mem_cgroup_lru_move_lists(page_zone(page),
-						   page, lru, lru);
 		list_move_tail(&page->lru, &lruvec->pages_lru[lru]);
 		(*pgmoved)++;
 	}
@@ -550,12 +548,11 @@ static void lru_deactivate_fn(struct page *page, void *arg)
 		 */
 		SetPageReclaim(page);
 	} else {
-		struct lruvec *lruvec;
+		struct lruvec *lruvec = page_lruvec(page);
 		/*
 		 * The page's writeback ends up during pagevec
 		 * We moves tha page into tail of inactive.
 		 */
-		lruvec = mem_cgroup_lru_move_lists(zone, page, lru, lru);
 		list_move_tail(&page->lru, &lruvec->pages_lru[lru]);
 		__count_vm_event(PGROTATED);
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 7083567..3e8d049 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1186,7 +1186,6 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 		switch (__isolate_lru_page(page, mode, file)) {
 		case 0:
-			mem_cgroup_lru_del(page);
 			list_move(&page->lru, dst);
 			nr_taken += hpage_nr_pages(page);
 			break;
@@ -1244,10 +1243,16 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 
 			if (__isolate_lru_page(cursor_page, mode, file) == 0) {
 				unsigned int isolated_pages;
+				struct lruvec *cursor_lruvec;
+				int cursor_lru = page_lru(cursor_page);
 
-				mem_cgroup_lru_del(cursor_page);
 				list_move(&cursor_page->lru, dst);
 				isolated_pages = hpage_nr_pages(cursor_page);
+				cursor_lruvec = page_lruvec(cursor_page);
+				cursor_lruvec->pages_count[cursor_lru] -=
+								isolated_pages;
+				VM_BUG_ON((long)cursor_lruvec->
+						pages_count[cursor_lru] < 0);
 				nr_taken += isolated_pages;
 				nr_lumpy_taken += isolated_pages;
 				if (PageDirty(cursor_page))
@@ -1279,6 +1284,9 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 			nr_lumpy_failed++;
 	}
 
+	lruvec->pages_count[lru] -= nr_taken - nr_lumpy_taken;
+	VM_BUG_ON((long)lruvec->pages_count[lru] < 0);
+
 	*nr_scanned = scan;
 
 	trace_mm_vmscan_lru_isolate(sc->order,
@@ -1662,15 +1670,18 @@ static void move_active_pages_to_lru(struct zone *zone,
 
 	while (!list_empty(list)) {
 		struct lruvec *lruvec;
+		int numpages;
 
 		page = lru_to_page(list);
 
 		VM_BUG_ON(PageLRU(page));
 		SetPageLRU(page);
 
-		lruvec = mem_cgroup_lru_add_list(zone, page, lru);
+		lruvec = page_lruvec(page);
 		list_move(&page->lru, &lruvec->pages_lru[lru]);
-		pgmoved += hpage_nr_pages(page);
+		numpages = hpage_nr_pages(page);
+		lruvec->pages_count[lru] += numpages;
+		pgmoved += numpages;
 
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
@@ -3581,8 +3592,10 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
 			VM_BUG_ON(PageActive(page));
 			ClearPageUnevictable(page);
 			__dec_zone_state(zone, NR_UNEVICTABLE);
-			lruvec = mem_cgroup_lru_move_lists(zone, page,
-						LRU_UNEVICTABLE, lru);
+			lruvec = page_lruvec(page);
+			lruvec->pages_count[LRU_UNEVICTABLE]--;
+			VM_BUG_ON((long)lruvec->pages_count[LRU_UNEVICTABLE] < 0);
+			lruvec->pages_count[lru]++;
 			list_move(&page->lru, &lruvec->pages_lru[lru]);
 			__inc_zone_state(zone, NR_INACTIVE_ANON + lru);
 			pgrescued++;



* [PATCH v2 09/22] mm: link lruvec with zone and node
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (7 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 08/22] mm: add lruvec->pages_count Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 10/22] mm: unify inactive_list_is_low() Konstantin Khlebnikov
                   ` (13 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

This patch adds links from the lruvec back to its zone and node.
For CONFIG_CGROUP_MEM_RES_CTLR=n this is just container_of().
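
For illustration, a minimal sketch of how a caller can reach the zone lock
through the new link (the example_* helper below is hypothetical and not part
of this patch):

static void example_lock_lruvec_irq(struct lruvec *lruvec)
{
	struct zone *zone = lruvec_zone(lruvec);

	spin_lock_irq(&zone->lru_lock);
	/* ... operate on lruvec->pages_lru[] / pages_count[] ... */
	spin_unlock_irq(&zone->lru_lock);
}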

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 include/linux/mm.h     |   20 ++++++++++++++++++++
 include/linux/mmzone.h |    4 ++++
 mm/memcontrol.c        |    2 ++
 mm/page_alloc.c        |    4 ++++
 4 files changed, 30 insertions(+), 0 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index e483f30..24c24f0 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -734,6 +734,16 @@ static inline void set_page_links(struct page *page, enum zone_type zone,
 
 extern struct lruvec *page_lruvec(struct page *lruvec);
 
+static inline struct zone *lruvec_zone(struct lruvec *lruvec)
+{
+	return lruvec->zone;
+}
+
+static inline struct pglist_data *lruvec_node(struct lruvec *lruvec)
+{
+	return lruvec->node;
+}
+
 #else /* CONFIG_CGROUP_MEM_RES_CTLR */
 
 /* Single lruvec in zone */
@@ -743,6 +753,16 @@ static inline struct lruvec *page_lruvec(struct page *page)
 	return &page_zone(page)->lruvec;
 }
 
+static inline struct zone *lruvec_zone(struct lruvec *lruvec)
+{
+	return container_of(lruvec, struct zone, lruvec);
+}
+
+static inline struct pglist_data *lruvec_node(struct lruvec *lruvec)
+{
+	return lruvec_zone(lruvec)->zone_pgdat;
+}
+
 #endif /* CONFIG_CGROUP_MEM_RES_CTLR */
 
 /*
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index b39f230..bd2cae4 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -297,6 +297,10 @@ struct zone_reclaim_stat {
 };
 
 struct lruvec {
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+	struct pglist_data	*node;
+	struct zone		*zone;
+#endif
 	struct list_head	pages_lru[NR_LRU_LISTS];
 	unsigned long		pages_count[NR_LRU_LISTS];
 };
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index fa64817..e29420d 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4673,6 +4673,8 @@ static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *memcg, int node)
 			INIT_LIST_HEAD(&mz->lruvec.pages_lru[lru]);
 			mz->lruvec.pages_count[lru] = 0;
 		}
+		mz->lruvec.node = NODE_DATA(node);
+		mz->lruvec.zone = &NODE_DATA(node)->node_zones[zone];
 		mz->usage_in_excess = 0;
 		mz->on_tree = false;
 		mz->memcg = memcg;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c7fcddc..c500084 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4366,6 +4366,10 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 			INIT_LIST_HEAD(&zone->lruvec.pages_lru[lru]);
 			zone->lruvec.pages_count[lru] = 0;
 		}
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+		zone->lruvec.node = pgdat;
+		zone->lruvec.zone = zone;
+#endif
 		zone->reclaim_stat.recent_rotated[0] = 0;
 		zone->reclaim_stat.recent_rotated[1] = 0;
 		zone->reclaim_stat.recent_scanned[0] = 0;


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 10/22] mm: unify inactive_list_is_low()
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (8 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 09/22] mm: link lruvec with zone and node Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 11/22] mm: add lruvec->reclaim_stat Konstantin Khlebnikov
                   ` (12 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Unify the memcg and non-memcg logic: always use the exact counters from struct lruvec.
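
For illustration, a sketch of the unified form (the example_* helper is
hypothetical; the real change is in the hunks below):

static unsigned long example_nr_inactive_anon(struct zone *zone,
					      struct mem_cgroup *memcg)
{
	struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);

	/* same code path for both global and memcg reclaim */
	return lruvec->pages_count[LRU_INACTIVE_ANON];
}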

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/vmscan.c |   30 ++++++++----------------------
 1 files changed, 8 insertions(+), 22 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3e8d049..6889a39 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1809,6 +1809,7 @@ static int inactive_anon_is_low(struct mem_cgroup_zone *mz,
 {
 	unsigned long active, inactive;
 	unsigned int ratio;
+	struct lruvec *lruvec;
 
 	/*
 	 * If we don't have swap space, anonymous page deactivation
@@ -1822,17 +1823,9 @@ static int inactive_anon_is_low(struct mem_cgroup_zone *mz,
 	else
 		ratio = mem_cgroup_inactive_ratio(sc->target_mem_cgroup);
 
-	if (scanning_global_lru(mz)) {
-		active = zone_page_state(mz->zone, NR_ACTIVE_ANON);
-		inactive = zone_page_state(mz->zone, NR_INACTIVE_ANON);
-	} else {
-		active = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
-				zone_to_nid(mz->zone), zone_idx(mz->zone),
-				BIT(LRU_ACTIVE_ANON));
-		inactive = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
-				zone_to_nid(mz->zone), zone_idx(mz->zone),
-				BIT(LRU_INACTIVE_ANON));
-	}
+	lruvec = mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup);
+	active = lruvec->pages_count[LRU_ACTIVE_ANON];
+	inactive = lruvec->pages_count[LRU_INACTIVE_ANON];
 
 	return inactive * ratio < active;
 }
@@ -1861,18 +1854,11 @@ static inline int inactive_anon_is_low(struct mem_cgroup_zone *mz,
 static int inactive_file_is_low(struct mem_cgroup_zone *mz)
 {
 	unsigned long active, inactive;
+	struct lruvec *lruvec;
 
-	if (scanning_global_lru(mz)) {
-		active = zone_page_state(mz->zone, NR_ACTIVE_FILE);
-		inactive = zone_page_state(mz->zone, NR_INACTIVE_FILE);
-	} else {
-		active = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
-				zone_to_nid(mz->zone), zone_idx(mz->zone),
-				BIT(LRU_ACTIVE_FILE));
-		inactive = mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
-				zone_to_nid(mz->zone), zone_idx(mz->zone),
-				BIT(LRU_INACTIVE_FILE));
-	}
+	lruvec = mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup);
+	active = lruvec->pages_count[LRU_ACTIVE_FILE];
+	inactive = lruvec->pages_count[LRU_INACTIVE_FILE];
 
 	return inactive < active;
 }


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 11/22] mm: add lruvec->reclaim_stat
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (9 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 10/22] mm: unify inactive_list_is_low() Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 12/22] mm: kill struct mem_cgroup_zone Konstantin Khlebnikov
                   ` (11 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Merge the memcg and non-memcg reclaim statistics so that only one copy needs updating.
Move zone->reclaim_stat and mem_cgroup_per_zone->reclaim_stat into struct lruvec.

struct lruvec will become the operating unit of the reclaimer logic,
so it is the natural place for these counters.
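
For illustration, a sketch of the single update path this enables (the
example_* helper is hypothetical; the real change lands in
update_page_reclaim_stat() and get_reclaim_stat() below):

static void example_account_scan(struct lruvec *lruvec, int file, int rotated)
{
	struct zone_reclaim_stat *rs = &lruvec->reclaim_stat;

	rs->recent_scanned[file]++;
	if (rotated)
		rs->recent_rotated[file]++;
}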

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 include/linux/memcontrol.h |   17 --------------
 include/linux/mmzone.h     |    4 ++-
 mm/memcontrol.c            |   52 +++++---------------------------------------
 mm/page_alloc.c            |    6 ++---
 mm/swap.c                  |   12 ++--------
 mm/vmscan.c                |    5 +++-
 6 files changed, 15 insertions(+), 81 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index cc6061a..1d00222 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -111,10 +111,6 @@ unsigned int mem_cgroup_inactive_ratio(struct mem_cgroup *memcg);
 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg);
 unsigned long mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg,
 					int nid, int zid, unsigned int lrumask);
-struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
-						      struct zone *zone);
-struct zone_reclaim_stat*
-mem_cgroup_get_reclaim_stat_from_page(struct page *page);
 extern void mem_cgroup_print_oom_info(struct mem_cgroup *memcg,
 					struct task_struct *p);
 extern void mem_cgroup_replace_page_cache(struct page *oldpage,
@@ -299,19 +295,6 @@ mem_cgroup_zone_nr_lru_pages(struct mem_cgroup *memcg, int nid, int zid,
 	return 0;
 }
 
-
-static inline struct zone_reclaim_stat*
-mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg, struct zone *zone)
-{
-	return NULL;
-}
-
-static inline struct zone_reclaim_stat*
-mem_cgroup_get_reclaim_stat_from_page(struct page *page)
-{
-	return NULL;
-}
-
 static inline void
 mem_cgroup_print_oom_info(struct mem_cgroup *memcg, struct task_struct *p)
 {
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index bd2cae4..9fd82b1 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -303,6 +303,8 @@ struct lruvec {
 #endif
 	struct list_head	pages_lru[NR_LRU_LISTS];
 	unsigned long		pages_count[NR_LRU_LISTS];
+
+	struct zone_reclaim_stat	reclaim_stat;
 };
 
 struct zone {
@@ -379,8 +381,6 @@ struct zone {
 	spinlock_t		lru_lock;
 	struct lruvec		lruvec;
 
-	struct zone_reclaim_stat reclaim_stat;
-
 	unsigned long		pages_scanned;	   /* since last reclaim */
 	unsigned long		flags;		   /* zone flags, see below */
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e29420d..5c1414b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -138,7 +138,6 @@ struct mem_cgroup_per_zone {
 
 	struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1];
 
-	struct zone_reclaim_stat reclaim_stat;
 	struct rb_node		tree_node;	/* RB tree node */
 	unsigned long long	usage_in_excess;/* Set to the value by which */
 						/* the soft limit is exceeded*/
@@ -447,15 +446,6 @@ struct cgroup_subsys_state *mem_cgroup_css(struct mem_cgroup *memcg)
 	return &memcg->css;
 }
 
-static struct mem_cgroup_per_zone *
-page_cgroup_zoneinfo(struct mem_cgroup *memcg, struct page *page)
-{
-	int nid = page_to_nid(page);
-	int zid = page_zonenum(page);
-
-	return mem_cgroup_zoneinfo(memcg, nid, zid);
-}
-
 static struct mem_cgroup_tree_per_zone *
 soft_limit_tree_node_zone(int nid, int zid)
 {
@@ -1097,34 +1087,6 @@ int task_in_mem_cgroup(struct task_struct *task, const struct mem_cgroup *memcg)
 	return ret;
 }
 
-struct zone_reclaim_stat *mem_cgroup_get_reclaim_stat(struct mem_cgroup *memcg,
-						      struct zone *zone)
-{
-	int nid = zone_to_nid(zone);
-	int zid = zone_idx(zone);
-	struct mem_cgroup_per_zone *mz = mem_cgroup_zoneinfo(memcg, nid, zid);
-
-	return &mz->reclaim_stat;
-}
-
-struct zone_reclaim_stat *
-mem_cgroup_get_reclaim_stat_from_page(struct page *page)
-{
-	struct page_cgroup *pc;
-	struct mem_cgroup_per_zone *mz;
-
-	if (mem_cgroup_disabled())
-		return NULL;
-
-	pc = lookup_page_cgroup(page);
-	if (!PageCgroupUsed(pc))
-		return NULL;
-	/* Ensure pc->mem_cgroup is visible after reading PCG_USED. */
-	smp_rmb();
-	mz = page_cgroup_zoneinfo(pc->mem_cgroup, page);
-	return &mz->reclaim_stat;
-}
-
 #define mem_cgroup_from_res_counter(counter, member)	\
 	container_of(counter, struct mem_cgroup, member)
 
@@ -4094,21 +4056,19 @@ static int mem_control_stat_show(struct cgroup *cont, struct cftype *cft,
 	{
 		int nid, zid;
 		struct mem_cgroup_per_zone *mz;
+		struct zone_reclaim_stat *rs;
 		unsigned long recent_rotated[2] = {0, 0};
 		unsigned long recent_scanned[2] = {0, 0};
 
 		for_each_online_node(nid)
 			for (zid = 0; zid < MAX_NR_ZONES; zid++) {
 				mz = mem_cgroup_zoneinfo(memcg, nid, zid);
+				rs = &mz->lruvec.reclaim_stat;
 
-				recent_rotated[0] +=
-					mz->reclaim_stat.recent_rotated[0];
-				recent_rotated[1] +=
-					mz->reclaim_stat.recent_rotated[1];
-				recent_scanned[0] +=
-					mz->reclaim_stat.recent_scanned[0];
-				recent_scanned[1] +=
-					mz->reclaim_stat.recent_scanned[1];
+				recent_rotated[0] += rs->recent_rotated[0];
+				recent_rotated[1] += rs->recent_rotated[1];
+				recent_scanned[0] += rs->recent_scanned[0];
+				recent_scanned[1] += rs->recent_scanned[1];
 			}
 		cb->fill(cb, "recent_rotated_anon", recent_rotated[0]);
 		cb->fill(cb, "recent_rotated_file", recent_rotated[1]);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c500084..72263e4 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4370,10 +4370,8 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 		zone->lruvec.node = pgdat;
 		zone->lruvec.zone = zone;
 #endif
-		zone->reclaim_stat.recent_rotated[0] = 0;
-		zone->reclaim_stat.recent_rotated[1] = 0;
-		zone->reclaim_stat.recent_scanned[0] = 0;
-		zone->reclaim_stat.recent_scanned[1] = 0;
+		memset(&zone->lruvec.reclaim_stat, 0,
+				sizeof(struct zone_reclaim_stat));
 		zap_zone_vm_stats(zone);
 		zone->flags = 0;
 		if (!size)
diff --git a/mm/swap.c b/mm/swap.c
index 4363daf..f31bd45 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -282,21 +282,13 @@ void rotate_reclaimable_page(struct page *page)
 static void update_page_reclaim_stat(struct zone *zone, struct page *page,
 				     int file, int rotated)
 {
-	struct zone_reclaim_stat *reclaim_stat = &zone->reclaim_stat;
-	struct zone_reclaim_stat *memcg_reclaim_stat;
+	struct zone_reclaim_stat *reclaim_stat;
 
-	memcg_reclaim_stat = mem_cgroup_get_reclaim_stat_from_page(page);
+	reclaim_stat = &page_lruvec(page)->reclaim_stat;
 
 	reclaim_stat->recent_scanned[file]++;
 	if (rotated)
 		reclaim_stat->recent_rotated[file]++;
-
-	if (!memcg_reclaim_stat)
-		return;
-
-	memcg_reclaim_stat->recent_scanned[file]++;
-	if (rotated)
-		memcg_reclaim_stat->recent_rotated[file]++;
 }
 
 static void __activate_page(struct page *page, void *arg)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6889a39..f2eb9c4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -190,9 +190,10 @@ static bool scanning_global_lru(struct mem_cgroup_zone *mz)
 static struct zone_reclaim_stat *get_reclaim_stat(struct mem_cgroup_zone *mz)
 {
 	if (!scanning_global_lru(mz))
-		return mem_cgroup_get_reclaim_stat(mz->mem_cgroup, mz->zone);
+		return &mem_cgroup_zone_lruvec(mz->zone,
+				mz->mem_cgroup)->reclaim_stat;
 
-	return &mz->zone->reclaim_stat;
+	return &mz->zone->lruvec.reclaim_stat;
 }
 
 static unsigned long zone_nr_lru_pages(struct mem_cgroup_zone *mz,


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 12/22] mm: kill struct mem_cgroup_zone
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (10 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 11/22] mm: add lruvec->reclaim_stat Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 13/22] mm: move page-to-lruvec translation upper Konstantin Khlebnikov
                   ` (10 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Now struct mem_cgroup_zone always describes exactly one lruvec: either the root
zone->lruvec or one that belongs to a memcg.
So this fancy pair of pointers can be replaced with a direct pointer to struct lruvec.
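
For illustration, a sketch of the replacement pattern (the example_* helper is
hypothetical; shrink_lruvec() is introduced by the hunks below):

static void example_shrink_one(struct zone *zone, struct mem_cgroup *memcg,
			       int priority, struct scan_control *sc)
{
	struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);

	shrink_lruvec(priority, lruvec, sc);
}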

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/vmscan.c |  187 ++++++++++++++++++++++-------------------------------------
 1 files changed, 70 insertions(+), 117 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index f2eb9c4..dc17f61 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -121,11 +121,6 @@ struct scan_control {
 	nodemask_t	*nodemask;
 };
 
-struct mem_cgroup_zone {
-	struct mem_cgroup *mem_cgroup;
-	struct zone *zone;
-};
-
 #define lru_to_page(_head) (list_entry((_head)->prev, struct page, lru))
 
 #ifdef ARCH_HAS_PREFETCH
@@ -170,45 +165,13 @@ static bool global_reclaim(struct scan_control *sc)
 {
 	return !sc->target_mem_cgroup;
 }
-
-static bool scanning_global_lru(struct mem_cgroup_zone *mz)
-{
-	return !mz->mem_cgroup;
-}
 #else
 static bool global_reclaim(struct scan_control *sc)
 {
 	return true;
 }
-
-static bool scanning_global_lru(struct mem_cgroup_zone *mz)
-{
-	return true;
-}
 #endif
 
-static struct zone_reclaim_stat *get_reclaim_stat(struct mem_cgroup_zone *mz)
-{
-	if (!scanning_global_lru(mz))
-		return &mem_cgroup_zone_lruvec(mz->zone,
-				mz->mem_cgroup)->reclaim_stat;
-
-	return &mz->zone->lruvec.reclaim_stat;
-}
-
-static unsigned long zone_nr_lru_pages(struct mem_cgroup_zone *mz,
-				       enum lru_list lru)
-{
-	if (!scanning_global_lru(mz))
-		return mem_cgroup_zone_nr_lru_pages(mz->mem_cgroup,
-						    zone_to_nid(mz->zone),
-						    zone_idx(mz->zone),
-						    BIT(lru));
-
-	return zone_page_state(mz->zone, NR_LRU_BASE + lru);
-}
-
-
 /*
  * Add a shrinker callback to be called from the vm
  */
@@ -770,7 +733,7 @@ static enum page_references page_check_references(struct page *page,
  * shrink_page_list() returns the number of reclaimed pages
  */
 static unsigned long shrink_page_list(struct list_head *page_list,
-				      struct mem_cgroup_zone *mz,
+				      struct lruvec *lruvec,
 				      struct scan_control *sc,
 				      int priority,
 				      unsigned long *ret_nr_dirty,
@@ -801,7 +764,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 			goto keep;
 
 		VM_BUG_ON(PageActive(page));
-		VM_BUG_ON(page_zone(page) != mz->zone);
+		VM_BUG_ON(page_zone(page) != lruvec_zone(lruvec));
 
 		sc->nr_scanned++;
 
@@ -1027,7 +990,7 @@ keep_lumpy:
 	 * will encounter the same problem
 	 */
 	if (nr_dirty && nr_dirty == nr_congested && global_reclaim(sc))
-		zone_set_flag(mz->zone, ZONE_CONGESTED);
+		zone_set_flag(lruvec_zone(lruvec), ZONE_CONGESTED);
 
 	free_hot_cold_page_list(&free_pages, 1);
 
@@ -1142,7 +1105,7 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
  * Appropriate locks must be held before calling this function.
  *
  * @nr_to_scan:	The number of pages to look through on the list.
- * @mz:		The mem_cgroup_zone to pull pages from.
+ * @lruvec:	The struct lruvec to pull pages from.
  * @dst:	The temp list to put pages on to.
  * @nr_scanned:	The number of pages that were scanned.
  * @sc:		The scan_control struct for this reclaim session
@@ -1153,11 +1116,10 @@ int __isolate_lru_page(struct page *page, isolate_mode_t mode, int file)
  * returns how many pages were moved onto *@dst.
  */
 static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
-		struct mem_cgroup_zone *mz, struct list_head *dst,
+		struct lruvec *lruvec, struct list_head *dst,
 		unsigned long *nr_scanned, struct scan_control *sc,
 		isolate_mode_t mode, int active, int file)
 {
-	struct lruvec *lruvec;
 	struct list_head *src;
 	unsigned long nr_taken = 0;
 	unsigned long nr_lumpy_taken = 0;
@@ -1166,7 +1128,6 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 	unsigned long scan;
 	int lru = LRU_BASE;
 
-	lruvec = mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup);
 	if (active)
 		lru += LRU_ACTIVE;
 	if (file)
@@ -1372,11 +1333,11 @@ static int too_many_isolated(struct zone *zone, int file,
 }
 
 static noinline_for_stack void
-putback_inactive_pages(struct mem_cgroup_zone *mz,
+putback_inactive_pages(struct lruvec *lruvec,
 		       struct list_head *page_list)
 {
-	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
-	struct zone *zone = mz->zone;
+	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
+	struct zone *zone = lruvec_zone(lruvec);
 	LIST_HEAD(pages_to_free);
 
 	/*
@@ -1423,13 +1384,13 @@ putback_inactive_pages(struct mem_cgroup_zone *mz,
 }
 
 static noinline_for_stack void
-update_isolated_counts(struct mem_cgroup_zone *mz,
+update_isolated_counts(struct lruvec *lruvec,
 		       struct list_head *page_list,
 		       unsigned long *nr_anon,
 		       unsigned long *nr_file)
 {
-	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
-	struct zone *zone = mz->zone;
+	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
+	struct zone *zone = lruvec_zone(lruvec);
 	unsigned int count[NR_LRU_LISTS] = { 0, };
 	unsigned long nr_active = 0;
 	struct page *page;
@@ -1513,7 +1474,7 @@ static inline bool should_reclaim_stall(unsigned long nr_taken,
  * of reclaimed pages
  */
 static noinline_for_stack unsigned long
-shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
+shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 		     struct scan_control *sc, int priority, int file)
 {
 	LIST_HEAD(page_list);
@@ -1525,7 +1486,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 	unsigned long nr_dirty = 0;
 	unsigned long nr_writeback = 0;
 	isolate_mode_t isolate_mode = ISOLATE_INACTIVE;
-	struct zone *zone = mz->zone;
+	struct zone *zone = lruvec_zone(lruvec);
 
 	while (unlikely(too_many_isolated(zone, file, sc))) {
 		congestion_wait(BLK_RW_ASYNC, HZ/10);
@@ -1548,8 +1509,9 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 
 	spin_lock_irq(&zone->lru_lock);
 
-	nr_taken = isolate_lru_pages(nr_to_scan, mz, &page_list, &nr_scanned,
-				     sc, isolate_mode, 0, file);
+	nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &page_list,
+				     &nr_scanned, sc, isolate_mode, 0, file);
+
 	if (global_reclaim(sc)) {
 		zone->pages_scanned += nr_scanned;
 		if (current_is_kswapd())
@@ -1565,20 +1527,20 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 		return 0;
 	}
 
-	update_isolated_counts(mz, &page_list, &nr_anon, &nr_file);
+	update_isolated_counts(lruvec, &page_list, &nr_anon, &nr_file);
 
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON, nr_anon);
 	__mod_zone_page_state(zone, NR_ISOLATED_FILE, nr_file);
 
 	spin_unlock_irq(&zone->lru_lock);
 
-	nr_reclaimed = shrink_page_list(&page_list, mz, sc, priority,
+	nr_reclaimed = shrink_page_list(&page_list, lruvec, sc, priority,
 						&nr_dirty, &nr_writeback);
 
 	/* Check if we should syncronously wait for writeback */
 	if (should_reclaim_stall(nr_taken, nr_reclaimed, priority, sc)) {
 		set_reclaim_mode(priority, sc, true);
-		nr_reclaimed += shrink_page_list(&page_list, mz, sc,
+		nr_reclaimed += shrink_page_list(&page_list, lruvec, sc,
 					priority, &nr_dirty, &nr_writeback);
 	}
 
@@ -1588,7 +1550,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
 		__count_vm_events(KSWAPD_STEAL, nr_reclaimed);
 	__count_zone_vm_events(PGSTEAL, zone, nr_reclaimed);
 
-	putback_inactive_pages(mz, &page_list);
+	putback_inactive_pages(lruvec, &page_list);
 
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON, -nr_anon);
 	__mod_zone_page_state(zone, NR_ISOLATED_FILE, -nr_file);
@@ -1703,7 +1665,7 @@ static void move_active_pages_to_lru(struct zone *zone,
 }
 
 static void shrink_active_list(unsigned long nr_to_scan,
-			       struct mem_cgroup_zone *mz,
+			       struct lruvec *lruvec,
 			       struct scan_control *sc,
 			       int priority, int file)
 {
@@ -1714,10 +1676,10 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	LIST_HEAD(l_active);
 	LIST_HEAD(l_inactive);
 	struct page *page;
-	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
+	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 	unsigned long nr_rotated = 0;
 	isolate_mode_t isolate_mode = ISOLATE_ACTIVE;
-	struct zone *zone = mz->zone;
+	struct zone *zone = lruvec_zone(lruvec);
 
 	lru_add_drain();
 
@@ -1728,8 +1690,9 @@ static void shrink_active_list(unsigned long nr_to_scan,
 
 	spin_lock_irq(&zone->lru_lock);
 
-	nr_taken = isolate_lru_pages(nr_to_scan, mz, &l_hold, &nr_scanned, sc,
-				     isolate_mode, 1, file);
+	nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &l_hold, &nr_scanned,
+				     sc, isolate_mode, 1, file);
+
 	if (global_reclaim(sc))
 		zone->pages_scanned += nr_scanned;
 
@@ -1805,12 +1768,11 @@ static void shrink_active_list(unsigned long nr_to_scan,
  * Returns true if the zone does not have enough inactive anon pages,
  * meaning some active anon pages need to be deactivated.
  */
-static int inactive_anon_is_low(struct mem_cgroup_zone *mz,
+static int inactive_anon_is_low(struct lruvec *lruvec,
 				struct scan_control *sc)
 {
 	unsigned long active, inactive;
 	unsigned int ratio;
-	struct lruvec *lruvec;
 
 	/*
 	 * If we don't have swap space, anonymous page deactivation
@@ -1820,18 +1782,17 @@ static int inactive_anon_is_low(struct mem_cgroup_zone *mz,
 		return 0;
 
 	if (global_reclaim(sc))
-		ratio = mz->zone->inactive_ratio;
+		ratio = lruvec_zone(lruvec)->inactive_ratio;
 	else
 		ratio = mem_cgroup_inactive_ratio(sc->target_mem_cgroup);
 
-	lruvec = mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup);
 	active = lruvec->pages_count[LRU_ACTIVE_ANON];
 	inactive = lruvec->pages_count[LRU_INACTIVE_ANON];
 
 	return inactive * ratio < active;
 }
 #else
-static inline int inactive_anon_is_low(struct mem_cgroup_zone *mz,
+static inline int inactive_anon_is_low(struct lruvec *lruvec,
 				       struct scan_control *sc)
 {
 	return 0;
@@ -1852,40 +1813,38 @@ static inline int inactive_anon_is_low(struct mem_cgroup_zone *mz,
  * This uses a different ratio than the anonymous pages, because
  * the page cache uses a use-once replacement algorithm.
  */
-static int inactive_file_is_low(struct mem_cgroup_zone *mz)
+static int inactive_file_is_low(struct lruvec *lruvec)
 {
 	unsigned long active, inactive;
-	struct lruvec *lruvec;
 
-	lruvec = mem_cgroup_zone_lruvec(mz->zone, mz->mem_cgroup);
 	active = lruvec->pages_count[LRU_ACTIVE_FILE];
 	inactive = lruvec->pages_count[LRU_INACTIVE_FILE];
 
 	return inactive < active;
 }
 
-static int inactive_list_is_low(struct mem_cgroup_zone *mz,
+static int inactive_list_is_low(struct lruvec *lruvec,
 				struct scan_control *sc, int file)
 {
 	if (file)
-		return inactive_file_is_low(mz);
+		return inactive_file_is_low(lruvec);
 	else
-		return inactive_anon_is_low(mz, sc);
+		return inactive_anon_is_low(lruvec, sc);
 }
 
 static unsigned long shrink_list(enum lru_list lru, unsigned long nr_to_scan,
-				 struct mem_cgroup_zone *mz,
+				 struct lruvec *lruvec,
 				 struct scan_control *sc, int priority)
 {
 	int file = is_file_lru(lru);
 
 	if (is_active_lru(lru)) {
-		if (inactive_list_is_low(mz, sc, file))
-			shrink_active_list(nr_to_scan, mz, sc, priority, file);
+		if (inactive_list_is_low(lruvec, sc, file))
+			shrink_active_list(nr_to_scan, lruvec, sc, priority, file);
 		return 0;
 	}
 
-	return shrink_inactive_list(nr_to_scan, mz, sc, priority, file);
+	return shrink_inactive_list(nr_to_scan, lruvec, sc, priority, file);
 }
 
 static int vmscan_swappiness(struct scan_control *sc)
@@ -1903,17 +1862,18 @@ static int vmscan_swappiness(struct scan_control *sc)
  *
  * nr[0] = anon pages to scan; nr[1] = file pages to scan
  */
-static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
+static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 			   unsigned long *nr, int priority)
 {
 	unsigned long anon, file, free;
 	unsigned long anon_prio, file_prio;
 	unsigned long ap, fp;
-	struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(mz);
+	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
 	u64 fraction[2], denominator;
 	enum lru_list lru;
 	int noswap = 0;
 	bool force_scan = false;
+	struct zone *zone = lruvec_zone(lruvec);
 
 	/*
 	 * If the zone or memcg is small, nr[l] can be 0.  This
@@ -1925,7 +1885,7 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 	 * latencies, so it's better to scan a minimum amount there as
 	 * well.
 	 */
-	if (current_is_kswapd() && mz->zone->all_unreclaimable)
+	if (current_is_kswapd() && zone->all_unreclaimable)
 		force_scan = true;
 	if (!global_reclaim(sc))
 		force_scan = true;
@@ -1939,16 +1899,16 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 		goto out;
 	}
 
-	anon  = zone_nr_lru_pages(mz, LRU_ACTIVE_ANON) +
-		zone_nr_lru_pages(mz, LRU_INACTIVE_ANON);
-	file  = zone_nr_lru_pages(mz, LRU_ACTIVE_FILE) +
-		zone_nr_lru_pages(mz, LRU_INACTIVE_FILE);
+	anon  = lruvec->pages_count[LRU_ACTIVE_ANON] +
+		lruvec->pages_count[LRU_INACTIVE_ANON];
+	file  = lruvec->pages_count[LRU_ACTIVE_FILE] +
+		lruvec->pages_count[LRU_INACTIVE_FILE];
 
 	if (global_reclaim(sc)) {
-		free  = zone_page_state(mz->zone, NR_FREE_PAGES);
+		free  = zone_page_state(zone, NR_FREE_PAGES);
 		/* If we have very few page cache pages,
 		   force-scan anon pages. */
-		if (unlikely(file + free <= high_wmark_pages(mz->zone))) {
+		if (unlikely(file + free <= high_wmark_pages(zone))) {
 			fraction[0] = 1;
 			fraction[1] = 0;
 			denominator = 1;
@@ -1974,7 +1934,7 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 	 *
 	 * anon in [0], file in [1]
 	 */
-	spin_lock_irq(&mz->zone->lru_lock);
+	spin_lock_irq(&zone->lru_lock);
 	if (unlikely(reclaim_stat->recent_scanned[0] > anon / 4)) {
 		reclaim_stat->recent_scanned[0] /= 2;
 		reclaim_stat->recent_rotated[0] /= 2;
@@ -1995,7 +1955,7 @@ static void get_scan_count(struct mem_cgroup_zone *mz, struct scan_control *sc,
 
 	fp = (file_prio + 1) * (reclaim_stat->recent_scanned[1] + 1);
 	fp /= reclaim_stat->recent_rotated[1] + 1;
-	spin_unlock_irq(&mz->zone->lru_lock);
+	spin_unlock_irq(&zone->lru_lock);
 
 	fraction[0] = ap;
 	fraction[1] = fp;
@@ -2005,7 +1965,7 @@ out:
 		int file = is_file_lru(lru);
 		unsigned long scan;
 
-		scan = zone_nr_lru_pages(mz, lru);
+		scan = lruvec->pages_count[lru];
 		if (priority || noswap) {
 			scan >>= priority;
 			if (!scan && force_scan)
@@ -2023,7 +1983,7 @@ out:
  * back to the allocator and call try_to_compact_zone(), we ensure that
  * there are enough free pages for it to be likely successful
  */
-static inline bool should_continue_reclaim(struct mem_cgroup_zone *mz,
+static inline bool should_continue_reclaim(struct lruvec *lruvec,
 					unsigned long nr_reclaimed,
 					unsigned long nr_scanned,
 					struct scan_control *sc)
@@ -2063,15 +2023,15 @@ static inline bool should_continue_reclaim(struct mem_cgroup_zone *mz,
 	 * inactive lists are large enough, continue reclaiming
 	 */
 	pages_for_compaction = (2UL << sc->order);
-	inactive_lru_pages = zone_nr_lru_pages(mz, LRU_INACTIVE_FILE);
+	inactive_lru_pages = lruvec->pages_count[LRU_INACTIVE_FILE];
 	if (nr_swap_pages > 0)
-		inactive_lru_pages += zone_nr_lru_pages(mz, LRU_INACTIVE_ANON);
+		inactive_lru_pages += lruvec->pages_count[LRU_INACTIVE_ANON];
 	if (sc->nr_reclaimed < pages_for_compaction &&
 			inactive_lru_pages > pages_for_compaction)
 		return true;
 
 	/* If compaction would go ahead or the allocation would succeed, stop */
-	switch (compaction_suitable(mz->zone, sc->order)) {
+	switch (compaction_suitable(lruvec_zone(lruvec), sc->order)) {
 	case COMPACT_PARTIAL:
 	case COMPACT_CONTINUE:
 		return false;
@@ -2083,8 +2043,8 @@ static inline bool should_continue_reclaim(struct mem_cgroup_zone *mz,
 /*
  * This is a basic per-zone page freer.  Used by both kswapd and direct reclaim.
  */
-static void shrink_mem_cgroup_zone(int priority, struct mem_cgroup_zone *mz,
-				   struct scan_control *sc)
+static void shrink_lruvec(int priority, struct lruvec *lruvec,
+			struct scan_control *sc)
 {
 	unsigned long nr[NR_LRU_LISTS];
 	unsigned long nr_to_scan;
@@ -2096,7 +2056,7 @@ static void shrink_mem_cgroup_zone(int priority, struct mem_cgroup_zone *mz,
 restart:
 	nr_reclaimed = 0;
 	nr_scanned = sc->nr_scanned;
-	get_scan_count(mz, sc, nr, priority);
+	get_scan_count(lruvec, sc, nr, priority);
 
 	blk_start_plug(&plug);
 	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
@@ -2108,7 +2068,7 @@ restart:
 				nr[lru] -= nr_to_scan;
 
 				nr_reclaimed += shrink_list(lru, nr_to_scan,
-							    mz, sc, priority);
+							    lruvec, sc, priority);
 			}
 		}
 		/*
@@ -2134,11 +2094,11 @@ restart:
 	 * Even if we did not try to evict anon pages at all, we want to
 	 * rebalance the anon lru active/inactive ratio.
 	 */
-	if (inactive_anon_is_low(mz, sc))
-		shrink_active_list(SWAP_CLUSTER_MAX, mz, sc, priority, 0);
+	if (inactive_anon_is_low(lruvec, sc))
+		shrink_active_list(SWAP_CLUSTER_MAX, lruvec, sc, priority, 0);
 
 	/* reclaim/compaction might need reclaim to continue */
-	if (should_continue_reclaim(mz, nr_reclaimed,
+	if (should_continue_reclaim(lruvec, nr_reclaimed,
 					sc->nr_scanned - nr_scanned, sc))
 		goto restart;
 
@@ -2154,18 +2114,17 @@ static void shrink_zone(int priority, struct zone *zone,
 		.priority = priority,
 	};
 	struct mem_cgroup *memcg;
+	struct lruvec *lruvec;
 
 	memcg = mem_cgroup_iter(root, NULL, &reclaim);
 	do {
-		struct mem_cgroup_zone mz = {
-			.mem_cgroup = memcg,
-			.zone = zone,
-		};
+		lruvec = mem_cgroup_zone_lruvec(zone, memcg);
 
 		if (!global_reclaim(sc))
 			sc->current_mem_cgroup = memcg;
 
-		shrink_mem_cgroup_zone(priority, &mz, sc);
+		shrink_lruvec(priority, lruvec, sc);
+
 		/*
 		 * Limit reclaim has historically picked one memcg and
 		 * scanned it with decreasing priority levels until
@@ -2486,10 +2445,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg,
 		.target_mem_cgroup = memcg,
 		.current_mem_cgroup = memcg,
 	};
-	struct mem_cgroup_zone mz = {
-		.mem_cgroup = memcg,
-		.zone = zone,
-	};
+	struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);
 
 	sc.gfp_mask = (gfp_mask & GFP_RECLAIM_MASK) |
 			(GFP_HIGHUSER_MOVABLE & ~GFP_RECLAIM_MASK);
@@ -2505,7 +2461,7 @@ unsigned long mem_cgroup_shrink_node_zone(struct mem_cgroup *memcg,
 	 * will pick up pages from other mem cgroup's as well. We hack
 	 * the priority and make it zero.
 	 */
-	shrink_mem_cgroup_zone(0, &mz, &sc);
+	shrink_lruvec(0, lruvec, &sc);
 
 	trace_mm_vmscan_memcg_softlimit_reclaim_end(sc.nr_reclaimed);
 
@@ -2566,13 +2522,10 @@ static void age_active_anon(struct zone *zone, struct scan_control *sc,
 
 	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		struct mem_cgroup_zone mz = {
-			.mem_cgroup = memcg,
-			.zone = zone,
-		};
+		struct lruvec *lruvec = mem_cgroup_zone_lruvec(zone, memcg);
 
-		if (inactive_anon_is_low(&mz, sc))
-			shrink_active_list(SWAP_CLUSTER_MAX, &mz,
+		if (inactive_anon_is_low(lruvec, sc))
+			shrink_active_list(SWAP_CLUSTER_MAX, lruvec,
 					   sc, priority, 0);
 
 		memcg = mem_cgroup_iter(NULL, memcg, NULL);


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 13/22] mm: move page-to-lruvec translation upper
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (11 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 12/22] mm: kill struct mem_cgroup_zone Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 14/22] mm: push lruvec into update_page_reclaim_stat() Konstantin Khlebnikov
                   ` (9 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Move page_lruvec() out of add_page_to_lru_list() and del_page_from_lru_list(),
and switch their first argument from zone to lruvec.
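
For illustration, a sketch of the new calling convention (the example_* helper
is hypothetical and mirrors the deactivation path below):

static void example_deactivate(struct page *page)
{
	/* assumes the page sits on an active list and lru_lock is held */
	struct lruvec *lruvec = page_lruvec(page);
	enum lru_list lru = page_lru_base_type(page);

	del_page_from_lru_list(lruvec, page, lru + LRU_ACTIVE);
	ClearPageActive(page);
	add_page_to_lru_list(lruvec, page, lru);
}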

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 include/linux/mm_inline.h |   10 ++++------
 mm/compaction.c           |    4 +++-
 mm/memcontrol.c           |    7 +++++--
 mm/swap.c                 |   33 ++++++++++++++++++++++-----------
 mm/vmscan.c               |   16 +++++++++++-----
 5 files changed, 45 insertions(+), 25 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index daa3d15..143a2e8 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -22,26 +22,24 @@ static inline int page_is_file_cache(struct page *page)
 }
 
 static inline void
-add_page_to_lru_list(struct zone *zone, struct page *page, enum lru_list lru)
+add_page_to_lru_list(struct lruvec *lruvec, struct page *page, enum lru_list lru)
 {
-	struct lruvec *lruvec = page_lruvec(page);
 	int numpages = hpage_nr_pages(page);
 
 	list_add(&page->lru, &lruvec->pages_lru[lru]);
 	lruvec->pages_count[lru] += numpages;
-	__mod_zone_page_state(zone, NR_LRU_BASE + lru, numpages);
+	__mod_zone_page_state(lruvec_zone(lruvec), NR_LRU_BASE + lru, numpages);
 }
 
 static inline void
-del_page_from_lru_list(struct zone *zone, struct page *page, enum lru_list lru)
+del_page_from_lru_list(struct lruvec *lruvec, struct page *page, enum lru_list lru)
 {
-	struct lruvec *lruvec = page_lruvec(page);
 	int numpages = hpage_nr_pages(page);
 
 	list_del(&page->lru);
 	lruvec->pages_count[lru] -= numpages;
 	VM_BUG_ON((long)lruvec->pages_count[lru] < 0);
-	__mod_zone_page_state(zone, NR_LRU_BASE + lru, -numpages);
+	__mod_zone_page_state(lruvec_zone(lruvec), NR_LRU_BASE + lru, -numpages);
 }
 
 /**
diff --git a/mm/compaction.c b/mm/compaction.c
index 74a8c82..a976b28 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -262,6 +262,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct list_head *migratelist = &cc->migratepages;
 	isolate_mode_t mode = ISOLATE_ACTIVE|ISOLATE_INACTIVE;
+	struct lruvec *lruvec;
 
 	/* Do not scan outside zone boundaries */
 	low_pfn = max(cc->migrate_pfn, zone->zone_start_pfn);
@@ -381,7 +382,8 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		VM_BUG_ON(PageTransCompound(page));
 
 		/* Successfully isolated */
-		del_page_from_lru_list(zone, page, page_lru(page));
+		lruvec = page_lruvec(page);
+		del_page_from_lru_list(lruvec, page, page_lru(page));
 		list_add(&page->lru, migratelist);
 		cc->nr_migratepages++;
 		nr_isolated++;
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5c1414b..ea1fdeb 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2529,6 +2529,7 @@ __mem_cgroup_commit_charge_lrucare(struct page *page, struct mem_cgroup *memcg,
 {
 	struct page_cgroup *pc = lookup_page_cgroup(page);
 	struct zone *zone = page_zone(page);
+	struct lruvec *lruvec;
 	unsigned long flags;
 	bool removed = false;
 
@@ -2539,13 +2540,15 @@ __mem_cgroup_commit_charge_lrucare(struct page *page, struct mem_cgroup *memcg,
 	 */
 	spin_lock_irqsave(&zone->lru_lock, flags);
 	if (PageLRU(page)) {
-		del_page_from_lru_list(zone, page, page_lru(page));
+		lruvec = page_lruvec(page);
+		del_page_from_lru_list(lruvec, page, page_lru(page));
 		ClearPageLRU(page);
 		removed = true;
 	}
 	__mem_cgroup_commit_charge(memcg, page, 1, pc, ctype);
 	if (removed) {
-		add_page_to_lru_list(zone, page, page_lru(page));
+		lruvec = page_lruvec(page);
+		add_page_to_lru_list(lruvec, page, page_lru(page));
 		SetPageLRU(page);
 	}
 	spin_unlock_irqrestore(&zone->lru_lock, flags);
diff --git a/mm/swap.c b/mm/swap.c
index f31bd45..0167d6f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -54,11 +54,13 @@ static void __page_cache_release(struct page *page)
 	if (PageLRU(page)) {
 		unsigned long flags;
 		struct zone *zone = page_zone(page);
+		struct lruvec *lruvec;
 
 		spin_lock_irqsave(&zone->lru_lock, flags);
 		VM_BUG_ON(!PageLRU(page));
 		__ClearPageLRU(page);
-		del_page_from_lru_list(zone, page, page_off_lru(page));
+		lruvec = page_lruvec(page);
+		del_page_from_lru_list(lruvec, page, page_off_lru(page));
 		spin_unlock_irqrestore(&zone->lru_lock, flags);
 	}
 }
@@ -298,11 +300,13 @@ static void __activate_page(struct page *page, void *arg)
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		int file = page_is_file_cache(page);
 		int lru = page_lru_base_type(page);
-		del_page_from_lru_list(zone, page, lru);
+		struct lruvec *lruvec = page_lruvec(page);
+
+		del_page_from_lru_list(lruvec, page, lru);
 
 		SetPageActive(page);
 		lru += LRU_ACTIVE;
-		add_page_to_lru_list(zone, page, lru);
+		add_page_to_lru_list(lruvec, page, lru);
 		__count_vm_event(PGACTIVATE);
 
 		update_page_reclaim_stat(zone, page, file, 1);
@@ -371,6 +375,7 @@ static void __lru_cache_add_list(struct list_head *pages, enum lru_list lru)
 	int file = is_file_lru(lru);
 	int active = is_active_lru(lru);
 	struct page *page, *next;
+	struct lruvec *lruvec;
 	struct zone *pagezone, *zone = NULL;
 	unsigned long uninitialized_var(flags);
 	LIST_HEAD(free_pages);
@@ -390,11 +395,12 @@ static void __lru_cache_add_list(struct list_head *pages, enum lru_list lru)
 		if (active)
 			SetPageActive(page);
 		update_page_reclaim_stat(zone, page, file, active);
-		add_page_to_lru_list(zone, page, lru);
+		lruvec = page_lruvec(page);
+		add_page_to_lru_list(lruvec, page, lru);
 		if (unlikely(put_page_testzero(page))) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
-			del_page_from_lru_list(zone, page, lru);
+			del_page_from_lru_list(lruvec, page, lru);
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irqrestore(&zone->lru_lock, flags);
 				zone = NULL;
@@ -478,11 +484,13 @@ void lru_cache_add_lru(struct page *page, enum lru_list lru)
 void add_page_to_unevictable_list(struct page *page)
 {
 	struct zone *zone = page_zone(page);
+	struct lruvec *lruvec;
 
 	spin_lock_irq(&zone->lru_lock);
 	SetPageUnevictable(page);
 	SetPageLRU(page);
-	add_page_to_lru_list(zone, page, LRU_UNEVICTABLE);
+	lruvec = page_lruvec(page);
+	add_page_to_lru_list(lruvec, page, LRU_UNEVICTABLE);
 	spin_unlock_irq(&zone->lru_lock);
 }
 
@@ -512,6 +520,7 @@ static void lru_deactivate_fn(struct page *page, void *arg)
 	int lru, file;
 	bool active;
 	struct zone *zone = page_zone(page);
+	struct lruvec *lruvec;
 
 	if (!PageLRU(page))
 		return;
@@ -527,10 +536,11 @@ static void lru_deactivate_fn(struct page *page, void *arg)
 
 	file = page_is_file_cache(page);
 	lru = page_lru_base_type(page);
-	del_page_from_lru_list(zone, page, lru + active);
+	lruvec = page_lruvec(page);
+	del_page_from_lru_list(lruvec, page, lru + active);
 	ClearPageActive(page);
 	ClearPageReferenced(page);
-	add_page_to_lru_list(zone, page, lru);
+	add_page_to_lru_list(lruvec, page, lru);
 
 	if (PageWriteback(page) || PageDirty(page)) {
 		/*
@@ -540,7 +550,6 @@ static void lru_deactivate_fn(struct page *page, void *arg)
 		 */
 		SetPageReclaim(page);
 	} else {
-		struct lruvec *lruvec = page_lruvec(page);
 		/*
 		 * The page's writeback ends up during pagevec
 		 * We moves tha page into tail of inactive.
@@ -672,6 +681,7 @@ void release_pages(struct page **pages, int nr, int cold)
 
 		if (PageLRU(page)) {
 			struct zone *pagezone = page_zone(page);
+			struct lruvec *lruvec = page_lruvec(page);
 
 			if (pagezone != zone) {
 				if (zone)
@@ -682,7 +692,7 @@ void release_pages(struct page **pages, int nr, int cold)
 			}
 			VM_BUG_ON(!PageLRU(page));
 			__ClearPageLRU(page);
-			del_page_from_lru_list(zone, page, page_off_lru(page));
+			del_page_from_lru_list(lruvec, page, page_off_lru(page));
 		}
 
 		list_add(&page->lru, &pages_to_free);
@@ -720,6 +730,7 @@ void lru_add_page_tail(struct zone* zone,
 	int active;
 	enum lru_list lru;
 	const int file = 0;
+	struct lruvec *lruvec = page_lruvec(page);
 
 	VM_BUG_ON(!PageHead(page));
 	VM_BUG_ON(PageCompound(page_tail));
@@ -754,7 +765,7 @@ void lru_add_page_tail(struct zone* zone,
 		 * Use the standard add function to put page_tail on the list,
 		 * but then correct its position so they all end up in order.
 		 */
-		add_page_to_lru_list(zone, page_tail, lru);
+		add_page_to_lru_list(lruvec, page_tail, lru);
 		list_head = page_tail->lru.prev;
 		list_move_tail(&page_tail->lru, list_head);
 	}
diff --git a/mm/vmscan.c b/mm/vmscan.c
index dc17f61..767d3ac 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1292,6 +1292,7 @@ int isolate_lru_page(struct page *page)
 
 	if (PageLRU(page)) {
 		struct zone *zone = page_zone(page);
+		struct lruvec *lruvec;
 
 		spin_lock_irq(&zone->lru_lock);
 		if (PageLRU(page)) {
@@ -1299,8 +1300,8 @@ int isolate_lru_page(struct page *page)
 			ret = 0;
 			get_page(page);
 			ClearPageLRU(page);
-
-			del_page_from_lru_list(zone, page, lru);
+			lruvec = page_lruvec(page);
+			del_page_from_lru_list(lruvec, page, lru);
 		}
 		spin_unlock_irq(&zone->lru_lock);
 	}
@@ -1345,6 +1346,7 @@ putback_inactive_pages(struct lruvec *lruvec,
 	 */
 	while (!list_empty(page_list)) {
 		struct page *page = lru_to_page(page_list);
+		struct lruvec *lruvec;
 		int lru;
 
 		VM_BUG_ON(PageLRU(page));
@@ -1357,7 +1359,10 @@ putback_inactive_pages(struct lruvec *lruvec,
 		}
 		SetPageLRU(page);
 		lru = page_lru(page);
-		add_page_to_lru_list(zone, page, lru);
+
+		/* can differ only on lumpy reclaim */
+		lruvec = page_lruvec(page);
+		add_page_to_lru_list(lruvec, page, lru);
 		if (is_active_lru(lru)) {
 			int file = is_file_lru(lru);
 			int numpages = hpage_nr_pages(page);
@@ -1366,7 +1371,7 @@ putback_inactive_pages(struct lruvec *lruvec,
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
-			del_page_from_lru_list(zone, page, lru);
+			del_page_from_lru_list(lruvec, page, lru);
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&zone->lru_lock);
@@ -1640,6 +1645,7 @@ static void move_active_pages_to_lru(struct zone *zone,
 		VM_BUG_ON(PageLRU(page));
 		SetPageLRU(page);
 
+		/* can differ only on lumpy reclaim */
 		lruvec = page_lruvec(page);
 		list_move(&page->lru, &lruvec->pages_lru[lru]);
 		numpages = hpage_nr_pages(page);
@@ -1649,7 +1655,7 @@ static void move_active_pages_to_lru(struct zone *zone,
 		if (put_page_testzero(page)) {
 			__ClearPageLRU(page);
 			__ClearPageActive(page);
-			del_page_from_lru_list(zone, page, lru);
+			del_page_from_lru_list(lruvec, page, lru);
 
 			if (unlikely(PageCompound(page))) {
 				spin_unlock_irq(&zone->lru_lock);


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 14/22] mm: push lruvec into update_page_reclaim_stat()
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (12 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 13/22] mm: move page-to-lruvec translation upper Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 15/22] mm: push lruvecs from pagevec_lru_move_fn() to iterator Konstantin Khlebnikov
                   ` (8 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Push the lruvec pointer into update_page_reclaim_stat() (see the sketch below):
* drop the page argument
* drop the active and file arguments and derive them from lru
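
A sketch of the resulting call site (the example_* helper is hypothetical):

static void example_add_active_anon(struct lruvec *lruvec, struct page *page)
{
	enum lru_list lru = LRU_ACTIVE_ANON;	/* assumed for the sketch */

	SetPageActive(page);
	add_page_to_lru_list(lruvec, page, lru);
	update_page_reclaim_stat(lruvec, lru);
}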

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/swap.c |   30 +++++++++---------------------
 1 files changed, 9 insertions(+), 21 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 0167d6f..a549f11 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -281,24 +281,19 @@ void rotate_reclaimable_page(struct page *page)
 	}
 }
 
-static void update_page_reclaim_stat(struct zone *zone, struct page *page,
-				     int file, int rotated)
+static void update_page_reclaim_stat(struct lruvec *lruvec, enum lru_list lru)
 {
-	struct zone_reclaim_stat *reclaim_stat;
-
-	reclaim_stat = &page_lruvec(page)->reclaim_stat;
+	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
+	int file = is_file_lru(lru);
 
 	reclaim_stat->recent_scanned[file]++;
-	if (rotated)
+	if (is_active_lru(lru))
 		reclaim_stat->recent_rotated[file]++;
 }
 
 static void __activate_page(struct page *page, void *arg)
 {
-	struct zone *zone = page_zone(page);
-
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
-		int file = page_is_file_cache(page);
 		int lru = page_lru_base_type(page);
 		struct lruvec *lruvec = page_lruvec(page);
 
@@ -309,7 +304,7 @@ static void __activate_page(struct page *page, void *arg)
 		add_page_to_lru_list(lruvec, page, lru);
 		__count_vm_event(PGACTIVATE);
 
-		update_page_reclaim_stat(zone, page, file, 1);
+		update_page_reclaim_stat(lruvec, lru);
 	}
 }
 
@@ -372,8 +367,6 @@ EXPORT_SYMBOL(mark_page_accessed);
 
 static void __lru_cache_add_list(struct list_head *pages, enum lru_list lru)
 {
-	int file = is_file_lru(lru);
-	int active = is_active_lru(lru);
 	struct page *page, *next;
 	struct lruvec *lruvec;
 	struct zone *pagezone, *zone = NULL;
@@ -392,10 +385,10 @@ static void __lru_cache_add_list(struct list_head *pages, enum lru_list lru)
 		VM_BUG_ON(PageUnevictable(page));
 		VM_BUG_ON(PageLRU(page));
 		SetPageLRU(page);
-		if (active)
+		if (is_active_lru(lru))
 			SetPageActive(page);
-		update_page_reclaim_stat(zone, page, file, active);
 		lruvec = page_lruvec(page);
+		update_page_reclaim_stat(lruvec, lru);
 		add_page_to_lru_list(lruvec, page, lru);
 		if (unlikely(put_page_testzero(page))) {
 			__ClearPageLRU(page);
@@ -519,7 +512,6 @@ static void lru_deactivate_fn(struct page *page, void *arg)
 {
 	int lru, file;
 	bool active;
-	struct zone *zone = page_zone(page);
 	struct lruvec *lruvec;
 
 	if (!PageLRU(page))
@@ -560,7 +552,7 @@ static void lru_deactivate_fn(struct page *page, void *arg)
 
 	if (active)
 		__count_vm_event(PGDEACTIVATE);
-	update_page_reclaim_stat(zone, page, file, 0);
+	update_page_reclaim_stat(lruvec, lru);
 }
 
 /*
@@ -727,9 +719,7 @@ EXPORT_SYMBOL(__pagevec_release);
 void lru_add_page_tail(struct zone* zone,
 		       struct page *page, struct page *page_tail)
 {
-	int active;
 	enum lru_list lru;
-	const int file = 0;
 	struct lruvec *lruvec = page_lruvec(page);
 
 	VM_BUG_ON(!PageHead(page));
@@ -742,13 +732,11 @@ void lru_add_page_tail(struct zone* zone,
 	if (page_evictable(page_tail, NULL)) {
 		if (PageActive(page)) {
 			SetPageActive(page_tail);
-			active = 1;
 			lru = LRU_ACTIVE_ANON;
 		} else {
-			active = 0;
 			lru = LRU_INACTIVE_ANON;
 		}
-		update_page_reclaim_stat(zone, page_tail, file, active);
+		update_page_reclaim_stat(lruvec, lru);
 	} else {
 		SetPageUnevictable(page_tail);
 		lru = LRU_UNEVICTABLE;


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 15/22] mm: push lruvecs from pagevec_lru_move_fn() to iterator
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (13 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 14/22] mm: push lruvec into update_page_reclaim_stat() Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 16/22] mm: introduce lruvec locking primitives Konstantin Khlebnikov
                   ` (7 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Push the lruvec pointer from pagevec_lru_move_fn() into the iterator callbacks.
Push the lruvec pointer into lru_add_page_tail() as well (see the sketch below).
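
A sketch of the new callback signature (hypothetical example, modelled on the
move-tail callback below):

static void example_move_fn(struct lruvec *lruvec, struct page *page, void *arg)
{
	int *nr_moved = arg;

	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
		enum lru_list lru = page_lru_base_type(page);

		list_move_tail(&page->lru, &lruvec->pages_lru[lru]);
		(*nr_moved)++;
	}
}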

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 include/linux/swap.h |    2 +-
 mm/huge_memory.c     |    4 +++-
 mm/swap.c            |   23 +++++++++++------------
 3 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 7394100..e0b1674 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -224,7 +224,7 @@ extern unsigned int nr_free_pagecache_pages(void);
 /* linux/mm/swap.c */
 extern void __lru_cache_add(struct page *, enum lru_list lru);
 extern void lru_cache_add_lru(struct page *, enum lru_list lru);
-extern void lru_add_page_tail(struct zone* zone,
+extern void lru_add_page_tail(struct lruvec *lruvec,
 			      struct page *page, struct page *page_tail);
 extern void lru_cache_add_list(struct list_head *pages,
 			       int size, enum lru_list lru);
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 91d3efb..09e7069 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1229,10 +1229,12 @@ static void __split_huge_page_refcount(struct page *page)
 {
 	int i;
 	struct zone *zone = page_zone(page);
+	struct lruvec *lruvec;
 	int tail_count = 0;
 
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
 	spin_lock_irq(&zone->lru_lock);
+	lruvec = page_lruvec(page);
 	compound_lock(page);
 	/* complete memcg works before add pages to LRU */
 	mem_cgroup_split_huge_fixup(page);
@@ -1308,7 +1310,7 @@ static void __split_huge_page_refcount(struct page *page)
 		BUG_ON(!PageSwapBacked(page_tail));
 
 
-		lru_add_page_tail(zone, page, page_tail);
+		lru_add_page_tail(lruvec, page, page_tail);
 	}
 	atomic_sub(tail_count, &page->_count);
 	BUG_ON(atomic_read(&page->_count) <= 0);
diff --git a/mm/swap.c b/mm/swap.c
index a549f11..ca51e5f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -209,7 +209,8 @@ void put_pages_list(struct list_head *pages)
 EXPORT_SYMBOL(put_pages_list);
 
 static void pagevec_lru_move_fn(struct pagevec *pvec,
-				void (*move_fn)(struct page *page, void *arg),
+				void (*move_fn)(struct lruvec *lruvec,
+						struct page *page, void *arg),
 				void *arg)
 {
 	int i;
@@ -219,6 +220,7 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
 		struct zone *pagezone = page_zone(page);
+		struct lruvec *lruvec;
 
 		if (pagezone != zone) {
 			if (zone)
@@ -227,7 +229,8 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 			spin_lock_irqsave(&zone->lru_lock, flags);
 		}
 
-		(*move_fn)(page, arg);
+		lruvec = page_lruvec(page);
+		(*move_fn)(lruvec, page, arg);
 	}
 	if (zone)
 		spin_unlock_irqrestore(&zone->lru_lock, flags);
@@ -235,13 +238,13 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 	pagevec_reinit(pvec);
 }
 
-static void pagevec_move_tail_fn(struct page *page, void *arg)
+static void pagevec_move_tail_fn(struct lruvec *lruvec,
+				 struct page *page, void *arg)
 {
 	int *pgmoved = arg;
 
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		enum lru_list lru = page_lru_base_type(page);
-		struct lruvec *lruvec = page_lruvec(page);
 
 		list_move_tail(&page->lru, &lruvec->pages_lru[lru]);
 		(*pgmoved)++;
@@ -291,11 +294,10 @@ static void update_page_reclaim_stat(struct lruvec *lruvec, enum lru_list lru)
 		reclaim_stat->recent_rotated[file]++;
 }
 
-static void __activate_page(struct page *page, void *arg)
+static void __activate_page(struct lruvec *lruvec, struct page *page, void *arg)
 {
 	if (PageLRU(page) && !PageActive(page) && !PageUnevictable(page)) {
 		int lru = page_lru_base_type(page);
-		struct lruvec *lruvec = page_lruvec(page);
 
 		del_page_from_lru_list(lruvec, page, lru);
 
@@ -508,11 +510,10 @@ void add_page_to_unevictable_list(struct page *page)
  * be write it out by flusher threads as this is much more effective
  * than the single-page writeout from reclaim.
  */
-static void lru_deactivate_fn(struct page *page, void *arg)
+static void lru_deactivate_fn(struct lruvec *lruvec, struct page *page, void *arg)
 {
 	int lru, file;
 	bool active;
-	struct lruvec *lruvec;
 
 	if (!PageLRU(page))
 		return;
@@ -528,7 +529,6 @@ static void lru_deactivate_fn(struct page *page, void *arg)
 
 	file = page_is_file_cache(page);
 	lru = page_lru_base_type(page);
-	lruvec = page_lruvec(page);
 	del_page_from_lru_list(lruvec, page, lru + active);
 	ClearPageActive(page);
 	ClearPageReferenced(page);
@@ -716,16 +716,15 @@ EXPORT_SYMBOL(__pagevec_release);
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /* used by __split_huge_page_refcount() */
-void lru_add_page_tail(struct zone* zone,
+void lru_add_page_tail(struct lruvec *lruvec,
 		       struct page *page, struct page *page_tail)
 {
 	enum lru_list lru;
-	struct lruvec *lruvec = page_lruvec(page);
 
 	VM_BUG_ON(!PageHead(page));
 	VM_BUG_ON(PageCompound(page_tail));
 	VM_BUG_ON(PageLRU(page_tail));
-	VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&zone->lru_lock));
+	VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&lruvec_zone(lruvec)->lru_lock));
 
 	SetPageLRU(page_tail);
 


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 16/22] mm: introduce lruvec locking primitives
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (14 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 15/22] mm: push lruvecs from pagevec_lru_move_fn() to iterator Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 17/22] mm: handle lruvec relocks on lumpy reclaim Konstantin Khlebnikov
                   ` (6 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

This is initial preparation for lru_lock splitting.

These locking primitives are designed to hide the split nature of lru_lock
and to avoid overhead for the non-split lru_lock in the non-memcg case.

* Lock via lruvec reference

lock_lruvec(lruvec, flags)
lock_lruvec_irq(lruvec)

* Lock via page reference

lock_page_lruvec(page, flags)
lock_page_lruvec_irq(page)
relock_page_lruvec(lruvec, page, flags)
relock_page_lruvec_irq(lruvec, page)
__relock_page_lruvec(lruvec, page)	(argument lruvec is non-NULL)

These always return a pointer to some locked lruvec; the page may still not be
on an LRU, but the PageLRU() flag is stable while the returned lruvec lock is held.
Consecutive relock calls must stay within the same zone.
The caller must guarantee that the page-to-lruvec reference is valid.

* Lock via pfn, for random page

catch_page_lruvec(&lruvec, page)

Returns true on success: the lruvec is locked and PageLRU is set.
The initial lruvec may be NULL. Consecutive calls must be within the same zone.

* Unlock

unlock_lruvec(lruvec, flags)
unlock_lruvec_irq(lruvec)

* Wait

wait_lruvec_unlock(lruvec)
Wait for the lruvec lock to be released; the caller must hold a stable
reference to the lruvec.

__wait_lruvec_unlock(lruvec)
Wait for the lruvec lock to be released before taking another lru lock for the
same page; a no-op if there is only one possible lruvec per page.
Used when switching the page-to-lruvec reference to stabilize the PageLRU flag.
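
For illustration, a minimal usage sketch of the relock pattern (the caller
walk_pages_example() below is hypothetical and not part of this patch):
relock_page_lruvec_irq() takes the lock that matches the page and drops the
previously held one only when the page belongs to a different lruvec.

/* Illustrative sketch only, not part of this patch */
static void walk_pages_example(struct page **pages, int nr)
{
	struct lruvec *lruvec = NULL;
	int i;

	for (i = 0; i < nr; i++) {
		struct page *page = pages[i];

		lruvec = relock_page_lruvec_irq(lruvec, page);
		if (PageLRU(page)) {
			/* PageLRU() is stable while this lock is held */
			/* ... operate on the page under the lru lock ... */
		}
	}
	if (lruvec)
		unlock_lruvec_irq(lruvec);
}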

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/huge_memory.c |    8 +-
 mm/internal.h    |  200 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/memcontrol.c  |    8 +-
 mm/swap.c        |   78 ++++++---------------
 mm/vmscan.c      |   77 +++++++++------------
 5 files changed, 263 insertions(+), 108 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 09e7069..74996b8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1228,13 +1228,11 @@ static int __split_huge_page_splitting(struct page *page,
 static void __split_huge_page_refcount(struct page *page)
 {
 	int i;
-	struct zone *zone = page_zone(page);
 	struct lruvec *lruvec;
 	int tail_count = 0;
 
 	/* prevent PageLRU to go away from under us, and freeze lru stats */
-	spin_lock_irq(&zone->lru_lock);
-	lruvec = page_lruvec(page);
+	lruvec = lock_page_lruvec_irq(page);
 	compound_lock(page);
 	/* complete memcg works before add pages to LRU */
 	mem_cgroup_split_huge_fixup(page);
@@ -1316,11 +1314,11 @@ static void __split_huge_page_refcount(struct page *page)
 	BUG_ON(atomic_read(&page->_count) <= 0);
 
 	__dec_zone_page_state(page, NR_ANON_TRANSPARENT_HUGEPAGES);
-	__mod_zone_page_state(zone, NR_ANON_PAGES, HPAGE_PMD_NR);
+	__mod_zone_page_state(lruvec_zone(lruvec), NR_ANON_PAGES, HPAGE_PMD_NR);
 
 	ClearPageCompound(page);
 	compound_unlock(page);
-	spin_unlock_irq(&zone->lru_lock);
+	unlock_lruvec_irq(lruvec);
 
 	for (i = 1; i < HPAGE_PMD_NR; i++) {
 		struct page *page_tail = page + i;
diff --git a/mm/internal.h b/mm/internal.h
index 2189af4..a1a3206 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -13,6 +13,206 @@
 
 #include <linux/mm.h>
 
+static inline void lock_lruvec(struct lruvec *lruvec, unsigned long *flags)
+{
+	spin_lock_irqsave(&lruvec_zone(lruvec)->lru_lock, *flags);
+}
+
+static inline void lock_lruvec_irq(struct lruvec *lruvec)
+{
+	spin_lock_irq(&lruvec_zone(lruvec)->lru_lock);
+}
+
+static inline void unlock_lruvec(struct lruvec *lruvec, unsigned long *flags)
+{
+	spin_unlock_irqrestore(&lruvec_zone(lruvec)->lru_lock, *flags);
+}
+
+static inline void unlock_lruvec_irq(struct lruvec *lruvec)
+{
+	spin_unlock_irq(&lruvec_zone(lruvec)->lru_lock);
+}
+
+static inline void wait_lruvec_unlock(struct lruvec *lruvec)
+{
+	spin_unlock_wait(&lruvec_zone(lruvec)->lru_lock);
+}
+
+#ifdef CONFIG_CGROUP_MEM_RES_CTLR
+
+/* Dynamic page to lruvec mapping */
+
+static inline struct lruvec *lock_page_lruvec(struct page *page,
+					      unsigned long *flags)
+{
+	struct zone *zone = page_zone(page);
+
+	spin_lock_irqsave(&zone->lru_lock, *flags);
+	return page_lruvec(page);
+}
+
+static inline struct lruvec *lock_page_lruvec_irq(struct page *page)
+{
+	struct zone *zone = page_zone(page);
+
+	spin_lock_irq(&zone->lru_lock);
+	return page_lruvec(page);
+}
+
+static inline struct lruvec *relock_page_lruvec(struct lruvec *lruvec,
+						struct page *page,
+						unsigned long *flags)
+{
+	struct zone *zone = page_zone(page);
+
+	if (!lruvec || zone != lruvec_zone(lruvec)) {
+		if (lruvec)
+			unlock_lruvec(lruvec, flags);
+		lruvec = lock_page_lruvec(page, flags);
+	}
+
+	return lruvec;
+}
+
+static inline struct lruvec *relock_page_lruvec_irq(struct lruvec *lruvec,
+						    struct page *page)
+{
+	struct zone *zone = page_zone(page);
+
+	if (!lruvec || zone != lruvec_zone(lruvec)) {
+		if (lruvec)
+			unlock_lruvec_irq(lruvec);
+		lruvec = lock_page_lruvec_irq(page);
+	}
+
+	return lruvec;
+}
+
+/* Lock lruvec for other page in the same zone, lruvec must be locked */
+static inline struct lruvec *__relock_page_lruvec(struct lruvec *lruvec,
+						  struct page *page)
+{
+	/* Currently only one lruvec per-zone */
+	return page_lruvec(page);
+}
+
+/*
+ * Try to catch the page in the LRU.
+ * The caller may not have a stable reference to the page.
+ * The page for the next call must be from the same zone.
+ * Returns true if the page was successfully caught in the LRU.
+ */
+static inline bool catch_page_lruvec(struct lruvec **lruvec, struct page *page)
+{
+	struct zone *zone;
+	bool ret = false;
+
+	if (PageLRU(page)) {
+		if (!*lruvec) {
+			zone = page_zone(page);
+			spin_lock_irq(&zone->lru_lock);
+		} else
+			zone = lruvec_zone(*lruvec);
+
+		if (PageLRU(page)) {
+			*lruvec = page_lruvec(page);
+			ret = true;
+		} else
+			*lruvec = &zone->lruvec;
+	}
+
+	return ret;
+}
+
+/* Wait for lruvec unlock before locking other lruvec for the same page */
+static inline void __wait_lruvec_unlock(struct lruvec *lruvec)
+{
+	/* Currently only one lruvec per-zone */
+}
+
+#else /* CONFIG_CGROUP_MEM_RES_CTLR */
+
+/* Fixed page to lruvec mapping */
+
+static inline struct lruvec *lock_page_lruvec(struct page *page,
+					      unsigned long *flags)
+{
+	struct lruvec *lruvec = page_lruvec(page);
+
+	lock_lruvec(lruvec, flags);
+	return lruvec;
+}
+
+static inline struct lruvec *lock_page_lruvec_irq(struct page *page)
+{
+	struct lruvec *lruvec = page_lruvec(page);
+
+	lock_lruvec_irq(lruvec);
+	return lruvec;
+}
+
+static inline struct lruvec *relock_page_lruvec(struct lruvec *locked_lruvec,
+						struct page *page,
+						unsigned long *flags)
+{
+	struct lruvec *lruvec = page_lruvec(page);
+
+	if (unlikely(locked_lruvec != lruvec)) {
+		if (locked_lruvec)
+			unlock_lruvec(locked_lruvec, flags);
+		lock_lruvec(lruvec, flags);
+		locked_lruvec = lruvec;
+	}
+	return locked_lruvec;
+}
+
+static inline struct lruvec *relock_page_lruvec_irq(
+		struct lruvec *locked_lruvec, struct page *page)
+{
+	struct lruvec *lruvec = page_lruvec(page);
+
+	if (unlikely(locked_lruvec != lruvec)) {
+		if (locked_lruvec)
+			unlock_lruvec_irq(locked_lruvec);
+		lock_lruvec_irq(lruvec);
+		locked_lruvec = lruvec;
+	}
+	return locked_lruvec;
+}
+
+/* Lock lruvec for other page in the same zone */
+static inline struct lruvec *__relock_page_lruvec(struct lruvec *locked_lruvec,
+						  struct page *page)
+{
+	/* Currently only one lruvec per-zone */
+	return locked_lruvec;
+}
+
+/* Try to catch page in LRU. */
+static inline bool catch_page_lruvec(struct lruvec **lruvec, struct page *page)
+{
+	bool ret = false;
+
+	if (PageLRU(page)) {
+		if (!*lruvec)
+			*lruvec = lock_page_lruvec_irq(page);
+		else
+			*lruvec = __relock_page_lruvec(*lruvec, page);
+		if (PageLRU(page))
+			ret = true;
+	}
+
+	return ret;
+}
+
+/* Wait for lruvec unlock before locking other lruvec for the same page */
+static inline void __wait_lruvec_unlock(struct lruvec *lruvec)
+{
+	/* Fixed page to lruvec mapping, there is only one possible lruvec */
+}
+
+#endif /* CONFIG_CGROUP_MEM_RES_CTLR */
+
 void free_pgtables(struct mmu_gather *tlb, struct vm_area_struct *start_vma,
 		unsigned long floor, unsigned long ceiling);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ea1fdeb..40e1a66 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3520,19 +3520,19 @@ static int mem_cgroup_force_empty_list(struct mem_cgroup *memcg,
 		struct page *page;
 
 		ret = 0;
-		spin_lock_irqsave(&zone->lru_lock, flags);
+		lock_lruvec(&mz->lruvec, &flags);
 		if (list_empty(list)) {
-			spin_unlock_irqrestore(&zone->lru_lock, flags);
+			unlock_lruvec(&mz->lruvec, &flags);
 			break;
 		}
 		page = list_entry(list->prev, struct page, lru);
 		if (busy == page) {
 			list_move(&page->lru, list);
 			busy = NULL;
-			spin_unlock_irqrestore(&zone->lru_lock, flags);
+			unlock_lruvec(&mz->lruvec, &flags);
 			continue;
 		}
-		spin_unlock_irqrestore(&zone->lru_lock, flags);
+		unlock_lruvec(&mz->lruvec, &flags);
 
 		pc = lookup_page_cgroup(page);
 
diff --git a/mm/swap.c b/mm/swap.c
index ca51e5f..9e81df3 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -53,15 +53,13 @@ static void __page_cache_release(struct page *page)
 {
 	if (PageLRU(page)) {
 		unsigned long flags;
-		struct zone *zone = page_zone(page);
 		struct lruvec *lruvec;
 
-		spin_lock_irqsave(&zone->lru_lock, flags);
+		lruvec = lock_page_lruvec(page, &flags);
 		VM_BUG_ON(!PageLRU(page));
 		__ClearPageLRU(page);
-		lruvec = page_lruvec(page);
 		del_page_from_lru_list(lruvec, page, page_off_lru(page));
-		spin_unlock_irqrestore(&zone->lru_lock, flags);
+		unlock_lruvec(lruvec, &flags);
 	}
 }
 
@@ -214,26 +212,17 @@ static void pagevec_lru_move_fn(struct pagevec *pvec,
 				void *arg)
 {
 	int i;
-	struct zone *zone = NULL;
+	struct lruvec *lruvec = NULL;
 	unsigned long flags = 0;
 
 	for (i = 0; i < pagevec_count(pvec); i++) {
 		struct page *page = pvec->pages[i];
-		struct zone *pagezone = page_zone(page);
-		struct lruvec *lruvec;
-
-		if (pagezone != zone) {
-			if (zone)
-				spin_unlock_irqrestore(&zone->lru_lock, flags);
-			zone = pagezone;
-			spin_lock_irqsave(&zone->lru_lock, flags);
-		}
 
-		lruvec = page_lruvec(page);
+		lruvec = relock_page_lruvec(lruvec, page, &flags);
 		(*move_fn)(lruvec, page, arg);
 	}
-	if (zone)
-		spin_unlock_irqrestore(&zone->lru_lock, flags);
+	if (lruvec)
+		unlock_lruvec(lruvec, &flags);
 	release_pages(pvec->pages, pvec->nr, pvec->cold);
 	pagevec_reinit(pvec);
 }
@@ -340,11 +329,11 @@ static inline void activate_page_drain(int cpu)
 
 void activate_page(struct page *page)
 {
-	struct zone *zone = page_zone(page);
+	struct lruvec *lruvec;
 
-	spin_lock_irq(&zone->lru_lock);
+	lruvec = lock_page_lruvec_irq(page);
 	__activate_page(page, NULL);
-	spin_unlock_irq(&zone->lru_lock);
+	unlock_lruvec_irq(lruvec);
 }
 #endif
 
@@ -370,26 +359,18 @@ EXPORT_SYMBOL(mark_page_accessed);
 static void __lru_cache_add_list(struct list_head *pages, enum lru_list lru)
 {
 	struct page *page, *next;
-	struct lruvec *lruvec;
-	struct zone *pagezone, *zone = NULL;
+	struct lruvec *lruvec = NULL;
 	unsigned long uninitialized_var(flags);
 	LIST_HEAD(free_pages);
 
 	list_for_each_entry_safe(page, next, pages, lru) {
-		pagezone = page_zone(page);
-		if (pagezone != zone) {
-			if (zone)
-				spin_unlock_irqrestore(&zone->lru_lock, flags);
-			zone = pagezone;
-			spin_lock_irqsave(&zone->lru_lock, flags);
-		}
+		lruvec = relock_page_lruvec(lruvec, page, &flags);
 		VM_BUG_ON(PageActive(page));
 		VM_BUG_ON(PageUnevictable(page));
 		VM_BUG_ON(PageLRU(page));
 		SetPageLRU(page);
 		if (is_active_lru(lru))
 			SetPageActive(page);
-		lruvec = page_lruvec(page);
 		update_page_reclaim_stat(lruvec, lru);
 		add_page_to_lru_list(lruvec, page, lru);
 		if (unlikely(put_page_testzero(page))) {
@@ -397,15 +378,15 @@ static void __lru_cache_add_list(struct list_head *pages, enum lru_list lru)
 			__ClearPageActive(page);
 			del_page_from_lru_list(lruvec, page, lru);
 			if (unlikely(PageCompound(page))) {
-				spin_unlock_irqrestore(&zone->lru_lock, flags);
-				zone = NULL;
+				unlock_lruvec(lruvec, &flags);
+				lruvec = NULL;
 				(*get_compound_page_dtor(page))(page);
 			} else
 				list_add_tail(&page->lru, &free_pages);
 		}
 	}
-	if (zone)
-		spin_unlock_irqrestore(&zone->lru_lock, flags);
+	if (lruvec)
+		unlock_lruvec(lruvec, &flags);
 
 	free_hot_cold_page_list(&free_pages, 0);
 }
@@ -478,15 +459,13 @@ void lru_cache_add_lru(struct page *page, enum lru_list lru)
  */
 void add_page_to_unevictable_list(struct page *page)
 {
-	struct zone *zone = page_zone(page);
 	struct lruvec *lruvec;
 
-	spin_lock_irq(&zone->lru_lock);
+	lruvec = lock_page_lruvec_irq(page);
 	SetPageUnevictable(page);
 	SetPageLRU(page);
-	lruvec = page_lruvec(page);
 	add_page_to_lru_list(lruvec, page, LRU_UNEVICTABLE);
-	spin_unlock_irq(&zone->lru_lock);
+	unlock_lruvec_irq(lruvec);
 }
 
 /*
@@ -653,16 +632,16 @@ void release_pages(struct page **pages, int nr, int cold)
 {
 	int i;
 	LIST_HEAD(pages_to_free);
-	struct zone *zone = NULL;
+	struct lruvec *lruvec = NULL;
 	unsigned long uninitialized_var(flags);
 
 	for (i = 0; i < nr; i++) {
 		struct page *page = pages[i];
 
 		if (unlikely(PageCompound(page))) {
-			if (zone) {
-				spin_unlock_irqrestore(&zone->lru_lock, flags);
-				zone = NULL;
+			if (lruvec) {
+				unlock_lruvec(lruvec, &flags);
+				lruvec = NULL;
 			}
 			put_compound_page(page);
 			continue;
@@ -672,16 +651,7 @@ void release_pages(struct page **pages, int nr, int cold)
 			continue;
 
 		if (PageLRU(page)) {
-			struct zone *pagezone = page_zone(page);
-			struct lruvec *lruvec = page_lruvec(page);
-
-			if (pagezone != zone) {
-				if (zone)
-					spin_unlock_irqrestore(&zone->lru_lock,
-									flags);
-				zone = pagezone;
-				spin_lock_irqsave(&zone->lru_lock, flags);
-			}
+			lruvec = relock_page_lruvec(lruvec, page, &flags);
 			VM_BUG_ON(!PageLRU(page));
 			__ClearPageLRU(page);
 			del_page_from_lru_list(lruvec, page, page_off_lru(page));
@@ -689,8 +659,8 @@ void release_pages(struct page **pages, int nr, int cold)
 
 		list_add(&page->lru, &pages_to_free);
 	}
-	if (zone)
-		spin_unlock_irqrestore(&zone->lru_lock, flags);
+	if (lruvec)
+		unlock_lruvec(lruvec, &flags);
 
 	free_hot_cold_page_list(&pages_to_free, cold);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 767d3ac..4dba1df 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1291,19 +1291,17 @@ int isolate_lru_page(struct page *page)
 	VM_BUG_ON(!page_count(page));
 
 	if (PageLRU(page)) {
-		struct zone *zone = page_zone(page);
 		struct lruvec *lruvec;
 
-		spin_lock_irq(&zone->lru_lock);
+		lruvec = lock_page_lruvec_irq(page);
 		if (PageLRU(page)) {
 			int lru = page_lru(page);
 			ret = 0;
 			get_page(page);
 			ClearPageLRU(page);
-			lruvec = page_lruvec(page);
 			del_page_from_lru_list(lruvec, page, lru);
 		}
-		spin_unlock_irq(&zone->lru_lock);
+		unlock_lruvec_irq(lruvec);
 	}
 	return ret;
 }
@@ -1338,7 +1336,6 @@ putback_inactive_pages(struct lruvec *lruvec,
 		       struct list_head *page_list)
 {
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
-	struct zone *zone = lruvec_zone(lruvec);
 	LIST_HEAD(pages_to_free);
 
 	/*
@@ -1346,15 +1343,14 @@ putback_inactive_pages(struct lruvec *lruvec,
 	 */
 	while (!list_empty(page_list)) {
 		struct page *page = lru_to_page(page_list);
-		struct lruvec *lruvec;
 		int lru;
 
 		VM_BUG_ON(PageLRU(page));
 		list_del(&page->lru);
 		if (unlikely(!page_evictable(page, NULL))) {
-			spin_unlock_irq(&zone->lru_lock);
+			unlock_lruvec_irq(lruvec);
 			putback_lru_page(page);
-			spin_lock_irq(&zone->lru_lock);
+			lock_lruvec_irq(lruvec);
 			continue;
 		}
 		SetPageLRU(page);
@@ -1374,9 +1370,9 @@ putback_inactive_pages(struct lruvec *lruvec,
 			del_page_from_lru_list(lruvec, page, lru);
 
 			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&zone->lru_lock);
+				unlock_lruvec_irq(lruvec);
 				(*get_compound_page_dtor(page))(page);
-				spin_lock_irq(&zone->lru_lock);
+				lock_lruvec_irq(lruvec);
 			} else
 				list_add(&page->lru, &pages_to_free);
 		}
@@ -1512,7 +1508,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	if (!sc->may_writepage)
 		isolate_mode |= ISOLATE_CLEAN;
 
-	spin_lock_irq(&zone->lru_lock);
+	lock_lruvec_irq(lruvec);
 
 	nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &page_list,
 				     &nr_scanned, sc, isolate_mode, 0, file);
@@ -1528,7 +1524,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	}
 
 	if (nr_taken == 0) {
-		spin_unlock_irq(&zone->lru_lock);
+		unlock_lruvec_irq(lruvec);
 		return 0;
 	}
 
@@ -1537,7 +1533,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON, nr_anon);
 	__mod_zone_page_state(zone, NR_ISOLATED_FILE, nr_file);
 
-	spin_unlock_irq(&zone->lru_lock);
+	unlock_lruvec_irq(lruvec);
 
 	nr_reclaimed = shrink_page_list(&page_list, lruvec, sc, priority,
 						&nr_dirty, &nr_writeback);
@@ -1549,7 +1545,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 					priority, &nr_dirty, &nr_writeback);
 	}
 
-	spin_lock_irq(&zone->lru_lock);
+	lock_lruvec_irq(lruvec);
 
 	if (current_is_kswapd())
 		__count_vm_events(KSWAPD_STEAL, nr_reclaimed);
@@ -1560,7 +1556,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON, -nr_anon);
 	__mod_zone_page_state(zone, NR_ISOLATED_FILE, -nr_file);
 
-	spin_unlock_irq(&zone->lru_lock);
+	unlock_lruvec_irq(lruvec);
 
 	free_hot_cold_page_list(&page_list, 1);
 
@@ -1616,7 +1612,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
  * But we had to alter page->flags anyway.
  */
 
-static void move_active_pages_to_lru(struct zone *zone,
+static void move_active_pages_to_lru(struct lruvec *lruvec,
 				     struct list_head *list,
 				     struct list_head *pages_to_free,
 				     enum lru_list lru)
@@ -1625,7 +1621,7 @@ static void move_active_pages_to_lru(struct zone *zone,
 	struct page *page;
 
 	if (buffer_heads_over_limit) {
-		spin_unlock_irq(&zone->lru_lock);
+		unlock_lruvec_irq(lruvec);
 		list_for_each_entry(page, list, lru) {
 			if (page_has_private(page) && trylock_page(page)) {
 				if (page_has_private(page))
@@ -1633,11 +1629,10 @@ static void move_active_pages_to_lru(struct zone *zone,
 				unlock_page(page);
 			}
 		}
-		spin_lock_irq(&zone->lru_lock);
+		lock_lruvec_irq(lruvec);
 	}
 
 	while (!list_empty(list)) {
-		struct lruvec *lruvec;
 		int numpages;
 
 		page = lru_to_page(list);
@@ -1658,14 +1653,14 @@ static void move_active_pages_to_lru(struct zone *zone,
 			del_page_from_lru_list(lruvec, page, lru);
 
 			if (unlikely(PageCompound(page))) {
-				spin_unlock_irq(&zone->lru_lock);
+				unlock_lruvec_irq(lruvec);
 				(*get_compound_page_dtor(page))(page);
-				spin_lock_irq(&zone->lru_lock);
+				lock_lruvec_irq(lruvec);
 			} else
 				list_add(&page->lru, pages_to_free);
 		}
 	}
-	__mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
+	__mod_zone_page_state(lruvec_zone(lruvec), NR_LRU_BASE + lru, pgmoved);
 	if (!is_active_lru(lru))
 		__count_vm_events(PGDEACTIVATE, pgmoved);
 }
@@ -1694,7 +1689,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	if (!sc->may_writepage)
 		isolate_mode |= ISOLATE_CLEAN;
 
-	spin_lock_irq(&zone->lru_lock);
+	lock_lruvec_irq(lruvec);
 
 	nr_taken = isolate_lru_pages(nr_to_scan, lruvec, &l_hold, &nr_scanned,
 				     sc, isolate_mode, 1, file);
@@ -1710,7 +1705,8 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	else
 		__mod_zone_page_state(zone, NR_ACTIVE_ANON, -nr_taken);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, nr_taken);
-	spin_unlock_irq(&zone->lru_lock);
+
+	unlock_lruvec_irq(lruvec);
 
 	while (!list_empty(&l_hold)) {
 		cond_resched();
@@ -1746,7 +1742,7 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	/*
 	 * Move pages back to the lru list.
 	 */
-	spin_lock_irq(&zone->lru_lock);
+	lock_lruvec_irq(lruvec);
 	/*
 	 * Count referenced pages from currently used mappings as rotated,
 	 * even though only some of them are actually re-activated.  This
@@ -1755,12 +1751,12 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	move_active_pages_to_lru(zone, &l_active, &l_hold,
+	move_active_pages_to_lru(lruvec, &l_active, &l_hold,
 						LRU_ACTIVE + file * LRU_FILE);
-	move_active_pages_to_lru(zone, &l_inactive, &l_hold,
+	move_active_pages_to_lru(lruvec, &l_inactive, &l_hold,
 						LRU_BASE   + file * LRU_FILE);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, -nr_taken);
-	spin_unlock_irq(&zone->lru_lock);
+	unlock_lruvec_irq(lruvec);
 
 	free_hot_cold_page_list(&l_hold, 1);
 }
@@ -1940,7 +1936,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 	 *
 	 * anon in [0], file in [1]
 	 */
-	spin_lock_irq(&zone->lru_lock);
+	lock_lruvec_irq(lruvec);
 	if (unlikely(reclaim_stat->recent_scanned[0] > anon / 4)) {
 		reclaim_stat->recent_scanned[0] /= 2;
 		reclaim_stat->recent_rotated[0] /= 2;
@@ -1961,7 +1957,7 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 
 	fp = (file_prio + 1) * (reclaim_stat->recent_scanned[1] + 1);
 	fp /= reclaim_stat->recent_rotated[1] + 1;
-	spin_unlock_irq(&zone->lru_lock);
+	unlock_lruvec_irq(lruvec);
 
 	fraction[0] = ap;
 	fraction[1] = fp;
@@ -3510,24 +3506,16 @@ int page_evictable(struct page *page, struct vm_area_struct *vma)
  */
 void check_move_unevictable_pages(struct page **pages, int nr_pages)
 {
-	struct lruvec *lruvec;
-	struct zone *zone = NULL;
+	struct lruvec *lruvec = NULL;
 	int pgscanned = 0;
 	int pgrescued = 0;
 	int i;
 
 	for (i = 0; i < nr_pages; i++) {
 		struct page *page = pages[i];
-		struct zone *pagezone;
 
 		pgscanned++;
-		pagezone = page_zone(page);
-		if (pagezone != zone) {
-			if (zone)
-				spin_unlock_irq(&zone->lru_lock);
-			zone = pagezone;
-			spin_lock_irq(&zone->lru_lock);
-		}
+		lruvec = relock_page_lruvec_irq(lruvec, page);
 
 		if (!PageLRU(page) || !PageUnevictable(page))
 			continue;
@@ -3537,21 +3525,20 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
 
 			VM_BUG_ON(PageActive(page));
 			ClearPageUnevictable(page);
-			__dec_zone_state(zone, NR_UNEVICTABLE);
-			lruvec = page_lruvec(page);
+			__dec_zone_state(lruvec_zone(lruvec), NR_UNEVICTABLE);
 			lruvec->pages_count[LRU_UNEVICTABLE]--;
 			VM_BUG_ON((long)lruvec->pages_count[LRU_UNEVICTABLE] < 0);
 			lruvec->pages_count[lru]++;
 			list_move(&page->lru, &lruvec->pages_lru[lru]);
-			__inc_zone_state(zone, NR_INACTIVE_ANON + lru);
+			__inc_zone_state(lruvec_zone(lruvec), NR_INACTIVE_ANON + lru);
 			pgrescued++;
 		}
 	}
 
-	if (zone) {
+	if (lruvec) {
 		__count_vm_events(UNEVICTABLE_PGRESCUED, pgrescued);
 		__count_vm_events(UNEVICTABLE_PGSCANNED, pgscanned);
-		spin_unlock_irq(&zone->lru_lock);
+		unlock_lruvec_irq(lruvec);
 	}
 }
 #endif /* CONFIG_SHMEM */


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 17/22] mm: handle lruvec relocks on lumpy reclaim
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (15 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 16/22] mm: introduce lruvec locking primitives Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 18/22] mm: handle lruvec relocks in compaction Konstantin Khlebnikov
                   ` (5 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Prepare for lock splitting in lumpy reclaim logic.
Now move_active_pages_to_lru() and putback_inactive_pages()
can put pages into different lruvecs.

* relock lruvec before SetPageLRU()
* update the reclaim_stat pointer after relocks
* return the currently locked lruvec (see the sketch below)
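
For illustration only, a minimal sketch of this convention (the helper
putback_one() below is hypothetical and not part of this patch): the callee
relocks before SetPageLRU() and returns the currently locked lruvec, and the
caller must unlock whatever it gets back.

/* sketch: lruvec must already be locked by the caller */
static struct lruvec *putback_one(struct lruvec *lruvec, struct page *page)
{
	/* the page may belong to a different lruvec after lumpy isolation */
	lruvec = __relock_page_lruvec(lruvec, page);
	SetPageLRU(page);
	add_page_to_lru_list(lruvec, page, page_lru(page));
	return lruvec;	/* caller unlocks the returned lruvec */
}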

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/vmscan.c |   48 ++++++++++++++++++++++++++++++++++--------------
 1 files changed, 34 insertions(+), 14 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 4dba1df..39b4525 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1120,6 +1120,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		unsigned long *nr_scanned, struct scan_control *sc,
 		isolate_mode_t mode, int active, int file)
 {
+	struct lruvec *cursor_lruvec = lruvec;
 	struct list_head *src;
 	unsigned long nr_taken = 0;
 	unsigned long nr_lumpy_taken = 0;
@@ -1203,14 +1204,16 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 			    !PageSwapCache(cursor_page))
 				break;
 
+			/* Switch cursor_lruvec lock for lumpy isolate */
+			if (!catch_page_lruvec(&cursor_lruvec, cursor_page))
+				continue;
+
 			if (__isolate_lru_page(cursor_page, mode, file) == 0) {
 				unsigned int isolated_pages;
-				struct lruvec *cursor_lruvec;
 				int cursor_lru = page_lru(cursor_page);
 
 				list_move(&cursor_page->lru, dst);
 				isolated_pages = hpage_nr_pages(cursor_page);
-				cursor_lruvec = page_lruvec(cursor_page);
 				cursor_lruvec->pages_count[cursor_lru] -=
 								isolated_pages;
 				VM_BUG_ON((long)cursor_lruvec->
@@ -1241,6 +1244,9 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 			}
 		}
 
+		/* Restore original lruvec lock */
+		cursor_lruvec = __relock_page_lruvec(cursor_lruvec, page);
+
 		/* If we break out of the loop above, lumpy reclaim failed */
 		if (pfn < end_pfn)
 			nr_lumpy_failed++;
@@ -1331,7 +1337,10 @@ static int too_many_isolated(struct zone *zone, int file,
 	return isolated > inactive;
 }
 
-static noinline_for_stack void
+/*
+ * Returns currently locked lruvec
+ */
+static noinline_for_stack struct lruvec *
 putback_inactive_pages(struct lruvec *lruvec,
 		       struct list_head *page_list)
 {
@@ -1353,11 +1362,14 @@ putback_inactive_pages(struct lruvec *lruvec,
 			lock_lruvec_irq(lruvec);
 			continue;
 		}
+
+		/* can differ only on lumpy reclaim */
+		lruvec = __relock_page_lruvec(lruvec, page);
+		reclaim_stat = &lruvec->reclaim_stat;
+
 		SetPageLRU(page);
 		lru = page_lru(page);
 
-		/* can differ only on lumpy reclaim */
-		lruvec = page_lruvec(page);
 		add_page_to_lru_list(lruvec, page, lru);
 		if (is_active_lru(lru)) {
 			int file = is_file_lru(lru);
@@ -1382,6 +1394,8 @@ putback_inactive_pages(struct lruvec *lruvec,
 	 * To save our caller's stack, now use input list for pages to free.
 	 */
 	list_splice(&pages_to_free, page_list);
+
+	return lruvec;
 }
 
 static noinline_for_stack void
@@ -1551,7 +1565,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 		__count_vm_events(KSWAPD_STEAL, nr_reclaimed);
 	__count_zone_vm_events(PGSTEAL, zone, nr_reclaimed);
 
-	putback_inactive_pages(lruvec, &page_list);
+	lruvec = putback_inactive_pages(lruvec, &page_list);
 
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON, -nr_anon);
 	__mod_zone_page_state(zone, NR_ISOLATED_FILE, -nr_file);
@@ -1610,12 +1624,15 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
  *
  * The downside is that we have to touch page->_count against each page.
  * But we had to alter page->flags anyway.
+ *
+ * Returns currently locked lruvec
  */
 
-static void move_active_pages_to_lru(struct lruvec *lruvec,
-				     struct list_head *list,
-				     struct list_head *pages_to_free,
-				     enum lru_list lru)
+static struct lruvec *
+move_active_pages_to_lru(struct lruvec *lruvec,
+			 struct list_head *list,
+			 struct list_head *pages_to_free,
+			 enum lru_list lru)
 {
 	unsigned long pgmoved = 0;
 	struct page *page;
@@ -1637,11 +1654,12 @@ static void move_active_pages_to_lru(struct lruvec *lruvec,
 
 		page = lru_to_page(list);
 
+		/* can differ only on lumpy reclaim */
+		lruvec = __relock_page_lruvec(lruvec, page);
+
 		VM_BUG_ON(PageLRU(page));
 		SetPageLRU(page);
 
-		/* can differ only on lumpy reclaim */
-		lruvec = page_lruvec(page);
 		list_move(&page->lru, &lruvec->pages_lru[lru]);
 		numpages = hpage_nr_pages(page);
 		lruvec->pages_count[lru] += numpages;
@@ -1663,6 +1681,8 @@ static void move_active_pages_to_lru(struct lruvec *lruvec,
 	__mod_zone_page_state(lruvec_zone(lruvec), NR_LRU_BASE + lru, pgmoved);
 	if (!is_active_lru(lru))
 		__count_vm_events(PGDEACTIVATE, pgmoved);
+
+	return lruvec;
 }
 
 static void shrink_active_list(unsigned long nr_to_scan,
@@ -1751,9 +1771,9 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	move_active_pages_to_lru(lruvec, &l_active, &l_hold,
+	lruvec = move_active_pages_to_lru(lruvec, &l_active, &l_hold,
 						LRU_ACTIVE + file * LRU_FILE);
-	move_active_pages_to_lru(lruvec, &l_inactive, &l_hold,
+	lruvec = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold,
 						LRU_BASE   + file * LRU_FILE);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, -nr_taken);
 	unlock_lruvec_irq(lruvec);


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 18/22] mm: handle lruvec relocks in compaction
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (16 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 17/22] mm: handle lruvec relocks on lumpy reclaim Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 19/22] mm: handle lruvec relock in memory controller Konstantin Khlebnikov
                   ` (4 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Prepare for lru_lock splitting in memory compaction code.

* disable irqs in acct_isolated() for __mod_zone_page_state();
  the lru_lock isn't required there.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/compaction.c |   30 ++++++++++++++++--------------
 1 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index a976b28..1e89165 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -224,8 +224,10 @@ static void acct_isolated(struct zone *zone, struct compact_control *cc)
 	list_for_each_entry(page, &cc->migratepages, lru)
 		count[!!page_is_file_cache(page)]++;
 
+	local_irq_disable();
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON, count[0]);
 	__mod_zone_page_state(zone, NR_ISOLATED_FILE, count[1]);
+	local_irq_enable();
 }
 
 /* Similar to reclaim, but different enough that they don't share logic */
@@ -262,7 +264,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 	unsigned long nr_scanned = 0, nr_isolated = 0;
 	struct list_head *migratelist = &cc->migratepages;
 	isolate_mode_t mode = ISOLATE_ACTIVE|ISOLATE_INACTIVE;
-	struct lruvec *lruvec;
+	struct lruvec *lruvec = NULL;
 
 	/* Do not scan outside zone boundaries */
 	low_pfn = max(cc->migrate_pfn, zone->zone_start_pfn);
@@ -294,25 +296,24 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 
 	/* Time to isolate some pages for migration */
 	cond_resched();
-	spin_lock_irq(&zone->lru_lock);
 	for (; low_pfn < end_pfn; low_pfn++) {
 		struct page *page;
-		bool locked = true;
 
 		/* give a chance to irqs before checking need_resched() */
 		if (!((low_pfn+1) % SWAP_CLUSTER_MAX)) {
-			spin_unlock_irq(&zone->lru_lock);
-			locked = false;
+			if (lruvec)
+				unlock_lruvec_irq(lruvec);
+			lruvec = NULL;
 		}
-		if (need_resched() || spin_is_contended(&zone->lru_lock)) {
-			if (locked)
-				spin_unlock_irq(&zone->lru_lock);
+		if (need_resched() ||
+		    (lruvec && spin_is_contended(&zone->lru_lock))) {
+			if (lruvec)
+				unlock_lruvec_irq(lruvec);
+			lruvec = NULL;
 			cond_resched();
-			spin_lock_irq(&zone->lru_lock);
 			if (fatal_signal_pending(current))
 				break;
-		} else if (!locked)
-			spin_lock_irq(&zone->lru_lock);
+		}
 
 		/*
 		 * migrate_pfn does not necessarily start aligned to a
@@ -359,7 +360,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 			continue;
 		}
 
-		if (!PageLRU(page))
+		if (!catch_page_lruvec(&lruvec, page))
 			continue;
 
 		/*
@@ -382,7 +383,6 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		VM_BUG_ON(PageTransCompound(page));
 
 		/* Successfully isolated */
-		lruvec = page_lruvec(page);
 		del_page_from_lru_list(lruvec, page, page_lru(page));
 		list_add(&page->lru, migratelist);
 		cc->nr_migratepages++;
@@ -395,9 +395,11 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		}
 	}
 
+	if (lruvec)
+		unlock_lruvec_irq(lruvec);
+
 	acct_isolated(zone, cc);
 
-	spin_unlock_irq(&zone->lru_lock);
 	cc->migrate_pfn = low_pfn;
 
 	trace_mm_compaction_isolate_migratepages(nr_scanned, nr_isolated);


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 19/22] mm: handle lruvec relock in memory controller
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (17 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 18/22] mm: handle lruvec relocks in compaction Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 20/22] mm: optimize putback for 0-order reclaim Konstantin Khlebnikov
                   ` (3 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Carefully relock the lruvec lru_lock when a page's memory cgroup changes.

* Stabilize the PageLRU() flag with __wait_lruvec_unlock(old_lruvec).
  It must be called between each pc->mem_cgroup change and the page putback
  into the new lruvec; otherwise someone else can lock the old lruvec and
  see PageLRU() while the page has already moved to another lruvec.
* In free_pn_rcu() wait for the lruvec lock to be released.
  The locking primitives keep the lruvec pointer after the lock is
  successfully taken.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/memcontrol.c |   36 ++++++++++++++++++++++++++++--------
 1 files changed, 28 insertions(+), 8 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 40e1a66..69763da 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2368,6 +2368,7 @@ static int mem_cgroup_move_account(struct page *page,
 	unsigned long flags;
 	int ret;
 	bool anon = PageAnon(page);
+	struct lruvec *old_lruvec;
 
 	VM_BUG_ON(from == to);
 	VM_BUG_ON(PageLRU(page));
@@ -2397,12 +2398,24 @@ static int mem_cgroup_move_account(struct page *page,
 		preempt_enable();
 	}
 	mem_cgroup_charge_statistics(from, anon, -nr_pages);
+
+	/* the charge keeps the old lruvec alive */
+	old_lruvec = page_lruvec(page);
+
+	/* caller should have done css_get */
+	pc->mem_cgroup = to;
+
+	/*
+	 * Stabilize the PageLRU() flag for old_lruvec lock holders.
+	 * Do not put the page back while someone holds the old_lruvec lock,
+	 * otherwise they could think they caught the page on the old_lruvec lru.
+	 */
+	__wait_lruvec_unlock(old_lruvec);
+
 	if (uncharge)
 		/* This is not "cancel", but cancel_charge does all we need. */
 		__mem_cgroup_cancel_charge(from, nr_pages);
 
-	/* caller should have done css_get */
-	pc->mem_cgroup = to;
 	mem_cgroup_charge_statistics(to, anon, nr_pages);
 	/*
 	 * We charges against "to" which may not have any tasks. Then, "to"
@@ -2528,7 +2541,6 @@ __mem_cgroup_commit_charge_lrucare(struct page *page, struct mem_cgroup *memcg,
 					enum charge_type ctype)
 {
 	struct page_cgroup *pc = lookup_page_cgroup(page);
-	struct zone *zone = page_zone(page);
 	struct lruvec *lruvec;
 	unsigned long flags;
 	bool removed = false;
@@ -2538,20 +2550,19 @@ __mem_cgroup_commit_charge_lrucare(struct page *page, struct mem_cgroup *memcg,
 	 * is already on LRU. It means the page may on some other page_cgroup's
 	 * LRU. Take care of it.
 	 */
-	spin_lock_irqsave(&zone->lru_lock, flags);
+	lruvec = lock_page_lruvec(page, &flags);
 	if (PageLRU(page)) {
-		lruvec = page_lruvec(page);
 		del_page_from_lru_list(lruvec, page, page_lru(page));
 		ClearPageLRU(page);
 		removed = true;
 	}
 	__mem_cgroup_commit_charge(memcg, page, 1, pc, ctype);
 	if (removed) {
-		lruvec = page_lruvec(page);
+		lruvec = __relock_page_lruvec(lruvec, page);
 		add_page_to_lru_list(lruvec, page, page_lru(page));
 		SetPageLRU(page);
 	}
-	spin_unlock_irqrestore(&zone->lru_lock, flags);
+	unlock_lruvec(lruvec, &flags);
 }
 
 int mem_cgroup_cache_charge(struct page *page, struct mm_struct *mm,
@@ -4648,7 +4659,16 @@ static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *memcg, int node)
 
 static void free_mem_cgroup_per_zone_info(struct mem_cgroup *memcg, int node)
 {
-	kfree(memcg->info.nodeinfo[node]);
+	struct mem_cgroup_per_node *pn = memcg->info.nodeinfo[node];
+	int zone;
+
+	if (!pn)
+		return;
+
+	for (zone = 0; zone < MAX_NR_ZONES; zone++)
+		wait_lruvec_unlock(&pn->zoneinfo[zone].lruvec);
+
+	kfree(pn);
 }
 
 static struct mem_cgroup *mem_cgroup_alloc(void)


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 20/22] mm: optimize putback for 0-order reclaim
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (18 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 19/22] mm: handle lruvec relock in memory controller Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 21/22] mm: free lruvec in memcgroup via rcu Konstantin Khlebnikov
                   ` (2 subsequent siblings)
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

During 0-order reclaim all pages are isolated from one lruvec,
so we don't need to recheck and relock the page's lruvec on putback.

Maybe it would be better to collect lumpy-isolated pages into a
separate list and handle them independently.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/vmscan.c |   17 +++++++++++------
 1 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 39b4525..b9bd6c7 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1342,6 +1342,7 @@ static int too_many_isolated(struct zone *zone, int file,
  */
 static noinline_for_stack struct lruvec *
 putback_inactive_pages(struct lruvec *lruvec,
+		       struct scan_control *sc,
 		       struct list_head *page_list)
 {
 	struct zone_reclaim_stat *reclaim_stat = &lruvec->reclaim_stat;
@@ -1364,8 +1365,10 @@ putback_inactive_pages(struct lruvec *lruvec,
 		}
 
 		/* can differ only on lumpy reclaim */
-		lruvec = __relock_page_lruvec(lruvec, page);
-		reclaim_stat = &lruvec->reclaim_stat;
+		if (sc->order) {
+			lruvec = __relock_page_lruvec(lruvec, page);
+			reclaim_stat = &lruvec->reclaim_stat;
+		}
 
 		SetPageLRU(page);
 		lru = page_lru(page);
@@ -1565,7 +1568,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 		__count_vm_events(KSWAPD_STEAL, nr_reclaimed);
 	__count_zone_vm_events(PGSTEAL, zone, nr_reclaimed);
 
-	lruvec = putback_inactive_pages(lruvec, &page_list);
+	lruvec = putback_inactive_pages(lruvec, sc, &page_list);
 
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON, -nr_anon);
 	__mod_zone_page_state(zone, NR_ISOLATED_FILE, -nr_file);
@@ -1630,6 +1633,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 static struct lruvec *
 move_active_pages_to_lru(struct lruvec *lruvec,
+			 struct scan_control *sc,
 			 struct list_head *list,
 			 struct list_head *pages_to_free,
 			 enum lru_list lru)
@@ -1655,7 +1659,8 @@ move_active_pages_to_lru(struct lruvec *lruvec,
 		page = lru_to_page(list);
 
 		/* can differ only on lumpy reclaim */
-		lruvec = __relock_page_lruvec(lruvec, page);
+		if (sc->order)
+			lruvec = __relock_page_lruvec(lruvec, page);
 
 		VM_BUG_ON(PageLRU(page));
 		SetPageLRU(page);
@@ -1771,9 +1776,9 @@ static void shrink_active_list(unsigned long nr_to_scan,
 	 */
 	reclaim_stat->recent_rotated[file] += nr_rotated;
 
-	lruvec = move_active_pages_to_lru(lruvec, &l_active, &l_hold,
+	lruvec = move_active_pages_to_lru(lruvec, sc, &l_active, &l_hold,
 						LRU_ACTIVE + file * LRU_FILE);
-	lruvec = move_active_pages_to_lru(lruvec, &l_inactive, &l_hold,
+	lruvec = move_active_pages_to_lru(lruvec, sc, &l_inactive, &l_hold,
 						LRU_BASE   + file * LRU_FILE);
 	__mod_zone_page_state(zone, NR_ISOLATED_ANON + file, -nr_taken);
 	unlock_lruvec_irq(lruvec);


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 21/22] mm: free lruvec in memcgroup via rcu
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (19 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 20/22] mm: optimize putback for 0-order reclaim Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-20 17:23 ` [PATCH v2 22/22] mm: split zone->lru_lock Konstantin Khlebnikov
  2012-02-22  4:19 ` [PATCH v2 00/22] mm: lru_lock splitting Andi Kleen
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

This is required for splitting lru_lock into per-lruvec pieces.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 mm/memcontrol.c |   20 +++++++++++++++-----
 1 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 69763da..eb024c1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -148,6 +148,7 @@ struct mem_cgroup_per_zone {
 
 struct mem_cgroup_per_node {
 	struct mem_cgroup_per_zone zoneinfo[MAX_NR_ZONES];
+	struct rcu_head		rcu_head;
 };
 
 struct mem_cgroup_lru_info {
@@ -4657,18 +4658,27 @@ static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *memcg, int node)
 	return 0;
 }
 
+static void free_pn_rcu(struct rcu_head *rcu_head)
+{
+       struct mem_cgroup_per_node *pn;
+       int zone;
+
+       pn = container_of(rcu_head, struct mem_cgroup_per_node, rcu_head);
+
+       for (zone = 0; zone < MAX_NR_ZONES; zone++)
+	       wait_lruvec_unlock(&pn->zoneinfo[zone].lruvec);
+
+       kfree(pn);
+}
+
 static void free_mem_cgroup_per_zone_info(struct mem_cgroup *memcg, int node)
 {
 	struct mem_cgroup_per_node *pn = memcg->info.nodeinfo[node];
-	int zone;
 
 	if (!pn)
 		return;
 
-	for (zone = 0; zone < MAX_NR_ZONES; zone++)
-		wait_lruvec_unlock(&pn->zoneinfo[zone].lruvec);
-
-	kfree(pn);
+	call_rcu(&pn->rcu_head, free_pn_rcu);
 }
 
 static struct mem_cgroup *mem_cgroup_alloc(void)


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* [PATCH v2 22/22] mm: split zone->lru_lock
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (20 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 21/22] mm: free lruvec in memcgroup via rcu Konstantin Khlebnikov
@ 2012-02-20 17:23 ` Konstantin Khlebnikov
  2012-02-22  4:19 ` [PATCH v2 00/22] mm: lru_lock splitting Andi Kleen
  22 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-20 17:23 UTC (permalink / raw)
  To: linux-mm, Andrew Morton, linux-kernel; +Cc: Hugh Dickins, KAMEZAWA Hiroyuki

Looks like everything is ready for splitting zone->lru_lock into per-lruvec pieces.

The lruvec locking loop is protected with RCU. The memory controller already
releases its lruvecs via RCU; lruvecs embedded into zones are never released.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 include/linux/mmzone.h |    2 -
 mm/compaction.c        |    2 -
 mm/internal.h          |  100 ++++++++++++++++++++++++++++++------------------
 mm/memcontrol.c        |    1 
 mm/page_alloc.c        |    2 -
 mm/swap.c              |    2 -
 6 files changed, 68 insertions(+), 41 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 9fd82b1..56995db 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -301,6 +301,7 @@ struct lruvec {
 	struct pglist_data	*node;
 	struct zone		*zone;
 #endif
+	spinlock_t		lru_lock;
 	struct list_head	pages_lru[NR_LRU_LISTS];
 	unsigned long		pages_count[NR_LRU_LISTS];
 
@@ -378,7 +379,6 @@ struct zone {
 	ZONE_PADDING(_pad1_)
 
 	/* Fields commonly accessed by the page reclaim scanner */
-	spinlock_t		lru_lock;
 	struct lruvec		lruvec;
 
 	unsigned long		pages_scanned;	   /* since last reclaim */
diff --git a/mm/compaction.c b/mm/compaction.c
index 1e89165..3fbb958 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -306,7 +306,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 			lruvec = NULL;
 		}
 		if (need_resched() ||
-		    (lruvec && spin_is_contended(&zone->lru_lock))) {
+		    (lruvec && spin_is_contended(&lruvec->lru_lock))) {
 			if (lruvec)
 				unlock_lruvec_irq(lruvec);
 			lruvec = NULL;
diff --git a/mm/internal.h b/mm/internal.h
index a1a3206..110d653 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -15,76 +15,98 @@
 
 static inline void lock_lruvec(struct lruvec *lruvec, unsigned long *flags)
 {
-	spin_lock_irqsave(&lruvec_zone(lruvec)->lru_lock, *flags);
+	spin_lock_irqsave(&lruvec->lru_lock, *flags);
 }
 
 static inline void lock_lruvec_irq(struct lruvec *lruvec)
 {
-	spin_lock_irq(&lruvec_zone(lruvec)->lru_lock);
+	spin_lock_irq(&lruvec->lru_lock);
 }
 
 static inline void unlock_lruvec(struct lruvec *lruvec, unsigned long *flags)
 {
-	spin_unlock_irqrestore(&lruvec_zone(lruvec)->lru_lock, *flags);
+	spin_unlock_irqrestore(&lruvec->lru_lock, *flags);
 }
 
 static inline void unlock_lruvec_irq(struct lruvec *lruvec)
 {
-	spin_unlock_irq(&lruvec_zone(lruvec)->lru_lock);
+	spin_unlock_irq(&lruvec->lru_lock);
 }
 
 static inline void wait_lruvec_unlock(struct lruvec *lruvec)
 {
-	spin_unlock_wait(&lruvec_zone(lruvec)->lru_lock);
+	spin_unlock_wait(&lruvec->lru_lock);
 }
 
 #ifdef CONFIG_CGROUP_MEM_RES_CTLR
 
 /* Dynamic page to lruvec mapping */
 
+/* protected with rcu, interrupts disabled, locked_lruvec != NULL */
+static inline struct lruvec *__catch_page_lruvec(struct lruvec *locked_lruvec,
+						 struct page *page)
+{
+	struct lruvec *lruvec;
+
+	do {
+		lruvec = page_lruvec(page);
+		if (likely(lruvec == locked_lruvec))
+			return lruvec;
+		spin_unlock(&locked_lruvec->lru_lock);
+		spin_lock(&lruvec->lru_lock);
+		locked_lruvec = lruvec;
+	} while (1);
+}
+
 static inline struct lruvec *lock_page_lruvec(struct page *page,
 					      unsigned long *flags)
 {
-	struct zone *zone = page_zone(page);
+	struct lruvec *lruvec;
 
-	spin_lock_irqsave(&zone->lru_lock, *flags);
-	return page_lruvec(page);
+	rcu_read_lock();
+	lruvec = page_lruvec(page);
+	lock_lruvec(lruvec, flags);
+	lruvec = __catch_page_lruvec(lruvec, page);
+	rcu_read_unlock();
+	return lruvec;
 }
 
 static inline struct lruvec *lock_page_lruvec_irq(struct page *page)
 {
-	struct zone *zone = page_zone(page);
+	struct lruvec *lruvec;
 
-	spin_lock_irq(&zone->lru_lock);
-	return page_lruvec(page);
+	rcu_read_lock();
+	lruvec = page_lruvec(page);
+	lock_lruvec_irq(lruvec);
+	lruvec = __catch_page_lruvec(lruvec, page);
+	rcu_read_unlock();
+	return lruvec;
 }
 
 static inline struct lruvec *relock_page_lruvec(struct lruvec *lruvec,
 						struct page *page,
 						unsigned long *flags)
 {
-	struct zone *zone = page_zone(page);
-
-	if (!lruvec || zone != lruvec_zone(lruvec)) {
-		if (lruvec)
-			unlock_lruvec(lruvec, flags);
-		lruvec = lock_page_lruvec(page, flags);
+	rcu_read_lock();
+	if (!lruvec) {
+		lruvec = page_lruvec(page);
+		lock_lruvec(lruvec, flags);
 	}
-
+	lruvec = __catch_page_lruvec(lruvec, page);
+	rcu_read_unlock();
 	return lruvec;
 }
 
 static inline struct lruvec *relock_page_lruvec_irq(struct lruvec *lruvec,
 						    struct page *page)
 {
-	struct zone *zone = page_zone(page);
-
-	if (!lruvec || zone != lruvec_zone(lruvec)) {
-		if (lruvec)
-			unlock_lruvec_irq(lruvec);
-		lruvec = lock_page_lruvec_irq(page);
+	rcu_read_lock();
+	if (!lruvec) {
+		lruvec = page_lruvec(page);
+		lock_lruvec_irq(lruvec);
 	}
-
+	lruvec = __catch_page_lruvec(lruvec, page);
+	rcu_read_unlock();
 	return lruvec;
 }
 
@@ -92,8 +114,10 @@ static inline struct lruvec *relock_page_lruvec_irq(struct lruvec *lruvec,
 static inline struct lruvec *__relock_page_lruvec(struct lruvec *lruvec,
 						  struct page *page)
 {
-	/* Currently only one lruvec per-zone */
-	return page_lruvec(page);
+	rcu_read_lock();
+	lruvec = __catch_page_lruvec(lruvec, page);
+	rcu_read_unlock();
+	return lruvec;
 }
 
 /*
@@ -104,22 +128,24 @@ static inline struct lruvec *__relock_page_lruvec(struct lruvec *lruvec,
  */
 static inline bool catch_page_lruvec(struct lruvec **lruvec, struct page *page)
 {
-	struct zone *zone;
 	bool ret = false;
 
+	rcu_read_lock();
+	/*
+	 * If we see PageLRU() here, the page has a valid lruvec link. We must
+	 * protect the whole operation with a single rcu read-side section,
+	 * otherwise the lruvec holding this LRU flag can go away before we secure it.
+	 */
 	if (PageLRU(page)) {
 		if (!*lruvec) {
-			zone = page_zone(page);
-			spin_lock_irq(&zone->lru_lock);
-		} else
-			zone = lruvec_zone(*lruvec);
-
-		if (PageLRU(page)) {
 			*lruvec = page_lruvec(page);
+			lock_lruvec_irq(*lruvec);
+		}
+		*lruvec = __catch_page_lruvec(*lruvec, page);
+		if (PageLRU(page))
 			ret = true;
-		} else
-			*lruvec = &zone->lruvec;
 	}
+	rcu_read_unlock();
 
 	return ret;
 }
@@ -127,7 +153,7 @@ static inline bool catch_page_lruvec(struct lruvec **lruvec, struct page *page)
 /* Wait for lruvec unlock before locking other lruvec for the same page */
 static inline void __wait_lruvec_unlock(struct lruvec *lruvec)
 {
-	/* Currently only one lruvec per-zone */
+	wait_lruvec_unlock(lruvec);
 }
 
 #else /* CONFIG_CGROUP_MEM_RES_CTLR */
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index eb024c1..d0ca9d0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -4648,6 +4648,7 @@ static int alloc_mem_cgroup_per_zone_info(struct mem_cgroup *memcg, int node)
 			INIT_LIST_HEAD(&mz->lruvec.pages_lru[lru]);
 			mz->lruvec.pages_count[lru] = 0;
 		}
+		spin_lock_init(&mz->lruvec.lru_lock);
 		mz->lruvec.node = NODE_DATA(node);
 		mz->lruvec.zone = &NODE_DATA(node)->node_zones[zone];
 		mz->usage_in_excess = 0;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 72263e4..c258024 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4357,7 +4357,7 @@ static void __paginginit free_area_init_core(struct pglist_data *pgdat,
 #endif
 		zone->name = zone_names[j];
 		spin_lock_init(&zone->lock);
-		spin_lock_init(&zone->lru_lock);
+		spin_lock_init(&zone->lruvec.lru_lock);
 		zone_seqlock_init(zone);
 		zone->zone_pgdat = pgdat;
 
diff --git a/mm/swap.c b/mm/swap.c
index 9e81df3..43866d7 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -694,7 +694,7 @@ void lru_add_page_tail(struct lruvec *lruvec,
 	VM_BUG_ON(!PageHead(page));
 	VM_BUG_ON(PageCompound(page_tail));
 	VM_BUG_ON(PageLRU(page_tail));
-	VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&lruvec_zone(lruvec)->lru_lock));
+	VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&lruvec->lru_lock));
 
 	SetPageLRU(page_tail);
 


^ permalink raw reply related	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 00/22] mm: lru_lock splitting
  2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
                   ` (21 preceding siblings ...)
  2012-02-20 17:23 ` [PATCH v2 22/22] mm: split zone->lru_lock Konstantin Khlebnikov
@ 2012-02-22  4:19 ` Andi Kleen
  2012-02-22  5:11   ` Konstantin Khlebnikov
  22 siblings, 1 reply; 27+ messages in thread
From: Andi Kleen @ 2012-02-22  4:19 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins, KAMEZAWA Hiroyuki

Konstantin Khlebnikov <khlebnikov@openvz.org> writes:

Konstantin,

> There complete patch-set with my lru_lock splitting
> plus all related preparations and cleanups rebased to next-20120210

On large systems we're also seeing lock contention on the lru_lock
without using memcgs. Any thoughts how this could be extended for this
situation too?

Thanks,

-Andi

-- 
ak@linux.intel.com -- Speaking for myself only

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 00/22] mm: lru_lock splitting
  2012-02-22  4:19 ` [PATCH v2 00/22] mm: lru_lock splitting Andi Kleen
@ 2012-02-22  5:11   ` Konstantin Khlebnikov
  2012-02-22  6:16     ` Andi Kleen
  0 siblings, 1 reply; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-22  5:11 UTC (permalink / raw)
  To: Andi Kleen
  Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins, KAMEZAWA Hiroyuki

Andi Kleen wrote:
> Konstantin Khlebnikov<khlebnikov@openvz.org>  writes:
>
> Konstantin,
>
>> There complete patch-set with my lru_lock splitting
>> plus all related preparations and cleanups rebased to next-20120210
>
> On large systems we're also seeing lock contention on the lru_lock
> without using memcgs. Any thoughts how this could be extended for this
> situation too?

We can split lru_lock by pfn-based interleaving.
After all these cleanups it is very easy. I already have patch for this.

>
> Thanks,
>
> -Andi
>


^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 00/22] mm: lru_lock splitting
  2012-02-22  5:11   ` Konstantin Khlebnikov
@ 2012-02-22  6:16     ` Andi Kleen
  2012-02-23 14:01       ` Konstantin Khlebnikov
  0 siblings, 1 reply; 27+ messages in thread
From: Andi Kleen @ 2012-02-22  6:16 UTC (permalink / raw)
  To: Konstantin Khlebnikov
  Cc: Andi Kleen, linux-mm, Andrew Morton, linux-kernel, Hugh Dickins,
	KAMEZAWA Hiroyuki, tim.c.chen

On Wed, Feb 22, 2012 at 09:11:32AM +0400, Konstantin Khlebnikov wrote:
> Andi Kleen wrote:
> >Konstantin Khlebnikov<khlebnikov@openvz.org>  writes:
> >
> >Konstantin,
> >
> >>There complete patch-set with my lru_lock splitting
> >>plus all related preparations and cleanups rebased to next-20120210
> >
> >On large systems we're also seeing lock contention on the lru_lock
> >without using memcgs. Any thoughts how this could be extended for this
> >situation too?
> 
> We can split lru_lock by pfn-based interleaving.
> After all these cleanups it is very easy. I already have patch for this.

Cool. If you send it can try it out on a large system.

This would split the LRU by pfn too, correct?

-Andi

^ permalink raw reply	[flat|nested] 27+ messages in thread

* Re: [PATCH v2 00/22] mm: lru_lock splitting
  2012-02-22  6:16     ` Andi Kleen
@ 2012-02-23 14:01       ` Konstantin Khlebnikov
  0 siblings, 0 replies; 27+ messages in thread
From: Konstantin Khlebnikov @ 2012-02-23 14:01 UTC (permalink / raw)
  To: Andi Kleen
  Cc: linux-mm, Andrew Morton, linux-kernel, Hugh Dickins,
	KAMEZAWA Hiroyuki, tim.c.chen

Andi Kleen wrote:
> On Wed, Feb 22, 2012 at 09:11:32AM +0400, Konstantin Khlebnikov wrote:
>> Andi Kleen wrote:
>>> Konstantin Khlebnikov<khlebnikov@openvz.org>   writes:
>>>
>>> Konstantin,
>>>
>>>> There complete patch-set with my lru_lock splitting
>>>> plus all related preparations and cleanups rebased to next-20120210
>>>
>>> On large systems we're also seeing lock contention on the lru_lock
>>> without using memcgs. Any thoughts how this could be extended for this
>>> situation too?
>>
>> We can split lru_lock by pfn-based interleaving.
>> After all these cleanups it is very easy. I already have patch for this.
>
> Cool. If you send it can try it out on a large system.

See last patch in v3 patchset in lkml or in
git: https://github.com/koct9i/linux/commits/lruvec-v3

>
> This would split the LRU by pfn too, correct?

Of course. I don't see any problems with splitting a large zone into several
independent page subsets. But all sub-pages of a huge page should stay on one
lru; that's why I use pfn-based interleaving.
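
For example, the pfn-based selection could look roughly like the sketch below.
This is purely illustrative and not the actual patch; the real code is in the
lruvec-v3 branch above, and the names and the number of subsets here are
assumptions.

/* Hypothetical sketch: pick one of several lru sub-locks by pfn. */
#define LRU_SPLIT_ORDER	HPAGE_PMD_ORDER	/* keep a huge page in one subset */
#define LRU_SPLIT_NR	16		/* assumed number of subsets */

static inline unsigned int lru_subset(struct page *page)
{
	return (page_to_pfn(page) >> LRU_SPLIT_ORDER) % LRU_SPLIT_NR;
}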

>
> -Andi


^ permalink raw reply	[flat|nested] 27+ messages in thread

end of thread

Thread overview: 27+ messages
2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 01/22] memcg: rework inactive_ratio logic Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 02/22] memcg: fix page_referencies cgroup filter on global reclaim Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 03/22] memcg: use vm_swappiness from current memcg Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 04/22] mm: drain percpu lru add/rotate page-vectors on cpu hot-unplug Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 05/22] mm: replace per-cpu lru-add page-vectors with page-lists Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 06/22] mm: deprecate pagevec lru-add functions Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 07/22] mm: rename lruvec->lists into lruvec->pages_lru Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 08/22] mm: add lruvec->pages_count Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 09/22] mm: link lruvec with zone and node Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 10/22] mm: unify inactive_list_is_low() Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 11/22] mm: add lruvec->reclaim_stat Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 12/22] mm: kill struct mem_cgroup_zone Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 13/22] mm: move page-to-lruvec translation upper Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 14/22] mm: push lruvec into update_page_reclaim_stat() Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 15/22] mm: push lruvecs from pagevec_lru_move_fn() to iterator Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 16/22] mm: introduce lruvec locking primitives Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 17/22] mm: handle lruvec relocks on lumpy reclaim Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 18/22] mm: handle lruvec relocks in compaction Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 19/22] mm: handle lruvec relock in memory controller Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 20/22] mm: optimize putback for 0-order reclaim Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 21/22] mm: free lruvec in memcgroup via rcu Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 22/22] mm: split zone->lru_lock Konstantin Khlebnikov
2012-02-22  4:19 ` [PATCH v2 00/22] mm: lru_lock splitting Andi Kleen
2012-02-22  5:11   ` Konstantin Khlebnikov
2012-02-22  6:16     ` Andi Kleen
2012-02-23 14:01       ` Konstantin Khlebnikov
