From: Konstantin Khlebnikov <khlebnikov@openvz.org>
To: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org
Cc: Hugh Dickins <hughd@google.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Subject: [PATCH v2 04/22] mm: drain percpu lru add/rotate page-vectors on cpu hot-unplug
Date: Mon, 20 Feb 2012 21:22:50 +0400
Message-ID: <20120220172250.22196.16894.stgit@zurg>
In-Reply-To: <20120220171138.22196.65847.stgit@zurg>

The CPU hotplug hook that drains the per-cpu lru-add and lru-rotate
pagevecs was accidentally removed in commit v2.6.30-rc4-18-g00a62ce
("mm: fix Committed_AS underflow on large NR_CPUS environment").

Restore it: rename drain_cpu_pagevecs() to lru_add_drain_cpu(), declare
it in linux/swap.h, and call it from page_alloc_cpu_notify() on
CPU_DEAD/CPU_DEAD_FROZEN. Without this, pages sitting in a dead CPU's
pagevecs stay off the LRU lists, invisible to reclaim, until something
else happens to drain them.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
---
 include/linux/swap.h |    1 +
 mm/page_alloc.c      |    1 +
 mm/swap.c            |    4 ++--
 3 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index f7df3ea..ba2c8d7 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -229,6 +229,7 @@ extern void lru_add_page_tail(struct zone* zone,
 extern void activate_page(struct page *);
 extern void mark_page_accessed(struct page *);
 extern void lru_add_drain(void);
+extern void lru_add_drain_cpu(int cpu);
 extern int lru_add_drain_all(void);
 extern void rotate_reclaimable_page(struct page *page);
 extern void deactivate_page(struct page *page);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a547177..85517af 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4873,6 +4873,7 @@ static int page_alloc_cpu_notify(struct notifier_block *self,
 	int cpu = (unsigned long)hcpu;
 
 	if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) {
+		lru_add_drain_cpu(cpu);
 		drain_pages(cpu);
 
 		/*
diff --git a/mm/swap.c b/mm/swap.c
index fff1ff7..38b2686 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -496,7 +496,7 @@ static void lru_deactivate_fn(struct page *page, void *arg)
  * Either "cpu" is the current CPU, and preemption has already been
  * disabled; or "cpu" is being hot-unplugged, and is already dead.
  */
-static void drain_cpu_pagevecs(int cpu)
+void lru_add_drain_cpu(int cpu)
 {
 	struct pagevec *pvecs = per_cpu(lru_add_pvecs, cpu);
 	struct pagevec *pvec;
@@ -553,7 +553,7 @@ void deactivate_page(struct page *page)
 
 void lru_add_drain(void)
 {
-	drain_cpu_pagevecs(get_cpu());
+	lru_add_drain_cpu(get_cpu());
 	put_cpu();
 }
 



Thread overview: 27+ messages
2012-02-20 17:22 [PATCH v2 00/22] mm: lru_lock splitting Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 01/22] memcg: rework inactive_ratio logic Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 02/22] memcg: fix page_referencies cgroup filter on global reclaim Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 03/22] memcg: use vm_swappiness from current memcg Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 04/22] mm: drain percpu lru add/rotate page-vectors on cpu hot-unplug Konstantin Khlebnikov [this message]
2012-02-20 17:22 ` [PATCH v2 05/22] mm: replace per-cpu lru-add page-vectors with page-lists Konstantin Khlebnikov
2012-02-20 17:22 ` [PATCH v2 06/22] mm: deprecate pagevec lru-add functions Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 07/22] mm: rename lruvec->lists into lruvec->pages_lru Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 08/22] mm: add lruvec->pages_count Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 09/22] mm: link lruvec with zone and node Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 10/22] mm: unify inactive_list_is_low() Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 11/22] mm: add lruvec->reclaim_stat Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 12/22] mm: kill struct mem_cgroup_zone Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 13/22] mm: move page-to-lruvec translation upper Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 14/22] mm: push lruvec into update_page_reclaim_stat() Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 15/22] mm: push lruvecs from pagevec_lru_move_fn() to iterator Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 16/22] mm: introduce lruvec locking primitives Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 17/22] mm: handle lruvec relocks on lumpy reclaim Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 18/22] mm: handle lruvec relocks in compaction Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 19/22] mm: handle lruvec relock in memory controller Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 20/22] mm: optimize putback for 0-order reclaim Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 21/22] mm: free lruvec in memcgroup via rcu Konstantin Khlebnikov
2012-02-20 17:23 ` [PATCH v2 22/22] mm: split zone->lru_lock Konstantin Khlebnikov
2012-02-22  4:19 ` [PATCH v2 00/22] mm: lru_lock splitting Andi Kleen
2012-02-22  5:11   ` Konstantin Khlebnikov
2012-02-22  6:16     ` Andi Kleen
2012-02-23 14:01       ` Konstantin Khlebnikov
