From mboxrd@z Thu Jan  1 00:00:00 1970
From: Mel Gorman <mel@csn.ul.ie>
To: Andrew Morton
Cc: Andrea Arcangeli, Christoph Lameter, Adam Litke, Avi Kivity,
	David Rientjes, KOSAKI Motohiro, Rik van Riel, Mel Gorman,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH 01/11] mm,migration: Take a reference to the anon_vma before migrating
Date: Fri, 12 Mar 2010 16:41:17 +0000
Message-Id: <1268412087-13536-2-git-send-email-mel@csn.ul.ie>
X-Mailer: git-send-email 1.6.5
In-Reply-To: <1268412087-13536-1-git-send-email-mel@csn.ul.ie>
References: <1268412087-13536-1-git-send-email-mel@csn.ul.ie>
Sender: linux-kernel-owner@vger.kernel.org
X-Mailing-List: linux-kernel@vger.kernel.org

rmap_walk_anon() does not use page_lock_anon_vma() for looking up and
locking an anon_vma, and it does not appear to have sufficient locking
to ensure the anon_vma does not disappear from under it.

This patch copies an approach used by KSM to take a reference on the
anon_vma while pages are being migrated. This should prevent
rmap_walk() from running into nasty surprises later because the
anon_vma has been freed.

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Rik van Riel
---
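For reviewers unfamiliar with the pattern, here is a minimal
userspace sketch of the lifecycle this patch gives the anon_vma. It
is illustrative only: struct obj, take_ref() and drop_ref() are
invented names, and C11/pthread primitives stand in for the kernel's
atomics and spinlock. Note that the kernel's atomic_dec_and_lock()
does the final decrement and the lock acquisition as one atomic step,
which the sketch approximates in two.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for anon_vma: a lock, a refcount and a list population. */
struct obj {
	pthread_mutex_t lock;		/* plays anon_vma->lock */
	atomic_int migrate_refcount;	/* plays anon_vma->migrate_refcount */
	int nr_entries;			/* 0 here means list_empty(&head) */
};

/* Pin the object before walking it, as unmap_and_move() now does. */
static void take_ref(struct obj *o)
{
	atomic_fetch_add(&o->migrate_refcount, 1);
}

/*
 * Drop the pin. The holder of the last reference takes the lock,
 * re-checks emptiness and completes the garbage collection that
 * anon_vma_unlink() deferred while the refcount was elevated.
 */
static void drop_ref(struct obj *o)
{
	if (atomic_fetch_sub(&o->migrate_refcount, 1) != 1)
		return;			/* not the last reference */
	pthread_mutex_lock(&o->lock);
	bool empty = (o->nr_entries == 0);
	pthread_mutex_unlock(&o->lock);
	if (empty) {			/* nothing was relinked: free it */
		pthread_mutex_destroy(&o->lock);
		free(o);
		printf("freed on last reference\n");
	}
}

int main(void)
{
	struct obj *o = calloc(1, sizeof(*o));

	pthread_mutex_init(&o->lock, NULL);
	atomic_init(&o->migrate_refcount, 0);	/* cf. migrate_refcount_init() */
	o->nr_entries = 0;			/* every vma already unlinked */

	take_ref(o);	/* migration starts: the object cannot be freed */
	/* ... the page is unmapped, copied and remapped here ... */
	drop_ref(o);	/* migration done: last dropper frees if empty */
	return 0;
}

The reason the last dropper must re-check emptiness under the lock is
that a racing anon_vma_unlink() will have skipped the free precisely
because our reference was still elevated, so the garbage collection
falls to whoever drops the final reference.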
 include/linux/rmap.h |   23 +++++++++++++++++++++++
 mm/migrate.c         |   12 ++++++++++++
 mm/rmap.c            |   10 +++++-----
 3 files changed, 40 insertions(+), 5 deletions(-)

diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index d25bd22..567d43f 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -29,6 +29,9 @@ struct anon_vma {
 #ifdef CONFIG_KSM
 	atomic_t ksm_refcount;
 #endif
+#ifdef CONFIG_MIGRATION
+	atomic_t migrate_refcount;
+#endif
 	/*
 	 * NOTE: the LSB of the head.next is set by
 	 * mm_take_all_locks() _after_ taking the above lock.
@@ -81,6 +84,26 @@ static inline int ksm_refcount(struct anon_vma *anon_vma)
 	return 0;
 }
 #endif /* CONFIG_KSM */
+#ifdef CONFIG_MIGRATION
+static inline void migrate_refcount_init(struct anon_vma *anon_vma)
+{
+	atomic_set(&anon_vma->migrate_refcount, 0);
+}
+
+static inline int migrate_refcount(struct anon_vma *anon_vma)
+{
+	return atomic_read(&anon_vma->migrate_refcount);
+}
+#else
+static inline void migrate_refcount_init(struct anon_vma *anon_vma)
+{
+}
+
+static inline int migrate_refcount(struct anon_vma *anon_vma)
+{
+	return 0;
+}
+#endif /* CONFIG_MIGRATION */
 
 static inline struct anon_vma *page_anon_vma(struct page *page)
 {
diff --git a/mm/migrate.c b/mm/migrate.c
index 88000b8..98eaaf2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -547,6 +547,7 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 	int rcu_locked = 0;
 	int charge = 0;
 	struct mem_cgroup *mem = NULL;
+	struct anon_vma *anon_vma = NULL;
 
 	if (!newpage)
 		return -ENOMEM;
@@ -603,6 +604,8 @@ static int unmap_and_move(new_page_t get_new_page, unsigned long private,
 	if (PageAnon(page)) {
 		rcu_read_lock();
 		rcu_locked = 1;
+		anon_vma = page_anon_vma(page);
+		atomic_inc(&anon_vma->migrate_refcount);
 	}
 
 	/*
@@ -642,6 +645,15 @@ skip_unmap:
 	if (rc)
 		remove_migration_ptes(page, page);
 rcu_unlock:
+
+	/* Drop an anon_vma reference if we took one */
+	if (anon_vma && atomic_dec_and_lock(&anon_vma->migrate_refcount, &anon_vma->lock)) {
+		int empty = list_empty(&anon_vma->head);
+		spin_unlock(&anon_vma->lock);
+		if (empty)
+			anon_vma_free(anon_vma);
+	}
+
 	if (rcu_locked)
 		rcu_read_unlock();
 uncharge:
diff --git a/mm/rmap.c b/mm/rmap.c
index fcd593c..578d0fe 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -248,7 +248,8 @@ static void anon_vma_unlink(struct anon_vma_chain *anon_vma_chain)
 	list_del(&anon_vma_chain->same_anon_vma);
 
 	/* We must garbage collect the anon_vma if it's empty */
-	empty = list_empty(&anon_vma->head) && !ksm_refcount(anon_vma);
+	empty = list_empty(&anon_vma->head) && !ksm_refcount(anon_vma) &&
+					!migrate_refcount(anon_vma);
 
 	spin_unlock(&anon_vma->lock);
 	if (empty)
@@ -273,6 +274,7 @@ static void anon_vma_ctor(void *data)
 
 	spin_lock_init(&anon_vma->lock);
 	ksm_refcount_init(anon_vma);
+	migrate_refcount_init(anon_vma);
 	INIT_LIST_HEAD(&anon_vma->head);
 }
 
@@ -1338,10 +1340,8 @@ static int rmap_walk_anon(struct page *page, int (*rmap_one)(struct page *,
 	/*
 	 * Note: remove_migration_ptes() cannot use page_lock_anon_vma()
 	 * because that depends on page_mapped(); but not all its usages
-	 * are holding mmap_sem, which also gave the necessary guarantee
-	 * (that this anon_vma's slab has not already been destroyed).
-	 * This needs to be reviewed later: avoiding page_lock_anon_vma()
-	 * is risky, and currently limits the usefulness of rmap_walk().
+	 * are holding mmap_sem. Users without mmap_sem are required to
+	 * take a reference count to prevent the anon_vma disappearing.
 	 */
 	anon_vma = page_anon_vma(page);
 	if (!anon_vma)
-- 
1.6.5