From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 13 May 2009 09:32:50 +0900
From: KAMEZAWA Hiroyuki
To: Daisuke Nishimura
Cc: Daisuke Nishimura, linux-mm@kvack.org, balbir@linux.vnet.ibm.com,
	akpm@linux-foundation.org, mingo@elte.hu, linux-kernel@vger.kernel.org
Subject: Re: [PATCH][BUGFIX] memcg: fix for deadlock between lock_page_cgroup and mapping tree_lock
Message-Id: <20090513093250.7803d3d0.kamezawa.hiroyu@jp.fujitsu.com>
In-Reply-To: <20090513092828.cbaa5a76.nishimura@mxp.nes.nec.co.jp>
References: <20090512104401.28edc0a8.kamezawa.hiroyu@jp.fujitsu.com>
	<20090512140648.0974cb10.nishimura@mxp.nes.nec.co.jp>
	<20090512160901.8a6c5f64.kamezawa.hiroyu@jp.fujitsu.com>
	<20090512170007.ad7f5c7b.nishimura@mxp.nes.nec.co.jp>
	<20090512171356.3d3a7554.kamezawa.hiroyu@jp.fujitsu.com>
	<20090512195823.15c5cb80.d-nishimura@mtf.biglobe.ne.jp>
	<20090513085949.3c4b7b97.kamezawa.hiroyu@jp.fujitsu.com>
	<20090513092828.cbaa5a76.nishimura@mxp.nes.nec.co.jp>
Organization: FUJITSU Co. LTD.
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Wed, 13 May 2009 09:28:28 +0900 Daisuke Nishimura wrote:
> On Wed, 13 May 2009 08:59:49 +0900, KAMEZAWA Hiroyuki wrote:
> > On Tue, 12 May 2009 19:58:23 +0900
> > Daisuke Nishimura wrote:
> > > On Tue, 12 May 2009 17:13:56 +0900
> > > KAMEZAWA Hiroyuki wrote:
> > > > On Tue, 12 May 2009 17:00:07 +0900
> > > > Daisuke Nishimura wrote:
> > > > > hmm, I see.
> > > > > cache_charge is outside of tree_lock, so moving the uncharge would make sense.
> > > > > IMHO, we should keep the period the spinlock is held as short as possible,
> > > > > and charge/uncharge of pagecache/swapcache is protected by the page lock, not tree_lock.
> > > > >
> > > > How about this ?
> > > Looks good conceptually, but it cannot be built :)
> > >
> > > It needs a fix like the one below. It passed a build test with
> > > CONFIG_MEM_RES_CTLR and CONFIG_SWAP each enabled and disabled.
> > >
> > ok, will update. Can I add your Signed-off-by to the patch?
> >
> Sure.
>
> Signed-off-by: Daisuke Nishimura
>
> The patch (with my fix applied) seems to work fine, though I need to run it
> for a longer time.
>
Ok, I'll treat this as an independent issue, not as "4/4".
Thanks,
-Kame

> > Thanks,
> > -Kame
> > > ===
> > >  include/linux/swap.h |    5 +++++
> > >  mm/memcontrol.c      |    4 +++-
> > >  mm/swap_state.c      |    4 +---
> > >  mm/vmscan.c          |    2 +-
> > >  4 files changed, 10 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/include/linux/swap.h b/include/linux/swap.h
> > > index caf0767..6ea541d 100644
> > > --- a/include/linux/swap.h
> > > +++ b/include/linux/swap.h
> > > @@ -431,6 +431,11 @@ static inline swp_entry_t get_swap_page(void)
> > >  #define has_swap_token(x) 0
> > >  #define disable_swap_token() do { } while(0)
> > >
> > > +static inline void
> > > +mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent)
> > > +{
> > > +}
> > > +
> > >  #endif /* CONFIG_SWAP */
> > >  #endif /* __KERNEL__*/
> > >  #endif /* _LINUX_SWAP_H */
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 0c9c1ad..89523cf 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -1488,8 +1488,9 @@ void mem_cgroup_uncharge_cache_page(struct page *page)
> > >  	__mem_cgroup_uncharge_common(page, MEM_CGROUP_CHARGE_TYPE_CACHE);
> > >  }
> > >
> > > +#ifdef CONFIG_SWAP
> > >  /*
> > > - * called from __delete_from_swap_cache() and drop "page" account.
> > > + * called after __delete_from_swap_cache() and drop "page" account.
> > >   * memcg information is recorded to swap_cgroup of "ent"
> > >   */
> > >  void mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent)
> > > @@ -1506,6 +1507,7 @@ void mem_cgroup_uncharge_swapcache(struct page *page, swp_entry_t ent)
> > >  	if (memcg)
> > >  		css_put(&memcg->css);
> > >  }
> > > +#endif
> > >
> > >  #ifdef CONFIG_CGROUP_MEM_RES_CTLR_SWAP
> > >  /*
> > > diff --git a/mm/swap_state.c b/mm/swap_state.c
> > > index 87f10d4..7624c89 100644
> > > --- a/mm/swap_state.c
> > > +++ b/mm/swap_state.c
> > > @@ -109,8 +109,6 @@ int add_to_swap_cache(struct page *page, swp_entry_t entry, gfp_t gfp_mask)
> > >   */
> > >  void __delete_from_swap_cache(struct page *page)
> > >  {
> > > -	swp_entry_t ent = {.val = page_private(page)};
> > > -
> > >  	VM_BUG_ON(!PageLocked(page));
> > >  	VM_BUG_ON(!PageSwapCache(page));
> > >  	VM_BUG_ON(PageWriteback(page));
> > > @@ -190,7 +188,7 @@ void delete_from_swap_cache(struct page *page)
> > >  	__delete_from_swap_cache(page);
> > >  	spin_unlock_irq(&swapper_space.tree_lock);
> > >
> > > -	mem_cgroup_uncharge_swapcache(page, ent);
> > > +	mem_cgroup_uncharge_swapcache(page, entry);
> > >  	swap_free(entry);
> > >  	page_cache_release(page);
> > >  }
> > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > index 6c5988d..a7d7a06 100644
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -470,7 +470,7 @@ static int __remove_mapping(struct address_space *mapping, struct page *page)
> > >  		swp_entry_t swap = { .val = page_private(page) };
> > >  		__delete_from_swap_cache(page);
> > >  		spin_unlock_irq(&mapping->tree_lock);
> > > -		mem_cgroup_uncharge_swapcache(page);
> > > +		mem_cgroup_uncharge_swapcache(page, swap);
> > >  		swap_free(swap);
> > >  	} else {
> > >  		__remove_from_page_cache(page);
> > > ===
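
===
For reference, a minimal user-space sketch of the lock-ordering change discussed
above. This is not kernel code: it uses plain pthreads, and tree_lock and
page_cgroup_lock are hypothetical stand-ins for swapper_space.tree_lock and
lock_page_cgroup(). The point it illustrates is that the fixed path never holds
both locks at the same time, so it can no longer take part in an AB-BA deadlock
with a path that acquires them in the opposite order.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t page_cgroup_lock = PTHREAD_MUTEX_INITIALIZER;

/*
 * Old ordering: the uncharge (which needs page_cgroup_lock) runs while
 * tree_lock is still held, so the locks nest tree_lock -> page_cgroup_lock.
 * Any other path that nests them the other way round can deadlock with this.
 */
static void delete_from_cache_old(void)
{
	pthread_mutex_lock(&tree_lock);
	/* ... remove the page from the cache tree ... */
	pthread_mutex_lock(&page_cgroup_lock);
	/* ... uncharge the page ... */
	pthread_mutex_unlock(&page_cgroup_lock);
	pthread_mutex_unlock(&tree_lock);
}

/*
 * New ordering (what the patch above arranges): drop tree_lock first, then
 * take page_cgroup_lock, so the two locks are never held together here.
 */
static void delete_from_cache_new(void)
{
	pthread_mutex_lock(&tree_lock);
	/* ... remove the page from the cache tree ... */
	pthread_mutex_unlock(&tree_lock);

	pthread_mutex_lock(&page_cgroup_lock);
	/* ... uncharge the page ... */
	pthread_mutex_unlock(&page_cgroup_lock);
}

int main(void)
{
	delete_from_cache_old();
	delete_from_cache_new();
	puts("lock ordering demo finished");
	return 0;
}

It builds with "cc -pthread demo.c" and simply exercises both orderings once;
the file and function names are made up for the illustration.
===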