From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1754063Ab2J1R4c (ORCPT );
	Sun, 28 Oct 2012 13:56:32 -0400
Received: from zene.cmpxchg.org ([85.214.230.12]:60482 "EHLO zene.cmpxchg.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1753937Ab2J1R4b (ORCPT );
	Sun, 28 Oct 2012 13:56:31 -0400
Date: Sun, 28 Oct 2012 13:56:15 -0400
From: Johannes Weiner
To: Peter Zijlstra
Cc: Zhouping Liu, Rik van Riel, Andrea Arcangeli, Mel Gorman,
	Thomas Gleixner, Linus Torvalds, Andrew Morton,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org, Ingo Molnar
Subject: Re: [PATCH 00/31] numa/core patches
Message-ID: <20121028175615.GC29827@cmpxchg.org>
References: <20121025121617.617683848@chello.nl>
	<508A52E1.8020203@redhat.com>
	<1351242480.12171.48.camel@twins>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1351242480.12171.48.camel@twins>
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Oct 26, 2012 at 11:08:00AM +0200, Peter Zijlstra wrote:
> On Fri, 2012-10-26 at 17:07 +0800, Zhouping Liu wrote:
> > [  180.918591] RIP: 0010:[]  [] mem_cgroup_prepare_migration+0xba/0xd0
> 
> > [  182.681450]  [] do_huge_pmd_numa_page+0x180/0x500
> > [  182.775090]  [] handle_mm_fault+0x1e9/0x360
> > [  182.863038]  [] __do_page_fault+0x172/0x4e0
> > [  182.950574]  [] ? __switch_to_xtra+0x163/0x1a0
> > [  183.041512]  [] ? __switch_to+0x3ce/0x4a0
> > [  183.126832]  [] ? __schedule+0x3c6/0x7a0
> > [  183.211216]  [] do_page_fault+0xe/0x10
> > [  183.293705]  [] page_fault+0x28/0x30
> 
> Johannes, this looks like the thp migration memcg hookery gone bad,
> could you have a look at this?

Oops.  Here is an incremental fix, feel free to fold it into #31.

Signed-off-by: Johannes Weiner
---
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 5c30a14..0d7ebd3 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -801,8 +801,6 @@ void do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	if (!new_page)
 		goto alloc_fail;
 
-	mem_cgroup_prepare_migration(page, new_page, &memcg);
-
 	lru = PageLRU(page);
 	if (lru && isolate_lru_page(page)) /* does an implicit get_page() */
@@ -835,6 +833,14 @@ void do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		return;
 	}
 
+	/*
+	 * Traditional migration needs to prepare the memcg charge
+	 * transaction early to prevent the old page from being
+	 * uncharged when installing migration entries.  Here we can
+	 * save the potential rollback and start the charge transfer
+	 * only when migration is already known to end successfully.
+	 */
+	mem_cgroup_prepare_migration(page, new_page, &memcg);
 	entry = mk_pmd(new_page, vma->vm_page_prot);
 	entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 
@@ -845,6 +851,12 @@ void do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	set_pmd_at(mm, haddr, pmd, entry);
 	update_mmu_cache_pmd(vma, address, entry);
 	page_remove_rmap(page);
+	/*
+	 * Finish the charge transaction under the page table lock to
+	 * prevent split_huge_page() from dividing up the charge
+	 * before it's fully transferred to the new page.
+	 */
+	mem_cgroup_end_migration(memcg, page, new_page, true);
 	spin_unlock(&mm->page_table_lock);
 	put_page(page);			/* Drop the rmap reference */
 
@@ -856,18 +868,14 @@ void do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
 
 	unlock_page(new_page);
 
-	mem_cgroup_end_migration(memcg, page, new_page, true);
-
 	unlock_page(page);
 	put_page(page);			/* Drop the local reference */
 
 	return;
 
 alloc_fail:
-	if (new_page) {
-		mem_cgroup_end_migration(memcg, page, new_page, false);
+	if (new_page)
 		put_page(new_page);
-	}
 
 	unlock_page(page);
 
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 7acf43b..011e510 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3255,15 +3255,18 @@ void mem_cgroup_prepare_migration(struct page *page, struct page *newpage,
 				  struct mem_cgroup **memcgp)
 {
 	struct mem_cgroup *memcg = NULL;
+	unsigned int nr_pages = 1;
 	struct page_cgroup *pc;
 	enum charge_type ctype;
 
 	*memcgp = NULL;
 
-	VM_BUG_ON(PageTransHuge(page));
 	if (mem_cgroup_disabled())
 		return;
 
+	if (PageTransHuge(page))
+		nr_pages <<= compound_order(page);
+
 	pc = lookup_page_cgroup(page);
 	lock_page_cgroup(pc);
 	if (PageCgroupUsed(pc)) {
@@ -3325,7 +3328,7 @@ void mem_cgroup_prepare_migration(struct page *page, struct page *newpage,
 	 * charged to the res_counter since we plan on replacing the
 	 * old one and only one page is going to be left afterwards.
 	 */
-	__mem_cgroup_commit_charge(memcg, newpage, 1, ctype, false);
+	__mem_cgroup_commit_charge(memcg, newpage, nr_pages, ctype, false);
 }
 
 /* remove redundant charge if migration failed*/
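
The reordering in the huge_memory.c hunks follows a simple pattern: instead
of opening the memcg charge transaction before migration and rolling it back
on every failure path, migration is attempted first and the transaction is
opened only once success is certain.  A rough stand-alone sketch of that
pattern, with hypothetical names (charge_prepare, charge_end, try_migrate
are stand-ins for illustration, not kernel APIs):

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the memcg charge transaction. */
struct charge_txn { bool open; };

static void charge_prepare(struct charge_txn *t) { t->open = true; }
static void charge_end(struct charge_txn *t)     { t->open = false; }

/* Stand-in for the page copy + pmd installation, which may fail. */
static bool try_migrate(bool can_migrate) { return can_migrate; }

static int migrate_new_order(bool can_migrate)
{
	struct charge_txn txn = { .open = false };

	if (!try_migrate(can_migrate))
		return -1;	/* failure path: no transaction to unwind */

	/* Open the charge transfer only when success is guaranteed... */
	charge_prepare(&txn);
	/* ...and finish it immediately, before the charge can be split. */
	charge_end(&txn);
	return 0;
}

int main(void)
{
	printf("success: %d\n", migrate_new_order(true));	/* 0 */
	printf("failure: %d\n", migrate_new_order(false));	/* -1 */
	return 0;
}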
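
The memcontrol.c change sizes the charge by the compound page order instead
of assuming a single base page, which is what the removed VM_BUG_ON used to
enforce.  A minimal user-space sketch of the same arithmetic, assuming the
common x86-64 setup of 4KB base pages and 2MB huge pages (compound order 9):

#include <stdio.h>

int main(void)
{
	unsigned int order = 9;		/* assumed: 2MB THP / 4KB pages */
	unsigned int nr_pages = 1;

	/* the same shift the patched mem_cgroup_prepare_migration() does */
	if (order)			/* PageTransHuge() stand-in */
		nr_pages <<= order;

	/* 512 base pages: the full 2MB charge moves to the new page */
	printf("nr_pages = %u (%u KB)\n", nr_pages, nr_pages * 4);
	return 0;
}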