Date: Mon, 29 Nov 2010 14:38:01 +0900
From: Daisuke Nishimura
To: Andrea Arcangeli
Cc: linux-mm@kvack.org, Linus Torvalds, Andrew Morton, linux-kernel@vger.kernel.org, Marcelo Tosatti, Adam Litke, Avi Kivity, Hugh Dickins, Rik van Riel, Mel Gorman, Dave Hansen, Benjamin Herrenschmidt, Ingo Molnar, Mike Travis, KAMEZAWA Hiroyuki, Christoph Lameter, Chris Wright, bpicco@redhat.com, KOSAKI Motohiro, Balbir Singh, "Michael S. Tsirkin", Peter Zijlstra, Johannes Weiner, Chris Mason, Borislav Petkov, Daisuke Nishimura
Subject: Re: [PATCH 53 of 66] add numa awareness to hugepage allocations
Message-Id: <20101129143801.abef5228.nishimura@mxp.nes.nec.co.jp>
In-Reply-To: <223ee926614158fc1353.1288798108@v2.random>
References: <223ee926614158fc1353.1288798108@v2.random>
Organization: NEC Soft, Ltd.
> @@ -1655,7 +1672,11 @@ static void collapse_huge_page(struct mm
>  	unsigned long hstart, hend;
>
>  	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> +#ifndef CONFIG_NUMA
>  	VM_BUG_ON(!*hpage);
> +#else
> +	VM_BUG_ON(*hpage);
> +#endif
>
>  	/*
>  	 * Prevent all access to pagetables with the exception of
> @@ -1693,7 +1714,15 @@ static void collapse_huge_page(struct mm
>  	if (!pmd_present(*pmd) || pmd_trans_huge(*pmd))
>  		goto out;
>
> +#ifndef CONFIG_NUMA
>  	new_page = *hpage;
> +#else
> +	new_page = alloc_hugepage_vma(khugepaged_defrag(), vma, address);
> +	if (unlikely(!new_page)) {
> +		*hpage = ERR_PTR(-ENOMEM);
> +		goto out;
> +	}
> +#endif
>  	if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL)))
>  		goto out;
>
I think this should be:

	if (unlikely(mem_cgroup_newpage_charge(new_page, mm, GFP_KERNEL))) {
#ifdef CONFIG_NUMA
		put_page(new_page);
#endif
		goto out;
	}

Thanks,
Daisuke Nishimura.
> @@ -1724,6 +1753,9 @@ static void collapse_huge_page(struct mm
>  		spin_unlock(&mm->page_table_lock);
>  		anon_vma_unlock(vma->anon_vma);
>  		mem_cgroup_uncharge_page(new_page);
> +#ifdef CONFIG_NUMA
> +		put_page(new_page);
> +#endif
>  		goto out;
>  	}
>
> @@ -1759,7 +1791,9 @@ static void collapse_huge_page(struct mm
>  	mm->nr_ptes--;
>  	spin_unlock(&mm->page_table_lock);
>
> +#ifndef CONFIG_NUMA
>  	*hpage = NULL;
> +#endif
>  	khugepaged_pages_collapsed++;
> out:
>  	up_write(&mm->mmap_sem);