linux-kernel.vger.kernel.org archive mirror
From: Hugh Dickins <hughd@google.com>
To: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>, Jiri Slaby <jslaby@suse.cz>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	David Rientjes <rientjes@google.com>,
	Andi Kleen <andi@firstfloor.org>,
	Wu Fengguang <fengguang.wu@intel.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/6] thp: optimize away unnecessary page table locking
Date: Mon, 20 Feb 2012 03:38:27 -0800 (PST)
Message-ID: <alpine.LSU.2.00.1202200329420.4225@eggly.anvils>
In-Reply-To: <1329722927-12108-1-git-send-email-n-horiguchi@ah.jp.nec.com>

On Mon, 20 Feb 2012, Naoya Horiguchi wrote:
> On Sun, Feb 19, 2012 at 01:21:02PM -0800, Hugh Dickins wrote:
> > On Wed, 8 Feb 2012, Naoya Horiguchi wrote:
> > > Currently, when we check whether we can handle a thp as it is or need
> > > to split it into regular-sized pages, we take the page table lock
> > > before checking whether a given pmd maps a thp. Because of this, we
> > > suffer unnecessary lock/unlock overhead whenever the pmd is not a huge pmd.
> > > To remove it, this patch introduces an optimized check function and
> > > replaces several similar pieces of logic with it.
> > > 
> > > Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> > > Cc: David Rientjes <rientjes@google.com>
> > > 
> > > Changes since v4:
> > >   - Rethink returned value of __pmd_trans_huge_lock()
> > 
> > [snip]
> > 
> > > --- 3.3-rc2.orig/mm/mremap.c
> > > +++ 3.3-rc2/mm/mremap.c
> > > @@ -155,8 +155,6 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
> > >  			if (err > 0) {
> > >  				need_flush = true;
> > >  				continue;
> > > -			} else if (!err) {
> > > -				split_huge_page_pmd(vma->vm_mm, old_pmd);
> > >  			}
> > >  			VM_BUG_ON(pmd_trans_huge(*old_pmd));
> > >  		}
> 
> Thanks for reporting.
> 
> > Is that what you intended to do there?
> 
> No. This is a bug.
> 
> > I just hit that VM_BUG_ON on rc3-next-20120217.
> 
> I found that when extent != HPAGE_PMD_SIZE, the thp is not split, so
> we hit the VM_BUG_ON.
> The following patch reverts the return-value change made in v4->v5,
> and I confirmed it fixes the problem in a simple test.
> Andrew, could you add it on top of this optimization patch?
> 
> Naoya
> ----------------------------------------------------
> From 3c49816cab7d8cb072d9dffb97242e40f5124230 Mon Sep 17 00:00:00 2001
> From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Date: Mon, 20 Feb 2012 01:48:12 -0500
> Subject: [PATCH] fix mremap bug of failing to split thp
> 
> The patch "thp: optimize away unnecessary page table locking" introduced
> a bug into move_page_tables(): when move_huge_pmd() is not called, we
> fail to split the thp. To fix it, this patch reverts the return-value
> changes and re-adds the if (!err) block.
> 
> Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>

That fixes the case I hit, thank you.  Though I did have to apply
the task_mmu.c part by hand, as there are differences on neighbouring
lines.

Jiri, your "Regression: Bad page map in process xyz" is actually
on linux-next, isn't it?  I wonder if this patch will fix yours too
(you were using zypper, I was updating with yast2).

Hugh

> ---
>  fs/proc/task_mmu.c |    6 +++---
>  mm/huge_memory.c   |   13 ++++++-------
>  mm/mremap.c        |    2 ++
>  3 files changed, 11 insertions(+), 10 deletions(-)
> 
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 7810281..2d12325 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -394,7 +394,7 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  	pte_t *pte;
>  	spinlock_t *ptl;
>  
> -	if (pmd_trans_huge_lock(pmd, vma)) {
> +	if (pmd_trans_huge_lock(pmd, vma) == 1) {
>  		smaps_pte_entry(pmd_to_pte_t(pmd), addr, HPAGE_PMD_SIZE, walk);
>  		spin_unlock(&walk->mm->page_table_lock);
>  		mss->anonymous_thp += HPAGE_PMD_SIZE;
> @@ -696,7 +696,7 @@ static int pagemap_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>  	/* find the first VMA at or above 'addr' */
>  	vma = find_vma(walk->mm, addr);
>  
> -	if (pmd_trans_huge_lock(pmd, vma)) {
> +	if (pmd_trans_huge_lock(pmd, vma) == 1) {
>  		for (; addr != end; addr += PAGE_SIZE) {
>  			unsigned long offset = (addr & ~PAGEMAP_WALK_MASK)
>  				>> PAGE_SHIFT;
> @@ -973,7 +973,7 @@ static int gather_pte_stats(pmd_t *pmd, unsigned long addr,
>  
>  	md = walk->private;
>  
> -	if (pmd_trans_huge_lock(pmd, md->vma)) {
> +	if (pmd_trans_huge_lock(pmd, md->vma) == 1) {
>  		pte_t huge_pte = pmd_to_pte_t(pmd);
>  		struct page *page;
>  
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index bbf57b5..f342bb2 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -1030,7 +1030,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>  {
>  	int ret = 0;
>  
> -	if (__pmd_trans_huge_lock(pmd, vma)) {
> +	if (__pmd_trans_huge_lock(pmd, vma) == 1) {
>  		struct page *page;
>  		pgtable_t pgtable;
>  		pgtable = get_pmd_huge_pte(tlb->mm);
> @@ -1056,7 +1056,7 @@ int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>  {
>  	int ret = 0;
>  
> -	if (__pmd_trans_huge_lock(pmd, vma)) {
> +	if (__pmd_trans_huge_lock(pmd, vma) == 1) {
>  		/*
>  		 * All logical pages in the range are present
>  		 * if backed by a huge page.
> @@ -1094,12 +1094,11 @@ int move_huge_pmd(struct vm_area_struct *vma, struct vm_area_struct *new_vma,
>  		goto out;
>  	}
>  
> -	if (__pmd_trans_huge_lock(old_pmd, vma)) {
> +	if ((ret = __pmd_trans_huge_lock(old_pmd, vma)) == 1) {
>  		pmd = pmdp_get_and_clear(mm, old_addr, old_pmd);
>  		VM_BUG_ON(!pmd_none(*new_pmd));
>  		set_pmd_at(mm, new_addr, new_pmd, pmd);
>  		spin_unlock(&mm->page_table_lock);
> -		ret = 1;
>  	}
>  out:
>  	return ret;
> @@ -1111,7 +1110,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>  	struct mm_struct *mm = vma->vm_mm;
>  	int ret = 0;
>  
> -	if (__pmd_trans_huge_lock(pmd, vma)) {
> +	if (__pmd_trans_huge_lock(pmd, vma) == 1) {
>  		pmd_t entry;
>  		entry = pmdp_get_and_clear(mm, addr, pmd);
>  		entry = pmd_modify(entry, newprot);
> @@ -1125,7 +1124,7 @@ int change_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
>  
>  /*
>   * Returns 1 if a given pmd maps a stable (not under splitting) thp.
> - * Returns 0 otherwise.
> + * Returns -1 if it maps a thp under splitting. Returns 0 otherwise.
>   *
>   * Note that if it returns 1, this routine returns without unlocking page
>   * table locks. So callers must unlock them.
> @@ -1137,7 +1136,7 @@ int __pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma)
>  		if (unlikely(pmd_trans_splitting(*pmd))) {
>  			spin_unlock(&vma->vm_mm->page_table_lock);
>  			wait_split_huge_page(vma->anon_vma, pmd);
> -			return 0;
> +			return -1;
>  		} else {
>  			/* Thp mapped by 'pmd' is stable, so we can
>  			 * handle it as it is. */
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 22458b9..87bb839 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -155,6 +155,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
>  			if (err > 0) {
>  				need_flush = true;
>  				continue;
> +			} else if (!err) {
> +				split_huge_page_pmd(vma->vm_mm, old_pmd);
>  			}
>  			VM_BUG_ON(pmd_trans_huge(*old_pmd));
>  		}
> -- 
> 1.7.7.6
