From: Wu Fengguang <fengguang.wu@intel.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@linux.intel.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Li Shaohua <shaohua.li@intel.com>
Subject: Re: [TESTCASE] Clean pages clogging the VM
Date: Thu, 19 Aug 2010 00:07:31 +0800	[thread overview]
Message-ID: <20100818160731.GA15002@localhost> (raw)
In-Reply-To: <20100818160613.GE9431@localhost>

On Thu, Aug 19, 2010 at 12:06:13AM +0800, Wu Fengguang wrote:
> On Wed, Aug 18, 2010 at 04:13:08PM +0200, Johannes Weiner wrote:
> > Hi Matthew,
> > 
> > On Tue, Aug 17, 2010 at 03:50:01PM -0400, Matthew Wilcox wrote:
> > > 
> > > No comment on this?  Was it just that I posted it during the VM summit?
> > 
> > I have not forgotten about it.  I just have a hard time reproducing
> > those extreme stalls you observed.
> > 
> > Running that test on a 2.5GHz machine with 2G of memory gives me
> > stalls of up to half a second.  The patchset I am experimenting with
> > gets me down to peaks of 70ms, but it needs further work.
> > 
> > Mapped file pages get two rounds on the LRU list, so once the VM
> > starts scanning, it has to go through all of them twice and can only
> > reclaim them on the second encounter.
> > 
> > At that point, since we scan without making progress, we start waiting
> > for IO, which is not happening in this case, so we sit there until a
> > timeout expires.
> 
> Right, this could lead to stalls of around 1s. Shaohua and I also noticed
> this when investigating the responsiveness issues. We are wondering
> whether it makes sense to do congestion_wait() only when the bdi is really
> congested. There is no IO underway in this case anyway.
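
The change suggested above, as a kernel-context sketch (not a tested patch; whether this is the right call site is an assumption, and the helpers are named as in 2.6.35-era kernels):

```c
/*
 * Sketch: before sleeping in the reclaim path, check whether the
 * backing device is actually congested.  If it is not, there is no
 * in-flight IO whose completion the timeout could be waiting for,
 * so skip the sleep and keep scanning instead.
 */
if (bdi_write_congested(bdi))
	congestion_wait(BLK_RW_ASYNC, HZ / 10);
/* else: proceed without the up-to-100ms stall */
```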
> 
> > This stupid-waiting can be improved, and I am working on that.  But
> 
> Yeah, stupid waiting :)
> 
> > since I can not reproduce your observations, I don't know if this is
> > the (sole) source of the problem.  Can I send you patches?
> 
> Sure.
> 
> > > On Mon, Aug 09, 2010 at 09:30:00AM -0400, Matthew Wilcox wrote:
> > > > 
> > > > This testcase shows some odd behaviour from the Linux VM.
> > > > 
> > > > It creates a 1TB sparse file, mmaps it, and randomly reads locations 
> > > > in it.  Due to the file being entirely sparse, the VM allocates new pages
> > > > and zeroes them.  Initially, it runs very fast, taking on the order of
> > > > 2.7 to 4us per page fault.  Eventually, the VM runs out of free pages,
> > > > and starts doing huge amounts of work trying to figure out which of
> > > > these clean pages to throw away.
> > 
> > This is similar to one of my test cases for:
> > 
> > 	6457474 vmscan: detect mapped file pages used only once
> > 	31c0569 vmscan: drop page_mapping_inuse()
> > 	dfc8d63 vmscan: factor out page reference checks
> > 
> > because the situation was even worse before (see the series
> > description in dfc8d63).  Maybe asking the obvious, but the kernel you
> > tested on did include those commits, right?
> > 
> > And just to be sure, I sent you a test-patch to disable the used-once
> > detection on IRC the other day.  Did you have time to run it yet?
> > Here it is again:
> > 
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 9c7e57c..c757bba 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -584,6 +584,7 @@ static enum page_references page_check_references(struct page *page,
> >  		return PAGEREF_RECLAIM;
> >  
> >  	if (referenced_ptes) {
> > +		return PAGEREF_ACTIVATE;
> 
> How come page activation helps?
> 
> >  		if (PageAnon(page))
> >  			return PAGEREF_ACTIVATE;
> >  		/*
> > 
> > 
> > > > In my testing with a 6GB machine and 2.9GHz CPU, one in every
> > > > 15,000 page faults takes over a second, and one in every 40,000
> > > > page faults takes over seven seconds!
> > > > 
> > > > This test-case demonstrates a problem that occurs with a read-mostly
> > > > mmap of a file on very fast media.  I wouldn't like to see a solution
> > > > that special-cases zeroed pages.  I think userspace has done its part
> > > > to tell the kernel what it's doing by calling madvise(MADV_RANDOM).
> > > > This ought to be enough to hint to the kernel that it should be eagerly
> > > > throwing away pages in this VMA.
> > 
> > We can probably do something like the following, but I am not sure
> > this is a good fix, either.  How many applications are using
> > madvise()?
> 
> Heh, it sounds crazy to rip random read pages, though it does help to
> produce a FAST test case.
> 
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -495,7 +495,7 @@ int page_referenced_one(struct page *pag
> >  		 * mapping is already gone, the unmap path will have
> >  		 * set PG_referenced or activated the page.
> >  		 */
> > -		if (likely(!VM_SequentialReadHint(vma)))
> > +		if (likely(!(vma->vm_flags & (VM_SEQ_READ|VM_RAND_READ))))
> >  			referenced++;
> >  	}
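
For reference, the userspace side of the hint this hunk consumes: madvise(MADV_RANDOM) sets VM_RAND_READ on the vma (and MADV_SEQUENTIAL sets VM_SEQ_READ), which page_referenced_one() then sees in vma->vm_flags. A minimal sketch (hint_random is a hypothetical helper for illustration):

```c
#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Map an anonymous region and mark it for random access.
 * Returns 0 on success, -1 on failure. */
static int hint_random(size_t len)
{
	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (addr == MAP_FAILED)
		return -1;
	int ret = madvise(addr, len, MADV_RANDOM);
	munmap(addr, len);
	return ret;
}
```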
> 
> Thanks,
> Fengguang
> 

Thread overview: 19+ messages
2010-08-09 13:30 [TESTCASE] Clean pages clogging the VM Matthew Wilcox
2010-08-17 19:50 ` Matthew Wilcox
2010-08-18 14:13   ` Johannes Weiner
     [not found]     ` <20100818160613.GE9431@localhost>
2010-08-18 16:07       ` Wu Fengguang [this message]
2010-08-19  1:42         ` Shaohua Li
2010-08-19 11:51         ` Johannes Weiner
2010-08-19 21:09           ` Wu Fengguang
2010-08-20  5:05           ` Shaohua Li
2010-08-18 21:26     ` Wu Fengguang
2010-08-19  9:18     ` KOSAKI Motohiro