From: Mel Gorman <>
To: Nicholas Piggin <>
Cc: Linus Torvalds <>,
	Dave Hansen <>,
	Bob Peterson <>,
	Linux Kernel Mailing List <>,
	Steven Whitehouse <>,
	Andrew Lutomirski <>,
	Andreas Gruenbacher <>,
	Peter Zijlstra <>,
	linux-mm <>
Subject: Re: [PATCH 2/2] mm: add PageWaiters indicating tasks are waiting for a page bit
Date: Tue, 3 Jan 2017 17:18:27 +0000	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On Tue, Jan 03, 2017 at 10:29:58PM +1000, Nicholas Piggin wrote:
> > kernel building showed nothing unusual on any machine
> > 
> > git checkout in a loop showed;
> > 	o minor gains with Nick's patch
> > 	o no impact from Linus's patch
> > 	o flat performance from PeterZ's
> > 
> > git test suite showed
> > 	o close to flat performance on all patches
> > 	o Linus' patch on top showed increased variability but not serious
> I'd be really surprised if Linus's patch is actually adding variability
> unless it is just some random cache or branch predictor or similar change
> due to changed code sizes. Testing on skylake CPU showed the old sequence
> takes a big stall with the load-after-lock;op hazard.
> So I wouldn't worry about it too much, but maybe something interesting to
> look at for someone who knows x86 microarchitectures well.

Agreed, it looked like a testing artifact. Later in the day it became obvious
that results differed between boots and with minor changes in timing.

It's compounded by the fact that this particular test is based on /tmp, so
different people will get different results depending on the filesystem. In
my case, that was btrfs.

> > 
> > will-it-scale pagefault tests
> > 	o page_fault1 and page_fault2 showed no differences in processes
> > 
> > 	o page_fault3 using processes did show some large losses at some
> > 	  process counts on all patches. The losses were not consistent on
> > 	  each run. There was also no consistent loss with increasing
> > 	  process counts. It did appear that Peter's patch had fewer
> > 	  problems, with only one thread count showing problems, so it
> > 	  *may* be more resistant to the problem, but not completely,
> > 	  and it's not obvious why it might be, so it could be a testing
> > 	  anomaly
> Okay. page_fault3 has each process doing repeated page faults on their
> own 128MB file in /tmp. Unless they fill memory and start to reclaim,
> (which I believe must be happening in Dave's case) there should be no
> contention on page lock. After the patch, the uncontended case should
> be strictly faster when there is no contention.

It's possible in the test that I set up that it's getting screwed by the
filesystem used. It's writing that array, and on a COW filesystem there
is a whole bucket of variables such as new block allocations, writeback
timing etc. The writeback timing alone makes this test questionable
because the results will depend on when background writeback triggers. I'm
not sure how Dave is controlling for these factors.

As an aside, will-it-scale page_fault was one of the tests I dropped way
down on my list of priorities to watch a long time ago.  It's a short-lived
filesystem-based test, vulnerable to timing issues and highly variable, that
didn't seem worth controlling for at the time. I queued the test for a look
on the rough off chance your patches were falling over something obvious. If
they are, I haven't spotted it yet. Profiles are very similar. Separate
runs with lock_stat show detectable but negligible contention on
page_wait_table. The bulk of the contention is within the filesystem.

> When there is contention, there is an added cost of setting and clearing
> page waiters bit. Maybe there is some other issue there... are you seeing
> the losses in uncontended case, contended, or both?

I couldn't determine from the profile whether it was uncontended or not.
The fact that there are boot-to-boot variations makes it difficult
to determine whether a profile vs !profile run is equivalent without using
debugging patches to gather stats. lock_stat does show large numbers of
contentions all right, but the vast bulk of them are internal to btrfs.
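For anyone reproducing this, the lock_stat numbers referred to above can be
gathered through the standard /proc interface (requires a kernel built with
CONFIG_LOCK_STAT=y and root access). A rough sketch, with the workload
command as a placeholder:

```shell
# Sketch only: collect lock contention stats around a workload run.
# Interface per Documentation/locking/lockstat in the kernel tree.
echo 0 > /proc/lock_stat                  # clear existing counters
echo 1 > /proc/sys/kernel/lock_stat      # enable collection
./run-workload                           # placeholder for the actual test
echo 0 > /proc/sys/kernel/lock_stat      # stop collection
grep -A 2 page_wait_table /proc/lock_stat # look for wait-table contention
```

The output lists contention and hold-time statistics per lock class, which
is how the page_wait_table contention was separated from the btrfs-internal
locking.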

Mel Gorman


Thread overview: 22+ messages
2016-12-25  3:00 [PATCH 0/2] PageWaiters again Nicholas Piggin
2016-12-25  3:00 ` [PATCH 1/2] mm: Use owner_priv bit for PageSwapCache, valid when PageSwapBacked Nicholas Piggin
2016-12-25  5:13   ` Hugh Dickins
2016-12-25  3:00 ` [PATCH 2/2] mm: add PageWaiters indicating tasks are waiting for a page bit Nicholas Piggin
2016-12-25 21:51   ` Linus Torvalds
2016-12-26  1:16     ` Nicholas Piggin
2016-12-26 19:07       ` Linus Torvalds
2016-12-27 11:19         ` Nicholas Piggin
2016-12-27 18:58           ` Linus Torvalds
2016-12-27 19:23             ` Linus Torvalds
2016-12-27 19:24               ` Linus Torvalds
2016-12-27 19:40                 ` Linus Torvalds
2016-12-27 20:17                   ` Linus Torvalds
2016-12-28  3:53             ` Nicholas Piggin
2016-12-28 19:17               ` Linus Torvalds
2016-12-29  4:08                 ` Nicholas Piggin
2016-12-29  4:16                   ` Linus Torvalds
2016-12-29  5:26                     ` Nicholas Piggin
2017-01-03 10:24                       ` Mel Gorman
2017-01-03 12:29                         ` Nicholas Piggin
2017-01-03 17:18                           ` Mel Gorman [this message]
2016-12-29 22:16                     ` [PATCH] mm/filemap: fix parameters to test_bit() Olof Johansson
