From: Alex Thorlton <athorlton@sgi.com>
To: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ingo Molnar <mingo@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>,
	"Eric W . Biederman" <ebiederm@xmission.com>,
	"Paul E . McKenney" <paulmck@linux.vnet.ibm.com>,
	Al Viro <viro@zeniv.linux.org.uk>,
	Andi Kleen <ak@linux.intel.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Dave Jones <davej@redhat.com>,
	David Howells <dhowells@redhat.com>,
	Frederic Weisbecker <fweisbec@gmail.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Kees Cook <keescook@chromium.org>, Mel Gorman <mgorman@suse.de>,
	Michael Kerrisk <mtk.manpages@gmail.com>,
	Oleg Nesterov <oleg@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Rik van Riel <riel@redhat.com>, Robin Holt <robinmholt@gmail.com>,
	Sedat Dilek <sedat.dilek@gmail.com>,
	Srikar Dronamraju <srikar@linux.vnet.ibm.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCHv4 00/10] split page table lock for PMD tables
Date: Fri, 4 Oct 2013 15:31:47 -0500	[thread overview]
Message-ID: <20131004203147.GE32110@sgi.com> (raw)
In-Reply-To: <20131004202602.2D389E0090@blue.fi.intel.com>

On Fri, Oct 04, 2013 at 11:26:02PM +0300, Kirill A. Shutemov wrote:
> Alex Thorlton wrote:
> > Kirill,
> > 
> > I've pasted in my results for 512 cores below.  Things are looking 
> > really good here.  I don't have a test for HUGETLBFS, but if you want to
> > pass me the one you used, I can run that too.  I suppose I could write
> > one, but why reinvent the wheel? :)
> 
> Patch below.

Good deal, thanks.  I'll get some test results put up soon.

> 
> > Sorry for the delay on these results.  I hit some strange issues with
> > running thp_memscale on systems with either of the following
> > combinations of configuration options set:
> > 
> > [thp off]
> > HUGETLBFS=y
> > HUGETLB_PAGE=y
> > NUMA_BALANCING=y
> > NUMA_BALANCING_DEFAULT_ENABLED=y
> > 
> > [thp on or off]
> > HUGETLBFS=n
> > HUGETLB_PAGE=n
> > NUMA_BALANCING=y
> > NUMA_BALANCING_DEFAULT_ENABLED=y
> > 
> > I'm getting segfaults intermittently, as well as some weird RCU sched
> > errors.  This happens in vanilla 3.12-rc2, so it doesn't have anything
> > to do with your patches, but I thought I'd let you know.  There didn't
> > use to be any issues with this test, so I think there's a subtle kernel
> > bug here.  That's, of course, an entirely separate issue though.
> 
> I'll take a look next week, if nobody does it before.

I'm starting a bisect now.  Not sure how long it'll take, but I'll keep
you posted.

> 
> > 
> > As far as these patches go, I think everything looks good (save for the
> > bit of discussion you were having with Andrew earlier, which I think
> > you've worked out).  My testing shows that the page fault rates are
> > actually better on this threaded test than in the non-threaded case!
> > 
> > - Alex
> > 
> > THP on, v3.12-rc2:
> > ------------------
> > 
> >  Performance counter stats for './thp_memscale -C 0 -m 0 -c 512 -b 512m' (5 runs):
> > 
> >   568668865.944994 task-clock                #  528.547 CPUs utilized            ( +-  0.21% ) [100.00%]
> >          1,491,589 context-switches          #    0.000 M/sec                    ( +-  0.25% ) [100.00%]
> >              1,085 CPU-migrations            #    0.000 M/sec                    ( +-  1.80% ) [100.00%]
> >            400,822 page-faults               #    0.000 M/sec                    ( +-  0.41% )
> > 1,306,612,476,049,478 cycles                    #    2.298 GHz                      ( +-  0.23% ) [100.00%]
> > 1,277,211,694,318,724 stalled-cycles-frontend   #   97.75% frontend cycles idle     ( +-  0.21% ) [100.00%]
> > 1,163,736,844,232,064 stalled-cycles-backend    #   89.07% backend  cycles idle     ( +-  0.20% ) [100.00%]
> > 53,855,178,678,230 instructions              #    0.04  insns per cycle        
> >                                              #   23.72  stalled cycles per insn  ( +-  1.15% ) [100.00%]
> > 21,041,661,816,782 branches                  #   37.002 M/sec                    ( +-  0.64% ) [100.00%]
> >        606,665,092 branch-misses             #    0.00% of all branches          ( +-  0.63% )
> > 
> >     1075.909782795 seconds time elapsed                                          ( +-  0.21% )
> >
> > THP on, patched:
> > ----------------
> > 
> >  Performance counter stats for './runt -t -c 512 -b 512m' (5 runs):
> > 
> >    15836198.490485 task-clock                #  533.304 CPUs utilized            ( +-  0.95% ) [100.00%]
> >            127,507 context-switches          #    0.000 M/sec                    ( +-  1.65% ) [100.00%]
> >              1,223 CPU-migrations            #    0.000 M/sec                    ( +-  3.23% ) [100.00%]
> >            302,080 page-faults               #    0.000 M/sec                    ( +-  6.88% )
> > 18,925,875,973,975 cycles                    #    1.195 GHz                      ( +-  0.43% ) [100.00%]
> > 18,325,469,464,007 stalled-cycles-frontend   #   96.83% frontend cycles idle     ( +-  0.44% ) [100.00%]
> > 17,522,272,147,141 stalled-cycles-backend    #   92.58% backend  cycles idle     ( +-  0.49% ) [100.00%]
> >  2,686,490,067,197 instructions              #    0.14  insns per cycle        
> >                                              #    6.82  stalled cycles per insn  ( +-  2.16% ) [100.00%]
> >    944,712,646,402 branches                  #   59.655 M/sec                    ( +-  2.03% ) [100.00%]
> >        145,956,565 branch-misses             #    0.02% of all branches          ( +-  0.88% )
> > 
> >       29.694499652 seconds time elapsed                                          ( +-  0.95% )
> > 
> > (these results are from the test suite that I ripped thp_memscale out
> > of, but it's the same test)
> 
> 36 times faster. Not bad I think. ;)
> 
> Naive patch to use HUGETLB:
> 
> --- thp_memscale/thp_memscale.c	2013-09-23 23:44:21.000000000 +0300
> +++ thp_memscale/thp_memscale.c	2013-09-26 17:45:47.878429885 +0300
> @@ -191,7 +191,10 @@
>  	int id, i, cnt;
>  
>  	id = (long)arg;
> -	p = malloc(bytes);
> +	p = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
> +			MAP_ANONYMOUS | MAP_PRIVATE | MAP_HUGETLB, 0, 0);
> +	if (p == MAP_FAILED)
> +		perrorx("mmap failed");
>  	ps = p;
>  
>  	if (runon(basecpu + id) < 0)
> -- 
>  Kirill A. Shutemov

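The elapsed times quoted above work out to 1075.91 s / 29.69 s, or roughly
a 36.2x speedup, in line with Kirill's "36 times faster" estimate.  For
readers without the test suite, here is a minimal standalone sketch of the
same MAP_HUGETLB allocation pattern the patch switches thp_memscale to.
It is illustrative only (not part of the test suite), and it assumes huge
pages have already been reserved, e.g. via /proc/sys/vm/nr_hugepages; it
also passes the conventional fd of -1 for anonymous mappings, which Linux
ignores in any case.

#define _GNU_SOURCE		/* MAP_HUGETLB/MAP_ANONYMOUS under strict -std */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t bytes = 512UL << 20;	/* 512m, matching the -b 512m runs */
	char *p;

	/* Anonymous hugetlb mapping; no THP-style fallback to base pages. */
	p = mmap(NULL, bytes, PROT_READ | PROT_WRITE,
		 MAP_ANONYMOUS | MAP_PRIVATE | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");	/* ENOMEM if the pool is too small */
		return 1;
	}

	memset(p, 0, bytes);	/* touch the mapping to fault the pages in */
	munmap(p, bytes);
	return 0;
}

Note that, unlike THP, a MAP_HUGETLB mapping does not fall back to base
pages when the hugetlb pool is exhausted: the mmap() itself fails with
ENOMEM, which is why the patch above checks for MAP_FAILED and bails out
via perrorx() rather than assuming the allocation succeeded.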

Thread overview: 48+ messages
2013-09-27 13:16 [PATCHv4 00/10] split page table lock for PMD tables Kirill A. Shutemov
2013-09-27 13:16 ` [PATCHv4 01/10] mm: rename USE_SPLIT_PTLOCKS to USE_SPLIT_PTE_PTLOCKS Kirill A. Shutemov
2013-09-27 13:16 ` [PATCHv4 02/10] mm: convert mm->nr_ptes to atomic_t Kirill A. Shutemov
2013-09-27 20:46   ` Cody P Schafer
2013-09-27 21:01     ` Dave Hansen
2013-09-27 22:24     ` Kirill A. Shutemov
2013-09-28  0:13       ` Johannes Weiner
2013-09-28 16:12         ` Kirill A. Shutemov
2013-09-27 13:16 ` [PATCHv4 03/10] mm: introduce api for split page table lock for PMD level Kirill A. Shutemov
2013-09-27 13:16 ` [PATCHv4 04/10] mm, thp: change pmd_trans_huge_lock() to return taken lock Kirill A. Shutemov
2013-09-27 13:16 ` [PATCHv4 05/10] mm, thp: move ptl taking inside page_check_address_pmd() Kirill A. Shutemov
2013-09-27 13:16 ` [PATCHv4 06/10] mm, thp: do not access mm->pmd_huge_pte directly Kirill A. Shutemov
2013-09-27 13:16 ` [PATCHv4 07/10] mm, hugetlb: convert hugetlbfs to use split pmd lock Kirill A. Shutemov
2013-09-27 13:16 ` [PATCHv4 08/10] mm: convert the rest to new page table lock api Kirill A. Shutemov
2013-09-27 13:16 ` [PATCHv4 09/10] mm: implement split page table lock for PMD level Kirill A. Shutemov
2013-10-03 23:11   ` Andrew Morton
2013-10-03 23:38     ` Kirill A. Shutemov
2013-10-04  0:34       ` Kirill A. Shutemov
2013-10-04  7:21     ` Peter Zijlstra
2013-10-03 23:42   ` Kirill A. Shutemov
2013-09-27 13:16 ` [PATCHv4 10/10] x86, mm: enable " Kirill A. Shutemov
2013-10-04 20:12 ` [PATCHv4 00/10] split page table lock for PMD tables Alex Thorlton
2013-10-04 20:26   ` Kirill A. Shutemov
2013-10-04 20:31     ` Alex Thorlton [this message]
2013-10-07  9:48       ` Kirill A. Shutemov
2013-10-08 21:47         ` Alex Thorlton
