From: Mel Gorman <mgorman@suse.de>
To: Dave Hansen <dave@sr71.net>
Cc: x86@kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
	ak@linux.intel.com, riel@redhat.com, alex.shi@linaro.org,
	dave.hansen@linux.intel.com
Subject: Re: [PATCH 6/6] x86: mm: set TLB flush tunable to sane value (33)
Date: Thu, 24 Apr 2014 11:46:53 +0100	[thread overview]
Message-ID: <20140424104147.GU23991@suse.de> (raw)
In-Reply-To: <20140421182428.FC2104C1@viggo.jf.intel.com>

On Mon, Apr 21, 2014 at 11:24:28AM -0700, Dave Hansen wrote:
> 
> From: Dave Hansen <dave.hansen@linux.intel.com>
> 
> This has been run through Intel's LKP tests across a wide range
> of modern systems and workloads, and it was not shown to make a
> measurable performance difference, positive or negative.
> 
> Now that we have some shiny new tracepoints, we can actually
> figure out what the heck is going on.
> 

Good stuff. This is the type of analysis I should have done last time
to set the parameters for the tlbflush microbenchmark. Nice one!
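
If anyone wants to reproduce the histogram for their own workload, the
tracepoint from patch 4/6 can be consumed directly. A rough sketch
(assuming the event lands as events/tlb/tlb_flush under the usual
debugfs tracing mount; adjust the paths for your setup):

    /* Enable the tlb_flush tracepoint and stream the resulting
     * events; each line carries the flush reason and page count,
     * which is what the histogram below bins on. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            const char *en =
                "/sys/kernel/debug/tracing/events/tlb/tlb_flush/enable";
            int fd = open(en, O_WRONLY);
            char line[512];
            FILE *tp;

            if (fd < 0 || write(fd, "1", 1) != 1)
                    return 1;
            close(fd);

            tp = fopen("/sys/kernel/debug/tracing/trace_pipe", "r");
            if (!tp)
                    return 1;
            while (fgets(line, sizeof(line), tp))
                    fputs(line, stdout);
            return 0;
    }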

> During a kernel compile, nearly 60% of the flush_tlb_mm_range()
> calls are for a single page.  It breaks down like this:
> 
>  size   percent  cumulative
>   V        V         V
> GLOBAL:   2.20%   2.20% avg cycles:  2283
>      1:  56.92%  59.12% avg cycles:  1276
>      2:  13.78%  72.90% avg cycles:  1505
>      3:   8.26%  81.16% avg cycles:  1880
>      4:   7.41%  88.58% avg cycles:  2447
>      5:   1.73%  90.31% avg cycles:  2358
>      6:   1.32%  91.63% avg cycles:  2563
>      7:   1.14%  92.77% avg cycles:  2862
>      8:   0.62%  93.39% avg cycles:  3542
>      9:   0.08%  93.47% avg cycles:  3289
>     10:   0.43%  93.90% avg cycles:  3570
>     11:   0.20%  94.10% avg cycles:  3767
>     12:   0.08%  94.18% avg cycles:  3996
>     13:   0.03%  94.20% avg cycles:  4077
>     14:   0.02%  94.23% avg cycles:  4836
>     15:   0.04%  94.26% avg cycles:  5699
>     16:   0.06%  94.32% avg cycles:  5041
>     17:   0.57%  94.89% avg cycles:  5473
>     18:   0.02%  94.91% avg cycles:  5396
>     19:   0.03%  94.95% avg cycles:  5296
>     20:   0.02%  94.96% avg cycles:  6749
>     21:   0.18%  95.14% avg cycles:  6225
>     22:   0.01%  95.15% avg cycles:  6393
>     23:   0.01%  95.16% avg cycles:  6861
>     24:   0.12%  95.28% avg cycles:  6912
>     25:   0.05%  95.32% avg cycles:  7190
>     26:   0.01%  95.33% avg cycles:  7793
>     27:   0.01%  95.34% avg cycles:  7833
>     28:   0.01%  95.35% avg cycles:  8253
>     29:   0.08%  95.42% avg cycles:  8024
>     30:   0.03%  95.45% avg cycles:  9670
>     31:   0.01%  95.46% avg cycles:  8949
>     32:   0.01%  95.46% avg cycles:  9350
>     33:   3.11%  98.57% avg cycles:  8534
>     34:   0.02%  98.60% avg cycles: 10977
>     35:   0.02%  98.62% avg cycles: 11400
> 
> We get into diminishing returns pretty quickly.  On pre-IvyBridge
> CPUs, we used to set the limit at 8 pages, and it was set at 128
> on IvyBridge.  That 128 number looks pretty silly considering that
> less than 0.5% of the flushes are that large.
> 
> The previous code tried to size this number based on the size of
> the TLB.  Good idea, but it's error-prone, needs maintenance
> (which it has not received until now), and probably would not
> matter much in practice.
> 
> Setting it to 33 means that we cover the mallopt
> M_TRIM_THRESHOLD, which is the most universally common flush
> size.
> 
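
For readers coming to this from the archive: the decision being tuned
boils down to a single comparison in flush_tlb_mm_range(). A minimal
userspace model of the heuristic (illustrative only, not the kernel
code itself; the variable name follows the tunable this series
introduces):

    #include <stdio.h>

    #define PAGE_SHIFT 12

    /* 33 covers ~98.6% of the flushes in the histogram above */
    static unsigned long tlb_single_page_flush_ceiling = 33;

    static void flush_decision(unsigned long start, unsigned long end)
    {
            unsigned long nr_pages = (end - start) >> PAGE_SHIFT;

            if (nr_pages > tlb_single_page_flush_ceiling)
                    printf("full TLB flush for %lu pages\n", nr_pages);
            else
                    printf("%lu individual invlpg flushes\n", nr_pages);
    }

    int main(void)
    {
            flush_decision(0, 33UL << PAGE_SHIFT); /* per-page path */
            flush_decision(0, 34UL << PAGE_SHIFT); /* full-flush path */
            return 0;
    }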

A kernel compile is hardly a representative workload, but I accept the
logic of tuning this based on the current M_TRIM_THRESHOLD setting, and
the tools are now there to do a more detailed analysis if TLB flush
times are identified as a problem for someone.
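
For anyone unfamiliar with the knob: glibc's default M_TRIM_THRESHOLD
is 128kB, i.e. 32 4kB pages, which presumably is what produces the
3.11% spike at 33 entries in the histogram. A trivial illustration of
moving the threshold so that free() trims, and therefore flushes, less
often:

    #include <malloc.h>
    #include <stdlib.h>

    int main(void)
    {
            void *p;

            /* Only return heap memory to the kernel once >= 1MB of it
             * is free at the top; mallopt() returns 1 on success. */
            if (mallopt(M_TRIM_THRESHOLD, 1024 * 1024) != 1)
                    return 1;

            p = malloc(512 * 1024);
            free(p); /* below the new threshold: no trim, no flush */
            return 0;
    }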

Acked-by: Mel Gorman <mgorman@suse.de>

-- 
Mel Gorman
SUSE Labs

Thread overview: 28 messages
2014-04-21 18:24 [PATCH 0/6] x86: rework tlb range flushing code Dave Hansen
2014-04-21 18:24 ` [PATCH 1/6] x86: mm: clean up tlb " Dave Hansen
2014-04-22 16:53   ` Rik van Riel
2014-04-24  8:33   ` Mel Gorman
2014-04-21 18:24 ` [PATCH 2/6] x86: mm: rip out complicated, out-of-date, buggy TLB flushing Dave Hansen
2014-04-22 16:54   ` Rik van Riel
2014-04-24  8:45   ` Mel Gorman
2014-04-24 16:58     ` Dave Hansen
2014-04-24 18:00       ` Mel Gorman
2014-04-25 21:39     ` Dave Hansen
2014-04-21 18:24 ` [PATCH 3/6] x86: mm: fix missed global TLB flush stat Dave Hansen
2014-04-22 17:15   ` Rik van Riel
2014-04-24  8:49   ` Mel Gorman
2014-04-21 18:24 ` [PATCH 4/6] x86: mm: trace tlb flushes Dave Hansen
2014-04-22 21:19   ` Rik van Riel
2014-04-24 10:14   ` Mel Gorman
2014-04-24 20:42     ` Dave Hansen
2014-04-21 18:24 ` [PATCH 5/6] x86: mm: new tunable for single vs full TLB flush Dave Hansen
2014-04-22 21:31   ` Rik van Riel
2014-04-24 10:37   ` Mel Gorman
2014-04-24 17:25     ` Dave Hansen
2014-04-24 17:53       ` Rik van Riel
2014-04-24 22:03         ` Dave Hansen
2014-07-07 17:43     ` Dave Hansen
2014-07-08  0:43       ` Alex Shi
2014-04-21 18:24 ` [PATCH 6/6] x86: mm: set TLB flush tunable to sane value (33) Dave Hansen
2014-04-22 21:33   ` Rik van Riel
2014-04-24 10:46   ` Mel Gorman [this message]
