* [PATCH] mremap: Increase LATENCY_LIMIT of mremap to reduce the number of TLB shootdowns
@ 2018-06-06 14:02 Mel Gorman
  2018-06-06 15:55 ` Nadav Amit
  0 siblings, 1 reply; 4+ messages in thread
From: Mel Gorman @ 2018-06-06 14:02 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Dave Hansen, mhocko, vbabka, Aaron Lu, linux-kernel, linux-mm

Commit 5d1904204c99 ("mremap: fix race between mremap() and page cleanning")
fixed races between mremap and other operations for both file-backed and
anonymous mappings. The file-backed was the most critical as it allowed the
possibility that data could be changed on a physical page after page_mkclean
returned, which could trigger data loss or data integrity issues. A customer
reported that the cost of the TLB flushes for anonymous mappings was excessive,
resulting in an overall 30-50% drop in performance on a microbenchmark since
this commit. Unfortunately I neither have access to the test-case
nor can I describe what it does other than saying that mremap operations
dominate heavily.

This patch increases LATENCY_LIMIT so that TLB flushes happen on a
PMD boundary instead of every 64 pages. This reduces the number of TLB
shootdowns by a factor of 8, which is reported not to completely restore
performance but to bring it within an acceptable percentage. The metric given
here is simply described as "higher is better".
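
For context, the clamp being changed sits in the per-PMD copy loop of
move_page_tables(); with 4K pages, a PMD covers 512 pages versus the old
64-page limit, hence the factor of 8. A simplified sketch of that loop
(illustration only, the real function also handles THP, rmap locks and mmu
notifiers):

  for (; old_addr < old_end; old_addr += extent, new_addr += extent) {
      next = (old_addr + PMD_SIZE) & PMD_MASK;
      extent = next - old_addr;
      if (extent > old_end - old_addr)
          extent = old_end - old_addr;
      next = (new_addr + PMD_SIZE) & PMD_MASK;
      if (extent > next - new_addr)
          extent = next - new_addr;
      if (extent > LATENCY_LIMIT)       /* 64 pages before, PMD_SIZE after */
          extent = LATENCY_LIMIT;
      move_ptes(vma, old_pmd, old_addr, old_addr + extent,
                new_vma, new_pmd, new_addr, need_rmap_locks);
  }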

Baseline that was known good
002:  Metric:       91.05
004:  Metric:      109.45
008:  Metric:       73.08
016:  Metric:       58.14
032:  Metric:       61.09
064:  Metric:       57.76
128:  Metric:       55.43

Current
001:  Metric:       54.98
002:  Metric:       56.56
004:  Metric:       41.22
008:  Metric:       35.96
016:  Metric:       36.45
032:  Metric:       35.71
064:  Metric:       35.73
128:  Metric:       34.96

With patch
001:  Metric:       61.43
002:  Metric:       81.64
004:  Metric:       67.92
008:  Metric:       51.67
016:  Metric:       50.47
032:  Metric:       52.29
064:  Metric:       50.01
128:  Metric:       49.04

So for low thread counts, it's not restored, but for larger numbers of threads
it's closer to the "known good" baseline. The downside is that PTL hold
times will be slightly higher, but it's unlikely that an mremap and
another operation will contend on the same PMD. This is the first time I
encountered a realistic workload that was mremap intensive (thousands of
calls per second with small ranges dominating).

Using a different mremap-intensive workload that is not representative of
the real workload, there is little difference observed outside of noise in
the headline metrics. However, TLB shootdowns are reduced by 11% on
average and, at the peak, by 21%. Interrupts
were sampled every second while the workload ran to get those figures.
It's known that the figures will vary as the non-representative load is
non-deterministic.
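
For reference, one way to do this kind of per-second sampling on x86 is to
sum the "TLB" shootdown line of /proc/interrupts once a second. A minimal
sketch of such a sampler (illustrative only, not necessarily the harness
used for the figures above):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  /* Sum the per-CPU counters on the "TLB" line of /proc/interrupts */
  static unsigned long long tlb_shootdowns(void)
  {
      char line[8192];
      unsigned long long total = 0;
      FILE *f = fopen("/proc/interrupts", "r");

      if (!f)
          return 0;
      while (fgets(line, sizeof(line), f)) {
          char *p = strstr(line, "TLB:");
          if (!p)
              continue;
          for (p += 4; ; ) {
              char *end;
              unsigned long long v = strtoull(p, &end, 10);
              if (end == p)       /* ran past the numeric columns */
                  break;
              total += v;
              p = end;
          }
          break;
      }
      fclose(f);
      return total;
  }

  int main(void)
  {
      unsigned long long prev = tlb_shootdowns();

      for (;;) {
          sleep(1);
          unsigned long long cur = tlb_shootdowns();
          printf("TLB shootdowns/sec: %llu\n", cur - prev);
          prev = cur;
      }
  }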

An alternative patch was posted that should have significantly reduced the
TLB flushes but unfortunately it does not perform as well as this version
on the customer test case. If revisited, the two patches can stack on top
of each other.

Signed-off-by: Mel Gorman <mgorman@suse.com>
---
 mm/mremap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/mremap.c b/mm/mremap.c
index 049470aa1e3e..b5017cb2e1e9 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -191,7 +191,7 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
 		drop_rmap_locks(vma);
 }
 
-#define LATENCY_LIMIT	(64 * PAGE_SIZE)
+#define LATENCY_LIMIT	(PMD_SIZE)
 
 unsigned long move_page_tables(struct vm_area_struct *vma,
 		unsigned long old_addr, struct vm_area_struct *new_vma,


* Re: [PATCH] mremap: Increase LATENCY_LIMIT of mremap to reduce the number of TLB shootdowns
  2018-06-06 14:02 [PATCH] mremap: Increase LATENCY_LIMIT of mremap to reduce the number of TLB shootdowns Mel Gorman
@ 2018-06-06 15:55 ` Nadav Amit
  2018-06-06 17:47   ` Mel Gorman
  0 siblings, 1 reply; 4+ messages in thread
From: Nadav Amit @ 2018-06-06 15:55 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Dave Hansen, mhocko, vbabka, Aaron Lu,
	linux-kernel, linux-mm

Mel Gorman <mgorman@techsingularity.net> wrote:

> Commit 5d1904204c99 ("mremap: fix race between mremap() and page cleanning")
> fixed races between mremap and other operations for both file-backed and
> anonymous mappings. The file-backed was the most critical as it allowed the
> possibility that data could be changed on a physical page after page_mkclean
> returned which could trigger data loss or data integrity issues. A customer
> reported that the cost of the TLBs for anonymous regressions was excessive
> and resulting in a 30-50% drop in performance overall since this commit
> on a microbenchmark. Unfortunately I neither have access to the test-case
> nor can I describe what it does other than saying that mremap operations
> dominate heavily.
> 
> This patch increases LATENCY_LIMIT so that TLB flushes happen on a
> PMD boundary instead of every 64 pages. This reduces the number of TLB
> shootdowns by a factor of 8, which is reported not to completely restore
> performance but to bring it within an acceptable percentage. The metric given
> here is simply described as "higher is better".
> 
> Baseline that was known good
> 002:  Metric:       91.05
> 004:  Metric:      109.45
> 008:  Metric:       73.08
> 016:  Metric:       58.14
> 032:  Metric:       61.09
> 064:  Metric:       57.76
> 128:  Metric:       55.43
> 
> Current
> 001:  Metric:       54.98
> 002:  Metric:       56.56
> 004:  Metric:       41.22
> 008:  Metric:       35.96
> 016:  Metric:       36.45
> 032:  Metric:       35.71
> 064:  Metric:       35.73
> 128:  Metric:       34.96
> 
> With patch
> 001:  Metric:       61.43
> 002:  Metric:       81.64
> 004:  Metric:       67.92
> 008:  Metric:       51.67
> 016:  Metric:       50.47
> 032:  Metric:       52.29
> 064:  Metric:       50.01
> 128:  Metric:       49.04
> 
> So for low thread counts, it's not restored, but for larger numbers of threads
> it's closer to the "known good" baseline. The downside is that PTL hold
> times will be slightly higher, but it's unlikely that an mremap and
> another operation will contend on the same PMD. This is the first time I
> encountered a realistic workload that was mremap intensive (thousands of
> calls per second with small ranges dominating).
> 
> Using a different mremap-intensive workload that is not representative of
> the real workload, there is little difference observed outside of noise in
> the headline metrics. However, TLB shootdowns are reduced by 11% on
> average and, at the peak, by 21%. Interrupts
> were sampled every second while the workload ran to get those figures.
> It's known that the figures will vary as the non-representative load is
> non-deterministic.
> 
> An alternative patch was posted that should have significantly reduced the
> TLB flushes but unfortunately it does not perform as well as this version
> on the customer test case. If revisited, the two patches can stack on top
> of each other.
> 
> Signed-off-by: Mel Gorman <mgorman@suse.com>
> ---
> mm/mremap.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/mremap.c b/mm/mremap.c
> index 049470aa1e3e..b5017cb2e1e9 100644
> --- a/mm/mremap.c
> +++ b/mm/mremap.c
> @@ -191,7 +191,7 @@ static void move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
> 		drop_rmap_locks(vma);
> }
> 
> -#define LATENCY_LIMIT	(64 * PAGE_SIZE)
> +#define LATENCY_LIMIT	(PMD_SIZE)
> 
> unsigned long move_page_tables(struct vm_area_struct *vma,
> 		unsigned long old_addr, struct vm_area_struct *new_vma,

This LATENCY_LIMIT is only used in move_page_tables() in the following
manner:

  next = (new_addr + PMD_SIZE) & PMD_MASK;
  if (extent > next - new_addr)
      extent = next - new_addr;
  if (extent > LATENCY_LIMIT)
      extent = LATENCY_LIMIT;
   
If LATENCY_LIMIT is to be changed to PMD_SIZE, then IIUC the last condition
is not required, and LATENCY_LIMIT can just be removed (assuming there is no
underflow case that hides somewhere).
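
Something along these lines (untested) would then be enough:

  next = (new_addr + PMD_SIZE) & PMD_MASK;
  if (extent > next - new_addr)
      extent = next - new_addr;
  /* extent is now at most PMD_SIZE, so no further clamp is needed */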

No?


* Re: [PATCH] mremap: Increase LATENCY_LIMIT of mremap to reduce the number of TLB shootdowns
  2018-06-06 15:55 ` Nadav Amit
@ 2018-06-06 17:47   ` Mel Gorman
  2018-06-06 18:20     ` Nadav Amit
  0 siblings, 1 reply; 4+ messages in thread
From: Mel Gorman @ 2018-06-06 17:47 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Andrew Morton, Dave Hansen, mhocko, vbabka, Aaron Lu,
	linux-kernel, linux-mm

On Wed, Jun 06, 2018 at 08:55:15AM -0700, Nadav Amit wrote:
> > -#define LATENCY_LIMIT	(64 * PAGE_SIZE)
> > +#define LATENCY_LIMIT	(PMD_SIZE)
> > 
> > unsigned long move_page_tables(struct vm_area_struct *vma,
> > 		unsigned long old_addr, struct vm_area_struct *new_vma,
> 
> This LATENCY_LIMIT is only used in move_page_tables() in the following
> manner:
> 
>   next = (new_addr + PMD_SIZE) & PMD_MASK;
>   if (extent > next - new_addr)
>       extent = next - new_addr;
>   if (extent > LATENCY_LIMIT)
>       extent = LATENCY_LIMIT;
>    
> If LATENCY_LIMIT is to be changed to PMD_SIZE, then IIUC the last condition
> is not required, and LATENCY_LIMIT can just be removed (assuming there is no
> underflow case that hides somewhere).
> 

I see no problem removing it other than we may forget that we ever limited
PTE lock hold times for any reason. I'm skeptical it will matter unless
mremap-intensive workloads are a lot more common than I believe.

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH] mremap: Increase LATENCY_LIMIT of mremap to reduce the number of TLB shootdowns
  2018-06-06 17:47   ` Mel Gorman
@ 2018-06-06 18:20     ` Nadav Amit
  0 siblings, 0 replies; 4+ messages in thread
From: Nadav Amit @ 2018-06-06 18:20 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Dave Hansen, mhocko, vbabka, Aaron Lu,
	linux-kernel, linux-mm

Mel Gorman <mgorman@techsingularity.net> wrote:

> On Wed, Jun 06, 2018 at 08:55:15AM -0700, Nadav Amit wrote:
>>> -#define LATENCY_LIMIT	(64 * PAGE_SIZE)
>>> +#define LATENCY_LIMIT	(PMD_SIZE)
>>> 
>>> unsigned long move_page_tables(struct vm_area_struct *vma,
>>> 		unsigned long old_addr, struct vm_area_struct *new_vma,
>> 
>> This LATENCY_LIMIT is only used in move_page_tables() in the following
>> manner:
>> 
>>  next = (new_addr + PMD_SIZE) & PMD_MASK;
>>  if (extent > next - new_addr)
>>      extent = next - new_addr;
>>  if (extent > LATENCY_LIMIT)
>>      extent = LATENCY_LIMIT;
>> 
>> If LATENCY_LIMIT is to be changed to PMD_SIZE, then IIUC the last condition
>> is not required, and LATENCY_LIMIT can just be removed (assuming there is no
>> underflow case that hides somewhere).
> 
> I see no problem removing it other than we may forget that we ever limited
> PTE lock hold times for any reason. I'm skeptical it will matter unless
> mremap-intensive workloads are a lot more common than I believe.

I have no opinion regarding the behavior change. It is just that code with
no effect is oftentimes confusing. A comment (if needed) can replace the
code, and git history would show how it was once handled.
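
For example, just a sketch of what such a comment could say:

  /*
   * extent never exceeds PMD_SIZE here, which also bounds how long the
   * PTL is held per iteration; an explicit LATENCY_LIMIT used to enforce
   * a smaller bound and was removed as redundant.
   */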

