* [PATCH v6 0/7] x86,tlb,mm: make lazy TLB mode even lazier
@ 2018-07-16 19:03 Rik van Riel
  2018-07-16 19:03 ` [PATCH 1/7] mm: allocate mm_cpumask dynamically based on nr_cpu_ids Rik van Riel
                   ` (6 more replies)
  0 siblings, 7 replies; 61+ messages in thread
From: Rik van Riel @ 2018-07-16 19:03 UTC (permalink / raw)
  To: linux-kernel; +Cc: x86, luto, efault, kernel-team, mingo, dave.hansen

Song noticed switch_mm_irqs_off taking a lot of CPU time in recent
kernels, consuming 1.9% of the CPU time on a 48 CPU system during a
netperf run. Digging into the profile shows that the atomic operations
in cpumask_clear_cpu and cpumask_set_cpu are responsible for about half
of that CPU use.

However, the CPUs running netperf are simply switching back and
forth between netperf and the idle task, which would not require any
changes to the mm_cpumask if lazy TLB mode were used.

Additionally, the init_mm cpumask ends up being the most heavily
contended one in the system, for no reason at all.
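
To illustrate how a lazy shortcut avoids those atomics, here is a
minimal sketch (not the actual patch: cpu_tlbstate.loaded_mm and
cpu_tlbstate.is_lazy follow the shape of the x86 code, the rest is
pared down, and PCID handling is ignored). Going idle just marks the
CPU lazy; waking back up into the same mm touches neither mm_cpumask
nor CR3:

void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
	/* Keep the old mm loaded; just note that this CPU is lazy. */
	this_cpu_write(cpu_tlbstate.is_lazy, true);
}

void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
			struct task_struct *tsk)
{
	struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);

	if (next == loaded_mm) {
		/* Leaving lazy mode, same mm: no atomics, no CR3 write. */
		this_cpu_write(cpu_tlbstate.is_lazy, false);
		return;
	}

	/* Real mm switch: pay for the atomics and the TLB flush. */
	cpumask_clear_cpu(smp_processor_id(), mm_cpumask(loaded_mm));
	cpumask_set_cpu(smp_processor_id(), mm_cpumask(next));
	this_cpu_write(cpu_tlbstate.loaded_mm, next);
	write_cr3(__pa(next->pgd));
}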

By making really lazy TLB mode work again on modern kernels, sending
a shootdown IPI only when page table pages are being unmapped, we get
back some of that performance.
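
The flush side of that trade-off could look roughly like this (a
hypothetical sketch: flush_tlb_func_remote, on_each_cpu_mask and
struct flush_tlb_info exist in the kernel, but the function below,
its freed_tables argument, and the on-stack cpumask are invented
here for illustration):

static void flush_tlb_others_sketch(struct mm_struct *mm,
				    const struct flush_tlb_info *info,
				    bool freed_tables)
{
	struct cpumask targets;
	unsigned int cpu;

	cpumask_copy(&targets, mm_cpumask(mm));

	if (!freed_tables) {
		/*
		 * Ordinary flushes can skip lazy CPUs; they will
		 * flush or switch away from this mm before touching
		 * userspace again.
		 */
		for_each_cpu(cpu, &targets)
			if (per_cpu(cpu_tlbstate.is_lazy, cpu))
				cpumask_clear_cpu(cpu, &targets);
	}

	/*
	 * When page table pages are being freed, lazy CPUs must be
	 * IPIed as well, since their speculative page table walks
	 * can still reference the old tables.
	 */
	on_each_cpu_mask(&targets, flush_tlb_func_remote, (void *)info, true);
}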

v6 addresses the comment and Signed-off-by issues pointed out by Ingo.

On memcache workloads on 2 socket systems, this patch series seems
to reduce total system CPU use by 1-2%. On Song's netbench tests,
CPU use in the context switch path is roughly cut in half.

These patches also provide a small memory saving by shrinking
the size of mm_struct, especially on distro kernels compiled with
a gigantically large NR_CPUS.
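
The saving comes from sizing the cpumask at runtime rather than for
the compile-time NR_CPUS. Roughly (simplified from patch 1/7, which
places a dynamically sized bitmap at the end of mm_struct):

struct mm_struct {
	/* ... all other fields ... */

	/*
	 * Must be the last member: sized at runtime based on
	 * nr_cpu_ids instead of the compile-time NR_CPUS.
	 */
	unsigned long cpu_bitmap[];
};

static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
{
	return (struct cpumask *)&mm->cpu_bitmap;
}

/* At boot, the mm_struct slab cache is sized for the real CPU count: */
mm_size = sizeof(struct mm_struct) + cpumask_size();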



* [PATCH v5 0/7] x86,tlb,mm: make lazy TLB mode even lazier
@ 2018-07-10 14:28 Rik van Riel
  2018-07-10 14:28 ` [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
  0 siblings, 1 reply; 61+ messages in thread
From: Rik van Riel @ 2018-07-10 14:28 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, luto, dave.hansen, mingo, kernel-team, efault, tglx,
	songliubraving, hpa

Song noticed switch_mm_irqs_off taking a lot of CPU time in recent
kernels, consuming 1.9% of the CPU time on a 48 CPU system during a
netperf run. Digging into the profile shows that the atomic operations
in cpumask_clear_cpu and cpumask_set_cpu are responsible for about half
of that CPU use.

However, the CPUs running netperf are simply switching back and
forth between netperf and the idle task, which would not require any
changes to the mm_cpumask if lazy TLB mode were used.

Additionally, the init_mm cpumask ends up being the most heavily
contended one in the system, for no reason at all.

By making really lazy TLB mode work again on modern kernels, sending
a shootdown IPI only when page table pages are being unmapped, we get
back some of that performance.

v5 of the series fixes the preempt bug and string overflow compiler warnings
pointed out by Mike Galbraith.

On memcache workloads on 2 socket systems, this patch series seems
to reduce total system CPU use by 1-2%. On Song's netbench tests,
CPU use in the context switch path is roughly cut in half.

These patches also provide a small memory saving by shrinking
the size of mm_struct, especially on distro kernels compiled with
a gigantically large NR_CPUS.


* [PATCH v4 0/7] x86,tlb,mm: make lazy TLB mode even lazier
@ 2018-07-06 21:56 Rik van Riel
  2018-07-06 21:56 ` [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
  0 siblings, 1 reply; 61+ messages in thread
From: Rik van Riel @ 2018-07-06 21:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, luto, dave.hansen, mingo, kernel-team, tglx, efault,
	songliubraving, hpa

Song noticed switch_mm_irqs_off taking a lot of CPU time in recent
kernels, consuming 1.9% of the CPU time on a 48 CPU system during a
netperf run. Digging into the profile shows that the atomic operations
in cpumask_clear_cpu and cpumask_set_cpu are responsible for about half
of that CPU use.

However, the CPUs running netperf are simply switching back and
forth between netperf and the idle task, which would not require any
changes to the mm_cpumask if lazy TLB mode were used.

Additionally, the init_mm cpumask ends up being the most heavily
contended one in the system, for no reason at all.

By making really lazy TLB mode work again on modern kernels, sending
a shootdown IPI only when page table pages are being unmapped, we get
back some of that performance.

v4 of the series has a few minor cleanups; there are no functional
changes since v3.

On memcache workloads on 2 socket systems, this patch series seems
to reduce total system CPU use by 1-2%. On Song's netbench tests,
CPU use in the context switch path is roughly cut in half.

These patches also provide a small memory saving by shrinking
the size of mm_struct, especially on distro kernels compiled with
a gigantically large NR_CPUS.



* [PATCH v3 0/7] x86,tlb,mm: make lazy TLB mode even lazier
@ 2018-06-29 14:29 Rik van Riel
  2018-06-29 14:29 ` [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
  0 siblings, 1 reply; 61+ messages in thread
From: Rik van Riel @ 2018-06-29 14:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: x86, luto, dave.hansen, mingo, kernel-team, tglx, efault,
	songliubraving, hpa

Song noticed switch_mm_irqs_off taking a lot of CPU time in recent
kernels, consuming 1.9% of the CPU time on a 48 CPU system during a
netperf run. Digging into the profile shows that the atomic operations
in cpumask_clear_cpu and cpumask_set_cpu are responsible for about half
of that CPU use.

However, the CPUs running netperf are simply switching back and
forth between netperf and the idle task, which would not require any
changes to the mm_cpumask if lazy TLB mode were used.

Additionally, the init_mm cpumask ends up being the most heavily
contended one in the system, for no reason at all.

By making really lazy TLB mode work again on modern kernels, sending
a shootdown IPI only when page table pages are being unmapped, we get
back some of that performance.

v3 of the series addresses Andy Lutomirski's latest comments, and
seems to work well.

On memcache workloads on 2 socket systems, this patch series seems
to reduce total system CPU use by 1-2%. On Song's netbench tests,
CPU use in the context switch path is roughly cut in half.

These patches also provide a small memory saving by shrinking
the size of mm_struct, especially on distro kernels compiled with
a gigantically large NR_CPUS.


* [PATCH 0/7] x86,tlb,mm: make lazy TLB mode even lazier
@ 2018-06-20 19:56 Rik van Riel
  2018-06-20 19:56 ` [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
  0 siblings, 1 reply; 61+ messages in thread
From: Rik van Riel @ 2018-06-20 19:56 UTC (permalink / raw)
  To: linux-kernel
Cc: x86, luto, mingo, tglx, dave.hansen, efault, songliubraving, kernel-team

Song noticed switch_mm_irqs_off taking a lot of CPU time in recent
kernels, consuming 1.9% of the CPU time on a 48 CPU system during a
netperf run. Digging into the profile shows that the atomic operations
in cpumask_clear_cpu and cpumask_set_cpu are responsible for about half
of that CPU use.

However, the CPUs running netperf are simply switching back and
forth between netperf and the idle task, which would not require any
changes to the mm_cpumask if lazy TLB mode were used.

Additionally, the init_mm cpumask ends up being the most heavily
contended one in the system, for no reason at all.

By making really lazy TLB mode work again on modern kernels, sending
a shootdown IPI only when page table pages are being unmapped, we get
back some of that performance.


Using a memcache-style workload on Broadwell systems, these patches
result in about a 0.5% reduction in CPU use on the system. Numbers
on Haswell are inconclusive so far.


Song's netperf performance results:

w/o patchset:

Throughput: 1.74716e+06
perf profile:
+    0.95%  swapper          [kernel.vmlinux]          [k] switch_mm_irqs_off
+    0.82%  netserver        [kernel.vmlinux]          [k] switch_mm_irqs_off

w/ patchset:

Throughput: 1.76911e+06
perf profile:
+    0.81%  swapper          [kernel.vmlinux]          [k] switch_mm_irqs_off

With these patches, netserver no longer calls switch_mm_irqs_off,
and the CPU use of enter_lazy_tlb is below the 0.05% threshold of
the statistics gathered by Song's scripts.


I am still working on a patch to also get rid of the continuous
pounding on mm->mm_count during lazy TLB entry and exit, when the same
mm_struct is being used all the time. I do not have that working yet.
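
For reference, the pounding comes from the existing scheduler
behavior (simplified from kernel/sched/core.c, not from this
series): every switch to a kernel thread grabs a reference on the
borrowed mm, and every switch away drops it again.

/* In context_switch(), when entering a kernel thread or idle: */
if (!next->mm) {
	next->active_mm = prev->active_mm;
	mmgrab(prev->active_mm);	/* atomic_inc(&mm->mm_count) */
}

/* In finish_task_switch(), when the borrowed mm is put back: */
if (mm)
	mmdrop(mm);			/* atomic_dec_and_test(&mm->mm_count) */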

Until then, these patches provide a nice performance boost, as well
as a small memory saving by shrinking the size of mm_struct.




Thread overview: 61+ messages
2018-07-16 19:03 [PATCH v6 0/7] x86,tlb,mm: make lazy TLB mode even lazier Rik van Riel
2018-07-16 19:03 ` [PATCH 1/7] mm: allocate mm_cpumask dynamically based on nr_cpu_ids Rik van Riel
2018-07-17  9:33   ` [tip:x86/mm] mm: Allocate the mm_cpumask (mm->cpu_bitmap[]) " tip-bot for Rik van Riel
2018-08-04 22:28   ` [PATCH 1/7] mm: allocate mm_cpumask " Guenter Roeck
2018-07-16 19:03 ` [PATCH 2/7] x86,tlb: leave lazy TLB mode at page table free time Rik van Riel
2018-07-17  9:34   ` [tip:x86/mm] x86/mm/tlb: Leave " tip-bot for Rik van Riel
2018-07-17 11:46     ` Peter Zijlstra
2018-07-25  1:00       ` Anders Roxell
2018-08-16  1:54   ` [PATCH 2/7] x86,tlb: leave " Andy Lutomirski
2018-08-16  5:31     ` Rik van Riel
2018-07-16 19:03 ` [PATCH 3/7] x86,mm: restructure switch_mm_irqs_off Rik van Riel
2018-07-17  9:34   ` [tip:x86/mm] x86/mm/tlb: Restructure switch_mm_irqs_off() tip-bot for Rik van Riel
2018-10-09 14:58   ` tip-bot for Rik van Riel
2018-07-16 19:03 ` [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
2018-07-17  9:35   ` [tip:x86/mm] x86/mm/tlb: Make " tip-bot for Rik van Riel
2018-07-17 11:33     ` Peter Zijlstra
2018-07-18 15:33       ` Rik van Riel
2018-07-18 16:00         ` Peter Zijlstra
     [not found]           ` <081E558D-DB34-4A18-A35C-896BC47F6EBA@surriel.com>
2018-07-18 18:23             ` Peter Zijlstra
2018-07-18 18:51               ` Rik van Riel
2018-07-19  9:13                 ` Peter Zijlstra
2018-07-17 20:04   ` [PATCH 4/7] x86,tlb: make " Andy Lutomirski
     [not found]     ` <FF977B78-140F-4787-AA57-0EA934017D85@surriel.com>
2018-07-17 21:29       ` Andy Lutomirski
2018-07-17 22:05         ` Rik van Riel
2018-07-17 22:27           ` Andy Lutomirski
2018-07-18 20:58     ` Rik van Riel
2018-07-18 23:13       ` Andy Lutomirski
     [not found]         ` <B976CC13-D014-433A-83DE-F8DF9AB4F421@surriel.com>
2018-07-19 16:45           ` Andy Lutomirski
2018-07-19 17:04             ` Andy Lutomirski
2018-07-19 17:04               ` Andy Lutomirski
2018-07-20  4:57               ` Benjamin Herrenschmidt
2018-07-20  8:30               ` Peter Zijlstra
2018-07-23 12:26                 ` Rik van Riel
2018-07-23 12:26                   ` Rik van Riel
2018-07-24 16:33               ` Will Deacon
     [not found]             ` <CF849A07-B7CE-4DE9-8246-53AC5A53A705@surriel.com>
2018-07-19 17:18               ` Andy Lutomirski
2018-07-20  8:02             ` Vitaly Kuznetsov
2018-07-20  9:49               ` Peter Zijlstra
2018-07-20 10:18                 ` Vitaly Kuznetsov
2018-07-20  9:32             ` Peter Zijlstra
2018-07-20 11:04               ` Peter Zijlstra
2018-07-16 19:03 ` [PATCH 5/7] x86,tlb: only send page table free TLB flush to lazy TLB CPUs Rik van Riel
2018-07-17  9:35   ` [tip:x86/mm] x86/mm/tlb: Only " tip-bot for Rik van Riel
2018-07-17 11:39     ` Peter Zijlstra
     [not found]       ` <1F8BDD25-864D-4105-B872-2109AA417454@surriel.com>
     [not found]         ` <24AA4367-22A1-450E-8F6A-3CBF39518384@surriel.com>
2018-07-18 16:19           ` Peter Zijlstra
2018-07-16 19:03 ` [PATCH 6/7] x86,mm: always use lazy TLB mode Rik van Riel
2018-07-17  9:36   ` [tip:x86/mm] x86/mm/tlb: Always " tip-bot for Rik van Riel
2018-10-09 14:58   ` tip-bot for Rik van Riel
2018-07-16 19:03 ` [PATCH 7/7] x86,switch_mm: skip atomic operations for init_mm Rik van Riel
2018-07-17  9:36   ` [tip:x86/mm] x86/mm/tlb: Skip atomic operations for 'init_mm' in switch_mm_irqs_off() tip-bot for Rik van Riel
  -- strict thread matches above, loose matches on Subject: below --
2018-07-10 14:28 [PATCH v5 0/7] x86,tlb,mm: make lazy TLB mode even lazier Rik van Riel
2018-07-10 14:28 ` [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
2018-07-06 21:56 [PATCH v4 0/7] x86,tlb,mm: make lazy TLB mode even lazier Rik van Riel
2018-07-06 21:56 ` [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
2018-06-29 14:29 [PATCH v3 0/7] x86,tlb,mm: make lazy TLB mode even lazier Rik van Riel
2018-06-29 14:29 ` [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
2018-06-29 17:05   ` Dave Hansen
2018-06-29 17:29     ` Rik van Riel
2018-06-20 19:56 [PATCH 0/7] x86,tlb,mm: make lazy TLB mode even lazier Rik van Riel
2018-06-20 19:56 ` [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
2018-06-22 15:04   ` Andy Lutomirski
2018-06-22 15:15     ` Rik van Riel
2018-06-22 15:34       ` Andy Lutomirski
2018-06-22 17:05   ` Dave Hansen
2018-06-22 17:16     ` Rik van Riel
