* [PATCH v6 0/7] x86,tlb,mm: make lazy TLB mode even lazier
@ 2018-07-16 19:03 Rik van Riel
  2018-07-16 19:03 ` [PATCH 1/7] mm: allocate mm_cpumask dynamically based on nr_cpu_ids Rik van Riel
                   ` (6 more replies)
  0 siblings, 7 replies; 51+ messages in thread
From: Rik van Riel @ 2018-07-16 19:03 UTC (permalink / raw)
  To: linux-kernel; +Cc: x86, luto, efault, kernel-team, mingo, dave.hansen

Song noticed switch_mm_irqs_off taking a lot of CPU time in recent
kernels, using 1.9% of a 48-CPU system during a netperf run. Digging
into the profile shows that the atomic operations in cpumask_clear_cpu
and cpumask_set_cpu are responsible for about half of that CPU use.

However, the CPUs running netperf are simply switching back and
forth between netperf and the idle task, which would not require any
changes to the mm_cpumask if lazy TLB mode were used.

Additionally, the init_mm cpumask ends up being the most heavily
contended one in the system, for no reason at all.

By making really lazy TLB mode work again on modern kernels, sending
a shootdown IPI to lazy TLB CPUs only when page table pages are being
unmapped, we get back some performance.
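
As a sketch of the idea (simplified; the real changes are spread over
patches 4-7 of this series, and the names below match the per-CPU state
used there), entering lazy TLB mode becomes nothing more than setting a
per-CPU flag, with the loaded mm and the mm_cpumask bit left alone:

    /* Sketch only, not the literal patch. */
    void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
    {
            if (this_cpu_read(cpu_tlbstate.loaded_mm) == &init_mm)
                    return;

            /* Stay in mm_cpumask(mm): no atomic clear/set, no TLB flush. */
            this_cpu_write(cpu_tlbstate.is_lazy, true);
    }

The TLB flush side then only needs to IPI lazy CPUs when page table
pages are actually being freed; in all other cases a lazy CPU catches
up by checking tlb_gen at its next context switch.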

v6 addresses the comment and Signed-off-by issues pointed out by Ingo.

On memcache workloads on 2-socket systems, this patch series seems
to reduce total system CPU use by 1-2%. On Song's netbench tests,
CPU use in the context switch path is roughly cut in half.

These patches also provide a small memory saving by shrinking
the size of mm_struct, especially on distro kernels compiled with
a gigantically large NR_CPUS.
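
The saving comes from patch 1/7, which turns the mm_cpumask into a
flexible array member (mm->cpu_bitmap[]) at the end of mm_struct, sized
at boot time from nr_cpu_ids rather than at compile time from NR_CPUS.
Roughly (a sketch of the shape of the change, not the literal diff):

    /* mm_cpumask() points at the trailing, dynamically sized bitmap. */
    static inline struct cpumask *mm_cpumask(struct mm_struct *mm)
    {
            return (struct cpumask *)&mm->cpu_bitmap;
    }

    /* The mm_struct slab object size is computed once at boot: */
    unsigned int mm_size = sizeof(struct mm_struct) + cpumask_size();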



* [PATCH 7/7] x86/mm/tlb: Make lazy TLB mode lazier
@ 2018-09-26  3:58 Rik van Riel
  2018-10-09 15:01 ` [tip:x86/mm] " tip-bot for Rik van Riel
  0 siblings, 1 reply; 51+ messages in thread
From: Rik van Riel @ 2018-09-26  3:58 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, kernel-team, will.deacon, songliubraving, mingo, luto,
	hpa, npiggin, Rik van Riel

Lazy TLB mode can result in an idle CPU being woken up by a TLB flush,
when all it really needs to do is reload %CR3 at the next context switch,
assuming no page table pages got freed.

Memory ordering is used to prevent race conditions between switch_mm_irqs_off,
which checks whether .tlb_gen changed, and the TLB invalidation code, which
increments .tlb_gen whenever page table entries get invalidated.

The atomic increment in inc_mm_tlb_gen is its own barrier; the context
switch code adds an explicit barrier between reading tlbstate.is_lazy and
reading next->context.tlb_gen.

CPUs in lazy TLB mode remain part of mm_cpumask(mm), both because
that allows TLB flush IPIs to be sent at page table freeing time, and
because leaving the bit alone avoids the cache line bouncing on
mm_cpumask(mm) that was responsible for about half the CPU use in
switch_mm_irqs_off().
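
In code terms the two sides pair up roughly like this (a sketch; the
context switch side matches the diff that follows, and inc_mm_tlb_gen()
is the existing helper paraphrased here):

    /* Invalidation side: bump the mm's TLB generation counter. */
    static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
    {
            /* The atomic RMW with return value is a full barrier on x86. */
            return atomic64_inc_return(&mm->context.tlb_gen);
    }

    /* Context switch side, for a CPU leaving lazy TLB mode: */
    smp_mb();       /* pairs with the barrier in inc_mm_tlb_gen() */
    next_tlb_gen = atomic64_read(&next->context.tlb_gen);
    if (this_cpu_read(cpu_tlbstate.ctxs[prev_asid].tlb_gen) == next_tlb_gen)
            return;         /* TLB still up to date; keep using it */

If the generations differ, the TLB went stale while the CPU was lazy,
and the code falls through to the flush path.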

Tested-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
---
 arch/x86/mm/tlb.c | 67 ++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 58 insertions(+), 9 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index b228d2a6b5fa..707d757e70ec 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -185,6 +185,7 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 {
 	struct mm_struct *real_prev = this_cpu_read(cpu_tlbstate.loaded_mm);
 	u16 prev_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+	bool was_lazy = this_cpu_read(cpu_tlbstate.is_lazy);
 	unsigned cpu = smp_processor_id();
 	u64 next_tlb_gen;
 	bool need_flush;
@@ -242,17 +243,40 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 			   next->context.ctx_id);
 
 		/*
-		 * We don't currently support having a real mm loaded without
-		 * our cpu set in mm_cpumask().  We have all the bookkeeping
-		 * in place to figure out whether we would need to flush
-		 * if our cpu were cleared in mm_cpumask(), but we don't
-		 * currently use it.
+		 * Even in lazy TLB mode, the CPU should stay set in the
+		 * mm_cpumask. The TLB shootdown code can figure out from
+		 * cpu_tlbstate.is_lazy whether or not to send an IPI.
 		 */
 		if (WARN_ON_ONCE(real_prev != &init_mm &&
 				 !cpumask_test_cpu(cpu, mm_cpumask(next))))
 			cpumask_set_cpu(cpu, mm_cpumask(next));
 
-		return;
+		/*
+		 * If the CPU is not in lazy TLB mode, we are just switching
+		 * from one thread in a process to another thread in the same
+		 * process. No TLB flush required.
+		 */
+		if (!was_lazy)
+			return;
+
+		/*
+		 * Read the tlb_gen to check whether a flush is needed.
+		 * If the TLB is up to date, just use it.
+		 * The barrier synchronizes with the tlb_gen increment in
+		 * the TLB shootdown code.
+		 */
+		smp_mb();
+		next_tlb_gen = atomic64_read(&next->context.tlb_gen);
+		if (this_cpu_read(cpu_tlbstate.ctxs[prev_asid].tlb_gen) ==
+				next_tlb_gen)
+			return;
+
+		/*
+		 * TLB contents went out of date while we were in lazy
+		 * mode. Fall through to the TLB switching code below.
+		 */
+		new_asid = prev_asid;
+		need_flush = true;
 	} else {
 		u16 new_asid;
 		bool need_flush;
@@ -348,8 +372,10 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 	this_cpu_write(cpu_tlbstate.loaded_mm, next);
 	this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
 
-	load_mm_cr4(next);
-	switch_ldt(real_prev, next);
+	if (next != real_prev) {
+		load_mm_cr4(next);
+		switch_ldt(real_prev, next);
+	}
 }
 
 /*
@@ -457,6 +483,9 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 		 * paging-structure cache to avoid speculatively reading
 		 * garbage into our TLB.  Since switching to init_mm is barely
 		 * slower than a minimal flush, just switch to init_mm.
+		 *
+		 * This should be rare, with native_flush_tlb_others skipping
+		 * IPIs to lazy TLB mode CPUs.
 		 */
 		switch_mm_irqs_off(NULL, &init_mm, NULL);
 		return;
@@ -559,6 +588,11 @@ static void flush_tlb_func_remote(void *info)
 	flush_tlb_func_common(f, false, TLB_REMOTE_SHOOTDOWN);
 }
 
+static bool tlb_is_not_lazy(int cpu, void *data)
+{
+	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
+}
+
 void native_flush_tlb_others(const struct cpumask *cpumask,
 			     const struct flush_tlb_info *info)
 {
@@ -594,8 +628,23 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 					       (void *)info, 1);
 		return;
 	}
-	smp_call_function_many(cpumask, flush_tlb_func_remote,
+
+	/*
+	 * If no page tables were freed, we can skip sending IPIs to
+	 * CPUs in lazy TLB mode. They will flush the TLB themselves
+	 * at the next context switch.
+	 *
+	 * However, if page tables are getting freed, we need to send the
+	 * IPI everywhere, to prevent CPUs in lazy TLB mode from tripping
+	 * up on the new contents of what used to be page tables, while
+	 * doing a speculative memory access.
+	 */
+	if (info->freed_tables)
+		smp_call_function_many(cpumask, flush_tlb_func_remote,
 			       (void *)info, 1);
+	else
+		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func_remote,
+				(void *)info, 1, GFP_ATOMIC, cpumask);
 }
 
 /*
-- 
2.17.1



Thread overview: 51+ messages
2018-07-16 19:03 [PATCH v6 0/7] x86,tlb,mm: make lazy TLB mode even lazier Rik van Riel
2018-07-16 19:03 ` [PATCH 1/7] mm: allocate mm_cpumask dynamically based on nr_cpu_ids Rik van Riel
2018-07-17  9:33   ` [tip:x86/mm] mm: Allocate the mm_cpumask (mm->cpu_bitmap[]) " tip-bot for Rik van Riel
2018-08-04 22:28   ` [PATCH 1/7] mm: allocate mm_cpumask " Guenter Roeck
2018-07-16 19:03 ` [PATCH 2/7] x86,tlb: leave lazy TLB mode at page table free time Rik van Riel
2018-07-17  9:34   ` [tip:x86/mm] x86/mm/tlb: Leave " tip-bot for Rik van Riel
2018-07-17 11:46     ` Peter Zijlstra
2018-07-25  1:00       ` Anders Roxell
2018-08-16  1:54   ` [PATCH 2/7] x86,tlb: leave " Andy Lutomirski
2018-08-16  5:31     ` Rik van Riel
2018-07-16 19:03 ` [PATCH 3/7] x86,mm: restructure switch_mm_irqs_off Rik van Riel
2018-07-17  9:34   ` [tip:x86/mm] x86/mm/tlb: Restructure switch_mm_irqs_off() tip-bot for Rik van Riel
2018-10-09 14:58   ` tip-bot for Rik van Riel
2018-07-16 19:03 ` [PATCH 4/7] x86,tlb: make lazy TLB mode lazier Rik van Riel
2018-07-17  9:35   ` [tip:x86/mm] x86/mm/tlb: Make " tip-bot for Rik van Riel
2018-07-17 11:33     ` Peter Zijlstra
2018-07-18 15:33       ` Rik van Riel
2018-07-18 16:00         ` Peter Zijlstra
     [not found]           ` <081E558D-DB34-4A18-A35C-896BC47F6EBA@surriel.com>
2018-07-18 18:23             ` Peter Zijlstra
2018-07-18 18:51               ` Rik van Riel
2018-07-19  9:13                 ` Peter Zijlstra
2018-07-17 20:04   ` [PATCH 4/7] x86,tlb: make " Andy Lutomirski
     [not found]     ` <FF977B78-140F-4787-AA57-0EA934017D85@surriel.com>
2018-07-17 21:29       ` Andy Lutomirski
2018-07-17 22:05         ` Rik van Riel
2018-07-17 22:27           ` Andy Lutomirski
2018-07-18 20:58     ` Rik van Riel
2018-07-18 23:13       ` Andy Lutomirski
     [not found]         ` <B976CC13-D014-433A-83DE-F8DF9AB4F421@surriel.com>
2018-07-19 16:45           ` Andy Lutomirski
2018-07-19 17:04             ` Andy Lutomirski
2018-07-19 17:04               ` Andy Lutomirski
2018-07-20  4:57               ` Benjamin Herrenschmidt
2018-07-20  8:30               ` Peter Zijlstra
2018-07-23 12:26                 ` Rik van Riel
2018-07-23 12:26                   ` Rik van Riel
2018-07-24 16:33               ` Will Deacon
     [not found]             ` <CF849A07-B7CE-4DE9-8246-53AC5A53A705@surriel.com>
2018-07-19 17:18               ` Andy Lutomirski
2018-07-20  8:02             ` Vitaly Kuznetsov
2018-07-20  9:49               ` Peter Zijlstra
2018-07-20 10:18                 ` Vitaly Kuznetsov
2018-07-20  9:32             ` Peter Zijlstra
2018-07-20 11:04               ` Peter Zijlstra
2018-07-16 19:03 ` [PATCH 5/7] x86,tlb: only send page table free TLB flush to lazy TLB CPUs Rik van Riel
2018-07-17  9:35   ` [tip:x86/mm] x86/mm/tlb: Only " tip-bot for Rik van Riel
2018-07-17 11:39     ` Peter Zijlstra
     [not found]       ` <1F8BDD25-864D-4105-B872-2109AA417454@surriel.com>
     [not found]         ` <24AA4367-22A1-450E-8F6A-3CBF39518384@surriel.com>
2018-07-18 16:19           ` Peter Zijlstra
2018-07-16 19:03 ` [PATCH 6/7] x86,mm: always use lazy TLB mode Rik van Riel
2018-07-17  9:36   ` [tip:x86/mm] x86/mm/tlb: Always " tip-bot for Rik van Riel
2018-10-09 14:58   ` tip-bot for Rik van Riel
2018-07-16 19:03 ` [PATCH 7/7] x86,switch_mm: skip atomic operations for init_mm Rik van Riel
2018-07-17  9:36   ` [tip:x86/mm] x86/mm/tlb: Skip atomic operations for 'init_mm' in switch_mm_irqs_off() tip-bot for Rik van Riel
2018-09-26  3:58 [PATCH 7/7] x86/mm/tlb: Make lazy TLB mode lazier Rik van Riel
2018-10-09 15:01 ` [tip:x86/mm] " tip-bot for Rik van Riel
