From: Nadav Amit <nadav.amit@gmail.com>
To: linux-kernel@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>,
	Andy Lutomirski <luto@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Nadav Amit <namit@vmware.com>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Rik van Riel <riel@surriel.com>,
	Josh Poimboeuf <jpoimboe@redhat.com>
Subject: [PATCH v6 3/9] x86/mm/tlb: Open-code on_each_cpu_cond_mask() for tlb_is_not_lazy()
Date: Sat, 20 Feb 2021 15:17:06 -0800
Message-ID: <20210220231712.2475218-4-namit@vmware.com>
In-Reply-To: <20210220231712.2475218-1-namit@vmware.com>

From: Nadav Amit <namit@vmware.com>

Open-code on_each_cpu_cond_mask() in native_flush_tlb_others() to
optimize the code. Open-coding eliminates the need for the indirect
branch that is used to call is_lazy(), and on CPUs that are vulnerable
to Spectre v2, it eliminates the retpoline. In addition, it allows the
use of a preallocated cpumask to compute the CPUs that should be
flushed.
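
As a rough illustration (a simplified sketch, not actual kernel code;
the helper names here are made up), the difference is whether the lazy
check is reached through a function pointer or as a direct call:

	/*
	 * Simplified sketch, not actual kernel code; helper names are
	 * made up.
	 *
	 * Generic helper: the condition arrives as a function pointer,
	 * so each per-CPU evaluation is an indirect branch (a retpoline
	 * thunk when Spectre v2 mitigations are enabled).
	 */
	static bool lazy_check_indirect(bool (*cond)(int cpu, void *info),
					void *info, int cpu)
	{
		return cond(cpu, info);		/* indirect call */
	}

	/*
	 * Open-coded: the callee is known at compile time, so the check
	 * is a direct call and can be inlined.
	 */
	static bool lazy_check_direct(int cpu)
	{
		return tlb_is_not_lazy(cpu);	/* direct call, no retpoline */
	}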

This will later spare us from having to adapt on_each_cpu_cond_mask()
to support local and remote functions.

Note that calling tlb_is_not_lazy() for every CPU that needs to be
flushed, as done in native_flush_tlb_others(), might look ugly, but it
is equivalent to what on_each_cpu_cond_mask() currently does.
Actually, native_flush_tlb_others() does it more efficiently, since it
avoids using an indirect branch to evaluate the condition.
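
To make the equivalence concrete, here is a rough conceptual model (not
the actual smp.c implementation; the function name and the dst parameter
are made up) of what the conditional cross-call boils down to for the
remote CPUs. The open-coded hunk below performs the same walk, only with
a direct call and the preallocated per-CPU mask:

	/*
	 * Conceptual model only, not the actual smp.c implementation:
	 * evaluate the condition once per CPU in the mask, then send
	 * IPIs to the CPUs that passed.
	 */
	static void cond_flush_model(bool (*cond_func)(int cpu, void *info),
				     smp_call_func_t func, void *info,
				     bool wait,
				     const struct cpumask *cpumask,
				     struct cpumask *dst)
	{
		int cpu;

		cpumask_clear(dst);

		for_each_cpu(cpu, cpumask) {
			if (cond_func(cpu, info))
				__cpumask_set_cpu(cpu, dst);
		}

		smp_call_function_many(dst, func, info, wait);
	}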

Reviewed-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
 arch/x86/mm/tlb.c | 37 ++++++++++++++++++++++++++++++++-----
 1 file changed, 32 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index bf12371db6c4..07b6701a540a 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -788,11 +788,13 @@ static void flush_tlb_func(void *info)
 			nr_invalidate);
 }
 
-static bool tlb_is_not_lazy(int cpu, void *data)
+static bool tlb_is_not_lazy(int cpu)
 {
 	return !per_cpu(cpu_tlbstate.is_lazy, cpu);
 }
 
+static DEFINE_PER_CPU(cpumask_t, flush_tlb_mask);
+
 STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
 					 const struct flush_tlb_info *info)
 {
@@ -813,12 +815,37 @@ STATIC_NOPV void native_flush_tlb_others(const struct cpumask *cpumask,
 	 * up on the new contents of what used to be page tables, while
 	 * doing a speculative memory access.
 	 */
-	if (info->freed_tables)
+	if (info->freed_tables) {
 		smp_call_function_many(cpumask, flush_tlb_func,
 			       (void *)info, 1);
-	else
-		on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
-				(void *)info, 1, cpumask);
+	} else {
+		/*
+		 * Although we could have used on_each_cpu_cond_mask(),
+		 * open-coding it has performance advantages, as it eliminates
+		 * the need for indirect calls or retpolines. In addition, it
+		 * allows to use a designated cpumask for evaluating the
+		 * condition, instead of allocating one.
+		 *
+		 * This code works under the assumption that there are no nested
+		 * TLB flushes, an assumption that is already made in
+		 * flush_tlb_mm_range().
+		 *
+		 * cond_cpumask is logically a stack-local variable, but it is
+		 * more efficient to have it off the stack and not to allocate
+		 * it on demand. Preemption is disabled and this code is
+		 * non-reentrant.
+		 */
+		struct cpumask *cond_cpumask = this_cpu_ptr(&flush_tlb_mask);
+		int cpu;
+
+		cpumask_clear(cond_cpumask);
+
+		for_each_cpu(cpu, cpumask) {
+			if (tlb_is_not_lazy(cpu))
+				__cpumask_set_cpu(cpu, cond_cpumask);
+		}
+		smp_call_function_many(cond_cpumask, flush_tlb_func, (void *)info, 1);
+	}
 }
 
 void flush_tlb_others(const struct cpumask *cpumask,
-- 
2.25.1

