linux-kernel.vger.kernel.org archive mirror
* [PATCH] x86/mm/tlb: Do partial TLB flush when possible
@ 2019-05-29  7:56 Zhenzhong Duan
  2019-05-30 14:15 ` Andy Lutomirski
  0 siblings, 1 reply; 3+ messages in thread
From: Zhenzhong Duan @ 2019-05-29  7:56 UTC (permalink / raw)
  To: linux-kernel
  Cc: srinivas.eeda, Zhenzhong Duan, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	x86

This is a small optimization for stale TLB flushes: if a newer TLB
flush is coming, let it choose whether to do a partial or a full flush;
otherwise the stale flush takes over and does a full flush.

Add unlikely() to the info->freed_tables check, as freeing page tables
is relatively rare.
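
In other words, the new check is roughly equivalent to the following
sketch (defer_stale_flush() is a made-up name; the generation counters
are the ones used in flush_tlb_func_common()):

	static bool defer_stale_flush(u64 new_tlb_gen, u64 local_tlb_gen,
				      u64 mm_tlb_gen)
	{
		/* The request is stale: this CPU already reached the requested gen. */
		bool stale = new_tlb_gen <= local_tlb_gen;

		/*
		 * Exactly one newer generation is outstanding, so another
		 * flush request is expected; let that one choose between a
		 * partial and a full flush.
		 */
		bool newer_flush_pending = (local_tlb_gen + 1 == mm_tlb_gen);

		return stale && newer_flush_pending;
	}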

Signed-off-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Srinivas Eeda <srinivas.eeda@oracle.com>
Cc: x86@kernel.org

---
 arch/x86/mm/tlb.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 91f6db9..63a8125 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -569,6 +569,17 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 		return;
 	}
 
+	if (unlikely(f->new_tlb_gen <= local_tlb_gen &&
+	    local_tlb_gen + 1 == mm_tlb_gen)) {
+		/*
+	 * For a stale TLB flush request: if a newer TLB flush is
+	 * coming, leave the work to the new IPI, which knows whether
+	 * a partial or a full flush is needed; otherwise do the full
+	 * flush here.
+		 */
+		trace_tlb_flush(reason, 0);
+		return;
+	}
 	WARN_ON_ONCE(local_tlb_gen > mm_tlb_gen);
 	WARN_ON_ONCE(f->new_tlb_gen > mm_tlb_gen);
 
@@ -577,7 +588,8 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 	 * This does not strictly imply that we need to flush (it's
 	 * possible that f->new_tlb_gen <= local_tlb_gen), but we're
 	 * going to need to flush in the very near future, so we might
-	 * as well get it over with.
+	 * as well get it over with when we know there will be more
+	 * than one outstanding TLB flush request.
 	 *
 	 * The only question is whether to do a full or partial flush.
 	 *
@@ -609,9 +621,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
 	 *    local_tlb_gen all the way to mm_tlb_gen and we can probably
 	 *    avoid another flush in the very near future.
 	 */
-	if (f->end != TLB_FLUSH_ALL &&
-	    f->new_tlb_gen == local_tlb_gen + 1 &&
-	    f->new_tlb_gen == mm_tlb_gen) {
+	if (f->end != TLB_FLUSH_ALL && local_tlb_gen + 1 == mm_tlb_gen) {
 		/* Partial flush */
 		unsigned long nr_invalidate = (f->end - f->start) >> f->stride_shift;
 		unsigned long addr = f->start;
@@ -703,7 +713,7 @@ void native_flush_tlb_others(const struct cpumask *cpumask,
 	 * up on the new contents of what used to be page tables, while
 	 * doing a speculative memory access.
 	 */
-	if (info->freed_tables)
+	if (unlikely(info->freed_tables))
 		smp_call_function_many(cpumask, flush_tlb_func_remote,
 			       (void *)info, 1);
 	else
-- 
1.8.3.1



* Re: [PATCH] x86/mm/tlb: Do partial TLB flush when possible
  2019-05-29  7:56 [PATCH] x86/mm/tlb: Do partial TLB flush when possible Zhenzhong Duan
@ 2019-05-30 14:15 ` Andy Lutomirski
  2019-05-31  2:51   ` Zhenzhong Duan
  0 siblings, 1 reply; 3+ messages in thread
From: Andy Lutomirski @ 2019-05-30 14:15 UTC (permalink / raw)
  To: Zhenzhong Duan
  Cc: LKML, srinivas.eeda, Dave Hansen, Andy Lutomirski,
	Peter Zijlstra, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	X86 ML

On Thu, May 30, 2019 at 12:56 AM Zhenzhong Duan
<zhenzhong.duan@oracle.com> wrote:
>
> This is a small optimization for stale TLB flushes: if a newer TLB
> flush is coming, let it choose whether to do a partial or a full flush;
> otherwise the stale flush takes over and does a full flush.

I think this is invalid because:

>
> +       if (unlikely(f->new_tlb_gen <= local_tlb_gen &&
> +           local_tlb_gen + 1 == mm_tlb_gen)) {
> +               /*
> +                * For a stale TLB flush request: if a newer TLB flush is
> +                * coming, leave the work to the new IPI, which knows whether
> +                * a partial or a full flush is needed; otherwise do the full
> +                * flush here.
> +                */
> +               trace_tlb_flush(reason, 0);
> +               return;

We do indeed know that the TLB will get flushed eventually, but we're
actually providing a stronger guarantee that the TLB will be
adequately flushed by the time we return.  Otherwise, after
flush_tlb_mm_range(), there will be a window in which the TLB isn't
flushed yet.
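
As a hypothetical illustration (the caller and free_and_reuse_page()
are made up, not taken from the kernel):

	static void zap_one_page(struct mm_struct *mm, unsigned long addr,
				 pte_t *ptep, struct page *page)
	{
		/* Tear down the mapping ... */
		pte_clear(mm, addr, ptep);

		/* ... then flush every CPU that might cache the old translation. */
		flush_tlb_mm_range(mm, addr, addr + PAGE_SIZE, PAGE_SHIFT, false);

		/*
		 * The page may be handed out again right here.  If a remote
		 * CPU's flush handler returned early and deferred the flush
		 * to a future IPI, that CPU could still reach the old page
		 * through a stale TLB entry.
		 */
		free_and_reuse_page(page);	/* hypothetical placeholder */
	}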


* Re: [PATCH] x86/mm/tlb: Do partial TLB flush when possible
  2019-05-30 14:15 ` Andy Lutomirski
@ 2019-05-31  2:51   ` Zhenzhong Duan
  0 siblings, 0 replies; 3+ messages in thread
From: Zhenzhong Duan @ 2019-05-31  2:51 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: LKML, srinivas.eeda, Dave Hansen, Peter Zijlstra,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, X86 ML


On 2019/5/30 22:15, Andy Lutomirski wrote:
> On Thu, May 30, 2019 at 12:56 AM Zhenzhong Duan
> <zhenzhong.duan@oracle.com> wrote:
>> This is a small optimization for stale TLB flushes: if a newer TLB
>> flush is coming, let it choose whether to do a partial or a full flush;
>> otherwise the stale flush takes over and does a full flush.
> I think this is invalid because:
>
>> +       if (unlikely(f->new_tlb_gen <= local_tlb_gen &&
>> +           local_tlb_gen + 1 == mm_tlb_gen)) {
>> +               /*
>> +                * For a stale TLB flush request: if a newer TLB flush is
>> +                * coming, leave the work to the new IPI, which knows whether
>> +                * a partial or a full flush is needed; otherwise do the full
>> +                * flush here.
>> +                */
>> +               trace_tlb_flush(reason, 0);
>> +               return;
> We do indeed know that the TLB will get flushed eventually, but we're
> actually providing a stronger guarantee that the TLB will be
> adequately flushed by the time we return.  Otherwise, after
> flush_tlb_mm_range(), there will be a window in which the TLB isn't
> flushed yet.

You are right. I didn't notice this point, sorry for the noise.

Zhenzhong



