From: Borislav Petkov <bp@alien8.de>
To: Andy Lutomirski <luto@kernel.org>
Cc: X86 ML <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@suse.de>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Nadav Amit <nadav.amit@gmail.com>, Rik van Riel <riel@redhat.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Arjan van de Ven <arjan@linux.intel.com>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v3 05/11] x86/mm: Track the TLB's tlb_gen and update the flushing algorithm
Date: Thu, 22 Jun 2017 19:22:20 +0200	[thread overview]
Message-ID: <20170622172220.wf3egiwx2kqbxbi2@pd.tnic> (raw)
In-Reply-To: <CALCETrUPqG-YcneqSqUYzWTJbm2Ae0Nj3K0MuS0cKYeD0yWhuw@mail.gmail.com>

On Thu, Jun 22, 2017 at 08:55:36AM -0700, Andy Lutomirski wrote:
> > Ah, simple: we control the flushing with info.new_tlb_gen and
> > mm->context.tlb_gen. I.e., this check:
> >
> >
> >         if (f->end != TLB_FLUSH_ALL &&
> >             f->new_tlb_gen == local_tlb_gen + 1 &&
> >             f->new_tlb_gen == mm_tlb_gen) {
> >
> > why can't we write:
> >
> >         if (f->end != TLB_FLUSH_ALL &&
> >             mm_tlb_gen == local_tlb_gen + 1)
> >
> > ?
> 
> Ah, I thought you were asking about why I needed mm_tlb_gen ==
> local_tlb_gen + 1.  This is just an optimization, or at least I hope
> it is.  The idea is that, if we know that another flush is coming, it
> seems likely that it would be faster to do a full flush and increase
> local_tlb_gen all the way to mm_tlb_gen rather than doing a partial
> flush, increasing local_tlb_gen to something less than mm_tlb_gen, and
> needing to flush again very soon.

Hence the check whether f->new_tlb_gen is local_tlb_gen + 1.

Btw, do you see how confusing this check is: you have new_tlb_gen,
which gets passed in with the flush IPI; local_tlb_gen, which is the
CPU's own generation count; and then there's also mm_tlb_gen, which
we've written into new_tlb_gen with:

	info.new_tlb_gen = bump_mm_tlb_gen(mm);

which incremented mm_tlb_gen too.
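
Here is how I keep the three apart - a simplified, standalone model of
my reading of the patch, not the kernel code itself; the generation
names mirror the patch, the structs and helpers are made up purely for
illustration:

	/* One generation counter per mm, one snapshot per flush request,
	 * and one private copy per CPU.
	 */
	struct mm_model {
		unsigned long tlb_gen;		/* mm->context.tlb_gen */
	};

	struct flush_info {
		unsigned long new_tlb_gen;	/* snapshot taken by the sender */
		unsigned long start, end;	/* end == ~0UL: flush everything */
	};

	struct cpu_state {
		unsigned long local_tlb_gen;	/* what this CPU has flushed up to */
	};

	/* Sender side: bump the mm-wide generation and snapshot it into
	 * the request. The real code does the increment atomically.
	 */
	static void queue_flush(struct mm_model *mm, struct flush_info *f,
				unsigned long start, unsigned long end)
	{
		f->new_tlb_gen = ++mm->tlb_gen;
		f->start = start;
		f->end = end;
	}

So f->new_tlb_gen is the value mm_tlb_gen had when the request was
queued, while mm_tlb_gen itself may have moved on again by the time the
IPI gets handled on the remote CPU.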

> Hmm.  I'd be nervous that there are more subtle races if we do this.
> For example, suppose that a partial flush increments tlb_gen from 1 to
> 2 and a full flush increments tlb_gen from 2 to 3.  Meanwhile, the CPU
> is busy switching back and forth between mms, so the partial flush
> sees the cpu set in mm_cpumask but the full flush doesn't see the cpu
> set in mm_cpumask.

Lemme see if I understand this correctly: you mean, the full flush will
exit early due to the

	if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(loaded_mm))) {

test?

> The flush IPI hits after a switch_mm_irqs_off() call notices the
> change from 1 to 2. switch_mm_irqs_off() will do a full flush and
> increment the local tlb_gen to 2, and the IPI handler for the partial
> flush will see local_tlb_gen == mm_tlb_gen - 1 (because local_tlb_gen
> == 2 and mm_tlb_gen == 3) and do a partial flush.

Why, the 2->3 flush has f->end == TLB_FLUSH_ALL.

That's why you have f->end in addition to the tlb_gen.

What we end up doing in this case is promoting the partial flush to a
full one, so a partial and a full flush which happen close together get
converted into two full flushes.
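
Roughly the receiver-side decision I have in mind - again only a sketch
of how I read the fall-through in flush_tlb_func_common(), reusing the
toy structs from above; handle_flush(), flush_range() and
flush_everything() are made-up stand-ins for the IPI handler, an INVLPG
loop and a full TLB flush:

	#define TLB_FLUSH_ALL	(~0UL)

	static void flush_range(unsigned long start, unsigned long end) { }
	static void flush_everything(void) { }

	static void handle_flush(struct cpu_state *cpu, struct flush_info *f,
				 unsigned long mm_tlb_gen)
	{
		unsigned long local_tlb_gen = cpu->local_tlb_gen;

		if (local_tlb_gen == mm_tlb_gen)
			return;		/* already up to date, nothing to do */

		if (f->end != TLB_FLUSH_ALL &&
		    f->new_tlb_gen == local_tlb_gen + 1 &&
		    f->new_tlb_gen == mm_tlb_gen) {
			/* the partial flush is enough to catch up */
			flush_range(f->start, f->end);
			cpu->local_tlb_gen = f->new_tlb_gen;
		} else {
			/* promote to a full flush and catch up all the way */
			flush_everything();
			cpu->local_tlb_gen = mm_tlb_gen;
		}
	}

In your scenario the partial request carries f->new_tlb_gen == 2 while
local_tlb_gen is already 2 and mm_tlb_gen is 3, so the first branch is
not taken and the partial request gets handled as a full flush.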

> The problem here is that it's not obvious to me that this actually
> ends up flushing everything that's needed. Maybe all the memory
> ordering gets this right, but I can imagine scenarios in which
> switch_mm_irqs_off() does its flush early enough that the TLB picks up
> an entry that was supposed to get zapped by the full flush.

See above.

And I don't think that having two full flushes back-to-back is going to
cost a lot as the second one won't flush a whole lot.

> IOW it *might* be valid, but I think it would need very careful review
> and documentation.

Always.

Btw, I get the sense this TLB-flush-avoidance scheme is becoming pretty
complex for diminishing returns.

  [ Or maybe I'm not seeing them - I'm always open to corrections. ]

Especially if the intermediate levels of the page table walk are cached
by the hardware, so that reestablishing a TLB entry seldom means a full
walk.

You should do a full-fledged benchmark to see whether this whole
complexity is even worth it, methinks.
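
FWIW, a crude userspace way to hammer that path - a toy stressor, not a
proper benchmark, and only my guess at a workload which exercises
remote flushes: a few threads keep a private anonymous mapping hot in
their TLBs while the main thread repeatedly zaps it with MADV_DONTNEED,
which should end up doing remote TLB flushes:

	/* tlbstress.c - toy TLB shootdown stressor.
	 * Build: gcc -O2 -pthread tlbstress.c -o tlbstress
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <time.h>

	#define MAP_SZ		(64UL * 4096)	/* 64 pages */
	#define ITERATIONS	100000

	static _Atomic int stop;

	static void *toucher(void *arg)
	{
		volatile char *buf = arg;

		while (!atomic_load(&stop))
			for (unsigned long i = 0; i < MAP_SZ; i += 4096)
				(void)buf[i];	/* keep the TLB entries hot */
		return NULL;
	}

	int main(void)
	{
		char *buf = mmap(NULL, MAP_SZ, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		pthread_t th[3];
		struct timespec t0, t1;

		if (buf == MAP_FAILED)
			return 1;

		memset(buf, 1, MAP_SZ);
		for (int i = 0; i < 3; i++)
			pthread_create(&th[i], NULL, toucher, buf);

		clock_gettime(CLOCK_MONOTONIC, &t0);
		for (int i = 0; i < ITERATIONS; i++)
			madvise(buf, MAP_SZ, MADV_DONTNEED);	/* zap + flush */
		clock_gettime(CLOCK_MONOTONIC, &t1);

		atomic_store(&stop, 1);
		for (int i = 0; i < 3; i++)
			pthread_join(th[i], NULL);

		printf("%d zaps in %.3f s\n", ITERATIONS,
		       (t1.tv_sec - t0.tv_sec) +
		       (t1.tv_nsec - t0.tv_nsec) / 1e9);
		return 0;
	}

Comparing runs of something like that with and without the series,
ideally with the TLB flush tracepoints or perf counters alongside,
would at least give a first-order answer.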

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.

Thread overview: 78+ messages
2017-06-21  5:22 [PATCH v3 00/11] PCID and improved laziness Andy Lutomirski
2017-06-21  5:22 ` [PATCH v3 01/11] x86/mm: Don't reenter flush_tlb_func_common() Andy Lutomirski
2017-06-21  8:01   ` Thomas Gleixner
2017-06-21  8:49   ` Borislav Petkov
2017-06-21 15:15     ` Andy Lutomirski
2017-06-21 23:26   ` Nadav Amit
2017-06-22  2:27     ` Andy Lutomirski
2017-06-22  7:32       ` Ingo Molnar
2017-06-21  5:22 ` [PATCH v3 02/11] x86/ldt: Simplify LDT switching logic Andy Lutomirski
2017-06-21  8:03   ` Thomas Gleixner
2017-06-21  9:40   ` Borislav Petkov
2017-06-22 11:08   ` [tip:x86/mm] x86/ldt: Simplify the " tip-bot for Andy Lutomirski
2017-06-21  5:22 ` [PATCH v3 03/11] x86/mm: Remove reset_lazy_tlbstate() Andy Lutomirski
2017-06-21  8:03   ` Thomas Gleixner
2017-06-21  9:50   ` Borislav Petkov
2017-06-22 11:08   ` [tip:x86/mm] " tip-bot for Andy Lutomirski
2017-06-21  5:22 ` [PATCH v3 04/11] x86/mm: Give each mm TLB flush generation a unique ID Andy Lutomirski
2017-06-21  8:05   ` Thomas Gleixner
2017-06-21 10:33   ` Borislav Petkov
2017-06-21 15:23     ` Andy Lutomirski
2017-06-21 17:06       ` Borislav Petkov
2017-06-21 17:43   ` Borislav Petkov
2017-06-22  2:34     ` Andy Lutomirski
2017-06-21  5:22 ` [PATCH v3 05/11] x86/mm: Track the TLB's tlb_gen and update the flushing algorithm Andy Lutomirski
2017-06-21  8:32   ` Thomas Gleixner
2017-06-21 15:11     ` Andy Lutomirski
2017-06-21 18:44   ` Borislav Petkov
2017-06-22  2:46     ` Andy Lutomirski
2017-06-22  7:24       ` Borislav Petkov
2017-06-22 14:48         ` Andy Lutomirski
2017-06-22 14:59           ` Borislav Petkov
2017-06-22 15:55             ` Andy Lutomirski
2017-06-22 17:22               ` Borislav Petkov [this message]
2017-06-22 18:08                 ` Andy Lutomirski
2017-06-23  8:42                   ` Borislav Petkov
2017-06-23 15:46                     ` Andy Lutomirski
2017-06-21  5:22 ` [PATCH v3 06/11] x86/mm: Rework lazy TLB mode and TLB freshness tracking Andy Lutomirski
2017-06-21  9:01   ` Thomas Gleixner
2017-06-21 16:04     ` Andy Lutomirski
2017-06-21 17:29       ` Borislav Petkov
2017-06-22 14:50   ` Borislav Petkov
2017-06-22 17:47     ` Andy Lutomirski
2017-06-22 19:05       ` Borislav Petkov
2017-07-27 19:53       ` Andrew Banman
2017-07-28  2:05         ` Andy Lutomirski
2017-06-23 13:34   ` Boris Ostrovsky
2017-06-23 15:22     ` Andy Lutomirski
2017-06-21  5:22 ` [PATCH v3 07/11] x86/mm: Stop calling leave_mm() in idle code Andy Lutomirski
2017-06-21  9:22   ` Thomas Gleixner
2017-06-21 15:16     ` Andy Lutomirski
2017-06-23  9:07   ` Borislav Petkov
2017-06-21  5:22 ` [PATCH v3 08/11] x86/mm: Disable PCID on 32-bit kernels Andy Lutomirski
2017-06-21  9:26   ` Thomas Gleixner
2017-06-23  9:24   ` Borislav Petkov
2017-06-21  5:22 ` [PATCH v3 09/11] x86/mm: Add nopcid to turn off PCID Andy Lutomirski
2017-06-21  9:27   ` Thomas Gleixner
2017-06-23  9:34   ` Borislav Petkov
2017-06-21  5:22 ` [PATCH v3 10/11] x86/mm: Enable CR4.PCIDE on supported systems Andy Lutomirski
2017-06-21  9:39   ` Thomas Gleixner
2017-06-21 13:40     ` Thomas Gleixner
2017-06-21 20:34     ` Andy Lutomirski
2017-06-23 11:50   ` Borislav Petkov
2017-06-23 15:28     ` Andy Lutomirski
2017-06-23 13:35   ` Boris Ostrovsky
2017-06-21  5:22 ` [PATCH v3 11/11] x86/mm: Try to preserve old TLB entries using PCID Andy Lutomirski
2017-06-21 13:38   ` Thomas Gleixner
2017-06-21 13:40     ` Thomas Gleixner
2017-06-22  2:57     ` Andy Lutomirski
2017-06-22 12:21       ` Thomas Gleixner
2017-06-22 18:12         ` Andy Lutomirski
2017-06-22 21:22           ` Thomas Gleixner
2017-06-23  3:09             ` Andy Lutomirski
2017-06-23  7:29               ` Thomas Gleixner
2017-06-22 16:09   ` Nadav Amit
2017-06-22 18:10     ` Andy Lutomirski
2017-06-26 15:58   ` Borislav Petkov
2017-06-21 18:23 ` [PATCH v3 00/11] PCID and improved laziness Linus Torvalds
2017-06-22  5:19   ` Andy Lutomirski
