From: Andy Lutomirski <luto@kernel.org>
To: Borislav Petkov <bp@alien8.de>
Cc: Andy Lutomirski <luto@kernel.org>, X86 ML <x86@kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mel Gorman <mgorman@suse.de>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Nadav Amit <nadav.amit@gmail.com>, Rik van Riel <riel@redhat.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Arjan van de Ven <arjan@linux.intel.com>,
	Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v3 05/11] x86/mm: Track the TLB's tlb_gen and update the flushing algorithm
Date: Thu, 22 Jun 2017 11:08:38 -0700	[thread overview]
Message-ID: <CALCETrUbiXK8gjS=U2j4jW8YgPv4j+wgwsa4nJLnO+902fXfKQ@mail.gmail.com> (raw)
In-Reply-To: <20170622172220.wf3egiwx2kqbxbi2@pd.tnic>

On Thu, Jun 22, 2017 at 10:22 AM, Borislav Petkov <bp@alien8.de> wrote:
> On Thu, Jun 22, 2017 at 08:55:36AM -0700, Andy Lutomirski wrote:
>> > Ah, simple: we control the flushing with info.new_tlb_gen and
>> > mm->context.tlb_gen. I.e., this check:
>> >
>> >
>> >         if (f->end != TLB_FLUSH_ALL &&
>> >             f->new_tlb_gen == local_tlb_gen + 1 &&
>> >             f->new_tlb_gen == mm_tlb_gen) {
>> >
>> > why can't we write:
>> >
>> >         if (f->end != TLB_FLUSH_ALL &&
>> >             mm_tlb_gen == local_tlb_gen + 1)
>> >
>> > ?
>>
>> Ah, I thought you were asking about why I needed mm_tlb_gen ==
>> local_tlb_gen + 1.  This is just an optimization, or at least I hope
>> it is.  The idea is that, if we know that another flush is coming, it
>> seems likely that it would be faster to do a full flush and increase
>> local_tlb_gen all the way to mm_tlb_gen rather than doing a partial
>> flush, increasing local_tlb_gen to something less than mm_tlb_gen, and
>> needing to flush again very soon.
>
> Thus the f->new_tlb_gen check of whether it is local_tlb_gen + 1.
>
> Btw, do you see how confusing this check is: you have new_tlb_gen,
> which is passed in with the flush IPI, local_tlb_gen which is the CPU's
> own, and then there's also mm_tlb_gen which we've written into
> new_tlb_gen from:
>
>         info.new_tlb_gen = bump_mm_tlb_gen(mm);
>
> which incremented mm_tlb_gen too.

Yes, I agree it's confusing.  There really are three numbers.  Those
numbers are: the latest generation, the generation that this CPU has
caught up to, and the generation that the requester of the flush we're
currently handling has asked us to catch up to.  I don't see a way to
reduce the complexity.
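
To make that concrete, here is a rough sketch of where each of the three
values lives (paraphrasing this series; struct and field names are
abbreviated rather than the literal kernel definitions):

	/* Per-mm state: the "latest generation". */
	typedef struct {
		u64 ctx_id;		/* never-reused ID for this mm */
		atomic64_t tlb_gen;	/* bumped for every flush request */
	} mm_context_t;

	/* Per-CPU state: "what this CPU has caught up to". */
	struct tlb_context {
		u64 ctx_id;		/* which mm this slot describes */
		u64 tlb_gen;		/* generation this CPU has reached */
	};

	/* Carried with the flush request: "what the requester asked for". */
	struct flush_tlb_info {
		struct mm_struct	*mm;
		unsigned long		start, end;
		u64			new_tlb_gen;	/* mm_tlb_gen at bump time */
	};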

>
>> Hmm.  I'd be nervous that there are more subtle races if we do this.
>> For example, suppose that a partial flush increments tlb_gen from 1 to
>> 2 and a full flush increments tlb_gen from 2 to 3.  Meanwhile, the CPU
>> is busy switching back and forth between mms, so the partial flush
>> sees the cpu set in mm_cpumask but the full flush doesn't see the cpu
>> set in mm_cpumask.
>
> Lemme see if I understand this correctly: you mean, the full flush will
> exit early due to the
>
>         if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(loaded_mm))) {
>
> test?

Yes, or at least that's what I'm imagining.

>
>> The flush IPI hits after a switch_mm_irqs_off() call notices the
>> change from 1 to 2. switch_mm_irqs_off() will do a full flush and
>> increment the local tlb_gen to 2, and the IPI handler for the partial
>> flush will see local_tlb_gen == mm_tlb_gen - 1 (because local_tlb_gen
>> == 2 and mm_tlb_gen == 3) and do a partial flush.
>
> Why, the 2->3 flush has f->end == TLB_FLUSH_ALL.
>
> That's why you have this thing in addition to the tlb_gen.

Yes.  The idea is that we only do remote partial flushes when it's
100% obvious that it's safe.
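
Concretely, the shape of the IPI handler's decision is roughly this (a
simplified sketch, not the literal patch; the flush primitives and the
per-CPU field at the end are placeholders for whatever the final code
uses):

	if (f->end != TLB_FLUSH_ALL &&
	    f->new_tlb_gen == local_tlb_gen + 1 &&
	    f->new_tlb_gen == mm_tlb_gen) {
		/*
		 * The request covers exactly the one generation this CPU
		 * is missing, so flushing just [start, end) is provably
		 * enough.
		 */
		unsigned long addr;

		for (addr = f->start; addr < f->end; addr += PAGE_SIZE)
			__flush_tlb_single(addr);
	} else {
		/* Anything less clear-cut gets promoted to a full flush. */
		local_flush_tlb();
	}

	/*
	 * Either way this CPU has now observed everything up to mm_tlb_gen:
	 * in the partial case because new_tlb_gen == mm_tlb_gen was part of
	 * the condition, in the full case trivially.
	 */
	this_cpu_write(cpu_tlbstate.ctxs[0].tlb_gen, mm_tlb_gen);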

>
> What we end up doing in this case is promoting the partial flush to a
> full one, and thus a partial and a full flush which are close together
> get converted to two full flushes.

It could be converted to two full flushes or to just one, I think,
depending on what order everything happens in.

>
>> The problem here is that it's not obvious to me that this actually
>> ends up flushing everything that's needed. Maybe all the memory
>> ordering gets this right, but I can imagine scenarios in which
>> switch_mm_irqs_off() does its flush early enough that the TLB picks up
>> an entry that was supposed to get zapped by the full flush.
>
> See above.
>
> And I don't think that having two full flushes back-to-back is going to
> cost a lot as the second one won't flush a whole lot.

From limited benchmarking on new Intel chips, a full flush is very
expensive no matter what.  I think this is silly because I suspect
that the PCID circuitry could internally simulate a full flush at very
little cost, but it seems that it doesn't.  I haven't tried to
benchmark INVLPG.

>
>> IOW it *might* be valid, but I think it would need very careful review
>> and documentation.
>
> Always.
>
> Btw, I get the sense this TLB flush avoidance scheme becomes pretty
> complex for diminishing returns.
>
>   [ Or maybe I'm not seeing them - I'm always open to corrections. ]
>
> Especially if intermediary levels from the pagetable walker are cached
> and reestablishing the TLB entries seldom means a full walk.
>
> You should do a full-fledged benchmark to see whether this whole
> complexity is even worth it, methinks.

I agree that, by itself, this patch is waaaaaay too complex to be
justifiable.  The thing is that, after quite a few false starts, I
couldn't find a clean way to get PCID and improved laziness without
this thing as a prerequisite.  Both of those features depend on having
a heuristic for when a flush can be avoided, and that heuristic must
*never* say that a flush can be skipped when it can't be skipped.
This patch gives a way to do that.
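
For reference, the "never skip when it isn't safe" rule boils down to a
check like the one below. This is a hedged sketch, not the patch's exact
code; ctxs[0] is shorthand for whichever per-CPU slot describes the
currently loaded mm:

	static bool this_cpu_tlb_is_up_to_date(struct mm_struct *mm)
	{
		u64 mm_tlb_gen    = atomic64_read(&mm->context.tlb_gen);
		u64 local_ctx_id  = this_cpu_read(cpu_tlbstate.ctxs[0].ctx_id);
		u64 local_tlb_gen = this_cpu_read(cpu_tlbstate.ctxs[0].tlb_gen);

		/*
		 * Skipping a flush is allowed only if this CPU's TLB
		 * provably describes this exact mm (ctx_id matches, so the
		 * mm was never freed and reused) and the CPU has already
		 * observed every flush generation requested so far.
		 */
		return local_ctx_id == mm->context.ctx_id &&
		       local_tlb_gen == mm_tlb_gen;
	}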

I tried a few other approaches:

 - Keeping a cpumask of which CPUs are up to date.  Someone at Intel
tried this once and I inherited that code, but I scrapped it all after
it had both performance and correctness issues.  I tried the approach
again from scratch and paulmck poked all kinds of holes in it.

 - Using a lock to make sure that only one flush can be in progress on
a given mm at a given time.  The performance is just fine -- flushes
can't usefully happen in parallel anyway.  The problem is that the
batched unmap code in the core mm (which is apparently a huge win on
some workloads) can introduce arbitrarily long delays between
initiating a flush and actually requesting that the IPIs be sent.  I
could have worked around this with fancy data structures, but getting
them right so they wouldn't deadlock if called during reclaim and
preventing lock ordering issues would have been really nasty.

 - Poking remote cpus' data structures even when they're lazy.  This
wouldn't scale on systems with many cpus, since a given mm can easily
be lazy on every single other cpu all at once.

 - Ditching remote partial flushes entirely.  But those were recently
re-optimized by some Intel folks (Dave and others, IIRC) and came with
nice benchmarks showing that they were useful on some workloads.
(munmapping a small range, presumably.)

But this approach of using three separate tlb_gen values seems to
cover all the bases, and I don't think it's *that* bad.

--Andy

