From: Mel Gorman <mgorman@suse.de>
To: Andy Lutomirski <luto@kernel.org>
Cc: Nadav Amit <nadav.amit@gmail.com>,
	"open list:MEMORY MANAGEMENT" <linux-mm@kvack.org>
Subject: Re: Potential race in TLB flush batching?
Date: Tue, 11 Jul 2017 16:53:12 +0100
Message-ID: <20170711155312.637eyzpqeghcgqzp@suse.de>
In-Reply-To: <CALCETrUOYwpJZAAVF8g+_U9fo5cXmGhYrM-ix+X=bbfid+j-Cw@mail.gmail.com>

On Tue, Jul 11, 2017 at 07:58:04AM -0700, Andrew Lutomirski wrote:
> On Tue, Jul 11, 2017 at 6:20 AM, Mel Gorman <mgorman@suse.de> wrote:
> > +
> > +/*
> > + * This is called after an mprotect update that altered no pages. Batched
> > + * unmap releases the PTL before a flush occurs leaving a window where
> > + * an mprotect that reduces access rights can still access the page after
> > + * mprotect returns via a stale TLB entry. Avoid this possibility by flushing
> > + * the local TLB if mprotect updates no pages so that the caller of
> > + * mprotect always gets expected behaviour. It's overkill and unnecessary to
> > + * flush all TLBs as a separate thread accessing the data that raced with
> > + * both reclaim and mprotect as there is no risk of data corruption and
> > + * the exact timing of a parallel thread seeing a protection update without
> > + * any serialisation on the application side is always uncertain.
> > + */
> > +void batched_unmap_protection_update(void)
> > +{
> > +       count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
> > +       local_flush_tlb();
> > +       trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
> > +}
> > +
> 
> What about remote CPUs?  You could get migrated right after mprotect()
> or the inconsistency could be observed on another CPU. 

If it's migrated then it has also context switched, so the TLB entry
will be reloaded from the page tables the first time the data is
accessed. If the entry is stale on another CPU accessing the data then
that CPU may successfully access a page that was just mprotected, but
that is no different to simply racing with the call to mprotect itself.
The timing isn't exact, nor does it need to be; one thread accessing
data while another thread calls mprotect without any synchronisation in
the application is always going to be unreliable. I'm less certain what
happens once PCID tracking is in place, and whether a process could
context switch fast enough for a stale entry to survive and allow an
access. If that's possible then batching would require an unconditional
flush on any mprotect that reduces access rights, even if no pages are
updated, which would be unfortunate.

> I also really
> don't like bypassing arch code like this.  The implementation of
> flush_tlb_mm_range() in tip:x86/mm (and slated for this merge window!)
> is *very* different from what's there now, and it is not written in
> the expectation that some generic code might call local_tlb_flush()
> and expect any kind of coherency at all.
> 

Assuming that gets merged first, the most straightforward approach
would be to set up an arch_tlbflush_unmap_batch with just the local CPU
set in the mask, or something similar.
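
As a rough sketch only, assuming an entry point along the lines of the
arch_tlbbatch_flush() your series introduces (I haven't checked the
exact names that land in tip), it would be something like:

	static void batched_unmap_protection_update(void)
	{
		/* On-stack cpumask is fine for a sketch; real code
		 * would need to care about CPUMASK_OFFSTACK.
		 */
		struct arch_tlbflush_unmap_batch batch;
		int cpu = get_cpu();

		/* Flush only the local CPU via the arch interface. */
		cpumask_clear(&batch.cpumask);
		cpumask_set_cpu(cpu, &batch.cpumask);
		arch_tlbbatch_flush(&batch);
		put_cpu();
	}

That way generic code never calls local_flush_tlb() directly and the
arch is free to implement the flush however it sees fit.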

> I'm also still nervous about situations in which, while a batched
> flush is active, a user calls mprotect() and then does something else
> that gets confused by the fact that there's an RO PTE and doesn't
> flush out the RW TLB entry.  COWing a page, perhaps?
> 

The race in question only applies if mprotect had no PTEs to update; if
any page was updated then the TLB is flushed before mprotect returns.
With the patch (or a variant on top of your work), at least the local
TLB will be flushed even if no PTEs were updated. This might be more
expensive than it has to be, but I expect that mprotects on ranges with
no PTEs to update are fairly rare.
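
For reference, the call site in the patch is trivial; roughly this, with
the exact location in mm/mprotect.c approximate:

	pages = change_protection(vma, start, end, vma->vm_page_prot,
				  dirty_accountable, 0);
	if (!pages)
		batched_unmap_protection_update();

so the cost is only paid on the no-PTEs-updated path.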

> Would a better fix perhaps be to find a way to figure out whether a
> batched flush is pending on the mm in question and flush it out if you
> do any optimizations based on assuming that the TLB is in any respect
> consistent with the page tables?  With the changes in -tip, x86 could,
> in principle, supply a function to sync up its TLB state.  That would
> require cross-CPU poking at state or an unconditional IPI (that might
> end up not flushing anything), but either is doable.

It's potentially doable if a field like tlb_flush_pending were added to
mm_struct that is set when batching starts. I don't think there is a
logical place where it can be cleared: when the TLB eventually gets
flushed by reclaim, reclaim cannot walk the rmap again just to clear
the flag. Instead, the first mprotect after any batching, no matter how
long ago it happened, would have to unconditionally flush the TLB and
then clear the flag. That would be a relatively minor hit, would cover
all the possibilities, and should work unmodified with or without your
series applied.
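
As a sketch, with the field and helper names made up for illustration,
mprotect would call something like:

	/*
	 * Called by mprotect before relying on the TLB being consistent
	 * with the page tables. mm->tlb_flush_batched (name made up)
	 * would be set by reclaim when it batches an unmap without
	 * flushing.
	 */
	void flush_tlb_batched_pending(struct mm_struct *mm)
	{
		if (mm->tlb_flush_batched) {
			flush_tlb_mm(mm);

			/* Keep the clear ordered after the flush. */
			barrier();
			mm->tlb_flush_batched = false;
		}
	}

If reclaim batches another unmap between the flush and the clear, the
worst case is that we revert to the current behaviour until the next
mprotect.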

Would that be preferable to you?

-- 
Mel Gorman
SUSE Labs
