From: Nadav Amit <nadav.amit@gmail.com>
To: Mel Gorman <mgorman@suse.de>
Cc: Andy Lutomirski <luto@kernel.org>,
	"open list:MEMORY MANAGEMENT" <linux-mm@kvack.org>
Subject: Re: Potential race in TLB flush batching?
Date: Wed, 19 Jul 2017 16:39:07 -0700	[thread overview]
Message-ID: <4DC97890-9FFA-4BA4-B300-B679BAB2136D@gmail.com> (raw)
In-Reply-To: <20170719225950.wfpfzpc6llwlyxdo@suse.de>

Mel Gorman <mgorman@suse.de> wrote:

> On Wed, Jul 19, 2017 at 03:19:00PM -0700, Nadav Amit wrote:
>>>> Yes, of course, since KSM does not batch TLB flushes. I regarded the other
>>>> direction - first try_to_unmap() removes the PTE (but still does not flush),
>>>> unlocks the page, and then KSM acquires the page lock and calls
>>>> write_protect_page(). It finds out the PTE is not present and does not flush
>>>> the TLB.
>>> 
>>> When KSM acquires the page lock, it then acquires the PTL where the
>>> cleared PTE is observed directly and skipped.
>> 
>> I don't see why. Let's try again - CPU0 reclaims while CPU1 deduplicates:
>> 
>> CPU0				CPU1
>> ----				----
>> shrink_page_list()
>> 
>> => try_to_unmap()
>> ==> try_to_unmap_one()
>> [ unmaps from some page-tables ]
>> 
>> [ try_to_unmap returns false;
>>  page not reclaimed ]
>> 
>> => keep_locked: unlock_page()
>> 
>> [ TLB flush deferred ]
>> 				try_to_merge_one_page()
>> 				=> trylock_page()
>> 				=> write_protect_page()
>> 				==> acquire ptl
>> 				  [ PTE non-present -> no PTE change
>> 				    and no flush ]
>> 				==> release ptl
>> 				==> replace_page()
>> 
>> 
>> At this point, while replace_page() is running, CPU0 may still not have
>> flushed the TLBs. Another CPU (CPU2) may hold a stale PTE, which is not
>> write-protected. It can therefore write to that page while replace_page() is
>> running, resulting in memory corruption.
>> 
>> No?
> 
> KSM is not my strong point so it's reaching the point where others more
> familiar with that code need to be involved.

Do not assume for a second that I really know what is going on over there.

> If try_to_unmap returns false on CPU0 then at least one unmap attempt
> failed and the page is not reclaimed.

Actually, try_to_unmap() may even return true and the page would still not
be reclaimed - for example, if page_has_private() is true and freeing the
buffers fails. In that case the page would be unlocked as well.

> For those that were unmapped, they
> will get flushed in the near future. When KSM operates on CPU1, it'll skip
> the unmapped pages under the PTL so stale TLB entries are not relevant as
> the mapped entries are still pointing to a valid page and ksm misses a merge
> opportunity.

This is the case I was referring to, but I do not understand your point. The
whole problem is that CPU1 skips the unmapped pages under the PTL. Since it
skips them, it does not flush them from the TLB, and as a result
replace_page() may run before CPU0 has flushed the TLB.

> If it write protects a page, ksm unconditionally flushes the PTE
> on clearing the PTE so again, there is no stale entry anywhere. For CPU2,
> it'll either reference a PTE that was unmapped in which case it'll fault
> once CPU0 flushes the TLB and until then it's safe to read and write as
> long as the TLB is flushed before the page is freed or IO is initiated which
> reclaim already handles.

In my scenario the page is not freed and there is no I/O in the reclaim
path. The TLB flush of CPU0 in my scenario is simply deferred while the
page-table lock is not held. As I mentioned before, this time period can
potentially be very long in a virtual machine. CPU2 referenced a PTE that
was unmapped by CPU0 (reclaim path) but not by CPU1 (ksm path).

KSM, IIUC, does not expect modifications of the page during replace_page().
Eventually it flushes the TLB (after changing the PTE to point to the
deduplicated page), but in the meanwhile another CPU may use stale PTEs for
writes, and those writes would be lost once the page is deduplicated.


