From: Nadav Amit <nadav.amit@gmail.com>
To: Andrea Arcangeli <aarcange@redhat.com>
Cc: linux-mm <linux-mm@kvack.org>, Peter Xu <peterx@redhat.com>,
lkml <linux-kernel@vger.kernel.org>,
Pavel Emelyanov <xemul@openvz.org>,
Mike Kravetz <mike.kravetz@oracle.com>,
Mike Rapoport <rppt@linux.vnet.ibm.com>,
stable@vger.kernel.org, minchan@kernel.org,
Andy Lutomirski <luto@kernel.org>,
yuzhao@google.com, Will Deacon <will@kernel.org>,
Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect
Date: Sat, 19 Dec 2020 14:06:02 -0800 [thread overview]
Message-ID: <DD367393-D1B3-4A84-AF92-9C6BAEAB40DC@gmail.com> (raw)
In-Reply-To: <EDC00345-B46E-4396-8379-98E943723809@gmail.com>
> On Dec 19, 2020, at 1:34 PM, Nadav Amit <nadav.amit@gmail.com> wrote:
>
> [ cc’ing some more people who have experience with similar problems ]
>
>> On Dec 19, 2020, at 11:15 AM, Andrea Arcangeli <aarcange@redhat.com> wrote:
>>
>> Hello,
>>
>> On Fri, Dec 18, 2020 at 08:30:06PM -0800, Nadav Amit wrote:
>>> Analyzing this problem indicates that there is a real bug since
>>> mmap_lock is only taken for read in mwriteprotect_range(). This might
>>
>> Never having to take the mmap_sem for writing, and in turn never
>> blocking, in order to modify the pagetables is quite an important
>> feature in uffd that justifies uffd instead of mprotect. It's not the
>> most important reason to use uffd, but it'd be nice if that guarantee
>> would remain also for the UFFDIO_WRITEPROTECT API, not only for the
>> other pgtable manipulations.
>>
>>> Consider the following scenario with 3 CPUs (cpu2 is not shown):
>>>
>>> cpu0                                cpu1
>>> ----                                ----
>>> userfaultfd_writeprotect()
>>> [ write-protecting ]
>>> mwriteprotect_range()
>>> mmap_read_lock()
>>> change_protection()
>>>  change_protection_range()
>>>     ...
>>>     change_pte_range()
>>>     [ defer TLB flushes]
>>>                                     userfaultfd_writeprotect()
>>>                                      mmap_read_lock()
>>>                                      change_protection()
>>>                                      [ write-unprotect ]
>>>                                      ...
>>>                                      [ unprotect PTE logically ]
>>>                                     ...
>>>                                     [ page-fault]
>>>                                     ...
>>>                                     wp_page_copy()
>>>                                     [ set new writable page in PTE]
>>
>> Can't we check mm_tlb_flush_pending(vma->vm_mm) if MM_CP_UFFD_WP_ALL
>> is set and do an explicit (potentially spurious) tlb flush before
>> write-unprotect?
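
To make that concrete, it would be something along these lines (untested
sketch only, not a real patch; the helper name is invented), called before
the write-unprotect path updates the PTEs:

```c
/*
 * Untested sketch of the suggestion above: before logically
 * write-unprotecting a range, flush any TLB entries whose invalidation
 * a concurrent change_protection_range() may have deferred.  The flush
 * may be spurious, but it restores the invariant that the TLBs never
 * cache more permissions than the page-tables currently grant.
 */
static void uffd_wp_flush_pending(struct vm_area_struct *vma,
				  unsigned long start, unsigned long end,
				  unsigned long cp_flags)
{
	if ((cp_flags & MM_CP_UFFD_WP_ALL) &&
	    mm_tlb_flush_pending(vma->vm_mm))
		flush_tlb_range(vma, start, end);
}
```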
>
> There is a concrete scenario that I actually encountered and then there is a
> general problem.
>
> In general, the kernel code assumes that PTEs that are read from the
> page-tables are coherent across all the TLBs, excluding permission promotion
> (i.e., the PTE may have higher permissions in the page-tables than those
> that are cached in the TLBs).
>
> We therefore need both: (a) to protect change_protection_range() from the
> changes of others who might defer TLB flushes without taking mmap_sem for
> write (e.g., try_to_unmap_one()); and (b) to protect others (e.g.,
> page-fault handlers) from the concurrent changes of change_protection().
>
> We have already encountered several similar bugs, debugging such issues
> is time consuming, and the impact of these bugs is substantial (memory
> corruption, security). So I think we should stick only to general
> solutions.
>
> So perhaps the approach of your proposed solution is feasible, but it
> would have to be applied all over the place: we would need to add a check
> for mm_tlb_flush_pending() and conditionally flush the TLB in every case
> in which PTEs are read and there might be an assumption that the
> access-permissions reflect what the TLBs hold. This includes not only
> page-fault handlers, but also the NUMA migration code in
> change_protection(), the softdirty cleanup in clear_refs_write(), and
> maybe others.
>
> [ I have in mind another solution, such as keeping in each page-table a
> “table-generation” which is the mm-generation at the time of the change,
> and only flush if “table-generation”==“mm-generation”, but it requires
> some thought on how to avoid adding new memory barriers. ]
>
> IOW: I think the change that you suggest is insufficient, and a proper
> solution is too intrusive for “stable”.
>
> As for performance, I can add another patch later to remove the TLB flush
> that is unnecessarily performed during change_protection_range() that does
> permission promotion. I know that your concern is about the “protect” case
> but I cannot think of a good immediate solution that avoids taking mmap_lock
> for write.
>
> Thoughts?
On second thought (i.e., I don’t know what I was thinking), doing so
(checking mm_tlb_flush_pending() on every potentially dangerous PTE read
and flushing if needed) can lead to a huge number of TLB flushes and
shootdowns, as the counter might remain elevated for a considerable amount
of time.

So this solution seems to me like a no-go.