Date: Sun, 20 Dec 2020 22:12:36 -0700
From: Yu Zhao
To: Nadav Amit
Cc: Andrea Arcangeli, linux-mm, Peter Xu, lkml, Pavel Emelyanov,
 Mike Kravetz, Mike Rapoport, stable@vger.kernel.org, minchan@kernel.org,
 Andy Lutomirski, Will Deacon, Peter Zijlstra
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect
References: <20201219043006.2206347-1-namit@vmware.com> <729A8C1E-FC5B-4F46-AE01-85E00C66DFFF@gmail.com>
In-Reply-To: <729A8C1E-FC5B-4F46-AE01-85E00C66DFFF@gmail.com>

On Sun, Dec 20, 2020 at 08:36:15PM -0800, Nadav Amit wrote:
> > On Dec 19, 2020, at 6:20 PM, Andrea Arcangeli wrote:
> > 
> > On Sat, Dec 19, 2020 at 02:06:02PM -0800, Nadav Amit wrote:
> >>> On Dec 19, 2020, at 1:34 PM, Nadav Amit wrote:
> >>> 
> >>> [ cc'ing some more people who have experience with similar problems ]
> >>> 
> >>>> On Dec 19, 2020, at 11:15 AM, Andrea Arcangeli wrote:
> >>>> 
> >>>> Hello,
> >>>> 
> >>>> On Fri, Dec 18, 2020 at 08:30:06PM -0800, Nadav Amit wrote:
> >>>>> Analyzing this problem indicates that there is a real bug since
> >>>>> mmap_lock is only taken for read in mwriteprotect_range(). This might
> >>>> 
> >>>> Never having to take the mmap_sem for writing, and in turn never
> >>>> blocking, in order to modify the pagetables is quite an important
> >>>> feature in uffd that justifies uffd instead of mprotect.
> >>>> It's not the
> >>>> most important reason to use uffd, but it'd be nice if that guarantee
> >>>> would remain also for the UFFDIO_WRITEPROTECT API, not only for the
> >>>> other pgtable manipulations.
> >>>> 
> >>>>> Consider the following scenario with 3 CPUs (cpu2 is not shown):
> >>>>> 
> >>>>> cpu0                              cpu1
> >>>>> ----                              ----
> >>>>> userfaultfd_writeprotect()
> >>>>> [ write-protecting ]
> >>>>> mwriteprotect_range()
> >>>>> mmap_read_lock()
> >>>>> change_protection()
> >>>>> change_protection_range()
> >>>>>  ...
> >>>>>  change_pte_range()
> >>>>>  [ defer TLB flushes ]
> >>>>>                                   userfaultfd_writeprotect()
> >>>>>                                   mmap_read_lock()
> >>>>>                                   change_protection()
> >>>>>                                   [ write-unprotect ]
> >>>>>                                   ...
> >>>>>                                   [ unprotect PTE logically ]
> > 
> > Is the uffd selftest failing with upstream or after your kernel
> > modification that removes the tlb flush from unprotect?
> 
> Please see my reply to Yu. I was wrong in this analysis, and I sent a
> correction to my analysis. The problem actually happens when
> userfaultfd_writeprotect() unprotects the memory.
> 
> >		} else if (uffd_wp_resolve) {
> >			/*
> >			 * Leave the write bit to be handled
> >			 * by PF interrupt handler, then
> >			 * things like COW could be properly
> >			 * handled.
> >			 */
> >			ptent = pte_clear_uffd_wp(ptent);
> >		}
> > 
> > Upstream this will still do pages++, there's a tlb flush before
> > change_protection can return here, so I'm confused.
> > 
> 
> You are correct. The problem I encountered with userfaultfd_writeprotect()
> is during the unprotecting path.
> 
> Having said that, I think that there are additional scenarios that are
> problematic. Consider for instance madvise_dontneed_free() that is racing
> with userfaultfd_writeprotect().
> If madvise_dontneed_free() completed
> removing the PTEs, but still did not flush, change_pte_range() will see
> non-present PTEs, say a flush is not needed, and then
> change_protection_range() will not do a flush, and return while
> the memory is still not protected.
> 
> > I don't share your concern. What matters is the PT lock, so it
> > wouldn't be one per pte, but at least an order 9 higher, but let's
> > assume one flush per pte.
> > 
> > It's either a huge mapping and then it's likely running without other
> > tlb flushing in background (postcopy snapshotting), or it's a granular
> > protect with distributed shared memory in which case the number of
> > changed ptes or huge_pmds tends to be always 1 anyway. So it doesn't
> > matter if it's deferred.
> > 
> > I agree it may require a larger tlb flush review, not just mprotect
> > though, but it didn't sound particularly complex. Note the
> > UFFDIO_WRITEPROTECT is still relatively recent, so backports won't
> > risk rejects so heavy as to require a band-aid.
> > 
> > My second thought is, I don't see exactly the bug and it's not clear
> > if it's upstream reproducing this, but assuming this happens on
> > upstream, even ignoring everything else happening in the tlb flush
> > code, this sounds like it's purely introduced by userfaultfd_writeprotect()
> > vs userfaultfd_writeprotect() (since it's the only place changing
> > protection with mmap_sem for reading, and note we already unmap and
> > flush tlb with mmap_sem for reading in MADV_DONTNEED/MADV_FREE, which
> > clears the dirty bit etc.). Flushing tlbs with mmap_sem for reading is
> > nothing new; the only new thing is the flush after wrprotect.
> > 
> > So instead of altering any tlb flush code, would it be possible to
> > just stick to mmap_lock for reading and then serialize
> > userfaultfd_writeprotect() against itself with an additional
> > mm->mmap_wprotect_lock mutex? That'd be a very local change to
> > userfaultfd too.
> > 
> > Can you look if the rule mmap_sem for reading plus a new
> > mm->mmap_wprotect_lock mutex, or the mmap_sem for writing, whenever
> > wrprotecting ptes, is enough to comply with the current tlb flushing
> > code, so as not to require any change non-local to uffd (modulo the
> > additional mutex)?
> 
> So I did not fully understand your solution, but I took your point and
> looked again at similar cases. To be fair, despite my experience with these
> deferred TLB flushes, as well as Peter Zijlstra's great documentation, I
> keep getting confused (e.g., can't we somehow combine tlb_flush_batched
> and tlb_flush_pending?)
> 
> As I said before, my initial scenario was wrong, and the problem is not
> userfaultfd_writeprotect() racing against itself. This one actually seems
> benign to me.
> 
> Nevertheless, I do think there is a problem in change_protection_range().
> Specifically, see the aforementioned scenario of a race between
> madvise_dontneed_free() and userfaultfd_writeprotect().
> 
> So an immediate solution for such a case can be realized without holding
> mmap_lock for write, by just adding a test for mm_tlb_flush_nested() in
> change_protection_range():
> 
>	/*
>	 * Only flush the TLB if we actually modified any entries
>	 * or if there are pending TLB flushes.
>	 */
>	if (pages || mm_tlb_flush_nested(mm))
>		flush_tlb_range(vma, start, end);
> 
> To be fair, I am not confident I did not miss other problematic cases.
> 
> But for now, this change, together with the preserve_write change, should
> address the immediate issues. Let me know whether you agree.

The problem starts in UFD and is related to the TLB flush, but its focal
point is in do_wp_page().
I'd suggest you look at the function and
see what it does before and after the commits I listed, under the
following conditions: PageAnon(), !PageKsm(), !PageSwapCache(),
!pte_write(), page_mapcount() = 1, and page_count() > 1 or PageLocked(),
when it runs against the two UFD examples you listed.