Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect
From: Nadav Amit
Date: Sun, 20 Dec 2020 20:36:15 -0800
To: Andrea Arcangeli
Cc: linux-mm, Peter Xu, lkml, Pavel Emelyanov, Mike Kravetz, Mike Rapoport,
    stable@vger.kernel.org, minchan@kernel.org, Andy Lutomirski,
    yuzhao@google.com, Will Deacon, Peter Zijlstra
Message-Id: <729A8C1E-FC5B-4F46-AE01-85E00C66DFFF@gmail.com>
References: <20201219043006.2206347-1-namit@vmware.com>

> On Dec 19, 2020, at 6:20 PM, Andrea Arcangeli wrote:
> 
> On Sat, Dec 19, 2020 at 02:06:02PM -0800, Nadav Amit wrote:
>>> On Dec 19, 2020, at 1:34 PM, Nadav Amit wrote:
>>> 
>>> [ cc'ing some more people who have experience with similar problems ]
>>> 
>>>> On Dec 19, 2020, at 11:15 AM, Andrea Arcangeli wrote:
>>>> 
>>>> Hello,
>>>> 
>>>> On Fri, Dec 18, 2020 at 08:30:06PM -0800, Nadav Amit wrote:
>>>>> Analyzing this problem indicates that there is a real bug since
>>>>> mmap_lock is only taken for read in mwriteprotect_range(). This might
>>>> 
>>>> Never having to take the mmap_sem for writing, and in turn never
>>>> blocking, in order to modify the pagetables is quite an important
>>>> feature in uffd that justifies uffd instead of mprotect. It's not the
>>>> most important reason to use uffd, but it'd be nice if that guarantee
>>>> would remain also for the UFFDIO_WRITEPROTECT API, not only for the
>>>> other pgtable manipulations.
>>>> 
>>>>> Consider the following scenario with 3 CPUs (cpu2 is not shown):
>>>>> 
>>>>> cpu0				cpu1
>>>>> ----				----
>>>>> userfaultfd_writeprotect()
>>>>> [ write-protecting ]
>>>>> mwriteprotect_range()
>>>>> mmap_read_lock()
>>>>> change_protection()
>>>>> change_protection_range()
>>>>>     ...
>>>>>     change_pte_range()
>>>>>     [ defer TLB flushes]
>>>>>				userfaultfd_writeprotect()
>>>>>				 mmap_read_lock()
>>>>>				 change_protection()
>>>>>				 [ write-unprotect ]
>>>>>				 ...
>>>>>				 [ unprotect PTE logically ]
> 
> Is the uffd selftest failing with upstream or after your kernel
> modification that removes the tlb flush from unprotect?

Please see my reply to Yu. I was wrong in this analysis and sent a
correction. The problem actually happens when userfaultfd_writeprotect()
unprotects the memory.

> 			} else if (uffd_wp_resolve) {
> 				/*
> 				 * Leave the write bit to be handled
> 				 * by PF interrupt handler, then
> 				 * things like COW could be properly
> 				 * handled.
> 				 */
> 				ptent = pte_clear_uffd_wp(ptent);
> 			}
> 
> Upstream this will still do pages++, there's a tlb flush before
> change_protection can return here, so I'm confused.
> 

You are correct. The problem I encountered with userfaultfd_writeprotect()
is in the unprotecting path.

Having said that, I think that there are additional scenarios that are
problematic. Consider, for instance, madvise_dontneed_free() racing with
userfaultfd_writeprotect(). If madvise_dontneed_free() completed removing
the PTEs, but still did not flush, change_pte_range() will see non-present
PTEs, say a flush is not needed, and then change_protection_range() will
not do a flush, and return while the memory is still not protected.

> I don't share your concern. What matters is the PT lock, so it
> wouldn't be one per pte, but at least an order 9 higher, but let's
> assume one flush per pte.
> 
> It's either a huge mapping and then it's likely running without other
> tlb flushing in background (postcopy snapshotting), or it's a granular
> protect with distributed shared memory in which case the number of
> changed ptes or huge_pmds tends to be always 1 anyway. So it doesn't
> matter if it's deferred.
> 
> I agree it may require a larger tlb flush review, not just mprotect
> though, but it didn't sound particularly complex. Note the
> UFFDIO_WRITEPROTECT is still relatively recent so backports won't
> risk a reject so heavy as to require a band-aid.
> 
> My second thought is, I don't see exactly the bug and it's not clear
> if it's upstream reproducing this, but assuming this happens on
> upstream, even ignoring everything else happening in the tlb flush
> code, this sounds like purely introduced by userfaultfd_writeprotect()
> vs userfaultfd_writeprotect() (since it's the only place changing
> protection with mmap_sem for reading, and note we already unmap and
> flush tlb with mmap_sem for reading in MADV_DONTNEED/MADV_FREE, which
> clears the dirty bit etc.). Flushing tlbs with mmap_sem for reading is
> nothing new, the only new thing is the flush after wrprotect.
> 
> So instead of altering any tlb flush code, would it be possible to
> just stick to mmap_lock for reading and then serialize
> userfaultfd_writeprotect() against itself with an additional
> mm->mmap_wprotect_lock mutex? That'd be a very local change to
> userfaultfd too.
> 
> Can you look if the rule mmap_sem for reading plus a new
> mm->mmap_wprotect_lock mutex, or the mmap_sem for writing, whenever
> wrprotecting ptes, is enough to comply with the current tlb flushing
> code, so as not to require any change non-local to uffd (modulo the
> additional mutex)?

So I did not fully understand your solution, but I took your point and
looked again at similar cases. To be fair, despite my experience with
these deferred TLB flushes, as well as Peter Zijlstra's great
documentation, I keep getting confused (e.g., can't we somehow combine
tlb_flush_batched and tlb_flush_pending?).

As I said before, my initial scenario was wrong, and the problem is not
userfaultfd_writeprotect() racing against itself. That one actually seems
benign to me. Nevertheless, I do think there is a problem in
change_protection_range(). Specifically, see the aforementioned scenario
of a race between madvise_dontneed_free() and userfaultfd_writeprotect().
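To make the window concrete, the relevant tail of change_protection_range()
is roughly the following (abridged and paraphrased from my reading of
mm/mprotect.c, so take the exact shape with a grain of salt):

	inc_tlb_flush_pending(mm);
	do {
		next = pgd_addr_end(addr, end);
		if (pgd_none_or_clear_bad(pgd))
			continue;
		/* sees only non-present PTEs if a concurrent zap already ran */
		pages += change_p4d_range(vma, pgd, addr, next, newprot,
					  cp_flags);
	} while (pgd++, addr = next, addr != end);

	/* Only flush the TLB if we actually modified any entries: */
	if (pages)
		flush_tlb_range(vma, start, end);
	dec_tlb_flush_pending(mm);

If the racing madvise_dontneed_free() already removed the PTEs but has not
flushed yet, pages stays zero here, the flush is skipped, and we return to
userspace while stale writable TLB entries may still exist.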
So such a case can be resolved immediately, without holding mmap_lock for
write, by just adding a test for mm_tlb_flush_nested() in
change_protection_range():

	/*
	 * Only flush the TLB if we actually modified any entries
	 * or if there are pending TLB flushes.
	 */
	if (pages || mm_tlb_flush_nested(mm))
		flush_tlb_range(vma, start, end);

To be fair, I am not confident that I did not miss other problematic cases.
But for now, this change, together with the preserve_write change, should
address the immediate issues. Let me know if you agree.
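For completeness, my understanding of why mm_tlb_flush_nested() is the
right test (please correct me if I misremember the details): if I recall
include/linux/mm_types.h correctly, it is roughly

	static inline bool mm_tlb_flush_nested(struct mm_struct *mm)
	{
		/* true when someone else also has a TLB flush in flight */
		return atomic_read(&mm->tlb_flush_pending) > 1;
	}

change_protection_range() already does inc_tlb_flush_pending() around its
PTE walk, and as far as I can tell the zap path elevates the same counter
through tlb_gather_mmu(). So the counter is above one exactly while a
concurrent operation, such as the madvise_dontneed_free() above, still has
an unflushed batch, which is when we need to flush even though pages is
zero.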