Date: Sat, 19 Dec 2020 23:05:26 -0700
From: Yu Zhao
To: Nadav Amit
Cc: Andrea Arcangeli, linux-mm, Peter Xu, lkml, Pavel Emelyanov,
 Mike Kravetz, Mike Rapoport, stable@vger.kernel.org, minchan@kernel.org,
 Andy Lutomirski, Will Deacon, Peter Zijlstra
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect
References: <20201219043006.2206347-1-namit@vmware.com>

On Sat, Dec 19, 2020 at 01:34:29PM -0800, Nadav Amit wrote:
> [ cc’ing some more people who have experience with similar problems ]
>
> > On Dec 19, 2020, at 11:15 AM, Andrea Arcangeli wrote:
> >
> > Hello,
> >
> > On Fri, Dec 18, 2020 at 08:30:06PM -0800, Nadav Amit wrote:
> >> Analyzing this problem indicates that there is a real bug since
> >> mmap_lock is only taken for read in mwriteprotect_range(). This might
> >
> > Never having to take the mmap_sem for writing, and in turn never
> > blocking, in order to modify the pagetables is quite an important
> > feature in uffd that justifies uffd instead of mprotect. It's not the
> > most important reason to use uffd, but it'd be nice if that guarantee
> > would remain also for the UFFDIO_WRITEPROTECT API, not only for the
> > other pgtable manipulations.
> >
> >> Consider the following scenario with 3 CPUs (cpu2 is not shown):
> >>
> >> cpu0                                cpu1
> >> ----                                ----
> >> userfaultfd_writeprotect()
> >> [ write-protecting ]
> >> mwriteprotect_range()
> >> mmap_read_lock()
> >> change_protection()
> >> change_protection_range()
> >> ...
> >> change_pte_range()
> >> [ defer TLB flushes ]
> >>                                     userfaultfd_writeprotect()
> >>                                     mmap_read_lock()
> >>                                     change_protection()
> >>                                     [ write-unprotect ]
> >>                                     ...
> >>                                     [ unprotect PTE logically ]
> >>                                     ...
> >>                                     [ page-fault ]
> >>                                     ...
> >>                                     wp_page_copy()
> >>                                     [ set new writable page in PTE ]

I don't see any problem in this example -- wp_page_copy() calls
ptep_clear_flush_notify(), which should take care of the stale entry
left by cpu0.

That being said, I suspect the memory corruption you observed is
related to this example, with cpu1 running something else that flushes
conditionally depending on pte_write().

Do you know which type of pages were corrupted? file, anon, etc.

> > Can't we check mm_tlb_flush_pending(vma->vm_mm) if MM_CP_UFFD_WP_ALL
> > is set and do an explicit (potentially spurious) tlb flush before
> > write-unprotect?
>
> There is a concrete scenario that I actually encountered, and then there
> is a general problem.
>
> In general, the kernel code assumes that PTEs that are read from the
> page-tables are coherent across all the TLBs, excluding permission
> promotion (i.e., the PTE may have higher permissions in the page-tables
> than those that are cached in the TLBs).
>
> We therefore need to both: (a) protect change_protection_range() from
> the changes of others who might defer TLB flushes without taking
> mmap_sem for write (e.g., try_to_unmap_one()); and (b) protect others
> (e.g., page-fault handlers) from concurrent changes of
> change_protection().
>
> We have already encountered several similar bugs, and debugging such
> issues is time consuming and their impact is substantial (memory
> corruption, security). So I think we should stick only to general
> solutions.
>
> So perhaps the approach of your proposed solution is feasible, but it
> would have to be applied all over the place: we would need to add a
> check for mm_tlb_flush_pending() and conditionally flush the TLB in
> every case in which PTEs are read and there might be an assumption that
> the access permissions reflect what the TLBs hold. This includes
> page-fault handlers, but also the NUMA migration code in
> change_protection(), softdirty cleanup in clear_refs_write() and maybe
> others.
>
> [ I have in mind another solution, such as keeping in each page-table a
> “table-generation” which is the mm-generation at the time of the change,
> and only flushing if “table-generation” == “mm-generation”, but it
> requires some thought on how to avoid adding new memory barriers. ]
>
> IOW: I think the change that you suggest is insufficient, and a proper
> solution is too intrusive for “stable”.
>
> As for performance, I can add another patch later to remove the TLB
> flush that is unnecessarily performed during change_protection_range()
> that does permission promotion. I know that your concern is about the
> “protect” case, but I cannot think of a good immediate solution that
> avoids taking mmap_lock for write.
>
> Thoughts?
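For readers following along: the “table-generation” idea Nadav floats above
can be illustrated with a toy userspace model. This is purely a sketch with
hypothetical names (toy_mm, toy_table, etc.) -- it is not kernel code, it
models only a single writer and reader, and it ignores the memory-barrier
questions he explicitly flags as unresolved. The invariant it demonstrates:
a reader must flush iff the table was dirtied in the current, not-yet-flushed
generation, which makes the reader's flush potentially spurious but safe.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model (hypothetical, not kernel code): the mm keeps a generation
 * counter that is bumped each time a TLB flush completes; a page table
 * records the mm generation that was current when it was last modified
 * with a deferred flush. */

struct toy_mm {
	uint64_t flush_gen;	/* bumped when a TLB flush completes */
};

struct toy_table {
	uint64_t dirty_gen;	/* mm->flush_gen at last deferred change */
	bool     tlb_stale;	/* model: does some TLB hold a stale copy? */
};

/* Writer side: change PTEs but defer the TLB flush. */
static void change_prot_deferred(struct toy_mm *mm, struct toy_table *pt)
{
	pt->dirty_gen = mm->flush_gen;
	pt->tlb_stale = true;		/* stale TLB entries now exist */
}

/* A completed flush invalidates stale entries and opens a new generation. */
static void tlb_flush(struct toy_mm *mm, struct toy_table *pt)
{
	pt->tlb_stale = false;
	mm->flush_gen++;
}

/* Reader side: flush only when the table was dirtied in the current,
 * not-yet-flushed generation ("table-generation" == "mm-generation").
 * The flush may be spurious, but relying on the PTE is now safe. */
static void reader_sync(struct toy_mm *mm, struct toy_table *pt)
{
	if (pt->dirty_gen == mm->flush_gen)
		tlb_flush(mm, pt);
}
```

A reader that calls reader_sync() before trusting PTE permissions never
observes tlb_stale == true afterwards, yet skips the flush entirely when
the deferring writer has already flushed, which is the performance point
of the scheme.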