From: Nadav Amit
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect
Date: Sat, 19 Dec 2020 14:06:02 -0800
To: Andrea Arcangeli
Cc: linux-mm, Peter Xu, lkml, Pavel Emelyanov, Mike Kravetz, Mike Rapoport, stable@vger.kernel.org, minchan@kernel.org, Andy Lutomirski, yuzhao@google.com, Will Deacon, Peter Zijlstra
References: <20201219043006.2206347-1-namit@vmware.com>

> On Dec 19, 2020, at 1:34 PM, Nadav Amit wrote:
>
> [ cc'ing some more people who have experience with similar problems ]
>
>> On Dec 19, 2020, at 11:15 AM, Andrea Arcangeli wrote:
>>
>> Hello,
>>
>> On Fri, Dec 18, 2020 at 08:30:06PM -0800, Nadav Amit wrote:
>>> Analyzing this problem indicates that there is a real bug since
>>> mmap_lock is only taken for read in mwriteprotect_range(). This might
>>
>> Never having to take the mmap_sem for writing, and in turn never
>> blocking, in order to modify the pagetables is quite an important
>> feature in uffd that justifies uffd instead of mprotect. It's not the
>> most important reason to use uffd, but it'd be nice if that guarantee
>> would remain also for the UFFDIO_WRITEPROTECT API, not only for the
>> other pgtable manipulations.
>>
>>> Consider the following scenario with 3 CPUs (cpu2 is not shown):
>>>
>>> cpu0                            cpu1
>>> ----                            ----
>>> userfaultfd_writeprotect()
>>> [ write-protecting ]
>>> mwriteprotect_range()
>>> mmap_read_lock()
>>> change_protection()
>>>  change_protection_range()
>>>   ...
>>>   change_pte_range()
>>>   [ defer TLB flushes ]
>>>                                 userfaultfd_writeprotect()
>>>                                  mmap_read_lock()
>>>                                  change_protection()
>>>                                  [ write-unprotect ]
>>>                                  ...
>>>                                  [ unprotect PTE logically ]
>>>                                 ...
>>>                                 [ page-fault ]
>>>                                 ...
>>>                                 wp_page_copy()
>>>                                 [ set new writable page in PTE ]
>>
>> Can't we check mm_tlb_flush_pending(vma->vm_mm) if MM_CP_UFFD_WP_ALL
>> is set and do an explicit (potentially spurious) tlb flush before
>> write-unprotect?
>
> There is a concrete scenario that I actually encountered and then there is a
> general problem.
>
> In general, the kernel code assumes that PTEs that are read from the
> page-tables are coherent across all the TLBs, excluding permission promotion
> (i.e., the PTE may have higher permissions in the page-tables than those
> that are cached in the TLBs).
>
> We therefore need to both: (a) protect change_protection_range() from the
> changes of others who might defer TLB flushes without taking mmap_sem for
> write (e.g., try_to_unmap_one()); and (b) protect others (e.g.,
> page-fault handlers) from concurrent changes of change_protection().
>
> We have already encountered several similar bugs, and debugging such issues
> is time consuming, and the impact of these bugs is substantial (memory
> corruption, security). So I think we should stick only to general solutions.
>
> So perhaps the approach of your proposed solution is feasible, but it
> would have to be applied all over the place: we would need to add a check for
> mm_tlb_flush_pending() and conditionally flush the TLB in every case in
> which PTEs are read and there might be an assumption that the
> access permissions reflect what the TLBs hold.
This includes page-fault
> handlers, but also the NUMA migration code in change_protection(), softdirty
> cleanup in clear_refs_write(), and maybe others.
>
> [ I have in mind another solution, such as keeping in each page-table a
> "table-generation" which is the mm-generation at the time of the change,
> and only flush if "table-generation" == "mm-generation", but it requires
> some thought on how to avoid adding new memory barriers. ]
>
> IOW: I think the change that you suggest is insufficient, and a proper
> solution is too intrusive for "stable".
>
> As for performance, I can add another patch later to remove the TLB flush
> that is unnecessarily performed during change_protection_range() when it does
> permission promotion. I know that your concern is about the "protect" case,
> but I cannot think of a good immediate solution that avoids taking mmap_lock
> for write.
>
> Thoughts?

On second thought (i.e., I don't know what I was thinking), doing so, that is,
checking mm_tlb_flush_pending() on every potentially dangerous PTE read and
flushing if needed, can lead to a huge number of TLB flushes and shootdowns,
as the counter might be elevated for a considerable amount of time. So this
solution seems to me to be a no-go.