From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Axel Rasmussen, Nadav Amit, Jerome Glisse, "Kirill A . Shutemov", Jason Gunthorpe, Alistair Popple, Andrew Morton, David Hildenbrand, peterx@redhat.com, Andrea Arcangeli, Matthew Wilcox, Mike Kravetz, Tiberiu Georgescu, Hugh Dickins, Miaohe Lin, Mike Rapoport
Subject: [PATCH v5 11/26] shmem/userfaultfd: Allow wr-protect none pte for file-backed mem
Date: Thu, 15 Jul 2021 16:15:58 -0400
Message-Id: <20210715201558.211445-1-peterx@redhat.com>
In-Reply-To: <20210715201422.211004-1-peterx@redhat.com>
References: <20210715201422.211004-1-peterx@redhat.com>

File-backed memory differs from anonymous memory in that even if the pte
is missing, the data could still reside either in the file or in the
page/swap cache.
So when wr-protecting a pte, we need to consider none ptes too.

We do that by installing the uffd-wp special swap pte as a marker.  So when
there's a future write to the pte, the fault handler will go the special
path to first fault-in the page as read-only, then report to the userfaultfd
server with the wr-protect message.

On the other hand, when unprotecting a page, it's also possible that the pte
got unmapped but replaced by the special uffd-wp marker.  Then we'll need to
be able to recover the uffd-wp special swap pte into a none pte, so that the
next access to the page will trigger the fault handler and fault in
correctly as usual, rather than sending a uffd-wp message.

Special care needs to be taken throughout the change_protection_range()
process.  Since we now allow the user to wr-protect a none pte, we need to
be able to pre-populate the page table entries if we see !anonymous &&
MM_CP_UFFD_WP requests; otherwise change_protection_range() will always skip
when the pgtable entry does not exist.

Note that this patch only covers the small pages (pte level) and does not
cover any of the transparent huge pages yet, but it will be a base for THPs
too.

Signed-off-by: Peter Xu
---
 mm/mprotect.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 4b743394afbe..8ec85b276975 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -186,6 +187,32 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				set_pte_at(vma->vm_mm, addr, pte, newpte);
 				pages++;
 			}
+		} else if (unlikely(is_swap_special_pte(oldpte))) {
+			if (uffd_wp_resolve && !vma_is_anonymous(vma) &&
+			    pte_swp_uffd_wp_special(oldpte)) {
+				/*
+				 * This is uffd-wp special pte and we'd like to
+				 * unprotect it.  What we need to do is simply
+				 * recover the pte into a none pte; the next
+				 * page fault will fault in the page.
+				 */
+				pte_clear(vma->vm_mm, addr, pte);
+				pages++;
+			}
+		} else {
+			/* It must be an none page, or what else?.. */
+			WARN_ON_ONCE(!pte_none(oldpte));
+			if (unlikely(uffd_wp && !vma_is_anonymous(vma))) {
+				/*
+				 * For file-backed mem, we need to be able to
+				 * wr-protect even for a none pte!  Because
+				 * even if the pte is null, the page/swap cache
+				 * could exist.
+				 */
+				set_pte_at(vma->vm_mm, addr, pte,
+					   pte_swp_mkuffd_wp_special(vma));
+				pages++;
+			}
 		}
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	arch_leave_lazy_mmu_mode();
@@ -219,6 +246,25 @@ static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
 	return 0;
 }
 
+/*
+ * File-backed vma allows uffd wr-protect upon none ptes, because even if pte
+ * is missing, page/swap cache could exist.  When that happens, the wr-protect
+ * information will be stored in the page table entries with the marker (e.g.,
+ * PTE_SWP_UFFD_WP_SPECIAL).  Prepare for that by always populating the page
+ * tables to pte level, so that we'll install the markers in change_pte_range()
+ * where necessary.
+ *
+ * Note that we only need to do this in pmd level, because if pmd does not
+ * exist, it means the whole range covered by the pmd entry (of a pud) does not
+ * contain any valid data but all zeros.  Then nothing to wr-protect.
+ */
+#define change_protection_prepare(vma, pmd, addr, cp_flags)		\
+	do {								\
+		if (unlikely((cp_flags & MM_CP_UFFD_WP) && pmd_none(*pmd) && \
+			     !vma_is_anonymous(vma)))			\
+			WARN_ON_ONCE(pte_alloc(vma->vm_mm, pmd));	\
+	} while (0)
+
 static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		pud_t *pud, unsigned long addr, unsigned long end,
 		pgprot_t newprot, unsigned long cp_flags)
@@ -237,6 +283,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 
 		next = pmd_addr_end(addr, end);
 
+		change_protection_prepare(vma, pmd, addr, cp_flags);
+
 		/*
 		 * Automatic NUMA balancing walks the tables with mmap_lock
 		 * held for read. It's possible a parallel update to occur
-- 
2.31.1