Date: Mon, 8 Feb 2021 18:54:11 -0500
From: Peter Xu <peterx@redhat.com>
To: Axel Rasmussen
Cc: Alexander Viro, Alexey Dobriyan, Andrea Arcangeli, Andrew Morton,
 Anshuman Khandual, Catalin Marinas, Chinwen Chang, Huang Ying,
 Ingo Molnar, Jann Horn, Jerome Glisse, Lokesh Gidra,
 "Matthew Wilcox (Oracle)", Michael Ellerman, Michal Koutný,
 Michel Lespinasse, Mike Kravetz, Mike Rapoport, Nicholas Piggin,
 Shaohua Li, Shawn Anastasio, Steven Rostedt, Steven Price,
 Vlastimil Babka, linux-kernel@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, linux-mm@kvack.org, Adam Ruprecht,
 Cannon Matthews, "Dr. David Alan Gilbert", David Rientjes,
 Mina Almasry, Oliver Upton
Subject: Re: [PATCH v4 08/10] userfaultfd: add UFFDIO_CONTINUE ioctl
Message-ID: <20210208235411.GC71523@xz-x1>
References: <20210204183433.1431202-1-axelrasmussen@google.com>
 <20210204183433.1431202-9-axelrasmussen@google.com>
In-Reply-To: <20210204183433.1431202-9-axelrasmussen@google.com>

On Thu, Feb 04, 2021 at 10:34:31AM -0800, Axel Rasmussen wrote:
> +enum mcopy_atomic_mode {
> +	/* A normal copy_from_user into the destination range. */
> +	MCOPY_ATOMIC_NORMAL,
> +	/* Don't copy; map the destination range to the zero page. */
> +	MCOPY_ATOMIC_ZEROPAGE,
> +	/* Just setup the dst_vma, without modifying the underlying page(s). */

"setup the dst_vma" sounds odd.  How about "install pte with the existing
page in the page cache"?

> +	MCOPY_ATOMIC_CONTINUE,
> +};

[...]

> @@ -4749,22 +4754,27 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
>  		hugepage_add_new_anon_rmap(page, dst_vma, dst_addr);
>  	}
>
> -	_dst_pte = make_huge_pte(dst_vma, page, dst_vma->vm_flags & VM_WRITE);
> -	if (dst_vma->vm_flags & VM_WRITE)
> +	dst_pte_flags = dst_vma->vm_flags & VM_WRITE;
> +	/* For CONTINUE on a non-shared VMA, don't set VM_WRITE for CoW.
> +	 */
> +	if (mode == MCOPY_ATOMIC_CONTINUE && !vm_shared)
> +		dst_pte_flags &= ~VM_WRITE;

I agree it should work, but it's odd to explicitly clear the VM_WRITE bit,
since IMHO what we want is not to change the vma or its flags, but to decide
whether to keep the write bit in the ptes.  How about something as simple
as:

	bool writable;

	if (mode == MCOPY_ATOMIC_CONTINUE && !vm_shared)
		writable = false;
	else
		writable = dst_vma->vm_flags & VM_WRITE;

	_dst_pte = make_huge_pte(dst_vma, page, writable);
	if (writable)
		_dst_pte = huge_pte_mkdirty(_dst_pte);

?

> +	_dst_pte = make_huge_pte(dst_vma, page, dst_pte_flags);
> +	if (dst_pte_flags & VM_WRITE)
>  		_dst_pte = huge_pte_mkdirty(_dst_pte);
>  	_dst_pte = pte_mkyoung(_dst_pte);
>
>  	set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
>
>  	(void)huge_ptep_set_access_flags(dst_vma, dst_addr, dst_pte, _dst_pte,
> -					 dst_vma->vm_flags & VM_WRITE);
> +					 dst_pte_flags);
>  	hugetlb_count_add(pages_per_huge_page(h), dst_mm);
>
>  	/* No need to invalidate - it was non-present before */
>  	update_mmu_cache(dst_vma, dst_addr, dst_pte);
>
>  	spin_unlock(ptl);
> -	set_page_huge_active(page);
> +	if (mode != MCOPY_ATOMIC_CONTINUE)
> +		set_page_huge_active(page);

set_page_huge_active() has been replaced by SetHPageMigratable(page) in
akpm-next by Mike's new series, so it's probably time to rebase your series
onto that starting from the next post.

>  	if (vm_shared)
>  		unlock_page(page);

After removing the shared restriction, I think we need:

	if (vm_shared || (mode == MCOPY_ATOMIC_CONTINUE))
		unlock_page(page);

Since we check (mode == MCOPY_ATOMIC_CONTINUE) a lot, maybe we can also
introduce a temp var for it.

>  	ret = 0;
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index b2ce61c1b50d..7bf83ffa456b 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -207,7 +207,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  					      unsigned long dst_start,
>  					      unsigned long src_start,
>  					      unsigned long len,
> -					      bool zeropage)
> +					      enum mcopy_atomic_mode mode)
>  {
>  	int vm_alloc_shared = dst_vma->vm_flags & VM_SHARED;
>  	int vm_shared = dst_vma->vm_flags & VM_SHARED;
> @@ -227,7 +227,7 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  	 * by THP.  Since we can not reliably insert a zero page, this
>  	 * feature is not supported.
>  	 */
> -	if (zeropage) {
> +	if (mode == MCOPY_ATOMIC_ZEROPAGE) {
>  		mmap_read_unlock(dst_mm);
>  		return -EINVAL;
>  	}
> @@ -273,8 +273,6 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  	}
>
>  	while (src_addr < src_start + len) {
> -		pte_t dst_pteval;
> -
>  		BUG_ON(dst_addr >= dst_start + len);
>
>  		/*
> @@ -297,16 +295,17 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  			goto out_unlock;
>  		}
>
> -		err = -EEXIST;
> -		dst_pteval = huge_ptep_get(dst_pte);
> -		if (!huge_pte_none(dst_pteval)) {
> -			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> -			i_mmap_unlock_read(mapping);
> -			goto out_unlock;
> +		if (mode != MCOPY_ATOMIC_CONTINUE) {
> +			if (!huge_pte_none(huge_ptep_get(dst_pte))) {

Maybe merge the two "if"s?
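Something like this, perhaps (an untested sketch just to show the shape; it
would also save one level of indentation):

	if (mode != MCOPY_ATOMIC_CONTINUE &&
	    !huge_pte_none(huge_ptep_get(dst_pte))) {
		err = -EEXIST;
		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
		i_mmap_unlock_read(mapping);
		goto out_unlock;
	}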
> +				err = -EEXIST;
> +				mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> +				i_mmap_unlock_read(mapping);
> +				goto out_unlock;
> +			}
>  		}
>
>  		err = hugetlb_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma,
> -					       dst_addr, src_addr, &page);
> +					       dst_addr, src_addr, mode, &page);
>
>  		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>  		i_mmap_unlock_read(mapping);
> @@ -408,7 +407,7 @@ extern ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
>  				      unsigned long dst_start,
>  				      unsigned long src_start,
>  				      unsigned long len,
> -				      bool zeropage);
> +				      enum mcopy_atomic_mode mode);
>  #endif /* CONFIG_HUGETLB_PAGE */
>
>  static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
> @@ -417,10 +416,14 @@ static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
>  						unsigned long dst_addr,
>  						unsigned long src_addr,
>  						struct page **page,
> -						bool zeropage,
> +						enum mcopy_atomic_mode mode,
>  						bool wp_copy)
>  {
>  	ssize_t err;
> +	bool zeropage = (mode == MCOPY_ATOMIC_ZEROPAGE);
> +
> +	if (mode == MCOPY_ATOMIC_CONTINUE)
> +		return -EINVAL;

So you still pass the mode into mfill_atomic_pte() just to make sure CONTINUE
is never handled there.  That's okay, but I don't think it's strictly
necessary either: we should fail early at the entry of uffdio_continue(), by
checking that the target vma is a hugetlb vma, rather than letting the
request get all the way here.
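For example (an untested sketch; where exactly to put the check is up to you,
this assumes the mode is available right after the dst vma has been looked
up):

	err = -EINVAL;
	if (mode == MCOPY_ATOMIC_CONTINUE && !is_vm_hugetlb_page(dst_vma))
		goto out_unlock;

Thanks,

-- 
Peter Xu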