Date: Fri, 18 Sep 2020 14:32:40 -0300
From: Jason Gunthorpe
To: Peter Xu
Cc: Linus Torvalds, John Hubbard, Leon Romanovsky, Linux-MM,
 Linux Kernel Mailing List, Maya B. Gokhale, Yang Shi, Marty Mcfadden,
 Kirill Shutemov, Oleg Nesterov, Jann Horn, Jan Kara, Kirill Tkhai,
 Andrea Arcangeli, Christoph Hellwig, Andrew Morton
Subject: Re: [PATCH 1/4] mm: Trial do_wp_page() simplification
Message-ID: <20200918173240.GY8409@ziepe.ca>
References: <20200915213330.GE2949@xz-x1> <20200915232238.GO1221970@ziepe.ca>
 <20200916174804.GC8409@ziepe.ca> <20200916184619.GB40154@xz-x1>
 <20200917112538.GD8409@ziepe.ca> <20200917193824.GL8409@ziepe.ca>
 <20200918164032.GA5962@xz-x1>
In-Reply-To: <20200918164032.GA5962@xz-x1>

On Fri, Sep 18, 2020 at 12:40:32PM -0400, Peter Xu wrote:

> Firstly, in the draft patch mm->has_pinned is introduced; it's written
> to 1 as soon as FOLL_PIN is used once, and it's never reset after
> being set.

Worth thinking about also adding FOLL_LONGTERM here, at least as long
as it is not a counter. That further limits the impact.

> One more thing (I think) we need to do is to pass the new vma from
> copy_page_range() down into the end, because if we want to start CoW
> during fork() then we need to operate on that new vma too when a new
> page is linked to it, rather than on the parent's.

Makes sense to me

> One issue is that when we charge the cgroup we probably can't do that
> onto the new mm/task, since copy_namespaces() is called after
> copy_mm(). I don't know enough about cgroups; I thought the child
> would inherit the parent's, but I'm not sure. Or, can we change the
> order of copy_namespaces() && copy_mm()? I don't see a problem so far
> but I'd like to ask first..

I know nothing about cgroups, but I would have guessed that the page
table allocations would want to be in the cgroup too - is the struct
page a different bucket?

> The other thing is how to fail. E.g., when CoW fails due to either
> cgroup charging or ENOMEM, ideally we should fail fork() too. Though
> that might need more changes - the current patch silently keeps the
> shared page for simplicity.

I didn't notice anything tricky here..
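
For what it's worth, the fork() side plumbing seems to already be
there: an error returned from copy_page_range() aborts the fork.
Roughly how it reads in kernel/fork.c:dup_mmap() (paraphrased from a
v5.9-era tree, not from the draft patch - details may differ):

	/*
	 * Paraphrased: dup_mmap() already checks the return code of
	 * copy_page_range(), so an -ENOMEM from a failed early CoW
	 * would propagate out and fail the fork() naturally.
	 */
	if (!(tmp->vm_flags & VM_WIPEONFORK))
		retval = copy_page_range(mm, oldmm, mpnt);

	if (tmp->vm_ops && tmp->vm_ops->open)
		tmp->vm_ops->open(tmp);

	if (retval)
		goto out;	/* dup_mmap() returns retval to fork() */

So the missing piece is only getting the error code out of
copy_one_pte()/copy_pte_range().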
Something a bit gross but simple seemed workable:

@@ -852,7 +852,7 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 			continue;
 		}
 		entry.val = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
-					 vma, addr, rss);
+					 vma, addr, rss, &err);
 		if (entry.val)
 			break;
 		progress += 8;
@@ -865,6 +865,9 @@ static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	pte_unmap_unlock(orig_dst_pte, dst_ptl);
 	cond_resched();
 
+	if (err)
+		return err;
+
 	if (entry.val) {
 		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0)
 			return -ENOMEM;

It is not really any different from an add_swap_count_continuation()
failure, which already works..

> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 496c3ff97cce..b3812fa6383f 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -458,6 +458,7 @@ struct mm_struct {
>  
>  		unsigned long total_vm;	   /* Total pages mapped */
>  		unsigned long locked_vm;   /* Pages that have PG_mlocked set */
> +		unsigned long has_pinned;  /* Whether this mm has pinned any page */

Can be unsigned int or smaller, if there is a better hole in mm_struct

> diff --git a/mm/gup.c b/mm/gup.c
> index e5739a1974d5..cab10cefefe4 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -1255,6 +1255,17 @@ static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
>  		BUG_ON(*locked != 1);
>  	}
>  
> +	/*
> +	 * Mark the mm struct if there's any page pinning attempt. We're
> +	 * aggressive on this bit since even if the pinned pages were
> +	 * unpinned later on, we'll still keep this bit set for this
> +	 * address space just to make everything easy.
> +	 *
> +	 * TODO: Ideally we can use mm->pinned_vm but only until it's stable.
> +	 */
> +	if (flags & FOLL_PIN)
> +		WRITE_ONCE(mm->has_pinned, 1);

This should probably be its own commit; here is a stab at a commit
message:

  Reduce the chance of false positives from page_maybe_dma_pinned()
  by keeping track of whether the mm_struct has ever been used with
  pin_user_pages(). mm_structs that have never been passed to
  pin_user_pages() cannot have a positive page_maybe_dma_pinned() by
  definition. This allows cases that might drive up the page
  ref_count to avoid any penalty from handling dma_pinned pages.

  Due to complexities with unpinning, this trivial version is a
  permanent sticky bit; future work will be needed to make this a
  counter.

> +/*
> + * Do early CoW for the page and the pte. Return true if the page
> + * duplication succeeded, false otherwise.
> + */
> +static bool duplicate_page(struct mm_struct *mm, struct vm_area_struct *vma,

Suggest calling 'vma' 'new' here for consistency

> +			   unsigned long address, struct page *page,
> +			   pte_t *newpte)
> +{
> +	struct page *new_page;
> +	pte_t entry;
> +
> +	new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
> +	if (!new_page)
> +		return false;
> +
> +	copy_user_highpage(new_page, page, address, vma);
> +	if (mem_cgroup_charge(new_page, mm, GFP_KERNEL)) {
> +		put_page(new_page);
> +		return false;
> +	}
> +	cgroup_throttle_swaprate(new_page, GFP_KERNEL);
> +	__SetPageUptodate(new_page);

It looks like these GFP flags can't be GFP_KERNEL - this is called
inside pte_alloc_map_lock(), which is a spinlock.

One thought is to lift this logic out to around
add_swap_count_continuation()? Would need some serious rework to be
able to store the dst_pte though.

Can't help with the cgroup question

Jason
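
P.S. One way the GFP_KERNEL allocations might be pulled out from under
the PTL is a preallocate-and-retry loop in copy_pte_range(). Only a
sketch of the idea, not something from the draft patch - 'prealloc'
and the -EAGAIN convention are made up here:

	/*
	 * Sketch only: copy_one_pte() would return -EAGAIN when it
	 * finds a page that needs early CoW but has no preallocated
	 * page to copy into. The caller drops the spinlock, does the
	 * sleeping allocation, and retries with the page in hand.
	 */
	struct page *prealloc = NULL;
	int ret;
again:
	dst_pte = pte_alloc_map_lock(dst_mm, dst_pmd, addr, &dst_ptl);
	...
	ret = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
			   vma, addr, rss, &prealloc);
	...
	pte_unmap_unlock(orig_dst_pte, dst_ptl);

	if (ret == -EAGAIN) {
		/* Safe to sleep here, the PTL is no longer held */
		prealloc = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, addr);
		if (!prealloc)
			return -ENOMEM;
		if (mem_cgroup_charge(prealloc, dst_mm, GFP_KERNEL)) {
			put_page(prealloc);
			return -ENOMEM;
		}
		goto again;
	}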